• 0 Posts
  • 9 Comments
Joined 10 months ago
Cake day: February 12th, 2025

  • I don’t think this is appeasing a bully; it actually gives him very little. Appeasement would have involved actually giving him something. The increase to 3.5% is back to around Cold War levels, which seems appropriate for the current geopolitical situation. The final 1.5% is essentially an accounting trick that lets whatever expenses you like count towards the 5%, like road maintenance or technological R&D; it would be hard not to reach this target. Plus, this money can now increasingly be spent on Europe’s own companies instead of sending 1-2% of yearly GDP straight to the US economy, especially once economies of scale start picking up.

    This is just what Europe was planning to do on its own, but framing it in a way that strokes Trump’s ego and lets him claim it as his victory. Especially after a few years this will not be a positive change for the US. I’ll happily sacrifice Rutte’s pride if it means Europe gets exactly what it wanted.


  • Agreed with the points about defining intelligence, but on a pragmatic note, I’ll list some concrete examples of fields in AI that are not LLMs (I’ll leave it up to your judgement whether they’re “more intelligent” or not):

    • Machine learning: most of the concrete examples other people gave here were deep learning models. They’re used a lot, but certainly don’t represent all of AI. ML is essentially fitting a function by tuning the function’s parameters using data. It has many sub-fields, like uncertainty quantification, time-series forecasting, meta-learning, representation learning, surrogate modelling and emulation, etc.
    • Optimisation, covering both gradient-based and black-box methods. These methods are about finding the parameter values that maximise or minimise a function. Machine learning training is itself an optimisation problem, and is usually solved using gradient-based methods.
    • Reinforcement learning, which often uses a deep neural network to estimate state values, but is itself a framework for assigning values to states and learning the policy that maximises reward. When you hear about agents, they will often be using RL (a small sketch follows after this list).
    • Formal methods for solving NP-hard problems; popular examples include TSP and SAT. Basically, trying to solve these problems efficiently and with theoretical guarantees on the result. All of the hardware you use will have had its correctness checked through this type of method at some point (also sketched after this list).
    • Causal inference and discovery: trying to identify causal relationships from observational data when randomised controlled trials are not feasible, using theoretical proofs to establish when we can and cannot interpret a statistical association as a causal relationship.
    • Bayesian inference and learning theory methods, not quite ML but highly related. These use Bayesian statistics, often with MCMC methods, to infer posterior distributions whose marginal likelihoods are normally intractable. It’s mostly statistics, with AI helping out to make the computations actually feasible.
    • Robotics, not a field I know much about, but it’s about physical agents interacting with the real world, which comes with many additional challenges.
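
    To make the reinforcement learning bullet a bit more concrete, here is a minimal tabular Q-learning sketch on a made-up corridor task. The environment, rewards and hyperparameters are purely illustrative assumptions, not from any particular library:

    ```python
    import random

    # Toy corridor: states 0..5, the agent starts at 0 and gets reward 1 for reaching the goal.
    N_STATES, GOAL = 6, 5
    ACTIONS = [-1, +1]                      # move left / move right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

    # Q-table: estimated long-term value of taking each action in each state.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    def greedy(state):
        """Pick the highest-valued action, breaking ties randomly."""
        best = max(Q[state])
        return random.choice([i for i, q in enumerate(Q[state]) if q == best])

    def step(state, action_idx):
        """Apply an action; return (next_state, reward, done)."""
        nxt = min(max(state + ACTIONS[action_idx], 0), GOAL)
        return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

    for episode in range(500):
        state = 0
        for _ in range(200):                # cap episode length, just in case
            # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
            a = random.randrange(2) if random.random() < EPSILON else greedy(state)
            nxt, reward, done = step(state, a)
            # Q-learning update: nudge the estimate towards reward + discounted best future value.
            target = reward + (0.0 if done else GAMMA * max(Q[nxt]))
            Q[state][a] += ALPHA * (target - Q[state][a])
            state = nxt
            if done:
                break

    # The learned greedy policy should be "always move right".
    print([("left", "right")[greedy(s)] for s in range(GOAL)])
    ```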
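
    Similarly, for the formal methods bullet: a real solver is far beyond a comment, but this tiny brute-force satisfiability checker (the example formula is made up) shows the kind of question such methods answer, minus the clever search techniques that make actual SAT solvers scale:

    ```python
    from itertools import product

    # A formula in conjunctive normal form: each clause is a list of literals.
    # A positive integer means that variable must be True, a negative one means False.
    # Illustrative formula: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    clauses = [[1, -2], [2, 3], [-1, -3]]
    variables = sorted({abs(lit) for clause in clauses for lit in clause})

    def satisfies(assignment, clauses):
        """Check whether a truth assignment (dict: variable -> bool) satisfies every clause."""
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )

    # Exhaustively try all 2^n assignments; real solvers (DPLL/CDCL) prune this search heavily.
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if satisfies(assignment, clauses):
            print("SAT:", assignment)
            break
    else:
        print("UNSAT")
    ```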

    This list is by no means exhaustive, and there is often overlap between fields as they use each other’s solutions to advance their own state of the art, but I hope it helps people who keep hearing that “AI is much more than LLMs” but don’t know what else is out there. A common theme is that we use computational methods to answer questions, particularly those we couldn’t easily answer ourselves.

    To me, what sets AI apart from the rest of computer science is that we don’t do “P” problems: if there is a method available to compute the solution directly or analytically, I usually wouldn’t call it AI. As a basic example, I don’t consider computing the coefficients of y = ax+b analytically to be AI, but I do consider the general approximation of linear models using ML to be AI.
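
    As a rough illustration of that distinction (the data is synthetic and the numbers are only for demonstration): the first half below computes the coefficients of y = ax+b analytically with ordinary least squares, which I wouldn’t call AI; the second half approximates the same model by iteratively tuning the parameters with gradient descent, which is the basic pattern behind ML:

    ```python
    import numpy as np

    # Synthetic data from y = 2x + 1 plus a little noise (purely illustrative).
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=100)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

    # 1) Analytic route: closed-form least squares. Direct computation, so not "AI" in this sense.
    X = np.column_stack([x, np.ones_like(x)])
    (a_exact, b_exact), *_ = np.linalg.lstsq(X, y, rcond=None)

    # 2) "ML" route: start from a guess and iteratively tune (a, b) by gradient descent on the squared error.
    a, b = 0.0, 0.0
    lr = 0.1
    for _ in range(1000):
        err = a * x + b - y
        # Gradients of the mean squared error with respect to a and b.
        a -= lr * 2 * np.mean(err * x)
        b -= lr * 2 * np.mean(err)

    print("analytic:        ", a_exact, b_exact)
    print("gradient descent:", a, b)
    ```

    Both end up at roughly a = 2, b = 1; the difference is not the answer but how it is obtained.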


  • Ukraine has one of the strongest militaries in Europe. This whole “they couldn’t even beat puny Ukraine” line I keep seeing is entirely too haughty for my liking. Their gear is less state-of-the-art, sure, but many European countries lack vital components of a functional military altogether, including logistics and the coordination of joint efforts, which the Americans have until recently been handling.

    Sure, there’s no need to panic yet, but there is certainly a need to get a move on, respond proactively to close the gaps, and respond jointly, so that it doesn’t become a matter of small countries getting steamrolled one by one.


  • While nice, this seems at odds with the budget cuts to science that are horribly undermining our existing, high-quality scientific institutions. It would be much nicer if luring these US-based scientists were part of a larger package to invest in, rather than cut and destroy, science in the country.

    We could certainly use the help, so they’d be very welcome, but if we’re still getting rid of hundreds of fully set up scientists while gaining a few new ones from this, that’s still a net loss…

    Plus, any US-based scientist who might consider doing this would surely look at these budget cuts, see how countries like France and Germany are actually investing in scientific infrastructure, and take this into account when selecting a destination. If you want to “lure” people over, you do need to have an actual high-quality and functional system to show off.


  • One caused by counting on internal division in the EU, the probability of which increases if we fail to present a unified response right now. Basically, it’s a gamble that countries like the Netherlands won’t be willing to defend, e.g., a Baltic country. Russia could certainly beat the militaries of the small Baltic states one by one, if it is breaking even with Ukraine. No joint response would mean selling out member states and effectively disabling the whole concept of the EU. A joint response would mean war for everyone.

    I would prefer a future that minimises the probability of this gamble being made, so that nobody gets invaded.


  • I suppose this is karma for getting too excited about the massive boost to European unity as a silver lining to the state of the world. My own country is joining Hungary in attempting to sabotage it.

    This is not the time to put on an ideological show for your populist national electorate. If this doesn’t get implemented properly and the newfound unity isn’t credible, the continent and the EU will be faced with war. Which, if that on its own is not convincing enough, also tends to be somewhat suboptimal for fiscal stability and the economy.



  • Of course you’re right morally, but it’s still an interesting change in tone. This whole thing started when Russia threw a fit about Ukraine wanting closer ties to the EU instead of Russia. Now their official position is that even EU membership is totally fine. It seems their position has weakened quite a bit since 2014.

    On the other hand, maybe this means Russia wants to fight the entire EU with their mutual defence pact when they attack again after recovering for a few years through a ceasefire. Or maybe they’re gambling that the EU’s requirements are too strict for Ukraine to join.

    Or maybe it’s just all lies again, of course. But still, an interesting weaker flavour of lies, in that case.


  • Incredible news! We’ve needed this for a long time; the research community has been calling for a “CERN for AI” for years at this point.

    As a publicly funded researcher working in this field, it’s very frustrating to see so many of our excellent, well-educated students in Europe end up contributing to the power of American tech giants (which they then use to undermine our democratic society). It is also hard to overstate how dependent we are on American compute infrastructure, for example Google Colab, AWS or Google Earth Engine. The last one is especially frustrating, because essentially the entire European research community relies on a service from an American tech giant to access our own globally leading, high-quality, publicly accessible satellite data.

    I’ve seen a lot of negativity about this news, dismissing it as a waste of money. Personally I’m not too sold on the usefulness of LLMs either; their hype is very much overblown. But investing in AI is not the same as investing in LLMs, and Europe absolutely needs this. AI is being used, and has been for decades, in nearly everything we do. This includes not just LLMs and deep learning, but optimisation, formal logic, all sorts of probabilistic inference, forecasting, robotics, simulation, surrogate modelling, satisfiability, and much more. The correctness of the chips your phone uses has been verified using AI techniques. Weather forecasts and disaster warnings use AI methods. The food you eat was monitored as it grew using AI. Air travel and general infrastructure need AI to function, much of manufacturing and design needs it, and so on. These are not just the chatbot “assistants” that tech companies try to push so hard on the public, but computational methods that answer vital questions we cannot otherwise answer.

    Being dependent on a country like the US (or China) for something this pervasive and important is a terrible idea. Compute infrastructure, central hubs of expertise, and continental- rather than national-scale investment opportunities all contribute strongly to European sovereignty in this regard, across all the fields mentioned above (not just the over-hyped ones).