Every time I read an empirical study on the actual performance of "agentic" AI, I can't help but think of that one experiment where they tried to get an AI to design a robot that could walk, and it never learned to, because it figured out that building a really tall robot that immediately falls over is a much cheaper way to maximise distance travelled.
Sometimes it feels like the entire business model for LLMs is built on convincing people that a really tall robot that immediately falls over is in fact the optimal solution to that problem.
first thing id do as a skeleton is drink red wine from a goblet and have it spill out everywhere. second thing id do is play my ribs like a xylophone
Sick and tired of every aspect of life feeling duplicitous or transactional
Let’s die by our mistakes this year 🤗