Mesa Project Adds Code Comprehension Requirement After AI Slop Incident

Recently [Faith Ekstrand] announced on Mastodon that Mesa was updating its contributor guide. This follows an AI slop incident in which someone submitted a massive patch to the Mesa project with the claim that it would improve performance ‘by a few percent’. The catch? The entire patch was generated by ChatGPT, with the submitter becoming somewhat irate when the very patient Mesa developers tried to explain that they’d happily look at the issue after the submitter had condensed the purported ‘improvement’ into a bite-sized patch.

The entire saga is summarized in a recent video by [Brodie Robertson], which highlights both how incredibly friendly the Mesa developers are, and how the use of ChatGPT and kin has led some people with zero programming skills to believe that they can now contribute code to OSS projects. Unsurprisingly, the Mesa developers were unable to disabuse this particular individual of that notion, but the diff to the Mesa contributor guide by [Timur Kristóf] should make it abundantly clear that someone playing Telephone between a chatbot and OSS project developers is neither desirable nor helpful.

That said, [Brodie] also highlights a recent post by [Daniel Stenberg] of Curl fame, who thanked [Joshua Rogers] for contributing a massive list of potential issues that were found using ‘AI-assisted tools’, as detailed in this blog post by [Joshua]. An important point here is that these ‘AI tools’ are not LLM-based chatbots, but rather existing tools, such as static code analyzers, with more smarts bolted on. They’re purpose-made tools that still require you to know what you’re doing, but they can be a real asset to a developer, and a heck of a lot more useful to a project like Curl than the fake bug reports a confabulating chatbot has sent its way in the past.

Continue reading “Mesa Project Adds Code Comprehension Requirement After AI Slop Incident”

LLM Dialogue In Animal Crossing Actually Works Very Well

In the original Animal Crossing from 2001, players are able to interact with a huge cast of quirky characters, all with different interests and personalities. But after you’ve played the game for a while, the scripted interactions can become a bit monotonous. Seeing an opportunity to improve the experience, [josh] decided to put a Large Language Model (LLM) in charge of these interactions. Now when the player chats with other characters in the game, the dialogue is a lot more engaging, relevant, and sometimes just plain funny.

How does one go about hooking a modern LLM into a 24-year-old game built for an entirely offline console? [josh]’s clever approach required a lot of poking about, and did a good job of leveraging some of the game’s built-in features for a seamless result.
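The write-up has the real details, but the dialogue half of the problem boils down to prompting a model with a villager’s persona and the player’s line. Here’s a minimal sketch of that half, assuming a local model served through Ollama’s OpenAI-compatible API; the persona, model name, and the plumbing that pushes the reply back into the game are stand-ins rather than [josh]’s actual code:

```python
# A sketch of persona-driven dialogue with a local LLM. Assumes an Ollama
# server on localhost exposing its OpenAI-compatible endpoint; the persona
# text and model name are placeholders, and pushing the reply back into the
# game (the hard part) is left to the actual mod.
import requests

PERSONA = (
    "You are a cheerful Animal Crossing villager. Reply with one short, "
    "in-character line of dialogue and no narration."
)

def villager_reply(player_line: str) -> str:
    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",
        json={
            "model": "llama3.2",
            "messages": [
                {"role": "system", "content": PERSONA},
                {"role": "user", "content": player_line},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

print(villager_reply("Got any fishing tips for me?"))
```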

Continue reading “LLM Dialogue In Animal Crossing Actually Works Very Well”

A Trail Camera Built With Raspberry Pi

You can get all kinds of great wildlife footage if you trek out into the woods with a camera, but it can be tough to stay awake all night. However, this is a task you can readily automate, as [Luke] did with his DIY trail camera.

A Raspberry Pi Zero 2W serves as the heart of the build. It’s compact and runs on very little power, but provides considerably more processing power than the original Raspberry Pi Zero. It’s kitted out with the Raspberry Pi AI Camera, which uses the Sony IMX500 Intelligent Vision Sensor, making it a great platform for neural networks doing image classification and similar machine learning tasks. A Witty Pi power management module provides both a real-time clock and scheduled start-ups and shutdowns, to make the best of the power on offer from the batteries. All these components are wrapped up in a 3D printed housing to keep the Pi safe out in the wild.
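For a rough idea of what the capture side of such a build might look like, here’s a minimal sketch using the stock picamera2 library; the on-sensor classification and the Witty Pi schedule would be configured separately, and the paths here are placeholders:

```python
# A minimal sketch of the capture loop, using the stock picamera2 library to
# grab timestamped stills. The IMX500's on-sensor classification and the
# Witty Pi wake/sleep schedule are assumed to be configured separately, and
# the output directory is a placeholder.
import time
from datetime import datetime
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()
time.sleep(2)  # give auto-exposure a moment to settle

while True:  # the Witty Pi cuts power at the end of the scheduled window
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    picam2.capture_file(f"/home/pi/captures/{stamp}.jpg")
    time.sleep(30)
```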

We’ve seen some neat projects in this vein before.

Continue reading “A Trail Camera Built With Raspberry Pi”

Fully-Local AI Agent Runs On Raspberry Pi, With A Little Patience

[Simone]’s AI assistant, dubbed Max Headbox, is a wakeword-triggered local AI agent capable of following instructions and doing simple tasks. It’s an experiment in many ways, but it’s also a great demonstration of what is possible with the kinds of open tools and hardware available to a modern hobbyist, and a reminder of just how far some of these software tools have come in only a few short years.

Max Headbox is not just a local large language model (LLM) running on Pi hardware; the model is able to make tool calls in a loop, chaining them together to complete tasks. This means the system can break down a spoken instruction (for example, “find the weather report for today and email it to me”) into a series of steps to complete, utilizing software tools as needed throughout the process until the task is finished.
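That loop is less exotic than it sounds. Below is a hedged sketch of the idea using the ollama Python library and a small local model; the two tools are hypothetical stand-ins for whatever Max Headbox actually exposes, and message formats can differ a bit between library versions:

```python
# A hedged sketch of a tool-calling loop with the ollama Python library.
# The tools here are stand-ins; a real agent would wire them to actual
# weather and email services.
import ollama

def get_weather(city: str) -> str:
    """Stand-in tool: return a canned weather report for a city."""
    return f"Sunny and 22 C in {city}"

def send_email(body: str) -> str:
    """Stand-in tool: pretend to send an email."""
    return "email sent"

TOOLS = {"get_weather": get_weather, "send_email": send_email}

messages = [{"role": "user",
             "content": "Find the weather for Berlin and email it to me."}]

while True:
    resp = ollama.chat(model="llama3.1", messages=messages,
                       tools=list(TOOLS.values()))
    messages.append(resp.message)
    if not resp.message.tool_calls:
        break  # no more tool calls: the model has produced its final answer
    for call in resp.message.tool_calls:
        result = TOOLS[call.function.name](**call.function.arguments)
        messages.append({"role": "tool", "name": call.function.name,
                         "content": result})

print(resp.message.content)
```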

Continue reading “Fully-Local AI Agent Runs On Raspberry Pi, With A Little Patience”

Using Moondream AI To Make Your Pi “See” Like A Human

[Jaryd] from Core Electronics shows us human-like computer vision with Moondream on the Pi 5.

Using the Moondream visual language model, which runs directly on your Raspberry Pi rather than in the cloud, you can answer questions such as “are the clothes on the line?”, “is there a package on the porch?”, “did I leave the fridge open?”, or “is the dog on the bed?” [Jaryd] compares Moondream to an alternative visual AI system, You Only Look Once (YOLO).

Processing a question with Moondream on your Pi can take anywhere from just a few moments to 90 seconds, depending on the model used and the nature of the question. Moondream comes in two sizes: a two-billion-parameter model and a five-hundred-million-parameter model. The larger model is more capable and more accurate, but it takes longer to respond, with the fastest possible response time coming in at about 22 to 25 seconds. The smaller model is faster, at about 8 to 10 seconds, but as you might expect its results are not as good. Indeed, [Jaryd] says the answers can be infuriatingly bad.
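For the curious, querying Moondream from Python is pleasantly short. This sketch assumes the moondream2 checkpoint on Hugging Face and its documented helpers, which may differ from how [Jaryd]’s guide packages things:

```python
# A hedged sketch of asking Moondream a question locally, assuming the
# vikhyatk/moondream2 checkpoint on Hugging Face and its documented
# encode_image/answer_question helpers.
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

model = AutoModelForCausalLM.from_pretrained(
    "vikhyatk/moondream2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("vikhyatk/moondream2")

image = Image.open("porch.jpg")      # any still from the Pi camera
encoded = model.encode_image(image)  # the vision encoder runs once per image
print(model.answer_question(
    encoded, "Is there a package on the porch?", tokenizer))
```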

In the write-up, [Jaryd] runs you through how to use Moondream on your Pi 5, and the video (embedded below) shows it in action. Fair warning though, Moondream is quite RAM-intensive, so you will need at least 8 GB of memory in your Pi if you want to play along.

If you’re interested in machine vision you might also like to check out Machine Vision Automates Trainspotting With Unique Full-Length Portraits.

Continue reading “Using Moondream AI To Make Your Pi “See” Like A Human”

Regretfully: $3,000 Worth Of Raspberry Pi Boards

We feel for [Jeff Geerling]. He spent a lot of effort, and $3,000, building an AI cluster out of Raspberry Pi boards, and now he’s a bit regretful. As you can see in the video below, it is a neat build. As [Jeff] points out, it is relatively low power and dense. But dollar for dollar, it isn’t much of a supercomputer.

Of course, the most obvious thing is that there’s plenty of CPU, but no GPU. We can sympathize, too, with the fact that he had to strip the cluster down and rebuild it twice, for a total of three builds. The first time, he decided to homogenize the SSDs across the boards; the second time was to affix the heatsinks. It is always something.

With ten “blades”, otherwise known as compute modules, the plucky little computer turned in about 325 gigaflops on tests. That sounds pretty good, but a cluster of four Framework Desktops manages 1,180 gigaflops. What’s more, the Framework turned out cheaper per gigaflop, too: each dollar bought about 110 megaflops for the Pis (325 gigaflops over $3,000), but about 140 for the Framework.

Continue reading “Regretfully: $3,000 Worth Of Raspberry Pi Boards”

A Deep Dive On Creepy Cameras

George Orwell might’ve predicted the surveillance state, but it’s still surprising how many entities took 1984 as a how-to manual instead of a cautionary tale. [Benn Jordan] decided to take a closer look at the creepy cameras invading our public spaces and how to circumvent them.

[Jordan] starts us off with an overview of how machine learning “AI” is used in Automated License Plate Reader (ALPR) cameras, along with some of the history behind their usage in the United States. Basically, when you drive by one of these cameras, an “image segmentation model or something similar” detects the license plate and then runs optical character recognition (OCR) on the plate contents. It will also catalog any bumper stickers along with the make and model of the car, making for a pretty good guess that it’s your vehicle even if the OCR isn’t 100% sure of the exact plate sequence.
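That two-stage pipeline is simple enough to sketch. The example below pairs a YOLO detector with Tesseract OCR; the plate-trained weights file named here is hypothetical, and pytesseract merely stands in for whatever OCR the commercial systems actually run:

```python
# A rough sketch of the two-stage ALPR pipeline: localize the plate, then
# OCR the crop. The plate-trained YOLO weights file is hypothetical.
from PIL import Image
from ultralytics import YOLO
import pytesseract

detector = YOLO("license_plate_yolov8n.pt")  # hypothetical plate-trained weights
image = Image.open("dashcam_frame.jpg")

results = detector(image)
for box in results[0].boxes:
    x1, y1, x2, y2 = map(int, box.xyxy[0])   # plate bounding box
    plate_crop = image.crop((x1, y1, x2, y2))
    text = pytesseract.image_to_string(plate_crop, config="--psm 7")
    print("plate:", text.strip())            # --psm 7: treat as one text line
```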

Where the video gets really interesting is when [Jordan] starts disassembling, building, and designing countermeasures to these systems. We get a teardown of a Motorola ALPR for in-vehicle use that is better at being closed hardware than it is at reading license plates, and [Jordan] uses a Raspberry Pi 5, a Hailo AI board, and You Only Look Once (YOLO) recognition software to build a “computer vision system that’s much more accurate than anything on the market for law enforcement” for $250.

[Jordan] was able to develop a transparent sticker that renders a license plate unreadable to the ALPR but still plainly visible to a human observer. What’s interesting is that, depending on the pattern, the system would either read an incorrect alphanumeric sequence or fail to detect the license plate entirely. It turns out that filtering all the rectangles in the world down to just license plates is a tricky problem if you’re a computer. You can find the code on his GitHub, if you want to take a gander.
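If you wanted to poke at the idea yourself, the test loop is straightforward: overlay a candidate pattern on a plate image and see what your detector makes of it. A minimal sketch, with the speckle generator as a crude stand-in for [Jordan]’s actual patterns:

```python
# A sketch of testing a candidate pattern at home: speckle a plate image
# with black blobs, then re-run a detector on the result.
import random
from PIL import Image, ImageDraw

def speckle(img: Image.Image, blobs: int = 400, size: int = 6) -> Image.Image:
    """Overlay random black dots, mimicking the 'digital mud' sticker."""
    out = img.copy()
    draw = ImageDraw.Draw(out)
    for _ in range(blobs):
        x = random.randrange(out.width)
        y = random.randrange(out.height)
        draw.ellipse((x, y, x + size, y + size), fill="black")
    return out

plate = Image.open("plate.jpg")
speckle(plate).save("plate_speckled.jpg")
# Feed plate_speckled.jpg to the detection/OCR pipeline sketched earlier and
# compare: a wrong character sequence, or no plate detected at all?
```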

You’ve probably heard about using IR LEDs to confuse security cameras, but what about yarn? If you’re looking for more artistic uses for AI image processing, how about this camera that only takes nudes or this one that generates a picture based on geographic data?

Continue reading “A Deep Dive On Creepy Cameras”