Macintosh System 7 Ported To X86 With LLM Help

You can use large language models for all sorts of things these days, from writing terrible college papers to bungling legal cases. Or, you can employ them to more interesting ends, such as porting Macintosh System 7 to the x86 architecture, like [Kelsi Davis] did.

When Apple created the Macintosh lineup in the 1980s, it built the computers around Motorola’s 68K CPU architecture. These 16/32-bit CPUs were plenty capable for the time, but the platform ultimately didn’t have the same expansive future as Intel’s illustrious x86 architecture, which underpinned rival IBM-compatible machines.

[Kelsi Davis] decided to port the Macintosh System 7 OS to run on native x86 hardware, which would be challenging enough with full access to the source code. Instead, she performed the task by analyzing and reverse engineering the System 7 binaries with the aid of Ghidra and a large language model. Soon enough, she had the classic System 7 desktop running on QEMU, with a fully functional Finder and the GUI working as expected. [Kelsi] credits the LLM with helping her achieve this feat in just three days, versus what she expects would have been a multi-year effort working unassisted.
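The write-up doesn’t spell out the exact tooling, but the general recipe of handing Ghidra’s decompiler output to an LLM for annotation can be sketched in a few lines of Ghidra’s Jython scripting. The output path and workflow below are purely illustrative, not [Kelsi]’s actual scripts.

```python
# Ghidra (Jython) script sketch: dump decompiled C for every function so it
# can be handed off to an LLM for annotation. Illustrative only -- this is
# not the actual tooling used for the System 7 port.
from ghidra.app.decompiler import DecompInterface
from ghidra.util.task import ConsoleTaskMonitor

decomp = DecompInterface()
decomp.openProgram(currentProgram)  # currentProgram is provided by Ghidra
monitor = ConsoleTaskMonitor()

with open("/tmp/system7_functions.c", "w") as out:  # placeholder path
    for func in currentProgram.getFunctionManager().getFunctions(True):
        result = decomp.decompileFunction(func, 60, monitor)
        if result.decompileCompleted():
            out.write("// %s @ %s\n" % (func.getName(), func.getEntryPoint()))
            out.write(result.getDecompiledFunction().getC())
            out.write("\n")
```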

Files are on GitHub for the curious. We love a good port around these parts; we particularly enjoyed these efforts to recreate Portal on the N64. If you’re doing your own advanced tinkering with Macintosh software from yesteryear, don’t hesitate to let us know.

LeRobot Brings Autonomy To Hobby Robots

Robotic arms have a lot in common with CNC machines in that they are usually driven by a fixed script of specific positions to move to, and actions to perform. Autonomous behavior isn’t the norm, especially not for hobby-level robotics. That’s changing rapidly with LeRobot, an open-source machine learning framework from the Hugging Face community.

The SO-101 arm is an economical way to get started.

If a quick browse of the project page still leaves you with questions, you’re not alone. Thankfully, [Ilia] has a fantastic video that explains and demonstrates the fundamentals wonderfully. In it, he shows how LeRobot allows one to train an economical 3D-printed robotic arm by example, teaching it to perform a task autonomously. In this case, the task is picking up a ball and putting it into a cup.

[Ilia] first builds a dataset by manually operating the arm to pick up a ball and place it in a cup. Then, with a dataset consisting of only about fifty such examples, he creates a machine learning model capable of driving the arm to autonomously pick up a ball and place it in a cup, regardless of where the ball and cup actually are. It even gracefully handles things like color changes and [Ilia] moving the cup and ball around mid-task. You can skip directly to 34:16 to see this autonomous behavior in action, but we do recommend watching the whole video for a highly accessible yet deeply technical overview.
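LeRobot wraps the recording and training steps in its own scripts and dataset format; as a rough illustration of the underlying idea, behavior cloning on recorded observation/action pairs, here is a generic PyTorch-style sketch. The dimensions, network, and placeholder tensors are assumptions for illustration, not LeRobot’s actual API.

```python
# Generic behavior-cloning sketch (not LeRobot's actual API): learn to map
# recorded observations to the operator's demonstrated actions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self, obs_dim=12, act_dim=6):  # illustrative dimensions
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

# ~50 recorded episodes flattened into (observation, action) pairs;
# random tensors stand in for the real demonstration data here
obs = torch.randn(5000, 12)   # placeholder joint/camera features
act = torch.randn(5000, 6)    # placeholder operator commands

policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for epoch in range(50):
    loss = F.mse_loss(policy(obs), act)  # imitate the demonstrator
    opt.zero_grad()
    loss.backward()
    opt.step()
```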


A photo of the circuitry along with an oscilloscope

Eight Artificial Neurons Control Fully Autonomous Toy Truck

Recently, the [Global Science Network] channel released a video showing an artificial brain controlling an RC truck.

The video shows a neural network of eight artificial neurons, assembled on breadboards, controlling a fully autonomous toy truck. The truck is equipped with four proximity sensors: one front, one front-left, one front-right, and one rear. The sensor readings are transmitted to the artificial brain, which determines which way to turn and whether to go forward or backward. The inputs to each neuron, the “synapses”, can be excitatory to increase the firing rate or inhibitory to decrease it. The resulting commands are then sent back wirelessly to the truck via a hacked remote control.

This particular type of neural network is called a Spiking Neural Network (SNN), which uses discrete events, called “spikes”, instead of continuous real-valued activations. In these networks, the timing of a neuron’s firing carries information as well as the strength of the signal. There are other videos on the channel that go into more depth on these topics.
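To get a feel for what “spiking” means in practice, here is a minimal leaky integrate-and-fire neuron in Python: the membrane potential integrates weighted inputs (positive weights excitatory, negative inhibitory), leaks over time, and emits a discrete spike whenever it crosses a threshold. The constants are arbitrary and only for illustration, not values from the GSN build.

```python
# Minimal leaky integrate-and-fire neuron: information is carried by *when*
# spikes occur, not by a continuous activation value. Constants are arbitrary.
def simulate_lif(inputs, weights, threshold=1.0, leak=0.9, steps=100):
    potential = 0.0
    spike_times = []
    for t in range(steps):
        # weighted sum of presynaptic spikes at this time step;
        # positive weights are excitatory, negative weights are inhibitory
        potential = leak * potential + sum(w * x[t] for w, x in zip(weights, inputs))
        if potential >= threshold:
            spike_times.append(t)  # emit a discrete spike
            potential = 0.0        # reset after firing
    return spike_times

# two input spike trains: one strongly excitatory, one weakly inhibitory
train_a = [1 if t % 7 == 0 else 0 for t in range(100)]
train_b = [1 if t % 11 == 0 else 0 for t in range(100)]
print(simulate_lif([train_a, train_b], weights=[0.6, -0.3]))
```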

The name of this experimental vehicle is the GSN SNN 4-8-24-2 Autonomous Vehicle, which is short for: Global Science Network Spiking Neural Network 4 Inputs 8 Neurons 24 Synapses 2 Degrees of Freedom Output. The circuitry on both the vehicle and the breadboards is littered with LEDs which give some insight into how it all functions.

If you’re interested in how neural networks can control behavior you might like to see a digital squid’s behavior shaped by a neural network.


The calculator’s monochrome LCD showing “Input: ENEB”, with “Result 1: BEEN” at 14% confidence and “Result 2: Good” at 12%.

A Neural Net For A Graphing Calculator?

Machine learning and neural nets can be pretty handy, and people continue to push the envelope of what they can do, both in high-end server farms and on far more modest systems. At the extreme end of the spectrum is [ExploratoryStudios]’s Hermes Optimus Neural Net for a TI-84 Plus Silver Edition.

This neural net is set up as an autocorrect system that takes four-character inputs and matches them against a library of twelve words. That’s not a lot, but we’re talking about a device with 24 kB of RAM, so the little machine is doing its best. Perhaps more interesting than any practical output is the puzzle-solving involved in getting this to work within the memory constraints.

The neural net “employs a feedforward neural network with a precisely calibrated 4-60-12 architecture and sigmoid activation functions.” This yields roughly 85% accuracy at identifying and correcting the target words. We also appreciate the readout of the net’s confidence, something that seems to have gone out the window with many newer “AI” systems.
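For a sense of what a 4-60-12 feedforward pass with sigmoid activations actually involves, here is a sketch in Python rather than on the calculator itself, with random weights standing in for the trained ones.

```python
# Sketch of a 4-60-12 feedforward pass with sigmoid activations; random
# weights stand in for the calculator's trained ones.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(60)]   # 4 -> 60
b1 = [0.0] * 60
w2 = [[random.uniform(-1, 1) for _ in range(60)] for _ in range(12)]  # 60 -> 12
b2 = [0.0] * 12

encoded_input = [0.2, 0.5, 0.1, 0.9]   # four characters mapped to numbers
hidden = layer(encoded_input, w1, b1)
scores = layer(hidden, w2, b2)         # one score per word in the library
best = max(range(12), key=lambda i: scores[i])
print("best match index:", best, "score:", round(scores[best], 2))
```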

We’ve seen another TI-84 neural net for handwriting recognition, but is the current crop of AI still headed in the wrong direction?


Dual RGB Cameras Get Depth Sensing Powerup

It’s sometimes useful for a system to not just have a flat 2D camera view of things, but to have an understanding of the depth of a scene. Dual RGB cameras can be used to sense depth by contrasting the two slightly different views, in much the same way that our own eyes work. It’s considered an economical but limited method of depth sensing, or at least it was before FoundationStereo came along and blew previous results out of the water. That link has a load of interactive comparisons to play with and see for yourself, so check it out.
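For context, classic two-camera depth sensing works by finding the per-pixel disparity between the left and right views; in OpenCV, the traditional block-matching approach looks roughly like the sketch below. This is the baseline technique FoundationStereo improves upon, not FoundationStereo itself, and the file names are placeholders.

```python
# Classic disparity-based depth from a rectified stereo pair with OpenCV.
# This is the traditional approach FoundationStereo improves on, not the
# FoundationStereo model itself; file names are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,  # must be divisible by 16
    blockSize=7,
)
# compute() returns fixed-point disparities scaled by 16
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# with calibrated cameras, depth = focal_length * baseline / disparity
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("disparity.png", vis)
```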

A box of disordered tools at close range is understood very well, and these results are typical for the system.

The FoundationStereo paper explains how researchers leveraged machine learning to create a system that outperforms not only existing dual RGB camera setups, but even active depth-sensing cameras such as the Intel RealSense.

FoundationStereo is specifically designed for strong zero-shot performance, meaning it delivers useful general results with no additional training needed to handle any particular scene or environment. The framework and models are available from the project’s GitHub repository.

While products like Microsoft’s Kinect have struggled to keep the consumer’s attention, depth sensing remains an enabling technology that opens possibilities and gives rise to interesting projects, like a headset that allows one to see the world through the eyes of a depth sensor.

The ability to easily and quickly gain an understanding of the physical layout of a space is a powerful tool, and if a system like this one can deliver such fantastic results with nothing more than two RGB cameras, that’s a great sign. Watch it in action in the video below.


Understanding Linear Regression

Although [Vitor Fróis] is explaining linear regression because it relates to machine learning, the post and, indeed, the topic have wide applications in many things that we do with electronics and computers. It is one way to use independent variables to predict dependent variables, and, in its simplest form, it is based on nothing more than a straight line.

You might remember from school that a straight line can be described by: y=mx+b. Here, m is the slope of the line and b is the y-intercept. Another way to think about it is that m is how fast the line goes up (or down, if m is negative), and b is where the line “starts” at x=0.

[Vitor] starts out with a great example: home prices (the dependent variable) and area (the independent variable). As you would guess, bigger houses tend to sell for more than smaller houses. But it isn’t an exact formula, because there are a lot of reasons a house might sell for more or less. If you plot it, you don’t get a nice line; you get a cloud of points that sort of group around some imaginary line.
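Fitting that imaginary line is ordinary least squares: pick m and b to minimize the squared vertical distance from each point to the line. Here is a minimal sketch with made-up numbers, not data from the post.

```python
# Ordinary least-squares fit of price = m * area + b, using invented data.
areas  = [60, 75, 90, 110, 130, 150]      # square meters (made up)
prices = [120, 150, 170, 215, 240, 290]   # thousands (made up)

n = len(areas)
mean_x = sum(areas) / n
mean_y = sum(prices) / n

# slope = covariance(x, y) / variance(x); the intercept follows from the means
m = sum((x - mean_x) * (y - mean_y) for x, y in zip(areas, prices)) \
    / sum((x - mean_x) ** 2 for x in areas)
b = mean_y - m * mean_x

print("price ~= %.2f * area + %.2f" % (m, b))
print("predicted price for a 100 m^2 house: %.1f" % (m * 100 + b))
```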


Digital Squid’s Behavior Shaped By Neural Network

In the 90s, a video game craze took over the youth of the world — but unlike today’s games that rely on powerful PCs or consoles, these were simple, standalone devices with monochrome screens, each home to a digital pet. Often clipped to a keychain, they could travel everywhere with their owner, which was ideal from the pet’s perspective since, like real animals, they needed attention around the clock. [ViciousSquid] is updating this 90s idea for the 20s with a digital pet squid that uses a neural network to shape its behavior.

The neural network that controls the squid’s behavior takes a large number of variables into account, including whether or not it’s hungry or sleepy, or if it sees food. The network adapts as it encounters different conditions, allowing the squid to make decisions and refine its behavior over time. [ViciousSquid] is using a Hebbian learning algorithm, which strengthens connections between neurons that often activate together. Additionally, the squid can form both short- and long-term memories, and the neural network can even grow new neurons on its own as needed.
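The Hebbian rule itself is refreshingly simple: when two connected neurons are active at the same time, nudge the weight between them upward. Here is a minimal sketch of that update; the learning rate, decay, and example activity are assumptions for illustration, not [ViciousSquid]’s actual parameters.

```python
# Minimal Hebbian weight update: strengthen connections between neurons that
# are active together. Learning rate and decay are illustrative only.
def hebbian_update(weights, pre_activity, post_activity, lr=0.05, decay=0.001):
    for i, pre in enumerate(pre_activity):
        for j, post in enumerate(post_activity):
            weights[i][j] += lr * pre * post        # co-activation strengthens
            weights[i][j] -= decay * weights[i][j]  # slow decay keeps weights bounded
    return weights

# a "hungry" input repeatedly co-active with a "seek food" output
# strengthens that connection over time
w = [[0.0, 0.0], [0.0, 0.0]]
pre = [1.0, 0.0]    # e.g. hungry = 1, sleepy = 0
post = [1.0, 0.0]   # e.g. seek-food fired, rest did not
for _ in range(20):
    w = hebbian_update(w, pre, post)
print(w)
```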

[ViciousSquid] is still working on this project, and hopes to eventually add a management system that tracks the various behavior variables over time, letting the pet act in a way more familiar to the 90s digital pets it’s modeled after. It’s an interesting and fun take on those games, though, and much of the code is available on GitHub for others to experiment with as well. For those looking for the original 90s games, head over to this project, where an emulator for Tamagotchis was created using modern microcontroller platforms.