
watchOS 26: The MacStories Public Beta Preview

Author’s Note: Apple released the public beta of watchOS 26 last Thursday, two days after developer beta 4. Instead of immediately publishing a preview of watchOS 26, I took the time to review the OS again to ensure my preview accurately reflected the version released as a public beta.


Last year, watchOS 11 emerged from the bumpy launch of Apple Intelligence completely unscathed because it included precisely zero AI features. Instead, what Apple Watch users got was a fully formed OS update that took some big swings in addition to refining many areas of the Apple Watch experience. It was a good year with notable updates across the system, particularly when it came to the Smart Stack and health and fitness features.

It’s unfortunate, though perhaps not surprising, that this year’s new watchOS release – dubbed version 26, like its OS brethren – amounts to what might be considered a quiet year. However, after living with the beta for over a month, I’m happy to report that while there aren’t any substantial new features, there are still clever flourishes here and there that make my daily use of the Apple Watch more enjoyable.

Here’s a preview of what you can expect from watchOS 26.



My Latest Mac Automation Tool is a Tiny Game Controller

Source: 8BitDo.

I never expected my game controller obsession to pay automation dividends, but it did last week in the form of the tiny 16-button 8BitDo Micro. For the past week, I’ve used the Micro to dictate on my Mac, interact with AI chatbots, and record and edit podcasts. While the setup won’t replace a Stream Deck or Logitech Creative Console for every use case, it excels in areas where those devices don’t because it fits comfortably in the palm of your hand and costs a fraction of their price.

My experiments started when I read a story on Endless Mode by Nicole Carpenter, who explained how medical students turned to two tiny 8BitDo game controllers to help with their studies. The students were using an open-source flashcard app called Anki and ran into an issue while spending long hours with their flashcards:

The only problem is that using Anki from a computer isn’t too ergonomic. You’re hunched over a laptop, and your hands start cramping from hitting all the different buttons on your keyboard. If you’re studying thousands of cards a day, it becomes a real problem—and no one needs to make studying even more intense than it already is.

To relieve the strain on their hands, the med students turned to 8BitDo’s tiny Micro and Zero 2 controllers, using them as remote controls for the Anki app. The story didn’t explain how 8BitDo’s controllers worked with Anki, but as I read it, I thought to myself, “Surely this isn’t something that was built into the app,” which immediately drew me deeper into the world of 8BitDo controllers as study aids.

8BitDo markets the Micro’s other uses, but for some reason, it hasn’t spread much beyond the world of medical school students. Source: 8BitDo.

As I suspected, the 8BitDo Micro works just as well with any app that supports keyboard shortcuts as it does with Anki. What’s curious, though, is that even though medical students have been using the Micro and Zero 2 with Anki for several years and 8BitDo’s website includes a marketing image of someone using the Micro with Clip Studio Paint on an iPad, word of the Micro’s automation capabilities hasn’t spread much. That’s something I’d like to help change.
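The beauty of the Micro’s keyboard mode is that it requires no code at all: each button sends a keystroke that you map to an app’s shortcuts. That said, if you ever wanted to script a controller directly on the Mac, a starting point might look like the sketch below, which uses Apple’s GameController framework. It’s a hypothetical example, not my setup – and it assumes the controller exposes a standard gamepad profile, which isn’t a given for the Micro in keyboard mode.

```swift
import Foundation
import GameController

// Hypothetical sketch: react to controller buttons from a Mac command-line
// tool. (In keyboard mode, the 8BitDo Micro needs none of this; it simply
// types the keys you've assigned.)
NotificationCenter.default.addObserver(
    forName: .GCControllerDidConnect,
    object: nil,
    queue: .main
) { notification in
    guard let controller = notification.object as? GCController,
          let gamepad = controller.extendedGamepad else { return }

    gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
        if pressed {
            // Kick off whatever automation you like here.
            print("Button A pressed")
        }
    }
}

RunLoop.main.run() // Keep the tool alive so it can receive button events.
```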



Swift Assist, Part Deux

At WWDC 2024, I attended a developer tools briefing with Jason Snell, Dan Moren, and John Gruber. Later, I wrote about Swift Assist, an AI-based code generation tool that Apple was working on for Xcode.

That first iteration of Swift Assist caught my eye as promising, but I remember asking at the time whether it could modify multiple files in a project at once and being told it couldn’t. What I saw was rudimentary by the 2025 standards set by tools like Cursor, but I was glad to see that Apple was working on a generative tool for Xcode users.

In the months that followed, I all but forgot that briefing and story, until a wave of posts asking, “Whatever happened to Swift Assist?” started appearing on social media and blogs. John Gruber and Nick Heer picked up on the thread and came across my story, citing it as evidence that the MIA feature was real but curiously absent from any of 2024’s Xcode betas.

This year, Jason Snell and I had a mini reunion of sorts during another developer tools briefing. This time, it was just the two of us. Among the Xcode features we saw was a much more robust version of Swift Assist that, unlike in 2024, is already part of the Xcode 26 betas. Having been the only one who wrote about the feature last year, I couldn’t let the chance to document what I saw this year slip by.



Interview: Craig Federighi Opens Up About iPadOS, Its Multitasking Journey, and the iPad’s Essence

iPadOS 26. Source: Apple.

It’s a cool, sunny morning at Apple Park as I make my way along the iconic glass ring to meet with Apple’s SVP of Software Engineering, Craig Federighi, for a conversation about the iPad.

It’s the Wednesday after WWDC, and although there are still some developers and members of the press around Apple’s campus, it seems like employees have returned to their regular routines. Peek through the glass, and you’ll see engineers working at their stations, half-erased whiteboards, and an infinite supply of Studio Displays on wooden desks with rounded corners. Some guests are still taking pictures by the WWDC sign. There are fewer security dogs, but they’re obviously all good.

Despite the list of elaborate questions on my mind about iPadOS 26 and its new multitasking, the long history of iPad criticisms (including mine) over the years, and what makes an iPad different from a Mac, I can’t stop thinking about the simplest, most obvious question I could ask – one that harkens back to an old commercial about the company’s modular tablet:

In 2025, what even is an iPad according to Federighi?



Hands-On: How Apple’s New Speech APIs Outpace Whisper for Lightning-Fast Transcription

Late last Tuesday night, after watching F1: The Movie at the Steve Jobs Theater, I was driving back from dropping Federico off at his hotel when I got a text:

Can you pick me up?

It was from my son Finn, who had spent the evening nearby and was stalking me in Find My. Of course, I swung by and picked him up, and we headed back to our hotel in Cupertino.

On the way, Finn filled me in on a new class in Apple’s Speech framework called SpeechAnalyzer and its SpeechTranscriber module. Both the class and module are part of Apple’s OS betas that were released to developers last week at WWDC. My ears perked up immediately when he told me that he’d tested SpeechAnalyzer and SpeechTranscriber and was impressed with how fast and accurate they were.
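To give you a sense of the API’s shape, here’s a rough sketch of file transcription pieced together from Apple’s WWDC session code. Consider every name here an assumption against the current betas – the signatures may shift before release.

```swift
import AVFoundation
import Speech

// A rough sketch of file transcription with SpeechAnalyzer and
// SpeechTranscriber, modeled on Apple's WWDC 2025 sample code. These are
// beta APIs; names and signatures may change.
func transcribe(fileAt url: URL) async throws -> String {
    // SpeechTranscriber is the module that produces text; SpeechAnalyzer
    // runs one or more modules over an audio source. (On first use, the
    // on-device model may need to be downloaded via AssetInventory.)
    let transcriber = SpeechTranscriber(locale: Locale.current,
                                        transcriptionOptions: [],
                                        reportingOptions: [],
                                        attributeOptions: [])
    let analyzer = SpeechAnalyzer(modules: [transcriber])

    // Collect results as they stream in.
    let collector = Task {
        var text = ""
        for try await result in transcriber.results {
            text += String(result.text.characters)
        }
        return text
    }

    // Feed the audio file to the analyzer, then finalize the session.
    let audioFile = try AVAudioFile(forReading: url)
    if let lastSample = try await analyzer.analyzeSequence(from: audioFile) {
        try await analyzer.finalizeAndFinish(through: lastSample)
    }

    return try await collector.value
}
```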

It’s still early days for these technologies, but I’m here to tell you that their speed alone is a game changer for anyone who uses voice transcription to create text from lectures, podcasts, YouTube videos, and more. That’s something I do multiple times every week for AppStories, NPC, and Unwind, generating transcripts that I upload to YouTube because the site’s built-in transcription isn’t very good.

What’s frustrated me with other tools is how slow they are. Most are built on Whisper, OpenAI’s open source speech-to-text model, which was released in 2022. It’s cheap at under a penny per one million tokens, but it isn’t fast, which is frustrating when you’re in the final steps of a YouTube workflow.

An SRT file generated by Yap.

I asked Finn what it would take to build a command line tool to transcribe video and audio files with SpeechAnalyzer and SpeechTranscriber. He figured it would only take about 10 minutes, and he wasn’t far off. In the end, it took me longer to get around to installing macOS Tahoe after WWDC than it took Finn to build Yap, a simple command line utility that takes audio and video files as input and outputs SRT- and TXT-formatted transcripts.

Yesterday, I finally took the Tahoe plunge and immediately installed Yap. I grabbed the 7GB 4K video version of AppStories episode 441, which is about 34 minutes long, and ran it through Yap. It took just 45 seconds to generate an SRT file. Here’s Yap ripping through nearly 20% of an episode of NPC in 10 seconds:

[Video: Yap transcribing an episode of NPC.]

Next, I ran the same file through VidCap and MacWhisper, using the latter’s Large V2 and Large V3 Turbo models. Here’s how each app and model did:

App                          Transcription Time
Yap                          0:45
MacWhisper (Large V3 Turbo)  1:41
VidCap                       1:55
MacWhisper (Large V2)        3:55

All three transcription workflows had similar trouble with last names and words like “AppStories,” which LLMs tend to separate into two words instead of camel casing. That’s easily fixed by running a set of find-and-replace rules, although I’d love to feed those corrections back into the model itself for future transcriptions.
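For the curious, a cleanup pass like that can be a few lines of Swift. This is just a sketch – the file name and rules below are made-up examples:

```swift
import Foundation

// Sketch of a transcript cleanup pass: apply find-and-replace rules to a
// finished SRT file. The rules and file name are hypothetical examples.
let rules = [
    "App Stories": "AppStories", // undo the unwanted word split
    "Mac Stories": "MacStories",
]

do {
    let url = URL(fileURLWithPath: "episode.srt")
    var transcript = try String(contentsOf: url, encoding: .utf8)
    for (find, replace) in rules {
        transcript = transcript.replacingOccurrences(of: find, with: replace)
    }
    try transcript.write(to: url, atomically: true, encoding: .utf8)
} catch {
    print("Cleanup failed: \(error)")
}
```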

Once transcribed, a video can be used to generate additional formats like outlines.

What stood out above all else was Yap’s speed. By harnessing SpeechAnalyzer and SpeechTranscriber on-device, the command line tool tore through the 7GB video file a full 2.2× faster than MacWhisper’s Large V3 Turbo model, with no noticeable difference in transcription quality.

At first blush, the difference between 0:45 and 1:41 may seem insignificant, and it arguably is, but those are the results for just one 34-minute video. Extrapolate that to running Yap against the hours of Apple Developer videos released on YouTube with the help of yt-dlp, and suddenly, you’re talking about a significant amount of time. Like all automation, picking up a 2.2× speed gain one video or audio clip at a time, multiple times each week, adds up quickly.

Whether you’re producing video for YouTube and need subtitles, generating transcripts to summarize lectures at school, or doing something else, SpeechAnalyzer and SpeechTranscriber – available across the iPhone, iPad, Mac, and Vision Pro – mark a significant leap forward in transcription speed without compromising on quality. I fully expect this combination to replace Whisper as the default model for transcription apps on Apple platforms.

To test Apple’s new model, install the macOS Tahoe beta, which currently requires an Apple developer account, and then install Yap from its GitHub page.


Hand Crafted: Don’t Count Developers Out

Source: Apple.

We’re days away from WWDC, and I’m excited. As much as I enjoy a good Apple hardware event, it’s WWDC’s focus on software that I truly love. But what WWDC means to me runs much deeper than the OS updates we’ll hear about next week. Of course, Apple’s announcements are a big part of what makes WWDC a special time of the year, but for me, they’re overshadowed by the people.

I’ve been to every WWDC since 2013. That first year, I sat on the sidewalk at 3 AM on a cold pre-dawn morning. I hardly knew anyone in the Apple developer community then, but after hours in that line and attending the events surrounding the conference, I’d gotten to know a few developers.

By the time 2016 rolled around, I was writing at MacStories and interviewing developers for the site, including the founders of Workflow, which became Shortcuts. Now, they’re building Sky. After that WWDC, Federico hit the nail on the head in Issue 37 of MacStories Weekly:

…there’s something special about meeting someone you’ve known for a long time exclusively through the Internet. While I thought I knew some people and had made some special friendships through the years, getting to know them in person is different.

He’s right, and even though WWDC is much smaller than it used to be, I look forward to the chance to get to know the developers whose apps we’ve covered, meet new people, and reconnect with old friends.

What’s special about so many of the developers I’ve met over the years is how much they care about their craft. They sweat all the details. Over the years, we’ve seen many of them go from novices to the makers of apps with big, passionate followings among our readers.

We’ve also seen developers and their importance to Apple’s hardware success undervalued by the very company whose platforms they’re so dedicated to. That’s not new, but it’s gotten palpably worse as the years have worn on.

Since WWDC 2024, that trend has collided head-on with the rise of artificial intelligence. I imagine that our reaction to learning that Apple had scraped MacStories and every other website to train their LLMs was familiar to developers who have felt taken advantage of for years. That was a bitter pill to swallow, but one of the upsides of the experience is that over the past year, it’s forced me to spend a lot of time thinking about creativity, work, and our relationship with technology.

To hear the AI fans tell it, the developers we write about, nearly everyone else, and I will be out of jobs before long. Some days, that threat feels very real, and on others, not so much. Still, it’s caused a lot of anxiety for a lot of people.

However, as I get ready to head to this year’s WWDC, I’m far more optimistic than I was after WWDC 2024. I don’t expect AI to replace our friends in the indie developer community; far from it. That’s because what sets a great app apart from the pack on the App Store is the care and humanity that’s poured into it. I’ve yet to see a vibe-coded app that comes anywhere close. Those apps will simply join the vast sea of mediocrity that has always made up a big part of the App Store. Instead, I expect AI will help solo developers and small teams tackle bigger problems that were once the exclusive domain of bigger teams with more resources.

I realize this all may sound like blasphemy to anyone who’s either devoted to AI or dead set against it, but I believe there’s room for AI to serve the artist instead of the other way around. So despite the challenges developers, writers, and others are facing, I’m heartened by the many excellent apps I’ve tried in the past year and look forward to meeting and reconnecting with as many of their creators as I can next week.

If you see me and Federico wandering about, stop us to say hi. We’d love to hear what you’re working on.


From the Creators of Shortcuts, Sky Extends AI Integration and Automation to Your Entire Mac

Sky for Mac.

Over the course of my career, I’ve had three distinct moments in which I saw a brand-new app and immediately felt it was going to change how I used my computer – and they were all about empowering people to do more with their devices.

I had that feeling the first time I tried Editorial, the scriptable Markdown text editor by Ole Zorn. I knew right away when two young developers told me about their automation app, Workflow, in 2014. And I couldn’t believe it when Apple showed that not only had they acquired Workflow, but they were going to integrate the renamed Shortcuts app system-wide on iOS and iPadOS.

Notably, the same two people – Ari Weinstein and Conrad Kramer – were involved with two of those three moments, first with Workflow, then with Shortcuts. And a couple of weeks ago, I found out that they, along with their co-founder Kim Beverett at Software Applications Incorporated, were going to define my fourth moment with the new app they’ve been working on in secret since 2023 and officially announced today.

For the past two weeks, I’ve been able to use Sky, the new app from the people behind Shortcuts who left Apple two years ago. As soon as I saw a demo, I felt the same way I did about Editorial, Workflow, and Shortcuts: I knew Sky was going to fundamentally change how I think about my macOS workflow and the role of automation in my everyday tasks.

Only this time, because of AI and LLMs, Sky is more intuitive than all those apps and requires a different approach, as I will explain in this exclusive preview story ahead of a full review of the app later this year.



Early Impressions of Claude Opus 4 and Using Tools with Extended Thinking

Claude Opus 4 and extended thinking with tools.

For the past two days, I’ve been testing an early access version of Claude Opus 4, the latest model by Anthropic that was just announced today. You can read more about the model in the official blog post and find additional documentation here. What follows is a series of initial thoughts and notes based on the 48 hours I spent with Claude Opus 4, which I tested in both the Claude app and Claude Code.

For starters, Anthropic describes Opus 4 as its most capable hybrid model with improvements in coding, writing, and reasoning. I don’t use AI for creative writing, but I have dabbled with “vibe coding” for a collection of personal Obsidian plugins (created and managed with Claude Code, following these tips by Harper Reed), and I’m especially interested in Claude’s integrations with Google Workspace and MCP servers. (My favorite solution for MCP at the moment is Zapier, which I’ve been using for a long time for web automations.) So I decided to focus my tests on reasoning with integrations and some light experiments with the upgraded Claude Code in the macOS Terminal.



Are Pride Wallpapers and a Watch Band Enough in 2025?

Today, Apple introduced their 2025 Pride Collection, with a set of new LGBTQ+-themed wallpapers for iOS and iPadOS that will be available as part of iOS and iPadOS 18.5. The collection also includes an Apple Watch Pride Edition Sport Band, which matches a new Pride Harmony watch face in watchOS 11.5.

Despite being just another installment in what has become an annual tradition for the company, the 2025 collection rings hollow in contrast with Apple’s stance regarding the current U.S. administration.

Image: Apple

On January 20th, President Trump signed executive orders that have already gravely impacted trans people across the United States. Despite the President’s clear intention to do so before he was sworn into office, Apple CEO Tim Cook chose to donate $1M to the President’s inauguration fund and attended the inauguration alongside other American tech company leaders, including Google’s Sundar Pichai, Amazon’s Jeff Bezos, and Meta’s Mark Zuckerberg. The latter three have all scrapped Diversity, Equity, and Inclusion (DEI) efforts inside their respective companies, following the President’s executive order terminating U.S. government DEI initiatives and scrubbing governmental documents of all references to trans people. In February, Apple shareholders rejected a proposal to follow the government’s lead, choosing to preserve the company’s diversity programs. However, Cook hedged, saying that the company “may need to make some changes to comply,” while also reassuring that Apple’s “north star of dignity and respect for everyone and our work to that end will never waver.” Then last week, Cook remotely appeared at a celebration of the President’s first 100 days in office.

This seemingly nuanced alignment with President Trump contrasts with Tim Cook’s outspoken support for the LGBTQ+ community when he came out in 2014, and with Apple’s continued participation in the San Francisco Pride Parade. The same dissonance appears in the final sentence of the company’s press release, which states that “Apple is proud to financially support organizations that serve LGBTQ+ communities.”

Today’s announcement of the 2025 Pride Collection made me think back to the great piece Joe Rosensteel published in January, soon after the inauguration, in which he expresses immense disappointment in Tim Cook. In regard to Apple’s yearly Apple Watch Pride bands and its participation in the San Francisco Pride parade, he rightly asked:

How should people reconcile Tim’s explicit support of Trump with his support of trans and enby people working at Apple, buying products from Apple, and attending pride parades with Apple?

At a time when some trans people are actively seeking to flee the U.S. to preserve their fundamental right to a healthy, safe, and decent life free from the threat of President Trump’s actions, Apple doesn’t seem to be stepping up to its professed values to the extent that the situation requires. As of today, there have been no reports of the company increasing its financial support of organizations that support LGBTQ+ people in the U.S. Nor has Apple attempted to publicly and explicitly speak out against the administration’s attacks targeting trans people. Instead, Apple has chosen to simply iterate on its Pride wallpapers and watch band, which will retail for $49.

Maybe I should feel relieved that Apple chose not to discontinue the Pride Collection. But considering the urgency felt by the LGBTQ+ community, Apple releasing Pride bands and wallpapers is simply not enough to compensate for its decision not to speak out against President Trump’s attacks on trans people. There are certainly risks to Apple if it were to do more to stand up for the LGBTQ+ community, but those risks pale in comparison to the increasing threats trans and other people in the LGBTQ+ community face in the U.S. and around the world every day. It’s time for Apple to step up and do more than wallpapers and a watch band.