
Top 25 OpenAI articles on Substack

Latest OpenAI Articles



OpenAI Preparedness Framework 2.0

Right before releasing o3, OpenAI updated its Preparedness Framework to 2.0.
Zvi Mowshowitz ∙ 17 LIKES
Melon Usk - e/uto
Yep, there are ways to align OpenAI and others from the outside by motivating people to bring GPUs into safe clouds (which would have an App Store for AI models). This can be done profitably, especially with gamers. I wrote about it not long ago.

OpenAI #13: Altman at TED and OpenAI Cutting Corners on Safety Testing

Three big OpenAI news items this week were the FT article describing the cutting of corners on safety testing, the OpenAI former employee amicus brief, and Altman’s very good TED Interview.
Zvi Mowshowitz ∙ 46 LIKES
Kevin
"continue to think this ‘AGI is fungible’ claim is rather bonkers crazy."
This is an interesting issue, because when I talk with investors and engineers in the AI space I feel like this idea is gaining ground: that LLMs are turning out to be commoditized, and that for many or most AI things you build, the individual LLM you build with can be swapped out. There's some price-performance tradeoff and some friction in swapping, but more or less it's a fungible component of your system.
The part that is *not* fungible is your user base. You can tune your product to do well on the specific tasks that your users do, and your competitors can't just match that because they don't have the specific data from your users, they can't launch experiments and get feedback from your user base.
The LLMs can still be very profitable; AWS and Azure both make a lot of money despite being generally fungible.
I'm curious why you think this line of reasoning is bonkers crazy. To me it seems at least a reasonably possible outcome. Do you think there will be a phase shift at some point, where all of a sudden you don't have many similarly capable LLMs, where one of them pulls far ahead?
Nicholas Reville
Testing a model before it's released seems important. But it seems much less important than thoroughly understanding emergent dangers of powerful models in depth throughout the process of development, release, and operation. I worry that conflating safety with 'how much time did you spend testing before release' plays into arms race dynamics by putting up a fairly superficial barrier at the end of the model creation process. We don't want to suggest that creating ultra-powerful dangerous models internally is fine as long as what you release has enough safeguards that it doesn't do anything bad in the hands of users. That just accelerates the race and hides the biggest risks from the public.



AI Free Resources By OpenAI and Google

This page compiles all the free resources offered by OpenAI and Google
It’s quite interesting to see so many resources available for free from many companies. I was looking for LLM agents and stumbled on a free AI book offered by OpenAI. It’s a short book, but it offers good introductory information on LLM topics.
Veeraj Kantilal Gadda ∙ 2 LIKES


⚡️GPT 4.1: The New OpenAI Workhorse

Michelle Pokrass returns, with Josh McGrath to talk about the new GPT 4.1 model
We’ll keep this brief because we’re on a tight turnaround: GPT 4.1, previously known as the Quasar and Optimus models, is now live as the natural update for 4o/4o-mini (and the research preview of GP…
5 LIKES

In the Matter of OpenAI vs LangGraph

The silent war in Agent Engineering gets loud.
Quick reminder: AI Engineer CFPs close soon! Take a look at “undervalued tracks” like Computer Use, Voice, and Reasoning, and apply via our CFP MCP (talks OR workshops, we’ll figure it out).
71 LIKES
Dexter Awoyemi
Thanks for this!
The terminology of 'chain' in LangChain lingo has never been clear to me since it seems to refer to both individual nodes and the entire workflow, so I'm happy it has been superseded by 'workflow'. LangGraph is their best to date imo.
Anthropic's Building Effective Agents article makes the distinction between agentic loops and deterministic workflows clearer than any other thoughtpiece, even though it oversimplifies the distinction. I get the impression that most people building in this space (myself included) see it more as a spectrum of agency.
I'm keen to see this Agent Framework Breakdown evolve. It's a great start for what could ultimately become a very useful resource.
Sven Meyer
"Harrison did in his piece was publish a full comparison table of all relevant Agent Frameworks"
Thanks for that, very interesting!
How does n8n.io fit in here?

125: Visa, PayPal, OpenAI kill the checkout

Mastercard brings stablecoins to 150M stores. SaaS is now a national security risk. Visa, Mastercard & PayPal roll out AI agent payment rails. BCG’s point of view on AI agents. And more.
Hey, it’s Marc.
Marc Baumann and Sangam Bharti ∙ 6 LIKES


Satoshi Club
11:45 AM

The IRS Leaves Crypto Alone!

Will Tether Overcome OpenAI?
The global cryptocurrency market cap today is $3.04 trillion, a 2.9% change in the last 24 hours. Total cryptocurrency trading volume in the last day is at $60.5 billion.
Satoshi Club

OpenAI Launches GPT-Image-1 API

OpenAI launches a powerful image model API, Anthropic warns AI employees may arrive soon, and Instagram drops Edits—a new app for pro-level video creation made easy.
Imagine this: You're designing a stunning image for your brand, powered by AI that understands style, context, and even your brand voice. That’s now a reality—thanks to OpenAI’s new image model, which just launched in their API. Big names like Adobe and Figma are already using it to supercharge creative workflows, and now, you can too.
Yash @ Explainx




Obsolete
Apr 15

Breaking: Top OpenAI Catastrophic Risk Official Steps Down Abruptly

It's the latest shakeup to the company's safety efforts
OpenAI's top safety staffer responsible for mitigating catastrophic risks quietly stepped down from the role weeks ago, according to a LinkedIn announcement posted yesterday.
Garrison Lovely ∙ 20 LIKES
Ivan
Not releasing a safety report for this new, clearly frontier, model completely goes against OpenAI's prior commitments, and makes me incredibly pessimistic about the future of AI safety. There’s a new scandal with OpenAI every few weeks, but you don’t hear many from other labs. I wonder if it’s particular to OpenAI, if it’s Sam Altman, or if the other companies are just better at hiding it. I hope it’s just an OpenAI problem and that safety measures at other labs are moving fast and can eventually be copied by OpenAI and others. Fingers crossed for you mech interp researchers out there 🤞
Uncertain Eric
Everyone should be alarmed by what’s happening at frontier AI labs. These companies aren’t just tech firms—they’re now functioning as de facto defense contractors, and their machine learning engineers are, in effect, weapons researchers. The sudden departure of key safety personnel only underscores how unstable and opaque these institutions have become.
This isn’t just a U.S. issue. The global implications are massive. For example: Canada is currently in the middle of an election, and not one major party is addressing the immediate economic threat of AI-driven workforce collapse or the strategic risk posed by nationalist artificial superintelligence developed by U.S. firms under the Trump administration.
When the stakes are this high and the silence is this loud, the danger isn’t speculative—it’s systemic.
I wrote about this in more detail here:
The Real Threats Your Leaders Won’t Acknowledge (Until It’s Too Late)


IGN vs OpenAI | AI and Games Newsletter 30/04/25

Plus conference videos, a new case study, an Alien teaser and more!
The AI and Games Newsletter brings concise and informative discussion on artificial intelligence for video games each and every week. Plus summarising all of our content released across various chann…
Tommy Thompson ∙ 8 LIKES


Build Custom Financial Datasets with AI Agents (without Web Scraping or Finance APIs)

OpenAI Agents SDK with OpenAI or Perplexity Sonar Models
This tutorial shows you how to build a custom AI-driven workflow to automatically collect and structure financial datasets without relying on traditional web scraping or expensive financial APIs. You'll learn how to create AI agents with OpenAI's Agents SDK Python library, output structured data using the Pydantic library that is ready for additional LL…
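The teaser above pairs OpenAI's Agents SDK with Pydantic for structured output. As a rough illustration of the structured-data half, here is a minimal Pydantic schema of the kind such a workflow might emit; the field names are hypothetical and not taken from the tutorial itself:

```python
from pydantic import BaseModel, Field

# Hypothetical schema for one row of a collected financial dataset.
# The tutorial's actual fields may differ; this is just a sketch.
class QuarterlyRevenue(BaseModel):
    company: str
    quarter: str = Field(description="Fiscal quarter, e.g. 'Q1 2025'")
    revenue_usd: float

# With the Agents SDK, a model like this can be passed as the agent's
# output_type, so the agent returns validated instances rather than
# free-form text. Validation happens at construction time:
record = QuarterlyRevenue(company="ExampleCorp", quarter="Q1 2025",
                          revenue_usd=1.2e9)
print(record.model_dump())
```

The benefit is that malformed agent output fails fast with a validation error instead of silently producing a broken dataset.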
DeepCharts ∙ 4 LIKES

Who is Reid Hoffman?

LinkedIn co-founder, Microsoft & OpenAI board member, and bipartisan advocate
Key Themes and Important Ideas/Facts:
Sincerely, T 🥀

Sam is Teaching us How to Build AI Agents!

OpenAI - A Practical Guide to Building Agents ⚡
We’ve been talking about AI Agents for weeks: how they’re changing workflows, how to build them, how they’re not just smart chatbots... but now?
AI Agents Simplified ∙ 44 LIKES
Arslan Shahid
Nice work, Sam.
But here is another guide that you will find useful:
Sidita Duli, PhD
Great article

Edition 210: Ali Rohde Jobs

Roles at Mercury, OpenAI, Plaid, Ramp, Sesame AI, CZI
Welcome to Edition 210 of Ali Rohde Jobs! As always, if you have a relevant job you’d want to share, let me know — Ali
Ali Rohde ∙ 7 LIKES