PyTorch: The Open Language of AI

Key takeaways:

  • PyTorch today powers the generative AI world, with major AI players like Meta, OpenAI, Microsoft, Amazon, Apple, and many others building cutting-edge AI systems.
  • PyTorch has evolved from a framework focused on AI research to one that also supports production and deep AI compilation, and it has become foundational to thousands of projects and companies in the AI ecosystem.
  • The PyTorch Foundation is expanding to become an umbrella organization and will now house some of the most popular and highly complementary projects, making it easier for users to build AI at scale.
  • Overall, the PyTorch Foundation is uniquely positioned to support the AI transformation throughout the stack, from accelerated compute to the next wave of agentic systems, and from research to production. 

When we look back at the early days of PyTorch, our main focus was on accelerated training and the developer experience for AI researchers. We wanted to empower researchers to easily express their ideas (no matter how crazy they were) and to accelerate training so they could quickly validate those ideas. The scope broadened when we established PyTorch 1.0, brought in Caffe2, and expanded the mission to ‘research to production’. With PyTorch 2.0, the scope and vision expanded yet again to include a major focus on performance, including deeper investments in compilers and heterogeneous hardware support, which led to torch.compile, TorchInductor, and our investment in the Triton project. Throughout all of this, we maintained a design philosophy that values: (1) Usability over performance; (2) Simple over easy; and (3) Python first, with a focus on language interoperability.
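To make the compiler work concrete, here is a minimal sketch of what torch.compile looks like in practice. The model, shapes, and values below are illustrative placeholders, not anything specific to the projects mentioned above:

```python
import torch
import torch.nn as nn

# A small example model; torch.compile captures its graph and lowers it
# to optimized kernels via TorchInductor, the default backend.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

compiled_model = torch.compile(model)  # TorchInductor backend by default

x = torch.randn(32, 128)
y = compiled_model(x)  # first call triggers compilation; later calls reuse the compiled code
```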

Moreover, when we first put together the three-year vision for PyTorch back in 2020, the goals we set were: 

  1. Industry leading: Winning across research and production with a partner ecosystem that is strategically and commercially aligned and collaborating with us toward this vision; 
  2. Diverse: A global community from academia and industry contributing to an ecosystem made up of projects, platforms and research built on or around PyTorch and that continually pushes the field forward; and 
  3. Sustainable: Maintains its diversity of major contributors and its productivity over a long period (3+ years), and can survive inherent changes such as new technologies or new products (e.g., from competitors) that can change the population (the community of users, developers, etc.).

The PyTorch Foundation joining the Linux Foundation in 2022 set the stage for the next phase of growth for the project. Fast forward to today: the foundation is growing rapidly, with 13 Premier members and 30 member organizations in total, more diverse contributors than ever, and a growing ecosystem. The PyTorch Foundation is well positioned to continue to play a leadership role in the rapidly evolving field of AI. 

All of that said, we yet again have a major opportunity to evolve PyTorch to play a much more integral role in setting the direction of open source AI and the industry at large.

Challenges in the AI Space Today

Over the past two years, the AI landscape has undergone a remarkable transformation. Large Language Models (LLMs) have moved to the forefront, powering applications like ChatGPT and driving an open revolution of models spearheaded by Llama. Now, we’re witnessing agentic systems entering the mainstream. Despite these advances, significant challenges persist as we transition into a generative AI and agent-first world. To better understand the nature of these challenges, we should consider several key questions:

  1. How do we optimize and maximize the creation of intelligence for a given amount of power?
  2. How can we democratize the continual improvement, customization, and adaptation of intelligence?
  3. How can additional capabilities outside of models, such as tools and environments, be accelerated in the way that we’ve optimized other systems?
  4. And lastly, how do we effectively measure intelligence such that it aligns with what we want as end users?

These challenging questions require a collective community working towards common goals to address them effectively. By bringing together diverse perspectives, we can build a comprehensive framework that integrates all layers of AI development—from hardware acceleration primitives to sophisticated agentic development, evaluation, and deployment practices. What we’re envisioning transcends typical technological initiatives; it’s more akin to developing a new language or operating system—a foundational infrastructure that enables entirely new possibilities.

A Broader Vision

One way to frame a broader vision for PyTorch is for it to be “The Open Language of AI”. 

Modulo some wordsmithing, it feels like we should consider adding an additional goal for PyTorch:

Viewed as a foundational operating system: PyTorch powers the foundation of AI systems throughout the industry.

As the depth of the AI stack expands, the reach of the PyTorch Foundation is set to grow as well. The PyTorch Foundation has thus just evolved into an umbrella foundation, expanding the scope and impact of PyTorch well beyond its traditional roots as an AI framework and allowing the foundation to host high-value projects in the AI landscape.

In welcoming new projects to the PyTorch Foundation, we will look to uphold the same design principles that have guided PyTorch to this day.

Starting with vLLM and DeepSpeed, we are going to bring together some of the most innovative communities in AI development and, in the process, build a broader and more comprehensive stack for AI developers. 

Look for more project announcements coming very soon!

Near-Term Focus

As we move forward here in 2025, the core PyTorch project continues to make progress across a number of areas. Some of the high-level themes include:

  1. Inference: We are investing in cleaning up and clarifying our APIs/runtimes across non-LLMs, LLMs, and edge/local deployment:
    1. vLLM, as well as SGLang (part of the PyTorch Ecosystem), for server LLMs (see the brief sketch after this list)
    2. ExecuTorch will be the umbrella API for non-server deployment 
    3. Non-LLM serving – we will deprecate TorchServe and promote the ecosystem solutions around us 
  2. Post-training: We will be unifying our post-training efforts to support end-to-end, PyTorch-native post-training with online/async RL. 
  3. Large-scale training: We are working on large-scale, disaggregated, semi/async, fault-tolerant training, with a focus on enabling high levels of parallelism and incorporating online RL. 
  4. Compiler: We will be doubling down on improvements to torch.compile and its integrations with vLLM and SGLang, improving the developer experience of Triton, and deepening integration with training frameworks (Titan, etc.). 
  5. Model compression: Low precision model support via TorchAO and integration with vLLM and SGLang. 
  6. Edge/local deployment: Increasing our investment in ExecuTorch, bringing it closer to core, and expanding its scope to support AI PCs, MacBooks, and edge devices, as well as supporting projects like Ollama. 
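As a brief sketch of the server-LLM direction in item 1, here is what offline inference with vLLM looks like. The model name and sampling settings below are illustrative placeholders, not recommendations from the roadmap:

```python
# pip install vllm
from vllm import LLM, SamplingParams

# Illustrative model choice; any model supported by vLLM works here.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

# Example sampling settings; tune these for your own workload.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["What is PyTorch?"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each result carries the original prompt and one or more completions.
    print(output.prompt, output.outputs[0].text)
```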

You can dig into the published roadmaps here for more details. 

Where We Go From Here…

With the PyTorch Foundation now an umbrella foundation, we are focused on bringing in high-quality, complementary projects that extend the scope and vision of PyTorch. Over the coming weeks and months, you will see projects announced as joining the foundation that strengthen the PyTorch vision and create a rich portfolio of AI projects that integrate well and deliver a frictionless user experience. Our focus is on creating a trusted landscape of projects that are open source, have demonstrable value to the community, and solve problems across the AI lifecycle. 

Additionally, our community continues to grow rapidly, with three PyTorch Days announced for 2025 in Asia, Europe, and India, and our keystone event, the PyTorch Conference, coming October 22nd and 23rd in the middle of Open Source AI Week 2025. The conference will also add an extra day this year, with the Startup Showcase, Measuring Intelligence Summit, AI Infra Summit, hackathons, co-located open source AI events, and networking opportunities. We’re excited about this awesome celebration of open source AI innovation and hope you’ll join us!

Cheers,
Joe & Luca