You can now run and fine-tune Qwen3 and Meta's new Llama 4 models with 128K context length & superior accuracy. Unsloth is an open-source project that makes it easy to fine-tune LLMs and also uploads accurately quantized models to Hugging Face.
GitHub repo: https://github.com/unslothai/unsloth
Unsloth's new Dynamic 2.0 quants outperform other quantization methods on 5-shot MMLU & KL Divergence benchmarks, meaning you can now run + fine-tune quantized LLMs while preserving as much precision as possible.
Read more here.
Tutorial for running Qwen3 here.
Tutorial for running Llama 4 here.
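To give a flavour of what that looks like in practice, here is a minimal sketch (not taken from the tutorials above) of loading one of Unsloth's quantized Qwen3 uploads and attaching LoRA adapters for fine-tuning; the repo id and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: load a quantized Qwen3 model with Unsloth for LoRA fine-tuning.
# The repo id and hyperparameters below are illustrative, not prescriptive.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-14B",  # assumed Hugging Face repo id
    max_seq_length=4096,
    load_in_4bit=True,               # load the 4-bit quantized weights
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
```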
Welcome to another exciting edition of AI_Distilled! This week, we're witnessing a surge in innovative AI solutions, with companies like OpenAI and Microsoft rolling out tools that streamline development and enhance user interaction. From Apple opening its models to developers to the fierce competition for AI's top talent, join us as we explore the latest breakthroughs shaping our digital world.
LLM Expert Insights,
Packt
In June 2025, a number of exciting AI conferences are already generating buzz. Here are the Top 5 not-to-miss events in the next month (for more information and registration details, please visit the links):
1. AI Engineer World's Fair
Date: June 3–5, 2025
Location: San Francisco, California, USA
Cost: $299–1,799 in-person
The AI Engineer World's Fair is the largest technical conference for AI engineers. It will host approximately 3,000 attendees and feature 150 talks and 100 practical workshops. Topics include generative AI, AI agents, LLMs, infrastructure, and AI in Fortune 500 companies, offering unparalleled networking and learning opportunities for industry professionals.
2. Data + AI Summit
Date: June 9–12, 2025
Location (Hybrid): San Francisco, California, US, and available online.
Cost: $1,395–1,895 in-person. Free for virtual admission. Discounted tickets are available with group-rate pricing.
The Data + AI Summit is a four-day event hosted by Databricks. It includes panel discussions, networking opportunities, and training workshops on topics such as data engineering, data governance, and machine learning.
3. AI Summit London
Date: June 11–12, 2025
Location: Tobacco Dock, London, UK
Cost: £125–2,499
AI Summit London, spanning two days, will cover a wide range of topics, including agentic AI in action and the ethical use of AI. With a strong lineup of sponsors and thousands of guests, the summit offers great opportunities for networking with leading AI practitioners.
4. Packt’s AI Agent Bootcamp (Build AI Agents Over the Weekend)
Date: June 21–22 and 28–29, 2025
Location: Live Virtual Workshop
Cost:
Our AI Agent Bootcamp aims to equip developers, ML engineers, data scientists, technical professionals, and software architects with the practical skills to design, build, and deploy AI agents using frameworks like LangChain, AutoGen, and CrewAI, moving from a theoretical understanding of LLMs to practical application.
5. CDAO Government
Date: June 25–26, 2025
Location: Washington, D.C., US
Cost: $499 in-person; Free for VP and C-level government executives.
The CDAO Government conference in Washington, D.C., is unique as it unites U.S. government data leaders to explore AI, governance, and ethical data use in public services. Celebrating its 13th anniversary, this event offers an excellent opportunity to learn how to securely leverage AI's capabilities for government data challenges.
This was just a quick peek into spaCy pipelines — but there’s much more to explore.
For instance, the spacy-transformers extension integrates pretrained transformer models directly into your spaCy pipelines, enabling state-of-the-art performance. Additionally, the spacy-llm plugin lets you incorporate LLMs such as GPT or Cohere models for inference and prompt-based NLP tasks.
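Here is a minimal sketch of both extensions side by side; it assumes spacy, spacy-transformers, spacy-llm, and the en_core_web_trf model are installed, and the label set and LLM model names are examples rather than recommendations.

```python
# Sketch: transformer-backed NER vs. LLM-prompted NER in spaCy.
import spacy

# spacy-transformers: a pretrained transformer pipeline for higher accuracy.
nlp = spacy.load("en_core_web_trf")
doc = nlp("Packt runs an AI Agent Bootcamp in June 2025.")
print([(ent.text, ent.label_) for ent in doc.ents])

# spacy-llm: an LLM-powered NER component driven by prompting
# (this model choice needs an OPENAI_API_KEY in the environment).
nlp_llm = spacy.blank("en")
nlp_llm.add_pipe(
    "llm",
    config={
        "task": {"@llm_tasks": "spacy.NER.v3", "labels": ["PERSON", "ORG", "EVENT"]},
        "model": {"@llm_models": "spacy.GPT-4.v2"},
    },
)
doc = nlp_llm("OpenAI released Codex as a cloud-based coding agent.")
print([(ent.text, ent.label_) for ent in doc.ents])
```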
AI is no longer just a buzzword; it's the most valuable skill of this decade for making money, getting hired, and staying future-ready.
That’s why you need to join the 2-Day Free AI Upskilling Sprint by Outskill, which comes with 16 hours of intensive training on AI frameworks, tools, and tactics that will make you an AI expert.
Originally priced at $499, the sprint is completely FREE for the first 100 sign-ups! Claim your spot now for $0! 🎁
If you're working on AI design or tool integration, the Model Context Protocol (MCP) offers a seamless, standardized way to connect AI tools, data sources, and LLM applications. Developed by Anthropic, MCP is an open protocol designed to simplify the often complex and time-consuming process of integrating rapidly evolving AI models with tools and services. Think of it as the USB-C of the AI world: plug-and-play, regardless of which LLMs or tools you're working with, and without having to dive into the intricate technicalities of the protocol itself.
MCP operates on a client-server model, where your LLM application runs a local MCP client that communicates with one or more MCP servers. A service provider only needs to implement a single MCP server, which can then handle APIs, databases, and other services, without requiring constant code adjustments for each new integration.
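To make that concrete, here is a minimal single-tool server sketch using the FastMCP helper from the official Python SDK (pip install "mcp[cli]"); the server name, tool, and return value are illustrative only.

```python
# Sketch: a tiny MCP server exposing one tool to any connecting client.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")  # name advertised to connecting clients

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short weather summary for a city."""
    # A real server would call an API or query a database here.
    return f"Sunny and 22°C in {city}"

if __name__ == "__main__":
    # Serve over stdio so a local MCP client (your LLM application) can connect.
    mcp.run(transport="stdio")
```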
MCP leverages the lightweight JSON-RPC message format (a simple remote procedure call protocol), stateful connections, server-client capability negotiation, and reflection. Reflection allows the client to query the server about its capabilities, which can then be surfaced to the LLM automatically via the orchestrating application’s prompt.
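The sketch below shows roughly what that exchange looks like on the wire: the client negotiates capabilities, uses reflection to list the server's tools, and then calls one. The method names follow the MCP spec, while the ids, the version string, and the get_forecast tool are assumptions carried over from the server sketch above.

```python
# Illustrative JSON-RPC 2.0 messages an MCP client might send over stdio.
import json

# 1. Capability negotiation: the client opens the session.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # assumed spec revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# 2. Reflection: ask the server which tools it exposes, so the
#    orchestrating application can surface them to the LLM.
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# 3. Invocation: call a discovered tool by name with JSON arguments.
call_tool = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "get_forecast", "arguments": {"city": "London"}},
}

for message in (initialize, list_tools, call_tool):
    print(json.dumps(message))
```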
When designing with MCP, it's important to keep your architecture modular, test each component thoroughly, document your iterations, and ensure security by validating inputs and controlling access.
MCP is gaining traction with large organizations like Microsoft, which is integrating it into key products such as Semantic Kernel, Copilot Studio, and GitHub Copilot. I envision a near future where MCP-as-a-Service becomes the de facto standard, eliminating deployment overhead and enabling seamless AI-to-AI or agent-to-agent communication. For example, MCP endpoints could allow straightforward integration without server management, while internal repositories of MCP clients could democratize standardized tool access across organizations.
To read more about MCP, you can check out these resources: https://modelcontextprotocol.io and https://aka.ms/mcp. I’ll continue to share how our customers and various industries are adopting MCP and the lessons we’re learning along the way. Stay tuned for more.
Join Packt’s Accelerated Agentic AI Bootcamp this June and learn to design, build, and deploy autonomous agents using LangChain, AutoGen, and CrewAI. Hands-on training, expert guidance, and a portfolio-worthy project—delivered live, fast, and with purpose.
This is it.
35% off this Workshop - Limited Time Offer
If you’re in—move now.
Code: AGENT35
OpenAI Introduces Codex for Enhanced Code Generation
OpenAI has released Codex, a cloud-based AI agent for software engineering. Available in ChatGPT Pro, Enterprise, and Team, Codex (powered by codex-1) can write features, fix bugs, and answer codebase questions, operating in isolated environments. It learns from real-world tasks, producing human-like code and iteratively running tests. Developers can monitor progress, review changes with verifiable evidence, and guide Codex with AGENTS.md files.
Microsoft Unveils Windows AI Foundry and Native MCP for Future AI Agents
Microsoft is advancing its AI vision with native Model Context Protocol (MCP) in Windows and the Windows AI Foundry. This crucial groundwork, leveraging Anthropic's "USB-C of AI" protocol, aims to enable automated AI agents to seamlessly interact with apps, web services, and Windows functions. This initiative will empower features like natural language file searches and AI-powered system controls, reshaping how users engage with their devices.
Google Launches AI Ultra: A VIP Pass to Advanced AI
Google is launching Google AI Ultra, a new $249.99/month subscription (with an initial discount) offering the highest usage limits and access to its most capable AI models and premium features. Tailored for creative professionals, developers, and researchers, it includes Gemini with enhanced reasoning, Flow for cinematic video creation, Whisk for animated image generation, and advanced NotebookLM. Subscribers also get Gemini integration in Google apps (Gmail, Docs, Chrome), Project Mariner for multi-task management, YouTube Premium, and 30 TB storage.
Apple to Open AI Models for Developers
Apple is reportedly preparing to allow third-party developers to build software using its AI models, aiming to boost new application creation. This move, expected to be unveiled at WWDC on June 9th, would let developers integrate Apple's underlying AI technology into their apps, starting with on-device models. This could help Apple compete in the AI landscape and enhance Apple Intelligence's appeal.
GitHub Copilot Launches New AI Coding Agent
GitHub Copilot now features an AI coding agent that tackles low-to-medium complexity tasks by simply assigning it issues. It operates in secure, customizable environments, pushing commits to draft pull requests with transparent session logs. This agent, enhanced by Model Context Protocol (MCP) and vision models, allows developers to offload routine work, ensuring security through human approval for pull requests and adhering to existing policies.
📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.
If you have any comments or feedback, just reply back to this email.
Thanks for reading and have a great day!
That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️
We would love to know what you thought—your feedback helps us keep leveling up.
Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)