
Microsoft held its annual developer conference, Microsoft Build 2025, on Monday in Seattle, announcing major updates spanning AI agents, developer tools, and open platforms.
During his keynote address, Microsoft CEO Satya Nadella introduced the concept of the Open Agentic Web, an internet environment in which AI agents operate across personal, organizational, and business levels. Nadella declared that an era has arrived in which AI can make decisions and perform tasks on behalf of users and organizations.
Microsoft announced that its cloud service Azure will now offer Grok 3 and Grok 3 Mini, developed by Elon Musk’s AI company xAI.
It also plans to provide models from French AI startup Mistral and Germany’s Black Forest Labs. With these additions, Azure users now have access to over 1,900 AI models.
Microsoft, which has a partnership with ChatGPT developer OpenAI, initially relied on OpenAI's models for its cloud AI services following ChatGPT's launch in November 2022. However, it is now diversifying its strategy, collaborating with a range of AI companies to expand its cloud business.
Microsoft also unveiled a new AI coding agent for GitHub Copilot. Unlike the existing assistant, which automatically suggested partial code based on what a developer was writing, the new agent can write complete code from simple instructions and request user review upon completion.
The conference showcased Microsoft’s vision for a future where companies can build and deploy their own AI agents for various tasks. Microsoft explained that its Azure AI Foundry platform will enable businesses to create custom agents based on their preferred AI models.
Microsoft also introduced Microsoft Discovery, a platform designed to accelerate scientific innovation in fields such as drug development and environmental research through AI agents.
Additionally, the company announced that its products, including Windows, will support the Model Context Protocol (MCP), an open standard developed by AI startup Anthropic that governs how AI systems interact with other software.
MCP is an interface protocol that facilitates interaction between large language models (LLMs) and external tools and data. This promotes compatibility among various AI models and systems, supporting the development of improved AI agents.
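Because MCP is built on JSON-RPC, a tool invocation from an LLM host to an external tool server can be sketched as a plain message. In this minimal sketch, the `get_weather` tool name and its arguments are hypothetical, used only to illustrate the message shape; `tools/call` is the MCP method for invoking a tool exposed by a server.

```python
import json

# Hedged sketch: an MCP-style JSON-RPC 2.0 request asking a tool server
# to run a (hypothetical) "get_weather" tool with given arguments.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # hypothetical tool name
        "arguments": {"city": "Seattle"},  # hypothetical arguments
    },
}

# Serialize for transport; MCP servers exchange such messages with hosts.
payload = json.dumps(request)
print(payload)
```

Standardizing this request shape is what lets any compliant model or application call any compliant tool server without bespoke integration code.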