Welcome back! For AI agents to reliably automate white-collar tasks, the companies developing the agents and the companies running the enterprise apps those agents use will need to agree on technical standards for how these technologies connect to each other. Some leading companies are preparing to do just that.

As soon as this week, Anthropic, OpenAI, Google, Microsoft and a slate of other tech companies are set to announce a new group, the Agentic Artificial Intelligence Foundation, which will work together on open-source standards involving AI agents, according to a person involved in the project. The idea is similar to how the world’s banks have standardized interbank electronic payments, this person said. The Linux Foundation, a nonprofit that governs other open-source projects such as the Linux operating system, is organizing the new group.

The group will initially focus on three existing open-source AI tools, this person said: the Model Context Protocol, invented by Anthropic to standardize how AI agents connect to other applications; Agents.md, an OpenAI-invented format for giving instructions to coding agents, such as how the agents should install an application or run tests on software; and Goose, an open-source AI agent invented by Block that runs locally on a single computer without needing an internet connection.

MCP, which is effectively a way for AI models to talk to an application programming interface, has gained the most traction: firms including OpenAI, Microsoft, Google and Cursor have embraced it for their AI products or enterprise apps, including ChatGPT and Google Workspace. Companies can use MCP to help agents connect to applications. For instance, a company that uses ChatGPT can connect it to Slack via MCP. Then ChatGPT can read employees’ conversations in public Slack channels to answer a manager’s questions about who’s handling a particular customer account.
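To make the Slack scenario concrete: MCP is built on JSON-RPC 2.0, so under the hood the agent's request to a connected app looks like a small JSON message. Below is a minimal sketch of a "tools/call" exchange; the tool name "slack_search_messages", its arguments and the reply text are hypothetical examples, not part of any real Slack connector.

```python
import json

# Hypothetical MCP tool call: the agent asks a Slack-connected MCP server
# to search public channels. MCP requests follow JSON-RPC 2.0, so each
# message carries "jsonrpc", an "id" and a "method".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "slack_search_messages",        # hypothetical tool name
        "arguments": {"channel": "#support", "query": "Acme account"},
    },
}

# A well-formed reply echoes the request id and returns content blocks
# the model can read. The text here is invented for illustration.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": "3 messages mention the Acme account."}
        ]
    },
}

print(json.dumps(request, indent=2))
```

The point of standardizing at this layer is that any agent that can emit messages in this shape can talk to any app that exposes an MCP server, without bespoke integration code for each pairing.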
The Linux-sponsored AI standards group aims to develop rules for how companies should contribute to the open-source software and share findings on technical problems they encounter. While chief information officers tell me writing MCP-friendly code is quickly becoming common among companies developing internal AI applications, such as chatbots, they say the technology isn’t perfect and companies need to agree on procedures such as how to patch security vulnerabilities.

For instance, Alberto Martinez, head of IT at AngelList, which sells financial software for venture capital funds, said he worries about prompt injection attacks, in which hackers who have compromised an app an agent connects to plant instructions that trick the AI into revealing customer data. That’s relevant to AngelList because it has been developing its own software to connect the AI agents it uses, such as the coding agent Devin, to other applications, such as PagerDuty, so the agents can notify engineers when its site has outages and suggest fixes. “MCPs have many security risks like prompt injection attacks—you can always find a way to trick” the AI, Martinez said. He said AngelList has been using tools from AI startup Runlayer to make it easier to track and manage the applications its agents connect to over MCP.

Who knows whether the companies involved in the foundation will see eye to eye on everything? Spats over open-source standards are as old as the tech industry itself. But this new group looks like a good omen for cooperation.