I’ve been using AI coding agents in my workflow for a while now. The most tedious part has always been the same: I see a failing network request in DevTools, copy the URL, paste it into the chat, describe what I’m seeing, wait for a response that’s missing half the context, then go back and copy more. It’s like describing a car crash to someone over the phone instead of just showing them.
Chrome M144 changed that. Chrome DevTools MCP lets your AI coding agent connect directly to your live browser session. You select a failing request in the Network panel, and your agent can inspect it right there. No copy-pasting. No context loss.
What it actually does
The key addition in the latest update is --autoConnect. Previously, Chrome DevTools MCP required you to spin up an isolated Chrome instance or configure a remote debug port manually. Now it attaches to the browser you’re already running, with your existing authenticated sessions intact. You don’t re-login. The agent sees what you see.
The flag currently requires the beta channel, so if --autoConnect isn’t working, first check that you’re on Chrome M144 or later. Enable remote debugging at chrome://inspect/#remote-debugging, then add this to your MCP config:
"chrome-devtools": {
"command": "npx",
"args": ["chrome-devtools-mcp@latest", "--autoConnect", "--channel=beta"]
}
The chrome-devtools-mcp package on npm is maintained by the Chrome DevTools team. Chrome shows a banner while an agent is connected and requires explicit approval for each new session. You know when it’s active.
What gets better
Debugging network failures is the obvious one. You see the broken request, select it, ask the agent to investigate. It has the headers, the response body, the status code. You stop narrating the problem.
I tested this with a stubborn 401 on /api/users that I’d been staring at for 20 minutes. I selected the request in the Network panel, asked the agent why it was failing, and it flagged a missing Authorization header in the preflight OPTIONS request (something I kept skipping past because I was focused on the main request). Two minutes, done.
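That failure pattern is worth spelling out, because it’s easy to hit: browsers never attach an Authorization header to a preflight OPTIONS request, so auth middleware that runs on every method rejects the preflight before the real request ever goes out. Here’s a minimal sketch of the bug and its fix; the handler shape and types are illustrative, not anything the agent produced.

```typescript
// Minimal request/response shapes for the sketch.
type Req = { method: string; headers: Record<string, string | undefined> };
type Res = { status: number; headers: Record<string, string> };

function handle(req: Req): Res {
  // The fix: answer preflights BEFORE the auth check. Browsers send
  // OPTIONS without credentials, so an auth gate here 401s every preflight.
  if (req.method === "OPTIONS") {
    return {
      status: 204,
      headers: {
        "Access-Control-Allow-Origin": "https://app.example.com",
        "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        // Authorization must be listed, or the browser blocks the
        // authenticated request that follows the preflight.
        "Access-Control-Allow-Headers": "Authorization, Content-Type",
      },
    };
  }
  if (!req.headers["authorization"]) {
    // Without the OPTIONS branch above, this is the 401 you see
    // in the Network panel while the main request looks fine.
    return { status: 401, headers: {} };
  }
  return { status: 200, headers: {} };
}
```

With the OPTIONS branch removed, every preflight falls through to the auth check and fails, which is exactly the kind of thing that hides in plain sight while you stare at the main request.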
The less obvious use is reverse engineering APIs, specifically your own services or ones you have explicit access to. The agent watches network traffic as you interact with a page, pieces together the API patterns, and can generate typed clients from them. I’ve done this manually for years with DevTools open and a notepad. Having something else handle the extraction is useful.
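To make “typed clients” concrete, here’s the kind of output that extraction can produce. Everything below is a hypothetical sketch: the endpoint, field names, and function names are assumptions for illustration, not something the tool generated.

```typescript
// Shape inferred from observed responses to GET /api/users (assumed).
interface User {
  id: number;
  email: string;
}

// Pure helper that reconstructs the observed request: URL plus the
// bearer-token header pattern seen in the captured traffic.
function usersRequest(
  baseUrl: string,
  token: string
): { url: string; headers: Record<string, string> } {
  return {
    url: `${baseUrl}/api/users`,
    headers: { Authorization: `Bearer ${token}` },
  };
}

// Typed wrapper around the endpoint.
async function getUsers(baseUrl: string, token: string): Promise<User[]> {
  const { url, headers } = usersRequest(baseUrl, token);
  const res = await fetch(url, { headers });
  if (!res.ok) throw new Error(`GET /api/users failed: ${res.status}`);
  return (await res.json()) as User[];
}
```

The value isn’t the code itself, which is trivial, it’s that the request shapes and field types come from real observed traffic instead of docs that may be stale or missing.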
DOM inspection works the same way. Select a component that’s rendering wrong, hand off the analysis. The agent works from the live state, not a screenshot you described.
What concerns me
The Hacker News thread went straight to the obvious problem: when you give an agent access to an authenticated browser session, you’re exposing every cookie, token, and stored credential to the agent’s context window. One crafted prompt injection means full access to everything you’re logged into.
That’s not hypothetical. Prompt injection through web content is a known attack vector. If your agent reads a page containing adversarial instructions and has access to your live session, the blast radius is bigger than the same attack against an isolated browser instance.
The other issue is token cost. Browser state is verbose, and if you’re running agentic loops, that overhead adds up fast. For automated scenarios, Playwright MCP is the better fit: it’s built for automation and doesn’t carry a live authenticated session. Reserve Chrome DevTools MCP for when you need to stay in an existing session without spinning up a separate browser instance.
When to use it
Use it for interactive debugging where you’re already in the loop. It cuts the friction of switching between DevTools and a chat window. That’s the right fit.
Don’t leave it running during general browsing. Don’t point it at pages you don’t control without thinking through what those pages might inject into your agent’s context. Treat it the same way you’d treat any tool with access to your authenticated state: deliberately, not by default.
MCP is moving from servers to browsers. That’s a real shift in what agents can reach. This tool is an early version of that. The security model hasn’t caught up yet.