Google shipped WebMCP in Chrome 146's early preview, and as of today, almost nobody has published a real implementation guide. We just built one of the most comprehensive WebMCP implementations on any B2B site — six imperative tools, seven declarative forms, and a full supporting infrastructure stack — and this is everything we learned, distilled into code you can ship this week.
If you haven't read the background on what WebMCP is or why it matters for B2B, start with our non-technical guide or the deep dive on why we made Cursive agent-ready. This post assumes you know what WebMCP does and you want to build it. The concepts are framework-agnostic — the standard is just HTML attributes and browser APIs — but every example here uses Next.js and React because that's what most B2B SaaS teams run on. If you're on Nuxt, Astro, SvelteKit, or even plain HTML, the patterns translate directly.
Let's build.
Prerequisites
Before writing any code, you need three things set up in your environment:
A modern web framework. Next.js 14+, Nuxt 3, Astro, SvelteKit — anything that renders HTML. WebMCP works at the browser level, so it doesn't care about your server framework. If you can write a <form> tag or execute JavaScript in the browser, you can implement WebMCP. The examples here use Next.js App Router with React, but the declarative API is pure HTML and the imperative API is vanilla JavaScript.
Chrome 146 Canary with the experimental flag. WebMCP shipped in Chrome 146's early preview program. Download Chrome Canary, navigate to chrome://flags, search for “Experimental Web Platform Features,” and enable it. Restart the browser. Without this flag, navigator.modelContext will be undefined and none of the imperative APIs will work. The declarative attributes will still be present in the DOM but won't be consumed by any agent until the flag is on.
Model Context Tool Inspector extension. This Chrome extension gives you a sidebar panel in DevTools that lists every WebMCP tool registered on the current page — both declarative (from form attributes) and imperative (from registerTool calls). You can invoke tools directly from the inspector, pass test parameters, and see the structured JSON responses. It's the single most useful debugging tool for WebMCP development.
That's it. No NPM packages to install, no build plugins to configure, no third-party SDKs. WebMCP is a browser-native standard. Your implementation is just HTML attributes and JavaScript.
Step 1 — Declarative API: The Quick Win
The fastest way to make your site agent-ready is the declarative API. If you already have HTML forms on your site — contact forms, demo requests, newsletter signups — you can make them WebMCP-compatible in under five minutes per form. No JavaScript required.
Here's a real example: a demo request form. This is the kind of form every B2B SaaS site has.
```html
<form
  toolname="requestDemo"
  tooldescription="Request a product demo. Books a 30-minute call with the sales team."
>
  <input
    name="name"
    type="text"
    required
    toolparamdescription="Full name of the person requesting the demo"
  />
  <input
    name="email"
    type="email"
    required
    toolparamdescription="Work email address"
  />
  <input
    name="company"
    type="text"
    toolparamdescription="Company name"
  />
  <button type="submit">Book Demo</button>
</form>
```

Three new attributes do all the work:
toolname goes on the <form> element. This is the identifier agents use to discover and call the tool. Name it like you'd name an API endpoint — clear, action-oriented, camelCase. An agent scanning this page sees “requestDemo” in its tool list and immediately understands what it does.
tooldescription also goes on the <form>. This is the human-readable explanation that helps an agent decide whether to call this tool. Be specific. “Request a product demo” is good. “Request a product demo. Books a 30-minute call with the sales team.” is better. The agent uses this description to match user intent — when someone says “I want to see a demo of this product,” the agent needs enough context in the description to know this is the right tool.
toolparamdescription goes on each <input>. It tells the agent what data this field expects and why. The name attribute provides the parameter key, the type and required attributes provide validation hints, and toolparamdescription provides the semantic context. For fields where the name is already self-explanatory — like name="email" type="email" — the description adds marginal value but is still good practice for disambiguation (is it a personal email or a work email?).
Here's the key insight: if your forms already have clean name attributes and proper required fields, you're 80% of the way there. The WebMCP declarative API was designed to layer onto existing HTML forms with minimal friction. Most B2B SaaS sites can annotate every form on the site in an afternoon.
When an AI agent encounters this form, it doesn't need to guess which field is the email field by looking at placeholder text or label positioning. It reads the structured attributes, constructs the right parameters, and submits the form programmatically. One clean function call instead of a chain of “click this input, type this text, tab to the next field” browser interactions.
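To make that concrete, here's a sketch of how an agent might translate those declarative attributes into a JSON Schema tool definition. The `FormField` shape and the exact mapping are our own assumptions for illustration, not part of the WebMCP spec:

```typescript
// Hypothetical sketch: deriving a JSON Schema input schema from the
// declarative attributes on a form's fields. The FormField shape below is
// our own assumption, not a WebMCP API.
interface FormField {
  name: string
  type: string // HTML input type, e.g. "text" | "email"
  required: boolean
  toolparamdescription?: string
}

function fieldsToInputSchema(fields: FormField[]) {
  const properties: Record<
    string,
    { type: string; format?: string; description?: string }
  > = {}
  const required: string[] = []

  for (const f of fields) {
    properties[f.name] = {
      type: "string", // HTML inputs are string-valued
      ...(f.type === "email" ? { format: "email" } : {}),
      ...(f.toolparamdescription ? { description: f.toolparamdescription } : {})
    }
    if (f.required) required.push(f.name)
  }

  return { type: "object", properties, required }
}
```

Running this over the demo form's three fields yields a schema that requires `name` and `email`, marks the email field with `format: "email"`, and carries each `toolparamdescription` through as the parameter description — which is why clean `name` and `required` attributes get you most of the way there.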
Step 2 — Imperative API: The Power Move
Declarative forms handle actions that already have UI — submitting a contact form, signing up for a newsletter. But the imperative API is where WebMCP gets genuinely powerful. It lets you register arbitrary JavaScript functions as tools that agents can call to get structured data back, even when there's no corresponding form or page on your site.
Think about the questions a buyer's AI agent would ask during evaluation: “What's the pricing?” “How does this compare to Competitor X?” “What features are in the Pro plan?” You can build a tool for each one. Here's a complete React component that registers a pricing tool:
```tsx
"use client"

import { useEffect } from "react"

export function WebMCPProvider() {
  useEffect(() => {
    if (!navigator.modelContext) return

    navigator.modelContext.registerTool({
      name: "getProductPricing",
      description: "Get current pricing for all plans including features and limits",
      inputSchema: {
        type: "object",
        properties: {
          plan: {
            type: "string",
            description: "Optional: specific plan name (e.g., 'starter', 'pro', 'enterprise')"
          }
        }
      },
      annotations: { readOnlyHint: true },
      execute: async (params) => {
        const plans = [
          {
            name: "Starter",
            price: "$49/mo",
            features: ["1,000 contacts", "Basic analytics", "Email support"]
          },
          {
            name: "Pro",
            price: "$149/mo",
            features: ["10,000 contacts", "Advanced analytics", "Priority support"]
          }
        ]

        if (params.plan) {
          const match = plans.find(p =>
            p.name.toLowerCase() === params.plan.toLowerCase()
          )
          return match || { error: "Plan not found", available: plans.map(p => p.name) }
        }

        return { plans, pricing_page: "https://yoursite.com/pricing" }
      }
    })
  }, [])

  return null
}
```

Let's walk through every piece of this.
name and description serve the same role as in the declarative API — they're what agents see in their tool list. The name should be action-oriented and specific: getProductPricing, not pricing or data. The description should include enough context for an agent to decide whether this tool answers the user's question. “Get current pricing for all plans including features and limits” tells the agent this tool returns not just prices but also feature breakdowns.
inputSchema is a standard JSON Schema definition. This tells the agent what parameters the tool accepts, their types, and whether they're required. In this example, the plan parameter is optional — the agent can call the tool with no arguments to get all plans, or pass a specific plan name to filter. The schema is the contract between your tool and every agent that might call it.
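Because the schema is a contract, it's worth enforcing defensively inside your own code too. Here's a minimal sketch of checking agent-supplied params against a schema before running any logic — our own helper, not part of the WebMCP API, and it only covers flat string/number/boolean properties:

```typescript
// Hypothetical sketch: defensive validation of agent-supplied params
// against a flat inputSchema. Covers string/number/boolean properties only;
// nested objects, arrays, and "integer" would need a real JSON Schema library.
type FlatSchema = {
  type: "object"
  properties: Record<string, { type: string }>
  required?: string[]
}

function validateParams(
  schema: FlatSchema,
  params: Record<string, unknown>
): string[] {
  const errors: string[] = []

  // Every required key must be present
  for (const key of schema.required ?? []) {
    if (!(key in params)) errors.push(`missing required param: ${key}`)
  }

  // Every supplied key must be declared and have the declared primitive type
  for (const [key, value] of Object.entries(params)) {
    const prop = schema.properties[key]
    if (!prop) errors.push(`unexpected param: ${key}`)
    else if (typeof value !== prop.type) errors.push(`${key} should be ${prop.type}`)
  }

  return errors
}
```

Inside `execute`, you'd return an error object when `validateParams` reports anything, instead of letting malformed input reach your logic.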
annotations provide metadata about the tool's behavior. The readOnlyHint: true annotation tells agents that calling this tool won't modify any state — it's a safe read operation. This matters because agents treat read-only tools differently from stateful ones. A read-only tool can be called speculatively during research. A stateful tool (like booking a demo or submitting a form) requires explicit user confirmation before execution.
execute is the function that runs when an agent calls the tool. It receives the parameters the agent passed (validated against your inputSchema), does whatever logic you need, and returns structured data. The return value goes directly back to the agent as JSON. No HTML, no markdown, no rendered templates — just clean data the agent can parse and present however it wants.
The “use client” directive and useEffect pattern ensure this code only runs in the browser. The feature detection check — if (!navigator.modelContext) return — means the component is a no-op in browsers that don't support WebMCP. Drop this component into your root layout, and it silently registers your tools when the API is available and does nothing when it's not.
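If you register several tools, a small guard keeps the pattern tidy. This is our own sketch: the `ModelContextLike` and `ToolDefinition` shapes are minimal assumptions, and the context object is passed in as a parameter so the helper stays a no-op without WebMCP and is testable outside the browser:

```typescript
// Hypothetical helper: register a batch of tools only when the API exists.
// ModelContextLike and ToolDefinition are our own minimal assumed shapes.
interface ToolDefinition {
  name: string
  description: string
  inputSchema: object
  execute: (params: Record<string, unknown>) => Promise<unknown>
}

interface ModelContextLike {
  registerTool: (tool: ToolDefinition) => void
}

function registerToolsSafely(
  ctx: ModelContextLike | undefined,
  tools: ToolDefinition[]
): number {
  if (!ctx) return 0 // browser without WebMCP: silently do nothing

  let registered = 0
  for (const tool of tools) {
    try {
      ctx.registerTool(tool)
      registered++
    } catch (err) {
      // one bad tool definition shouldn't block the rest
      console.warn(`[WebMCP] failed to register ${tool.name}`, err)
    }
  }
  return registered
}
```

Inside the provider's `useEffect`, you'd call `registerToolsSafely(navigator.modelContext, allTools)` and log the count, which also gives you the "[WebMCP] Tools registered successfully" signal used for testing later.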
In your Next.js layout, you'd include it like this:
```tsx
// app/layout.tsx
import { WebMCPProvider } from "@/components/webmcp-provider"

export default function RootLayout({ children }) {
  return (
    <html>
      <body>
        <WebMCPProvider />
        {children}
      </body>
    </html>
  )
}
```

Register as many tools as you need. Each registerTool call adds another entry to the agent's tool list. We run six imperative tools on meetcursive.com — pricing, competitor comparisons, capabilities, demo booking, case studies, and industry use cases — all registered in a single provider component.
Step 3 — Supporting Infrastructure
WebMCP tools are the primary interface for browser-based agents, but a complete agent-readiness strategy includes several supporting layers. These work independently of WebMCP and serve agents that don't operate in a browser context — API-based LLMs, search engine crawlers with AI features, and tools like ChatGPT or Perplexity that fetch URLs directly.
llms.txt
The llms.txt file is an emerging standard — think of it as robots.txt for AI. Place it at your site root, and it tells language models what your site is about, what tools are available, and where to find structured data. Here's a real example:
```markdown
# llms.txt — YourSaaS.com

## About
YourSaaS is a B2B analytics platform that helps growth teams
understand pipeline attribution and optimize spend.

## Products
- Starter Plan: $49/mo — 1,000 contacts, basic analytics
- Pro Plan: $149/mo — 10,000 contacts, advanced analytics
- Enterprise: Custom pricing — unlimited contacts, SSO, SLA

## WebMCP Tools Available
- getProductPricing: Returns current pricing for all plans
- requestDemo: Books a 30-minute demo call
- getCapabilities: Returns platform feature overview

## Structured Data Endpoints
- /api/ai-info — Full product JSON
- /sitemap.xml — All pages
- /.well-known/schema.json — JSON-LD schemas

## Contact
- Demo: https://yoursite.com/demo
- Sales: sales@yoursite.com
- Docs: https://docs.yoursite.com
```

This file takes ten minutes to create and immediately gives any LLM a structured overview of your product, whether or not it supports WebMCP. Many AI tools fetch llms.txt as a first step when processing a URL.
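One way to keep llms.txt from drifting out of sync with your pricing tool is to generate both from the same data. A sketch, with our own `Plan` shape and a section layout mirroring the example above:

```typescript
// Hypothetical sketch: generating llms.txt from the same plans array that
// backs the getProductPricing tool, so the two sources never disagree.
interface Plan {
  name: string
  price: string
  features: string[]
}

function buildLlmsTxt(
  company: string,
  about: string,
  plans: Plan[],
  tools: string[]
): string {
  const lines = [
    `# llms.txt — ${company}`,
    "",
    "## About",
    about,
    "",
    "## Products",
    ...plans.map(p => `- ${p.name}: ${p.price} — ${p.features.join(", ")}`),
    "",
    "## WebMCP Tools Available",
    ...tools.map(t => `- ${t}`)
  ]
  return lines.join("\n")
}
```

You could run this at build time and write the output to `public/llms.txt`, or serve it from a route handler.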
/api/ai-info Endpoint
A JSON endpoint that returns your complete product information in one request. Any agent or LLM can fetch this URL — no browser required. Here's a Next.js route handler:
```ts
// app/api/ai-info/route.ts
import { NextResponse } from "next/server"

export async function GET() {
  return NextResponse.json({
    company: "YourSaaS",
    description: "B2B analytics platform for pipeline attribution",
    products: [
      {
        name: "Starter",
        price: "$49/mo",
        features: ["1,000 contacts", "Basic analytics", "Email support"]
      },
      {
        name: "Pro",
        price: "$149/mo",
        features: ["10,000 contacts", "Advanced analytics", "Priority support"]
      }
    ],
    webmcp_tools: [
      "getProductPricing",
      "requestDemo",
      "getCapabilities"
    ],
    links: {
      pricing: "https://yoursite.com/pricing",
      demo: "https://yoursite.com/demo",
      docs: "https://docs.yoursite.com"
    }
  }, {
    headers: {
      "Cache-Control": "public, max-age=3600"
    }
  })
}
```

Cache it for an hour. Update it when your pricing or features change. This endpoint serves as the single source of truth for any AI system that wants to understand your product programmatically.
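Since this endpoint is a contract for external agents, a shape check you can run in CI against the deployed URL catches accidental breakage. This is our own sketch; the `AiInfo` type simply mirrors the payload above:

```typescript
// Hypothetical CI smoke check: a runtime type guard for the /api/ai-info
// payload, mirroring the route handler's response shape.
interface AiInfo {
  company: string
  description: string
  products: { name: string; price: string; features: string[] }[]
  webmcp_tools: string[]
}

function isValidAiInfo(data: unknown): data is AiInfo {
  if (typeof data !== "object" || data === null) return false
  const d = data as Record<string, unknown>
  return (
    typeof d.company === "string" &&
    typeof d.description === "string" &&
    Array.isArray(d.products) &&
    d.products.every(
      p => typeof p?.name === "string" &&
           typeof p?.price === "string" &&
           Array.isArray(p?.features)
    ) &&
    Array.isArray(d.webmcp_tools)
  )
}
```

A CI step would `fetch("https://yoursite.com/api/ai-info")`, parse the JSON, and fail the build if `isValidAiInfo` returns false.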
JSON-LD Structured Data
Structured data has been a best practice for SEO for years, but it takes on new importance in an agentic context. JSON-LD schemas give agents a machine-readable understanding of your pages without requiring any proprietary API. Here's an example for your pricing page:
```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "YourSaaS",
  "applicationCategory": "BusinessApplication",
  "offers": [
    {
      "@type": "Offer",
      "name": "Starter",
      "price": "49",
      "priceCurrency": "USD",
      "priceSpecification": {
        "@type": "UnitPriceSpecification",
        "billingDuration": "P1M"
      }
    },
    {
      "@type": "Offer",
      "name": "Pro",
      "price": "149",
      "priceCurrency": "USD"
    }
  ]
}
</script>
```

JSON-LD complements WebMCP. The structured data is available to search engine crawlers and agents that parse the DOM but don't execute JavaScript. WebMCP tools serve agents that do execute JavaScript. Together, they cover every agent type.
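As with llms.txt, the JSON-LD block can be generated from the same plans array your pricing tool returns, so all three layers stay in sync. A sketch, with our own `Plan` shape and a simple price-string normalization:

```typescript
// Hypothetical sketch: building the SoftwareApplication JSON-LD object from
// the plans array that also backs the pricing tool and /api/ai-info.
interface Plan {
  name: string
  price: string // display string like "$49/mo"
}

function buildOffersJsonLd(appName: string, plans: Plan[]) {
  return {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    name: appName,
    applicationCategory: "BusinessApplication",
    offers: plans.map(p => ({
      "@type": "Offer",
      name: p.name,
      // strip "$" and "/mo" down to the bare numeric string schema.org expects
      price: p.price.replace(/[^0-9.]/g, ""),
      priceCurrency: "USD"
    }))
  }
}
```

In a Next.js page you'd serialize this with `JSON.stringify` into a `<script type="application/ld+json">` tag.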
robots.txt
Review your robots.txt and make sure you're not blocking AI crawlers. Several LLM providers use specific user agents — GPTBot, ClaudeBot, Google-Extended. If you've blocked these (some default WordPress configs do), your llms.txt and /api/ai-info endpoint won't be accessible. Allow the crawlers you want to reach your structured data.
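A minimal robots.txt that leaves the major AI crawlers unblocked might look like this (the user-agent tokens shown are the publicly documented ones; verify the current list against each provider's documentation):

```text
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Allow: /

Sitemap: https://yoursite.com/sitemap.xml
```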
Step 4 — Testing Your Implementation
WebMCP is new enough that there's no established testing framework yet. Here's the testing workflow we use at Cursive.
Chrome 146 Canary + experimental flag. This is your primary testing environment. Navigate to chrome://flags, enable “Experimental Web Platform Features,” and restart. Open your site. If your imperative tools are registered correctly, you should see a console log confirming registration. We log [WebMCP] Tools registered successfully from our provider component.
Model Context Tool Inspector. Install this Chrome extension, open DevTools, and navigate to the “Model Context” tab. It lists every tool on the current page — both declarative (from annotated forms) and imperative (from registerTool calls). Click any tool to see its schema. Pass test parameters and execute it directly from the inspector. This is the fastest way to verify your tools return the correct JSON.
Chrome DevTools console. For quick spot checks, open the console and run:
```js
// Check if WebMCP is available
console.log(navigator.modelContext)

// List all registered tools
const tools = await navigator.modelContext.getTools()
console.log(tools)

// Call a tool manually
const result = await navigator.modelContext.callTool(
  "getProductPricing",
  { plan: "pro" }
)
console.log(result)
```

If navigator.modelContext is undefined, the experimental flag isn't enabled. If getTools() returns an empty array, your registration code isn't running — check that your provider component is mounted and that the useEffect isn't erroring silently.
Manual agent testing. The real test is using an actual AI agent. Open Claude in Chrome Canary (or any browser-based agent that supports WebMCP), navigate to your site, and ask it questions your tools should answer. “What's the pricing for the Pro plan?” should trigger your pricing tool. “Book me a demo” should trigger your demo form or booking tool. If the agent falls back to reading the page visually instead of calling your tools, your tool descriptions may not be matching the user's intent closely enough. Refine the descriptions until the agent consistently selects the right tool.
Tool Design Best Practices
After building and iterating on our WebMCP implementation, we've developed a set of design principles that make the difference between tools that agents actually use and tools that get ignored.
Name tools like API endpoints. Clear, action-oriented, unambiguous. getProductPricing is immediately understandable. pricing is vague — is it a page, a value, a tool? handlePricingStuff tells the agent nothing. Use the get/create/compare/book verb pattern that every developer already knows from REST APIs.
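That naming convention can even be enforced mechanically. Here's a sketch of a lint check — the verb list is our own convention, not a WebMCP requirement:

```typescript
// Hypothetical lint rule: tool names must be camelCase and start with a
// recognized REST-style verb. The verb list is our own convention.
const ALLOWED_VERBS = ["get", "create", "update", "compare", "book", "request", "list"]

function isWellNamedTool(name: string): boolean {
  // camelCase only: lowercase start, alphanumeric, no underscores or dashes
  if (!/^[a-z][a-zA-Z0-9]*$/.test(name)) return false
  // must begin with an allowed verb followed by an uppercase letter
  return ALLOWED_VERBS.some(
    v => name.startsWith(v) && /[A-Z]/.test(name.charAt(v.length))
  )
}
```

Run it over your tool definitions in a unit test so a vaguely named tool fails CI before an agent ever ignores it.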
Write descriptions for the agent, not for developers. The description field is the primary signal an agent uses to decide whether to call your tool. Include what the tool does, what it returns, and when someone would want to use it. “Get current pricing for all plans including features, limits, and annual discount information” is significantly better than “Returns pricing data.” Think about it from the agent's perspective: it's scanning a list of tools trying to match a user's intent. The more context your description provides, the more accurately the agent matches.
Return structured JSON, never HTML or markdown. Your tool's return value goes directly to the agent as data. The agent decides how to present it to the user. If you return HTML, the agent has to parse it. If you return markdown, the agent has to interpret formatting. Return plain JSON objects with clear keys and values. The agent will format the presentation.
Include URLs in your responses. When your tool returns data, include relevant URLs so the agent can navigate users to the right pages. If your pricing tool returns plan details, include a link to the pricing page. If your comparison tool returns a feature matrix, include a link to the full comparison article. Agents use these URLs to provide “learn more” links and to navigate users deeper into your site.
```js
// Good: includes actionable URLs
return {
  plans: [...],
  pricing_page: "https://yoursite.com/pricing",
  demo_url: "https://yoursite.com/demo",
  signup_url: "https://yoursite.com/signup"
}

// Bad: data island with no navigation
return {
  plans: [...]
}
```

Mark read-only tools with readOnlyHint: true. This annotation tells agents that calling this tool is safe and has no side effects. Pricing lookups, feature comparisons, and capability overviews are read-only. Demo booking and form submissions are stateful. The distinction matters because agents apply different confirmation policies — a read-only tool can be called without asking the user for permission, while a stateful tool typically requires explicit consent before execution.
Think like a buyer, not like a developer. The most useful exercise is to sit down and list every question a prospect's AI assistant would ask during evaluation. How much does it cost? How does it compare to alternatives? What industries do you serve? Can I see results? How do I get started? Each question maps to a tool. If you build tools that answer the questions buyers actually ask, agents will use them. If you build tools based on your internal data model, they won't.
Start Building
WebMCP is one of those rare standards where the first-mover advantage is real and the implementation cost is low. The declarative API takes an afternoon. The imperative API takes a day. The supporting infrastructure — llms.txt, /api/ai-info, JSON-LD — takes another day. In three days of engineering time, you can make your entire product accessible to every AI agent that visits your site.
We built this for Cursive and it's live right now. Visit meetcursive.com with Chrome Canary to see the tools in action. Check out our platform to see what Cursive does, or browse the pricing page to see how the structured data layers work in practice.
For the strategic context behind this implementation — why the buyer journey is going agentic and what that means for your pipeline — read our companion articles. For the full WebMCP specification, the GitHub repo has the latest draft. And VentureBeat's coverage provides solid third-party context on the industry implications.
The agentic web isn't coming. It's here. The question is whether your site is ready for it.
