⚠️ Experimental Preview - This extension is for developer testing only. Not intended for production use. Only use on sites you trust.
A browser extension that lets an AI agent interact with web pages through the WebMCP API. The agent discovers tools exposed by a page and calls them autonomously to complete a user-defined goal.
The extension supports multiple LLM providers. Choose one:
You need an Azure OpenAI resource with a deployed model. Currently tested with gpt-5.2-chat.
From Azure AI Foundry, grab:
- Endpoint - from the Endpoint section, called "Target URI" (e.g. `https://your-resource.azure.com/openai/...`)
- API Key - from the Endpoint section, called "Key"
- Deployment Name - from the Deployment info section, called "name" (e.g., `gpt-5.2-chat`)
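As a rough sketch of how the three values fit together, the snippet below follows Azure's documented Chat Completions URL pattern. The resource name, deployment, and `api-version` value are placeholders, not values from your resource:

```javascript
// Illustrative only: how Endpoint, API Key, and Deployment Name combine into
// a Chat Completions request URL. The api-version is an example value; check
// your resource for the one to use.
const endpoint = 'https://your-resource.openai.azure.com'; // "Target URI" host
const deploymentName = 'gpt-5.2-chat';                     // Deployment "name"
const apiKey = '<your-key>';                               // sent as the api-key header

const url = `${endpoint}/openai/deployments/${deploymentName}` +
  `/chat/completions?api-version=2024-02-01`;
```

The extension's Config tab asks for these same three values; you never need to assemble the URL yourself.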
New to Azure? Follow the step-by-step setup instructions below.
1. Create an Azure Account
If you don't have an Azure subscription:
- Microsoft Employees: Follow the internal instructions to activate your benefit.
- Others: The Azure free account includes $200 credit for 30 days (as of Feb 2026), sufficient for testing.
2. Deploy the Model
- Go to Azure AI Foundry.
- Click "Create an agent".
- Name your project (creation may take a moment).
- When the "Deploy model" step appears, click "Go to model catalog" to see more options.
- Select `gpt-5.2-chat` from the catalog.
- Click "Use this model".
- Keep the default settings and click "Create resource and deploy".
- Once deployed, copy the Endpoint, API Key, and Deployment Name from the resource page.
- API Key - from OpenAI API settings
- Model - e.g., `gpt-5.2`
Any provider that implements the Chat Completions API format:
- Base URL - e.g., `http://localhost:11434/v1` (Ollama) or `http://localhost:1234/v1` (LM Studio)
- API Key - optional; leave empty if not required
- Model - e.g., `qwen2.5:7b`, `llama3.1:8b`, `mistral:7b`
The agent relies on function calling (tool use) to interact with web pages. Your model must support tool calling. This is especially important for local models — not all models support it.
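For reference, a request to any of these providers follows the same Chat Completions shape. The sketch below shows a payload with one tool definition; the tool name and schema are modeled on the pizza demo and are illustrative, as are the model and base URL (Ollama defaults):

```javascript
// Sketch of a Chat Completions request body carrying a tool definition in the
// standard function-calling format. Names and schema here are illustrative.
const baseUrl = 'http://localhost:11434/v1'; // Ollama default

const request = {
  model: 'qwen2.5:7b',
  messages: [{ role: 'user', content: 'Order a pizza for delivery' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'select-order-type',
        description: 'Choose pickup or delivery',
        parameters: {
          type: 'object',
          properties: {
            orderType: { type: 'string', enum: ['pickup', 'delivery'] },
          },
          required: ['orderType'],
        },
      },
    },
  ],
};

// POST this to `${baseUrl}/chat/completions`. A model without tool-calling
// support will never return `tool_calls` for a request like this, which is
// why the note above matters.
```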
Microsoft Edge Canary is recommended:
- Download Edge Canary
- Navigate to `edge://flags`
- Search for "WebMCP for testing"
- Set it to Enabled
- Restart the browser
1. Clone this repository into a local directory.
2. Load the extension in your browser:
   - Go to `edge://extensions` (or `chrome://extensions`)
   - Enable Developer mode
   - Click Load unpacked and select the `dist/` folder in this repository
3. Click the extension icon in the toolbar to open the side panel.
4. Go to the ⚙️ Config tab, select your LLM provider, and enter your credentials.
We recommend testing with the Checkers Pizza demo site:
https://victorhuangwq.github.io/pizza-order-demo
- Open the pizza site in a tab
- Open the extension side panel (click the extension icon)
- Go to the 🔧 Tools tab - you should see tools like `select-order-type`
- Switch to the 🤖 Agent tab
- Enter a goal, e.g. `Order a large pepperoni pizza for delivery to 1 Microsoft Way, Redmond, WA 98052`
- Click ▶▶ Run for the full loop, or ▶ Step to execute one tool call at a time
| Button | Behavior |
|---|---|
| ▶ Step | Run one iteration (discover tools > LLM > execute one tool call > stop) |
| ▶▶ Run | Run the full agent loop until the goal is achieved or max iterations reached |
| ⏹ Stop | Abort the current run |
| ↺ Reset | Clear the log and goal |
| Auto-approve | When ON, tool calls execute without confirmation. ON by default. |
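The Step/Run behavior above can be sketched as a loop. This is a simplification with illustrative function names, not the actual code in `agent.js`:

```javascript
// Simplified sketch of the agent loop: discover tools, ask the LLM for the
// next tool call, execute it, repeat until the model stops calling tools or
// the iteration cap is hit. Dependencies are injected, so a single "Step" is
// just one pass through the loop body.
async function runAgent({ discoverTools, callLLM, executeTool, maxIterations = 10 }) {
  const log = [];
  for (let i = 0; i < maxIterations; i++) {
    const tools = await discoverTools();              // tools exposed by the page
    const reply = await callLLM({ tools, log });      // model picks the next action
    if (!reply.toolCall) return { done: true, log };  // no tool call: goal reached
    const result = await executeTool(reply.toolCall); // run it against the page
    log.push({ call: reply.toolCall, result });
  }
  return { done: false, log };                        // hit the iteration cap
}
```

▶▶ Run drives this loop to completion; ▶ Step executes one iteration and stops, which pairs well with Auto-approve turned off.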
- Experimental preview - This is a developer tool for testing the WebMCP API. Do not use in production.
- Trusted sites only - The extension runs on all pages. Only use it on sites you trust.
- API costs - Each agent step makes an LLM API call. When using ▶▶ Run, monitor the agent closely as it can enter a loop and exhaust your token credits. Prefer ▶ Step for controlled execution.
- No guarantees - The agent may take unexpected actions. Use the Step button and disable Auto-approve to review each action before execution.
webmcp-explorer/
├── src/ # Source files (edit these)
│ ├── manifest.json
│ ├── background.js
│ ├── content.js
│ ├── sidebar.html
│ ├── sidebar.js
│ ├── sidebar.css
│ ├── agent.js # Provider-agnostic agent loop
│ ├── settings.js # Shared settings (iframe, iterations)
│ └── providers/ # LLM provider abstraction
│ ├── base.js # BaseProvider interface
│ ├── azure-openai.js
│ ├── openai.js
│ ├── openai-compat.js # Chat Completions API provider
│ └── registry.js # Provider registry & factory
├── scripts/ # Build tooling
│ ├── build.js # Copies src/ → dist/ and runs bundle
│ └── bundle.js # Bundles the OpenAI SDK via esbuild
├── dist/ # Loadable extension (committed, do not edit directly)
│ ├── ... (copied from src/)
│ └── openai-bundle.js # Generated OpenAI SDK bundle
├── package.json
├── README.md
└── .gitignore
If you want to modify the extension source code:
- Edit files in the `src/` folder (not `dist/`).
- Run `npm install` to install dependencies and auto-build, or run `npm run build` to rebuild after making changes.
- Reload the extension in your browser (`edge://extensions` → click the refresh icon on WebMCP Explorer).
The extension uses a provider abstraction that makes it easy to add new LLM providers:
- Create a new file in `src/providers/` (e.g., `my-provider.js`)
- Extend `BaseProvider` and implement the required methods:
  - `getSettings()` / `saveSettings()` - read/write provider config from `chrome.storage.local`
  - `isConfigured()` - return `true` when the provider has all required settings
  - `sendRequest({ instructions, messages, tools, providerState, signal })` - send a request to the LLM and return `{ toolCalls, text, providerState }`
- Define static metadata on the class:
  - `static id` - unique string identifier
  - `static label` - human-readable name for the UI
  - `static settingsFields` - array of `{ key, label, type, placeholder, required }` for dynamic UI rendering
- Register it in `src/providers/registry.js` - import and add to the `providers` array
The UI will automatically render the settings fields and the provider will appear in the dropdown. No changes to agent.js or sidebar.js are needed.
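Putting the steps above together, a provider skeleton might look like the sketch below. `BaseProvider` here is a stand-in for the real class in `src/providers/base.js`, and the storage key, settings fields, and request shape are assumptions for illustration, not the extension's actual code:

```javascript
// Illustrative provider skeleton following the interface described above.
class BaseProvider {} // placeholder so the sketch runs standalone

class MyProvider extends BaseProvider {
  static id = 'my-provider';    // unique identifier used by the registry
  static label = 'My Provider'; // shown in the provider dropdown
  static settingsFields = [
    { key: 'baseUrl', label: 'Base URL', type: 'text', placeholder: 'https://...', required: true },
    { key: 'apiKey', label: 'API Key', type: 'password', placeholder: '', required: false },
    { key: 'model', label: 'Model', type: 'text', placeholder: 'gpt-4o', required: true },
  ];

  async getSettings() {
    // Read this provider's config from chrome.storage.local
    const stored = await chrome.storage.local.get(MyProvider.id);
    return stored[MyProvider.id] ?? {};
  }

  async saveSettings(settings) {
    await chrome.storage.local.set({ [MyProvider.id]: settings });
  }

  async isConfigured() {
    const settings = await this.getSettings();
    // True only when every required field has a value
    return MyProvider.settingsFields
      .filter((f) => f.required)
      .every((f) => Boolean(settings[f.key]));
  }

  async sendRequest({ instructions, messages, tools, providerState, signal }) {
    const { baseUrl, apiKey, model } = await this.getSettings();
    const res = await fetch(`${baseUrl}/chat/completions`, {
      method: 'POST',
      signal,
      headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` },
      body: JSON.stringify({
        model,
        messages: [{ role: 'system', content: instructions }, ...messages],
        tools,
      }),
    });
    const data = await res.json();
    const choice = data.choices[0].message;
    return { toolCalls: choice.tool_calls ?? [], text: choice.content ?? '', providerState };
  }
}
```

Because the registry reads the static metadata, this is all the code a new provider needs: the settings UI and dropdown entry come for free.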