
Key Features
- Widget Emulator - Render widgets exposed via `openai/outputTemplate` (ChatGPT Apps) or `ui.resourceUri` (MCP Apps) the same way a real host would, with support for Inline, Picture-in-Picture, and Fullscreen display modes
- Chat with your Server - Talk to your MCP server from inside the App Builder using frontier models (Claude, GPT, Gemini, and more) at no cost, with no API key required; widgets render inline in the thread as tools get called
- Manual Tool Invocation - Invoke any tool directly from the Tools list with your own inputs to iterate on widgets without having to prompt a model
- Protocol Toggle - For apps that declare both `openai/outputTemplate` and `ui.resourceUri`, flip between the `window.openai` host bridge and the `ui/*` JSON-RPC host bridge to see how each protocol renders the same widget
- Display Context Controls - Test across device viewports (desktop, tablet, mobile, custom), light/dark themes, Claude vs ChatGPT host styles, locales, timezones, CSP modes, device capabilities, and safe area insets
- Per-Widget Debugging - Inspect raw tool input/output, `widgetState`, model context updates, and CSP violations for every tool call from a row of icons above each widget
- Traces and X-Ray - Replay any conversation as a timestamped timeline with per-step latency and token counts, or open X-Ray to see the exact JSON payload sent to the model
- JSON-RPC Logger - Real-time feed of every message between the inspector and your server, including tool calls, `window.openai` API calls, widget state updates, and `ui/*` messages
- Save as a View - Snapshot any tool execution to the Views tab to browse, edit, and share offline
Getting Started
To start building apps with MCPJam:
- Connect your MCP server - Use the Servers tab to connect an MCP server that exposes widget-enabled tools
- Navigate to App Builder - Switch to the App Builder tab in the inspector
- Render a widget - Either manually invoke a tool from the Tools list on the left, or chat with your server from the chat input at the bottom and let the model call the tool for you
If the tool declares a widget (`openai/outputTemplate` for ChatGPT Apps or `ui.resourceUri` for MCP Apps), it renders immediately in the widget emulator, and a row of debugging icons appears above it so you can inspect state, model context, data, and CSP violations for that exact call.
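For reference, a tool can advertise its widget through `_meta` on the tool descriptor. A minimal sketch: only the `openai/outputTemplate` and `ui.resourceUri` key names come from this page; the tool name, URI, and exact key placement are illustrative.

```typescript
// Illustrative tool descriptor. Only the "openai/outputTemplate" and
// "ui.resourceUri" key names are taken from the docs; the rest is assumed.
interface ToolDescriptor {
  name: string;
  description: string;
  _meta: Record<string, string>;
}

const tool: ToolDescriptor = {
  name: "show_weather", // hypothetical tool
  description: "Render a weather widget for a city",
  _meta: {
    "openai/outputTemplate": "ui://widget/weather.html", // ChatGPT Apps
    "ui.resourceUri": "ui://widget/weather.html",        // MCP Apps
  },
};
```

With both keys present, the App Builder's protocol toggle lets you render the same widget through either host bridge.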
App Builder Layout
The App Builder uses a split-panel interface:
Widget Emulator (Center)
- Renders the active widget inside a device-accurate viewport
- Display mode, device type, theme, host style, locale, timezone, CSP mode, device capabilities, and safe area controls live in the toolbar above
- Debug panels (data, widget state, model context, CSP) open below the widget
- Tools list - Every tool exposed by your connected servers, with a protocol toggle for apps that support both ChatGPT and MCP host bridges
- Chat thread - Conversation history with inline tool calls and widgets; each tool result has its own row of debugging icons
- JSON-RPC logger - Real-time feed of MCP protocol messages at the bottom of the panel
Chat Input
- Send messages to the model, attach files, and pick skills or prompts with `/`
- Configure the model, system prompt, temperature, and tool approval
- Open X-Ray with the scan-search icon to inspect the raw payload sent to the model
Display Context
Device Viewports
Test your widgets across different device types to ensure responsive design:
- Desktop (1280x800) - Standard desktop viewport for full-screen experiences
- Tablet (820x1180) - Medium-sized viewport for tablet devices
- Mobile (430x932) - Small viewport mimicking phone screens
- Custom - Set arbitrary width and height values between 100 and 2560 pixels
Theme Testing
Toggle between light and dark modes to test your widget’s appearance in both themes.
Host Styles
Switch between Claude and ChatGPT host styles to see how the widget and chat thread look in each host environment. For MCP Apps that use the MCP Apps style variables, the ChatGPT toggle translates the widget’s styles to ChatGPT’s design tokens.
Locale Configuration
Test your app’s internationalization by selecting different locales from the locale selector. Choose from common BCP 47 locales (e.g., `en-US`, `es-ES`, `ja-JP`, `fr-FR`) to verify that your widget properly handles different languages and regions.
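Widgets can honor the selected locale with the standard `Intl` APIs. A minimal sketch, using one of the BCP 47 tags listed above; the hard-coded date and price are demonstration values only:

```typescript
// Format a date and a price for whatever locale the host reports.
function localize(locale: string): { date: string; price: string } {
  const when = new Date(Date.UTC(2024, 0, 15)); // 15 Jan 2024, fixed for demo
  return {
    date: new Intl.DateTimeFormat(locale, {
      dateStyle: "long",
      timeZone: "UTC",
    }).format(when),
    price: new Intl.NumberFormat(locale, {
      style: "currency",
      currency: "EUR",
    }).format(19.99),
  };
}

console.log(localize("en-US")); // e.g. "January 15, 2024" / "€19.99"
console.log(localize("ja-JP"));
```

Flipping the locale selector should change output like this without any widget-side string tables for dates and numbers.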
Content Security Policy (CSP)
The App Builder includes CSP enforcement controls to help you test widget security configurations. Switch between two CSP modes using the shield icon:
- Permissive (default) - Allows all HTTPS resources, suitable for development
- Strict - Only allows domains declared in your widget’s CSP metadata (`openai/widgetCSP` for ChatGPT Apps, `ui.csp` for MCP Apps)
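As a sketch of what Strict mode checks against, an `openai/widgetCSP` block might look like the following; the `connect_domains`/`resource_domains` field names and the domains are assumptions for illustration, not taken from this page:

```typescript
// Illustrative CSP metadata for a widget. Only the "openai/widgetCSP"
// key name comes from the docs; the inner field names are assumed.
const widgetMeta = {
  "openai/widgetCSP": {
    connect_domains: ["https://api.example.com"],  // fetch/XHR targets
    resource_domains: ["https://cdn.example.com"], // scripts, images, fonts
  },
};
```

In Strict mode, any request outside the declared domains is blocked and shows up in the CSP debug tab.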
Device Capabilities
Configure device-specific capabilities to test different interaction patterns:
- Hover - Enable/disable hover support to test mouse-based interactions vs touch-only interfaces
- Touch - Toggle touch input to simulate mobile and tablet devices
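If you branch on these capabilities in widget code, the decision usually reduces to a pure function. A sketch with the inputs reduced to two booleans; in a browser they would come from `window.matchMedia("(hover: hover)")` and `window.matchMedia("(pointer: coarse)")`, which are standard CSS media features, while the mode names are illustrative:

```typescript
// Decide which interaction affordances to show (e.g. hover tooltips)
// based on the capabilities the emulator toggles simulate.
function interactionMode(canHover: boolean, hasTouch: boolean): string {
  if (canHover && !hasTouch) return "pointer"; // desktop-style input
  if (!canHover && hasTouch) return "touch";   // phone/tablet-style input
  return "hybrid";                             // e.g. touch laptops
}
```

Toggling Hover and Touch in the toolbar lets you exercise each branch without switching physical devices.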
Safe Area Insets
Simulate device notches, rounded corners, and gesture areas with configurable safe area insets:
- Preset Profiles - Quick access to common device configurations:
  - None (0px)
  - iPhone with Notch (44px top, 34px bottom)
  - iPhone with Dynamic Island (59px top, 34px bottom)
  - Android gesture navigation (24px top, 16px bottom)
- Custom Values - Manually adjust top, bottom, left, and right insets
Timezone
Select a timezone from the toolbar to test time-aware widgets. The selector includes 19 IANA timezones covering all major regions, plus `UTC`.
The timezone selector is only available for MCP Apps (SEP-1865).
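Time-aware widgets can render in the selected zone via `Intl.DateTimeFormat`'s `timeZone` option. A minimal sketch using one example IANA identifier; the fixed instant is for demonstration:

```typescript
// Render a fixed UTC instant in a given IANA timezone.
function clockIn(timeZone: string): string {
  const instant = new Date(Date.UTC(2024, 5, 1, 12, 0)); // 12:00 UTC, 1 Jun 2024
  return new Intl.DateTimeFormat("en-US", {
    timeZone,
    hour: "2-digit",
    minute: "2-digit",
    hour12: false,
    timeZoneName: "short",
  }).format(instant);
}

console.log(clockIn("America/New_York")); // "08:00 EDT"
console.log(clockIn("Asia/Tokyo"));
```

Switching the toolbar timezone should shift output like this without any widget-side offset math.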
Widget Controls
Protocol Selector
When an MCP App includes ChatGPT compatibility metadata (`openai/outputTemplate` alongside `ui.resourceUri`), a protocol toggle appears in the left panel header. This lets you switch which host bridge the emulator uses to render the widget:
- ChatGPT Apps - Renders using the `window.openai` host bridge
- MCP Apps - Renders using the `ui/*` JSON-RPC host bridge
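A widget shipped for both protocols has to decide at runtime which bridge the host injected. A minimal sketch with the detection reduced to two boolean inputs; how each flag is actually obtained is an assumption (e.g. checking whether `window.openai` is defined, or whether a `ui/*` handshake succeeded):

```typescript
type Bridge = "chatgpt" | "mcp";

// Pick the host bridge to talk to. The precedence order here is an
// illustrative choice, not mandated by either protocol.
function pickBridge(hasWindowOpenai: boolean, hasUiRpc: boolean): Bridge | null {
  if (hasWindowOpenai) return "chatgpt"; // window.openai host bridge
  if (hasUiRpc) return "mcp";            // ui/* JSON-RPC host bridge
  return null;                           // not running inside a host
}
```

Flipping the protocol toggle in the App Builder exercises both branches against the same widget.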
Display Modes
The App Builder supports all three display modes for both ChatGPT Apps and MCP Apps:
- Inline (default) - Widget renders within the chat message flow
- Picture-in-Picture - Widget floats at the top of the screen, staying visible while scrolling
- Fullscreen - Widget expands to fill the entire viewport for immersive experiences
Debugging Tools
Each tool result in the chat thread has a row of icons in its header. Click any icon to toggle the corresponding debug panel below the widget.
Data
Inspect the raw tool input, output, and error details for each tool call.
Widget State (ChatGPT Apps only)
View the current `widgetState` object and see when it was last updated. This is the state your widget sets via `window.openai.setWidgetState` that gets passed back to the model.
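A minimal sketch of persisting state, assuming the `window.openai.setWidgetState` call named above; the state shape and the guard are illustrative:

```typescript
// Any serializable object works as widget state; keep it small,
// since the host passes it back to the model.
interface WidgetState {
  selectedId: string | null;
  lastUpdated: string;
}

function buildState(selectedId: string | null): WidgetState {
  return { selectedId, lastUpdated: new Date().toISOString() };
}

// Inside the widget sandbox, window === globalThis, so this reaches
// window.openai when the ChatGPT Apps bridge is present.
const host: { setWidgetState(s: unknown): void } | undefined =
  (globalThis as any).openai;
if (host) host.setWidgetState(buildState("item-42")); // "item-42" is hypothetical
```

After a call like this, the Widget State panel shows the new object and its update time.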
Model Context (MCP Apps only)
View the context your widget has sent back to the model via `ui/update-model-context`. This panel only appears when the widget has set model context.
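On the wire this is a JSON-RPC message from widget to host. Only the `ui/update-model-context` method name comes from this page; the params shape below is an assumption for illustration:

```typescript
// Illustrative ui/update-model-context notification. The method name
// is from the docs; the params field names are assumed.
const notification = {
  jsonrpc: "2.0" as const,
  method: "ui/update-model-context",
  params: {
    content: "User selected the 3-bedroom listing on Oak St.", // hypothetical
  },
};

console.log(JSON.stringify(notification));
```

Whatever the widget sends here is what the Model Context panel displays for that tool call.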
CSP Debugging
When a widget violates CSP rules in strict mode, you’ll see a badge showing the number of blocked requests. The CSP debug tab shows:
- Suggested fix - Copyable JSON snippet to add to your `openai/widgetCSP` or `ui.csp` field
- Blocked requests - List of CSP violations with the directive and source that was blocked
- Declared domains - The connect, resource, frame, and base URI domains your widget currently declares
Save View
Save a snapshot of the tool execution to the Views tab for later browsing, editing, and sharing.
JSON-RPC Logging
All communication between the inspector and your MCP server is logged in real time. The logger panel is embedded at the bottom of the left panel, below the tools list. Each log entry shows the direction (client-to-server or server-to-client), the method name, and a timestamp. Logged messages include:
- Tool invocation requests and responses
- `window.openai` API calls from your widget
- Widget state updates
- JSON-RPC message types for MCP Apps (`tools/call`, `ui/initialize`, `ui/message`, etc.)
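For example, a `tools/call` request shows up in the logger as a client-to-server entry shaped like this; the `name`/`arguments` params follow the MCP tools convention, while the tool name and arguments are hypothetical:

```typescript
// Illustrative MCP tools/call request as the logger would capture it.
const toolCall = {
  jsonrpc: "2.0" as const,
  id: 7,
  method: "tools/call",
  params: {
    name: "show_weather",          // hypothetical tool
    arguments: { city: "Berlin" }, // hypothetical input
  },
};

console.log(JSON.stringify(toolCall, null, 2));
```

The matching server-to-client entry carries the same `id` with the tool result.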
Chat Controls
Model Selector
Choose which LLM model to use for the chat conversation. The selector supports models from multiple providers, including Anthropic and OpenAI. You can also bring your own API key to use custom providers.
System Prompt & Temperature
Configure the system prompt and temperature for the LLM. Click the settings icon to open a dialog where you can edit the system prompt text and adjust the temperature slider.
Tool Approval
Toggle tool approval to require manual confirmation before the LLM executes any tool call. When enabled, each tool invocation pauses and waits for your approval before running.
Traces
Every App Builder conversation can be replayed as a Trace — a timestamped timeline of user turns, agent turns, and tool calls, with per-step and total latency and token counts. Expand any step to inspect its raw input and output payloads. Toggle between Chat, Trace, and Raw views at the top of the panel without leaving the conversation.
X-Ray
X-Ray lets you inspect the exact payload that would be sent to the AI model. This is useful for debugging tool schemas, verifying system prompts, and understanding what the LLM actually sees when processing a conversation. Click the scan-search icon in the chat input toolbar to open X-Ray. The chat thread is replaced with a read-only JSON viewer showing three top-level fields:
- `system` - The full system prompt, including appended skill-tool instructions
- `tools` - A map of every available tool with its name, description, and JSON Schema input definition
- `messages` - The current conversation history as it would be sent to the model
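Sketched as a type, the viewer's top level looks like this; the three field names (`system`, `tools`, `messages`) match the fields described above, while the inner value shapes are assumptions:

```typescript
// Illustrative shape of the X-Ray payload. Only the three top-level
// field names come from the docs; the value types are assumed.
interface XRayPayload {
  system: string;
  tools: Record<string, { description: string; inputSchema: object }>;
  messages: Array<{ role: "user" | "assistant" | "tool"; content: string }>;
}

const example: XRayPayload = {
  system: "You are a helpful assistant.",
  tools: { show_weather: { description: "Render weather", inputSchema: {} } },
  messages: [{ role: "user", content: "What's the weather in Berlin?" }],
};
```

Comparing this view before and after a tool call is a quick way to verify that schemas and prompts reach the model as intended.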

