Understanding MCP, MCPB, transports, server types, and hosts
This page explains the foundational concepts you need to understand how tool.store works. If you just want to install and use tools, you can skim this and refer back when you need more detail. If you plan to build tools, reading this thoroughly will help you make better design decisions.
MCP is a protocol that allows AI applications to communicate with external services. Anthropic developed it so that applications like Claude Desktop could extend their capabilities without building every feature directly into the application.
The protocol follows a client-server model. The MCP client lives inside the AI application. The MCP server is a separate process that provides tools, prompts, or resources. When the AI needs to use a tool, the client sends a request to the server, the server executes the function, and the server sends back the result.
MCP uses JSON-RPC 2.0 for message formatting. Each request has a method name and parameters. Each response has a result or an error. The protocol defines standard methods like tools/list to enumerate available tools and tools/call to invoke a tool.
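For example, a tools/call exchange might look like the following (simplified and illustrative; consult the MCP specification for the authoritative message shapes):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "/tmp/notes.txt" }
  }
}
```

The server executes the tool and replies with a result keyed to the same request id:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "contents of the file" }]
  }
}
```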
The separation between client and server means you can write an MCP server in any language. The server just needs to speak the protocol over the configured transport. This flexibility allows tool authors to use whatever runtime makes sense for their use case.
MCPB is a packaging format for MCP servers. It solves the distribution problem. Without a standard package format, sharing an MCP server means telling people how to clone a repository, install dependencies, and configure their MCP host manually. MCPB bundles everything into a single file that the CLI can install with one command.
An MCPB file is a zip archive containing:
- manifest.json - metadata and configuration describing how to run the server
- server/ directory - the server code
- dependencies - for Node, a node_modules/ directory; for Python, bundled packages

The manifest is the most important part. It tells the CLI everything it needs to know: what command starts the server, what environment variables to set, what configuration the user needs to provide, and what platforms the tool supports.
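As a sketch, a minimal manifest for a Node tool might look like this. The field names follow the descriptions in this guide (type and entry_point are required); check the MCPB specification for the authoritative schema:

```json
{
  "name": "file-tools",
  "version": "1.0.0",
  "description": "Read and write local files",
  "server": {
    "type": "node",
    "entry_point": "server/index.js"
  }
}
```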
The MCPB specification covers bundled stdio servers: the package contains code, and the host runs it as a local subprocess communicating over stdin/stdout. Both type and entry_point are required in the manifest, and there is no support for HTTP transport, remote servers, or orchestrator-managed resources.
MCPBX (.mcpbx) is a superset format for manifests that go beyond what MCPB supports. It adds HTTP transport, reference mode (pointing to external commands or remote URLs without bundling code), system configuration for host-managed resources like ports and directories, OAuth configuration, and template functions for constructing values like auth headers. Every valid .mcpb is a valid .mcpbx, but not vice versa. A .mcpbx file requires an mcpbx-aware host (like the tool CLI) to run.
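As an illustrative sketch (the field names here are hypothetical, not taken from the spec), a reference-mode manifest might point at an external command instead of bundling code:

```json
{
  "name": "local-wrapper",
  "version": "0.3.0",
  "server": {
    "type": "binary",
    "reference": {
      "command": "/usr/local/bin/some-mcp-server"
    }
  }
}
```

Because reference mode is not part of base MCPB, a manifest like this packs to .mcpbx.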
The CLI handles both formats transparently. When you run tool pack, it automatically picks .mcpb or .mcpbx based on what your manifest uses.
When you install a tool from tool.store, the CLI downloads the bundle, extracts it to ~/.tool/tools/, and reads the manifest to understand how to run it.
MCP servers can expose three types of capabilities.
Tools are functions the AI can call. Each tool has a name, a description, and an input schema. The name identifies the tool in requests. The description helps the AI understand when to use it. The input schema defines what parameters the tool accepts using JSON Schema.
Here is an example of what a tool declaration looks like conceptually:
Name: read_file
Description: Read the contents of a file at the given path
Input schema:
- path (string, required): The file path to read
- encoding (string, optional): Character encoding, defaults to utf-8

When the AI decides to use this tool, it sends a tools/call request with the tool name and arguments. The server reads the file and returns its contents.
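Expressed as the JSON Schema declaration an MCP server typically returns from tools/list, the same tool might look like this (shape simplified for illustration):

```json
{
  "name": "read_file",
  "description": "Read the contents of a file at the given path",
  "inputSchema": {
    "type": "object",
    "properties": {
      "path": { "type": "string", "description": "The file path to read" },
      "encoding": { "type": "string", "description": "Character encoding", "default": "utf-8" }
    },
    "required": ["path"]
  }
}
```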
Prompts are pre-written templates that guide the AI through specific tasks. A prompt might include instructions for code review, documentation generation, or data analysis. Prompts can have arguments that get substituted into the template text.
Resources are static data that the AI can read. A resource might be a configuration schema, API documentation, or reference data. Resources have URIs that identify them and MIME types that describe their format.
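Conceptually, prompt and resource declarations might look like the following sketch (shapes simplified for illustration):

```json
{
  "prompts": [
    {
      "name": "code_review",
      "description": "Review a diff for common issues",
      "arguments": [{ "name": "diff", "required": true }]
    }
  ],
  "resources": [
    {
      "uri": "docs://api/reference",
      "name": "API reference",
      "mimeType": "text/markdown"
    }
  ]
}
```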
Most tools on tool.store primarily expose tools (the callable functions), but some also include prompts or resources. The manifest declares what the server provides so the CLI can display this information before you install.
MCP supports two transports for communication between client and server: stdio and HTTP.
Stdio transport is the default and most common. The MCP client spawns the server as a subprocess and communicates through standard input and output. The client writes JSON-RPC messages to the server's stdin, and the server writes responses to stdout.
Stdio works well for local tools. The server runs on the same machine as the AI application, starts when needed, and terminates when the application closes. The client has full control over the server lifecycle.
HTTP transport allows the server to run remotely or as a long-lived service. The client sends JSON-RPC messages as HTTP POST requests to a URL. This enables use cases like cloud-hosted tools, shared tool servers, and tools that need to maintain persistent state.
HTTP transport introduces more complexity. The server needs to be running before the client connects. Authentication and network configuration come into play. But for some use cases, the flexibility of HTTP transport outweighs these costs.
The manifest specifies which transport a tool uses. Most tools use stdio. If a tool uses HTTP, the manifest includes a URL field instead of (or in addition to) a command field. HTTP transport is an MCPBX extension — manifests that use it produce .mcpbx bundles.
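A sketch of what an HTTP tool's server section might look like (the transport and url field names are illustrative):

```json
{
  "server": {
    "transport": "http",
    "url": "https://tools.example.com/mcp"
  }
}
```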
The server type indicates what runtime the tool requires. The manifest declares one of three server types:
Node servers are JavaScript applications that run on Node.js. The CLI expects a JavaScript entry point like server/index.js and runs it with the node command. Node tools typically include a node_modules/ directory with their dependencies.
Python servers are Python applications. The CLI expects a Python entry point like server/main.py and runs it with python or a package manager command like uv run. Python tools might include bundled packages or expect the user to have certain packages installed.
Binary servers are pre-compiled executables. The CLI runs the binary directly without any runtime. Binary tools are self-contained but need separate builds for each platform and architecture combination.
The server type affects how the CLI installs and runs the tool. For Node tools, the CLI might run npm install after extraction. For Python tools, it might run uv sync or pip install. For binary tools, it just makes sure the executable has the right permissions.
Many tools need configuration from the user. An API client needs an API key. A database tool needs connection credentials. A filesystem tool might need to know which directories it can access.
The manifest declares these configuration requirements in a user_config section. Each field has a type, a title, an optional description, and flags indicating whether it is required or sensitive.
Supported types:
| Type | Description |
|---|---|
| string | Text input, optionally masked if sensitive |
| number | Numeric input with optional min/max constraints |
| boolean | True or false |
| directory | A path to a directory |
| file | A path to a file |
When you run tool config set, the CLI reads these declarations and prompts for values. It stores the configuration in ~/.tool/config/ and injects the values when starting the server.
Sensitive fields like API keys get special treatment. The CLI masks them when displaying configuration and stores them securely.
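Putting these pieces together, a user_config section might look like this sketch (field names follow the descriptions above; the exact schema is defined by the MCPB spec):

```json
{
  "user_config": {
    "api_key": {
      "type": "string",
      "title": "API key",
      "description": "Key used to authenticate with the upstream service",
      "required": true,
      "sensitive": true
    },
    "workspace": {
      "type": "directory",
      "title": "Workspace directory",
      "required": false
    }
  }
}
```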
System configuration differs from user configuration. Users set user configuration explicitly. The CLI or host sets system configuration automatically.
System configuration handles things like port allocation and directory management. Consider how a bundled HTTP server actually runs: the host spawns it as a child process, just like a stdio server. The user does not launch it manually. And the host might spawn multiple HTTP servers at the same time. If each server picks its own default port, or if users set port values through user_config, conflicts are inevitable. The host is the only entity that knows what is already running and which ports are free. It needs to be the one allocating ports. The same applies to managed directories — the host knows where to put persistent data and temp files consistently across installations.
Supported system configuration types:
| Type | Description |
|---|---|
| port | A network port, automatically allocated from available ports |
| temp_directory | A temporary directory that gets cleaned up |
| data_directory | A persistent directory for storing data |
The manifest references system configuration values using template variables like ${system_config.port}. When the CLI starts the server, it substitutes the allocated values.
System configuration is an MCPBX extension. The base MCPB spec only supports user_config. Manifests that use system_config produce .mcpbx bundles.
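As an illustrative sketch, a manifest might declare a port and a temp directory, then reference the allocated values through template variables (the env injection shape shown here is an assumption):

```json
{
  "system_config": {
    "port": { "type": "port" },
    "cache": { "type": "temp_directory" }
  },
  "server": {
    "type": "node",
    "entry_point": "server/index.js",
    "env": {
      "PORT": "${system_config.port}",
      "CACHE_DIR": "${system_config.cache}"
    }
  }
}
```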
An MCP host is an application that acts as an MCP client. The host manages connections to MCP servers and lets the AI use the tools they provide.
tool.store supports ten hosts:
Claude Desktop is Anthropic's desktop application for Claude. It stores MCP configuration in a JSON file whose location depends on your operating system.
Cursor is an AI-powered code editor. It has built-in MCP support and stores configuration in its settings directory.
VS Code with the Claude extension supports MCP tools. The extension reads MCP configuration from VS Code's settings.
Claude Code is Anthropic's command-line interface for Claude. It reads MCP configuration from a settings file in your home directory.
Codex is OpenAI's CLI agent. Windsurf is an AI-powered development environment. Zed is a high-performance code editor with AI features. Gemini CLI is Google's command-line AI interface. Kiro is AWS's AI-powered IDE. Roo Code is an AI coding assistant.
Each host has slightly different configuration formats and file locations. The tool host add command knows how to write the correct format for each host. It reads the tool's manifest, generates the appropriate configuration, and writes it to the right file.
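For Claude Desktop, for example, the generated entry might resemble the following mcpServers block (the tool name and install path shown are illustrative):

```json
{
  "mcpServers": {
    "file-tools": {
      "command": "node",
      "args": ["/Users/me/.tool/tools/acme/file-tools/server/index.js"]
    }
  }
}
```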
After modifying host configuration, you need to restart the host for changes to take effect. Claude Desktop needs to be quit and relaunched. Cursor and VS Code typically need their MCP-related extensions reloaded.
Tools on tool.store have a reference format that identifies them uniquely:
namespace/name@version

The namespace is usually the author's username or organization. The name identifies the tool within that namespace. The version is a semantic version number.
When you install a tool, you can specify different levels of precision:
| Reference | Installs |
|---|---|
namespace/name | Latest version |
namespace/name@1 | Latest 1.x.x version |
namespace/name@1.2 | Latest 1.2.x version |
namespace/name@1.2.3 | Exactly version 1.2.3 |
The CLI resolves these references against the registry and downloads the appropriate version. Local tools (installed from a path rather than the registry) do not have namespaces.
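The resolution rule in the table above can be sketched in a few lines of Python. This is a simplified model (plain X.Y.Z versions, no pre-release tags), not the CLI's actual implementation:

```python
def resolve(reference_version, available):
    """Pick the highest available version matching a prefix like '1' or '1.2'.

    reference_version: '' (latest), '1', '1.2', or '1.2.3'
    available: list of 'X.Y.Z' version strings
    """
    prefix = tuple(int(p) for p in reference_version.split(".")) if reference_version else ()
    candidates = [
        v for v in available
        if tuple(int(p) for p in v.split("."))[:len(prefix)] == prefix
    ]
    if not candidates:
        raise ValueError(f"no version matches {reference_version!r}")
    # Compare numerically, not lexically, so 1.10.0 ranks above 1.9.0
    return max(candidates, key=lambda v: tuple(int(p) for p in v.split(".")))

versions = ["1.2.3", "1.2.10", "1.9.0", "1.10.0", "2.0.0"]
print(resolve("", versions))     # 2.0.0  (latest overall)
print(resolve("1", versions))    # 1.10.0 (latest 1.x.x)
print(resolve("1.2", versions))  # 1.2.10 (latest 1.2.x)
```

Note the numeric tuple comparison: a naive string comparison would incorrectly rank 1.9.0 above 1.10.0.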
The CLI stores everything under ~/.tool/:
~/.tool/
├── auth/ # Registry authentication tokens
├── backups/ # Host config backups before modifications
├── config/ # Tool configuration files
├── credentials/ # OAuth and other credentials
├── data/ # Persistent data directories for tools
├── hosts/ # Host-related metadata
├── secrets/ # Encryption keys for sensitive data
└── tools/ # Installed tools

Each installed tool lives in its own directory under tools/. The directory name matches the tool reference. Configuration for each tool lives in a JSON file under config/.
When a tool needs a data directory, the CLI creates one under data/ and passes the path to the tool via system configuration.
Now that you understand the core concepts, you can: