Securing, Managing, and Scaling LLM Tools with MCP
- Security: By default, MCP lacks authentication, which poses security risks. Mitigation strategies include implementing a `.well-known/mcp-auth` endpoint, leveraging OAuth2 providers (Auth0, Clerk, Supabase Auth) or signed JWTs, and using mutual TLS with client certificates for internal tools; a JWT-gating sketch follows this list.
- Risk Management: MCP treats all tools equally, regardless of risk. Proposed solutions involve defining a `permissions` field in tool manifests (e.g., `read`, `write`, `exec`, `dangerous`), requiring user confirmation for high-risk operations, and sandboxing sensitive actions in containers (Docker, Podman); see the confirmation-gate sketch after this list.
- Cost Control: Unrestricted tool outputs can drive up token costs. Recommendations include enforcing a `max_output_size`, supporting `stream_output: true`, compressing outputs (Zstd, Brotli), and estimating token costs up front with `tiktoken` or `gpt-tokenizer`; a token-capping sketch appears after this list.
- Data Handling: MCP's reliance on plaintext exchanges is fragile. The suggested fix is to define expected inputs and outputs with JSON Schema in a `schema.json` file, validate at runtime with `ajv` (Node.js) or `pydantic` (Python), and include example payloads and error formats in the manifest; an `ajv` validation sketch follows this list.
- Prompt Engineering: Different LLMs require different prompt scaffolding, which MCP doesn't account for. The proposed solution is to attach prompt templates per model (e.g., `prompt.gpt`, `prompt.claude`), store them in a versioned registry (GitHub, Supabase), and use snapshot tests to ensure consistent behavior; a template-plus-snapshot-test sketch follows this list.
- Developer Experience: The current do-it-yourself developer experience hinders adoption. Improvements include scaffolding new tools with `create-mcp-tool` (including schema validation and auth handling), adding CLI support (e.g., `mcp-dev run`, `mcp-test`), and automating validation with GitHub Actions (linting manifests, checking schemas, verifying auth flows); a CI manifest-lint sketch closes the examples below.
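
For the security item, here is a minimal sketch of gating tool calls behind signed JWTs on an Express-based tool server. The `.well-known/mcp-auth` path comes from the proposal above, but the response shape, the `MCP_AUTH_PUBLIC_KEY` variable, and the issuer URL are illustrative assumptions, not part of the MCP spec.

```ts
// Sketch: bearer-token auth for an MCP tool server (assumes `express` and `jsonwebtoken`).
import express from "express";
import jwt from "jsonwebtoken";

const app = express();
const PUBLIC_KEY = process.env.MCP_AUTH_PUBLIC_KEY ?? ""; // hypothetical env variable

// Advertise auth requirements at the proposed well-known path (response shape is illustrative).
app.get("/.well-known/mcp-auth", (_req, res) => {
  res.json({ type: "jwt", algorithms: ["RS256"], issuer: "https://auth.example.com" });
});

// Reject any tool call that lacks a valid, signed token.
app.use((req, res, next) => {
  const token = req.headers.authorization?.replace(/^Bearer /, "");
  if (!token) {
    res.status(401).json({ error: "missing bearer token" });
    return;
  }
  try {
    jwt.verify(token, PUBLIC_KEY, { algorithms: ["RS256"] });
    next();
  } catch {
    res.status(403).json({ error: "invalid or expired token" });
  }
});
```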
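
For the risk-management item, a sketch of a `permissions` field on tool manifests plus a confirmation gate before high-risk calls. The manifest shape, the `guardedInvoke` helper, and the example tool name are assumptions for illustration; MCP itself does not define them.

```ts
// Sketch: risk tiers in a tool manifest and a user-confirmation gate for dangerous calls.
type Permission = "read" | "write" | "exec" | "dangerous";

interface ToolManifest {
  name: string;
  description: string;
  permissions: Permission[];
}

// Hypothetical high-risk tool declared with explicit permissions.
const deleteRepoTool: ToolManifest = {
  name: "delete_repository",
  description: "Permanently deletes a Git repository",
  permissions: ["write", "dangerous"],
};

// Require explicit user confirmation before any exec/dangerous call is dispatched.
async function guardedInvoke(
  tool: ToolManifest,
  args: unknown,
  confirm: (msg: string) => Promise<boolean>,
  invoke: (name: string, args: unknown) => Promise<unknown>,
): Promise<unknown> {
  const highRisk = tool.permissions.some((p) => p === "exec" || p === "dangerous");
  if (highRisk) {
    const ok = await confirm(`Allow "${tool.name}"? It is marked: ${tool.permissions.join(", ")}.`);
    if (!ok) throw new Error(`User declined high-risk tool: ${tool.name}`);
  }
  return invoke(tool.name, args);
}
```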
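
For the cost-control item, a sketch of capping a tool result by estimated token count using the `gpt-tokenizer` package. Treating `max_output_size` as a per-call token budget, and the budget value itself, are assumptions for illustration.

```ts
// Sketch: estimate token count and truncate oversized tool output before it reaches the model.
import { encode, decode } from "gpt-tokenizer";

const MAX_OUTPUT_TOKENS = 2000; // illustrative per-call budget

function capToolOutput(raw: string, maxTokens = MAX_OUTPUT_TOKENS) {
  const tokens = encode(raw);
  if (tokens.length <= maxTokens) {
    return { text: raw, tokens: tokens.length, truncated: false };
  }
  // Keep the first `maxTokens` tokens and flag the truncation so the model knows.
  const text = decode(tokens.slice(0, maxTokens)) + "\n[output truncated]";
  return { text, tokens: maxTokens, truncated: true };
}
```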
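
For the data-handling item, a sketch of runtime input validation with `ajv`. The schema contents and the `parseToolInput` helper are illustrative; in practice the schema would be loaded from the tool's `schema.json`.

```ts
// Sketch: validate tool inputs against a JSON Schema at runtime with ajv.
import Ajv from "ajv";

const ajv = new Ajv({ allErrors: true });

// Illustrative schema; a real tool would read this from its schema.json.
const inputSchema = {
  type: "object",
  properties: {
    query: { type: "string", minLength: 1 },
    limit: { type: "integer", minimum: 1, maximum: 100 },
  },
  required: ["query"],
  additionalProperties: false,
};

const validateInput = ajv.compile(inputSchema);

export function parseToolInput(payload: unknown): { query: string; limit?: number } {
  if (!validateInput(payload)) {
    // Surface a structured error the client can display or the model can repair from.
    throw new Error(`Invalid tool input: ${ajv.errorsText(validateInput.errors)}`);
  }
  return payload as { query: string; limit?: number };
}
```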
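
For the prompt-engineering item, a sketch of selecting a per-model template and snapshot-testing the rendered prompt with Vitest. The `prompts/` layout, the `{{variable}}` placeholder syntax, and the `renderPrompt` helper are assumptions for illustration.

```ts
// Sketch: per-model prompt templates plus a snapshot test to catch accidental drift.
import { readFileSync } from "node:fs";
import { describe, expect, it } from "vitest";

type Model = "gpt" | "claude";

function renderPrompt(model: Model, vars: Record<string, string>): string {
  // Templates such as prompt.gpt / prompt.claude live in a versioned registry or repo.
  const template = readFileSync(`./prompts/prompt.${model}`, "utf8");
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => vars[key] ?? "");
}

describe("prompt templates", () => {
  it.each(["gpt", "claude"] as Model[])("renders a stable %s prompt", (model) => {
    const rendered = renderPrompt(model, { tool: "search_docs", query: "rate limits" });
    // Snapshot tests fail loudly when a template change alters model-facing text.
    expect(rendered).toMatchSnapshot();
  });
});
```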
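
Finally, for the developer-experience item, a sketch of a manifest-lint script a GitHub Actions job could run on every pull request. The `tools/*/manifest.json` layout and the required fields are assumed project conventions, not MCP requirements.

```ts
// Sketch: CI script that checks every tool manifest exists and has required fields.
import { existsSync, readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const REQUIRED_FIELDS = ["name", "description", "permissions"]; // assumed convention
let failures = 0;

for (const entry of readdirSync("tools", { withFileTypes: true })) {
  if (!entry.isDirectory()) continue;
  const manifestPath = join("tools", entry.name, "manifest.json");
  if (!existsSync(manifestPath)) {
    console.error(`FAIL ${entry.name}: missing manifest.json`);
    failures++;
    continue;
  }
  const manifest = JSON.parse(readFileSync(manifestPath, "utf8"));
  for (const field of REQUIRED_FIELDS) {
    if (!(field in manifest)) {
      console.error(`FAIL ${entry.name}: manifest missing "${field}"`);
      failures++;
    }
  }
}

// A non-zero exit fails the Actions job and blocks the merge.
process.exit(failures > 0 ? 1 : 0);
```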