Frequently Asked Questions

Find answers to common questions about MCP Trail and MCP security.

MCP Basics

What is the Model Context Protocol (MCP)?

MCP (Model Context Protocol) is an open protocol that enables AI assistants to connect to tools and data sources. It allows AI models to invoke tools, access databases, read/write files, and interact with external systems through a standardized JSON-RPC interface.
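On the wire, a tool invocation is an ordinary JSON-RPC 2.0 request. A minimal sketch in Python; the tool name `read_file` and its arguments are illustrative, not taken from any particular server:

```python
import json

# Hypothetical tools/call request: the client asks the server to
# invoke a tool named "read_file" with one argument.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "/tmp/notes.txt"},
    },
}
wire = json.dumps(request)  # what actually travels over the transport
```

Every request, response, and notification in MCP follows this JSON-RPC envelope, which is what makes protocol-aware inspection possible.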

Why do I need security for MCP?

AI assistants can invoke tools with minimal visibility. A misconfigured MCP server can exfiltrate secrets, run destructive commands, or access sensitive data. MCP Trail provides the visibility and control your security team needs.

Is MCP Trail an MCP firewall?

Many teams describe MCP Trail as an MCP firewall: an application-layer control point that inspects MCP JSON-RPC traffic, applies allow/deny rules and policies, and logs outcomes before requests reach your upstream servers. It is not a traditional network firewall; it enforces Model Context Protocol semantics on the wire, which is the role the Guardian proxy serves. Combine it with network segmentation and safe upstream design for defense in depth.

What's wrong with tool-name allowlists alone?

Risk lives in arguments, shell patterns, and payloads inside JSON—not just which tool is called. An allowlisted tool can still be used destructively. MCP Trail inspects arguments and applies shell safety heuristics.
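As a toy illustration of the gap, consider an allowlisted `run_command` tool (a hypothetical name): a name-only check passes a destructive call that argument inspection would catch. This is a simplified sketch, not MCP Trail's actual rule engine:

```python
import re

ALLOWED_TOOLS = {"run_command"}  # hypothetical tool-name allowlist

def allowlist_only(call: dict) -> bool:
    """Name-only check: any call to an allowed tool passes."""
    return call["name"] in ALLOWED_TOOLS

def with_argument_inspection(call: dict) -> bool:
    """Also look inside the arguments for destructive shell patterns."""
    if call["name"] not in ALLOWED_TOOLS:
        return False
    cmd = call["arguments"].get("command", "")
    # Toy heuristic: block recursive force-deletes even on an allowed tool.
    return not re.search(r"\brm\s+(-[a-z]*r[a-z]*f|-[a-z]*f[a-z]*r)\b", cmd)

destructive = {"name": "run_command",
               "arguments": {"command": "rm -rf /srv/data"}}
```

The name-only check approves `destructive`; the argument-aware check rejects it.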

Guardian Proxy

How does the Guardian proxy work?

You register your upstream MCP server URLs with MCP Trail. We provide a stable proxy endpoint with per-server bearer credentials. All JSON-RPC traffic flows through our Rust gateway instead of directly to your upstreams, enabling inspection, policy enforcement, and logging.
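A client-side sketch of what this looks like, using only Python's standard library; the proxy URL and access key below are placeholders, and your real values come from the dashboard:

```python
import json
import urllib.request

# Placeholder values: the real proxy URL and per-server access key
# are issued by the MCP Trail dashboard.
PROXY_URL = "https://guardian.example.com/mcp/my-server"
ACCESS_KEY = "mcpt_example_key"

body = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}).encode()
req = urllib.request.Request(
    PROXY_URL,
    data=body,
    headers={
        "Authorization": f"Bearer {ACCESS_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send the call through Guardian,
# which inspects it, applies policy, and forwards it upstream.
```

The client never learns the upstream address; it only ever holds the proxy URL and bearer key.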

What credentials do I provide to AI assistants?

End users receive the Guardian proxy URL and access key from the dashboard. They never see or directly connect to your internal MCP servers. This keeps your internal infrastructure private and secure.

Can I rotate credentials?

Yes. Rotate credentials from the dashboard at any time. If a key leaks, you can revoke it immediately without changing your upstream server configuration.

HTTP, npm & Docker

Can MCP Trail protect Docker or npm-based MCP servers?

Guardian protects whatever upstream URL you register. Dockerized servers typically expose HTTP to the proxy (or sit behind a bridge). npm/npx servers are often stdio in local development; teams usually front them with an HTTP JSON-RPC bridge so Guardian enforces policy on the wire. Verify transport support against your deployment and product docs.
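The bridging idea can be sketched in a few lines: take a JSON-RPC message from HTTP, write it to the stdio server's stdin as a newline-delimited line, and read one line back as the response. This is a minimal illustration (using `cat` as a stand-in echo process), not a production bridge:

```python
import json
import subprocess

def forward_jsonrpc(proc, request: dict) -> dict:
    """Send one JSON-RPC message to a stdio server as a newline-delimited
    line on stdin, then read a single line back as the response."""
    proc.stdin.write((json.dumps(request) + "\n").encode())
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

# Demo with `cat` standing in for a stdio MCP server: it echoes the
# request back unchanged, which is enough to show the round trip.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
req = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
resp = forward_jsonrpc(proc, req)
proc.stdin.close()
proc.wait()
```

A real bridge would add error handling, concurrency, and framing per the MCP transport spec; the point is only that stdio traffic becomes HTTP-addressable, so Guardian can sit in front of it.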

Is this only for HTTP MCP?

Enforcement happens on the path through Guardian (commonly HTTP to the upstream). Clients connect to Guardian, and remote HTTP MCP is first-class. npm and Docker are common ways to host the server that Guardian calls, not separate product tiers.

Human-in-the-Loop

What is Human-in-the-Loop?

HITL allows you to mark certain tools as "risky." When an AI assistant calls those tools, the request pauses in the dashboard until a human approver reviews it. Approvers can approve or reject before the call reaches your upstream MCP server.
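A simplified model of the flow, with hypothetical tool names and statuses (the real queue lives in the dashboard):

```python
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

RISKY_TOOLS = {"delete_records"}  # hypothetical "risky" marking
queue = []                        # held calls awaiting review

def intercept(call: dict) -> dict:
    """Hold calls to risky tools for human review; pass others through."""
    if call["name"] in RISKY_TOOLS:
        entry = {"call": call, "status": Status.PENDING}
        queue.append(entry)   # surfaces in the approval queue
        return entry          # forwarded upstream only if approved
    return {"call": call, "status": Status.APPROVED}

def review(entry: dict, approve: bool) -> dict:
    """An approver's decision resolves the held call."""
    entry["status"] = Status.APPROVED if approve else Status.REJECTED
    return entry
```

The key property: a risky call never reaches the upstream server while its entry is still pending.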

How do approvers get notified?

When HITL is configured with Slack notifications, approvers receive messages with approval links. They can approve or reject from Slack when your deployment supports it.

What's the HITL queue?

The HITL queue is an operational view in the dashboard showing pending approvals. It displays argument diffs and context suited for on-call teams making quick decisions.

DLP & Security

What does DLP scanning detect?

MCP Trail scans tool arguments for common secret and token patterns before traffic reaches your upstream MCP. You can configure monitor, block, or redact behavior per policy.
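A toy scanner showing the monitor/block/redact idea; the two patterns below are illustrative only, and real detectors cover far more formats:

```python
import re

# Illustrative detectors only (not MCP Trail's actual pattern set).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9._-]{20,}\b"),
}

def scan(text: str, mode: str = "monitor"):
    """Return (possibly redacted) text plus a list of finding names."""
    findings = [name for name, pat in SECRET_PATTERNS.items()
                if pat.search(text)]
    if mode == "block" and findings:
        raise ValueError(f"blocked by DLP policy: {findings}")
    if mode == "redact":
        for pat in SECRET_PATTERNS.values():
            text = pat.sub("[REDACTED]", text)
    return text, findings
```

In monitor mode the call proceeds but the finding is logged; block rejects it; redact rewrites the payload before it continues.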

Are responses also scanned?

Responses from your MCP servers can be subject to DLP rules where the product supports it. Configure per-tool whether you want to monitor, block, or redact sensitive data in responses.

What are shell safety heuristics?

MCP Trail analyzes shell-class tool calls for dangerous patterns: recursive deletes, destructive redirects, pipe-to-shell, and similar. This reduces risk from prompt injection and typo-driven outages.
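A minimal sketch of pattern-based heuristics for the classes listed above; these regexes are illustrative, and a production gateway uses far more robust analysis than simple pattern matching:

```python
import re

# Toy rules, one per risk class named above (illustrative only).
DANGEROUS = [
    (r"\brm\s+-[a-zA-Z]*r[a-zA-Z]*f|\brm\s+-[a-zA-Z]*f[a-zA-Z]*r",
     "recursive delete"),
    (r">\s*/dev/sd[a-z]\b", "destructive redirect"),
    (r"curl[^|]*\|\s*(ba)?sh|wget[^|]*\|\s*(ba)?sh", "pipe-to-shell"),
]

def shell_risk(command: str) -> list[str]:
    """Return the labels of every dangerous pattern the command matches."""
    return [label for pat, label in DANGEROUS if re.search(pat, command)]
```

A matched label can then feed a block decision or a HITL hold rather than letting the call through silently.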

Limits, Budgets & Quotas

What are rate limits?

Rate limits throttle abusive clients by limiting requests per time window. Configure per-server or globally to prevent a noisy client from overwhelming your MCP infrastructure.

What are payload limits?

Payload limits cap oversized JSON requests (for example, ~4 MiB at the gateway when configured that way). This reduces the risk that excessively large arguments cause memory pressure or timeouts.
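The check itself is simple; this sketch assumes a ~4 MiB cap and a JSON-RPC "Invalid Request" error, both illustrative choices rather than documented gateway behavior:

```python
MAX_PAYLOAD = 4 * 1024 * 1024  # ~4 MiB, an illustrative cap

def check_payload(raw: bytes):
    """Return a JSON-RPC error for oversized bodies, or None to proceed."""
    if len(raw) > MAX_PAYLOAD:
        # -32600 ("Invalid Request") is one reasonable rejection code.
        return {"jsonrpc": "2.0", "id": None,
                "error": {"code": -32600, "message": "payload too large"}}
    return None
```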

How do credit budgets and per-server quotas work?

Commercial deployments may meter usage per Guardian server and/or with a pooled org quota. Exact counting—session handshakes, tools/list, tools/call, resources, HITL holds, scans—depends on how your workspace and contract are configured. Check the dashboard or contact us for current terms.

Can I export audit or configuration data?

On paid plans, you can export audit logs from the dashboard to PDF and CSV for reviews, tickets, and compliance evidence. The free (Starter) tier does not include audit export. MCP Trail does not offer general import/export of policies or server configuration—those are managed in the product UI.

What MCP Trail Is Not

  • Not a replacement for security training — Users still need to understand safe AI usage practices.
  • Not a substitute for vendor diligence or org-wide data policy — Contracts, data classification, and governance processes still apply.
  • Does not automatically make every AI app safe — Admins must still choose which servers and tools are allowed and who may use them.
  • Free playground is best-effort — The free MCP Playground may not work with stream-only transports or servers requiring auth before listing tools.

Still have questions?

Our team is here to help. Contact us and we'll get back to you as soon as possible.

Contact Support