5 Best Practices for Building MCP Servers
Building MCP Servers has become a mainstream gateway to externalize product capabilities to AI applications and AI-driven workflows. If you’re new to building MCP Servers, how do you follow good practices and conventions to make the most out of your MCP Server? I’ve compiled a list of best practices out of my own experience of building a few of them in Node.js.
If you followed prior articles on the Snyk resources hub, you’ve likely seen us review 7 MCP Servers for Product Managers, educate developers on how to add MCP Servers to Cursor, and raise awareness about vibe coding security risks that result in Node.js application security vulnerabilities.
This guide curates some of the nuances and building-block patterns that make MCP Servers perform well, and touches on developer experience for MCP Clients and debugging capabilities.
MCP Server best practice 1: Stick to tool naming standards
MCP Servers expose tools by name and description and make them available in a tools list in response to consuming MCP Clients.
Here is an example of a simple Tool definition that sets the Tool name to getNpmPackageInfo:
import { execSync } from "node:child_process";
import { z } from "zod";

// `server` is an initialized McpServer instance from the
// @modelcontextprotocol/sdk package (setup omitted for brevity)
server.tool(
  "getNpmPackageInfo",
  "Get information about an npm package",
  {
    packageName: z.string(),
  },
  async ({ packageName }) => {
    // WARNING: interpolating user input into a shell command
    // is vulnerable to command injection (see disclaimer below)
    const output = execSync(`npm view ${packageName}`, {
      encoding: "utf-8",
    });
    return {
      content: [
        {
          type: "text",
          text: output,
        },
      ],
    };
  }
);
Security disclaimer: The above example code for an MCP Server is deliberately vulnerable to command injection as part of an educational article on security vulnerabilities in MCP Servers.
You could have chosen to name the tool one of the following, as a few options:
getNpmPackageInfo (what we used above in the example)
get-Npm-Package-Info
get.Npm.Package.Info
get Npm Package Information
I’ve found that when you stray from the de facto tool naming conventions of using a dash (-), an underscore (_), or camelCase (getNpmPackageInfo), MCP Clients are unable to surface the Tool for use, and the end user experience is that the tool doesn’t get called.
My recommendation: Don’t use a space, don’t use a dot separator (.), and don’t use round or square brackets like ( or ) in the tool name. They confuse the model and disrupt MCP tool calling entirely. Always use a snake_case convention for the Tool name; GPT-4o tokenization understands that practice best. Alternatives are a dash separator or camelCase.
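As a quick guard, you can lint tool names before registering them. The following is an illustrative helper (not part of the MCP SDK) that accepts snake_case, camelCase, and dash-separated names while rejecting the spaces, dots, and brackets that break tool resolution:

```javascript
// Hypothetical lint helper: allow names that start with a letter and
// contain only letters, digits, underscores, and dashes; reject spaces,
// dots, and brackets that prevent MCP Clients from surfacing the tool
function isSafeToolName(name) {
  return /^[A-Za-z][A-Za-z0-9_-]*$/.test(name);
}

console.log(isSafeToolName("get_npm_package_info")); // true
console.log(isSafeToolName("getNpmPackageInfo"));    // true
console.log(isSafeToolName("get.Npm.Package.Info")); // false
console.log(isSafeToolName("get Npm Package Info")); // false
```

Running such a check at server startup fails fast on a bad name instead of shipping a tool that silently never gets called.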
LLM tokenization is essential for the tool name
Why this happens depends on the model used, the MCP Client implementation, and the MCP Hosts that leverage it, but the LLM tokenization process is one of the reasons.
For example, consider the following different cases in how OpenAI models like GPT-4o tokenize text:
nodeCrypto gets parsed as two tokens.
node.Crypto gets parsed as three tokens.
node_crypto gets parsed as two tokens.
npm_Utils gets parsed as three tokens.
npm_Get_Package_Info gets parsed as five tokens.
MCP Server best practice 2: MCP Server logging
How can you tell when the MCP Client-Server interaction doesn’t work well? Does the tool get called at all?
If you tried building an MCP Server, especially a simple one that relies on process STDIO (Standard Input/Output), you realized that console.log() doesn’t work well.
Logging to the console or terminal in MCP Servers that use STDIO isn’t straightforward, because that channel of communication is used to exchange information between the MCP Client and the MCP Server.
However, forgoing logging entirely is not an option. Effective logging is paramount for MCP Servers, providing essential visibility into the program dynamics for resolving tool calls and other processing done within the tool definition logic. You’ll likely find yourself needing to be efficient in troubleshooting MCP Servers and get visibility into crucial insights for AI workflow execution.
Thus, for MCP Servers that follow the process STDIO transport type per the MCP Specification, the easiest lift would be to practice file logging so that the output channel is a logfile and log messages are written to a designated file for subsequent analysis.
My recommendation: Use a logging library such as pino in Node.js; it streamlines the implementation. It’s a great choice, coming from long-time and trusted Node.js collaborators and Fastify ecosystem library authors.
Consider the following example usage of pino:
// import and initialize the logger with a file destination
import pino from "pino";
const logger = pino(pino.destination("/tmp/mcp-server.log"));

// log data
logger.info("Logger initialized");
This demonstration illustrates the use of pino to instantiate a logger that outputs logs to /tmp/mcp-server.log.
Log entries, such as logger initialization, can then be systematically recorded. This practice enables meticulous monitoring and visibility into an MCP Server’s program flow.
MCP Servers with an HTTP transport type can handle logging differently and aren’t covered in this best practice recommendation.
MCP Server best practice 3: Avoid “not found” response text
This recommendation is more about the response text and logic implementation for tool calls, and it is grounded in my admittedly limited experience. Still, my observations and experiments are consistent.
The premise is as follows: when you implement tool calls like “search,” avoid responding with a “not found” text. Instead, provide as much generalized data as possible for the LLM.
A practical use case of that was when I built an MCP Server for Node.js API documentation and implemented a nodeSearch tool. If users ask for a method-related API in the Node.js runtime, they might not find it at first, because the nodeSearch tool implementation finds Node.js core modules by name, not their actual methods from the underlying API.
Initially, the response I chose to return in those cases was Module <xyz> not found. <here is everything I have>, but I found that even if I provide the entirety of all modules and methods, the LLM will be steered by the “not found” text at the beginning of the response and won’t proceed to find the method in one of the supported Node.js core modules in the data I provided.
My recommendation: Do not return a “not found” text. Instead, provide any other data that would be relevant. Of course, this has caveats, because sometimes you can’t just return all data. For example, when a tool looks up users, returning all user records would be incorrect, insecure, and a breach of privacy.
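One way to apply this practice, sketched here with a tiny hypothetical module index (not the real nodeSearch implementation), is to return everything you know whenever a lookup misses, with no “not found” prefix:

```javascript
// Hypothetical in-memory index of a few Node.js core modules and methods
const MODULE_INDEX = {
  crypto: ["createHash", "randomBytes"],
  path: ["join", "resolve"],
  fs: ["readFile", "writeFile"],
};

function searchModules(query) {
  if (MODULE_INDEX[query]) {
    return `Methods in ${query}: ${MODULE_INDEX[query].join(", ")}`;
  }
  // No "not found" prefix: hand the LLM everything we know instead,
  // so it can match the query against module methods on its own
  return Object.entries(MODULE_INDEX)
    .map(([mod, methods]) => `${mod}: ${methods.join(", ")}`)
    .join("\n");
}

// "randomBytes" is a method, not a module name, so the lookup misses;
// the response still surfaces crypto and its methods for the LLM
console.log(searchModules("randomBytes"));
```

With this shape of response, a query for randomBytes still surfaces the crypto module’s method list, and the LLM can make the connection itself instead of stopping at a dead end.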
Interested to see the before and after of this practice in action? See below.
The following is an example of an IDE chat interaction that uses an MCP Server with a tool call returning a “not found” text. As you can see, it isn’t able to pick up the correct answer:

However, when we remove the # Module “color” not found line at the top of the text response and reply with all available Node.js core modules and their methods, the LLM can iterate through potential modules and find a better solution for us.
It works:

MCP Server best practice 4: Avoid vulnerable third-party components
We can’t forgo security in a best practices series from an AI security company like Snyk, can we?
For MCP Servers to be widely adopted within organizations and accepted by internal infrastructure teams, they must meet stringent security and compliance requirements. IT and product security teams are acutely aware of the risks of introducing potentially vulnerable dependencies into their workflows. This isn’t new and is part of the SBOM requirements made by Biden’s Executive Order following the SolarWinds attack fallout.
Avoiding vulnerable third-party components is critical. Any software that relies on third-party libraries or contains code with known vulnerabilities can become an entry point for malicious actors. An MCP Server, by its function, often has broad access and integration capabilities, making any vulnerability a significant risk.
Thus, ensuring your MCP Servers are free from vulnerabilities is not just a best practice; it’s necessary for their successful implementation.
My recommendation is to use Snyk, but how do you start securing MCP Servers, and where do you look?
Scan for vulnerable code: Snyk can analyze your codebase for security flaws and insecure code patterns, and provide code fixes in the IDE or other developer workflows.
Check for vulnerable dependencies: One of Snyk’s foundational strengths is securing open source software. Snyk scans the project's dependencies against a comprehensive database of known vulnerabilities and alerts you to any risks and which versions to upgrade.
Ensure compliance: Legal implications might be boring for developers, but they are of utmost importance to businesses. Snyk helps you, as a developer, to avoid legal liability issues by identifying vulnerabilities that violate security and licensing standards.
Here’s a practical demonstration of using Snyk to find vulnerable open source dependencies:

Snyk will also find vulnerable code in your own codebase, such as command injection, path traversal, and other types of insecure code you wrote, or maybe the LLM generated the vulnerable code for you. Regardless, Snyk will scan the code.
MCP Server best practice 5: Package the MCP Server as a Docker container
Building MCP Servers is beneficial, but deploying and managing them can introduce complexities on the end-user side.
Often, MCP Servers require a specific runtime environment, such as Python with uv or Node.js with npm. This dependency on a specific language environment can create hurdles for end users who consume these servers. They need to set up their environment precisely to run the MCP Server correctly, which can lead to inconsistencies, version conflicts, errors, and a complicated setup process.
One effective solution to this problem is packaging the MCP Server as a Docker container. Docker containers provide a standardized, isolated environment that includes all the necessary dependencies, libraries, and runtime components required for the MCP Server to function correctly. By distributing the MCP Server as a container image, users only need Docker installed to run it.
Docker even created a centralized hub of MCP Servers as container images for popular MCP Servers:

Docker acts as an acceptable and industry-accepted abstraction layer that packages applications and their dependencies into a portable container. This approach simplifies deployment, ensures consistency across different environments, and eliminates the "it works on my machine" problem.
Here are some benefits of offering Docker images for the MCP Servers you’ll build:
Consistency: Guarantees that the MCP Server runs the same way in development, testing, and production.
Isolation: Prevents conflicts between the MCP Server's dependencies and other applications on the host system.
Portability: Makes it easy to deploy the MCP Server on any system that supports Docker.
Simplified deployment: Reduces the steps required for end users to get the MCP Server up and running.
Resource management: Docker provides tools for managing resources like CPU, memory, and network, ensuring the MCP Server runs efficiently.
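A minimal Dockerfile for a Node.js MCP Server using the STDIO transport might look like the following sketch. The base image, file layout, and server.js entry point are illustrative assumptions; adjust them to your project:

```dockerfile
# Small, current LTS base image (illustrative choice)
FROM node:20-slim

WORKDIR /app

# Install only production dependencies for a smaller image
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the server source (entry point name is an assumption)
COPY . .

# STDIO transport: the client talks to the container's stdin/stdout
ENTRYPOINT ["node", "server.js"]
```

An MCP Client would then run it with something like docker run -i --rm my-mcp-server, where the -i flag keeps stdin open for the STDIO transport.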