In this section:
Building Interactive MCP Servers Experience on the Terminal using Python Fast Agent AI Framework
How do you get started building your own agentic, AI-enabled chat experiences from the command-line interface, right in the terminal? That’s where Fast Agent, an open source Python project, comes in.
You might have encountered other AI frameworks for building agentic workflows, such as CrewAI and LangChain, but Fast Agent brings a nicely written SDK with minimal abstractions and, perhaps the icing on the cake, some of the most complete MCP client support of any MCP application or library.
As far as MCP goes, Anthropic’s Model Context Protocol has boomed in adoption and its specification gets routine updates, and Fast Agent isn’t left behind: it already supports MCP client features like Sampling. All of this makes it a good foundation for building MCP-enhanced workflows.
Setting up a Python development environment with uv
Let’s get started with getting a local Python development project with Fast Agent.
First things first: putting together a modern Python development environment. This used to be a hassle, because Python historically shared globally installed pip packages, which caused version conflicts. On macOS and other environments, juggling Python interpreter runtimes (2.x vs 3.x) was also difficult and unfriendly.
But uv comes to the rescue! The uv project is a new Python package manager written in Rust. It is extremely fast and a joy to use, easing the management of different Python interpreter versions, project-scoped dependencies, and more.
Once you have uv installed, let’s make sure you have a working Python 3.x version. Type in:
uv python list
You should see a list of Python 3.x versions, with the right-hand column showing the path where each is installed on disk. If none are installed locally, install one as follows:
uv python install 3.13
Installing Fast Agent in a Python environment
Now that we have uv and a Python 3.x interpreter installed, we can get started with Fast Agent to build the AI MCP application.
Create a new directory for the project:
mkdir fast-agent-workflow
cd fast-agent-workflow
Initialize the uv package manager in this directory:
uv init
The uv init command sets up a new Python project with the relevant manifest files for package dependency resolution, using Python 3.x as the default interpreter.
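If you peek at the generated files, the pyproject.toml manifest will look roughly like this (the exact fields and the pinned requires-python version vary by uv version):

```toml
[project]
name = "fast-agent-workflow"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13"
dependencies = []
```

Dependencies added later (for example, via uv) are recorded in the dependencies array, keeping the project fully reproducible.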
Then we can create a new virtual environment (scoping installed packages to this project), activate it, and install Fast Agent:
uv venv
source .venv/bin/activate
uv pip install fast-agent-mcp
This will result in output such as:
Resolved 89 packages in 2.45s
Prepared 36 packages in 1.76s
Installed 89 packages in 236ms
+ a2a-types==0.1.0
+ aiohappyeyeballs==2.6.1
+ aiohttp==3.12.12
+ aiosignal==1.3.2
+ annotated-types==0.7.0
+ anthropic==0.53.0
+ anyio==4.9.0
+ attrs==25.3.0
+ azure-core==1.34.0
+ azure-identity==1.23.0
+ cachetools==5.5.2
+ certifi==2025.4.26
+ cffi==1.17.1
+ fast-agent-mcp==0.2.30
+ fastapi==0.115.12
+ mcp==1.9.3
+ openai==1.85.0
Next, let’s initialize the Fast Agent application file and configuration with the fast-agent executable that is now available in your shell. Run:
fast-agent quickstart state-transfer
This will create a state-transfer directory with the agent code files and the Fast Agent configuration file, along with a secrets file.
Side note: be careful not to commit those secrets to the repository or reveal them in any way. If you’re curious about the potential hazards of committing secrets, I wrote about the problems with secrets in environment variables. And if you use Snyk in the IDE (just download the extension), it will detect sensitive credentials and alert you.
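If you’d rather not keep keys in a file at all, a general-purpose pattern (not specific to Fast Agent) is to read them from the environment and fail fast when they are missing. The load_api_key helper below is a hypothetical sketch of that idea:

```python
import os


def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Read an LLM API key from the environment so it never lands in the repo."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before running the agent")
    return key
```

Failing fast with a clear error beats a confusing downstream authentication failure from the LLM provider.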
A Fast Agent MCP-enabled AI Chat in the Terminal
We can now turn our attention to actually building the AI application.
We’ll start by ensuring we have a working LLM integration. This means we need to grab an API key for one of the LLM providers. I’ll use OpenAI in this example.
Rename the file fastagent.secrets.yaml.example to fastagent.secrets.yaml, dropping the .example extension, and put your OpenAI API key in it.
Let’s update the Fast Agent application configuration, too. Open the file fastagent.config.yaml in the state-transfer directory and make the following changes to default_model and logger:
default_model: gpt-4.1-mini

# Logging and Console Configuration:
logger:
  level: "debug"
  type: "file"
  path: "/tmp/fast.jsonl"
  # Switch the progress display on or off
  progress_display: true
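With type: "file", Fast Agent writes its logs in JSON Lines format (one JSON object per line) to the configured path. A small stdlib-only sketch for inspecting that file later, assuming the /tmp/fast.jsonl path configured above:

```python
import json
from pathlib import Path


def read_jsonl(path: str) -> list:
    """Parse a JSON Lines file (e.g. /tmp/fast.jsonl), skipping blank lines."""
    lines = Path(path).read_text().splitlines()
    return [json.loads(line) for line in lines if line.strip()]
```

Parsing each line independently is what makes JSON Lines convenient for append-only logs: you can tail or filter entries without loading one giant JSON document.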
Ok, next we are going to focus on MCP Server configuration.
By the way, if you are starting to build out MCP Servers I’ve put together a list of 5 best practices for building MCP Servers that you might want to visit.
The current MCP Server configuration has a stub for agent_one. Remove that and redeclare the MCP Servers as follows:
mcp:
  servers:
    fetch:
      transport: "stdio"
      command: "uvx"
      args: ["mcp-server-fetch"]
    filesystem:
      transport: "stdio"
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
In the above configuration, we’re defining two new MCP Servers, fetch and filesystem, to be made available to our agents.
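Under the hood, the stdio transport means the MCP client launches each server command as a subprocess and exchanges JSON-RPC 2.0 messages over its stdin/stdout. A minimal sketch of what one such request looks like on the wire (jsonrpc_request is an illustrative helper, not part of Fast Agent):

```python
import json


def jsonrpc_request(method: str, params: dict, req_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 request, the wire format MCP uses over stdio."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    )


# e.g. asking an MCP server which tools it exposes
message = jsonrpc_request("tools/list", {})
```

Fast Agent handles all of this framing for you; the configuration above only needs to say which command to spawn and which transport to speak.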
Next, edit the agent_one.py file to reflect these new MCP Servers.
Before the change, the Fast Agent annotation was as follows:
# Define the agent
@fast.agent(name="agent_one", instruction="You are a helpful AI Agent.")
async def main():
Edit it to reflect the new MCP Server configuration as follows:
# Define the agent
@fast.agent(name="agent_one", servers=["fetch", "filesystem"], instruction="You are a helpful AI Agent")
async def main():
Tip: MCP Servers are an incredibly powerful way to streamline AI code assistant workflows and vibe coding, so be sure to connect Snyk’s free MCP Server capabilities to your agentic workflows.
Live Fast Agent Application with MCP Servers
Now, inside the state-transfer directory, run the agent_one.py file as follows:
uv run agent_one.py
And you’ll see an output such as:
Type /help for commands, @agent to switch agent. Ctrl+T toggles multiline mode.
agent_one >
agent_one Mode: NORMAL <Enter>:Submit | Ctrl+T:Multiline Editing | Ctrl+L:Clear | ↑/↓:History | v0.2.30
This means you’re now inside an interactive terminal user interface powered by Fast Agent and can interact with this LLM-powered application. Remember the OpenAI API key we added? You now have your own AI chatbot.
Even more powerful, we gave this LLM access to the file system and to fetch URLs from web servers using the MCP server configuration.
Here’s a demo showing a live interaction with the Fast Agent application we built:

And the result is that indeed a new `SNYK.md` file was created on disk. Here is its contents:
# Latest News from Snyk.io
- Snyk has launched the AI Trust Platform to secure AI-generated code and AI native applications, emphasizing trust as the foundation for AI-driven secure software.
- The platform integrates AI-powered workflows for development and security stakeholders, leveraging agentic and assistant-based AI for automation, efficiency, and innovation.
- Snyk focuses on proactive, AI-powered security with its application security testing engines noted as the fastest, most accurate, and most comprehensive.
- Snyk supports organizations of all sizes in achieving application security, compliance, and DevSecOps governance goals.
- The company highlights its ecosystem approach with technology integrations that work with existing tools and workflows to bring secure AI solutions to life.
- Key resources featured include guides and articles such as the "CISOs Guide to Safely Unleashing Power of Gen AI," tips on engaging developers in security programs, and building strong AppSec programs from basics to best practices.
- Snyk invites users to start securing AI-generated code quickly or book a demo to explore developer security use cases.
This summary captures the latest focus and offerings highlighted on the Snyk homepage as of now.
What other MCP Servers can you connect to your agentic workflows? Here are a few ideas to explore further:
- Don’t forget MCP security: Top 12 AI Security Risks You Can’t Ignore
- More on MCP security attack vectors: MCP Security – What's Old is New Again