Learn how to create a minimal MCP (Model Context Protocol) server and agent in Rust, exposing tools and communicating with an external LLM API.
Building a Basic MCP Agent + Server in Rust
MCP (Model Context Protocol) provides a structured, secure way to let AI models interact with external systems.
Instead of custom ad-hoc APIs, MCP defines a clean standard for exchanging messages, tools, prompts, and results.
In this tutorial, we will build:
- A small MCP server using Axum
- One tool ("echo") that returns JSON
- A Rust-based MCP agent that calls the LLM
- An external LLM API integration using reqwest
This guide uses standard Rust crates you likely already know; no proprietary systems needed.
Prerequisites
Install Rust via rustup:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Initialize the server project:
cargo init mcp_server
cd mcp_server
Install required crates:
cargo add axum tokio serde serde_json reqwest tower-http
You will also need access to an LLM API, such as an OpenAI-compatible endpoint.
For local development, you can use Ollama to pull and host existing models.
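If you go the Ollama route, the agent built in Step 3 can simply point its endpoint constant at Ollama's OpenAI-compatible API. A minimal sketch, assuming Ollama's default port and a model you have already pulled:

// Sketch: target a local Ollama instance instead of a hosted API.
// Assumes Ollama's default port (11434); "llama3" is a placeholder for
// whichever model you pulled with `ollama pull`.
const LLM_URL: &str = "http://localhost:11434/v1/chat/completions";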
Project Structure
mcp_server/
├── src/
│   ├── main.rs          # MCP server
│   └── tools/
│       ├── mod.rs
│       └── echo.rs      # Example tool implementation
└── Cargo.toml

mcp_agent/               # built in Step 3 as its own crate
├── src/
│   ├── main.rs
│   └── agent.rs         # MCP agent
└── Cargo.toml
Step 1: Create a Minimal MCP Tool in Rust
Each tool accepts JSON input and returns JSON. Below is a simple echo tool.
// src/tools/echo.rs
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
pub struct EchoInput {
    pub message: String,
}

#[derive(Serialize)]
pub struct EchoOutput {
    pub echo: String,
}

pub fn run(input: EchoInput) -> EchoOutput {
    EchoOutput {
        echo: input.message,
    }
}
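Because the tool is a plain function, a quick unit test keeps it honest. A minimal sketch, placed at the bottom of src/tools/echo.rs:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn echo_returns_input_message() {
        let out = run(EchoInput { message: "hi".into() });
        assert_eq!(out.echo, "hi");
    }
}

Run it with cargo test.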
Add a mod.rs file to the tools folder so the module tree picks up the new echo.rs:
pub(crate) mod echo;
Step 2: Build the MCP Server Using Axum
The server exposes:
- GET /tools: list available tools
- POST /call: execute a tool
// src/main.rs
use axum::{
    routing::{get, post},
    Json, Router,
};
use serde::Deserialize;
use std::net::SocketAddr;
use tokio::net::TcpListener;

mod tools;
use tools::echo::{run as echo_run, EchoInput};

async fn list_tools() -> Json<serde_json::Value> {
    Json(serde_json::json!({
        "type": "tool_list",
        "tools": [
            {
                "name": "echo",
                "description": "Echo a message back"
            }
        ]
    }))
}

#[derive(Deserialize)]
struct ToolCall {
    name: String,
    params: serde_json::Value,
}

async fn call_tool(Json(payload): Json<ToolCall>) -> Json<serde_json::Value> {
    match payload.name.as_str() {
        "echo" => {
            // NOTE: a production server should reject malformed params
            // with an error response instead of panicking here.
            let input: EchoInput = serde_json::from_value(payload.params).unwrap();
            let output = echo_run(input);
            Json(serde_json::json!({
                "type": "tool_result",
                "name": "echo",
                "output": output
            }))
        }
        _ => Json(serde_json::json!({ "error": "unknown tool" })),
    }
}

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/tools", get(list_tools))
        .route("/call", post(call_tool));

    let addr = SocketAddr::from(([127, 0, 0, 1], 8001));
    println!("MCP Server running on http://{}", addr);

    let listener = TcpListener::bind(addr).await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
Make sure your Cargo.toml lists these dependencies and looks something like the following:
[package]
name = "mcp_server"
version = "0.1.0"
edition = "2024"
[dependencies]
axum = "0.8.7"
reqwest = "0.12.24"
serde = "1.0.228"
serde_json = "1.0.145"
tokio = { version = "1.48.0", features = ["full"] }
tower-http = "0.6.6"
Run the server:
cargo run
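With the server up, you can exercise both endpoints from another terminal. For example, with curl:

curl http://127.0.0.1:8001/tools
curl -X POST http://127.0.0.1:8001/call \
  -H "Content-Type: application/json" \
  -d '{"name": "echo", "params": {"message": "hello"}}'

The second call should return {"type":"tool_result","name":"echo","output":{"echo":"hello"}}.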
Step 3: Build the MCP Agent in Rust
The agent performs:
- Fetch available tools
- Send a prompt to the LLM
- Detect if the model requests a tool
- Call that tool on the MCP server
- Return the final answer
In a new directory, create the agent's Cargo project:
cargo init mcp_agent
cd mcp_agent
Then add the following dependencies to the agent's Cargo.toml:
[dependencies]
reqwest = { version = "0.12", features = ["json", "rustls-tls"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokio = { version = "1", features = ["full"] }
Now create the agent logic in src/agent.rs:
// src/agent.rs
use reqwest::Client;
use serde_json::json;

const LLM_URL: &str = "https://api.openai.com/v1/chat/completions";
const MCP_SERVER: &str = "http://127.0.0.1:8001";
const API_KEY: &str = "YOUR_KEY_HERE";

pub async fn agent_cycle(user_input: &str) -> serde_json::Value {
    let client = Client::new();

    // 1. Get available tools
    let tools: serde_json::Value = client
        .get(format!("{}/tools", MCP_SERVER))
        .send().await.unwrap()
        .json().await.unwrap();

    // 2. Send prompt to LLM
    let response: serde_json::Value = client
        .post(LLM_URL)
        .bearer_auth(API_KEY)
        .json(&json!({
            "model": "gpt-4o-mini",
            "messages": [
                {
                    "role": "system",
                    "content": "You are an MCP-compliant AI. To call tools, output JSON: {\"tool\":\"name\", \"params\":{...}}"
                },
                { "role": "user", "content": user_input },
                { "role": "system", "content": format!("Available tools: {}", tools) }
            ]
        }))
        .send().await.unwrap()
        .json().await.unwrap();

    let content = response["choices"][0]["message"]["content"].as_str().unwrap();

    // 3. Try to parse the reply as a tool request
    if let Ok(parsed) = serde_json::from_str::<serde_json::Value>(content) {
        if parsed.get("tool").is_some() {
            let tool_name = parsed["tool"].as_str().unwrap();
            let params = parsed["params"].clone();

            // 4. Call the requested tool on the MCP server
            let tool_output: serde_json::Value = client
                .post(format!("{}/call", MCP_SERVER))
                .json(&json!({
                    "name": tool_name,
                    "params": params
                }))
                .send().await.unwrap()
                .json().await.unwrap();

            return json!({
                "assistant": parsed,
                "tool_result": tool_output
            });
        }
    }

    // If the reply is not JSON, just return the text
    json!({ "assistant": content })
}
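One quick hardening step before going further: rather than committing a key in source, read it from the environment at startup. A small sketch; OPENAI_API_KEY is just a conventional variable name, not something MCP requires:

// Sketch: resolve the API key at runtime instead of hard-coding it.
fn api_key() -> String {
    std::env::var("OPENAI_API_KEY").expect("set OPENAI_API_KEY before running the agent")
}

You would then call .bearer_auth(api_key()) in agent_cycle instead of using the API_KEY constant.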
Step 4: Test the Agent
Create a small main.rs for the agent:
// src/main.rs
mod agent;
use agent::agent_cycle;

#[tokio::main]
async fn main() {
    let result = agent_cycle("Please echo back 'MCP is easy'").await;
    println!("{:#?}", result);
}
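With the MCP server from Step 2 still running in another terminal, start the agent:
cargo run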
If everything works, the LLM will output a tool request such as:
{ "tool": "echo", "params": { "message": "MCP is easy" } }
The agent will call the MCP server, receive the tool result, and return structured JSON.
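Given the server's echo implementation, the combined value the agent returns should look roughly like:

{
  "assistant": { "tool": "echo", "params": { "message": "MCP is easy" } },
  "tool_result": {
    "type": "tool_result",
    "name": "echo",
    "output": { "echo": "MCP is easy" }
  }
}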
What's Next?
From here, the natural extension is passing data between steps through a higher-level plan context and orchestrating multiple steps together with the same pattern.
MCP becomes powerful quickly when paired with:
- Database-backed tools (via SQLx)
- Filesystem tools (read/write server assets; see the sketch after this list)
- Multi-step planning with tool chaining
- Authentication and multi-agent orchestration
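As a taste of that direction, here is a sketch of a second, hypothetical filesystem tool that follows the same shape as echo (error handling and path sandboxing deliberately omitted):

// src/tools/read_file.rs (hypothetical example, same shape as echo)
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
pub struct ReadFileInput {
    pub path: String,
}

#[derive(Serialize)]
pub struct ReadFileOutput {
    pub contents: String,
}

pub fn run(input: ReadFileInput) -> ReadFileOutput {
    // A real tool must sandbox paths and surface errors; this sketch
    // just returns an empty string on failure.
    let contents = std::fs::read_to_string(&input.path).unwrap_or_default();
    ReadFileOutput { contents }
}

Register it in tools/mod.rs, add a "read_file" arm to the call_tool match, and the agent can use it with no other changes.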
With this base, you can build robust protocol-driven AI systems entirely in Rust.
Want help expanding your MCP architecture?
Reach out; we can help.