Beyond the Hype
Most of the AI ecosystem is built in Python. That makes sense for model training, experimentation, and prototyping. But the infrastructure layer — the systems that AI agents depend on to function reliably — demands different properties.
When I built Ariadne, a universal dependency graph for AI coding agents, Rust wasn't a trendy choice. It was the only choice that satisfied the constraints.
The Constraints
An MCP server that provides dependency graphs to AI agents needs to be:
- Fast — agents query it on every file change
- Correct — a wrong dependency edge means wrong code suggestions
- Memory-efficient — runs alongside the IDE and agent
- Cross-platform — developers use macOS, Linux, and Windows
Let's look at each:
Speed: Parsing Thousands of Files
Ariadne uses Tree-sitter for parsing. In Rust, we can process an entire repository in milliseconds:
use tree_sitter::{Language, Parser};

pub fn parse_imports(source: &str, language: Language) -> Vec<Import> {
    let mut parser = Parser::new();
    parser.set_language(&language).unwrap();
    let tree = parser.parse(source, None).unwrap();
    let root = tree.root_node();

    // Walk the top-level AST nodes and extract import statements
    let mut imports = Vec::new();
    let mut cursor = root.walk();
    for node in root.children(&mut cursor) {
        if node.kind() == "import_statement" {
            imports.push(Import::from_node(node, source));
        }
    }
    imports
}
The same operation in Python would be 10-50x slower due to interpreter overhead. For a tool that needs to respond in real time, that difference matters.
Correctness: The Type System as Safety Net
Dependency graphs are inherently complex. A single wrong edge can cascade into incorrect analysis. Rust's type system catches entire categories of bugs:
// The type system ensures we can't mix up node types
pub enum DependencyEdge {
    Import { from: ModuleId, to: ModuleId },
    Reexport { from: ModuleId, to: ModuleId, symbols: Vec<Symbol> },
    TypeDependency { from: TypeId, to: TypeId },
}
// This won't compile — you can't accidentally use a TypeId where a ModuleId is expected
// let edge = DependencyEdge::Import { from: type_id, to: module_id };
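The payoff shows up in consumer code too: a `match` over `DependencyEdge` must be exhaustive, so adding a new edge kind later forces every call site to handle it. A minimal std-only sketch (the newtype IDs and the `describe` helper are illustrative assumptions, not Ariadne's actual API):

```rust
// Hypothetical newtype IDs mirroring the ModuleId/TypeId distinction above.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct ModuleId(pub u32);
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct TypeId(pub u32);
#[derive(Debug, Clone, PartialEq)]
pub struct Symbol(pub String);

pub enum DependencyEdge {
    Import { from: ModuleId, to: ModuleId },
    Reexport { from: ModuleId, to: ModuleId, symbols: Vec<Symbol> },
    TypeDependency { from: TypeId, to: TypeId },
}

// A non-exhaustive match here is a compile error: adding a fourth
// variant later forces every consumer of the graph to handle it.
pub fn describe(edge: &DependencyEdge) -> String {
    match edge {
        DependencyEdge::Import { from, to } => {
            format!("module {} imports module {}", from.0, to.0)
        }
        DependencyEdge::Reexport { from, to, symbols } => {
            format!("module {} re-exports {} symbols from module {}", from.0, symbols.len(), to.0)
        }
        DependencyEdge::TypeDependency { from, to } => {
            format!("type {} depends on type {}", from.0, to.0)
        }
    }
}
```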
Memory: Running Alongside Everything Else
Developers already have their IDE, terminal, browser, and AI agent running. The dependency graph server can't be a memory hog:
// Petgraph gives us an efficient graph representation
use petgraph::graph::DiGraph;

pub struct DependencyGraph {
    graph: DiGraph<FileNode, EdgeKind>,
    // Interned strings to avoid duplicate allocations
    paths: StringInterner,
}
A typical project's dependency graph fits in 2-5MB of memory. The Python equivalent would use 10-20x more.
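String interning is what keeps path storage flat: thousands of nodes reference the same handful of directory prefixes, so each distinct path is stored once and referred to by a small integer afterwards. A minimal std-only sketch of the idea (a stand-in for the `StringInterner` field above, not its actual implementation):

```rust
use std::collections::HashMap;

/// Stores each distinct path exactly once; callers hold a cheap u32 id
/// instead of an owned String.
#[derive(Default)]
pub struct PathInterner {
    map: HashMap<String, u32>,
    strings: Vec<String>,
}

impl PathInterner {
    /// Returns the existing id for `path`, or allocates it once.
    pub fn intern(&mut self, path: &str) -> u32 {
        if let Some(&id) = self.map.get(path) {
            return id;
        }
        let id = self.strings.len() as u32;
        self.map.insert(path.to_string(), id);
        self.strings.push(path.to_string());
        id
    }

    /// Looks the string back up from its id.
    pub fn resolve(&self, id: u32) -> Option<&str> {
        self.strings.get(id as usize).map(|s| s.as_str())
    }
}
```

Interning the same path twice returns the same id with no new allocation, which is where the memory savings over naive per-node `String` storage come from.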
The MCP Protocol
Ariadne exposes its graph through the Model Context Protocol (MCP), which means any AI agent can query it:
{
  "method": "tools/call",
  "params": {
    "name": "get_dependencies",
    "arguments": {
      "file": "src/auth/handler.rs",
      "depth": 2
    }
  }
}
The response includes the full dependency tree — what the file imports, what imports it, and the symbol-level connections between them.
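The shape of that response can be sketched as Rust types (the field names here are illustrative assumptions about the wire format, not Ariadne's documented schema):

```rust
/// Hypothetical shape of a get_dependencies response:
/// what the file imports, what imports it, and symbol-level links.
pub struct DependencyReport {
    pub file: String,
    pub imports: Vec<String>,     // files this file depends on
    pub imported_by: Vec<String>, // files that depend on this file
    pub symbols: Vec<SymbolLink>, // symbol-level connections
}

/// A single symbol-level edge between two files.
pub struct SymbolLink {
    pub symbol: String,
    pub from: String,
    pub to: String,
}
```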
Results
After publishing Ariadne on crates.io:
- Parse time: Full repository scan in under 100ms for most projects
- Memory: 2-5MB for a 10,000-file project
- Binary size: 3.2MB standalone
- Platforms: macOS, Linux, Windows — all from one codebase
When to Use Rust (and When Not To)
Rust is the right choice when you need:
- Predictable, low-latency performance
- Memory safety without a garbage collector
- Cross-platform native binaries
- Long-running processes that can't leak memory
It's the wrong choice when you need:
- Rapid prototyping (use Python or TypeScript)
- ML model training (use Python with PyTorch/JAX)
- Simple CRUD APIs (use whatever your team knows)
The AI ecosystem needs both: Python for the science, Rust for the infrastructure.