# Comparison

## QueryMT vs alternative LLM frameworks
No framework is best at everything. Here's how QueryMT compares to popular alternatives, and when to choose each.
| Feature | QueryMT | LiteLLM | LangChain | CrewAI | Ollama |
| --- | --- | --- | --- | --- | --- |
| Language | Rust | Python | Python/JS | Python | Go |
| Provider count | 17+ | 100+ | ~40 | ~10 | ~5 (local) |
| Plugin model | WASM/native from OCI | pip packages | pip packages | pip packages | built-in |
| Sandboxed providers | WASM/Extism (see sketch below) | — | — | — | — |
| Config-defined agents | TOML files | — | — | — | — |
| Multi-agent quorum | planner + delegates | — | — | limited | — |
| Mesh networking | libp2p/iroh | — | — | — | — |
| Credentialless sharing | signed invites | — | — | — | — |
| Desktop MCP (device data) | iMCP on-device | — | — | — | — |
| Scheduled agents | interval + event | — | — | — | — |
| Persistent knowledge | lifecycle tools | — | memory | memory | — |
| Local GPU inference | Metal/CUDA/Vulkan | proxy | — | — | ✓ |
| Parallel evaluation | ✓ | ✓ | — | — | — |
| MCP support | ✓ | — | ✓ | — | — |
| Structured output | ✓ | ✓ | ✓ | — | — |
| VS Code extension | ✓ | — | — | — | — |
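To make the "sandboxed providers" row concrete, here is a minimal sketch of what loading a WASM provider plugin through Extism looks like from Rust. The Extism host API calls are real, but the plugin file name, the exported `complete` function, and the JSON payload shape are illustrative assumptions, not QueryMT's actual plugin ABI.

```rust
// Minimal Extism host sketch: load a provider plugin compiled to WASM and
// call one of its exports. NOTE: the file name, the `complete` export, and
// the JSON shape below are hypothetical -- this is not QueryMT's real ABI.
use extism::{Manifest, Plugin, Wasm};

fn main() -> Result<(), extism::Error> {
    // The plugin ships as a .wasm artifact (QueryMT pulls these from OCI registries).
    let wasm = Wasm::file("example_provider.wasm");
    let manifest = Manifest::new([wasm]);

    // `true` enables WASI; the plugin still runs inside the WASM sandbox,
    // so it can only touch what the manifest explicitly allows.
    let mut plugin = Plugin::new(&manifest, [], true)?;

    // Call the (hypothetical) `complete` export with a JSON request
    // and read back the response as a string.
    let request = r#"{"model":"some-model","prompt":"Hello"}"#;
    let response = plugin.call::<&str, &str>("complete", request)?;
    println!("{response}");
    Ok(())
}
```

The sandboxing is the point of this row: a misbehaving provider plugin can crash or stall, but it cannot read host memory or open sockets the host never granted.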
## When to use QueryMT
- You want config-driven agents defined in TOML, not code (see the sketch after this list)
- You need mesh networking or credentialless provider sharing
- You want on-device data access via MCP
- You need scheduled, autonomous agents
- You care about sandboxed, OCI-pulled providers
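For a feel of what "config, not code" means, here is a sketch of a TOML agent definition. TOML is the format QueryMT uses, but every key name below (`agent`, `model`, `schedule`, `tools.mcp`, and so on) is an illustrative guess, not the documented schema.

```toml
# Hypothetical agent definition -- key names are illustrative guesses,
# not QueryMT's documented schema. The point: the agent is pure config.
[agent]
name = "daily-digest"
description = "Summarize new items and post a digest"

[agent.model]
provider = "openai"         # or a sandboxed WASM provider pulled from OCI
model = "some-model"

[agent.schedule]
interval = "24h"            # interval-based trigger; the table also lists event triggers

[[agent.tools.mcp]]
server = "imcp"             # on-device data via the iMCP desktop server
```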
## When to use something else
- LiteLLM: You need 100+ provider integrations and are happy with Python
- LangChain: You want a large ecosystem of chains, retrievers, and JS support
- CrewAI: You want simple multi-agent role-playing in Python
- Ollama: You just want to run a local model quickly