QueryMT vs alternative LLM frameworks

No single framework is best at everything. Here's how QueryMT compares to popular alternatives, and when each one is the right choice.

| Feature | QueryMT | LiteLLM | LangChain | CrewAI | Ollama |
|---|---|---|---|---|---|
| Language | Rust | Python | Python/JS | Python | Go |
| Provider count | 17+ | 100+ | ~40 | ~10 | ~5 (local) |
| Plugin model | WASM/native from OCI | Python packages | Pip packages | Pip packages | Built-in |
| Sandboxed providers | WASM/Extism | — | — | — | — |
| Config-defined agents | TOML files | — | — | — | — |
| Multi-agent quorum | planner+delegates | — | — | limited | — |
| Mesh networking | libp2p/iroh | — | — | — | — |
| Credentialless sharing | signed invites | — | — | — | — |
| Desktop MCP (device data) | iMCP on-device | — | — | — | — |
| Scheduled agents | interval + event | — | — | — | — |
| Persistent knowledge | lifecycle tools | — | memory | memory | — |
| Local GPU inference | Metal/CUDA/Vulkan | proxy | — | — | ✓ |
| Parallel evaluation | ✓ | | | | |
| MCP support | ✓ | | | | |
| Structured output | ✓ | | | | |
| VS Code extension | ✓ | | | | |

When to use QueryMT

  • You want config-driven agents, not code (see the TOML sketch after this list)
  • You need mesh networking or credentialless provider sharing
  • You want on-device data access via MCP
  • You need scheduled, autonomous agents
  • You care about sandboxed, OCI-pulled providers
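
To make the config-driven point concrete, here is a hypothetical agent definition in TOML. The layout and field names below are invented for illustration; they are not QueryMT's actual schema, so check the project documentation for the real format.

```toml
# Hypothetical agent definition -- field names are illustrative,
# not QueryMT's actual configuration schema.
[agent]
name     = "news-digest"
provider = "openai"        # provider plugin, e.g. pulled as WASM from an OCI registry
model    = "gpt-4o-mini"

# "Scheduled agents: interval + event" -- run unattended every six hours.
[agent.schedule]
interval = "6h"

[agent.prompt]
system = "Summarize the top stories from my saved feeds into five bullet points."
```

The equivalent agent in LangChain or CrewAI would be a Python module; the appeal here is that the whole agent lives in one declarative file you can version, share, and edit without touching code.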

When to use something else

  • LiteLLM: You need 100+ provider integrations and are happy with Python (see the example after this list)
  • LangChain: You want a large ecosystem of chains, retrievers, and JS support
  • CrewAI: You want simple multi-agent role-playing in Python
  • Ollama: You just want to run a local model quickly
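
For contrast, this is roughly what a one-off completion looks like through LiteLLM's unified Python API: a single `completion()` call fronts all of its providers, and the model string selects the backend.

```python
# Minimal LiteLLM usage: one completion() call, many providers.
# Assumes OPENAI_API_KEY is set in the environment; swapping the model
# string (e.g. to "anthropic/claude-3-5-sonnet-20240620") switches providers.
from litellm import completion

response = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```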