Critique of Ollama's Practices in Local LLM Ecosystem
Founded in 2021 by Jeffrey Morgan and Michael Chiang, Ollama gained traction as an accessible interface for running large language models locally. Its inference engine was built on llama.cpp, the library Georgi Gerganov released in March 2023. For more than a year, however, Ollama's documentation neither referenced llama.cpp nor included the required MIT license notices, and GitHub issues raising the problem, such as issue #3185, went unresolved for over 400 days.

In mid-2025, Ollama moved to a custom ggml-based implementation, which reintroduced bugs the upstream project had already fixed; community benchmarks indicated llama.cpp ran 1.8 times faster. The company also drew criticism for deceptive model naming, and in July 2025 it shipped a closed-source GUI app with no license at launch. Late 2025 brought cloud-hosted models, raising privacy and security concerns, including CVE-2025-51471.

Alternatives such as llama.cpp offer comparable features, and Gerganov's company ggml.ai joined Hugging Face in February 2026.
Key facts
- Ollama was founded in 2021 by Jeffrey Morgan and Michael Chiang
- The tool originally relied entirely on llama.cpp, created by Georgi Gerganov in March 2023
- Ollama failed to credit llama.cpp in documentation for over a year
- GitHub issue #3185 requesting license compliance went over 400 days without response
- In mid-2025, Ollama built a custom implementation that reintroduced bugs llama.cpp had already solved
- Benchmarks show llama.cpp runs 1.8x faster than Ollama on the same hardware
- Ollama listed distilled DeepSeek models as "DeepSeek-R1" in January 2025 without proper distinction
- In July 2025, Ollama released a closed-source GUI app that initially shipped without a license
Entities
People
- Georgi Gerganov
- Jeffrey Morgan
- Michael Chiang
Institutions
- Ollama
- llama.cpp
- Y Combinator
- GitHub
- Hugging Face
- Docker Inc.
- Kitematic
- Hacker News
- XDA
- DeepSeek
- Alibaba Cloud
- Red Hat
- LM Studio
- Jan
- Msty
- koboldcpp
- ggml.ai
- Unsloth
- Bartowski
- MiniMax
- Sleepingrobots