ARTFEED — Contemporary Art Intelligence

Qwen3.6-27B Dense Model Surpasses Flagship MoE in Coding Benchmarks

ai-technology · 2026-04-24

On April 22, 2026, Qwen released Qwen3.6-27B, a 27-billion-parameter dense model that outperforms the previous open-source champion, Qwen3.5-397B-A17B, a mixture-of-experts model with 397 billion total and 17 billion active parameters, on all major coding benchmarks. The new model is also far lighter: its weights occupy 55.6GB on Hugging Face, roughly one-fourteenth of its predecessor's 807GB. Simon Willison tested the 16.8GB Unsloth quantization, Qwen3.6-27B-GGUF Q4_K_M, using llama-server and a recipe posted by benob on Hacker News. The model generated an SVG of a pelican riding a bicycle; a second SVG, of a North Virginia opossum on an e-scooter, took 6,575 tokens and 4 minutes 25 seconds, for a throughput of 24.74 tokens per second.
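
For readers who want to try the same setup, below is a minimal sketch of querying a local llama-server instance the way the post describes. The launch command, port, and prompt wording are assumptions (the post's exact recipe is not quoted here); llama-server does expose an OpenAI-compatible chat endpoint, which the Python standard library can reach without extra dependencies:

    # Assumed launch command (repo name taken from the post, flag syntax
    # from llama.cpp's Hugging Face download support):
    #   llama-server -hf unsloth/Qwen3.6-27B-GGUF:Q4_K_M
    import json
    import urllib.request

    # llama-server serves an OpenAI-compatible API, by default on port 8080.
    URL = "http://127.0.0.1:8080/v1/chat/completions"

    payload = {
        "messages": [{
            "role": "user",
            "content": "Generate an SVG of a pelican riding a bicycle",
        }],
        "max_tokens": 8192,
    }

    request = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        reply = json.load(response)

    # The SVG markup arrives as the assistant message content.
    print(reply["choices"][0]["message"]["content"])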

Key facts

  • Qwen3.6-27B is a 27B dense model.
  • It surpasses Qwen3.5-397B-A17B on all major coding benchmarks.
  • Qwen3.5-397B-A17B is 807GB on Hugging Face.
  • Qwen3.6-27B is 55.6GB on Hugging Face.
  • Simon Willison tested the 16.8GB Unsloth Qwen3.6-27B-GGUF Q4_K_M quantized version.
  • The test used llama-server and a recipe by benob on Hacker News.
  • The model generated an SVG of a pelican riding a bicycle.
  • An SVG of a North Virginia opossum on an e-scooter took 6,575 tokens and 4min 25s at 24.74 t/s (a quick arithmetic check follows this list).
  • The post was published on April 22, 2026.
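
The reported throughput is consistent with the token count and wall-clock time, as this quick check shows (pure arithmetic, using only the numbers above):

    tokens = 6_575
    elapsed = 4 * 60 + 25    # 4min 25s = 265 seconds
    print(tokens / elapsed)  # ≈ 24.81 t/s, close to the reported 24.74 t/s;
                             # the small gap suggests the 4min 25s figure
                             # drops fractional seconds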

Entities

Institutions

  • Qwen
  • Hugging Face
  • Unsloth
  • Hacker News

Sources