Running Qwen 3.6 Locally on a Mac Mini M4 with 16GB RAM

Tags: Local LLMs · Apple Silicon · Qwen 3.6


Two days ago Qwen open-sourced Qwen 3.6-35B-A3B — a 35-billion-parameter Mixture-of-Experts (MoE) model that activates only 3 billion parameters per token. It's Apache 2.0 licensed, ships with a vision encoder, and is reportedly competitive with much larger models on agentic coding benchmarks. GGUF quantizations were up within hours.

Here's the thing: you can run it on a $599 Mac Mini M4 with 16GB of RAM. Not a toy demo — actual usable inference at 17 tok/s, zero swap, 81% memory free. This post is about how to do that, and which tools give you the best experience.

Why 35B-A3B works on 16GB

The naive math says it shouldn't fit. The standard formula for estimating model memory (from BentoML):

Memory (GB) = Parameters (B) × (Bits per weight / 8) × 1.2 overhead
35 × 4 / 8 × 1.2 = ~21GB
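The same estimate in runnable form, as a quick sanity check (this is just the rule of thumb above; the function name is mine):

```python
def estimate_model_memory_gb(params_b: float, bits_per_weight: int,
                             overhead: float = 1.2) -> float:
    """Rule-of-thumb model memory: parameters x bytes-per-weight x overhead."""
    return params_b * (bits_per_weight / 8) * overhead

# Qwen 3.6-35B-A3B at Q4 (4 bits per weight):
full_model = estimate_model_memory_gb(35, 4)
print(f"Full model at Q4: ~{full_model:.0f} GB")  # ~21 GB
```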

21GB for a Q4 quantization. That doesn't fit in 16GB. So how does it work?

The key is the MoE architecture. "35B-A3B" means 35 billion total parameters, but only 3 billion active per token. The model uses 256 total experts with 8 routed + 1 shared active per inference step. The remaining experts sit idle. This is what makes the --mmap trick possible: llama.cpp memory-maps the model file, and the OS only pages in the weights for the currently active experts. Since the hot working set is roughly 3B parameters (~2GB at Q4), it fits comfortably in 16GB with room to spare.
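To make the routing concrete, here is a toy top-k router in pure Python. The expert counts come from the text above (256 experts, 8 routed per token); the random logits and function names are illustrative only, not Qwen's actual router:

```python
import math
import random

N_EXPERTS, TOP_K = 256, 8  # from the model config described above

def route(logits):
    """Pick the top-k experts for one token; softmax-normalize their gate weights."""
    top = sorted(range(len(logits)), key=logits.__getitem__, reverse=True)[:TOP_K]
    m = max(logits[i] for i in top)                    # stabilize the softmax
    exps = {i: math.exp(logits[i] - m) for i in top}
    total = sum(exps.values())
    return {i: w / total for i, w in exps.items()}

random.seed(0)
token_logits = [random.gauss(0, 1) for _ in range(N_EXPERTS)]
active = route(token_logits)
print(f"{len(active)}/{N_EXPERTS} experts active; gate weights sum to {sum(active.values()):.2f}")
```

Only the 8 routed experts (plus the shared one) are touched for this token; the other ~248 experts' weights are never read, which is exactly the access pattern mmap exploits.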

Jock.pl benchmarked this on a Mac Mini M4 16GB: 17.3 tok/s decode, 81% memory free, zero swap. That's not hypothetical — it's a real measurement on the base model Mac Mini.

Why this matters: On benchmarks, the 35B-A3B architecture beats dense models up to 120B on coding and reasoning tasks, while running at the latency of a 3B model. On 16GB RAM. For $0/month. That's the pitch.

Picking your inference tool

There are four main ways to run LLMs locally on a Mac. Here's how they compare for running the 35B-A3B on 16GB specifically:

| Tool | Ease of setup | 35B-A3B on 16GB? | Tool calling | Notes |
|---|---|---|---|---|
| llama.cpp | Build from source | Yes (mmap) | Yes | The way to do it on 16GB |
| Ollama | One command | Yes (uses llama.cpp) | Yes | MLX backend requires 32GB+ |
| LM Studio | GUI app | Yes (MLX or GGUF) | Yes | MLX on 16GB; nice UI |
| MLX / mlx-lm | pip install | Tight fit | No* | Fastest raw speed; no tool calling yet |

*mlx-vlm has a PR in progress for tool calling support.

One important detail: Ollama 0.19 (released March 30, 2026) shipped an MLX backend that nearly doubles decode speed — from 58 tok/s to 112 tok/s. But it requires 32GB+ unified memory. On 16GB, Ollama falls back to the llama.cpp backend. Still works, just not the fast path. LM Studio doesn't have this gate and can use MLX on 16GB, which is a real advantage.
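The gating described above can be sketched as a tiny decision function (the 32GB threshold and speeds are from the release as described; the function name and return values are mine):

```python
def pick_backend(unified_memory_gb: int, mlx_available: bool = True) -> str:
    """Ollama 0.19-style backend choice: the MLX path needs 32GB+ unified memory."""
    if mlx_available and unified_memory_gb >= 32:
        return "mlx"        # fast path: ~112 tok/s decode for the 35B-A3B
    return "llama.cpp"      # fallback: mmap-based, ~17 tok/s on 16GB

print(pick_backend(16))  # Mac Mini M4 16GB falls back to llama.cpp
print(pick_backend(32))  # 32GB+ machines get the MLX path
```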

Setup 1: llama.cpp with mmap (recommended)

This is the most reliable way to run the 35B-A3B on 16GB. Metal GPU acceleration is enabled by default on macOS — no flags needed.

# Build llama.cpp
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Download the Qwen 3.6 GGUF (Q4_K_M quantization)
pip install huggingface_hub
huggingface-cli download unsloth/Qwen3.6-35B-A3B-GGUF \
  Qwen3.6-35B-A3B-Q4_K_M.gguf \
  --local-dir models/

# Run it — mmap is llama.cpp's default, so just avoid --no-mmap (which disables it)
./build/bin/llama-cli \
  -m models/Qwen3.6-35B-A3B-Q4_K_M.gguf \
  -c 4096 \
  -n 512 \
  -p "Write a FastAPI endpoint with input validation"

What's happening: llama.cpp memory-maps the model file by default (mmap) instead of loading it all into RAM, and the OS pages in weights on demand. Because only ~3B parameters are active per token, the actual resident memory stays well under 16GB. The rest of the ~21GB model file lives on your SSD and gets paged in only when an expert is activated. (The flag to avoid is --no-mmap, which forces a full load.)
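You can watch the same page-in-on-demand behavior with Python's stdlib mmap module. This is a toy stand-in for a weights file, not llama.cpp internals:

```python
import mmap
import os
import tempfile

# Create a toy 64MB "weights file" as a sparse file (no RAM used to write it).
path = os.path.join(tempfile.mkdtemp(), "toy_weights.bin")
with open(path, "wb") as f:
    f.truncate(64 * 1024 * 1024)

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Nothing is resident yet. Touching a byte pages in just that one page
    # (4-16KB), the way llama.cpp only pages in experts a token routes to.
    first = mm[0]
    middle = mm[32 * 1024 * 1024]
    mm.close()

print(first, middle)  # both 0: only two pages of the 64MB were ever read
```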

You can also run it as an OpenAI-compatible API server for use with coding agents:

# Start the server (mmap is on by default here too)
./build/bin/llama-server \
  -m models/Qwen3.6-35B-A3B-Q4_K_M.gguf \
  -c 4096 \
  --port 8080

# Now any tool that speaks the OpenAI API can use it:
# aider, opencode, aichat, etc. → http://localhost:8080/v1
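As a minimal client sketch, here is an OpenAI-style chat-completion request for that server, built with only the standard library. The request is constructed but not sent, so the sketch runs without a live server; the "model" value is a placeholder (llama-server serves whatever model it was started with):

```python
import json
import urllib.request

def chat_request(prompt: str,
                 base_url: str = "http://localhost:8080/v1") -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions POST for a local llama-server."""
    body = {
        "model": "local",  # placeholder; the server already has its model loaded
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_request("Write a FastAPI endpoint with input validation")
print(req.full_url)
# To actually send it: resp = urllib.request.urlopen(req); print(json.load(resp))
```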

GGUF files are also available from bartowski if you prefer different quantization levels.

Setup 2: Ollama (easiest)

If you don't want to build anything from source, Ollama handles everything — download, quantization, API server — in one command. Under the hood it uses llama.cpp, so mmap works the same way.

# Install Ollama, then:
ollama run qwen3.6:35b-a3b

That's it. Ollama downloads the GGUF, picks Q4_K_M by default, and starts an OpenAI-compatible API at http://localhost:11434. You can connect coding agents directly:

# Launch with opencode
ollama launch opencode --model qwen3.6:35b-a3b

# Or with OpenClaw
ollama launch openclaw --model qwen3.6:35b-a3b

Ollama exposes models to anything that speaks the OpenAI API format — aichat, aider, opencode, and many others. Point them at http://localhost:11434/v1.

On 16GB you'll get roughly the same 17 tok/s as raw llama.cpp. The MLX-accelerated path (which roughly doubles decode speed on machines that qualify) requires 32GB+, so Ollama falls back to the llama.cpp backend. Still perfectly usable for interactive work.

Setup 3: LM Studio (best GUI, MLX on 16GB)

LM Studio deserves a special mention because it can run MLX-optimized models on 16GB, unlike Ollama, which gates its MLX backend behind 32GB. A 4-bit MLX build loads the full model in roughly half the memory of a full (non-mmap) llama.cpp load, and decodes about 2x faster.

Download the app, open the models page (Cmd + Shift + M), search for "Qwen3.6-35B-A3B", and filter by "MLX". Grab the 4-bit quantization from the mlx-community.

Kai Wern's guide reports 81.79 tok/s generation speed with LM Studio running the MLX-optimized 35B-A3B on a 64GB machine. On 16GB the model is a tighter fit via MLX, but community reports show it works — MLX's lower memory footprint is exactly what makes the difference between clean operation and swap thrashing.

To use LM Studio as a local API server (for coding agents), switch to the Developer screen (Cmd + 2) and toggle the server to Running. It serves on http://127.0.0.1:1234/v1.

Setup 4: Raw MLX (fastest inference, no tool calling)

MLX is Apple's own ML framework and the fastest inference path on Apple Silicon. If you don't need tool calling — just direct chat or batch generation — this gives you the best tok/s.

# Install mlx-lm
pip install mlx-lm

# Run the 35B-A3B
mlx_lm.generate \
  --model mlx-community/Qwen3.6-35B-A3B-4bit \
  --max-tokens 200 \
  --temp 0.7 \
  --prompt "Write a Python function to merge two sorted lists"

# Or start an OpenAI-compatible server
mlx_lm.server --model mlx-community/Qwen3.6-35B-A3B-4bit --port 8080

Since Qwen 3.6-35B-A3B is a vision-language model, you can also use mlx-vlm to process images:

# Install with torch dependency (avoids transformers errors)
brew install pipx
pipx install "mlx-vlm[torch]"

mlx_vlm.generate \
  --model mlx-community/Qwen3.6-35B-A3B-4bit \
  --max-tokens 200 \
  --temperature 0.0 \
  --image path/to/image.jpg \
  --prompt "Describe this image"

The catch: mlx-lm doesn't support tool calling yet, so you can't use it as a backend for coding agents that need to read/edit files. For that, use llama.cpp, Ollama, or LM Studio.

Performance on Mac Mini M4 16GB

All numbers are for the Qwen 3.6-35B-A3B at Q4 quantization on the base Mac Mini M4 (16GB unified memory, 10-core CPU, 10-core GPU), based on community benchmarks and Jock.pl's measurements:

| Tool | Decode (tok/s) | RAM resident | Swap | Notes |
|---|---|---|---|---|
| llama.cpp (mmap) | ~17 | ~3GB active | Zero | Most reliable on 16GB |
| Ollama | ~17 | ~3GB active | Zero | Same backend, easier setup |
| LM Studio (MLX) | ~25-35* | ~10-12GB | Minimal | Faster but tighter on memory |
| MLX (raw) | ~25-35* | ~10-12GB | Minimal | Fastest; no tool calling |

*MLX numbers on 16GB are extrapolated from Ante Kapetanovic's benchmarks on larger machines, scaled for memory bandwidth constraints. On a 64GB M4 Pro, the same model hits 70-80 tok/s via MLX. The 16GB constraint forces MLX to be more conservative with caching.
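A rough way to sanity-check these numbers is a memory-bandwidth roofline: per decoded token, every active weight has to be streamed from memory at least once, so bandwidth divided by active bytes per token gives an upper bound on decode speed. The 120 GB/s figure is the published memory bandwidth of the base M4; everything else here is back-of-envelope:

```python
def decode_ceiling_toks(bandwidth_gb_s: float, active_params_b: float,
                        bits_per_weight: int) -> float:
    """Upper bound on decode tok/s if every active weight is read once per token."""
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Base M4 (~120 GB/s) streaming 3B active params at Q4:
print(f"~{decode_ceiling_toks(120, 3, 4):.0f} tok/s theoretical ceiling")
```

The measured 17 tok/s sits well below the ~80 tok/s ceiling, which is consistent with SSD paging, expert-routing overhead, and scheduling costs on a 16GB machine rather than raw bandwidth being the limit.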

For comparison, on a Mac with 32GB+ and Ollama 0.19's MLX backend enabled, this same model hits 1810 tok/s prefill and 112 tok/s decode. The 32GB threshold is real — if you're buying a Mac specifically for local inference, the 32GB upgrade pays for itself.

But 17 tok/s is genuinely usable. That's fast enough for interactive chat, code generation, and tool-calling agents. It's slower than an API call, but it's free, private, and offline.
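To put 17 tok/s in concrete terms (plain arithmetic on the measured decode speed; the response lengths are illustrative):

```python
def seconds_for(tokens: int, toks_per_s: float) -> float:
    """Wall-clock decode time for a response of the given length."""
    return tokens / toks_per_s

for label, tokens in [("short answer", 100), ("code snippet", 300), ("long response", 800)]:
    print(f"{label:13} {tokens:4} tokens -> {seconds_for(tokens, 17):5.1f}s at 17 tok/s")
```

A typical code snippet streams in well under half a minute, which is why it feels interactive even though a hosted API would be faster.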

What about Qwen 3.6-Plus?

The full Qwen 3.6-Plus flagship (1M context, top-of-the-line benchmarks) was released on April 2, but it's API-only through Alibaba's DashScope. No weights, no GGUF files, no local option. The GitHub repo is up but only for the 35B-A3B variant.

For local inference, 3.6-35B-A3B is it — and given its benchmark numbers relative to its active parameter count, it's far more than a consolation prize.

My setup

I run Qwen 3.6-35B-A3B as my daily driver on the Mac Mini M4 16GB.

One pattern that works well: Ollama keeps the model warm in the background. When you send a request after a cold start, the first response is slow (~5-10s) while the active experts get paged in. Subsequent requests are much faster because the hot pages stay cached. Don't kill Ollama between sessions if you can avoid it.
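You can make the warm-model pattern explicit with the keep_alive field on Ollama's native API: a request with an empty prompt preloads the model (or resets its eviction timer) without generating anything. The sketch below only builds the request body; POST it to http://localhost:11434/api/generate to use it:

```python
import json

def warm_request(model: str = "qwen3.6:35b-a3b", keep_alive: str = "30m") -> bytes:
    """Body for Ollama's /api/generate that keeps the model resident after the call."""
    return json.dumps({
        "model": model,
        "prompt": "",              # empty prompt: load the model, generate nothing
        "keep_alive": keep_alive,  # how long weights stay in memory ("-1" = forever)
    }).encode()

body = json.loads(warm_request())
print(body["model"], "kept warm for", body["keep_alive"])
```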

Useful links