Ollama makes running large language models locally on your own hardware remarkably straightforward, and its Windows support has matured significantly. You can run, create, and share models such as Llama 3.2, Mistral, Gemma, DeepSeek, gpt-oss, Qwen, and Kimi-K2 without sending data to a cloud service. Ollama supports macOS, Linux, and Windows, with first-class Linux support, which is why it is the go-to choice for server deployment. Model pages publish evaluation results (measured at full precision, float32, across a large collection of datasets and metrics; results marked IT are for instruction-tuned models); for example, the effective-4B Gemma 3n variant runs with `ollama run gemma3n:e4b`.

On AMD hardware, Ollama leverages the AMD ROCm library, which does not support all AMD GPUs: cards with GFX versions earlier than the AMD Radeon RX 6800 XT (RDNA 2 architecture, GFX1030) are not officially supported and fall back to the CPU. To run Ollama using Docker with an AMD GPU, use the rocm image tag.

In my experience using llama.cpp (which Ollama uses internally) on a Strix Halo, whether ROCm or Vulkan performs better really depends on the model, and the difference is usually within 10%.
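The Docker route can be sketched as follows. This assumes Docker is installed and your user can access the GPU device nodes; the image tag and device flags follow Ollama's published Docker instructions for AMD GPUs:

```shell
# Run Ollama's ROCm image, passing through the AMD GPU device nodes.
# /dev/kfd and /dev/dri are the kernel interfaces ROCm uses.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm
```

Once the container is up, `docker exec -it ollama ollama run llama3.2` pulls and runs a model inside it, with the API exposed on port 11434.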
This guide covers running these models on various AMD hardware configurations, with step-by-step installation of Ollama on both Linux and Windows with Radeon GPUs (Sep 26, 2024). Whether you want to experiment with Llama 3.2, Mistral, or Gemma without sending data to a cloud service, the steps apply to Windows 10 and 11 as well as Ubuntu, Debian, and Fedora/RHEL systems.

On Linux, Ollama requires the AMD ROCm v7 driver; a recent release (Oct 22, 2025) updates the Linux build to ROCm v7. You can install or upgrade the driver using the amdgpu-install utility described in AMD's ROCm documentation. On AMD hardware, Ollama's ROCm support is generally more mature than LM Studio's, so if you're running Linux with an AMD GPU, Ollama is the more reliable choice. I have a gfx1201 card and am hoping the new HIP support for it in this release delivers a performance improvement; please give the RC a try and report any problems you run into.

Benchmarks on a Strix Halo (Jan 24, 2026) summarize the backend trade-offs:
- Small contexts (≤8k): Vulkan wins on both speed and efficiency.
- Large contexts (≥16k): ROCm catches up in speed, but at roughly 2x the power draw.
- CPU scales predictably but is about 2x slower than the GPU; if power and heat matter more than speed, CPU inference is surprisingly viable at around 25 t/s.