# Deploying Ollama Models with Docker
## Technical Background

In earlier posts I covered local deployment of the DeepSeek large model and a local Docker deployment of OpenClaw. In those posts, however, Ollama itself was installed directly on the bare-metal host for convenience. On reflection, that is not ideal, so this article supplements them with a method for deploying Ollama models in a Docker environment on Ubuntu Linux.

## Docker Deployment

First, check whether nvidia-container-toolkit is installed on the system:

```bash
$ dpkg -l | grep nvidia-container-toolkit
ii  nvidia-container-toolkit        1.13.5-1    amd64    NVIDIA Container toolkit
ii  nvidia-container-toolkit-base   1.13.5-1    amd64    NVIDIA Container Toolkit Base
```

If it is not installed, refer to my earlier blog post on the topic for installation instructions. Next, pull the Ollama image from DockerHub (configure a Docker registry mirror first if necessary):

```bash
$ docker pull ollama/ollama
Using default tag: latest
latest: Pulling from ollama/ollama
817807f3c64e: Pull complete
ae25ca5ada6c: Pull complete
2608ea1d5119: Pull complete
84d58e6813b6: Pull complete
Digest: sha256:
Status: Downloaded newer image for ollama/ollama:latest
docker.io/ollama/ollama:latest
```

Once the pull completes, the Ollama image is visible locally:

```bash
$ docker images
REPOSITORY      TAG      IMAGE ID   CREATED       SIZE
ollama/ollama   latest   bc1c       3 hours ago   6.01GB
```

Now start an Ollama container with a GPU-enabled environment:

```bash
$ docker run -d --name my-ollama.docker --runtime=nvidia -p xxx:11434 -v ~/.ollama:/root/.ollama --gpus all ollama/ollama
7ae5
```

Two parameters here are important: `--runtime=nvidia` and `--gpus all`. Both must be set, otherwise the container cannot use GPU compute. Which specific GPUs to expose depends on your own environment. The container then appears in the Docker container list (`xxx` is the local listening port you configured):

```bash
$ docker ps -a
CONTAINER ID   IMAGE           COMMAND               CREATED         STATUS         PORTS                                         NAMES
7ae5           ollama/ollama   "/bin/ollama serve"   7 seconds ago   Up 6 seconds   0.0.0.0:xxx->11434/tcp, [::]:xxx->11434/tcp   my-ollama.docker
```

With the container running, you can pull remote models directly, for example a qwen3.5 model:

```bash
$ docker exec -it my-ollama.docker ollama pull qwen3.5:latest
$ docker exec -it my-ollama.docker ollama list
NAME             ID     SIZE     MODIFIED
qwen3.5:latest   96fa   6.6 GB   18 hours ago
```

After the pull completes, you can inspect the model's details:

```bash
$ docker exec -it my-ollama.docker ollama show qwen3.5:latest
  Model
    architecture        qwen35
    parameters          9.7B
    context length      262144
    embedding length    4096
    quantization        Q4_K_M
    requires            0.17.1

  Capabilities
    completion    vision    tools    thinking

  Parameters
    presence_penalty    1.5
    temperature         1
    top_k               20
    top_p               0.95

  License
    Apache License
    Version 2.0, January 2004
    ...
```

As shown, the downloaded model uses Q4_K_M quantization; if needed, search the model library yourself to find a variant better suited to your hardware.
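As a rough aside on what that quantization label implies: Q4_K_M averages roughly 4.5 bits per weight (an approximate figure of my own, not something `ollama show` reports), so the size of the quantized weights for a 9.7B-parameter model can be estimated in a few lines:

```python
def estimate_weight_memory_gb(n_params: float, bits_per_weight: float = 4.5) -> float:
    """Approximate size of the quantized weights alone, in GB (10^9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# ~9.7B parameters at an assumed ~4.5 bits/weight average for Q4_K_M
print(round(estimate_weight_memory_gb(9.7e9), 2))  # 5.46
```

This weights-only estimate (~5.5 GB) sits below the 6.6 GB that `ollama list` reports, which includes additional data beyond the raw weights, and well below runtime VRAM use, which grows further with the KV cache at long context lengths.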
Finally, confirm that the GPU can be used from inside the container:

```bash
$ docker exec -it my-ollama.docker nvidia-smi
Mon Mar 23 06:27:21 2026
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.274.02             Driver Version: 535.274.02   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
...
```

If the GPU shows up normally, the environment is fine; the Ollama image ships with the GPU driver support it needs.

## Ollama Model Testing

With the Ollama Docker environment configured, you can now test it locally:

```bash
$ curl http://localhost:xxx/api/generate -d '{
  "model": "qwen3.5:latest",
  "prompt": "who are you?",
  "stream": false
}'
{"model":"qwen3.5:latest","created_at":"2026-03-23T06:28:08.27341445Z","response":"I'm **Qwen3.5**, a large language model developed by Tongyi Lab. I'm here to help with tasks like answering questions, creating content, coding, analyzing data, and more. I support over 100 languages, handle long documents with full context, and can even visualize data or generate code. What would you like to do?","thinking":"Okay, the user is asking \"who are you?\" This is a straightforward question about my identity. I need to provide a clear and accurate response. Since I'm Qwen3.5, I should mention that I'm a large language model developed by Tongyi Lab. I should also highlight some of my capabilities like multi-language support, logical reasoning, and long-context understanding. Keep the response friendly and concise. Make sure to avoid technical jargon to keep it accessible.","done":true,"done_reason":"stop","context":[...],"total_duration":31831260725,"load_duration":28675863540,"prompt_eval_count":14,"prompt_eval_duration":49629384,"eval_count":174,"eval_duration":278832}
```
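The same test can be scripted without curl. Below is a minimal stdlib-only Python sketch of the call; the `localhost:11434` URL stands in for whatever host port you actually mapped, and the helper names are my own, not part of Ollama:

```python
import json
import urllib.request

def build_generate_payload(model: str, prompt: str, stream: bool = False) -> bytes:
    """Assemble the JSON body expected by Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()

def generate(base_url: str, model: str, prompt: str) -> str:
    """POST a non-streaming generate request and return the response text."""
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=build_generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the container above to be running; substitute your mapped port):
#   print(generate("http://localhost:11434", "qwen3.5:latest", "who are you?"))
```

With `"stream": false` the whole answer arrives in a single JSON object, which keeps the client simple; streaming responses would instead arrive as one JSON object per line.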
This successfully retrieved a dialogue response. While the model is running, you can also check GPU usage in the background:

```bash
$ docker exec -it my-ollama.docker ollama ps
NAME             ID     SIZE    PROCESSOR   CONTEXT   UNTIL
qwen3.5:latest   fa5f   23 GB   100% GPU    262144    3 minutes from now
```

If the model is not running purely on the GPU here, generation may be noticeably slow.

## Summary

Following up on the earlier article about deploying OpenClaw in Docker, this post adds a scheme for deploying Ollama models in Docker as well. This establishes a complete virtualized environment, keeping the operations of both Ollama and OpenClaw relatively controlled within it.

## Copyright Notice

This article was first published at: https://www.cnblogs.com/dechinphy/p/docker-ollama.html

Author ID: DechinPhy

More original articles: https://www.cnblogs.com/dechinphy/

Buy the author a coffee: https://www.cnblogs.com/dechinphy/gallery/image/379634.html

Reference link: https://blog.eimoon.com/p/run-ollama-in-docker-local-llms-simplified/
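One closing addendum: the timing fields in the `/api/generate` response (`total_duration`, `eval_duration`, and so on) are reported in nanoseconds, so decode throughput is easy to compute from `eval_count` and `eval_duration`. A small sketch over a canned response; the numbers here are made-up placeholders, not the values from the actual run above:

```python
import json

# Canned /api/generate response fragment with illustrative placeholder numbers.
raw = '{"done": true, "done_reason": "stop", "eval_count": 174, "eval_duration": 6000000000}'

data = json.loads(raw)
assert data["done"] and data["done_reason"] == "stop"

# eval_duration is in nanoseconds: tokens/sec = eval_count / (eval_duration / 1e9)
tps = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"{tps:.1f} tokens/s")  # 29.0 tokens/s
```

A low tokens-per-second figure combined with a `PROCESSOR` value below 100% GPU in `ollama ps` usually means part of the model spilled to CPU memory.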