Run Deepseek-R1 Locally with Ollama & Open WebUI in Docker — Access It Anywhere!


Introduction:

The groundbreaking Chinese reasoning model, Deepseek-R1, has stunned the world with its exceptional benchmark performance. While its capabilities are exciting, some users may feel uneasy about using it through the official site due to privacy concerns.

The good news? Since Deepseek-R1 is open source, you can run it locally on your computer — and better still, make it accessible from anywhere in the world. Yes, it’s possible, and here’s how to do it!


How to Get Started:

We’ll use Ollama to run the model locally, Open WebUI for a clean and user-friendly interface, and ngrok to expose your local instance globally. Let’s break it down step by step.


Step 1: Setting Up Ollama & Deepseek

  1. Download Ollama:
    Grab it from the official website: ollama.com/download.

  2. Search and Download Deepseek-R1:
    Visit ollama.com/search and find the Deepseek-R1 model.

    • If you’re working with limited resources, you can use the 1.5B parameter version. Run the following command to download and start it locally:

        ollama run deepseek-r1:1.5b
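
Once the model is downloaded, Ollama also serves a local HTTP API (on port 11434 by default). As a quick sanity check outside the interactive prompt, a curl request like this should stream back a response:

     curl http://localhost:11434/api/generate -d '{
       "model": "deepseek-r1:1.5b",
       "prompt": "Why is the sky blue?"
     }'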
      

Step 2: Setting Up Open WebUI

Now that Deepseek-R1 is running in your terminal, let’s set up a polished user interface with Open WebUI.

  1. Visit the Open WebUI GitHub:
    Find setup instructions on their official page: github.com/open-webui/open-webui.

  2. Run the Docker Image:
    Use this command to pull and run the Open WebUI Docker image (the flags are explained in the note after this list):

     docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
  3. Access Open WebUI:
    Open your browser and navigate to http://localhost:3000/. Register and log in, then select deepseek-r1:1.5b from the model dropdown. That’s it! Your model is now live on a sleek web interface.
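
A quick note on the flags: --add-host=host.docker.internal:host-gateway lets the container reach services on your host machine, which is how Open WebUI finds the Ollama server listening on port 11434. If Open WebUI doesn’t detect Ollama automatically, you can point it there explicitly with the OLLAMA_BASE_URL environment variable (a minimal variant of the same command, assuming Ollama’s default port):

     docker run -d -p 3000:8080 \
       --add-host=host.docker.internal:host-gateway \
       -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
       -v open-webui:/app/backend/data \
       --name open-webui --restart always \
       ghcr.io/open-webui/open-webui:main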


Step 3: Make It Accessible Anywhere with ngrok

To access Open WebUI from any device, we’ll use ngrok to expose your local server.

  1. Download ngrok:
    For macOS users, download it here: ngrok.com/downloads/mac-os.

  2. Set Up Your Auth Token:
    Use the following command to configure ngrok:

     ngrok config add-authtoken <your_token>
    
  3. Expose Your Port:
    Run this command to expose port 3000 (where Open WebUI is running):

     ngrok http 3000
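
One caveat: anyone who discovers the tunnel URL can reach your Open WebUI login page. Open WebUI’s own registration/login is the first line of defense, but recent ngrok (v3) releases can also put HTTP basic auth in front of the tunnel as an extra gate, roughly like this:

     ngrok http 3000 --basic-auth "user:a-strong-password"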
    

(Screenshot: the ngrok terminal display showing your forwarding URL.)

Access from Anywhere:
Copy the provided URL (e.g., https://my-localhost123.ngrok-free.app) and open it on any device. Your locally hosted Deepseek-R1 instance is now accessible globally!


Extra Steps:

Using a Docker Compose YML File:

If you’d rather run both Ollama and Open WebUI as containers, create a docker-compose.yml file with the following configuration:

version: "3.8"
services:
  ollama:
    container_name: ollama
    image: ollama/ollama
    restart: always
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

  open-webui:
    container_name: open-webui
    image: ghcr.io/open-webui/open-webui:main
    restart: always
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data # persist accounts and chat history, matching the docker run command above
    depends_on:
      - ollama

volumes:
  ollama_data:
  open-webui:

Here, Open WebUI reaches Ollama over Docker’s internal network via the service name (http://ollama:11434), so the host-gateway mapping from Step 2 isn’t needed. Run the following commands:

docker-compose up -d
docker exec -it ollama bash

# Run the commands below inside the container's shell:
ollama list                    # check which models are installed
ollama rm deepseek-r1:1.5b     # optional: remove a stale or partial copy first
ollama run deepseek-r1:1.5b    # pull (if needed) and start the model
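
If you’d rather skip the interactive shell, you can also pull the model in one shot from the host (using the container name from the compose file above):

docker exec ollama ollama pull deepseek-r1:1.5b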

(Screenshot: Open WebUI running at localhost:3000.)

Visit localhost:3000 and log in with your Open WebUI username and password.


Conclusion

And there you have it! A simple guide to running Deepseek-R1 privately on your computer while making it accessible from anywhere in the world.

Got feedback? Let me know — and happy experimenting! 😊

(P.S. Check out the attached image of Deepseek running on my laptop via localhost!)

(P.S. Check out the attached image of Deepseek running on my laptop via ngrok!)

(P.S. Check out the attached image of Deepseek running on my mobile!)


Thank You! 😊

Follow me on YouTube!