Local Deepseek R1 on a Web UI: Open Source

In June 2023, when Rajan Anandan, a venture capitalist and former head of Google India, asked Sam Altman about the possibility of building something similar to OpenAI or ChatGPT, the reply was that it was "hopeless" and "impossible".

Cut to 2025: Deepseek happened, and today people run their own LLMs on modest hardware. I can humbly attest that Deepseek R1 (in its distilled variants) runs on ordinary desktops, with GPUs ranging from a GTX 970 to an RTX 4070 Super.

Why Run LLMs Locally?

  • Data Privacy and Security: For some organizations, the sensitivity of the data they're working with necessitates keeping it on-premises. Local hosting reduces the risk of data breaches and unauthorized access since data remains within the organization’s control.

  • Customization and Control: Running LLMs locally allows for greater control over model configuration and fine-tuning. Organizations can tailor the models to better meet their specific needs and integrate them more seamlessly with their existing systems.

  • Latency and Performance: Locally hosted models can offer better performance and lower latency, especially in environments with limited or unreliable internet connectivity. This can be critical for applications requiring real-time processing.

  • Compliance and Regulations: Certain industries are subject to strict regulatory requirements regarding data handling. Hosting LLMs locally can help ensure compliance with these regulations.

  • Cost Considerations: While the upfront cost of running LLMs locally may be higher due to hardware and maintenance, in the long run it can be more cost-effective than paying for cloud-based services, especially for large-scale, continuous use.

Some Thoughts

Considering the above, securely and locally hosted LLMs, deployed at an appropriate scale, can be invaluable for government entities, multinational companies, communities, and NGOs that share these priorities. The overall gains in productivity and efficiency would only accelerate the adoption of LLMs across the board.

My Assist

My Assist is a simple web-based chat interface powered by the Deepseek R1 model served locally through Ollama. It provides an interactive, real-time chat experience with the assistant, using Streamlit for the frontend.

Features

  • Completely Local: The solution is completely local; an internet connection is only needed for the first-time install and updating models.

  • Admin Role Not Needed: Admin rights are not needed to install or configure.

  • Real-time Chat Interface: Allows for dynamic interaction with the assistant.

  • Persistent Chat History: Maintains chat history between sessions.

  • Save & Load Chat History: Automatically saves chat history to a SQLite DB for persistent conversations.

  • Backup and Restore: Complete sessions and chat history can be backed up by downloading the backup file (for example, to cloud storage) and restored later from the saved file. A minimal sketch of the underlying persistence follows this list.
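
To make the persistence and backup features concrete, here is a minimal sketch of how chat turns can be saved to and loaded from a SQLite file. The database path, table name, and function names are illustrative assumptions, not necessarily what the repository uses.

    python
    import sqlite3

    DB_PATH = "chat_history.db"  # illustrative file name; the repo may use another

    def init_db():
        # Create the messages table once; SQLite needs no server or configuration.
        with sqlite3.connect(DB_PATH) as conn:
            conn.execute(
                """CREATE TABLE IF NOT EXISTS messages (
                       session_id TEXT,
                       role       TEXT,   -- 'user' or 'assistant'
                       content    TEXT,
                       created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
                   )"""
            )

    def save_message(session_id, role, content):
        # Append one chat turn; the .db file on disk doubles as the backup artifact.
        with sqlite3.connect(DB_PATH) as conn:
            conn.execute(
                "INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)",
                (session_id, role, content),
            )

    def load_history(session_id):
        # Reload a session's messages in order, e.g. after the app restarts.
        with sqlite3.connect(DB_PATH) as conn:
            rows = conn.execute(
                "SELECT role, content FROM messages WHERE session_id = ? ORDER BY rowid",
                (session_id,),
            ).fetchall()
        return [{"role": r, "content": c} for r, c in rows]

With a layout like this, backup and restore amounts to downloading a copy of the database file and putting it back in place before the app starts.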

Solution Stack

  • Ollama: A tool designed to simplify the process of running open-source large language models (LLMs) directly on your computer (a minimal usage sketch follows this list).

  • Deepseek R1: A large language model (LLM) developed by Deepseek-AI that uses reinforcement learning to enhance reasoning capabilities through a multi-stage training process from a Deepseek-V3-Base foundation.

  • PyArrow: The Python bindings for Apache Arrow, an in-memory columnar data format used for fast, efficient processing of large datasets. It is a dependency of Streamlit, which is why it shows up in this stack.

  • SQLite: An embedded, serverless relational database engine. It runs in-process as an open-source library, requires zero configuration, and needs no separate installation (Python ships with the built-in sqlite3 module).
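
To illustrate what this stack does under the hood, the sketch below uses the ollama Python package to stream a reply from the locally pulled deepseek-r1 model. The prompt is just a placeholder, and this is not code from the repository.

    python
    import ollama  # talks to the local Ollama server (default: http://localhost:11434)

    # Stream a single reply from the locally pulled deepseek-r1 model.
    stream = ollama.chat(
        model="deepseek-r1",
        messages=[{"role": "user", "content": "Explain SQLite in one sentence."}],
        stream=True,
    )

    for chunk in stream:
        # Each chunk carries a fragment of the assistant's message.
        print(chunk["message"]["content"], end="", flush=True)

Because Ollama serves the model over a local endpoint, nothing in this exchange leaves the machine.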

Source: https://github.com/imnoor/my-assist

Setup and Installation

  1. Install Ollama

  2. Pull Deepseek R1 Model:

    bash
    ollama pull deepseek-r1
    
  3. Clone Repo:

    bash
    git clone https://github.com/imnoor/my-assist.git
    cd my-assist
    
  4. Install Required Packages:

    bash
    pip install streamlit ollama
    

    If you get build errors from the pyarrow package, try installing pyarrow as a prebuilt binary:

    bash
    pip install --only-binary=:all: pyarrow
    

    If that still fails, another option is to download a prebuilt pyarrow wheel manually and install it from the local file.

  5. Run the App:

    bash
    # On Windows, use the helper script
    run.bat
    # or run Streamlit directly
    streamlit run app.py
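
For orientation, a stripped-down app.py in the same spirit might look like the sketch below. It is an assumed, illustrative version that wires Streamlit's chat widgets to the local deepseek-r1 model; it is not the actual file from the repository, and it omits the SQLite persistence shown earlier.

    python
    import ollama
    import streamlit as st

    st.title("My Assist - Local Deepseek R1")

    # Keep the running conversation in Streamlit's session state.
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Replay earlier turns so the page shows the whole conversation.
    for msg in st.session_state.messages:
        with st.chat_message(msg["role"]):
            st.markdown(msg["content"])

    if prompt := st.chat_input("Ask me anything"):
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.markdown(prompt)

        with st.chat_message("assistant"):
            # Stream the reply from the local deepseek-r1 model token by token.
            stream = ollama.chat(
                model="deepseek-r1",
                messages=st.session_state.messages,
                stream=True,
            )
            reply = st.write_stream(
                chunk["message"]["content"] for chunk in stream
            )
        st.session_state.messages.append({"role": "assistant", "content": reply})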
    

Inspired By and Thanks to:

