
Running DeepSeek locally

 

DeepSeek took the world by surprise, not just with its performance but also with its accessibility. Making the DeepSeek models open source is a great step toward involving more people in their development and making them accessible to developers.

In this post I will show how to run it locally.

Downsides

 

While you can run it locally, you will still need a powerful PC, and you may not be able to run the models with higher parameter counts.

It may slow down your machine, respond slowly itself, and make the machine unusable with other applications. So most of the issues you will face are on the hardware side.

How to run it

 

First you will need to install Ollama.

Installation instructions are in the link.
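For reference, on Linux the one-line installer published on ollama.com looks like this (macOS and Windows use a downloadable app instead):

```shell
# Download and run the official Ollama install script (Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Verify the install worked
ollama --version
```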

Then run in your command line

 

ollama run deepseek-r1:14b

The 14b at the end indicates the number of parameters. Depending on how well your hardware performs, you can decide which version is suitable.
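To help pick a version, here is a rough back-of-the-envelope memory estimate. The figures are a heuristic assumption on my part (roughly half a byte per parameter for a 4-bit quantized model, plus some runtime overhead), not official numbers; real usage varies by quantization and context length:

```python
# Rough memory estimate for running a quantized model locally.
# Heuristic: ~0.5 bytes per parameter at 4-bit quantization,
# plus ~20% overhead for the KV cache and runtime.

def estimated_memory_gb(params_billions: float,
                        bytes_per_param: float = 0.5,
                        overhead: float = 1.2) -> float:
    return params_billions * bytes_per_param * overhead

# The deepseek-r1 tags Ollama offers range from 1.5b up to 671b
for size in (1.5, 7, 8, 14, 32, 70):
    print(f"deepseek-r1:{size:g}b -> ~{estimated_memory_gb(size):.1f} GB")
```

By this estimate the 14b model wants roughly 8 GB of free memory, which is why smaller tags like 7b or 1.5b exist for weaker machines.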

You can either use the command-line chat or make HTTP requests to your local Ollama server.

 

curl http://localhost:11434/api/generate -d '{ "model": "deepseek-r1:14b", "prompt": "What year were you born ?" }'
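The same request can be made from Python. Ollama's /api/generate endpoint streams its answer back as one JSON object per line, each carrying a "response" chunk, with "done" set on the last one. The helper names below (collect_response, generate) are my own sketch, not part of any library:

```python
import json
import urllib.request

# Default local Ollama endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"

def collect_response(ndjson_lines):
    """Join the 'response' chunks from Ollama's streaming NDJSON output."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

def generate(prompt, model="deepseek-r1:14b"):
    """Send a prompt to the local Ollama server and return the full answer."""
    payload = json.dumps({"model": model, "prompt": prompt}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return collect_response(resp)

# Example of the kind of streamed lines the server sends back:
sample = [
    '{"response": "Hello", "done": false}',
    '{"response": " world", "done": true}',
]
print(collect_response(sample))  # Hello world
```

Calling generate("What year were you born?") mirrors the curl command above, assuming the server is running on the default port 11434.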

Enjoy!