DeepSeek R1 Distill On NVIDIA Jetson Nano (WebUI Test and Tutorial)

Teacherflix · 01/27/25
Timestamps:

00:00 - Intro
00:59 - Pre-Reqs
01:31 - Getting Started
04:03 - Loading DeepSeek
07:16 - First Test
10:23 - Python Test
13:19 - Llama3-1B Comparison
15:16 - FP16 DeepSeek
19:53 - Closing Thoughts

Explore the power of the DeepSeek R1 family of models running locally on the NVIDIA Jetson Nano! In this video, we dive into the newly released distilled DeepSeek models and demonstrate how to set them up with Open WebUI on the Jetson Nano. Despite its compact size, the DeepSeek R1 1.5B model delivers impressive performance with plenty of room to spare on the Jetson.
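Once Ollama is running on the Jetson Nano, you can exercise the distilled model from a script before (or instead of) going through Open WebUI. The minimal sketch below sends a single prompt to Ollama's local REST endpoint on its default port (11434); the model tag "deepseek-r1:1.5b" is an assumption and should match whatever tag you pulled.

```python
# Minimal sketch: query a locally running Ollama server from Python.
# Assumes Ollama is installed on the Jetson Nano and listening on its
# default port (11434), and that the distilled model was pulled under
# the tag "deepseek-r1:1.5b" (tag name is an assumption).
import requests

OLLAMA_URL = "http://localhost:11434"

def generate(model: str, prompt: str) -> str:
    """Send one prompt to Ollama's /api/generate endpoint and return the full reply."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,  # small boards can be slow on long generations
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(generate("deepseek-r1:1.5b",
                   "Explain what a distilled language model is in two sentences."))
```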

We’ll guide you through the installation process, followed by a series of tests, including a Python reasoning test. For context, we compare DeepSeek R1 against the small Llama 3.1 1B model, highlighting the differences in performance and capabilities. For those seeking even more power, we also show how to set up and test the FP16 version of the DeepSeek R1 Distilled 1.5B Qwen model, running smoothly in Open WebUI with an Ollama backend.
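If you want a rough, scripted version of the side-by-side comparison, the sketch below times both models on the same prompt using the eval_count and eval_duration fields that Ollama returns with each non-streamed response. The model tags are assumptions (substitute whichever DeepSeek R1 distill and small Llama tags you actually installed).

```python
# Rough side-by-side timing sketch against a local Ollama server.
# Model tags below are placeholders -- adjust them to the tags you pulled.
import requests

OLLAMA_URL = "http://localhost:11434"
PROMPT = "Write a Python function that returns the first n prime numbers."
MODELS = ["deepseek-r1:1.5b", "llama3.2:1b"]  # assumed tags

for model in MODELS:
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=600,
    ).json()
    # Ollama reports eval_count (tokens generated) and eval_duration (nanoseconds),
    # which gives a rough tokens-per-second figure for comparing the two models.
    tps = resp["eval_count"] / (resp["eval_duration"] / 1e9)
    print(f"{model}: {resp['eval_count']} tokens at ~{tps:.1f} tok/s")
```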

By the end of this video, you’ll have the knowledge and tools to set up and test a powerful yet efficient local reasoning model on your Jetson Nano.
