adding llm test with minipc
JAlcocerT committed Mar 17, 2024
1 parent a0a21c2 commit 0c811d4
Showing 4 changed files with 31 additions and 5 deletions.
4 changes: 3 additions & 1 deletion _posts/2023-04-17-rpi-wifi-ethernet-bridge.md
@@ -188,4 +188,6 @@ sudo reboot
## FAQ


Thanks also to [Novaspirit Tech](https://www.youtube.com/watch?v=qhe6KUw3D78)

* How to SelfHost your [VPN with Docker and Gluetun](https://fossengineer.com/gluetun-vpn-docker/)
32 changes: 28 additions & 4 deletions _posts/2024-03-15-minipc-vs-pi.md
@@ -46,7 +46,7 @@ sysbench cpu --threads=4 run #https://github.com/akopytov/sysbench#general-comma
```
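If you want to compare boards numerically, the useful figure in sysbench's output is the `events per second` line; a small sketch of extracting just that number with `awk` (the commented install line assumes a Debian-based distro):

```shell
# Run the same CPU benchmark, keeping only the throughput figure
# sudo apt-get install -y sysbench   # Debian/Ubuntu package name
sysbench cpu --threads=4 run | awk -F': *' '/events per second/ {print $2}'
```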


-![BMAX B4 - Sysbench Test](/img/minipc-vs-pis/sysbench_bmaxb4.JPG)
+![BMAX B4 - Sysbench Test](/img/minipc-vs-pis/sysbench_bmaxb4.png)
_BMAX B4 - Sysbench Test_


@@ -61,12 +61,12 @@ docker build -t pytripplanner .
It took ~45 seconds, instead of the 3600s and 1700s measured before.
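To reproduce the timing yourself, the build can be wrapped in a simple wall-clock measurement (a sketch; the image tag matches the command above):

```shell
# Measure the wall-clock duration of the image build
start=$(date +%s)
docker build -t pytripplanner .
echo "Build took $(( $(date +%s) - start ))s"
```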


-![BMAX B4 - Docker Build Test](/img/minipc-vs-pis/buildingtest.JPG)
+![BMAX B4 - Docker Build Test](/img/minipc-vs-pis/buildingtest.png)
_BMAX B4 - Docker Build Test_

And a max temperature of 64°C:

-![BMAX B4 - Temperature during Docker Build](/img/minipc-vs-pis/temperature_during_test.JPG)
+![BMAX B4 - Temperature during Docker Build](/img/minipc-vs-pis/temperature_during_test.png)
_BMAX B4 - Temperature during Docker Build_

And these are the temperatures [registered by NetData](https://fossengineer.com/selfhosting-netdata/)
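If you don't have NetData around, the same CPU temperatures can be read straight from sysfs (a sketch assuming the standard Linux `thermal_zone` interface; zone names vary per board):

```shell
# Print each thermal zone's name and temperature (sysfs reports millidegrees)
for z in /sys/class/thermal/thermal_zone*/; do
  printf '%s: %s°C\n' "$(cat "${z}type")" "$(( $(cat "${z}temp") / 1000 ))"
done
```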
@@ -92,4 +92,28 @@ That's why I decided to switch to [a lighter Linux Distribution](https://jalcoce
### Using a MiniPC as Free Home Cloud

* <https://fossengineer.com/selfhosting-filebrowser-docker/>
* <https://jalcocert.github.io/RPi/posts/selfhosting-with-docker/>


### How to use LLMs in a MiniPC

We can use [Ollama together with Docker](https://fossengineer.com/selfhosting-llms-ollama/) and try one of the small models on the CPU.

```sh
# Start Ollama in the background, persisting downloaded models in a named volume
docker run -d --name ollama -p 11434:11434 -v ollama_data:/root/.ollama ollama/ollama
#docker exec -it ollama ollama --version #I had 0.1.29

# Open a shell inside the container, then pull and chat with a small model
docker exec -it ollama /bin/bash
ollama run gemma:2b
```
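Since the container publishes port 11434, you can also query the model through Ollama's HTTP API instead of the interactive shell (the prompt here is just an example):

```shell
# One-shot generation through the REST API (no interactive session needed)
curl -s http://localhost:11434/api/generate \
  -d '{"model": "gemma:2b", "prompt": "Why is the sky blue?", "stream": false}'
```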

![BMAX B4 - Trying LLMs with a MiniPC](/img/minipc-vs-pis/minipc-gemma2b.png)
_BMAX B4 - Trying LLMs with a MiniPC_

The system was using 12 of its 16 GB of RAM (I'm running a couple of other containers) and the replies with the Gemma 2B model were pretty fast.


You can see that the Python question, whose answer was pretty detailed, took ~30s with a max temperature of ~70°C (fan at full speed).

![BMAX B4 - MiniPC Performance while LLM inference](/img/minipc-vs-pis/minipc_gemma_temps.png)
_BMAX B4 - MiniPC Performance while LLM inference_
Binary file added img/minipc-vs-pis/minipc-gemma2b.png
Binary file added img/minipc-vs-pis/minipc_gemma_temps.png
