PenpAI feat. Llamafile
This is a container definition meant to work with GroundSeg's PenpAI app, which lets you host and communicate with local LLMs from your Urbit ship.
Llamafiles are monolithic, cross-platform executables that bundle an LLM model together with its runtime. They expose a subset of the OpenAI API, which greatly reduces the complexity of self-hosting.
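For example, once the llamafile server is running you can talk to it with any OpenAI-style client. The snippet below is a minimal sketch that assumes the server is reachable on localhost port 8080 and serves the usual chat-completions path; adjust the host, port, and model name to match your setup.

```sh
# Sketch only: assumes the llamafile server listens on localhost:8080
# and exposes the OpenAI-compatible chat completions endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "local-model",
        "messages": [
          {"role": "user", "content": "Hello from my Urbit ship!"}
        ]
      }'
```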
The start script checks the $MODEL_NAME environment variable to select which model to use. This currently only works on x86_64.
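As an illustration, a start script along these lines could pick the model from $MODEL_NAME. This is a hedged sketch, not the container's actual script: the model directory, default model name, and port are placeholders, and the llamafile flags shown are the standard server options.

```sh
#!/bin/sh
# Illustrative sketch only -- not the container's actual start script.
# Selects a llamafile based on MODEL_NAME and serves it over the
# OpenAI-compatible HTTP API.

MODEL_NAME="${MODEL_NAME:-mistral-7b}"        # placeholder default if the env var is unset
MODEL_FILE="/models/${MODEL_NAME}.llamafile"  # placeholder model directory

if [ ! -f "$MODEL_FILE" ]; then
  echo "Model file $MODEL_FILE not found" >&2
  exit 1
fi

chmod +x "$MODEL_FILE"
# --server starts llamafile's built-in HTTP server; --host 0.0.0.0 exposes it
# to the container network; the port here is a placeholder.
exec "$MODEL_FILE" --server --host 0.0.0.0 --port 8080 --nobrowser
```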