Self-hosted LLM installation
Installation guide for self-hosted LLM setup
Prerequisites
A system running Ubuntu with docker and docker-compose installed (as mentioned before), and with the latest Nvidia driver for the GPUs in the system.
Install git-lfs with the command: sudo apt-get install git-lfs
Copy the provided text-generation-webui folder to a server with one or more Nvidia GPUs.
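To confirm that the prerequisites are in place, you can run a few quick checks. This is only a sketch; if you use the standalone Compose binary, run docker-compose --version instead of the plugin form shown here.

```bash
# Quick sanity checks for the prerequisites
nvidia-smi               # Nvidia driver loaded; also lists the GPU ids used later
docker --version         # Docker installed
docker compose version   # Compose installed (or: docker-compose --version)
git lfs version          # git-lfs installed
```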
With multiple GPUs
If you have more GPUs, you can modify the compose file to add more instances. To do this, copy the block starting at ‘text-generation-webui-docker-1:’ as many times as there are GPUs in the system. Change the ‘device_ids:’ entry of each copy to the corresponding GPU id, which you can check by executing ‘nvidia-smi’ in the system’s terminal.
Change the service name and the container name so they differ from the first instance (for example, increment the number).
Finally, change the exposed port so that each instance uses a different one than the first; see the sketch after this list.
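For example, a second instance pinned to GPU 1 could look like the sketch below. The image name, port numbers, and any volumes are placeholders, so copy those from the first instance in your provided compose file.

```yaml
  # Second instance, added alongside 'text-generation-webui-docker-1:'
  text-generation-webui-docker-2:
    image: atinoda/text-generation-webui:default   # placeholder: reuse the image from the first instance
    container_name: text-generation-webui-docker-2
    ports:
      - "7861:7860"                                # expose a different host port than the first instance
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["1"]                    # GPU id as reported by nvidia-smi
              capabilities: [gpu]
```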
Download the model
We will provide you with the most up-to-date LLM model during onboarding, but you can also use a model of your choice.
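As a sketch of downloading a model yourself with git-lfs (the models path and the Hugging Face repository name below are placeholders, not specific recommendations):

```bash
# Sketch only: adjust the models path to where your setup stores models
cd text-generation-webui/models
git lfs install
# Clone a model repository of your choice from Hugging Face
git clone https://huggingface.co/org/model-name
```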