
Self-hosted LLM installation

Installation guide for self-hosted LLM setup



Prerequisites

  • A system running Ubuntu with docker and docker-compose installed (as described in the Installation guide), and with the latest Nvidia driver for the GPUs in the system.

  • Install the Nvidia Container Toolkit.

  • Install git-lfs with the command: sudo apt-get install git-lfs
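
Before moving on, you can verify that the prerequisites are in place. A minimal check, assuming a standard setup (the CUDA image tag below is only an example):

    # Confirm Docker, Compose, the Nvidia driver, and git-lfs are available
    docker --version
    docker-compose --version
    nvidia-smi
    git lfs version

    # Optional: confirm the Nvidia Container Toolkit exposes the GPUs to containers
    docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi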

Copy the provided text-generation-webui folder to a server with one or more Nvidia GPUs.
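
For example, over SSH (the user, host, and destination path below are placeholders for your own environment):

    rsync -avz ./text-generation-webui/ user@gpu-server:/opt/text-generation-webui/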

With multiple GPUs

You can modify the compose file to add more instances if you have more GPUs. To do this, copy the block starting at ‘text-generation-webui-docker-1:’ once for each additional GPU in the system. Change the ‘device_ids:’ entry to the corresponding GPU id, which you can look up by running ‘nvidia-smi’ in the server’s terminal.

Change the service name and the container name so they differ from the first instance (increment the number).

Finally, expose a different host port for each instance, as in the sketch below.
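
As an illustration, a two-GPU compose file might look like the following. This is a sketch only: the image name, host ports, and any volumes come from the provided compose file, so treat the values here as placeholders.

    services:
      text-generation-webui-docker-1:
        image: text-generation-webui        # placeholder; use the image from the provided compose file
        container_name: text-generation-webui-docker-1
        ports:
          - "5000:5000"                     # host port for the first instance
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  device_ids: ["0"]         # GPU id as reported by nvidia-smi
                  capabilities: [gpu]

      text-generation-webui-docker-2:
        image: text-generation-webui        # same image, second instance
        container_name: text-generation-webui-docker-2
        ports:
          - "5001:5000"                     # a different host port for the second instance
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  device_ids: ["1"]         # the second GPU
                  capabilities: [gpu]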

Download the model

We will provide you with the most up-to-date LLM model during onboarding, but you can also use a model of your choice.
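
Models are typically distributed as Git LFS repositories, which is why git-lfs is a prerequisite. A hedged example of pulling one (the repository URL and target directory are placeholders, not Kodesage-specific values):

    # One-time Git LFS setup, then clone the model repository
    git lfs install
    git clone https://huggingface.co/<org>/<model> models/<model>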
