
Self-hosted Embedder installation


Last updated 4 months ago

Prerequisites

  • A system running Ubuntu with Docker and Docker Compose installed, and with the latest NVIDIA driver for the GPUs in the system.

  • Install the NVIDIA Container Toolkit.

  • Install git-lfs with the command: sudo apt-get install git-lfs

  • The embedder service can also be run as multiple instances if necessary.

  • Copy the provided tar file and the embedder-wrapper folder onto the server where the embedder service will be hosted, then run this command:

  • docker load --input embedder-service.tar

To run multiple instances, edit the docker-compose file: duplicate the service entry under ‘services’ once for each instance you want to run. Give each copy a unique service name and container name, and change the exposed ports so the instances can run side by side.
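The duplication described above might look like the following sketch. The service names, image tag, and port numbers are illustrative assumptions; adapt them to the docker-compose file shipped with your deployment.

```yaml
# Sketch of a docker-compose.yml running two embedder instances.
# Names, image tag, and ports are assumptions, not the shipped values.
services:
  embedder-1:
    image: embedder-service:latest   # image loaded from embedder-service.tar
    container_name: embedder-1
    ports:
      - "8001:8000"                  # host port must differ per instance
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  embedder-2:
    image: embedder-service:latest
    container_name: embedder-2
    ports:
      - "8002:8000"                  # second instance on its own host port
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Each copy keeps its own container name and host port, which is what lets Docker Compose start them side by side.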

The model will be downloaded at the first start of the container.

Start the instances

Go back to the root folder and execute this command:

  • docker compose up -d

The embedder service should start without errors.

Load balancers

If you are running multiple instances of LLMs and/or embedders, you will need separate load balancers to handle the concurrent load. We will provide you with further instructions for installing load balancers during the onboarding period.
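As a rough illustration of what such a load balancer does, the nginx fragment below fans requests out across two embedder instances. The hostnames, ports, and timeout are assumptions for the sketch; the actual configuration is supplied during onboarding.

```nginx
# Illustrative nginx config balancing two embedder instances.
# Addresses and ports are assumptions, not the onboarding-provided values.
upstream embedders {
    least_conn;               # send each request to the least busy instance
    server 127.0.0.1:8001;    # embedder-1
    server 127.0.0.1:8002;    # embedder-2
}

server {
    listen 8000;              # single endpoint clients talk to
    location / {
        proxy_pass http://embedders;
        proxy_read_timeout 300s;   # embedding large batches can take a while
    }
}
```

Clients then target the single balancer port instead of the individual instances.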
