Configuring Server Connections

Hollama connects to external Large Language Model (LLM) servers to process your requests. You can configure and manage these connections from the Settings page.

Connection Types

Hollama supports three types of server connections, as defined in src/lib/connections.ts:

export enum ConnectionType {
    Ollama = 'ollama',
    OpenAI = 'openai',
    OpenAICompatible = 'openai-compatible'
}
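
The exact record Hollama stores for each connection lives in its source; the interface below is a hypothetical sketch of the fields discussed in the sections that follow, not Hollama's actual type:

// Illustrative only: field names here are assumptions, not Hollama's real schema.
interface Server {
    connectionType: ConnectionType;
    baseUrl: string;       // e.g. 'http://localhost:11434'
    apiKey?: string;       // OpenAI and compatible servers only
    modelFilter?: string;  // optional prefix filter for model names
    label?: string;        // custom display name
    enabled: boolean;      // "Use models from this server"
}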

1. Ollama

For connecting to a standard Ollama server.

  • Base URL: The address of your Ollama server. Defaults to http://localhost:11434.
  • Model Names Filter: An optional prefix used to narrow the list of models; for example, llama3 will only show models whose names start with llama3 (see the sketch after this list).
  • Label: A custom name to identify this connection in the model selection dropdown.
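
The filter is a plain prefix match on the model name. A minimal TypeScript sketch of the idea (the function and model shape are illustrative, not Hollama's actual code):

// Keep only models whose names start with the configured prefix.
function filterModels(models: { name: string }[], prefix?: string) {
    if (!prefix) return models; // no filter configured: show everything
    return models.filter((model) => model.name.startsWith(prefix));
}

filterModels([{ name: 'llama3:8b' }, { name: 'phi3:mini' }], 'llama3');
// → [{ name: 'llama3:8b' }]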

2. OpenAI: Official API

For connecting to the official OpenAI API.

  • Base URL: Pre-filled with https://api.openai.com/v1; this generally does not need to be changed.
  • API Key: Your secret API key from your OpenAI account (see the request sketch after this list).
  • Model Names Filter: Defaults to gpt to show only GPT models.
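
A quick way to check that a key works is to list the models it can access. The request below uses OpenAI's public GET /v1/models endpoint with a Bearer token; the apiKey value is a placeholder for your own key:

// List the models available to this API key (GET /v1/models).
const apiKey = 'sk-...'; // placeholder: use your own secret key
const response = await fetch('https://api.openai.com/v1/models', {
    headers: { Authorization: `Bearer ${apiKey}` }
});
const { data } = await response.json();
console.log(data.map((model: { id: string }) => model.id));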

3. OpenAI: Compatible Servers

For connecting to any server that implements an OpenAI-compatible API, such as llama.cpp's server (see the example after this list).

  • Base URL: The URL of your compatible server's API endpoint (e.g., http://localhost:8080/v1).
  • API Key: An API key, if your server requires one.
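
For example, recent llama.cpp builds ship a llama-server binary that serves an OpenAI-compatible API (the model path below is a placeholder):

# Start llama.cpp's server; it exposes OpenAI-compatible endpoints under /v1
llama-server -m ./models/your-model.gguf --port 8080

The Base URL in Hollama would then be http://localhost:8080/v1.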

Managing Connections

  • Adding a Connection: Select a connection type from the dropdown and click Add connection.
  • Verifying: After adding or modifying a connection, click Verify. Hollama will attempt to fetch the list of models to confirm that the connection is working (see the sketch after this list). A successful verification automatically enables the server.
  • Enabling/Disabling: Use the "Use models from this server" checkbox to toggle whether models from a server appear in the model selection dropdown.
  • Deleting: Click the trash can icon to remove a server configuration.
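
Under the hood, verification amounts to requesting the server's model list and checking for a well-formed response. A rough TypeScript sketch of the idea for an Ollama connection, using Ollama's GET /api/tags endpoint (illustrative; Hollama's actual implementation differs):

// Verify an Ollama connection by fetching its model list (GET /api/tags).
async function verifyOllama(baseUrl: string): Promise<boolean> {
    try {
        const response = await fetch(`${baseUrl}/api/tags`);
        if (!response.ok) return false;
        const { models } = await response.json();
        return Array.isArray(models); // a model list means the server answered
    } catch {
        return false; // network error, CORS rejection, wrong URL, ...
    }
}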

Connecting to a Remote Ollama Server

If your Hollama instance (e.g., the live demo or a Docker container) is on a different machine from your Ollama server, you must configure Ollama to accept requests from Hollama's origin.

When you start the Ollama server, set the OLLAMA_ORIGINS environment variable:

# Replace with the URL of your Hollama instance
OLLAMA_ORIGINS=https://hollama.fernando.is ollama serve
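
OLLAMA_ORIGINS accepts a comma-separated list if you need to allow more than one origin. If you run Ollama in Docker (assuming the official ollama/ollama image), pass the variable with -e:

# Same idea with Docker; replace the origin with your Hollama URL
docker run -d -e OLLAMA_ORIGINS=https://hollama.fernando.is \
    -p 11434:11434 ollama/ollama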

By default, Ollama rejects cross-origin browser requests as a security measure against unauthorized access; setting OLLAMA_ORIGINS adds Hollama's origin to the allowlist so its requests are accepted.