Ollama and App Configuration

Ollama Host Configuration

Ensure that Ollama is installed on your local machine or reachable on your LAN. Then:

  1. Edit the systemd service file by running:

    sudo nano /etc/systemd/system/ollama.service
  2. Add the following environment variables in the [Service] section:

    Environment="OLLAMA_HOST=0.0.0.0"
    Environment="OLLAMA_ORIGINS=http://tauri.localhost"

    Note: The OLLAMA_HOST=0.0.0.0 setting is needed only if the Ollama server must be reachable from other machines on your LAN; omit it when everything runs on localhost.

    Note: The OLLAMA_ORIGINS=http://tauri.localhost setting is required only if you use the Windows app version of Agentic Signal; it tells Ollama to accept requests from that origin.

  3. Save the file, then reload and restart the service:

    sudo systemctl daemon-reload
    sudo systemctl restart ollama.service
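
To confirm the changes took effect, you can query the server's REST API. This is a sketch assuming the default port 11434; replace <host-ip> with your machine's LAN address. If OLLAMA_ORIGINS is set correctly, a request carrying that Origin header should come back with a matching Access-Control-Allow-Origin header.

# Check that the API responds (run from another LAN machine)
curl http://<host-ip>:11434/api/tags

# Check that the Windows-app origin is accepted; the response headers
# should include: Access-Control-Allow-Origin: http://tauri.localhost
curl -i -H "Origin: http://tauri.localhost" http://<host-ip>:11434/api/tags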

Model Hardware Requirements

Model          RAM (CPU-only)   GPU (VRAM)   Recommended GPU       Disk Space
Llama 3 8B     16GB             8GB+         RTX 3060/3070/4060    ~4GB
Llama 2 7B     12GB             6GB+         GTX 1660/RTX 2060+    ~4GB
Mistral 7B     12GB             6GB+         GTX 1660/RTX 2060+    ~4GB
Llama 3 70B    64GB+            48GB+        RTX 6000/A100/H100    ~40GB
Mixtral 8x7B   64GB+            48GB+        RTX 6000/A100/H100    ~50GB
Phi-3 Mini     8GB              4GB+         GTX 1650/RTX 3050+    ~1.8GB

Notes:

  • More RAM/VRAM = faster inference and ability to run larger models.
  • CPU-only is possible for small models, but much slower.
  • For best performance, use a modern NVIDIA GPU with CUDA support.
  • Disk space is for the model file only; additional space may be needed for dependencies.
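
If you're unsure which row above applies to your machine, you can check your hardware from a terminal. A minimal sketch for Linux with an NVIDIA GPU (nvidia-smi ships with the NVIDIA driver):

# Total system RAM
free -h

# GPU name and total VRAM (NVIDIA only)
nvidia-smi --query-gpu=name,memory.total --format=csv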

Configure Agentic Signal Client Settings

  1. Open the Settings panel from the dock in the Agentic Signal client.

  2. Browser Executable Path: Enter the full path to your Chrome browser executable. This is required for the web browsing capabilities.

    Note: The Browser Executable Path setting is required only if you use the Windows app version of Agentic Signal.

  3. Ollama Host: Enter the URL of your Ollama server in the "Ollama Host" field (e.g., http://localhost:11434).

  4. In the Ollama Models section:

    • Add a model: Enter the model name (e.g., llama3.1:8b) and click the plus (+) button. Download progress will be shown.
    • Remove a model: Click the trash icon next to any installed model to delete it.
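
The add and remove actions above correspond to operations in Ollama's REST API, so you can script them if you prefer. A sketch assuming the default host from step 3 (field names follow the current Ollama API docs):

# Pull a model; the server streams JSON progress objects until "success"
curl http://localhost:11434/api/pull -d '{"model": "llama3.1:8b"}'

# Delete an installed model
curl -X DELETE http://localhost:11434/api/delete -d '{"model": "llama3.1:8b"}'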

Advanced: Manual Ollama CLI

If you prefer, you can manage models directly with the Ollama CLI:

# Pull a lightweight model (good for quick testing, though smaller models are more error-prone)
ollama pull llama3.2:1b

# Or pull a more capable model
ollama pull llama3.1:8b
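
After pulling, you can confirm the model is installed and that it loads:

# List installed models
ollama list

# Run a one-off prompt as a quick smoke test
ollama run llama3.2:1b "Say hello in one sentence."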