Managing AI Models
AI models are what power the chat assistant. You need at least one model installed and set as the active model before anyone can start chatting.
What are models?
Think of models as different "brains" for the AI assistant. Some are smaller and faster, while others are larger and more capable. The admin page shows each model's name and size so you can choose what works best for your setup.
Installing a local model
- Go to Admin > AI Models.
- Click Add AI Model.
- On the Local tab, choose from the recommended list or type a custom model name.
- Click Install. You will see a download progress bar.
- Once installed, click Set Active to start using it.
Popular choices for getting started include llama3.2 (good all-rounder) and mistral (fast and capable).
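If you are curious what the install step does behind the scenes, it corresponds to pulling a model through Ollama's standard pull endpoint. The snippet below is a minimal sketch of that idea, assuming Ollama is listening on its default address (localhost:11434); it is an illustration, not how LanJAM itself performs the install.

```python
import json
import urllib.request

# Minimal sketch: pull a model through Ollama's /api/pull endpoint.
# Assumes Ollama is running on its default address, localhost:11434.
def pull_model(name: str, host: str = "http://localhost:11434") -> None:
    payload = json.dumps({"name": name}).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/pull",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # Ollama streams progress as one JSON object per line -- this is
    # the kind of data that drives a download progress bar.
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            status = json.loads(line)
            print(status.get("status", ""))

pull_model("llama3.2")  # example model name from the list above
```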
Connecting a remote model
If you have Ollama running on another computer on your network (for example, a more powerful desktop), you can connect to its models:
- Click Add AI Model and switch to the Remote tab.
- Enter the remote machine's address (for example, 192.168.1.100:11434).
- Click Test Connection to verify it is reachable (a rough sketch of this check follows the list).
- If successful, you will see the available models on that machine — click Connect next to the one you want.
- The remote model appears in your models list with a Remote label.
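Conceptually, the Test Connection step is a reachability check against the remote Ollama instance. Here is a rough sketch of that idea, assuming the remote machine exposes Ollama's standard /api/tags endpoint (which lists installed models); LanJAM's actual check may differ in detail.

```python
import json
import urllib.request

# Sketch of a connection test: ask a remote Ollama instance which
# models it has installed. /api/tags is Ollama's list-models endpoint.
def list_remote_models(address: str) -> list[str]:
    with urllib.request.urlopen(f"http://{address}/api/tags", timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

# Example address only -- use your own remote machine's address.
print(list_remote_models("192.168.1.100:11434"))
```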
Setting the active model
Click Set Active on any installed or connected model. Only one model can be active at a time — it is shown with a green checkmark.
All family members' chats will use whichever model is currently active.
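For context, each chat message is ultimately answered by the active model via Ollama's chat endpoint. The sketch below shows that round trip in its simplest form, assuming llama3.2 happens to be the active model; LanJAM adds its own routing and chat history on top of this.

```python
import json
import urllib.request

# Sketch of a single chat turn against Ollama's /api/chat endpoint.
# "llama3.2" stands in for whichever model is currently set active.
def chat(message: str, model: str = "llama3.2",
         host: str = "http://localhost:11434") -> str:
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": message}],
        "stream": False,  # return one complete reply instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

print(chat("Hello!"))
```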
Removing models
- Local models: Click Delete to remove the model from your machine. This frees up disk space.
- Remote models: Click Disconnect to remove it from LanJAM. The model still exists on the remote machine — nothing is deleted there.
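Deleting a local model corresponds to Ollama's delete endpoint, which removes the model weights from disk. A minimal sketch, assuming a default local installation (the model name is just an example):

```python
import json
import urllib.request

# Sketch: remove a locally installed model via Ollama's /api/delete
# endpoint, freeing the disk space the model weights occupied.
def delete_model(name: str, host: str = "http://localhost:11434") -> None:
    payload = json.dumps({"name": name}).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/delete",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="DELETE",
    )
    with urllib.request.urlopen(req):
        pass  # a 200 response means the model was removed

delete_model("llama3.2")
```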
Troubleshooting
If the Admin > Status page shows Ollama as unreachable:
- Try clicking Start Ollama from the status page.
- If that does not work, check that Ollama is installed and running on the host machine.
- Ollama does not always start automatically after a system reboot, so you may need to start it again manually.
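If you want to verify Ollama by hand, a quick probe of its version endpoint tells you whether the server is up. A small sketch, assuming the default localhost:11434 address:

```python
import json
import urllib.request

# Quick health probe: a running Ollama server answers GET /api/version.
def ollama_reachable(host: str = "http://localhost:11434") -> bool:
    try:
        with urllib.request.urlopen(f"{host}/api/version", timeout=3) as resp:
            print("Ollama version:", json.load(resp)["version"])
        return True
    except OSError:  # covers connection errors and timeouts
        return False

print("reachable" if ollama_reachable() else "unreachable")
```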