Getting Started with Ollama
This guide will help you install Ollama and run your first language model locally.
Installation
Ollama is available for macOS, Linux, and Windows. Download the installer for your platform from the official site, ollama.com.
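On Linux, installation is commonly done with a one-line script. A sketch, assuming Ollama's official install script at ollama.com (on macOS and Windows, use the downloadable installer instead):

```shell
# Linux: download and run the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the install worked
ollama --version
```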
Running Your First Model
Once you have Ollama installed, you can run your first model with a simple command:
ollama run llama2
This will download the Llama 2 model (if you don’t already have it) and start a chat session.
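Besides the interactive session, `ollama run` also accepts a prompt directly on the command line for a one-shot answer (requires the Ollama server to be running):

```shell
# One-shot prompt: prints the model's reply and exits
ollama run llama2 "Why is the sky blue?"
```

Inside the interactive session, type `/bye` to end the chat.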
Basic Commands
Here are some basic commands to get you started:
ollama list - List the models you have downloaded locally
ollama pull modelname - Download a model
ollama run modelname - Run a model
ollama rm modelname - Remove a model
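A typical first session using these commands might look like the following (the model name is just an example; these commands need a running Ollama install):

```shell
# Download a model, confirm it is available locally, then chat with it
ollama pull llama2
ollama list
ollama run llama2

# Remove the model when you no longer need it
ollama rm llama2
```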
Next Steps
Check out our Labs section for hands-on exercises that will help you build practical applications with Ollama.