Getting Started with Ollama

This guide will help you install Ollama and run your first language model locally.

Installation

Ollama is available for macOS, Linux, and Windows. Download the installer for your platform from ollama.com to get started.
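On Linux, for example, the official one-line install script can be used (it pipes a script from the network into your shell, so review it first if you prefer):

```shell
# Official Linux install script from ollama.com
curl -fsSL https://ollama.com/install.sh | sh

# Verify the installation
ollama --version
```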

Running Your First Model

Once you have Ollama installed, you can run your first model with a simple command:

ollama run llama2

This will download the Llama 2 model (if you don’t already have it) and start a chat session.
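The same command can also take a prompt directly for a one-shot answer instead of an interactive session (the model name and prompt below are just illustrative):

```shell
# Interactive chat session; type /bye to exit
ollama run llama2

# One-shot prompt: prints the answer and returns to the shell
ollama run llama2 "Explain what a local LLM is in one sentence."
```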

Basic Commands

Here are some basic commands to get you started:

  • ollama list - List the models installed locally
  • ollama pull modelname - Download a model
  • ollama run modelname - Run a model
  • ollama rm modelname - Remove a model
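Beyond the CLI, a running Ollama instance also exposes a local REST API on port 11434, which is handy for scripting. A minimal sketch, assuming the server is running and llama2 has already been pulled:

```python
# Minimal sketch of calling Ollama's local REST API with only the
# Python standard library. Assumes the Ollama server is running on
# its default port (11434) and that "llama2" has been pulled.
import json
from urllib import request, error

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # "stream": False asks the server for one complete JSON response
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = request.Request(
        OLLAMA_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    try:
        print(generate("llama2", "Why is the sky blue?"))
    except (error.URLError, OSError):
        # Degrade gracefully when the server is not running.
        print("Could not reach Ollama - is the server running?")
```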

Next Steps

Check out our Labs section for hands-on exercises that will help you build practical applications with Ollama.


Copyright © 2025 Collabnix Community. Distributed with an MIT license.