Ollama

Run, build, and share LLMs
Ollama is a tool for running large language models locally on macOS, especially on Apple Silicon machines. It simplifies downloading and running models such as Llama 2, Meta's open language model for general natural-language tasks, so users can build and test applications on top of an LLM without managing model weights, dependencies, or hardware configuration by hand.

Getting Started

To run and chat with Meta's Llama 2 model:

ollama run llama2

Model library

Ollama includes a library of open-source models that can be pulled by name:

Model                     Parameters  Size    Download
Llama 2                   7B          3.8 GB  ollama pull llama2
Llama 2 13B               13B         7.3 GB  ollama pull llama2:13b
Orca Mini                 3B          1.9 GB  ollama pull orca
Vicuna                    7B          3.8 GB  ollama pull vicuna
Nous-Hermes               13B         7.3 GB  ollama pull nous-hermes
Wizard Vicuna Uncensored  13B         7.3 GB  ollama pull wizard-vicuna

The 3B models require at least 8 GB of RAM, the 7B models at least 16 GB, and the 13B models 32 GB or more.
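As a quick sketch, the sizing guidance above can be encoded in a small helper. The tiers and numbers are exactly the ones quoted here; the function name is hypothetical:

```python
# Minimum RAM (GB) suggested for each model size, per the guidance above.
MIN_RAM_GB = {"3B": 8, "7B": 16, "13B": 32}

def fits_in_ram(model_params: str, available_gb: int) -> bool:
    """Return True if a model of the given parameter count is expected
    to run on a machine with `available_gb` of RAM."""
    return available_gb >= MIN_RAM_GB[model_params]

print(fits_in_ram("7B", 16))   # a 7B model on a 16 GB machine -> True
print(fits_in_ram("13B", 16))  # a 13B model wants 32 GB or more -> False
```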

Examples

Run the model

ollama run llama2
>>> hi
Hello! How can I help you today?

Create a custom model

Pull a base model:

ollama pull llama2

Create the Modelfile:

FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Then, build and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

See the examples directory in the repository for more examples.

Pull a model from the registry

ollama pull orca

List local models

ollama list
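If you want to consume this list from a script, one option is to parse the command's tabular output. The column layout shown in the sample is an assumption (a header row, then one row per model with the name in the first column), so treat this as a sketch:

```python
import subprocess

def local_models(output=None):
    """Parse `ollama list` output into a list of model names.

    Assumes a header row followed by one row per model whose first
    whitespace-separated column is the model name (a hypothetical
    but typical CLI table layout).
    """
    if output is None:
        # Requires the ollama CLI to be installed and on PATH.
        output = subprocess.run(
            ["ollama", "list"], capture_output=True, text=True, check=True
        ).stdout
    lines = output.strip().splitlines()
    return [line.split()[0] for line in lines[1:] if line.strip()]

# Hypothetical sample output, for offline illustration only:
sample = """NAME            SIZE    MODIFIED
llama2:latest   3.8GB   2 days ago
orca:latest     1.9GB   5 days ago"""
print(local_models(sample))  # ['llama2:latest', 'orca:latest']
```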

Model packages

Overview

A Modelfile specifies the data, configuration, and model weights for an Ollama bundle. A bundle is a package that contains everything needed to run a machine-learning model.

Build

go build .

To run it, first start the server:

./ollama serve &

Finally, run the model!

./ollama run llama2

REST API

POST /api/generate

Generate text from a model.

curl -X POST http://localhost:11434/api/generate -d '{"model": "llama2", "prompt":"Why is the sky blue?"}'
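The endpoint streams its reply as newline-delimited JSON objects, each carrying the next piece of generated text in a `response` field and a `done` flag on the final chunk. A small client sketch using only the standard library (the chunk contents in the offline sample are made up for illustration):

```python
import json
import urllib.request

def generate(prompt, model="llama2",
             url="http://localhost:11434/api/generate"):
    """POST a prompt to a local Ollama server and return the full reply."""
    body = json.dumps({"model": model, "prompt": prompt}).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    with urllib.request.urlopen(req) as resp:
        return collect(line.decode() for line in resp)

def collect(lines):
    """Reassemble a streamed reply: each line is a JSON object whose
    `response` field holds the next chunk of generated text."""
    out = []
    for line in lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        out.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(out)

# Offline illustration of the stream format (hypothetical chunks):
stream = ['{"response": "The sky ", "done": false}',
          '{"response": "is blue.", "done": true}']
print(collect(stream))  # The sky is blue.
```

Calling `generate("Why is the sky blue?")` requires a running `ollama serve` on the default port; `collect` can be tested on its own, as shown.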

Belmechri

I am an IT engineer, content creator, and proud father with a passion for innovation and excellence. In both my personal and professional life, I strive for excellence and am committed to finding innovative solutions to complex problems.