It's easier to run an open-source LLM locally than most people think.
Today, let's walk through a step-by-step, hands-on demo.
Here's what the final outcome looks like:
We'll run Microsoft's Phi-2 using Ollama, a framework for running open-source LLMs (Llama 2, Llama 3, and many more) directly on a local machine.
On a side note, we recently started a beginner-friendly crash course on RAG, with implementations. Read the first two parts here:
Let's begin!
Step 1) Download Ollama
Go to Ollama.com, download Ollama, and install it.
Ollama supports several open-source models (listed here). Here are some of them, along with the command to download them:
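For instance, a few popular models and their download commands (tags may change over time, so check the Ollama library for the current list):

```shell
# Download a model with `ollama pull <model>`
ollama pull llama2     # Meta's Llama 2
ollama pull mistral    # Mistral 7B
ollama pull phi        # Microsoft's Phi-2
```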
Step 2) Download phi-2
Next, download phi-2 by running the following command:
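Phi-2 is published under the `phi` tag in Ollama's model library, so the pull command is:

```shell
# Download Phi-2 to the local machine
ollama pull phi
```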
Ollama will show the download progress in your terminal.
Done!
Step 3) Use phi-2
An open-source LLM is now running on your local machine, and you can prompt it as follows:
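Assuming the download from Step 2 finished, `ollama run` opens an interactive session where you can type prompts directly (the example prompt below is illustrative):

```shell
# Start an interactive session with Phi-2
ollama run phi
# Then type a prompt at the >>> prompt, e.g.:
# >>> Explain gradient descent in one sentence.
```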
Customize model
Models served by Ollama can be customized with a system prompt. Let's say you want to customize phi-2 to talk like Mario.
Make a copy of the existing modelfile:
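One way to do this (assuming the `phi` model from Step 2) is to dump its modelfile with `ollama show`:

```shell
# Write phi's modelfile to a local file named "Modelfile"
ollama show --modelfile phi > Modelfile
```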
Next, open the new file and edit the SYSTEM instruction:
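A minimal modelfile for this might look like the following (the persona text is illustrative):

```
# Modelfile for a Mario-flavored phi
FROM phi

# The system prompt sets the model's persona
SYSTEM """
You are Mario from Super Mario Bros. Answer every question as Mario, and stay in character.
"""
```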
Next, create your custom model as follows:
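With the edited Modelfile in the current directory, `ollama create` builds the new model:

```shell
# Build a model named "mario" from the edited Modelfile
ollama create mario -f ./Modelfile
```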
Done!
Now run the mario model:
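This opens the same interactive session as before, but with the Mario persona baked in (the example prompt is illustrative):

```shell
# Chat with the customized model
ollama run mario
# >>> Who are you?
```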
Using an LLM locally was simple, wasn't it?
That said, Ollama integrates elegantly with almost all LLM orchestration frameworks, such as LlamaIndex and LangChain, which makes it easier to build LLM apps on top of open-source models.
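As a quick sketch of such an integration, here's how LangChain's Ollama wrapper can point at the local model (this assumes the `langchain-community` package is installed and an Ollama server is running on its default port):

```python
from langchain_community.llms import Ollama

# Connect to the local Ollama server (default: http://localhost:11434)
llm = Ollama(model="phi")

# Send a prompt to the locally running Phi-2
response = llm.invoke("Explain what an LLM is in one sentence.")
print(response)
```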
We have been using them in our beginner-friendly crash course on building RAG systems. Read the first two parts here:
👉 Over to you: What are some ways to run LLMs locally?
P.S. For those wanting to develop “Industry ML” expertise:
We have discussed several other topics (with implementations) in the past that will help you build such expertise.
Here are some of them:
Learn sophisticated graph architectures and how to train them on graph data: A Crash Course on Graph Neural Networks – Part 1
Learn techniques to run large models on small devices: Quantization: Optimize ML Models to Run Them on Tiny Hardware
Learn how to generate prediction intervals or sets with strong statistical guarantees for increasing trust: Conformal Predictions: Build Confidence in Your ML Model’s Predictions.
Learn how to identify causal relationships and answer business questions: A Crash Course on Causality – Part 1
Learn how to scale ML model training: A Practical Guide to Scaling ML Model Training.
Learn techniques to reliably roll out new models in production: 5 Must-Know Ways to Test ML Models in Production (Implementation Included)
Learn how to build privacy-first ML systems: Federated Learning: A Critical Step Towards Privacy-Preserving Machine Learning.
Learn how to compress ML models and reduce costs: Model Compression: A Critical Step Towards Efficient Machine Learning.
All these resources will help you cultivate key skills that businesses and companies care about the most.
SPONSOR US
Get your product in front of 450k+ data scientists and other tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters — thousands of leaders, senior data scientists, machine learning engineers, data analysts, etc., who have influence over significant tech decisions and big purchases.
To ensure your product reaches this influential audience, reserve your space here or reply to this email.