Options for accessing Llama 3 from the terminal using LLM
Run Llama 3 on your laptop or access it using a number of different API providers
In this newsletter:
Options for accessing Llama 3 from the terminal using LLM
Plus 9 links and 2 quotations
Options for accessing Llama 3 from the terminal using LLM - 2024-04-22
Llama 3 was released on Thursday. Early indications are that it's now the best available openly licensed model - Llama 3 70b Instruct has taken joint 5th place on the LMSYS arena leaderboard, behind only Claude 3 Opus and some GPT-4s, sharing that spot with Gemini Pro and Claude 3 Sonnet. But unlike those other models, Llama 3 70b is weights available and can even be run on a (high end) laptop!
My LLM command-line tool and Python library provides access to dozens of models via plugins. Here are several ways you can use it to access Llama 3, both hosted versions and running locally on your own hardware.
Llama-3-8B-Instruct locally with llm-gpt4all
If you want to run Llama 3 locally, the easiest way to do that with LLM is using the llm-gpt4all plugin. This plugin builds on the excellent gpt4all project by Nomic AI, providing a quantized (q4) Llama 3 8B instruct model that needs 8GB of RAM and requires a 4.34GB download.
If you don't have LLM installed yet, install it like this:
brew install llm
Or like this:
pipx install llm
Install the plugin:
llm install llm-gpt4all
The first time you run a prompt it will download the model:
llm -m Meta-Llama-3-8B-Instruct "Three great names for a pet emu"
What an interesting question! Here are three name suggestions for your pet emu:
1. Emmet: A playful take on the word "emu" that suits a friendly and curious bird.
2. Gizmo: Emus are known for their quirky behavior, so Gizmo is a fitting name to capture their unique personality.
3. Aussie: As an Australian native species, naming your emu Aussie would be a nod to its heritage.
Remember to choose a name that suits your pet's temperament and characteristics!
Each time you run a command like this the model will be loaded into memory from scratch. To keep it resident in memory, use the llm chat command:
llm chat -m Meta-Llama-3-8B-Instruct
Here's an example session:
Chatting with Meta-Llama-3-8B-Instruct
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> javascript function to reverse a string
Here is the JavaScript code that reverses a given string:
```
function reverseString(str) {
  return str.split("").reverse().join("");
}
```
You can use this function by passing your desired input string as an argument, like so:
```javascript
let reversedStr = reverseString("Hello World");
console.log(reversedStr); // Output: "dlroW olleH"
```
In the code above:
- `str.split("")` splits the original string into individual characters.
- `.reverse()` reverses the order of those characters.
- `.join("")` combines them back together as a single string.
This function works for any input string, not just "Hello World". Try it out with different strings to see how it handles punctuation and spaces!
> capitalize it too
Here's an updated JavaScript function that not only reverses the input string but also capitalizes each character:
```javascript
function reverseAndCapitalize(str) {
  return str.split("").reverse().map(function(char) {
    if (char.match(/[a-z]/i)) {
      return char.toUpperCase();
    } else {
      return char;
    }
  }).join("");
}
```
(I'm not sure I'd write reverseAndCapitalize() quite like that, but still a fun example!)
Consult the LLM documentation for more details on how to use the command-line tool.
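You can drive the same local model from Python using LLM's Python API. Here's a minimal sketch - it assumes the llm-gpt4all plugin is installed and uses the same model ID as the CLI examples above:

```python
import llm

# Load the locally installed model, using the same ID the CLI uses
model = llm.get_model("Meta-Llama-3-8B-Instruct")

# Run a prompt - the model file is downloaded on first use, just like the CLI
response = model.prompt("Three great names for a pet emu")
print(response.text())
```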
Fast API access via Groq
Groq serve openly licensed LLMs at ludicrous speeds using their own custom LPU (Language Processing Unit) Inference Engine. They currently offer a free preview of their API: you can sign up and obtain an API key to start using it.
You can run prompts against Groq using their OpenAI compatible API endpoint.
Edit the file ~/Library/Application Support/io.datasette.llm/extra-openai-models.yaml - creating it if it doesn't exist - and add the following lines to it:
- model_id: groq-openai-llama3
  model_name: llama3-70b-8192
  api_base: https://api.groq.com/openai/v1
  api_key_name: groq
- model_id: groq-openai-llama3-8b
  model_name: llama3-8b-8192
  api_base: https://api.groq.com/openai/v1
  api_key_name: groq
This tells LLM about those models, and makes them accessible via those configured model_id values.
Run this command to confirm that the models were registered correctly:
llm models | grep groq
You should see this:
OpenAI Chat: groq-openai-llama3
OpenAI Chat: groq-openai-llama3-8b
Set your Groq API key like this:
llm keys set groq
# <Paste your API key here>
Now you should be able to run prompts through the models like this:
llm -m groq-openai-llama3 "A righteous sonnet about a brave owl"
Groq is fast.
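Because Groq exposes an OpenAI-compatible endpoint, you can also call it directly from Python with the standard openai client library. Here's a rough sketch - the base URL and model name are the same ones registered in the YAML above, and reading the key from a GROQ_API_KEY environment variable is just an assumption for this example:

```python
import os
from openai import OpenAI

# Point the standard OpenAI client at Groq's OpenAI-compatible endpoint
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],  # assumes your Groq key is in this env var
)

response = client.chat.completions.create(
    model="llama3-70b-8192",
    messages=[{"role": "user", "content": "A righteous sonnet about a brave owl"}],
)
print(response.choices[0].message.content)
```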
There's also a llm-groq plugin but it hasn't shipped support for the new models just yet - though there's a PR for that by Lex Herbert here and you can install the plugin directly from that PR like this:
llm install https://github.com/lexh/llm-groq/archive/ba9d7de74b3057b074a85fe99fe873b75519bd78.zip
llm keys set groq
# paste API key here
llm -m groq-llama3-70b 'say hi in spanish five ways'
Local Llama 3 70b Instruct with llamafile
The Llama 3 8b model is easy to run on a laptop, but it's pretty limited in capability. The 70b model is the one that's starting to get competitive with GPT-4. Can we run that on a laptop?
I managed to run the 70b model on my 64GB MacBook Pro M2 using llamafile (previously on this blog) - after quitting most other applications to make sure the 37GB of RAM it needed was available.
I used the Meta-Llama-3-70B-Instruct.Q4_0.llamafile Q4 version from jartine/Meta-Llama-3-70B-Instruct-llamafile - a 37GB download. I have a dedicated external hard disk (a Samsung T7 Shield) for this kind of thing.
Here's how I got it working:
curl -L -o Meta-Llama-3-70B-Instruct.Q4_0.llamafile 'https://huggingface.co/jartine/Meta-Llama-3-70B-Instruct-llamafile/resolve/main/Meta-Llama-3-70B-Instruct.Q4_0.llamafile?download=true'
# That downloads 37GB - now make it executable
chmod 755 Meta-Llama-3-70B-Instruct.Q4_0.llamafile
# And start it running:
./Meta-Llama-3-70B-Instruct.Q4_0.llamafile
A llamafile is an executable that runs on virtually any platform - see my previous notes on Cosmopolitan and Actually Portable Executable for more on how that works.
This will take quite a while to start, because it needs to load that full 37GB of binary content into memory. Once it's finished loading, a local web server becomes available at http://127.0.0.1:8080/ - this serves a web UI you can use to interact with the model, and also serves another OpenAI-compatible API endpoint.
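If you're curious what that endpoint looks like, here's a sketch that talks to the local server directly using Python's requests library - it assumes the llamafile server is running on port 8080 and exposes the standard /v1/chat/completions route:

```python
import requests

# The llamafile server speaks the OpenAI chat completions protocol locally
resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        # The local server runs whatever model it loaded, so this is mostly a label
        "model": "llamafile",
        "messages": [
            {"role": "user", "content": "3 neat characteristics of a pelican"}
        ],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```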
The easiest way to access this from LLM is to install the llm-llamafile plugin:
llm install llm-llamafile
All this plugin does is configure a model called llamafile that attempts to access the model hosted on port 8080. You can run prompts like this:
llm -m llamafile "3 neat characteristics of a pelican"
Here are three neat characteristics of a pelican:
1. Unique Beak: Pelicans have a distinctive beak that is shaped like a scoop or a basket. This beak is specially designed to catch fish, and it can hold up to 3 gallons of water! The beak is also very sensitive, which helps pelicans detect the presence of fish in the water.
2. Waterproof Feathers: Pelicans have a special coating on their feathers that makes them waterproof. This is essential for their fishing lifestyle, as they need to be able to dive into the water without getting weighed down by wet feathers. The coating is made up of a waxy substance that helps to repel water.
3. Pouch-Like Throat: Pelicans have a unique throat pouch that allows them to catch and store fish. When they dive into the water, they use their beak to scoop up fish, and then they store them in their throat pouch. The pouch can expand to hold multiple fish, and the pelican can then swallow the fish whole or regurgitate them to feed their young. This pouch is a key adaptation that helps pelicans thrive in their aquatic environment.
If you don't want to install another plugin, you can instead configure the model by adding this to your extra-openai-models.yaml file:
- model_id: llamafile
  model_name: llamafile
  api_base: http://localhost:8080/v1
  api_key: x
One warning about this approach: if you use LLM like this then every prompt you run through llamafile will be stored under the same model name in your SQLite logs, even if you try out different llamafile models at different times. You could work around this by registering them with different model_id values in the YAML file.
Paid access via other API providers
A neat thing about open weight models is that multiple API providers can offer them, encouraging them to aggressively compete on price.
Groq is currently free, but only for a limited number of requests.
A number of other providers are now hosting Llama 3, and many of them have plugins available for LLM. Here are a few examples:
- Perplexity Labs are offering llama-3-8b-instruct and llama-3-70b-instruct. The llm-perplexity plugin provides access - llm install llm-perplexity to install, llm keys set perplexity to set an API key, then run prompts against those two model IDs. Current price for 8b is $0.20 per million tokens, for 70b it's $1.00.
- Anyscale Endpoints have meta-llama/Llama-3-8b-chat-hf ($0.15/million tokens) and meta-llama/Llama-3-70b-chat-hf ($1.00/million tokens) (pricing). llm install llm-anyscale-endpoints, then llm keys set anyscale-endpoints to set the API key.
- Fireworks AI have fireworks/models/llama-v3-8b-instruct for $0.20/million and fireworks/models/llama-v3-70b-instruct for $0.90/million (pricing). llm install llm-fireworks, then llm keys set fireworks to set the API key.
- OpenRouter provide proxied access to Llama 3 from a number of different providers at different prices, documented on their meta-llama/llama-3-70b-instruct and meta-llama/llama-3-8b-instruct pages (and more). Use the llm-openrouter plugin for those.
- Together AI has both models as well. The llm-together plugin provides access to meta-llama/Llama-3-8b-chat-hf and meta-llama/Llama-3-70b-chat-hf.
I'm sure there are more - these are just the ones I've tried out myself. Check the LLM plugin directory for other providers, or if a provider emulates the OpenAI API you can configure it with the YAML file as shown above or as described in the LLM documentation.
That's a lot of options
One key idea behind LLM is to use plugins to provide access to as many different models as possible. Above I've listed two ways to run Llama 3 locally and six different API vendors that LLM can access as well.
If you're inspired to write your own plugin it's pretty simple: each of the above plugins is open source, and there's a detailed tutorial on Writing a plugin to support a new model on the LLM website.
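To give a sense of the shape of a plugin, here's a rough skeleton of the kind of thing that tutorial walks you through - the hook name and execute() signature here are from my reading of the plugin docs rather than copied from the tutorial, so treat the tutorial itself as authoritative:

```python
import llm


class Echo(llm.Model):
    # The ID users will pass to "llm -m echo-demo"
    model_id = "echo-demo"

    def execute(self, prompt, stream, response, conversation):
        # A real plugin would call an API or a local model here;
        # this toy model just echoes the prompt back.
        yield f"You said: {prompt.prompt}"


@llm.hookimpl
def register_models(register):
    # Makes the model available to the llm CLI and Python API
    register(Echo())
```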
Quote 2024-04-18
I have a child who is also 2e and has been part of the NYC G&T program. We've had a positive experience with the citywide program, specifically with the program at The Anderson School.
Meta AI bot, answering a question on a forum
Link 2024-04-18 How cheap, outsourced labour in Africa is shaping AI English:
The word "delve" has been getting a lot of attention recently as an example of something that might be an indicator of ChatGPT generated content.
One example: articles on medical research site PubMed now use “delve” 10 to 100 times more than a few years ago!
Nigerian Twitter took offense recently to Paul Graham's suggestion that "delve" is a sign of bad writing. It turns out Nigerian formal writing has a subtly different vocabulary.
Alex Hern theorizes that the underlying cause may be related. Companies like OpenAI frequently outsource data annotation to countries like Nigeria that have excellent English skills and low wages. RLHF (reinforcement learning from human feedback) involves annotators comparing and voting on the "best" responses from the models.
Are they teaching models to favour Nigerian-English? It's a pretty solid theory!
Link 2024-04-18 Andrej Karpathy's Llama 3 review:
The most interesting coverage I've seen so far of Meta's Llama 3 models (8b and 70b so far, 400b promised later).
Andrej notes that Llama 3 trained on 15 trillion tokens - up from 2 trillion for Llama 2 - and they used that many even for the smaller 8b model, 75x more than the chinchilla scaling laws would suggest.
The tokenizer has also changed - they now use 128,000 tokens, up from 32,000. This results in a 15% drop in the tokens needed to represent a string of text.
The one disappointment is the context length - just 8,192, 2x that of Llama 2 and 4x LLaMA 1 but still pretty small by today's standards.
If early indications hold, the 400b model could be the first genuinely GPT-4 class openly licensed model. We'll have to wait and see.
Link 2024-04-19 A POI Database in One Line:
Overture Maps offer an extraordinarily useful, freely licensed database of POI (point of interest) listings, principally derived from partners such as Facebook and including restaurants, shops, museums and other locations from all around the world.
Their new "overturemaps" Python CLI utility makes it easy to quickly pull subsets of their data... but requires you to provide a bounding box to do so.
Drew Breunig came up with this delightful recipe for fetching data using LLM and gpt-3.5-turbo to fill in those bounding boxes:
overturemaps download --bbox=$(llm 'Give me a bounding box for Alameda, California expressed as only four numbers delineated by commas, with no spaces, longitude preceding latitude.') -f geojsonseq --type=place | geojson-to-sqlite alameda.db places - --nl --pk=id
Link 2024-04-19 Ruff v0.4.0: a hand-written recursive descent parser for Python:
The latest release of Ruff - a Python linter and formatter, written in Rust - includes a complete rewrite of the core parser. Previously Ruff used a parser borrowed from RustPython, generated using the LALRPOP parser generator. Victor Hugo Gomes contributed a new parser written from scratch, which provided a 2x speedup and also added error recovery, allowing parsing of invalid Python - super-useful for a linter.
I tried Ruff 0.4.0 just now against Datasette - a reasonably large Python project - and it ran in less than 1/10th of a second. This thing is Fast.
Link 2024-04-20 llm-gpt4all:
New release of my LLM plugin which builds on Nomic's excellent gpt4all Python library. I've upgraded to their latest version which adds support for Llama 3 8B Instruct, so after a 4.4GB model download this works:
llm -m Meta-Llama-3-8B-Instruct "say hi in Spanish"
Link 2024-04-20 Tips on Adding JSON Output to Your CLI App:
Kelly Brazil - also the author of jc, the neat CLI tool that converts the output of common Unix utilities such as dig into JSON - provides some useful do's and don'ts for adding JSON output as an option to a command-line tool.
Kelly recommends defaulting to arrays of flat objects - or newline-delimited objects - and suggests including an "unbuffer" option for streaming tools that discourages the OS from buffering output that is being sent through a pipe.
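As a concrete illustration of the newline-delimited style (this snippet is my own sketch, not from Kelly's post):

```python
import json
import sys

records = [
    {"host": "example.com", "status": "up"},
    {"host": "example.org", "status": "down"},
]

# Newline-delimited JSON: one flat object per line, flushed as it is produced
# so downstream tools in a pipe see each record immediately.
for record in records:
    sys.stdout.write(json.dumps(record) + "\n")
    sys.stdout.flush()
```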
Quote 2024-04-20
The blog post announcing the shutdown was done one day early. The idea was to take the opportunity of the new Pope being announced and Andy Rubin being replaced as head of Android, so that the [Google] Reader news may be drowned out. PR didn't apparently realize that the kinds of people that care about the other two events (especially the Pope) are not the same kind of people that care about Reader, so it didn't work.
Link 2024-04-21 doom-htop:
Ludicrous, brilliant hack: it runs Doom, converts each frame to ASCII art, then runs one process for each line of ASCII and sets each process to allocate enough memory such that sorting by M_VIRT will show the lines in the correct order. Then it updates the argv[0] for each process on every frame such that htop displays the state of the game.
Probably only works on Ubuntu.
From the FAQ: "Q: Why did you make this? A: I thought it would be funny."
Link 2024-04-21 tiny-world-map:
I love this project. It's a JavaScript file (694KB uncompressed, 283KB compressed) which can be used with the Leaflet mapping library and provides an SVG base map of the world with country borders and labels for every world city with a population of more than 48,000 - 10,000 cities total.
This means you can bundle an offline map of the world as part of any application that doesn't need a higher level of detail. A lot of smaller island nations are missing entirely though, so this may not be right for every project.
It even includes a service worker to help implement offline mapping support, plus several even smaller variants of the map with fewer cities.
Link 2024-04-21 qrank:
Interesting and very niche project by Colin Dellow.
Wikidata has pages for huge numbers of concepts, people, places and things.
One of the many pieces of data they publish is QRank - "ranking Wikidata entities by aggregating page views on Wikipedia, Wikispecies, Wikibooks, Wikiquote, and other Wikimedia projects". Every item gets a score and these scores can be used to answer questions like "which island nations get the most interest across Wikipedia" - potentially useful for things like deciding which labels to display on a highly compressed map of the world.
QRank is published as a gzipped CSV file.
Colin's hikeratlas/qrank GitHub repository runs a weekly job that fetches the latest qrank.csv.gz file and loads it into a SQLite database using SQLite's ".import" mechanism. It then publishes the resulting SQLite database as an asset attached to the "latest" GitHub release on that repo - currently a 307MB file.
The database itself has just a single table mapping the Wikidata ID (a primary key integer) to the latest QRank - another integer. You'd need your own set of data with Wikidata IDs to join against this to do anything useful.
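For example, if you had a table of your own places with Wikidata IDs you could attach the downloaded database and join against it. A sketch of that is below - the table and column names inside the downloaded database are guesses for illustration, not checked against the actual schema:

```python
import sqlite3

# Hypothetical example: my_places.db has a "places" table with name and
# wikidata_id columns; the "qrank" table/column names are assumed here.
conn = sqlite3.connect("my_places.db")
conn.execute("ATTACH DATABASE 'qrank.db' AS q")

rows = conn.execute(
    """
    SELECT places.name, r.qrank
    FROM places
    JOIN q.qrank AS r ON r.id = places.wikidata_id
    ORDER BY r.qrank DESC
    LIMIT 10
    """
).fetchall()

for name, score in rows:
    print(name, score)
```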
I'd never thought of using GitHub Releases for this kind of thing. I think it's a really interesting pattern.