Plus thoughts on the situation with the OpenAI board
That Hugging Face URL won't download:
https://huggingface.co/jartine/llava-v1.5-7B-GGUF/resolve/main/llamafile-server-0.1-llava-v1.5-7b-q4
Thanks, looks like that changed today - I updated the URLs in the post.
I’ve never been a Mac user, and struggle to find hardware specs for PCs (Windows or Linux) that can easily run local LLMs.
Ideally laptops.
Have any resources? Christmas time is approaching!
No idea myself I'm afraid, but there should be good guides on https://www.reddit.com/r/LocalLLaMA/