Show HN: Ollama – Run LLMs on your Mac
Hi HN,
A few folks and I have been working on this project for a couple of weeks now. Having previously worked on the Docker project for a number of years (on both the container runtime and image registry side), we felt the recent rise in open source language models meant something similar needed to exist for large language models too.
While not exactly the same as running Linux containers, running LLMs shares quite a few of the same challenges. There are "base layers" (e.g. models like Llama 2) and model-specific configuration needed to run correctly (parameters, temperature, context window size, etc.). There are also embeddings that a model can use at runtime to look up data; we don't support this yet, but it's something we're looking at doing soon.
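To make the "base layer" idea concrete, here's a rough sketch of what a model definition can look like in a Dockerfile-style Modelfile (the exact parameter names and values below are just illustrative):

    # start from the Llama 2 base model
    FROM llama2
    # sampling and context-window settings
    PARAMETER temperature 0.8
    PARAMETER num_ctx 4096
    # optional system prompt baked into the model
    SYSTEM You are a helpful assistant.

You can then build and run it with something like `ollama create mymodel -f Modelfile` followed by `ollama run mymodel`.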
It's an early project, and there's still lots to do!
Comments URL: https://news.ycombinator.com/item?id=36802582