And an async browser library for alert(), confirm() and prompt() built using OpenAI o1
How can I run this in a server environment?
I think Ollama will work on a server too, as will llama.cpp server - but to be honest I don't know what the best options for that are at the moment.
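For anyone wanting to try the Ollama route, here's a minimal sketch of running it as a server. This assumes Ollama is installed on the host and uses its default port (11434); the model name `llama3.2` is just an example:

```shell
# Start the Ollama server in the background (listens on http://localhost:11434 by default)
ollama serve &

# Pull a model to serve (llama3.2 is an example; substitute any model you prefer)
ollama pull llama3.2

# Query the HTTP API from any client on the network
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Hello", "stream": false}'
```

llama.cpp's bundled server works similarly: `llama-server -m model.gguf --port 8080` exposes an OpenAI-compatible HTTP endpoint, which makes it easy to point existing client code at it.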
There's probably a huge opportunity for some PaaS provider to offer this. If Ollama is as good as you say, it could handle a lot of use cases at a much lower cost than OpenAI or Anthropic.