Many companies are interested in running open large language models such as Gemma and Llama because doing so gives them full control over deployment options, the timing of model upgrades, and the private data that goes into the model. Ollama is a popular open-source LLM inference server that works well both on localhost and in a container. In this talk, you'll learn how to deploy an application that uses an open model served by Ollama on Cloud Run with scale-to-zero, serverless GPUs.
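As a taste of the kind of application the talk covers, here is a minimal sketch of a client calling Ollama's HTTP generate API. The `OLLAMA_HOST` environment variable, the default endpoint, and the `gemma` model name are assumptions for illustration; the actual deployment details (container setup, GPU configuration, Cloud Run service definition) are what the talk walks through.

```python
# Minimal sketch: an app that asks an Ollama server for a completion.
# Assumes Ollama is reachable at OLLAMA_HOST (defaulting to Ollama's
# standard local port, 11434) and that a model such as "gemma" has
# already been pulled onto the server.
import json
import os
import urllib.request

OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "http://localhost:11434")


def generate(prompt: str, model: str = "gemma") -> str:
    """Send a single non-streaming request to Ollama's /api/generate endpoint."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(generate("Why run open models on serverless GPUs?"))
```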