Large language models are all the buzz lately, speeding up developer productivity and helping in areas like creative writing. One interesting way to super-power an LLM is retrieval-augmented generation (RAG): you keep a local database of documents and have the model retrieve relevant passages from it before answering, so the chatbot can handle domain-specific questions. I'll be demoing some of the experimental work done in our group at CERN to build a chatbot that feeds on our internal developer documentation and helps you find what you need.
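The RAG idea can be sketched in a few lines: rank stored documents by similarity to the question, then prepend the best matches to the prompt. This is a minimal, self-contained illustration only; the document store, the bag-of-words similarity, and all names here are hypothetical stand-ins (a real system would use learned embeddings and a vector database, not term counts).

```python
from collections import Counter
import math

# Hypothetical mini document store standing in for internal developer docs.
DOCS = [
    "To deploy a service, push to the repository and run the CI pipeline.",
    "The storage quota for each project is configured in the admin panel.",
    "Authentication uses single sign-on tokens from the identity provider.",
]

def vectorize(text):
    # Bag-of-words term frequencies; real RAG systems use learned embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Return the k documents most similar to the query.
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # The "augmented" part: stuff retrieved context into the LLM prompt.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How do I deploy a service?", DOCS)
```

The resulting `prompt` would then be sent to the LLM, which grounds its answer in the retrieved documentation rather than in its training data alone.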