Multimodal Retrieval-Augmented Generation (RAG) is the art of using your own textual, visual, and audio data to enrich prompts before they're processed by a generative AI. Imagine a user asking a question, the system querying your own data pool for answers, and generative AI then crafting a coherent, relevant response… which is perfect for non-player characters (NPCs)! NPCs are video-game characters not controlled by the player. Typically, their responses are static and hardcoded, but with this approach we can make them more lifelike and fun to interact with!

We can take it even further. Imagine NPCs interacting with each other, pursuing goals, and moving around dynamically, all driven by complex RAG workflows. This is exactly what Alexander has explored and wants to share with you!

Whether you're a seasoned backend engineer or an adventurous developer seeking fresh ideas, Alexander will guide you through setting up RAG workflows from basic to advanced. He'll demonstrate the challenges and benefits of using tools like LangChain4j to build RAG systems that deliver up-to-date, factual answers.

Alexander's goal is to offer an engaging, fun look at RAG that inspires you to dive in and explore its potential yourself. Get ready for a session that mixes education with absurd fun as it uncovers the inner workings of multimodal RAG!
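The retrieve-then-augment flow described above can be sketched in a few lines of Java. This is a minimal, illustrative stand-in: the keyword-match retriever and the in-memory "lore" list are hypothetical placeholders for a real embedding store and vector search (as a library like LangChain4j would provide), and the generation step is left as a prompt you would hand to a model.

```java
import java.util.List;
import java.util.stream.Collectors;

// Minimal sketch of a RAG loop for an NPC. The retriever here is a naive
// keyword match over an in-memory list, standing in for real vector search;
// no actual LangChain4j API is used.
public class NpcRag {

    // A tiny "knowledge pool" the NPC can draw on (hypothetical game lore).
    static final List<String> LORE = List.of(
            "The blacksmith lives by the north gate.",
            "The old mill has been haunted since last winter.",
            "Dragons were last seen in the eastern hills."
    );

    // Retrieval step: match question keywords against stored facts.
    static List<String> retrieve(String question) {
        String q = question.toLowerCase();
        return LORE.stream()
                .filter(fact -> {
                    for (String word : q.split("\\W+")) {
                        if (word.length() > 3 && fact.toLowerCase().contains(word)) {
                            return true;
                        }
                    }
                    return false;
                })
                .collect(Collectors.toList());
    }

    // Augmentation step: splice the retrieved facts into the prompt that
    // would then be sent to a generative model.
    static String buildPrompt(String question, List<String> facts) {
        return "Answer in character, using only these facts:\n"
                + facts.stream().map(f -> "- " + f).collect(Collectors.joining("\n"))
                + "\nQuestion: " + question;
    }

    public static void main(String[] args) {
        String question = "Where can I find the blacksmith?";
        List<String> facts = retrieve(question);
        System.out.println(buildPrompt(question, facts));
    }
}
```

Only the blacksmith fact is retrieved for this question, so the NPC's answer stays grounded in relevant lore instead of a static, hardcoded line.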