What AI can do nowadays is simply mind-blowing. I must admit that I cannot stop being surprised, sometimes literally jumping from my seat and thinking: "I didn't imagine that AI could ALSO do this!". What is a bit misleading, though, is that what we tend to identify with Artificial Intelligence today is actually the use of Large Language Models, which are only a subset of all the AI technologies available: machine learning is just a fraction of the whole AI story, and agentic AI enables more general use cases, for instance ones that also involve Symbolic AI to encode the business logic of a specific domain through a set of human-readable, transparent rules.

In fact, there are many situations where being surprised is the last thing you want. You don't want to jump from your seat when your bank refuses your mortgage without any humanly understandable reason, but only because the AI said no. And the bank itself may want to grant mortgages only to applicants who are considered viable under its strict, well-defined business rules.

Given these premises, it is interesting to mix different complementary technologies in order to overcome the limited auditability and the risk of hallucinations implicit in the use of a Large Language Model alone. In this talk we will discuss, with practical examples, some of the patterns emerging in this field, such as RAG, external tool invocation, and guardrails. We will also demonstrate why this can be a winning architectural choice in many common situations, and how Quarkus, through its langchain4j and drools extensions, makes it straightforward to develop applications that integrate these technologies.
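To make the idea concrete, here is a minimal sketch of what such an integration could look like with the quarkus-langchain4j extension: the LLM handles the conversation, while eligibility is decided by deterministic, auditable business rules exposed to it as a tool. All class, method, and parameter names here (MortgageAssistant, MortgageRules, the thresholds) are hypothetical illustrations, not the actual code from the talk; in a fuller example the rule evaluation would typically be delegated to a Drools session via the drools extension rather than hard-coded in Java.

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
import jakarta.enterprise.context.ApplicationScoped;

// Hypothetical AI service: the LLM answers applicants' questions, but must
// call the tool below to decide eligibility instead of guessing an answer.
@RegisterAiService(tools = MortgageRules.class)
public interface MortgageAssistant {

    @SystemMessage("""
            You are a mortgage assistant. Never decide eligibility yourself:
            always call the provided tool and report its outcome faithfully.
            """)
    String chat(@UserMessage String question);
}

// Hypothetical, deliberately simple stand-in for the symbolic-AI side:
// transparent, human-readable rules that can be audited and unit-tested.
@ApplicationScoped
class MortgageRules {

    @Tool("Checks whether an applicant is eligible for a mortgage under the bank's rules")
    public boolean isEligible(int applicantAge, double yearlyIncome, double requestedAmount) {
        // Illustrative thresholds only; a real system would fire Drools rules here.
        return applicantAge >= 18
                && yearlyIncome > 0
                && requestedAmount <= yearlyIncome * 5;
    }
}
```

The point of the pattern is that a refusal is now traceable to a named, inspectable rule rather than to an opaque model output, while the LLM still provides the natural-language interface.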