This talk dives into the real-world challenges of handling large-scale data processing in a banking environment, with a focus on PostgreSQL, Hibernate, and data persistence best practices. We’ll explore methods for managing high-throughput systems, where performance and reliability are paramount. Key topics include Hibernate’s StatelessSession for batch processing and Chronicle API integration to tackle long-running transactions. We’ll also discuss strategies for batch identifier generation and streaming large datasets efficiently, along with common fetching pitfalls, drawing on lessons learned from building data integration solutions for financial products. Attendees will come away with practical tips on reducing database load, improving data consistency, and scaling PostgreSQL-backed systems. Whether you’re handling high volumes of data, designing scalable architecture, or looking to deepen your database knowledge, this session offers insights that can be applied directly to your projects.
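To give a flavor of one of the topics, here is a minimal sketch of batch inserts with Hibernate’s StatelessSession, which bypasses the first-level cache and dirty checking so memory stays flat for large batches. The `Payment` entity and the way the `SessionFactory` is obtained are hypothetical placeholders, not code from the talk:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.hibernate.SessionFactory;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

// Hypothetical entity, used only for illustration.
@Entity
class Payment {
    @Id
    Long id;
    String reference;
}

public class StatelessBatchInsert {

    // Insert a batch of rows without a persistence context:
    // StatelessSession skips the first-level cache, dirty checking,
    // and cascading, so each insert goes straight to the database.
    static void insertAll(SessionFactory sessionFactory, Iterable<Payment> payments) {
        try (StatelessSession session = sessionFactory.openStatelessSession()) {
            Transaction tx = session.beginTransaction();
            try {
                for (Payment payment : payments) {
                    session.insert(payment); // issues an INSERT immediately
                }
                tx.commit();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            }
        }
    }
}
```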