Webinar
AI isn’t slowing down, and the Nordics are accelerating.
Discover how to evolve your AI stack from simple caching to a real-time context engine. We’ll explore how Redis enables semantic caching and agent memory, and how these patterns apply to teams building AI systems across the Nordics.
BCG’s 2026 study of more than 300 Nordic executives found that companies in the region expect AI to drive revenue growth of ~30% and cost reductions of ~25% by 2029.
Semantic Caching for AI Apps
LLMs are powerful but slow and expensive, and traditional caching doesn’t help when users ask the same question in different ways. Semantic caching changes that by matching on meaning rather than exact text, so previous responses can be reused.
We’ll cover:
- How Redis powers semantic caching in production
- Up to 15× faster responses and 30%+ lower costs
- Live demo
Agent Memory Server for AI Apps
AI agents quickly show limitations: they forget context and repeat themselves. Adding more context isn’t the solution; it’s slow and expensive. Agents need real memory across conversations and sessions.
We’ll cover:
- Why stateless LLM calls fall short
- Durable, low-latency memory with Redis
- Live demo
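The memory pattern itself is simple to sketch. Because each LLM call is stateless, the application persists salient facts per session and replays only the relevant ones as context. The sketch below is illustrative only (an in-memory dict, with hypothetical `AgentMemory`, `remember`, and `recall` names); a production setup would back this with a durable, low-latency store such as Redis.

```python
from collections import defaultdict

class AgentMemory:
    """Persist facts per session so a stateless LLM can 'remember' users."""

    def __init__(self) -> None:
        # session_id -> ordered list of remembered facts
        self._store: dict[str, list[str]] = defaultdict(list)

    def remember(self, session_id: str, fact: str) -> None:
        self._store[session_id].append(fact)

    def recall(self, session_id: str, limit: int = 5) -> list[str]:
        # Return only the most recent facts instead of the whole history,
        # keeping the prompt small rather than "adding more context".
        return self._store[session_id][-limit:]

memory = AgentMemory()
memory.remember("user-42", "prefers metric units")
memory.remember("user-42", "based in Stockholm")

# Facts survive across calls and are prepended to the next prompt.
context = memory.recall("user-42")
prompt = "Known about user: " + "; ".join(context)
print(prompt)
```

The key design choice is the `limit` in `recall`: bounded retrieval keeps each call fast and cheap while memory itself stays durable across sessions.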
Speakers

Redis
Samuel Agbede
Developer Advocate
Register now
Join the Nordics AI community and explore how Redis can support your AI workloads.
Get started with Redis today
Speak to a Redis expert and learn more about enterprise-grade Redis today.