Speaking
GenAI Zürich 2026 - applied generative AI
In April 2026, I'm giving a talk at GenAI Zürich on testing LLM outputs in production. The talk covers how my team at Adobe went from a simple, testable LLM app to a complex skills-based system where a single change could break everything, and how we built a testing framework to keep it under control.
My background in distributed systems, cloud infrastructure, and application development gives me a full-stack perspective on what it takes to ship AI reliably.
Available for speaking
Interested in having me speak at your event? Get in touch.
Upcoming
Confirmed
Testing LLM Outputs: Caging the Wind or Just Another Day in the Office?
As LLM-based applications scale and teams grow, intuition alone is no longer enough. This talk covers Adobe's journey from a simple LLM app to a sophisticated skills-based system, the shift to rigorous testing with Promptfoo, and lessons learned managing systems that feel unpredictable.
What I talk about
Building with LLMs
Integrating language models into real products - where they add value, where they don't, and how to make them reliable enough to ship.
AI Agents in Practice
Designing agent workflows that work in production: tool use, error handling, observability, and keeping humans in the loop.
Engineering for AI Products
The infrastructure and dev practices that make AI features maintainable - evals, versioning, deployment, and iteration speed.