In April 2026, I'm giving a talk at GenAI Zürich on testing LLM outputs in production. The talk covers how my team at Adobe went from a simple, testable LLM app to a complex skills-based system where a single change could break everything, and how we built a testing framework to keep it under control.

My background in distributed systems, cloud infrastructure, and application development gives me a full-stack perspective on what it takes to ship AI reliably.

Conference inquiries

Available for speaking

Interested in having me speak at your event? Get in touch.


Upcoming

Confirmed

Testing LLM Outputs: Caging the Wind or Just Another Day in the Office?

GenAI Zürich - April 2, 2026 - Tech & Startup Stage, Volkshaus Zürich - 11:00

As LLM-based applications scale and teams grow, intuition alone can no longer tell you whether a change is safe. This talk covers Adobe's journey from a simple LLM app to a sophisticated skills-based system, the shift to rigorous testing with Promptfoo, and lessons learned managing systems that can feel unpredictable.
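For a taste of what that testing looks like, here's a minimal sketch using Promptfoo's Node API, which mirrors the YAML config most teams use. The prompt, model, and assertions are placeholders for illustration, not our actual setup:

import promptfoo from 'promptfoo';

async function main() {
  // One evaluation suite: prompts, providers, and test cases together.
  const results = await promptfoo.evaluate({
    prompts: ['Summarize in one sentence: {{article}}'],
    providers: ['openai:gpt-4o-mini'],
    tests: [
      {
        vars: { article: 'Promptfoo lets teams write repeatable tests for LLM outputs.' },
        assert: [
          // Deterministic check: the output must mention the tool by name.
          { type: 'contains', value: 'Promptfoo' },
          // Model-graded check: an LLM judge scores the output against a rubric.
          { type: 'llm-rubric', value: 'Is a single, accurate sentence.' },
        ],
      },
    ],
  });
  // Each result carries a pass/fail flag; fail the CI build on regressions.
  console.log(results.results.map((r) => r.success));
}

main();

Run in CI, a failing assertion blocks the merge, which is what turns "it feels fine" into something a team can actually rely on.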


What I talk about

01

Building with LLMs

Integrating language models into real products - where they add value, where they don't, and how to make them reliable enough to ship.

02

AI Agents in Practice

Designing agent workflows that work in production: tool use, error handling, observability, and keeping humans in the loop.

03

Engineering for AI Products

The infrastructure and dev practices that make AI features maintainable - evals, versioning, deployment, and iteration speed.