Upcoming

Analytics Data Engineer (ADE) Bench, ft. Benn Stancil
Discover how ADEbench measures real-world analytics engineering tasks for AI-assisted workflows, why it matters for teams building with dbt, and how to run it yourself.
Featuring benchmark co-author Benn Stancil, this session will unpack the design, demo the workflow, and share early takeaways to help you adopt the Model Context Protocol (MCP) with confidence.
- What you’ll learn
  - What ADEbench evaluates across accuracy, reliability, and speed
  - How to run the benchmark and interpret results for your stack
  - Practical guidance for adopting dbt MCP and agentic workflows
  - When Fusion’s reliability advantages may matter for AI-driven development
- Who should attend
  - Analytics engineers
  - Data engineers
  - AI engineers and technical decision makers
- Speakers
  - Benn Stancil, benchmark co-author and Founder, Mode
  - Jason Ganz, Senior Manager of AI Strategy and Developer Experience, dbt Labs
- When
  - December 9 (virtual), 7pm London / 2pm New York / 11am California
- Why attend
  - Elevate your team’s approach to AI-assisted data development
  - Get hands-on with a credible, open benchmark you can rerun and extend
  - See how to quantify improvements and share results with your org and the community
Register now to save your spot and get the replay link.
Meet the speakers:

Benn Stancil
Founder
Mode

Jason Ganz
Senior Manager of AI Strategy and Developer Experience
dbt Labs