With Cloud Next '26 rapidly approaching, we couldn't be more excited to showcase how Google Cloud empowers organizations to scale smarter, faster, and more efficiently.
Be sure to anchor your experience by attending the Infrastructure spotlight by Mark Lohmeyer, VP & GM of AI and Computing Infrastructure, where he will detail the future of our AI and compute ecosystem. And to learn more about Google’s agentic cross-cloud infrastructure, the spotlight by Sachin Gupta, VP & GM Infrastructure & Solutions Group is a great starting point.
To help you navigate the immense opportunities at Cloud Next, we've curated a set of essential breakout sessions across cloud infrastructure: Compute, AI Infrastructure, and Google Kubernetes Engine (GKE). Here are our top picks to optimize your Next agenda:
1. Overview and big picture
If you want to understand the strategic direction of Google Cloud infrastructure, this is where you start. We are covering the massive momentum behind our AI Hypercomputer and Compute Engine's AI-ready architecture, alongside the latest advancements in Google Distributed Cloud. This is also a particularly special year after wrapping up a decade of innovation on GKE, so join us as we explore what's new on GKE and how it helps businesses transform with AI.
BRK2-110 - What’s new with GKE: A look towards the next decade
BRK2-118 - What’s new for AI on GKE: Training, serving, and agents
BRK1-075 - What's new with Google Distributed Cloud?
BRK1-068 - AI hypercomputer: Resilient AI infrastructure at scale
BRK2-166 - What’s new with Compute Engine's workload and AI-ready infrastructure
BRK2-111 - Automating excellence: How Gemini and Config Connector help create 10x cloud teams
2. Migration and modernization
Transforming legacy environments is no longer just about moving to the cloud; it is about establishing a secure, AI-ready foundation wherever your workloads need to live. This category explores how to modernize complex enterprise infrastructure, from mainframes and VMware to Oracle, and seamlessly extend those capabilities to the edge and across multi-cloud environments. Discover how to reimagine modernization with Gemini and AI-driven migration factories.
BRK2-123 - From assessment to production: Building an AI-driven migration factory
BRK2-180 - Unlock Google Cloud's migration secrets: Security, speed, scale
BRK2-181 - Reimagining mainframe modernization in the Gemini era
BRK2-183 - Accelerate your path to AI readiness with Google Cloud VMware Engine
BRK2-184 - Migration accomplished: A customer panel on moving to an AI-ready cloud
BRK1-143 - Beyond lift & shift: Building AI-powered Oracle Workloads on Google Cloud
BRK2-195 - AI at the edge: Transform operations at the edge with Google Distributed Cloud
3. High-performance compute and AI infrastructure
The bedrock of any modern AI application is powerful, efficient, and scalable compute. In this category, we are diving deep into the hardware and architecture powering frontier AI, showcasing the latest advancements across our Cloud TPU and GPU roadmap. You will hear directly from industry pioneers like OpenAI, Anthropic, and Citadel on how they are architecting their hybrid HPC and Kubernetes clusters to push the absolute limits of inference, training, and research at scale.
BRK2-171 - The latest from Google’s TPU roadmap: Architecting for frontier AI
BRK2-125 - Build and scale SOTA inference: High performance on TPUs and GPUs with llm-d
BRK2-120 - How OpenAI builds Kubernetes GPU clusters
BRK2-124 - Build resilient AI: Maximize TPU performance and scale with GKE workloads
BRK2-172 - Scaling Claude: Inside Anthropic’s TPU strategy and architecture
BRK2-192 - Scaling Cloud’s infrastructure for the AI era
BRK2-176 - Accelerating alpha: Citadel's hybrid HPC and AI strategy
BRK2-177 - Next-gen pharma R&D: Redefining discovery with HPC and agentic AI
4. Agentic AI and the full AI workload lifecycle
Moving from AI experimentation to production requires orchestration and highly specialized infrastructure. This category explores the lifecycle of AI workloads, from large-scale model pre-training and reinforcement learning to cost-effective inference on CPUs and TPUs. Dive into the practical deployment of secure, agentic AI applications to rapidly accelerate developer velocity.
BRK2-112 - Untrusted code, unprecedented speed: High velocity runtimes for AI agents
BRK2-126 - Vibe coding agentic apps: Build secure workflows with GKE and Data Cloud
BRK2-194 - Build agentic AI with Gemini & dev platforms on Google Distributed Cloud
BRK3-039 - Improve AI developer velocity with Agentic cross-cloud network
BRK1-070 - Accelerating the full lifecycle of large-scale model pre-training and RL fine-tuning
BRK1-067 - CPU infrastructure for the age of inference: Lessons from industry leaders
5. Scale, performance, and cost
For the practitioners engineering at the cutting edge, balancing extreme performance with cost efficiency is the ultimate challenge. This highly technical track explores how to build a global compute fabric and orchestrate secure, planetary-scale AI on Kubernetes. Discover how to navigate infrastructure complexity with Gemini, implement next-gen FinOps, and leverage advanced inference playbooks to maximize your compute value without ever compromising on reliability.
BRK2-127 - Engineering the future of Kubernetes for AI at scale
BRK1-069 - Scale open model serving on TPUs
BRK2-121 - Snap-scale AI: Building a global compute fabric with GKE custom compute classes
BRK2-114 - Limitless AI orchestration on GKE: Powering secure, planetary-scale AI
BRK2-113 - The GKE inference playbook: Optimizing cost and performance
BRK1-072 - Next-gen FinOps for the AI era
BRK2-167 - Intelligent compute infra: Designing for performance, reliability, and cost
Register now. We hope to see you in Vegas, where you can join fellow visionaries, practitioners, and Google engineering experts to architect the future of cloud infrastructure. Whether you are scaling the next massive foundation model or celebrating a decade of Kubernetes innovation alongside us, we cannot wait to help you unlock new possibilities for your organization. See you there!