
Davos is a unique experience. For one week, a tiny town in the Swiss Alps hosts a few thousand people, including more than 1,000 CEOs and over 75 world leaders. The town is small, the concentration of interesting people is unusually high, and almost any random encounter turns into a serious conversation.
This year, every conversation converged on AI. Industry did not matter. Geography did not matter. Role did not matter. Everything came back to AI.
Here are five insights from a week of those conversations.
Nearly every CIO I spoke with asked some version of the same question. We can generate code fast, but what comes next?
AI coding assistants went mainstream faster than anything we have seen before. Code creation is no longer the constraint. Operating that code is.
Every line of code runs on real infrastructure. It has dependencies. It serves unpredictable traffic. It needs to be debugged, secured, documented, and maintained, often for years. As one CIO put it, “I was assuming two years of development and ten years of running.” But now that development is compressed, what happens to the running?
The cost of writing code is now a fraction of the total lifecycle cost. The answer is not to slow down; it is more AI. We need agents that can operate production as effectively as agents generate code. Otherwise the bottleneck simply moves downstream and shipping velocity stalls.
Coding agents optimize for generation speed. Production agents must optimize for understanding complex systems, reasoning across domains, and taking action when the stakes are high. These are fundamentally different problems.
AI touches every system across infrastructure, security, applications, data, and operations. The CIO is now tasked with bringing AI into the enterprise, and the role is becoming the organization's center of gravity. CIOs are turning IT from a support function into a strategic partner. They play a pivotal role in defining how their organizations compete by deciding where AI gets embedded, what it can access, and how it operates.
The CIOs I met are leaning into this expanded responsibility. And the enterprises moving fastest on AI were the ones where the CIO had clear executive sponsorship and authority to act.
There's a narrative that AI agents will replace systems of record entirely. That’s not true, but their role needs to evolve or they will be sidestepped.
Systems of record used to be the full stack: data creation, storage, workflows, analytics, and actions. AI agents are unbundling this stack. They are becoming the "system of intelligence" on top of systems of record, the layer where reasoning happens, actions get taken, and workflows run.
The new moat for systems of record will be the quality of context they can provide to AI agents. Raw data access through APIs isn't enough. Agents need context like relationships between objects, history of changes, and meaning behind the numbers.
SaaS platforms that expose this context will become essential infrastructure for agents. The ones that only expose tables and fields will become commoditized storage. And the ones that protect context behind walled gardens will find their moat eroding as agents route around them.
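To make the distinction concrete, here is a minimal, hypothetical sketch of the difference between raw record access and the kind of context an agent can actually reason over. The field names and values are illustrative only, not any particular platform's API.

```python
# Hypothetical sketch: raw data access vs. context-enriched access for an agent.
# All field names and values are made up for illustration.

raw_record = {
    "invoice_id": "INV-1042",
    "amount": 18500,
    "status": "overdue",
}

context_enriched_record = {
    "invoice_id": "INV-1042",
    "amount": 18500,
    "status": "overdue",
    # Relationships between objects: what this record is connected to.
    "relationships": {
        "account": "ACME Corp (enterprise tier, renewal in 45 days)",
        "open_support_tickets": 3,
    },
    # History of changes: how the record reached its current state.
    "history": [
        {"date": "2025-11-01", "event": "invoice issued"},
        {"date": "2025-12-01", "event": "payment failed, card expired"},
    ],
    # Meaning behind the numbers: business semantics an agent can act on.
    "semantics": {
        "amount_is": "annual subscription renewal",
        "overdue_means": "auto-suspension in 15 days unless resolved",
    },
}
```

An agent handed the first payload can only report a number; an agent handed the second can reason about risk and take a sensible next action. That gap is the moat.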
The build-vs-buy debate runs deep in engineering culture. But the enterprises moving fastest on AI are pragmatic about it.
AI is developing faster than enterprise teams can build internal solutions. By the time you've built your own, the frontier has shifted. One CIO running tens of thousands of engineers put it bluntly: "Making AI work on 20 years of legacy systems is a full-time problem. I need my best engineers on what we sell, not what we run."
Every senior engineer working on internal AI infrastructure is an engineer not working on your core product. When velocity pressure increases but headcount stays flat, that opportunity cost compounds.
The fastest enterprises focus engineering on core IP and buy AI that operates on messy, real-world systems. Not because they can't build, but because the problem moves too fast to solve while also building what you sell.
Until recently, AI conversations centered on efficiency: automating boilerplate, reducing headcount, doing more with less. That is changing. At Davos this year, executives were asking different questions. How does AI drive revenue? How does it prevent churn? How does it accelerate the business?
Cost-cutting is about doing the same things cheaper. Revenue protection and acceleration are about doing things that weren't possible before. That's a different question, and it requires a different approach to building.
There's something about in-person conversation that cuts through the noise. You can read a hundred articles about AI strategy, but the conversation is different when you’re sitting across from CEOs setting their company strategy and CIOs who are deploying it. The pattern recognition happens faster when you're in the room.
Davos was strong validation of my hypothesis that AI is accelerating and that 2026 is when we will see broad adoption.
World-class engineering teams are relying on agents to both build and run software. The next few months will be fascinating as software engineering is reshaped faster than ever.
Excited to get back to building.

Spiros Xanthos
Founder and CEO
Spiros is the Founder and CEO of Resolve AI. He loves learning from customers and building. He helped create OpenTelemetry and founded Log Insight (acquired by VMware) and Omnition (acquired by Splunk). Most recently, he was SVP and GM of the Observability business at Splunk.
