Claudionor Coelho is the Chief AI Officer at Zscaler, responsible for leading his team to find new ways to protect data, devices, and users through state-of-the-art applied Machine Learning (ML), Deep Learning, and Generative AI techniques. Prior to joining Zscaler, he served as Chief AI Officer and Senior Vice President of Engineering at Advantest. Previously, Coelho was Vice President and Head of AI Labs at Palo Alto Networks. He also held ML and deep learning roles at Google.
Zscaler focuses on accelerating digital transformation by enabling organizations to achieve greater agility, efficiency, resilience, and security. The company’s cloud-native Zero Trust Exchange platform is designed to protect users from cyberattacks and data loss by securely connecting users, devices, and applications, regardless of their location. Zscaler serves thousands of customers worldwide, emphasizing robust security and seamless connectivity.
As Zscaler’s first Chief AI Officer, how have you shaped the company’s AI strategy, particularly in integrating AI with cybersecurity?
Zscaler has made significant advancements in AI for cybersecurity, which set it apart from competitors. Zscaler’s Zero Trust platform leverages AI to detect and stop credential theft and browser exploitation from phishing pages. The threat intelligence from over 400 billion daily transactions delivers real-time analytics that enhance defense against sophisticated cyberattacks. Additionally, we collaborate with NVIDIA to deliver generative AI-powered security and IT innovations like the Zscaler ZDX Copilot, which simplifies IT and network operations, while processing data from the Zero Trust Exchange™ platform to proactively defend enterprises against threats. Finally, with the Avalor acquisition, we have extended Zero Trust Exchange™ capabilities using Data Fabric for Security. With over 150 pre-built integrations, it identifies and predicts critical vulnerabilities while improving operational efficiencies.
You’ve founded multiple companies, including Kunumi, and held leadership roles in top companies. How has your entrepreneurial background influenced your approach as a corporate AI leader at Zscaler?
When I was SVP of Engineering at Jasper Design Automation, an Electronic Design Automation startup, we competed against multi-billion-dollar companies but achieved 70-80% market share because of our innovation, business processes, and agility. One of the books I always referred to during our strategy meetings was “Competing on the Edge: Strategy as Structured Chaos” by Shona L. Brown and Kathleen M. Eisenhardt. Although this book is from 1998, it still applies to what we are seeing with Generative AI today.
Never before has a world-changing technology moved this fast. Motorola engineer Martin Cooper made the first cellular phone call in 1973, but it took the world 10 years until the first commercial network opened and 24 more years until the iPhone was released, changing the way we interact with computing machines.
ChatGPT was released in November 2022. The next year, in a WEF-sponsored seminar, we discussed that Artificial General Intelligence (AGI) was coming soon. At the time, only a few of us recognized that we could use Agents to build many intelligent systems by filling the gaps of LLMs with tools, even before AGI. In 2024, the discussion shifted to AI Agents, and by the end of the year we are starting to see several intelligent AI Agents (such as ZDX Copilot or the blogging platform Kiroku).
This pace is normally only seen in a startup environment, and it is causing tremendous stress in large organizations, which are struggling to become agile enough to accommodate a technology moving at unprecedented speed.
Given your experience leading companies in both Brazil and the U.S., what are some of the key differences between the two markets in terms of AI and cybersecurity adoption?
Discussing startups is a good way to begin illustrating the similarities and differences between the two markets, since startups are where you often see radical innovations before they reach large corporations. A common strategy for startups in Brazil has been to copy successful early-stage US startups, as US startups usually look at their internal market first (though this has been changing). However, the US has traditionally had a more stable capital system that makes it easier to start a company.
I created Kunumi in 2014 as the first Deep Learning company in Brazil; it was sold to Bradesco Bank earlier this year. In general, corporations in Brazil do not know how they will adopt Generative AI, and you are going to see a lot of mistakes, which is also true in the US. I have built four Copilots in my life. The first one, in 2016 while I was at Synopsys, was an agent that could scan the compilation and execution logs of large emulation machines, searching for information related to the user’s questions, with multi-language support. At that time, there were no transformers and no LLMs, and even translation was very different from what we have today.
In 2020, I was a researcher at Google working on Deep Learning model compression and quantization, and CERN was using what I created in its search for subatomic particles. When I realized we were in a war over data, it became clear that cybersecurity is a global problem that is not localized to one country or another. That is when I decided to move into the field.
A few months ago, I was talking to a foreign government official who told me that cybersecurity was a US problem and that his agency had nothing to worry about, only to have a cyberattack hit his organization a few weeks later.
Finally, when comparing the state of cybersecurity between Brazil and the US in terms of ransomware, the reality is that estimated ransomware costs are roughly the same in both countries.
How does the regulatory environment for AI and cybersecurity differ between Brazil and the U.S., and how does that impact innovation in these regions?
Because Generative AI is moving so fast, governments recognize the need to protect something but are often unclear about what it is they are trying to protect. What is the impact if we create laws for LLMs in 2023, only to be using AI Agents in 2024? We need regulations, but we also need an unemotional analysis of the regulatory environment to see how we can better protect local citizens.
That said, when AI makes decisions solely on exact numeric inputs representing reasons or features, the analysis is often incomplete and yields flawed real-life results. For example, if an AI algorithm makes a loan decision based on an ambiguous criterion like “probability,” and a factor like salary or race is included, you could easily see a scenario in which a person is denied a loan based on the net effect of one of those factors. With Generative AI, the problem becomes even worse because of the inability of LLMs to bring in external data to make reasoning assumptions. It is important to make sure we have regulations that do not allow flawed systems to make decisions (especially without deep supervision), as they are bound to make mistakes.
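To make the loan example concrete, here is a minimal, hypothetical sketch with hand-picked weights (not any real scoring model) showing how the net effect of a protected attribute can flip a threshold-based decision:

```python
# Toy illustration (hypothetical weights, not any production system): a linear
# loan-scoring model where the penalty on a protected attribute flips the decision.

def loan_score(salary_k: float, protected: int) -> float:
    """Return a 'probability-like' score from illustrative, hand-picked weights."""
    return 0.004 * salary_k - 0.15 * protected + 0.35

def decide(score: float, threshold: float = 0.5) -> str:
    return "approve" if score >= threshold else "deny"

applicant = {"salary_k": 60, "protected": 1}
with_attr = loan_score(applicant["salary_k"], applicant["protected"])
without_attr = loan_score(applicant["salary_k"], 0)

print(decide(with_attr), round(with_attr, 2))        # deny 0.44
print(decide(without_attr), round(without_attr, 2))  # approve 0.59
```

The specific weights do not matter; the point is that an opaque score can encode a decisive penalty that no human reviewer ever sees.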
On the other hand, I have been extremely pleased with the full self-driving capability of Tesla cars, which have been shown to drive more miles between accidents than human drivers. Yes, they make mistakes, but even in airplanes flying on autopilot, pilots need to take over the controls in case of an emergency.
Regarding cybersecurity, several US organizations (e.g., JCDC.AI, NIST, and CISA) have discussed the need to address AI and cybersecurity. Of course, in fast-paced markets or technologies, you need to continuously adapt to changes, and when they move extremely fast, you need to operate at the edge of chaos.
Zscaler’s Zero Trust Exchange is a key part of its security model. How does AI enhance this platform, and what are some of the most exciting developments in this area?
Zscaler’s zero trust architecture helps organizations create a more secure environment for AI deployments, but the platform also leverages AI in numerous ways, beginning with ZDX Copilot, which delivers generative AI-powered security innovations. Developed in collaboration with NVIDIA, the agent leverages Generative AI to proactively defend enterprises against threats and simplify IT and network operations. Zscaler has also enhanced its predictive vulnerability identification by adding Avalor’s Data Fabric for Security to the Zscaler Zero Trust Exchange. Finally, AI lives at the core of Zscaler’s zero trust platform, detecting and stopping credential theft and browser exploitation from phishing pages. Real-time analytics based on threat intelligence from over 400 billion daily transactions enhance its defense against sophisticated cyberattacks.
AI has become increasingly central in the fight against cyber threats. How do you see AI evolving to address the growing complexity of cybersecurity risks, especially in the realm of IoT and OT devices?
The threat landscape has unequivocally evolved with the advent of AI-based cyberattacks, so organizations must fight AI with AI. The major evolution will be enhancing AI solutions with additional data sources.
As the number of cyberattacks increases, we need more automation with AI to detect and address cyber risks. It is worth noting that AI and Generative AI are already being used to create new attack fronts, and because of that, we need to up our game by correlating more signals than we did before.
IoT and OT devices pose significant risks to organizations, as many IoT devices do not run up-to-date software stacks, even as Wi-Fi switches, internet-connected TVs, dishwashers, ovens, and other connected products become easy to buy. For years, articles have documented the vulnerabilities we are exposed to in IoT/OT.
We need constant awareness, and we need to enhance cybersecurity defenses by analyzing all types of data and signals to detect anomalies and potential threats. To win this game, we need state-of-the-art AI models trained on massive amounts of data in real time. Generative AI plays an instrumental role by enabling companies to analyze and summarize results for users and security operators.
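As a rough illustration of the idea (not Zscaler’s pipeline), anomaly detection over simple per-device traffic signals can be sketched with an off-the-shelf unsupervised model; the feature set here is invented for the example:

```python
# Minimal sketch: flag anomalous device behavior from synthetic traffic features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" signals per device: [requests/min, bytes out (MB), distinct destinations]
normal = rng.normal(loc=[50, 5, 10], scale=[10, 1, 3], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A device that suddenly beacons heavily to many new destinations.
suspect = np.array([[300, 80, 120]])
print(model.predict(suspect))     # [-1] means flagged as anomalous
print(model.predict(normal[:3]))  # typically [1 1 1], i.e. normal
```

In practice the signals would be far richer and correlated across users, devices, and applications, with Generative AI used to summarize the findings for operators.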
As a member of AI and Cybersecurity workgroups at the World Economic Forum, how do global discussions around AI ethics and cybersecurity shape your approach to your role at Zscaler?
Because technology is moving so fast, governments and organizations need grounding information, and I see providing it as the role of the World Economic Forum. AI and Cybersecurity each have enough need to require separate groups, but when you merge the two, it is almost a new area in itself. For example, Gartner showed this year that Generative AI increases the attack surface tremendously, extending it from prompt injection at the input and output to application code attacks, model attacks, and even plug-in attacks.
Some of these attacks are specific to LLMs like ChatGPT, but if you consider that we are moving from LLMs to AI Agents and Multi-Agent systems, you need to consider a lot more. For example, with LLMs you may care about prompt injection, sleeper-cell behavior (triggering the LLM to respond differently based on special keywords), or proprietary information leakage. With AI Agents, we also need to consider attacks on tools and data sources, even assuming that SQL injection and OS command injection may become possible again.
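A hypothetical sketch of why classic injection re-emerges when an agent wires model output directly into tools; the tool names and the injected string are invented for illustration:

```python
# If an agent tool interpolates LLM output straight into SQL, a prompt-injected
# value reintroduces classic SQL injection; binding parameters avoids it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER, owner TEXT, notes TEXT)")
conn.execute("INSERT INTO tickets VALUES (1, 'alice', 'reset VPN'), (2, 'bob', 'secret')")

def run_sql_tool_unsafe(owner: str) -> list:
    # Model output pasted into the query text becomes executable SQL.
    query = f"SELECT id, owner, notes FROM tickets WHERE owner = '{owner}'"
    return conn.execute(query).fetchall()

def run_sql_tool_safe(owner: str) -> list:
    # Parameterized query: the value is bound as data, never as SQL.
    return conn.execute(
        "SELECT id, owner, notes FROM tickets WHERE owner = ?", (owner,)
    ).fetchall()

llm_output = "alice' OR '1'='1"          # value an attacker steered the model into producing
print(run_sql_tool_unsafe(llm_output))   # leaks every row
print(run_sql_tool_safe(llm_output))     # returns nothing: no owner literally matches
```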
Furthermore, if we add multi-agent systems, where agents may reside in different locations, we have to imagine a completely new kind of network of agents communicating over protocols. People have been experimenting with thousands of agents, just like a computer network.
Finally, we need to prepare our workforce to use Generative AI, providing tools and an environment where they can operate in this new world.
You have been a strong advocate for diversity and inclusion, especially as an Executive Sponsor for Zscaler’s Latino and Hispanic ERG, Sabor. How has your cultural background influenced your leadership style and approach to AI development?
As a proud Latino born and raised in Brazil, I’m passionate about supporting and empowering the Latino and Hispanic communities at Zscaler. I feel a great sense of accomplishment in being able to contribute to a better world through cybersecurity, where we help protect society in an increasingly complex world. My values helped get me where I am today, and I am extremely proud of where I came from.
My advice would be to never forget where you came from and what you have done. Always be proud of what makes you unique, but also recognize that diversity is king. I live with myself 24 hours a day. If I only hire people who are similar to me and agree with me, I won’t increase my knowledge. Hiring people from numerous locations and backgrounds helps us to better understand the specific needs of our global customer base.
Lastly, what excites you most about the future of AI in cybersecurity, and what role do you see Zscaler playing in that future?
AI does not change the fundamentals of effective cyber defense–it highlights their importance. We anticipate seeing transparency, robust security practices, and continuous monitoring proliferate across the industry. Organizations must adopt a comprehensive approach to security, implementing advanced measures to detect and respond to threats. This includes fostering a culture of security awareness, conducting regular security audits, and collaborating with stakeholders to develop effective security strategies. By doing so, organizations can reduce the risk of breaches and protect their sensitive information.
Zscaler is committed to safeguarding user privacy, employing the most advanced techniques to anonymize data and keep it out of our LLMs, preventing the identification of individual users or organizations. While we may explore fine-tuning LLMs in the future, our strict data privacy measures, which ensure that no user data is compromised, will continue to be paramount. Our goal is to harness the power of AI to improve security without infringing on customer privacy.
Thank you for the great interview. Readers who wish to learn more should visit Zscaler.