Twenty-five years in the past, Jay Bavisi based EC-Council within the aftermath of 9/11 with a simple premise: if attackers perceive techniques deeply, defenders want to know them simply as effectively. That concept led to Licensed Moral Hacker (CEH), which went on to develop into one of the vital well known credentials in cybersecurity.
Bavisi thinks we’re at an analogous inflection level once more—this time with AI.
The know-how is transferring quick. The workforce isn’t. And identical to the early days of software program improvement, a lot of the consideration is on what AI can do, not on the way to deploy it safely, responsibly, or at scale.
“We’re again in that period the place constructing one thing feels cool,” Bavisi informed me. “Within the early days of internet improvement, safety and governance had been afterthoughts. We’re doing the identical factor once more with AI—performance first, use circumstances first, and solely later asking what the dangers are.”
That’s the hole EC-Council is attempting to deal with with the biggest growth of its portfolio in 25 years: four new AI certifications and a revamped Licensed CISO program.
The Expertise Hole Isn’t Hypothetical
The information behind this push isn’t refined. IDC estimates unmanaged AI danger may attain $5.5 trillion globally. Bain tasks a 700,000-person AI and cybersecurity reskilling hole within the U.S. alone. The IMF and World Financial Discussion board have each landed on the identical conclusion: entry to know-how isn’t the constraint—persons are.
I’ve spent the final couple of years speaking with executives about AI, and the tone has shifted. Early on, practically everybody insisted AI wasn’t going to exchange jobs. It grew to become virtually ritualistic. Comprehensible, positive—however not solely sincere.
These days, the messaging has modified. Some roles will disappear. That’s not controversial anymore. The extra correct framing has at all times been: AI in all probability gained’t take your job, however somebody who is aware of the way to use AI higher than you would possibly. That’s the actual danger—and the actual alternative.
What EC-Council Is Really Launching
The brand new certifications are constructed round a framework EC-Council calls ADG: Undertake, Defend, Govern. It’s meant to present organizations a approach to consider AI intentionally, slightly than defaulting to “simply purchase a subscription and see what occurs.”
“It’s not nearly selecting Claude or Gemini or GPT,” Bavisi stated. “Your knowledge, your buyer info, your enterprise processes all get pulled in. You want guardrails.”
The 4 certifications are role-specific:
- AI Necessities (AIE) is baseline AI fluency—sensible, not theoretical.
- Licensed AI Program Supervisor (C|AIPM) focuses on implementing AI packages with accountability and danger administration.
- Licensed Accountable AI Governance & Ethics Skilled (C|RAGE) targets governance gaps, aligning with frameworks like NIST AI RMF and ISO/IEC 42001.
- Licensed Offensive AI Safety Skilled (COASP) teaches practitioners the way to assault LLM techniques so that they perceive the way to defend them.
That final one feels particularly on-brand. It’s primarily the CEH mindset utilized to AI: you possibly can’t defend what you don’t perceive.
Why This Isn’t Educational
Bavisi shared a current instance that places the urgency into perspective. EC-Council took half in a managed check with a top-ten international insurance coverage firm. They in contrast conventional human-led pen testing towards the AI strategy.
Throughout three rounds, people discovered 5 whole vulnerabilities. The AI discovered 37.
That’s not an indictment of human ability. It’s a reminder that AI doesn’t get drained, doesn’t neglect, and doesn’t function throughout the similar constraints. The job doesn’t disappear—however the expectations round the way it’s accomplished change dramatically.
The CISO Position Is Altering Too
Alongside the AI certifications, EC-Council up to date its Licensed CISO program to model 4. Safety leaders are actually accountable for techniques that study, adapt, and make choices autonomously, however that’s not what most CISOs educated for a decade in the past.
The up to date curriculum displays that actuality—much less guidelines safety, extra governance, danger possession, and accountability in AI-driven environments.
Why This Issues
Certifications don’t magically make somebody an professional. I’ve collected sufficient of them over time to know that. However they do matter. They open doorways. They sign baseline competency. And proper now, that sign carries extra weight than normal.
“There are cloud engineers and GRC professionals all over the place asking the identical query,” Bavisi stated. “How do you do governance and danger with AI? Till now, there haven’t been actual frameworks or actual coaching packages.”
AI isn’t slowing down. The workforce has to catch up. EC-Council is betting that structured, role-based schooling—grounded in sensible actuality slightly than hype—can assist shut that hole. Given what they did with CEH, it’s a guess price taking note of.
