AI and security at Black Hat: 5 key takeaways from a security expert panel

In late July, we published new research on the risks of unmanaged AI, revealing four major security challenges companies face when AI slips under the radar.

Those findings set the stage for a lively expert panel at Black Hat, where security leaders explored “Weaponized Autonomy: The rise of AI agents as enterprise threat vectors.” The panel included:

  • Joe Carson, chief security evangelist and advisory CISO, Segura
  • Anand Srinivas, vice president of AI, 1Password
  • Wendy Nather, senior research director, 1Password
  • Dave Lewis, global advisory CISO

The conversation confirmed much of what our research uncovered: employees are adopting AI at a breakneck pace, governance is lagging, and opportunity and risk are growing in equal measure.

As organizations race to embrace AI, the panelists dug into the key considerations security leaders should keep in mind as they integrate AI agents and solutions:

1. Zero trust principles need an AI upgrade

Panelists agreed: zero trust isn’t going anywhere. Its core principles of validating every access request, minimizing privilege, and verifying continuously are still the right foundation.

The challenge? Wendy Nather noted that zero trust itself will need to evolve. “Now, most of the factors that we’re used to using for zero trust are bound to a human being: biometrics and physical presence. With AI, we’re going to have to rethink a lot of that,” she said.

AI introduces non-human identities into the equation. Enterprises will need to extend their access controls to AI agents and tools, applying least privilege, just-in-time access, and revocability. Think of it as zero trust, but now applied to our new AI agent “digital colleagues.”
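
To make that concrete, here is a minimal Python sketch of what those three controls could look like in an agent credential broker. Everything in it (the AgentAccessBroker, the scope names, the token format) is a hypothetical illustration of the pattern, not a real or 1Password-specific API.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class AgentGrant:
    """A short-lived, narrowly scoped credential for one AI agent (illustrative)."""
    agent_id: str
    scopes: frozenset      # least privilege: only the scopes the task needs
    expires_at: float      # just-in-time: short TTL instead of standing access
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

class AgentAccessBroker:
    """Issues, checks, and revokes grants for AI agents (hypothetical sketch)."""

    def __init__(self) -> None:
        self._grants: dict[str, AgentGrant] = {}

    def issue(self, agent_id: str, scopes: set[str], ttl_seconds: int = 300) -> AgentGrant:
        grant = AgentGrant(agent_id, frozenset(scopes), time.time() + ttl_seconds)
        self._grants[grant.token] = grant
        return grant

    def authorize(self, token: str, scope: str) -> bool:
        grant = self._grants.get(token)
        if grant is None or time.time() > grant.expires_at:
            return False               # revoked or expired: verify continuously
        return scope in grant.scopes   # deny anything outside the original grant

    def revoke(self, token: str) -> None:
        self._grants.pop(token, None)  # revocability: cut off an agent immediately

broker = AgentAccessBroker()
grant = broker.issue("report-summarizer", {"crm:read"}, ttl_seconds=120)
assert broker.authorize(grant.token, "crm:read")
assert not broker.authorize(grant.token, "crm:write")   # least privilege holds
broker.revoke(grant.token)
assert not broker.authorize(grant.token, "crm:read")    # access ends on revocation
```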

2. Criminals are getting creative fast

Until recently, AI hadn’t made a major appearance in the cybercriminal toolkit. That’s changing quickly, as attackers have started to take advantage of generative AI and AI agents. The panel shared examples of attackers using AI to:

  • Accelerate the analysis of stolen data to fine-tune ransom demands.
  • Replace human operators with AI bots, making mass outreach over messaging services, email, and other channels easier than ever.

Joe Carson has also seen cybercriminals use AI to expand into new regions by overcoming language barriers. He shared, “The Estonian language, like Finnish, is very complex, so we very seldom get a lot of phishing attacks. But at the end of 2023, it accelerated significantly. We started seeing with artificial intelligence that language is no longer a barrier.”

As AI continues to lower barriers to entry for attackers, operations are accelerating, and scams are becoming sharper, more convincing, and harder to spot.

3. Shadow AI and governance gaps are a recipe for trouble

One theme was loud and clear: many CISOs don’t have visibility into how their organizations are using AI.

The panel pointed to research showing that more than half of security leaders (56%) estimate that between 26% and 50% of their AI tools and agents are unmanaged. Dave Lewis noted that while CISOs have governance frameworks for older IT implementations, most haven’t adapted them for AI. As one CISO at a roundtable in Vienna put it, “We closed the door to AI products, but they’re now coming through the window.”

Their advice to get ahead of the AI governance challenge? Start with inventory and visibility, then implement clear governance rules that allow security to act quickly when unauthorized AI use appears.
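
As a sketch of that “inventory first” advice, the hypothetical snippet below compares AI tools discovered in the environment (in practice, the discovery feed might come from SSO logs or endpoint telemetry) against an approved inventory, and flags anything unmanaged for review. All tool names, fields, and rules are invented for illustration.

```python
# Approved AI inventory: each entry records an owner and what data the tool
# may touch. Names and policy fields are invented for this sketch.
APPROVED_AI_TOOLS = {
    "chatgpt-enterprise": {"owner": "it", "data_allowed": "internal"},
    "copilot": {"owner": "engineering", "data_allowed": "source-code"},
}

def classify_discovered_tools(discovered: list[dict]) -> list[dict]:
    """Tag each discovered tool as managed or shadow AI, with a follow-up action."""
    findings = []
    for tool in discovered:
        approved = tool["name"] in APPROVED_AI_TOOLS
        findings.append({
            "name": tool["name"],
            "user": tool["user"],
            "status": "managed" if approved else "shadow",
            # A clear, pre-agreed rule lets security act quickly instead of
            # debating each case: unmanaged tools open a review ticket.
            "action": "none" if approved else "open-review-ticket",
        })
    return findings

# Example discovery feed (hypothetical):
discovered = [
    {"name": "copilot", "user": "dev-17"},
    {"name": "random-ai-notetaker", "user": "sales-03"},
]
for finding in classify_discovered_tools(discovered):
    print(finding)
```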

4. Business demand is outpacing security preparedness

For many companies, AI has gone from “maybe” to “must-have” in record time. But panelists cautioned that hype is driving adoption faster than security can keep up.

Although securing AI shares similarities with past emerging technologies, the people building it are changing. Wendy Nather observed, “With web applications and then mobile, new groups came in who hadn’t learned the security lessons we had. With AI, we’re going to see a whole new crowd that doesn’t understand the practices we’ve improved, and they’ll make those same mistakes all over again.”

The result is a widening gap between business ambition and safe deployment. They recommended tabletop exercises to define how AI should be handled, including scenarios where security might need to cut off shadow AI, even if it’s the CEO using it.

5. Using AI can be a superpower and a liability

The closing sentiment was balanced: AI is a transformative tool for automation, speed, and efficiency, but it’s also a way to “shoot yourself in the foot.” Without strict controls, over-permissioned AI agents and poorly scoped applications can expose sensitive data and create new attack surfaces.

Anand Srinivas stressed that security must be paired with usability. “It also has to be easy to use, because if it’s not, people will find a way to circumvent it,” he said.

Looking forward, panelists see promise in agent-to-agent security standards, federated identity models, and extending digital wallet concepts to AI. Each of these could give enterprises better ways to manage identity, authentication, and access for AI and humans alike.

Final word

The takeaway from this panel wasn’t “fear AI,” it was “govern AI.” With 54% of security leaders admitting their governance enforcement is weak, the industry is running without the guardrails it needs for safe, successful AI adoption. The good news? It’s not too late. If organizations put the proper safeguards in place, AI becomes a force multiplier for security programs—accelerating detection, response, and resilience. Ignore them, and it could become the next big vulnerability we all scramble to contain.
