3 Signals You Can’t Ignore from the 2025 Hacker-Powered Security Report
Cybersecurity defense has never been more complex.
Organizations are staring down a wave of new vulnerabilities as widespread implementations of AI continue, and the risks themselves are evolving from common cross-site scripting bugs to something more systemic.
As a result, the subsequent exploits are overloading triage teams. The latest Hacker-Powered Security Report charts the 210% jump in valid AI reports in just the past year.
But while AI is disrupting typical cybersecurity strategies, it’s also empowering and transforming security researchers. “Bionic hackers”, the nearly 70% of researchers using AI tools to supercharge their workflows, are driving record efficiency to help security leaders maintain a safe environment.
HackerOne pulled in experts to break down the challenges and opportunities of this shifting landscape in a recent webinar. Their advice: when cybersecurity seems chaotic, establishing order is key.
Top Themes Defining AI Security
Luke Stephens, founder & CEO of Haksec, hosted the panel of experts through the top three themes discovered in the latest report.

These shifts have implications for security leaders through 2026 and beyond. Our guest experts weigh in.
Experts Read the Signals
1. Valid AI reports jumped 210% YoY, and prompt injection attacks surged 540%
This increase in reports includes prompt injections that may reveal Personally Identifiable Information (PII) or other sensitive data, and if an agent or chatbot is connected to APIs, that can lead to state changing actions beyond just generating text.
Model Context Protocol (MCP) servers, a more recently introduced interface allowing AI agents to tap into pre-defined knowledge bases, makes hooking up AI to sensitive data even easier today.
“As somebody with their hands dirty hacking stuff, the amount of integration that we've seen of AI into actual business logic functionality has just skyrocketed in the past year,” said panelist Justin Gardner, a bug bounty hunter and host of the Critical Thinking podcast. “And I think that's where a lot of these actual vulnerabilities are are coming from”
And the relationship between AI and vulnerability reporting works in two directions, as panelist and HackerOne Director of Triage Jewel Timpe notes humans are submitting reports on AI assets, while she also sees AI-generated reports.
“There are security teams looking at the unintended consequences of them implementing AI into their systems, and then there are hackers finding these unintended consequences,” she said.
Within this evolving security front, the experts say an increase in payments for AI-related findings and an increase in AI assets being added to testing scopes is a clear indication that organizations see more value in this research.
Watch the webinar: Securing the Age of AI Autonomy: Priorities for 2026
2. IDOR reports have grown 116% over five years, while XSS has nearly flatlined
The increase in AI implementation is also shifting the makeup of submitted reports. Common XSS bugs that have been a staple for researchers to uncover over the past several years are declining in number for the first time, Stephens said.
In their place, more systemic vulnerabilities are climbing:
- Valid IDOR reports jumped 29% since 2024
- Valid Improper Access Control (IAC) reports increased 18% YoY
Mehan Kasinath, VP of Enterprise Information Security at IAC, sees the game changing quickly, and that security leaders must adapt to these new threats—misconfiguration, access control, and logic flaws.
There’s an inherent distrust of AI, Gardner says, so the security boundary lies where organizations expose the AI to data. Even though companies can convince their agents or chatbots to not release that information under certain conditions, malicious actors may find a workaround.
“As soon as the hacker comes along who is just a little bit more convincing than you, your data is gone,” he said.
3. 67% of researchers now use AI to speed up testing and reduce repetitive work
The good news? The AI security battle isn’t one-sided. Alongside the surge in valid AI reports, researchers have been training.
Stephens noted the growth of “bionic hackers”, defined as those who are combining creativity and contextual reasoning with speed and the scale and the autonomy of AI. The latest report revealed a 10 year-over-year percentage point increase in upskilling with AI and LLM security strategies.
Researchers are leveraging automation tools to boost their productivity through report writing, proof-of-concept generation, refining exploit code, and brainstorming.
This reminded Gardner of a recent conversation with fellow panelist and PortSwigger Director of Research, James Kettle, who mentioned he uses deep research with AI to get up to date on specific technology.
“I think that AI really helps develop refined and consistent proof of concepts for your vulnerabilities that you're submitting, and also helps tremendously with synthesizing data on a specific framework or technology stack that you may not be familiar with,” he said.
For Timpe, however, it’s not difficult to identify AI-generated reports from researchers. These can seem valid on the surface, she said, but stressed the importance of clear communication with researchers to confirm the true root of the vulnerability.
“HackerOne's perspective is that AI can get us so far, and AI validation can get us so far,” she said. “But, from our side of things, a human in the loop is always gonna be the one that has to sit there and ask, ‘Is this real?’”
Expert Insights: What is One Action Security Leaders and Researchers Should Take Next?
Security leaders and researchers share a common goal: progress through action. Automate what’s routine, focus human expertise where it matters, and don’t wait for perfect policies to start operationalizing AI security. Small, continuous improvements, like refining prompts, testing processes, and embracing automation, create a compounding edge.
We asked the experts a final question: What is the one action to take in the next 12 months?
For Security Leaders: Automate the Simple, Strengthen the Complex
"Automate what's easy, Focus your human resource on what is complex, so your design, your configuration, your testing, and then continue to fund security visibility tools because the attackers are not disappearing. They are just getting smarter."
—Mehan Kasinath, VP of Enterprise Information Security at IAC
"Operationalize AI security now. Don't wait for a perfect policy. Just start building now. Testing, validation, automation around your AI systems that you already have."
—Jewel Timpe, Director of Triage at HackerOne
For Researchers: Iterate, Automate, and Amplify
"Get the prompts you commonly use. Put them in a GitHub repo and iterate on them over time. Make them one percent better every time you try to use it. And as the technology catches up, that is going to be extremely valuable."
—Justin Gardner, bug bounty hunter, host of Critical Thinking podcast
"Explore ways to use automation to amplify your personal edge. This is way easier than it's been in all of history, and the potential that you can unlock by doing this is absolutely insane."
—James Kettle, Director of Research at PortSwigger
Turn AI Chaos into Clarity
As vulnerabilities evolve from traditional bugs to complex logic flaws, researchers are using AI to match pace and enhance efficiency. Security leaders must operationalize AI security now, automate the routine, and keep humans in the loop for what truly matters.
Hear directly from HackerOne experts and researchers in the full webinar



