AI in AppSec
January 16, 2025

How AI and LLMs Will Shape AppSec in 2025

4 Predictions for AppSec in 2025

By Ken Johnson and Seth Law

In this joint blog from Seth Law at Redpoint Security and Ken Johnson at DryRun Security, we highlight how 2025 will be a pivotal year for large language models (LLMs) in AppSec. Building on the momentum of 2024, LLMs are moving from novelty to necessity, enabling deeper code analysis, automating security workflows, and providing real-time developer assistance. Organizations that adopt AI-driven AppSec will find and fix vulnerabilities faster, freeing security teams to focus on high-value tasks—and ultimately ship more secure code.

It’s no secret that artificial intelligence (AI)—particularly large language models (LLMs)—has taken the tech world by storm. While 2024 saw significant strides in how security practitioners applied AI to scanning and development workflows, 2025 looks poised to be even more transformative. Below are a few key predictions on how AI will influence application security (AppSec) and why you should pay attention.

1. AppSec Will Fully Embrace LLMs

A year or so ago, discussions about AI’s relevance in AppSec ranged from mild curiosity to outright skepticism. That skepticism has largely disappeared. Now, more and more AppSec professionals see how LLMs can automate tedious processes and supplement expert-driven reviews. In 2025, we’ll see:

Shifting from “Is AI worth it?” to “Where can we apply AI next?”
Rather than questioning whether LLMs have a place, AppSec teams will begin embedding AI-driven tools wherever they can bring tangible benefits—such as pulling in threat data, analyzing code changes, and generating automated patch suggestions.
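
To make the “analyzing code changes” idea concrete, here is a minimal sketch of a security-focused PR review call. It assumes the OpenAI Python client with an API key in the environment; the model name, prompt, and diff file are illustrative stand-ins, not any specific vendor’s workflow.

```python
# Minimal sketch: ask an LLM to security-review a unified diff.
# Assumes the OpenAI Python client (`pip install openai`) and an
# OPENAI_API_KEY in the environment; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def review_diff(diff_text: str) -> str:
    """Return an LLM-generated security review of a code diff."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an application security reviewer. Identify "
                    "injection, authorization, and data-exposure risks in "
                    "the following diff and suggest patches."
                ),
            },
            {"role": "user", "content": diff_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("change.diff") as f:  # hypothetical diff exported from a PR
        print(review_diff(f.read()))
```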

A rethinking of traditional security approaches:
LLMs excel at contextual understanding, especially as context windows grow larger and models become “agentic,” meaning they can iterate through multiple steps or queries. Security teams will rethink processes (like threat modeling) that used to be manual, consolidating them into fluid, AI-backed workflows.

2. More Nuanced, Context-Rich Analysis

One of the historical pain points with AI-based code analysis was context window length. If an LLM or AI agent couldn’t handle your entire codebase all at once, you ended up with either incomplete or inaccurate results. In 2025:

Long Context Windows and Agentic AI Will Be the Norm
When an LLM can “remember” and analyze vast portions of your code, it becomes far more capable of spotting both common and niche vulnerabilities. Agentic AI effectively chains tasks, learns from each query, and can refine results over time.
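
To make “agentic” concrete, here is one hedged sketch of the pattern: rather than a single question and answer, the model can request additional files until it has enough context to conclude. The `ask_llm` helper and the `NEED_FILE:` convention are hypothetical scaffolding, not any particular vendor’s API.

```python
# Sketch of an agentic review loop: the model may ask for more context
# (additional source files) before returning a verdict.
from pathlib import Path

def ask_llm(transcript: list[str]) -> str:
    """Placeholder: send the transcript to your model of choice."""
    raise NotImplementedError("wire up a chat-completion call here")

def agentic_review(entry_file: str, max_steps: int = 5) -> str:
    transcript = [Path(entry_file).read_text()]
    for _ in range(max_steps):
        reply = ask_llm(transcript)
        if reply.startswith("NEED_FILE:"):
            # The model asked to see another file; fetch it and iterate.
            wanted = reply.split(":", 1)[1].strip()
            transcript.append(Path(wanted).read_text())
        else:
            return reply  # final findings, grounded in the gathered context
    return "Review incomplete: step budget exhausted."
```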

Security Practitioners Will Rely on AI to Get Deeper Insights
Traditional techniques like static analysis and searches for known vulnerability patterns will be augmented by LLMs that can correlate multiple parts of a project. Rather than triaging findings one at a time, developers and AppSec engineers will benefit from LLMs that surface complex vulnerability chains and logic flaws hidden deep in the code.

3. The Rise of “Agentic” Security Orchestration

Security platforms have long offered drag-and-drop workflows for event handling—think of tools like Tines, which provide no-code automation for everything from creating Jira tickets to sending out alerts. As LLMs become more powerful:

AI-Driven Orchestration Tools Will Emerge
Imagine chaining multiple specialized AI “agents” together, where one agent monitors new CVE data while another checks your repositories for vulnerable dependencies. A third agent might spin up automated proof-of-concept exploits if it suspects an issue. This isn’t far off—several startups and open-source tools are already heading in that direction.
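
As a hedged sketch of what that chaining could look like, here are three stub “agents” wired into a simple pipeline. The agent bodies are placeholders (the CVE feed, lockfile checks, and exploit tooling they would call are assumptions); the orchestration shape is the point.

```python
# Sketch of agent chaining: each "agent" is a function that enriches a
# shared state dict. All data below is placeholder, not real findings.
def cve_monitor_agent(state: dict) -> dict:
    # Would poll an advisory feed (e.g., NVD) for newly published CVEs.
    state["new_cves"] = ["CVE-2025-0001"]  # placeholder
    return state

def dependency_agent(state: dict) -> dict:
    # Would cross-check repository lockfiles against state["new_cves"].
    state["affected_repos"] = ["payments-service"]  # placeholder
    return state

def poc_agent(state: dict) -> dict:
    # Would ask an LLM to draft a proof-of-concept to aid triage.
    state["poc_drafted"] = bool(state["affected_repos"])
    return state

def run_pipeline() -> dict:
    state: dict = {}
    for agent in (cve_monitor_agent, dependency_agent, poc_agent):
        state = agent(state)
    return state

if __name__ == "__main__":
    print(run_pipeline())
```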

Developers Get Security Assistance Without Leaving Their IDE
Over the course of 2025, many dev environments will likely gain built-in AI “assistants” that provide context-specific security advice in real time. We’ll see a blending of DevOps, security automation, and AI, helping dev teams ship secure code faster.

4. 2025: A Pivotal Year for AI-Driven AppSec

Above all, there’s a sense of optimism and excitement in the air. If 2024 was the year LLMs finally got a foot in the door, 2025 is when they’ll take center stage.

Innovation Explosion
As more companies see tangible ROI—like fewer vulnerabilities making it to production, or significant time savings in code reviews—expect a new wave of startups and product features. We’ll likely see everything from AI-based threat modeling to continuous compliance checks that happen entirely behind the scenes.

A Culture Shift
Security teams will have less manual busywork and more time to focus on high-value tasks: deeper analysis, custom threat research, and meaningful engagement with developers. AI tools, once seen as “bolt-on” solutions, will become an integral part of AppSec culture.

Conclusion

Put simply, 2025 will be the year we stop treating LLMs and agentic AI as experiments and start embracing them as must-have elements of our security strategy. From deeply contextual analyses of codebases to automated orchestrations of AppSec workflows, these tools will reshape how we plan, develop, and protect software.

The only question left is: How will you use them?

Now’s the perfect time to evaluate your current security processes, pinpoint your biggest bottlenecks, and consider how AI can help you solve them in novel ways. The creative possibilities are endless—and as the technology continues to mature, the companies that adopt intelligent AI workflows will be the ones setting the pace in AppSec.

Watch these recorded webinars with Ken Johnson: the first shares overarching lessons learned and successes with LLMs in AppSec, and the second is a hands-on workshop walking you through using LLMs to detect specific issues:

From Lessons Learned to Success: Leveraging LLMs in AppSec

Hands-On with AI: Using LLMs to Detect IDOR and Auth Flaws

If you’d like some help right now with a tool that’s already providing code insights like never before, schedule some time with us and we’ll show you how you can get started.