Runtime Application Self-Protection (RASP)

Events

  • AI-generated Code Is Shipping to Production. Is Your AppSec Pipeline Ready for What Comes Next?

    Eighty-one percent of organizations knowingly shipped vulnerable code in the past year. That number is about to get harder to manage. AI-assisted coding tools are accelerating output across engineering teams, and Gartner projects that by 2027, at least 30% of AppSec exposures will result from AI-driven "vibe coding" practices. The code patterns are different, the release cadences are faster, and the security assumptions baked into traditional testing tooling were not built for what AI produces. Organizations are deploying AI-generated code at a pace that outstrips their ability to review it.

    The challenge is not whether to allow AI-generated code. That decision has already been made by most engineering teams, with or without security's blessing. Addressing this requires rethinking how static and dynamic testing, software supply chain security, runtime protection, API security, and developer-native tooling work together across an AI-accelerated pipeline. Security teams that do not adapt their tooling and processes now will spend the next two years in reactive mode.

    Topics include:

    • New vulnerability patterns introduced by AI-generated and AI-assisted code
    • Adapting AppSec pipelines to handle accelerated release cycles without creating bottlenecks
    • Securing the AI-driven software supply chain, from dependencies and secrets to runtime behavior

    Explore how AppSec teams are retooling their programs to keep pace with AI-accelerated development before the gap becomes unmanageable.

  • From 570,000 Alerts to 202 That Matter: Risk-based AppSec Prioritization in Practice

    Benchmark data across 178 organizations found an average of 570,000 AppSec alerts per organization. Of those, 202 represented true critical issues that required action. Across the benchmark, 95-98% of findings generated by AppSec scanners are noise: redundant, irrelevant, or low-risk items that consume engineering time without reducing actual exposure. Security teams assign developers thousands of findings to fix. Developers lose trust in the process. The findings that actually matter get buried alongside the ones that do not.

    The cost of this noise is not just wasted time. It is the erosion of the relationship between security and engineering. When developers are handed a list of 3,000 findings and told everything is critical, they stop treating anything as critical. Addressing this requires coordination across ASPM, SAST, DAST, SCA, runtime protection, and vulnerability management platforms to correlate findings with exploit intelligence, runtime context, reachability analysis, and business impact. A missing authorization check on an internal-only endpoint is a different risk than the same flaw on an internet-facing API handling payment data. Tools that can make that distinction let security teams send developers a short, high-confidence list instead of a spreadsheet full of theoretical risk.
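
    The kind of context-aware scoring described above can be sketched in a few lines. This is a hypothetical illustration only, not any vendor's actual model: the field names, weights, and threshold are all invented for the example, but they show how the same scanner finding can land above or below the cut line depending on exposure, exploitability, and reachability.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # Hypothetical context fields; real ASPM platforms expose richer signals.
    title: str
    severity: int                  # scanner severity, 1 (low) .. 10 (critical)
    internet_facing: bool          # endpoint reachable from the internet?
    exploit_available: bool        # known public exploit for this weakness?
    reachable: bool                # does reachability analysis show the path executes?
    handles_sensitive_data: bool   # e.g. payment or personal data

def risk_score(f: Finding) -> float:
    """Combine raw scanner severity with runtime and exposure context."""
    score = float(f.severity)
    score *= 2.0 if f.internet_facing else 0.5
    score *= 1.5 if f.exploit_available else 1.0
    score *= 1.0 if f.reachable else 0.1   # unreachable code paths are mostly noise
    score *= 1.5 if f.handles_sensitive_data else 1.0
    return score

def prioritize(findings: list[Finding], threshold: float = 15.0) -> list[Finding]:
    """Return only the findings whose contextual score crosses the bar."""
    return sorted(
        (f for f in findings if risk_score(f) >= threshold),
        key=risk_score,
        reverse=True,
    )

# The same flaw in two different contexts, as in the example above:
internal = Finding("missing authz check", 8, False, True, True, False)
external = Finding("missing authz check", 8, True, True, True, True)
shortlist = prioritize([internal, external])  # only the internet-facing one survives
```

    With these made-up weights, the internal-only finding scores 6.0 and is dropped, while the identical flaw on an internet-facing endpoint handling payment data scores 36.0 and tops the shortlist. That multiplicative structure is one simple way to encode the distinction the paragraph above describes.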

    Topics include:

    • Reducing AppSec alert noise through risk-based prioritization and reachability analysis
    • Correlating code-level findings with runtime context and exploit intelligence for accurate risk scoring
    • Rebuilding developer trust by sending fewer, higher-confidence findings that warrant action

    Learn how AppSec teams are cutting through the noise to focus remediation on the 2-5% of findings that represent genuine risk.
