Artificial Intelligence (AI)

Events

  • From Reactive to Resilient: Agentic AI and the Future of Cyber Defense

    Defenders have long played catch-up in the cybersecurity arms race. But agentic AI offers a chance to flip the script, enabling systems that anticipate, adapt, and respond to threats before they escalate. To stay ahead, organizations must shift from reactive security postures to resilient, autonomous systems powered by agentic AI.

    Topics include:

    • What makes agentic AI different from traditional automation
    • How agentic AI fits into the evolving cyber kill chain
    • Building trust, oversight, and resilience into autonomous systems

    Join us to gain practical insights into deploying agentic AI to stay ahead of evolving threats.

    Your SOC Has a Retention Problem. Your Tooling Might Be the Cause.

    Seventy percent of SOC analysts with five or fewer years of experience leave within three years. The typical explanation is burnout from an overwhelming threat landscape. The less comfortable explanation is that the tools meant to help analysts are making their jobs worse. Fragmented workflows, constant context-switching across disconnected platforms, and thousands of daily alerts with no actionable context are turning what should be a high-impact career into a repetitive grind. When analysts spend more time wrangling dashboards than investigating threats, the best ones leave.

    The retention problem is not just a staffing issue. It is an operational risk. Every departure takes institutional knowledge with it, increases the load on remaining team members, and widens the window for missed detections. Organizations that want to keep experienced analysts need to redesign how SOC work gets done, starting with how detection, investigation, automation, and analyst experience are delivered across the stack.

    Addressing this challenge requires coordination across SIEM, XDR, SOAR, MDR, and security analytics platforms to reduce friction, improve context, and make investigations more actionable.

    Topics include:

    • How fragmented tooling and manual workflows contribute to analyst turnover
    • Reducing cognitive load through unified investigation and automated triage
    • Building a SOC environment that retains talent by making the work sustainable

    Join us to explore how rethinking SOC tooling and workflows can address the retention crisis at its source.


    Shadow Data, AI Pipelines, and the 802,000 Files You’re Oversharing Right Now

    The average organization has more than 800,000 data files at risk from oversharing, erroneous access permissions, and inappropriate classification. That number is climbing as AI pipelines generate and ingest data faster than any manual classification effort can keep up. Half of all enterprise workloads are now cloud-based, and the rise of AI is accelerating data creation without guardrails or oversight. The result is shadow data: sensitive information scattered across environments that security teams cannot see, classify, or protect.

    Traditional data security strategies assume that most data lives in known locations with defined access controls. That assumption broke years ago. Today, 90% of business-critical documents are shared outside the C-suite, AI models are training on datasets that may contain PII or intellectual property, and unstructured content is multiplying across SaaS, cloud storage, and collaboration platforms.

    Regaining control requires visibility and coordination across data discovery, classification, access governance, and data protection controls – from DSPM and DLP to SaaS security and AI data governance.

    Topics include:

    • Discovering and classifying sensitive data across cloud, SaaS, and AI environments
    • Addressing the shadow data problem created by AI-driven data proliferation
    • Reducing oversharing risk through automated access governance and posture management

    Join us to learn how organizations are regaining visibility and control over data they did not know they were exposing.


    AI in the SOC: Separating the Tools That Actually Work From the Ones That Add More Noise

    Every security vendor now claims AI capabilities. For SOC teams already processing thousands of alerts per day, the promise is appealing: automated triage, intelligent prioritization, faster investigations. The reality is more complicated. Poorly implemented AI can generate its own layer of noise, create false confidence in automated decisions, and introduce opaque reasoning that analysts cannot validate or trust.

    The SOC teams seeing real results from AI are the ones asking the right questions before deploying it. They are auditing data quality first, defining what “automated” should and should not mean for their environment, and measuring whether AI is reducing time-to-resolution or just shifting where analysts spend their time.

    Getting this right requires alignment across detection, triage, investigation, and automation layers of the SOC – from SIEM and XDR to SOAR, MDR, and AI-driven analytics platforms.

    Topics include:

    • Evaluating AI-driven SOC tools based on measurable outcomes, not vendor claims
    • Addressing data quality and pipeline readiness before deploying AI-powered detection
    • Defining the right division of labor between automated triage and human investigation

    Join us for an honest look at where AI is delivering real value in security operations and where it is falling short.
