
NexRisx - Vulnerability Management & Security Tool Workflows

Understand how security professionals manage vulnerabilities, prioritize findings, and interface with their security tool stack, and how they view AI-assisted security operations

Study Overview Updated Jan 23, 2026
Research question: how security professionals manage vulnerabilities, prioritize findings, interface with their tool stack, and view AI-assisted operations.
Research group: 8 participants spanning SOC/blue-team (enterprise/MSP/manufacturing), GTM/Sales Ops, resource‑constrained IT support, and homelab/OSS practitioners, giving both enterprise and lean/OSS perspectives.
What they said: day-to-day work is hygiene, not heroics. Phishing/identity abuse, cloud/network misconfiguration, and patch lag/legacy systems dominate; teams aim for roughly 60/40 proactive/reactive over a quarter, but alert noise, missing asset ownership, and third‑party issues often flip weeks into firefighting.

Main insights: stacks are mature yet brittle. Identity/asset context is inconsistent, connectors are fragile, licensing drives telemetry sampling, and small, deterministic automations beat “single pane” promises; identity is the join key for triage and deduplication.
Vulnerability management is a disciplined pipeline (intake→normalize→triage→owner→remediate→validate), with priority driven by exposure, exploitability (KEV/EPSS), and asset criticality; CVSS is a starting signal, not a decision. Bottlenecks are asset truth, scanner noise, change control, and legacy systems.
Knowledge sticks when attached to tickets/runbooks/decision logs that capture the “why”; AI is valued for summarization, NL→query, and dedup only if it is citation‑first, on‑tenant/private, and human‑in‑the‑loop.

Takeaways: fund asset inventory and clear ownership; harden identity (SSO/MFA, kill legacy protocols) and invest in detection engineering to cut noise; automate patching and shift fixes into IaC/CI; standardize schemas/connectors to reduce swivel‑chair work and duplicate alerts.
For AI, build an identity‑first evidence graph that de‑duplicates to one incident, returns raw‑log citations and capacity‑aware change plans with dry‑run/rollback, and supports private deployment and transparent pricing; measure impact via MTTR reduction, duplicate‑alert collapse rate, and documentation coverage.
Participant Snapshots
8 profiles
Jessica Pena

Jessica Pena is a 44-year-old woman living in urban Raleigh, North Carolina. She uses she/her pronouns, is married, and has one child. She earned a Bachelor’s degree and works full-time. She was born in Spain and is a non-U.…

Simon Tremblay

Simon Tremblay, 33, French-speaking married father of two in Saguenay, QC, currently unemployed and pivoting from IT support to cybersecurity. Pragmatic, budget-conscious homeowner who bakes sourdough and trains BJJ.

Marc Lopez

Puerto Rico–born security engineer in rural Florida. Single, veteran, bilingual, high-earning but grounded. Values reliability, privacy, faith in action, and community. Practical dresser, home cook, runner, tinkerer, and thoughtful, proof-first decision-maker.

Christopher Gavlik

Disciplined 27-year-old Navy veteran in rural Connecticut, now in the Reserve and pivoting to cybersecurity. Practical, faith-driven, budget-aware, and outdoorsy. Values reliability, privacy, and clear value over hype; avoids hidden fees and gimmicks.

Adil Hussain

Adil Hussain, 31, is a Leeds-based cybersecurity analyst, married with no kids. A frugal, tech-savvy social renter, he values practical competence, clear value, and reliability, commuting by bus and balancing family, fitness, and steady career ambitions.

Reece McKenzie

29-year-old Brummie, mixed-heritage, married, no kids. Social renter retraining from electrical work to IT support while spouse’s salary sustains them. Faithful, Villa-mad, practical, budget-savvy, community-oriented, and motivated by durability, transparen…

Marta Kowalska

Polish cybersecurity analyst in Sheffield, 28, homeowner, single, no kids. Pragmatic, privacy-minded, outdoorsy, and community-oriented. Values transparency and durability, cooks Polish comfort food, climbs and hikes, and prefers plain-English, no-surprises…

Eamon Keegan

Irish-born, Birmingham-based technical professional, 55, married with no children. Community-minded, privacy-conscious, and pragmatic, he values durability, clear information, and sustainability. Loves Aston Villa, canal walks, hearty cooking, and calm, evi…

Persona Correlations

Overview

Respondents converge on fundamentals: vulnerability management is dominated by hygiene (patching, identity, asset ownership, runbooks) and is hampered by stale asset inventories, brittle integrations, and alert noise that flips planned proactive work into firefighting. Identity-first correlation and pragmatic, small-scale automation (SOAR playbooks, revoke-token, isolate-host) are common priorities. Views on AI are uniformly pragmatic and cautious: useful for enrichment, drafts, and query generation, but only if provenance, tenant-local processing, and human-in-the-loop guardrails exist. Differences map predictably to context: enterprise seniors emphasize auditability and SLAs; platform leads push for policy-as-code and build-time controls; GTM/non-blue operators prioritise business/customer impact; resource-constrained and homelab operators prefer simple, local/open solutions.
Total responses: 56

Key Segments

Senior enterprise security practitioners (mid/late-career, UK/US, org-facing)
  Attributes: age range ≈33–55 · locale UK/US · occupation Senior/Principal Security Engineer, Cybersecurity Analyst · context Enterprise teams, multi-stakeholder environments, formal SLAs
  Insight: Prioritise evidence-backed runbooks, asset truth and identity as the primary join key. They are sceptical of vendor "single pane" claims and demand auditability, provenance and strict change-control before trusting automation or AI-driven actions. Their prioritisation mixes CVSS with exploitability, exposure and business-criticality.
  Supporting agents: Eamon Keegan, Marc Lopez, Adil Hussain, Marta Kowalska

Platform/engineering-oriented security leads (mid-career, high-responsibility, US)
  Attributes: age range ≈33 · locale US (platform/public sector) · occupation Security Engineer / Principal · context Platform ownership, infrastructure-as-code, emphasis on testable automation
  Insight: Drive automation that scales and is testable (detection-as-code, policy-as-code, build-time gating). Treat CVSS as a weak signal and prioritise exploitability, exposure and blast radius. Prefer immutable infrastructure and fixes at build-time rather than run-time patching.
  Supporting agents: Marc Lopez

GTM / non-blue technical operators (mid-career, US, non-traditional security role)
  Attributes: age range ≈44 · locale US (Raleigh) · occupation Sales Operations / GTM ops · context Cross-functional remit, responsible for SaaS posture and customer-facing impact
  Insight: Prioritise remediation by customer-data exposure and business/deal impact. Prefer straightforward receipts/audit trails and pragmatic processes over complex security tooling. Security work is motivated by customer trust and operational continuity as much as CVEs.
  Supporting agents: Jessica Pena

Resource-constrained / community operators (early-career or volunteer, UK)
  Attributes: age range ≈29 · locale UK (Birmingham/community centre) · occupation Volunteer / IT support / retraining · context Single-operator environments, minimal budgets, ad-hoc tooling
  Insight: Operate with spreadsheets, laminated runbooks and single-click mitigations. Focus on exposed edges and quick wins; tooling is minimal or free and processes are heavily pragmatic. Automation value is simple repeatable scripts rather than integrated platforms.
  Supporting agents: Reece McKenzie

Homelab / OSS practitioners (individual contributors, francophone/Canadian and hobbyists)
  Attributes: age range ≈33 · locale Canada / Quebec / rural · occupation Homelab operator, OSS-focused security tinkerer · context DIY, local-first, preference for open-source stacks and full control
  Insight: Maintain rich local stacks (Wazuh, Suricata, Greenbone, Nuclei) and favour proactive effort. Sceptical of vendor lock-in; treat LLMs as coding assistants rather than decision-makers and will accept automation only when it is transparent and locally controllable.
  Supporting agents: Simon Tremblay, Christopher Gavlik

Shared Mindsets

  • Identity-first correlation: Identity (IdP/SSO/MFA) is repeatedly cited as the most reliable join key for alerts, investigations and prioritisation; it reduces false positives and maps directly to blast radius. (Marc Lopez, Eamon Keegan, Christopher Gavlik, Marta Kowalska)
  • CVSS as a blunt signal: Respondents use CVSS as an initial filter but prioritise by exploitability, internet exposure, asset criticality and potential blast radius; CVSS alone is insufficient for operational decisions. (Adil Hussain, Jessica Pena, Marta Kowalska, Marc Lopez)
  • Tool integration pain: Brittle APIs, schema drift, duplicate alerts and inconsistent asset context force manual correlation and "swivel chair" workflows across tools. (Adil Hussain, Eamon Keegan, Simon Tremblay, Marc Lopez)
  • Preference for small, reliable automations: Scoped SOAR actions (isolate host, revoke token, open ticket) are trusted and adopted; large, opaque multi-step playbooks and blind automation are distrusted. (Adil Hussain, Marta Kowalska, Christopher Gavlik)
  • Proactive vs reactive baseline: Teams aim for ~60% proactive work (hardening, detection engineering), but incident spikes routinely push work to 70–80% reactive. (Eamon Keegan, Marta Kowalska, Marc Lopez, Adil Hussain)
  • Demand for evidence & auditability: Automation and AI outputs must link back to raw events, provide citations and maintain auditable change records; without that, respondents will not trust automated remediation. (Eamon Keegan, Marc Lopez, Jessica Pena, Christopher Gavlik)
  • Asset inventory weakness: Stale CMDBs, ghost/ephemeral assets and orphaned service accounts are leading blockers to accurate prioritisation and effective remediation. (Marta Kowalska, Adil Hussain, Eamon Keegan)
  • AI pragmatic skepticism: AI is welcomed for reducing toil (enrichment, draft queries, dedupe) but distrusted for autonomous destructive actions; provenance, local processing and human oversight are required. (Adil Hussain, Marc Lopez, Simon Tremblay, Jessica Pena)

Divergences

  • Senior enterprise practitioners vs platform/engineering-oriented leads: Enterprise seniors emphasise auditability, runbooks and cross-team SLAs driven by organisational constraints; platform leads prioritise testable, build-time policy-as-code and immutable infrastructure that reduce run-time remediation. The former tolerates more process friction for control and evidence; the latter designs systems to avoid friction entirely. (Eamon Keegan, Marta Kowalska, Marc Lopez)
  • Platform/engineering-oriented leads vs homelab/OSS practitioners: Both favour automation and local control, but platform leads demand enterprise-grade testability and integration into CI/CD, whereas homelab operators favour pragmatic OSS toolchains and scripting without formal CI guardrails. (Marc Lopez, Simon Tremblay, Christopher Gavlik)
  • Resource-constrained operators vs GTM/non-blue technical operators: Resource-constrained volunteers rely on spreadsheets and one-page runbooks for immediate fixes, prioritising speed and simplicity; GTM operators prioritise business/customer impact and communications, so remediation choices are shaped by customer risk and deal continuity rather than pure technical severity. (Reece McKenzie, Jessica Pena)
  • Attitudes to AI-assisted actions: Enterprise and platform respondents demand provenance, tenant-local processing and human-in-the-loop review for destructive changes; hobbyist and OSS users treat LLMs more as coding assistants for drafts and queries and may be more tolerant of experimental tooling, while still preferring transparency. (Eamon Keegan, Marc Lopez, Simon Tremblay, Christopher Gavlik)
Recommendations & Next Steps

Overview

Based on respondent consensus, security teams want fewer tabs and more receipts: identity-first stitching, de-duplication into single incidents, NL→query helpers, and safe, reversible change plans. Trust hinges on citations, read-only defaults, on-tenant privacy, and robust connectors. For Claude’s API test context, focus on a pragmatic, ROI-first assistant that reduces swivel-chairing and speeds evidence-backed decisions before any automation writes changes.

Quick Wins (next 2–4 weeks)

  1. Citation-first incident summarizer (SIEM+EDR+IdP): Analysts trust small, auditable wins. A cited timeline with raw-event links cuts mean time to clarity and builds credibility. (Owner: Product + Eng/ML · Effort: Med · Impact: High)
  2. Schema-aware NL→KQL/SPL helper: Teams want fast drafts they can tweak, not black-box magic. Guardrailed query generation reduces toil without risking action. (Owner: Eng/ML · Effort: Low · Impact: High)
  3. KEV + exposure vuln heatmap MVP: Prioritization is driven by internet exposure and known exploitability. A simple board aligns work to real risk now. (Owner: Product · Effort: Low · Impact: High)
  4. Auto ticket/change-plan drafts with rollback: Operators value runbooks over heroics. Pre-filled tickets (owner, rollback, tests) save time and pass CAB scrutiny. (Owner: Product + Design/UX · Effort: Low · Impact: Med)
  5. Phish/ATO de-dup into one parent case: Email, IdP and EDR often alert on the same event. Collapsing duplicates reduces fatigue and ping-pong ownership; a minimal correlation sketch follows this list. (Owner: Eng · Effort: Med · Impact: High)
  6. Connector health + parser drift alerts: APIs and schemas break. Visible health checks and replays prevent silent failures that erode trust. (Owner: Eng · Effort: Med · Impact: Med)
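As a rough illustration of the de-dup idea in item 5, the sketch below groups alerts from email, IdP, and EDR into a single parent case keyed on the affected identity and a time window. The normalized field names (source, user, timestamp) and the 30-minute window are assumptions for illustration; real connectors would map vendor-specific fields into a shape like this, and each parent case would carry raw-event links per the citation-first requirement.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized alerts from three tools firing on the same phish/ATO event.
ALERTS = [
    {"source": "email", "user": "j.doe@example.com", "timestamp": "2026-01-20T09:02:00", "title": "Reported phish"},
    {"source": "idp",   "user": "j.doe@example.com", "timestamp": "2026-01-20T09:05:00", "title": "Impossible travel sign-in"},
    {"source": "edr",   "user": "j.doe@example.com", "timestamp": "2026-01-20T09:11:00", "title": "Suspicious OAuth grant"},
    {"source": "idp",   "user": "a.khan@example.com", "timestamp": "2026-01-20T10:40:00", "title": "MFA fatigue attempts"},
]

WINDOW = timedelta(minutes=30)  # assumed correlation window

def collapse_by_identity(alerts, window=WINDOW):
    """Group alerts into parent cases keyed on user identity and time proximity."""
    by_user = defaultdict(list)
    for a in alerts:
        by_user[a["user"]].append({**a, "ts": datetime.fromisoformat(a["timestamp"])})
    cases = []
    for user, items in by_user.items():
        items.sort(key=lambda a: a["ts"])
        current = None
        for a in items:
            if current and a["ts"] - current["last_seen"] <= window:
                current["children"].append(a)        # same incident, new evidence
                current["last_seen"] = a["ts"]
            else:
                current = {"user": user, "last_seen": a["ts"], "children": [a]}
                cases.append(current)
    return cases

for case in collapse_by_identity(ALERTS):
    titles = ", ".join(c["title"] for c in case["children"])
    print(f"{case['user']}: {len(case['children'])} alerts -> 1 case ({titles})")
```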

Initiatives (30–90 days)

  1. Identity-first Evidence Graph (MVP): Build a read-only incident graph that stitches IdP/SSO, EDR, SIEM and email into one cited timeline with entity resolution (user, device, mailbox, IP, asset owner). Output: one parent case with raw-event links and confidence. (Owner: Eng/ML · Timeline: 0–90 days · Dependencies: connectors for Okta/Entra, EDR, SIEM, email; entity resolution model + rules; security SME playbooks for phishing/ATO)
  2. Private Deployment & Data Governance: Deliver in-tenant/VPC mode with no training on customer data, full prompt/action audit, granular RBAC, and redaction. Document data flows for procurement and legal. (Owner: Platform Eng + Security · Timeline: 60–150 days · Dependencies: infra architecture (VPC/on-prem option); legal/DPA review; audit logging and export pipeline)
  3. Safe-Execution Framework (dry-run → approve → rollback): Guardrailed actions for high-demand flows (revoke OAuth token, isolate host, disable legacy auth). Generate pre-checks, post-checks, and one-click rollback; default to read-only until SLOs are proven. A minimal sketch of the pattern follows this list. (Owner: Product + Eng · Timeline: 90–180 days · Dependencies: SOAR/EDR/IdP action APIs; UX for approvals and confidence; runbook library + tests)
  4. Connector Reliability Program: Versioned connectors with schema contract tests, replayable fixtures, rate-limit handling, and a parser self-heal PR generator when fields drift. (Owner: Eng · Timeline: 30–120 days · Dependencies: vendor API contracts; observability (ingest metrics, retries); fixture library across tenants)
  5. Policy Simulation + PII Scrub for SaaS (GTM-first): For Okta/Salesforce/Google, simulate DLP/sharing changes, show breakage/owners, and auto-generate least-privilege diffs. Add PII masking for sandboxes. (Owner: Product + Partnerships · Timeline: 90–180 days · Dependencies: SaaS audit/event APIs; partnerships (e.g., Salesforce ISV); data classification hooks)
  6. Local Lite (SMB/offline) Edition: Containerized, offline-first build with spreadsheet integration and a router-hardening wizard. Focus on inventory, patch view, phish triage, and change log with rollback. (Owner: Platform Eng · Timeline: 120–210 days · Dependencies: local packaging (ollama/embedded models); CSV/Sheets adapters; edge router templates (pfSense/common SMB gear))
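To make the Safe-Execution Framework (initiative 3) concrete, here is a minimal sketch of the dry-run → approve → rollback pattern. The revoke-token action, its rollback, and the approval callback are hypothetical stand-ins; a real implementation would call the relevant IdP/EDR/SOAR APIs and persist the audit trail.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GuardedAction:
    """A reversible action that must pass a dry run and human approval before executing."""
    name: str
    dry_run: Callable[[], str]      # describe what would change; touch nothing
    execute: Callable[[], None]     # perform the change
    rollback: Callable[[], None]    # undo the change
    audit_log: list = field(default_factory=list)

    def run(self, approve: Callable[[str], bool]) -> bool:
        plan = self.dry_run()
        self.audit_log.append(f"DRY-RUN {self.name}: {plan}")
        if not approve(plan):                      # human-in-the-loop gate
            self.audit_log.append(f"REJECTED {self.name}")
            return False
        try:
            self.execute()
            self.audit_log.append(f"EXECUTED {self.name}")
            return True
        except Exception as exc:                   # failed post-conditions -> roll back
            self.rollback()
            self.audit_log.append(f"ROLLED BACK {self.name}: {exc}")
            return False

# Hypothetical usage: revoke an OAuth token once a human approves the plan.
action = GuardedAction(
    name="revoke-oauth-token",
    dry_run=lambda: "Would revoke token abc123 for user j.doe@example.com",
    execute=lambda: print("revoking token..."),        # stand-in for an IdP API call
    rollback=lambda: print("compensating action..."),  # stand-in for the undo step
)
action.run(approve=lambda plan: True)  # stand-in for a real approval step (ticket, chat, UI)
```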

KPIs to Track

  1. Mean Time to Clarity (MTTC): Time from alert ingestion to a cited, identity-first timeline the analyst can act on. (Target: −50% vs baseline in 90 days · Frequency: Weekly)
  2. Duplicate Alert Collapse Rate: Percent of cross-tool alerts merged into a single parent case with sources attached; a small computation sketch for this rate and MTTC follows this list. (Target: ≥60% in pilot tenants · Frequency: Weekly)
  3. Citation Coverage: Share of answers/actions that include raw-event links, copyable queries, and confidence. (Target: 100% of surfaced answers · Frequency: Daily)
  4. False-Advice Rate: Percent of AI suggestions retracted due to incorrect fields/logic detected in review. (Target: ≤2% with uncertainty flags · Frequency: Weekly)
  5. Private Deployment Adoption: Share of pilots using in-tenant/VPC mode with no data leaving the boundary. (Target: ≥70% by end of Q2 pilot cohort · Frequency: Monthly)
  6. Connector MTBF / MTTR: Mean time between parser/connector breaks and mean time to repair via self-heal. (Target: MTBF ≥30 days; MTTR ≤2 hours · Frequency: Weekly)
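As one reasonable reading of the MTTC and collapse-rate definitions above, the snippet below computes both from made-up pilot data; the timestamps, case sizes, and the "multi-alert case" interpretation of collapse are illustrative assumptions.

```python
from datetime import datetime

# Hypothetical pilot data: (alert ingested, cited timeline ready) pairs,
# and how many raw alerts each parent case absorbed.
timelines = [
    ("2026-01-20T09:02:00", "2026-01-20T09:31:00"),
    ("2026-01-20T10:40:00", "2026-01-20T11:02:00"),
]
cases = [{"alerts": 3}, {"alerts": 1}, {"alerts": 4}]

# Mean Time to Clarity: average minutes from ingestion to a usable cited timeline.
deltas = [
    (datetime.fromisoformat(done) - datetime.fromisoformat(ingested)).total_seconds() / 60
    for ingested, done in timelines
]
mttc_minutes = sum(deltas) / len(deltas)

# Duplicate Alert Collapse Rate: share of raw alerts absorbed into multi-alert parent cases.
total_alerts = sum(c["alerts"] for c in cases)
collapsed = sum(c["alerts"] for c in cases if c["alerts"] > 1)
collapse_rate = collapsed / total_alerts

print(f"MTTC: {mttc_minutes:.1f} min, collapse rate: {collapse_rate:.0%}")
```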

Risks & Mitigations

  1. Hallucinations or overconfident summaries erode analyst trust. Mitigation: enforce citations, show uncertainty, unit-test NL→query, keep read-only by default, and require human approval for actions. (Owner: Eng/ML)
  2. Data residency/privacy blockers (no on-tenant mode, training on customer data). Mitigation: ship VPC/on-prem, disable training by default, provide DPAs and architecture diagrams, add redaction and access logs. (Owner: Platform Eng + Legal)
  3. Connector/schema fragility causing silent correlation gaps. Mitigation: versioned connectors, contract tests, ingest SLOs, drift detection with PRs, and health dashboards with alerts; a minimal contract-test sketch follows this list. (Owner: Eng)
  4. Unsafe write-actions cause outages or CAB rejections. Mitigation: guardrailed execution with dry-runs, pre/post checks, rollback scripts, and narrow scopes; expand actions only after SLO proof. (Owner: Product)
  5. Opaque pricing and token/ingest surprises stall adoption. Mitigation: flat tiers, budget caps, per-job cost estimates, and a public calculator; avoid per-event lock-ins. (Owner: GTM/Finance)
  6. Analyst change fatigue and skepticism limit pilot success. Mitigation: pilot on high-pain flows (phish/ATO), measure MTTC and de-dup wins, co-design workflows, and keep UX boring and explainable. (Owner: Design/UX + PM)
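As an illustration of the contract tests named in risk 3, the sketch below checks an incoming connector record against an expected field contract and flags drift (missing, re-typed, or renamed fields) before it can silently break correlation. The contract itself is an assumed example schema, not one taken from the study.

```python
# Minimal schema-contract check for a hypothetical IdP connector payload.
EXPECTED_CONTRACT = {
    "user": str,
    "timestamp": str,
    "event_type": str,
    "source_ip": str,
}

def check_contract(record: dict, contract: dict = EXPECTED_CONTRACT) -> list[str]:
    """Return a list of drift findings; an empty list means the record matches the contract."""
    findings = []
    for field_name, expected_type in contract.items():
        if field_name not in record:
            findings.append(f"missing field: {field_name}")
        elif not isinstance(record[field_name], expected_type):
            findings.append(
                f"type drift on {field_name}: expected {expected_type.__name__}, "
                f"got {type(record[field_name]).__name__}"
            )
    for extra in set(record) - set(contract):
        findings.append(f"unexpected field (possible rename): {extra}")
    return findings

# Example: the vendor renames 'source_ip' to 'client_ip' and switches timestamps to epoch ints.
drifted = {"user": "j.doe@example.com", "timestamp": 1768899720, "event_type": "signin", "client_ip": "203.0.113.7"}
for finding in check_contract(drifted):
    print("DRIFT:", finding)
```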

Timeline

0–30 days
  • Ship NL→KQL/SPL helper and KEV+exposure heatmap MVP.
  • Stand up connectors (IdP/EDR/SIEM/email) with health dashboards.
  • Design cited-timeline UX and ticket/change-plan templates.

31–90 days
  • Release Identity-first Evidence Graph (read-only) with phish/ATO de-dup.
  • Pilot citation-first summaries with 2–3 tenants; track MTTC and de-dup KPIs.
  • Stand up VPC deployment alpha and full audit logging.

91–150 days
  • Safe-Execution framework (dry-run/approve/rollback) for select actions.
  • Connector Reliability Program live (schema tests, self-heal PRs).
  • Policy simulation + PII scrub (Salesforce/Okta) beta.

151–210 days
  • Local Lite (offline) edition beta with spreadsheet + router wizard.
  • Expand action catalog (EDR isolate, OAuth revoke) under guardrails.
  • Pricing + cost-estimator GA; broaden pilot to 8–10 tenants.
Research Study Narrative

Objective and Context

We set out to understand how security professionals manage vulnerabilities, prioritize findings, interface with their tool stack, and view AI-assisted security operations. Across SOC/blue-team, IT/support, and GTM operators, the signal is consistent: work is dominated by hygiene (phishing/identity abuse, misconfiguration, and patch lag), with teams aiming for ~60% proactive / 40% reactive but regularly pushed into firefighting by alert noise, unclear asset ownership, and third-party issues (Eamon Keegan; Marc Lopez; Adil Hussain).

What We Learned (Cross-Question Evidence)

Tooling reality: Stacks are capable yet brittle. Practitioners “swivel-chair” across SIEM, EDR, scanners, ticketing, and SaaS consoles; identity as the join key is the most reliable way to add context, but APIs drift, asset/CMDB truth is weak, and alert duplication erodes focus (Adil Hussain; Marc Lopez; Marta Kowalska).

Vulnerability management pipeline: Intake from scans, advisories, KEV, EDR, and pen tests; normalize to assets and owners; triage by exposure, exploitability (KEV/EPSS/PoC), and asset criticality; assign accountable tickets; remediate via patch/config/mitigation/isolation; validate with proof-of-fix before closure (Eamon Keegan; Jessica Pena; Marta Kowalska). Bottlenecks: fuzzy ownership, scanner noise, change-control friction, and unpatchable legacy.

Prioritization in practice: CVSS is a blunt starting signal. Internet exposure, active exploit signals, identity/blast radius, and team capacity drive sequencing; teams cap WIP, deduplicate, mitigate first, and run visible burndowns when overwhelmed (Adil Hussain; Marta Kowalska).
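To ground this, here is a minimal sketch of a prioritization score in the spirit respondents described: internet exposure, exploit signals (KEV/EPSS), and asset criticality dominate, with CVSS as a weak tie-breaker. The weights, field names, and ranges are illustrative assumptions, not values reported by participants.

```python
def priority_score(finding: dict) -> float:
    """Rank a finding by exposure, exploitability, and asset criticality; CVSS is a tie-breaker.

    Assumed keys: internet_facing (bool), in_kev (bool), epss (0..1),
    asset_criticality (1..5), cvss (0..10).
    """
    score = 0.0
    score += 40 if finding["internet_facing"] else 0   # exposure dominates
    score += 30 if finding["in_kev"] else 0            # known exploited (KEV)
    score += 20 * finding["epss"]                      # predicted exploitation likelihood
    score += 2 * finding["asset_criticality"]          # business/blast-radius weight
    score += 0.5 * finding["cvss"]                     # blunt starting signal, low weight
    return score

findings = [
    {"id": "CVE-A", "internet_facing": True,  "in_kev": True,  "epss": 0.82, "asset_criticality": 5, "cvss": 7.5},
    {"id": "CVE-B", "internet_facing": False, "in_kev": False, "epss": 0.03, "asset_criticality": 2, "cvss": 9.8},
]
for f in sorted(findings, key=priority_score, reverse=True):
    print(f["id"], round(priority_score(f), 1))
```

With these assumed weights, the exposed, known-exploited CVE-A outranks the internal CVE-B despite the latter's higher CVSS, which matches the sequencing logic respondents described.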

Knowledge that sticks: Durable artifacts (tickets/change records with Decision + Rationale, runbooks, ADR-style decision logs in Git), paired with shadowing and tabletops, convert policy into muscle memory. Where knowledge lives in chats or stale wikis, it leaks on attrition (Jessica Pena; Marc Lopez; Simon Tremblay).

AI/automation posture: Preference for deterministic, boring automation (SOAR, EDR auto-isolate, enrichment). Trust requires citations/provenance, on-tenant privacy, and guardrails (read-only by default, dry-runs, rollback). Desired capabilities: cross-tool deduplication, identity-first timelines, schema-aware NL→KQL/SPL, and change plans with pre/post checks; plus SaaS policy simulation and PII scrub (Adil Hussain; Marta Kowalska; Eamon Keegan).
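As one way to picture the guardrailed NL→query generation respondents asked for, the sketch below screens a model-drafted SPL string before it reaches an analyst: it rejects write-capable commands and requires an explicit time bound. The blocklist and checks are illustrative assumptions, not a complete safety policy, and an equivalent check could be written for KQL.

```python
import re

# Illustrative blocklist: SPL commands that can write data or take action.
WRITE_COMMANDS = {"delete", "collect", "outputlookup", "sendemail"}

def screen_generated_spl(query: str) -> list[str]:
    """Return reasons a drafted query should be blocked; an empty list means it may be shown."""
    problems = []
    # Pipe segments look like "... | command args"; inspect the leading token of each.
    for segment in query.split("|")[1:]:
        command = segment.strip().split(" ", 1)[0].lower()
        if command in WRITE_COMMANDS:
            problems.append(f"write-capable command not allowed: {command}")
    if not re.search(r"earliest\s*=", query, flags=re.IGNORECASE):
        problems.append("no explicit time bound (expected an earliest= constraint)")
    return problems

draft = 'index=auth action=failure user="j.doe" | stats count by src_ip | outputlookup suspicious.csv'
for problem in screen_generated_spl(draft):
    print("BLOCKED:", problem)
```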

Persona Correlations and Nuances

  • Senior enterprise practitioners (Keegan, Lopez, Hussain, Kowalska): demand auditability, identity-first stitching, strict change control; mix CVSS with KEV/exposure/criticality.
  • Platform-oriented leads (Lopez): push IaC/policy-as-code, build-time gates, immutable infra.
  • GTM operators (Pena): prioritize customer-data exposure, deal impact, clear receipts; need SaaS policy simulation and bilingual comms.
  • Resource-constrained/OSS (McKenzie; Tremblay): prefer local-first, minimal or OSS stacks, spreadsheet workflows, transparent and offline-capable AI.

Recommendations (Evidence-Backed)

  • Ship citation-first incident summaries that stitch SIEM+EDR+IdP into a single, identity-anchored timeline with raw-event links and copyable queries (addresses swivel-chairing, trust).
  • Deliver schema-aware NL→KQL/SPL helper for guarded query generation (reduce toil without unsafe actions).
  • Launch KEV+exposure vulnerability heatmap to align work to real risk drivers (internet-facing + known exploited).
  • Auto-draft tickets/change plans with rollback to meet CAB expectations and accelerate remediation.
  • Collapse phish/ATO duplicates into one parent case across email, IdP, EDR (cut fatigue and ping-pong ownership).
  • Medium-term: Identity-first evidence graph (read-only), private/VPC deployment with no training on customer data, safe-execution framework (dry-run→approve→rollback), connector reliability program (versioned, contract-tested), SaaS policy simulation + PII scrub.

Risks and Guardrails

  • Hallucinations/confidence errors: enforce citations, uncertainty flags, and read-only defaults.
  • Data residency/privacy: in-tenant/on-prem options, no-training guarantees, full audit logs.
  • Connector drift: versioned schemas, contract tests, health dashboards, self-heal PRs.
  • Unsafe actions/CAB rejection: dry-runs, pre/post checks, rollback, narrow scopes.
  • Opaque pricing: flat tiers, budget caps, per-job cost estimates.

Next Steps and Measurement

  1. 0–30 days: NL→query helper; KEV+exposure heatmap; stand up IdP/EDR/SIEM/email connectors with health metrics; design cited-timeline and ticket templates.
  2. 31–90 days: Release read-only identity-first evidence graph with phish/ATO de-dup; pilot citation-first summaries; enable VPC deployment alpha and audit logging.
  3. 91–150 days: Safe-execution for revoke/isolate/disable-legacy-auth; connector reliability program; SaaS policy simulation + PII scrub beta.
  4. 151–210 days: Local-lite offline edition with spreadsheet integration; expand action catalog under guardrails; publish transparent pricing and cost estimator.
  • KPIs: Mean Time to Clarity (−50% in 90 days), Duplicate Alert Collapse Rate (≥60%), Citation Coverage (100%), False-Advice Rate (≤2%), Private Deployment Adoption (≥70% of pilots).
Recommended Follow-up Questions Updated Jan 23, 2026
  1. In a proof-of-value/pilot for a new security platform, which proof points would most influence your go/no-go decision?
     Format: maxdiff. Prioritizes POC design and success criteria to accelerate adoption and reduce sales friction.
  2. How acceptable are different pricing models for an AI-assisted security platform in your organization?
     Format: maxdiff. Guides packaging and pricing strategy; identifies models that encourage or inhibit adoption.
  3. Which existing systems must a new platform integrate with on day one to be considered viable in your environment?
     Format: multi-select. Focuses connector roadmap on day-one blockers to ensure immediate utility.
  4. For each action type, indicate the highest autonomy you would permit a new platform to perform: allow automatically, require human approval, or never allow.
     Format: matrix. Defines guardrails and default automation policies aligned to trust boundaries.
  5. Provide the maximum allowed exception duration (in days) for each risk level (Critical, High, Medium, Low).
     Format: matrix. Calibrates exception timers, reminders, and recertification workflows by risk.
  6. Approximately what percentage of assets have a clearly identified accountable owner linked to your CMDB/ticketing (0–100%)?
     Format: numeric. Quantifies ownership gaps to prioritize identity-first stitching and assignment features.
Note: Use concrete action rows for the automation matrix (e.g., isolate endpoint, disable user, create ticket, apply firewall rule, change IdP policy, close alert, trigger patch).