1.5 Lab: Operation Sentinel | AI-Assisted Threat Hunting | AP Cybersecurity

Unit 1 • Lesson 1.5 • Lab

Operation Sentinel — AI-Assisted Threat Hunting

Three SIEM alert scenarios. One security operations center. Your job: evaluate the AI's recommendations, decide when to trust them, and determine where human judgment is required.

30 pts total • ~45 min • 3 stations • No hints
Scenario Brief
Meridian Health Network — SOC Analyst Rotation
Real-World Anchor: Healthcare SOC Operations, 2023—2024

Healthcare organizations generate millions of security events daily. AI-powered SIEM platforms like Microsoft Sentinel, Splunk Enterprise Security, and CrowdStrike Falcon now triage the majority of these alerts automatically. But AI systems produce false positives, miss novel attack patterns, and cannot make ethical or legal judgments. The human analyst's role has shifted from reading alerts to supervising AI: knowing when to trust, override, or escalate an AI recommendation.

You are a SOC analyst at Meridian Health Network, a regional hospital group. Your AI threat detection platform — Sentinel AI — has flagged three separate incidents this week. In each station, you will review the AI's analysis, evaluate its recommendation, and determine the appropriate human response. The three stations are independent scenarios.

Lab Protocol: No hints available. You have completed Exercise 1 (AI security log analyzer) and Exercise 2 (human-in-the-loop decision gate). Apply that knowledge independently here.
Station 1 of 3
False Positive Triage — 10 pts
The Night-Shift EHR Access Alert

Sentinel AI flagged an alert at 02:47 AM on a Tuesday. The following is the AI's analysis and recommendation.

Sentinel AI Alert #SA-2024-8841
ALERT TYPE: Anomalous EHR Access
SEVERITY: HIGH
USER: Nurse Practitioner Keisha Odom (NP-4471)
TIME: 02:47 AM Tuesday
ACTION: Accessed 47 patient records in 18 minutes
BASELINE: Typical session: 3–8 records, business hours
LOCATION: ICU Terminal 7 (on-premises)
AI RECOMMENDATION: LOCK ACCOUNT IMMEDIATELY. Pattern matches insider threat profile (mass data access, off-hours, deviation from baseline). Confidence: 87%.
Additional Context (Human-Retrieved)
SHIFT LOG: NP Odom is assigned to ICU overnight rotation this week
ICU CENSUS: Mass casualty intake at 01:30 AM (multi-vehicle accident)
PATIENT COUNT: 11 new critical patients admitted, requiring chart reviews
PRIOR FLAGS: No prior security incidents for NP Odom in 4-year employment
ACCESS LOG: All 47 records are ICU patients admitted in the last 90 minutes
S1-Q1 (2 pts) — Based on the additional context, what is the most accurate assessment of this alert?
A. True positive — the 87% confidence score is reliable and the account should be locked immediately as the AI recommended.
B. False positive — the anomaly is fully explained by the mass casualty event. The access pattern is clinically appropriate and the AI lacked operational context that a human analyst could retrieve in seconds.
C. Inconclusive — the AI and human context are equally weighted and the analyst should escalate to the CISO for a final determination.
D. True positive with mitigating factors — the mass casualty event is suspicious timing and may have been engineered to provide cover for the insider access.
S1-Q2 (2 pts) — If the analyst had followed the AI's recommendation and locked the account immediately, what would be the consequence?
A. Minimal consequence — NP Odom could log in from another terminal within a few minutes after the false lock was reversed.
B. A critical care nurse would lose access to patient charts during active treatment of mass casualty victims, creating a direct patient safety risk and potential HIPAA violation from care disruption.
C. The account lock would trigger a security review that would confirm the access was legitimate, automatically restoring the account within the same session.
D. The false lock would be logged as a security incident and would reduce the AI's confidence score on future alerts from the same user.
S1-Q3 (3 pts) — What system improvement would most reduce this type of false positive in the future?
A. Increase the AI confidence threshold from 87% to 95% before triggering HIGH severity alerts, reducing the number of alerts that reach analysts.
B. Integrate the HR scheduling system and clinical census data into the AI's context window, allowing it to check shift assignments and active emergency events before classifying anomalous access.
C. Switch from anomaly-based detection to signature-based detection only, eliminating false positives caused by behavioral baseline deviations.
D. Require dual analyst sign-off on all AI-recommended account lockouts in healthcare environments, slowing response but eliminating unilateral automated actions.
S1-Q4 (3 pts) — The AI reported 87% confidence. What does this number actually mean in the context of machine learning security systems?
A. The AI is 87% certain that NP Odom is a malicious insider — human judgment should only override scores below 70%.
B. 87% of similar historical alerts in the training data were true positives — meaning roughly 1 in 8 alerts with this signature were false positives, which is not a reliable basis for an automatic account lockout without human review.
C. The AI has processed 87% of the available log data and will update the confidence score to 100% once processing is complete.
D. The AI's internal model weights sum to 87% agreement across its decision nodes, which is a proprietary metric not directly comparable to a probability score.
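The improvement described in S1-Q3 option B, feeding shift and census context into triage, can be sketched in a few lines of Python. This is a hypothetical illustration, not a real Sentinel AI API: `Alert`, `shift_roster`, and `active_emergencies` are invented names standing in for an HR scheduling feed and a clinical census feed.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    user_id: str
    record_count: int
    hour: int          # 24-hour clock
    location: str

def triage(alert, shift_roster, active_emergencies, baseline_max=8):
    """Return a disposition instead of blindly locking the account."""
    off_hours = alert.hour < 6 or alert.hour >= 22
    anomalous = alert.record_count > baseline_max or off_hours
    on_shift = alert.user_id in shift_roster.get(alert.location, set())
    emergency = alert.location in active_emergencies
    if anomalous and on_shift and emergency:
        # Station 1 outcome: the anomaly is explained by operational context
        return "SUPPRESS: access consistent with assigned shift and active emergency"
    if anomalous:
        # Never auto-lock: a human reviews before any account action
        return "ESCALATE: human review required before any account action"
    return "NO ACTION"

alert = Alert(user_id="NP-4471", record_count=47, hour=2, location="ICU")
roster = {"ICU": {"NP-4471"}}   # from the HR scheduling system (hypothetical feed)
emergencies = {"ICU"}           # from the clinical census (mass casualty intake)
print(triage(alert, roster, emergencies))  # suppressed: context explains the spike
```

Note that even the "anomalous but no context" branch escalates to a human rather than locking the account, which is the human-in-the-loop pattern from Exercise 2.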
Station 2 of 3
Behavioral Anomaly Detection — 10 pts
The Departing Employee Data Exfiltration

Sentinel AI flagged a series of alerts over four days about a billing department employee. The AI built a behavioral timeline that the analyst must evaluate.

Sentinel AI Behavioral Timeline — User: Marcus Webb, Billing Analyst
Day 1 (Mon): Submitted resignation, effective in 2 weeks
Day 2 (Tue): Searched internal file system: "patient billing export", "bulk download"
Day 3 (Wed): Accessed 1,240 patient billing records (role allows max 50/session); copied 847 MB to personal USB drive (USB-ID: Kingston 32GB); sent 3 emails to personal Gmail with "final" in subject line
Day 4 (Thu): Attempted to access finance director payroll files (ACCESS DENIED); cleared browser history and deleted 142 local files
AI RECOMMENDATION: HIGH CONFIDENCE (94%) INSIDER THREAT. Recommend: immediate account suspension, USB forensic capture, legal hold on all user data, notify HR and Legal.
S2-Q1 (2 pts) — Unlike Station 1, should the analyst follow the AI's recommendation here?
A. No — the analyst should wait for Marcus Webb to complete his resignation period before taking any action, as employment law requires allowing employees to work through their notice period.
B. Yes — the behavioral evidence here is qualitatively different from Station 1. Multiple corroborating indicators across four days (role violation, physical exfiltration, anti-forensics) constitute a high-confidence true positive that warrants immediate action.
C. No — the analyst should conduct a two-week investigation before acting to ensure the evidence is conclusive beyond any doubt.
D. Yes — but only because the confidence score is 94% versus 87% in Station 1. Scores above 90% should always trigger automatic action without human review.
S2-Q2 (3 pts) — The AI flagged "cleared browser history and deleted 142 local files" as an indicator. Why is anti-forensics behavior significant in insider threat detection?
A. It is not significant — employees routinely clear browser history for privacy reasons and deleting files before departure is standard offboarding hygiene.
B. Anti-forensics behavior is significant because it demonstrates consciousness of wrongdoing — legitimate users do not typically attempt to destroy evidence of their own activity. Combined with the prior exfiltration indicators, it shows the subject knew the data theft was unauthorized and attempted to cover it.
C. It is significant primarily because the deleted files may contain additional evidence of the exfiltration that is now lost, making prosecution more difficult.
D. Clearing browser history triggers a HIPAA audit event that automatically notifies the Office for Civil Rights, making it a compliance issue independent of the security investigation.
S2-Q3 (2 pts) — The AI recommended notifying HR and Legal. Why is this step necessary before acting on the account?
A. HR and Legal approval is required by HIPAA before any employee account can be suspended in a healthcare organization, regardless of the reason.
B. Security teams lack authority to make employment decisions. HR handles the employment action; Legal handles evidence preservation, attorney-client privilege for the investigation, and potential law enforcement coordination. Acting without them risks destroying the legal case or creating wrongful termination liability.
C. HR and Legal must be notified so they can inform Marcus Webb of the investigation before his account is suspended, as due process requires prior notice.
D. Notifying HR first gives the organization 24 hours to back up all of Marcus Webb's data before the account is suspended and access is permanently revoked.
S2-Q4 (3 pts) — The 847 MB copied to USB constitutes a potential HIPAA breach of 1,240 patient records. Under HIPAA, what notification obligation is triggered?
A. No notification is required because the breach was caused by an insider rather than an external attacker, which is treated as an internal disciplinary matter under HIPAA.
B. Notification is only required if the stolen records are actually published or sold — unauthorized possession alone does not trigger the HIPAA breach notification rule.
C. Since 1,240 records exceeds 500, this is a major breach requiring notification to affected individuals, the Department of Health and Human Services (HHS), and prominent media outlets in the affected region within 60 days.
D. Notification to 1,240 individual patients is required within 30 days, but HHS notification and media notification are only triggered for breaches exceeding 10,000 records.
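One way to see why Station 2 is qualitatively different from Station 1 is to treat each observed behavior as a weighted indicator and look for corroboration across days. The indicator names and weights below are invented for this sketch; real SIEM platforms use far richer models, but the principle is the same: one anomaly invites context-checking, many corroborating anomalies warrant action.

```python
# Hypothetical indicator weights (illustrative only, not a real product's scoring)
INDICATOR_WEIGHTS = {
    "resignation_submitted":    1,
    "exfil_keyword_searches":   2,
    "role_limit_violation":     3,   # 1,240 records vs. the 50/session role limit
    "usb_mass_copy":            3,   # 847 MB to a personal USB drive
    "personal_email_exfil":     2,
    "denied_privilege_attempt": 2,   # payroll files, ACCESS DENIED
    "anti_forensics":           3,   # history cleared, 142 files deleted
}

def threat_score(observed):
    """Sum the weights of every indicator observed for a user."""
    return sum(INDICATOR_WEIGHTS[i] for i in observed)

webb = ["resignation_submitted", "exfil_keyword_searches",
        "role_limit_violation", "usb_mass_copy", "personal_email_exfil",
        "denied_privilege_attempt", "anti_forensics"]
odom = []  # Station 1: a single baseline deviation, fully explained by context

print(threat_score(webb))  # 16 -- corroborated across four days: act (with HR/Legal)
print(threat_score(odom))  # 0
```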
Station 3 of 3
AI Limitation — Novel Attack Pattern — 10 pts
The Zero-Day the AI Missed

A week later, the SOC received a threat intelligence bulletin about a new attack technique targeting healthcare EHR systems. The analyst checked whether Sentinel AI had detected any related activity.

Threat Intel Bulletin TIB-2024-0147
TECHNIQUE: Living-off-the-land (LotL) using legitimate EHR admin tools
METHOD: Attacker uses stolen admin credentials to run built-in EHR export functions — no malware, no anomalous tools
SIGNATURE: NONE — all actions appear as legitimate admin operations
BEHAVIORAL: Slow data export (50–100 records/day) over 2–4 weeks to avoid baseline deviation triggers
DETECTION: Requires manual timeline correlation across a 4+ week window
Sentinel AI Log Search Results
QUERY: Search for LotL EHR exfiltration patterns
AI RESULT: No alerts generated for this pattern in last 90 days
AI ASSESSMENT: No anomalous activity detected. System operating normally.
ANALYST NOTE: Admin account EHR-ADMIN-03 exported 3,200 records over 32 days, averaging 100 records/day and never exceeding the 100-record/day baseline threshold on any single day
S3-Q1 (2 pts) — Why did the AI fail to detect this attack?
A. The AI experienced a technical malfunction that caused it to skip log analysis for the affected time window.
B. The attacker deliberately kept activity at or below the behavioral baseline threshold, exploiting the AI's detection logic by never crossing the trigger point — a technique called threshold evasion. The AI only flags deviations from normal; activity that appears normal (even if it persists for 32 days) generates no alert.
C. The AI was not configured to monitor EHR admin accounts, only standard user accounts, creating a blind spot in its detection scope.
D. The attacker used encryption to hide the export activity, preventing the AI from reading the content of the exported records.
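The analyst note shows concretely why per-day thresholds miss slow exfiltration, and why a long-window cumulative check (an automated analogue of manual timeline correlation) would catch it. The function names, 28-day window, and 1,500-record budget below are assumptions for this sketch, and the daily counts are illustrative figures sitting just at or below the 100-record/day threshold.

```python
DAILY_THRESHOLD = 100  # the baseline deviation trigger from the analyst note

def daily_alerts(daily_counts, threshold=DAILY_THRESHOLD):
    """Real-time style: flag any single day that exceeds the threshold."""
    return [day for day, n in enumerate(daily_counts) if n > threshold]

def cumulative_alert(daily_counts, window=28, budget=1500):
    """Hunt style: flag if any sliding window's total exceeds a record budget."""
    for start in range(len(daily_counts) - window + 1):
        if sum(daily_counts[start:start + window]) > budget:
            return True
    return False

exports = [97] * 32                 # slow, steady export: no single-day spike
print(daily_alerts(exports))        # [] -- the per-day rule never fires
print(cumulative_alert(exports))    # True -- a 28-day total of 2,716 blows the budget
```

The cumulative rule is not a silver bullet (the attacker could evade it too by going even slower), which is why the bulletin still calls for hypothesis-driven human correlation.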
S3-Q2 (3 pts) — The bulletin says this technique requires "manual timeline correlation across a 4+ week window." Why is this beyond the capability of real-time AI alerting?
A. AI systems cannot process data older than 7 days due to memory limitations in current SIEM architectures.
B. Real-time AI alerting evaluates each event or short-window pattern against a baseline — it is optimized to detect spikes and anomalies. Slow, deliberate exfiltration that stays within normal daily bounds requires a human analyst to form a hypothesis ("this account is exfiltrating data slowly") and then retroactively search logs to test it — a hypothesis-driven workflow that does not exist in automated alert generation.
C. The 4-week window exceeds HIPAA log retention requirements, so the historical data needed for the correlation is legally required to be deleted before the pattern becomes detectable.
D. Real-time AI cannot perform timeline correlation because it lacks access to HR scheduling data, which is required to establish that the admin account behavior was unauthorized.
S3-Q3 (2 pts) — What should the analyst do immediately upon discovering the 32-day exfiltration pattern?
A. Immediately delete the EHR-ADMIN-03 account to stop any further exfiltration, then notify management.
B. Preserve all forensic evidence first (log snapshots, export records, network captures), then suspend the compromised account and escalate to the incident response team. Do not delete anything before forensic preservation is complete.
C. Continue monitoring silently for another two weeks to gather more evidence before taking any action that might alert the attacker.
D. Notify the 3,200 affected patients immediately per HIPAA requirements before taking any technical action on the account.
S3-Q4 (3 pts) — Across all three stations, what is the most accurate description of the AI's role in this SOC?
A. The AI is a replacement for human analysts that handles all routine threat detection, with human analysts only needed for novel, high-complexity incidents.
B. The AI is unreliable and should be replaced with signature-based detection and manual log review by trained analysts.
C. The AI is a force multiplier that handles high-volume pattern matching at scale, freeing human analysts for contextual judgment (Station 1), ethical and legal decisions (Station 2), and hypothesis-driven threat hunting (Station 3) — tasks that require human reasoning the AI cannot perform.
D. The AI's primary value is reducing liability — by documenting all security events automatically, it ensures the organization can demonstrate compliance to auditors regardless of whether threats are actually detected.
AP Exam Tip: AI in cyber defense questions on the AP exam test three concepts: (1) what AI can do well (high-volume pattern matching, baseline deviation detection, alert triage at scale), (2) what AI cannot do (context-dependent judgment, hypothesis-driven threat hunting, legal and ethical decisions), and (3) human-in-the-loop reasoning (when should a human override, slow down, or escalate an AI recommendation). Expect scenario MCQs asking you to classify whether an AI recommendation should be followed, modified, or overridden — and to justify the answer.
Extension Challenge: Write a one-page AI Governance Policy for Meridian Health Network's SOC that defines: (1) which AI actions can be automated without human review, (2) which require human approval before execution, and (3) which categories of decisions are permanently reserved for human judgment. Use the three stations from this lab as the basis for your policy categories. This type of policy is called a Human-in-the-Loop (HITL) framework and is increasingly required by healthcare regulators.
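As a starting point for the extension challenge, a HITL policy can be expressed as plain data that tooling then enforces. The three categories below mirror the lab's three stations; every action name is an example invented for this sketch, not a standard or a real product's vocabulary.

```python
# Hypothetical HITL policy for a healthcare SOC (illustrative categories only)
HITL_POLICY = {
    "automate": {            # AI may act without prior human review
        "enrich_alert_with_context",
        "raise_alert_severity",
        "quarantine_known_malware_hash",
    },
    "human_approval": {      # AI recommends; a human must approve before execution
        "suspend_user_account",      # Station 1's lesson: never auto-lock clinicians
        "usb_forensic_capture",
        "block_network_segment",
    },
    "human_only": {          # permanently reserved for human judgment
        "employment_action",          # Station 2: HR decides, not the SOC
        "breach_notification_decision",
        "law_enforcement_referral",
    },
}

def allowed_without_review(action):
    """True only if the policy lets the AI execute this action on its own."""
    return action in HITL_POLICY["automate"]

print(allowed_without_review("enrich_alert_with_context"))  # True
print(allowed_without_review("suspend_user_account"))       # False
```

A written policy would add the rationale for each placement, which is exactly what the one-page assignment asks for.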
