AP Cybersecurity 1.5 Exercise 2: Human-in-the-Loop Decision Gate
You are the human analyst. For each AI-generated security alert, decide whether to approve the AI’s recommended action, override it, or escalate — and understand the consequences.
For each scenario below: (1) read the AI’s recommended action; (2) before seeing the outcome, choose what you would do; (3) then reveal the correct decision and learn why human judgment matters.
Context you discover: The Sales team sent out an all-hands email at 8 AM announcing a company-wide remote work day due to building maintenance.
Option A would create a major operational outage affecting hundreds of employees on a planned remote work day, a catastrophic false-positive consequence. Option B ignores the AI alert entirely, which is also wrong, since logins from 14 different international IPs still warrant investigation. Option C is correct: cross-reference the VPN logins against the employee directory to separate legitimate remote workers from truly anomalous logins. This demonstrates the core human-in-the-loop principle: context the AI lacks (the remote work announcement) changes the appropriate response without dismissing the legitimate concern. Option D removes human agency and delays the response unnecessarily.
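The cross-referencing step in Option C can be sketched in a few lines. The login records, field names, and directory contents below are illustrative assumptions, not real data:

```python
# Sketch: separate legitimate remote workers from truly anomalous VPN logins
# by cross-referencing against the employee directory. All records are invented.

vpn_logins = [
    {"user": "asmith",   "src_country": "US"},
    {"user": "bjones",   "src_country": "DE"},
    {"user": "unknown7", "src_country": "RU"},
]

# Employees confirmed working remotely today (per the 8 AM all-hands email)
employee_directory = {"asmith", "bjones", "cchen"}

legitimate = [l for l in vpn_logins if l["user"] in employee_directory]
anomalous  = [l for l in vpn_logins if l["user"] not in employee_directory]

print(f"{len(legitimate)} legitimate, {len(anomalous)} need investigation")
```

The point is not the code itself but the workflow: the analyst narrows the alert to the subset the AI could not explain away, rather than dismissing or acting on the whole batch.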
This scenario demonstrates when automated AI response is appropriate: (1) very high confidence (98.4%); (2) multiple independent indicators converge (macro, spoofed domain, wire transfer request); (3) the action is low-impact and fully reversible (quarantine, not delete). Option B would expose the CEO to a likely phishing email. Option C is irreversible deletion with no audit trail — overreach. Option D delays response on a high-confidence, high-risk threat targeting the CEO. Automated quarantine with human notification is the textbook balanced response.
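The three conditions above can be expressed as a minimal decision gate. The threshold values and return strings are illustrative assumptions, not a real SOAR policy:

```python
# Sketch of an automated-response gate: act autonomously only when confidence
# is very high, multiple independent indicators converge, and the action is
# reversible. Thresholds here are assumed for illustration.

def decision_gate(confidence: float, indicators: int, reversible: bool) -> str:
    if confidence >= 0.95 and indicators >= 3 and reversible:
        return "auto-quarantine + notify human"
    return "hold for human review"

print(decision_gate(0.984, 3, True))   # the CEO phishing scenario
print(decision_gate(0.73, 1, False))   # lower confidence, irreversible action
```

Note that the gate never returns "delete": even in the fully automated branch, the action stays reversible and a human is notified.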
Context: The AI SIEM shows no anomalies. The employee did not install the tool. No related network activity was detected.
This is a false negative scenario — the AI detected nothing because social engineering attacks operate at the human layer, not the network layer. The SIEM cannot detect a phone call. Human-in-the-loop means humans also feed intelligence into the AI, not just review its outputs. Option A incorrectly defers entirely to AI silence. Option C is disproportionate to the threat — the employee did not install anything. Option D again over-relies on AI and delays a legitimate investigation. This scenario demonstrates that human analysts catch what AI misses.
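The idea that humans feed intelligence into the AI, not just review its outputs, can be sketched as a simple report channel. The HumanReport structure and report() function are hypothetical, not a real SIEM API:

```python
# Sketch: an analyst records a social-engineering attempt the SIEM could not
# see, so it can be correlated with future alerts. Interface is assumed.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HumanReport:
    analyst: str
    category: str
    details: str
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

siem_intel_feed: list[HumanReport] = []

def report(r: HumanReport) -> None:
    # Human-sourced intelligence now sits alongside AI-generated alerts
    siem_intel_feed.append(r)

report(HumanReport("analyst1", "vishing",
                   "Caller posing as IT asked employee to install a remote tool"))
print(len(siem_intel_feed))
```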
A 73% confidence score means a 27% probability of a false positive: meaningful risk when the action is terminating a process on a C-suite executive’s workstation during business hours. This could disrupt a board presentation, a live financial transaction, or a regulatory filing. The correct approach: immediately identify what the process is (task manager, EDR console), check with the IT staff who manage the CFO’s workstation for context, then make a human decision. Option A accepts too much false-positive risk on a high-impact action. Option C dismisses a 73%-confidence threat entirely. Option D introduces an arbitrary time constraint that doesn’t improve decision quality.
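The tradeoff can be made concrete as an expected-cost comparison. The dollar figures below are invented purely for illustration:

```python
# Sketch: weigh the expected cost of acting immediately against a short
# human investigation. Cost figures are assumed, not from the scenario.

p_malicious = 0.73
cost_false_positive = 50_000    # assumed: outage on the CFO's workstation
cost_delayed_response = 10_000  # assumed: cost of a brief delay if threat is real

# Kill the process now: pay the disruption cost whenever the AI is wrong
kill_now = (1 - p_malicious) * cost_false_positive       # 0.27 * 50000

# Check first: pay only the small delay cost when the threat is real
check_first = p_malicious * cost_delayed_response        # 0.73 * 10000

print(f"kill now: {kill_now:.0f}, check first: {check_first:.0f}")
```

Under these assumed costs, a quick human check dominates immediate termination, which is exactly why 73% confidence on a high-impact action routes to a human.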