AP CSP Day 30: Computing Innovations Review

Key Concepts

Computing innovations include hardware, software, and system-level advances that create new capabilities or significantly change existing ones. The AP CSP exam expects students to evaluate both the intended benefits and unintended harmful effects of computing innovations. Key considerations include privacy risks from data collection, security vulnerabilities, economic disruption, and environmental impact. Distinguishing effects that designers anticipated from those that emerged unexpectedly is a central skill in Big Idea 5 exam questions.


Computing Innovations: Benefits and Harms

What Counts as a Computing Innovation?

A computing innovation uses a program as an integral part of its function, distinguishing it from innovations that merely use computers incidentally. Ride-sharing apps, GPS navigation, social media platforms, and machine learning recommendation systems are all computing innovations.

Unintended Effects

Every computing innovation has intended effects (the designed purpose) and potential unintended effects that emerge from real-world use. Social media was designed for connection; unintended effects include misinformation spread and mental health impacts. Self-driving cars are designed for safety; unintended effects include edge-case accidents and job displacement.

Common Trap: Confusing 'unintended' with 'harmful.' Unintended effects can be positive (GPS navigation unexpectedly helping emergency responders) or negative. The AP exam focuses on whether effects were anticipated by designers.
Exam Tip: For any described innovation, practice generating both a positive unintended consequence and a negative unintended consequence. The AP exam often asks for both.
Big Idea 5: Impact of Computing
Cycle 1 - Day 30 Practice - Medium Difficulty
Focus: Computing Innovations Review

Practice Question

A team develops a machine learning algorithm to screen job applications. The training data consists of resumes from employees hired over the past 10 years at a company that historically employed mostly male engineers. Which of the following is MOST likely to occur?

A) The algorithm will evaluate applicants objectively, because it is mathematical and free of human bias.
B) The algorithm will favor applicants who resemble past hires, reproducing the historical bias toward male candidates.
C) The algorithm will automatically detect the imbalance in its training data and correct for it.
D) The algorithm cannot be biased, because resumes do not contain an explicit gender field.

Correct answer: B

Why This Answer?

Machine learning algorithms learn patterns from their training data. If the historical data reflects biased hiring practices (predominantly male hires), the algorithm will learn to favor characteristics associated with that group. This is called algorithmic bias: the algorithm reproduces, and can even amplify, existing human biases rather than being neutral.
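A tiny sketch can make "bias in, bias out" concrete. The toy scorer below (all names and resume text are invented for illustration, not from any real screening system) simply rewards words that appeared in past hires' resumes. Because the historical hires share an irrelevant trait, a new applicant who lacks that trait scores lower even with identical qualifications:

```python
# Toy illustration of algorithmic bias: a scorer trained only on past
# hires learns to reward whatever words those hires happened to share.
from collections import Counter

def train_scorer(hired_resumes):
    """Count how often each word appears in past hires' resumes."""
    counts = Counter()
    for resume in hired_resumes:
        counts.update(resume.lower().split())
    return counts

def score(resume, counts):
    """Score a new resume by how much it resembles past hires."""
    return sum(counts[word] for word in resume.lower().split())

# Historical hires skew toward one group's typical activities:
# "football" appears repeatedly, "softball" never does.
past_hires = [
    "software engineer football captain",
    "systems engineer football club",
    "backend engineer chess and football",
]
weights = train_scorer(past_hires)

# Two equally qualified applicants; only an irrelevant hobby differs.
applicant_a = "software engineer football captain"
applicant_b = "software engineer softball captain"

print(score(applicant_a, weights) > score(applicant_b, weights))  # True
```

No gender field appears anywhere, yet the scorer still penalizes applicant B for a gender-correlated word, which is exactly why answer choice D fails.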

Why Not the Others?

A) Computers are not inherently objective - they reflect the biases present in their training data and the choices made by their developers. This is a common misconception.

C) Algorithms do not automatically detect or correct bias. Identifying and mitigating bias requires deliberate effort from developers, including diverse training data and bias testing.

D) Even without explicit gender fields, algorithms can infer gender-correlated patterns from names, activities, word choices, and other resume features.

Common Mistake

Many students believe algorithms are neutral because they are mathematical. The AP exam tests the understanding that algorithms reflect the data they are trained on and the decisions of their creators. Bias in, bias out.

AP Exam Tip

Questions about algorithmic bias almost always have a correct answer involving: (1) biased training data leads to biased results, (2) humans are responsible for addressing bias, and (3) algorithms can have unintended consequences. Look for these themes!

Cycle 1 Complete!

Great job finishing Cycle 1! Continue to Cycle 2 for more practice.
