AP CSP Day 44: Bias in Data & Algorithms | Cycle 2
Feedback loops in biased algorithms amplify initial inequities over time: a predictive policing algorithm trained on historically biased arrest data directs more patrols to certain areas, generating more arrests there, which reinforces the training data bias in the next model iteration. AP CSP Cycle 2 bias questions ask students to trace how an algorithm's biased outputs become inputs to future decisions, creating a self-reinforcing cycle. Identifying both the original source of bias and the mechanism by which it compounds is the key analytical challenge in these harder exam questions.
📚 Study the Concept First (Optional)
Algorithmic Bias: Feedback Loops
What Is a Feedback Loop?
A feedback loop occurs when an algorithm's outputs are used as inputs to train the next version of the same algorithm. If the initial algorithm has bias, its biased decisions generate biased data that makes the next version more biased, amplifying the original problem.
Real-World Example
A credit-scoring algorithm trained on historical data in which certain communities were systematically denied loans will continue denying loans to those communities. Each denial adds another "no loan" record for those communities, further reinforcing the pattern in the next training cycle.
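The amplification mechanism can be sketched as a toy simulation of the predictive-policing example from the introduction. All numbers, the district setup, and the allocation rule (over-weighting the higher-arrest district) are invented for illustration, not taken from any real system:

```python
# Toy model of a biased-data feedback loop. Two districts have the SAME
# true crime rate, but the historical arrest data starts out biased
# toward district A.

def patrol_share(arrests_a, arrests_b, weight=2):
    # Assumed allocation rule: the model over-weights the district with
    # more recorded arrests (weight > 1 exaggerates small differences).
    a, b = arrests_a ** weight, arrests_b ** weight
    return a / (a + b)

def simulate(arrests_a=60, arrests_b=40, patrols=100, true_rate=0.5, rounds=6):
    shares = []
    for _ in range(rounds):
        share_a = patrol_share(arrests_a, arrests_b)
        shares.append(round(share_a, 3))
        # Arrests scale with patrol presence, NOT with actual crime:
        # more patrols sent to A means more arrests recorded in A,
        # which becomes the training data for the next round.
        arrests_a += share_a * patrols * true_rate
        arrests_b += (1 - share_a) * patrols * true_rate
    return shares

print(simulate())  # district A's patrol share climbs round after round
```

Even though both districts have identical true crime rates, district A's share of patrols grows every round, because the model keeps retraining on data that its own patrol decisions generated.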
Practice Question
A social media platform uses an algorithm to recommend news articles based on each user's reading history. Which of the following describes potential negative effects of this approach?
I. Users may primarily see articles that reinforce their existing beliefs, creating a filter bubble.
II. The algorithm may amplify popular or sensational content regardless of its accuracy.
III. The algorithm ensures every user receives a balanced, unbiased selection of perspectives.
The correct answer includes Statements I and II only. Statement I is true: recommendation algorithms create filter bubbles by showing content similar to what users already engage with. Statement II is true: engagement-based algorithms tend to promote viral or sensational content because it generates more clicks, regardless of accuracy. Statement III is false: personalized recommendations by definition show different content to different users, making balanced exposure unlikely.
Why the other answer choices fail: (B) Statement II is also a valid negative effect, not just Statement I. (C) Statement III directly contradicts how recommendation algorithms work. (D) Statement III is false, so not all three statements are correct.
Common misconception: students assume algorithms are designed to provide balanced information. In practice, engagement-optimized algorithms prioritize content that keeps users clicking, which often means reinforcing existing preferences rather than broadening perspectives.
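The click-reinforcement dynamic can be sketched with a small deterministic model. The topic labels, starting counts, and click probabilities are all invented, and this is an expected-value sketch, not a real recommender:

```python
def simulate_feed(rounds=10):
    # Engagement history the recommender "trains" on (invented numbers):
    # the user starts with a nearly balanced click record.
    clicks = {"preferred": 6.0, "other": 5.0}
    # The user clicks their preferred topic only slightly more often.
    click_prob = {"preferred": 0.55, "other": 0.45}
    history = []
    for _ in range(rounds):
        total = clicks["preferred"] + clicks["other"]
        for topic in clicks:
            shown = clicks[topic] / total               # feed share this round
            clicks[topic] += shown * click_prob[topic]  # expected new clicks
        # Track how much of the click record the preferred topic now owns.
        history.append(round(clicks["preferred"] / sum(clicks.values()), 3))
    return history

print(simulate_feed())  # the preferred topic's share grows every round
```

A mild 55/45 preference hardens round after round: the feed shows more of whatever was clicked, which generates more clicks for it, which makes the feed show even more of it.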
For questions about algorithmic impact, consider both the intended purpose (recommendations) and unintended consequences (filter bubbles, amplification of sensationalism).
Keep Practicing!
Consistent daily practice is the key to AP CSP success.