Paper 1: Cognitive Biases: Understanding and Designing Fair AI Systems for Software Development
Abstract: Artificial Intelligence (AI) systems increasingly influence decisions that affect people's lives, making fairness a core requirement. However, cognitive biases (systematic deviations in human judgment) can enter AI systems through data, modeling choices, and oversight, amplifying social inequities. This paper examines how three bias channels (data, algorithmic, and human) manifest across the software development lifecycle and synthesizes practical mitigation strategies. Through a qualitative review of recent scholarship and real‑world case studies, we distill a lightweight diagnostic framework that helps practitioners identify bias sources; evaluate mitigation options against effectiveness, feasibility, transparency, and scalability; and institutionalize routine audits. We illustrate the framework with representative vignettes and summarize trade‑offs between fairness goals and model performance. Our analysis recommends diverse, well‑documented datasets, fairness‑aware learning and evaluation, third‑party audits, and cross‑functional collaboration as mutually reinforcing levers. The paper contributes a developer‑oriented map of cognitive bias risks across data, model, and human processes; a four‑criterion rubric for comparing mitigation techniques; and an actionable checklist that teams can embed in their pipelines. The results aim to support software and product teams in building AI systems that are both accurate and equitable.
Keywords: Cognitive biases; fair AI systems; algorithmic bias; software development; bias mitigation; fairness; software engineering