10F: Assessing Generative AI for Bias: An Input and Output Decision Tool
Friday, September 19 | 4:30 p.m. – 5:15 p.m.
Generative AI models like GPT-4 can perpetuate and amplify biases present in their training data, raising ethical concerns about fairness in AI outputs. This study proposes a decision tree as a structured tool for evaluating generative AI inputs and outputs for bias. It systematically guides users in identifying and assessing various forms of bias, including gender, racial, and cultural bias, helping to promote equity and reduce bias in AI applications.
Presenters:
Debra Sullivan, PhD, MSN, RN, CNE, COI | Walden University, MSN specialty
Christine Frazer, PhD, CNS, CNE | Walden University, MSN specialty
Virginia Sullivan, MA, PhD(c) | University of Alabama in Huntsville