AMTP Proceedings 2026

Document Type

Conference Proceeding

Publication Date

Spring 2026

Abstract

This study examines bias in generative AI through an analysis of approximately 8,000 occupational portraits created by Midjourney, Stable Diffusion, and DALL·E 2. We document significant underrepresentation of women and Black individuals relative to real-world occupational benchmarks. The research identifies two primary manifestations of bias: systematic gender and racial disparities, and subtle biases in facial expressions that influence perceptions of competence and trustworthiness. Through an iterative "Creative Lab" involving a fictional brand, we employ a three-phase experimental design to test whether AI disclosure labels, as proposed in the AI Disclosure Act of 2023, can mitigate the impact of these biases on consumer evaluations. Specifically, the study assesses whether mandated transparency measures reduce the influence of biased AI outputs on brand trust and purchase intention. The findings provide empirical support for disclosure-based regulatory approaches as practical means of mitigating generative bias in marketing.
