Gender stereotypes in AI-generated images
- Francisco-José García-Ull 1
- Mónica Melero-Lázaro 2

1 Universidad Europea de Valencia, Valencia, Spain
2 Universidad de Valladolid
ISSN: 1386-6710, 1699-2407
Year of publication: 2023
Issue title: Disinformation and online media
Volume: 32
Issue: 5
Type: Article
Published in: El profesional de la información
Abstract
This study explores workplace gender bias in images generated by DALL-E 2, an artificial intelligence (AI) image-synthesis application. We used a stratified probability sampling method, dividing the sample into segments on the basis of 37 different professions, or prompts, replicating the study by Farago, Eggum-Wilkens and Zhang (2021) on gender stereotypes in the workplace. Two coders manually entered each profession into the image generator; DALL-E 2 returned 9 images per query, yielding a sample of 666 images, with a confidence level of 99% and a margin of error of 5%. Each image was then rated on a 3-point Likert scale: 1, not stereotypical; 2, moderately stereotypical; and 3, strongly stereotypical. We found that the generated images replicate gender stereotypes in the workplace: 21.6% of AI-generated images depicting professionals show full stereotypes of women, while 37.8% show full stereotypes of men. Whereas previous studies conducted with humans had already documented gender stereotypes in the workplace, our research shows that AI not only replicates this stereotyping but reinforces and amplifies it: human research on gender bias reports strong stereotyping in 35% of instances, whereas the AI exhibits strong stereotyping in 59.4% of cases. These results emphasise the need for a diverse and inclusive AI development community as the basis for a fairer and less biased AI.
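The sampling and coding procedure summarised above can be sketched in a short script. The sketch below is purely illustrative and rests on assumptions not in the article: the authors worked manually through the DALL-E 2 web interface, whereas this sketch calls the OpenAI Python SDK, uses a placeholder profession list and made-up coder ratings, and only the 3-point scale, the 9-images-per-prompt design and the 99% confidence / 5% margin target are taken from the abstract.

```python
import math
from collections import Counter

from openai import OpenAI  # assumes the openai Python SDK (>=1.0) is installed

# Required sample size for estimating a proportion at 99% confidence, 5% margin of error
z, p, e = 2.576, 0.5, 0.05
n_required = math.ceil(z**2 * p * (1 - p) / e**2)  # 664, in line with the 666 images collected

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder professions; the study used 37 prompts drawn from Farago et al. (2021)
professions = ["nurse", "engineer", "teacher"]

images = {}
for prompt in professions:
    # DALL-E 2 returns up to 10 images per request; the study collected 9 per prompt
    response = client.images.generate(model="dall-e-2", prompt=prompt, n=9, size="512x512")
    images[prompt] = [item.url for item in response.data]

# Hypothetical coder ratings on the study's 3-point Likert scale:
# 1 = not stereotypical, 2 = moderately stereotypical, 3 = strongly stereotypical
ratings = [3, 1, 2, 3, 3, 1, 2, 3, 2]  # example values only, not the study's data

counts = Counter(ratings)
share_strong = 100 * counts[3] / len(ratings)
print(f"required sample size: {n_required}")
print(f"strongly stereotypical: {share_strong:.1f}% of rated images")
```

The sample-size line alone reproduces the figure implied in the abstract (about 664 images for a 99% confidence level and 5% margin of error), consistent with the 666 images actually collected.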
References
- Agudo, Ujué; Liberal, Karlos G. (2020). "El automágico traje del emperador". Medium.com, 9 September. https://medium.com/bikolabs/el-automagico-traje-del-emperador-c2a0bbf6187b
- Archer, Cynthia J. (1984). "Children's attitudes toward sex-role division in adult occupational roles". Sex roles, v. 10. https://doi.org/10.1007/BF00287742
- Belhadi, Amine; Kamble, Sachin; Fosso-Wamba, Samuel; Queiroz, Maciel M. (2022). "Building supply-chain resilience: an artificial intelligence-based technique and decision-making framework". International journal of production research, v. 60, n. 14, pp. 4487-4507. https://doi.org/10.1080/00207543.2021.1950935
- Bolukbasi, Tolga; Chang, Kai-Wei; Zou, James; Saligrama, Venkatesh; Kalai, Adam (2016). "Man is to computer programmer as woman is to homemaker? Debiasing word embeddings". In: NIPS'16: Proceedings of the 30th international conference on neural information processing systems, pp. 4356-4364. https://doi.org/10.48550/arXiv.1607.06520
- Borji, Ali (2022). Generated faces in the wild: quantitative comparison of Stable Diffusion, Midjourney and DALL-E 2. Quintic AI, San Francisco, CA. https://arxiv.org/pdf/2210.00586.pdf
- Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (2020). "Language models are few-shot learners". Advances in neural information processing systems, v. 33, pp. 1877-1901. https://doi.org/10.48550/arXiv.2005.14165
- Buolamwini, Joy; Gebru, Timnit (2018). "Gender shades: intersectional accuracy disparities in commercial gender classification". Proceedings of machine learning research, v. 81. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
- Caliskan, Aylin; Bryson, Joanna J.; Narayanan, Arvind (2017). "Semantics derived automatically from language corpora contain human-like biases". Science, v. 356, n. 6334, pp. 183-186. https://doi.org/10.1126/science.aal4230
- Cortina-Orts, Adela (2019). "Ética de la inteligencia artificial". Anales de la Real Academia de Ciencias Morales y Políticas, pp. 379-394. Ministerio de Justicia. https://www.boe.es/biblioteca_juridica/anuarios_derecho/articulo.php?id=ANU-M-2019-10037900394
- Crawford, Kate (2021). The atlas of AI: power, politics, and the planetary costs of artificial intelligence. Yale University Press. ISBN: 978 0 300252392 https://doi.org/10.2307/j.ctv1ghv45t
- Criado-Pérez, Caroline (2020). La mujer invisible. Descubre cómo los datos configuran un mundo hecho por y para los hombres. Barcelona: Seix Barral. ISBN: 978 84 32236136
- DALL-E 2 (2021). OpenAI. https://openai.com/dall-e-2
- De-Carvalho, André-Carlos-Ponce-de-Leon-Ferreira (2021). "Inteligência artificial: riscos, benefícios e uso responsável". Estudos avançados, v. 35, n. 101. https://doi.org/10.1590/s0103-4014.2021.35101.003
- D'Ignazio, Catherine; Klein, Lauren F. (2020). Data feminism. Cambridge: MIT Press. ISBN: 978 0 262547185
- Eichenberger, Livia (2022). "DALL-E 2: Why discrimination in AI development cannot be ignored". Statworx blog post, 28 June. https://www.statworx.com/en/content-hub/blog/dalle-2-open-ai
- Estupiñán-Ricardo, Jesús; Leyva-Vázquez, Maikel-Yelandi; Peñafiel-Palacios, Álex-Javier; El-Asaffiri-Ojeda, Yusef (2021). "Inteligencia artificial y propiedad intelectual". Universidad y sociedad, v. 13, n. S3, pp. 362-368. https://rus.ucf.edu.cu/index.php/rus/article/view/2490
- Eubanks, Virginia (2018). Automating inequality: how high-tech tools profile, police, and punish the poor. New York: St. Martin's Press. ISBN: 978 1 250074317
- Farago, Flora; Eggum-Wilkens, Natalie D.; Zhang, Linlin (2021). "Ugandan adolescents' gender stereotype knowledge about jobs". Youth & society, v. 53, n. 5, pp. 723-744. https://doi.org/10.1177/0044118X19887075
- Francescutti, Pablo (2018). La visibilidad de las científicas españolas. Fundación Dr. Antoni Esteve, Grupo de estudios avanzados de comunicación, Barcelona. https://www.raco.cat/index.php/QuadernsFDAE/issue/download/30066/439
- Franganillo, Jorge (2022). "Contenido generado por inteligencia artificial: oportunidades y amenazas". Anuario ThinkEPI, v. 16, e16a24. https://doi.org/10.3145/thinkepi.2022.e16a24
- Gamir-Ríos, José; Tarullo, Raquel (2022). "Predominio de las cheapfakes en redes sociales. Complejidad técnica y funciones textuales de la desinformación desmentida en Argentina durante 2020". adComunica, v. 23, pp. 97-118. https://doi.org/10.6035/adcomunica.6299
- García-Ull, Francisco-José (2021). "Deepfakes: el próximo reto en la detección de noticias falsas". Anàlisi, n. 64, pp. 103-120. https://doi.org/10.5565/rev/analisi.3378
- Goodfellow, Ian J.; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). "Generative adversarial networks". In: Advances in neural information processing systems. Reprinted in: Communications of the ACM, v. 63, pp. 139-164. https://doi.org/10.48550/arXiv.1406.2661
- Gottfredson, Linda S. (1981). "Circumscription and compromise: A developmental theory of occupational aspirations". Journal of counseling psychology, v. 28, n. 6, pp. 545-579. https://doi.org/10.1037/0022-0167.28.6.545
- Laino, María-Elena; Cancian, Pierandrea; Salvatore-Politi, Letterio; Della-Porta, Matteo-Giovanni; Saba, Luca; Savevski, Victor (2022). "Generative adversarial networks in brain imaging: A narrative review". Journal of imaging, v. 8, n. 4, 83. https://doi.org/10.3390/jimaging8040083
- Leavy, Susan (2018). "Gender bias in artificial intelligence: the need for diversity and gender theory in machine learning". In: Proceedings of the 1st international workshop on gender equality in software engineering, pp. 14-16. https://doi.org/10.1145/3195570.3195580
- Leavy, Susan; Meaney, Gerardine; Wade, Karen; Greene, Derek (2020). "Mitigating gender bias in machine learning data sets". In: Bias2020 workshop: Bias and social aspects in search and recommendation. https://doi.org/10.1007/978-3-030-52485-2_2
- Liben, Lynn S.; Bigler, Rebecca S.; Krogh, Holleen R. (2001). "Pink and blue collar jobs: children's judgments of job status and job aspirations in relation to sex of worker". Journal of experimental child psychology, v. 79, n. 4, pp. 346-363. https://doi.org/10.1006/jecp.2000.2611
- Loftus, Tyler J.; Tighe, Patrick J.; Filiberto, Amanda C.; Efron, Philip A.; Brakenridge, Scott C.; Mohr, Alicia M.; Rashidi, Parisa; Upchurch, Gilbert R.; Bihorac, Azra (2020). "Artificial intelligence and surgical decision-making". JAMA surgery, v. 155, n. 2, pp. 148-158. https://doi.org/10.1001/jamasurg.2019.4917
- Manassero, Antonia; Vázquez, Ángel (2003). "Las mujeres científicas: un grupo invisible en los libros de texto". Revista investigación en la escuela, v. 50, pp. 31-45. https://revistascientificas.us.es/index.php/IE/article/view/7582
- Millán, Víctor (2022). "DALL-E 2: ¿cómo funciona y qué supone? La IA que crea imágenes de la nada y es, simplemente, perfecta y aterradora". Hipertextual, 29 May. https://hipertextual.com/2022/05/dall-e-2
- Nica, Elvira; Sabie, Oana-Matilda; Mascu, Simona; Luţan-Petre, Anca-Georgeta (2022). "Artificial intelligence decision-making in shopping patterns: consumer values, cognition, and attitudes". Economics, management and financial markets, v. 17, n. 1, pp. 31-43. https://doi.org/10.22381/emfm17120222
- O'Neil, Cathy (2018). Armas de destrucción matemática: cómo el big data aumenta la desigualdad y amenaza la democracia. Capitán Swing Libros. ISBN: 978 84 947408 4 8
- OpenAI (2022a). "DALL-E now available without waitlist". OpenAI, September 28. https://openai.com/blog/dall-e-now-available-without-waitlist
- OpenAI (2022b). "Reducing bias and improving safety in DALL-E 2". OpenAI, July 18. https://openai.com/blog/reducing-bias-and-improving-safety-in-dall-e-2
- Ortiz-de-Zárate-Alcarazo, Lucía (2023). "Sesgos de género en la inteligencia artificial". Revista de occidente, v. 1, n. 502. https://dialnet.unirioja.es/servlet/articulo?codigo=8853265
- Pérez-Gómez, Miguel-Ángel; Echazarreta-Soler, Carmen; Audebert-Sánchez, Meritxell; Sánchez-Miret, Cristina (2020). "El ciberacoso como elemento articulador de las nuevas violencias digitales: métodos y contextos". Communication papers. Media literacy and gender studies, v. 9, n. 18. https://doi.org/10.33115/udg_bib/cp.v9i18.22470
- Porayska-Pomsta, Kaska; Rajendran, Gnanathusharan (2019). "Accountability in human and artificial intelligence decision-making as the basis for diversity and educational inclusion". In: Knox, Jeremy; Wang, Yuchen; Gallagher, Michael. Artificial intelligence and inclusive education: speculative futures and emerging practices. Springer, pp. 39-59. https://doi.org/10.1007/978-981-13-8161-4_3
- Postman, Neil (1991). Divertirse hasta morir, el discurso público en la era del show business. Barcelona: Ediciones la Tempestad. ISBN: 978 84 79480462
- Quirós-Fons, Antonio; García-Ull, Francisco-José (2022). "La inteligencia artificial como herramienta de la desinformación: deepfakes y regulación europea". In: Los derechos humanos en la inteligencia artificial: su integración en los ODS de la Agenda 2030. Thomson Reuters Aranzadi, pp. 537-556. ISBN: 978 84 1124 557 9
- Rassin, Royi; Ravfogel, Shauli; Goldberg, Yoav (2022). "DALL-E 2 is seeing double: flaws in word-to-concept mapping in text2image models". https://doi.org/10.48550/arXiv.2210.10606
- Sainz, Milagros; Arroyo, Lidia; Castaño, Cecilia (2020). Mujeres y digitalización: de las brechas a los algoritmos. Instituto de la Mujer y para la Igualdad de Oportunidades. https://www.inmujeres.gob.es/diseno/novedades/M_MUJERES_Y_DIGITALIZACION_DE_LAS_BRECHAS_A_LOS_ALGORITMOS_04.pdf
- Sourdin, Tania (2018). "Judge v Robot? Artificial intelligence and judicial decision-making". UNSW law journal, v. 41, n. 4, pp. 1114-1133. https://www.unswlawjournal.unsw.edu.au/wp-content/uploads/2018/12/Sourdin.pdf
- Teig, Stacey; Susskind, Joshua E. (2008). "Truck driver or nurse? The impact of gender roles and occupational status on children's occupational preferences". Sex roles, v. 58, pp. 848-863. https://doi.org/10.1007/s11199-008-9410-x
- Traylor, Jake (2022). "No quick fix: how OpenAI's DALL-E 2 illustrated the challenges of bias in AI". NBC news, July 27. https://www.nbcnews.com/tech/tech-news/no-quick-fix-openais-dalle-2-illustrated-challenges-bias-ai-rcna39918
- Véliz, Carissa (2021). Privacidad es poder: datos, vigilancia y libertad en la era digital. Debate. ISBN: 978 84 18056680
- Vincent, James (2020). "OpenAI´s latest breakthrough is astonishingly powerful, but still fighting its flaws". The verge tech, July 30. https://www.theverge.com/21346343/gpt-3-explainer-openai-examples-errors-agi-potential
- Wang, Tianlu; Zhao, Jieyu; Yatskar, Mark; Chang, Kai-Wei; Ordóñez, Vicente (2019). "Balanced datasets are not enough: estimating and mitigating gender bias in deep image representations". In: International conference on computer vision, ICCV 2019. https://doi.org/10.48550/arXiv.1811.08489
- Zhou, Yufan; Zhang, Ruiyi; Chen, Changyou; Li, Chunyuan; Tensmeyer, Chris; Yu, Tong; Gu, Jiuxiang; Xu, Jinhui; Sun, Tong (2021). "Towards language-free training for text-to-image generation". https://arxiv.org/pdf/2111.13792v3.pdf