Creating non-discriminatory Artificial Intelligence systems: balancing the tensions between code granularity and the general nature of legal rules

Alba Soriano Arnanz

Universitat de València, Valencia, Spain (ROR: https://ror.org/043nxc105)

Journal:
IDP: revista de Internet, derecho y política = revista d'Internet, dret i política

ISSN: 1699-8154

Year of publication: 2023

Issue: 38

Type: Article

DOI: 10.7238/IDP.V0I38.403794


Abstract

Over the past decade, concern has grown regarding the risks generated by the use of artificial intelligence systems. One of the main problems associated with these systems is the harm they have been proven to cause to the fundamental right to equality and non-discrimination. In this context, it is vital to examine existing and proposed regulatory instruments that aim to address this issue, especially in view of the difficulty of translating the abstract rules that typically characterise legal instruments, and the equality and non-discrimination framework in particular, into the specific instructions needed to code an artificial intelligence system that aims to be non-discriminatory. This paper examines how article 10 of the new EU Artificial Intelligence Act proposal may be the starting point for a new form of regulation that adapts to the needs of algorithmic systems.
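To make this tension concrete, one of the few non-discrimination standards specific enough to be checked directly in code is the “four-fifths rule” from the US EEOC Uniform Guidelines cited in the references below: a selection rate for any group that is less than 80% of the rate for the most favoured group is generally treated as evidence of adverse impact. The Python sketch below is purely illustrative and not drawn from the article; the function names, data and fixed threshold are assumptions.

    # Illustrative sketch (not from the article): a rough disparate-impact
    # screen based on the EEOC "four-fifths rule". Names and data are assumed.
    from collections import Counter

    def selection_rates(outcomes):
        """outcomes: iterable of (group, selected) pairs, e.g. ("a", True)."""
        totals, selected = Counter(), Counter()
        for group, was_selected in outcomes:
            totals[group] += 1
            if was_selected:
                selected[group] += 1
        return {group: selected[group] / totals[group] for group in totals}

    def passes_four_fifths_rule(outcomes, threshold=0.8):
        """True if every group's selection rate is at least `threshold`
        times the highest group's rate."""
        rates = selection_rates(outcomes)
        highest = max(rates.values())
        return all(rate >= threshold * highest for rate in rates.values())

    # Group "a" is selected at twice the rate of group "b".
    data = ([("a", True)] * 8 + [("a", False)] * 2
            + [("b", True)] * 4 + [("b", False)] * 6)
    print(selection_rates(data))          # {'a': 0.8, 'b': 0.4}
    print(passes_four_fifths_rule(data))  # False: 0.4 < 0.8 * 0.8

Most EU equality rules, by contrast, set no comparable numeric threshold: whether a given disparity amounts to prohibited discrimination depends on context and justification. That open texture is precisely the gap between general legal rules and executable code that the article addresses.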

References

  • BAROCAS, S.; SELBST, A. D. (2016). “Big data’s disparate impact”. California Law Review, vol. 104, no. 3, pp. 671-732. DOI: https://doi.org/10.2139/ssrn.2477899
  • BAROCAS, S.; SELBST, A. D. (2018). “The intuitive appeal of explainable machines”. Fordham Law Review, vol. 87, no. 3, pp. 1085-1139. DOI: https://doi.org/10.2139/ssrn.3126971
  • BENT, J. R. (2020). “Is algorithmic affirmative action legal?”. The Georgetown Law Journal, vol. 108, pp. 803-853.
  • BERK, R.; HEIDARI, H.; JABBARI, S.; KEARNS, M.; ROTH, A. (2018). “Fairness in criminal justice risk assessments: the state of the art”. Sociological Methods and Research, vol. 50, no. 1, pp. 1-24. DOI: https://doi.org/10.1177/0049124118782533
  • CERRILLO I MARTÍNEZ, A. (2020). “El impacto de la inteligencia artificial en el derecho administrativo ¿nuevos conceptos para nuevas realidades técnicas?”. Revista General de Derecho Administrativo, no. 50.
  • CHOULDECHOVA, A. (2016). “Fair prediction with disparate impact: a study of bias in recidivism prediction instruments”. arXiv [online]. [Accessed: 6 September 2022]. DOI: https://doi.org/10.48550/arXiv.1610.07524
  • CORBETT-DAVIES, S.; PIERSON, E.; GOEL, S. (2016, October). “A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear”. The Washington Post [online]. [Accessed: 6 September 2022]. Available at: https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/?noredirect=on
  • CORBETT-DAVIES, S.; GOEL, S. (2018). “The measure and mismeasure of fairness: a critical review of fair machine learning”. arXiv [online]. [Accessed: 6 September 2022]. DOI: https://doi.org/10.48550/arXiv.1808.00023
  • FRIEDLER, S. A.; SCHEIDEGGER, C. E.; VENKATASUBRAMANIAN, S.; CHOUDHARY, S.; HAMILTON, E. P.; ROTH, D. (2018). “A Comparative Study of Fairness-Enhancing Interventions in Machine Learning”. Proceedings of the Conference on Fairness, Accountability, and Transparency [online]. [Accessed: 6 September 2022]. DOI: https://doi.org/10.1145/3287560.3287589
  • GERARDS, J.; XENIDIS, R. (2021). Algorithmic discrimination in Europe: Challenges and opportunities for gender equality and non-discrimination law. European network of legal experts in gender equality and non-discrimination. Luxembourg: Publications Office of the European Union.
  • HACKER, P. (2018). “Teaching fairness to artificial intelligence: existing and novel strategies against algorithmic discrimination under EU law”. Common Market Law Review, vol. 55, no. 4, pp. 1143-1186. DOI: https://doi.org/10.54648/COLA2018095
  • HUERGO LORA, A. (2020). “Una aproximación a los algoritmos desde el Derecho administrativo”. In: HUERGO LORA, A. (dir.) and DÍAZ GONZÁLEZ, G. M. (coord.). La Regulación de los Algoritmos, pp. 23-87. Cizur Menor: Aranzadi.
  • HUNT, B. (2005). “Redlining”. Encyclopedia of Chicago [online]. [Accessed: 6 September 2022]. Available at: http://www.encyclopedia.chicagohistory.org/
  • KIM, P. T. (2017). “Data-driven discrimination at work”. William & Mary Law Review, vol. 58, pp. 857-936.
  • LESSIG, L. (2006). Code: Version 2.0. New York: Basic Books.
  • MAKKONEN, T. (2007). Measuring discrimination: data collection and EU equality law. Luxembourg: Office for Official Publications of the European Communities.
  • PLEISS, G.; RAGHAVAN, M.; WU, F.; KLEINBERG, J.; WEINBERGER, K. Q. (2017). “On Fairness and Calibration”. Advances in Neural Information Processing Systems. [online]. [Accessed: 6 September 2022]. Available at: https://proceedings.neurips.cc/paper/2017/file/b8b9c74ac526fffbeb2d39ab038d1cd7-Paper.pdf
  • RENAN BARZILAY, A.; BEN-DAVID, A. (2017). “Platform inequality: gender in the gig economy”. Seton Hall Law Review, vol. 47, pp. 393-431. DOI: https://doi.org/10.2139/ssrn.2995906
  • SORIANO ARNANZ, A. (2020). Posibilidades actuales y futuras para la regulación de la discriminación producida por algoritmos. Doctoral thesis [online]. [Accessed: 6 September 2022]. Available at: https://roderic.uv.es/handle/10550/77050
  • SORIANO ARNANZ, A. (2021a). “La propuesta de Reglamento de Inteligencia Artificial de la Unión Europea y los sistemas de alto riesgo”. Revista General de Derecho de los Sectores Regulados, vol. 8, no. 1.
  • SORIANO ARNANZ, A. (2021b). “La situación de las mujeres en el empleo público: análisis y propuestas”. IgualdadES, no. 4, pp. 87-121. DOI: https://doi.org/10.18042/cepc/IgdES.4.03
  • SORIANO ARNANZ, A. (2021c). “Decisiones automatizadas: problemas y soluciones jurídicas. Más allá de la protección de datos”. Revista de Derecho Público: Teoría y Método, vol. 3, pp. 85-127. DOI: https://doi.org/10.37417/RPD/vol_3_2021_535
  • US Equal Employment Opportunity Commission. (1979). “Questions and Answers to Clarify and Provide a Common Interpretation of the Uniform Guidelines on Employee Selection Procedures”. US Equal Employment Opportunity Commission [online]. [Accessed: 6 September 2022]. Available at: https://www.eeoc.gov/
  • VALERO TORRIJOS, J. (2020). “The legal guarantees of artificial intelligence in administrative activity: reflections and contributions from the viewpoint of Spanish administrative law and good administration requirements”. European Review of Digital Administration & Law, vol. 1, no. 1-2, pp. 55-62.
  • WACHTER, S. (2020). “Affinity Profiling and Discrimination by Association in Online Behavioural Advertising”. Berkeley Technology Law Journal, vol. 35, no. 2. DOI: https://doi.org/10.2139/ssrn.3388639
  • WACHTER, S.; MITTELSTADT, B.; RUSSELL, C. (2021). “Why fairness cannot be automated: bridging the gap between EU non-discrimination law and AI”. Computer Law & Security Review, vol. 41. DOI: https://doi.org/10.1016/j.clsr.2021.105567
  • ŽLIOBAITĖ, I. (2015). “A survey on measuring indirect discrimination in machine learning”. arXiv [online]. [Accessed: 6 September 2022]. DOI: https://doi.org/10.48550/arXiv.1511.00148
  • ŽLIOBAITĖ, I.; CUSTERS, B. (2016). “Using sensitive personal data may be necessary for avoiding discrimination in data-driven decision models”. Artificial Intelligence & Law, vol. 24, no. 2, pp. 183-201. DOI: https://doi.org/10.1007/s10506-016-9182-5