Algorithmic bias as a legal dilemma

Keywords: artificial intelligence, European law, algorithms, algorithmic bias, AI Act, AI Regulation, European regulation, generative AI

Abstract

This study examines algorithmic bias as a legal dilemma, presenting the conceptual foundations of the phenomenon, its effects, and the regulatory challenges it raises. We outline the fragmented regulatory landscape of the United States and the EU's strict, user-centred framework, including the GDPR, the DSA, and the Artificial Intelligence Act. As a result of the analysis, we highlight the key role of transparency and accountability mechanisms in mitigating bias, and we emphasise the urgent need for coordinated global regulatory efforts. The study also points out that existing enforcement mechanisms often fall short of ensuring effective implementation. In particular, the 'black box' problem persists in the design of algorithms, hindering the identification and remediation of biases, and the regulatory frameworks show significant inequalities in how different groups are treated. We conclude that, without a globally coordinated regulatory framework, existing efforts cannot effectively address the challenges posed by artificial intelligence, and by bias in particular. We therefore urge global cooperation, which we consider essential for responding adequately to technological development.

References

Amazon scrapped 'sexist AI' tool. BBC, October 10, 2018. https://www.bbc.com/news/technology-45809919

Angwin, Julia – Larson, Jeff – Mattu, Surya – Kirchner, Lauren: Machine Bias. ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Barbosa, Sandra – Félix, Sara: Algorithms and the GDPR: An analysis of article 22. Anuário da Proteção de Dados, 2021.

Binns, Reuben – Van Kleek, Max – Veale, Michael – Lyngs, Ulrik – Zhao, Jun – Shadbolt, Nigel: ’It’s Reducing a Human Being to a Percentage’: Perceptions of Justice in Algorithmic Decisions. arXiv (Cornell University), April 21, 2018. http://arxiv.org/abs/1801.10408

Blueprint for an AI Bill of Rights. WH.Gov, 2022–2023. (Available online at the time of manuscript submission; it has since been removed by the Trump administration.)

Bouchagiar, Georgios: Is Europe prepared for Risk Assessment Technologies in criminal justice? Lessons from the US experience. New Journal of European Criminal Law, Vol. 15., No. 1. (2024) https://doi.org/10.1177/20322844241228676

Brennan, Tim – Dieterich, William – Ehret, Beate: Evaluating the Predictive Validity of the Compas Risk and Needs Assessment System. Criminal Justice and Behavior, Vol. 36., No. 1. (2008) https://doi.org/10.1177/0093854808326545

Brożek, Bartosz – Furman, Michał – Jakubiec, Marek – Kucharzyk, Bartłomiej: The black box problem revisited. Real and imaginary challenges for automated legal decision making. Artificial Intelligence and Law, Vol. 32., No. 2. (2023) 427–428. https://doi.org/10.1007/s10506-023-09356-9

Bygrave, Lee A.: Article 22. In: Christopher Kuner – Lee A. Bygrave – Christopher Docksey – Laura Drechsler (eds.): The EU General Data Protection Regulation (GDPR): A Commentary. Oxford, Oxford University Press, 2020.

Chen, Zhisheng: Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, Vol. 10., No. 1. (2023) https://doi.org/10.1057/s41599-023-02079-x

Da Silveira, Julia Barroso – Lima, Ellen Alves: Racial Biases in AIs and Gemini’s Inability to Write Narratives About Black People. Emerging Media, Vol. 2., No. 2. (2024) https://doi.org/10.1177/27523543241277564

Dodge, Jonathan – Liao, Q. Vera – Zhang, Yunfeng – Bellamy, Rachel K. E. – Dugan, Casey: Explaining models: an empirical study of how explanations impact fairness judgment. Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Rey, CA, USA, March 17, 2019. (2019) https://doi.org/10.1145/3301275.3302310

Engel, Christoph – Linhardt, Lorenz – Schubert, Marcel: Code is law: how COMPAS affects the way the judiciary handles the risk of recidivism. Artificial Intelligence and Law, Vol. 33. (2024) https://doi.org/10.1007/s10506-024-09389-8

Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. WH.Gov, October 30, 2023. (Available online at the time of manuscript submission; it has since been removed by the Trump administration.)

Field, Matthew: From Black Nazis to female Popes and American Indian Vikings: How AI went ‘woke’. The Telegraph, February 23, 2024. https://tinyurl.com/2uke3bt9

Flores, Anthony W. – Bechtel, Kristin – Lowenkamp, Christopher T.: False Positives, False Negatives, and False Analyses: A Rejoinder to ‘Machine Bias: There’s Software Used across the Country to Predict Future Criminals and It’s Biased against Blacks.’ Federal Probation, Vol. 80., No. 2. (2016)

Fogliato, Riccardo – Chouldechova, Alexandra – G’Sell, Max: Fairness Evaluation in Presence of Biased Noisy Labels. International Conference on Artificial Intelligence and Statistics, June 3, 2020. (2020)

Gosztonyi, Gergely – Lendvai, Gergely: Deepfake és dezinformáció. Mit tehet a jog a mélyhamisítással készített álhírek ellen? Médiakutató, Vol. 25., No. 1. (2024) https://doi.org/10.55395/mk.2024.1.3

Governing AI for Humanity. UN Final Report, September 2024. 7–8.

Hacker, Philipp: Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, Vol. 55., No. 4. (2018) https://doi.org/10.54648/cola2018095

Hofeditz, Lennart – Mirbabaie, Milad – Luther, Audrey – Mauth, Riccarda – Rentemeister, Ina: Ethics Guidelines for Using AI-based Algorithms in Recruiting: Learnings from a Systematic Literature Review. Proceedings of the Annual Hawaii International Conference on System Sciences, January 1, 2022. (2022) https://doi.org/10.24251/hicss.2022.018

Hsu, Jeremy: Can AI hiring systems be made antiracist? Makers and users of AI-assisted recruiting software reexamine the tools’ development and how they’re used. IEEE Spectrum, Vol. 57., No. 9. (2020) 9. https://doi.org/10.1109/mspec.2020.9173891

Husovec, Martin: The DSA’s Scope Briefly Explained. SSRN, July 4, 2023. https://doi.org/10.2139/ssrn.4365029

Imran, Muhammad – Almusharraf, Norah: Google Gemini as a next generation AI educational tool: a review of emerging educational technology. Smart Learning Environments, Vol. 11., No. 1. (2024) https://doi.org/10.1186/s40561-024-00310-z

Initial Rescissions of Harmful Executive Orders and Actions. White House, January 20, 2025. https://tinyurl.com/4wuhbe8h

Johnson, Gabbrielle M.: Algorithmic bias: on the implicit biases of social technology. Synthese, Vol. 198., No. 10. (2020) https://doi.org/10.1007/s11229-020-02696-y

Kim, Jin-Young – Cho, Sung-Bae: An information theoretic approach to reducing algorithmic bias for machine learning. Neurocomputing, Vol. 500. (2022) https://doi.org/10.1016/j.neucom.2021.09.081

Kordzadeh, Nima – Ghasemaghaei, Maryam: Algorithmic bias: review, synthesis, and future research directions. European Journal of Information Systems, Vol. 31., No. 3. (2021) https://doi.org/10.1080/0960085x.2021.1927212

Lagioia, Francesca – Rovatti, Riccardo – Sartor, Giovanni: Algorithmic fairness through group parities? The case of COMPAS-SAPMOC. AI & Society, Vol. 38., No. 2. (2022) https://doi.org/10.1007/s00146-022-01441-y

Larson, Jeff – Mattu, Surya – Kirchner, Lauren – Angwin, Julia: How We Analyzed the COMPAS Recidivism Algorithm. ProPublica, May 23, 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

Lendvai, Gergely Ferenc: Sharenting as a regulatory paradox – a comprehensive overview of the conceptualization and regulation of sharenting. International Journal of Law Policy and the Family, Vol. 38., No. 1. (2024) https://doi.org/10.1093/lawfam/ebae013

Lendvai, Gergely Ferenc: Taming the Titans? – Digital Constitutionalism and the Digital Services Act. ESSACHESS, Vol. 17., No. 34. (2024) http://dx.doi.org/10.21409/0M3P-A614

Liesenfeld, Anna: The Legal Significance of Independent Research based on Article 40 DSA for the Management of Systemic Risks in the Digital Services Act. European Journal of Risk Regulation (2024) https://doi.org/10.1017/err.2024.61

MacCarthy, Mark: Standards of Fairness for Disparate Impact Assessment of Big Data Algorithms. Cumberland Law Review, Vol. 48., No. 1. (2017) https://doi.org/10.2139/ssrn.3154788

McDonald, Clare: AI and cyber skills worryingly lacking, say business leaders. ComputerWeekly, July 11, 2024. https://tinyurl.com/27kn7p96

Novelli, Claudio – Casolari, Federico – Rotolo, Antonino – Taddeo, Mariarosaria – Floridi, Luciano: Taking AI risks seriously: a new assessment model for the AI Act. AI & Society, Vol. 39., No. 5. (2023) https://doi.org/10.1007/s00146-023-01723-z

Novelli, Claudio – Gaur, Akriti – Floridi, Luciano: Two Futures of AI Regulation under the Trump Administration. (Preprint.) SSRN, March 30, 2025. http://dx.doi.org/10.2139/ssrn.5198926

O’Neil, Cathy: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London, Penguin, 2016. 19.

Pänke, Julian: The Fallout of the EU’s Normative Imperialism in the Eastern Neighborhood. Problems of Post-Communism, Vol. 62., No. 6. (2015) https://doi.org/10.1080/10758216.2015.1093773

Ratwani, Raj M. – Sutton, Karey – Galarraga, Jessica E.: Addressing AI Algorithmic Bias in Health Care. JAMA, Vol. 332., No. 13. (2024) https://doi.org/10.1001/jama.2024.13486

Removing Barriers to American Leadership in Artificial Intelligence. White House, January 23, 2025. https://tinyurl.com/2u6zf73x

Robert, Lionel P. – Pierce, Casey – Marquis, Liz – Kim, Sangmi – Alahmad, Rasha: Designing fair AI for managing employees in organizations: a review, critique, and design agenda. Human-Computer Interaction, Vol. 35., No. 5–6. (2020) https://doi.org/10.1080/07370024.2020.1735391

Robertson, Ali: Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis. The Verge, February 21, 2024. https://tinyurl.com/mfsvs2xc

Rudin, Cynthia – Wang, Caroline – Coker, Beau: The Age of Secrecy and Unfairness in Recidivism Prediction. Harvard Data Science Review, Vol. 2., No. 1. (2020) https://doi.org/10.1162/99608f92.6ed64b30

Saed, Ferial: The Uncertain Future of AI Regulation in a Second Trump Term. Stimson, March 13, 2025. https://tinyurl.com/bnuzdttt

Saeidnia, Hamid Reza: Welcome to the Gemini era: Google DeepMind and the information industry. Library Hi Tech News, December 26, 2023. https://doi.org/10.1108/lhtn-12-2023-0214

Shin, Donghee – Shin, Emily Y.: Data’s Impact on Algorithmic Bias. Computer, Vol. 56., No. 6. (2023) 90. https://doi.org/10.1109/mc.2023.3262909

Singh, Jay P.: Predictive Validity Performance Indicators in Violence Risk Assessment: A Methodological Primer. Behavioral Sciences & the Law, Vol. 31., No. 1. (2013) https://doi.org/10.1002/bsl.2052

Supervision of the designated very large online platforms and search engines under DSA. European Commission, December 17, 2024. https://tinyurl.com/376xzn37

Van Bekkum, Marvin: Using sensitive data to de-bias AI systems: Article 10(5) of the EU AI act. Computer Law & Security Review, Vol. 56. (2025) https://doi.org/10.1016/j.clsr.2025.106115

Veale, Michael – Van Kleek, Max – Binns, Reuben: Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, February 4, 2018. https://doi.org/10.31235/osf.io/8kvf4

Verma, Sahil – Rubin, Julia: Fairness definitions explained. 2018 IEEE/ACM International Workshop on Software Fairness (FairWare), Gothenburg, Sweden, May 29, 2018. https://doi.org/10.1145/3194770.3194776

Wang, Xukang – Wu, Ying Cheng – Ji, Xueliang – Fu, Hongpeng: Algorithmic discrimination: examining its types and regulatory measures with emphasis on US legal practices. Frontiers in Artificial Intelligence, Vol. 7. (2024) https://doi.org/10.3389/frai.2024.1320277

Published
2025-10-20
Section
Dissertationes