Analysis of the proposal for a Regulation on ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies
DOI: https://doi.org/10.12795/IETSCIENTIA.2020.i02.03

Keywords: Artificial intelligence; European Union; algorithms; discrimination; human supervision

Abstract
On 20 October 2020, the European Parliament adopted a resolution (2020/2012(INL)) with recommendations to the Commission on artificial intelligence, robotics and related technologies, which included a legislative proposal for a Regulation on the ethical principles for the development, deployment and use of these technologies. The content of this proposal clearly follows from the regulatory vision that the European Commission has maintained in documents such as the White Paper on Artificial Intelligence (COM(2020) 65 final) and the Ethics Guidelines for Trustworthy AI drawn up by the High-Level Expert Group on AI. Given this new legislative horizon, it is more necessary than ever to offer constructive criticism of the proposal, highlighting the possibility of reformulating its markedly soft-law character, despite its placement in a source of law of general and direct application such as a regulation, as well as the approach adopted for certain key principles such as human supervision and discrimination.
References
ALMADA, M., “Human Intervention in Automated Decision-Making: Toward the Construction of Contestable Systems.”, Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, 2019, pp. 2–11. https://doi.org/10.1145/3322640.3326699
BOIX, A., “Los Algoritmos Son Reglamentos: La Necesidad de Extender Las Garantías Propias de Las Normas Reglamentarias a Los Programas Empleados Por La Administración Para La Adopción de Decisiones.”, Revista de Derecho Público: Teoría y Método, vol. 1, 2020, pp. 223–270. https://doi.org/10.37417/RPD/vol_1_2020_33
COBBE, J. and SINGH, J., “Reviewable Automated Decision-Making.”, Computer Law & Security Review, vol. 39, 2020. https://doi.org/10.1016/j.clsr.2020.105475
COMISIÓN EUROPEA (CE), “Libro Blanco sobre la inteligencia artificial - un enfoque europeo orientado a la excelencia y la confianza”, Brussels, COM(2020) 65 final, 19 February 2020. Available at: https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_es.pdf
DANKS, D. and LONDON, A.J., “Algorithmic Bias in Autonomous Systems.”, Proceedings of the 26th International Joint Conference on Artificial Intelligence, AAAI Press, 2017, pp. 4691–4697. https://dl.acm.org/doi/10.5555/3171837.3171944
DEMETZOU, K., “Data Protection Impact Assessment: A Tool for Accountability and the Unclarified Concept of ‘High Risk’ in the General Data Protection Regulation.”, Computer Law & Security Review, vol. 35, no. 6, 2019. https://doi.org/10.1016/j.clsr.2019.105342
FISCHER, J.E., GREENHALGH, C., JIANG, W., RAMCHURN, S.D., WU, F. and RODDEN, T., “In‐the‐loop or on‐the‐loop? Interactional arrangements to support team coordination with a planning agent.”, Concurrency and Computation: Practice and Experience, 2017, pp. 1-16. https://doi.org/10.1002/cpe.4082
GRUPO DE EXPERTOS DE ALTO NIVEL SOBRE IA (HLEG-AI), “Directrices éticas para una IA fiable”, Brussels, 8 April 2019. Available at: https://op.europa.eu/es/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1
GRUPO DE TRABAJO SOBRE PROTECCIÓN DE DATOS DEL ARTÍCULO 29 (GT29), “Directrices Sobre Decisiones Individuales Automatizadas y Elaboración de Perfiles a Los Efectos Del Reglamento 2016/679.”, Brussels, 2018, pp. 1-37. Available at: https://www.aepd.es/sites/default/files/2019-12/wp251rev01-es.pdf
HILDEBRANDT, M., “The issue of bias: the framing powers of ML”, draft version, in M. Pelillo and T. Scantamburlo (eds.), Machine Learning and Society: Impact, Trust, Transparency, MIT Press, forthcoming 2020.
KISELEVA, A., “AI as a Medical Device: Is It Enough to Ensure Performance Transparency and Accountability?”, European Pharmaceutical Law Review, vol. 4, no. 1, 2020, pp. 5-16. https://doi.org/10.21552/eplr/2020/1/4
De LAAT, P.B., “Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?”, Philosophy & Technology, vol. 31, no. 4, 2018, pp. 525–541. https://doi.org/10.1007/s13347-017-0293-z
MANN, M. and MATZNER, T., “Challenging Algorithmic Profiling: The Limits of Data Protection and Anti-Discrimination in Responding to Emergent Discrimination.”, Big Data & Society, vol. 6, no. 2, 2019, pp. 1–11. https://doi.org/10.1177/2053951719895805
MITTELSTADT, B., “Principles Alone Cannot Guarantee Ethical AI.”, Nature Machine Intelligence, vol. 1, no. 11, 2019, pp. 501–507. https://doi.org/10.1038/s42256-019-0114-4
MITTELSTADT, B., “From Individual to Group Privacy in Big Data Analytics.”, Philosophy & Technology, vol. 30, no. 4, 2017, pp. 475–494. https://doi.org/10.1007/s13347-017-0253-7
De SIO, F. and Van den HOVEN, J., “Meaningful Human Control over Autonomous Systems: A Philosophical Account.”, Frontiers in Robotics and AI, vol. 5, 2018, pp. 1-15. https://doi.org/10.3389/frobt.2018.00015
VEALE, M. and BINNS, R., “Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data.”, Big Data & Society, vol. 4, no. 2, 2017, pp. 1-17. https://doi.org/10.1177/2053951717743530
WACHTER, S., “Affinity Profiling and Discrimination by Association in Online Behavioural Advertising.”, Berkeley Technology Law Journal, vol. 35, no. 2, 2020, forthcoming, pp. 1-74. https://dx.doi.org/10.2139/ssrn.3388639
ZUIDERVEEN BORGESIUS, F.J., “Strengthening Legal Protection against Discrimination by Algorithms and Artificial Intelligence.”, The International Journal of Human Rights, vol. 24, no. 10, 2020, pp. 1–22. https://doi.org/10.1080/13642987.2020.1743976
License
Authors published in this journal agree to the following terms:
- Authors retain copyright and grant the journal the right of first publication of their work, which is simultaneously licensed under a Creative Commons license that allows others to share the work provided that its authorship and first publication in IUS ET SCIENTIA are acknowledged.
- Authors may enter into separate, non-exclusive distribution agreements for the published version of the work (e.g. depositing it in an institutional repository or publishing it in a monographic volume), provided that first publication in this journal is acknowledged.
- Authors are permitted and encouraged to disseminate their work online (e.g. in institutional repositories or on their own website) before and during the submission process, as this can lead to productive exchanges and increase citation of the published work.