The chimera of algorithmic objectivity: difficulties of machine learning in developing a non-normative notion of health

Authors

Ariel Guersenzvaig
David Casacuberta

DOI:

https://doi.org/10.12795/IETSCIENTIA.2022.i01.03

Keywords:

machine learning, health, objectivity, normativity

Abstract

This essay explores whether machine learning, a subdiscipline of artificial intelligence, can contribute to a more objective approach to the formulation of concepts and descriptions, taking the definition of health as its test case. To this end, it examines the naturalist theory of health proposed by Christopher Boorse and contrasts it with a series of possibilities and problems that may arise when machine learning is applied alongside this theory. On the basis of this analysis, it concludes that machine learning, whether supervised or unsupervised, carries elements of normativity and subjectivity that make the neutral, objective development of concepts and descriptions unviable. This does not mean that machine learning is invalidated for the evaluative analysis of health; rather, it highlights and makes explicit the subjective elements present in it.
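
To make the claim concrete, a minimal Python sketch is given below. It is purely illustrative (the paper itself contains no code), and every feature, threshold, and data point in it is hypothetical. It shows where normative choices enter both forms of learning: in the supervised case the analyst stipulates the labelling rule that defines "unhealthy"; in the unsupervised case the analyst still chooses the features and the number of clusters, and must decide which cluster to read as "healthy".

```python
# Illustrative sketch only: hypothetical features, thresholds, and data,
# not clinical values and not the authors' method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Two hypothetical physiological features per person: [BMI, resting heart rate]
X = np.column_stack([rng.normal(27, 5, 200), rng.normal(72, 10, 200)])

# Supervised learning: the target is not discovered in the data; a person
# stipulates it. Here a contested BMI cut-off of 30 defines "unhealthy" (1),
# and the model simply reproduces that normative convention.
y = (X[:, 0] >= 30).astype(int)
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Unsupervised learning: no labels, yet the analyst still chooses the
# features, their scaling, and the number of clusters; calling one of the
# two clusters "healthy" afterwards is again an evaluative judgement.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

Changing the stipulated cut-off, rescaling a feature, or asking for three clusters instead of two partitions the same people differently; this is the sense in which neither variant delivers a concept of health that is neutral with respect to prior evaluative choices.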

References

Adamson, A. S., & Smith, A. (2018). Machine learning and health care disparities in dermatology. JAMA Dermatology, 154(11), 1247-1248.

Anderson, C. (2008). The end of theory: The data deluge makes the scientific method obsolete. Wired Magazine. Retrieved November 15, 2021, from https://www.wired.com/2008/06/pb-theory/

Beery, T. A. (1995). Gender bias in the diagnosis and treatment of coronary artery disease. Heart & Lung, 24(6), 427-435.

Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Cambridge: Polity.

Blakely, J. (2020). We built reality: How social science infiltrated culture, politics, and power. Oxford: Oxford University Press.

Boden, M. (2016). AI: Its nature and future. Oxford: Oxford University Press.

Boorse, C. (1975). On the distinction between disease and illness. Philosophy and Public Affairs, 5(1), 49-68.

Boorse, C. (1977). Health as a theoretical concept. Philosophy of Science, 44(4), 542-573.

Boorse, C. (1997). A rebuttal on health. In J. M. Humber & R. F. Almeder (Eds.), What is disease? (pp. 1-143). Totowa: Humana Press.

Boorse, C. (2014). A second rebuttal on health. Journal of Medicine and Philosophy, 39, 683-724.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.

Bowker, G. C., & Star, S. L. (2000). Sorting things out: Classification and its consequences. Cambridge: MIT Press.

Carnap, R. (1998). Der logische Aufbau der Welt (Vol. 514). Hamburg: Felix Meiner Verlag.

Casacuberta, D., & Vallverdú, J. (2014). E-science and the data deluge. Philosophical Psychology, 27(1), 126-140.

Domingos, P. (2021). We must stop militant liberals from politicizing artificial intelligence. The Spectator. Retrieved November 15, 2021, from https://spectator.us/militant-liberals-politicizing-artificial-intelligence/

Ereshefsky, M. (2009). Defining “health” and “disease”. Studies in History and Philosophy of Biological and Biomedical Sciences, 40(3), 221-227.

Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115-118.

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.

Feyerabend, P. (1993). Against method. London: Verso.

Fink, C., Blum, A., Buhl, T., Mitteldorf, C., Hofmann-Wellenhof, R., Deinlein, T., Stolz, W., Trennheuser, L., Cussigh, C., Deltgen, D., Winkler, J. K., Toberer, F., Enk, A., Rosenberger, A., & Haenssle, H. A. (2020). Diagnostic performance of a deep learning convolutional neural network in the differentiation of combined naevi and melanomas. Journal of the European Academy of Dermatology and Venereology, 34(6), 1355-1361.

Gammelgaard, A. (2000). Evolutionary biology and the concept of disease. Medicine, Health Care and Philosophy, 3, 109-116.

Gomez-Uribe, C. A., & Hunt, N. (2016). The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems, 6(4), 1-19.

Harcourt, B. (2001). Illusion of order: The false promise of broken windows policing. Cambridge: Harvard University Press.

Heaven, W. D. (2020). La IA de plegamiento de proteínas de Google resuelve un histórico desafío de la biología. MIT Technology Review. Retrieved January 20, 2021, from https://www.technologyreview.es/s/12935/la-ia-de-plegamiento-de-proteinas-de-google-resuelve-un-historico-desafio-de-la-biologia

Hinton, E. (2016). From the war on poverty to the war on crime: The making of mass incarceration in America. Cambridge: Harvard University Press.

Huntington, A., & Gilmour, J. A. (2005). A life shaped by pain: Women and endometriosis. Journal of Clinical Nursing, 14(9), 1124-1132.

Kingma, E. (2007). What is it to be healthy? Analysis, 67(2), 128-133.

Kingma, E. (2014). Naturalism about health and disease: Adding nuance for progress. The Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine, 39(6), 590-608.

Kosinski, M. (2021). Facial recognition technology can expose political orientation from naturalistic facial images. Scientific Reports, 11(1), 100.

Kovács, J. (1998). The concept of health and disease. Medicine, Health Care and Philosophy, 1, 31-39.

Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial intelligence we can trust. New York: Vintage.

Millikan, R. (1984). Language, thought, and other biological categories. Cambridge: MIT Press.

Murphy, D. (2015). Concepts of disease and health. In The Stanford Encyclopedia of Philosophy (Spring 2015 ed.). Retrieved April 15, 2016, from http://plato.stanford.edu/archives/spr2015/entries/health-disease/

Nordenfelt, L. (2007). The concepts of health and illness revisited. Medicine, Health Care and Philosophy, 10, 5-10.

Nordenfelt, L. (2016). A defence of a holistic concept of health. In É. Giroux (Ed.), Naturalism in the philosophy of health. History, philosophy and theory of the life sciences (pp. 209-225). Dordrecht: Springer.

OECD. (2019). The heavy burden of obesity: The economics of prevention. Paris: OECD Publishing.

Radder, H. (2009). Why technologies are inherently normative. In D. Gabbay, P. Thagard, & J. Woods (Eds.), Handbook of the philosophy of science (pp. 887-921). Amsterdam: Elsevier.

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215.

SEEDO. (n.d.). IMC. Retrieved January 19, 2021, from https://www.seedo.es/index.php/imc

Van Fraassen, B. C. (1980). The scientific image. Oxford: Oxford University Press.

Vorvick, L. (2019). Pulse. U.S. Department of Health and Human Services. Retrieved February 10, 2021, from https://medlineplus.gov/ency/article/003399.htm

Wang, Y., & Kosinski, M. (2018). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology, 114(2), 246-257.

Published

2022-06-27

How to Cite

Guersenzvaig, A., & Casacuberta, D. (2022). La quimera de la objetividad algorítmica: dificultades del aprendizaje automático en el desarrollo de una noción no normativa de salud. IUS ET SCIENTIA, 8(1), 35–56. https://doi.org/10.12795/IETSCIENTIA.2022.i01.03