Education, digitalization and the role of rights: A critical reflection*

Educación, digitalización y el papel de los derechos: una reflexión crítica

Claudia Severi

University of Modena and Reggio Emilia (Unimore)

San Carlo College Foundation, Modena

Almo Collegio Borromeo, Pavia

Scientific and organizational coordinator of CRID – Interdepartmental Research Centre on Discrimination and Vulnerability, Unimore

claudia.severi@unimore.it 0009-0006-3517-6841

Recibido: 11 de mayo de 2025 | Aceptado: 16 de junio de 2025

IUS ET SCIENTIA • 2025 • ISSN 2444-8478

Vol. 11 • Nº 1 • pp. 188-210

https://dx.doi.org/10.12795/IESTSCIENTIA.2025.i01.09

ABSTRACT

KEYWORDS

This article investigates the complex relationship between digital transformation and education, focusing on the shift from early internet structures to today’s AI-driven, permanently connected infosphere. It critically explores the evolution of digital education policies, particularly within the European Union, and highlights the tensions between technological opportunities and educational challenges. While digital tools offer promising avenues for personalization and inclusion, they also amplify concerns over inequality, digital divides, and the erosion of human-centered pedagogical practices. The analysis underscores how AI, if unregulated, may reinforce biases and undermine rights, especially those of minors. Drawing on frameworks such as the EU Digital Education Action Plan and UNESCO’s recommendations, as well as the UN Convention on the Rights of the Child, the article argues for a rights-based, ethically grounded, and human-centred digital education. Only through such an approach will education be able to fulfil its democratic role and ensure that technology supports, rather than substitutes for, human development and agency.

Education

Technology

Artificial Intelligence

Human rights

RESUMEN

PALABRAS CLAVE

Este artículo investiga la compleja relación entre la transformación digital y la educación, centrándose en la transición desde las primeras estructuras de Internet hasta la infosfera actual, impulsada por la inteligencia artificial y permanentemente conectada. Explora críticamente la evolución de las políticas de educación digital, particularmente dentro de la Unión Europea, y pone de relieve las tensiones entre las oportunidades tecnológicas y los problemas educativos. Si bien las herramientas digitales ofrecen vías prometedoras para la personalización y la inclusión, también amplifican las preocupaciones sobre la desigualdad, las brechas digitales y la erosión de las prácticas pedagógicas centradas en el ser humano. El análisis subraya cómo la inteligencia artificial, si no está regulada, puede reforzar los sesgos y socavar los derechos, especialmente los de los menores. Basándose en marcos como el Plan de Acción de Educación Digital de la UE y las recomendaciones de la UNESCO, así como en la Convención sobre los Derechos del Niño de las Naciones Unidas, el artículo aboga por una educación digital basada en los derechos, con fundamentos éticos y centrada en la persona. Solo a través de este enfoque la educación podrá cumplir su función democrática y garantizar que la tecnología apoye, en lugar de sustituir, el desarrollo humano y la capacidad de actuar.

Educación

Tecnología

Inteligencia artificial

Derechos humanos

I. Introduction

The Standing Conference of Ministers of Education of the Council of Europe’s member states has designated 2025 the “European Year of Digital Citizenship Education”[1]. This initiative responds to the growing urgency for greater investment and strategic action in the field of digital citizenship education, particularly in light of the complex challenges and opportunities brought about –or significantly intensified– by digital technologies and environments. With this objective, the Council of Europe’s framework will focus on developing four main principles: critical thinking and digital literacy; respect for digital rights and freedoms; active participation and civic engagement; and safety and well-being in the digital world.

Far from being only a symbolic gesture, the choice of this thematic year represents a concrete invitation to act. Indeed, the Council of Europe is collaborating with Ministries of Education, schools, NGOs, private sector partners, and civil society organizations at local, national, and international levels. A concrete output of this cooperation is a strategic platform where key stakeholders from the public, private, and civil society sectors can come together in order to define common objectives, share good practices, assess the progress achieved, and collectively outline a roadmap for the future of digital citizenship education.

Building on these premises, the Council of Europe’s framework will focus on three main actions: firstly, “Awareness-raising campaigns” across Europe and beyond, using social media, public events, and digital platforms to engage people in the digital citizenship debate; secondly, “Resource development” for educators, policy makers, and other stakeholders, with the aim of supporting the inclusion of Digital Citizenship Education in curricula and educational practices; finally, “Teaching methodologies”, such as interactive online courses, webinars, and workshops, designed to enhance the teaching and learning of digital citizenship.

“Learn, Connect, Engage, Thrive Together!” is the motto, which makes clear that education is at the centre of this project.

Within this context, the article will explore the evolution of the internet and the web, from their origins to the emergence of generative Artificial Intelligence (§1). Secondly, it will focus on the direction taken by the European Union regarding digital education and the use of technology in learning, with particular attention to the “Digital Education Action Plan 2021-2027” (§2). Thirdly, the European framework will be critically assessed, considering not only the opportunities offered by digital tools, but also the risks associated with their uncritical use (§3). Fourthly, attention will be given to the role of Artificial Intelligence and its impact on education, examining how its advent and employment are reshaping educational practices and how it can be effectively governed (§4). Finally, the discussion will be framed within a rights-based approach, with a specific focus on human rights (§5).

II. From the “big library” to Artificial Intelligence: how the internet has been changing and why it matters

As is well known, and frequently reiterated in the literature, digitalization is a process that permeates and is profoundly transforming the world and the way we live. The process of digitalization has itself undergone significant and rapid changes, following a dynamic that can be described as a “great acceleration” (Wajcman, 2020). Indeed, the information society, or “infosphere” (Floridi, 2014; Floridi, 2020), in which society as a whole is immersed and whose characteristics increasingly permeate every sphere of life, is no longer the same digital realm that emerged in the early 1990s with the advent of the internet.

It is possible to imagine the early internet as a “big library”, in which finding information that, until then, would have required lengthy research and the consultation of a large number of paper documents became much easier. This change was a revolution in terms of convenience, accessibility, time savings, and energy expenditure.

After this “phase zero”, the situation changed profoundly with the rise of social networks, opening a stage that could be described as a “great square”: a digital agorà (Gramigna & Poletti, 2019) in which everyone is enabled to share their lives, exposing themselves through images, videos, and public messages. The arrival of the great square marked a deep shift away from the still closed and protected space of the great library, opening users up to public exposure and, consequently, to the criticism and risks of the web.

The matter took yet another turn with the advent of the smartphone, which effectively inaugurated a condition that can be described as one of “permanent connection” (Gui, 2019): thanks to such digital devices, connectivity is always – literally – “at our fingertips”, creating situations in which the boundary between real life and virtual life becomes increasingly blurred, increasingly “onlife” (to use the now well-known expression coined by Luciano Floridi, 2015).

With the recent spread of generative Artificial Intelligence tools, the landscape has become even more complex, once again challenging various paradigms related to law, ethics, and human agency (Tamburrini, 2020; Crawford, 2021; Cerrina Feroni et al., 2022; Salardi, 2023; Llano Alonso, 2024; Galletti & Zipoli Caiani, 2024).

In such a context, the use of digital technologies potentially represents a valuable aid for improving various aspects of our lives, as well as being by now an indispensable tool for actively participating in democratic life, in line with the so-called “digital citizenship” paradigm (Scagliarini, 2024; Gaetano, 2025; Illica Magrini, 2025). However, every benefit brings with it complex issues that must be addressed with both care and rigor, since the output of any tool often depends on how it is used, all the more so when certain tools are employed in specific areas of life and society, with particularly vulnerable individuals as the primary recipients. This is the case with technologies used in educational settings, for the instruction and training of minors: an issue that is crucial and absolutely central not only for the present but also for the future; not just for the individuals concerned, but for society as a whole. In this regard, as has been pointed out:

In this context, the role of education has also changed: it is evident that democracy struggles to generate autonomy and social inclusion in this fundamental domain, particularly in light of the impact of digital divides, which negatively affect the actual exercise of rights by producing social inequality, if not outright forms of exclusion.

As a result, education is no longer a reliable vehicle for social mobility and now faces new and complex challenges, among which are those directly stemming from the impact of new technologies (Casadei, 2024: 157, my translation).

It is therefore necessary to reflect profoundly on educational technologies and, more broadly, on the role of education, assessing their risks and benefits. Such critical reflection is needed in order to fulfil the primary function that educational institutions are called to perform: guaranteeing a true right to education, one of high quality, aligned with the paradigms of constitutional and democratic societies as well as with human rights.

III. Technology in education: the direction of the European Union

Concerning the issue of technology used in education, the European Union launched the Digital Education Action Plan (2021-2027) in 2020, an initiative that aims to define a common vision for high-quality, inclusive, and accessible digital education across Europe, while seeking to support the adaptation of education and training systems in the Member States to the digital age. However, as Barbara Giovanna Bello (2023) has pointed out, although the plan aims to enable citizens to take part in social, economic, and civic life, it focuses only on technical skills.

In this regard, the European Parliament expressed regret over the «limited ambitions of the new Digital Education Action Plan in promoting digital citizenship», while also hoping that digital literacy will include «traditional, humanistic and soft skills, such as social skills, empathy, problem-solving and creativity» (European Parliament, 2021, par. 29).

Indeed, the European Commission states that, if used in a conscious, equitable, and effective way by educators, technology can be a valuable tool in achieving the goal of providing quality education to all students. In fact, it is capable of «facilitating more personalized, flexible, and learner-centered education across all phases and levels of education and training». Moreover, «technology can be a powerful and engaging tool for collaborative and creative learning» (European Commission, 2020: 1).

Other benefits associated with the use of digital tools in education include helping both learners and educators access digital content, information, and resources that would have been inaccessible just a few years ago, as well as creating and sharing new ones.

Another important aspect is the possibility of conducting lessons outside the physical classroom, offering greater freedom and adaptability to individual needs. In this sense, online, distance, and blended learning are concrete examples of how technology can be used to support learning processes[2].

In order to develop an education system able to integrate digital tools, the European document presents ten guiding principles to be followed in building a pedagogical system that fits the digital era. The most relevant of these principles are presented below.

Firstly, it is necessary to build a high-quality, inclusive digital education, grounded in ethical principles and respectful of personal data protection. This must become a strategic priority for all institutions and organizations involved in education and training.

In this regard, Regulation (EU) 2016/679 – better known as the GDPR, the General Data Protection Regulation – also recognizes minors as «vulnerable natural persons» (Recital 75 of Regulation (EU) 2016/679, “GDPR”), and for this reason they are entitled to «specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards involved as well as their rights in relation to the processing of personal data» (Recital 38 of the GDPR).

Secondly, the transformation of education in response to the digital age is a collective responsibility, not an individual concern. For this reason, society must engage as a whole, since achieving this goal requires strengthened partnerships and ongoing dialogue among educators, researchers, local governments, the private sector, and public authorities. Indeed, according to the European Commission, in order to ensure that digital education and training are truly high-quality, accessible, and inclusive, it is essential to actively involve parents, companies, civil society, and learners themselves – including younger students – in the process. These efforts must be grounded in strong data and evidence, enabling society to track progress and gain a deeper understanding of both the challenges and the potential of digital transformation in the field of education.

Thirdly, the document recognizes education as a human and fundamental right, and Europe therefore calls for appropriate investment in connectivity, equipment, and organizational capacity and skills, in order to truly fulfil that right.

Fourthly, digital competences must be recognized as fundamental skills for all educators and training professionals. Indeed, educators need both the confidence and the capabilities to use technology in effective and creative ways, also in order to engage and inspire learners, facilitate the development of their digital skills, and ensure that digital tools and platforms are inclusive and accessible to all. This is the reason why teachers and trainers should be supported through continuous, needs-based professional learning opportunities, tailored to their specific disciplines. Moreover, according to the European Commission, digital pedagogy and innovation in digital education should be embedded in all initial teacher education programs and actively promoted in the training of youth workers.

Finally, digital skills (at both the basic and advanced level) are necessary to navigate our societies and digital environments safely. For this reason, the European document calls on policymakers to provide investment, support, and priority in order to achieve the aim of digital education for all (European Commission, 2020).

IV. Challenging profiles

The European Digital Education Action Plan underscores the transformative potential of digitalization in education, highlighting many positive impacts: improved access to learning resources, greater flexibility in teaching and learning methods, and the opportunity to develop digital skills, which are essential for future employment and, above all, for active citizenship.

However, while the benefits are significant, it is equally important to recognize the limitations and challenges that digital transformation introduces when integrated into educational systems.

Firstly, a high-quality education system must necessarily manage the issue of digital divides, which can be defined as

[…] the gap between those who have affordable access, skills, and support to effectively engage online and those who do not. As technology constantly evolves, the digital divide prevents equal participation and opportunity in all parts of life, disproportionately affecting people of color, Indigenous peoples, households with low incomes, people with disabilities, people in rural areas, and older adults (NDIA, 2024).

Therefore, the digital divide must be understood not only in terms of access to technology, but also in terms of the knowledge and skills required to navigate the (digital) world.

The data presented by UNESCO (2023) in the Global Education Monitoring Report “Technology in education: A tool on whose terms?” indicate, for instance, that in Congo the percentage of young people aged 3 to 17 who have an internet connection at home is close to 0%, among both the poorest and the wealthiest segments of the population. Not only is internet access unequally distributed, but so are so-called “digital skills”: the UNESCO study notes that in the 27 European Union countries, 54% of adults had at least basic digital skills in 2021, while in Brazil 31% of adults possessed at least these skills, but the level was twice as high in urban areas as in rural ones, three times higher among those in the labour force than among those outside it, and nine times higher in the highest socioeconomic group than in the lowest.

Furthermore, according to the 2018 PISA (Programme for International Student Assessment) report, 5% of 15-year-olds with the highest reading proficiency were at risk of being deceived by a typical phishing email, while this percentage rose to 24% among their peers with the weakest reading skills.

Within the European context, according to Eurostat data, in 2023, 45% of European citizens lacked basic digital skills, posing a significant risk of exclusion and marginalization, particularly in relation to the exercise of fundamental rights[3].

Within this first area of concern, a further issue arises that introduces an entirely different set of questions related to gender equality: 49% of women lack basic digital skills, compared to 43% of men. This gender gap becomes more pronounced when it comes to specific skills. In 50 countries, 6.5% of men and 3.2% of women reported being able to write a computer program. In Belgium, Hungary, and Switzerland, no more than 2 women for every 10 men had programming skills (UNESCO 2023).

The issue of digital divides, upon closer inspection, undoubtedly concerns the right to education, but even more so participation in democratic life. In this regard, it has been emphasized that before even discussing

the role that the people should be assigned in political decision-making, that is, whether they should express a non-binding but relevant opinion or whether the popular will should have a strictly deliberative character, public spaces must be created that everyone can access through simple and transparent procedures, in which the discussion topics are explained and made understandable to most. This problem has not yet been resolved (Salardi, 2020: 405, my translation).

Therefore, with regard to education, it is necessary first of all to eliminate existing divides. Yet, unfortunately, as Casadei (2022: 28) notes,

most of the available consultation spaces are accessed mostly by an elite public – and this is a central issue in the discussion of possible responses to the crisis of the parliamentary democracy system – both because there is a lack of equitable distribution of access opportunities, and because there is an equally lacking distribution of the deliberative skills of participants (my translation).

Secondly, the type of platform as well as the pedagogical and educational model adopted has a direct impact on the inclusion or exclusion of individuals within the learning process. It is therefore essential to take into account the diverse needs and characteristics of all learners. For instance, people with disabilities require specific and fully accessible tools in order to benefit from the opportunities offered by digital education (Carruba, 2014).

A third – crucial – issue, which is however not highlighted in the European documents, concerns the intensive use of digital technologies, particularly among children in primary and lower secondary schools. Indeed, on the one hand, several studies emphasize the smartphone as a valuable aid to learning (Pachler et al., 2010; Bachmair, 2015); on the other hand, new empirical evidence has revealed significant side effects associated with the early use of digital devices and access to new technologies, particularly regarding smartphone use (Dempsey et al., 2019; Gui et al., 2020; Gerosa & Gui, 2023). In this regard, at the institutional level in Italy, a fact-finding investigation by the Senate published troubling conclusions about the intensive use of digital media by children (Senato della Repubblica, 2021).

It has been shown that this use produces extremely negative effects on neurocognitive development, also reducing academic learning capacities.

The first scientific study in Italy to support this claim was conducted by Tiziano Gerosa (researcher at the University of Applied Sciences and Arts of Southern Switzerland, SUPSI) and Marco Gui (Director of the Centre for Digital Wellbeing at the University of Milano-Bicocca, Department of Sociology and Social Research). The research team analysed a sample of 1,672 lower secondary school students aged between 10 and 14, comparing their INVALSI (National Institute for the Evaluation of the Education System) test results based on whether they had received their first smartphone before the age of 12 or in later years (i.e., at 12, 13, or 14 years old). The study found – among other things – that those who had early access to smartphones or made intensive use of digital media (more than two hours per day across television and video games) experienced a subsequent negative impact on their learning outcomes in Italian language (Gui et al., 2020)[4].

These findings should be interpreted beyond the notion of mere “academic performance”: the negative consequences also affect the emotional and psychological well-being of young people.

These are data that align with findings reported in studies such as those by Jonathan Haidt (2024), who highlights a possible (according to the author, nearly certain) correlation between the rise of social media and the increase in anxiety, depression, and stress among the generation born in the era of social networks[5] (AAP, 2016; Australian Government, Department of Health, 2016; IAP, 2020; CPS, 2017).

It is a relationship of “lights and shadows” (Bello, 2023: 103-104): on the one hand, the lights are the opportunity to fully exercise one’s rights and a true citizenship, an opportunity offered only by the digital realm; on the other hand, there are the shadows of the risks and perils found in the digital ecosystem. First of all, hate speech and cyberbullying[6] (Bello & Scudieri, 2022); but also so-called “revenge porn” (Barone, 2025; Di Tano, 2024), and hyperconnection, which can lead to the phenomenon known in the literature as “hikikomori” (Verza, 2016; Rossi, 2025), not to mention the inappropriate acquisition of a minor’s digital identity and the reduction of opportunities in adulthood due to profiling, an effect of the large volumes of data that individuals, often unknowingly, provide to the web daily (Sarra, 2022).

In light of these challenges, a deep digital education «capable of facilitating the development of a critical sense and the ability to make judgments on the effects that the intermediation of technologies produces on personal freedoms and rights» (Martoni, 2025: 78) becomes necessary in order to exercise full citizenship, a citizenship that becomes global, thanks in part to the elimination of borders brought about by the digitalization process.

In this regard, it has been observed that the concept of “citizenship” itself «evokes a highly complex and multifaceted semantic universe» (Casadei, 2021: 109, my translation).

With regard to the concept of citizenship, several interpretative frameworks have gradually become established over time (Casadei, 2021: 109-124). The notion of so-called “global citizenship” represents a new interpretation that highlights the tensions and inherent limitations of the concept itself, inevitably leading to a more complex understanding.

Such complexity necessarily calls for new skills to comprehend, study, and explain it. For this reason, “education for citizenship” becomes an interdisciplinary and multidisciplinary effort, within which the broadest understanding of citizenship – namely, global citizenship – must be situated (Casadei et al., 2025). This form of citizenship is inextricably linked to digital education, understood in light of these considerations in its broadest, most complex, and holistic sense.

Therefore, while technical skills certainly need to be strengthened, equally important are skills related to collaborative work, critical thinking, and curiosity. It is therefore necessary to adopt a holistic and interdisciplinary approach to educating both about and through technology.

V. The challenge of Artificial Intelligence

The rapid spread of generative AI further complicates the picture of technology in education. Not by chance, this period has been defined as the “Fourth Industrial Revolution” (Schwab, 2016), meaning the fusion and interrelation of these technologies with the physical, digital, and biological realms[7].

In this period of deep change, it is worth recalling the metaphor of “divorce” introduced by Luciano Floridi. The scholar argues that with the advent of artificial intelligence, we have witnessed a growing separation between action and intelligence. He suggests that it is now possible to perform tasks efficiently and successfully without necessarily being intelligent, at least not in the way we traditionally understand human intelligence. In this sense, the term “divorce” becomes fitting: an apparent disconnection between acting and thinking, or between doing and understanding, which increasingly takes new and evolving forms in the age of AI (Floridi, 2022).

In the context of education and learning, this separation – this “divorce” between action and intelligence – must be carefully considered. It invites academics (and not only them) to reflect critically on how teaching and learning practices should evolve in response to the recent transformations taking place in society.

From a regulatory perspective, the European AI Act classifies artificial intelligence systems used in education as “high-risk”, as the regulation states:

The deployment of AI systems in education is important for promoting high-quality digital education and training, and for enabling all learners and educators to acquire and share the necessary digital skills and competencies. […]

However, AI systems used in educational or vocational training institutions […] should be classified as high-risk AI systems, as they can determine a person’s educational and professional path, thereby affecting their ability to secure a livelihood. If poorly designed or implemented, these systems may be particularly intrusive, violating the right to education and training, as well as the right to non-discrimination, and perpetuating historical patterns of inequality […] (European Parliament, 2024, recital n. 56).

This implies that AI applications in the educational sector must meet meticulous compliance standards. Such a classification underscores the profound influence AI can exert on students’ learning paths and future professional opportunities.

Indeed, it has been suggested that, when implemented in educational contexts, AI-powered technologies can support a wide range of learners, from children to lifelong learners, including individuals with special needs. Data analytics may provide valuable insights into learning processes, while voice assistants and adaptive tutoring systems have the potential to promote a more inclusive educational environment.

Nonetheless, ineffective pedagogical practices risk being automated, while existing biases and discriminatory methods, alongside misinformation and disinformation, may be reinforced and widely diffused. Such developments could disempower both educators and learners, threatening fundamental human rights, particularly the right to quality education.

Furthermore, the essential dimensions of education – especially the role and responsibilities of teachers – might be devalued, since there is a danger that these systems will erode trust in teachers’ ability to teach effectively within digital and AI-enhanced learning environments.

Another danger is that placing excessive emphasis on easily quantifiable skills risks marginalizing vital humanistic values such as collaboration, critical thinking, ethics, and democratic engagement, elements that are more difficult to assess but crucial to a well-rounded education (Council of Europe Standing Conference of Ministers of Education, 2023).

In this regard, as is well known, Large Language Models (LLMs), such as ChatGPT and its more recent successors, represent one of the most advanced frontiers in the field of Artificial Intelligence (AI). These systems, trained on vast collections of textual data, are able to produce natural language (generative AI) with a level of fluency and coherence increasingly similar to human expression. In other words, they are capable of generating content that is not only grammatically correct but also contextually appropriate and stylistically consistent. What makes them particularly effective is their ability to sustain articulate dialogue, adjusting tone and register according to the situation.

Thanks to their capacity to process and rework enormous volumes of information, LLMs are radically transforming the way people interact with digital technologies, enabling applications ranging from automated writing to image generation and even software programming.

However, their increasingly widespread and often unregulated use raises important questions: the responses they generate tend to reflect a “median” or dominant view of knowledge, thus shaping how information is presented and perceived.

In certain contexts, this tendency toward standardisation can be beneficial.

In education, for instance, generative AI can provide simplified, standard explanations of complex ideas, offering useful entry points for learners approaching scientific or literary subjects for the first time. Yet, this same approach may overlook alternative learning strategies or original perspectives that only a human teacher might offer, limiting the opportunity to nurture critical and creative thinking.

Generative AI can thus respond quickly and efficiently to frequently asked questions, drawing on widespread consumer preferences.

While this can streamline communication, it may fall short in situations that require empathy or individualised support, particularly in sensitive contexts that demand a more personalised approach (Caligiore, 2024).

Across all these domains, the value of a model that reflects the “prevailing tendency” is evident. The real challenge, however, lies in balancing this standardisation with the need to highlight exceptions, context-specific nuances, and unconventional perspectives. The use of such technologies risks leading to a dangerous outcome: the flattening of individual differences and the loss of unique viewpoints (Campione et al., 2024).

It is crucial to remember that every individual carries a distinctive worldview, shaped by personal experiences, cultural backgrounds, and different life paths. In this regard, history teaches us that some of the most significant innovations emerged exactly from ideas that deviated from dominant thinking. Differences, in fact, do not merely enrich dialogue: they are a vital resource for progress and advancement (Caligiore, 2022).

Caligiore (2024) provides a meaningful analogy drawn from nature, where difference is often the source of energy, transformation, and vitality. Electricity, for instance, exists thanks to a potential difference between two points, since it is this imbalance that generates the flow of current. Similarly, the flight of an airplane relies on the difference in air pressure between the upper and lower surfaces of the wing, which creates the necessary lift. Biodiversity, too, shows how the variety of species and their interactions ensures the equilibrium and survival of ecosystems.

In each of these examples, it is precisely diversity that enables motion, evolution, and development. Likewise, in our social and intellectual environments, it is diversity that drives creativity and the discovery of new knowledge. Preserving individual perspectives means valuing heterogeneity and recognising that from it can arise new energies, solutions, and extraordinary ideas.

It is therefore essential to foster open and inclusive dialogue, one that gives voice to those often marginalised and supports a pluralistic approach to knowledge production and dissemination.

According to Caligiore (2024) only in this way can we avoid reducing the world to a single, simplified narrative, and instead preserve the richness and complexity of the differences that define our shared reality.

In response to the growing integration of artificial intelligence (AI) into education, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence (2021), asking Member States to adopt coordinated and ethically based strategies that promote both access and critical awareness.

Central to this vision is the development of AI literacy for all, through partnerships with international organisations, educational institutions, and civil society. These efforts aim not only to empower individuals, but also to reduce digital inequality, particularly in underrepresented or disadvantaged communities.

AI education must be built on fundamental competences, including digital literacy, coding, media and information literacy, and numeracy, but also on human skills such as creativity, critical thinking, communication, emotional intelligence, and ethical reasoning. The integration of these skills is particularly crucial in contexts where educational gaps are more pronounced.

AI systems used in educational environments must be designed and deployed responsibly. While AI can support personalised learning and provide new pedagogical tools, its implementation must respect the cognitive, relational, and social dimensions of education. AI must not diminish learners’ autonomy, gather sensitive data, or promote standardised and reductive forms of assessment, and strict safeguards must be put in place to guarantee compliance with data protection standards and to prevent misuse, especially of children’s data.

UNESCO (2021) highlights the importance of inclusive participation in AI education. This includes actively supporting the involvement of girls and women, minorities, persons with disabilities, and other groups facing structural barriers to digital access, as well as ensuring that educational content is accessible, multilingual, and attentive to the broad range of different cultures.

Furthermore, the recommendation calls for sustained investment in interdisciplinary and ethical AI research, encouraging collaboration across fields such as education, law, philosophy, sociology, and the humanities. Researchers must be trained in ethical methods, especially in relation to data handling and the broader societal implications of their work, and both public and private actors are encouraged to facilitate access to data for scientific use, especially in low- and middle-income countries.

In the broader social context, AI must serve to enhance democratic values, including freedom of expression and access to diverse, trustworthy information. States are therefore called upon to regulate automated content moderation and recommender systems, ensuring transparency, appeal mechanisms, and the protection of pluralism in online environments. Media and digital literacy are essential tools to help citizens navigate the ethical and epistemological challenges posed by AI.

In sum, these recommendations point toward a vision of AI that is not only technologically advanced, but also deeply human-centred, inclusive, and ethically conscious, placing education, equity, and democratic accountability at the heart of digital transformation.

UNESCO’s Beijing Consensus on Artificial Intelligence and Education (2019) echoes this vision, recommending that such a human-centred approach also be implemented in the field of education. It affirms that AI should enhance human capabilities in various aspects of life and calls for an inclusive model that actively ensures access to technology for those who are typically marginalised.

In the face of the risk that the use of AI systems may lead to a compression of differences and the loss of unique perspectives that are essential for stimulating creativity and empowering the discovery of new knowledge, it is crucial that digital education and education through digital tools adopt a human-centred and pluralistic approach.

This same approach should be extended to the legislative sphere, which must be rooted in the respect for fundamental human rights and the principles they embody.

In this regard, the idea of a “Reserve of Humanity” as described by Fernando H. Llano Alonso (2024) can and should be extended to the field of education.

This reserve of humanity is a concept comparable to the principle of statutory reservation, which is a legal institution that serves to protect citizens’ rights, establishing that certain matters may be regulated only by law or by an act having the force of law. Similarly, the reserve of humanity calls for the protection of rights, needs, and fundamental freedoms from the undesired intrusion of high-risk algorithmic decision-making[8]. In the field of education, it shields learners from standardised and mechanical forms of teaching and assessment, ensuring that human judgment, empathy, and contextual understanding remain central to the educational process.

Along this same line of reflection, the UNESCO Guidance for Generative AI in Education and Research (2023) states, on its very first page, that human capacity, in concert with collective action, and not technology, is the key factor in responding to the major challenges faced by society.

The UNESCO document therefore advocates the development of an appropriate regulatory framework to ensure that generative AI, along with other technologies, truly becomes a tool – placed in the hands of educators, students, and researchers – within a human-centred approach that promotes human agency, inclusion, equality (including gender equality), and cultural and linguistic diversity.

In order to put these statements into practice, an approach starting from human rights appears to be the most suitable.

VI. Technology in education: a reflection starting from Human Rights

The foundation of this perspective lies in the Universal Declaration of Human Rights (1948), which in Article 26 declares:

«Education shall be directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms». This principle clearly aligns with the broader framework of the 2030 Agenda as well, and in particular with Goal 4, “Quality Education”, which calls for inclusive and equitable quality education and the promotion of lifelong learning opportunities for all, ensuring that education systems are responsive to the needs of all learners and contribute to the development of just, peaceful, and sustainable societies.

The era of profound transformation we are living through should lead to deeper reflection on the meaning and role of education as a whole.

In such a context, where information is available everywhere and at any time, not only is the role of teachers changing (UNESCO, 2021), but it is also necessary to provide those responsible for the education of minors with the tools to handle such a significant change.

As Fernando H. Llano Alonso has stated:

We cannot, as members of the human species, ignore the duty to jointly preserve the cultural heritage of humanism, whose philosophical and legal foundations are fully compatible with scientific and technological development, provided that it respects human rights, with particular attention to autonomy, dignity, and the moral integrity of the person (Llano Alonso, 2024: 223, my translation).

Are we perhaps witnessing an epistemological shift in the understanding of the teacher-student paradigm?

If so, then the right to quality education must be redefined and reshaped. The right to education becomes, in essence, the right of young people to be surrounded by adults capable of recognizing the deep transformation under way and of adapting their pedagogical practices accordingly. It is the responsibility of these adults to embrace their changing role in such a peculiar age of change.

There is a growing need for an educational model that is, in some ways, new, yet firmly rooted in long-standing principles of human rights. This model must not be dazzled by the promises of unrestrained digitalization, but must instead harness its opportunities while mitigating its risks.

It is therefore necessary to reframe the role of education, of teachers, and of learners.

In this regard, the Committee on the Rights of the Child published General Comment No. 25 (2021) on children’s rights in relation to the digital environment. In the document, the Committee recognizes the dual nature of technology, acknowledging that the digital environment is now a fundamental space for the exercise of rights by children and adolescents: it serves as a powerful tool for the realization of rights but, at the same time, also poses significant threats to their protection.

In light of this assumption, it provides general principles that should guide the determination of the measures needed to guarantee the realization of children’s rights in relation to the digital environment.

These principles are: non-discrimination; best interests of the child; right to life, survival and development; respect for the views of the child.

In particular, a relevant focus must be placed on the best interests of the child (Bergamini & Ragni, 2019; European Asylum Support Office, 2020; Freeman, 2007; Lorubbio, 2022), recognized by Article 3 of the 1989 United Nations Convention on the Rights of the Child (UNCRC), which places the best interests of minors at the centre of every action concerning them: more than a mere statement, it is a binding principle that requires the prioritization of minors’ welfare in every context.

In this regard, the Committee stated that «The digital environment was not originally designed for children, yet it plays a significant role in children’s lives. States parties should ensure that, in all actions regarding the provision, regulation, design, management and use of the digital environment, the best interests of every child is a primary consideration» (Committee on the Rights of the Child, par. III: 2-3).

As María Carmen Barranco Avilés (2025) has pointed out, the principle of the best interests of the child requires States to take this standard into account in all decisions and measures concerning the digital environment. In its most developed interpretation, this principle demands that all children’s rights are considered in the digital context, including the right to access information, the right to protection from harm, and the right to participate in decisions affecting them.

Accordingly, States are not only obliged to guarantee the opportunities offered by digital technologies, particularly in times of crisis, but also to address the risks they pose – such as exposure to violent or sexual content, cyberbullying, exploitation, abuse, and the incitement to harmful behaviors.

This perspective also underscores the importance of educating parents, caregivers, educators, and other key stakeholders to ensure the effective protection of children’s rights in digital spaces.

In this regard, the aforementioned UNESCO report “Technology in education: A tool on whose terms?” (2023) raises several issues. It challenges the assumption that technology automatically enhances education and learning, clearly stating that it is essential to learn how to live both with and without technology, in all areas of life, and particularly in education.

For this reason, the report primarily addresses policymakers and governments, calling for the continuation of debate and scientific research. It urges a shift in focus not only on digital inputs, but more importantly, on learning outcomes.

In addition to the fundamental human right to education, the right to health – enshrined in Article 25 of the 1948 Universal Declaration of Human Rights – must not be overlooked. When reflecting on the use of digital devices, it is indeed essential to consider the growing body of research highlighting the psychophysical impacts on individuals, particularly minors, which, as already pointed out, include increased risks of anxiety disorders, depression, and sleep deprivation, as well as the emergence of real forms of addiction to digital devices, especially smartphones (Spitzer, 2013; Twenge, 2017; OECD, 2023; Haidt, 2024).

In this respect, regardless of the decisions made or the direction taken, the report firmly reaffirms the necessity of placing the “best interests of students” at the centre of any educational system rooted in rights. It also asserts that machines can never replace the value of face-to-face interactions between students and teachers.

Ultimately, the UNESCO report calls for a critical rethinking of the role of technology in education, cautioning against viewing it as a universal solution and instead emphasising its limitations and risks. The debate should revolve around building an educational ecosystem in which technology serves as a supportive tool, never as a substitute for the irreplaceable human connection between educators and learners.

In this regard, the Italian Authority for the Protection of Childhood and Adolescence also presented the “Children’s Manifesto on Digital Rights”[9], which consists of ten core principles, written by around 10,000 pupils from over 400 primary school classes across Italy[10]. In this context, some rights are particularly noteworthy, namely: the right to education[11], the right to protection[12], the right to respect[13], the right to friendship[14], the right to health[15], the right to family[16], and the right to disconnection[17], because they all have to do with the role of educators, teachers, and families in educating, protecting, and respecting minors and their rights, even in the digital environment.

This project is proof that minors are, and can be, at once recipients and makers of these principles, so it appears necessary to call for agency, where “agency”

[…] denotes a particular form of participation that manifests autonomy in action, meaning the ability to choose between possible actions, and therefore requires awareness. In the context of the digital society, this means being aware of the use of devices and technological tools, as well as of the effects that these tools may generate in digital environments, but it also involves navigating in an environment composed of data and information, now referred to as the “infosphere”, which raises new questions for law, starting with private law (Casadei, 2025a).

In this way, rights, and in particular human rights, constitute the foundation on which it is possible to (re)draw a new way of teaching.

Only through a rights-based approach is it possible to ensure an education that is equitable, inclusive, and respectful of everyone’s rights, both now and in the future.

VII. Conclusion

In conclusion, the digital transformation has reshaped (and is still reshaping) not only our access to information and our ways of communicating, but the very structure of the educational sphere. As this analysis has tried to show (without any ambition of completeness), the integration of technology into educational systems – from the use of digital devices at home and at school, such as a simple computer or an interactive whiteboard, to the use of artificial intelligence by both teachers and students – carries both extraordinary potential and substantial risk, as stated in both international and supranational documents. Indeed, while digital tools can enhance accessibility, support personalized learning, and foster the inclusion of a wide array of students, they also raise urgent concerns regarding data protection, equity, the standardisation of knowledge, and the preservation of a human-centred approach in the relationship between teachers and students.

Throughout this article, it has been argued that any effort to embed digital technologies in educational systems must be anchored in rights; a rights-based approach therefore appears the most suitable for addressing the issue of technology in education and education about technology. Such an approach can safeguard human dignity, equity, and inclusion, in the sense that young people become actively involved in the choices regarding their own education. From this perspective, education must not only adapt to the digital age but must lead its transformation, ensuring that learners, especially children and adolescents, remain at the centre of this evolution.

Human rights offer a normative framework and solid roots, valuable for orienting oneself in this complex field; in the case of minors, the fundamental principle of the best interests of the child and the principle of listening are the cardinal points that educators, parents, families, and institutions must keep in mind when handling the complex relationship between technology and education.

In this regard, education must be rethought not as a passive recipient of technological innovation, but as a proactive and critical force, able to shape the future of technology through ethical reflection, participation and an open gaze. A gaze that is able to see beyond the narrative of the necessity of technology and its amazing positive sides. A gaze which is able to bridge the “divorce” suggested by Luciano Floridi, between action and intelligence.

According to the thesis presented in this article, a human-centred and pluralistic approach to digital education is not merely desirable but necessary, since it is the only path through which it is possible to ensure that the process of digitalization truly serves the public good, empowering learners to become not just digitally competent, but ethically aware of the complexity that characterizes this increasingly interconnected world.

In closing, it is possible to say that we must learn to unlearn: unlearning to imagine ourselves as inextricably tied to technology, as if there were no choice, as if the process were unstoppable. It probably is, but it is necessary to rethink our own role, the role of educators, the role of education itself, and the role of younger people in order to reposition technology in all spheres of life, starting with education.

We have to become able to choose: just as we choose to wear a pair of sunglasses on a particularly sunny day, but certainly do not do so at night, when there would be no need and, in fact, they would be harmful, in the same way we must learn to use technology when it is needed and unlearn its use when it is useless or even harmful.

Bibliography

Books and Book Chapters

Bachmair, B. (2015). Editorial - Digital mobility. Media educational endeavour in our disparate cultural development. Med. Media Education, 2, pp. 1–6.

Balbinot, M. (2025). Genitori in rete. (Sovra)esposizione: lo sharenting. In Th. Casadei, V. Barone, & B. Rossi (Eds.), Giovani in rete. Guida per un uso consapevole delle tecnologie. Giappichelli.

Barone, V. (2025). “Revenge Porn”: la violenza digitale di genere. In Th. Casadei, V. Barone, & B. Rossi (Eds.), Giovani in rete. Guida per un uso consapevole delle tecnologie. Giappichelli.

Bello, B. G. (2023). In(giustizie) digitali. Un itinerario su tecnologie e diritti. Pacini Giuridica.

Bello, B. G., & Scudieri, L. (Eds.). (2022). Odio online: forme, prevenzione, contrasto. Giappichelli.

Bergamini, E., & Ragni, C. (Eds.). (2019). Fundamental rights and best interests of the child in transnational families. Intersentia. https://doi.org/10.1017/9781780689395

Caligiore, D. (2022). IA. Istruzioni per l’uso. Il Mulino.

Caligiore, D. (2024). Curarsi con l’intelligenza artificiale. Il Mulino.

Carruba, M. C. (2014). Tecnologia e disabilità. Pedagogia speciale e tecnologie per un’inclusione possibile. Pensa Multimedia.

Casadei, Th. (2021). Il diritto in azione: significati, funzione e pratiche. In V. Marzocco, S. Zullo, & Th. Casadei (Eds.), La didattica del diritto. Metodi, strumenti e prospettive (pp. 89–124). Pacini Giuridica.

Casadei, Th. (2022). “Una questione di accesso”? Democrazia e nuove tecnologie. Il caso dell’istruzione. In S. Salardi, M. Saporiti, & M. V. Zaganelli (Eds.), Diritti umani e tecnologie morali. Una prospettiva comparata tra Italia e Brasile (pp. 23–34). Giappichelli.

Casadei, Th., Mondello, M., & Severi, C. (2025). Oltre i confini. Linee guida sull’educazione alla cittadinanza globale. CRID – Centro di Ricerca Interdipartimentale su Discriminazioni e vulnerabilità – Unimore. Mucchi.

Cerrina Feroni, G., Fontana, C., & Raffiotta, E. C. (Eds.). (2022). AI Anthology. Profili giuridici, economici e sociali dell’intelligenza artificiale. Il Mulino.

Crawford, K. (2021). Né intelligente né artificiale. Il lato oscuro dell’IA. Il Mulino.

Di Tano, F. (2024). I reati informatici e i fenomeni del cyberstalking, del cyberbullismo e del revenge porn. In Th. Casadei & S. Pietropaoli (Eds.), Diritto e tecnologie informatiche. Questioni di informatica giuridica, prospettive istituzionali e sfide sociali. 2ª ed. ampliata e aggiornata. pp. 197-212. Wolters Kluwer.

Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford University Press.

Floridi, L. (2020). Pensare l’infosfera. La filosofia come design concettuale. Raffaello Cortina.

Floridi, L. (2022). Etica dell’intelligenza artificiale. Sviluppi, opportunità, sfide. Raffaello Cortina.

Floridi, L. (Ed.). (2015). The onlife manifesto: Being human in a hyperconnected era. Springer Open. https://doi.org/10.1007/978-3-319-04093-6

Freeman, M. D. A. (2007). Article 3: The best interests of the child. Martinus Nijhoff Publishers.

Gaetano, S. (2025). Le nuove frontiere dell’interazione tra cittadino e istituzioni. In F. Casa, S. Gaetano, & G. Pascali (Eds.), Intelligenza artificiale: diritto, etica e democrazia. Il Mulino.

Galletti, M., & Zipoli Caiani, S. (Eds.). (2024). Filosofia dell’intelligenza artificiale. Sfide etiche e teoriche. Il Mulino.

Gui, M. (2019). Benessere digitale a scuola e a casa: un percorso di educazione ai media nella connessione permanente. Erickson.

Gui, M., Gerosa, T., Vitullo, A., & Losi, L. (2020). L’età dello smartphone. Un’analisi dei predittori sociali dell’età di accesso al primo smartphone personale e delle sue possibili conseguenze nel tempo. Centro di ricerca Benessere Digitale. https://boa.unimib.it/retrieve/e39773b8-2216-35a3-e053-3a05fe0aac26/Report-1_Let%C3%A0-dello-smartphone.pdf

Haidt, J. (2024). The anxious generation: How the great rewiring of childhood is causing an epidemic of mental illness. Penguin Press.

Illica Magrini, A. (2024). Il cammino della cittadinanza digitale: criticità, risorse, prospettive di sviluppo. In F. Casa, S. Gaetano, & G. Pascali (Eds.), Intelligenza artificiale: diritto, etica e democrazia. Il Mulino.

Llano Alonso, F. H. (2024). Homo ex Machina. Ética de la inteligencia artificial y derecho digital ante el horizonte de la singularidad tecnológica (Prologue by S. Pietropaoli). Tirant lo Blanch.

Lorubbio, V. (2022). The best interests of the child: More than a right, a principle, a rule of procedure of international law. Editoriale Scientifica.

Pachler, N., Bachmair, B., & Cook, J. (Eds.). (2010). Mobile learning: Structure, agency, practices. Springer. https://doi.org/10.1007/978-1-4419-0585-7

Rossi, B. (2025). Il rischio dell’autoreclusione. Iperconnettività: hikikomori. In Th. Casadei, V. Barone, & B. Rossi (Eds.), Giovani in rete. Guida per un uso consapevole delle tecnologie. Giappichelli.

Salardi, S. (2023). Intelligenza artificiale e semantica del cambiamento: una lettura critica. Giappichelli.

Sarra, C. (2022). Il mondo-dato. Saggi su datificazione e diritto. CLEUP.

Sartor, G. (2022). L’intelligenza artificiale e il diritto. Giappichelli.

Scagliarini, S. (2024). I diritti costituzionali nell’era di internet: cittadinanza digitale, accesso alla rete e net neutrality. In Th. Casadei & S. Pietropaoli (Eds.), Diritto e tecnologie informatiche. Questioni di informatica giuridica, prospettive istituzionali e sfide sociali (2ª ed. ampliata e aggiornata, pp. 3–16). Wolters Kluwer.

Schwab, K. (2016). The fourth industrial revolution. World Economic Forum.

Severi, C. (2025). I patti educativi digitali. Fiducia, cooperazione e diritti per un’educazione digitale consapevole. In Th. Casadei, V. Barone, & B. Rossi (Eds.), Giovani in rete. Guida per un uso consapevole delle tecnologie. Giappichelli.

Severi, C. (2025). L’impatto delle tecnologie sui Minori Stranieri Non Accompagnati: tra rischi, opportunità e sicurezza. In Th. Casadei & B. G. Bello (Eds.), Minori Stranieri Non Accompagnati e uso delle tecnologie. Sicurezza, consapevolezza e tutela dei diritti. Mucchi.

Spitzer, M. (2013). Demenza digitale: Come la nuova tecnologia ci rende stupidi. Corbaccio.

Tamburrini, G. (2020). Etica delle macchine. Dilemmi morali per robotica e intelligenza artificiale. Carocci.

Verza, A. (2016). L’hikikomori e il giardino all’inglese: Inquietante irrazionalità e solitudine comune. Ragion Pratica, 46(1), pp. 243–260. https://doi.org/10.1415/83206

Wajcman, J. (2020). La tirannia del tempo. L’accelerazione della vita nel capitalismo digitale. Treccani.

Journal Articles

AAP – American Academy of Pediatrics (Radesky, J., Christakis, D. A., et al.). (2016). Media and young minds. Pediatrics, 138(5). https://doi.org/10.1542/peds.2016-2591

Barranco, M. C. (2025). La Carta española de Derechos Digitales y los derechos humanos de los niños, niñas y adolescentes. Revista de Derecho Privado, 48, pp. 47–68. https://doi.org/10.18601/01234366.48.03

Campione, F., Catena, E., Schirripa, A., & Caligiore, D. (2024). Creatività umana e intelligenza artificiale generativa: similarità, differenze e prospettive. Sistemi intelligenti, pp. 1–26.

Casadei, Th. (2024). Brechas digitales: el reto de las nuevas tecnologías para los derechos humanos. Revista De La Facultad De Derecho De México, 74(290), pp. 149-178. https://doi.org/10.22201/fder.24488933e.2024.290.90069

Casadei, Th. (2025a). Guest editor’s introductory note: Regulation, awareness, agency – Beyond the “risk paradigm”. Revista de Derecho Privado, 48, pp. 5–18. https://doi.org/10.18601/01234366.48.01

Casadei, Th. (2025b). I divari digitali di genere: frontiera del “costituzionalismo digitale”? Diritto & Questioni pubbliche.

CPS – Canadian Paediatric Society (Ponti, M., Bélanger, S., Grimes, R., et al.). (2017). Screen time and young children: Promoting health and development in a digital world. Paediatrics & Child Health, 22(8), pp. 461–468. https://doi.org/10.1093/pch/pxx123

Dempsey, S., Lyons, S., & McCoy, S. (2019). Later is better: Mobile phone ownership and child academic development. Economics of Innovation and New Technology, 28, 798–815. https://doi.org/10.1080/10438599.2018.1559786

Gerosa, T., & Gui, M. (2023). Earlier smartphone acquisition negatively impacts language proficiency, but only for heavy media users. Social Science Research, 114, 102915. https://doi.org/10.1016/j.ssresearch.2023.102915

Gramigna, A., & Poletti, M. (2019). Luoghi formativi: dall’agorà alla cittadinanza digitale. Formazione & Insegnamento, 17(1), pp. 115-127.

Martoni, M. (2025). Digital transformation and e-citizenship. Children’s access to online services. Revista de Derecho Privado, 48, pp. 69–86. https://doi.org/10.18601/01234366.48.04

Ponce Solé, J. (2022). Reserva de humanidad y supervisión humana de la Inteligencia Artificial. El Cronista del Estado social y democrático de derecho, 100, 58-67.

Salardi, S. (2020). Democrazia e nuove tecnologie: scenari passati e dell’avvenire. Ordines. Per un sapere interdisciplinare sulle istituzioni europee, 2(405), pp. 397–412.

Tremblay, M. S., Carson, V., et al. (2016). Canadian 24-hour movement guidelines for children and youth (5 to 17 years): An integration of physical activity, sedentary behaviour, and sleep. Applied Physiology, Nutrition, and Metabolism, 41(6 Suppl 3). https://doi.org/10.1139/apnm-2016-0203

Institutional and Legal Documents

Council of Europe Standing Conference of Ministers of Education. (2023). Regulating artificial intelligence in education. https://rm.coe.int/regulating-artificial-intelligence-in-education-26th-session-council-o/1680ac9b7c

European Asylum Support Office. (2020). EASO practical guide on the best interests of the child in asylum procedures. Publications Office of the European Union. https://op.europa.eu/en/publication-detail/-/publication/ca903e86-3c84-11eb-b27b-01aa75ed71a1

European Commission. (2020). Digital Education Action Plan 2021–2027: Resetting education and training for the digital age. European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020DC0624

European Parliament. (2021, March 25). Report on shaping digital education policy (P9_TA(2021)0095).

European Parliament. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations and Directives (Artificial Intelligence Act).

OECD – Organisation for Economic Co-operation and Development. (2023). PISA 2022 results (Volume II): Learning during – and from – disruption. OECD Publishing. https://doi.org/10.1787/a97db61c-en

Save the Children. (2025). Essere genitori nell’era digitale. https://s3-www.savethechildren.it/public/2025-03/Essere_Genitori_nell_Era-Digitale_stc.pdf

Senato della Repubblica. (2021). Documento a conclusione dell’indagine conoscitiva sull’impatto del digitale sugli studenti. https://www.miur.gov.it/documents/20182/6739250/Documento_Senato_Sull%E2%80%99impatto_del_digitale_sugli_studenti.pdf

UNESCO. (2019). Beijing Consensus on Artificial Intelligence and Education. https://unesdoc.unesco.org/ark:/48223/pf0000368303

UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137

UNESCO. (2023). Global education monitoring report, 2023: Technology in education: a tool in whose terms? https://unesdoc.unesco.org/ark:/48223/pf0000385723

Web Resources / Reports

IAP – Indian Academy of Pediatrics. (2020). Screen Time Guidelines for Parents. https://iapindia.org/screen-time-guidelines

NDIA – National Digital Inclusion Alliance. (2024). Definitions. https://www.digitalinclusion.org/definitions

Twenge, J. M. (2017, September). Have smartphones destroyed a generation? The Atlantic. https://www.theatlantic.com/magazine/archive/2017/09/has-the-smartphone-destroyed-a-generation/534198/


[*] This contribution stems from a presentation bearing the same title, which I had the opportunity to deliver as part of the joint Doctoral Seminar “New Technologies, Ethics, Culture and Society” organized by the PhD programs Humanities, Technology and Society (University of Modena and Reggio Emilia, Fondazione Collegio San Carlo of Modena, Almo Collegio Borromeo of Pavia) and DocDerUS (Universidad de Sevilla) in 2025, dedicated to the themes of “AI and digitalization: applications, challenges, and opportunities”. The seminar was held on February 6-7, 2025, at the Universidad de Sevilla. This experience took place within a research period I spent for my doctoral dissertation at the Faculty of Law of the Universidad de Sevilla. For this reason, I express my deepest gratitude to Prof. Fernando H. Llano Alonso (Decano de la Facultad de Derecho de la Universidad de Sevilla), who has been and continues to be an invaluable guide throughout this enriching research experience. I would also like to thank Prof. Matteo Rinaldini, Scientific Coordinator of the PhD program in Humanities, Technology and Society, as well as Prof. Inmaculada Marín Alonso (Univ. de Sevilla, Vicedecana de Calidad e Innovación Docente de la Facultad de Derecho US), for their valuable feedback on my presentation, which was precious in improving this contribution. I would also like to thank Prof. Thomas Casadei, my scientific tutor, and Prof. Gianfrancesco Zanetti for their constant support and precious guidance. Finally, allow me to thank those who share this academic journey with me, my PhD colleagues: Piero Sansò, Caterina Zamboni, Giuseppe Chiavaroli, Matilde Operato, Eugenio Capitani.

[2] However, in order to enjoy the benefits brought by these technological tools in education, it is necessary, first of all, to have certain knowledge and skills related to so-called “digital education”. In this regard, Barbara Giovanna Bello aptly noted that «The now numerous documents that promote digital education mostly focus on digital literacy, […] in relation to digital divides that generate new inequalities or exacerbate existing ones. This is certainly a fundamental aspect for participation in the digital sphere, a conditio sine qua non for the exercise of rights but […] what is also needed is a form of “literacy” in rights (human, fundamental, non-digital and digital), in participation in the network, and in this – in many respects still new – onlife citizenship» (Bello, 2023, p. 103, my translation).

[3] More precisely, according to Eurostat, «In 2023, 55% of people in the EU aged 16 to 74 had at least basic overall digital skills. There were significant disparities across the EU, with rates ranging from 83% in the Netherlands to 28% in Romania»: https://ec.europa.eu/eurostat/web/products-eurostat-news/w/ddn-20240222-1.

[5] In this regard, the Italian Society of Pediatrics, in line with the American Academy of Pediatrics and Australian guidelines, has called for the regulation of children’s exposure to multimedia devices during early childhood. Specifically, it recommends no use of smartphones or tablets before the age of two, during meals, or before bedtime. It also advises limiting screen time to a maximum of one hour per day for children aged 2 to 5, and to a maximum of two hours per day for those aged 5 to 8.

[6] According to the Italian National Research Council, over 1 million students aged between 15 and 19 (47%) experienced incidents of cyberbullying during 2024. Over 800,000 students (32%) engaged in cyberbullying, with a slightly higher percentage among boys (35%) than girls (29%). Nearly a quarter of students (23%, approximately 600,000 individuals) report being both perpetrators and victims of cyberbullying, finding themselves trapped in a vicious cycle. This dual role is particularly concerning, as it is often linked to more severe consequences, including difficulties in interpersonal relationships and other at-risk behaviours. In recent years, the number of students who are both cyberbullies and victims has risen steadily across both genders; however, the phenomenon appears more prevalent among boys (26%) than girls (21%). In response, the recommended approach is to develop concrete solutions that foster safe and inclusive online environments, while promoting a culture of respect and solidarity among younger generations. The study is available here: https://www.cnr.it/it/nota-stampa/n-13283/cyberbullismo-tra-i-giovani-un-fenomeno-in-crescita-che-colpisce-oltre-un-milione-di-adolescenti-italiani-i-dati-espad-italia

[7] However, as Silvia Salardi (2023) pointed out: «This so-called digital and/or algorithmic revolution, far from being a true revolution, is in reality nothing more than the culmination of a long process that experienced a decisive acceleration in the 20th century. A path shaped by those who held and still hold technological and economic power, and who were able to impose, in a non-democratic manner, a certain direction [...]. Humanity found itself catapulted into a dimension that did not yet globally belong to it, and this digital dimension, although it has saved us in many ways [...], has also revealed all its limitations, namely, the limits of a forced virtualization of human relationships, driven to total digital control under the watchful eye of technology» (Salardi 2023, pp. 14-15, my translation). In other words, Salardi highlights how broad participation has been enabled across a wide range of activities made possible by technological tools. However, the rise of the digital world has also created a significant gap between what technology allows us to do in an extremely short amount of time and the extent to which people have been (or still are) adequately prepared to critically and consciously face such a transformation.

This issue is not rooted in the revolutionary nature of the current era, but rather in the fact that the advancement of technological progress was not accompanied by a parallel, structured educational and pedagogical effort aimed at all users. Such an effort would have been essential to raise levels of knowledge, awareness, and critical thinking.

[8] In this regard, Fernando Llano Alonso pointed out that even Regulation (EU) 2016/679 (GDPR) recognizes the reserve of humanity, since Article 22(1) asserts that the data subject has the right not to be subjected to decisions based solely on automated processing, including profiling, when such decisions produce legal effects or similarly significant consequences for them. Furthermore, Article 22(3) provides that, in the situations outlined in points (a) and (c) of paragraph 2 – namely, when the decision is necessary for entering into or performing a contract between the data subject and the data controller, or when it is based on the data subject’s explicit consent – the data controller must implement appropriate measures to safeguard the data subject’s rights, freedoms, and legitimate interests. These safeguards must include, at a minimum, the right to obtain human intervention, to express one’s point of view, and to contest the decision. However, following the idea of Juli Ponce Solé (2022, p. 6), he points out that the EU Artificial Intelligence Act (AI Act) raises concerns about regulatory consistency in relation to Article 22 of the GDPR. While the GDPR establishes a general “human oversight” safeguard – allowing automated decision-making only under specific legal exceptions and always subject to human supervision – the AI Act adopts a more selective approach: Article 5 prohibits the use of AI only in certain narrowly defined scenarios, and Article 14 requires human oversight only for high-risk AI systems. For all other AI applications considered to carry lower or moderate risk, the AI Act provides neither a requirement for human oversight nor a general “human-in-the-loop” safeguard (Llano Alonso, 2024, pp. 144-147, my translation).

[9] Along the same lines, in 2021 the Council of Europe published the document “Know Your Rights in the Digital Environment. Council of Europe Guidelines to Respect, Protect and Fulfil the Rights of Children in the Digital Environment”, which outlines the same rights with the aim of making them understandable not only to policymakers and the adults responsible for the education of minors, but also to young people themselves. The document is available at the following link: https://rm.coe.int/conosci-i-tuoi-diritti-nell-ambiente-digitale-linee-guida-del-consigli/1680a3ed2e

[10] The manifesto stems from a digital education project for primary schools carried out in collaboration with the “Istituto degli Innocenti” (IDI). The whole manifesto in English is available at the following link: https://www.garanteinfanzia.org/sites/default/files/2023-07/AGIA-Ambiente-digitale_Manifesto-EN.pdf

[11] Meaning the right to a digital education and to access content and services that are appropriate for one’s age. In this regard, minors are entitled to receive information about the digital world in all its dimensions, provided by qualified individuals who can explain both the benefits and the risks of the internet. Everyone should be guaranteed age-appropriate knowledge of the digital world, without “hiding” or “demonizing” any aspect of it. For this reason, educators should be equipped to present digital topics in an engaging and unbiased manner, because young people have the right to receive digital education in school, including mandatory courses, followed by the issuance of a certificate that grants access to the web.

[12] Meaning the right to navigate the web within a safe, inclusive, and age-appropriate environment, free from exposure to language, images, videos, or solicitations that may intimidate minors or pose a threat to their safety and well-being. In particular, adults have the responsibility to safeguard young people from any form of abuse, violence, mistreatment, and coercion, as well as from acts of aggression, blackmail, slander, defamation, identity theft, unlawful collection and dissemination of personal data, and deceptive behaviours perpetrated by malicious individuals operating in the digital sphere.

[13] Meaning the right to fully express their identity in the digital world, and to have their digital personality protected and safeguarded. Indeed, children and adolescents are entitled to hold and express their own ideas and emotions, and to communicate their individuality online. They must be shielded from abuse, ridicule, mockery, and insults on social media platforms. Everyone has the right to express themselves – whether through words, images, or other forms of communication – in a respectful manner and without being subjected to offensive comments. Above all, young people should feel free to be themselves, without the pressure of conforming to group expectations or seeking validation from the online community. The reference is to the phenomenon of so-called “online hate speech”; on this particular aspect of the digital sphere, please refer to Bello, B. G., & Scudieri, L. (Eds.). (2022). Odio online: forme, prevenzione, contrasto. Giappichelli.

Moreover, it should be noted that this aspect is problematic due to the very structure of the internet and could therefore be interpreted as a call for adult responsibility in protecting content relating to minors. This refers specifically to adults’ sharing and use, on social media, of images and videos depicting minors, the phenomenon known in the literature as “sharenting”. For further insight into this specific aspect, see the contribution by Balbinot, M. (2025). Genitori in rete. (Sovra)esposizione: lo sharenting. In Th. Casadei, V. Barone, & B. Rossi (Eds.), Giovani in rete. Guida per un uso consapevole delle tecnologie. Giappichelli.

[14] Meaning the right to build meaningful, safe, and trustworthy friendships with peers with whom they can play, communicate, and share their emotions, making use of the opportunities offered by the digital world within a secure and protected online environment that provides access to reliable and useful information. Therefore, minors are entitled to a digital space that enables them to exchange information with their peers without the risk of exposing or disseminating their personal data. According to the manifesto, they also have the right to use digital technologies to maintain connections with friends, parents, and relatives who live far away. Regarding this specific aspect, which concerns minors who are more vulnerable than others, such as the so-called “Unaccompanied Foreign Minors”, please allow me to refer to Severi, C. (2025). L’impatto delle tecnologie sui Minori Stranieri Non Accompagnati: tra rischi, opportunità e sicurezza. In Th. Casadei & B. G. Bello (Eds.), Minori Stranieri Non Accompagnati e uso delle tecnologie. Sicurezza, consapevolezza e tutela dei diritti. Mucchi.

[15] The manifesto asserts that minors have the right to receive clear and accurate information regarding the health risks linked to the use of digital technologies, as well as to access tools designed to reduce potential harm to their physical and mental well-being. Obviously, it is the responsibility of adults to guide and supervise their use of digital devices, supporting them in managing both the duration and the manner of that use. For this reason, adults are the first who have a duty to be informed about the risks connected to social networks, in order to offer children an appropriate education and to protect them from the physical and psychological risks arising from the intensive use of technology. In this regard, please refer to Gui, M. (2019). Benessere digitale a scuola e a casa: un percorso di educazione ai media nella connessione permanente. Erickson.

[16] This right, connected to the digital environment, means the right to grow up in a nurturing and stimulating environment, where minors are supported, guided, and protected, and where they receive the help they need to overcome challenges and build their future. Indeed, parents have the responsibility to educate minors on the proper use of digital technologies, in order to help them understand and safely navigate the digital world, and to establish clear rules for its use, rules that parents themselves are also expected to follow. The manifesto highlights that the family holds a central place in the lives of minors, so young people value the moments when, instead of engaging with digital devices, they are able to spend quality time playing with their parents. This is also the view that guides the “Digital educational agreements”, which can be understood as a shared responsibility pact that originates within families and extends to involve other stakeholders engaged in the digital education of minors. The first initiatives in Italy emerged in Friuli Venezia Giulia (2019) and Lombardy (2020), and the practice has since spread throughout the country. Along the same lines, the CRID – Interdepartmental Research Centre on Discrimination and Vulnerability at the University of Modena and Reggio Emilia – is also carrying out a project on digital educational agreements, in collaboration with various schools and community organizations in the Modena and Reggio Emilia areas: https://www.crid.unimore.it/site/home/attivita/laboratori-e-gruppi-di-lavoro/articolo1065068599.html

For a more in-depth analysis of digital educational agreements, their history, and the guiding principles of the project, please allow me to refer to: Severi, C. (2025). I patti educativi digitali: fiducia, cooperazione e diritti per un’educazione digitale consapevole. In Th. Casadei, V. Barone, B. Rossi. (Eds.). Giovani in rete. Guida per un uso consapevole delle tecnologie, Giappichelli.

[17] Meaning the right to disconnect from digital devices during social occasions and shared activities with the adults responsible for their care, so as to foster and enhance their interpersonal and relational skills. This is closely connected to the right to health and the right to education: it means, for instance, not being forced to stay connected at home in order to do homework online. Indeed, adults are called upon to rethink their own use of digital tools, balancing the use of technology in education with alternative instruments that do not involve the digital sphere.