
The fast-paced progress of AI, embedded in our daily lives yet untethered to any legal framework, shapes a non-democratic scenario. Any technological development should be part of a robust strategy carried out under the umbrella of fundamental rights. If not, it is transformed into a dangerous, parallel ad hoc authority.
It is indeed a disturbing legal vacuum that leads the process.
The speed of AI is not correlated to the speed of regulation; consequently, AI is adopted before any legal framework exists, outside the “umbrella” of fundamental rights and, most importantly, outside democracy.
It may be a good exercise to analyze AI challenges through the lens of corporate social responsibility (CSR) as a path toward transparent and accountable implementation. It is no coincidence that the seven CSR principles mirror the main AI challenges: transparency, protection of fundamental rights, respect for stakeholders’ interests, ethics, accountability, and the rule of law.
How may tech companies commit to socially responsible action? The answer seems always the same: strict regulation, robust action from governments, and strong commitment from global institutions. Only then can risks be addressed, alongside a mandatory minimum standard and strict monitoring processes.
Raising the bar on AI guarantees is not just an ambition but a true need. Currently we can identify three regions that have addressed the legal vacuum in the last year, though not yet in terms of evaluation or mandatory compliance.
Within the European Union, the AI Act has recently been approved. It covers different levels of risk, with obligations ranging from voluntary to mandatory compliance, as follows:
Minimal risk: spam filters or AI-enabled video games; many companies voluntarily adopt codes of conduct.
Specific transparency risk: chatbots must inform users that they are interacting with a machine.
High risk: AI-based medical software or AI systems used for recruitment, which must comply with risk-mitigation systems, high-quality data sets, clear usage information, etc.
Unacceptable risk: absolutely banned; this covers AI systems that label the population, whether deployed by the private or the public sector.
A different approach has been established in China, mainly for political reasons: any foundation model must be registered with the Government before being released to the public. Any algorithm that may influence public opinion must be registered and licensed. In addition, suppliers are required to implement systems that counter addictive habits among the youngest users.
Meanwhile, in the USA, despite showing commitment to regulate, the 2023 Executive Order on AI relies only on self-regulation and voluntary compliance.
Sadly, the rest of the world’s countries are still at a very early stage, drafting proposals and attempting to face the speedy insertion of AI into their societies without any specific regulation.
Any approach should be supported by research, for which participatory methodologies seem the most effective in becoming inclusive and transparent.
For the purpose of this analysis, it is useful to identify four dimensions that work at different levels, mainstreamed across all sectors, disciplines, and actors, and that correlate with some of the participatory research methodologies. Namely:
Multidisciplinary level / Team
Academic work / Research
Monitoring and evaluation / Outcome, accountability
Managing the future / Education
Within this spectrum of methodologies, we may transform AI into an inclusive and transparent hub for approaching complex AI practices from a shared, cross-cutting leadership.
In the light of “AI values”, the impact is equal regardless of region, politics, religion, or culture. The wider the studies and the engagement from institutions and stakeholders, including the final user, the stronger and faster the outcomes that lead us to a multidisciplinary and effective approach.
This includes full awareness of the threats AI poses to children from an early stage, along with work on prevention and supervised change. Education plays a pivotal role here.
In order of priority, the following methodologies may serve as catalysts toward a participatory approach: one that works at a cross-disciplinary level, engages with academia, takes part in decision-making processes, and, finally, is human-centered, led by the main goal of achieving well-being and fundamental rights.
Participatory research methodologies such as team science, community-engaged research, participatory evaluation, and popular education are the answers for the spectrum we propose. Since teamwork is the central idea, holistic, cross-disciplinary teams are the “must-have” for AI studies, emphasizing democratic forms of academic-community partnership, as the well-being of citizens is at stake. A particular focus must fall on political decision-making processes, which also demand evaluation and are one of the big challenges to consider when regulating. Finally, a people-centered approach with a particular focus on the future may include incorporating these methodologies at an early stage of education.
Even if AI is presented as a social-innovation revolution, shaping new practices that promise new senses of well-being for citizens, it is currently not part of any multistakeholder strategy. It is precisely engagement at the national and global levels, by citizens and institutions alike, that guarantees full protection.
It is time to re-value our values around AI developments, imprinting with new meaning the same traditional values that have historically accompanied societies: compassion and responsibility. We must make sure any societal transformation meets human rights and SDG expectations, especially where empathetic AI, rather than mechanical or analytical tasks, is concerned. In the end, a capable assistant without ethical safeguards raises more than upskilling or reskilling issues. In practice, all of us already work with AI systems such as translators and chatbots; it is only when we enter creative or empathetic AI that we perceive the threat to fundamental rights. The prospect of a psychologist or a medical doctor being substituted by machines that devise their own therapies or diagnoses is what drives us into a wild ground of uncertainty and insecurity. It is the replacement of humans that raises the worst hesitations about how a values-based approach to AI may dramatically turn into a machine-centered approach.
The automation of tasks is welcome; the automation of values is not, for values are not negotiable.
The immutability of values is an asset worth fighting for. Values may be adapted to a new moral framework, but they must not be erased by a new set of standards based solely on mind control, centralized power, and business profits.
Do we need a business-friendly approach or hyper-aggressive enforcement?
No doubt this is the big question for the decades to come…
