On May 30, the Center for AI Safety released a public warning of the risk artificial intelligence poses to humanity. The one-sentence statement, signed by more than 350 scientists, business executives and public figures, asserts: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
It is hard not to sense the brutal double irony in this declaration.
First, some of the signatories warning about the end of civilisation – including the CEOs of Google DeepMind and OpenAI – lead the very companies responsible for creating this technology in the first place. Second, it is exactly these same companies that have the power to ensure that AI actually benefits humanity, or at the very least does not do harm.
They should heed the advice of the human rights community and immediately adopt a due diligence framework that helps them identify, prevent, and mitigate the potential negative impacts of their products.
While scientists have long warned of the dangers AI poses, it was not until the recent release of new Generative AI tools that a larger part of the general public realised the negative consequences they can have.
Generative AI is a broad term describing “creative” algorithms that can themselves generate new content, including images, text, audio, video and even computer code.
These algorithms are trained on massive datasets and then use that training to create outputs that are often indistinguishable from “real” data – rendering it difficult, if not impossible, to tell whether a piece of content was generated by a person or by an algorithm.
To date, Generative AI products have taken three main forms: tools like ChatGPT, which generate text; tools like DALL-E, Midjourney and Stable Diffusion, which generate images; and tools like Codex and Copilot, which generate computer code.
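For readers who want a concrete sense of how such tools are driven, the sketch below generates text with a small open model via the Hugging Face transformers library; the specific model (gpt2), prompt and generation settings are illustrative assumptions, not the commercial systems named above.

```python
# A minimal sketch of text generation with an open model, using the
# Hugging Face "transformers" library. The model ("gpt2"), prompt and
# token limit are illustrative choices; commercial chatbots apply the
# same basic principle at far larger scale.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence will change society by", max_new_tokens=40)

# The pipeline returns a list of dicts; "generated_text" holds the
# prompt followed by the model's continuation.
print(result[0]["generated_text"])
```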
The sudden rise of new Generative AI tools has been unprecedented. The ChatGPT chatbot developed by OpenAI took less than two months to reach 100 million users. This far outpaces the initial growth of popular platforms like TikTok, which took nine months to reach as many people.
Throughout history, technology has helped advance human rights but also created harm, often in unpredictable ways.
When internet search tools, social media and mobile technology were first released, and as they grew in adoption and accessibility, it was nearly impossible to predict the many distressing ways in which these transformative technologies would become drivers and multipliers of human rights abuses around the world.
Meta’s role in the 2017 ethnic cleansing of the Rohingya in Myanmar, for example, and the almost undetectable spyware used to turn mobile phones into 24-hour surveillance machines against journalists and human rights defenders, are both consequences of introducing disruptive technologies whose social and political implications had not been given serious consideration.
Learning from these developments, the human rights community is calling on companies developing Generative AI products to act immediately to stave off any negative consequences their products may have for human rights.
So, what might a human rights-based approach to Generative AI look like? There are several steps, based on evidence, one of which is stated below.
To fulfil their responsibility to respect human rights, these companies must immediately implement a rigorous human rights due diligence framework, as laid out in the UN Guiding Principles on Business and Human Rights.
This includes proactive and ongoing due diligence to identify actual and potential harms, transparency regarding these harms, and mitigation and remediation where appropriate.
Complacency in the face of this revolutionary moment is not an option – but neither, for that matter, is cynicism. We all have a stake in ensuring that this powerful new technology is used to benefit humanity.
Implementing a human rights-based approach to identifying and responding to harm is a critical first step in this process.
Source: Aljazeera.com
The views expressed in this article are the author’s own and do not necessarily reflect The Chronicle’s stance.