Sam Altman, the 38-year-old co-founder and CEO of OpenAI, widely regarded as the father of ChatGPT, has sounded the alarm: without precise regulation, generative artificial intelligence, if used in the wrong way, can be very dangerous indeed.
OpenAI was founded on the belief that artificial intelligence has the potential to improve almost every aspect of our lives, but also that it creates serious risks we must work to control.
Testifying for the first time before a US Senate Judiciary subcommittee on the dangers of AI misuse, Altman described the technology as a “tool” rather than a “creature”, but one that must be kept in check: limited by law and overseen by an international agency that sets common rules for all, much as the International Atomic Energy Agency does for nuclear technology. In Europe, we recall, the AI Act, the first comprehensive set of rules on artificial intelligence, recently received the go-ahead.
The impact on work? It is very difficult to predict, but I think there will be more jobs, and that today’s jobs will be improved. We must remember that ChatGPT is a tool, not a creature, and can therefore be controlled. Fear depends on where the technology will go and whether it will be used in the wrong way
Generative artificial intelligence, Altman added, has the power to change everyone’s life, and to turn it upside down. It could, for example, interfere in the 2024 presidential election by multiplying and amplifying disinformation techniques based mainly on fake videos and audio, technologies that are becoming more and more convincing as the months go by.
Altman wanted to be very clear on this point, saying he was willing to work with the government to draft rules that guarantee “a balance between access to the benefits of technology and user safety”.
Regulatory pressure should be on us, on Google, on the big names in the industry. If this technology goes wrong, it can go very wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening. As this technology advances, we’re realizing that people are afraid of how it might change the way we live. And so are we.
Furthermore, Altman argued, artificial intelligence must be developed with democratic values, since “it is not a social media platform” and requires a different response. This technology, he added, “is still in its infancy and mistakes can still be made.”
Altman’s idea would be to consider granting licenses for the development of large artificial intelligence models, with the power to revoke them if developers fail to meet standards set by policymakers. He also proposed defining safety criteria, including evaluating a model’s ability to self-replicate, to escape its operators’ control through autonomous actions, and to resist external manipulation.
Despite these expressions of concern, however, Altman did not fully answer some questions that many industry experts consider essential: for example, transparency about how these generative AI models are trained, and a commitment not to use copyrighted content in training. Beyond the reassuring tone, Altman offered no concrete solutions.