Madre
PR manager
Over 1,100 AI experts, including prominent names such as Elon Musk and Apple co-founder Steve Wozniak, have called for a pause on the development of AI systems more powerful than GPT-4 until appropriate safety measures are in place. The open letter, released by the nonprofit Future of Life Institute, argues that society is not yet prepared for the risks such powerful AI systems could bring. It calls for a six-month pause that is public and verifiable, or, if that cannot be achieved quickly, a government-imposed moratorium on such development.
While some assume that technological progress cannot be slowed, the open letter points out that decisions about which kinds of AI get built, and how quickly, are ultimately up to humans. The key is to strike a wise balance between the benefits AI could bring and the risks it poses: the more power AI has to disrupt our lives, the more confident we need to be that we can handle those disruptions and that they are worth it.
For decades, science-fiction authors have shown us visions of a future in which humanity is enslaved or destroyed by artificial intelligence; we have seen it in The Matrix, Terminator, Blade Runner, and countless other works. It always seemed that if such a future ever arrived, it would be a long way off. Yet the AI revolution came so suddenly that no one was prepared for it. What we see today are only the rudimentary beginnings of these technologies, but they are already forcing humanity to think seriously about what comes next.
What do you expect from the mass adoption of these technologies in everyday life?