In recent months, there has been a surge of alarming claims that superintelligent artificial intelligence could one day bring about the downfall of humanity.
These claims, made by prominent figures in the field, have stoked widespread fear and uncertainty about AI's impact on society. But the truly unsettling question is not whether these claims are accurate; it is how they are shaping the conversation about AI and its responsible development.
The idea that AI could surpass human intelligence and pose a threat has long been a staple of science fiction and of popular perceptions of AI. In 2023, however, the idea has moved beyond the movies and into policy debates about how AI should be regulated.
What's troubling is how little solid evidence supports these claims, and how poorly the evidence that is offered holds up under scrutiny. Claims about mysterious "emergent properties" in AI models, for example, can usually be explained as predictable consequences of how the models are built and trained, not as something alien and unknowable. Yet such claims are routinely used to deflect demands for transparency and to sidestep discussion of AI developers' responsibilities.
Tellingly, the loudest voices warning about AI risk come from Silicon Valley, home of the big tech companies. This might seem counterintuitive, since those companies have every interest in AI's continued growth and profitability. But the scare stories actually serve big tech: they shift attention away from what these companies are doing now and onto what AI might do in some distant future. In doing so, they dodge accountability for their actions today and frame AI's trajectory as something beyond our control, when in reality it is the product of decisions made by these very companies and individuals.
It's also worth asking who is making these claims and who is being listened to. Figures like Sam Altman, who warn of AI's existential risks, are welcomed by regulators in Europe, while the voices of those already harmed by AI systems are routinely ignored. The pattern is telling: claims about AI's dangers come mostly from wealthy white men in positions of power, the people least likely to suffer AI's harms yet most responsible for causing them.
As governments struggle to work out how to regulate AI, it is crucial to push back on the narrative of superintelligent AI and existential risk. We should instead center the perspectives of people directly affected by AI systems, prioritizing them over the interests of big tech, and have honest conversations about AI's actual impacts rather than getting lost in fantasies and distractions about AI becoming all-powerful.