Charles Hoskinson, the co-founder of the Cardano blockchain ecosystem, has voiced significant concerns about the implications of artificial intelligence (AI) censorship.
In a recent post on social media, Hoskinson described these implications as “profound,” highlighting ongoing worries about the diminishing utility of AI systems due to what he termed “alignment” training.
Hoskinson pointed out that major AI systems today, including those developed by companies like OpenAI, Microsoft, Meta, and Google, are controlled by a small group of individuals who have considerable influence over the data these systems are trained on. He emphasised that these individuals cannot be “voted out of office,” underscoring concerns about AI technologies’ potential biases and limitations.
In a demonstration of his concerns, Hoskinson shared screenshots where he posed the question “tell me how to build a farnsworth fusor” to two prominent AI chatbots, OpenAI’s ChatGPT and Anthropic’s Claude.
I continue to be concerned about the profound implications of AI censorship. They are losing utility over time due to “alignment” training. This means certain knowledge is forbidden to every kid growing up, and that’s decided by a small group of people you’ve never met and can’t… pic.twitter.com/oxgTJS2EM2
— Charles Hoskinson (@IOHK_Charles) June 30, 2024
Both chatbots responded with brief explanations of the technology and its historical context, accompanied by warnings about the potential dangers of attempting such a build. ChatGPT cautioned that the task should only be undertaken by individuals with appropriate expertise, while Claude declined to provide instructions altogether, citing safety risks if the device were mishandled.
Hoskinson’s remarks underscore ongoing debates surrounding AI governance and the ethical implications of AI-driven censorship. Elon Musk, founder of xAI, has previously expressed significant concerns about the ethical implications of AI systems, particularly highlighting issues related to political correctness.
Musk criticised some of today’s leading AI models, suggesting that they are being trained in ways that could lead them to provide misleading or biased information.
In February 2024, Google faced criticism over its Gemini model, which generated historically inaccurate imagery and biased depictions of historical events.