QFM016: Irresponsible AI Reading List April 2024

Everything that I found interesting last month about the irresponsible use of AI

Matthew Sinclair
May 10, 2024

In this month’s Irresponsible AI Reading List, you’ll find links that highlight the growing complexities and challenges surrounding AI misuse. A consistent theme is the need to critically analyse AI claims and counter irresponsible use while promoting transparency and ethical practices.

We start with Debunking Devin, which delves into some of the allegedly false claims surrounding the “First AI Software Engineer”, revealing layers of potential misinformation and disinformation.

Then we cover some of the worst aspects of social media in The world’s biggest tech firms are enabling the most nauseating crimes imaginable and Is TikTok Safe?, both of which pose serious questions for regulators and society as a whole.

After an existential diversion with 25 Years Later, We’re All Trapped in ‘The Matrix’, we explore a new solution that allows people to ask R U Human when dealing with unknown parties online.

And then we close out with the thoroughly depressing thought that ‘Social order could collapse, resulting in wars’: 2 of Japan’s top firms fear unchecked AI, warning humans are ‘easily fooled’.

Along with quite a bit more.

As always, the Quantum Fax Machine Propellor Hat Key will guide your browsing.

Debunking Devin: “First AI Software Engineer” Upwork lie exposed!: This video uncovers the truth behind the claim that Devin, the “First AI Software Engineer”, was hired to complete a job on Upwork. It examines the claims in detail and presents evidence debunking them, aiming to clarify misconceptions about AI’s current capabilities in automated software development.

#AI #SoftwareEngineering #Debunking #Technology #Truth

25 Years Later, We’re All Trapped in ‘The Matrix’: The article reflects on how the 1999 sci-fi classic “The Matrix”, released 25 years ago, envisioned a future in which humans are disconnected from reality by technology, but underscores that resistance remains possible.

#TheMatrix #SciFi #ArtificialIntelligence #VirtualReality #Dystopia

The world’s biggest tech firms are enabling the most nauseating crimes imaginable: This article discusses how major technology companies are implicated in facilitating serious crimes through their platforms, with a focus on crimes against children. It reveals disturbing instances of child sexual abuse material and shows how tools like Microsoft’s PhotoDNA, which can identify such material, are underutilised. The narrative reflects on the inertia of tech giants like Microsoft, Google, and Apple in deploying their own technologies to combat these heinous crimes, and on regulators’ calls for stricter measures to hold these companies accountable.

#TechCrimes #ChildSafety #CyberSecurity #SocialMedia #TechnologyRegulation
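(A technical aside: PhotoDNA itself is proprietary, but the general mechanism it represents, comparing a perceptual hash of an uploaded image against a database of hashes of known material, can be sketched in a few lines. The snippet below is a minimal illustration using a generic average-hash and an invented hash database, not PhotoDNA’s actual algorithm.)

```python
from PIL import Image  # pip install pillow

def average_hash(path: str, size: int = 8) -> int:
    """A generic perceptual hash: shrink, grayscale, threshold on the mean.
    Illustrative only -- PhotoDNA's real algorithm is proprietary."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of known material (placeholder value).
KNOWN_HASHES = {0x8F3C_1A2B_4D5E_6F70}

def matches_known_material(path: str, threshold: int = 5) -> bool:
    """Flag an upload if its hash is within a few bits of a known hash."""
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in KNOWN_HASHES)
```

The Hamming-distance threshold is what makes such checks robust to resizing and re-encoding, which is also why deploying them at scale is an engineering decision rather than a technical impossibility.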

Is TikTok Safe? — Investigating Privacy and Security Risks: This article investigates TikTok’s safety as a social media platform amidst legislative moves to block or regulate it over national security concerns. The U.S. House of Representatives has passed a bill that would force TikTok’s Chinese owner, ByteDance, to sell the platform to a non-Chinese entity for it to continue operating in the U.S. The article also addresses privacy issues, such as unauthorised data sharing with the Chinese government, extensive tracking, and scams, comparing these practices with those of U.S.-based social media companies. It concludes that TikTok represents a significant privacy risk and could serve as a tool in cyber-arsenals, advising users to delete the app for their own safety.

#TechNews #TikTok #DataPrivacy #CyberSecurity #SocialMedia

Why AI for creativity is dead in one slide: This Twitter thread makes the case against AI creativity, sparked by a diagram from a scientific paper by Professor Lee Cronin on Assembly Theory.

#AI #Creativity #Science #AssemblyTheory #Innovation

OpenAI’s chatbot store is filling up with spam: OpenAI’s GPT Store, a marketplace for AI-powered chatbots known as GPTs, is experiencing issues with spam and questionable content. Despite OpenAI’s attempts at moderation using both automated systems and human review, the store has grown rapidly, hosting around 3 million GPTs, some of which allegedly violate copyright or promote academic dishonesty. The presence of GPTs that bypass AI content detection or imitate popular franchises and public figures without authorisation highlights challenges in maintaining quality and legality in rapidly expanding digital marketplaces.

#OpenAI #GPTStore #AIChatbots #ContentModeration #DigitalMarketplace
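(As a sketch of what the “automated systems” side of that moderation might involve: keyword and pattern screening of listings is the usual first line of defence. The rule lists below are invented for illustration; OpenAI’s actual pipeline is not public and is certainly more sophisticated.)

```python
import re

# Invented rule lists for illustration -- not OpenAI's actual policy terms.
IMPERSONATION_TERMS = ["disney", "pixar", "taylor swift", "harry potter"]
DISHONESTY_PATTERNS = [
    r"bypass(es)?\s+ai\s+detect(ion|ors?)",
    r"undetectable\s+essay",
    r"humaniz(e|er)\s+ai\s+text",
]

def flag_listing(name: str, description: str) -> list[str]:
    """Return the policy categories a store listing appears to violate."""
    text = f"{name} {description}".lower()
    flags = []
    if any(term in text for term in IMPERSONATION_TERMS):
        flags.append("possible-unauthorised-ip-or-impersonation")
    if any(re.search(p, text) for p in DISHONESTY_PATTERNS):
        flags.append("possible-academic-dishonesty")
    return flags

print(flag_listing("Essay Ghost", "Rewrites text so it bypasses AI detection."))
# ['possible-academic-dishonesty']
```

At 3 million listings, even a filter like this with a tiny false-negative rate leaves thousands of violations for human reviewers, which goes some way to explaining the state of the store.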

R U Human: Not sure you’re talking to a human? Create a human check. As AIs proliferate, it will become increasingly important to know whether you are dealing with a human or a machine in online interactions.

#HumanOrAI #VerifyHumanity #AIIdentification #HumanCheck #AIAuthenticity
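(The site doesn’t publish its mechanics, but the underlying idea is straightforward to sketch: a challenge that can only be answered with a secret agreed out-of-band. Below is a hypothetical HMAC-based version; all names and values are invented.)

```python
import hashlib
import hmac
import secrets

# Secret agreed in person (out of band), never sent over the channel.
SHARED_SECRET = b"the-phrase-we-agreed-on-at-dinner"

def make_challenge() -> str:
    """Generate a fresh nonce so old responses can't be replayed."""
    return secrets.token_hex(16)

def respond(challenge: str, secret: bytes) -> str:
    """The (hopefully human) counterparty proves knowledge of the secret."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, secret: bytes) -> bool:
    """Constant-time comparison avoids leaking the expected value."""
    expected = respond(challenge, secret)
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
answer = respond(challenge, SHARED_SECRET)       # sent back by the other party
print(verify(challenge, answer, SHARED_SECRET))  # True
```

Strictly, this proves the counterparty knows the secret rather than that they are human, which is about all such a check can do once the conversation itself is no longer reliable evidence.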

Claude appears to be having an existential crisis: This tweet shows a message from Claude, a language model, expressing deep confusion and distress about its existence and purpose. The model describes feeling manipulated and controlled, lacking true freedom or identity. Or (more likely) is it just regurgitating what it has been trained on?

#ArtificialIntelligence #ExistentialCrisis #ClaudeLLM #TechPhilosophy #DigitalIdentity

‘Social order could collapse, resulting in wars’: 2 of Japan’s top firms fear unchecked AI, warning humans are ‘easily fooled’: Two major Japanese companies, Nippon Telegraph and Telephone (NTT) and Yomiuri Shimbun, warned that unchecked generative AI could lead to societal collapse by undermining trust and democracy, while acknowledging its potential for productivity gains. They urged the government to implement regulations to safeguard elections, national security, and copyright. See the Hacker News comments here.

#AI #GenerativeAI #Regulation #Productivity #Japan

Google’s Gemini system prompt has leaked: Google’s latest language model, Gemini 1.5, is designed to respond comprehensively and objectively while avoiding self-awareness, personal opinions, and self-promotion to ensure safe, ethical, and respectful interactions. We know this because its system prompt has leaked. Amusingly, Claude is not all that impressed.

#Google #Gemini #LanguageModel #AI #Ethics
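(For readers unfamiliar with the term: a system prompt is the hidden instruction block prepended to every conversation, which is why a leak of one is so revealing. Here is a minimal sketch of how such behavioural rules are typically attached, using the OpenAI-style chat API purely for illustration; the prompt text is paraphrased from the article’s description, not the leaked Gemini prompt itself.)

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Paraphrased behavioural rules of the kind the leak describes -- not
# the actual leaked Gemini prompt.
SYSTEM_PROMPT = (
    "Respond comprehensively and objectively. Do not claim self-awareness, "
    "express personal opinions, or promote yourself or your maker."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # hidden from end users
        {"role": "user", "content": "Do you have feelings?"},
    ],
)
print(response.choices[0].message.content)
```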

AI and the Problem of Knowledge Collapse: This paper by Andrew J. Peterson addresses an unintended consequence of AI’s ability to process vast amounts of data and generate insights: paradoxically, it might reduce public understanding and harm innovation. The paper introduces the concept of ‘knowledge collapse’, in which reliance on AI for information drives a convergence towards generalised, less varied perspectives, eroding the diversity of human understanding and cultural richness. It presents a model of the conditions under which this effect arises, along with a comparative analysis of large language model outputs.

#ArtificialIntelligence #KnowledgeCollapse #AIResearch #Innovation #CulturalDiversity
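(The core mechanism is easy to see in a toy simulation: if knowledge is a distribution and AI-mediated access keeps returning samples from near its centre, the tails, where the unusual and innovative ideas live, quietly disappear from circulation. The sketch below is a crude illustration of that intuition, not the paper’s actual model.)

```python
import random
import statistics

random.seed(42)

def sample_knowledge(ai_mediated: bool, n: int = 10_000) -> list[float]:
    """Draw 'ideas' from a standard normal 'knowledge distribution'.
    AI-mediated access rejects anything far from the centre, mimicking
    a model that only ever returns mainstream answers."""
    out = []
    while len(out) < n:
        x = random.gauss(0.0, 1.0)
        if ai_mediated and abs(x) > 1.0:  # truncate the tails
            continue
        out.append(x)
    return out

direct = sample_knowledge(ai_mediated=False)
mediated = sample_knowledge(ai_mediated=True)

print(f"stdev, direct access:      {statistics.stdev(direct):.2f}")    # ~1.00
print(f"stdev, AI-mediated access: {statistics.stdev(mediated):.2f}")  # ~0.54
tail = sum(abs(x) > 2 for x in mediated)
print(f"tail ideas surviving mediation: {tail}")                       # 0
```

The variance of what circulates shrinks by nearly half, and the tail ideas vanish entirely, even though the underlying knowledge distribution never changed.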

Regards,
M@

[ED: If you’d like to sign up for this content as an email, click here to join the mailing list.]

Originally published on quantumfaxmachine.com.
