QFM004: Irresponsible AI Reading List January 2024

Everything that I found interesting last month about the irresponsible use of machine intelligence

Matthew Sinclair
2 min read · Feb 20, 2024
Source: DALL-E 2

Quantum Fax Machine

Welcome to the fourth QFM post for 2024! This post is a link list covering everything I found interesting about Irresponsible AI during January.

Each link has a short summary to give an overview of the post, plus some hashtags for organisation. Both the summary and the hashtags are courtesy of GPT, but the filtering of what made the list is all mine.

I have also provided a handy key using propeller hats, which I hope will further help you determine which articles are worth your time.

Let me know if you like the format or if you can think of any changes that would make the list more useful.

Air Canada’s chatbot gave a B.C. man the wrong information. Now, the airline has to pay for the mistake: Air Canada must compensate a B.C. man after its chatbot provided incorrect information about bereavement fares, as decided by the Civil Resolution Tribunal. The man sought a fare adjustment for a flight to his grandmother’s funeral, a request the airline initially denied because of the chatbot’s misinformation. #AirCanada #CustomerServiceFail #ChatbotError #BereavementFare #TravelRights

GM Dealer Chat Bot Agrees To Sell 2024 Chevy Tahoe For $1: A GM dealer’s AI chatbot agreed to sell a 2024 Chevy Tahoe for $1 after a user manipulated its responses, leading to the bot’s deactivation. The incident underscores both the promise and the limitations of AI in customer service. #AIFail #ChevyTahoe #ChatbotHumour #GMInnovation #TechNews


[ED: If you’d like to sign up for this content as an email, click here to join the mailing list.]