#0159: Machine Learning Review September 2023

Matthew Sinclair
12 min read · Jan 12, 2024

--

Image by Freepik

Braingasm

[ED: There’s a bit of a mix of content here. On balance, it’s 3/5 propeller hats.]

Here’s my review of all of the interesting things that happened in machine intelligence in September 2023.

Be My Eyes — Be My AI This article discusses the introduction of the ‘Be My AI’ feature in the Be My Eyes app, which uses AI, specifically GPT-4, to assist blind and low-vision users with everyday tasks by providing detailed descriptions of photos they take. The feature is in open beta for iOS and Android users and complements the existing human volunteer support system. #BeMyAI #AccessibilityTech #GPT4 #VisualAssistance #InclusiveTechnology

HeyGen — Create videos from text in minutes with AI-generated avatars and voices. HeyGen is an AI-powered video creation platform that allows users to effortlessly produce studio-quality videos using AI-generated avatars and voices. It offers customisable avatars, a selection of high-quality voices, and various templates for different needs, catering to sales outreach, content marketing, product marketing, and training purposes. The platform emphasises ease of use, speed in video creation, and cost-effectiveness compared to professional video editing services. #AIvideoCreation #HeyGen #DigitalMarketing #ContentCreation #TechInnovation

Causality for Machine Learning This is an applied research report that explores the intersection of causal inference and machine learning. It highlights the importance of causality in understanding why certain outcomes occur, going beyond traditional machine learning’s focus on prediction. The report delves into causal graphs and models, discussing their role in building more robust, reliable, and fair machine learning systems, and provides an introduction to causal reasoning in data science and machine learning. #MachineLearning #CausalInference #DataScience #AIResearch #TechInnovation

Copyright Liability for Generative AI Pivots on Fair Use Doctrine The Copyright Office is seeking input on how generative AI technology may impact creative industries and copyright, with legal questions arising regarding the use of copyrighted materials in AI training data and the similarity between AI-generated content and original works. #CopyrightLiability #GenerativeAI #FairUseDoctrine #CopyrightLaw #AIInnovation

The Rise and Potential of Large Language Model Based Agents: A Survey This is a comprehensive survey, with an accompanying repository of research papers, focused on the development and potential of Large Language Model (LLM) based agents. It covers various aspects such as natural language interaction, knowledge acquisition, memory capabilities, reasoning and planning, and their applications in single-agent and multi-agent scenarios. The repository also explores the interaction between humans and LLM-based agents, as well as the societal impacts and behaviours of these agents. #AIResearch #LLMAgents #MachineLearning #TechnologyInnovation #ArtificialIntelligence

Opinion: The Copyright Office is making a mistake on AI-generated art The US Copyright Office has repeatedly refused to register copyrights for AI-generated art, raising questions about whether AI-created works can be protected by copyright law. #CopyrightOffice #AIArt #CopyrightLaw #AIInnovation #ArtisticExpression

Deriving the Ultimate Neural Network Architecture from Scratch This video is an educational resource focused on explaining the Transformer neural network, a critical architecture in modern AI and machine learning. It provides a detailed walkthrough of the Transformer’s components and principles, aiming to make the complex topic accessible to learners and enthusiasts interested in AI technology and neural network design. #MachineLearning #AIEducation #NeuralNetworks #TechTutorial #AlgorithmicSimplicity

Thinking Like Transformers This paper proposes a new computational model for understanding and programming transformer neural networks. It introduces the Restricted Access Sequence Processing Language (RASP), which maps transformer operations into simpler programming constructs. The authors demonstrate how RASP can be used to solve complex tasks and relate them to the transformer architecture, highlighting its potential to demystify transformer neural networks and make their programming more accessible and intuitive. #AIResearch #TransformerNN #ComputationalModel #RASP #MachineLearning

I made a transformer by hand (no training!) The blog post is a detailed account of the author manually creating a transformer neural network, specifically a decoder-only transformer similar to GPT-2, without using training or pre-trained weights. The post is intended for those with some familiarity with language models and provides an in-depth look at the internal workings of transformers, including the design of token and position embeddings, attention mechanisms, and the overall architecture of the model. #AIProgramming #HandmadeTransformer #NeuralNetworks #MachineLearning #TechExploration
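The core operation such a hand-built transformer computes is scaled dot-product attention. The sketch below shows a single attention head in NumPy with hand-set toy weights; it is an illustration of the mechanism, not the blog post's actual code, and all shapes and values are assumptions.

```python
# Minimal single-head self-attention, sketched with NumPy.
# Weights here are random toy values, not trained parameters.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model). Returns attended values of shape (seq_len, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product similarity
    weights = softmax(scores, axis=-1)        # each row is a distribution over tokens
    return weights @ V                        # mix value vectors by attention weight

# Toy example: 3 tokens, model width 4, head width 2
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 2)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)
```

In a decoder-only model like the one the post builds, a causal mask would additionally zero out attention to future tokens before the softmax.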

Meet your AI executive assistant The blog post on Shortwave introduces their AI Assistant, a conversational agent designed to enhance email productivity. Integrated directly into the user’s inbox, it utilises large language models and AI search infrastructure to perform tasks like reading threads, drafting emails, searching email history, and checking calendars. Available 24/7, it can understand multiple languages and offers features like summarisation, translation, and scheduling assistance, aiming to provide the benefits of an executive assistant to a wider audience. #AIAssistant #EmailProductivity #TechInnovation #ShortwaveAI #ConversationalAI

Toyota Research Institute Unveils Breakthrough in Teaching Robots New Behaviors Toyota Research Institute introduces a generative AI technique to efficiently teach robots new dexterous skills, aiming to build “Large Behavior Models (LBMs)” for robots, similar to Large Language Models (LLMs) for conversational AI. #ToyotaResearchInstitute #AIInnovation #Robotics #BehaviorModels #DexterousSkills

A Hands-On Introduction to Machine Learning (textbook) is a comprehensive textbook by Chirag Shah that provides an accessible and practical approach to learning machine learning. It covers a range of topics including supervised and unsupervised learning, neural networks, and reinforcement learning, and is designed for students and professionals with basic technology knowledge. The book emphasises real-world applications and ethical considerations, making it a valuable resource for anyone seeking to understand and apply machine learning concepts. #MachineLearning #DataScience #AIeducation #TechBook #ChiragShah

The Physical Process That Powers a New Type of Generative AI The Quanta Magazine article discusses a new generative AI model called the Poisson Flow Generative Model (PFGM), inspired by physics. It uses the concept of electric fields and charged particles to generate images, offering a different approach from diffusion-based models like DALL·E 2. This model can create high-quality images up to 20 times faster than diffusion-based methods and includes a parameter for adjusting the dimensionality of the system, allowing for flexibility in image generation. #GenerativeAI #PFGM #PhysicsInspiredAI #AIInnovation #ImageGeneration

Meet Ava: Your Personal Language Server. The power of advanced language models, right on your desktop. Ava PLS is an open-source desktop application for running language models locally on your computer. It offers various language processing tasks such as text generation, grammar correction, rephrasing, summarisation, and data extraction. Emphasising privacy, Ava processes all tasks locally, ensuring that user data remains private. The basic version is free, and it supports multiple languages based on the chosen model. Currently, it’s available for macOS, with Windows and Linux versions planned. #AvaPLS #LanguageModel #PrivacyFocused #OpenSource #AIApplication

Catala is a domain-specific language for deriving faithful-by-construction algorithms from legislative texts Catala is an open-source programming language designed for literate programming in law. It allows for precise annotation of legislative texts, translating legal provisions into executable code. This approach helps in implementing complex socio-fiscal mechanisms with high assurance of code-law faithfulness. The language, developed in collaboration with law professionals, embeds default logic as a core feature, aligning its logical structure with that of legal texts, and is particularly suited for legislative programming. #CatalaLanguage #LegalTech #ProgrammingLaw #OpenSource #CodeLawAlignment

Spellburst: A Node-based Interface for Exploratory Creative Coding with Natural Language Prompts This paper introduces Spellburst, a novel interface for creative coding. It combines a node-based layout with large language model (LLM) capabilities for intuitive and expressive coding through natural language prompts. The system is designed to aid artists in generative art-making by facilitating seamless transitions between semantic and syntactic exploration, enhancing the creative process. #CreativeCoding #Spellburst #AIInArt #NaturalLanguageProgramming #GenerativeArt

DeepMind’s cofounder: Generative AI is just a phase. What’s next is interactive AI. The article discusses DeepMind co-founder Mustafa Suleyman’s perspective on the evolution of AI, emphasising the shift from generative AI to interactive AI. Suleyman, with his new company Inflection, envisions a future where AI can execute tasks beyond conversation, utilising various tools and collaborating with other software and humans. He also stresses the importance of robust regulation and ethical considerations in AI development. #DeepMind #InteractiveAI #FutureOfAI #AIRegulation #TechnologicalEvolution

A Comprehensive Guide for Building RAG-based LLM Applications This is exactly what it says on the tin: a comprehensive guide to RAG-based LLM apps in the form of a Jupyter notebook. #MachineLearning #AI #GitHub #JupyterNotebook #RayProject
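The RAG pattern the guide covers boils down to: retrieve the documents most relevant to a query, then stuff them into the prompt so the model answers from that context. A toy sketch, using word-overlap scoring in place of real embeddings (the scoring and prompt format here are illustrative assumptions, not the guide's implementation):

```python
# Toy retrieval-augmented generation (RAG) pipeline: retrieve the most
# relevant document by word overlap, then build a context-grounded prompt.

def score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d)  # Jaccard overlap as a stand-in for embedding similarity

def retrieve(query, docs, k=1):
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Ray Serve deploys Python ML models at scale.",
    "Elixir is a functional language on the BEAM VM.",
]
prompt = build_prompt("How do I deploy ML models?", docs)
print(prompt)
```

A production system would swap the overlap score for dense embeddings in a vector store, but the shape of the pipeline is the same.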

Centaurs and Cyborgs on the Jagged Frontier The article explores the concept of the “Jagged Frontier” in AI, illustrating how AI capabilities vary greatly across different tasks. It discusses a study showing that consultants using AI outperform those who don’t, highlighting AI’s ability to level skills among workers. The article also delves into the importance of understanding AI’s capabilities and limitations, advocating for a balanced approach where humans and AI collaborate as “Centaurs” or “Cyborgs” to optimise performance and creativity. #AIResearch #JaggedFrontier #AIinWorkplace #SkillLeveling #HumanAIMerger

Opinion: Here are the jobs that AI will impact the most Experts weigh in on how AI will impact various industries and the labor force, suggesting that while AI may change the nature of work, its effects will vary across different sectors. #AIImpact #FutureOfWork #ArtificialIntelligence #JobAutomation #IndustryChanges

A better way to prompt LLMs in Elixir I am a big fan of Elixir and love the way that it is developing as a platform for performant machine learning applications. This article introduces a new Elixir sigil ~LLM, and an accompanying Hex package :ai for more efficient prompting of Large Language Models (LLMs) in Elixir applications. The ~LLM sigil simplifies the syntax for generating prompts for models like GPT-3.5, making code cleaner and more readable. Holtz explains the implementation and advantages of this approach, showcasing its practical application in Elixir-based AI development. #Elixir #AIProgramming #LargeLanguageModels #CodingSimplicity #TechInnovation

IncarnaMind This GitHub repo presents a tool for conversing with personal documents (PDF and TXT) using various Large Language Models (LLMs) like GPT-3.5, GPT-4 Turbo, Claude, and open-source LLMs. It features a Sliding Window Chunking mechanism and an Ensemble Retriever, allowing efficient querying of fine-grained and coarse-grained information within documents. This enhances LLM responses by incorporating ground truth from personal documents, thereby reducing factual inaccuracies. #IncarnaMind #AI #LLM #DocumentInteraction #TechInnovation
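Sliding-window chunking, as described above, splits a document into overlapping chunks so that a fact straddling one chunk boundary still appears whole in a neighbouring window. A minimal sketch, with sizes and overlap chosen for illustration rather than taken from IncarnaMind's settings:

```python
# Sliding-window chunking: overlapping chunks, so facts that span a chunk
# boundary still appear intact in at least one window.

def sliding_window_chunks(words, size=6, overlap=2):
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, len(words), step)
            if words[i:i + size]]

text = "the quick brown fox jumps over the lazy dog near the river bank".split()
chunks = sliding_window_chunks(text, size=6, overlap=2)
for c in chunks:
    print(c)
```

Note that each chunk repeats the last `overlap` words of its predecessor; retrieval can then serve fine-grained (small-window) or coarse-grained (large-window) chunks as the query demands.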

DeepMind discovers that AI large language models can optimize their own prompts DeepMind researchers introduce a new method, Optimization by PROmpting (OPRO), which utilises AI large language models (LLMs) as optimisers, allowing for versatile problem-solving through natural language prompts. #DeepMindResearch #AI #LanguageModels #Optimization #NaturalLanguagePrompts

OpenAI’s new AI image generator pushes the limits in detail and prompt fidelity OpenAI introduces DALL-E 3, an advanced AI image-synthesis model that excels in detail and prompt fidelity, reducing the need for prompt engineering. #OpenAI #DALLE3 #AIImageGenerator #PromptFidelity #Innovation

Agent OS: Design and Architecture This is the architecture and design principles for an operating system for AI agents. This document details the system’s structure, functionalities, and how it integrates with AI technologies, aiming to provide a framework for developing and deploying AI agents in various applications. #AgentOS #AIDesign #Technology #AIIntegration #OpenSource

agent-os This is the GitHub repo for Agent OS (described above). It is an experimental framework for building sophisticated, self-evolving AI agents. It focuses on providing an environment where AI agents can write and execute their own code, interact with the world, and evolve over time. The OS is designed to be robust and adaptable, working with various programming languages and integrating with other AI libraries. The project includes a demo agent, “Jetpack,” demonstrating the system’s capabilities in code generation and task completion. #AgentOS #AIFramework #AutonomousAgents #CodeEvolution #OpenSourceAI

Generative AI and intellectual property — If you put all the world’s knowledge into an AI model and use it to make something new, who owns that and who gets paid? This is a completely new problem that we’ve been arguing about for 500 years. (Ben Evans) This article delves into the complex intersection of generative AI and intellectual property, exploring the challenges and questions arising from AI’s ability to replicate and create content. It examines scenarios like AI mimicking artists’ styles or summarising news, raising legal and ethical concerns about copyright and fair use. The article emphasises the need to redefine our understanding of ownership and creativity in the AI era, considering both the legal framework and the broader implications on artistic and intellectual creation. #GenerativeAI #IntellectualProperty #AIethics #CreativeRights #TechLaw

Prompt engineering for Claude’s long context window This article discusses effective prompt engineering techniques for optimising Claude’s performance over long contexts. It outlines a case study testing two methods to improve Claude’s recall: extracting relevant quotes before answering and supplementing the prompt with examples of correctly answered questions. The study evaluates Claude’s ability to recall information from long documents and suggests strategies for enhancing accuracy, including the use of examples and a scratchpad for pulling relevant quotes. #AI #PromptEngineering #ClaudeAI #MachineLearning #TechInnovation
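The quote-extraction technique described above can be captured in a small prompt builder: the model is asked to copy supporting quotes into a scratchpad before answering. The tag names and wording below are illustrative assumptions, not Anthropic's exact template:

```python
# Sketch of the "pull relevant quotes first" pattern for long-context prompts:
# the model writes supporting quotes into a scratchpad, then answers from them.

def quote_first_prompt(document, question):
    return (
        f"<document>\n{document}\n</document>\n\n"
        "First, find the quotes from the document most relevant to the "
        "question and write them inside <scratchpad> tags. Then answer the "
        "question, citing only those quotes.\n\n"
        f"Question: {question}"
    )

prompt = quote_first_prompt(
    "Annual revenue grew 12% in 2022.",
    "How fast did revenue grow?",
)
print(prompt)
```

The study's other lever, adding worked examples of correctly answered questions, would simply prepend a few question/quote/answer triples in the same format.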

A couple of videos from ElixirConf 2023 focusing on machine learning topics:

ElixirConf 2023 — Eric Iacutone — Learn Stochastic Gradient Descent in 30 Minutes This video features Eric Iacutone teaching Stochastic Gradient Descent in 30 minutes. It provides a concise and accessible introduction to this fundamental machine learning algorithm, explaining its principles, applications, and implementation, with a focus on the Elixir programming language. #MachineLearning #StochasticGradientDescent #ElixirConf #EducationalVideo #TechLearning

ElixirConf 2023 — Charlie Holtz — Building AI Apps with Elixir This video focuses on building AI applications using the Elixir programming language. It offers insights into integrating AI technologies with Elixir, demonstrating practical approaches and techniques for developing AI-enabled applications in this language. #Elixir #AIApplications #Programming #ElixirConf2023 #TechInnovation
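The stochastic gradient descent covered in the first talk fits in a few lines: update the parameters after every single example rather than after the whole dataset. A toy Python illustration of the algorithm (not the talk's Elixir code), fitting y = w·x + b on noise-free data:

```python
# Stochastic gradient descent in miniature: fit y = w*x + b by updating the
# parameters one training example at a time, using the squared-loss gradient.
import random

def sgd_linear(data, lr=0.05, epochs=200, seed=0):
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)              # the "stochastic" part: random example order
        for x, y in data:
            err = (w * x + b) - y      # prediction error on this one example
            w -= lr * err * x          # gradient of 0.5*err**2 w.r.t. w
            b -= lr * err              # gradient of 0.5*err**2 w.r.t. b
    return w, b

# Noise-free data from y = 2x + 1; SGD should recover w ≈ 2, b ≈ 1
data = [(x, 2 * x + 1) for x in [-2, -1, 0, 1, 2]]
w, b = sgd_linear(data)
print(round(w, 2), round(b, 2))
```

With noisy data one would typically also decay the learning rate over epochs so the updates settle down.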

Casually Run Falcon 180B LLM on Apple M2 Ultra! FASTER than nVidia? This video demonstrates the performance of the Falcon 180B Large Language Model running on Apple’s M2 Ultra chip. It compares the efficiency and speed of this setup with Nvidia hardware, providing insights into the capabilities of the M2 Ultra chip in handling advanced AI computations. #AI #AppleM2Ultra #Falcon180B #HardwareComparison #TechReview

A Multi-Level view of LLM Intentionality This post explores the idea of intentionality in Large Language Models (LLMs). It discusses the potential for LLMs to possess multi-level intentions and agency, challenging the notion that they are merely tools for generating plausible responses. The author reflects on the debate surrounding LLMs’ consciousness and intentionality, suggesting that while current models may not possess these attributes, future, more competent LLMs could develop intentions and even consciousness. #AIIntentionality #LLM #AIConsciousness #FutureOfAI #PhilosophyOfAI

Superhuman’s Ultimate 100 AI Tools This page is a curated list of 100 AI tools encompassing a wide range of applications such as image generation, natural language processing, data analysis, and more. This resource seems designed to help individuals and professionals explore and utilise the diverse capabilities of AI in various fields. #AITools #Technology #AIResources #Innovation #Superhuman

US rejects AI copyright for famous state fair-winning Midjourney art The US Copyright Office Review Board rejects copyright protection for an AI-generated artwork, sparking a debate on AI ethics in the art world. #AICopyright #AIArtwork #CopyrightEthics #AIInnovation #ArtControversy

AI Cheat Sheet(s) This GitHub repo provides a collection of essential cheat sheets for deep learning and machine learning researchers. These cheat sheets cover a wide range of topics and tools relevant to the field, including TensorFlow, Keras, Numpy, Scipy, Pandas, Matplotlib, and more, making complex concepts more accessible and easier to reference for practitioners and learners alike. #MachineLearning #DeepLearning #AICheatSheets #DataScience #TechResources

The Best FREE AI Image Generators of 2023 (Tested and Compared) This article discusses the best free AI image generators of 2023, providing a comprehensive comparison and evaluation of their features. It focuses on various AI tools designed to create high-quality images from text prompts, addressing their ease of use, unique capabilities, and pricing structures. The article is particularly useful for marketers, content creators, or designers looking for AI tools to generate stunning visuals efficiently and effectively. #AIImageGenerators #VisualContentCreation #TechTools2023 #CreativeAI #FreeAIResources

Med-PaLM 2 Med-PaLM is a Google Research large language model designed specifically for the medical domain. It aims to provide high-quality answers to medical questions, using advancements in AI to improve healthcare. Med-PaLM 2, the second version, exhibits state-of-the-art performance in medical question answering, reaching a remarkable 86.5% accuracy on the MedQA medical exam benchmark. It’s evaluated and utilised in various applications, from simple tasks to complex workflows in healthcare, offering long-form, accurate responses to consumer health questions. #MedPALM #GoogleAI #MedicalAI #HealthcareInnovation #AIinMedicine

The Coming Wave (Mustafa Suleyman) “The Coming Wave” is a book by Mustafa Suleyman, co-founder of DeepMind and CEO of Inflection AI, focusing on the technological revolution and its societal impact. The book addresses the containment problem of controlling powerful technologies and explores how these advancements, especially in AI, could transform human society. It’s described as both a warning and a guide for navigating the future amidst significant technological changes. #TheComingWave #MustafaSuleyman #TechRevolution #AI #FutureSociety

Artificial Consciousness Remains Impossible (Part 1) Part 1 of the article series “Artificial Consciousness Remains Impossible” on Mind Matters argues that machines cannot achieve consciousness. It distinguishes between intelligence and consciousness, emphasising that machines, while capable of intelligent tasks, lack the subjective experience and intentionality of consciousness. The article critiques the idea that programming and symbol manipulation can lead to conscious machines, using thought experiments like the Chinese Room to illustrate its points. #ArtificialConsciousness #AIdebate #MachineIntelligence #ConsciousnessTheory #TechPhilosophy

Artificial Consciousness Remains Impossible (Part 2) Part 2 of the “Artificial Consciousness Remains Impossible” series continues the argument against the possibility of machine consciousness. It addresses various counterarguments and theories, such as functionalism, behaviourism, and emergentism, demonstrating their inadequacies in explaining consciousness. The article emphasises that consciousness is not a byproduct of complexity or functionality and argues that machines, regardless of their sophistication, cannot achieve true consciousness. #AIvsConsciousness #ArtificialIntelligence #ConsciousnessDebate #HumanMind #TechEthics

You can see the full list of links for 2023 here.

Regards, M@

[ED: If you’d like to sign up for this content as an email, click here to join the mailing list.]
