A new book urges readers to approach AI with scepticism rather than fear
In late nineteenth- and early twentieth-century America, snake oil sellers deceived people with false claims about the miraculous effects of their wares on human health. Their concoctions rarely contained any snake oil, let alone cured anyone. Over time, the phrase outgrew its origins and entered popular culture: today, "snake oil" refers to any sham product, service, idea or activity sold on misleading, inconsequential or fabricated claims.
Borrowing the term, the authors of this book make a timely and accessible case that AI has proliferated into an unchecked force. Although they concede that AI often works to some extent, it typically arrives wrapped in exaggerated claims from the companies selling it in its various forms. This breeds overreliance, such as using AI as a replacement for human expertise rather than as a way to augment it.
The authors present the book as a guide to identifying AI snake oil and AI hype, helping users navigate towards safer and more productive uses of the technology. They set out to acquaint readers with a few widely known types of AI, such as generative AI, predictive AI and artificial general intelligence (AGI), and to offer a more sceptical and critical picture of when and how AI should be used in one's career and life.
Before unravelling the nuances of these types, the authors tackle the question of what is and is not AI, offering the wry definition that "AI is whatever hasn't been done yet." The quip captures how the label shifts with fashion, and how companies marketing AI exploit that elasticity to over-promise on the decisions their products can make for us.
Generative AI, a set of loosely related technologies, has made remarkable progress, yet the authors judge it immature, unreliable and prone to misuse. Its ever-growing popularity has been matched by the hype, fear and misinformation it instils in users. Much of this stems from how these systems are built: they learn statistical patterns from web-scraped training data and then generate text by remixing those patterns.
Taking Amazon as an example, the authors draw attention to the mushrooming of AI-generated books and guides, some of which "can be fatal if a reader trusts the book." Despite lawsuits over content theft and copyright, AI-generated answers continue to present others' content as their own. The authors believe this problem can be addressed by combining technology and law to curb such unethical tendencies.
The book keeps a balanced view, affirming AI's power to make knowledge accessible to consumers while warning against the quick, effort-free money-making its misuse can promise. Those pitfalls can be avoided only through practice grounded in the right ethical mindset. Image generators such as DALL-E, Firefly, Midjourney and Stability AI, to name a few, have created a deluge of entertainment, but fake images, automated scripts and text-to-video tools may also disrupt human-AI relationships, as manifested in the 2023 Hollywood strikes, triggered when actors' likenesses and past labour were exploited by AI software without compensation.
Turning to predictive AI, which they see as doing more harm than good, the authors contrast its precarious record with its hyped advertising. Taking a moral and ethical stance, the book argues that "predicting people's social behaviour is not a solvable technology problem, and determining people's life chances on the basis of inherently faulty predictions will always be morally problematic." The trouble is that predictive AI is sold on overstated claims, which allow self-appointed authorities on technology, including influential companies, individuals and governments, to make decisions about people's lives and careers, from forecasting children's futures to predicting pandemic outbreaks. It is in this area, the authors argue, that most AI snake oil resides.
The book draws extensively on US case studies, field research and examples to ground its propositions and claims. Predictive AI used in hiring, criminal risk assessment and insurance estimates, it turns out, can be highly discriminatory by race, gender and age, because it cannot account for circumstantial and contextual differences. A majority of people tagged "high risk" by predictive AI, for example, may never commit another crime. Similarly, facial recognition, though very useful in some contexts, can be turned against people, as when oppressive governments use it to identify participants in peaceful protests. Many of these apparently convenient applications of predictive AI morph into surreptitious ways of making money, whatever the cost to humanity.
Shunning the apocalyptic vision of artificial general intelligence taking over humanity and out-thinking us in every sphere of life, the authors argue that, although much remains uncertain, our focus should be on devising ways to use the technology smartly and realistically. By recalibrating the fear of imminent superintelligence, the book takes a measured, present-focused approach grounded in proper regulation, policy-making and governance.
The book gives the non-technical reader a solid understanding of AI, along with practical ideas about its appropriate and judicious use. Although it dwells on the unreliable, unrealistic and hype-driven misuse of AI tools, its general message is that reliable results can be achieved when the technology is used ethically, within controlled and well-understood limits.
The authors also introduce an interesting concept, "criti-hype": "when critics of AI accept inflated claims about what AI can do, and then argue that those capabilities are dangerous, unethical or socially harmful," they ironically become hype-creators themselves. By staking out a position between the pro-AI and anti-AI camps, the book carves its own space as a realistic yet philosophically rich inquiry into AI, while delineating our roles and responsibilities as human actors and institutions capable of using it aptly and with a value-driven approach.
AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference
Authors: Arvind Narayanan and Sayash Kapoor
Publisher: Princeton University Press, 2024
Pages: 360
The reviewer is a professor and dean at the School of Liberal Arts, University of Management and Technology, Lahore.