In recent months, debates have emerged suggesting that the capabilities of large language models (LLMs) are overstated and that the AI sector may be inflating into a bubble. Nicholas Carlini, a scientist at Google DeepMind specializing in machine learning and computer security, has published a rebuttal to these criticisms, asserting that the advancements in AI are genuine and beneficial.
Since the release of ChatGPT by OpenAI in late November 2022, the AI field has seen an unprecedented surge in interest and investment. However, as AI technologies have rapidly evolved, concerns about their sustainability and true impact have surfaced. Despite these concerns, Carlini remains confident in the value of LLMs, drawing from his extensive personal experience with the technology.
Carlini’s engagement with LLMs over the past year has led him to conclude that these models are highly effective tools for enhancing productivity. He highlights several practical applications of LLMs that have significantly streamlined his work. For instance, he has used LLMs to build a complete web application with unfamiliar technologies, learn new frameworks, and port code to faster compiled languages such as C or Rust, yielding performance improvements of 10 to 100 times. Additionally, LLMs have been instrumental in simplifying large codebases, automating repetitive tasks, and reducing his reliance on web searches for setup, configuration, and debugging.
Carlini’s uses of LLMs fall into two broad categories: learning new things and automating mundane tasks. He emphasizes that these examples, although not glamorous, demonstrate LLMs’ utility in everyday work. His experience suggests that using LLMs for coding projects alone can cut development time by at least 50%.
Despite Carlini’s defense, skepticism about AI hype persists in various quarters. Julia Angwin, co-founder of The Markup, has criticized claims of AI’s transformative achievements, arguing that some results have been exaggerated by OpenAI and that the time saved by AI-generated drafts is often offset by the time required for revisions.
Software engineering expert Jonathan Xia has also expressed doubts, asserting that generative AI is overhyped. He points out that while LLMs can produce useful code, they often contain errors, and their application in other fields, such as legal research, can lead to inaccuracies or irrelevant results. Xia questions the notion of exponential growth in generative AI capabilities.
The Washington Post has similarly warned of a potential AI bubble, citing the high costs of developing AI models and the lack of immediate returns. Canadian journalist Paris Marx has echoed these concerns, noting a decline in AI company valuations, reduced customer expectations, and the ongoing challenges of monetizing expensive AI technologies.
As the debate continues, Carlini’s defense underscores the practical benefits of LLMs while acknowledging the broader skepticism surrounding the AI industry’s future.