A recent article published via AFP Relaxnews suggests that the rapid advancement of large AI models, once expected to deliver human-level artificial intelligence in the near future, may be slowing. The article highlights the long-held belief in Silicon Valley that pouring ever more computing power and data into models would lead to the emergence of artificial general intelligence (AGI) capable of matching or exceeding human performance. Industry insiders, however, are beginning to acknowledge that large language models (LLMs) do not scale indefinitely as once expected.
Despite significant investment from major tech companies such as OpenAI and Microsoft, performance improvements in AI models are showing signs of plateauing. Experts like Gary Marcus caution against the fantasy that LLMs will evolve into AGI through continued scaling alone. Other industry figures, including Scott Stevenson of Spellbook and Sasha Luccioni of Hugging Face, argue that building ever-larger models without a clear purpose is reaching its limit.
OpenAI CEO Sam Altman maintains that progress toward human-level AI faces no wall, and Dario Amodei of Anthropic remains optimistic about reaching AGI by 2026 or 2027. Even so, OpenAI has delayed the release of its successor to GPT-4 because its capabilities fell short of expectations, shifting focus instead toward making more efficient use of existing capabilities.
Overall, the article points to a shift in strategy within the AI industry: improving reasoning and task-specific capabilities rather than simply adding more data and computing power. It compares the evolution of AI models to human cognitive development, moving toward a more deliberate, purpose-driven approach.