How the AI Landscape Has Shifted Over the Past Year—and Where It Could Go Next


Governments showed a “lack of concrete progress” toward regulating artificial intelligence this year, even as the question of the technology’s safety rocketed up the global agenda, according to the 2023 “State of AI” report, published Thursday.

The field of AI safety “shed its status as the unloved cousin of the AI research world and took center-stage [in 2023] for the first time,” the report said. But amid a lack of global consensus on the way forward for regulation, the developers of cutting-edge AI systems were “making a push to shape norms” by proposing their own regulatory models.

While last year it seemed that open-source efforts were taking the lead in AI, Big Tech reasserted its hold over the sector in 2023, the report said. This year, amid an ongoing shortage of powerful computer chips, the largest tech companies gained leverage both from their existing computing infrastructure and their large capital reserves, as the cash required to train large AI models continues to escalate.

“Last year we saw a lot of people assembling in Discord servers, we saw a lot of open source models, and it didn’t seem like Big Tech companies were doing all that much,” Nathan Benaich, the author of the report, tells TIME. “This year, it looks like a pretty significant snap back in the other direction, with pretty much every public tech company making moves to develop or integrate AI systems into their products. The open source world is still very vibrant and is rapidly trying to catch up with closed-source capabilities, but it doesn’t look immediately obvious how you’d 100% clone GPT-4.”

Now in its sixth year, the State of AI report has become a popular bellwether for the AI industry, pointing out trends and making predictions for the year ahead. This year it was compiled by Benaich, an investor at the firm Air Street Capital. In previous years Ian Hogarth, an investor who now leads the U.K. government’s AI safety taskforce, was a co-author.


OpenAI’s GPT-4 remains the most powerful large language model (LLM) eight months on from its release, the report says, “beating every other LLM on both classic benchmarks and exams designed to evaluate humans.” However, the report points out, comparing cutting-edge AI systems is growing more difficult as they become more powerful and flexible. A “vibes-based” approach to evaluating LLMs is becoming more common in the industry as formal tests—known as benchmarks—become less definitive, the report says.

In 2023, the culture of AI companies openly sharing their state-of-the-art research came to an end, the report says. OpenAI declined to share “any useful information” about the system architecture of GPT-4, according to the report, and Google and Anthropic came to similar decisions about their models. “As the economic stakes and the safety concerns are getting higher (you can choose what to believe), traditionally open companies have embraced a culture of opacity about their most cutting edge research,” the report says.


As it does every year, the report made some predictions for the year ahead. (Five of its nine predictions from last year, including estimates about the scale of investment in AI, turned out to be accurate.) Among the predictions for 2024: 

  • A Hollywood-grade production makes use of generative AI for visual effects.
  • A generative AI media company is investigated for its misuse during the 2024 U.S. election.
  • The GenAI scaling craze sees a group spend more than $1B to train a single large-scale model.
  • We see limited progress on global AI governance beyond high-level voluntary commitments.
  • An AI-generated song breaks into the Billboard Hot 100 Top 10 or the Spotify Top Hits 2024.
  • Problems with enforcement and interpretation mean that the E.U. AI Act does not achieve widespread adoption as a model of AI regulation.

Write to Billy Perrigo at billy.perrigo@time.com