If 2023 was a year of wonder about artificial intelligence, 2024 was the year to try to make that wonder do something useful without breaking the bank.
There was a “shift from releasing models to actually building products,” says Arvind Narayanan, a professor of computer science at Princeton University and co-author of the new book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.”
The first 100 million people to experiment with ChatGPT when it came out two years ago actively sought out the chatbot and found it astonishingly helpful at some tasks, laughably mediocre at others.
Now such generative AI technology is being embedded in an increasing number of technology services, whether we look for it or not – for example through the AI-generated answers in Google search results or new AI techniques in photo editing tools.
“The main thing that was wrong with generative AI last year is that companies released these really powerful models without a concrete way for people to use them,” Narayanan said. “What we’re seeing this year is the gradual buildout of these products that can take advantage of these capabilities and do useful things for people.”
At the same time, since OpenAI released GPT-4 in March 2023 and competitors introduced comparably performing AI models for major languages, these models have not become significantly “larger and better in quality,” and exaggerated expectations that AI was racing every few months toward some sort of better-than-human intelligence were reset, Narayanan said. That also meant that the public discourse has shifted from “Is AI going to kill us?” to treating it as a normal technology, he said.
AI’s sticker shock
On quarterly earnings calls this year, tech executives often faced questions from Wall Street analysts looking for guarantees of future returns from massive spending on AI research and development. Building the AI systems behind generative AI tools like OpenAI’s ChatGPT or Google’s Gemini requires investments in power-hungry computing systems running on powerful and expensive AI chips. Those systems need so much electricity that tech giants announced deals this year to tap nuclear power to help run them.
“We’re talking about hundreds of billions of dollars of capital poured into this technology,” said Goldman Sachs analyst Kash Rangan.
Another analyst at the New York investment bank attracted attention last summer by arguing that AI does not solve the complex problems that would justify its costs. He also wondered whether AI models, even though they are trained on much of the written and visual data produced throughout human history, will ever be able to do what humans do so well. Rangan has a more optimistic view.
“We had this fascination that this technology would be absolutely revolutionary, which hasn’t been the case in the two years since ChatGPT was introduced,” Rangan said. “It’s more expensive than we thought and it’s not as productive as we thought.”
However, Rangan is still optimistic about its potential, saying AI tools are already proving to be “absolutely incrementally more productive” in sales, design and a number of other professions.
AI and your job
Some workers wonder whether AI tools will be used to supplement or replace their work as the technology continues to grow. Technology company Borderless AI uses an AI chatbot from Cohere to draft employment contracts for employees in Turkey or India, without the help of outside lawyers or translators.
Video game performers from the Screen Actors Guild-American Federation of Television and Radio Artists who went on strike in July said they feared AI could reduce or eliminate employment because it could be used to replicate one performance into a number of other movements without their consent. Concerns about how movie studios will use AI helped fuel last year’s film and television strikes by the union, which lasted four months. Some game companies have signed side agreements with the union that codify certain AI protections in order to keep working with actors during the strike.
Musicians and authors have expressed similar concerns about AI scraping their voices and books. But generative AI still can’t create unique work or “completely new things,” says Walid Saad, a professor of electrical and computer engineering and AI expert at Virginia Tech.
“We can train it with more data so it has more information. But having more information doesn’t mean you’re more creative,” he said. “As humans we understand the world around us, right? We understand physics. You understand that if you throw a ball on the ground, it will bounce. AI tools do not currently understand the world.”
Saad pointed to a meme about AI as an example of that shortcoming. When someone prompted an AI engine to create an image of salmon swimming in a river, he said, the AI created a photo of a river with cut-up pieces of salmon found in supermarkets.
“What AI lacks today is human common sense, and I think this is the next step,” he said.
An ‘agentic future’
That kind of reasoning is an important part of the process of making AI tools more useful to consumers, says Vijoy Pandey, senior vice president of Cisco’s innovation and incubation unit, Outshift. AI developers are increasingly pitching the next wave of generative AI chatbots as AI agents that can do more useful things for people.
That could mean you can ask an AI agent an ambiguous question and the model can reason and plan out steps to solve an ambitious problem, Pandey said. A lot of technology, he said, will move in that direction in 2025.
Pandey predicts that AI agents could eventually come together and perform a task in the way that multiple people come together and solve a problem as a team, rather than simply performing tasks as individual AI tools. The AI agents of the future will work as an ensemble, he said.
For example, future Bitcoin software will likely rely on the use of AI software agents, Pandey said. Those agents will each have a specialty, he said, with “agents checking for accuracy, agents checking for safety, agents checking for scale.”
“We are moving toward an agentic future,” he said. “All these agents will be very good at certain skills, but will also have a bit of character or color to them, because that is how we work.”
AI is making gains in medicine
AI tools have also streamlined the medical sector — or in some cases, literally lent it a helping hand. This year’s Nobel Prize in Chemistry, one of two Nobel Prizes awarded for AI-related science, went to Google-led work that could help discover new medicines.
Saad, a professor at Virginia Tech, said AI has contributed to faster diagnostics by giving doctors a quick starting point in determining a patient’s care. AI can’t detect disease, he said, but it can quickly process data and pinpoint potential problem areas that a real doctor can examine. But as in other arenas, there is a risk that falsehoods will be perpetuated.
For example, tech giant OpenAI has touted its AI-powered transcription tool Whisper as having near-human-level robustness and accuracy. But experts have said that Whisper has a major flaw: it is prone to making up chunks of text or even entire sentences.
Cisco’s Pandey said some of the company’s customers working in the pharmaceutical sector have noted that AI has helped bridge the gap between “wet labs,” where people conduct physical experiments and research, and “dry labs” where people analyze data and often use computers for modeling.
When it comes to pharmaceutical development, that collaborative process can take several years, he said — with AI, the process can be reduced to a few days.
“That has been the most dramatic use for me,” Pandey said.