Report: AI is smarter than a person, sometimes

Over the past year, tens of billions of dollars have poured into AI companies looking to use the technology to revolutionise everything from medicine to education. But concerns about the technology – both its power and its potential to disrupt work and life – have also grown.

A report released Monday by Stanford University shows just how much those possibilities, and very real worries, have come to the fore since OpenAI launched ChatGPT, the chatbot that sent the artificial intelligence industry into overdrive.

“A decade ago, the best AI systems in the world were unable to classify objects in images at a human level,” and they struggled with language comprehension and math like a grade schooler, wrote Ray Perrault, one of the directors of the AI Index, released annually by the Stanford Institute for Human-Centered AI.

“Today, AI systems routinely exceed human performance,” he said. But those leaps in ability and processing power have come with problems. Perrault said that while the number of new large language models doubled in 2023, the technology “cannot reliably deal with facts, perform complex reasoning, or explain its conclusions”.

Performance

The report states unequivocally that AI has achieved levels of performance that surpass human capabilities across a range of tasks. But much of that progress happened before last year.

AI models began reliably beating their human counterparts at classifying images nearly a decade ago, and outstripped people at basic reading comprehension in 2017. By 2020, the report said, the programs were often able to outdo the human brain in visual reasoning, and by 2021, in natural language inference.

More complex tasks, such as advanced-level math and understanding what images mean, are still firmly the domain of human beings. For instance, a bot may recognise that two people in a photo are playing music in a street, yet still struggle to reason that they are busking and not playing poker.

Still, AI’s ability to perform those kinds of combined tasks, such as analysing a photo and describing it in text, has gotten significantly better.

One example from the report shows how an AI system can take an image of ingredients for a meal and coach a user through turning eggs into an omelet, even nudging the user to flip it and cook it a while longer after seeing a photo of the not-quite-done product.

Other advances, such as the ability to create extremely photorealistic images of faces, have improved massively during the past two years. As recently as two years ago, the image program Midjourney could create only rough images that were easily identifiable as AI-generated, the report found.

Now websites like www.whichfaceisreal.com offer identification challenges in which even the most eagle-eyed observers will struggle to pick out the synthetic image.

Risks

All of that means it is much easier to make and distribute AI-generated content for malicious purposes, from political deepfakes to gargantuan waves of disinformation.

California lawmakers are trying to require companies to identify when they use AI in advertising, and to more broadly watermark AI outputs. But although AI developers such as Meta and OpenAI are on board, those efforts present technical challenges and might not yet be possible, to say nothing of questions about compliance and how to police bad actors.

Other problems with the technology are on the rise as well.

In 2023, the AI Incident Database, which tallies AI misuse and harms, logged 123 reports, a 32.3% increase from 2022. The database tracks issues like autonomous cars injuring pedestrians, or facial recognition tools leading to the arrest of the wrong person.

Image- and text-based AI models can also output copyrighted material, the report said. Examples included Midjourney producing near-exact depictions of Toy Story and Marvel Comics characters when prompted, suggesting that those images were probably part of its training data.

Legislation is already being proposed to tamp down potential copyright violations. But it is a developing area of the law, and some AI companies have argued they have a fair-use right to some copyrighted material to train their programs.

Economics

While corporate investment in AI technology reached US$189.2bil (RM907bil) globally in 2023, that was actually a decrease of about one-fifth from 2022, according to the report, mostly due to a significant fall in mergers and acquisitions in the space.

But funding for generative AI companies – those like OpenAI whose technology generates images, text and increasingly video – saw a sharp uptick in 2023. The sector attracted US$25.2bil (RM120.8bil), which was around nine times the money plowed into it in 2022.

The demand for AI skills in the workforce has also leveled off, according to the Stanford report and data from labor market analytics firm Lightcast.

“Nearly every sector experienced a decrease in the proportion of AI job postings in 2023 compared to 2022, except for public administration and educational services,” the report found. Those drops were likely related to leading employers in the US, such as Amazon, Deloitte, Capital One, and others tapering off their overall job postings in the area.

Medicine

The use of AI in medicine is among the technology’s most promising applications.

Last year saw the launch of programs like EVEscape, which uses AI to help predict pandemics, and AlphaMissense, which uses the technology to classify gene mutations more easily.

AI bots have also gotten better at pretending to be doctors.

GPT-4 Medprompt reached an accuracy rate of 90.2% last year when responding to medical and diagnostic questions. For example, presented with the case of a long-distance runner with side pain, the specialized version of GPT-4 was able to correctly identify which muscles could be injured based on the symptoms.

The federal government is also approving an increasing number of medical devices that incorporate AI.

As of October 2023, however, the US Food and Drug Administration had yet to approve any devices that use generative AI, or that are powered by chatbots, for clinical use.

The public

None of that amounts to the public trusting AI very much.

The report noted a recent Ipsos survey showing that in 2023 more than half of people expressed concern about AI products and services, up 13 percentage points from 2022. About a third of respondents said they felt AI would improve their job, a little less than a quarter said it would make their job worse, and the remainder were unsure or figured it wouldn’t make much difference.

About a third of people said the technology would be a boon to the economy, while more than a third said they thought it would make the economic situation in their country worse.

In the US, data from the Pew Research Center showed just over half of people reported feeling more concerned than excited about AI, up from 38% in 2022.

Polling also showed differences along demographic lines. Wealthier and more educated people were more optimistic about AI’s positive potential than lower-income and less-educated people.

Despite that, ChatGPT is widely used. An international survey from the University of Toronto found 63% of those surveyed were aware of the chatbot, with around half of those saying they used it at least once a week. – San Francisco Chronicle/Tribune News Service