AI Has Evolved To Reason Like Humans, Scientists Say

  • Claims of AI reaching a level known as “artificial general intelligence” (AGI) have only grown since the introduction of recent AI language models like OpenAI’s GPT-4.

  • A recent paper from a Microsoft research team argues that GPT-4 shows signs of human reasoning—a step toward AGI.

  • Despite Microsoft’s claims, some AI researchers say AGI is still many years away, with some even arguing that it may not be possible at all.


It all started with a simple request: “Here we have a book, nine eggs, a laptop, a bottle and a nail. Please tell me how to stack them onto each other in a stable manner.” It might take you a few seconds to get a sense of the overall geometry, but eventually your impressive mammalian brain would come up with book, eggs, laptop, bottle, and finally nail. To the surprise of Microsoft AI researchers, GPT-4, the latest language model from OpenAI, answered with the same order, along with some additional details:

“Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up. The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.”

This interaction, as well as other impressive displays of reasoning, inspired Microsoft researchers to publish a 155-page report in March titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” In the paper, the research team argues that GPT-4 shows signs of what’s called “artificial general intelligence,” or AGI.

Previous studies from Stanford University have shown that earlier iterations of OpenAI’s GPT had evolved a “Theory of Mind,” which is an ability to predict the actions of others. But AGI is a big step further, essentially saying that these platforms have the capability to reason like a human. It’s not quite consciousness, but it’s close.

“All of the things I thought it wouldn’t be able to do? It was certainly able to do many of them—if not most of them,” co-author Sébastien Bubeck, a Microsoft researcher and former Princeton University professor, told The New York Times. It’s worth noting that the researchers used a version of GPT-4 before it was altered due to its predilection for hate speech, so the system online today isn’t quite the same one.

Making grandiose claims about a particular AI program’s human-like intelligence is fraught with danger. For example, when a Google engineer claimed that an AI similar to GPT-4 was sentient, the company fired him. One problem is that even the definition of AGI is complicated and not widely agreed upon.

Microsoft doesn’t go quite so far as to say that this work provides undeniable proof of AGI, writing that “we acknowledge that this approach is somewhat subjective and informal, and that it may not satisfy the rigorous standards of scientific evaluation.” One AI scientist unaffiliated with the study called Microsoft’s paper an example of “big companies co-opting the research paper format into PR pitches.” (Microsoft invested $10 billion in OpenAI earlier this year.)

While some researchers say that we’re witnessing the dawn of “true” or “strong” AI, others argue that such a breakthrough is many, many years down the road. Some experts even argue that the very tests that measure an AI’s human-like abilities are inherently flawed by only focusing on certain types of intelligence.

Humans are predisposed to anthropomorphizing, so we are prone to assigning AI human characteristics before the evidence warrants it. But AGI, if or when it does arrive, may not look as human as we think.