Microsoft Study: GPT-4 Nearing Artificial General Intelligence

A March research paper from Microsoft has reopened debate over whether artificial intelligence is inching toward human reasoning, as the industry grapples with how an AI system can assimilate training data in a way that allows it to generate answers and propose ideas that weren't programmed into it. Asked for a stable way to stack a book, nine eggs, a laptop, a bottle and a nail, the Microsoft AI generated a response researchers say hinted at artificial general intelligence, or AGI, a term for an as yet theoretical type of machine learning that can duplicate human reasoning.

The resulting 155-page research paper, “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” stirred what The New York Times calls “one of the tech world’s testiest debates: Is the industry building something akin to human intelligence? Or are some of the industry’s brightest minds letting their imaginations get the best of them?”

Microsoft research lead Peter Lee told NYT he “started off being very skeptical,” only to see his feelings evolve into “frustration, annoyance, maybe even fear.”

In the experiment, Microsoft researchers explored the differences between OpenAI’s GPT-3 and the newer GPT-4, which powers the free Bing Chat as well as ChatGPT Plus, a service that requires a $20-per-month subscription.

While the paper analyzed capabilities including computer coding, complex math and Shakespeare-style dialogue, it was the basic reasoning exercise of stacking the objects that impressed the researchers, Business Insider writes. “GPT-3 got a bit confused,” suggesting the eggs could balance on top of the nail, but “its upgraded successor had an answer that actually startled the researchers,” directing the team to “arrange the eggs in a three-by-three grid on top of the book, so the laptop and the rest of the objects could balance on it.”

That GPT-4 could provide a solution that drew on an understanding of the physical world demonstrated something approaching AGI, the Microsoft team suggests.

In 2022, Google fired a researcher who said one of its AI systems had achieved sentience, “a step beyond what Microsoft has claimed,” NYT writes, noting “a sentient system would not just be intelligent. It would be able to sense or feel what is happening in the world around it.”

Still, Microsoft considers its discovery unusual enough that it has reorganized its research labs to further explore the idea, with the paper’s lead author, Sébastien Bubeck, steering a dedicated team.
