CES: Generative AI Is Having Its ‘War of the Worlds’ Moment

ChatGPT came too late (end of November) to make a significant impact on CES this year, but the cacophony of opinions about the generative AI model definitely made its way to Vegas. The timing was perfect. Just as the crypto crash left the hype industry paralyzed, OpenAI launched ChatGPT in what now feels like a nerdy and frustrating tech version of the Rolling Stones’ Altamont concert in ’69 (with computer scientists as the Hells Angels). Make no mistake: this is a landmark achievement in machine learning — perhaps the single greatest since the 2006 work by Hinton, Osindero, Teh and Salakhutdinov on deep belief networks, which kicked off the modern deep learning era. However, it’s critical that industries, including M&E, distinguish between hype and reality.

ChatGPT is a version of OpenAI’s latest large language model (LLM), GPT-3.5, optimized for dialogue with humans. As far as OpenAI has been willing to share publicly (quite a bit more than previously, actually), ChatGPT uses training data similar to GPT-3’s (45TB of text) and has the same initial number of parameters (175 billion). That’s around a third of the size of its two closest competitors, Megatron-Turing NLG (Microsoft and NVIDIA, 530 billion) and PaLM (Google, 540 billion), and more than 100x the size of GPT-2 (1.5 billion).

Like all large language models, ChatGPT does one critical thing very well: using its gigantic training set to estimate the probability that one word or part of a word (a token, in NLP parlance) will come next. Conceptually, this is the same next-word prediction that traditional n-gram language models have done for decades, but on T-Rex steroids.
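For the technically curious, here is a minimal toy sketch of that idea in Python: estimating which token is most likely to come next from simple counts over a corpus. The corpus and function names are illustrative assumptions; ChatGPT computes these probabilities with a massive transformer network rather than a counting table, but the underlying objective, predicting the next token, is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for a real training set (illustrative only).
corpus = "the model predicts the next token given the previous token".split()

# Count how often each token follows each other token (a simple bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token_probs(prev):
    """Return P(next token | previous token) estimated from raw counts."""
    counts = following[prev]
    total = sum(counts.values())
    return {token: count / total for token, count in counts.items()}

print(next_token_probs("the"))
# {'model': 0.33..., 'next': 0.33..., 'previous': 0.33...}
```

Scale the corpus up to hundreds of billions of tokens and swap the counting table for 175 billion learned parameters, and you have the core of what a large language model does.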

What makes ChatGPT unique — and powerful — is the amount of manual (human) fine-tuning that this probabilistic model went through. OpenAI used a method called Reinforcement Learning from Human Feedback (RLHF). In simplistic terms, RLHF is a method whereby human analysts prompt GPT-3 and manually “reward” (or not) the responses based on how “aligned” they are with the human’s intent (like teaching a puppy how to play soccer with a pocket full of treats).

Another human team then ranks the model’s responses from most to least useful, creating a new set of labels that is used to train a reward model; that reward model, in turn, guides further fine-tuning of the language model. Rinse and repeat.
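To make that ranking step concrete, here is a minimal sketch (in plain Python, with made-up scores) of the pairwise ranking loss commonly used to turn human preferences into a reward model: for each pair of responses to the same prompt, the reward model is penalized unless it scores the human-preferred response above the rejected one. The scores and function names are illustrative assumptions, not OpenAI’s actual code.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pairwise_reward_loss(score_preferred, score_rejected):
    """Pairwise ranking loss: small when the human-preferred response
    is scored higher than the rejected one, large otherwise."""
    return -math.log(sigmoid(score_preferred - score_rejected))

# Scores a reward model might assign to two candidate responses.
print(pairwise_reward_loss(2.1, 0.3))  # ~0.15: model agrees with the human ranking
print(pairwise_reward_loss(0.3, 2.1))  # ~1.95: model disagrees, so the loss is high
```

Minimizing this loss across many ranked pairs produces a reward model, which then steers the reinforcement-learning fine-tuning of the language model itself.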

This human-in-the-loop ranking greatly improves the model’s “alignment,” which in machine learning means that its output is both accurate and relevant to the prompt, an area where GPT-3 was seriously lacking. It also allows for a much smaller model: the RLHF-trained InstructGPT, a close precursor to ChatGPT, was preferred by human evaluators over the full 175-billion-parameter GPT-3 while using only 1.3 billion parameters. Finally, this manual scoring does a much better job of weeding out toxicity in the output text.

The result is impressive. ChatGPT can produce coherent and often insightful text in many specific styles, and perform many natural language tasks at a human level. When it’s good (often), the machine’s output is indistinguishable from human writing, which is probably why the education industry is (justifiably) freaking out. When it’s bad (also often), it betrays fundamental flaws in OpenAI’s methodology: this is hyperscale parroting, not intelligence per se. But it works, which is why it’s garnering so much attention.

Just like its parent, GPT-3.5, ChatGPT is built from a family of sub-models, each tuned to handle specific tasks and types of human knowledge. This approach greatly improves the quality of the output for those use cases and domains. But it also means that many other domains, tasks and types of knowledge, on which the model hasn’t been trained, are left out.

The lack of proper reasoning modules means that ChatGPT fails at simple arithmetic and logic (it will happily tell you that “10 pounds of feathers are lighter than 10 pounds of steel”). It often answers questions in ways that are grammatically correct but that make no sense in the real world (fluent, but poorly aligned). See Gary Marcus’ recent blog post for examples.

This is because the model has been fine-tuned on tens of thousands of human contexts, but far from all of them. So when a prompter hits the model with something it’s been trained to “understand,” it can pastiche at a level never seen before. But in unknown territory, it fails spectacularly. More importantly, ChatGPT is still an extension of the same broad methodology, deep learning, which covers only one aspect of intelligence: learning from training data.

So yes, it’s been fun seeing the techno-optimists and techno-pessimists agree on something. But behind the attention-grabbing hyperbole about ChatGPT lies a fundamental reality of the tech industry: keep your eyes on the builders, not the commentators. There’s no tech without products. And there are significantly fewer high-tech products on our tables than in our podcasts.

In many ways, the past six weeks of tech press coverage have felt like an extended version of Orson Welles’ 1938 “War of the Worlds” radio broadcast, minus the street mobs. No, ChatGPT is not artificial general intelligence (not even close). No, this isn’t the end of Google (they’re doing well with AI, thank you). No, despite what we heard at CES, 90 percent of content won’t be machine-generated within two years. No, it’s not the end of writing.

As is often the case, we find the truth by following the money. And in the attention economy there’s big money in hysteria for tech commentators and their book agents, not to mention OpenAI’s lucky stockholders.

Here is what’s most likely to happen:

  1. A handful of creators will leverage Generative AI models to take their workflows to the next level. Our most precious commodity is time. The creators who can integrate ChatGPT into their workflows to laser-focus their time on the core craft of exploring higher-level human narratives will win the era of augmented content creation.
  2. Some cool apps will emerge. Most of our short-form communication consists of commoditized, pro-forma replies that we produce mechanically for the performance of politeness. This will be automated, thankfully. And ultimately, written politeness will disappear altogether once it’s clear that machines, not people, have been thanking you for years.
  3. The cat-and-mouse game between digital marketers and content distributors will escalate into nuclear warfare. We’ll see a dramatic amplification of the already overwhelming abundance of high-calorie, low-nutrient digital content (bad tweets, biased product reviews, boring Instagram posts) designed to game search and content-recommendation platforms. ChatGPT will make this problem exponentially worse. And yes, some clever junk-content publishers will make a lot of money for a little while. And yes, search engines will need to expend enormous effort to combat them.
  4. Education will get seriously disrupted. By automating regurgitation, ChatGPT will force the education sector into the modern age of teaching kids how to digest oceans of public information to solve complex, systems-level problems.
  5. Curation will continue to rule. And so will Hollywood. In an ocean of content, the value lies in curation and personalization. Time being the world’s most precious commodity, exhausted digital denizens will pay a high premium for a service that delivers exactly the content they need or that inspires them. And nobody is better at sorting this signal from the noise than the giant talent-filtering algorithm called Hollywood, which also knows something Silicon Valley keeps ignoring: people actually hate technology.

Editor’s Note: This post is an editorial from Yves Bergquist, ETC’s director of AI and blockchain.
