According to Dr. Gary Marcus, the end of the generative AI boom is near. He makes that case in an article in Dezeen and in the blog post it is based on. From my perspective, I don't see it coming any time soon. Is generative AI overhyped? Sure, but that's the case with most impressive new technologies. Have we fully explored its potential? Far from it!
Dr. Marcus argues that the generative AI business is overvalued because all you can do with these models is code completion and "writing prose" (or at least we haven't seen other uses). To me, that says more about a lack of imagination than about any shortcoming of ChatGPT, of Large Language Models (LLMs, the family of AI tools that ChatGPT belongs to) in general, or of the AI business as a whole.
Discovering use cases and making applications with generative AI is precisely what we're doing at SUMSUM. Some projects we're involved in show there's more to LLMs than the "out-of-the-box" text generation the blog refers to. Here are some examples and a particularly interesting case developed at NVIDIA.
For Acco, a publisher of courseware for the University of Leuven, we summarized courseware so students can study more efficiently, for example by being quizzed by a chatbot. This seems like an obvious thing for an LLM to do, and in a way, it is. Still, it's different from just producing copy. Similar applications are possible for legal professionals, journalists, researchers, analysts, historians, and anyone else who has to process large amounts of text: the LLM helps them sift through it.
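To make the summarize-then-quiz idea concrete, here is a minimal sketch of such a pipeline. The `ask_llm` function is a hypothetical stand-in for whatever LLM API you use; it is stubbed here so the example runs on its own.

```python
# Sketch of a summarize-then-quiz pipeline over courseware chapters.
# `ask_llm` is a placeholder: a real system would call an LLM API here.

def ask_llm(prompt: str) -> str:
    # Stub response so the example is runnable without an API key.
    return f"[LLM answer to: {prompt[:40]}...]"

def summarize_chapters(chapters: list[str]) -> list[str]:
    """Summarize each chapter separately to stay within the context window."""
    return [ask_llm(f"Summarize this chapter for a student:\n\n{c}")
            for c in chapters]

def quiz_question(summary: str) -> str:
    """Turn a chapter summary into one exam-style question for the chatbot."""
    return ask_llm(f"Write one exam-style question about:\n\n{summary}")

chapters = ["Chapter 1 text ...", "Chapter 2 text ..."]
summaries = summarize_chapters(chapters)
questions = [quiz_question(s) for s in summaries]
print(questions[0])
```

Summarizing per chapter before quizzing is a design choice: it keeps each LLM call small and lets the chatbot quiz on one chapter at a time.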
In another project, a chatbot based on an LLM generates answers using in-house calculation tools, or looks up conditions and prerequisites in the personnel regulations and HR decisions. Ask the chatbot when you can retire, and it will feed the parameters you provide into the retirement-age calculator the HR staff uses. Or ask it about the conditions for promotion, and it will look up what's written across several documents and give you a summary.
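A rough sketch of that tool-use pattern, under stated assumptions: the tool names, the toy calculator, and the keyword router below are all illustrative stand-ins. A real chatbot would let the LLM pick the tool and extract the parameters, for example via the function-calling feature of its API.

```python
# Illustrative tool use: route a question either to an in-house
# calculator or to a lookup over HR documents. Everything here is a
# stand-in; a real system would use LLM function calling instead of
# the keyword-based router.

def retirement_calculator(birth_year: int) -> int:
    # Placeholder for the real in-house tool (toy rule: age 67).
    return birth_year + 67

def lookup_regulations(topic: str) -> str:
    # Placeholder for retrieval over personnel regulations.
    return f"Summary of the rules on {topic} (from the HR documents)."

TOOLS = {
    "calculate_retirement": retirement_calculator,
    "lookup": lookup_regulations,
}

def route(question: str) -> str:
    # Stub router: in practice the LLM chooses the tool and fills in
    # the parameters extracted from the conversation.
    if "retire" in question.lower():
        year = TOOLS["calculate_retirement"](1980)  # parameter from the user
        return f"You can retire in {year}."
    return TOOLS["lookup"]("promotion conditions")

print(route("When can I retire?"))
```

The key point is the split: the LLM handles language and tool selection, while the trusted in-house tools do the actual calculation and lookup.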
You may have heard that LLMs are good at generating code but always need a developer to keep oversight. That isn't necessarily true: an LLM can be self-correcting. Give it feedback on its mistakes, and it will fix them. That's how you can ask an LLM to take a dataset (from Eurostat, for instance) and generate a graph: it outputs code, which you can then run to render the graph.
The team at NVIDIA managed to get an LLM to play Minecraft. It does this in a few steps: the model observes the state of the game, proposes a next goal, writes code to pursue that goal, and refines the code based on feedback from the game until the goal is reached.
This is known as a reasoning engine: you get an AI system to reason through a problem and devise a solution. Of course, this is not yet the AGI that many profess is coming very soon (a claim the blog post criticizes), but it is getting mighty close.
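The "few steps" behind such a reasoning engine amount to a plan-act-observe loop. A minimal sketch, with the caveat that the tiny world and the hard-coded planner below merely stand in for the game environment and the LLM:

```python
# Illustrative plan-act-observe loop in the spirit of the Minecraft
# agent. The world state and the "planner" are stubs so the loop runs.

def propose_goal(state: dict) -> str:
    # Stand-in for the LLM's reasoning step: pick the next goal.
    return "craft_pickaxe" if state["wood"] >= 3 else "gather_wood"

def act(state: dict, goal: str) -> dict:
    # Stand-in for executing LLM-generated code in the environment.
    if goal == "gather_wood":
        state["wood"] += 1
    elif goal == "craft_pickaxe" and state["wood"] >= 3:
        state["wood"] -= 3
        state["pickaxe"] = True
    return state

state = {"wood": 0, "pickaxe": False}
steps = []
while not state["pickaxe"]:
    goal = propose_goal(state)  # reason about the next step
    state = act(state, goal)    # execute and observe the result
    steps.append(goal)

print(steps)  # ['gather_wood', 'gather_wood', 'gather_wood', 'craft_pickaxe']
```

The loop is the point: each iteration the system looks at where it is, decides what to do next, and acts, which is exactly the behavior people mean by "reasoning through a problem".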
A reasoning engine like this, integrated into a bigger system, opens the door to many other uses.
All of this shows that LLMs can do more than write code or marketing copy. They can be used for summarizing, for creating illustrations and graphs, for looking up information, and for giving a chatbot interface to calculations or plan execution. This is why, from my point of view, the end of the generative AI boom is not yet in sight.
If you'd like some help coming up with more ideas, testing them, and developing them, be sure to give us a call.
PS: I did ask ChatGPT to proofread this article and give me some feedback on how to improve it. So, maybe the people who think that ChatGPT, in its current state, can do nothing for them are way better writers than I am.