Wednesday, May 29, 2024

Five Capabilities of GenAI

An introduction to what Generative AI can and cannot accomplish for business executives

It’s hard to believe that not quite a year has passed since ChatGPT’s launch, yet in that time we have seen Generative AI (GenAI) take the world by storm. From large language models (LLMs) to diffusion models such as Stable Diffusion for image generation, what this new technology can do is remarkable. A friend described it to me as the first time AI has felt tangible to them, as if what we once only dreamed about in science fiction has become reality.

Naturally, this has prompted business leaders to wonder what GenAI can and cannot do to transform their business processes. There are certainly many impressive things you can do with GenAI, but there are also misconceptions floating around that business leaders should be cautious about. The focus of this post is to share some of the core things GenAI can do while also tempering expectations about what it cannot.

Can #1: GenAI can summarize a lot of information.

The ability to use large language models (LLMs) in particular to reduce a large volume of information into something much more consumable is perhaps the most common use case I’m hearing across all industries. GenAI can be used, for instance, to condense a meeting’s transcribed discussion into a few important bullet points. An LLM can likewise extract the most important content from a lengthy legal document. Although you should always verify that the LLM’s output is accurate, summarization like this can save a great deal of time in a variety of business settings, and I firmly believe it will continue to catch on across a growing number of businesses.

Can’t #1: GenAI can never know anything for certain.

One of the biggest myths about LLMs is that they are intelligent. In reality, LLMs are word prediction machines; they are so fascinatingly accurate that it can seem as though they are simulating human consciousness. But because LLMs operate on probabilities between words, their results can never be completely guaranteed.
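To make the “word prediction machine” idea concrete, here is a deliberately tiny sketch in Python. This is not how a real LLM is built (those use neural networks trained on billions of documents), but it illustrates the same basic mechanism: count which words tend to follow which, then sample the next word by probability. The output is likely, never guaranteed.

```python
import random
from collections import Counter, defaultdict

# A toy "corpus" — real models learn from billions of documents.
corpus = "the model predicts the next word the model samples a word".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# After "the", the toy corpus has seen "model" twice and "next" once,
# so the prediction is probabilistic: usually "model", sometimes "next".
print(predict_next("the"))
```

Because the prediction is a weighted draw rather than a lookup of a known fact, running this twice can give different answers, which is the essence of why an LLM’s output can never be fully guaranteed.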
