
By Francis Cianfrocca

This post was written in its entirety by a biological human. No GPUs were harmed during its production.

The Next Big Thing™ is here, complete with hype and FOMO engines running at full throttle. This year’s model is called “agentic AI.”

If you’ve been through a few of these hype-cycles in your time, you know enough to be wary. You’ll ask yourself three questions: Is this good for me (hype)? Is it bad for me (FOMO)? And most of all, Is it real?

This post is the first of a three-part series that will attempt to answer these questions, in a non-tendentious and vendor-neutral way. We’ll start by defining some terms and discussing the potential of agentic AI (AAI). The next post will be more technical and lay out the anatomy of “autonomous AI agents” (AAIA) and agentic “orchestrators.” In the third post we’ll explore how and why you and your organization can approach AAIA.

Stay tuned for a future post detailing the follow-on concepts of Genomic AI and Ontologic Fabrics.

LLMs have revolutionized content generation. 2024 was the “breakout year,” in which many people (sometimes grudgingly) accepted that LLMs can generate perfectly usable texts and images for a great many purposes, at a tiny fraction of the cost and effort.

The astonishing array of “emergent” (unexpected) capabilities of LLMs arose essentially from two things.

First, it turned out that the syntax (or grammar) of human languages encodes enough logic to approximate the apprehension of meaning. This was long suspected by linguists, who have been intrigued for decades by the seemingly hard-wired robustness of syntax in human languages. (Even a three-year-old child “knows” enough grammar to laugh at you if you swap the order of words in an English sentence.)

But things really caught fire when enough compute-power became available to run the new models against extremely large collections of text. (Large collections of images had already been cracked by the advent of deep learning, from which language “transformers” like LLMs are derived.)

As if by magic, the largest LLMs suddenly could generate language that is not only syntactically correct, but also semantically reasonable. (By my subjective reckoning, this emerged sometime in 2023.)

And in the latest step, LLMs now appear able to execute chains of reasoning analogous to the processes by which humans make decisions. They can automatically search out additional data inputs at each step of a logical “thought” process.
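The loop described above, reasoning step by step and fetching additional data along the way, can be sketched in miniature. This is a toy illustration only: the "LLM" here is a hard-coded stub, and the tool and its contents are invented for the example; a real agent would replace the stub with a model call.

```python
# Hypothetical data the agent can "search out" at each step.
TOOLS = {
    "lookup": {
        "capital of France": "Paris",
        "population of Paris": "about 2.1 million",
    }.get,
}

def llm_decide(scratchpad):
    """Stub for the LLM: inspect the scratchpad and pick the next step.
    A real agent would make a model call here instead."""
    if "Paris" not in scratchpad:
        return ("lookup", "capital of France")
    if "2.1 million" not in scratchpad:
        return ("lookup", "population of Paris")
    return ("final", "The capital is Paris, population about 2.1 million.")

def run_agent(question, max_steps=5):
    scratchpad = question
    for _ in range(max_steps):
        action, arg = llm_decide(scratchpad)
        if action == "final":
            return arg  # the chain of "thought" has reached an answer
        # Execute the chosen tool and feed the observation back in.
        observation = TOOLS[action](arg) or "no result"
        scratchpad += f"\n{action}({arg}) -> {observation}"
    return "gave up"

print(run_agent("What is the capital of France, and how big is it?"))
```

Each pass through the loop is one "thought": decide, act, observe, and fold the observation back into the context for the next decision.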

This is a major advance. If you’ve worked with the latest models as I have, you know the results can be uncanny.

But now take the next logical step: if AIs can make decisions like humans do, then let’s let them make decisions. In a very small nutshell, that’s what Agentic AI is all about.

Does this make you uncomfortable? It should. How do we know that agentic AIs will make the decisions we want them to?

Well, how do you know that humans will make the decisions you want them to? Because along with decision-making authority comes responsibility. This works fairly well in humans, because we can audit their decisions and hold them accountable. (Politicians, of course, are the exception that proves the rule.)

How will this work for AI? That question doesn’t yet have a clear answer.

But I can tell you that while we’re debating this question, one or more large organizations will entrust substantial business and/or technical processes to partial or complete control by agentic AIs. And the results will be successful, perhaps shockingly so. There will be failures along the way, but that’s part of technological progress.

The success stories will show transformative reductions in resource costs and error rates, along with enhanced accuracy. I can’t tell you which year it will happen, but you will soon enough be hearing that, as a business or technical leader, you MUST adopt Agentic AI, or be left behind your competitors.

And by then, it won’t be FOMO. It will be real.

So it’s important for you to start understanding exactly how Agentic AI is put together, before you start listening to people who want to sell it to you. In our next post, we’ll dive into the anatomy of autonomous AI agents, how they’re constructed, and how they’re managed.