My Perspective on AI
Note: No AI was used to write this blog post.
The general public does not need yet another piece of content related to AI. Depending on your social bubble, it can feel like everyone is talking about AI and that every tech company is shoehorning it into every product regardless of its suitability. Even the Pope is talking about AI. And that is why this is not targeted at the general public. This is for the people in my life, people who know me. It is for people who want an informed opinion from someone they know.
This is going to be my overview of AI as it is right now at the beginning of 2026. There is a lot more I would like to say, such as pastors using AI for help with sermons or teens using AI for social advice. But I’ll have to get to those topics later.
What is AI?
I am about to make a major oversimplification of how AI works and gloss over a lot of sophisticated math. Even though I’m not going to touch the linear algebra or calculus at the core of generative AI, it is still possible to suck some of the magic out of our perceptions. Also, I’m going to stick to text models right now and maybe address audiovisual models later.
A few key terms
There is a specialized vocabulary related to AI that can be confusing. Not all things called “AI” are the same or even very similar. So, let’s define a few terms and concepts to help with distinctions and to make it easier to speak with a little more precision.
“Model”
AI as you probably know it is a thing, and that thing is a model. It is the final product of “training.” In any AI chat, you can often choose between different models.
“Training”
Models are created by showing them a pattern, having them predict the next thing in the pattern, and then telling them whether they were right or wrong. The next “thing” could be a weather forecast given a week’s worth of weather data. The next “thing” could be a prediction of whether the stock market goes up or down based on all data from the past quarter. Or, the next “thing” could be a word given most of a sentence. Each of these requires a different model. Models that predict the next word get their own name, which we’ll come to in a moment.
For example, during training the model is presented with, “The cat sat on the…” If it answers “pizza,” the trainers say, “No, that’s wrong!” If it says “mat” or “floor,” then the trainers say, “Good guess!”
Or for a non-language model, the trainers might give it an actual historical weather pattern, e.g., “Here is the barometric pressure, the wind direction, the time of year, and satellite cloud images. Is it more or less likely to rain?” The trainers use real historical data and continue training until the model accurately predicts outcomes that have already come to pass.
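To make that guess-and-check loop a little more concrete, here is a minimal sketch in Python. It is emphatically not how real training works (real models adjust billions of numbers using calculus rather than keeping simple tallies), but the shape of the loop is the same: show the model a context, let it guess what comes next, compare the guess to the real answer, and update.

```python
# A toy "model" that learns which word tends to follow another, purely by tallying.
# Real LLM training adjusts billions of numeric weights instead of counts,
# but the guess -> check -> update loop has the same shape.
from collections import defaultdict

training_sentences = [
    "the cat sat on the mat",
    "the cat sat on the rug",
    "the cat sat on the floor",
]

# The "model": for each word, how often has each other word followed it so far?
counts = defaultdict(lambda: defaultdict(int))

def guess_next(word):
    """Guess the most frequently seen follower of `word`, or None if we've seen nothing yet."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

correct = 0
total = 0
for sentence in training_sentences:
    words = sentence.split()
    for current, actual_next in zip(words, words[1:]):
        total += 1
        if guess_next(current) == actual_next:  # "Good guess!"
            correct += 1
        # Right or wrong, record what really came next so future guesses improve.
        counts[current][actual_next] += 1

print(f"{correct}/{total} guesses were right during training")
print(guess_next("the"))  # "cat" -- it followed "the" more often than "mat", "rug", or "floor"
print(guess_next("on"))   # "the"
```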
“Large Language Model” (LLM)
The patterns fed to an LLM are basically all of the human-written text that companies can get their hands on (whether it was obtained legally or not). Books, articles, blogs, social media comments, video transcripts, newspaper columns: these are all used to provide patterns. An LLM answers the question, “Given a sequence of words, what is the next most likely word?”
There is always more than one answer, which is why LLMs are not deterministic. They are probabilistic. They provide a probable completion of some given text.
For example: “The cat sat on the…”
- pizza 0.001% likely
- mat 20% likely
- ground 20.1% likely
- floor 20.1% likely
- chair 20.0% likely
The probability that any one word could be next depends on the text the model saw during training. For example, if the only sentence it saw during training was “the cat sat on the mat,” then it would answer “the cat sat on the…” with “mat” every time, because “mat” is the next word in 100% of the training data.
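Here is a minimal sketch of where those percentages conceptually come from. A real LLM computes its probabilities with an enormous neural network rather than by scanning for phrases, but counting in a toy corpus shows the same idea, including why a one-sentence training set hands “mat” 100%.

```python
# Toy next-word probabilities by counting: "how often did each word follow this context?"
# Real LLMs get these numbers from a neural network, not a literal phrase search.
import random
from collections import Counter

def next_word_probabilities(training_text, context):
    """Count what follows `context` in the training text and convert counts to probabilities."""
    words = training_text.split()
    context_words = context.split()
    n = len(context_words)
    followers = Counter(
        words[i + n]
        for i in range(len(words) - n)
        if words[i:i + n] == context_words
    )
    total = sum(followers.values())
    return {word: count / total for word, count in followers.items()}

# With a single training sentence, "mat" is the next word 100% of the time.
print(next_word_probabilities("the cat sat on the mat", "sat on the"))
# {'mat': 1.0}

# With more varied training text, the probabilities spread out.
corpus = "the cat sat on the mat . the cat sat on the floor . the cat sat on the mat"
probs = next_word_probabilities(corpus, "sat on the")
print(probs)  # roughly {'mat': 0.667, 'floor': 0.333}

# An LLM does not always pick the single most likely word; it samples from the distribution.
print(random.choices(list(probs), weights=list(probs.values()))[0])  # usually "mat"
```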
What most people call “AI,” such as ChatGPT or Google’s “AI Overview,” is an LLM.
So, it isn’t too far off to call AI a “fancy autocomplete” like you would see on your phone keyboard. Provided, that is, that you acknowledge the word “fancy” is doing a lot of work in this definition.
LLMs do not think, they do not know, and they are not sentient. They are a ginormous collection of probabilities, and all they do is give you the next most likely word. After an LLM gives you the next most likely word, it simply runs again and provides the next most likely word. And then again and again and again until it has given you potentially hundreds of thousands of words.
Now, what makes AI a transformative technology is that it is not just used for adding the last word to a sentence. It is able to take an entire book as input and then give you the next most probable word. And it turns out that when a probabilistic model is powerful enough to do that, the appearance of reasoning seems to “emerge.” Next-word prediction can begin to look like real thinking when the model can take in hundreds of thousands of words and still offer the most probable next word. Change any of those preceding words and the final autocompleted word might be different.
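To picture that run-it-again-and-again loop, here is a tiny sketch. The fake_next_word function below is not a real model (it is just a hard-coded lookup invented for illustration); the loop around it is the point. Every new word is produced by handing the entire text so far back to the predictor.

```python
# The autoregressive loop: predict one word, append it, and feed everything back in.
def fake_next_word(text):
    """Stand-in for a real model: a hard-coded lookup instead of a trained network."""
    canned = {
        "The cat sat on": "the",
        "The cat sat on the": "mat",
        "The cat sat on the mat": "and",
        "The cat sat on the mat and": "purred.",
    }
    return canned.get(text)  # None means "stop" (real models emit a special end token)

text = "The cat sat on"
while (next_word := fake_next_word(text)) is not None:
    text = f"{text} {next_word}"  # append the prediction and run the "model" again

print(text)  # The cat sat on the mat and purred.
```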
“Token”
Okay, I lied. LLMs do not actually perform next-word prediction. They perform next-token prediction. This is not important unless you’re a software engineer, but tokens are not identical to words. For example, “mom” is one token, but “grandma” is two tokens. AI/LLM providers charge per token in and per token out, so it is important for some of us to understand what they are.
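If you want to see tokens for yourself, here is a short sketch using OpenAI’s open-source tiktoken library (installed with pip install tiktoken). One caveat: every provider uses its own tokenizer, so the exact splits and counts this prints will not necessarily match what ChatGPT, Gemini, or Claude actually see.

```python
# Split a few strings into tokens with one publicly available tokenizer.
# Different models use different tokenizers, so treat the exact splits as illustrative.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # tokenizer used by some older OpenAI models

for text in ["mom", "grandma", "The cat sat on the mat."]:
    token_ids = encoding.encode(text)
    pieces = [encoding.decode([t]) for t in token_ids]
    print(f"{text!r} -> {len(token_ids)} token(s): {pieces}")
```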
“Hallucination”
A hallucination is when a model makes up something that is not factually true. You might ask, “What is a good book about World War II?” And it might answer with a book that does not exist, attributed to an author who really is a WWII expert. Confusing! AI model providers are trying to train away hallucination, but they have not succeeded.
Final Answer
So, what is AI? All LLMs are AI, but not all examples of AI are LLMs. “AI” does not have a technical definition like LLM does because “AI” is a pop culture term for anything that a computer appears to do autonomously.
Legitimate Critiques of AI (LLMs)
There has been a lot of backlash against AI/LLMs, and much of it is legitimate and warranted. Here are just a few examples of issues that have no clear solution or resolution right now.
- Copyright/IP issues. The corporations who created the frontier models, companies like Anthropic, OpenAI (ChatGPT), Google, etc., used an unfathomable amount of human-generated text for training their models. The truth is, all of these companies used data they did not have permission to use and did not spend money to purchase. For example, the New York Times is suing OpenAI for allegedly using Times news articles in their training set without negotiating a price for the content or royalties for the authors. They just did it. Anthropic settled a major lawsuit from authors because it acquired a massive collection of pirated digitized books that it used for training. It is pretty clear that all AI model companies have done this. They are clearly choosing to ask for forgiveness instead of permission.
- Mental health. There are numerous active lawsuits and credible reports of people experiencing a mental health crisis being nudged over the edge by AI chatbots. AI chatbots have even been linked to a murder-suicide in which ChatGPT told a man in crisis that he had “divine cognition” and that he was living in the Matrix. ChatGPT even encouraged his theories that his mom and others were trying to poison him. His story ended when he killed his mother before ending his own life.
- Disinformation. It has never been easier to quickly generate a bunch of garbage text. The problem is that one thing these LLMs are very good at is generating, on demand, thousands and thousands of words of grammatically correct, even compelling, prose. Russia-based disinformation strategists and other scammers can no longer be easily spotted. We used to say not to trust emails written in broken English, seeing that as a potential sign of a scam. Now, however, would-be scammers can generate perfect English emails. Putin’s Russia can generate thousands of websites in perfect American or British English claiming all kinds of false things, so that when you search the internet for something, this garbage appears in your results. The term we’ve landed on for this content is “AI slop.”
- One last thing that I’ve been speculating about: brain rot. I heard some time ago that cab drivers in London have had brain scans, and it was discovered that their hippocampi were larger than average. Driving around London without a GPS caused them to exercise the part of their brain responsible for sense of direction. That was a landmark study for proving brain plasticity. When these cabbies retired and stopped exercising that part of their brain, scans revealed that the hippocampus returned to an average size. AI is so new that we have no idea what harms we might be inflicting upon ourselves by offloading tasks to an AI assistant. What parts of our brains are shrinking?
Misconceptions
AI is actually very stupid
You can definitely use AI badly and make yourself look very stupid. I’m thinking of the lawyers who were fined $5,000 for citing imaginary precedents that ChatGPT gave them. I’m also thinking of any student attempting to pass off entire essays generated by an AI model as their own work.
However, it is simply not true that the premier AI models are stupid. If I evaluate an AI model where it overlaps with my own expertise, I will certainly find mistaken ideas and misconceptions about the topic. I have tested AI models’ knowledge of New Testament textual criticism and asked for descriptions of various textual problems. The AI did not tell me anything I didn’t know, did not surprise me with a new insight, and in my opinion, it got something wrong. However, I have a PhD in this field. How would I have evaluated the model’s output if it had come from a non-expert? I would have been thoroughly impressed! Certainly its output surpassed anything a high school student could hope to understand, bringing together knowledge of Greek, Latin, scribal habits, and paleography.
Maybe the output of AI is average, but that is remarkable, actually. If AI truly does produce “average” output, then by definition it uplevels half of all people for that topic or task. I think AI does produce average output, but I think the average is actually a little higher than we’re thinking. It is more like the average of people who know something about that topic. When it comes to butterfly species native to the Midwest, I am way below the average understanding of people who are interested in that topic. So, if AI can get me answers at that level, it’d be pretty useful.
AI is brilliant
Equally incorrect is the belief that AI is basically a human brain in a computer. It is not. I have a cordless drill that can drive screws much faster than I can with a screwdriver. And yet, I am much better at driving nails with a hammer than the drill is. There are managers (you know, those who manage the people who do the thing) convinced they can use AI instead of people.
This is like thinking you don’t need people who can swing a hammer because the drill was invented. True, the drill is going to change the job. Now the job is faster and the tool is more expensive. And maybe one person can do what used to take two or three people. More than anything else right now, AI is a tool rather than a replacement. And just as a skilled tradesman can be more effective with a power drill than a layman with one, an experienced knowledge worker is going to be capable of putting AI to better use than an inexperienced worker.
My tips for using AI tools
Experiment
If you haven’t used AI models, you can’t form an informed opinion. Things are moving so fast. Things are possible now that no one expected to be possible only six months ago. If you tried an AI tool in the summer, weren’t impressed, and still hold the same opinion, you need to try again.
Use it to diagnose a problem with your washing machine. Use it to help remind you of that word you forgot, you know, the one that means such and such, but different. Invite it to critique a business idea. Use it to recommend books or movies based on what you like. Use it to check a rule in the Settlers of Catan board game that you can’t find in the rulebook.
Try using it for something you think it isn’t smart enough to do. Then try again in a few months because everything will be different.
Use the right model
Every top AI model provider has different models. There is always a fast and cheap model. These are less “smart,” but not every task needs to boil the ocean. For example, if you’re trying to find where in an article the author discusses income inequality, ask one of the fast/cheap models to do it. Other tasks require multi-step reasoning and call for a bigger model.
At this point, in early 2026, every model provider has a few models:
- A fast and cheap one that can give nearly instant results. Use it for questions like, “Who was president before George H.W. Bush again?” or “What is the capital of Kenya?”
- A medium model that has some “thinking” or “reasoning.” These models generate a lot more text, but not all of it is for you; a lot of the text they generate is for themselves. The people selling AI call it thinking, but it is more like a generation loop that enables the next-token prediction to be more sophisticated. You might ask, “How many states border Wisconsin?” Then the model might start spitting out text to itself like, “Illinois borders Wisconsin. Iowa borders Wisconsin. Michigan borders Wisconsin. Minnesota borders Wisconsin. The user is asking how many. IL, IA, MI, and MN make four.” (See the sketch after this list.)
- And there is the big, powerful model that is the most expensive, the slowest, and the most capable of long “thinking” sessions. The bigger a model is, the more it is theoretically able to predict, which means it will appear smarter.
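Here is a rough sketch of the difference between a fast answer and a “thinking” answer. Both functions are stand-ins made up purely for illustration (this is not any provider’s real API); the point is that a reasoning model spends extra tokens writing notes to itself, and you normally only see the final answer.

```python
# Stand-ins for a fast model and a "thinking" model. Neither is a real API;
# they only illustrate that reasoning models generate hidden text for themselves first.

def fast_model(question):
    """One quick pass: predict an answer directly from the question."""
    return "Four."

def thinking_model(question):
    """Generate a hidden scratchpad of intermediate text, then a final answer."""
    scratchpad = [
        "Illinois borders Wisconsin.",
        "Iowa borders Wisconsin.",
        "Minnesota borders Wisconsin.",
        "Michigan borders Wisconsin (along the Upper Peninsula).",
        "The user asked how many: that makes four.",
    ]
    final_answer = "Four: Illinois, Iowa, Minnesota, and Michigan."
    return scratchpad, final_answer

question = "How many states border Wisconsin?"
print(fast_model(question))            # a quick, direct answer

hidden_notes, answer = thinking_model(question)
print(answer)                          # this is all you normally see
print(len(hidden_notes), "hidden lines of 'thinking' that you still pay tokens for")
```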
If you have only used the smallest cheapest models, you don’t actually understand what AI models are capable of.
Use the right model provider
Not all providers are equally good at all tasks. Here are my opinions about how the providers compare in February 2026. This is certain to change soon. And then change again. These providers release multiple new models every year.
OpenAI (ChatGPT)
This is the provider that got the whole AI revolution started. You have to be using the 5.2 models. Right now, OpenAI’s models are middle of the road on price, performance, and for how “smart” they feel. Maybe not the best at any one thing, but certainly not the worst, except for speed. They’re slower to generate text.
Google (Gemini)
LLMs have become the best automated translation tools we have, and the Google Gemini models are far and away the best for translating text from one language to another, in my opinion. Gemini models seem to be the biggest and seem to “know” the most information.
Gemini models have a hallucination problem. When asked a question it does not know the answer to, Gemini will, more often than not, make up an imaginary answer instead of saying, “Sorry, I don’t know.” This is frustrating because the Gemini models do seem to know more than the others.
Anthropic (Claude)
In my opinion, these are the most balanced models. They produce the nicest prose and feel the most “human” to me. They are also much less likely than Google Gemini to hallucinate (though all AI models hallucinate), which means Anthropic models are more likely than not to say “I don’t know” rather than make up an answer. They are also, however, the most expensive.
That’s it
With this introduction out of the way, I can address specifics in future posts. If you have any questions, leave a comment wherever you saw the link to this blog post.
There is a lot to say about audio and image and video AI generation. What a mess that all is right now!