A Little Rant About AI

Jeff Powell
Mar 8, 2024 · 5 min read

My schedule this week is a bit odd, so here’s something I’ve been working on for a couple of weeks now. I hope you find it interesting.

The world is all aflutter about AI.

I am not.

I can recall when ELIZA was a big deal. Well, technically I can't actually recall it, since it was written around the time I was born, but 20-odd years later, when I was in college taking an AI class, we were assigned to write a program that worked the same way. ELIZA was a natural-language chat program that (in one incarnation) vaguely emulated a psychotherapist. It worked by recognizing patterns in what someone typed and generating a response based on those patterns.
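
Boiled down, the mechanism fits in a handful of lines. Here is a minimal sketch in Python; the patterns and canned responses are invented for illustration (the real ELIZA script was much larger), but the technique is the same:

    import re
    import random

    # A few invented pattern/response pairs in the spirit of ELIZA's script.
    # "%1" gets filled in with whatever text the pattern captured.
    RULES = [
        (r"I feel (.*)", ["Why do you feel %1?", "How long have you felt %1?"]),
        (r"I am (.*)", ["Why do you say you are %1?"]),
        (r"my (.*)", ["Tell me more about your %1."]),
        (r"(.*)", ["Please go on.", "I see."]),  # fallback when nothing matches
    ]

    def respond(user_input):
        for pattern, responses in RULES:
            match = re.search(pattern, user_input, re.IGNORECASE)
            if match:
                reply = random.choice(responses)
                if match.groups():
                    reply = reply.replace("%1", match.group(1).rstrip(".!?"))
                return reply

    print(respond("I feel ignored by my computer"))
    # -> e.g. "Why do you feel ignored by my computer?"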

It was simple and essentially stupid. It was not AI.

Then came expert systems, the next kind of AI I heard touted by industry. For a while I worked for a small company that made one. The general idea was that someone who knew how a task was done would document it so that a computer could do the job. It was a rules-based system: sometimes the experts wrote the rules, and sometimes the system derived its own rules from entered data using pattern-finding software. Either way, in the end the computer was just following rules.
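
The core of such a system can be sketched in a few lines of Python. The rules below are invented for illustration; a real product had thousands of rules and a fancier inference engine, but the shape of the thing was roughly this:

    # Each rule pairs a set of conditions with a conclusion. If every
    # condition holds among the known facts, the conclusion fires.
    # These example rules are made up purely for illustration.
    RULES = [
        ({"engine_cranks": False, "battery_dead": True}, "charge the battery"),
        ({"engine_cranks": True, "fuel_empty": True}, "add fuel"),
    ]

    def infer(facts):
        return [conclusion
                for conditions, conclusion in RULES
                if all(facts.get(k) == v for k, v in conditions.items())]

    print(infer({"engine_cranks": True, "fuel_empty": True}))
    # -> ['add fuel']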

Once again, it was not actually AI, though it was a bit more sophisticated than ELIZA.

Now we are in the era of large language models (LLMs) like ChatGPT. These are trained prediction systems, very similar to the predictive-typing software on your smartphone. (You know, the one that suggests "choke" after you've typed "ch" when you were trying to get to "change". Like that, and just about as accurate.) LLMs are trained on vast amounts of text and look much farther back in their input than a few letters, but they do essentially the same job. Image generators (like DALL-E) do something similar, but produce images rather than text.
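
To make the "prediction" point concrete, here is a toy next-word predictor in Python. It just counts which word follows which in its training text and suggests the most common follower. An LLM uses a neural network trained on billions of words rather than a little lookup table, but the job description is the same:

    from collections import Counter, defaultdict

    # A tiny invented "corpus", purely for illustration.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count, for each word, which words follow it and how often.
    follows = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        follows[word][nxt] += 1

    def predict_next(word):
        counts = follows.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # -> 'cat' (the most frequent word after 'the')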

And once again it's just fancy programming. Note that none of these programs understands what it generates, or even what its inputs mean. Each is simply predicting the most likely next word given the prompt. In the case of image generators it's a bit more complicated, but you only have to look at a few images to see they don't understand a thing about what they are producing.

Is this really AI? I don't think so. Intelligence seems to require more. But even if that's only a semantic quibble, there are other serious problems to consider.

Training LLMs and related systems uses vast amounts of energy. One article says AI is “on track to annually consume as much electricity as the entire country of Ireland (29.3 terawatt-hours per year)”. Another (surprisingly optimistic) article is titled “AMD And Nvidia Will Boil The Oceans”.

And of course we’re already seeing job losses thanks to these systems. Will those made redundant by AI chatbots find other work?

And what about when those chatbots get things wrong? Here's a recent story about an Air Canada AI chatbot hallucinating a bereavement-fare policy. Apparently Air Canada claimed the bot was a separate legal entity "responsible for its own actions", which is simply laughable.

And there is yet another potential issue: computational power. Microsoft has been facing blowback due to the hardware requirements for Windows 11. Apparently 240 million PCs can’t upgrade from Windows 10 to Windows 11. (Just imagine 240 million computers headed for the landfill!) It’s so bad that Microsoft has announced they will offer paid security fixes for Windows 10 for another three years, but details (including the cost) are still scarce.

And then comes Windows 12. Admittedly there are few details about it yet and the press is full of speculation, but what little I’ve read looks bad. This article from Digital Trends has a couple of interesting notes:

  • Expect memory requirements to rise, with a minimum of 16 GB required to run the OS.
  • There’s a new performance unit for processors being used: TOPS, for Trillions of Operations Per Second, and the article claims Microsoft is going to require 40 TOPS for Windows 12 machines.

A while ago I bought a new computer — a reasonably nice one, though not a gaming machine — to run Windows 11. Will it be able to run Windows 12?

It came with 32 GB of RAM, so that's good. But how many TOPS does it provide? It has an AMD Ryzen 7 7840HS processor, which (according to this Wikipedia page) is designed for laptops. Very few of the entries on that list provide a TOPS number, since most of them don't include an AI accelerator (or NPU, a Neural Processing Unit). Apparently the 7840HS has one, but it can provide only "up to 10 TOPS".

In fact, if 40 TOPS really is going to be the requirement, none of the AMD processors listed on that page at the moment will be able to run Windows 12. And I was unable to find TOPS ratings for Intel processors easily. (Internet searches for things like [ Intel CPU TOPS rating ] all returned articles about the best Intel CPUs, which is not at all the same thing.)

Why does Microsoft want all that computing power? For AI, of course. Stuffing AI into Windows 12 is supposed to be a game changer — and maybe it will be, though I have my doubts — but it would be nice to get Windows 11 fixed before they make nearly every PC on the planet incapable of running Windows 12.

Windows 11 is proving to be pretty buggy in my experience, with more than a few glitches. But the funniest problem had to be with Copilot, Microsoft's ChatGPT-based AI assistant, which is built right into Windows. For a long time they claimed it was there, but I didn't see it. Why? Because I use two monitors. That's right: Microsoft's hottest AI feature was causing taskbar icons to jump randomly between screens, so they disabled it for everyone with multiple monitors for months while they fixed the problem. It finally showed up a couple of weeks ago.

And these are the people who are going to put AI into everything? I am skeptical.

Somehow I suspect that by the time Windows 12 is shipping I may be switching back to Linux.

In any case, I remain unconvinced that current generation “AI” is going to be the huge thing that its proponents (and a lot of the press) currently claim. It will find uses, but its energy requirements are enormous and it’s very hard to be sure anything it produces is actually correct. Those are pretty serious limitations.

What do you think? Is there an AI in your future? Or your present? Let me know.


Jeff Powell

Sculptor/Artist. Former programmer. Former volunteer firefighter. Former fencer. Weirdest resume on the planet, I suspect.