
A lot has been written about Artificial Intelligence lately. There are those who think it is a miracle, and others who believe it could signify the end of the world. Speculation about AI becoming self-aware abounds. Many people believe that AI has already surpassed human intelligence and fear that it could decide we (humans) are no longer needed.

We do use some AI functions on the website. There’s a story behind how that came about. At one time, we used a chat service. The point of that was to have a real human to talk to. Ideally, one of us would perform that function, but we often have to work at remote sites and are not sitting at a desk just in case a chat comes in. We also eventually need to go home and sleep or do other things. So, we used a chat service. The problem was that the people answering the chat knew nothing about our business and were not particularly inclined or motivated to learn more. They were tasked with asking three questions to determine whether a conversation was a valid lead. I would have liked more interaction and conditional responses, but the trouble they had with just those three questions made it clear that a more complex script would have been problematic. Frequently, one of the operators would take it upon themselves to give incorrect answers, or make decisions that were supposed to be referred to one of us. Sometimes, that meant dismissing someone rather than trying to help them.

So, I started looking for an alternative. Initially, the idea was to have a question/response type script, but that proved inadequate. Some of the earlier AI systems had a tendency to produce highly incoherent or ignorant responses. Late in 2023, the new generation of AI models became available. The first few attempts showed some promise, though it did take some adjustment and training to keep it on track. Setting up the AI is different from any programming language I have worked with. The interface is more about providing it with preferred information rather than writing rules. In addition to the big public language engine, it has instructions about how I want it to behave and can draw on the history and technical information from our website. On occasion, it would go a bit off track, but I was able to fine-tune it and correct those issues, so we were able to have the AI take over the chat engine.
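
For anyone curious what “preferred information rather than rules” looks like in practice, here is a rough sketch of the idea, written in Python against a made-up chat interface rather than the actual service we use. The behaviour instructions and the website snippets are just text handed to the model; there are no rules or branching logic to write.

```python
# Rough sketch of a "preferred information instead of rules" chat setup.
# The call_chat_model() stub and the snippet text are placeholders,
# not the actual service or content used on this site.

SYSTEM_INSTRUCTIONS = """
You answer chat questions from visitors to our website.
Stay on topic, be polite, and refer decisions such as pricing
or scheduling to one of our staff.
If you do not know the answer, say so instead of making one up.
"""

# Example snippets that would come from the site's history and
# technical pages. In practice, a search step would pick the pieces
# most relevant to the visitor's question.
SITE_SNIPPETS = [
    "Placeholder snippet from the site's history page.",
    "Placeholder snippet from a technical article on the site.",
]

def call_chat_model(messages):
    """Stand-in for whichever chat-completion API is actually used."""
    raise NotImplementedError("connect this to your chat service of choice")

def answer_visitor(question):
    # Everything the model needs (behaviour, site knowledge, and the
    # visitor's question) is passed in as plain text.
    messages = [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "system", "content": "Site information:\n" + "\n".join(SITE_SNIPPETS)},
        {"role": "user", "content": question},
    ]
    return call_chat_model(messages)
```

The fine-tuning mentioned above mostly meant editing that kind of text, rather than rewriting code.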

The goal was not really to replace humans. It was to provide better assistance to site visitors. Unfortunately, the humans were not stepping up to the task. Of course, it would take years of study to reach the level of one of our techs, but the issue was that no effort was being made to improve the interactions, and the results were unpredictable, depending on who was on shift at the time. In every job I have ever done, I have made an effort to understand everything I could about that job. I did have my misgivings about replacing the people, but we still had a task that needed to be accomplished. Perhaps it is something like the old telephone switchboard operators being replaced with automated switching. We really couldn’t go back to the old method.

After creating the first chat engine, I started experimenting with personalities. We were able to create some of the characters from our film so that you could chat with them. Each of them has a unique and entertaining personality. I don’t necessarily know how they will respond to a question, but none of them has been offensive or tried to take over the world. They also speak multiple languages. Although they were created for entertainment, they also have access to information from our website. Sasquatch tends to say he doesn’t know much about radio, but he can also be fairly accurate.

The dangers that I see in AI are mostly in how it is used. Recently I saw a post from someone trying to build a “free energy” machine who claimed he had validated the design with AI. Though AI can be useful in locating data when the standard search engines fail, it also has an unfortunate tendency to make up fantasies when it doesn’t know the answer. AI has been able to do some interesting things with image processing, but it can also create glitched or illogical results. When it creates music, something is just not quite right. An AI has no perception of the real world and no emotions. Art is all about emotion. Real-world machines have to be designed to work with real-world physics. So mistakes happen.

  • The person trying to make the “free energy” machine had disregarded, or never learned, some basic physics.
  • Someone running for political office has promised that an AI will make all the decisions. His reasoning is that it has access to more information than he could remember. He has not considered that the AI could be biased by its programming.
  • “Art” is being created by people who have put no effort into learning about art. Give a machine a few sentences and pick a result. What happens when people no longer learn the skills, and how to express the emotions behind them?
  • People are using AI as a crutch. Need to write a paper for school? Just have the AI do it. What did you learn?
Regulating Artificial Intelligence

There are many calls to regulate or eliminate AI, which is problematic. Legislators trying to regulate something they don’t understand has long been a problem. Artificial Intelligence is a term whose definition we are still debating. If you want to ban it, what do you ban? Can you regulate a programming technique? Do you put limits on the amount of data storage or processing speed? It is still debatable whether AI is actually intelligent, but the techniques have been around for a long time. Programs that use a set of rules to play a game or compile data about objects in a room are examples. The major difference now is that computer equipment has progressed to the point where it can process large amounts of data in a short time. Do you outlaw data comparisons?

Robot used to clear brush with a flame thrower

Certainly it is not a wise idea to give a machine control of a weapon, yet I recently saw a video of a robot designed to clear brush. They equipped it with what was basically a flame thrower. After all, what could go wrong? No matter what regulations are put in place regarding the weaponization of AI, the bodies that create those rules will inevitably exempt themselves. What is illegal in one country will be created in another. Self-driving cars, I am not so sure about. Maybe they could be safer when operating correctly. I’m not sure about their ability to handle unexpected circumstances. Certainly a car would be dangerous when the system is not working right.

In school, more and more students are turning in AI writing for their work. As AI gets better, it becomes harder to spot. Of course, the student learns nothing from doing that. It is similar to the chat service operators who learned nothing about the product they were supposed to be representing. This is one of the areas where I feel the humans should step up to the challenge rather than taking shortcuts or responding with apathy.

My feelings about AI are that it is probably too late to stuff the genie back in the bottle. I actually think that humans need to do better. To me, AI is a tool that, like many tools, needs to be used properly. I use it for a few tasks, but I don’t give it control of my life. Something I find odd in our society is that so many people are using technology with no concept of what it is or how it works. That is a mystery to me since I have always investigated how things work, even toys when I was three. People should still study science and engineering. They should learn art, whether it is producing it or even just understanding it. They should always ask the question, “why?”

In a popular science fiction book series, the backstory was that their society had automated all the menial tasks, so they lived a life of leisure. They no longer sought to better themselves or even face obstacles. Then the people who controlled the machines used them to control the population. The humans became the slaves. We face that now, especially as people come to rely on AI. The machines are programmed by people and are not all-knowing or infallible. What they present can be biased by the people who programmed them. I’m not sure if fearing AI is the issue. Complacency and reliance are probably bigger issues. Certainly, looking at what the people behind the curtain are doing is warranted.

Robots being used to enslave people