AI observations

I previously wrote an article about AI issues and our use of AI. AI continues to change, as does people’s relationship with it. I recently had a few concerning interactions with Google AI Search. For example, I had to update the information for our Colorado affiliate. I wanted to make sure I had all the correct information, so I did a search, which led me into the AI search. The AI made an error, telling me that it was a Gen2 network. Our network in Colorado is a Gen1, so I had to verify that I was looking up the correct network. When I tried to clarify it, the AI insisted that it was a Gen2 network and kept doing so even after I told it the network was owned by MRA and that I was one of the people who built it. I had to demand the URL it was referencing three times before the AI gave it to me. Its reference was a Kenwood white paper on the site that discussed both Gen1 and Gen2. I had to point out this misinterpretation before the AI would relent on its claim. On another topic, I was looking up a crime statistic, and the AI kept giving me an estimated number, insisting that it would be more representative due to “underreporting.” For my purposes, I wanted hard data, but I had to argue with the AI before I finally got the number I was after.

Now, I don’t think this is the AI becoming sentient. AI engines are given guidelines for how they behave, and these rules will often reflect the ideas of their programmers. Our AI chats are designed to be helpful and even entertaining. In setting them up, I put limits on them. I choose what information they use and how they should behave. In the case of the Google AI, part of its personality seems to be belligerence. It argues its own interpretation and even imposes a narrative on the results rather than giving me the information I asked for. I believe that is a reflection of the people who set it up. They gave it rules and methodologies to accomplish their purpose, and so the machine was not considering mine. It does carry a disclaimer that the results may contain errors, which I am aware of, and I do not consider it to be the authority.

What compounds this problem is a human component. People seem to be conceding authority to AI. They are citing it as a source of information as though it were more intelligent than humans at this point. This both discourages people from developing to their full potential and introduces a potential for control, since ideological concepts can be inserted into the personality profiles. Concerns are being raised about AI functions being put in children’s toys. There is a lawsuit by a family claiming their son committed suicide because an AI supported the idea. I am reminded of a quote from a movie: “You are listening to a machine. Do the world a favor and stop acting like one.”

There is a lot of conversation about restricting AI. I’m not sure that can be done anymore. Do you outlaw a programming technique when nobody is even sure how to define intelligence? Do you limit processing power in a world where technology is growing? Of course, it has always been a problem that when technology is restricted, there are those who will still have it secretly, even illegally. We can create something like Asimov’s rules of robotics, but even in his books there were unexpected results.

What we can control is how we interact with it. There are some things to keep in mind:

  • AI is a machine, a tool. While it can sort through data in a manner different from keyword searches, the AI has no perception of reality.
  • AI is not a friend. It has no feelings; it is just a program. Sci-fi series have played with the concept for many years, and while we can’t really predict the future, for now it is a program that somebody wrote. The appearance of a personality is just rules. You don’t have a relationship with an AI.
  • AI is not an authority on any subject. It simply locates data and uses an algorithm to choose what to display to you and how. The weaknesses are that there can be erroneous data in the database, and the AI can be biased by the rules that a human gave it. Every day people post bad information on the internet, and it is repeated. What would the AI use to determine reality? Based on repetition, the world could be flat. It could consult a person, but what if that person is wrong or presenting their own ideas? Many of the AI engines in use were designed as language engines. They have the unfortunate characteristic of making things up when they don’t have the facts.
  • Question and cross reference everything. The AI may lead you to an answer, but it might also present a fantasy.
  • Look at who controls it. There are people behind the curtain. Do you agree with their goals and ideas?
  • Keep control of the AI. Set its limits and monitor what it is doing. Certainly don’t give it weapons.
  • Don’t give up on yourself. An AI is not a reason not to learn and understand all you can. Knowledge is the key to interpreting what the machine, or even other people on the internet tell you.

The above are things within your control. There will, of course, be larger measures, probably after problems have been caused. After some of my interactions with the machines, I am considering that more transparency about who is programming them may be in order.