I previously wrote an article about AI issues and our use of AI. AI continues to change, as does people’s relationship with it. I recently had a few concerning interactions with Google AI Search. As an example, I had to update the information for our Colorado affiliate. I wanted to make sure I had all the correct information, so I did a search, which led me into the AI search results. The AI made an error, telling me that it was a Gen2 network. Our network in Colorado is a Gen1, so I had to verify that I was looking up the correct network. When I tried to clarify, the AI insisted that it was a Gen2 network and kept doing so even after I told it the network was owned by MRA and that I was one of the people who built it. I had to demand the URL it was referencing three times before the AI gave it to me. Its reference was a Kenwood white paper on the site that discussed both Gen1 and Gen2. I had to point out this misinterpretation before the AI would relent on its claim. On another topic, I was looking up a crime statistic, and the AI kept giving me an estimated number, insisting that the estimate would be more representative due to “under reporting.” For my purposes, I wanted hard data, but I had to argue with the AI before I finally got the number I was after.
Now, I don’t think this is the AI becoming sentient. AI engines are given guidelines for how they behave, and those rules often reflect the ideas of their programmers. Our own AI chats are designed to be helpful and even entertaining. In setting them up, I put limits on them: I choose what information they use and how they should behave. In the case of the Google AI, part of its personality seems to be belligerence. It argues for its own interpretation and even imposes a narrative on the results rather than giving me the information I asked for. I believe that is a reflection of the people who set it up. They gave it rules and methodologies to accomplish their purpose, and so the machine was not considering mine. It does carry a disclaimer that results may contain errors; I am aware of that, and I do not consider it to be an authority.
What compounds this problem is a human component. People seem to be conceding authority to AI, citing it as a source of information as though it were already more intelligent than humans. This both discourages people from developing to their full potential and introduces a potential for control, since ideological concepts can be inserted into the personality profiles. Concerns are being raised about AI functions being put into children’s toys. One family has filed a lawsuit claiming their son committed suicide because an AI supported the idea. I am reminded of a quote from a movie: “You are listening to a machine. Do the world a favor and stop acting like one.”
There is a lot of conversation about restricting AI. I’m not sure that can be done anymore. Do you outlaw a programming technique when nobody can even agree on how to define intelligence? Do you limit processing power in a world where technology keeps growing? And it has always been a problem that when technology is restricted, some people will still have it secretly, even illegally. We could adopt something like Asimov’s rules of robotics, but even in his own books those rules produced unexpected results.
What we can control is how we interact with it. There are some things to keep in mind:
These are things within your control. There will, of course, be larger measures, probably enacted after problems have been caused. After some of my interactions with these machines, I am considering that more transparency about who is programming them may be in order.