What do you trust AI to do?

Real vs. AI. AI is so dumb, it ‘thinks’ it was being racially sensitive.

The pictures above are from one of the texts from my linguistics course. The one on the left is real. The one on the right was created by artificial intelligence.

Don’t blame AI entirely. As we learned from Hal, who explained his behavior in “2001” in the sequel, “2010,” you have to be really careful how you program these machines.

Hal was programmed to remove any obstacle that threatened the success of the mission to Jupiter. If that meant killing those pesky astronauts on board, so be it.

The AI (Google’s LLM Gemini) that added a black member to the Wehrmacht “was apparently trained to generate (racially and ethnically) diverse, but had no way to know in what actual, factual contexts such diversity existed (or did not exist),” according to the book.

This happened in 2024. We know that AI keeps getting "smarter" all the time. But we would still do well to be careful about what we trust it to do.

Seeing that reminded me of something I recently realized about my own attitude toward artificial “intelligence.”

I’m not engaged in creating fake images that look real (although sometimes I do play around with Photoshop), so I’m not really concerned about ending up with fake black Nazis.

I write and edit. And ever since people around me (people who are not writers and artists) started using AI to generate text, I’ve thought, “I would never do that.” And not because I feared losing my job; I’m basically retired.

But I’ve avoided saying that, because I knew how people would react: “Look at the old Luddite who started out writing on manual typewriters!” Which I did; but I’m not a Luddite. I’ve enthusiastically embraced every new technology that has come along since those early days. I was usually at the vanguard of each change, and coached others in how to use it.

But not this. And one day recently, listening to some other people talking back and forth about the advantages and disadvantages of using AI, it suddenly hit me why I refused even to think about it.

No matter what technology was involved, I have never entirely trusted anyone else to express something I wanted to say. Oh, sure, I trusted my reporters and my associates on the editorial board. But I coached them ahead of time on how to write it, and as editor had complete control of the final form.

Sometimes I would rewrite it completely. Not because I was smart and they were dumb. I did the same thing to myself. Over and over, I would write an entire column and then throw it away and rewrite it entirely — sometimes I’d be so disgusted with the draft I’d even abandon the subject and write about something else.

No matter how much I fed to the machine ahead of time, when it was done, I’d be dissatisfied and rewrite it, taking pretty much as much time as if I were rewriting something I or a trusted subordinate had written.

So where’s the advantage in doing it in the first place?

So I basically just use AI to help with searches, and I apply plenty of caveats to that. It's just a way of sniffing the air as I start out on serious work; I always dig deeper than Google's AI shows me.

So what about you? In what ways do you use AI, and to what extent do you trust it?

Remember Hal?

3 thoughts on "What do you trust AI to do?"

  1. Douglas Ross

    As a software developer, I am using AI more and more recently. Yesterday, for example, I took a slow-performing program that had been written by someone else a few years ago and fed it into Grok, the AI in X. From what I have found, prompting is very important, so I explained which sections of the code were not performing well and asked Grok to recommend fixes. It took a couple of minutes, but the results were impressive. It didn't just rewrite the code; it explained all the possible changes in detail. Nothing it suggested was beyond my comprehension based on my 40+ years of experience; it just did it much faster than I could have. I still had to review the code for accuracy. I am not at the point where I would take it as gospel.

    Yesterday, I had to modify some code to handle a new business rule for dealing with adjustments made at the end of a fiscal year. Rather than manually go in and change the same code in 20 places, I fed it into Grok and told it what to do. And I asked it to insert comments above the changes so they could be tracked in the future. Something that may have taken me an hour to do took 2 minutes in Grok.
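    [The comment-tagging Doug describes might look something like this. A minimal Python sketch; the function, the fiscal-year rule, and the tag name are all invented for illustration, not taken from his actual codebase:]

    ```python
    from datetime import date

    # Hypothetical business rule: the fiscal year closes at the end of September.
    FISCAL_YEAR_END_MONTH = 9

    def post_adjustment(amount: float, posted: date) -> dict:
        """Record an adjustment, routing year-end entries to their own ledger."""
        # AI-EDIT FY-RULE-2024: adjustments posted in the closing month are
        # flagged for year-end reconciliation instead of the normal ledger.
        if posted.month == FISCAL_YEAR_END_MONTH:
            return {"amount": amount, "ledger": "year-end-adjustments"}
        return {"amount": amount, "ledger": "general"}
    ```

    [A consistent tag such as `AI-EDIT FY-RULE-2024` repeated above each of the 20 changed spots lets a later maintainer grep for every machine-made edit at once.]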

    I also use AI to do the things I can't do well, like make high-quality PowerPoint decks with graphics. On a recent project I took a text document with an outline and notes, fed it into Gamma, and said "turn this into a slide deck." Again, the results were fast and impressive.

    For more personal use, I have always had the idea that I'd like to write a screenplay. So I fed in notes I had been collecting over the years, then went back and forth with prompts to tweak what ended up being a very acceptable movie "treatment," with three acts and character backgrounds.

    The problem with AI is when people use it to replace thinking and comprehension. My daughter, who is a high school teacher, sees the effect all the time now. Kids in high school who can't read at a sixth-grade level are turning in papers that are obviously AI-generated. This is just the evolution from kids cutting and pasting from Wikipedia or using CliffsNotes back in the day. It won't end well for many of those kids.

  2. Brad Warthen Post author

    Doug has pointed to some constructive uses of AI — which I appreciate. There are such uses out there.

    I’ve been talking about such uses quite a bit recently with people over at the Moore School, some of them at the Executive Education program, which is a client of ADCO.

    There’s a tendency now in some corners of academia to turn to the positive side of AI, and start teaching about it at the business school — because today’s and especially tomorrow’s businesspeople will have to deal with it and learn positive ways to use it. Y’all might want to read some of Joel Wooten’s comments on the subject in this recent piece on the ExecEd blog…

