To hear some people, it seems like Artificial Intelligence (AI, probably pronounced “Aiyeee!”) is a big robot, stalking us all, ready to take our jobs and turn us out into the cold.
So I asked AI (at ChatGPT, a free tool set up to answer questions) if it’s after my job. Here’s what AI said:
I’m sorry, but I am not capable of taking on any jobs or tasks outside of providing information and answering questions. I am a large language model trained by OpenAI, and my purpose is to assist users in generating human-like text based on the input provided to me. I do not have the ability to take on tasks or jobs, and I exist solely as a digital assistant.
What a relief!
(Unless it’s lying...)
But here’s why I believe it’s telling the truth: It began its explanation of why it can’t take my job with “I’m sorry...”
I assume it did that because it’s common for humans to start a “no” answer with “I’m sorry.” A human with any emotional intelligence would not do that in this case. But AI doesn’t understand that. (Yet.)
If I were using this AI to write copy, and it made that mistake, I’d just edit it. And that’s what you’re supposed to do with AI-generated copy.
Which I think is the real purpose of AI copywriting: It gives you a first draft. You’re still miles from a finished piece of fundraising copy, and you have a lot of work to do to turn it into connecting, human, effective writing.
That’s the way all quality writing goes:
- You write a first draft. It’s lousy, not ready to share.
- You work on it until it is ready.
So I can see AI as a useful tool for generating first drafts. You’re going to start with bad copy whether you write the first draft yourself or get a mindless robot to do it for you. AI will just speed up what for many is the most difficult and frustrating part of the writing process. I’m good with that. (I’ll try it on an upcoming project and let you know how it goes.)
I think the potential problem with AI is not the tool itself, but the way it’s likely to be used.
This became clear to me recently when I watched a demo of a fundraising-specific AI. The human showing how it works gave the tool a handful of vague, abstract phrases that were connected with a cause. The AI dutifully spit out vague, abstract prose.
Garbage in, garbage out.
But the garbage was grammatically correct, syntactically realistic prose. If you didn’t know better, you could easily think you had a finished letter in front of you. Call it good, send it out.
And watch it fail.
Which, I have to admit, is really no different from a lot of fundraising produced by humans who don’t know much about fundraising. Instead of applying knowledge based on experience, they take wild guesses based on their hunches and beliefs.
So I don’t think AI is going to change things that much:
- Some people will put AI to work as a writing tool. It might save some time on the way to a quality product when used right.
- Some people will use AI to churn out ineffective garbage fundraising. Most of those people are the ones already creating ineffective garbage fundraising without robot help.
Nothing really changes. Maybe AI saves people some time as they create good or bad fundraising.
Seth Godin made a good point recently about using AI:
If your work isn’t more useful or insightful or urgent than [AI] can create in 12 seconds, don’t interrupt people with it.
That’s right: AI fundraising could be just a machine that churns out tons of junk messaging that wastes people’s time. Or it could be a helpful tool for people who understand fundraising and know that it takes time no matter how you do it.
If you want a deeper understanding of what AI is and isn’t, I highly recommend this interesting (and very funny) book: You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place by Janelle Shane.