You have probably already read something written by AI. And if you are like most Americans, according to one recent survey, you couldn’t tell that a human didn’t write it.
That’s because of ChatGPT and similar large language model (LLM) tools, which are impressive examples of generative AI. They can quickly respond to simple prompts with loads of conversational and comprehensible text. In mere seconds, they can produce everything from recipe ideas and travel itineraries to convincing essays on the safety of nuclear energy. Heck, they’ve even managed to ace the bar exam.
Right now, generative AI is incredibly popular, widely available, and being integrated into a rapidly growing array of digital tools. So here are a few Sterling tips on how to use it responsibly:
Generative AI Dos and Don’ts
Do: Use it to save time.
When it comes to professional communications, generative AI can be incredibly useful for quickly tackling tasks that take humans a long time, such as synthesizing notes, summarizing long presentations, kickstarting research, or simply combating writer’s block and the tyranny of the blank page. Kevin Roose at The New York Times has used generative AI tools for a variety of work-related purposes, including editing and constructively critiquing his own writing.
Don’t: Believe everything it generates.
There have already been widely publicized incidents of generative AI fabricating citations, inventing fake criminal histories for real people, and simply being confidently wrong. As computer scientist Margaret Mitchell recently explained: “It was not built to be factual and thus will not be factual. It’s as simple as that.” Treat its output as a creative shot in the dark, not as a reliable source.
Do: Be transparent.
If you use generative AI to compile and email a list of updates to your team, for example, be courteous and include a disclaimer that says so. Something simple and straightforward like “This message was developed with the help of ChatGPT” should suffice.
Don’t: Enter confidential information into prompts.
Consider the recent news reports about employees pasting proprietary code and meeting transcripts into ChatGPT prompts, thereby leaking confidential company data.
Do: Read the fine print on generative AI user agreements.
As PR Daily noted, “many AI platforms indicate that they own the content being created on the platform.” The user agreement for Google’s Bard, for example, stipulates: “You will not input any personal or sensitive information” and notes that “when you interact with Bard, Google collects your conversations,” among other data. OpenAI, the developer of ChatGPT, explicitly states: “No, we are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.”
Don’t: Copy and paste output and try to pass it off as your own work.
Media outlets have already started issuing policies and guidelines on generative AI content. Wired says, “We do not publish stories with text generated by AI.” And Forbes stipulates that article submissions be:
- Previously unpublished
- Owned by you
- Not generated by AI tools or platforms
Do: Check with your employer before using generative AI for any work-related purposes.
Some companies are embracing the tools, but many others are restricting workplace use.
Do: Understand the technology’s limitations.
These tools can generate seemingly thoughtful text, but they are not a substitute for thinking.
When it comes to navigating generative AI use, take a page from Michael Pollan’s advice on the healthiest human diet. Apply something similar to healthy human communications: Use tools. Not too much. Mostly you.
*Sterling Disclaimer: Everything in this post was created by real live humans without the aid of ChatGPT, Bard, Bing, or other generative AI tools. We did run it through a spellchecker (and ignored several of its suggestions).