Are Robots Coming to Replace Us? 4 Jobs Artificial Intelligence Can't Outcompete (Yet!)
Is artificial intelligence (AI) evolving to make our lives easier, or is it coming to replace us? I believe the answer is a bit of both in the long run, but there is no need to panic for now.
By Max Kraynov Edited by Micah Zimmerman
Opinions expressed by Entrepreneur contributors are their own.
For the last few months, the debate about artificial intelligence, and artificially generated content (AGC) in particular — the likes of Lensa's images, loosely derived from existing photos and popular aesthetics, or the texts generated by ChatGPT — has been heating up.
ChatGPT is a free tool from OpenAI that has been trained to chat with people and "answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests." The Lensa AI app by Prisma Labs, on the other hand, uses artificial intelligence to turn selfies into fantastical portraits, ranging from the beautiful to the downright bizarre.
Does the emergence of these two tools, alongside many others like them, mean that entire social media profiles could potentially be managed and "staffed" by AI? And if AI is as good as we are at answering questions that once required extensive research or making works of art, what does this mean for employment trends going forward?
Are entire creative industries, and the tech companies that cater to creatives and thrive on user-generated content (UGC), destined for extinction? The landscape continues to evolve quickly, but despite all the fun that can be had with AGC, when it comes to high value-added and non-repetitive tasks, I don't see AI replacing humans just yet.
1. Reliable research
What research (i.e., too much of my time spent playing with ChatGPT) shows is that when it comes to getting answers to clearly worded, simple questions, the robots are very good at sifting through vast amounts of published content and regurgitating the main ideas quickly. This can certainly save time.
However, the ideas that AI spits out are based on the vast swathes of content available online and are, therefore, not properly fact-checked and sourced. The algorithm cannot distinguish between a well-researched scholarly paper and an article penned by a little-known media organization with questionable editorial standards.
It takes posts by conspiracy theorists and charlatans at face value, weighing them alongside reputable research when drawing conclusions. It is therefore hard to know whether the AI's answers are correct. Can they be trusted? Can the sources for the information be made clear, so users can judge the veracity of the derived answer for themselves?
For now, it takes a person with strong research and analytical skills, sound judgment and a good grasp of the media landscape to provide trustworthy research. Will it take more time for a human than an AI to sift through sources to come up with an answer? Yes, but speed is not everything when reliability and trust are at stake.
2. Appropriate content
Being able to distinguish between sources and cite them in research is just one part of the problem. Even though OpenAI has tried to build safeguards into its tech so that ChatGPT declines inappropriate or offensive requests, like all AI products, it can learn the biases of those who train it. This means it can produce, and already has produced, sexist, racist and otherwise offensive material, as several journalists have noted.
When using AI for copywriting, be it for marketing collateral, blog posts or website content, it is important to ensure that the text the AI produces is appropriate in tone for the corporate or personal brand. Cultural context is key, and this is not something AI can decipher well at this point. So, even when using AI as a shortcut in copywriting, a human touch will still be needed to check for cultural sensitivity and tone and to avoid any potentially disastrous blunders.
3. Quality entertainment
Ensuring something is appropriate is one thing, but what makes truly great content? When it comes to content that is meant to entertain, such as comedy sketches or even memes, I would argue that timing, nuance and creativity are key.
By definition, the comedy that resonates the most with people is relatable — it's authentic, it's storytelling and the laughs erupt from a personal connection alongside an understanding of the audience. Often, that lived experience, whether real or imagined, separates the best comedians or the most popular creators on meme platforms like Yepp from the hacks.
Artificially generated humor is based on logic, structure and formula rather than on opportunistic observation or in-the-moment quips drawn from experience. AI is not good at reading the room, regardless of whether that room is virtual. It can mimic existing jokes, but coming up with new, creative ideas that spark a connection and produce a genuine laugh is not yet within the robot's arsenal.
4. Thought leadership
Can AI predict future trends? If we're talking about ChatGPT and similar tech, it can only draw conclusions or make predictions based on information that has already been published. This means it is unlikely to come up with anything truly "new" when predicting future trends in any industry or sector.
Is a well-structured analysis, based on trends and predictions already in the public domain, useful? Sure, it's interesting to see what has already been said. However, this template-based approach, which summarizes existing information, does not lend itself to the new ideas and the engaging, useful content that define thought leadership.
At this point, AI is unlikely to develop creative and disruptive ideas drawn from lived experience and from analytical and creative skills. That kind of thought leadership remains reserved for qualified humans, at least for now.