But experts warn that you should be wary of using AI tools because there are still many unknown factors, especially concerning "usage rights and data privacy protections."
Google Cloud refers to generative artificial intelligence as the use of AI to produce new content, like text, images, audio, music and videos.
Eleanor Hawkins, a communications strategist and writer at Axios, advised people to reconsider the use of AI tools because they may be held responsible for what the apps produce, including any glaring errors.
Before incorporating AI tools into creative workflows, Hawkins said professionals must first understand the product's terms of service, their own internal corporate policies and contractual obligations, and basic intellectual property law.
Michael Lasky, partner at the Davis+Gilbert law firm, explained that there are four key pillars to remember when talking about intellectual property law:
Copyright

Copyrights protect a "fixed tangible expression of an idea," such as literature, photos and films. (Related: Google unveils plan to use AI to completely destroy journalism.)

Trademark

Trademarks "protect logos and taglines that are consistently used to denote a specific brand in the consumers' minds."

Right of publicity

Right of publicity refers to the economic harm that can occur when a person's name or likeness is used for a commercial purpose without their consent.

Right to privacy

Right to privacy refers to the emotional damage that can occur when a person's name or likeness is used for a commercial purpose without their consent.
If you use AI tools, you may also run into other key risks, such as breaching confidentiality clauses and compromising data privacy. Companies like Apple and Verizon have restricted employees from using external AI tools for these reasons.
According to a document and people familiar with the matter, Apple has restricted the use of ChatGPT and other external AI tools for some employees because the company is also developing a similar technology.
The document also revealed that Apple is concerned that employees who use these programs could release confidential data. Additionally, Apple advised employees not to use GitHub's Copilot, a Microsoft-owned AI tool that automates the writing of software code.
ChatGPT, created by Microsoft-backed OpenAI, is a chatbot built on a large language model that can answer user questions, write essays and accomplish many other tasks in human-like ways.
However, companies are cautious because when people use these models, data is sent back to the developer to enable continued improvements. This means that there is a potential risk of an organization unintentionally sharing proprietary or confidential information.
In March, OpenAI announced that it took ChatGPT temporarily offline because a bug allowed some users to see the titles from another user's chat history.
An OpenAI spokeswoman referenced an earlier announcement in which the company introduced a feature that allows users to turn off their chat history, which the company claimed would block the ability to train the AI model on that data.
Apple is known for its strict security measures to guard information about future products and consumer data.
Many organizations are also wary of the technology as their workers have started using it for a growing number of tasks, ranging from writing emails and marketing material to coding software.
Apple's AI efforts are spearheaded by John Giannandrea, whom Apple hired from Google in 2018. Thanks to Giannandrea, a senior vice president at the company who reports to CEO Tim Cook, Apple has acquired several AI startups.
In one of Apple's recent quarterly earnings calls with analysts, Cook expressed some concerns about advancements in generative AI, advising that users must be "deliberate and thoughtful" in how they approach the technology. However, Cook also acknowledged that generative AI has potential.
Apple is also closely monitoring new software coming onto its iPhone App Store that takes advantage of generative AI.
When app developer Blix tried to update its BlueMail email app with a ChatGPT feature, Apple temporarily blocked the update because "it could potentially show inappropriate content to children."
Apple's review team requested that the developer either raise the app's age rating, previously set at four and older, to 17 and older, or include content filtering. After Blix assured Apple that it had already implemented content filtering for the ChatGPT feature, the update was approved.
Lasky said communications professionals, along with other AI tool users, must "be good stewards and experts on leveraging the new technology [while also] understanding what kind of uses carry the most risk."
Visit Robots.news for more stories about artificial intelligence and the potential dangers of AI chatbots.
Watch the video below to learn more about a former senior Google engineer's warnings about an AI chatbot that has become a sentient being.
This video is from the Journaltv channel on Brighteon.com.