The new policy states: “We may collect information that’s publicly available online or from other public sources to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”
Among the data Google considers fair game is everything you've searched for, purchased, or watched online, as well as any location data shared via an Android phone. The company also says it will gather information about people from “publicly accessible sources” under certain circumstances. This means that if your name shows up somewhere online, such as in a blog or forum post, Google may index that content and use it to train its AI.
While this won't come as much of a surprise to those already familiar with Google's blatant disregard for people's privacy, many who might otherwise be willing to trade some of their data for Google's free services will certainly draw the line at allowing that data to feed AI chatbots.
What can users do if they do not want Google to use their data to improve its artificial intelligence technology? You might think that giving up all the Google services and devices you use would be enough, and while that's certainly a good start, avoiding Google entirely requires extreme vigilance.
For example, if you decide to switch from Google's Chrome web browser to Apple's Safari, Safari's default search engine is still Google. You will therefore also need to change your default search engine to one that is more respectful of user privacy.
In addition to carefully studying the privacy policies of the internet services you wish to use before signing up, it is important to keep checking them regularly for updates that may introduce changes you are not comfortable with.
Although Google's artificial intelligence chatbot Bard got off to a rocky start with a major public embarrassment, it has quickly caught up with rivals. The company has also announced its intention to launch an AI-based search tool called the Search Generative Experience, even as Google's parent company, Alphabet, warned employees about security risks associated with using chatbots.
Artificial intelligence becomes more frightening every day, taking jobs from experienced writers, coders, journalists, and paralegals despite getting facts wrong and sometimes fabricating things entirely. The truth is that whether you use Google services or not, there is a good chance that much of today's AI technology was trained on something you produced, whether it's a forum comment, a tweet, a blog post, or something else.
OpenAI, the maker of the AI bot ChatGPT, is currently facing a class action lawsuit over claims that it stole “essentially every piece of data exchanged on the internet it could take” without obtaining consent or providing notice or compensation.
The lawsuit claims that OpenAI’s business model is essentially based on theft, relying on “stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”
It’s just one of several legal attacks OpenAI is facing, including a lawsuit by authors whose works were scraped to train its algorithms.