Google Expands AI Training with Public Data, Raising Privacy Concerns
Google has recently updated its privacy policy to allow publicly available data to be used for training AI models and building AI products. While the move signals Google's deepening focus on AI, it also raises concerns about user privacy and the societal impact of these models. This article explores the updated policy, Google's AI initiatives, and the broader implications for privacy and intellectual property.
New Policy Allows Use of Public Data for AI Training
As of July 1, 2023, Google has revised its privacy policy to state explicitly that publicly available information can be used to train AI models and develop new products and features. This expands the previous language, which mentioned using public data only for training "language models" and specifically referenced Google Translate. By drawing on public data, Google aims to enhance its AI capabilities in areas such as Google Translate, Bard (its AI chatbot), and Cloud AI services.
Google's Growing Emphasis on AI
Google has been making significant strides in the AI domain, evident through its various AI-based ventures. Despite a lukewarm initial reception, Bard has rapidly improved and caught up with other chatbots in the market. Additionally, Google has teased upcoming AI-powered experiences, including AI shopping, Google Lens enhancements, and a text-to-music generator. To further reinforce its AI initiatives, Google plans to introduce the Search Generative Experience (SGE), an AI-driven search feature.
Privacy Concerns and Intellectual Property
The updated policy has sparked concerns surrounding privacy, intellectual property rights, and the potential impact on human labor and creativity. The introduction of AI products often raises apprehension regarding data privacy and the responsible use of personal information. Last month, OpenAI, the creator of the popular AI bot ChatGPT, faced a class action lawsuit alleging the unauthorized acquisition of extensive data from the internet.
Comparisons to Controversial AI Practices
The Google policy update has drawn comparisons to the controversial actions of Clearview AI, a company that built a facial recognition app using billions of facial images scraped from social media and other platforms. Following a lawsuit settlement with the ACLU, Clearview AI was barred from providing access to its facial recognition database to most private entities. Although Google's intentions may differ, the association serves as a reminder of the potential risks and ethical concerns surrounding AI development.
Google changed its privacy policy: "we may collect information that’s publicly available online or from other public sources to help train Google’s AI models and build products and features, like Translate, Bard and Cloud AI capabilities". Doesn't sound like Terminator, huh?
— Lukasz Olejnik (@lukOlejnik), July 2, 2023
Google's Efforts to Address Security Risks
Even as it pushes further into AI, Google remains mindful of the security risks these technologies carry. Alphabet, Google's parent company, recently cautioned its employees about security vulnerabilities related to chatbots. In response, Google introduced the Secure AI Framework, aimed at strengthening cybersecurity defenses against AI-related threats.
Conclusion
Google's updated privacy policy, which permits the use of public data for AI training, reflects the company's commitment to advancing its AI capabilities. However, the shift raises legitimate concerns about user privacy, intellectual property rights, and the broader societal impact of AI models. As Google continues to push the boundaries of AI, striking a balance between technological progress and responsible use will be crucial to meeting the evolving challenges of the digital age.
Source: mashable.com