Google warns employees against sharing confidential information with AI chatbots

Published by WEB DESK

Alphabet, Google’s parent company, has warned staff about the risks of using AI tools and advised them not to enter sensitive information into chatbots, including its own chatbot, Bard.

According to reports, Alphabet cautioned staff against submitting confidential data to AI chatbots such as OpenAI’s ChatGPT. The company confirmed this, citing a long-standing information security policy. A Google privacy notice updated on June 1 states, “Don’t include confidential or sensitive information in your Bard conversations”.

Chatbots such as Bard and ChatGPT are human-sounding software applications that hold conversations with users and answer a wide range of questions using so-called generative artificial intelligence. Conversations with these chatbots can be accessed and read by human reviewers, and researchers have found that such AI models can reproduce data absorbed during training, creating a leak risk.

Additionally, Alphabet warned its engineers against using chatbot-generated computer code directly. Although the bots can generate some kinds of code, they risk introducing errors or producing “undesirable” suggestions. Google also said that it wanted to be transparent about the limitations of its technology.

The concerns underline Google’s desire to avoid business harm from the software it launched to compete with ChatGPT. At stake in Google’s race against ChatGPT’s backers, OpenAI and Microsoft, are billions of dollars of investment and still-untold advertising and cloud revenue from new AI programmes.

Google’s warning also reflects what is becoming a security norm for businesses: cautioning staff against using publicly available chat programmes. A growing number of companies around the world, including Samsung, Amazon, Apple, and Deutsche Bank, have placed restrictions on AI chatbots.

In a survey by the networking site Fishbowl of some 12,000 respondents, including employees of top US-based firms, around 43 per cent of professionals said they frequently use AI tools such as ChatGPT without telling their bosses. The trend has raised serious concerns among researchers, companies and governments alike about privacy and employment, as AI tools become capable of replacing human work.

In a statement given to Politico, a Google spokesperson said, “We said in May that we wanted to make Bard more widely available, including in the European Union, and that we would do so responsibly, after engagement with experts, regulators and policymakers”. The spokesperson added, “As part of that process, we’ve been talking with privacy regulators to address their questions and hear feedback”.

Some businesses have developed software to address such concerns. For example, Cloudflare, which protects websites from cyberattacks and provides other cloud services, is marketing a capability that lets companies tag and restrict some data from flowing externally. Cloudflare’s CEO, Matthew Prince, said that typing sensitive data into chatbots was like “turning a bunch of PhD students loose in all of your private records”.

Additionally, Google and Microsoft are offering conversational tools to business clients at a higher price; these tools do not feed customer data into publicly available AI models. By default, both Bard and ChatGPT save users’ conversation history, which users can choose to delete.
