
Before ChatGPT becomes a data privacy nightmare, govt regulations must tame it

Potential adverse effects of ChatGPT can be mitigated with planning, implementation, and monitoring of the chatbot's performance. What the govt needs to address is letting consumers have control over how their data is used, say experts



8 April 2023 6:51 AM IST

Collaboration with industry experts, consumer advocacy groups and other stakeholders can help the govt understand the potential risks and benefits of ChatGPT and develop effective policies. The govt can mandate companies to disclose how they use customer data and obtain consent from consumers before collecting information. The Centre can set up oversight mechanisms to monitor the use of ChatGPT and ensure that companies are complying with regulations. Oversight can include regular audits, inspections, and investigations, suggest experts

In this digital era, as consumers of technology, a question arises with every breakthrough - who will bell the cat? Those following the hue and cry around an AI-driven chatbot will know which cat is being talked about. Yes, the cat here is ChatGPT (Generative Pre-Trained Transformer).

The latest news is that Microsoft-backed OpenAI has announced that ChatGPT Plus, the subscription service to access its text-generating AI, is now available in India. Meanwhile, the government has been delaying the enactment of the Personal Data Protection Bill, 2019. In August 2022, the bill was withdrawn by the government for further scrutiny. Netizens are stuck with the question: when will the 81 amendments recommended by the joint parliamentary panel be implemented to bring in legislation on personal and non-personal data protection? Isn't it high time? OpenAI offers a free version through its website, and ChatGPT can also be experienced by searching for it on Bing. Now the government needs to answer - when will the cat be belled?

While India's discussion still rests on implementing a data protection bill, Italy's data protection authority has ordered a halt to the processing of Italians' data for the ChatGPT service, which led OpenAI to block access to ChatGPT in that country. The US Chamber of Commerce has called for regulation of artificial intelligence technology to ensure it does not hurt growth or become a national security risk. An EU official has also proposed rules for regulating AI.

For years, governments around the world have been working on AI legislation. China, an exception, released legislation on AI last year. Of the more than 65 countries with AI policy initiatives, only a notable few, such as China, have followed through with hard rules.

Artificial Intelligence has been in the news for years now. Any new technology or tool wakes up the respective governments to work on its ethical and privacy implications. The Indian government has set up a National Level Expert Committee on AI to provide guidance on the ethical, legal, and social implications of AI. In 2018, the government of India also released the National Artificial Intelligence Strategy, which recognises the need to address these implications and highlights the importance of responsible development and use.

Setting these strategies and committees aside, the need of the hour for AI-driven technologies such as the Chat Generative Pre-Trained Transformer, or ChatGPT, is an effective legal framework. Bizz Buzz spoke with startups in India about their willingness to integrate ChatGPT into their platforms, and about the obstacles the country and its citizens face whenever a ground-breaking technology is introduced.

Zaiba Sarang, Co-Founder of iThink Logistics, is considering integrating ChatGPT into her core logistics operations. “As amateurs in this field, we are first experimenting with using ChatGPT for marketing purposes, such as creating content and creatives. Even the text you are reading now has been paraphrased by ChatGPT. We need to carefully train ChatGPT to work with data related to activities such as order placement, order tracking, and invoicing. These critical business operations require a high degree of accuracy and reliability, and we must ensure that ChatGPT can meet these standards.”
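What might such an integration look like in practice? The sketch below is a hypothetical illustration, not iThink Logistics' actual code: it uses OpenAI's publicly documented chat API (the pre-1.0 Python client current at the time of writing) and masks obvious personal identifiers before an order-tracking query ever reaches the model. The helper names and masking rules are assumptions made for illustration, reflecting the kind of data-minimisation step the experts quoted later in this piece want regulators to mandate.

```python
# A minimal sketch, NOT any startup's actual implementation. Assumes the
# pre-1.0 openai Python client and an API key supplied by the reader.
import re
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys


def mask_pii(text: str) -> str:
    """Hypothetical helper: redact phone numbers and email addresses."""
    text = re.sub(r"\b\d{10}\b", "[PHONE]", text)
    text = re.sub(r"[\w.+-]+@[\w.-]+\.\w+", "[EMAIL]", text)
    return text


def answer_tracking_query(order_status: str, question: str) -> str:
    """Send only masked, minimal context to the model and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a courteous logistics support assistant."},
            {"role": "user",
             "content": f"Order status: {mask_pii(order_status)}\n"
                        f"Customer question: {mask_pii(question)}"},
        ],
        temperature=0.2,  # keep answers consistent for operational queries
    )
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(answer_tracking_query(
        "Shipped from Mumbai hub on 6 April, contact 9876543210",
        "When will my order reach Pune? Reply to me at buyer@example.com",
    ))
```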

ChatGPT is unlocking new use cases for AI chatbots, says Nitin Raj, CEO and Co-Founder of Riverum. “ChatGPT has the potential to revolutionise how we interact with AI chatbots. From customer service and healthcare to education and finance, it is the future of AI chatbot technology.”

Nishant Behl, the Founder of Expand My Business, is exploring the implementation of ChatGPT. He says, “We are utilising the platform to understand what new concepts and trends we can utilise for brand building and for analysing more divergent concepts in marketing. We are in the early stages of using the chatbot. The AI chatbot’s limitations need to be considered before we integrate and leverage it.”

Startups working in different sectors and industries will find ways to leverage any new tech; the common ground of usage for ChatGPT is customer service, sales support, lead generation, and market research. But the matter that needs to be addressed is its adverse effects. As Behl says, the model lacks human background knowledge, which may lead to inaccuracies in certain situations. ChatGPT may also be limited when it comes to new developments or changes in specific fields. Behl further mentions that the impact could be more significant in industries where customer privacy, accuracy, and human interaction are essential, such as healthcare, finance, legal, and hospitality.

Raj adds: “While the AI does not have personal beliefs or biases, the discrimination or stereotypes of its developers can impact the outcome. This can affect sectors such as insurance, lending, or recruiting. ChatGPT is not subject to human oversight or accountability, which could lead to legal and ethical issues. The most prominent of these would be the lack of transparency about how AI is used and the potential for abuse.”

Another factor neglected while speaking about integrating ChatGPT into a startup's platform is the need for a robust IT infrastructure, says Sarang. “There is a chance of encountering technical issues, since ChatGPT is an advanced technology that requires a stable and robust IT infrastructure to function properly. And if there are any glitches in the system, or if the system goes down even for a few moments, the entire supply chain could be disrupted,” she adds.

As a startup founder mentioned, these potential adverse effects can be mitigated with proper planning, implementation, and monitoring of the chatbot's performance. Here, the million-dollar question is how the government will ensure that netizens, as consumers, can make the most of the technology while being assured that the safety of their data and their privacy are taken into account. ‘Consumers should have control over how their data is used’ is the common refrain of citizens and startups alike.

The biggest obstacle to ChatGPT implementation is that the government must first be prepared with a governance policy, says Cyber Security Expert Moutan Sarkar. “I have been involved in cybercrime investigation with various law enforcement agencies for the last decade. ChatGPT could be used extensively in cracking cybercrime cases once the tool is trained in handling and analysing digital evidence. This tool should be used to keep cyberspace safe and secure, and it will also speed up investigation and trial. However, the major risk the government should be alert to is privacy. Regulation should be framed around this before such AI is accepted, and privacy of the case, localisation of the data, and a bar on third-party access have to be made mandatory,” Sarkar adds.

Creating awareness about the benefits and dangers of using ChatGPT is the government's responsibility, as is ensuring that consumers can make an informed decision about how to use the technology, says Nitin Raj. “The government should establish safety standards and regulations for the AI sector. This will help ensure that companies are held accountable for their actions and that consumers are adequately protected. In addition, the government should develop consumer protection laws that allow consumers to seek recourse if things go awry. Furthermore, regular safety testing should be made mandatory, even after the product has been launched. This will help ensure that any potential issues can be identified and addressed in a timely manner,” Raj adds.

OpenAI has stated that it may share users' personal information with unspecified third parties, without informing them, to meet its business objectives. According to its privacy policy, the company collects visitors' IP addresses, their browser type and settings, and information about how visitors interact with its websites. This underscores the need for the government to push companies to be ethical. “The government should implement policies to ensure that companies provide clear and concise information about the technology, enabling consumers to make informed decisions. There should also be strict rules and regulations governing data collection,” says Sarang.

A startup founder has suggested that the government take a multi-pronged approach to safeguarding consumers while introducing new technologies like ChatGPT. Such measures can help protect consumer interests while promoting innovation and growth in the technology sector.

Behl said the government can collaborate with industry experts, consumer advocacy groups, and other stakeholders to develop policies that protect consumer interests. Collaboration can help the government understand the potential risks and benefits of ChatGPT and develop effective policies. The government can mandate companies to disclose how they use customer data and obtain consent from consumers before collecting information. It can also set up oversight mechanisms to monitor the use of ChatGPT and ensure that companies are complying with regulations. Oversight can include regular audits, inspections, and investigations, Behl adds.

Startup founders, tech intellects and cyber security experts are crystal clear about their approach to new tech. But the government's far-fetched dream of localising data has become a data privacy nightmare for users. How many users does ChatGPT have in India and across the globe? Ask the chatbot, and the answer might be shocking: the best one can say is around or over 100 million, since the sources behind the figures available online are questionable. However, those numbers matter less than the unanswered question - when will the cat be belled? The irony is that not only ChatGPT but also the government might not have an answer.
