
Democratisation of AI: risks and opportunities

5 min read

July 16, 2024

As tech giants race to make their large language models public and accessible, they sometimes put profits ahead of risk mitigation strategies. To maintain consumer trust, companies need to incorporate these models into their workflows carefully.

As the race between ChatGPT, Bard, and other large language models (LLMs) heats up, so too will the push for companies to incorporate LLM outputs into their workflows. That’s particularly true for companies using chatbots for customer service: ChatGPT’s 100 million users may now expect chatbots to respond to them in much more natural ways.

In March, OpenAI released its API for ChatGPT, making its generative AI accessible to large companies and hobbyist developers alike. This broadening of access to develop and use AI technologies is often referred to as AI democratisation – and for many, it’s something to be celebrated. More access to large language models can translate to lower costs to deploy sophisticated AI solutions that improve productivity. A recent study by MIT and Stanford researchers found that ChatGPT improved productivity by 13.8% at one contact centre, with faster inquiry resolution times and more success in resolving issues.
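To give a sense of how low the barrier to entry now is, the sketch below shows a minimal ChatGPT request via OpenAI’s Python SDK. The model name and prompts are illustrative placeholders rather than recommendations.

```python
# A minimal sketch of a ChatGPT API call using OpenAI's Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; the model name
# and prompts are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a customer service assistant."},
        {"role": "user", "content": "Where is my order?"},
    ],
)

print(response.choices[0].message.content)
```

A handful of lines like these is all it takes to put generative output in front of customers, which is exactly why the risks below deserve attention.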

However, when organisations without expertise in AI implement these models, they can overlook important risks that compromise their reputation, their technical efficacy and, most importantly, their customers’ safety. In these cases, the risks of using large language models may outweigh the benefits unless deployment is carefully thought out.

Below are some important risks for companies to consider when incorporating large language models like ChatGPT into their processes.

Generative AI risk #1: inaccurate or biased responses

One of the most impressive things about ChatGPT is its ability to create responses that sound human. But its ability to sound convincing is also a risk, since it has become notorious for generating inaccurate responses. Without a method for curating outputs, companies risk losing control over how they represent themselves and their expertise.

As Professor Ajay Agarwal notes in Forbes, “ChatGPT is very good for coming up with new things that don't follow a predefined script. It's great for being creative... but you can never count on the answer.”

Additionally, the training datasets of large language models consist of data scraped from the internet, which means they are culturally biased in favour of English-speaking, Western audiences.

Companies that provide specialised services for local communities should be particularly cautious about deploying generative AI that may spread inaccuracies or respond in biased ways.
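One mitigation is a curation layer that checks every model draft before it reaches a customer and falls back to a vetted response when a check fails. The sketch below illustrates the idea; the function, constant, and pattern names are all hypothetical, not a real library API.

```python
# A hedged sketch of an output-curation layer: the model's draft is shown
# to the customer only if it passes simple checks; otherwise a vetted
# fallback is used. All names here (curate_response, VETTED_FALLBACK,
# BANNED_PATTERNS) are hypothetical illustrations.
import re

VETTED_FALLBACK = "Let me connect you with a human agent who can help."

BANNED_PATTERNS = [
    re.compile(r"guarantee(d)?\s+returns", re.IGNORECASE),  # financial promises
    re.compile(r"medical\s+diagnosis", re.IGNORECASE),      # out-of-scope advice
]

def curate_response(draft: str, max_length: int = 500) -> str:
    """Return the model draft only if it passes basic safety checks."""
    if len(draft) > max_length:
        return VETTED_FALLBACK
    if any(pattern.search(draft) for pattern in BANNED_PATTERNS):
        return VETTED_FALLBACK
    return draft
```

Simple pattern checks like these are no substitute for human review or a trained safety classifier, but they show where a curation step sits in the pipeline.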

Generative AI risk #2: data privacy concerns

Data privacy made recent headlines when ChatGPT was banned in Italy over concerns that it did not comply with Europe’s General Data Protection Regulation (GDPR), which limits how companies can store and use personal data.

Because ChatGPT was trained on large amounts of data from the internet, its training dataset includes publicly available information about individual people, and the model is further trained on data from its users.

Under GDPR, individuals have the right to consent to having their personal data used in this way, as well as to correct or delete personal data that OpenAI has stored. But the training dataset is so vast that honouring these rights may be extremely difficult.

And while user data is anonymised for training purposes, the handling of personal data remains a fundamental concern, one likely to prompt further consumer-protection legislation in other countries.
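One practical precaution is to strip obvious personal data from user messages before they are sent to a third-party model. The sketch below illustrates the idea with two simple patterns; production systems would need far more robust PII detection than this, and the function name is a hypothetical stand-in.

```python
# A hedged sketch of redacting obvious personal data before a user message
# is sent to a third-party model. The patterns and function name are
# illustrative only; real deployments need far more robust PII detection.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact_pii("Contact me at jane@example.com or +1 416 555 0199."))
# -> "Contact me at [EMAIL] or [PHONE]."
```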

Generative AI risk #3: ethical and regulatory risks

The two issues above are part of a broader range of ethical risks around large language models, particularly for organisations in heavily regulated industries. For example, ChatGPT’s potential to fabricate information means significant guardrails need to be in place before it is used in industries like healthcare or financial services, where false outputs can cause real harm as well as reputational damage.

In general, issues surrounding defamation, creative rights, and data protection are driving growing international interest in regulating large language models like ChatGPT. Given how nascent these models are, it’s important to approach integrations with caution, as their regulatory future is still uncertain.

For organisations considering using these models for consumer support, that doesn’t mean abandoning them entirely; instead, it’s possible to leverage large language models safely with a middle layer that protects end users while driving business growth.
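The sketch below illustrates what such a middle layer might look like: intents are classified first, high-risk topics receive vetted template answers, and raw model output never reaches the customer directly. All names in it (classify_intent, VETTED_ANSWERS, draft_with_llm) are hypothetical, not any vendor’s actual API.

```python
# A hedged sketch of a "middle layer" between customers and an LLM: intent
# is classified first, high-risk intents receive vetted template answers,
# and raw model output never reaches the customer directly. All names here
# are hypothetical illustrations.

VETTED_ANSWERS = {
    "refund_policy": "Refunds are processed within 5 business days of approval.",
    "account_security": "Please call our verified hotline to secure your account.",
}

def classify_intent(message: str) -> str:
    """Toy keyword classifier; a real system would use a trained model."""
    text = message.lower()
    if "refund" in text:
        return "refund_policy"
    if "hacked" in text or "stolen" in text:
        return "account_security"
    return "general"

def draft_with_llm(message: str) -> str:
    """Stub: in production this would call the LLM, then curate its draft."""
    return "Thanks for reaching out! A team member will follow up shortly."

def handle_message(message: str) -> str:
    intent = classify_intent(message)
    if intent in VETTED_ANSWERS:
        return VETTED_ANSWERS[intent]  # high-risk topics get vetted answers
    return draft_with_llm(message)     # only low-risk queries reach the LLM

print(handle_message("I think my account was hacked"))
# -> "Please call our verified hotline to secure your account."
```

The design choice is the point: the LLM drafts, but a deterministic layer decides what customers actually see.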

How Proto can help

Large language models are powerful resources, but that power needs to be directed effectively for them to be truly valuable. It is important for organisations to carefully consider the ethical implications and risks of using large language models and to take appropriate steps to mitigate them.

Proto’s AI experts work with government and private organisations to ensure that large language models such as BERT generate polished responses tailored to the appropriate industry. Our natural language processing engine, proLocal™, has been trained on proprietary local and mixed-language data that increases conversational accuracy in local languages such as Kinyarwanda and Tagalog.
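For readers curious what BERT-style intent classification looks like in code, here is a generic illustration using the open-source Hugging Face transformers library. This is not Proto’s proprietary proLocal™ engine; the public multilingual checkpoint below is a placeholder that would need fine-tuning on labelled intent data before it produced meaningful predictions.

```python
# An illustrative sketch of BERT-style intent classification with the
# Hugging Face transformers library. This is NOT Proto's proLocal engine;
# the public multilingual checkpoint is a placeholder that must be
# fine-tuned on labelled intent data before its outputs mean anything.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bert-base-multilingual-cased",  # placeholder; fine-tune first
)

# Tagalog: "I want to know the balance of my account"
print(classifier("Gusto kong malaman ang balanse ng aking account"))
```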

In the future, Proto will carefully integrate with ChatGPT to significantly improve intent classification and response times for CX automation without exposing consumers to ChatGPT output directly. As large language models compete to outpace one another, Proto helps support consumers in the emerging world by ensuring their safety comes first.

If you'd like to learn more about how Proto can help you protect consumers, don't hesitate to reach out to our product experts today.

About Proto

Proto is the leading generative AICX platform for local languages. Its inclusive chatbots excel at use cases for customer experience, consumer protection, employee experience, and indoor navigation. The Proto AICX Platform is powered by the proprietary ProtoAI™ engine, which delivers exceptional text and voice accuracy in underserved languages, alongside large language models such as ChatGPT. Proto's enterprise-level capabilities include data privacy options such as hybrid and on-premise hosting, customised CX analytics, and a 24/7 prompt engineering service.
