Without human gatekeepers, ChatGPT plugins can jeopardise brand safety

28 April 2023

With the AI gold rush underway, brands are flocking to this promising technology. Generative AI is set to revolutionise content marketing, cutting costs and speeding up production.

Until recently, ChatGPT had been disconnected from the internet, unable to draw upon information from search engines. However, a flurry of new ChatGPT plugins is changing this, allowing the chatbot to start browsing the web.

The response to this development has been dizzying, stirring up a number of concerns over how the technology is used. As usage accelerates, checks and balances must be put in place to ensure safe, responsible, and ethical widespread adoption.

Brands rushing to onboard plugins and capitalise on AI content creation tools should be mindful of the risks involved. A seemingly quick fix for scaling content could prove disastrous in the long run.

With this in mind, here are four risks associated with ChatGPT plugins that brands should be aware of.

Privacy issues with ChatGPT plugins

“Though not a perfect analogy, plugins can be ‘eyes and ears’ for language models, giving them access to information that is too recent, too personal, or too specific to be included in the training data,” OpenAI stated in its plugin announcement.

While OpenAI has promised to safeguard its plugin landscape, developers can deploy their own versions. A self-regulating ecosystem may be unable to ward off the proliferation of unsafe imitations and their unforeseen consequences. On top of this, keeping up with the constant threat of bugs is a formidable challenge.
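To see how low the barrier to entry is, consider how a plugin is declared. Below is a hedged sketch of a plugin manifest using the field names from OpenAI's published `ai-plugin.json` format; the names and URLs themselves are hypothetical:

```json
{
  "schema_version": "v1",
  "name_for_human": "Example Lookup",
  "name_for_model": "example_lookup",
  "description_for_human": "Look up example data.",
  "description_for_model": "Use this to fetch example records for the user.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Anyone who can host a manifest like this, plus an OpenAPI spec, can offer a plugin. Nothing in the manifest itself proves the service behind it is safe, which is exactly why a self-regulating ecosystem is hard to police.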

As plugin options skyrocket, ChatGPT will capture more sensitive data in order to carry out more complex tasks. Italy has already banned the chatbot following a privacy breach in which sensitive user information, including first and last names and payment details such as the last four digits and expiry dates of users' credit cards, was exposed for more than nine hours.

Given this, brands should be mindful of what information they share with generative AI technologies, and for what purpose, because privacy is perpetually at stake.

Security threats posed by ChatGPT plugins

It is one thing to generate a response and another to take an action. ChatGPT plugins shift the technology from the former to the latter, and as its capacity to act grows, so too does the risk of security threats.

Plugins can be used to jailbreak language models, introducing countless vulnerabilities. OpenAI has even launched its Bug Bounty Program, offering hefty cash incentives to those who expose scenarios in which its safety filters can be bypassed or malicious code can be released.

It might also be possible to use unreleased ChatGPT plugins by setting up match-and-replace rules through an HTTP proxy. Permission to use plugins is validated only by client-side checks, which can be bypassed. In March, a hacker found 80 unreleased or experimental plugins this way.
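Why client-side-only checks fail is easy to sketch. The following is a minimal, hypothetical illustration, not OpenAI's actual implementation: a server that trusts an approval flag supplied by the client can be fooled by anyone who rewrites requests, for instance through an HTTP proxy, while a server that consults its own allow-list cannot.

```python
# Minimal sketch of why client-side-only permission checks fail.
# All names here are hypothetical, not OpenAI's actual implementation.

APPROVED_PLUGINS = {"web_browser"}  # plugins the service actually released

def handle_request_client_trusting(request: dict) -> str:
    # BROKEN: trusts an "approved" flag that the client itself supplies.
    # A proxy with a match-and-replace rule can simply set this flag.
    if request.get("client_says_approved"):
        return "running " + request["plugin"]
    return "denied"

def handle_request_server_validated(request: dict) -> str:
    # FIXED: the server consults its own allow-list and ignores
    # anything the client claims about its permissions.
    if request.get("plugin") in APPROVED_PLUGINS:
        return "running " + request["plugin"]
    return "denied"

# A spoofed request for an unreleased plugin:
spoofed = {"plugin": "unreleased_experiment", "client_says_approved": True}
print(handle_request_client_trusting(spoofed))   # running unreleased_experiment
print(handle_request_server_validated(spoofed))  # denied
```

The fix is unglamorous but decisive: authorisation decisions belong on the server, never in state the client controls.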

Overall, plugins have the power to send fraudulent or spam emails, bypass safety restrictions, or misuse information. When brands use plugins, they can expose themselves to these various risks and, for example, end up inadvertently becoming part of a phishing scheme.

Accuracy challenges with ChatGPT plugins and real-time data

Using plugins, ChatGPT has been granted access to third-party knowledge sources and databases. For example, a Bing API allows the chatbot to incorporate data gathered from the search engine into its responses, citing sources in the process.

Previously, the generative AI technology could only include information pertaining to dates, events, and people up until September 2021. Now, ChatGPT can bring real-time data into the mix, which can include unverified or false information.

ChatGPT has already had “hallucinations,” meaning it sometimes produces convincing text that is false. When plugged into the web, the technology could also draw on unreliable sources, further persuading users with fabricated responses.

When it comes to brand safety, this is a major issue. If brands rely too heavily on ChatGPT to produce content at scale without a human gatekeeper to oversee the verification of facts and information, they run the risk of publishing and sharing content that contains hallucinations.
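As a thought experiment, the gatekeeper role can be sketched in a few lines of code. This is a hypothetical editorial pipeline, not a real Contentoo tool: AI-generated drafts stay unpublished until a named human editor has verified the facts and the sources behind them.

```python
# Hypothetical editorial pipeline (a sketch, not a real Contentoo tool):
# AI-generated drafts stay unpublished until a named human editor has
# checked the facts and the sources behind them.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Draft:
    text: str
    sources: list = field(default_factory=list)  # citations backing the claims
    verified_by: Optional[str] = None            # name of the human gatekeeper

def publish(draft: Draft) -> str:
    # The gatekeeper rule: no human verification, no publication.
    if draft.verified_by is None:
        return "HELD: awaiting human fact-check"
    if not draft.sources:
        return "HELD: no sources cited"
    return f"PUBLISHED (verified by {draft.verified_by})"

draft = Draft(text="ChatGPT's built-in knowledge once stopped at September 2021.")
print(publish(draft))              # HELD: awaiting human fact-check

draft.sources.append("openai.com/blog/chatgpt-plugins")
draft.verified_by = "Jane Editor"
print(publish(draft))              # PUBLISHED (verified by Jane Editor)
```

The point is not the code but the invariant it enforces: nothing generated by the model reaches an audience without a human name attached to the verification step.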

Ethical concerns surrounding ChatGPT plugins and AI development

Last month, more than 1,000 artificial intelligence experts, researchers, and supporters signed a letter calling for a six-month pause on the creation of “giant” AIs. The signatories include Elon Musk, who co-founded OpenAI; Emad Mostaque, founder of Stability AI; and Steve Wozniak, co-founder of Apple.

The letter argues that the proposed pause is not a complete moratorium, but rather “merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

It also asks AI labs and independent experts to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” In such a scenario, the letter suggests, OpenAI would join forces with other AI labs to ensure that their systems are safe beyond a reasonable doubt.

With the rise of plugins, if misused, ChatGPT could conceivably carry out unethical tasks on behalf of an individual or brand. Imagine, for instance, that a malicious agent took control of your technology and used your channels to promote political propaganda or hate speech.

Without human beings directly involved, the ethical risks associated with plugins are incalculable.

We are working on an AI code of conduct

To ensure ethical and responsible use of AI, Contentoo is developing a Code of Conduct that addresses the risks associated with AI use.

We will take steps to prevent bias, ensure transparency, and protect privacy rights. We hope to maintain records of AI use and provide training and education to employees to ensure that they understand the capabilities and limitations of AI.

Contentoo is transparent about the benefits and risks of using AI and welcomes feedback and questions from stakeholders. Using our AI Code of Conduct, we will ensure our use of AI is ethical, responsible, and aligned with our values and mission.

Contentoo AI will revolutionise content marketing

In the near future, we are launching our AI Content Creation and Content Refresh tools, which will empower brands to accelerate growth at every stage of their conversion funnels.

Whether you’re looking to optimise existing content or create new content that scales, Contentoo AI is here to help.

Want to stay up to date? Sign up for beta access here.
