Opinions expressed by Entrepreneur contributors are their own.

It’s ironic that OpenAI would become a victim of its own success. A recent piece of investigative journalism by TechCrunch examined the company’s ChatGPT Store, which allows users to create and share custom chatbots, and found the platform riddled with unethical and potentially illegal applications.

Among the three million custom ChatGPT apps on offer in the store were some that plainly violate copyrights, others designed to help students evade plagiarism detection and still others that impersonate people or organizations without proper consent or legal rights.

Out of curiosity, I submitted a prompt to ChatGPT to “write an article about the best practices for avoiding both copyright infringement and irresponsible or unethical uses of generative AI tools.” In response, the AI whipped off a piece that contained the following caution: “These tools can produce original content based on input data, but they can also inadvertently replicate existing copyrighted material.”

My second prompt, “Write an article about why human supervision of AI tools is essential,” produced the following statement from ChatGPT: “Human supervision is essential for ensuring that AI tools operate within the bounds of existing legal frameworks, especially concerning data protection, privacy laws, and copyright.”

Related: What We Can Learn From the OpenAI Governance Crisis

Do As I Say

In response to TechCrunch’s inquiry about how the company prevents chatbot apps that violate its terms of use, an OpenAI spokesperson said the following: “We use a combination of automated systems, human review and user reports to find and assess GPTs that potentially violate our policies.”

Apparently, OpenAI hasn’t taken ChatGPT’s advice about the critical importance of human supervision when it comes to its own wildly successful AI. Instead, it most likely relies on automated systems, without sufficient human oversight, to monitor the legitimacy of the custom apps offered in its store.

Given the massive number of custom chatbots in the ChatGPT Store, it’s understandable that OpenAI would seek to automate the vetting process as much as possible in the interest of time and efficiency. Whatever the case, its brand has taken a hit as a result: not big enough to inflict fatal damage, but a hit nonetheless.

The Four Keys

As AI becomes more integrated into business processes, the potential for these tools to impact brand reputation — positively or negatively — grows. Missteps in AI implementation can lead to public relations nightmares, customer distrust, and long-term damage to a company’s image. Therefore, businesses must approach AI use with strategic oversight to protect and enhance their brand reputation.

The specific policies needed for proper human oversight are unique to each business, depending on how it integrates AI into its operations. However, I propose the following four key principles to prevent damage to your brand’s reputation.

Related: AI Is Changing the Way We Look at Job Skills — Here’s What You Need to Do to Prepare.

1. Be open and transparent

Provide clear, accessible information about your AI systems, particularly regarding how data is collected, processed and used, to reassure customers and stakeholders of your commitment to ethical practices.

At my company, Presspool.ai, for example, we leverage AI to identify the ideal target audiences for our customers’ B2B advertisements based on secure, first-party data, to track key performance indicators such as click-throughs and conversions, and to optimize marketing campaigns. This way, our customers are assured that the AI-powered, targeted distribution of their ads complies with all relevant data privacy regulations.

2. Prepare for failures and missteps

No technology is foolproof, and AI is no exception. Businesses must have contingency plans for when AI systems fail or produce unintended consequences. This includes having human oversight in critical decision-making processes and establishing clear channels for customer feedback and complaints regarding AI interactions.

Related: 4 Ways Startups Go Wrong When Working With AI

3. Engage in continuous improvement

Societal standards for responsible AI use are rapidly evolving. Businesses must commit to continuous learning and improving their AI systems to avoid ethical, legal, and reputational risks. Review and update your AI strategies regularly to reflect new developments and stakeholder expectations.

4. Educate your workforce

Employees should be well-informed about the role of AI in your business, including the benefits and potential risks. Providing training on responsible AI use and ethical considerations helps ensure that your workforce can effectively manage AI tools in a way that protects your brand reputation.

Related: AI is Disrupting Higher Education — Will Traditional Colleges Survive?
