Using generative AI tools responsibly and respectfully 

Like many other businesses, the Mindshift team have had a play around with ChatGPT and can see some great uses for it in our work. For example, we might have an idea for a blog post, so we prompt ChatGPT to expand on our ideas and put some structure around our thoughts.

We’ve also found it great for suggesting headlines for social media and blog posts.

Prominent generative AI tools include OpenAI’s ChatGPT, an app built on their GPT-3.5 and GPT-4 large language models; Microsoft’s Bing search and Copilot products, which also leverage GPT-4; and Google’s Bard.

Artificial intelligence (AI) tools like ChatGPT generate incredibly convincing human-like text. They also provide tangible evidence of the impact AI will have on our lives in the future.

Think about all the commentary, buzz, and time humans have already spent on ChatGPT since it was made public nine months ago. A staggering 100 million people are currently using ChatGPT, and the website generated 1.6 billion visits in June 2023.

What could happen if you entered private or personal information into the ChatGPT prompt box? Generative AI models are trained on vast amounts of information, some of which is personal information. This presents various privacy risks, including how that personal information was collected, whether there is sufficient transparency, and whether the information is accurate or contains bias.

The battle to detect AI-produced work is as much big business as the development of AI tools themselves. Turnitin says its software can spot AI-generated material with 98% accuracy, and it is used by all eight NZ universities.


Guidance from the Office of the Privacy Commissioner 

The NZ OPC has set out its expectations for the use of generative AI by government agencies, and notes this guidance will be reviewed as AI tools, their capabilities, and their impact rapidly evolve.

Using public AI tools responsibly and respectfully

The Mindshift team are all for embracing technology and efficiency, but we know this needs to be done responsibly and respectfully.

Using a public AI tool like ChatGPT brings benefits in productivity and efficiency, but it also presents risks, including bias and discrimination, lack of transparency, over-reliance by humans, and privacy concerns.

The team at Simply Privacy say: “Now is the time for organisations exploring AI to take an ‘ethics by design’ approach to anticipate and address potential risks. Building privacy, and now responsible AI, into the design phases of online services is the best way to manage risks further along the data lifecycle.”


We hope these tips help start conversations in workplaces about using public AI tools like ChatGPT securely.

1) Never enter sensitive or personal information into public AI tools like ChatGPT

AI is powered by data: the information users submit can be used to train the underlying language model, a process referred to as machine learning (ML), and often that information will include personal information.

So all the usual privacy risks are amplified: data breaches, lack of transparency, unfair or incorrect outputs, and excessive data retention.

Anything you put into ChatGPT becomes the property of OpenAI, the creators of ChatGPT, and potentially ends up in the public domain. This alone means it’s worth exercising caution when using it and other large language models.

Bear in mind that ChatGPT itself has already been hacked, so anything anyone has uploaded to the platform could be out there, circulating among inquisitive users or cybercriminals.

2) Verify with external sources before using AI-generated results

AI tools like ChatGPT are impressive in their ability to mimic human writing and produce solid-sounding results. But, as with any information, it’s critical to verify those results, especially facts and statistics, before using them.

3) Be aware of potential biases in AI results

While ChatGPT and other AI tools are trained on enormous amounts of data, that data can be misleading and can carry bias, intentional or otherwise, from its creators. Take a moment to consider whether the results could be affected by bias of some sort.

4) Treat AI tools like a knowledgeable but overconfident friend

True, they may be knowledgeable, but no one knows everything about everything. A little caution goes a long way: don’t take anyone’s word for it without verifying the results before using them yourself.


Need a company policy on the use of generative AI tools? 

If your business is developing a policy on this, you may find this AI Usage Policy Checklist from Kordia a helpful start. 

Kia kaha. 

