Using Generative AI in your Business? Risks & Opportunities

It’s less than a year since OpenAI released its artificial intelligence (AI) chatbot, ChatGPT. Since then, it and its alternatives have transformed how we do business.

Such tech is known as generative AI, a form of machine learning. Trained on vast datasets, it can create content such as text, code, audio, images, simulations, and video. Think chatbots such as Google Bard and Snapchat’s My AI, image generators DALL-E 2 and Midjourney, and Microsoft’s voice generator VALL-E.

Generative AI may already be part of your business strategy. According to the Australian Securities and Investments Commission (ASIC), AI can drive efficiencies in operations – price prediction, hedging, and automating performance tasks – as well as in risk management, such as fraud detection.

The hype and risks of generative AI

Generative AI has been dubbed industry’s next big disruptor, as big as the internet and the smartphone have been. Microsoft co-founder Bill Gates has forecast a future where we each have an AI personal assistant. Scholarly researchers across several disciplines last month released this crystal-ball-gazing opinion paper.

The Australian eSafety Commissioner has issued a position statement on generative AI, detailing the pros and cons. It frames opportunities through the lens of online safety, including the potential to:

  • Detect and moderate harmful online material more effectively and at scale

  • Offer evidence-based, scalable support that’s accessible and age-appropriate to meet young people’s needs

  • Enhance learning opportunities and digital literacy skills

  • Provide more efficient methods of obtaining consent for data collection and use.

KPMG has listed AI use cases and potential opportunities. Industries expected to be early adopters include health, banking, finance, education, the creative industries, and engineering, says Australia’s Chief Scientist, Dr Cathy Foley. She says it’s almost impossible to accurately forecast the opportunities generative AI will offer over the next decade.

However, the risks are also clear. Here are three broad categories:

  • Not performing as expected, such as ‘hallucinating’ responses, responding inappropriately to users, or producing inherently biased output

  • Being used maliciously for harmful purposes, such as creating and amplifying content that is discriminatory, deceptive, false, harmful and/or defamatory, including scams and phishing (and other cyber security breaches), or a chatbot giving the wrong instructions for preparing machinery

  • Overuse, or inappropriate or reckless use, in a particular context, such as exposing a child user to online pornography.

Ethical and copyright issues include businesses claiming AI-generated content as their own. If AI-produced code or information becomes part of a deliverable or product, it could infringe copyright or other intellectual property rights, damaging your brand’s reputation, says KPMG.

The eSafety Commissioner cautions that, while many companies are quickly developing and deploying their own generative AI technologies, those organisations need to attend to risks, protection, and transparency for regulators, researchers, and the public.

The Federal Government is reviewing the Privacy Act 1988 to ensure it’s fit for purpose in the AI era. The eSafety Commissioner has also flagged concerns about generative AI regarding data ownership, national security, law enforcement, the environment, and the labour market, so further regulation may be needed.

Unauthorised disclosure

There’s currently a paucity of regulatory or legal frameworks to protect businesses against the risks of AI. That means if you and your staff are using generative AI without internal or external buffers, your business could be inadvertently:

  • Disclosing intellectual property, trade secrets or confidential information, and

  • Exposing itself to liability for violating privacy, consumer protection, or other laws.

For example, text or documents you upload to a generative AI tool can feed its training dataset. You can turn off chat history in ChatGPT’s settings (an ‘incognito’ mode), so conversations aren’t used for training and are retained for only 30 days for security purposes. But your data still leaves your control for that time, so be sure your staff know there’s no safe way to upload confidential information to ChatGPT. It is for this reason that larger companies are opting for ‘internal’ AI systems that don’t expose their data to people outside their organisations.
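
One practical form of the ‘internal buffer’ mentioned above is a pre-upload gate that inspects staff prompts before anything reaches an external AI service. The sketch below is purely illustrative – the check_before_upload function and the keyword patterns are hypothetical, not part of any vendor’s API – and a real deployment would tailor the patterns to its own documents and data types:

```python
import re

# Hypothetical markers of confidential material. A real deployment would
# tailor these patterns to its own documents, clients, and data types.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\bcommercial[- ]in[- ]confidence\b", re.IGNORECASE),
    re.compile(r"\btrade secret\b", re.IGNORECASE),
]


def check_before_upload(text: str) -> str:
    """Return the text unchanged if it looks safe; raise if it doesn't.

    Acts as a minimal 'internal buffer': staff prompts pass through this
    gate before being sent to any external generative AI service.
    """
    for pattern in CONFIDENTIAL_PATTERNS:
        if pattern.search(text):
            raise ValueError(
                f"Blocked: prompt matches confidential marker {pattern.pattern!r}"
            )
    return text


if __name__ == "__main__":
    try:
        check_before_upload("Summarise this commercial-in-confidence contract...")
    except ValueError as err:
        print(err)  # the prompt never leaves the organisation
```

Keyword matching like this will miss plenty, so it complements rather than replaces staff training and policy; the point is simply that the check happens inside the organisation, before any data leaves it.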

You may be using another platform to automate some of your processes. Technically, the third-party organisation owns the platform and could use the data you upload for its own purposes. Video-conferencing service Zoom recently had to clarify terms and conditions which had appeared to allow it to use any audio, video, chat, screen-sharing content, attachments, and so on to train its own AI models.

Violations of consumer protection laws (GDPR)

The General Data Protection Regulation (GDPR) applies if your business targets or collects data relating to people in the European Union. Known as the world’s toughest privacy and security law, it can be daunting for SMEs to comply with. Check out this official EU website for guidance, and this one for how to manage the privacy risks of emailing EU countries.

Lack of policy, training and monitoring

So, how can your business develop responsible AI usage practices? KPMG suggests you:

  • Develop a policy on how you’ll train staff to use AI

  • Ensure that policy spells out both appropriate and inappropriate usage

  • Schedule when you’ll review the policy and associated processes, and assign the task of ongoing monitoring

  • Be transparent in your terms and conditions for clients/customers about how you use AI.

A recent government discussion paper on safe and responsible AI use details how organisations around the world are tackling policy, training, and monitoring. The global AI Standards Hub offers useful insights and eLearning modules. Salesforce has tips for reviewing your approach over short-, medium-, and long-term time frames.

For guidance on AI ethics principles, look to the Federal Department of Industry, Science and Resources. Its voluntary framework can help:

  • Build public trust in your product or organisation

  • Boost consumer loyalty in your AI-powered services

  • Positively influence AI outcomes

  • Ensure all Australians benefit from this innovative technology.

If you provide a service that includes advice generated through AI, you should ensure your business insurance, such as professional indemnity insurance, is adequate to manage your risk.