The short answer is: yes, right now every business needs an AI policy.
If you don’t have one, chances are your team is using AI tools already, in all sorts of creative
ways that you don’t know about and that may or may not be keeping you safe.
And while we call it an AI Policy, it would be more accurate to call it an AI Use Policy: a framework for how AI should be used.
An AI policy isn’t about creating a cage; it’s about creating a safe playground for learning
and experimentation.
Right now, the strategy for all professional businesses should be to upskill our workforce into
confident, AI-literate teams who are able to embrace the wave of AI innovation that we’re
facing.
If we give people safe guardrails, they are much more likely to experiment and learn.
What do we need to cover in an effective, safe, and enabling AI policy?
Here are my top three:
Approved Tools & Handling Confidential Information
Staff need clarity on which tools they can use and where they can share confidential
information.
Asking people not to share confidential information with AI is like asking them to learn how to
cook, but not allowing them to use real ingredients or the oven. They can’t learn like that.
If our team must anonymise information, de-identify documents and generalise their questions, they won't develop their skills nearly as much as they could. They will not be able to embed generative AI into their everyday processes, and they will find it slow and frustrating and drop off.
Or… they will use it secretly. And we will have no control and no oversight.
We must provide our teams with paid, secure accounts, with the right security settings, so
that they can get experience using it in their everyday workflows. Yes, including client data.
Read the terms and conditions and privacy policy of your vendor of choice and make an informed decision, not one based on myths.
Think about it: Microsoft Copilot is generative AI that lives inside your Word and Excel documents. Is it safe enough to share confidential information with?
Your AI policy should offer clear instructions on where confidential data can be shared with AI, so your team can have a good go with it and develop their skills.
Responsible Use of AI
We all know that we should use AI as an assistant, but not to make decisions.
We all know that we should own everything we deliver. If something is challenged, we can’t
say “but the AI said it”.
We tell everyone that they must verify everything that comes out of AI.
But then, most people leave it to their teams to come up with the how.
Your AI policy should give people clear steps for HOW to verify the output of AI.
What methods of verification are acceptable, what to do when they are not sure, and what audit trails to create.
Giving your team clear instructions keeps everyone safe, rather than leaving it to each person to come up with their own ways that may or may not be good enough.
Personal vs. Professional Use
Most IT policies have a clause that forbids or limits the personal use of company resources.
2024 is the year of upskilling. 2025 looks to be the year of process embedding, automation
and significant process redesign.
Given that we are in a time of rapid upskilling, I suggest we allow, and even encourage, personal use of company-provided AI tools.
The more contexts people use it in, the better they get at being sophisticated drivers of this technology, and the more prepared they are to embrace the change and make great advancements with it.
Of course, the policy should touch on using tools in a professional manner.
Other areas
Your AI policy is likely to touch on other topics, like when to disclose to clients that generative AI was used, mandatory training (covering security, verification, etc.) and reporting of significant incidents.
Regular Updates
You will need to review your AI policy more frequently than any other policy you have in
place.
This technology changes quickly, and you want your policy to be relevant and effective.
You will need to update it to reflect new tools and how to set boundaries around new
capabilities as they arrive.
But isn’t AI just another software tool we are using?
Yes, it is.
At some point in the future, your AI policy will merge with your other policies on general use of technology, but for now, it does need to stand on its own because so much of it is unique and new.
And it does need to be there for staff to become familiar with and use as they get started with the tools.
Can I use AI to write my AI policy?
Sure, you could ask ChatGPT or Microsoft Copilot to write you an AI policy.
But it'll likely be quite generic and focus on things like avoiding bias in training data, which is relevant for AI developers, but not so much for us, the AI users.
When I develop AI policies with businesses, I start with what I know I should have in the
policy, I bring in adjacent policies to align, and yes, I use AI to help me put it all together.
But like with all AI use right now, I drive this process, not the AI.
So I have a conversation with the AI that goes something like this:
- Describing my circumstance: 'Hi Chatty, I am developing an AI use policy for a [describe business]'
- Brainstorming what should be included in the policy: 'Let's brainstorm the topics that should be covered in the policy'
- Describing my principles and boundaries: 'I want to encourage safe exploration of generative AI. My staff are allowed to use [tools], but not [other tools]…'
- Working through each section: 'Let's focus on the training section. People should…'
- Uploading adjacent policies for style and alignment: 'Here are our [adjacent policies] documents. What areas should we think about to make sure everything aligns? Do I need to change anything in the other policies?'
To finish, I may take the final document and feed it back to the AI, asking it to review the policy.
Research shows that we get much better results when we (1) ask AI to plan its work before carrying it out, and (2) ask it to review its work when done and improve it.
Finally
I hope this helps!
If you learned something new from this article and want more such material, sign up to
Inbal’s regular updates here: www.inbal.com.au/join.