In 2009, Pixar released an animated movie that debuted at the 62nd Cannes Film Festival. Its name was simply “Up”.

The protagonist of the story is Carl Fredricksen. A self-proclaimed grumpy old man, Carl specialises in saying “no”. No to moving into a care facility, no to local developers, no to kids … and the list goes on.

But what does this have to do with Generative AI?

Many companies have taken a Carl approach to AI adoption. Some go so far as to block access to tools like ChatGPT, yet this approach is farcical. Denying Generative AI use is akin to an ostrich pushing its head into the sand.

The usual reason I hear for the Carl approach is that the organisation either does not understand AI or fears data leakage. Neither of these arguments really holds water. In 2024 every business leader should be AI aware, and data leaks are more likely to occur over email than anywhere else.

Recent studies report that more than 60% of office workers use some form of Generative AI on a weekly basis. However, most also admit to keeping quiet about that use because they are unsure whether they should be doing it at all.

So how can organisations not ‘Carl’ their staff’s AI adoption? Through the development of sound Generative AI use guidelines.

If you already have an ethical and responsible AI framework, then these guidelines should sit within that document. If your organisation does not have such a framework yet, a simple AI use guidelines document will suffice.

Generative AI guidelines

Most of the AI guidelines documents I have read also take a Carl approach. They are often referred to as a policy and, like a policy document, they are full of “don’t do this” and “don’t do that”. All written in the negative. Why are we demonising a tool that can offer amazing efficiency and productivity gains?

Generative AI guidelines should be a mixture of inspiration and guidance, not just a list of Carl-styled “do nots”.

Here are my top three elements of any good Generative AI use guidelines.

1. Inspiration

Start your guide with a brief summary of what Generative AI is, where it came from, and how staff in your company or industry might use it. Yes – give the readers actual use cases!

I like to include example prompts that they can use right away to get their feet wet. If your organisation does not provide an AI tool (such as Microsoft Copilot), then the safest bet is to list some free tools that can help in completing the suggested use case.

For example, under the ‘Build Slides’ use case you can mention tools like Canva, BeautifulAI or SlidesAI.

2. Caution

This is the area of the document where you provide some guidance around the possible risks of AI adoption.

Please keep in mind that you work with adults, not primary school children, so take the time to explain what each risk is and how it manifests. Use examples to illustrate your point rather than just ‘Carl-ing’ it.

In this section you might also outline how the guidance you are providing aligns with any other frameworks or standards readily available.

3. Responsibility

This is where you ask the audience to agree to follow the advice and operate AI in an ethical and responsible way.

This will vary for each organisation and might include a request to be transparent with colleagues when the majority of a piece of content has been generated using AI. You may also ask your people to share really great prompts or use cases when they find them – it may help a lot of people be more efficient.

You will need to work through this section carefully to strike the right balance between trust and accountability.

So, there you go. Don’t be like Carl Fredricksen and approach AI as the quintessential grumpy old man. Say yes, trust your people, give them guidance, and watch the magic happen.

Discover what customised AI consulting
could look like for your organisation.

Make an enquiry about Simon’s AI consulting services.

OR

Book a time to discuss your organisation’s needs over a call.