Why you don’t need an AI policy (but if you have to, include these 10 things)
Attractions are no strangers to policies. We have them for ticketing, refunds, workplace safety and fire drills. But here’s the thing: we don’t recommend you write an AI policy.
Why not? Because AI is unlike anything else you’ve tried to regulate. It’s nearly impossible to predict all the ways AI will show up in your workflows, creative projects or visitor experience. Try to legislate it too tightly and you’ll be outdated before the ink is dry.
Instead, what really matters are the same foundations that make teams great without AI:
- Security: the guardrails you already use still apply
- Data education: helping people understand the data they’re working with and how to use it safely and well
- Trust: if your team has the right context, they’ll make the right calls
If someone makes a poor choice with AI, that’s either a hiring problem or a context problem, not a policy one.
And here’s the kicker: even if you haven’t “rolled out” AI yet, it’s already infiltrated your workplace. From design software to email apps, AI is baked into tools you’re already using. And if you try to block it? One in three employees is willing to personally pay for AI at work when their employer won’t. Translation: your team is already experimenting.
Still, sometimes the board, funders or legal counsel insist you must have an AI policy, especially for government organizations. Fair enough. If that’s the case, here’s how to make it practical without tying yourself in knots.
10 tips for a practical AI policy
1. Label AI-generated content
From exhibition text to marketing copy, acknowledge AI’s role and always have a human review before publishing.
As an example, check out the voice of the visitor data in Dexibit. Every attribute such as ‘theme’, ‘topic’ or ‘emotion’ generated from visitor freeform comments is labeled with a purple AI badge to show it is AI-generated. In the attribute detail in the data explorer you’ll find notes on its context and limitations.
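If you’re building similar labelling into your own tools, the underlying pattern is simple: carry a provenance flag alongside every model-derived value so the interface can badge it. Here’s a minimal sketch in Python (the field names, the `tag_ai_attribute` helper and the model name are hypothetical illustrations, not Dexibit’s actual schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Attribute:
    """One derived attribute on a visitor comment, e.g. 'theme' or 'emotion'."""
    name: str
    value: str
    ai_generated: bool = False  # drives the AI badge in the UI
    notes: str = ""             # context and limitations, shown in the detail view

def tag_ai_attribute(name: str, value: str, model: str) -> Attribute:
    # Flag every model-derived attribute so it can never be
    # mistaken for a human-entered value downstream.
    return Attribute(
        name=name,
        value=value,
        ai_generated=True,
        notes=f"Generated by {model} on {date.today()}; human review required before publishing.",
    )

theme = tag_ai_attribute("theme", "wayfinding", model="gpt-4o")
print(theme.ai_generated)  # True -> render the AI badge
```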
2. Keep human oversight
AI is a co-pilot, not an autopilot. People remain accountable for every decision and output.
Going back to the voice of the visitor example, we suggest the ‘Recommended reply’ be reviewed by one of your guest services representatives before it is used publicly to respond to a visitor review.
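One way to make that review step non-negotiable is to enforce it in software rather than rely on habit. A small sketch of the idea, using hypothetical names rather than a Dexibit API:

```python
from dataclasses import dataclass

@dataclass
class DraftReply:
    review_id: str
    text: str
    approved_by: str | None = None  # set by a human reviewer, never by the model

def publish(reply: DraftReply) -> None:
    # Hard gate: an AI-drafted reply cannot go out without human sign-off.
    if reply.approved_by is None:
        raise PermissionError("AI-drafted reply needs human approval before publishing.")
    print(f"Posting reply to review {reply.review_id} (approved by {reply.approved_by})")

draft = DraftReply(review_id="r-1042", text="Thanks for visiting! ...")
draft.approved_by = "guest services rep"  # after reading and editing the draft
publish(draft)
```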
3. Protect data (especially commercial and personal)
Accept the reality that staff are likely to use public AI tools. But hold a hard line: never input private data such as sensitive financial, visitor, member or collection information into a public AI tool.
For example, at Dexibit we use a secure enterprise version of OpenAI, where personal details are removed before any data is processed, so sensitive visitor and collection information stays protected.
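If you want a lightweight safety net of your own, one common pattern is to scrub obvious identifiers before any text leaves your systems. A minimal sketch, assuming emails and phone numbers are the identifiers you care about (real PII detection needs far more than two regexes: names, addresses, membership numbers and so on):

```python
import re

# Hypothetical redaction pass: strip obvious personal identifiers
# from visitor comments before any text is sent to an external model.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

comment = "Loved the dinosaur hall! Email me at jane@example.com or +1 555 123 4567."
print(redact(comment))
# Loved the dinosaur hall! Email me at [email removed] or [phone removed].
```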
4. Craft mindful prompts
Good prompts consider diversity, representation and accessibility from the start. They should also be appropriate: treat prompts as if they’re visible on the gallery wall, representing the culture and upholding the standards of your organization. There’s even a world in which they could be subject to official information requests.
5. Credit sources
If AI surfaces or summarizes material, make sure the original creators are acknowledged (one way to discover them is simply to ask in your prompt).
6. Watch for bias
AI often reflects stereotypes. AI use should consider bias in the training data, prompt and output alike. Stay alert, especially in areas like collections interpretation, marketing imagery or HR.
7. Limit AI in decision making
AI can analyze surveys, ticketing trends or retail data, but it should never replace human judgment on strategy, funding or staffing. Remember you’re in the driver’s seat, aligned to vision and values.
8. Keep creativity human-led
AI can remix what already exists, but it can’t invent something truly new. In attractions, creativity is the spark that makes a visitor’s experience memorable and that spark should stay firmly human. AI might be a brainstorming partner, but the big ideas, the leaps of imagination, the stories that connect… those should always come from people.
9. Don’t overlook the environment
AI isn’t impact free. Training and running models takes real energy and every “quick draft” leaves a carbon footprint. A smart policy encourages teams to pause and ask: is AI the best tool for this job? Sometimes the greener, more responsible choice for both budgets and the planet is to use AI sparingly and lean on human creativity.
10. Keep learning and lead with trust
AI evolves faster than any policy. Encourage experimentation, share learnings across teams, and revisit practices often. Policies don’t replace good people. Trust your team to use AI wisely when given the right tools and context.
TL;DR? You probably don’t need a rigid AI policy. What you really need is a culture of responsibility, curiosity and trust. Give your team the right tools and the right context, and they’ll know how to use AI well.
Because whether you’ve sanctioned it or not, AI is already in your attraction. The question isn’t whether AI will show up in your work, it’s whether your team has the confidence and compass to navigate it responsibly.
Curious about what AI says about your attraction? Dexibit’s generative AI benchmarks show how you compare to your peers and neighbors.
Book a consult with Dexibit and we’ll take you through it.
Want to learn more about Dexibit?
Talk to one of our expert team members about your vision to discover your data strategy and see Dexibit in action.