Voice of the visitor: analyzing visitor behavior with sentiment


The late Maya Angelou, poet and activist, once famously said, “People will forget what you said, people will forget what you did – but people will never forget how you made them feel.” Angelou’s observation is especially true in a visitor attraction: the outrage of learning uncomfortable truths in a history museum, the sheer delight and rush of emerging from a rollercoaster at a theme park, or that weird intersection of disgust and joy that creates morbid fascination for young kids at a science center. For those in the industry, how we make our visitors feel is the reason we do what we do.

Traditionally, assessing visitors’ emotional engagement and response has been more art than science, favoring the qualitative over the quantitative. Though a useful part of visitor evaluation practice, digesting each piece of feedback individually creates an administrative nightmare and risks giving unbalanced weight to a single anecdote.

Artificial intelligence, in the form of natural language processing, changes the game. This technology can scour online sources such as social media reviews and offline equivalents such as evaluation surveys, turning freeform visitor comments into quantitative metrics for sentiment and key words. These analytics can in turn be monitored over time or correlated with other data from around the venue, such as how busy it is or what’s on in and around the attraction. Thousands of visitor remarks can be analyzed instantly to reveal insights such as common themes and trends over time. Sometimes this is straightforward, such as “The staff were lovely”. In other cases things get complex, such as “I didn’t dislike the art” or “I liked the ride but not the queue”. Then there’s the super hard: sarcasm and misdirection, with examples such as “I liked this event better last year”. Like any artificial intelligence, the model’s outcomes are subject to a degree of accuracy and, in some cases, a level of interpretation too. The objective isn’t necessarily perfection, but a pragmatic level of confidence suitable to inform decisions.
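To make the mechanics concrete, here’s a minimal sketch using the open-source VADER analyzer bundled with NLTK (not Dexibit’s model) on the example remarks above. The compound score runs from -1 (most negative) to +1 (most positive), and the trickier remarks illustrate why accuracy has its limits.

```python
# A minimal sketch using NLTK's open-source VADER analyzer, not Dexibit's model.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
analyzer = SentimentIntensityAnalyzer()

remarks = [
    "The staff were lovely",                 # straightforward positive
    "I didn't dislike the art",              # negation makes this harder
    "I liked the ride but not the queue",    # mixed sentiment
    "I liked this event better last year",   # implicit criticism, hardest of all
]

for remark in remarks:
    scores = analyzer.polarity_scores(remark)
    # 'compound' summarizes the remark on a -1 (negative) to +1 (positive) scale.
    print(f"{remark!r}: compound={scores['compound']:+.2f}")
```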

At Dexibit, we’ve recently released our Sentiment Analysis model, the latest addition to our new Insights module. This model follows research in partnership with the team at the Museum of Modern Art (MoMA) in New York. Our model draws upon data from social media integrations such as Facebook and uploaded evaluation remarks from intercept surveys to transform visitor comments into actionable insights. Interestingly, we’ve found one of the benefits of using data from digital channels in addition to onsite surveys is that visitors are more likely to leave a longer, more detailed response – providing greater opportunity for analysis. It turns out they also include emojis, which make for colorful word clouds, especially cute for zoos and aquariums!

This data can be explored in the Insights module itself, or visualizations can be added to any dashboard or report – such as a word cloud showing the most common phrases mentioned by visitors. We’ve combined handy features, such as the ability to slice insights by sentiment (for example, looking only at negative responses) or to sort for the most common negative words. We’ve also added a query builder to manage views that exclude a dictionary of terms – each venue is equipped with a default, globally applicable exclusion list (such as “I” or “very”). We expect most will want to add various forms of their venue name and other common terms associated with their venue that aren’t necessarily telling of visitor feedback. As an ethics decision, we have chosen to represent the data as it is for now, so this dictionary can also be used to protect staff from reading inappropriate remarks, should you wish.
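As an illustration of the exclusion dictionary idea, the sketch below uses the open-source wordcloud library rather than the Insights module itself; the comments and the venue name are placeholders, and wordcloud’s built-in STOPWORDS set stands in for the globally applicable default list.

```python
# A sketch of a word cloud with an exclusion dictionary, using the open-source
# `wordcloud` library. The venue name and comments here are placeholders.
from wordcloud import WordCloud, STOPWORDS

comments = " ".join([
    "Loved the dinosaur gallery, staff were lovely",
    "The Example Museum queue was very long but the cafe was great",
    "Example Museum gift shop a bit pricey",
])

# Default stopwords plus venue-specific terms that aren't telling of feedback.
exclusions = set(STOPWORDS) | {"example", "museum"}

cloud = WordCloud(stopwords=exclusions, background_color="white").generate(comments)
cloud.to_file("visitor_word_cloud.png")  # or cloud.to_image() for inline display
```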

Sentiment analysis helps visitor attractions monitor how visitors are feeling over time, similar to a satisfaction measure. It helps discover patterns of when visitors are more likely to have a negative experience and identify why this occurs by highlighting, in visitors’ own words, the key problem areas to work on. For example, we’ve found trending topics amongst our portfolio of venues include common complaints about pricing, queues, seating, restrooms and environmental concerns.
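To show what that monitoring might look like outside the platform, here’s a brief sketch assuming a hypothetical export of already-scored comments with date, comment and compound columns; it simply tracks a monthly average and surfaces the most common words in negative remarks.

```python
# A sketch of trend and theme spotting over a hypothetical export of scored
# comments ('date', 'comment', 'compound' columns are assumptions).
from collections import Counter

import pandas as pd

df = pd.read_csv("scored_comments.csv", parse_dates=["date"])

# Average sentiment by month, to watch how visitors are feeling over time.
monthly = df.groupby(df["date"].dt.to_period("M"))["compound"].mean()
print(monthly.tail())

# Most common words in negative remarks - the problem areas in visitors' own words.
negative = df.loc[df["compound"] < -0.05, "comment"].dropna().str.lower().str.split()
print(Counter(word for words in negative for word in words).most_common(10))
```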

What makes a good benchmark for sentiment? We suggest grading with a letter scale – an A for over 90% through to an F for under 60%. Do make sure you’ve got a big enough data set to be useful: at least 100 remarks a year for smaller venues. Consider whether there is bias in your data, for example if your visitors tend to review only with polarized opinions.
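As a sketch of that scale in code, the function below assumes the percentage refers to the share of positive remarks. Only the A (over 90%) and F (under 60%) cut-offs come from the scale above; the B, C and D bands in between are illustrative assumptions splitting the range evenly, and the sample size check reflects the suggested minimum of around 100 remarks.

```python
# A sketch of the letter-scale benchmark. Only the A (>90%) and F (<60%)
# cut-offs come from the article; the B/C/D bands in between are assumptions.
def sentiment_grade(positive_share: float, sample_size: int) -> str:
    """Grade a venue's share of positive remarks (0-100%)."""
    if sample_size < 100:
        return "insufficient data"  # suggested minimum of ~100 remarks a year
    if positive_share > 90:
        return "A"
    if positive_share > 80:   # assumed band
        return "B"
    if positive_share > 70:   # assumed band
        return "C"
    if positive_share >= 60:  # assumed band
        return "D"
    return "F"

print(sentiment_grade(positive_share=87.5, sample_size=1200))  # -> B
```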

For those getting started with sentiment analysis, here are seven starter questions to ask your data:

  • Is sentiment getting better, worse or is it static? How urgent is the need to act?
  • What are the most common negative themes? What problem areas should we focus on improving?
  • What are the most common positive themes? How could marketing messaging better relate?
  • Does sentiment show correlation with how busy the venue is? Are crowding and wait times an issue? (See the sketch after this list.)
  • Are particular days of the week or times (such as holidays) problematic? Why? How could we improve?
  • How does sentiment from freeform comments track to visitor satisfaction? What does the difference tell us?
  • What is the opportunity cost of a bad experience in terms of reduced spend, member conversion or repeat visits?
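For the fourth question, on whether sentiment correlates with how busy the venue is, here’s a brief sketch assuming hypothetical daily exports of average sentiment and visitation; the file and column names are placeholders.

```python
# A sketch correlating daily sentiment with visitation. File and column names
# ('compound', 'visitors') are hypothetical placeholders.
import pandas as pd

sentiment = pd.read_csv("daily_sentiment.csv", parse_dates=["date"])    # date, compound
visitation = pd.read_csv("daily_visitation.csv", parse_dates=["date"])  # date, visitors

daily = sentiment.merge(visitation, on="date")

# A markedly negative correlation suggests busy days hurt the experience,
# pointing at crowding and wait times as areas to investigate further.
correlation = daily["compound"].corr(daily["visitors"])
print(f"Sentiment vs visitation correlation: {correlation:+.2f}")
```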