
Comparing apples with exhibitions

What makes a great exhibition? Is it visitorship and diversity? Revenue and profit? Perhaps education or academia? Press and criticism? Or the harder to quantify, such as impact and influence? No matter the opinion, the evaluation framework is inherently complex and very much depends on the institution’s mission and intended strategy.

Aside from determining on which merits an exhibition should be analyzed, the next dilemma is in understanding that not all exhibitions are created equal. It is hardly fair to line a blockbuster’s box office up against a specialist exhibition with no marketing budget. Differing opening schedules, durations, prices and marketing budgets create an uneven playing field, making it an enormous challenge to normalize an exhibition against these contributing factors in order to compare apples with apples.

Even when analyzing exhibitions on a simple performance metric such as visitation, this complexity becomes evident. As a side note, ticketed attendance (common in special exhibitions) isn’t the only way to measure this. Installing footfall counters in exhibition gallery spaces, or using WiFi to determine the percentage of visitors converting into these spaces via mobile device tracking (or both, for an accurate and enhanced view of visitor behavior), provides alternative data where ticket sales figures are not applicable. Here, we look at five ways of approaching exhibition performance comparisons.

Foundation financials
Total admissions, revenue and return on investment provide a simple financial perspective. Beware of measuring solely on these terms: this data fails to reveal the opportunity cost of what could have been, especially when considering the exhibition’s duration.
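By way of illustration, here is a minimal sketch of this first-pass financial view for two hypothetical exhibitions (the names, figures and column labels are invented for the example, not drawn from real data). Dividing revenue by open days hints at the opportunity cost that headline totals hide.

```python
# Illustrative sketch: foundation financials for two hypothetical exhibitions.
import pandas as pd

exhibitions = pd.DataFrame({
    "exhibition": ["Blockbuster A", "Specialist B"],
    "admissions": [180_000, 22_000],
    "revenue": [2_700_000.0, 110_000.0],   # total ticket revenue
    "cost": [1_900_000.0, 60_000.0],       # production plus marketing spend
    "open_days": [120, 45],
})

# Simple return on investment: (revenue - cost) / cost.
exhibitions["roi"] = (exhibitions["revenue"] - exhibitions["cost"]) / exhibitions["cost"]

# Normalizing by duration hints at the opportunity cost of gallery time.
exhibitions["revenue_per_open_day"] = exhibitions["revenue"] / exhibitions["open_days"]

print(exhibitions[["exhibition", "roi", "revenue_per_open_day"]])
```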

Base visitor uplift
Blunt uplift (calculated by comparing the average daily visitation during an exhibition against that of other exhibitions in a time period) provides a side-by-side comparison for roughly evaluating whether the exhibition delivered comparatively higher visitation during its term. It may also highlight exhibitions which depress visitation, where visitors choose to visit other attractions instead as a result of the current program. However, this doesn’t take into account the exhibition’s potentially varying schedule against the season and doesn’t offer insight on return, paying no heed to exhibitions with significant resources versus those produced on a tight budget.
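A minimal sketch of this calculation, assuming a simple daily visitation log, might look like the following; the dates, exhibition labels and visitor counts are made up for illustration.

```python
# Illustrative sketch: blunt uplift as average daily visitation per exhibition,
# compared against the average across all exhibitions in the period.
import pandas as pd

daily = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=12, freq="D"),
    "exhibition": ["A"] * 6 + ["B"] * 6,   # which exhibition was on show each day
    "visitors": [900, 1100, 950, 1200, 1050, 1000,
                 600, 650, 700, 580, 620, 640],
})

per_exhibition = daily.groupby("exhibition")["visitors"].mean().rename("avg_daily_visitors")
baseline = daily["visitors"].mean()        # average daily visitation across the whole period

uplift = (per_exhibition / baseline - 1).rename("uplift_vs_period_average")
print(pd.concat([per_exhibition, uplift], axis=1))
```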

Forecast residual
An alternative approach is to use the by-products of forecasting during the exhibition term (the difference between forecast and actual visitation), with the original forecast produced without considering the exhibition itself as a feature. This residual, viewed again as a daily average, balances aspects such as exhibition duration, peak season positioning, school term, public holidays and even the weather – all factors which may naturally advantage one exhibition over another. This implied uplift, if any, can also contribute to the return on investment profile – visitors drawn to the attraction as a direct result of the exhibition, particularly useful in cases where the exhibition does not carry a priced ticket in its own right, yet may be responsible for gains in general admission revenue.
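As a rough sketch, assuming a baseline daily forecast already exists (produced without the exhibition as a feature), the residual and its implied uplift could be tallied like this; the visitor counts and the admission yield figure are hypothetical.

```python
# Illustrative sketch: the forecast residual as the daily difference between
# actual visitation and a baseline forecast that ignores the exhibition.
import pandas as pd

term = pd.DataFrame({
    "date": pd.date_range("2024-06-01", periods=7, freq="D"),
    "actual_visitors":   [1400, 1550, 1300, 1700, 1650, 2100, 2200],
    "forecast_visitors": [1200, 1250, 1150, 1400, 1450, 1900, 1950],  # seasonality, holidays, weather, etc.
})

term["residual"] = term["actual_visitors"] - term["forecast_visitors"]

implied_uplift_per_day = term["residual"].mean()
implied_uplift_total = term["residual"].sum()
print(f"Implied daily uplift attributable to the exhibition: {implied_uplift_per_day:.0f} visitors")

# Where the venue charges general admission, the residual can feed a return profile.
avg_admission_yield = 14.50   # hypothetical average revenue per general admission visitor
print(f"Implied general admission revenue: ${implied_uplift_total * avg_admission_yield:,.0f}")
```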

Simulated residual
Yet still, this more advanced analysis doesn’t speak to the varying ticket price, marketing budget or curatorial category. Taking the concept of using forecast by-products for evaluation further, the residual remaining from forecasting the exhibition itself can finally fill this gap, given it accounts for exhibition-specific factors such as these. This provides an understanding of whether the exhibition performed better or worse than expected, which can also be used in evaluating its performance against targets (or helping to set these in the first place).
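One way to picture this, under the assumption that a history of past exhibitions with their features and outcomes is available, is to fit a simple model of expected daily visitation from exhibition-specific features and compare the new exhibition’s actuals against its own prediction. The features, figures and choice of a linear model below are illustrative assumptions, not a prescription.

```python
# Illustrative sketch: a simulated residual, comparing an exhibition's actual
# daily visitation against a prediction made from exhibition-specific features.
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical exhibitions, described as [ticket_price, marketing_budget_k, is_blockbuster].
X_history = np.array([
    [25.0, 500, 1],
    [18.0, 120, 0],
    [ 0.0,  80, 0],
    [30.0, 650, 1],
    [12.0,  60, 0],
])
y_history = np.array([2100, 900, 700, 2400, 650])   # average daily visitation achieved

model = LinearRegression().fit(X_history, y_history)

# The exhibition under evaluation, described by the same features.
x_new = np.array([[22.0, 300, 1]])
expected_daily = model.predict(x_new)[0]
actual_daily = 1650.0

residual = actual_daily - expected_daily
print(f"Expected {expected_daily:.0f} daily visitors, achieved {actual_daily:.0f} "
      f"({residual:+.0f} versus expectation)")
```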

Additional insight
To round out quantitative analysis on exhibitions, additional visitor analytics can be drawn upon. These might include visitor conversion into permanent collection galleries (from WiFi presence data), visitor satisfaction and sentiment (from intercept evaluation, social media or both), dwell times and repeat visits, member conversion and renewal, merchandising rates and average spend or other venue performance metrics such as press mentions. This information helps complete the picture on how the exhibition contributed to overall visitor experience, both as a standalone offering and together with that of the wider institution.

Combined with a study of the demographic diversity achieved, and with traditional visitor evaluation to provide qualitative context, this multi-faceted science of exhibition performance analysis can start to uncover the true stars of the exhibition portfolio.

Angie Judge

Angie is the Chief Executive of Dexibit with a 15-year career history in information technology and is passionate about big data analytics in visitor attractions.
