
Nurturing an AI intuition within your team

👤 Featuring Ravi Chandra, CPTO & Angie Judge, CEO of Dexibit

When the data runs out, intuition takes over. This episode unpacks AI intuition: how organizations can spot signals in uncertainty, embrace disruption and sharpen judgment in the age of generative intelligence.

Transcript (generated with AI)

If you want to go from gut feel to insight inspired, this is the Data Diaries with your hosts from Dexibit, Ravi Chandra and Angie Judge, the best podcast for visitor attraction leaders passionate about data and AI. This episode is brought to you by Dexibit. We provide data analytics and AI software specifically for visitor attractions so you can reduce total time to insight and total cost of ownership while democratizing data and improving your team’s agility. Here comes the show!

Angie: The sky’s blue outside, but it looks like we’re in for a stormy weekend. We had to hit the record button very quickly ’cause Ravi was already getting deep into organizational intuition.

Ravi: That’s right.

Angie: Talking about storms on the horizon.

Ravi: We were just talking about how intuition is almost like when seasoned sailors can look out at the sky in the morning and be like, ah, it’s a stormy one, it’s gonna be this afternoon. They have this intuition, this gut feel, where they can recognize the signs before the equipment or the weather reports even come in.

And in many ways, what we’re talking about today is how organizations can also behave in that same way, and sometimes it’s necessary to use a bit of good old gut feel, especially in times of uncertainty or huge disruption like we’re in today, with AI changing almost everything about what it means to be a business, especially in the technology space.

We can’t use data and evidence because it doesn’t exist yet. We haven’t created that base as a society or a collective to lean on huge data sets to prove what we should do, and it’s a completely different world that we’re operating in. That’s where it comes back to gut feel and intuition.

Angie: I can hear my father’s words in my head: red sky at night, sailor’s delight. But I think back to when we started to get into the generative AI age, and I was just reflecting on this this week ’cause we moved into a new office.

Ravi: Mm-hmm.

Angie: And it reminded me of the times of the previous move into the new office and where technology was at at that point.

You know, I think it was three years ago.

Ravi: What was new then?

Angie: Well, we were working in machine learning. We were sort of setting up a new machine learning framework at that point, an automation harness of sorts. And we were doing voice of the visitor back then with natural language processing and having mixed results.

It was good but not great, and I remember when generative AI opportunities came knocking, would’ve been a year later, maybe two years later, I was a little bit resistant at the beginning, because things with machine learning and natural language processing back then were a lot harder than anything is today.

And it seems like at the click of a button, we’ve got access to so much more, times a thousand.

Ravi: That’s right. I do remember those early days when we were experimenting with generative AI, and I think ChatGPT had only just landed. It was fascinating because it felt like it could do so much, but it was also actually quite bad and wasn’t quite ready to be embraced in the way we do today.

But I think it was quite natural to be resistant.

Angie: Mm. Nowadays I see faults in ChatGPT and think they’ll probably fix this by next week.

Ravi: That’s right.

Angie: My favorite fault of it at the moment is that if you’re doing some extensive work over a longer period of time, it hasn’t quite worked out how to save the file and drops the file constantly through the conversation, and it can be really frustrating.

But I think these sorts of things, six months from now, will be table stakes for conversational insight, at least.

Ravi: That’s right. It reminds me of a quote from William Gibson, who is a sci-fi author. I think he said the future is already here, it’s just not evenly distributed. And I really love that.

’cause I think it really rings true, and organizations will probably face it whether they realize it or not.

Angie: Yeah. You always have the best quotes in our podcasts. Unevenly distributed across ChatGPT’s features, as well as its users, huh?

Ravi: Yeah.

Angie: So what does organizational intuition mean when we’re talking about AI?

Ravi: So what I think it means is that, as a business, at both the collective and leadership levels, there is a strong understanding amongst everyone about what an AI can do, and especially what an AI can’t do. And they can start to see opportunities where AI can be harnessed and embraced today, but also where it’s heading, the direction of travel. And what makes this really hard is that a lot of these opportunities might not have been realized yet, exactly as you say, because OpenAI haven’t invented it or they haven’t added that feature. But I think you can see we are going in that direction and it will be a reality.

It means that you need to take quite a nuanced approach to leadership and how you create and adopt new processes. And I think for us at Dexibit, it’s really meant reevaluating almost every part of our own business, from both our product and how users might interact with it,

Angie: how we make it,

Ravi: how to make it, but also our own roles and responsibilities.

Because they’re probably going to change as we adopt and transform.

Angie: I think there’s almost that middle ground as well of knowing what it can do, what it can’t, and knowing when it could probably do something, you’re just not approaching it the right way. And I think you taught me this. Like sometimes I would try things out in ChatGPT and it would deliver me a horrible result, and I’d get frustrated and throw it in the bin.

Say, oh, it’s terrible at that. But actually it was the way I was prompting it, and we hear all about the skills of the prompt engineer, but I think it’s really easy to lose sight of that in the moment of a task that you’re doing, versus seeing how carefully you can curate your prompts and treating that as an iterative process.

That was sort of a light bulb moment for me, watching you do it, then working out how to do it, and then working out how to teach that as well.

Ravi: Yeah, it’s funny you say that, because I think the script is flipped. Now I feel like I’m learning from your own prompts and how you’re using AI and ChatGPT, because it’s a sign of how quickly these things change and how what was once a good prompt actually is no longer. So we are discovering and learning all the time, and I think that comes back to that sense making and intuition aspect of AI. It’s gonna require these personas that will embrace exploration and experimentation and really understand what it can be for.

Angie: We have two quite parallel experiences, me through the UI and you through the API in some ways as well.

Ravi: That’s right. I think the UI and the API are quite different experiences.

So my primary experience is via building our product and how we use AI to improve that process, or at least enable new features. But there are so many different considerations there when it comes to building a product, around latency and cost and quality of result, which is quite different to the user experience when you’re talking to an AI agent.

Angie: I was sort of reflecting back last night on organizational intuition. It’s a phrase I’ve heard you use a few times around AI. And one of the feelings you get around it is that feeling of exponential growth that you mentioned, where you can see it and feel it in your own use. You know, I used it a couple of times a week, and then a couple of times a day, and then it became a few times an hour. And then seeing that same journey through the team: hearing it mentioned once every few weeks, and then once a week, and then once a day, and then constantly throughout the day. And you definitely feel that airplane taking off of how AI’s being used amongst the team, in so many different ways, and all these different things that people are doing with it.

Ravi: That’s right. Something I’ve been thinking about along these lines is an AI fluency framework. I can kinda see maybe four different categories of AI adoption, the first being resistance, which is what we’ve talked about already. Your first encounter with AI, well, maybe not always, but often, people can be quite skeptical.

Angie: Hmm. I remember the first feature we put in our product was an AI query assist, where it would use gen AI to turn your words into a data query. Super powerful, ’cause you could create any filter in the world with your imagination and little to no technical skill.

And I loved it. It was magic, but our tolerance for it making mistakes when we first started using it was zero. And those of us on the customer side of the team threw our hands in the air and said, we can’t possibly release this, it doesn’t get it right every time. And nowadays you come to expect that that’s the case. It’s like you Google something and the first result is not always the perfect result for you.

That’s why you get a whole list of them. It’s the same kind of thing: you prompt AI and it gives you an answer, and then you have a conversation with it to get to the right answer. That’s both your brain and its thinking, and it doesn’t necessarily get it right every time. And I think back to our first resistance on our first feature being very much that.
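
For readers curious what a query assist feature like the one described above can look like under the hood, here is a minimal sketch of using a general-purpose LLM API to turn a plain-language request into a structured filter. The model name, prompt wording, and filter schema are illustrative assumptions, not Dexibit’s actual implementation; as the conversation notes, the output is probabilistic and should be reviewed and refined rather than trusted blindly.

```python
import json

from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

SYSTEM_PROMPT = (
    "You translate plain-language questions about visitor attraction data "
    "into a filter. Respond only with JSON using the keys: metric (string), "
    "dimensions (list of strings), and conditions (list of objects with "
    "field, operator, value)."
)

def words_to_filter(request: str) -> dict:
    """Turn a natural-language request into a structured data filter."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},  # ask for parseable JSON
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
    )
    # Output is probabilistic, so validate and review before running the query.
    return json.loads(response.choices[0].message.content)

print(words_to_filter("Weekend admissions for members, excluding school groups"))
```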

Ravi: Yeah. And I think that’s a story that could be shared by many, many others all around. And it’s funny, because people talk about AI as having a corpus of the entire world’s knowledge within it, but its memory is extremely tiny. And understanding how to harness that knowledge with such a small amount of memory is quite interesting.

And I think initially we struggled to really understand what that meant and why we wouldn’t always get the same answer, because it was probabilistic. So I think it’s all part of the learning process for us and any sort of user, and getting

Angie: comfortable with the discomfort of releasing a probabilistic product in the first place.

Ravi: That’s right. Then when I think back, the next phase we sort of got to, through all that excitement, was: how can we adopt it everywhere? Both in terms of business processes, using ChatGPT to write and summarize and do all of those great tasks, as well as to solve a lot of hard problems building our product.

And then, you know, we quickly realized actually it can’t do all of these things, at least not yet. So there are limitations that I think are important to understand before you can really move to the next level of proficiency and efficiency with AI.

Angie: Do you think we’ll see, like, the sort of, what is it, Moore’s law or whatever it is, of improvement?

Have I got the right law? It’s definitely not Murphy’s law. When it comes to the memory problem, is that something that’s gonna get better exponentially, or is that always gonna be a limitation?

Ravi: It’s a great question. I’m not expert enough to answer that, but I expect that we’re gonna continue to improve.

And I mean, history tells us that humans are amazing at breaking through these scientific barriers. As soon as we think that we’re about to hit a wall, something new opens up and we get past it, and I think that’s what we’re gonna see playing out. But I don’t know if it’s ever gonna be perfect, and I think it comes back to a point around, I still think there’s a huge amount of value in humans applying critical thought to their use of AI.

I think you’re always guaranteed to get a better result if you are the one who’s in charge and not blindly assuming what you get out from an AI is correct and reasonable.

And so I think there’s still a huge amount of influence that humans have on quality of results. It comes back to how we prompt it, how we review output, and how we use it to build with. I think that’s why you sometimes get these polarizing takes across the ends of the spectrum, where some people claim it’s absolute junk and garbage and others the best thing since sliced bread.

Angie: I’ve found for myself there’s kind of two modes of how I use AI in terms of thinking. There’s some things where I say, just do this. You know, maybe we’ve got a blog post and we need a short description of it to go onto the website, something simple like that.

I can give it the copy, it can generate it for me, I can give it a quick review.

But then there’s other things, like preparing for today’s discussion, where I don’t want to just say, we’re gonna record a podcast about this topic, tell me what I should talk about. I wanna say, here’s this topic and here’s my thinking on it, and what am I missing, and how could I develop this idea better? And so there are these two different modes of, do this for me versus think with me.

Ravi: Yeah.

Angie: And I think a lot of the times where we are unhappy with the result, it’s ’cause we’ve kind of used the wrong mode in how we’ve approached it.

Ravi: We’re trying to outsource the thinking. Yeah. Which I think is a natural thing to do.

Angie: Yeah.

Ravi: That’s right. And it’s something that’s close to my heart as a father of kids, the eldest being 13. I really discourage the use of AI for them, because I think it’s important they understand original thought, what that means, and the process that goes into getting to an answer. Because if they don’t, then how can you judge the quality of what an AI tells you? Which on the surface might seem fantastic, but, you know, when there’s nuance involved, when you’re trying to work through a delicate issue.

If you lose that muscle, I think we can all be lulled into the sense of just always accepting what the AI says. And in the engineering team, we’re finding that a lot now; we are more likely to steer the AI than let it steer us and drag us around. I think that’s a failure mode that we at Dexibit are definitely trying to avoid.

Angie: It’s like having a junior engineer, not an equivalent one, right? I think this is interesting, the case of the kids. You know, you and I grew up in an age where we were the generation after our parents and got the calculator, and they all thought that was cheating.

And then probably by the time we were at university, I think it was around when Google came out, and the internet, where you could sort of find information, and everybody threw their hands up in the air and said, but what about the libraries? And that was sort of seen as cheating, where these days you wouldn’t conceive of going to university without the internet.

Ravi: Mm-hmm.

Angie: When you’re thinking either about your kids or even about our team, and, you know, reading the studies of what generative AI does to the brain, of the little bit of study that’s been done, where, was it MIT or something, said that it reduced the active pathways in the brain for critical thought.

Mm-hmm. And for students who were using it to write essays versus those that didn’t. What do you worry about when it comes to our use of that, what it does to our brains, and what it does to a team or to a child?

Ravi: Well, I’m not sure who to attribute this quote to, but one that I love to describe this is: when you go to the gym to do weightlifting, the purpose is not just to move weights around.

Otherwise you’d use a forklift, right? The brain is not actually a muscle, but I think it’s important to treat it like one and encourage those pathways. Because, as I said, although an AI might have the world’s knowledge, it doesn’t have the world’s context, and it doesn’t have your context or the context of the environment you’re operating in, to really, truly understand the problem that you’re trying to solve.

So I think it’s important that we maintain that critical thought, to be able to prompt it and teach it the values that we care about and why we care about them, so that it can draw from the correct pieces of knowledge and put it together. Otherwise I think it would struggle to recognize the patterns that a sharp human brain can recognize almost instantly or intuitively, to come full circle to the topic.

Angie: Mm-hmm. It’s hard to get it to debate you well. Even if you tell it to be harsh, it’s just so agreeable as well.

Ravi: That’s right. It’s the sycophantic behavior, which I think all of the top providers are working on, and it’s quite a challenge, because sometimes if you tell it, don’t agree with me, it will agree with you by being disagreeable. Yeah, absolutely.

Angie: Right. I shouldn’t agree with you.

Ravi: And then, at least for me, I’m second guessing myself. Is it disagreeing with me because I told it not to agree with me, or is it disagreeing with me because actually there’s a fundamental flaw in my reasoning? And again, I think it’s the critical human element that is going to be still valued for at least the next few years.

Angie: Is there anything that you do in your work that you feel we shouldn’t use AI for? Like when it comes to creativity or critical thought, is there anything where you purposefully swim in the other direction?

Ravi: For me personally, I’m not a huge fan of the creative writing that comes out of AI. Not that it’s bad, but I feel like it doesn’t fit my tone or personality so often. I might use it for ideas, but not necessarily to write prose verbatim. But I am trying to use it in many more situations, and whether I use the output or not doesn’t really bother me too much, because at least I’m learning about the capabilities and the boundaries of what the AI can do, to build up, you know, the intuition. So that next time I have a problem, I know actually I can use AI for this, or, don’t even think about AI because I’ve tried this in the past and I didn’t have good results with it.

Angie: Did we get past resistance?

Ravi: In terms of my suggested AI fluency framework, I think we had resistance, and then we were talking about using AI, and that would be, I think, what I would describe as point solutions. You’re on ChatGPT, you’re summarizing, you’re writing emails, you’re feeling super productive.

And the recipients are probably inundated with a whole bunch of content, thinking, ah, what do I do with all of this? And I think that leads to the next stage in my AI fluency framework, which is really about adoption and understanding where it actually fits best, without making my team go crazy, but working together with my team.

And I think that’s why I think of it more at an organizational level rather than an individual one. That’s one of the things that I think you’ve probably seen: sometimes individuals can go crazy, because it’s so easy, the cost to generate content is so low. It doesn’t necessarily mean everyone is actually generating valuable content.

So how do we adopt this really powerful, novel technology without losing sight of the actual objectives or goals that we’re trying to solve?

Angie: What sorts of things have you found work, either with individuals or with the team, in terms of bringing them through that adoption challenge to that adoption nirvana, where they’re using it for the right things in the right ways? In the right direction?

Ravi: In the engineering space, because we’re using AI to generate code, we’re able to generate so much more, so much more quickly. But that’s putting a whole bunch of pressure on the review process. So as a team, we’ve got together and discussed potential solutions and how we wanna tackle this problem.

And ironically, our solution is really to use more AI: use it to help us with the review process, or better yet, the design process. So we are approaching our work differently than in the past. Because it used to take a lot of time for a human to implement a solution, we would generally do a little bit of upfront planning, pick a path, and then commit to it and implement it, because there was a lot of work involved.

Taking that same approach with AI just means we do things way faster, but we’d be missing the opportunity of actually trying multiple paths in parallel. And this is, I think, what we’re starting to adopt now. We’re doing it early on in the design process, and continuously, using multiple AIs to review different paths and play them off each other. And I think we’re ultimately getting to a higher quality result than what we used to be able to do manually, and also reducing our review effort, because we now know that we’ve been through multiple rounds of AI review, picking edge cases and corner cases, trying out different ideas, before it lands. So we are way more confident at that review stage.
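
As a rough illustration of the "multiple AIs playing off each other" idea, here is a hedged sketch in which two models each propose a design and a final pass critiques both for edge cases before a human reviews. The models, prompts, and two-round structure are illustrative assumptions, not Dexibit’s actual pipeline.

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

def ask(model: str, system: str, user: str) -> str:
    """Single chat completion helper."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

task = "Design an approach for caching daily visitation forecasts per venue."

# Round 1: two independent proposals (different models or different framings).
proposal_a = ask("gpt-4o", "You are a pragmatic software architect.", task)
proposal_b = ask("gpt-4o-mini", "You are a cautious software architect.", task)

# Round 2: the proposals are reviewed against each other for edge cases,
# so the human reviewer sees the trade-offs before any code is written.
review = ask(
    "gpt-4o",
    "You are a critical reviewer. List edge cases, risks, and which option you would pick.",
    f"Task: {task}\n\nOption A:\n{proposal_a}\n\nOption B:\n{proposal_b}",
)
print(review)
```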

Angie: Do you do that each using your own AI, or do you share an AI between you that’s been trained and sort of has its baseline and training in the same place? Or do you each have your own flavor?

Ravi: So we all have access to different AI models.

The way we use them is actually quite different. And it comes back to what you talked about earlier on, which is around how we prompt an AI. And it’s quite interesting, because sometimes we are actually independently using it to come up with ideas, and we get different answers as to which direction to go in.

And sometimes it traces back to really subtle instructions in the prompt, or at least the values that we are sharing as context, which is steering the AI. And I think that’s why it’s important for an engineer to be able to think critically, to evaluate the merits of the suggested solutions rather than just blindly accepting what they get.

Or at least, you can get a much higher quality result if you take that step.

Angie: We found that on the customer side, like there are some tasks that we all do that are very similar and quite repetitive.

Ravi: Mm-hmm.

Angie: And so we thought, well, we’ll build a shared GPT and get it customized exactly for these particular tasks, as an agent of sorts, and all share the same one.

So, for example, we all often create synthetic data for various purposes. Maybe we’ve got a customer who’s moving to a new ticketing vendor and they just wanna get a feel for what the data’s gonna look like, but they don’t have any of their own yet. And so we want that GPT to be informed by what that vendor’s data looks like, and that way we can just prompt it with, this is for this theme park over here, and they’ve got these rides and these tickets and things like this, so create me some data that looks like it belongs to them. But even with a custom GPT that’s been trained centrally, with the way that we all prompt it we get different results, some more frustrating, some easier, and so much of it becomes quite individual.
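
As a sketch of the kind of prompt a shared, task-specific GPT like this encodes, the example below asks a general LLM for synthetic ticketing transactions in a fixed CSV layout. The vendor column names, theme park details, and model choice are illustrative assumptions, not any real vendor’s format or Dexibit’s custom GPT.

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

# Illustrative stand-in for the vendor export format the custom GPT would be primed with.
VENDOR_FORMAT = (
    "CSV columns: transaction_id, timestamp, ticket_type, ride_or_zone, "
    "quantity, unit_price, sales_channel"
)

def synthetic_ticketing_data(attraction_brief: str, rows: int = 50) -> str:
    """Ask the model for plausible-looking sample transactions for a new customer."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You generate synthetic ticketing data. "
                        f"Output {VENDOR_FORMAT}, header row first, no commentary."},
            {"role": "user",
             "content": f"{attraction_brief} Generate {rows} realistic rows."},
        ],
    )
    return resp.choices[0].message.content

csv_text = synthetic_ticketing_data(
    "A theme park with three rides (Comet, Splashdown, Carousel) and "
    "adult, child and annual-pass ticket types."
)
print(csv_text)
```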

Ravi: That’s right. I think it’s almost a question of art versus science. And when I say art, I can sometimes see that some of the engineering work we do has similarities, because you’ve essentially got an unbounded solution space, so it’s very much about design. However, there are some tasks which are very clear cut, more about, do this thing.

And so, for instance, writing queries is a great example, because you can sort of prompt it any way you want. There’s a right

Angie: or a wrong way to come up with the answer, huh? Yeah,

Ravi: Exactly. And you can be pretty confident you’re gonna get the right, well, an answer that works.

Angie: I must admit we’ve had better luck with situations like that, but even still, knowing little tricks helps. For us, given that none of us are data engineers ourselves, little tricks like knowing to ask in our prompt, or in the training of the custom GPT, to make sure it comments almost every line, so that we can read it in English and basic language for ourselves as we’re going through to review it, mean the query doesn’t seem overwhelming.

One of the things that I found quite useful, almost by mistake, is in one-to-ones with some of the team, actually having what’s almost become an agenda item to talk about the different ways in which they’re using AI that week, what sort of frustrations they’ve run into, what kind of tips and tricks they’ve picked up on, and sharing some of that around, in addition to the times that we do that together as a team.

Ravi: I’ve definitely done the same with my team, and it’s really great. I love hearing about how other people are using it and what they’re learning works for them, and we’re starting to build up this collective knowledge around different approaches that we can try, and we’re constantly iterating on that.

Angie: organizational intuition.

Ravi: Exactly.

Angie: I notice you haven’t mentioned the words policy or governance.

Ravi: That’s right. It’s not my favorite topic, but I think we do this really well at Dexibit, because you and I, I think, have this strong pragmatic view around the technology, what can go wrong, and how to safeguard our staff and our customers.

And I think it’s walking a tightrope between being so restrictive that nobody can really get the benefit of it. In some instances I’ve heard that organizations have provided custom AI models which are very locked down and very primitive, and they’re on-prem, but they’re so far behind that people don’t get value from using them, not compared to the latest and greatest that you’ll see from OpenAI and Anthropic. So it stifles adoption, or otherwise generates a lot of poor or maybe suboptimal content. And then I think the other side of this is, if you gave everybody free rein, you’d have to be very careful of company and customer data.

And so I think the approach that we have taken, which is really to understand the benefit of the technology and to work with it, but in a way that’s amenable to our own contracts and policies, using company accounts where we can ensure our data isn’t used for the purposes of training and complies with the data sovereignty and residency rules that our customers expect from us, is a pragmatic and sensible approach to get the benefit of the technology without leaving ourselves open to a large surface area for risk.

Angie: It’s almost having those security foundations and then the right education for the team, like you started off with, knowing when to use what and how. And the rest becomes about trust. If you’ve got a team member who’s not making the right decisions, they probably don’t have the same context as you, and that becomes a reflection on the issue, rather than trying to write every scenario into a policy for a technology we can’t fully wrap our brains around.

Ravi: Yeah, that’s right. That seems almost futile to me. But I mean, so many organizations are spending a lot of blood, sweat, and tears to attempt it

Angie: and time

Ravi: and time

Angie: time’s a race right now.

Ravi: That’s right. And I think it’s really a struggle, because often the people who are in charge of writing the policy may not necessarily be the ones who have a deeper understanding of the technology, which makes it even more challenging and even more frustrating for those who are using it at times.

Angie: I think what we’ve seen, and I’m going to forget who came up with this, but they saw that for organizations who didn’t put AI in place for their users, their staff were paying for it out of their own pockets and using it on the sly anyway. So there’s almost like a rebellion, and it’s gone even further than the SaaS sort of era, where one user in a corner could buy a license and start using something. It’s at the point where you can have it on your own personal phone and start doing something.

Ravi: Exactly. You can take a photo on your phone and feed it into personal AI accounts. And I think that resistance, that approach of putting up walls, is counterproductive for these organizations, because they set themselves up for more risk, with individuals putting company data into public accounts that could be used any which way, with no traceability.

Angie: I think the reverse is a good way to get people started though.

Ravi: Mm-hmm.

Angie: If you have got the right safeguards in place.

Ravi: Mm-hmm.

Angie: Encouraging staff members to use AI for personal means, even if they’re using the work one, as long as they’re not throwing their health records into it or something. It is quite a nice way to get a user underway with AI, to get them to do something from their personal life with it, and then they can sort of make the leap into finding opportunities in their work life to use it, which can be a slightly less threatening way in. Especially if you’ve got people who are maybe, I’m thinking about some of the organizations that we work with where they’re not technology companies in their own right, so there are a lot of staff members out there who are still gaining digital confidence.

I had a family member who had actually worked in the technology sector most of his life, but is now retired.

I won’t name him here, but I was quite surprised that he hadn’t leapt into using generative AI. He’s normally quite an explorative person. But he likes cooking, and he had borrowed a cookbook from the library that he really liked. So I took a photo of one of the recipes he was interested in and showed him, uploading it to ChatGPT and saying, you know, write me this recipe out.

’cause he was asking me, oh, can you scan this? And of course, in a couple of seconds we had a nice recipe. And then I asked it, oh, could you make it gluten free? And it just blew his mind. But it was the spark that he needed to dive into it. And I think, you know, if you’ve got users and you’re training them and educating them and introducing them to AI, using some personal little things like that can sort of draw them into this world and ignite that spark of imagination in them, of how they might use it at work.

Ravi: That’s right. I can think back to my personal life. For my wife, it was actually travel planning. We were going on holiday to a new city and used a ton of AI to help build together an itinerary, which is something that we have been thinking a lot about here at Dexibit.

Angie: Oh yes,

Ravi: Exactly. That’s a whole different podcast altogether. But it was exactly that: you just need to find a little opening in everyday life, and you can start to see how powerful the technology’s gonna be.

Angie: Every year I go to a bunch of conferences for visitor attractions, sometimes specifically in the digital space, or there’s always a bunch of presentations around the digital space no matter what the format. And over the past nearly 10 years, there has always been this plethora of app presentations, and then there was the blockchain stuff, and more recently the metaverse stuff.

And it does tend to give you an eye roll with some of these things, but the gut would say this is different, right?

Ravi: Well, I think so. I think it’s easy to hand wave AI away as just another one of these phases we were going through. And we’ve sort of seen this with blockchain, where a few years ago it was everything to everyone, but in the end it’s actually really nothing to most people.

But I think AI is very different, and I think you’ve sort of touched on it. What, for me personally, makes it different is the way I’ve seen it touch people’s lives personally, and I think no other technology has done that since the smartphone. I think we’re going through this phase where people tend to overestimate the effect of a technology in the short term, but underestimate it in the long term.

So, although it’s hot and hyped right now, that doesn’t necessarily mean it’s going to go away. It’s probably just the catalyst of this slow burn of it becoming more and more omnipresent in both our personal lives and our work lives. So it’s one that I definitely would encourage you not to ignore, if you’re listening.

Angie: The red sky is the sailor’s delight then!

Ravi: It is… 

If your goal is to get more visitors through the door, engaging and spending more, leaving happy and loyally returning – check out Dexibit’s data analytics and AI software at dexibit.com. We work with visitor attractions, cultural and commercial, integrating with over a hundred industry source systems across visitor experience and venue operations, providing dashboards, reports, insights, forecasts, data management and a unique data concierge.

Until next time, this is Dexibit!

 Ready for more?

Listen to all our other podcasts here:
