Webinars

CONVERSATIONAL AI:
THE NEW DIGITAL FRONTIER
OR JUST EMPTY TALK?

Adapting to the Dynamics of the Workforce and AI


Kunal Contractor,
Global Director,
Avaamo


Conner Elliott,
Sales Director,
Quid


Mamta Narain,
Artificial Intelligence Product Management
Leader, HPE

ENJOY THE WEBINAR RECORDING

ABOUT THE WEBINAR

In the enterprise setting, conversational AI has the potential to bring huge gains in efficiency and provide innovative solutions to long-standing business challenges. But what will it do to the customer experience? Can we trust AI to be the face - and voice - of a company?

In this webinar we dive deep into the world of conversational AI with three expert speakers. They offer their views on the best use cases of the technology and also highlight challenges that any corporation using conversational AI must be aware of. We answer the question: is conversational AI the new digital frontier or just empty talk?

YOU'LL LEARN

  • The key benefits of conversational AI for business
  • The challenges conversational AI presents for enterprise and how to overcome them
  • What the future holds for conversational AI and which innovative use cases we’ll see next

ABOUT OUR GUESTS


Kunal Contractor

Originally from the UK, Kunal Contractor brings a decade of experience working with clients in EMEA, the Americas, and APAC across a breadth of industries and customer use cases. He leads the strategic process of advising clients around conversational AI, implementing disruptive technologies, and driving initiatives to identify areas of customer delight, cost reduction, and revenue growth.


Conner Elliott

Conner has a strong background in the consumer packaged goods (CPG) and AI/NLP software spaces. He started his career at The Coca-Cola Company, quickly rising through the ranks to manage Coke’s national restaurant business in Northern California.

More recently, Conner transitioned to work at Quid, and is responsible for growing Quid's business across major CPG companies. Quid’s current clients in this vertical include Coke, Pepsi, and a number of other major Fortune 500s.


Mamta Narain

Mamta is a Product Innovation leader. With a business mind and techno-preneurial energy, she has a consistent track record of using technical innovation to conceive and launch products that make customer experiences simple and efficient, while creating additional revenue channels, growth, and expansion for the business. Over the last decade she has been recognised for launching many ‘industry first’ ideas and products. She is also known for challenging the status quo and conventional thinking, and for relentlessly deploying quick test-and-learn with deep design thinking, leveraging simplicity to solve complex customer problems. She is most passionate about Democratizing Digital. Her products have democratized ecommerce and stock ownership, and she is now working on democratizing and simplifying AI. She is also a champion for women in tech and business, and has been a mentor for Stanford University students.

WEBINAR TRANSCRIPT

Rahim Rahemtulla:
Well, thank you for joining us today, everyone. Welcome to the Silicon Valley Innovation Center webinar. Today we are looking at the question of conversational AI and we are asking, “Is it the new digital frontier or is it just empty talk?” But apart from these bad puns – and I’ll try to keep those to a minimum – we have three expert speakers who know much more about this than I do. And they are Kunal Contractor of Avaamo, Mamta Narain of HPE and Conner Elliott of Quid. So what we’re going to do today: our three panelists have kindly agreed to do a short presentation each. They’re going to have the floor for about 10 minutes, present to us a particular aspect of conversational AI, give us their thoughts and share some case studies and so on. That will take us through the first half hour, and then we’re going to open it up for questions and discussion. So do send in your questions at any time and we will get to as many of those as we can after the presentations. Our first presenter is going to be Mamta, from HPE. So, Mamta, if you are ready, and I know you have some slides, please take over the screen sharing.


Mamta Narain:

Sure. Just a second. Hello, everyone. This is Mamta Narain and I’m part of HPE’s AI business unit, where I lead product management for AI. And thank you, Rahim, for setting up what is a really wonderful and very timely discussion for the industry. So I will kick off this webinar with a short conversation about: what does it mean to have conversational AI? What is the anatomy of a conversation? What are some of the challenges we are going to see in conversational AI? How are we keeping up with it, and where could we go from here? So with that, let’s go over this.

Let’s start with: how important are conversations? As you can read – for those of you who are in front of the screen – Gartner predicts that, by 2020, the average person will have more conversations with chatbots than with their spouse. And one would say it’s already happening now: a lot of people are conversing mostly through digital devices, and there is very little people-to-people conversation happening. So I don’t think we are going to wait until 2020 for that – it is that important. And why is it so important? One thing I want to call out here is the natural language part of it. Everybody talks about NLP. But why is NLP becoming so prevalent and successful? It’s because, as human beings, we learn to talk when we are two and only learn to type much later… Well, a lot of babies learn to type now, even. Conversation is the most fundamental form of human interaction and there is absolutely no learning required, and that is why conversations are becoming so popular. If you compare it with an app or a website, there is still a lot of learning: you need to understand the app and the website, the taxonomy, where your shopping cart is, or where your Confirm button is. But when you talk about a natural language conversation, there is nothing to learn; it’s truly the customer or the user in the driving seat. We have turned the tables with natural language interfaces: the customer tells you what they want to do and the brand has to respond, as opposed to the brand giving you an app and a website and the customer or user having to go figure out how to do what they want to do. So that is why natural language has become so powerful.

So, coming back to conversational AI: many chatbots – the first conversational AI interfaces – started off with very rudimentary Q&A-type conversations. However, even these conversations cover quite a range of customers’ requirements. In fact, if you take the retail industry as an example, this kind of conversation, which seems like a very scripted Q&A, is extremely important and already covers a wide range of the customer’s requirements, without even getting into complex dialogue.

So if you think about the retail setting, some of the most prevalent conversations start with “Where is…?” Whether you walk into a store or use a chatbot on Facebook Messenger or on your brand website, you’ll ask, “Okay, where is customer service?” “Find me the aisle that has the shoes.” Or, “I’m looking for deals.” These are really simple requests that can already address many of your customers’ requirements.

And I just wanted to give you a snapshot of how simple these conversations can be, but they cover a whole range of requirements from the customer. “Where is the nearest store?” “Find me a pair of X.” The pair of X could be, “Find me a pair of jeans” or, “Find me a pair of shoes” or whatever, “Looking for short black dress.” So these are very simple conversations that can enable a chatbot to render a lot of value.

So with that, let’s take a step back and discuss what the anatomy of a conversation really is. I just talked a lot about the retail context, but let’s take another example. Here I have, “Can I play cricket?” The request from the user is called an utterance – anything that the user says. But the two most important things about the conversation are intent and entity. This is the heart of what the chatbot technology needs to understand and respond to accurately in order to make a conversation feel real. An intent is really the purpose, the action you are trying to perform, and the entity sets up the context for that action; it gives you the content on which the action has to operate. And sometimes you’ll also see slots. Here I have a very simple example, and later on I do have some examples of slots. So entity and intent are the two main parts of the anatomy.

And then the dialogue is what you would expect it to be when you go back and forth: the customer asks something, the bot responds, and the conversation you build becomes the dialogue. So here, “Can I play cricket?” is the whole utterance, the word “play” becomes the intent – it is the purpose the customer or user is asking for – and the entity, which is the context, answers “What do I need to play?” It is cricket, so that sets up the context for the intent. And then we will go a little bit deeper, because that’s how I’m building up the conversation myself. When a customer or user says, “Can I play cricket?”, yes, cricket is the entity, but the broader context is, “Okay, the user wants to go out and play a sport. What is the weather? Where is the user located?” That builds up the entire context, and the bot is expected to know that context on its own.
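To make that anatomy concrete, here is a minimal, hypothetical sketch in Python of how an utterance such as “Can I play cricket?” could be decomposed into an intent and an entity. The keyword lists and the dataclass are purely illustrative; they are not the API of any platform mentioned in this webinar, where a trained NLU model would do this work.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative vocabularies; a real platform would use a trained NLU model.
INTENT_KEYWORDS = {"play", "watch", "order"}
SPORT_ENTITIES = {"cricket", "football", "tennis"}

@dataclass
class ParsedUtterance:
    utterance: str
    intent: Optional[str]   # the purpose, e.g. "play"
    entity: Optional[str]   # the context, e.g. "cricket"

def parse(utterance: str) -> ParsedUtterance:
    """Toy decomposition of an utterance into intent + entity."""
    tokens = utterance.lower().strip("?!. ").split()
    intent = next((t for t in tokens if t in INTENT_KEYWORDS), None)
    entity = next((t for t in tokens if t in SPORT_ENTITIES), None)
    return ParsedUtterance(utterance, intent, entity)

print(parse("Can I play cricket?"))
# ParsedUtterance(utterance='Can I play cricket?', intent='play', entity='cricket')
```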

So let’s go into the different types of conversation. The chatbot industry is maybe in its third or fourth year now, but early on, and even now, a lot of chatbots were limited to command and control, like “Help”, “Store Locator”, “Deals”, “Check-in”, and the chatbot would know how to respond. Along with those intents, you’ll have lots of synonyms or keywords that map to them. A customer could say, “How do I do this?” instead of “Help” and the chatbot has to respond to it, so it becomes a very scripted command and control. But then there is another type of conversation, which is slot filling. Here I have an example: “What’s the weather in Austin, Texas tomorrow?” The intent, again, is the weather report, and there are two entities here. The first slot is “weather where” – you need a location. The second is “weather for today” or “weather for tomorrow” – which day do you want the forecast for?

So that’s the second slot. In this case, the user entered the entire sentence, so the chatbot has nothing more to ask; it knows exactly what the user wants. But in a natural language interface, the customer could simply ask, “Hey, what’s the weather?” Now the conversational interface needs to know what else to ask the user in order to fill all the slots of the utterance, so that the bot has all the parameters it needs to give an accurate answer. So again, slot filling is intended to maintain a short but scripted conversation where the bot knows everything it needs to ask to give a meaningful response.

Long-flow conversation, by contrast, is still at a nascent stage, and that’s where the differences I wanted to bring up are. Going back to the retail example, you can ask, “I am looking for a short black dress under $50 for my daughter,” and then the chatbot has a few slots to fill: “Okay, do you want a long dress? A short dress?” If the customer didn’t say black, it has to ask for color. It has to ask for size and it has to ask for price before it gives the response back. These types of conversation are called slot-filling conversations. And here I have an example. A human says, “What is the weather?” and the bot says, “Okay, I’ll look up the weather. But tell me, is it local, or somewhere else?” If it’s local, the bot can look up your location services – if your mobile phone allows – and respond. So really, a slot-filling conversation is about asking the customer for all the information before the bot is able to give the answer, and that creates a nice conversation; you feel you’re having a conversation with somebody.
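As a minimal sketch of that slot-filling loop, the snippet below assumes a two-slot weather intent (“where” and “when”) and simply returns the prompt for the first unfilled slot. The slot names and prompts are illustrative and not taken from any specific framework.

```python
from typing import Optional

# Hypothetical slot definitions for a "weather report" intent: each slot has
# the prompt the bot asks when that slot is still empty.
WEATHER_SLOTS = {
    "where": "Is it local, or somewhere else?",
    "when": "For today or for tomorrow?",
}

def next_prompt(filled_slots: dict) -> Optional[str]:
    """Return the question for the first unfilled slot, or None when complete."""
    for slot, prompt in WEATHER_SLOTS.items():
        if slot not in filled_slots:
            return prompt
    return None

# "What's the weather in Austin, Texas tomorrow?" fills both slots at once.
print(next_prompt({"where": "Austin, TX", "when": "tomorrow"}))  # None -> answer now
# "What's the weather?" fills nothing, so the bot asks for the first missing slot.
print(next_prompt({}))  # Is it local, or somewhere else?
```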

And then, finally, the third kind of conversation, which is more popular now, is interactive conversation, where you’re relying not just on text and speech; you can also rely on multimodal inputs, where you can converse with emojis and images and location and video. I’m very passionate about creating conversational user interfaces that combine text and speech with structured input. The idea here is, for example, you’re booking a flight and you just went through everything, and the chatbot may expect you to type “Confirm”, “I agree”, “Yes” or “No.” If you make a customer or user type everything, that is not a great customer experience. If you know the possible user inputs, it is better to give them options as buttons so they can simply tap. It is a great customer experience, and the bot knows, “Okay, if the customer tapped on this button, what do they need to do?” But then, the most important part for you to take away here is that in an interactive conversation where, let’s say, you have finalized, confirmed yes/no, checked in or whatnot, and you’re expecting the user to tap on a button, the customer can very well ignore all of it and simply type something. How you then react to that is also part of this nested conversation that the bot technology has to understand.
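One way to picture that structured-input idea is a quick-reply payload: the bot offers buttons for the expected answers but still accepts free text. The sketch below is hypothetical and does not follow any specific channel’s message schema; the handler simply branches on whether the user tapped a button or typed something else.

```python
# Hypothetical quick-reply message: the bot proposes buttons for the expected
# answers, but the user may still ignore them and type anything instead.
confirm_message = {
    "text": "Are you okay with your selection?",
    "quick_replies": [
        {"title": "Yes", "payload": "CONFIRM_YES"},
        {"title": "No", "payload": "CONFIRM_NO"},
    ],
}

def handle_reply(reply: dict) -> str:
    """Handle either a button tap (payload) or free-typed text."""
    if "payload" in reply:  # user tapped a button
        return "Selection confirmed." if reply["payload"] == "CONFIRM_YES" else "Selection cancelled."
    # User ignored the buttons and typed something: fall back to NLU / clarification.
    return f"You typed '{reply['text']}'. Just to confirm, did you mean yes or no?"

print(handle_reply({"payload": "CONFIRM_YES"}))
print(handle_reply({"text": "actually, can you make it a later flight?"}))
```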

So with that, with the anatomy of a conversation and different types of conversation, what are we really currently having challenges with? So a challenge right now is how to continue building a great conversational flow. I just went over the slot filling or a short scripted conversation where the bot exactly knows what to ask and what to expect, but the moment the customer or the user says something differently, how can we make the bot technology strong and robust to understand the entire context, the domain, the previous history of the customer to give meaningful responses? And this is where we go into this long flow conversation. That’s where one of the biggest challenges for the industry is.

So let’s take the same example, “Can I play cricket?” Your entity is “cricket”, the intent is “play”, and the bot says, “No, the weather is bad. I would not suggest playing now.” The user then says, “How about football?” and the bot recognizes that the entity is now football.

Now, here the bot is already taking into account some context: “Okay, they want to play something outside. Let me check the weather where the user is located.” For football, too, it knows it needs to check the weather outside, and it will say, “No, you cannot play outside because the weather is bad outside as well.” But what if the customer says, “Okay, I can’t play cricket. I can’t play football. Maybe let’s just stay indoors and watch it,” and asks, “What about watching it?” Here the bot has not been trained to go beyond this type of entity, intent and context, so it guesses, “They just asked ‘What about football?’, football the sport. ‘What about watching it?’ Maybe watch is another form of play?” And clearly you can see it went wrong there, and the bot responds, “I don’t know how to check the weather for watch.” This is what I meant by a long conversation, where the chatbot technology needs to understand the context and memorize the state it was in. The context here was “playing a sport” and the state was “the user has already asked to play something outside and there are now multiple sports they cannot play outside; ‘watch’ is probably not a sport, so could it be something else?” So the context becomes important. And this is where the challenge for the industry really is: how to continue with the long-form conversation.
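A minimal sketch, under illustrative assumptions, of why that context memory matters: the toy dialogue state below remembers the last intent and entity, so a follow-up like “How about football?” reuses the stored “play” intent, while “What about watching it?” is treated as an explicit topic switch instead of being forced into the “check the weather for <entity>” pattern. None of this reflects a particular vendor’s dialogue manager.

```python
OUTDOOR_SPORTS = {"cricket", "football", "tennis"}

class DialogueState:
    """Toy dialogue memory: remembers the last intent and entity across turns."""

    def __init__(self):
        self.intent = None
        self.entity = None

    def respond(self, intent=None, entity=None):
        # Follow-ups often carry only one piece of the pair; reuse the rest
        # from the remembered state instead of failing.
        self.intent = intent or self.intent
        self.entity = entity or self.entity

        if self.intent == "play" and self.entity in OUTDOOR_SPORTS:
            return f"The weather is bad, I would not suggest playing {self.entity} now."
        if self.intent == "watch":
            # Topic switch: watching happens indoors, so no weather lookup needed.
            return f"Sure, here is where you can watch {self.entity}."
        return "Sorry, I did not understand that."

state = DialogueState()
print(state.respond(intent="play", entity="cricket"))   # weather check
print(state.respond(entity="football"))                 # follow-up, intent carried over
print(state.respond(intent="watch"))                    # topic switch handled explicitly
```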

And then I will just breeze through this example in the interest of time. Ordering a pizza: the user wants to order the same pizza as their last order. The bot says, “Great, I see you ordered a medium pepperoni pizza. Is this what you want?” The user says, “Yes, but can you add ham as well?” The bot totally understood that and added ham. It then asks the user to confirm the selection, but the user says, “Actually, do you do a low-calorie option?” What the user really meant was “less cheese”, and the bot goes, “Sorry, I did not understand you.” So the user says, “Can you go easy on cheese?” and this time the bot is able to pick it up and says, “Did you mean light on cheese?” I wanted to bring up this example because it is a real example from the industry, where the chatbot is already advanced enough that, while it may not recognize “low-calorie option”, it is able to come back into the conversation and understand.

So then the need for flow-based, deep-context-sensitive conversation – and this is my last thought here – what do we need? We need to go wide and we need to go deep. We need to understand the context of the user and remember the state of the conversation. It is extremely important: “Okay, you were talking about pizza, you ordered something two weeks ago, you were asking for a low-calorie option.” And that goes into the personalization part: how do you remember what your user told you a week ago, two days ago? Here I’ll take a moment and also go over the technology part. When the user said, “I wanted a low-calorie option,” and the bot said, “Sorry, I cannot understand you,” what happens behind the scenes is that the data scientists or data engineers or analysts – whoever in your business is running this technology – take that as a feedback loop and say, “Okay, this is yet another way the customer could ask for the light-cheese option.” And they should train their models for, “What if the customer asks for low calorie, or high calorie?”
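That feedback loop, where an analyst reviews the “Sorry, I did not understand” turns and adds the new phrasing as a training example for the right intent, might look roughly like the sketch below. The data structures are illustrative only; a real platform would retrain or fine-tune its NLU model after the examples are added.

```python
# Toy training data keyed by intent.
training_examples = {
    "light_cheese": ["go easy on cheese", "light on cheese", "less cheese"],
}

# Fallback turns the bot failed to understand, collected for analyst review.
fallback_log = [
    {"utterance": "Do you do a low-calorie option?", "resolved_intent": "light_cheese"},
]

def apply_feedback(examples: dict, log: list) -> None:
    """Add analyst-reviewed fallback utterances as new training examples."""
    for turn in log:
        intent = turn["resolved_intent"]
        utterance = turn["utterance"].lower().strip("?!. ")
        if utterance not in examples.setdefault(intent, []):
            examples[intent].append(utterance)

apply_feedback(training_examples, fallback_log)
print(training_examples["light_cheese"])
# ['go easy on cheese', 'light on cheese', 'less cheese', 'do you do a low-calorie option']
```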

So that’s how you train the bot. And the whole personalization thing is also about when you meet a friend, how do you develop your personalization over time? You continue to talk and, the more you talk with your friends, the more they know about you and the more they can suit the conversation to, “Okay, my friend likes this, they don’t like this. When they’re saying this, this is what they mean.” That’s all that is learning over time and personalizing.

And then, finally, multimodal input and output. I thought that would be a very important part here, because it’s no longer just speech and text; users and chatbots are responding with images, with video, with emojis, with different kinds of inputs. If you are asking, “Where am I located?”, the chatbot may simply respond with a map. So how can you embed multimodal inputs and outputs within your conversation? And, most importantly, when the bot wants to confirm, “Are you okay with your selection?” and you say “Yes” but you tap “No”, that is where the rubber meets the road. Extremely simple, but a conversation has to happen there: “Okay, what did you mean?” Because you just contradicted yourself. These are all real-life human errors which the bot technology needs to be on top of to create meaningful, contextual and very relevant conversations.

So that’s all from me today. I would love to continue these conversations on LinkedIn. If you want to reach out and know more about it, there is a plethora of things around this. This was the fastest I have gone over this ever. So back to you, Rahim. Thank you all so much.

Rahim Rahemtulla:
Thanks so much, Mamta. And you certainly did it very, very speedily, but absolutely no sacrifice of quality there at all, I would say, so much appreciated. And plenty of interesting questions and you’ve given us such a lot to think about here. But we’re going to hold on to those for just a moment because we will have Conner, who will present for us now. He’s going to say a few words if he is ready.

Conner Elliott:
Yeah. Can you hear me everybody?

Rahim Rahemtulla:
Yes. I think it’s all okay.

Conner Elliott:
That okay? Great. All right, awesome. Thanks, Rahim, I appreciate the invite. And thank you to everybody at SVIC. We’ve had a great partnership and relationship with you guys over the last year or so, so thanks again for all that. So, I wanted to spend a little bit of time today talking about conversational AI and my exposure to the topic. I’m going to do that through a few different stories here, but the title of my topic is “What do 7,500 fridge reviews tell us about conversational AI?”

So, to start here, if I asked you to quickly define conversational AI, how do you think you’d do that? Most people today would probably start by typing “conversational AI” into Google, get roughly 176 million search results and scan through the first few pages of definitions and iterations of the topic. In summary, Google is a great tool for solving these very basic questions, like finding a relatively quick answer to a definition of conversational AI. Similarly, as Mamta mentioned, conversational AI tools like Amazon Alexa or chatbots are increasingly skilled at automating answers to these easy-to-solve questions or tasks.

However, if I adjust the question slightly and ask you to research conversational AI, develop a comprehensive landscape on the topic and share your insights with me, how would you do that? Now this question is a little tougher. Most, again, would start with Google, skim through the first few pages and piece together a story from there, but there are two inherent issues with this approach to these more complex questions. Number one, most people are only going to get through about a page and a half of results. Nobody has time to read through 176 million articles on conversational AI. The articles you skimmed are likely a grouping of articles published by major corporations who probably paid for that placement, yielding some inherent bias in your search results. You’ve missed the other 175 million articles on the topic of conversational AI that may in fact be more relevant but didn’t pay their way to the top and therefore may not have gotten noticed. Secondly, not only are you looking at a small sample size of only about a page and a half of results, you’re also missing the contextual element of the conversational AI narrative.

When reporting your insights back to me on conversational AI, you’d probably list out the top topics, including mentions of top companies in the space, various definitions of conversational AI, key thought leaders, etc. But if I asked you how these topics interconnect with one another, say, if I’m looking at Amazon Alexa and how that conversation has been trending over time or, even more difficult, how the conversation around Alexa is similar to or different from Google Home, or how competitors are being discussed in different articles, how would you be able to answer that question? A lot of that gets lost in translation through a simple Google search.

So, again, while Google is a great starting place for research and answering some of these basic questions, it struggles with these more complex and multidimensional questions. Similarly, while there have been great advances in conversational AI tools solving more dynamic questions, a lot of these tools are lacking when it comes to the more complex ones. However, these complex questions still very much exist. So when I’m talking to senior leaders across Fortune 1000s who are being tasked to solve these more complex, multimillion-dollar questions, how do they respond? How do they even begin to do that?

Questions such as: What are the key needs of customers in X market? What are emerging technologies in Y industry? How are competitors positioned around Z event? Vast amounts of conversational or unstructured text may exist and hold clues to solve these high-level questions, but quickly extracting real insights from large, comprehensive data sets is challenging. In the absence of automated conversational AI solutions to address this need, Quid was designed to help senior leaders extract insights from vast amounts of unstructured text data through our dynamic visualizations.

And the one case I wanted to focus on with you today, very related to the conversational AI topic, revolves around fridges, or refrigerators. More specifically, the question posed was, “What’s the best strategy for integrating conversational AI solutions into fridges?” About a year ago, the VP of technology of a Fortune 1000 consumer durables company came to me with this question: how should you think about integrating conversational AI solutions into fridges? While this fridge brand was only in the UK market at the time, this individual had been hearing about competitors in the US and European markets integrating these types of technologies into fridges, albeit with mixed results. This wasn’t a question you could ask a chatbot to answer or punch into Google; it required a bit more complex analysis. So, leveraging the strength of Quid’s AI and NLP algorithms, we sought to provide this individual with more strategic direction for answering this question.

After a few conversations back and forth with this individual, we agreed the best way to address this question was to look at customer reviews, both of his company’s recently released fridges and those of his top five competitors. To start, we built scripts and scraped fridge reviews from a variety of different websites. Not only did these reviews include a sentence or two of review text itself, but they also included additional metadata such as the reviewer’s age, gender, city, state, review date, star rating, etc. In total we collected and began with a data set of around 7,500 reviews, 1,500 coming from the focus company in this case and the rest distributed fairly evenly across the top five competitors. With all these reviews combined into one CSV document, we uploaded it into Quid and let the tool do its magic. During the roughly two to three minutes of Quid at work, it read through these 7,500 reviews, looking at the unique language in each review, and then extracted and visualized the top discussions, or clusters, in a dynamic visualization, mapping each cluster by its relative connection to the others. In this case, the tool extracted top themes in the conversation such as water dispensers, service delivery, wine racks and overall fridge space.
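Quid’s internals aren’t described in this transcript, but the general pattern Conner outlines (read thousands of reviews, group them by shared language, label the themes) can be approximated with standard open-source tools. The sketch below uses TF-IDF vectors and k-means purely as an illustrative stand-in, not as Quid’s actual method, and the six sample reviews are invented for the example.

```python
# Requires scikit-learn: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Tiny invented stand-in for the ~7,500 scraped fridge reviews.
reviews = [
    "The water dispenser leaks constantly and service was slow to respond.",
    "Water dispenser stopped working after a month, still waiting on a technician.",
    "Love the wine rack, fits six bottles easily.",
    "The wine rack is a nice touch but takes up shelf room.",
    "Huge amount of fridge space, great for a family of five.",
    "Plenty of shelf space and the freezer is roomy.",
]

# Turn reviews into TF-IDF vectors and group them into candidate themes.
vectors = TfidfVectorizer(stop_words="english").fit_transform(reviews)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster in range(3):
    print(f"Cluster {cluster}:")
    for review, label in zip(reviews, labels):
        if label == cluster:
            print(f"  - {review}")
```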

So let’s pause for a second. Rather than analysts trying to manually read through 7,500 fridge reviews by hand and create subgroups – an exercise that would probably take a month or so to complete – Quid is able to automate that process in about two to three minutes. Then, augmenting Quid’s visualization with human intelligence, we’re able to extract key insights from this vast data set. For example, water dispensers were highly discussed and more relevant for millennials than for older consumers. Men tended to focus more on the conversation about space as it relates to fridges, while women tended to talk more about wine racks. And among all the conversations, only 3% of the entire network focused on conversational AI or these types of technology solutions. These insights were all hugely valuable for thinking about how one builds a strategy addressing certain demographics, developing marketing campaigns for these groups, and many other implications. And, again, these insights took about a week or so to extract rather than the typical month-long manual process. At the end of the day, as I’m sure you all know, time is money. And if you have to wait around two months to understand consumer trends, you’ve probably already missed the trend itself.

But getting back to the stakeholder’s initial question: how should we be thinking about conversational AI in fridges? Just a simple frequency count, telling him that his topic had minimal conversation, isn’t enough. Diving further into the data, we found some really interesting insights. We learned that conversational AI, first and foremost, was only important in the US market. Europe, another focus market, generally didn’t seem to care about conversational AI features. When we focused on the US reviews, we noticed a small but quickly growing positive-sentiment cluster describing these types of tech features. In the European market, however, conversational AI was simply not relevant: it was less than 1% of the conversation there, and mentions of conversational AI-type tools were mostly negative.

And what mentions there were focused mostly on how this new feature was only adding to the cost of the fridge itself. So then you might ask: what is the focus of European customers when it comes to fridges? When we looked specifically at the client’s own fridges – in this case in the UK market, where his fridges were sold – it was water dispensers. Most of the reviews went on and on about water dispensers, specifically how frequently they leaked and how slow the response time was for a technician to come out and fix the issue.

These insights were hugely valuable for this VP, who was now equipped with 7,500 reviews and data points to explain two things to his leadership team. First, since these fridges were in the UK market, conversational AI-type tools should not be the first priority. Investment in the technology was important, but until the price of the fridges dropped, European customers didn’t really seem to care about this type of technology. Second, and maybe even more important, they needed to focus more of their resources and time on better water filtration systems in their new fridges and on improving response times for actually fixing these types of devices when they broke.

So, in summary, what can 7,500 fridge reviews tell us about conversational AI? I think the insights are here. The world is filled with unstructured data, as highlighted by the small subset of 7,500 fridge reviews. Some of the data can be leveraged by conversational AI and other automated tools to help solve some of these more basic simple questions, but in reality, the vast amount of data that exists in the world is messy. It’s hard to manipulate, hard to analyze and therefore hard to really extract insights. In my opinion, we need to consider both AI solutions and augmented intelligence solutions because being able to leverage the power of AI to automate manual repetitive tasks augments a human’s ability to answer some of these larger complex questions that we’ll need to be able to solve for moving into the future. That’s all I had for today. Thank you for the time.

Rahim Rahemtulla:
Thank you so much, Conner. Very interesting insights there, in places where we might not expect them: reviews, Amazon reviews, fridges, and how those can lead to such findings. So I think we’ll definitely be coming back to that in the Q&A session. Before we do, though, we’re going to hand over to Kunal, who is going to give us his thoughts on conversational AI.

Kunal Contractor:
Great. Well, hopefully, you guys can see my screen now. And Rahim and SVIC, again, thank you guys as well very much for your extended partnership over the previous few years. It’s been great and we really appreciate it. So what I wanted to talk to everybody about today is the realities of conversational computing really for the cognitive enterprise and how companies today can actually use conversational AI in their daily business to drive pretty substantial impact for their customers and their internal employees.

So, I’m not going to spend too much time specifically on what Avaamo does and how, but more on what’s actually possible today, with some case studies on how some of our customers are using it and the benefits they get from that. At Avaamo, we are building out the fundamental AI to make conversational computing a reality today. Conversational AI is available in multiple languages, so it’s not just the US or the UK and English. A lot of our clients are global, and it’s available in multiple languages, in both text and voice, which is really important. I think some of the others are focused on the voice element as well.

But what’s really important is that, a couple of years ago, Satya Nadella was talking about how chatbots are going to fundamentally revolutionize how computing is experienced by everybody. And why that’s really interesting is that the word “chatbots” itself is something we’re not the biggest fans of at Avaamo, because we prefer the term “conversational AI”. Luckily for us, a couple of years ago, the New York Times actually wrote an article saying that a lot of the chatbot companies out there, like Microsoft, Amazon, Google, etc., weren’t actually delivering what they said chatbots were meant to be doing. They’re more like yelling fools. Luckily, we didn’t even have to pay them for that, which is quite nice. But that’s where conversational AI comes in, as Conversation as a Service. And as the others in the webinar today have said, a lot of it really focuses on advanced NLP, deep machine learning and knowledge.

Like, how are we actually going to train the bots to understand what the user is saying? What does it mean when a human speaks to the machine? And I think there was some really great knowledge earlier, in the first session, about deep machine learning capabilities: how do we learn from previous interactions? How do we learn from all of that previous data? Something that, as in the last talk, would take humans months and months to figure out, AI can figure out in a matter of minutes, maybe hours, depending on whether you’re looking at millions and millions of transactions as opposed to a few hundred thousand. And then the knowledge: how do you learn from static information, where there hasn’t been any conversational or transactional history? So, really, the focus and the power of conversational AI is geared towards enterprise customers, their employees, their suppliers and their partners. And building an enterprise assistant is never really that simple.

If you’ve got a client in insurance, let’s say, and one of their policyholders comes in and wants to know the answer to the question, “Does my policy lapse immediately if I miss a payment?”, to ask that question today they pick up the phone and call into a contact center. But the answer is never a simple yes or no. You’ve got to look into a lot of workflows. You’ve got to look into, “Who is this person? What kind of policy do they have? How long have they had this policy? When does the policy mature? Have they had a good payment history? Have they been pretty bad at making or maintaining those payments?” And then it’s a mixture of the responses to all of those questions that actually leads to the answer of whether we’re going to immediately cancel your policy or not.

So that happens not only through understanding those workflows, but through integrations into multiple systems in the back end. This is just one example from one of our insurance customers: actually understanding what their data and content mean in these back-end systems before coming back and making a conversational response to that end user. That’s why the power of integrations into all of these back-end systems of record, to analyze that data, is really important before you can release conversational AI across multiple channels for large enterprises looking to really impact customer satisfaction.
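As a hedged illustration of that point about back-end integrations, the sketch below composes an answer to “Does my policy lapse immediately if I miss a payment?” from several hypothetical system-of-record lookups. The service functions, field names and business rule are invented for the example and are not Avaamo’s or any insurer’s actual API.

```python
# Hypothetical back-end lookups; in practice these would be API calls into
# policy-administration, billing and CRM systems of record.
def get_policy(customer_id: str) -> dict:
    return {"type": "term life", "grace_period_days": 30, "matures": "2031-05-01"}

def get_payment_history(customer_id: str) -> dict:
    return {"missed_payments_last_year": 0, "autopay": True}

def answer_lapse_question(customer_id: str) -> str:
    """Compose a conversational answer from multiple back-end facts."""
    policy = get_policy(customer_id)
    history = get_payment_history(customer_id)
    if policy["grace_period_days"] > 0 and history["missed_payments_last_year"] == 0:
        return (f"No, your {policy['type']} policy will not lapse immediately. "
                f"You have a {policy['grace_period_days']}-day grace period, "
                "and your payment history is in good standing.")
    return "Your policy may be at risk of lapsing; let me connect you with an agent."

print(answer_lapse_question("cust-001"))
```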

And that’s where we initially advise customers looking to embark on the journey of conversational AI to focus on what we call the last mile. From the large number of things you already know: what are the issues my customers have? What are they asking us? How are they asking us? Where are they asking us? Is it on the phone, by email, by submitting a ticket? How are we responding to those, and what systems are we looking at to get that information? Using all of that data is the low-hanging fruit and the best place to start building conversational AI and automating, before moving on to more complex matters where there has never been any previous interaction data and you are addressing things the company hadn’t addressed in the first place. And that brings us to the topic of today: looking at how and where conversational AI can actually be used, a number of the challenges, as well as the future.

So, looking at some actual examples. For banking customers, a large global banking organization with a subsidiary here in the US is using conversational AI for their wealth management team. These are wealth managers who, on the phone with their customers, usually spend about 40 to 50 minutes gathering a lot of information, looking into a number of accounts and then providing advice on matters such as tax breaks, investments and insurance. And these advisors, even if they’ve been doing this job for 20, maybe 25 years, should know a lot of information. But – and I’m sure every single person listening today has asked Google a question they’ve never asked another human being, primarily because Google’s not going to judge you for asking – they’re able to use a bot to better understand, “Hey, what was the tax rate? How much is tax-free if they invest a certain amount?” It allows them to ask, “What’s the best insurance policy in line with today’s laws and regulations for this specific customer, based on their net worth?” The bot is able to go and calculate all of that, so it saves the advisor a lot of time in providing advice to their customer and makes sure that whatever advice they do provide is accurate.

Looking at healthcare, we’re working with one of the United States’ top 20 healthcare providers, and we’ve already deployed on their site a live bot that helps their patients discover, “Hey, if I’ve got a rash on my face or a pain in my neck, where’s the best primary care physician in that specialty for me to see that’s closest to my location? How can I book an appointment with them? How can I reschedule an appointment? What’s the right medication that I need to be taking?”

Because, believe it or not, looking for a location, booking, scheduling, rescheduling and cancelling an appointment, getting a fax number and getting directions to that location constitute nearly 40% of most healthcare providers’ inbound phone calls. Now, if we can automate that with conversational AI, that results in millions of dollars saved for the organization, immediate responses and improved customer satisfaction for those patients.

Working with one of the world’s largest FMCGs in the alcohol and beverages space, we’re building bots for the people who actually buy the alcohol from them, helping to arrange deliveries and helping to arrange technicians to come and fix their fridges, funnily enough, which are provided to them for free by those providers. One of the latest ones, which is really cool and, of course, a lot of these [0:41:12?] But some of them you can probably see on our website: we’re working with one of Hollywood’s major studios, helping them build bots that will directly engage with consumers and moviegoers to help drive ticket sales, drive streaming sales and promote certain characters that are coming out. If you’d like more information on that, please feel free to reach out after the session.

But, more importantly, some of the challenges when looking at conversational AI are around the data. How clean is it? How much data do you have? How far back does it go? In terms of being able to train the bot, do you have previous interactions to train it on, or are you creating new training data on the fly? And how is the platform you choose going to support that, to make sure that when the bot does provide an answer, it is an accurate response and it does what it says it’s going to do? Or does it provide a response just for the sake of it? Because in our world, it is okay for the bot to say, “I don’t know. Let me pass you on to a human being who can help you.” What is never okay is for the bot to give you an incorrect answer. That’s where understanding the human’s emotions comes in, as well as providing business analysts with analytics. Who are the users coming in? On what channel? In what language? On what platform? What are they asking us about the most? Have we got enough training data for it? And you need to make sure the platform is able to handle tens of millions of previous transcripts to train the bot in a very short amount of time.
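The “it is okay for the bot to say I don’t know” principle usually comes down to a confidence threshold on the NLU prediction. This is a generic sketch of that routing logic; the threshold value and the prediction dictionaries are invented for illustration and would be tuned per deployment.

```python
CONFIDENCE_THRESHOLD = 0.75  # illustrative cut-off; tuned per deployment in practice

def route(prediction: dict) -> str:
    """Answer only when the model is confident; otherwise hand off to a human."""
    if prediction["confidence"] >= CONFIDENCE_THRESHOLD:
        return f"BOT: handling intent '{prediction['intent']}'"
    # Better to admit uncertainty than to risk giving an incorrect answer.
    return "BOT: I don't know. Let me pass you on to a human being who can help you."

print(route({"intent": "reset_password", "confidence": 0.92}))
print(route({"intent": "reset_password", "confidence": 0.41}))
```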

And then, looking into the future: most of the clients we see so far are usually in North America, the UK and Western Europe, though we see a lot of clients in Asia as well, and a lot of people even here in the US are looking for multilingual bots, as are many of our global clients, especially those starting off in markets such as Latin America and Japan. So we see the future of conversational AI being a single bot interface that can have a conversation with users in multiple languages and through what we call omnichannel. Today, everybody should be able to reach the brands of their choice, even if they end up talking with those brands more than with their spouses, which, going back to that Gartner prediction, I completely agree will probably happen by 2020. Gartner also said that by 2020 nearly 80% of the interactions you have with a brand will be automated as opposed to speaking to one of their humans. But we need to be able to reach brands through text or SMS, on Facebook Messenger, on WhatsApp, on Alexa, on the phone, etc. That’s really important and that’s where we really see the future of conversational AI. I think I’ll stop there and go on to any questions. I’ll pass it back to Rahim.

Rahim Rahemtulla:
Fantastic, Kunal. Thank you so much. You’ve given us plenty to think about there, to work with. I think we’ll definitely open up to some questions now. And I’d like to ask, just as a sort of first question, if I can. Mamta, maybe I’ll come to you on this just to start us off. This is maybe becoming a bit of an old chestnut in the world of conversational AI, but do we, as users, as customers, as employees, really mind talking to machines? Would we rather be talking to humans? Because I’m increasingly thinking that these days we’re all quite used to using apps, we use Google so much, and I’m not sure it really matters anymore if it’s a real human at the end of the line or a bot replying back to us. Mamta, what do you think on that one?

Mamta Narain:
That’s a great question, and the answer to that is multifold. One part is from the point of view of businesses. Businesses are really looking to improve their cost efficiency and their productivity by employing chatbots. On the cost-cutting side, if the first line of defense for rudimentary questions can be a chatbot instead of a human, it’s definitely going to be cheaper for them. So that’s the business side of why they want to deploy conversational AI: productivity and cost efficiency.

On the consumer side, when it comes to having somebody to talk to, I don’t think it matters whether it is a human being or a machine; consumers are really there to get their job done. Who helps them on the other side is really not top of mind for customers. It’s like that familiar sentiment: “I want to book a ticket. Whether it is an automated system or a human being helping me, just get my job done easily, simply and in the most cost-efficient manner.” That is what the customer cares about. So the answer really is whoever can help the customer at that time, more efficiently and more accurately.

And here I wanted to cover one important aspect: human-assisted conversational AI. That also ties into the question you asked, whether customers really care about who they’re talking to. Going back to it, they want to accomplish a goal, a task, and if your AI is not able to help them, the business has to quickly determine that and hand it off to an agent to intercept. So I would say the answer is really not about which one is better; it is about which one gets the job done faster and more accurately.

Rahim Rahemtulla:
Thank you, Mamta. And Conner, I suppose you might echo those thoughts. You were saying that with the Quid platform, you use AI to analyze huge amounts of unstructured data, but it’s really the human insights then, based on what that AI comes up with, which deliver at the end of the day. So in that case, I suppose it’s a hybrid, isn’t it?

Conner Elliott:
Yeah, I think you’re absolutely right. To echo that and say it slightly differently, we’re looking at a lot of text that exists out in the world, such as the 7,500 reviews example that I mentioned. We look at all these conversations that exist in the world, scrape them, leverage the power of AI and NLP to quickly parse through them, and then layer on top of that the human knowledge of that specific subject to extract insights. But yeah, the AI element really functions as a way to automate the manual task of reading through all those reviews.

Rahim Rahemtulla:
And that certainly seems to be the stage we’re at now. What I’m hearing is that today a lot of what AI and conversational AI can do for us is automate these tasks and save a lot of time, save a lot of resources. But Kunal, do you see, perhaps, somewhere down the line, would you be confident in having an AI engine which is so thoughtful and deep that you really could just give it anything and you wouldn’t worry about having humans there to pass on to?

Kunal Contractor:
I think we will reach that stage maybe at some point in the future. But right now, given where it is, yes, the reality is like what Mamta and Conner have said, especially what Mamta said: users aren’t really going to care that much as long as they get an accurate response, they get it quickly, and they get it any time they want. But, especially being British, we do love a bit of small talk, and we do love to have a little bit of a chat, a little bit of a joke sometimes when speaking to a human being. The bots follow a set of rules, and you can’t negotiate with them into getting yourself an upgrade if you’re booking a flight, for example. So it sounds contrary to what I do to say that sometimes humans are better, but while the bots do provide a wealth of benefits to end users, there are some things where they’re not going to replace humans just yet.

And like Mamta alluded to, we do have the capability to pass over to a human being as and when required. But the biggest value right now, depending on the use case, especially for a large enterprise, is in becoming future-proof and driving that customer satisfaction for the first couple of waves of interactions, before the conversation does need to go to a human for certain customers.

Some of our customers are using bots to automate just the first 90 seconds of a conversation, collecting information and authenticating the user. Instead of spending 90 seconds on the phone with an agent authenticating who you are, the user spends about 12 seconds with a bot going through that authentication. That is a massive benefit to both parties involved.

Rahim Rahemtulla:
No, sure, I can definitely see that. I think we’ve all had this experience online: the amount of time spent filling in forms, giving your email address, your name, the same stuff time after time, and you sometimes feel that we ought to be at a stage now where we can just get past that. Mamta, do you feel that, in some ways, we’ve made so much progress in digital, and there are so many things we can now do so easily as users, that I almost feel frustrated we can’t do even more, more easily and more quickly? I feel like conversational AI is the key to unlocking that. If we could just give simple voice commands and questions, we could really easily unlock all of that information and data which is, just for example, on my laptop. My laptop knows everything about me, and if I could just talk to it a bit more, I’d really improve what I can do quite substantially.

Mamta Narain:
Yeah, you’re right, there are a lot of experiences we can bring to market now with AI, where we would want technology to help us. Just to expand on what you were saying about your laptop knowing a lot about you: it’s like my car knowing that I dropped off laundry because I stopped there. And we have all, at some point in our lives, I’m sure all three presenters here have, created concepts that said, “Oh, while I’m passing in front of the store, remind me to do this,” or “Give me a coupon,” or something like that. So now, with AI, it is more real, or it’s already happening, that while you’re driving by that laundromat, it could tell you, “Hey, you’re passing your laundromat. Pick up your clothes.” Or you have something on your shopping list and you’re passing by Target or wherever. So those experiences where you want to simplify your life and you want technology to take over are more real than we think. And I’m just so fascinated because in my career – and I’m not that old – I have created these PowerPoints with these big concepts, and it’s just gratifying to see that those concepts are actually live in the market, where technology is helping you with your life.

And here is one other example: weather. We have all gone to Weather.com or the weather app. It still takes you 60 or 90 seconds to get your weather, as against a voice assistant, where you just ask, “Hey, what’s the weather today?” That’s it. It’s not even three seconds. Even before you can flip open your app, you know what the weather is.

Rahim Rahemtulla:
Indeed. Thank you, Mamta. It does seem like these are steps toward overcoming those inconveniences. But I wonder, we’ve had a question come in and perhaps all of you might have some insight on this: what can you tell us today about the adoption rate of conversational AI, in a general sense? Are we seeing it mostly in business-to-consumer markets or business-to-business? What’s the sort of adoption that you guys see out there?

Kunal Contractor:
I can take that. We’re seeing a pretty strong mixture. Predominantly, we see a lot of our clients looking at a B2C environment. Secondly, we’re seeing it internally, B2E, with internal employees having bots help them with things like HR: “How much vacation do I have left? Can I take off the Thanksgiving week?” And the bot will integrate into a system like Workday to check, “Hey, are there other people on your team who have taken that time off?”, because if they have, you may not be able to take that time off. But if you are able to take it, the bot can automate sending the request to your line manager for approval before coming back to you.
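That Workday-style check can be pictured with a small sketch: approve a vacation request automatically only if not too many teammates overlap it, otherwise route it to the line manager. The data, function and threshold are hypothetical; this is not Workday’s actual API.

```python
from datetime import date

# Hypothetical HR records; a real bot would pull these from the HR system's API.
TEAM_TIME_OFF = {
    "alice": [(date(2019, 11, 25), date(2019, 11, 29))],
    "bob": [],
}

def can_auto_approve(requester: str, start: date, end: date, max_out: int = 1) -> bool:
    """Auto-approve only if fewer than max_out teammates overlap the requested dates."""
    overlapping = sum(
        1
        for person, ranges in TEAM_TIME_OFF.items()
        if person != requester
        for (s, e) in ranges
        if s <= end and start <= e  # date ranges overlap
    )
    return overlapping < max_out

# "Can I take off the Thanksgiving week?"
print(can_auto_approve("bob", date(2019, 11, 25), date(2019, 11, 29)))
# False: a teammate is already off, so the request goes to the line manager instead.
```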

But actually, the biggest one we see internally is an IT helpdesk bot. “Hey, my Outlook is not working.” “I’m traveling to Brazil, what’s the FTP I’m going to have to use?” “Do I need help, and do I need to change my password?” One of our clients builds products where, if you’ve got a smartphone, you’re probably using one of their products as part of the device. They get about 15,000 IT tickets every single month, and half of those are password resets.

And a password reset takes a 20-minute call to an offshore call center to go through the authentication. It’s 20 minutes because you’ve actually spent about 14 minutes on hold before you can get through to them. We’re able to reset those passwords in less than a minute. So from that user adoption perspective, it is huge. The capability of saving 20 minutes every three months for 7,500 people in an organization is huge.

And then we do see B2B from the partner network of suppliers and stakeholders, so there is a business-to-business-customer element, but the consumer side leads first, as consumers are more willing to give these things a try.

Rahim Rahemtulla:
Excellent. Thank you. And, Conner, what about with Quid? In the example you gave in your presentation, you talked about a company using it internally, is that right? They’re looking at the information, the data, to power their decision making? Is that the principle, the way you see it being used?

Conner Elliott:
Yeah. Typically, we’re mainly focused on the Fortune 1000s, as I mentioned, just simply because of the budgetary constraint. Our tool is not inexpensive. And so, secondly, to answer your question more directly, yeah, it’s being able to look at customer data. They’re very focused on, what are the customers talking about? What are key customer needs? But how can they leverage that data and make strategic decisions internally that are then ultimately outward facing to the customer and better align with the key needs of customers?

Rahim Rahemtulla:
Thank you. And, guys, we’re just running up against our deadline here for today’s webinar; we’re just about past 60 minutes. So I think we’ll wrap up, and I’ll come to each of you for final thoughts, if I can. So, Kunal, starting with you, perhaps: we talked about what conversational AI can do, where it is today and where it might take us. And I suppose Mamta alluded to this in her talk: where we’re going is developing these longer-form conversations and getting a bit deeper and more thoughtful, but at the same time it seems like we’re not necessarily going to let go of the human element. So it seems like a balance going forward, Kunal, I suppose is what I’m trying to say. Is that your feeling too?

Kunal Contractor:
Yeah, it is a little bit of a balance but, that said, conversational AI does need to be able to handle these long, complex statements, and it does that through what we call dynamic multi-turn conversation, including goal-oriented flows. So if I come in and say something like, “Hey, I’ve noticed some fraud in my account” to my banking bot, and the bot says, “Okay, let’s look at your previous transactions,” and then I suddenly go off to, “Hey, by the way, I’m going to be traveling to Italy next week. Can you activate my card for international travel? Oh, and by the way, what’s my account number?” So I go on to those new questions and give it a long story. The bot doesn’t really need all of that information, but it will come back and say, “Hey, now let’s go back to investigating that fraud in your account.” So it can analyze these large, long statements. And for some of our clients, we have actually built in horizontal intelligence to answer questions like, “What were the NFL scores last night?” or, “Is it going to rain in Denver tomorrow?”, really making it well-rounded and a lot more human-like, able to answer a lot of those kinds of small-talk questions.

Rahim Rahemtulla:
Thank you, Kunal. And so, Conner, is that something that you see as well in the sort of queries that people might want to ask? Are they getting more complex? Are they getting broadened out? And is the technology going to have to equally become broader and have more depth to cope with that going forward?

Conner Elliott:
Yeah, it’s interesting. I’d say that the majority of the complex questions we’re getting from clients do bucket into a lot of similar types of use cases: competitive intelligence, industry landscapes, tech scouting, innovation scouting, research and development. So a lot of the questions do bucket into a few different categories around key Quid use cases – that’s what we call them – and we continue to build our capabilities to address those key questions. But I think, for now, obviously, we want to continue to make Quid as user-friendly as possible and make it easy to extract insights, which takes a lot of time, money and R&D effort. So I’d say, in conclusion, yeah, we’ll always continue to leverage Quid, but I think augmented intelligence will continue to be our focus. We’d like the tool to extract insights for people, but I think the human element will continue to be a really critical component of that going forward.

Rahim Rahemtulla:
Fantastic, thank you. And Mamta, finally, just to come to you, what about you? Where do you see this going? I think you mentioned the longer form conversations. How far away are we from having that?

Mamta Narain:
I wish we had a crystal ball. There is a lot of research going on, and deep learning is going to help a lot. As we may already be familiar, Amazon also has a running contest where, if technology can hold a relevant 20-minute-long conversation, that will be considered a success. So I think every year we see it getting better and better. But here I think it’s technology, yes, the machine learning or the deep learning technology, but more importantly, it also depends upon the data and the learning.

Today, whatever you type into Google, there is something like 99% accuracy that you will get an accurate response back. We have to take a moment and ask why that is happening, why we get accurate responses. Is it the 20 or 30 years’ worth of data that has gone into making these systems learn? So one part is the conversation itself, and the other is that the learning will come only over time, and it is getting better. The other thing – and I want to go back to your previous question about adoption – is that right now I’m in a technology company that mostly serves companies like PayPal and eBay. What I can say is that adoption of conversational AI is pretty high. And when I say adoption, I mean investments are on the table. I see that as an employee at HPE also. Almost every company and business has some money set aside for AI, and out of those, almost 80% of the companies are putting money on the table for conversational AI, for employees, customers and the enterprise.

Rahim Rahemtulla:
Well, thank you, Mamta. That’s quite a high figure.

Mamta Narain:
Yeah, every company is investing in some form of AI. Whether they are successful or not right now, or whether they are in the early stages or ready to scale, you will see a spectrum there. But thinking about AI and allotting money to AI is at a pretty high level of adoption, and you’ll see it in 2019.

Rahim Rahemtulla:
Fantastic. Well, there you have it, ladies and gentlemen. From our panel of experts today: conversational AI is very much high on the agenda and a lot of companies out there are making investments. So, unfortunately, that’s where we’re going to have to wrap up for today. I hope you’ve enjoyed our talk today on conversational AI and that you can take away some insights. I just want to say a big thank you to all of our guests today for taking part: Kunal Contractor, Mamta Narain and Conner Elliott.

And while I’ve got your attention, I would just like to sneak in a little plug for the next expert talk we’re going to have here at SVIC. We’ve been talking today about conversational AI, and that’s just one of many technologies which are emerging today and already making a big impact. But actually using these technologies in building or rebuilding your company so that you can take best advantage of them is another challenge that is happening simultaneously. If you want some insights into how you can make that happen, we have Mathieu Guerville, the Innovation Director at UL Ventures, joining us this Thursday at 10am PDT. He’s going to tell us a bit more about that and share a lot of his own experiences; he’s done a great deal in the sphere between corporates and tech, and also in corporate venture capital. So please do join us for that.

But that is really where we should wrap up for today. So from me and from all three of my speakers today, thank you very much for joining us, and we’ll see you again very soon. Bye-bye.

Conner Elliott:
Thank you, everybody.

Kunal Contractor:
Thanks, guys. Bye.

Mamta Narain:
Thank you.
