Party like it's 1970: conversational interfaces are back (into your app)
Formal Metadata

Title: Party like it's 1970: conversational interfaces are back (into your app)
Number of Parts: 90
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/47626 (DOI)
Transcript: English (auto-generated)
00:00
Hello, everyone. My name is Adrián. I'm from Guatemala. I'm a GDE on Android, Firebase, and IoT. And I'm here to share a little bit about conversational interfaces. And the title makes a reference to when this started. It's not something new. So my slides are going to be available,
00:21
are already available at that URL in case you want to check them out. But let's get down to business. So let's start with a very brief story in the 70s.
00:45
Everything started around 1952. There was some work at Bell Labs, where a system called Audrey was able to recognize a string of spoken digits. This was the start of everything. Although it seems like something really easy in our days,
01:04
at that time, it was a breakthrough, recognizing strings of digits. And although things started in the 50s, it was not until around the 70s that hidden Markov models
01:20
were developed. This helped a lot to recognize speech and move away from pattern matching. This has happened several times in history. For a while, we tried to recognize things using patterns. That's like one of the first steps.
01:40
And then we found a way to move from pattern recognition into something not that literal, let's say. And around the 70s also, several DARPA projects started to develop this area. In the 80s, there was still a lot
02:01
of brute-force pattern matching and template work. But also, hidden Markov models became popular at last. It took around 10 years for this to happen. And around the mid-80s, in 1984, SpeechWorks was one of the first companies working on IVR.
02:23
IVR is when you call a call center and a computer replies with something like, if you know the extension number, dial it, or press zero for an operator. And it was not until the 90s
02:42
that an industry started. In 1997, Dragon NaturallySpeaking was the first recognition software for continuous speech. This was a big breakthrough for the industry. And in more recent years, Siri, Amazon Echo, Google Home, and here we are in 2018, where
03:03
it's really common to use Google Assistant or Alexa or Siri or any assistant. And we are in this phase where it's common to speak to our phones and wait for something to happen. We're going to cover several things. But the first one is going to be the design process.
03:29
I would like to ask a question. I can, because of the light, I can barely see everyone. But I would like to know, how many of you are developers? Raise your hands. Almost all of the audience. UX designers?
03:42
No one? Oh, just one hand in the back. Great. Marketing? No? OK, so I'm a developer. But in this talk, I'm covering several parts of the UX design process. So the first step will be create a persona.
04:03
Many of the things that I'm going to be covering in the talk may seem intuitive, but it's important to keep in mind that we need to have a checklist when working with conversational interfaces. So the idea here is to identify the main traits
04:21
of the user who is going to be having the conversation, who is going to be talking with our app. And from there, keep making more specific segments in order to better identify and build this profile of a persona.
04:42
If you have ever worked with branding or marketing, it's a common practice. However, in software development, it's not that common. And sometimes it's easier for developers to target a broad spectrum of users
05:00
instead of the one group that's going to be using the app. Well, in this case, besides creating a persona, we need to focus on some specific traits. Because it's going to be a conversation, we need to focus on things like tone, style, technique, and voice. Even if we are using just one language,
05:22
it's not the same way everywhere. There are accents. There are words. There are several things that change from region to region. And it's funny. From my experience, I've been privileged enough to travel a lot mostly because of conferences.
05:42
But there are several words that mean one thing literally, but have another meaning in another region. This happened to me a lot in Latin America when speaking in different countries. So keep in mind that we need to focus on all this besides, of course, words and phrases
06:02
and the way in which we're going to communicate with our user. A lot of approaches here in the design process suggest that we think outside the box. It's like a cliche to hear or read that. Instead of thinking outside the box,
06:22
for conversational interfaces, we need to destroy that imaginary box that we have and build something around tone, style, technique, and voice. So in order to do that, traditional thinking won't help us because we're not building a visual interface.
06:45
We might have a visual interface for the conversation, but our main selling point is going to be the conversation. So we need to write a screenplay. It's similar to working with a script; not a software
07:08
script, but a script with several actors. We need to focus on the several parts of our conversation, on which role will
07:22
be played by each one of the participants, the user and the software that we're developing, and write all that in the form of a screenplay. In this way, it will be easier for us to describe the flow that the user will be taking. We might be used to write or understand or use mock-ups
07:45
and understand flows for our apps. But for conversational interfaces, because the interface is different, a screenplay is needed. And there are several basic principles
08:02
that we put in practice every time we have a conversation but that, for some reason, are not obvious when developing for a chatbot or another conversational interface. The first one and the most obvious is turn-taking. If we're having a conversation, at some moment, we're going to be speaking.
08:21
And at the next, we're going to listen for the other party to speak. Also, context is really important. Mostly, that's why we have programming languages and we don't program in natural language: because of context. Computers are terrible at understanding context. For some reason, we have not been
08:42
able to develop something to make computers understand context. But humans, we are good with context. Also, threading and reading between the lines. This is something that's a challenge, mostly because when we're having a conversation, there are several channels.
09:00
But usually, it's not the same having a conversation in person, let's say at the conference (this might be a great chance for a lot of people to meet in real life), as having the same conversation over Twitter or over an IM app.
09:22
So when we're having a conversation face to face in real life, usually we can get some cues from our body language. But when having the same conversation over Twitter or social media or instant messaging app, we don't have those cues. The natural replacement is using some emojis.
09:43
But still, it's not exactly the same. So when developing the conversational interfaces, it's important to keep in mind the cooperation, much like when we're having a conversation, we are both in the same boat trying to achieve something and all these traits that I mentioned.
10:06
Also, there are no errors, and this is an important thing to keep in mind. When we're having a conversation, we don't say to the other person, no, that's wrong, but not in terms of what you're saying.
10:21
I consider it's wrong, but you are saying it wrong. Oh, that sounds weird. But anyway, what I'm trying to say is, when we're having a conversation, there are no errors. We might have different points of view, but both parties, or if it's a group conversation, if there are many people involved, are bringing something to the table
10:42
and adding some value to the conversation. There are no errors. In the same way, when dealing with an app, whether it's an Android app or an Assistant app, something like Google Home or Alexa, if for some reason
11:04
the device says you are wrong, it's going to be hard on the user. Instead of that, we have several branches. If the user, for some reason, doesn't reply with what we're expecting, or if they reply
11:21
in a way that's wrong somehow, we're not going to say that's an error, like we usually do in UI interfaces. Instead of that, we're going to take a different road, a different branch, and show a different part of the flow.
11:41
This is important to keep in mind. We're not showing like a big red X saying to a user, you're wrong. Instead of that, we're taking a different path. And also, much like real life, interacting with a conversational app sometimes
12:02
will add some challenges. Some of them might be temporary. Others might be long-term. So we need to consider that there will be some noise, and that sometimes either the user or the app will need something to be repeated,
12:23
and that in some moments, we will need a graphical reply. In other moments, it will be only audio. So with all this in mind, we need to keep those cues to understand.
12:41
This is the main goal. We need to understand what the user is trying to say, and we need to be able to provide everything the user needs to understand what our app is trying to say. Understanding is key, and it's interesting mostly
13:01
because there are several barriers. When we're working remotely, developing software, that is, we're working with teams that are located in several parts of the world, usually we communicate using English. And for many of us, it's not our first language. So we need to deal with a couple of things
13:20
like accent, like the words and phrases that we're using, like cultural barriers. The same thing might happen when developing a conversational interface. So keep in mind that understanding is critical, and the users are going to help us. Users are here to do something, to achieve something,
13:43
and to focus on that task instead of focusing on using our app. Users know how to talk. Everybody knows how to ask a conversational interface for something; that might be, let's say, turning the lights on or turning the TV off.
14:04
And we don't need to be taught how to say that or how to speak to the app. What we might need is a little help with the onboarding process. There might be several ways of saying something.
14:21
So our app should consider all these different options and make the user's life a bit easier. Besides that, users also know what they want and how they want it done, and basically, we just need to leverage that,
14:43
mostly because we share the same goal and we're trying to understand each other. As I've said before, context is really important. Context matters. It might not be the same to say the same phrase in two different contexts.
15:04
We are basically good at inferring this context, but as I've said, computers are not as good as humans, so we need to deal with this. If we're having a conversation and someone pops in,
15:23
it's a bit easier to get on track and understand what's happening. But that context might lead us in different directions. So if we don't have a clear understanding of what the goal is, if the user
15:43
is not understanding what the app is trying to say in this context, things are not going to work out. And it's important to have a backup plan for cases when something happens, like the user not replying,
16:01
and it's important to have a timeout. When we're having a conversation, it's funny: if we were having a conversation face-to-face, because we are doing this in a synchronous way, the timeout that we have is almost non-existent. I mean, if we're speaking face-to-face,
16:21
and for some reason, I say something, and the other person just stares at me, it's going to be awkward, right? Sometimes when we're having a conversation, we're thinking about something, and we might spend a couple of seconds thinking like, hmm, I don't remember this, and it's quite common. But when we're having the same conversation
16:41
over, let's say, an IM app like WhatsApp, that's not synchronous. So we might have to wait for a while, and it's okay. When we're dealing with a conversational interface, usually it's synchronous, but the user might not reply
17:01
because something happened, like some interruption, like someone appeared and started a conversation in person with me, or I'm dealing with something, and the app is still waiting for some input. In this case, we're going to deal with some timeout,
17:20
and that's just one example, but it's important to keep in mind that having a backup plan might lead us onto a good path, might bring us back onto the happy path for the user of our app. Here's an example. This is a sample dialogue with the Google Home
17:42
and an app called Number Genie. You're supposed to guess a number between zero and 100. In this case, the user says nothing, like around here, and the reply from the app is, I didn't hear a number. It's not an error.
18:00
It's not, you are wrong. It's, I didn't hear. That might be because of noise, because you didn't say anything, or for any other reason. But that's what I meant when I said there are no errors. Then there's the silent or muffled case. It might happen that there's too much noise,
18:22
or maybe it's not the user speaking. Maybe it's, I don't know, a dog barking or something. So the next reply is, if you're still there, what's your guess? The wording, the phrases, the tone: it's all really important for the conversation to stay on the happy path.
18:41
We need to provide the canonical happy path, but consider all these possible branches, and we also need to consider whether it might be a first-time experience or a returning user. We already know how to have a conversation, but we might need a bit of help with an onboarding process, or if we have used this app several times,
19:04
we will appreciate our expertise being considered by the app. This is another example. This one is for quitting the game. In this case, the user tried to guess, 21 right here.
19:21
It's not 21, so the user gives up. In this case, the app should catch that the user is giving up, quitting, and give a good ending to the conversation. Sure, I'll tell you the number anyway. It was 90.
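The "no errors, only branches" idea from the Number Genie dialogue can be sketched as an escalating list of reprompts: each consecutive missed input gets a softer nudge, and repeated silence ends the conversation gracefully instead of showing a failure. This is a minimal illustrative sketch in Python, not the actual app's code; the prompt wording is adapted from the dialogue above, and the function name is invented.

```python
# Escalating no-input reprompts: branch instead of erroring.
# Prompt wording adapted from the Number Genie dialogue discussed above;
# everything else here is an invented sketch for illustration.

NO_INPUT_PROMPTS = [
    "I didn't hear a number.",                     # first silence
    "If you're still there, what's your guess?",   # silent or muffled again
    "Let's stop here for now. Talk to you later!",  # close gracefully
]

def reprompt(no_input_count):
    """Return the reprompt for the nth consecutive missed input (1-based)."""
    # Clamp so that repeated silence keeps getting the final, graceful close.
    index = min(no_input_count, len(NO_INPUT_PROMPTS)) - 1
    return NO_INPUT_PROMPTS[index]

print(reprompt(1))  # I didn't hear a number.
print(reprompt(2))  # If you're still there, what's your guess?
print(reprompt(5))  # Let's stop here for now. Talk to you later!
```

Note that none of the prompts ever tells the user they are wrong; the last one simply takes the "end the conversation" branch.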
19:42
Besides that, I would like to provide some snacks. Some people call these best practices. I think each one of these bullets might imply a discussion, so that's why I'm calling all of these snacks,
20:03
and I will be happy to discuss any ideas that you have after the talk. But let's review some of these ideas on how we are going to be able to build something nice and pretty when we don't have exactly
20:21
some graphical interface. We should avoid written language. This is important, mostly because when we are developing a graphical UI, usually we focus on wording that we're used to seeing in a written way. In this case, we might have a supporting
20:43
graphical interface, but the main interface is going to be a conversational one, as with the Google Assistant. We might have some feedback in the app, on the phone, but usually it's just the voice of the Assistant speaking to us.
21:03
It's important to kick off the conversation in a good way. We need to keep a balance on engaging with the user, but not so much that we're going to nag the user, and it's something similar with the notifications.
21:22
For maybe about three or four years, I've been living without notifications, and it's an interesting experience with the new Do Not Disturb mode in Android. It's been easier for me to configure that on my phone, but for some apps, I leave the notifications on,
21:40
like Maps, mostly when I'm traveling, but lately I've been getting so many notifications from Maps that I ended up blocking the app. With this in mind, the idea here is to engage with the user just in the correct amount of interactions. We also will need to guide the user through the conversation,
22:02
although we're not explaining how to have a conversation, nor are we leading the whole conversation, we're just guiding. And we keep our text-to-speech interactions short and clear, just in case the user is not able to understand for any reason.
22:24
It's also a good idea to avoid any data points unrelated to the user query. Usually when we're having a conversation, we like to be on a direct path. Many people, like me, tend to ramble when speaking, so when having a conversation with someone who rambles a lot
22:45
it's a good idea to stay on the path. The same goes for a conversational interface: we should not ramble, we should just give the user whatever they need for their query, for whatever they're trying to achieve, and follow the natural turn-taking
23:03
in order to keep the conversation like something that we would have with a person. It's also a good idea to use conversational markers to keep the user engaged, and finally, with all this solved,
23:21
we're going to add some salt to our app with some machine learning. As the abstract said, we don't need to deal anymore
23:41
with NLP; basically everything is solved, everything, let's say, and there are many tools that help us deal with the language processing. So instead of developing the whole stack of recognizing the input for the conversation to happen,
24:02
we can use a tool, a framework, like Dialogflow. There are many other options, but for me this is one of the best available tools. Basically, Dialogflow works with two things, intent matching and entity extraction: it's going to categorize whatever the user
24:20
is trying to achieve into an intent, and it's also going to identify something that's relevant for that task to be achieved. In order to do that, or in order to take the next step, we're going to take those intents and entities
24:42
and do something with them, and eventually reply to the user. Basically the flow is this: the user is going to input something, either with a keyboard or a microphone, and that's going to turn into a query. Dialogflow is going to receive that query, identify the intent, and process it.
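The two jobs just described, intent matching and entity extraction, can be sketched very roughly like this. The real service uses trained ML models, not word overlap; this toy Python version is only an illustration, and the intent name, training phrases, and synonym lists below are invented for the example (they mirror the movie-recommendation intent discussed later in the talk).

```python
# Toy illustration of intent matching and entity extraction.
# NOT how Dialogflow actually works internally: it uses trained ML models.
# All names, phrases, and synonyms here are invented for this sketch.

TRAINING_PHRASES = {
    "movie.recommendation": [
        "i want to watch something",
        "tell me what to watch",
        "recommend me a movie",
    ],
}

ENTITY_SYNONYMS = {  # canonical entity -> ways the user may say it
    "comedy": ["comedy", "funny movie", "funny movies"],
    "drama": ["drama", "serious movie", "serious movies"],
    "sci-fi": ["sci-fi"],
}

def match_intent(query):
    """Pick the intent whose training phrases share the most words with the query."""
    words = set(query.lower().split())
    best_intent, best_score = None, 0
    for intent, phrases in TRAINING_PHRASES.items():
        for phrase in phrases:
            score = len(words & set(phrase.split()))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

def extract_entities(query):
    """Map any synonym found in the query back to its canonical entity."""
    q = query.lower()
    return [entity for entity, synonyms in ENTITY_SYNONYMS.items()
            if any(s in q for s in synonyms)]

query = "recommend me a funny movie"
print(match_intent(query))      # movie.recommendation
print(extract_entities(query))  # ['comedy']
```

The point of the sketch is the division of labor: one step decides *what* the user wants (the intent), the other pulls out the *pieces of data* needed to fulfill it (the entities).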
25:01
That might be with something simple, like a Firebase Cloud Function, or AWS Lambda, or whatever you like to use for serverless, or it might be something more complex with your own backend. Either way, we're going to have external APIs,
25:23
maybe a database, some code to do this processing, and when we have all this settled, we're going to reply to the user and provide fulfillment with actionable data. The idea here is that Dialogflow
25:40
is doing all the heavy lifting, and we only need to identify what the user is trying to achieve, what the main goal is, and which pieces of data are relevant to that goal. We're going to start with an invocation: we need to say hello and start the experience with the agent,
26:01
with the software that we're building, our app. In Dialogflow, we're going to define these intents. This is just an example, an intent for movie recommendations, and we need to provide training phrases, like I want to watch something,
26:21
or tell me what to watch, or recommend me a movie. These are different ways of saying the same thing. The idea here is to get a movie recommendation. Then we need to provide the entities, in this case the movie genre, and we're going to provide some synonyms: for sci-fi, we don't have a synonym,
26:42
but for comedy, we have funny movies, and for drama, we have serious movies. And eventually, we're going to use that webhook data that I mentioned before, with a backend, to provide the fulfillment. The response is also specified here,
27:01
because we are providing the user with something like a phrase. Keep in mind that we're having a conversation here. We're not just providing some unconnected pieces of data; we are having a full conversation. So the response might be, you should give it a try and watch a comedy, grab the popcorn, it's movie night,
27:21
just in case the user is looking for a random recommendation, or how about a sci-fi for tonight. We use a context to move between intents. It might be that the user is looking for something more than just a movie recommendation; maybe they would like to discuss a specific actor,
27:43
or actress, or maybe the user is looking for a recommendation based on movies that they have already seen before. With this context in mind, we can achieve more fluent conversations. Instead of one request and reply,
28:04
it's a conversation that's going to take many turns between the app and the user, and eventually provide some recommendation, but with all the context shared between those interactions. One of the main things that I was looking forward to,
28:23
and that Google recently launched at I/O, is that using the Google Home is really nice, but you need to say, OK Google, or hey Google, for each one of the requests. Now they are launching a new feature so that you don't have to say that with every request,
28:41
just to start a new conversation. And finally, this is all the code that we're going to see. In order to integrate this into our Android app, we need to use an AI service. We're going to be using an access token. Dialogflow is not that different
29:01
from other API providers. It has its own SDK. It also needs some registration and provides an access token. In this case, we're going to click a button and listen for input, and afterwards, this is the result,
29:22
and here we're just showing what the app is providing. We have a query, we have an action, we have several parameters, and all this depends on whatever we configure on Dialogflow. It's interesting for me mostly because we are dealing with configuration.
29:42
Not everything is done in code, although if you really like to code, this might sound different. But it's a trend that I've been seeing for a while: it works better if some parts of the process are just configuration, like defining the intents
30:04
and identifying the intents, while other parts of the process deal with both the input and the output in code. And basically, that's all that we need in order to handle our interaction with Dialogflow.
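The result the talk describes (a query, an action, and parameters, all depending on how the agent is configured) suggests a simple dispatch on the app side. The sketch below is a hedged, simplified illustration: the payload shape and handler names are invented here, not the actual Dialogflow SDK types, but the split between configured actions and app code mirrors the configuration-versus-code split just discussed.

```python
# Illustrative sketch of app-side handling of a Dialogflow-style result:
# dispatch the recognized action to app code, with a graceful fallback.
# The payload shape and handler names are invented for this example.

def handle_result(result, handlers):
    """Dispatch a recognized action to app code; branch instead of erroring."""
    action = result.get("action")
    handler = handlers.get(action)
    if handler is None:
        # Unknown action: take a different branch, never a big red X.
        return "Sorry, could you say that another way?"
    return handler(result.get("parameters", {}))

handlers = {
    "movie.recommendation": lambda params:
        "Grab the popcorn, it's a %s night!" % params.get("genre", "movie"),
}

result = {
    "query": "recommend me a funny movie",
    "action": "movie.recommendation",
    "parameters": {"genre": "comedy"},
}
print(handle_result(result, handlers))  # Grab the popcorn, it's a comedy night!
```

Note that the fallback follows the "there are no errors" principle from earlier in the talk: an unrecognized action produces a reprompt, not an error message.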
30:25
It supports several languages and it provides several more features than what I cover here. So to summarize, many of the ideas behind conversational interfaces are things that have been around for a while.
30:42
Many of the things that are useful are quite intuitive. So in order to build an app that has some conversation at any point, for any user goal, we need to keep in mind that it's important
31:02
to build a persona using tone, words, phrases, and the technique that we would like for that conversation. There are no errors, there's the happy path and many other paths, so we need a backup plan
31:20
in order to get the user back on track in case anything happens. The user is there to help us, already knows how to have a conversation, already knows how to achieve things; we need to leverage that. And basically, the idea here is having a screenplay,
31:40
a script for a couple of interactions. If it's your first approach, I would suggest writing at least three different scenarios: the happy path, and then several other options, in order to build the full screenplay that we're going to have. And there are many other small things,
32:01
like avoiding written language and keeping the text-to-speech interactions short, but mainly the idea here is to make the experience as wonderful as we can for the user. So, that was what I had. Thank you very much.
32:28
Do we have time for questions? Yes. Are there any questions? Because we still have time. Over there.
32:42
Perfect, thank you. I was just curious: if you have a conversational interface, what's going to happen with all the flows, with the existing flows that you have? Because it basically leaves us with no job?
33:04
Well, it really depends on what we are looking for. I mean, there are several things that are going to be automated, but I don't see this happening as quickly for all fields, or even for some fields.
33:25
This might not be completely related to the presentation, but in my opinion, in a couple of years, software development is not going to be as good as it is now in terms of wages, mostly because many things are going to be automated. But it's not the only job that's going to change.
33:43
It's not going to disappear, it's going to change. So, with that in mind, many of the apps that we're developing nowadays are going to have something like a conversational interface inside. So, it's going to integrate with the flow that we already have. Let's say we're building a movie recommendation app.
34:03
So right now, there is a screen with several buttons and images. Instead of that, we're also going to have, first, maybe a small button that says chat with someone, and it's going to be that agent recommending movies. Eventually, it's going to be like 50-50,
34:21
and eventually, if things keep going the way they seem to be going now, it's going to be fully automated with a conversational interface. I'm not sure this is going to happen. I mean, chatbots have been around for a couple of years and are not as adopted as I would have thought, but those are my ideas about the coming future.