
No excuse user research


Formal Metadata

Title
No excuse user research
Number of Parts
133
Author
Lily Dart
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.

Content Metadata

Abstract
As designers and developers, we don't always have access to research about our end users, or the opportunity to learn about them. This can leave us building products based on our managers' personal opinions, or client specifications, and never really knowing how we can serve our users better. But the good news is there are many opportunities for user research that most designers and developers just aren't aware of. They are cheap, easy to implement, and can be used straight away on almost any project. Lily will talk you through 4 methods of no excuse user research that you can use immediately on the websites, products, apps and services you work on every day.
Transcript: English (auto-generated)
Cool. Hi, everybody. My name is Lily Dart, and I have a background in design and front-end development. I'm currently working as a freelance UX designer and user researcher.
Today, I want to talk to you about no excuses user research. I want to talk to you a bit about user research and how that helps us as developers on a day-to-day basis, and actually how easy it can be to do. And then to prove that to you, I'm going to show you three quick methods that
can be applied to almost any project and will require minimal dev effort and minimal analysis effort as well. They're very quick and easy to do. A lot of them use data that you already have lying around that you may not be using. But I'm going to start by talking a bit about why user research or why I think user research is
useful for us as devs. And the reason for this is that most of us are plagued by something I like to call opinion-driven design. At some point in our career, probably at some point in the last month even, we've all experienced and suffered through opinion-driven design.
And opinion-driven design is design and development based on subjective opinion, not objective data. So this is design and development based on the opinions of our colleagues, managers, clients, and stakeholders. So why is this a plague? I mean, we all need to make decisions every day.
We don't necessarily want to be caught short on that. We want to be able to make the quick decisions. And sometimes all we have is our best subjective opinion to be able to get on with that. Well, the problem often comes because when we have a lot of subjective opinions together, they don't often align. So if we're working in a team, everyone in that team
has different experience and insight to contribute to any kind of problem solving that we're doing. And often that experience and insight can lead us to different conclusions about what the best thing to do next is. And then when we can't align on what the best thing to do next is, it's often the highest paid opinion
our managers, our clients, our stakeholders, or possibly even just the person with the loudest mouth in the room who gets to break the tie, who gets to make that decision. And that is opinion-driven design. Now, this can lead us to being forced to prioritize based on internal wants and pressures
instead of user needs and priorities. It can lead us to make design and language decisions based on our project leader's preferences instead of making decisions that actually make the most sense to our users. And sometimes it means that we become overly focused on things like delivery speed and easily
quantifiable metrics like uptime instead of, or as well as, usability and product fit. Sometimes we get forced into rushing the implementation of the features that we're building. We aren't considering the best experience for our users, just the speed at which we can deliver something for our managers.
And the reason for this is because when we aren't really considering our users, it's our managers and our clients that we're designing and building for. Then we try and measure the success of what we've built, measure the success of our products and our websites. And we end up focusing on those easily quantifiable metrics that we have to hand that don't necessarily
mean anything to our users. Things like uptime, things like page views without any context or meaning. And this is because, again, we aren't really measuring how well we've met user needs. And it's possible that we didn't really have a clear sense of what those user needs would be in the first place.
So for most of this, this sucks pretty hard because it means that our expertise is ignored. It means that we get railroaded into building things that we don't agree with. And it means that we'll probably never have the opportunity to really know whether that decision that we were forced into was actually a good one or not
because we're not checking. So just before I get into this, how many people here have actually done user research? Three, okay, cool. Anyone worked on a project with user researchers or with research?
More of you, cool, okay, wicked. Well, so you may be aware of this, but the only thing we can really do to counter that kind of subjective decision making, those subjective opinions is to gather objective data. And if you have ever asked to spend more time with a user researcher, perhaps to do some research on your own and been turned down for it,
then you may have heard some of these familiar excuses and I've heard all of them. We just don't have the time, the money, or the people to do research. We already have a site that people use. Why do I need to know more about those people? And my personal favorite, we already know what our users want.
Now this third answer is usually based in overconfidence. It's rarely based on any kind of research, because those of us who have done regular research know that user needs change and evolve over time. And we only have to look at something like the change from desktop to mobile to see that on a very fundamental level. If we aren't keeping our eye on the ball,
then we won't notice when our users want something different. And those of us who do regular research know that you'll never really have a complete picture of what your users' needs and experiences are. Because as soon as we add new features or we change or update them, we need to reassess whether they still meet
our users' needs and what that user experience looks like. So we can't know what our users want, not with any certainty, not at any point without very, very, very regular and thorough research. And I'd say that in fact most organizations I've worked with just don't know enough about the behavior or needs of their users.
And sometimes this is based in overconfidence. They do strongly believe that they know what their users want and need. Sometimes it is the perceived lack of opportunity and resources to gather more data, to do more research. But in both situations, we end up in the same place. We end up relying on our subjective opinions
because we have no objective data to base our decisions on and we're back to opinion-driven design. So the key point that I would like you to take away from this talk, if nothing else, is that user research does not have to be time-consuming, expensive, or difficult.
It can be, if we choose for it to be, but there are plenty of methodologies out there that are quick and easy to implement and will get you a lot of value. So you may not have a team of researchers on hand. You may not even have one researcher on hand. But I want to prove that there really is no excuse, in the organizations that we work in, not to be doing at least some research
on every project. So my job today to prove this to you is to empower you with three quite quick and dirty methods that you can use straightaway on almost any project. They have minimal or no development effort to get them going and they have fairly minimal
analysis effort to understand what your findings are. And I'm hoping that these will help you if you're suffering from it to combat that opinion-driven design in your team or in your organization and that they might be something that you can take back to your colleagues and bosses to prove that research doesn't have to be difficult or time-consuming and that it is genuinely
within everyone's grasp. So the first method that I would like to talk about is usability feedback through complaints and bug reports. Bug reports, where no functional bug can be identified, are a potential source of user feedback.
And in order to use bug reports as feedback, there is one step that we must take. We need to accept that the problem does not exist between chair and keyboard. For those of you unfamiliar with this term, it's usually problem exists between chair and keyboard and it often gets shortened to PEBCAK. And we refer to reports as PEBCAK
when a functional bug can't be found and we assume that the problem is occurring because the user is using it in the wrong way. And this is something that we have to let go of if we're gonna start using our bug reports as usability feedback. In most cases, it's likely that there is, in fact, a usability issue in the way that we have designed our product or site that's making it difficult
or inaccessible for our users. So if you've handled PEBCAK reports, if you've thought about reports as PEBCAK or you've labeled them, I mean, I've seen some developers do this, labeled them as PEBCAK, then you have perhaps unknowingly been handling usability feedback.
Now, most of the organizations I've worked with receive these kinds of reports, certainly anywhere that you will be doing support work, any kind of product or ongoing sites. We get these kind of reports. There are no functional bugs but there is some kind of confusion with the user that we're talking to. And almost all of the organizations I've worked with
have handled them on a very individual basis. We'll solve the problem for the user at hand, perhaps we'll link them to the help documentation, perhaps we'll just describe what they need to do to get around the problem. But then we close them, we close the tickets, and we never consider them again. And that is a seriously missed opportunity.
And much the same can actually be said of complaints. I mean, complaints are feedback born of frustration and dissatisfaction. And they often reveal the best insights about some of the worst pain points in our user journeys. So if they can't be directly related to a functional bug, they too can be considered usability feedback.
And sometimes they might represent a mismatch in product expectations. A mismatch against what the product is intended to deliver, what it's built to do, and what the user is expecting. So complaints and bug reports can help us to identify potential usability issues in our interfaces,
identify mismatches in product features and user expectation. For example, perhaps a user believes that when they upload a photo, they'll have the ability to crop it, and may complain if this feature doesn't appear to be working, even though it's actually not a feature that we've built for them.
And the best kind of side effect of doing user research in this way is that you get to reduce the amount of non-functional bug reports and complaints that you receive. Because the problem with dealing with each of these individually and closing them down is that they keep on coming. And once we've identified those usability issues, we can fix them. And then we remove the trigger
that causes those reports in the first place. So this ultimately lightens our workload. So using bug reports and complaints as user research is actually extremely simple. It's probably the most simple thing I'm gonna talk about today. All you need to do is record reports in a system that allows tagging.
So if you've got a help desk, you can do this. If you've got an email account, you can do it. Then when they come in, when you've dealt with the individual complaint or issue to hand, you can close them down and tag them with appropriate labels. So we tag them to find them easily again for analysis. It is important to be a little sparing
with your tags. The location where the problem occurred, so for example, login, and the fact that it is a usability issue, so usability, should be sufficient for you to be able to refer back to later. If we get too detailed with our tags, then we may accidentally overlook issues by making assumptions about their causes when labeling them, before we've done the analysis.
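As a rough sketch of what that later analysis step could look like (not from the talk): assuming your help desk or mail tool can export tickets as JSON with a hypothetical shape of id, subject and tags, grouping usability-tagged reports by their location tag takes only a few lines.

```typescript
// Minimal sketch: counting usability-tagged tickets per location tag.
// The Ticket shape is a hypothetical export format; adjust it to whatever
// your help desk or mail tool actually produces.

interface Ticket {
  id: number;
  subject: string;
  tags: string[];
}

function usabilityPatterns(tickets: Ticket[]): Map<string, Ticket[]> {
  const byLocation = new Map<string, Ticket[]>();
  for (const ticket of tickets) {
    if (!ticket.tags.includes("usability")) continue;
    // Every other tag is treated as a location, e.g. "login" or "profile".
    for (const tag of ticket.tags.filter((t) => t !== "usability")) {
      const group = byLocation.get(tag) ?? [];
      group.push(ticket);
      byLocation.set(tag, group);
    }
  }
  return byLocation;
}

// Example: three separate reports tagged on the profile page surface as one theme.
const tickets: Ticket[] = [
  { id: 1, subject: "The button won't work", tags: ["usability", "profile"] },
  { id: 2, subject: "I can't change my surname", tags: ["usability", "profile"] },
  { id: 3, subject: "The page won't save my photo", tags: ["usability", "profile"] },
];

for (const [location, group] of usabilityPatterns(tickets)) {
  console.log(`${location}: ${group.length} usability report(s)`);
}
```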
Then all we need to do is check back regularly until we have enough to actually analyze them. And if you are using something like Zendesk, you can even set automated reminders on all sorts of kinds of tickets to remind you to come back and check on the reports. When you have enough to analyze,
we then wanna have a look at the reports for patterns or recurring themes that might suggest we have a usability issue. So what does a pattern or theme look like? Well, these are three example bug reports, all reporting the same issue. Now the pattern here is fairly simple to summarize. Each user is concerned
that their payment hasn't been processed. So once we've established that the payments are in fact successful, that there are in fact no bugs, we can conclude that the user journey through making a payment doesn't make it clear enough that that payment has in fact been successful for our users which is leading them to feel like there might be a bug in the system.
This is another three examples. The pattern in this example is harder to see because actually the user need behind each one is slightly different. So the first person wants to update their address, the second wants to update their surname, and the third wants to update their profile photo. But if we have a look at the tags, we can see that a usability tag means
that a functional bug has been ruled out, first of all, and that all of these problems occur on the same page, they occur on the profile page. So if we look at it in that sense, a pattern then begins to emerge. The button won't work, I can't make the change, the page won't save. And with this pattern identified, we could summarize that something
like a field validation error is stopping our users from submitting the form, but perhaps it's not visually prominent enough for them to see. Or it might even be something as simple as the method of submitting changes isn't clear enough to them. They might be accidentally clicking cancel instead of submit because we made that button more visually prominent for some reason.
We'd have to go back and look at the page to try and identify exactly what we think the problem is. We can even talk to these users if we want to, but we can see clearly that there is a recurring usability issue with this page. So how many reports like this do we need before we can actually be sure that there is an issue?
Well, an individual bug report or complaint probably isn't enough to give us much certainty. It's always possible that it might be a one-off experience. That said, we do need to remember that not everyone has the time or motivation to provide feedback or log bugs. Many users will just give up.
They will find a workaround or they will just go to a different product or site to do the thing that they wanted to do. So we need to remember that for every report we get, there is likely a group of users who have also experienced the problem, probably a much larger group of users who have also experienced the problem and haven't, for whatever reason, reported it to us.
So there's no easy answer on how many reports constitute a serious issue because it depends a bit on things like the size of your user base. It also depends on the ease of reporting and the severity of the issue. So the sample that we're looking at are those users who have both experienced the issue and been motivated enough to report it.
Now, motivation may come through various means. It may come through severity, for example, with our seemingly missing payments, or perhaps because we made it super easy to report the issue so it was quick for them to do. It may also come through repeated frustration with the problem seemingly reoccurring.
But to give you some sense of scale, if you had 300 users on a non-critical site, and by non-critical I mean something that isn't doing payment processing, something that people don't rely on to do their job every day, and three out of those 300 people
have reported issues that look a lot like they have some consistent usability issues in them, then I would consider that the issue warrants further investigation. That is, remembering that our sample is only those users who have experienced the issue, which already makes that group of 300 smaller, and of those, only the ones who have had the motivation to report it
on a non-critical site, which makes that group smaller again. Then that should be considered for further investigation. If you have 10 people reporting out of 300, then you probably have quite a serious usability issue. So again, remembering that for each of those three people reporting or 10 people reporting, there is likely to be a larger group
invisibly behind them who have had the same issues. So the upshot with complaints and bug reports is that although it really sucks to deal with angry users, users who report issues in anger, who are frustrated, who are difficult to deal with are unintentionally doing us a favor.
Their effort helps us to identify issues that might otherwise go unnoticed, and for every person who reports, there's a group of people who haven't made the effort to report. So it's important to repay that effort by making their user experience better where possible, and by doing that, we make everyone else's user experience hopefully better as well.
But the best benefit that it still has to us as developers is that it reduces the amount of usability-related bug reports that we have to handle because we resolve the problem that triggers those reports. So complaint and bug report analysis can help you to identify usability issues by looking for repeated patterns in bug reports
and complaints that have no functional cause. It can help you to identify product misunderstandings by identifying expectations with users that don't align with the current site functionality. For example, payments not appearing immediately in our credit card statements. And it can also reduce complaints and bug reports
by resolving the issue that triggers those reports. So in terms of time, there's no dev time here beyond tagging reports. Analysis time is gonna be a little bit variable depending on the scale of your reports, but if you've got less than 10 to analyze, you're probably talking 30 minutes or less
to actually identify any recurring patterns. So it's an extremely quick and simple way to get some feedback from your user base. So the next method that I would like to talk about is user feedback through transaction audits.
A transaction audit allows us to capture feedback from users at the point of success or failure. Transaction audits are usually single questions or short forms that we ask users to fill in at the end of their user journey. And they're most commonly used in things like help documentation.
So you may have seen them in the Microsoft documentation pages. You may have seen them in Google's documentation pages. Google has also experimented with using them for search results. So just occasionally I've seen a little box pop up in the sidebar asking me whether or not I found the results that I was looking for.
So why are transaction audits valuable? Well, they give us on the spot feedback about user interactions. They catch user feedback either at the point of success or the point of failure for what they're actually trying to achieve with our site or products at that given time.
And this is really difficult feedback to capture. Even if we usability test, we're never really going to be testing in a real life situation with a real life problem. We're never really gonna be watching someone try and manage something on their phone in their living room while they're also watching Sherlock and drinking a cup of tea.
So we don't get to see those real points of crisis, those real pain points in the system. But transaction audits do allow us to get some measure of that feedback. They can tell us why a user was unsatisfied and give us insight as a result to improving our sites, our user journeys, our user experience.
We can choose to ask users for more information if they do identify as unsatisfied. And we can use that information, if they choose to provide it, to identify things like missing content or a particular pain point in a journey. And they give users a really simple route to feedback when they may not otherwise be motivated to do so.
So again, thinking about this in the context of complaints and bug reports, the easier we can make it for someone to feedback, the more likely we are to get that feedback. But more importantly, if something is just a niggle, something that isn't worthy of a bug report or a complaint, users are really unlikely to take the time to report it. But the easier we make it to feedback,
the more likely we are to get them to help us identify those niggles early. And it's those small niggles that over time tend to grow into broader dissatisfaction. So this is a simple example of a transaction audit. This is the kind of question you might ask if you were gathering feedback on content pages
or perhaps something like search results. So did you find what you were looking for? And the preferred interaction for a transaction audit goes something like this. The user says yes, they did find what they were looking for. If they've answered yes, then we thank them for their feedback and we submit the form. We know what we need to know.
The user is satisfied. This is what happens if a user selects no. We reveal an additional free text field, asking them what it was that they were looking for. So this text field should be optional as some users are gonna be put off by being asked to add extra detail.
Some people will just submit with the box empty. Some people may stop at this point, which is why it's important to capture the response of no immediately as soon as it gets clicked in the same way we do with yes. It's important to capture the fact they were unhappy even if we don't get any further details on that. But if they do choose to fill in the text fields and that's where some of our real insights can come from,
we can always then update their response when they submit. If they have submitted and written feedback and press the submit button, then we thank them for their time. It's that simple. So transaction audits are an extremely simple and straightforward way of gathering feedback from people at a critical point.
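A minimal sketch of that interaction, assuming a hypothetical /feedback endpoint and hypothetical element ids (audit-yes, audit-no, audit-detail, audit-text, audit-submit, audit). The important detail it tries to show is that the yes/no answer is recorded immediately on click, and the optional free text is sent afterwards as an update.

```typescript
// Minimal transaction audit sketch: "Did you find what you were looking for?"
// Endpoint and element ids are assumptions, not a real API.

let responseId: string | null = null;

async function recordAnswer(found: boolean): Promise<void> {
  // Capture the yes/no answer straight away, before any optional detail is typed.
  const res = await fetch("/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ found, page: location.pathname }),
  });
  responseId = (await res.json()).id;
}

document.querySelector("#audit-yes")?.addEventListener("click", async () => {
  await recordAnswer(true);
  showThanks();
});

document.querySelector("#audit-no")?.addEventListener("click", async () => {
  await recordAnswer(false);
  // Reveal the optional free-text field only after a "no".
  document.querySelector<HTMLElement>("#audit-detail")!.hidden = false;
});

document.querySelector("#audit-submit")?.addEventListener("click", async () => {
  const detail =
    document.querySelector<HTMLTextAreaElement>("#audit-text")?.value ?? "";
  if (responseId && detail.trim() !== "") {
    // Update the already-recorded "no" with the extra detail.
    await fetch(`/feedback/${responseId}`, {
      method: "PATCH",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ detail }),
    });
  }
  showThanks();
});

function showThanks(): void {
  document.querySelector<HTMLElement>("#audit")!.innerHTML =
    "Thanks for your feedback!";
}
```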
Other examples of transaction audits are based less in the success of the transaction and more in the satisfaction of the experience. Measuring satisfaction is a little bit fuzzy sometimes. It's a difficult thing to do well, but sometimes it's a useful thing to try and measure if you can't be more specific. But using words like good or bad
may not feel quite like an appropriate reflection of the user's feelings at that point. They're quite specific. So a user might feel satisfied that they've completed their journey, but not necessarily like it was good or great. And we need to think about what kind of feedback we want when we're talking about satisfaction and what satisfaction actually means for us. But for these kinds of audits,
we may wanna keep the initial response abstract. And we can do that by using easily recognizable icons like smiley faces or thumbs up, thumbs down. So if you're designing a transaction audit, keep questions as simple and straightforward as possible to maximize the amount of responses that you get.
The simpler the question you ask and the less time it takes, the more likely you are to get more responses. If a user completes a transaction regularly, don't keep asking them for feedback. Don't be Twitter. If any of you have used the Twitter app, then you'll know that when new features are dismissed, it asks you every single time
if you like that feature, every single time. And not only does this annoy your users, but it also will lead to less reliable results. As an annoyed user, I now randomly answer yes or no every time I'm asked. And finally, reserve audits for your key user journeys or keep them discreet if you're gonna have them permanently on a page
for something like content or search feedback. The key here is not to pepper all of your transactions with audits. Keep your audits for the important transactions, the ones that are important to your business or your users or perhaps you use them for new features that you wanna test out and get feedback on. But users don't wanna be asked every time they complete any kind of transaction
on your site for their feedback. And if you do have a content heavy sites or if you're trying to judge the value of your content or you're gathering feedback on things like search results, you may need to leave it permanently on the page somewhere. So it's important to put it somewhere that won't distract users, but will hopefully catch their eye at an appropriate point,
the point at which they haven't been able to find the search result they were looking for or the point at which they have read the page of content and found that it does fulfill their needs. So this might be underneath the piece of content or it might be something like the sidebar of the search results. So if you wanna trial run an audit
without putting in too much dev time, you can actually run these from embedded survey tools. And I recommend starting with one if you just wanna judge the value of what it's gonna bring you. You will have less control over the visual presentation and behavior than if you rolled your own, but it's a very simple, quick, easy way to get going and to prove the value of this kind of methodology.
So if you are using a survey tool, the only additional feature you'll need other than it being embeddable is branching. And this is so we can show and hide the extra text input depending on a user's answer. Some survey tools will also allow you to add a hidden field
to pass values through from the page that they're embedded on. So if you've got this running on multiple content pages or you wanna pass back the search query that the feedback related to, you might also wanna look out for that feature. I've used Polldaddy successfully for this. Again, a little bit less control over the visuals, but you get it up quickly
and get the feedback rolling in. Just make sure, given that you're passing stuff outside of your web environment, that you're not passing any personally identifiable information. So that transaction audits do not become intrusive to users, we should always make them optional.
We should never block users from completing a transaction or getting to the next page that they want to get to by forcing them to complete a feedback form. And I hope it goes without saying, but we should always avoid invasive pop-ups. I think the final thing to say about transaction audits is that they're not surveys.
We may be using survey tools to run them, but they're not surveys. They will not produce quantitative data in the same way that a well-designed survey will. And this is because our audit is optional. So by a side effect of that fact, our responders are self-selecting. They are choosing whether or not they want to engage with the audit.
And they're more likely to respond as a result of a particularly good or a particularly bad interaction. And those individual experiences and motivations mean that we can't assume that their opinions are representative of everyone's user experience. So we can't say with any confidence anything like 50% of users like this page,
even if 50% of our responses were people saying that they like the page. And this is because we don't have any certainty that we have a representative sample because we are self-selecting. So transaction audits are satisfaction gauges or red flags for underperforming transactions. They should be considered a valuable way
of gathering insights, but they are not a way of measuring everyone's experience. So transaction audits can help you to gather feedback at the point of success or failure, helping us to understand if we're really fulfilling a need at the point at which a user tries to meet it, which is difficult feedback to get elsewhere.
It can also help us to identify unsatisfactory transactions and satisfactory transactions. It helps us keep an eye on our key transactions and see whether or not our users are experiencing pain points with them. And in certain circumstances, we can receive continuous feedback. As long as we found a way to place transaction audits
that will not bug or repeatedly annoy our users, then we can leave them running for as long as we like. And this means that we can see how feedback changes over time. And it means we can see how feedback changes through different implementations. This type of continuous feedback is not, as far as I'm aware,
available with any other kind of research methodology. So you've got two really valuable things with transaction audits. You have that on-the-spot feedback that's difficult to gather, and you potentially have continuous feedback as well. So the dev time for implementation depends on whether or not you wanna roll your own or use a survey tool. If we're talking about using a survey tool,
you can get it off the ground in about 30 minutes. Analysis for text responses could take an hour or more, depending on how many you get, but the bulk of your responses will be yes or no, happy face, sad face, whatever you've chosen to configure, which means you can identify in seconds and check on a regular basis
whether your transaction is doing well or not. So the last method that I would like to talk about is content and language analysis through search logs. Internal search logs are a rich, often untapped,
source of insights about user behavior and needs. And if we have an internal search, it's data that we already have to hand. We can use it for research immediately without having to do any dev work at all, other than pulling bits out of the database. And when we do search log analysis,
it reveals something that is seemingly negative to be quite insightful and useful for us. The value of searches that returned no results is immediately evident, because failed searches can provide a multitude of insights on missing content, mismatches in language usage and categorization, and even some usability issues.
So search logs can help us to identify content missing from our sites that our users want or expect. And this is something that we can only do through internal search logs. We can't do it with external analytics because if we look at incoming search terms,
then we'll learn how our users have found us, but it won't help us to identify missing or incomplete content. If our content is missing or incomplete, then users will have ended up on another site and that data about their search term will not have ended up in our analytics. So search logs can also reveal
when we're using different language or categorization than our users. For example, the site uses the word zucchini, but users are searching for courgette. And they can give us an indication of how easy our popular content is to find. So in some cases, we may be able to identify potential usability issues with this.
For example, the pages that users want most are actually hidden deep in the navigation, and then it seems likely that users are falling back to search to counteract that. So to give you an example of how to analyze search results, I've made up an awesome site called aliensforever.com. This is a site about alien races in science fiction,
and I've made up a list of the top 20 search queries for this site. So I'm gonna run you through two simple approaches to analyzing the logs. So our first approach is to compare search queries with existing pages. So what we're looking at here are search queries that don't return any results.
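As a rough sketch of that first approach, assuming the search log can be exported as query/count pairs and that each query can be re-run against your own internal search (the page list and resultCountFor below are stand-ins for that, and the example queries echo the made-up site from the talk):

```typescript
// Minimal sketch: finding the most frequent search queries that return no results.

interface SearchLogEntry {
  query: string;
  count: number; // how many times users searched for this term
}

// Placeholder for the site's real internal search; here it just checks a page list.
const existingPages = ["klingons", "vulcans", "daleks", "cardassians", "time lords"];
function resultCountFor(query: string): number {
  return existingPages.filter((p) => p.includes(query.toLowerCase())).length;
}

function zeroResultQueries(log: SearchLogEntry[]): SearchLogEntry[] {
  return log
    .filter((entry) => resultCountFor(entry.query) === 0)
    .sort((a, b) => b.count - a.count); // most frequent failures first
}

const log: SearchLogEntry[] = [
  { query: "klingons", count: 412 },
  { query: "the doctor", count: 198 }, // categorization mismatch
  { query: "delveon", count: 87 },     // missing content
  { query: "kardashian", count: 41 },  // misspelling of cardassian
];

console.table(zeroResultQueries(log));
```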
And the terms for this site that don't return results are currently highlighted in red. So we can see that seven of our top search queries don't return any results, so that's quite a big problem. And if we analyze the data, we can see that there are three reasons that this has come to pass. The first one is that these items are not alien races,
so this is a categorization error. The Doctor is an individual, his race is Time Lord. AI and robots could maybe be argued as races, but they aren't considered as such by the site owner because they are non-biological. And Jedis and Peacekeepers are groups containing multiple races,
they're not individual races themselves. So that means that our first finding is that user language and categorization doesn't align with some of the site content. So the second issue which results in no results returned for our users
is that the 11th most popular search term is Delveon. Delveon is a valid race in popular sci-fi (questionably popular), but there is currently no content about Delveons on the site. So our next finding is that some content that users are searching for does not exist on the site. And the third issue which results in no results returned
is an unfortunate misspelling of Cardassian. So our third finding then is that misspelling means that some users can't find the pages that they are looking for. Now when we're talking about misspellings, it's important to look at the scale of the search queries.
If just one user has made this mistake, and I hope someone somewhere has made this mistake, then it's probably not a big problem. But if our users are commonly making this mistake, we may want to add that misspelling to our search metadata so that they can still end up on the page that they want to get to.
So our third finding, misspelling means that some users can't find the pages that they are looking for. So the second analysis approach we can take is to compare the most popular search queries to the top visited pages that we've pulled out of our analytics package.
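A minimal sketch of that comparison, assuming both lists can be pulled out as plain arrays of names, most popular first (any real analytics or search-log export would need mapping into this shape first):

```typescript
// Minimal sketch: comparing top internal search queries with top visited pages.

function overlap(topQueries: string[], topPages: string[]): {
  inBoth: string[];
  searchedButNotVisited: string[];
  visitedButNotSearched: string[];
} {
  const pages = new Set(topPages.map((p) => p.toLowerCase()));
  const queries = new Set(topQueries.map((q) => q.toLowerCase()));
  return {
    inBoth: [...queries].filter((q) => pages.has(q)),
    // Popular in search but not in the visited list: possibly hard to reach via navigation.
    searchedButNotVisited: [...queries].filter((q) => !pages.has(q)),
    // Popular pages nobody searches for: possibly promoted by the UI or by external traffic.
    visitedButNotSearched: [...pages].filter((p) => !queries.has(p)),
  };
}

// Example lists echoing the made-up site from the talk.
const topQueries = ["klingons", "jedi", "daleks", "peacekeepers", "delveon"];
const topPages = ["klingons", "daleks", "vulcans", "borg", "time lords"];
console.log(overlap(topQueries, topPages));
```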
Now both sources of data give us slightly different insights. So analytics is gonna tell us about most visited pages, both from navigating internally and from incoming traffic, whereas search logs are gonna tell us about the pages that people are trying to get to once they've already landed on the site. So the fact that these two lists don't align
is not necessarily positive or negative. And we only have seven items that align here in the top 20 list that match both lists. But regardless of whether it's necessarily positive or negative, there are still insights to be found into user behavior here. Now there might be a few reasons as to why these lists don't align.
The first one might be that we're losing traffic for popular content to other sites in natural search. So when people are Googling for one of our races, there's another better looking site that we're losing traffic to. But when people are on our sites, they want to search for that particular race. It might be that some popular pages
are harder to discover through the site navigation. So users who might otherwise prefer standard navigation revert to search to try and find what they're looking for. Or it might be that the user interface itself promotes some content and increases the amount of page views by doing that. But it doesn't promote all of the popular content equally.
So there are a few potential conclusions that we could take from these results. The next step to kind of work out which one is the most likely conclusion is to take a look at the navigation, take a look at the user interface, and take a look at incoming traffic sources and conclude what we think the best,
most likely thing is. But in this case, we'll say that our fourth finding is that navigational routes to some popular content may be more convoluted than others, because Aliens Forever has a really, really horrible IA. Or so I imagine. So just from that quick review of 20 results,
we have four findings on language and categorization, on missing content, and we also have a usability issue. So you can see just potentially how insight-laden even that small amount of data can be. So if you're gonna run your own analysis,
there are a few things that you wanna consider. The first thing you wanna consider is the timeframe for your analysis. So things like, will external factors be affecting your traffic and your internal search terms? So an example for our site might be that more people have been searching for Jedis in the last few months
because of the new Star Wars movies. So could this color the results if we just take a three-month sample and perhaps sway our decisions that we make as a result of them in terms of things like how we prioritize the content in the user interface? Now, whether or not that's a useful insight in terms of more people have been searching for Jedis
in the last three months or a misleading one depends a bit on the intentions of our site and what we're trying to achieve. But we need to just consider what effects external factors have and whether or not our user behavior changes over time. It's also important to consider the size of your user base when deciding on the depth of your analysis.
So the more users you have, the more search queries you'll have processed and the more edge cases you're likely to find, particularly regarding categorization errors and misspellings. So if you do have quite a large number of users, it might be worth your time to go deeper into your analysis, go further into the logs
because you will find that there are probably quite a lot of misspellings that align, quite a lot of categorization errors that might align, but they may be further down the long tail. So if you have the time, it's also worth, now that you have the data to hand, testing out all of your top search results and all of your top visited pages in your internal search.
This is something that we don't often do. We might know what content is popular, but we don't always check what the experience of getting to that content is. So now that we know that content is popular, what do our users actually see when they use the internal search? Do they see these results come up first in the list
or are they four items down? If they're not coming up where we'd expect popular content to come up, then we might wanna consider adjusting our page metadata or adjusting our internal search weighting to make sure that those popular items get further up the list. So search log analysis can help you to identify missing
or difficult to find content through identifying which popular search terms don't return results or comparing most visited pages with our most searched for terms. They can help you to identify non-user friendly language or categorization and help us to see whether or not users are potentially missing content
that we do have on the site because they're using different language or classification to us. And it can also help us to reveal poor information architecture by identifying if our popular pages are hidden or obscured by user interfaces or navigational structures or possibly the search itself.
So in terms of time, if you're working with a content heavy site with a content management system and you've got a lovely GUI to go in and review the results, then all you've gotta do is fire up the page. Dev time for this is very minimal. Even if you do need to go searching through the database, you just need to pull back the terms really.
Analysis time depends on the size of your site and how deeply you wanna review the logs. But for an analysis like that of top 20 terms, I wouldn't expect it to take more than about an hour. So today I have presented you with three different
simple to implement methods of user research, using data that is easy to gather or already on hand. Now if you implemented all three methods on any given project, then you could potentially identify usability issues and reduce usability-related bug reports and complaints. Gather continuous feedback on transaction success
and user satisfaction. And improve poor information architecture by aligning it to what we know our users' language and categorization to be. And the real kicker is using all of these methods on a project would likely take you
less than eight hours a month to actually set up and then analyze. Less time than that ongoing when you're just referring back for analysis. And the insights that they can potentially reveal are worth much more than that time. Now this isn't a replacement for user interviews, usability testing, or other face-to-face methodologies.
But these methods will provide you with ongoing valuable feedback that you can use to understand your users better. And finally for us as developers or designers, getting regular feedback from our users offers us three very valuable extras day to day.
It offers us reduced uncertainty in our development decisions and our design decisions. Are we making the right call or are we making the wrong one? And are we perhaps being forced by the highest paid opinion to make a decision that isn't gonna be good for our users? And it can offer us better outcomes
through understanding what our users really want and need. And knowing that when we design and build something, we're genuinely creating something useful and usable. We don't have to feel railroaded into creating things that we consider to be bad work. We can take pride in what we produce knowing that we're meeting the needs of the people that we're building for.
And perhaps most importantly on a very day to day level, it can offer us less conflict with our colleagues, managers, clients, and stakeholders. Through focusing on that objective data over our subjective opinions, we have insights to guide our decision making. And we have pushback when we're being led
by internal priorities instead of user needs. So in my opinion, user research is the cure for opinion-driven design. And I encourage you all to go out and try some as soon as you can, if you haven't already. Thank you very much.
I haven't covered split tests, that's true. I didn't cover them because I felt like, first of all, they can involve quite a lot of development effort because you have to create multiple versions of something.
And they can be, if you're unfamiliar with them, a little bit tricky to get going in the first place. But also because they are fairly commonly in use and I think most people are aware of them, even if they haven't had the chance to play with them themselves. I think that they are, as ever, a useful research methodology
for a particular kind of problem. I think they often get misused quite badly, particularly if you're talking on an A-B testing level and you've got two entirely different layouts. We're not really split testing anymore. We're just kind of going, we don't know what the difference is that's making the difference between these two visuals.
We know that one performs better than the other but we don't really know why. So I think they often get misused but they are valuable when used correctly. I know. So it turns out, it turns out that actually
I had to go and ask a bunch of user researchers what I should even call a transaction audit. And actually I chose the term transaction audit in the end after being told very strongly by someone who specializes in creating surveys that it's not a survey and I shouldn't call it a survey and it's really important that I don't call it a survey. So no, they were quite widely used for a while.
GOV.UK used them for a bit. They've taken them away. They're putting something else in place soon but I'm not entirely sure what that is. But it actually strangely isn't a very commonly used method outside of things like help documentation. And in some places it's quite badly misused. I don't know if you've ever been
into the British Gas help documentation but it will go, have you, did you find the answer that you were looking for? And you'll go, no. And it'll go, have you tried these answers instead? And I don't think that they actually log any of the data to do with that because it never seems to improve. So actually I think finding a pre-made survey tool is probably the best, the closest thing
that you'll get and some of the, certainly the analysis tools inside survey tools like language analysis is gonna be useful if you have a large scale of responses but there is not currently software as a service for this. But maybe we should go build some one day. Be up for that. Hi.
Collecting too much feedback. What would you classify as too much feedback? Sure. I think that the only way that you can deal with that
is to have it in your process that it gets regularly reviewed. If you are finding that it does, so I've worked in teams with user researchers before and we find that actually the user researcher goes and spends a lot of time doing stuff and brings back feedback that is analyzed
and ready to go and ready to go in the product. But actually our dev team is like three sprints behind where they are and we don't then manage to catch up at all because the point at which we actually need to start integrating it, it's irrelevant by the time we get to that point. So I think the only thing that you can do
is make sure that it's baked into your process as much as you can. And certainly, I mean there is too much research, like there is too much detail, but I think if you're working on a project that just doesn't have any, then being able to put something like one of these methods in regularly is a good place to start. If you're working on a project that has too much, then I'd consider changing your methodology.
So what is a way that you could get to those answers more quickly without having such an analysis workload on top of you? So for example, with transaction audits, although you do get text responses back and you can use those, you maybe would choose not to look at those text responses unless 80% of people that came back
responded with no. So it gives you that initial flag to go, should I be looking at this? Should I not be looking at this? Okay, we do need to look at this. We've got a problem here and now we've got the data to be able to resolve it. So focusing research around a particular problem or a particular thing that you wanna get answered is also a good way to escape the eternal load of feedback.
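As a rough sketch of that flagging idea, assuming audit responses are stored as simple found/detail records, and using the 80% figure mentioned above as an adjustable threshold (the record shape and the threshold are assumptions, not from the talk):

```typescript
// Minimal sketch: only dig into free-text responses once the "no" rate
// for a transaction audit crosses a chosen threshold.

interface AuditResponse {
  found: boolean;
  detail?: string;
}

function needsAttention(responses: AuditResponse[], threshold = 0.8): boolean {
  if (responses.length === 0) return false;
  const noRate = responses.filter((r) => !r.found).length / responses.length;
  return noRate >= threshold;
}

const responses: AuditResponse[] = [
  { found: false, detail: "Couldn't find pricing" },
  { found: false },
  { found: true },
  { found: false, detail: "Search gave no results" },
  { found: false },
];

if (needsAttention(responses)) {
  console.log("High 'no' rate: time to read the free-text responses.");
}
```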
There's no point in collecting stuff if you don't know what question you're trying to answer and if you don't have capacity to answer that question in code terms. So I think embedding stuff regularly if you can, keeping your feedback focused to specific questions that you want to get answered and maybe changing your research methodology
to something that is a bit more quick off the mark and accessible would be my answer. Anyone else? Cool, you can go get coffee then. Thank you.