
What’s next - How we use feedback to decide what to build

Formal Metadata

Title
What’s next - How we use feedback to decide what to build
Series Title
Number of Parts
133
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You may use, modify and reproduce, distribute and make the work or content publicly available for any legal and non-commercial purpose, in unchanged or modified form, provided that you credit the author/rights holder in the manner they specify and pass on the work or content, including in modified form, only under the terms of this license.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
How do you collect feedback from users? How do you decide which feature to work on? And once you’ve decided, how do you manage its scope? These are all hard problems, and in this talk, you’ll see how my team approaches them. Over the last year and a half, we’ve rewritten our UI 3 times, and we’ve changed the core concepts at the heart of our product. We’ve done all of this and, by carefully controlling a story’s scope, we’ve still found time to add plenty of useful features, as well as better align the product to the overall business strategy. You’ll see what worked for us, what didn’t, and practical ideas for you to replicate our process. You’ll come away from this talk better equipped to make the crucial decisions about scope & what to build: how to combine qualitative & quantitative feedback data from a wide variety of sources and how to feed that into your decision making process. Ultimately, this is a talk about the most fundamental thing: how to decide which feature to build next.
Transcript: English (automatically generated)
This is my talk today. So this is me. I'm David Simner. I blog very occasionally at this URL. And my email address is on the slide if you want to get in touch. Don't worry. I'll put all these details up at the end as well. So today, I'm going to be giving an experience
report from the most recent project that I worked on. And the topic is, what's next? How we use feedback to decide what to build. So I'm going to be talking about how we got feedback from our users, all the different mechanisms we used, and how we incorporated that into the product that we were building. So I'm going to start the talk just by motivating why this is important, which will literally just take
five minutes or so. And then I'm going to give you a whistle-stop tour through the prototype, the alpha, and the beta stages of our tool so you can see which different feedback mechanisms we used at the different points. Then I'm going to go into a particular example in depth. So I'm going to talk about how the user interface of our tool evolved throughout those stages and an in-depth look
of the changes we made and which feedback drove those changes. And finally, I'm going to talk about how we managed the scope of the story. So we were using Scrum as our development approach. And I'm going to talk about how we managed the scope. Because if you're not very careful, you can set off to build a feature. And the intentions are all good, but the story can get derailed along the way.
So that's related to how we kept focused on the feedback and didn't just build other things at the same time. So this is why it's important. So we've got this chasm down the middle. And on one side is the users, and on the other side is us. And this is what we're trying to avoid. We're trying to avoid this as much as we can.
And the talk today is basically about bridging the gap between us and the users. And throughout the talk, I'm going to give you lots of tips and tricks on how to bridge that gap. So why do we collect feedback from users in the first place? And the answer to that question is that it's a very important part of achieving the holy grail, which is product-market fit.
So on this slide, we have a market, all the people on the slide. And one of the people in that market, the dude in orange, has got his hand up and he's asked to participate in our beta. And the hope is that by listening to the feedback from the dude in orange, we can build a better product, not just for him, but for everybody else in the market as well.
And so by kind of listening to his feedback and assuming it's representative of the market, we can build a better product for all of them. And that's good because hopefully, if we achieve that product-market fit, everybody in the market, or certainly way more people than just him, will start using the product and paying us, which is good because I like money, because I like getting paid.
Today's talk is about feedback, how to collect that feedback, how to use that feedback, and so forth. Today's talk is not about how to get users in the first place. So basically the marketing aspects of how you actually get these users is out of scope for the talk today. I'm going to be talking about once you've got these users, how do you then get the feedback off them and act on that feedback?
So even though I'm not going to be talking about the marketing aspects, it is a necessary prerequisite because if you obviously, you need people to give you the feedback, so if you haven't got those people, then it's obviously not going to work. So we had an awesome marketing team on the product I was working on, and over the course of the project, they got us 15,447 people to download it
and give us their email address. And this number was from when I made the slides a couple of days ago, so it's probably gone up since then as well. So this was absolutely awesome because it meant that we had a lot of people to talk to, and also a lot of people using the product as well. So we can do kind of quantitative measures with that funnel, which we'll see later in the talk as well. So to cut a long story short, you need an awesome marketing person, but it's out of scope for the talk today.
So let's go. This is how our project started. It was a very nice green field. I was literally living the dream of a green field project. Don't worry if you're not working on a green field project. The examples today, the tips and tricks, are also applicable to brown field projects as well. So don't worry if you're not working on a green field project.
So you can see that green field again. And then here is the hotel where we kind of kicked off our project. So we had an offsite meeting in this hotel. And in that offsite meeting, our product manager kind of, he had done some initial research, and he pitched us the idea of the tool that we were gonna be building. So it took us a whole day,
and I've got like 30 seconds to pitch you the tool. So let's see how it goes. So how many people here have heard of Octopus Deploy? Quite a lot of you, cool. So if you imagine Octopus Deploy, and then you take away the deployments side of things, now this is a bit weird, because what have you got left? What you've got left is a dashboard
that shows you which versions of the products are running in the different environments. So imagine that, but for database schemas, and that's what we set out to build. So it was a monitoring tool for the database schemas in the different environments. And the reason why this is useful is that often when people have databases
in different environments, they have lots of different ways of making changes to them. Various different people might have access rights to the server. Maybe they've got a deployment process which they're meant to follow, but on the times when they don't, they still want to know what's going on on these databases. So we were building a monitoring tool that will tell you the schema
that's on the databases in the different environments. And this is also quite nice for when companies put process change in place in terms of how they're gonna do their deployments, because they can use our monitoring tool to see if they're actually deploying the way they think they're deploying, or whether actually people are just kind of going onto the servers and running bits of SQL. So that was the tool that we set out to build.
And the idea for this tool had come from talking to our current market, which is kind of very database-centric. We write a whole bunch of different tools that help you deploy databases. And that's why the monitoring tool was useful, so you could see how the database deployment tools were getting on, and if they were being used in the right way. Now, because the tool came from talking to our current market, this comes with a massive advantage,
that we already understand this market to some degree, although obviously we didn't know precisely how it related to this new tool, so we did have some work ahead of us. The hotel now looks like this. It's been demolished, so let's hope that isn't a bad omen for how the project goes. So, the off-site meeting we had was on Wednesday the 12th of March, 2014.
And one of the things we decided we wanted to do in that off-site was to get the product in front of as many people as possible, as quickly as possible. And so we looked around for a conference to do that. And there was a conference in the first week of April in Seattle called ALM Forum, and we booked tickets, and we booked plane tickets,
and we were flying on Saturday the 29th. So that gave us two and a half weeks to build something which we were gonna show at that conference. And we did it in those two and a half weeks, we built a prototype, and then off we flew to Seattle with our completed prototype. And there we were at ALM Forum.
So the reason we chose to go to a conference is that it's a really great way to meet tons and tons of people. So for example, NDC this week, there's 800 people, there's tons of vendors here with booths, and it's a great way of meeting lots and lots of people. Now generally you can't spend that much time with each person because there's the whole conference going on, right? They want to eat, they want to go to the sessions,
they want to network. So you probably get maybe five, 10 minutes at the most with each person, but you get tons and tons of people. And we chose ALM Forum, the name stands for Application Lifecycle Management. The reason why we thought that was relevant to the tool we were building was that lots of applications have a database behind them, and if you're doing ALM, chances are you've got these different environments,
and so a tool that would monitor your database across those environments seemed to be a good fit. So we went to ALM Forum, and we talked to tons and tons of people over the time, over the couple of days of that conference, and the reception we had was really, really good. So we kind of pitched the idea to these people, we judged their response, and we got them to sign up if they were interested, and we got quite a few email addresses
from people who had signed up. It was really, really good. ALM Forum is actually quite consultant-centric as well, lots of consultants there, and that was also really good because each consultant brings with them knowledge of all of their different clients. So consultants were saying, yeah, my clients would love this, and that was the feedback we got. So once we got back from ALM Forum,
we decided that we wanted to continue with the approach. Everything we'd heard so far was good, and so we set out to do some more in-depth research. So rather than talking to lots of people but for five to 10 minutes each, we decided to talk to a smaller number of people, so just 14 people, but for way over an hour each, so kind of an hour, an hour and a half, something like that, and we did this over the phone with them.
And during that phone call, we were writing all of the notes that we were taking onto individual Post-it notes. And the way the call was structured, in the first half of the call, we listened to them describing their job, explaining their current deployment process and where the pain points they had with it were. And the idea was that by asking them where the pain points in their current process are,
we can identify where would be a good fit for a tool to help address those pain points. In the second half of the call, we gave them the prototype we'd built and we conducted a UX test to see if our ideas for the UI so far were gonna work or not. And then this is how we analyzed the feedback
from those 14 calls. So what we did was we took all the Post-its that we had written during the calls, and we put them up onto a wall, and we clustered them as well. And the colors of the Post-its mean different things, so you can see the left-hand side of this thing is kind of the first half an hour about their process, the right-hand side is kind of the prototype,
and the colors meant different things in terms of good or bad observations we made and so forth. And the clustering process was really useful because it forced us to think: that thing that that person said on call number five, is that the same thing as this person said on call number seven, or did they mean different things?
And by clustering them, it forced us to think back over those calls to try to get the most value out of the calls that we've made and the notes we've taken. Is this feedback the same as this one? Should we put more weight behind that? Is that just one person saying that? And so that's kind of how we analyzed and grouped the feedback from these people. These are the, this is a slightly biased shot,
but kind of in the front we have, I'm in love with this idea. In the Post-it, that's from call number 13, and then from just behind this, I love this idea, this is long overdue, I'm excited. Post-it up here, they asked about pricing. That's always a really good sign if people are kind of so bought in, they wanna know kind of how much is this, kind of when can I get this, those kind of questions are really good to hear.
So yeah, that was kind of the feedback that we got. And so based on that extensive user research, the next thing we set out was to do a private alpha. So the private alpha started off with just 16 users, and this is kind of our board for keeping track of what was going on on that private alpha.
So as you can see, Redgate absolutely loves whiteboards and posters. So we've got, each user has a sheet of A4 paper. At the top, that's just a screenshot from LinkedIn. So that's got their photo, their job title, where they work and so forth. And then the Post-its on here are all the different bits of feedback that we got from that user.
So maybe bugs they were waiting for us to fix, questions they'd asked, and you can see it says resolved here as we'd done that one, this one's in progress. And so this was a great way of kind of tracking where all these users were. So the sheet of A4 can move, it's on Blu Tack, so it can move left to right along here. So: yet to try it, in the process of setting it up, they've used it, or they are using it.
So that those sheets of A4 move left to right, and as they do move, the Post-its move with them, and the Post-its come on and off the board as we've been resolving these issues. So this was a great way of kind of tracking where we're at with these users, which was a great way of kind of organizing, making sure nothing fell through the cracks in the private alpha. Later on, I'll be talking about the open beta that we did,
and how we had to organize things differently for that. So this was our attempt to come up with a name for the product. So we tweeted, trying to name a new product we're working on, I think it's safe to say we won't go with SQL Badger. So this is SQL Badger over here. It's also safe to say we won't go
with a whole bunch of other things from this board as well. Can you imagine a product called Drift Busters, for example? Drift is the term when people just kind of make random changes on the database. There's also like, you know, SQL Warden up here. There's like Drift Wrangler. These are not, you know, Database Watchdog. These are not, you know, SQL Mole over here. I have no idea why that ended up on the board.
Database Big Brother up here, because it's always watching your databases. These are not kind of sensible names for a product. And so, because we didn't have any sensible names on here, the customers that tweeted back to us didn't give us any sensible names either. So this was not a useful way of getting feedback on what we should call the product.
And yes, it was fun. Yes, there was some buzz on social media about the project, but in hindsight, it wasn't a very useful way of choosing a name for the product. So if we were doing this again, we'd either have to decide that it was fun up front, in which case, sure, jokey names like Drift Busters are absolutely great, or we'd have to decide that it's serious up front, and then we can put kind of serious names on the board,
and then we might get some serious replies. In the end, we ended up going with SQL Lighthouse as the name. But yeah, that name wasn't driven by user feedback at all. We literally just picked it, because we wanted to move on and actually release something. And in fact, we came back and changed the name later as well. So yeah, don't do this
if you want to get sensible suggestions. The other little fun piece of feedback we had during the private alpha was that our Easter egg was too easy to find, so we made the Easter egg a bit harder to find during that time as well. So this is our DDL trigger. So the way that we monitor whether these databases are changing in their schema is we install a DDL trigger onto the database server.
And all of the alpha users knew that this DDL trigger was there, because they'd had to install it by hand. But the other people in their organization may not have known. So if there's a bug in this DDL trigger, and there was back in the alpha stages, it could break the way that SQL Server worked. So we obviously wanted it so that if that happened,
then we got to know about it. So at the top of the trigger, we put this comment, and the comment's got a URL with more details, and it's got an email address if you've got any problems. And this was a great way of getting feedback from the people that we broke. So other people in the organization, as well as our private alpha users,
if they were interacting with the SQL Server and it wasn't doing what they were expecting, they could ask SQL Server for details of that trigger, get this comment, and know how to get in touch with us. So this was a really great way of getting feedback, not just from our alpha users, but from other people in their organization that we'd broken as well, which was really useful, because it meant that we could find some bugs in this and fix the trigger.
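The talk doesn't show the trigger itself, but a minimal sketch of the idea, with made-up names and contact details, might look something like this:

-- Hypothetical sketch of a schema-monitoring DDL trigger; the table name,
-- URL and email address below are placeholders, not the real SQL Lighthouse ones.
CREATE TRIGGER MonitoringDdlTrigger
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    /*
        This trigger was installed by a database monitoring tool.
        More details: https://example.com/why-is-this-trigger-here
        Any problems? Email: monitoring-alpha@example.com
    */
    -- Record who changed what, so the monitoring tool can pick it up later.
    INSERT INTO dbo.SchemaChangeLog (EventTime, LoginName, EventXml)
    VALUES (SYSDATETIME(), ORIGINAL_LOGIN(), EVENTDATA());
END;

The comment block at the top is the important part for this story: anyone who stumbles across the trigger can see why it's there and how to get in touch.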
So in the final few weeks of the private alpha, we were fretting about what would happen when we release our public beta. So we've had this private alpha. It's had a very small number of users in it. We've kind of done a whole bunch of fixes, but have they really worked? Is there anything left to find? What happens when we release this public beta
to like hundreds of people? What's gonna happen? And so we wanted to find that out before we did. So we were gonna release our public beta at a conference, and that conference was kind of fixed on a certain date. The conference was not gonna move. We had to be ready by then. And we kind of, you know, this conference had hundreds of people at it, and we were gonna launch this beta, let them all download it, try it for themselves. And we wanted to find out in advance
what was gonna happen when we did this. And the way that we did that was we invited a small cohort of users to join our private alpha in the last couple of weeks, just to see kind of whether it was ready for prime time. And our UX designer asked all of these users to keep a journal about their interactions with the software.
So absolutely everything they did with the software, you know, installation, first-run experience, configuration, the day-to-day use, we asked them to write in this journal what they were finding. And then we asked the users to send the journals in to us. They sent them in. And so we've now got all these journals from these users. So the question is, how can we get the most value out of this feedback?
So one of the things we could have done would be for our UX designer to read through all these journals, give the team a summary and say, hey, look, I think we should do these improvements to the UX. But that's not what happened. Instead, our UX designer got us to read them. So this is me, and this is me reading a user's journal. And we had Post-its to write down what we found.
Redgate loves Post-its. So we had Post-its to write down the observations that we made from the journals. So the point is that the reason why our UX designer got us to read the journals is not because he was lazy. He wanted us to empathize with the users. So by us seeing firsthand and reading firsthand
from these users' journals the struggles that they were facing, it was a lot easier for us to empathize with those users. And so we could see exactly where these people were struggling to set the tool up, and we could work out what improvements we should make to the tool before we release the beta so that it would be in the best possible shape.
And so based on us hearing firsthand just how difficult our product was to set up, we prioritized those UX fixes over any new features that we might have added in those last couple of weeks from the suggestions from the alpha users. So this way of getting everyone on your team to read these journals and to see firsthand was a much better approach
than just emailing a summary or putting a summary on Slack, because it meant that we saw it firsthand. So that worked really well. A week later, we put those usability improvements in and we shipped our beta. One of our colleagues got us this 3D jigsaw of a lighthouse because the product name was SQL Lighthouse. And so we built the 3D jigsaw as a team as well. So that was it.
That's our UX designer there, that's Johnny. He came up with the idea for these user journals. So we've now released our public beta. So you know, this is our Greenfield project. It's now got some kind of flowers growing here and here's a lighthouse as kind of our project has now matured. So what I'm gonna do next is to talk about the public beta.
So the main difference between the private alpha and the public beta was that the public beta had way, way more people using it. And what that meant was that we had to scale how we collected feedback. So we still kept the individual touch, but now we kept the individual touch with just a sample of our users
rather than every single one. You know, gone are the days when we can have a whiteboard with a sheet of A4 for each user because we've now got hundreds of them. Gone are the days when we can spend hours on the phones with each user helping them get it set up because there are hundreds of them. So we still kept the individual touch, but just with a sample of the users. We also introduced some new techniques to analyze the users in aggregate.
So we had UserVoice and Google Analytics and we'll see in a bit more detail how we use those in a minute. So one of the ways we kept the individual touch was by doing UX tests. So we chose a small number of users from our user base and we invited them to participate in a UX test and we incentivized them with an Amazon voucher if they did it.
And this is how we ran UX tests. So again, Post-its. So these are printouts of the UI on these bits of A4 paper. So we're demoing the email story here. So you can see here's Outlook receiving an email, here's Dashboard and here's some of the page covered in Post-its. So the key here
is how we gather the feedback during the call. So if there was anything that was good, that went on a green Post-it, anything bad went on a blue Post-it and anything really bad went on a purple Post-it. And as you were on the call to the user, you were kind of writing these Post-its down and sticking them on the bit of the UI they corresponded to. And then at the end of the call,
you could just bring the printouts of the UI back down to the team area. And this meant that everybody who wasn't on the call could get an indication of how it went. You can look at this one and go, that page was fine, all greens. This page, pretty good, there's just one blue. And this page was an absolute disaster, right? The user could not use that page with the product it needs to be redesigned because you can see there's as much blues as greens
and there's even a purple here as well. So this was a great way of kind of seeing in our team area how that call had went. And as we did many different UX calls and as we iterated on the designs, you can kind of see all of them next to each other in the team area. And you can see kind of the Post-its just turned green over time as the feature gets easier to use. So this was a great way of kind of tracking
how the UX designs were evolving and also sharing the feedback with people who weren't on the call, because you can instantly see which were the difficult pages to use in the app. We also had to scale our feedback, as I said, because there were many more people. So we sent out automated emails to everybody. The automated emails came with our product manager's name
and email address, so they didn't look like automated emails. They also asked really open questions like, what are you hoping to achieve with the tool? The idea being that an open question would encourage users to reply. We also sent out surveys to our users, so loss analysis surveys for people who weren't using the product anymore, that kind of thing. We also used UserVoice, just as everybody else does.
In the admin page of UserVoice, you can get a list of all the people and their email addresses that voted for the feature. So when we came to start working on a feature, we got from UserVoice the list of all the people who'd voted for it, and we got in touch with these people in more detail. So rather than just going on a sentence of what they put on UserVoice a couple of weeks ago
or whatever, we got in touch with them, had an in-depth call, did a UX test with them, just to see in a bit more detail what they actually wanted. We also had a support rota, and we had absolutely everybody on the team on that support rota, so not just the developers, but also our UX designer, our technical author, our marketing person, the project manager.
And having everybody on that support rota, again, really helps everybody know what the users were struggling with and empathize with them. Our UX designer also discovered something. Once you've answered a user's support question, that is the best time to ask them for something, because they're really engaged with you, you've just solved their support question. So if you ask them a completely unrelated question,
like, you know, we're working on this feature, can you tell me what you think about this? Or would you like to do a UX session? Or can we have a screenshot of how you've configured the dashboard? These users would reply to us, and we found this was one of the best ways of getting users to actually engage with us: once we'd solved their support request, we kind of just threw in a kind of curve ball
of just this little question, can you help me design this thing I'm working on? And this was really good for getting replies from those people, and making sure that we had a wide variety of people having input on new features, not just the people on UserVoice who'd voted for them. We also had an in-product support system, so there's a magic keyboard shortcut you could press in the product, and it would send logs through to us.
And that basically saved us time. Rather than us having to do the back and forth with repro steps, we could just tell them to press these magic keys, and it would send the logs through to us. And that helped us focus our time on what was actually important, rather than the complexity of repro steps, which can just be automated away. So one of the best ways to get feedback
on some code that you've written is to ship it. And so we did. So this is our public beta. So we launched it at SQL in the City Conference on the 22nd of October. This is when we came out of public beta and became a free tool seven months later. And the dates in orange are all the days when we shipped.
So you can see we got really, really good at shipping. I mean, obviously it took a bit of time at the beginning to get into the process of shipping as we had to put all the kind of groundwork in place. Once that was in place, we shipped pretty much every single Wednesday. There's one exception in April, where we took three weeks to rewrite our entire UI. We'll see more on that later. But apart from that,
we shipped pretty much every single Wednesday once the kind of the initial period was over. So, and that's when we pulled the release. So, yeah, we got really, really good at shipping every Wednesday. And yeah, done that slide. And this is great because it means you can get feedback from the users about the features that you're writing.
So in December, we went on a team day out to Paris. So this is us at St. Pancras Station early in the morning. This is us on the train, the Eurostar train to Paris, having a nice aperitif before lunch. And this was a Wednesday when we went to Paris, and we ship on Wednesdays. So we shipped from Paris.
And the reason why I bring this up is that you have to get shipping down to just a few buttons. So we could ship over the VPN, over 3G, kind of, you know, from the restaurant in Paris. And by getting releasing down to just a few buttons, it means that releases don't become a distraction.
So by automating all the tedious things into just button, button, button, then it means that they didn't take that much of our time which let us spend that time on much more valuable things instead, like actually writing features users want rather than the tedious process of following release checklists because we automated the lot. So if you are a hosted product, then once you release,
the users are running the new version because you own the servers and you can just put them on the new version now. It's slightly more complicated when you're an on-premise product, which we were. In order to get users using the new features, we had to rely on them upgrading. So this is a graph that shows how quickly our users upgraded. So we've got the date on the x-axis here.
We've got the number of installs on the y-axis and the different colors represent the different versions. And you can see that when a new version appears, when its color appears for the first time, the first data point is quite high. So this is the first orange data point, this is the first green data point, the first light green data point,
the first red data point. And so because this data point is really high, what you're seeing here is that loads of people install the new versions on day one. Also, you can see the colors drop away really quite rapidly. And that means that everybody upgrades really quickly. So the way we achieved this was by making our check for updates notification incredibly naggy. So until you upgraded it nagged you
and that worked really well. Other products like Google Chrome just make it mandatory. However, if it's not a hosted product but an on-premise product, you need to actually make sure the users are upgrading, because otherwise they're not gonna be giving you feedback on the new features. Another place we got feedback from was our uninstall survey. So when you went into Control Panel, Add/Remove Programs
and uninstalled us, we popped up a Survey Monkey survey. Not everyone filled it in, but for the people who did, we got that insight as to why they had stopped using the product. Rather than just seeing our usage number drop, we actually got the insight as to why they stopped using the product. And that was really useful to us. We also used an automated error reporting system.
So if there was an exception in the app that was unhandled, it got sent in to us. We automatically worked out which Jira issue it was related to. If there wasn't a Jira issue already, we opened a new one. And then we had a dashboard like this one, which pulls together the information from various systems. So you can see all of the Jira issues that are coming in for SQL Lighthouse, their current state,
how many total error reports and issues there are, and the number of users coming from two different sources, check for updates and Google Analytics. And this let us see kind of how particular builds were getting on, whether we should pull a build. And it also gave us a kind of, if you imagine that each time the user sends in a crash report, it's basically a vote to get that bug fixed.
So this let us do kind of the democratization of bug fixes, where we fix the bugs that users were actually hitting. So you can see we fixed the top one here, and obviously we'll be working on the next one. There was such a thing as too much, sorry, go on. Yeah.
Yes, it would. We've also open sourced that bit of software as well. So you have to be using our exception reporting system, which we sell as a product, but we've open sourced the connector to Jira, which basically looks at the stack trace on the bug, works out the top method, and opens a bug for that. And if it's in a library, it will open two bugs, one in the library project in Jira and one in the actual product,
and then it will link the bugs together. So you can have a bug in SQL Lighthouse which is blocked on our shared utils library, for example. So there was such a thing as too much feedback, which is what we're seeing here. So this is the monitoring system for our exception reporting.
So you can see, this is the number of exception reports coming in per hour. And you can see it's basically tiny. This is when we released and then this is when we pulled the build and it started dropping off. So there is such a thing as too much feedback, which is when you're getting like 5,000 crash reports an hour. So yeah, you have to be ready to pull builds during an alpha and a beta stage,
because as much as you try, you will get something wrong, unfortunately. And finally, our users, sometimes users will walk away, they'll fly away, and they will leave and they will stop using your software. And so we had a usage metric that would let us keep track of how many users we had.
Our usage metric was very, very tight. So it wasn't just number of installations. They actually had to be using the product to count. So they actually had to set it up and they had to be using it. And they had to have visited a page in the product, at least in the last couple of weeks. And so by having this really tight usage metric and holding us to account,
we got to kind of see how our product was doing. Now, I've spent the last few slides talking about the areas where you get the most negative feedback. So crash reports, uninstall surveys, usage metrics, these are the kind of things where you can get the most negative feedback.
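As an aside on that usage metric: the talk doesn't show how it was computed, but conceptually it's just a query over the product's own telemetry. A rough sketch, assuming a hypothetical table of page views per install, might look like this:

-- Hypothetical sketch: only count an install as an "active user" if it has
-- been fully configured and a page in the product has been viewed recently.
-- The table and column names are made up for illustration.
SELECT COUNT(DISTINCT pv.InstallId) AS ActiveUsers
FROM telemetry.PageViews AS pv
JOIN telemetry.Installs AS i
    ON i.InstallId = pv.InstallId
WHERE i.HasCompletedSetup = 1                          -- they actually set it up
  AND pv.ViewedAt >= DATEADD(DAY, -14, SYSDATETIME()); -- and used it in the last two weeks

The exact definition matters less than the point being made: the stricter the metric, the harder it is to fool yourself about how the product is really doing.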
And you have to remember that basically what you're doing here is you're picking a sample of the users. So by choosing the people who uninstall the product to fill in an uninstall survey, that's a way of sampling your user base. And similarly, the crash reports, yeah?
That's the point I'm getting to. So the point is that we were sampling our user base, and we were sampling our user base by users opting in to uninstall the product and to give us the uninstall feedback. But that means that that sample is not representative. We have a whole bunch of other users who aren't uninstalling it. So if you look solely at the uninstall feedback,
you won't be getting feedback from the users who are happy with the product. And so when you look at the feedback sources that you set up, you have to make sure that they're representative of the entire user base. So if you just look at uninstall surveys, you won't get to hear about the silent majority who are just in love with the tool. So it's very important to realize
whenever you have a feedback mechanism, is the feedback we're getting from this mechanism representative of everybody? Or is it just representative of a particular group? So yes, we use these mechanisms, but we also use lots of other mechanisms as well, like sending out a survey to all the users, like doing UX tests, asking them for net promoter scores,
that kind of thing, which means that we got to have kind of a whole spectrum of feedback from different samples. And that meant that our decisions weren't biased by what one particular group was saying, which was really useful. Cool, so the talk's about halfway through. So in the part we've just done, we've looked at the prototype and the alpha and the beta
and how we used different ways of collecting feedback in the different stages of the product, and how different ways were relevant at different stages, depending on whether you could afford the high touch with every single user or whether you had to go with more of a sampling approach. So now I want to look at how our UI evolved.
Now, our UI, we literally changed it pretty much every single release. Obviously we didn't have time to go through every single one of the builds that we put out, so I've just picked five ways that our UI changed. And we'll be going into these in a bit of detail so that we can see what the UI looked like before, what feedback it was that we used to make that change, and then what the UI looked like afterwards.
So this is what our prototype looked like. So this is what we took to ALM Forum back at the beginning. So back in those days it was called Database Dashboard, or DB Dashboard for short. We hadn't even really given it a proper name yet. And so this is what it looked like. We've got a project, CRM, just like projects in Octopus Deploy. We've got environments, dev, testing, staging, production again,
just like environments in Octopus Deploy. You can see where our kind of idea for this product came from. And then we've got all the different databases that we're monitoring. These badges are green if the database hasn't changed. So this means everything's good. The database is running the version of the schema. So by schema I mean like the stored procedures
and the business logic in them, but also the table schema as well. And then these two databases have changed. So if a database hasn't changed, it's green. If a database changes, it's red or it's orange. And the difference between red and orange is what it's changed to. So this database in orange has changed to a version that we've seen before. And this normally represents a deployment. So, you know, staging has gone from 1.1 to 2.0.
That's probably a deployment. You might not have meant to have deployed, but you know, it's a deployment. Whereas red means the database has gone to something we've never seen before, hence the 'goes to question mark'. And so we've never seen this before. So kind of production should be green, and then it should go orange to mean that it's already been seen elsewhere, and production should never go red
because that would mean someone's just kind of logged into production, made a change on production that hasn't been seen anywhere else. So this was our prototype, that's what it looked like. When it was red or orange, you could review the changes or acknowledge the changes. And when you did that, it would go back to green. And if you wanted to roll the changes back, you had to go into SSMS or your deployment tool and make the change there
because we don't change the databases, we just monitor them. So that's what we took to ALM Forum. The problem is that there was no configuration. You can't see any way to configure this application to tell it which databases it's monitoring. Also, there was no installer for this. We literally just ran it by pressing F5 in Visual Studio. It was quite buggy, so we had to kind of put fixes in at ALM Forum itself.
So having Visual Studio on that laptop was invaluable. And that's what you get from a prototype. But obviously that wasn't suitable to be released to our alpha users. So we had to add configuration. And so this is what it looked like once we'd added configuration. So you can see there's a button here; we also renamed it to SQL Lighthouse, from DB Dashboard. So there's a button here to set up SQL Lighthouse.
And when you click that button, you get this configuration page. So this page was basic, but it did the job. And it also looked terribly bad. Like, can you imagine using this configuration page? This is not kind of, you know, for an alpha, this is fine.
You know, this took us no time at all. You can see how long we spent on the CSS for this. But this is just not really suitable for a beta. And that was the feedback we got: that it looked terribly bad. So the next thing we decided to do to our UI was to improve the styling before we could ship the public beta. So this was what the alpha looked like, just a reminder. And then we changed the dashboard
to look like this instead. So we've actually added some more features here as well. You can filter and you can collapse projects. And, well, no one understood what that question mark meant, so we kind of spelled it out in words, 'drifted to an unrecognized state', to make it quite clear what that question mark meant. So that's kind of what it looked like on the dashboard.
That's what the configuration page looked like before. And that's what it looked like afterwards, which is kind of, actually has some CSS now, which is really nice. So now we released our public beta with an installer as well. And during the public beta, we were using Google Analytics to measure our users in aggregate.
And we were using Google Analytics to look at the conversion funnel. And the conversion funnel started when someone downloaded a product. Remember our marketing team put 15,000 people at the top. And then our Google Analytics funnel finished when they became an active user. And that was an active user metric that we were keeping track of. That was a really tight definition. They had to actually configure the product
and have a database on their dashboard. So the funnel basically showed us everywhere that people got stuck during the configuration. So what I'm gonna do now is I'm gonna run you through the UI as it was when we released the first beta. And then I'll point out with hindsight all the things that the funnel then told us we got wrong. And then we'll look at what we fixed.
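The funnel itself lived in Google Analytics, but as an illustration of what such a funnel report boils down to, here is a sketch over a hypothetical table of setup events, one row per install per step reached; the step names and the table are made up for illustration:

-- Hypothetical sketch of a conversion funnel: how many installs reached each setup step.
SELECT
    step.StepName,
    COUNT(DISTINCT e.InstallId) AS InstallsReachingStep
FROM (VALUES
        (1, 'Downloaded'),
        (2, 'Installed'),
        (3, 'Created project'),
        (4, 'Created environment'),
        (5, 'Added server connection'),
        (6, 'Database on dashboard')   -- the "active user" end of the funnel
     ) AS step (StepOrder, StepName)
LEFT JOIN telemetry.SetupEvents AS e
    ON e.StepName = step.StepName
GROUP BY step.StepOrder, step.StepName
ORDER BY step.StepOrder;

Reading the counts from top to bottom shows exactly where people get stuck, which is the role the funnel plays in the walkthrough that follows.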
So this was the first page of our beta. Welcome to SQL Lighthouse. Be the first to know when your database has changed. Please click the blue button. This page performed fine. Everybody found the blue button and clicked it. This was when they started to struggle. And it's never a good sign when they struggle on the second page. So: welcome to SQL Lighthouse.
You need to set up a project, an environment, server connections, and databases. Let's add your first project. And for us, this made perfect sense, right? We were building Octopus Deploy without the deployments, but with real-time monitoring for databases. But for our users, this made no sense whatsoever. They'd come to this product thinking it was gonna help them monitor databases
to see if they changed. And they were like, what's a project? Why do I need a project? But anyway, there's a button there. There's nothing else you can do. So you click that. Now we want a name for your project. Well, I guess I'll type something in here and click the button. Welcome back to this page. Now you need to create an environment. So let's do that.
Again, which project would you like it in? Pick a name for the environment, click the button, and then you get here. And finally, I can add a server connection. And that's what our users were telling us, right? That this is what I wanted to do in the first place, but I couldn't, because I had to jump through the project and the environment hoops. So before we made a decision about what to do about this: we'd literally, in the funnel,
seen people drop out at all of those previous stages. They basically just got bored, right? They didn't understand the product, or they didn't know what it was for. Maybe they thought it wasn't really for them. And so they dropped out creating projects and creating environments. And so we saw quite a lot of those users drop out. It was our biggest drop-off in the funnel.
So we wanted to do something about this. And before we made a decision about this, we wanted more data. So, so far, all we've really heard from is the group of users who couldn't get it set up, right? And that we've seen in the funnel, these users have dropped out. And that's not really representative of our entire user base. We wanted to talk to the users who had got it set up to make sure we were building something
representative of everybody. So, for all the people who'd got to the end of the funnel, we asked them to send us a screenshot of their dashboard. From that screenshot, we could see what project names they'd picked, what environment names they'd picked, how many databases they had, how they'd grouped them, that kind of thing.
That screenshot told us tons and tons about how they'd set up the product. So we asked those users to send in their screenshots, and we looked at what we got back. Now, some of these users had understood projects and environments as we'd intended, and they'd configured projects with sensible names and environments with sensible names. And some of them hadn't.
Some of them had just typed "A" for the project; they'd literally just hit something on the keyboard to get through that stage. And it's never a good sign when people type gibberish into a mandatory form just to get past it. As for the environment, some people thought we meant servers, so we saw environments that were basically the name of a server. We also had somebody who tried to
add as many databases as they could. We had a limit of five projects, four environments per project, and three databases per environment. So they literally created projects A, B, C, D, E and environments one, two, three, four, so they could get as many databases as possible onto the dashboard. In some ways this is a really good sign, right? Someone valued the product so much that they went through an utterly tedious setup process
to add all those databases. But on the other hand, our setup process just got in their way, and while they got to the end, lots of other people didn't. So by asking for those screenshots, we got to see how lots of other people were using the product, and that meant we could now understand not just the people who dropped out, but the people who got to the end as well,
and make a decision representing our entire user base. Essentially we had two different groups of people wanting to use the product: the people who wanted to monitor one or two databases, and the people who had bought into the whole pipeline idea. And for the people just monitoring one or two, that was often an evaluation of the product, and over time they'd add more.
But we were hindering their evaluation of the product by putting these hoops in their way. In hindsight, we should have had a UX designer attached to our project from the very beginning, and after this mistake we very quickly got one assigned to the project so that it didn't happen again. We'll see what we did about this in a minute, but there are a couple of other issues first
I want to go through. The next thing users did after adding a server was download the DDL trigger installer and run that SQL script. People dropped out here. They also had to give us a display name and a server address, and they got confused about the difference between the two. Then they could add a database. We asked them to type in a database name here.
We didn't do any validation at all on what they typed in there, so we saw tons and tons of crashes come in when they made a typo in the name. In fact, that was the build we pulled, because we polled that database by name every 13 seconds to see how it was getting on, and that's why we sent in so many exception reports: literally every 13 seconds, if the database had the wrong name,
we sent an exception report. Again, we expected users to be able to copy and paste in here and know what the database was called, but people made typos and our lack of validation meant they didn't find out about it. And after all of that, this is your reward: you get your database on the dashboard. So this is what we did to fix all of those problems.
So the first thing we did was rename our product. Now, this wasn't actually driven by the users; this was driven by the business. Redgate has a whole bunch of DLM tools which let you deploy databases to different environments in a really easy way, and we wanted this product to be the dashboard that shows you how all those deployments are getting on. So there was a business need to rename the product
so people could see how it better fitted in with our suite of tools. So that was the first change we made. And then the second change we made is we just got rid of projects and environments completely. So when you come to this page, it starts off with add a server. That's what our users wanted. It's what made sense to all of them. So that's what we put. There's a slight issue with this,
which is that as a business, we wanted projects and environments in order to show the whole pipeline. But we don't have that information anymore. So we'll see in a couple of minutes how we added projects and environments back into the tool, but in a way that our users could understand so that we could show them the whole pipeline. But in terms of the configuration process, projects and environments are nowhere to be seen.
It's literally just: let's start by adding the server. When you're adding a server, we got rid of the display name, because no one understood what it was, and we just use the server address everywhere. We actually run the DDL script for them in the app, which helps users because they don't have to leave the app, go into SSMS, and come back; we just run the script for them if they consent and click the button.
When you're adding a database, we've now got a dropdown of databases, so you don't need to type the names anymore, and that fixed those support issues. And at the end of that, there's your reward: your database is on the dashboard. No projects, no environments, so much easier to do.
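Going back to that database dropdown for a moment: you can populate the list straight from the server instead of accepting free text, so a typo can never reach the monitoring loop. Here's a rough sketch in Python using pyodbc, purely for illustration; the real product is a .NET application, and the connection details here are hypothetical.

```python
import pyodbc

def list_databases(server, user, password):
    """Return the user database names on a SQL Server instance, for a dropdown."""
    conn_str = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={server};UID={user};PWD={password}"
    )
    with pyodbc.connect(conn_str) as conn:
        cursor = conn.cursor()
        # database_id > 4 filters out master, tempdb, model and msdb.
        cursor.execute(
            "SELECT name FROM sys.databases WHERE database_id > 4 ORDER BY name"
        )
        return [row.name for row in cursor.fetchall()]

# Offering only these names in the UI means the "wrong database name every
# 13 seconds" class of exception report simply can't happen.
```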
But as I said, as a business we wanted to have projects and environments here, and indeed some of our users wanted them too. Even for the people adding one or two databases, that was sometimes an evaluation and they'd eventually add more, and ways of grouping a dashboard are always useful. So we wanted to add these projects and environments back, and now let's talk about how we did that. This is what the product looked like
after we removed projects and environments, and this is what it looked like when we added them back in. If we zoom in a bit, we can see what's going on. This is the badge that represents the database, and you can see it says uncategorized, with an "is this development?" link. This was how we added environments back in. Rather than putting them in the configuration workflow
and getting in people's way as they were trying to add their databases, we moved them from before they'd added the database to afterwards. That meant that if you didn't want to use environments, great, you could just leave this thing alone; and if you did want to use environments, you were free to. We also tried to make it really easy to use. Rather than having to type environment names in by hand,
we gave you a dropdown with dev, test, staging and production. We also put up that "is this development?" link. The reason we did that is that we do some machine learning to try to work out what environment the database is in. Because this database has got "dev" in its name, we assumed it was development, which is why this one shows up. By using this machine learning, we could probably
give the user the right choice anyway, and if we don't, it's really easy to pick the environment from the dropdown. The idea is that it doesn't feel like configuration. It is configuration as far as we're concerned, but it doesn't feel like configuration as far as the user is concerned. It's no longer blocking their workflow; it's just an inline control afterwards.
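The talk doesn't go into how the guess works, so here's a toy sketch of a name-based heuristic of that general kind, in Python. The keyword lists and the learning mechanism shown here are assumptions for illustration; the real feature also learns from your own naming scheme as you categorize databases.

```python
# Hypothetical keyword hints; a real implementation would also weight tokens
# learned from the databases the user has already categorized.
ENVIRONMENT_HINTS = {
    "development": ["dev", "develop", "sandbox"],
    "test": ["test", "qa", "uat"],
    "staging": ["stag", "preprod"],
    "production": ["prod", "live"],
}

def guess_environment(database_name, learned_hints=None):
    """Return the best-guess environment for a database name, or None."""
    name = database_name.lower()
    hints = {env: list(tokens) for env, tokens in ENVIRONMENT_HINTS.items()}
    for env, tokens in (learned_hints or {}).items():
        hints.setdefault(env, []).extend(tokens)

    best_env, best_score = None, 0
    for env, tokens in hints.items():
        score = sum(1 for token in tokens if token in name)
        if score > best_score:
            best_env, best_score = env, score
    return best_env

# guess_environment("AdventureWorks_Dev") -> "development",
# which is why the badge shows an "is this development?" link for that database.
```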
So you pick development from the dropdown, or you click the hyperlink, and that's what you get: now you've configured the environment. And that worked really, really well. Previously we'd had tons of people dropping out of the funnel at projects and environments; this way we could add environments so that, as a business, we could show the pipeline, which was the value proposition
of the product, but our users didn't feel like they were going through any onerous configuration steps. This slide is the same environments feature; we sometimes call them categories internally, but it should say environments at the top here. So this is basically how our machine learning got on. If the user clicked the hyperlink for development,
that counted as us getting it right. If they went into the dropdown and picked development, that also counted as getting it right. And if they went into the dropdown and picked a different environment, that counted as getting it wrong. So this shows how good our machine learning was at predicting the environment a database was in: we got it right about 60% of the time and wrong about 40% of the time.
So, you know, it's good, but it's not great. But because it was getting it right over half the time, we left it in. The point is that it doesn't feel like configuration, and it's getting it right more than half the time. And of course it learns based on your naming scheme for databases, because the machine learning runs on your machine. We ship it with some sample data, but as you start using it, it learns.
Most of our users only have one or two databases, so it doesn't really get a chance to learn. But for people with lots of databases, it does, and that means it gets even simpler from about their fifth database onwards, because by then it has enough data to learn from, which makes it better for the power users. The next thing we did was add back projects,
and we called them pipelines. Again, we used machine learning for this: if databases had a similar schema, we put them in the same pipeline, and if they had a different schema, we put them in different pipelines. And that meant that,
from the user's perspective, it was just a sensible default. So just like environments, it didn't bug them during configuration, because it's done automatically for them. We also have a group-by here, which the arrow points to, so if they don't want to see pipelines, they can just pick a different view, by server or whatever.
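The talk doesn't say how the schema comparison works, so here's a toy sketch of grouping databases into pipelines by schema similarity, in Python. The similarity measure (Jaccard overlap of table and column names) and the threshold are assumptions for illustration, not the product's actual algorithm.

```python
def schema_signature(schema):
    """schema: mapping of table name -> iterable of column names."""
    return {(table, column) for table, columns in schema.items() for column in columns}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 1.0

def group_into_pipelines(databases, threshold=0.8):
    """databases: mapping of database name -> schema.
    Returns lists of database names; similar schemas land in the same pipeline."""
    pipelines = []  # each entry: (representative signature, [database names])
    for name, schema in databases.items():
        signature = schema_signature(schema)
        for rep_signature, members in pipelines:
            if jaccard(signature, rep_signature) >= threshold:
                members.append(name)
                break
        else:
            pipelines.append((signature, [name]))
    return [members for _, members in pipelines]

# The dev, test and production copies of one application's database share most
# tables and columns, so by default they end up grouped as a single pipeline.
```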
We also put this survey up in the application: how did our machine learning algorithm get on, was it accurate, almost, or not at all? And this is the feedback we got from it. You can see yes is about 60%, almost is about 20%, and no is about 20%. So the machine learning algorithm for pipelines was really, really good
at putting things in the right pipelines: yes plus almost is about 80% of the time, and that was really good. For the users who said almost or no, we took them to a SurveyMonkey page to ask how we should improve the feature. You can see that 19 people filled that in,
and most people didn't fill in the SurveyMonkey at all. That gave us another lesson about collecting feedback. If you have little prompts like this in the application, with no cross to dismiss them, people will answer the question, because it's inline, it's in the application, and it's really easy to just click one of the options. Whereas if you take them to a SurveyMonkey page
with some questions and ask for an email address, far fewer of them will fill it in. So that told us how we should ask for feedback in the application. Based on those 19 survey responses, the most common response was: your machine learning algorithm got it wrong, I need to move this database to another pipeline, or these databases were in different pipelines but should be in the same one.
So we added that ability in for them. We also changed the colors around this time. This is what it looked like before we added that feature, and this is after; the difference is this little dropdown control here. You click the dropdown, and if you want to move this database to a new pipeline, you can. And if there were more than one pipeline, it would list them so you could move it around.
So again, it's an inline control for moving things around, so it doesn't feel like configuration. The idea is that with these little inline controls, you click that, you get that, and if you want to change the environment it's in, you click the environment inline control. It didn't feel like configuration, and our conversions just went through the roof.
Okay, in the final 10 minutes of the talk, I want to talk about scope. We've seen all of the UI changes we made to the product based on that feedback. The reason I want to spend the last bit of the talk on scope is that aggressively controlling scope matters. If you're not careful, you can build things
that users don't need or don't want, and that is time wasted, time that could have been better spent elsewhere. And if you delay a story from shipping because you gold-plate it and add all the bells and whistles, then you've delayed it getting into the hands of users, and so you don't get the feedback
from those users as fast as you could. So those are all the reasons why we aggressively manage scope. Let me give you an example of how we did that, with our email feature. The very first release of the email feature was just plain text email. The emails we sent out when a database's schema changed were plain text, one line: something has changed,
please visit the dashboard to see what's changed. And that's all we had, just one line. I think we put the URL in to get through to the dashboard, but I'm not quite sure. So that was our first release, and it let us prove the value of the email feature. Do people turn it on? How many people turn it on? What do they think when they do turn it on?
Do people turn it on and then turn it back off again? We could get all of that data in. And we got UserVoice suggestions saying: these emails are great, but I want to see more information in them. So then we built that. We now have HTML emails with more information, like which database has changed, which project, or pipeline, it's in, and which environment it's in.
That means we can give users more information in the email. We then got feedback that it was quite difficult to get email configured, so we added a test button so they could check that the server settings were fine. Then we got feedback that people wanted to send emails in different situations, so we made it configurable when emails get sent. And our fifth release was better explanation of the errors from the test button. So you can see we were gradually
expanding on this feature. If we'd built the version at the very bottom right at the beginning, maybe users wouldn't have wanted all of those things. For example, when we first started work on email, we assumed people would want email to be configurable per project, so that different projects could email different people. But because lots of our users
only had one or two databases on the dashboard, and indeed because we got rid of projects entirely, that wasn't a feature that would have ended up being useful. So by not building per-project email configuration, we saved ourselves a lot of time. So just because you think users are going to want something, or that it makes sense,
you can waste a lot of time building it if you're not careful. That's why managing scope is so important: it freed us up to work on what users were actually asking for. There were various techniques we used to aggressively manage scope, and it all started with the story breakdown meeting, basically the planning meeting in Scrum.
The point of this meeting is to avoid the confusion and rework that can happen if everyone's not on the same page. What's in scope? What's not in scope? What will the UI look like? What's going to be stored in RavenDB, which was our data store? Making sure everyone's on the same page and avoiding that rework.
The next thing we did was split stories, as you've seen with email. During that breakdown meeting we might ask: is this really related to this? Would this be valuable to users without this? If so, let's split it in two. Another thing we did was MoSCoW. How many people have heard of MoSCoW before? Some, but not everyone. The M stands for must, S is should,
C is could, and W is won't. So when we came up with a post-it we were going to do, we said: this is a must, a should, a could, or a would, and would basically means won't, because you never get around to them. Then on our Scrum board, we had the to-do column on the left-hand side, and we separated it into three columns:
the to-dos that are musts, the to-dos that are shoulds, and the to-dos that are coulds. That meant we focused on the musts first, because it was called out explicitly on the board: what is a must, what is a should, and what is a could for this story? We also did this for the bugs we found during the story. When we found a bug: is this a must-fix?
Can we ship the story without this bug? Is it a should, is it a could? And we reprioritized as well: as we heard feedback from the UX sessions, sometimes shoulds moved into musts, or musts moved into shoulds, based on the UX sessions we ran on the stories and the feedback we got from users. This was a great way of simply not doing stuff, because the coulds didn't really get done
and only some of the shoulds got done. So it was basically a great way of not doing stuff, which let us ship sooner. The other thing we used to help us make the MoSCoW decisions was a release train. How many people have heard of a release train before? Some, but not everybody. One of the teams in Redgate calls it a release rocket
just to be different, but it's the same concept. The idea of a release train is that it's owned by the technical people on the team. This is not management telling the team when the deadlines are; it's owned by the team. So we've got the different weeks here, and these are each of the release Wednesdays,
and these are the things we're going to be delivering on each release Wednesday. A release train lasts for a month or a couple of months. You start off by saying, very roughly, as a dev team, how long you expect the bits of work to take. This bit of work, I expect to take a week.
These two bits of work should take half a week each, so we'll put them both going out on the same day. That might take a week, and that might take a week. These will each take a third of a week or so. So you put the stories you're going to deliver onto the release train. And the reason this is interesting is that it comes back to MoSCoW.
So sometimes you have to make the decision, am I going to do those shoulds? Or am I going to do those coulds? And the point about the release train is that it forces you to think about the opportunity cost of doing that should or that could. If you do a should or a could, that means that this story might take longer. And if this story takes longer, then everything cascades down here
and then things fall off the end of the train. So the idea is that the date at the end of the train is kind of when your product manager wants you to work on something else. So maybe this entire train is for, well, this entire train is getting it ready for beta. But maybe you have a release train for email or a release train for kind of showing audit history or a release train for kind of adding permissions
or all the other features we added. So the idea is that at the end of this release train, that's how long your product manager wants to spend on permissions or on history or email. And then this is basically the dev team trying to deliver as much as they can in that time. So if you start doing shoulds and coulds, things move to the right
and then things drop off the train. So it's a way of asking: is the should on this story more important than this entire story at the end of the train? That forces you to think about the opportunity cost of doing shoulds and coulds when whole stories may be falling off the end of the train. Sometimes the product manager makes the train a bit longer
if they feel this is more valuable than the next thing they want you to do, but it's a great way of managing opportunity cost. Another thing we did was woodland creatures. How many people have seen these woodland creatures before? Okay, no one at all. These little things are basically like story points. The problem with story points
is that people tend to add them up and do burndowns and say: why aren't you performing as well as you should be? The point about these is that you can't add them up; they don't support the addition operation. How the hell do you add a dragon to a bear? So they let the team convey the risk associated with certain post-its: we have to slay this dragon or kill this bear,
but in a way that can't be added up. So it's a way for the team to convey the riskiness of things. And if you see lots of dragons in the musts, it might make sense to move one of them to a should; I mean, are they all really musts? Whereas if you've got a Pikachu, it's probably not worth arguing about whether it's a must or a should, because you're going to get it done
in such a small amount of time that it's not worth having the discussion. So that's a great way of managing the scope of things. The other thing we did was on-demand configuration through documentation, which is a really long phrase for a really simple idea. Often as a developer, you have to add a constant into your code.
For our email story, we batch emails up: if there's a change, we wait 30 seconds to see if there's another change, so we can tell you about them both together, because they're probably related to each other. So whenever we introduce a constant into the code, maybe which TCP port our web server listens on, rather than the developer just putting that constant
in a source code file, we get it from an app.config setting. And we have an absolutely brilliant bit of infrastructure for mapping app.config settings into the DI system, into constructor arguments. So the point is, once you've got that architecture in place: don't add the constant in the class, add it to the app.config. And this lets you
produce on-demand documentation. At some point a user will ask: how do I listen on a different TCP port? And the config setting is already there in the application you've shipped; all it takes is your tech author writing the documentation page that says: here's the app.config setting you need to change. So this was a great way of handling things like "I must be able to change the TCP port",
or "I should be able to change the TCP port", or "I could be able to change the TCP port". It meant the developers could add that capability as an app.config setting, and we could write the documentation page later when someone asked for it. That's another great way of managing scope. So rather than building a UI to configure all these things, just expose them as app.config settings
and write the doc pages later. If enough people ask for that doc page, and we have Google Analytics on our doc pages so we can see how many times they're visited, then that's a good time to put it in the product as a proper feature. But it lets you handle those one-off requests from particular users and get rid of blockers, which is quite nice.
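As a rough illustration of the same two ideas outside .NET, here's a sketch that reads the "constants" from configuration with sensible defaults and uses one of them as the email batching window. This is Python with hypothetical setting names; the real product maps app.config settings into its DI container rather than reading environment variables.

```python
import os

def config_int(name, default):
    """Read an integer setting (stand-in for an app.config value), with a
    compiled-in default used when nobody has set it."""
    raw = os.environ.get(name)
    return int(raw) if raw is not None else default

# Hypothetical settings: only documented later if a user actually asks for them.
EMAIL_BATCH_WINDOW_SECONDS = config_int("EMAIL_BATCH_WINDOW_SECONDS", 30)
WEB_SERVER_PORT = config_int("WEB_SERVER_PORT", 8080)

def batch_changes(changes, window=EMAIL_BATCH_WINDOW_SECONDS):
    """Group (timestamp, change) pairs so that changes within `window` seconds of
    the previous one are reported in a single email."""
    batches, current, last_time = [], [], None
    for timestamp, change in sorted(changes):
        if last_time is not None and timestamp - last_time > window:
            batches.append(current)
            current = []
        current.append(change)
        last_time = timestamp
    if current:
        batches.append(current)
    return batches

# batch_changes([(0, "add column"), (10, "add index"), (120, "drop table")])
# -> [['add column', 'add index'], ['drop table']]
```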
And finally, we had a way of visualizing debt. One of the problems with MoSCoW and the release train is that, if you're not careful, you can end up with a lot of technical debt in your project, because fixing the technical debt becomes shoulds and coulds: I can ship without that, so therefore it's a should. And because the technical debt all becomes shoulds, when a story is finished,
any post-its for that story that you didn't end up doing, because they were shoulds or coulds, get moved from that story's area of the whiteboard onto the technical debt or UX debt area. And that means these things don't get forgotten about: the shoulds and coulds that you don't ship live on here and here,
and eventually, when there's just too much technical debt, we might put a buffet car in the release train, a week where we fix as many of these things as we can to reduce our technical debt. So it's a great way of managing technical debt while still shipping quickly, again just by visualizing it. And then there's the "what would we like to do" column, which is features we've had to cut.
So, yeah, tech debt, basically product debt and UX debt, and that's a great way of taking the shoulds and the coulds and putting them somewhere. So let's do a summary in the last couple of minutes. How many people... Sorry, go on. It's kind of like an IOU.
Yeah, sorry. I think it's because it ends in U and it's got an X next to it. So it's the... Yeah, cool. A summary in the last couple of minutes. So how many people have recognized this quote from Newsroom? Okay, not very many. So this is my favorite TV show and this is a quote from it.
We did the news well. You know how? We just decided to. And over the course of NDC, there are like 21 sessions that you've probably been to over these three days. And when you go back to the office on Monday morning, if you're not very careful, it's really easy to just fall back into business as usual and kind of not put into practice what you've learned here.
And you know what? You just have to decide to put these things into practice. Not just my talk, but all of the talks you've been to as well: you just have to decide to do it, and by doing it, you'll get better at it. The idea is that just by starting, you can iterate on it and work out what works for you and your company and your organization, with your culture,
with your user base, with your market. Hopefully I've given you a whole bunch of ideas about places you can start. And if you're already doing this, try something new; iterate on what you're already doing. So we did a whole bunch of things. We went to conferences. We ran a private alpha and then a public beta. We covered a wall in post-its for our user research.
We had the user journals, which gave us a whole bunch of feedback just before we released the beta. We did UX tests, auto emails, UserVoice suggestions, surveys, support queries, automated error reporting, usage metrics, funnels. We walked the tightrope between what the business wanted and what our users wanted, and we came up with a solution for both of them with pipelines.
So we did a whole bunch of stuff. Cool, thank you very much for coming today. Any questions?
So the limitations were, basically, that we wanted the application to work. Monitoring all these SQL Servers obviously puts a load on the CPU and the network, and because we store all the previous versions
in order to show you differences, it also puts a load on the hard disk. For the alpha and the beta, we just picked numbers out of thin air: five projects, four environments per project, three databases per environment. And for the alpha and the beta, that was fine, just picking numbers out of thin air. And then we got feedback from users about how it was working for them.
The slides? Yes, I need to put them up, so that will be sometime later today. So that's what we did. Now, when we came to release, in the beta period, one of the release trains we did was on scalability. We performance-profiled our app, we memory-profiled our app, and we gathered feedback. We looked at the support requests we were getting
from customers who were at that 60-database limit, the five times four times three, to see what issues they were hitting. We tried different database sizes. We did some surveys about what average database sizes were. We used our feature-usage reporting to see how often databases were changing, so we could come up with realistic numbers for what we needed to handle. And then we ran
a release train on scalability. The final product shipped with 20 databases as the limit for the free tool that we released. Most recently, we updated it to 50: we did a bit more scalability work and then increased it to 50. And if you installed it today, you'd get an in-product survey that asks,
is 50 enough for you, yes or no? And if you click no, it takes you to SurveyMonkey and asks how many you want. So we again used the idea of an in-product link to ask whether this number of databases is enough. Right now we're at 50, I think, but as we do more scalability work we can increase that number further. It's a matter of prioritizing scalability work
against all the other features people want as well. We don't do feature toggling. The exception to that is the app.config settings, which let you toggle things, but those are constants in the code. The reason we don't is that we tend to like doing UX tests,
so we can actually see the users using it and hear their thoughts; we get people to speak out loud in the UX tests. Some of the teams in Redgate do do feature toggling. We don't, because our team prefers the UX tests and seeing it being used in person. The other thing we do is that, before we ship,
we get other teams in Redgate to use it, so we dogfood it internally. Our IT department uses the dashboard to monitor Redgate's databases. So we do a bit of dogfooding before we ship as well. Those things together mean we don't do feature toggling, but it is an idea that other companies use.
A special tool? We use something to share the screen with them. It differs depending on which company we're doing the UX test with, because different firewalls block different things. I can't remember exactly what my UX designer uses, but it's things like Cisco WebEx and GoToMeeting and that kind of stuff. We use Camtasia to record the screen,
and then we get the user to talk out loud as much as possible. We put the user on speakerphone in the room, and the microphone on the laptop, as part of Camtasia, just records whatever the user says over the speaker. That works really well. And then we use the post-its to gather the feedback, as I talked about.
So it did change a little bit. Before this, we were releasing
every two weeks, at the end of each sprint. We had to bring that down to every week, which changes how you do the interaction between dev and test quite a lot. The way our teams work is that I'm in a team of about 10 people: five devs, a couple of testers, a project manager, a product manager, a UX person and a tech author.
And the interaction between dev and test had to change, because if you have a week to build something and test it, it's interesting how that works. We tend to do the red route of the application first, and then we add bits onto each part of the red route,
and because the red route is there, the testers can start having a go at it. It's basically a matter of having that communication between dev and test about how you're going to ship every single week. In terms of the user feedback, we already had UX people who were doing that kind of work.
But the main way the culture changed there was that everybody in the team was involved with it, like reading through the journals and that kind of stuff. In fact, all the people we've got were really open to that, because it's a way of empathizing with the user. And certainly for me, being connected to the user gives you a lot more pride, because you can actually see people using the software,
which really helps. So it varies from person to person why they like doing it. But the dev and test interaction was the thing we struggled with the most, and after a year and a half, two years of doing this, it's fine now.
So you do get a fear of the team forming different opinions, which is why having the journal in front of you is really helpful: you can refer back to it, so you don't get the Chinese whispers of, well, to me the user said this, and so on. Having the journals in front of us helped with that.
And MoSCoW really helps with that too. People might come to different conclusions about the priorities of things to do: I might say this is a must, you might say it's a should. And then generally we do a quick vote with hand signals, which is a really quick way of gathering feedback in a group. One signal means I support the idea wholeheartedly; one means I'd like to veto it, so it won't get done;
and one means I'm happy to defer my decision to somebody else. We do that really quickly to make sure everyone in the team has a say, because otherwise the quiet people don't get as much of a say, but this way they get a veto. And of course we may well carry on the discussion after a veto: the project manager may decide to continue the discussion in the hope of changing their mind, or may decide to move on,
depending on how they're facilitating the meeting. So yeah, there are lots of little techniques we use to manage group interaction and to come to conclusions. It varies, actually: sometimes it's the UX designer,
sometimes it's the product manager, and sometimes it's the team as well. In terms of support issues coming in: some of the support issues we were getting during the alpha and the beta were people misunderstanding what the product could do. That support issue might come in to whoever was on the support rota that week, and they'd bring it up at stand-up: what should we do about this?
So it was very much everybody on the team. Obviously it was the primary responsibility of the product manager and the UX designer, because that's their job, right? But the devs on the team, whose primary responsibility is to write code, are team members just like everybody else. And the way the team works is that we're given a goal about what we're trying to achieve, which was basically a usage metric:
we had to get to 1,000 active users, based on that usage metric. So everyone in the team has that goal in mind and how they're going to achieve it. So yeah, our devs were primarily writing code, but they were also team members like everybody else, helping out with that usage metric and that goal.
Any more questions? Cool, thank you very much for coming. Please remember to fill in the things on the way out.