
Automated UI Testing for iOS and Android Mobile Apps


Formal Metadata

Title
Automated UI Testing for iOS and Android Mobile Apps
Number of Parts
96
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.

Content Metadata

Abstract
An ever-growing number of mobile devices with constantly advancing operating system releases are hitting the market at a lightning pace. Creating a comprehensive testing suite is imperative to success in the mobile market to ensure your app is of the highest quality with each and every release. Unit tests can only test your core business logic. How can you ensure your user interface is bulletproof and regression-free on four versions of iOS across 20 devices, or eight versions of Android across over 18,000 device models? This is where automated user interface testing for mobile apps comes in. Xamarin.UITest is a freely available testing framework that enables you to create user interface tests that programmatically interact with native and hybrid apps. Swipe, tap, or rotate any user interface element, then perform real-world assertions and take screenshots for visual validation along the way. Learn how to create these tests and run them locally on your own device or simulator, or take them to the Xamarin Test Cloud to automatically test your application on thousands of physical devices, ensuring mobile success.
Transcript: English (auto-generated)
Good morning. It's extremely bright here, so I can't really see you unless I do this. It's kind of annoying. I like to be able to see the faces of the audience. But, oh well. I guess that's what it's like.
So welcome to the talk. I want to go back in time a bit. About three months ago, I was working late. I was working and I was unintentionally working late. I'd been caught up in some tasks I wanted to finish.
I realized I was late because I got a notification on my iPhone from the calendar app which said that you're now entering the block of time which I call family time, which means I'm supposed to be home with my family. I'm not necessarily the person who has to structure every minute of my time schedule
so that this is family, this is fun. It's not like that. But I work in a distributed company. I work at Xamarin, who's now part of Microsoft. So there's over 100,000 people. Not all of them necessarily know that I'm in a different time zone than they are. So I block off this time slot in my calendar to make sure people don't book meetings
when I want to be with my family. So I was getting this notification and I'm realizing, oh, I'm supposed to be home now, so I better hurry up. So I run off. We use Slack, the Slack app for chatting and work. So I send off those last messages, finishing the conversations on Slack,
run out the door, switch over to the mail application on the iPhone and send off the last email and try to hurry to get home as soon as I can. And I use public transportation to work and get back because I really hate traffic.
I hate being stuck in queues. So I like the fact that you can kind of get in a bus or on a train and you can sit down, do some work, read a book or relax or whatever. So I was trying to get home, so I switched over to an application we have in Denmark called Rejseplanen, which is basically a travel planning app for public transportation.
I'm sure you have something similar here. It's actually a really nice app. So it will tell me how do I get from where I am right now to where I want to go, in this case home. And it knows the local transport system, so it's slightly better than Google Maps.
And I enter that I want to get home from here, and I realize this is going to be really late. I'm going to be very late. So I go to the message app and I send a message to my wife, like, sorry, I'm really late. I miss you and the children. I'm so sorry. But I try to get home fast. And so I keep walking a bit, and I think, you know what?
I'm just going to take a taxi. I really want to get home today. And taking a taxi in Denmark is like here. It's not something you automatically do. It's like ridiculously expensive, so it's not like a default thing. But I decided I wanted to do it. So I switched over to the Google Maps application to check my location, because I've been walking for a bit,
and I wanted to call them and tell them where I am. And then just when I was about to call, I realized that last time I'd called a taxi company. I got into this queue, and they said, why don't you try our new mobile application? It's a really easy way to book. And I thought, yeah, that's way better than going into a queue.
So I jumped over to the App Store, searched for the application, downloaded on the fly, got it on my device. And I opened it, and I was a bit puzzled. The UI was a bit weird. But I figured out how to go into the booking screen. And I tapped the booking button, and I got this.
Boom. Crash, back to the home screen. And I was really annoyed. I was late. I was trying to get home. I was trying to book a taxi. And it was a very basic thing I was trying to do. I just got this experience. Crash. And so what do you think I did?
What did I do? Anyone? What? Try it again. Try it again? No. You deleted it. Exactly. That's exactly what I did. I stopped using the application. I deleted it. I went to the App Store because I was pretty annoyed at this time and trying to get home.
I told them in a public rating, one star, please fix your stupid application. It doesn't even allow me to book a taxi, which is the most basic operation of this application. And the UI sucks. I was upset at the time. I don't usually leave aggressive App Store reviews.
But on the other hand, do you think that's unrealistic? This is a true story. This actually happened. Is that normal behavior? Yeah. If you think about it, think about what I've been doing just prior to getting this experience.
I was at work. I got a notification. Sent out a message on Slack. Switched over to email. Walked a bit. Switched over to the travel planning app in Denmark, which is quite good. Realized I was going to be late. Sent a message to my wife. Switched over to Google Maps, an awesome application. Got my location.
I was just about to use the phone app to call the taxi when I decided to switch to the App Store app. Downloaded the app on the fly. Everything's smooth so far. And then I got this. This experience. And this is how I felt. I just burned this thing.
My point with this whole story and this message here is that really users have extremely high expectations for the quality of their mobile experiences. If you look at every one of those apps, they have beautiful UIs. They're really responsive and fast.
They have great user interaction design. They publish updates to these applications in response to App Store feedback multiple times per month. And they just have a ton of useful features, right? And that application that you're building
or that this taxi company is sitting right there in between all these world class apps like Snapchat and Instagram and Facebook, your app is right there in the middle. And it's definitely going to be compared against the world's best apps.
So I'm trying to get the point over that the expectations are really high. When you screw up, it's publicly visible in the app stores via the rating system. And users are pretty tough. They'll do this. At the same time, while there's a high bar,
there's also some unique challenges. Quality in mobile is just really hard. Because if you think about it for a moment, there are platforms like iOS, Android, Windows. And there are vendors, so device vendors. So how many devices are out there? Well, there are multiple vendors. Each of the vendors have multiple models.
The models feature different hardware like CPU memory, screen sizes, resolutions. And each of those run different versions of the operating systems. So there's just so many combinations. Trying to test on all of that can be pretty hard.
So one way to kind of constrain this problem, to make it easier, is just to say, I'm just going to limit myself. I'll only support the newest version of the Samsung phone and maybe the newest iPhone or whatever. So if you were to take this approach
and you wanted to, say, limit yourself to the 75% most popular models out there, how many devices do you think you would need to test on to get that market share coverage of the 75% most popular ones? How many? Anyone? No idea?
300. It's actually less. 75%, wow. But you're getting there. You may actually be right, because this data I have is actually probably three years old now, so it could be worse. So to get, this is US data, by the way, to get 75% of the market share of the models out there,
you would need to test on 134, at least as of three years ago. Actually, there has been even more fragmentation in the space since then, so you could be right. It could be 300 now. And you have this exponential thing where if you want to push that 75% even higher in terms of coverage, you get this exponential growth
in number of configurations of devices. So this is just really tough. So if you're a QA manager, what's your strategy going to be? You're responsible for delivery of a mobile application? Are you going to test everything on all devices?
Are you going to test all the features of all the applications? Are you going to buy all these devices? Are you going to hire someone like this extremely efficient girl? I've never seen anything like this before this picture. But no. We don't have people.
But this is an example of what people will do. They will outsource their testing, basically, to fairly cheap labor. And they are specialized in very efficient multi-device testing. I just really love this picture. But then again, you can think about if you're testing on,
what is it, like 50 devices, something like that? What's the chance that you're moving really fast that you're going to make a mistake? Maybe actually be pretty big. All right, so I think I'm getting my point across. So this is how you should feel, like, ah! The scream, right?
So if you're not familiar with this picture, it's actually here in Oslo. It's Edvard Munch. And he actually gave a German title to this picture. Do you know what it is? The German title? No, it's Der Schrei der Entwicklung Mobiler Apps. So translated to English, it's the scream of mobile app development.
And you can go look that up if you don't believe me, which you probably don't. This is how people feel, like, really anxious. So now what? What do you do? Well, I'm going to argue that one approach you can take is actually to focus on automation.
So if you're able to automate your tests, and as we'll see later, to deploy them to a range of different devices, then you can speed up your cycle, and you can ensure higher quality, and you can test on a lot of devices very easily
without going to extreme measures. So I want to talk specifically about automated UI testing. Now, I know that there is definitely more to life than automated UI testing, and I know about the concept of the testing pyramid, which says that you should have more unit tests
than you have integration tests, and you should have more integration tests than you have automated UI tests. But consider if you're starting from scratch, and it turns out that about 75% of mobile app development projects are starting from scratch. Like, they have no automation at all. They have no CI.
Now, if you're starting from scratch, that's 75% of you guys here, and you want to get value, and you can now only write one test. Well, I would argue that if you're just able to write one test, you're getting a lot more value out of writing an end-to-end automated UI test or a smoke test, rather than writing one unit test and running that.
I think that testing pyramid story only applies once you have a decent test coverage, and then it tells you something about the ratio between unit integration and UI tests. Anyway, just in case someone's not familiar with the concept, so automated UI testing, the idea is to have a program, a test,
which simulates what the user does to an application and what does the user do? Well, he or she interacts with UI controls, right? Like tapping, scrolling, swiping, entering text. That's what a user does to an application. Now, you're going to simulate that in a program.
So in order to do that, you need to be able to talk about gestures, like tapping and swiping, and you need to be able to talk about views. So the way we do that, at least in our tool, is that we have what we call a query language, where you can specify, you know, it's this exact button that I want to touch,
or it's this exact text I want to find. And there's kind of a nice little DSL declarative language there to let you easily do that. So some examples here would be, so this is C sharp in this case. So application, please tap anything that has the text, the string, help.
Whether it's a button or it's a text, doesn't really matter. Just find something with the text, help. Or application, please tap the element E, which has the ID, technical ID, history, BTN, history button. Or application, please wait for element E,
which has the text, ink. So this is the kind of language you would use in a program like this to tell the program what to do to the application, like tapping, waiting for events to occur. Like, wait until there's no spinner visible on the screen. In addition to these various gestures, you can also generate screenshots.
Like, I want to see what the application looks like just now. Save that to a file. And you can manage the app lifecycle. So launching, stopping the application, clearing the application data so you're starting from scratch. That's often something you need to do during a testing cycle. Like, completely uninstall it, install it again, and start from scratch.
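To make that concrete, here is a rough sketch of what those calls look like in Xamarin.UITest's C# API, wrapped in a minimal NUnit fixture for Android. The APK path, element IDs, and texts are placeholders, not taken from the real app:

```csharp
using NUnit.Framework;
using Xamarin.UITest;

[TestFixture]
public class SmokeTests
{
    IApp app;

    [SetUp]
    public void BeforeEachTest()
    {
        // Install and launch the app under test before every test run (path is a placeholder).
        app = ConfigureApp.Android.ApkFile("MyDriving.apk").StartApp();
    }

    [Test]
    public void TapWaitAndTakeScreenshot()
    {
        app.Tap(c => c.Text("Help"));               // tap anything showing the text "Help"
        app.Tap(c => c.Id("historyBtn"));           // tap the element with the technical ID "historyBtn"
        app.WaitForElement(c => c.Text("History")); // wait until the history screen has appeared
        app.Screenshot("History screen");           // save a screenshot for visual validation in the report
    }
}
```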
And then some tools, like ours, allow us to actually do some things that are pretty hard to do as a human. So one example would be to simulate the GPS location of the device. So I'm writing a test that says, given I'm in Oslo airport, and I tap the book button or whatever,
and then the next step you say, now set the location to London. So you're pretending that you're flying from Oslo to London. And you can do that now within milliseconds. And another thing you can do is actually very low level is basically grab an object inside the application and start calling methods on it.
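Those two more advanced calls could be sketched as another test method alongside the one above. The coordinates are approximate, location simulation support can depend on the UITest version and device settings, and the backdoor method name is made up, because the app itself has to expose it:

```csharp
[Test]
public void BookingSurvivesFlyingFromOsloToLondon()
{
    // Fake the device GPS: Oslo airport first, then London, within milliseconds.
    app.Device.SetLocation(60.19, 11.10);    // approx. Oslo Gardermoen
    app.Tap(c => c.Marked("Book"));          // "Book" is a hypothetical button
    app.Device.SetLocation(51.47, -0.45);    // approx. London Heathrow

    // Backdoor: call a method the app exposes for tests, to jump straight into a known state.
    app.Invoke("ResetBookingStateBackdoor"); // hypothetical backdoor method name
}
```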
So it's a pretty advanced thing, not necessarily something you need to do, but it can be extremely handy to say to speed up the test or set the application into a specific state. All right. So that's a basic overview automated UI testing. So simulate a user using the application. Now, there are a ton of different tools
and frameworks out there that you can use to run these and write these UI tests. So the example you saw before is from Xamarin UI test, which is one of our own products, which is basically a C sharp based API. It's cross platform on iOS and Android.
So you can write tests that work on Android and iOS applications. And basically you can write these things inside NUnit tests and run them from inside Xamarin Studio or Visual Studio, or from the command line if you want. There's also support for SpecFlow, if you're familiar with that. There's a system called Calabash, which is very similar,
except that you are writing your tests now in Ruby, and you're running them using a tool called Cucumber, which is like a behavior driven development tool. So that's kind of matching what you do in your company, then maybe you want to go down that route. Then there's a different, slightly newer option out there, which is called Appium.
And the idea with Appium is, you know what, this UI testing stuff, we already did it. It's Selenium, right? It's Selenium WebDriver, if you're familiar with that. So basically the idea there is that a lot of companies have people who've been writing Selenium web browser scripts. And why don't we see if we can use the same API
to test the mobile application so that we can use those people, and I don't have to kind of train them from scratch. And the advantage there is that there are already libraries out there for Java and Python, JavaScript, whatever language you want, that you can kind of use already. It's already out there.
The downside, I think, is that you're getting a bit of an impedance mismatch, because a browser is not the same as a mobile native application. For instance, there's no URL, and you don't click stuff, right? You swipe or you perform complex gestures. So they add some stuff on top of it. Finally, there are the platform-specific tools: Espresso and UI Automator on Android,
and XCUITest, which is the newest one from Apple. And those are the official tools. They're there. They come with the platform SDKs, and they use the language of the platforms, which in this case is Java and Objective-C or Swift. But they're not cross-platform. So if you write a test for Android, it's not going to be able to run on iOS. So that's the disadvantage there.
But all of these are kind of options you can look into. Now, once you have your automated UI test and you have your application, you can then go and run those. If you go out and buy 10 devices, plug them into your machines, and start running those tests on the devices. So already now you're starting to get some value.
But of course, are you going to go out there and buy thousands or hundreds or 300 different models and install the various OS versions? Well, you may want to. You may also want to look at options there. So this is where the product I work on comes into play. So that's called Xamarin Test Cloud.
And the idea is to solve that problem for you. So we buy the devices. We host the devices. We fully automate them. We don't jailbreak them. These are like real devices as the users would go and buy it in the shop. They're not kind of modded in any way. And you can test any application, whether it's written in Xamarin or Objective-C, Java,
even hybrid apps like Cordova and so on. So we basically provide a test execution infrastructure. So you basically just give us, here's the application, here's the test I want to run, these are the devices I want to run on, and we take care of everything. That's how this works.
So yeah, there's a ton of different devices that you could go and either buy or kind of go to the cloud for hosting. The list just goes on. I just like the scrolling, so I put it in here. Just to get a sense of what this actually looks like.
All right. You can also think, you know what, I'm just going to build this myself. How hard can it be? And I just want to say right now, please don't do it. You may think it's like a month's worth of a project, but it will turn out to take four years. At least it's taken us four years. And you end up having to deal with things like batteries, like this, that start inflating after some time.
You don't want your labs catching fire. You also have to deal with OS upgrades, pop-ups, unstable devices, because these are just consumer hardware. You have to build out a parallel execution infrastructure. You have to figure out how to do reporting and screenshots and videos. So it's just a ton of work.
I've seen some people go out and do this, so I just want to put this out there. All right, so now let's do some demo. I guess you want to see stuff. This is actually a sample that you can go and download yourself. And I think I was just looking at the program right now. I think there's a talk right now that's using this same sample here.
But I guess you can see that online if you're interested after the conference. But basically, it's called My Driving. And it's like an IoT application, an Internet of Things application, that's combining Xamarin to build the application,
and Azure, IoT, and cloud services to do a bunch of analytics on top of that data. And what is that data? Well, the idea with the application is you want to track your driving, basically, and compare how you drive against how other people drive, and maybe just against your own driving to improve.
So basically, what you can do is you buy this IoT device. So it's a small device that you can plug into your car. And then it interfaces with almost any car out there to gather the data that's being shown in your dashboard.
So things like mileage, how fast you're going, what's the engine load like. So this sucks that out of your car, and then sends it out either via Bluetooth or Wi-Fi. It's like a cheap device, and it will plug into most cars. So the idea is you plug this into your car. Then you have your mobile device.
And that IoT device is transmitting data to this MyDriving mobile application. And that then forwards the data into the Azure IoT hub. And then there's a ton of stuff that happens that I don't want to talk about, which is probably the content of that other talk I was referring to. But it's basically like big data analysis, Power BI,
machine learning on top of this data, to basically compare your driving against your previous results and against other people out there. But I'm not going to talk about that bit. I'll talk about the mobile app development story and the testing story for this application.
But it's actually interesting in its own right. And you can go download everything, including the mobile app, the source code to the mobile app, using Xamarin, which is now free for everyone, and the backend infrastructure code for Azure, and all the tests. So everything is out there if you want to look at it afterwards.
All right, so whoops. I was going to actually jump out here. So let's jump here into Xamarin Studio. And I'm just going to launch the application here. So it's going to compile right now for iOS and launch it on the simulator
so you can see what the application looks like. So let's see. Here's the iOS sim. So the idea here is you can log in with Facebook or whatever. And this test build here,
we can just skip the auth just to get into demo mode. Now, because this is the simulator, I'm actually in the middle of the sea somewhere, so I can set the location here to, say, the Apple headquarters here. And I can actually set the simulator to do a freeway drive.
So now it's actually moving. And then we can hit this record button. And it's now saying that there is no connection to one of these devices. But they've built a simulator so we can pretend that there is a device like this in software. And you see now we are moving around,
and it's gathering data here from engine load and duration and distance and so on. So let's say, okay, I'm done driving here. We can then save the trip. And we then get a summary here. We drove 0.25 kilometers.
It took 13 seconds, and we were driving 50 kilometers per hour. Then you can... Maybe we stop this driving thing here. Go back to Apple. Then we can review our past trips here. For instance, James Montemagno was driving around Seattle, and we can look at the path he took here.
And we can kind of drag this here to simulate his driving around Seattle and look at... At this point, he was driving 38 kilometers per hour. And then you can go into, like, a profile where you get your score. Like, this is Scott Guthrie. He is apparently an amazing driver in this sample here.
And you can look at some metrics here. And there are settings. So you can switch, for instance, if you don't like this stupid imperial system with gallons, you can switch into the reasonable metric system instead. And then these... Here, these skills and metrics here update and so on.
So that's the basic application. So what if we were to write an automated UI test for this? How would we go about doing that? We have a fairly new product that we call the Xamarin Test Recorder. And that's, like, an easy way to get started,
because you don't... Like, how do I even get started with this stuff? So it's like a standalone application here where you can say... Let's say I want to run on Android here, and I want to run my driving application. Now it's actually going to install the application, prepare it for testing, and launch it.
So it should happen here. It just takes a bit to install. And then it connects to the application to start the test recording stuff. All right, so now we're connected. So now I can hit the record button here and then go to the simulator or device I plug in. And then I say, like, I want to skip auth.
And you see it's registering a tap gesture here. And then I go into this menu and say I want to go to settings here. And I want to scroll down. And I want to check that there is a leave feedback text down here. So I can click here on this.
I don't know if you can see it here. Like, there's small crosshairs. And that's, like, assertion mode, which means now I want to make an assertion. So I click this. And then I click the element I want to assert is present here. And let's say that's the test I wanted to run. That's a quick smoke test. Took me a few seconds to generate.
So I hit stop here. And now let's just verify that the test actually works, that it does what we want it to do. So we can hit run here. So it's going to restart the application, going to run from scratch, and then replay the things we did. Like, tap skip auth. Tap the menu. Hit settings. Scroll down. Check that Leave Feedback is present.
So that's actually quite nice, right? It's a really fast way to generate a script. And now if you wanted to, you could actually go right now, just with this work, and run this on, say, 10 devices, different Android versions with your application, just right from here into test cloud. But what I actually want to do is I want to look at the output here
because it knows kind of under the hood what it needs to do in terms of C# code to actually make this work. So what I can do here is, say, export and copy. And then I can go into my IDE here, and I've created just a new C# file with some basic plumbing, which is, let's use Xamarin.UITest.
Let's use the NUnit framework. Let's have this class. And this here, this attribute means that I want to run this test on the Android platform. And then we have this line here, which is some boilerplate code just to launch the application. So that's the basic plumbing that you need to get started.
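As a rough sketch, with placeholder IDs and paths rather than the real recorder output, that plumbing plus a recorded test ends up looking something like this:

```csharp
using NUnit.Framework;
using Xamarin.UITest;

[TestFixture(Platform.Android)]   // the attribute mentioned above: run this fixture against the Android app
public class MyDrivingUITests
{
    IApp app;

    // NUnit passes Platform.Android in via the attribute; a cross-platform suite would also
    // add [TestFixture(Platform.iOS)] and branch the launch code on this parameter.
    public MyDrivingUITests(Platform platform) { }

    [SetUp]
    public void BeforeEachTest()
    {
        // The boilerplate launch line: install and start the app before every test.
        app = ConfigureApp.Android.ApkFile("bin/Release/MyDriving.apk").StartApp();
    }

    [Test]
    public void SettingsScreenShowsLeaveFeedback()
    {
        // Steps pasted from the Test Recorder would go here; these IDs are placeholders.
        app.Tap(c => c.Marked("Skip Auth"));              // skip the login screen
        app.Tap(c => c.Marked("menu_button"));            // open the side menu
        app.Tap(c => c.Text("Settings"));                 // go to the settings screen
        app.ScrollDownTo(c => c.Text("Leave Feedback"));  // scroll the settings list
        Assert.IsNotEmpty(app.Query(c => c.Text("Leave Feedback"))); // the recorded assertion
        app.Screenshot("Settings screen");
    }
}
```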
Now I just paste in the stuff from the test recorder here. You're seeing that it has some taps on the particular IDs it detected, and a screenshot call so you can see screenshots in Test Cloud. And all the stuff you just saw. So let's now compile the UI tests
and run this example test here on the Android platform. So this is running from within the IDE. So it has to reinstall and clear the data
and then launch the application. So we're tapping skip auth, hitting settings, scrolling down. We're good. So quite nice. Now we actually have our first test,
and we can go and run it on any device. If we are able to run a CI system, we can plug it into CI and so on with a few minutes of work. I actually forgot to show you something, which is since this application is written in Xamarin, I can go and change some basic things here,
like the color of the bar at the top of the screen. You'll see what I mean in a second. You see it's blue now. If I want to change it to black, that's just one line of C# code I can change. Sorry, I wanted to do this in the beginning.
So if we look at the sim here, it's launching, and we tap skip auth. And you see now this color is black here.
So I want to commit that change to my Git repo here. Let's forget about the tests, but let's add this commit here, black bar.
Okay, so I changed my application background to black. All right, so just a moment. I want to just check here.
Right, so I forgot one thing here when I was doing this demo, which is we have our test now. Suppose we wanted to run this in test cloud. We can also do that from inside here. So if I right-click here, and I run, sorry, from here, I could do what's going on.
There we go, sorry. I had to click the very top element. Run in test cloud from inside the IDE. Pick out the application APK file here.
Upload and run. And then I don't know if you can see it from down there, but it's kind of submitting. It's compiling the application binary and the tests, getting the DLLs out of that compilation step, and uploading everything to Test Cloud.
And what then happens is I'm going to pick now a team that I want to run this test in in test cloud. And what I have here is now a selection of devices. So which devices do I want to run this test on that I just wrote? I'm just going to sort by availability, which is how many devices do we have of each of these. I'm just going to pick an Android 4 device
and an Android 5 device of various types just for the demo purpose. Then I do done. And that's going to kind of finalize the upload and go to an application test overview screen. You can see I've been running some tests previously here.
So in fact, let's see this test here. That's the new test that you saw before that I ran before going up on stage here. So new test. It taps the login button. So go into this screen here.
Taps the sidebar menu. Goes onto the settings screen. Scrolls down. And asserts that the feedback button is there. So that's now run on real devices in the cloud. And you can see that the test passed. And you can go and view the screenshots here. You can go and download the test log and the device log
so you have information if there's a crash. So I also did that running on, let's see, eight devices here. And there was an additional feature I wanted to show you. For instance, if you remember the slider thing we had.
So if we scroll up here, maybe just pick a specific device. Remember that we could go to this screen here, like past trips. And then we pick a specific trip here.
And then we actually have a video, if we scroll to here, a video of what happens when you tap the slider at various points. So if you tap the slider at a specific point, the car will drive around on the map. And there's a video recording available of that so that if there's an animation you want to see,
you can capture that in a video instead of a screenshot where it can be hard to actually see. We also tracked the performance data here, like how much memory are we consuming. In this case, we're not actually consuming any CPU because we're sampling at a fairly low rate. So at least you know that the application is doing pretty well,
then it's not like spiking CPU usage. All right, I think that's what I wanted to show. Oh, yeah, except for this, of course, also runs on iOS. So I've run this on six different iOS devices spanning four different iOS versions,
like iOS 9.3, 9.2, 9.1, and 9.0, which is what this application supports. And those tests were all executed in parallel. Kind of the full suite testing the application there. Good.
So that was the basic demo here now. As I said, let's kind of just get rid of this thing here. This is a real application with a real test suite that runs on multiple devices.
So as you maybe realized before, is that the application is actually different on iOS and Android. Oops, here it is. So for instance, the way you navigate to settings here, like on Android, you have to tap this hamburger thing here
to get into the settings screen, and then go here. Whereas on iOS, the navigation pattern is slightly different. Let's kind of get this launched. Here you have to go into profile, and then tap this thing, and then you get into settings. That's because they're trying to build an app
that feels iOS-like on iOS, and something that feels Android-like on Android. So the applications are actually slightly different, which is a challenge for writing cross-platform tests. So we can actually help the developer by putting in the same IDs on iOS and Android for the same types of buttons, because then you can use one line of code to tap the same button across iOS and Android.
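For example, with a made-up automation ID, a single query like this would drive both apps:

```csharp
// One line that works on both platforms, provided both apps
// mark the settings button with the same (hypothetical) automation ID.
app.Tap(c => c.Marked("GoToSettingsButton"));
```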
But in practice, most applications aren't actually built like this. So there, I recommend that you use something called the page object pattern, which is actually a very similar idea. It is to abstract the logic of the test
into classes that we call page objects. For instance, you may have an object that represents the login page. Let's see if we can find that. So here, login page. On the login page, I might login via Facebook, I might skip authentication. Those become methods.
And then you push the application, the iOS versus Android-specific logic, into those methods, which enables you to reuse the high-level logic of the test script. So that's how you structure your test if you want cross-platform testing. For instance, here on the login page, I can skip auth. In this case, it's actually cross-platform,
because you can just do at the tap the text skip auth, which is the same on iOS and Android. But if you see, if we wanted to login via Facebook, we actually have this kind of interesting construct here, which is on Android, the selector, the ID for the login via Facebook button is button Facebook,
but on iOS, it's login with Facebook. So this means we actually have a bit of branching, but at least that branching is pushed into this abstraction that we call the login page. So if you're going to build these applications trying to be mindful of the tester, so put in the same IDs so you minimize the branching.
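Here is a sketch of what such a page object can look like, loosely based on the selectors just mentioned (the real MyDriving test suite may differ in the details):

```csharp
using Xamarin.UITest;

// Page object: hides the iOS/Android differences behind one API that the test scripts call.
public class LoginPage
{
    readonly IApp app;
    readonly Platform platform;

    public LoginPage(IApp app, Platform platform)
    {
        this.app = app;
        this.platform = platform;
    }

    // Same text on both platforms, so no branching needed.
    public void SkipAuth() => app.Tap(c => c.Text("Skip Auth"));

    // The Facebook button is identified differently on each platform,
    // so the branching lives here rather than in every test.
    public void LoginViaFacebook()
    {
        if (platform == Platform.Android)
            app.Tap(c => c.Id("button_facebook"));
        else
            app.Tap(c => c.Marked("Login with Facebook"));
    }
}
```

A test script then just calls new LoginPage(app, platform).SkipAuth() and reads the same on both platforms.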
And if you're a tester, make sure you abstract away the differences using this page object pattern. That's the recommended practice there. All right. Let me see what else I wanted to cover. Oh, yeah, just to show you kind of some fun stuff, this is actually, this test, as I said,
was kind of a real test for this real application. So we can actually run it on both platforms. So here's the settings test, which changes the metric from, sorry, the unit from the U.S. imperial system to the metric system. So let's run that on iOS here.
And as that is running, because the IDE can't do two test runs at the same time, we can actually kick off the Android test from the command line. So we can raise iOS against Android.
Let's kick that off here. So we're running the same test now on both platforms. So which platform will win? Who's the fastest? Anyone? No? No guesses? You can already see it, right?
It used to be that the iOS simulator was a lot faster, but with some recent updates, kind of keeping it stable and launching it actually takes some time. So now iOS is actually slower. But at least you can see what I'm talking about here in practice, that we're actually running the same code,
pushed the branching down into just page objects, but testing on iOS and Android. And those were the tests that also ran in the cloud on real devices a minute ago. All right. I think that's it for demo now. We've seen this. Yep. And we've seen this. So I wanted to just let this run done.
It doesn't run in the background in the meantime. You see this launch step is also a bit slow. All right. Never mind. Let's kill this. So I wanted to say a few words about kind of going
or which direction you want to go in if you start doing this stuff. So what's next? So for me, the ultimate goal, like the end state that you want to get to, is basically
to implement a continuous delivery process in your mobile applications. I don't know if you're familiar with that, but here's the canonical definition of what that is. What is continuous delivery? Well, it's a software discipline where you build your application in such a way that it can be deployed to production at any time. So your quality is high enough
and your procedures for deployment are high enough or good enough that at any given time, if there is a business requirement to get an update out there, you can just do it. Now, this is not the same to say that you're always automatically pushing out updates without human interaction. That's usually called continuous deployment.
That means it's happening automatically. It's a system. It's pushing out to the production system all the time. The difference between continuous deployment, which is that, and continuous delivery is that with delivery, you could do it if you wanted to, but it's still a human that goes in and pushes the button to deliver the application to the App Store or to deploy the software.
So one of my points in some of the talks I gave is that you can actually go and do this. You might think that there's no way we could ever do this for mobile applications, but I hope I can show you in a few minutes that you actually can do this fairly easily, and it's not as hard as you may think it is
to implement a continuous delivery process for mobile apps. And if you do this, you're going to get the same types of benefits as you do with other systems. So what are those benefits? Basically, the three major ones are you're going to reduce your lead time. And so what is that?
So the lead time is the time from when someone fixes a bug, or from when someone gets an idea, until the implementation of that bug fix or that idea is in the hands of the users. That's going to be reduced because you're continuously delivering, you're continuously pushing your application out or deploying your software to production.
So that bug fix doesn't have to wait to get batched up with a production deployment that happens three months from now. So you're faster to get things out. This also means, in turn, you're getting faster feedback, right? Because if you get the thing out there faster, well, then the user has a chance to respond earlier, right? But there's also, from a technical perspective,
there's faster feedback because continuous delivery is associated with a lot of automation, like automated builds, automated tests. So for instance, if a developer makes a change that breaks the build, well, then immediately the system notifies that developer that there's a problem. So that leads to faster fixing bugs, which is way less costly. And finally, the release itself is of higher quality
because there's just less stuff in there, right? It's a smaller batch because you're continuously delivering. So each thing you deliver is smaller, which means it's way easier to reason about that delivery. What's in there? How do we test it? It's a lot easier if there's only, say, one thing in there.
And of course, the release process itself is reliable because you've done it so many times that you may even have automated the process, that it's just easy to do and it always works because you're continuously doing it. And there's also this kind of thing, which I'll zoom into on the next graph,
which is just how does the team feel as they're going through delivery. So it's a really nice graph. It's actually completely made up. I took it from the Atlassian blog in this blog post here. And it is kind of a joke, but it's also true. But it's supposed to graph how does the team feel as they're going through a manual,
a non-continuous delivery process. So you're kind of developing. You're looking forward to your target shipping date. And things are going kind of okay, maybe a bit behind schedule. So you're feeling okay but a bit low. And then as time approaches the target ship date, which is when you're supposed to ship, and of course you're not shipping,
then you kind of feel a sense of urgency. Okay, we've got to get this thing done. We've got to get it out there. What's preventing us from moving? And as time passes you feel worse and worse and worse. And then finally you hit your actual ship date. And that's like the very bottom. But at least now you did it.
And then hopefully you feel some relief that we shipped. But actually there's some interesting thing here, which is for mobile there's an additional low here, which I kind of call the abyss of Apple. I don't know if you're familiar with the abyss of Apple, but it's basically the time from when you submit your application to the App Store, and then it has to go through this thing called the App Store review process.
And you're basically just waiting for the gods at Apple to approve the application that you built through a manual review process. So there's like an extended low here where you're feeling really bad. And that can take anywhere from a day to two weeks. But finally let's say that the application is released.
You feel a slope of relief. It's out there in the hands of the users, but what are they saying? And hopefully they're saying good things and you feel better, and you reach the peak of jubilation where, OK, we did it, we got the release out, until you realize I have to go through the cycle again for the next release. So this is the emotional state of the team as you're doing this manual delivery.
Now with continuous delivery it's way more stable. I can tell you that for sure. We've implemented a continuous delivery process for the test cloud product itself, and it made such a huge difference, both in terms of quality and in terms of how the team feels. So big, big kind of push for me to get that
in the hands of mobile app developers. Yeah, so I'd like to say that releasing is like breathing. It's automatic. You don't even think about it. It just happens continuously all the time. All right. The problem is that doing this for mobile
is actually not that easy. So just continuous integration. As I said, only 25% of teams out there are doing it. And you have to set up special hardware, like you need Mac machines to actually build iOS applications. Probably your existing CI infrastructure
doesn't really have Macs in it, because why would it? And there's no book you can go out and read to figure out how do I set up a CI pipeline for mobile. We already talked about testing in the realistic environment, in the actual devices. That's hard. You have to go out and buy them, or you have to plug in into one of these products.
And there's the whole App Store review process, which means that you have an unknown delay between you finish until it gets out there in the hands of the users. Just really frustrating. And there's a ton of complexity around things like code signing, push certificates, provisioning profiles and certificates. So there's just a lot of stuff to learn.
And this is what, I'm not going to go through this in detail, but this is what a CI/continuous delivery pipeline might look like for iOS app development. So you have a source control change that triggers a build, runs unit tests, integration tests, runs UI tests both on the iOS simulator and on real devices.
Probably needs to go through some sort of manual testing process, because there are some things we just can't automate. But at least you should make that manual testing step as easy as possible by automatically distributing the application to your testers. Then there's a ton of things like re-signing, generating screenshots for the App Store page,
uploading to Apple, pushing the Submit button, waiting for Apple to review. Then Apple says, OK, your application is good. You have to now go and publish it. And then there's a final step that you don't even control, which is, like with web systems, you push the update out to the user. With mobile apps, the user has to go and actually update. So you can't even control the fact that they're updating.
So you may feel like this is totally daunting. How am I ever going to build all this infrastructure when we have nothing today? And what I want to argue is that you can actually get started with a minimal version of this that's going to deliver a lot of value within hours or minutes or maybe even a day
if you want to get your team up and running. And basically what you can do, and this example is using Visual Studio Team Services. There are a bunch of other kind of cloud CI or continuous delivery vendors out there. But basically what you get here is reacting to source control change, triggering a build, running the UI tests I showed you before
in Xamarin Test Cloud, so running on real devices, getting the results back, and easy distribution of the build binary to your manual testers within a few minutes of setup. So let's kind of quickly review this. I want to leave a bit of time for questions also.
So basically, inside of Visual Studio Team Services, I want to focus just on the build part of that, which is basically supporting building mobile applications now.
So you have here, I'm going to focus on iOS here. So I set up what's called a build definition for iOS, kind of specifically for these conferences. And let's look at what that looks like. So to set up a build, you need to connect to a repository.
So I've connected this thing to GitHub, Azure-Samples, MyDriving, and I connected it to a specific branch, the evolve branch here, because I used this for the Evolve talk. And then you set up a trigger, which is basically, I want to check the continuous integration checkbox so that if there's a change to this branch,
it triggers my pipeline. You do need to have a Mac, because ultimately, iOS needs to be built on a Mac. I've set up kind of just this machine here and configured an agent on this machine to do the build. But you can also go to things like MacinCloud to get a cloud-hosted Mac to build on.
Once you have that, setting up these build steps is kind of a drag and drop thing. You can add things here, like building Xamarin applications, Xamarin iOS, building Xcode applications, and so on. So here we have some basic boilerplate stuff.
And then the pipeline is restore Xamarin components and NuGet packages, build Xamarin iOS, package the app as an IPA file, which is the format you need to deploy to a device. And then there's a step kind of already inside VSTS to run the tests on Xamarin Test Cloud right from in here.
So basically, what you do there is you set up an API key, a user, and a device selection key which chooses which devices you want to run on with every commit. Now we've done this. And you see now that there was actually a build that completed eight minutes ago, which passed.
And you remember the time when I had to jump back and do this git commit thing? That's because I wanted to show this eight minutes later. So I was changing the application background to black. And as I was talking, this was actually building in the cloud,
publishing and running the test on Xamarin Test Cloud, and then getting results back here. And we see we got 12 passes here. And we can actually click directly into the Test Cloud link here to see what actually happened. I just ran a small subset of the test,
so it would be really fast. But you see here that the change I made was actually reflected. So this has the black background, whereas we had the blue background before. So all of that happened automatically while I was talking or drinking coffee. And you can set it up to happen with every commit.
And I know there's a bit of setup here. But once you're inside, it doesn't take more than, let's say, an hour to configure this thing. And you have a CI pipeline. And I know it's not the biggest fully perfect pipeline, but it's giving you a lot of value for a very small investment. You can also connect this to HockeyApp, if you're familiar with that, which
lets you do crash reporting. So if there's a crash out there in the wild, you get that registered inside of this web system here. You can see there were some crashes back in February on this application here. And you get some metrics on which OS device it happened on
and the iOS stack trace here. This one hasn't been symbolicated for whatever reason. And the other big feature of HockeyApp is that you can distribute builds of your application to the manual testers, and that happens automatically. So all of this, and I'm speaking fast because I'm trying to leave time for questions, but my point is that you're getting all of this with a few clicks and a bit of configuration
using these cloud products. All right. So the final point is basically this quote, which I really like. You don't have to read the full thing, just the last one, which is basically whenever you have an opportunity to submit a build to Apple, you should do it.
So you should be continuously deploying your iOS application, even though there is this Apple review process. And right now, Apple is the bottleneck. It takes between one week and two weeks to actually get a build reviewed. So your maximum release frequency for iOS applications
is, say, once per week or every two weeks, because this thing is limiting you. But something is changing now, which is really interesting. If you chart the time it takes for Apple to complete their app review process, it's been dropping for about a year now.
So you see now that the very, very right-most corner of this, we're actually down to about a day, which means you can publish new updates every day to your iOS builds also, which is a huge difference from the two weeks cycle. Does anyone know what the spike is?
Huh? Higher. Speak up. Holidays, yeah. So that's not very nice. They will actually go on holiday and just everything just stops. I don't really like that. They should fix that also. But I guess they need their holidays too. So my point is automated UI testing, it takes minutes to get started.
It takes longer to get good. CI and delivery, you can set it up probably in a day within your team. And you have a CI pipeline. And you're basically implementing continuous delivery for your mobile apps. You can go and do it. That's the point of my talk. A few links here, and I'm happy to take any questions. But you have to speak up, because it's really hard to hear.
So, for Xamarin Test Cloud? Yeah, so we have a free tier now. So you can go try this for free now. And then the entry point is $99 per month for the low tier. And then it scales up depending on what we call device concurrency, which is the number of parallel devices
you're using at the same time. But we changed the pricing a lot to make it way more accessible than it was a year ago. And VSTS is pretty cheap also. More questions? Yeah. You can use it to test any mobile application.
It's not restricted to Xamarin. Hybrid apps, native apps. More questions? All right. I'm also happy if you want to know more details, or if you don't like to speak in public, you can come talk to me after the talk.
But thanks very much for listening, and hope you have a great conference.