
Continuous Delivery of Mobile Apps


Formal Metadata

Title
Continuous Delivery of Mobile Apps
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.

Content Metadata

Abstract
Quality and fast feedback on mobile is a challenge! Developing mobile apps requires dealing with multiple platforms, OS versions, form-factors and resolutions, varying hardware capabilities and network conditions. At the same time, users have extremely high expectations for the mobile experience and will promptly punish with bad App Store reviews when disappointed. User expectations are set by fast-moving consumer apps such as Facebook, YouTube and Google Maps, which deliver fast, responsive, quality apps with frequent release cycles. At Xamarin, we believe that the way to higher quality and faster mobile release cycles is continuous delivery. In this talk, we show how to set up a continuous delivery pipeline for a small mobile app. We show you how fast (and fun) it is to write automated tests and to automatically run them in various deployments with each commit. We discuss some of the challenges that mobile developers face in establishing the "walking skeleton" deployment pipeline for mobile apps. The example pipeline is complemented with a few small but real-life case studies of companies who have successfully implemented continuous delivery for mobile.
Transcript: English (auto-generated)
All right, welcome. I'm going to talk about continuous delivery of mobile apps. But before I do that, I want to share a story with you. And in this story, I am both the hero and the villain
simultaneously. Right now, I work as an engineering manager at Xamarin. I'm managing the Test Cloud engineering team. And I'll tell you more about that later, not so much the management as the product. But this story happened about six years ago. And back then, I was working as a mobile developer
for a consulting company. And we were helping one of the larger Danish banks build out their mobile apps. I was working on the iOS team. And this was their first app. And we were probably one year into the project. And we had to go through the delivery process.
So we were to submit to App Store and get it approved and so on. So we were going through that kind of anxious period where you submit, and then you wait, and you wait, and you wait. And then maybe, if you're lucky, seven days later, Apple pushes the Approve button, and you can go publish.
And so that happened. And we got the app accepted, fortunately. And we were all happy. And in fact, the timing was really, really good. Because the same week as that happened, we had to go on a company skiing trip to the Swiss mountains to go skiing. It was like an annual event that happened.
And so we could go on that trip knowing that we'd succeeded in our plan. So that was great. And I think we went on, I think it was Thursday. And we would always go by bus. So we'd be driving all night and had some fun on the bus. And that was great. But when we arrived in Switzerland at the hotel,
at the, it wasn't the top of the mountain like this picture, but it was fairly high up in the mountains, I got a call from the team. And they said, you know what? We found a problem in the app. And don't worry. We fixed it. So that's great. But how do we release it?
And as they were saying that, I'm realizing I'm now at the top of a Swiss mountain. And I'm the only person on the team that has the knowledge and the tooling and the provisioning profiles and certificates to do the release. And I'm thinking, shit. Fortunately, very fortunately for everyone, including me,
that hotel had a Wi-Fi. There was exactly one place in the hotel where there was a Wi-Fi. A very, very poor connection, but enough. The good news for me was that the Wi-Fi connection was in the hotel bar, so I could sit there. But the upload, I believe, took something
like three hours and would fail halfway through a couple of times before we actually got there. So fortunately, that was six years ago. So you might call this cabin-based delivery. And what might characterize cabin-based delivery? So you got one developer on your team. He's got his Mac, and on that, there's everything
the whole team needs to deliver this mobile app. For everyone else, the delivery process is just a black box. It's what that guy does. You typically have a lot of manual testing, and once you get to one of these releases, everything just stops and people start testing.
You probably release one to three times per year. There are big releases, a lot of features in them. There's a lot of anxiety coming up to that release. And the process is kind of unreliable. You don't know how long it's gonna take.
You don't know how many bugs are gonna surface. And you may have the person that can do it, maybe he's on a skiing trip or he's sick. If you do actually find a bug, there's a slow reaction time. So how fast can we get the bug fix out? Or how fast can we react to user feedback?
And finally, there's just a lot of repetitive, kind of boring work that you need to do associated with this. One thing being regression testing, but the whole delivery process of actually uploading to App Store, going through iTunes and uploading screenshots and so on, is also quite a boring task. So I call that cabin-based delivery.
It's a made-up word, I just made it up because I was in that cabin. This is a slide I didn't intend to include, but literally one hour ago, I tried to check into my BA flight and I tried to change the seat.
How do I change my seat, where am I even sitting? That's what I got. So I couldn't change my seat, so I had to include this. And my point with this was that I expect a really, really high quality on mobile. And when I'm disappointed, I tend to get really angry. And I might even go to the App Store and tell BA that they should fix their stupid app.
So there are high expectations. One other point that I wanna, I think is important, is just how you feel as you're going through this delivery. So this was a nice, kind of totally made-up graph from Atlassian's blog. But I think its point is valid, which is how do you feel as you're developing
and you're approaching the target ship date? You feel, it's okay, we're probably a bit low because we're probably behind. And as we cross that target ship date and we don't ship, we're going down the cliff of urgency. Okay, we have to really hurry up and get this release out and the stress is raising.
And then finally we do it, we publish. And then this graph was actually not for mobile. So for mobile, there's an additional low here where you're waiting for Apple, just waiting for the gods at Apple to either approve or reject your wonderful app. And then hopefully, finally you get this relief
that okay, we're out, but is it actually working? What are the users saying? And you slowly get calm and then eventually you get to this peak of jubilation where you're actually really happy with things are going well until you realize that now okay, we have to go to the next cycle again. And I've been there, so that's why I really like this. I know how stressful it is to push that button
and wait for the gods to approve or deny. So I don't think that's sustainable and it certainly doesn't feel nice. So, continuous delivery. What do we mean? This is kind of the definition, from Martin Fowler: a software discipline where you build software
in such a way that it can be released at any given time. It may or may not actually be released, that's up to you. That's usually a distinction between continuous deployment where you are continuously pushing out automatically and delivery where you could in principle do it. Like the software is ready, the quality is high enough.
So that's what we mean. And the whole kind of point of this talk is I wanna convince you if you're not already doing it that you can do continuous delivery for mobile apps. And in fact, if you do, then you're gonna get the same types of benefits as we do with other systems.
Not the exact same benefits, but the same kinds of benefits. So you can leave now if you believe me and if you know how to do it. If not, let's stay and look more. So what are those benefits I'm talking about? So this is pretty fairly established. This has been going on for some years now
in the web space, by companies like Google and Facebook, Microsoft, and even a surprising company, which I'll tell you about in a moment. So what are those benefits? The obvious one being reduced lead time. So from when you have an idea, or from when you fix a bug, until that bug fix is in the hands of the users. What's that lead time? Well, that's certainly reduced if you're not doing yearly releases. Faster feedback: continuous delivery is associated with a lot of automation and a lot of automated feedback as part of that automation, testing and builds
and notifications and dashboards. So that's fast feedback during development. There is also faster feedback from customers because you are delivering faster. You will get the feedback from customers and that can inform your process and your product.
The reliability of releases is much better because you're reducing the batch size. So the actual amount of work that's in a release is a lot less. So it's much easier to make it reliable and to test it. And the procedure is typically fully or partially automated.
So, and you've done it a number of times so you can trust it, you know that it works. And finally, there's this point about removing dependencies on individuals that might be in skiing cabins when you're about to do that release. The final point was this about reducing team stress. So releasing should just be like breathing.
It should be an everyday activity. And once you get there, you don't even think about it that much anymore. And I think there's also a point about team satisfaction. So if you're doing manual repetitive work that in principle a machine could be doing, that's not particularly satisfying.
It's not a good use of your intellect. So I think you will increase the team satisfaction and just how good the team feels by doing this. All right, so what's this? Does anyone know? Yeah, it's a Tesla. So Tesla had, they have software in the system
that can kind of control the, I believe it's called the suspension. So how high the car is in the air relative to the road. And that's usually controllable. And they had realized that some of their users
were driving really fast on the highways with the suspension too low. So the car would actually touch the ground and eventually catch fire. So this is a great story of the power of continuous delivery. So what they did was they pushed an over-the-air update out to limit the amount of lowering you could do
in the car. So Tesla does continuous delivery, changing how cars work. I think that's pretty fascinating. Talk about a high risk. I'm thinking about they do SpaceX. I mean Elon Musk does SpaceX. I wonder if they do continuous deployment to spaceships.
Anyway, so if they can do it with a high risk thing like this, we can certainly also do it for mobile apps. That's the point of this. And I think it's cool. You can click on that link. So let me just check. Can you raise your hands? Everyone, please? Yep. Now if you don't,
if you're not involved in some kind of mobile project, either engineering or managing it, it's okay, you can take your hand down. But still listen. All right, so that's surprisingly many going to a mobile talk and not being involved in mobile. So maybe you're gonna get, you should keep your hands up though. So for those of you that don't have your hands up,
are you gonna go into a mobile project fairly soon? Or are you just looking? All right. Anyway, so keep your hand up. But take it down if the following statement doesn't apply to you. We have automated our builds. A few people, all right. They're admitting they haven't.
We run unit tests and integration tests. Okay, a few more people. Three people left. We have some kind of UI test or acceptance test running either each commit or daily on the mobile app. Two people, go.
Keep it up, keep your hands up. We develop tests as part of the development cycle. That's just how you develop a feature. It isn't done until you have a test. We release regularly, flexibly, frequently with confidence. Yes, two people.
And we rarely find bugs. Oh, always lose people at that one. Yeah, that was designed to throw people off. So anyway, the point of this exercise is if what I'm saying is true, that there are all these benefits of doing continuous delivery, why is it only two people in this room
that are close to being there? That's interesting. And I think it's because it's challenging for a number of reasons for mobile. I think CI is just not mainstream on mobile yet. We had to learn how to build mobile apps. So if we had to worry also about the CI infrastructure
at the same time, that's just a lot of stuff to think about. So I think that's one part of it. I think that now, and maybe for some time now, robust tools and standardized ways of doing this have appeared. So I think times are changing and have been for some time.
Another reason is that you just need special hardware. You need Mac servers. And maybe your existing CI infrastructure doesn't run on Mac. It would be surprising if it did. Another point is that testing in a realistic production-like environment is just really hard for a number of reasons.
One being devices, right? So if you're gonna do continuous delivery, you wanna make sure that the app will work in the real environment in which it'll be deployed, which is in real mobile devices and real human hands. And those devices, there are thousands of them, different devices. They run different versions.
They have different hardware. They're running on different network connectivity. They may run out of power or react differently when they're in low-power mode. They react differently depending on location. They can be interrupted due to calls or text services. So there's just a lot of complexity in being confident that your app will function under these conditions.
And then there's the whole App Store deployment process of being gated by Apple, which limits you to a maximum rate of one release per week. In practice, that's not predictable, though. It can vary. It can be three days.
It can be two weeks. And they even take breaks. I didn't include it in the slide, but there's this picture online where it says, sorry, we've gone on Christmas holidays and we'll return in the new year. And so your app is waiting for these guys to have their Christmas, which is totally fair for them,
but it's not so nice when you're a business. There's also another kind of third party that you can't control, which is the user. So who decides which version is being run? The user does. As opposed to the web where you can push out the update. You decide who runs what.
Well, with the App Store, the user decides when to update. And there's just this whole complicated submission procedure which you have to understand first before you can even contemplate automating it. For iOS particularly, there's a lot of complexity around code signing, provisioning profiles, distribution versus development profiles,
managing identities and certificates for push notifications. All of this stuff is stuff we need to understand and master before we can talk about any sort of automation. So yeah, definitely challenges. And these, by the way, are just the technical challenges. I haven't even talked about the organizational challenges involved.
So your organization may be set up in a way such that continuous delivery is just totally out of the question, because there is a slow change management process, basically. But I wanna say, I've seen a couple of places now, I've been in a place where for some reason,
because mobile is new, the mobile team gets to break all the rules. So they get put in a special room and the normal processes don't apply to them. And they get special hardware, they get their Macs and they get their iPhones and everyone else in the organization kinda envies them.
So they can kinda go off on their own and do this thing and not follow the rules. But I think that may be an opportunity for us. Maybe if you're in one of these organizations, your mobile project can be where you start this organizational change. All right, let me just check time here.
So what's in this talk for you guys? As I said, the main message is you can do it. It will take a bit of time, but less than you think. And it will provide you with the benefits I was talking about before. So again, if you wanna leave with that statement, that's fine. If not, the rest of the talk has these things here.
So what I wanna show you is an example of what such a continuous delivery pipeline might look like. So you can kinda see what am I talking about for yourself as an example, before you go out and buy, so to speak, go out and build it yourself. Then there's a lot of tools. You need a lot of tools to do this,
and I'll give you some pointers and some links to other talks, and that'll be useful. And then finally, I've been talking, I don't do app publishing anymore, but I've been talking to a number of companies that do, that have been doing this continuous delivery process for mobile, and they've learned some lessons. So there are some tips that,
if you're gonna go down this path, you wanna be aware of those before you go. All right, so for this, I'm gonna use a sample app. It's a fairly simple app. It's called Employee Directory. So basically, you can, in your company, you want to have a directory of all the employees.
You can get their contact info and so on. So this is actually a native app that's built using Xamarin technology. It works for iOS and Android, and in fact, it's interesting in its own right, because it's doing 90% code sharing across platforms, iOS and Android, and potentially Windows also. And it's nice for this because it's a small app,
so we can quickly walk through it, but it's still non-trivial. I wanna point out that everything I say is not bound to any particular technology, so it doesn't matter if you're using Xcode or an Objective-C or using Java and Android's toolchain.
All of these things apply. So it's just an example that I'm more familiar with. So let's look at the example real quick. Let's log out here. So this is the app. It's very basic. It's just thrown together pretty quickly, and it's gonna be revised
and actually open-sourced within a few months. So I can log in here. The intention with the app is that you should be able to plug in any backend store. So right now, it's just running with a mock backend store, which accepts any email that's logged in, except bob at Xamarin, I think it is.
So you log in, and then you get your list of employees. You're gonna scroll through that, click into individual employees. This should fetch a picture if the network is working. Yep, and you can kind of call people. You can search, which is actually broken
in this app sometimes, so that may cause problems for me later. And you can look at yourself, and you can kind of log out. So that's the basic application. And I wanna just, for kicks, I wanna make a small change to it, because that will be useful later.
And what I wanna do is, I wanna change this background color here. So don't worry about the details of this change. Basically, I'm picking kind of a darkish color here, and using that as a primary blue, which is what's used here.
So it's using Xamarin.Forms. There's a talk on Xamarin.Forms, by the way, if you wanna know about the actually building of the app at 1.30 today, and I think it's room seven by Mike. All right, so let's try and run this. Oops, let's just run on the simulator here.
Oops, that's big.
You see, I changed the color now to gray. So let me commit that, because it's an awesome change. And oops, and push it up to GitHub.
All right, so that's the app, real quick. Now let's talk about CI pipeline for mobile.
So there are sometimes mentions of the concept of value stream mapping, and all that means is that you draw a diagram of how stuff gets delivered to your customer, which is from someone gets an idea until it's in the hands of the customer,
draw out all the steps in that process. They call that a value stream, because that's how value streams onto the customer. And they call it a map, because you're drawing a map. So what does that look like for an iOS app? Well, obviously there's some kind of way you get an idea. I'm not gonna try and draw that out. Depends on your organization, and depends on the processes you have to get those approved
and so on. But if we look at the technical side, there's some kind of CI system that is reacting to source control change, building for the, in this case it's for iOS, there's a similar one for an Android. So building for the iOS simulator
and possibly for an iOS device. You run some unit tests and some integration tests. Is that it? Are we done? Well, you already heard me talking about UI testing. So everything I said before is actually kind of generic. It's not really specific for mobile,
except for the building of device versus simulator. The same applies here. So you may wanna run some UI or acceptance tests, end to end tests. And for mobile, you probably wanna run them both on simulators and on devices. You're probably not gonna deliver anything without some kind of manual testing,
because some things are just too hard to test automatically, or maybe you don't have the coverage, or maybe machines can't do what it is you're trying to test, some kind of exploratory testing or user experience testing. But you wanna make sure that the delivery of the application to be tested
is as smooth as possible for the users. Else, what else? Any ideas? Testing deployment. I'm not sure exactly what you mean, but maybe we can discuss in a second.
But if it's the deployment to App Store, you're absolutely correct. So to actually get it out there to the end user, there's either a re-sign or a build step with the so-called distribution profile, which is needed to get to Apple. And you actually do need to go ahead and change screenshots,
update your screenshots in App Store. And ideally, you actually generate a screenshot per language you support, so you can publish different screenshots to the different App Stores. And you probably have some release notes and some metadata associated with the application, like which category is it in. And then you need to upload it, and you need to submit it for review,
and you need to, after the review is done, you need to publish it. And then hopefully, there's actually an extra step I didn't draw, which is important, which is the user actually goes and pulls the update down, because he doesn't actually get that value until he pulls the update down. But then hopefully you get some feedback,
new ideas, and the cycle continues. But what else? Well, there's actually a ton of other stuff we could do if you were really, really serious about this and had the time to build the infrastructure and so on. We could do static analysis of the code, get fast feedback on the stuff we can detect statically.
There may be enforcing of code guidelines. You may wanna run performance tests automatically. You may wanna have code coverage reports so you can get a sense of your test coverage. You may wanna provision, that may also be what you were referring to before with the answer, which is provision,
say, a service environment, a backend environment with your production-like data. So you can test this application against that environment, which you also want to automate. And there are insights and so on. So yeah, this looks daunting, right? How am I ever gonna get all this stuff running if I'm starting from nothing?
Well, my answer to that is just think incrementally. What's the minimal thing we can build that also provides us value as developers, which takes us a step in the right direction? So what's the smallest pipeline I can build that actually gives value for me?
And I would say that's probably this, build. Build the application automatically and raise an error, raise a notification, flash a light if there's a problem. You might go a bit farther and do something maybe slightly controversial here,
and you can think about this yourself, but if you were to build only one test, what test should it be? Should it be a unit test, integration test, or an end-to-end test, UI test? Well, I would actually argue if you're building just one test, you have to start somewhere,
I would get more value out of seeing the whole application functioning on a device running through the major screens rather than testing one particular, say, class or aspect of the system. And I mean, you can, your mileage may vary, but I see a lot of companies actually doing this.
So when they're starting up, they start by UI testing. Even though I know about the whole testing pyramid and I believe in it, I think that only applies once you have those unit tests and you have those integration tests, and it should be telling you something about the relationship between those types of tests.
Another step you wanna add is easy distribution for manual testing. So basically, the people doing the manual testing, they should be able to get that on their phone without any hassle, without having to drag stuff into iTunes and email developers and so on. So just real quick, sorry,
about automated UI testing for mobile, not everyone's doing it, I wanna talk about it real quick. You can think of it as a test, a script, which simulates what the user does with your application. So what does the user do? Well, he interacts with UI controls. And he uses gestures, things like tapping or scrolling
or entering text or even rotating the device. That's what a user does. So an automated UI test for mobile synthesizes those gestures and pretends to be a user. So in order to do that from a programming language, you need some kind of way of identifying views. So typically, there are kind of small query languages
that will let you say things like, tap anything that has the text "help", or tap any UI component which has the technical ID "history button". So depending on which tool you use, there are kind of different languages, but usually they're quite declarative and high level.
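In Xamarin.UITest, for instance, those two hypothetical taps might look roughly like this, assuming an IApp instance named app as provided by a test fixture (the identifiers are the made-up ones from the sentence above):

```csharp
// Declarative, high-level queries; "help" and "history_button" are
// illustrative identifiers only.
app.Tap(e => e.Text("help"));          // tap anything showing the text "help"
app.Tap(e => e.Id("history_button"));  // tap the view with that technical ID
```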
And they're quite robust to changes in the application. You need to wait for stuff. So waiting for an event to occur, for instance, the appearance of a particular button, because if you try to do that with sleep, you all know what's gonna happen. So another example might be waiting for a spinner
to disappear and failing if that doesn't happen within say 30 seconds. Then it's nice to be able to generate screenshots for reporting purposes. And you need to be able to control some basic app lifecycle stuff like launching or reinstalling or clearing the state of the application. Some tools are interesting in that they allow you
kind of low level APIs that lets you do stuff that users could not do. So actually reflectively calling methods inside application objects from the test script. And why would you want to do that? It's kind of an advanced technique, but I've seen people use it to,
for instance, all their tests start by logging in. But you don't really wanna waste time logging in in every test case. So they have one that thoroughly tests every aspect of login. And then every other test uses one of these techniques to set up the application in a particular state, kind of going behind the UI and then proceeding and thereby gaining a huge speed up in their tests.
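As a minimal sketch of that backdoor idea in Xamarin.UITest: the backdoor name SetLoggedInUser and the employeeList mark below are hypothetical, because the backdoor is a method you expose yourself inside the app under test (on Android, for example, an exported method on the main activity).

```csharp
using System;
using NUnit.Framework;
using Xamarin.UITest;

[TestFixture]
public class EmployeeListTests
{
    IApp app;

    [SetUp]
    public void BeforeEachTest()
    {
        // Launch the app under test; the package is resolved by the IDE here,
        // on a build server you would configure the .apk/.app path explicitly.
        app = ConfigureApp.Android.StartApp();
    }

    [Test]
    public void EmployeeListShowsEmployees()
    {
        // Go behind the UI instead of driving the login screen; one
        // dedicated test still exercises the real login flow thoroughly.
        app.Invoke("SetLoggedInUser", "alice@example.com");

        // Wait for an event rather than sleeping; the test fails if the
        // element hasn't appeared within the timeout.
        app.WaitForElement(e => e.Marked("employeeList"),
            "Employee list did not appear", TimeSpan.FromSeconds(30));
        app.Screenshot("Employee list after backdoor login");
    }
}
```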
So it's kind of an interesting technique. We call this backdoors in our technology. So what tools are out there? I think these are the most important ones, at least in my opinion. So obviously our own is one of the most important ones, Xamarin.UITest. In that, you're writing tests using C#. You're running them with NUnit,
either from inside Xamarin Studio or Visual Studio, or from the command line. And you can do things like SpecFlow if you want. Then there's Calabash, which is a Ruby-based tool supporting Cucumber and behavior-driven development. And you run those either from the CLI or from inside an IDE like RubyMine from JetBrains.
Then there's Appium. That's gotten a lot of traction recently. The idea with Appium is, you know what, we already did UI testing. It's called Selenium. And why don't we try and leverage what we learned there and leverage the APIs from Selenium in testing mobile apps. So the trade-off there is you get a lot of flexibility
because you can pick your language and then you have this API that you may already know. The downside being that there's a bit of an impedance mismatch between mobile and the web space. But it depends on your organization what may be best for you. I mean, maybe the APIs don't exactly fit, like "clicking" a mobile app just seems wrong. And there are similar examples like that. And then there are the official tools, Espresso, UI Automator, XCUITest from Apple and UI Automation from Apple. And those are kind of cool because they're officially supported and they work. And they're usually fairly fast.
The downside for me as a kind of mobile app developer is that they're not really cross platform. And they have, some of them at least, have a very poor development experience. So like just the turnaround of writing a test and running it. But there are some options for you.
So, quickly about our own system, because I'm really proud of it and I like talking about it. I promise to be quick. Xamarin Test Cloud, what it basically gives you is the ability to run these tests on real hardware, real, non-jailbroken devices, just the devices you would go out and buy in a shop.
Fully automated, running every kind of conceivable version of Android and iOS that you might want to test on and any model, any form factor you might want to test on. I'll show you what that looks like in practice. But it's a solution to the problem of having hundreds of devices lying around in the office and being not charged and people stealing them.
And who's got the iPhone 6 with iOS 8 or whatever? A lot of companies have this problem and have like a full-time person just hired for managing that lab of devices. And each team has its own. There are a lot of problems. So go to the cloud for that.
There are also other providers for that, obviously. Now, let's talk demo. And let me check time. Yep, should be good. So remember the application from before? The now gray application?
Let's look at what a Xamarin UI test might look like. Can you read this font? It's fairly decently sized. So as I said, we're looking at an NUnit test case. There is some notation here we can use to run a test that runs cross-platform.
So this means that you're gonna run this test twice, one for each platform. There is some basic boilerplate setup which before each test will launch the application on the target platform, which is injected at the beginning of the test case. Then I have two tests.
One is invalid emails are not allowed to log in. And that test goes, log in as Bob, which is the only user in the world who can't log in in this particular backend. And then ask the application to wait for the element E, which has the following text.
No employee for that email address. Now, Login is just a helper method which does three things here. It enters the given email into the login field. It dismisses the keyboard and taps the login button.
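As a rough reconstruction of what that test and helper might look like in Xamarin.UITest (the query identifiers and the exact rejected email address are assumptions based on the talk):

```csharp
using NUnit.Framework;
using Xamarin.UITest;

// The two [TestFixture] attributes are the cross-platform notation: the
// test runs twice, once per platform, and SetUp is the boilerplate that
// launches the app on the platform injected into the test case.
[TestFixture(Platform.Android)]
[TestFixture(Platform.iOS)]
public class LoginTests
{
    readonly Platform platform;
    IApp app;

    public LoginTests(Platform platform)
    {
        this.platform = platform;
    }

    [SetUp]
    public void BeforeEachTest()
    {
        if (platform == Platform.Android)
            app = ConfigureApp.Android.StartApp();
        else
            app = ConfigureApp.iOS.StartApp();
    }

    [Test]
    public void InvalidEmailsAreNotAllowedToLogIn()
    {
        // "bob@xamarin.com" stands in for the one address the mock
        // backend rejects; the exact address is an assumption.
        Login("bob@xamarin.com");
        app.WaitForElement(e => e.Text("No employee for that email address"));
        app.Screenshot("Invalid email rejected");
    }

    void Login(string email)
    {
        app.EnterText(e => e.TextField().Index(0), email); // first text field
        app.DismissKeyboard();
        app.Tap(e => e.Marked("login")); // text or accessibility label/id
    }
}
```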
Now what's the login field? Fairly straightforward, it's just the first text field. Maybe not the most robust query, you probably wanna use IDs. In this case, the app didn't have any, so I just said whatever, it's the first one. And the login button is the element e that has the mark "login". And mark is kind of an abstract concept in this tool, meaning it's either a text or an accessibility label
or an accessibility identifier. So it's a quick and fairly robust way of identifying things. So let us try and run this. Let's run it on Android. And as you can see here, we're kind of recompiling
and deploying the application and then initiating the test run here. So this is using the Xamarin Android Player, which is a quite fast Android emulator, which gives you a nice development flow for running these UI tests. So I don't know, it's running pretty fast, but it asserted that this first user couldn't log in, and now it's going to visit all the screens,
basically the smoke test. So searching for Kelly, looking at him, looking at me and going to Bob. That's quite nice. We can run that exact same test on iOS. Now, if you want to share the code across platforms
like I'm doing here, it does require that the application is the same and you're building the same application across platforms. Otherwise it doesn't even make sense to share a UI test. So you do need to play some small tricks like using the same identifiers on Android and iOS. But if you are conscious of that,
then you can get a high degree of code reuse across platforms here. So now I'm testing the gray app. That wasn't built on Android, the gray part. And we're asserting that no employee and so on. And I kind of like to race them. I always think that's fun.
So, mono. So here I'm running the tool from the command line, because the IDE doesn't support running two at the same time. So I'm going to catch up with iOS from Android, because the iOS simulator is somewhat slower than the Xamarin Android Player,
particularly kind of the launch step. So I think we can catch up. Go, go, go, go, go. Yep, we're catching up and we passed iOS. And test run here and test run there
and we've got all greens, so we're happy. One cool feature here is that, straight from within the IDE, I can go and upload this test that I just tested locally to real devices in Xamarin Test Cloud, if I have an internet connection.
Let's see here. Yeah, here's the upload command running. It will take just a few seconds to upload.
Let me check my time. Let me just jump into, I did a previous run before, because the upload is taking a few seconds on this. That's an old iOS one. This is the Android one, the last one I ran.
So what I have here is kind of the results of that run. So the last test was run on eight devices. It took a sum of 25 minutes. So if you sum up the device time, sorry, the sum of the time it took on all the devices was 25 minutes. And it computed some fairly high peak memory actually
which is kind of interesting. And then we have kind of the results of running on the selection of devices. Android 4, Samsung Galaxy S3 Mini, and a fairly newer one, LG Nexus 5X running Android 6.
And if you noticed in the test case, I had put app.Screenshot at various places. And that will enable me to generate a screenshot at that point in the test suite, which is what you're seeing here. So what did the application look like on this particular device at this time?
And as I'm walking through these steps over here, you see the application changing. And finally, real quick, there's also kind of a video playback feature. So let's go to the point where we're launching. So some things are just hard to capture by static screenshots. So in that case, we can generate the small video
which captures animations and so on. Okay, so what's happening now is actually the upload from the IDE completed. So that's why it opened this tab here. So it's asking me now, where do I wanna run? Default team. I'm gonna sort.
There's a bunch of devices I can pick between. I'm gonna sort by what we call availability, which is how many devices do we have? And I'll just pick this because we have a lot of these, so I'm guaranteed not to get into a queuing situation. Then I can do stuff like organizing the test, setting the language of the device,
possibly parallelizing, so chopping up all those test cases and running those in parallel. I'm just gonna go ahead with the default. So just real quick, a demo of the capability of that. So this will, let me just let it complete. It's finalizing the upload and now it's validating
the application that everything is okay and we can go ahead and run the test. So that will take probably 10 minutes or so before the results there. So real quick, sorry if you feel like I'm rushing but I wanna make sure I get all the points done. So real quick demo of automated UI testing.
It's not that bad. I had probably 100 lines of code for a cross-platform smoke test, and a lot of that code was actually whitespace, new lines and comments and so on. So what I wanna show you now is a fairly easy,
basic pipeline using a cloud CI provider called Bitrise, which is a fairly new thing. And there are a number of other providers out there. You should go and find the one that you like. The reason I like this one is because within an hour or so, yesterday, I was able to set up a pipeline
that gives me these steps. So that's a very low-cost investment to get going with this stuff. So we got a source control change. We build for the simulator, we run a UI smoke test, the ones you just saw me running. We publish a request to run those tests on real hardware in Test Cloud. And we distribute the application for manual testing, all in kind of one system. So let me show you that, which was also the reason why I did a push to source control at the beginning. So, and I hope I pushed to the right branch.
Otherwise this will make the demo slightly disappointing. All right, so my awesome change actually did get built. So before I do that, let me just show you the pipeline, actually. Here it's called a workflow.
So for the development branch, my workflow consists of everything that's in the features workflow plus these steps. And these steps are basically just deployment to manual testing. So building the IPA and distributing it to testers.
Okay. And so what's in the features pipeline? Well, the first stuff here is kind of specific to their system. It doesn't really matter. Some of this is managing certificates and provisioning profiles. Then there's some Xamarin specific stuff.
And if you're doing an Xcode or a Gradle Android project, then this would be replaced with corresponding steps. But it's basically registering the Xamarin, making sure you're licensed and so on. And then this step is building, oh, component restore and NuGet package restore for Xamarin builds.
Logging in and then this is my own custom step, which they're now integrating in their system. So you can plug in your own steps in this pipeline. I've built one for running a Xamarin UI test in their cloud. We don't have time to go into details of that. That's gonna be merged into their master repo. So you can kind of do drag and drop
or out-of-the-box running. And the final step they already have, which is submitting the test to Test Cloud. So I need to provide: which test do I wanna run, how am I gonna build it, and what's the authorization I'm using towards Test Cloud? So let's go back and look at the builds that ran.
This is my awesome change. So I wanna go and see the build logs. Now this is a fairly large log
because a lot of stuff happened, and just the output of the build step in itself is quite large. But I wanna look at some of it at least. Yeah, so these are the steps that passed, including running of the UI test. And if I expand this log fully, I can see the actual NUnit output of that.
But it passed and it submitted the results to Test Cloud. Which is this thing that's running now. I must have hit a queue there. But I think we'll be able to see the results of that in a moment. So the way this is set up, it will actually push to Test Cloud to test on devices and then continue and succeed the pipeline
instead of waiting. You could also set it up so it waits for the results and only passes if all the tests pass. So this is quite nice. I have no Mac mini. I just have my development machine, and I don't know how much it costs. It's not a lot.
It's probably $100 per month or less to set this up. And I have a CI pipeline. And it took me, let's say two hours. Let's even say a full day. That's still a very small investment given the value you're getting. I did wanna show you also this,
if I can figure it out. Yep, so what you see here is the build I just made with the gray background change.
That was built for device, signed with the correct provisioning profile so that my manual tester, which is me, gets this email and is able to install the application directly without having to email you or download.
I know there are a bunch of other services that do this like TestFlight and HockeyApp and so on. It doesn't really matter. The point is this took me maybe an hour to build and I get all this value. So it's not a huge investment. But you wanna have this to ensure a smooth experience for the manual tester. So this is downloading now. Ooh, I like this.
What is it they call this? Panorama, what's it called? Motion sequence? Parallax, that's it. I'm just talking to get the download to seem faster. All right, I can jump into it when it's done.
So that's it for the demo. I have a few points left for you guys before we do questions. I got really confused just there for a second, because when you enable video, it tells you that the time is 9:41 on this device. And I thought the time was really 9:41
and I was trying to figure out how much time is left. I should be good. So if you're not able to use a cloud provider for CI, I will recommend that you take a serious look at ThoughtWorks Go, their continuous delivery server. I know it's fairly new and maybe not so well adopted yet.
And it's maybe a bit confusing initially, but it's really a very powerful system for doing an in-house continuous deployment system. We use it ourselves for doing continuous delivery of Xamarin Test Cloud itself. And in fact, our delivery lead is right here and he can tell you all about how awesome this system is.
It is a lot more work to set up rather than just a kind of point-and-click thing in the cloud, but it's extremely powerful and we've certainly been happy about it. Maybe you can tell us afterwards if you're not as happy as I am, but implementing continuous delivery inside Test Cloud itself has been a huge, huge improvement for us.
All right, finally, that's it for the kind of the technical talk. This is like the tips. If you're gonna do this, consider doing also these things. Kill switch. So if you're doing, let's say, weekly or even monthly deployments,
there's gonna be a whole lot more versions out there of your app that users could potentially be using. And managing the complexity of data migration between any previous version of the app and the most recent one, and just testing it can be extremely hard.
So consider adding a kill switch, which means there's some limit to how far back the user can go. And if they do go farther back, then you pop up a notification saying, sorry, you are running too old a version, please do update. It's a simple technique, but it's quite useful
and you may not think about it before you go down this path. There is one thing we should be careful of, which is should the app be usable if it's online? Oh, sorry, if it's offline. Because if you limit and you just pop up this dialog, say, sorry, you should update. Yeah, but I can't update because I'm offline.
That may annoy the user quite a lot. On the other hand, if you do allow offline use, he may just switch on airplane mode whenever he's using your app and stay on that version. That actually happened to a game developer I was talking to. So figure out the trade-off there with annoying users.
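As a minimal sketch of such a version check (the endpoint URL, the response format and the offline policy are all assumptions, not anyone's actual API):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class KillSwitch
{
    static readonly HttpClient http = new HttpClient();

    // Compare the installed version against a server-controlled minimum.
    public static async Task<bool> MustUpdateAsync(Version installed)
    {
        try
        {
            // Hypothetical endpoint returning a plain version string, e.g. "2.3.0".
            var body = await http.GetStringAsync("https://example.com/api/min-app-version");
            return installed < Version.Parse(body.Trim());
        }
        catch (HttpRequestException)
        {
            // Offline: here we choose to let the user continue, which is
            // exactly the trade-off above (a user could force airplane
            // mode to stay on an old version).
            return false;
        }
    }
}
```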
The what's-new screen, another kind of obvious thing. When you're doing so many releases, users don't read the release notes, they just don't. So consider putting that in app. And not every time, not weekly, that may be too much, but when you do important things, just a quick screen, really lightweight and fast, tell them what's new, what's the new feature. That's the new awesome thing we did.
I just had this on LinkedIn. They completely changed the app. Managing App Store ratings, another kind of nice technique there. We can't be certain, no matter how good our continuous delivery process is, that we prevent all bugs, we just can't. What we do get with continuous delivery
is the ability to fix them fast. We also can't prevent unhappy users. So the technique here, to make sure you don't get public bad ratings on the App Store, is to provide an in-app feedback dialogue. So instead of saying, do you wanna go to the App Store and rate me, you say, do you wanna rate me? And he says, one star. And you go, do you wanna give me some feedback on that one star? And then maybe you get an interactive dialogue with the support person. Because maybe the user's just confused and doesn't know how to do X. Or maybe there's an actual problem. But the point is, the user gets to complain, but he doesn't do it in public.
And you get the feedback directly. So it's a nice technique. And there are some services and things you can do to implement this. Then of course, what you need to do is once the user's pointing at five stars, then you immediately ask him, do you wanna go to the App Store and do this? So it's kinda cheating, but your competitors are doing it. So you'd better do it too.
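A minimal sketch of that routing, with all names hypothetical: low ratings go to a private, in-app feedback channel, and only happy users are sent on to the public App Store review page.

```csharp
public class RatingPrompt
{
    const int Threshold = 4; // 4-5 stars counts as a happy user

    public void OnUserRated(int stars)
    {
        if (stars >= Threshold)
            OpenAppStoreReviewPage(); // happy: ask for the public rating
        else
            ShowFeedbackForm(stars);  // unhappy: collect the complaint
                                      // privately, ideally handing off to
                                      // an interactive support dialogue
    }

    void OpenAppStoreReviewPage() { /* platform-specific store deep link */ }
    void ShowFeedbackForm(int stars) { /* in-app feedback UI */ }
}
```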
A quick note is that app ratings do actually get reset with new versions. So that's a cost, or an implication, of doing a lot of releases. But the old ratings are also still available in the store. So it's not like you lose everything. There was also another trick I heard of,
which is actually publishing. If you're doing, like, I'm gonna get something out real quick, actually publish it under another name. So your brand doesn't get associated with that app. You can collect the feedback. And then once you're certain, you can go and do it in the name of your organization. Another kind of nice trick to manage ratings.
Then I have to mention this. The last part of the pipeline, we talked about it, all this submitting to iTunes stuff. There's actually a truly amazing set of tools, it's called Fastlane tools, you may have heard of it, that automates that process fully, every aspect of that process. So the screenshot here is showing this application delivered for,
in this case two languages, but think 20 languages, and doing five screenshots for each language of the application. If you were to do that manually, that would take a lot of time. If you're doing weekly releases, that's a lot of a lot of time. So you can automate that, of course.
And this is a nice tool to do it. And it will even do things like framing them in the sense of putting the screenshot inside a nice iPhone-looking frames. So you can get a really beautiful description page in the App Store.
So the Deliver tool will automate the whole process, including submission to App Store, publishing, all that stuff. So you can do it from inside your CI, or your continuous delivery pipeline. But it also does an HTML preview of what the user would see inside the App Store when they go to the description page
with your screenshot, which is a hard thing to test. It's like you push and then you hope, but this will actually let you preview. Which is another nice thing. And finally, it lets you maintain in your Git source control all the metadata, which is otherwise totally owned by Apple and kept in their own place. So really, really cool stuff.
Obviously, you're gonna need to understand what goes on in the real world. That's part of continuous delivery, getting the feedback. And for that, you're gonna need insights and analytics, obviously crash reports and symbolication on iOS, so you can understand those crashes, which is the core reason. But also, tracking how many users you have, where are they coming from, which devices,
what are they doing inside the application, how long the various things they do take, so kind of getting a sense of their experience. And again, there are a bunch of services there, Xamarin Insights being one of them, but there are also things like Fabric.io and Crashlytics. But it's part of being truly
into this mindset of delivery. Final slide, I think, which is A-B testing, which is really hard to do with the App Store model because you don't control which version a user uses. So with the web, you can just segment, say 20% gets A, and the rest gets B,
or the other way around. But with App Store, the user decides if it's gonna get the latest version or not. So a technique that some companies use, which is called View JSON, which I think is a terrible name, but you can think of it as server-driven rendering of the view. So the server decides how you're gonna show this particular component.
So the client will know a number of view types and ask the server, how should I render this particular component? And then the server can decide that. That means now that the server can split and say, well, all the requests coming from this particular device, or from this country, or from this randomly picked subset of users, they get this view,
and then we start tracking how did they respond to that relative to the people getting the other view. Yeah, there's also the added benefit of being able to cut off. If you get a bad view out there, you can cut out so that nobody gets that view because the server decides who gets it.
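A rough sketch of the idea, with all type and property names made up: the client ships knowing a fixed set of renderers, and the server picks which one a given user gets, which is what enables both the A/B split and the emergency cut-off.

```csharp
using System.Collections.Generic;

public enum ViewType { List, Grid, Carousel } // the renderers the client knows

public class ViewDescription
{
    public ViewType Type { get; set; }                          // which renderer
    public Dictionary<string, string> Properties { get; set; }  // its content
}

public interface IViewService
{
    // The server can segment by user, country, device or a random bucket.
    ViewDescription GetView(string screen, string userId);
}

public class EmployeeScreen
{
    readonly IViewService server;

    public EmployeeScreen(IViewService server) { this.server = server; }

    public void Render(string userId)
    {
        var view = server.GetView("employee-screen", userId);
        switch (view.Type)
        {
            case ViewType.List:     RenderList(view);     break;
            case ViewType.Grid:     RenderGrid(view);     break;
            case ViewType.Carousel: RenderCarousel(view); break;
        }
    }

    void RenderList(ViewDescription v)     { /* ... */ }
    void RenderGrid(ViewDescription v)     { /* ... */ }
    void RenderCarousel(ViewDescription v) { /* ... */ }
}
```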
So it's a way to control either load or bad experiences. But again, there's an article by LinkedIn on how they do that, another advanced technique. So that is it. Conclusion, if you don't remember anything else, you gotta start doing continuous delivery
for mobile and you can. Competitors are probably doing it. Certainly the high profile companies are doing it. Tools are ready, they're free, or at least very low cost. Services are out there to help you and it's a matter of days, not years to get this going. And there are techniques, people know how to do it.
And there's a bunch of value to get with these cloud services, including our own, of course. Last quote, I promise. Just read the last line. Whenever you have an opportunity to submit something to Apple, you should have something to submit, and do it. That's kind of the extreme end of this.
So if you can submit once per week to Apple, you should be doing it. All right, time for questions. Anyone? Yeah, of simulator test, yeah.
Well, ideally I would just run the same tests. What I've seen people do is they run the full blown test on simulator and smoke tests on devices. But the reason they do that is cost. So depending on your cost sensitivity, because like these services have a cost.
So in the case, I actually have this, I know I was gonna run out of time, so I put it at the end. But there's a case from eBay here in Denmark where they ran, they used to run three days of manual regression testing. They reduced that to three hours of regression testing for a release. I think they do monthly or every other week releases.
And that was exclusively using automation. In fact, I think we have their CI pipeline here, which looks a bit like what I've talked about. They actually do this. So they build, run unit tests, integration tests and simulator smoke tests on every commit; a full test run nightly; smoke tests in Xamarin Test Cloud nightly; and lightweight manual testing, exploratory testing, at release time. And I think the last part of the pipeline, they haven't automated that yet, because with monthly releases they can live with that. And they only support one language.
So the screenshot generation process is not that bad. So that's approximately how these guys are doing it. I think the point for me is that you need to get started and figure out how you guys want to do it. And you will get the value. More questions? Yeah.
Yeah, I think maybe two years ago, there was this question of, should we really push all these updates out? Doesn't it annoy users? But if you look at how a user experiences an update now compared to two years ago, there are actually not all these notifications blinking all over the screen. All the notifications are actually in the App Store application on iOS. So at least what I experience is, I just go to App Store and do update all. Or I put on auto updates so I don't even see them. So I think it's not as bad as it was before,
this annoying the user thing. It's more like the web space now that it just happens. And Chrome updates, whatever, websites update. I don't even think about it anymore. So I think the world is changing also from the user's perspective. Good question, though.
More questions? Let me see if I can get this. There we have it. I'm manual testing the gray one now. And I just pushed the button. That's a really nice feature. All right, that's it. Thanks a lot for listening.
I think my time is up. And if you want to talk to me or talk to all our delivery lead that has experience with Go, please do. Otherwise, thanks a lot for listening.