This statement will usually earn you nodding heads from those affected. But is that really true? And if so: is mobile testing complex in itself, or only when compared to other testing? And where does that complexity come from?
Nothing new, actually.
Okay, we have to admit that when looking at the plethora of connectivity, interfaces, sensors, platforms and operating systems, screen sizes, manufacturers, apps, browsers and, last but not least, the huge and steadily increasing number of devices, you quickly get the picture of a diversified, fragmented technology landscape in which it's easy to lose track.
On the other hand, it is fair to say that each part of mobile testing taken on its own isn't actually anything new: cross-browser, usability, accessibility, networking, performance, services – we basically already know all that. Multipoint touchscreens and swipe gestures on their own don't explain why we consider mobile testing to be complex (even if their portability often turns out to be nasty).
Nevertheless, I often see mobile projects going through troubled times, at the latest when it comes to testing. But why? In my opinion, because it's not about complexity – but rather intensity. The intensity of testing is different with mobile; more precisely, it's much higher. This is due to a couple of influencing factors that I will discuss in detail below.
Only in rare cases do you see project delivery processes (from RfP through requirements, design and build up to test) working in frictionless harmony. The effects are often well known and – although they leave a tangible impact – still easy to work around. When it comes to mobile, however, a different and much higher load is poured into the funnel of these processes, and what used to be just minor friction suddenly escalates into serious blockers.
This is due to the higher intensity that mobile testing comes with, which is created and influenced by the following factors:
Factor 1: Pressure
First, we have to cope with the intense dynamics of ever-ongoing change in the mobile market. Also, consumer usage, presence and reach are shifting more and more to mobile – to the disadvantage of other channels. This increases the pressure on manufacturers and service providers to deal with mobile. If you're not on mobile, you simply don't exist. As a consequence, everything and everybody is in a rush to mobile. Competitive pressure and time-to-market become more critical than ever.
Secondly, the quality of a mobile presence – as perceived by the end user – has a higher impact: a bad rating of your app is immediately public. Globally. And a bad mobile web shop won't make your customers navigate to your great desktop web shop. They will navigate to your competitor's mobile web shop instead.
In earlier times, one could get off lightly with perpetual beta development: issues in software quality (not to mention end-user security) rarely surfaced in public. If it ever happened at all, it was because a specialised journalist stumbled upon the issue, decided to write about it, got it published, and the readers of the printed issue finally read about it. This left the affected software company quite a grace period to have the corresponding fixpack or service pack at hand.
Gee, did that change. Today, not only software and information are available instantly on demand, globally, 24 hours a day – but so is the feedback about them. This holds true for mobile more than for anything else, built as it is on paradigms and technologies all around instant distribution and consumption of software and information. And hardly a day passes without issues in software quality (not to mention security) making the headlines almost in real time.
Every party involved in delivering a mobile service should (at least by now) understand the importance of quality. With that in mind, let's have a look at factor two.
Factor 2: Cost
Delivery and maintenance of a mobile service are subject to the very same economic principles as any other kind of product or service, meaning: it costs. The tendency to shell out as little as possible for producing something (especially when you offer it free of charge, like many mobile offerings) should make sense even to dummies in economics. Since you can't do without development when developing something, the list of potential major items with room for savings always boils down to just one: testing.
Now, any discussion about which functional testing types could be skipped (accepting the respective risk) in mobile usually ends in the conclusion “everything but acceptance testing”. In almost all of the smaller mobile projects I was involved in, acceptance testing was in fact the only test phase that took place. Because this kind of testing still occurs at the very end of a delivery project's food chain, it usually starts in a setting of accumulated delay, exhausted budgets, a growing list of “known issues” ...and a release date that has long been carved in stone.
To make it clear: when talking about mobile, this is more or less the very test stage where you would want to ensure that a maximum of the expected functionality works across a maximum of devices, from the view of an end user. And this test stage – which, as mentioned above, nobody wants to skip – is deemed okay to start later, last less and become a candidate for further savings? Go ahead, dig your test's grave.
Those who vow to adhere to quality but then don't give it the highest priority should be aware that any release success won't last long. If you're required to save on something, then please don't save on testing. A much smarter way to save (and a more sustainable one in terms of quality) is, for instance, to cut the feature scope of your first version. Your team will love you for it, developers included.
Factor 3: Methodology
From many talks (and some project experience of my own) I suspect that the impression of complexity is often due to a shift in methodology happening in parallel and not directly related to mobile. Testing itself isn't much different from what it is outside of mobile – which, by the way, is also what the affected testers say. But:
Many organizations currently trying to keep pace with mobile find themselves, by coincidence, in a transition from waterfall to agile/Scrum at the same time. Very often, mobile is even taken as the occasion to perform that shift. That puts testers between a rock and a hard place: on one hand there's that mobile technology to deal with, on the other a whole new methodology and set of processes to follow. The fact that most organizations fail to do Scrum correctly (or do agile as a series of small waterfalls, which causes trouble at least for the testing forced downstream) only makes it worse.
While testers are thus worrying about mobile and why the previously working testing process now fails, they have to deal with agile (perhaps even done wrong) on top. That doesn't make it easier. Or to put it this way: testers stood in front of the tip of the testing pyramid with a trunk-load of devices, thinking “Now wait. There's something wrong here.” And even before that issue was resolved, they had to think about how regression testing could be handled in iterations 2 to n, keeping that trunk-load of devices in mind. So, test automation on real devices, then? How fast can we get it up and running?
To summarize: today, mobile testing happens in the crossfire of new technologies and a change of paradigms, both originating on the development side of the fence. Could it be, then, that in order to make mobile testing successful you need development support? Sounds like a plan. The interesting part is that this matches the idea of agile teamwork. It is thus fair to assume that “mobile” and “agile” should actually go well together. But in everyday project reality they still barely do.
There are some more factors besides the ones above, but they are not specific to mobile. Also, most of them result from long-standing organizational issues. With mobile, however, their impact is amplified by the higher intensity of testing.
- The rule-of-thumb pitfall
If test effort estimation is based on a rule of thumb like, say, “testing is one third of the development effort”, and planning then allocates a tester at 50%, it should come as no surprise to see test coverage turn out miserable at best and most of the errors being discovered by end users after go-live. And by the way, it won't make any difference whether you made that mess in waterfall or agile.
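A quick back-of-the-envelope calculation shows how fast the rule of thumb falls apart once a mobile device matrix enters the picture. All numbers below are hypothetical, chosen purely for illustration:

```python
# Rule-of-thumb test budget vs. what a device matrix actually demands.
# Every figure here is an assumption for illustration only.

dev_effort_days = 120                    # assumed development effort
test_budget_days = dev_effort_days / 3   # "testing is one third of development"

# What the mobile test scope really requires:
devices = 12              # assumed target device matrix
scenarios = 80            # assumed acceptance test scenarios
minutes_per_run = 30      # assumed manual execution time per scenario/device

needed_hours = devices * scenarios * minutes_per_run / 60
needed_days = needed_hours / 8           # 8-hour working days

print(f"budgeted:  {test_budget_days:.0f} person-days")
print(f"required:  {needed_days:.0f} person-days")
print(f"shortfall: {needed_days - test_budget_days:.0f} person-days")
```

With these (invented) figures the rule of thumb budgets 40 person-days while the device matrix alone demands 60 – a 50% shortfall before a single retest is counted, and a 50% tester allocation only stretches the calendar further.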
- Death by automation
Equally, test automation is not a substitute for manual testing. Automation can only work in addition to manual testing. More precisely, test automation allows the focus of manual testing to be shifted, making it literally more “valuable”. Trying to automate end-to-end or acceptance test scenarios on mobile sounds like a good idea (also thanks to full-bodied promises by tool manufacturers), and it can indeed work out well. However, the effort required for setup and maintenance is usually grossly underestimated and, even if estimated correctly, hardly returns the investment if funded by a single project alone.
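The return-on-investment point can be made with a simple break-even model: automation only pays off once the cumulative saving over many regression runs outweighs the setup cost plus ongoing maintenance. The cost figures below are assumptions picked for illustration, not measurements:

```python
# Hypothetical break-even model for mobile test automation.
# All cost figures are illustrative assumptions, in person-hours.

setup_cost = 200          # assumed one-off effort to build the suite
maintenance_per_run = 4   # assumed script upkeep per regression run
manual_cost_per_run = 24  # assumed effort to run the regression manually

def breakeven_runs(setup, maint, manual):
    """Smallest number of regression runs after which automation
    is strictly cheaper than manual execution, or None if never."""
    if manual - maint <= 0:
        return None  # maintenance eats the whole saving
    runs = 1
    while setup + runs * maint >= runs * manual:
        runs += 1
    return runs

print(breakeven_runs(setup_cost, maintenance_per_run, manual_cost_per_run))
```

With these numbers the suite pays for itself only after the 11th regression run – which is exactly why a single short project rarely recovers the investment, while a shared, longer-lived suite can.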
- Doing testing instead of quality assurance
Another problem is giving priority to testing rather than QA: we all know that there is no way to improve quality by testing alone. With mobile, however, we tend to focus on testing because of “all those different devices”, at the cost of neglecting proactive QA methods. Better quality and a frictionless testing phase are more readily achieved by involving QA staff earlier, for instance simply by having QA review the requirements or specification deliverables.
I believe that mobile testing doesn't come with higher complexity – but rather with higher intensity. From my experience, it is this intensity that makes small glitches grow into big problems. More than ever, people have to team up and abolish silo thinking. Of course, the QA/testing department also needs to do its homework for mobile. What that means in turn is a different story – and will be part of an upcoming article.