Mobile UI Automation: A New Hope?

A few weeks ago, I had an epiphany about automating mobile apps.

At work we were discussing some of our automated UI tests of the mobile version of our site. Some locators had changed, meaning some of the tests were failing. We talked about what had to be fixed, and this turned out to be pretty straightforward: only a few elements had updated locators, which were easy to find and replace. I casually and mostly-jokingly mentioned that if it came down to it, we could just brute force through all the locators on a given screen and find the ones we needed that way.

That’s when the light bulb went off: mobile web apps could very well be suited to UI automation.

Lately I’ve been thinking that automated UI tests aren’t all that and a bag of chips. From my experience, automated UI tests for desktop and web apps tend to be fragile, and the benefit-cost ratio of writing and maintaining such tests doesn’t always work out. Being a lazy thinker, I figured these experiences would carry over to automating mobile UI tests.

Then I found this presentation by Christian Heilmann on the future of the mobile web. I recommend watching the whole thing. On top of getting me really interested in what the web and browsers can offer for mobile, Christian makes some good points about what makes mobile apps - native or web - so delightful for users.

One, mobile apps are focused. They tend to do one specific task with little variation. Usage of things like alert, message and decision dialogs or pop-ups tend to be minimized or eliminated to keep app workflow and logic narrowly focused.

Two, mobile apps have simple UIs. Total number of screens are reduced, thanks to the previous point, and in turn the total number of controls and elements are reduced. This also helps keep the app sizes small (front- and back-end) to increase performance on lower-powered devices.

Put these two things together, and you potentially have apps that lend themselves well to UI automation.

As well, automated UI testing of mobile apps may provide even more benefit if we consider them “ideal” functional tests. Since a lot of mobile apps should really be tested in the wild on real devices that run out of battery, have intermittent data connections or get dropped in toilets, automated tests on emulators can provide helpful benchmarks for purely functional aspects of an app. Naturally, this assumes that automated UI tests on emulators work at all, but recent developments have made mobile UI automation more effective.

Effective mobile testing might be a brave new world at this point, but it may be a good place for UI automation after all.

Bad Design and Bad Usage: What’s the Difference?

Today I made a cup of coffee at work using our automated coffee maker. It almost ended in tragedy.

After putting in the coffee packet and starting the process, I got out the carton of cream to add to the coffee when it was finished. When the coffee had filled up, I took the plastic lid off the cream carton. The lid slipped out of my hands and rolled underneath the platform where the coffee mug was sitting as there is a little gap between the platform and the counter-top. Since the lid was round, I had some difficulty getting it out from underneath the gap, knocking the platform and almost spilling my coffee everywhere.

Luckily, I escaped with a cup of coffee and no real harm done.

After this, I started thinking about how the situation happened and how I could avoid it in the future. I started to think it was bad design; leaving a small gap underneath the platform for items to roll under could lead to similar issues. I also wondered if the coffee vendor had considered these situations.

Now, I realize that the situation could’ve been easily avoided by simple solutions. I could’ve used a carton with a lid, or removed the full coffee mug and put it on the counter before opening the carton.

The big question I’m left with: is this bad design (more importantly, bad user experience) or was I just using the product incorrectly? And where exactly is the line?

Sometimes I’ve used libraries or methods in my code that look really ugly and break a lot of well-known development conventions and patterns. I look at the code and it’s fragile and hard to understand. I then find out later I was missing some additional information, like fields or arguments I could’ve set in certain methods, that would make the code much easier to read and work with. What starts out as working with “bad” code turns out to be me misusing it or not having some helpful information.

Similarly, if I tried nailing a nail into a piece of wood with a wooden pencil and the pencil breaks in two pieces without actually nailing the nail into the wood, despite no indication the pencil could nail a nail into wood, that seems like a clear failure by me to properly use the pencil, not bad pencil design.

On the other hand, if I tried to write my name on a piece of wood and the pencil could not be used to do so, that seems like a clear failure of design, not improper use of the pencil.

Where’s the line between these two extremes? 

Sometimes it is difficult to distinguish between these two situations. How can we make this easier?

Stuff Git Does

It’s the end of August here, which means an overlap of the dog days of summer with the slow start-up of the school year. I thought this post would fit the mood of things quite nicely.

Here are some handy Git commands that can be used in the Git Bash environment. Each of these is a “one-liner”, but keep in mind Git can do more complex operations by combining commands.
Check what your current staging area and branch status is:
git status

Create and check out a new branch from the current branch:
git checkout -b new_branch_name

Add all files with .foo extensions to the staging area for commit:
git add *.foo

Add and commit all changes in currently tracked files:
git commit -a -m "add and commit"

Revise the commit message of the last commit on the current local branch:
git commit --amend -m "new and improved message"

Copy a file from a previous commit to the current branch, overwriting the existing file:
git checkout HASHVALUE path/to/file.extension

Copy an entire directory from another branch to the current branch:
git checkout other_branch -- path/to/directory

Go back to the previous commit but keep the current state of all files in your repo:
git reset --mixed HEAD^

Go back to the previous commit and reset all files to that commit’s state (i.e. “undo” your committed changes):
git reset --hard HEAD^

Go back four commits in your history and reset the repo to its state at that point:
git reset --hard HEAD~4

Merge changes from the master branch into the current branch, staged as a single commit (you may have to resolve conflicts):
git merge --squash master

Start the merge tool from the command line (it will open any current merge conflicts):
git mergetool

Push the current branch to the remote (i.e. create a remote branch):
git push origin new_remote_branch

Pull and create a local branch based on a remote branch:
git checkout --track origin/new_remote_branch

Delete a local branch (this can be undone if needed, believe it or not):
git branch -D branch_to_delete

View all (remote and local) branches in your repository:
git branch -a

Don’t Swallow Exceptions

In my opinion, test code is code, and should be treated much the same as application code. However, there are some subtle differences, like test code being more damp.

Another difference is exception handling.

An exception occurs when a line of code behaves unexpectedly, raising an error condition that the program has to deal with before doing anything else. In Java or C#, the jargon is that an exception is thrown at this line of code. If the exception is handled (usually in a try/catch block), it is said that the exception is caught.

Here’s a pseudocode example:

try {
    someObj.doSomething();
} catch (Exception e) {
    // handle the exception here
}

In this example, someObj tries to do something. If it succeeds without any exceptions, the statement is executed and the program continues. If it throws an exception, like trying to work with an uninitialized object, execution stops at that point and jumps to the catch block, where the particular exception is handled.

One option in this case is to do nothing, which would look like this:

try {
    someObj.doSomething();
} catch (Exception e) {
    // do nothing
}

This is often called swallowing the exception: you’re taking the exception and swallowing it whole, digested silently and never dealt with anywhere else.

When it comes to application code, exception swallowing is a not-great practice. It can be helpful in very select circumstances, but it’s generally a bad idea.

When it comes to test code, swallowing exceptions is absolutely awful. It’s never a good idea, and is often harmful.

The reason I feel this way is because of what exceptions are meant to do. An unexpectedly thrown exception means the application is acting exceptionally, which generally means something completely unexpected has happened. Often, this is because something bad has happened. An exception is a mechanism to tell the developer this while stopping additional actions that may now be invalid or even impossible to complete. Since test code is all about providing information about the app under test, intentionally hiding that information is counter-productive. As well, swallowing exceptions gives false confidence in tests; you get the green “pass” for a test by hiding lots of possible problems along the way. Bad news all around.

It can be tempting to swallow exceptions because it initially makes things easier when completing functionality in code. There are some measures to prevent problems in test code (I’ll leave swallowing exceptions in application code to app developers).

First, avoid catching broad exceptions. Some specific thrown exceptions might be OK or even expected, but broad catch-all exceptions are not. Here, this might look like:

try {
    someObj.doSomething();
} catch (NullPointerException e) {
    // this specific exception is OK; other exceptions will still propagate
}

You could also re-throw exceptions after handling them. This means you catch the exception but then throw it again after some additional actions. This can be helpful if you want to log the exception somewhere else in a more human-readable format, like so:

try {
    someObj.doSomething();
} catch (Exception e) {
    // log or report the exception in a readable format here...
    throw e; // ...then re-throw it so the failure still surfaces
}

Finally, the easiest thing (and best, IMO) is to not handle the exception at all:

someObj.doSomething(); // don't catch it!

For test code, this has several advantages. It’s less work and less typing, for one, which is always a good thing. Two, any information or issues that arise from this line of code surface without any filter, providing as much information as possible for the test developer and others. And finally, most test frameworks like JUnit already have exception handling built in. Help these frameworks help you.
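To make the contrast concrete, here is a minimal sketch in plain Java. The names (loadWidget and the test methods) are invented for illustration, and no test framework is used so the example stays self-contained:

```java
// Sketch: a swallowed exception produces a meaningless "pass",
// while a propagated exception surfaces the real failure.
class ExceptionDemo {
    // A stand-in for an app call that fails unexpectedly.
    static String loadWidget() {
        throw new IllegalStateException("widget service is down");
    }

    // Swallowing: the test "passes" and reports nothing useful.
    static boolean swallowingTest() {
        try {
            loadWidget();
        } catch (Exception e) {
            // do nothing -- the failure is hidden
        }
        return true; // green result, zero information
    }

    // Propagating: the caller (or a framework like JUnit) sees the failure.
    static boolean propagatingTest() {
        loadWidget(); // the uncaught exception surfaces here
        return true;  // never reached when loadWidget() fails
    }

    public static void main(String[] args) {
        System.out.println(swallowingTest()); // prints true despite the failure
        try {
            propagatingTest();
        } catch (IllegalStateException e) {
            System.out.println("failure surfaced: " + e.getMessage());
        }
    }
}
```

A framework would report the propagated exception as an error with a full stack trace, which is exactly the information the swallowed version throws away.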

Keep Your Test Code Damp

First off, a hat tip to Jim Holmes for providing the sources for this post.

Earlier today, I was part of a discussion around writing test code and not being too redundant. When it comes to writing code, usually writing concise and non-repetitious code is a good thing. This is covered by the concept of Don’t Repeat Yourself (or DRY). The mnemonic is to make code DRY. 

On the other hand, sometimes (particularly with test code) a little bit of repetition can be a beneficial thing. It can help keep code understandable and maintainable. Having two test methods that are very similar but not quite the same thing can be better than having one test method that tries to cover two slightly different cases. 

Code like this may be referred to as Descriptive And Meaningful Phrases (DAMP). Basically, tests that have a little repetition to make them easier to understand and maintain aren’t quite DRY but DAMP. 
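As a sketch of the difference (the widget name validator and its rules here are made up for illustration):

```java
// Sketch: DAMP tests repeat a little but each reads as one clear statement;
// the DRY version is shorter, but a failure no longer says which rule broke.
class DampVsDry {
    // A made-up rule: widget names must be non-empty and at most 10 characters.
    static boolean isValidName(String name) {
        return name != null && !name.isEmpty() && name.length() <= 10;
    }

    // DAMP: two similar tests, one behaviour each.
    static boolean rejectsEmptyName() {
        return !isValidName("");
    }

    static boolean rejectsTooLongName() {
        return !isValidName("averylongwidgetname");
    }

    // DRY: one test covering both cases with no repetition.
    static boolean rejectsInvalidNames() {
        return !isValidName("") && !isValidName("averylongwidgetname");
    }

    public static void main(String[] args) {
        System.out.println(rejectsEmptyName() && rejectsTooLongName() && rejectsInvalidNames());
    }
}
```

If rejectsTooLongName fails, the test name alone tells you what broke; if rejectsInvalidNames fails, you have to dig in to find out which case went wrong.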

This might sound like bad practice but in my experience having tests be a bit DAMP is often more helpful than making them totally DRY. And it turns out, other people also have had similar experiences. 

Just one thing to keep in mind while writing test code vs other kinds of code.

Testability Is Always a Good Thing

The title may seem like a stretch or wild opinion, but I don’t think it is.

Suppose a piece of software is developed. What does making it testable mean? What’s the benefit?

Testability is having a way of reducing uncertainty around what a piece of software does. It provides a valuable method of answering the question “How do you know what you know?” about your code.

Bluntly, most problems with software arise when developers don’t completely understand how and why a piece of software does what it does. How do you know what a function does? How do you know a particular feature does what you think it does?

Testability is a feature. It helps with these questions and others, and it is always a good thing.

In Defense of Java

A couple of weeks ago, I found a Wikipedia article on - basically - why Java is a bad language.

I’m no Java advocate, but sometimes Java bashing can get a little out of hand. So I thought I’d post some good things about Java (based on my experience).

Full disclosure: I love my wife, I love my cats, but I do not love Java.

In no particular order, here are some of the good parts of Java:

Packaging: Creating and maintaining packages of Java code is straightforward. Make a directory/subdirectories, create classes in .java files with correct imports, then jar it all up! Managing packages is even easier with modern IDEs like Eclipse. Creating reusable libraries works more or less exactly as it should, without any strict structures or external tools. In turn, adding and removing libraries from other projects is as simple as adding or removing jars.

Packaging is one of the underappreciated aspects of Java as it allows for making “production-ready” code easier to build and maintain. It’s also quite helpful when working with projects that are larger than a couple of files. Even in Python - one of my favourite languages - projects can be difficult to build, or dependencies not easy to manipulate (such problems are why virtualenv exists).

Testing Frameworks: When it comes to unit testing frameworks, JUnit is the benchmark. Written by Kent Beck (and a great example of test-driven development), JUnit really is an excellent unit testing framework. It strikes a great balance between using Java’s native language features and being a separate testing library. Concepts like annotations for test methods, setup/teardown methods and running test suites via XML are good practices that are emulated elsewhere in other languages. Going a step further, TestNG is a test framework written by Cedric Beust that is inspired by JUnit. TestNG extends JUnit’s features in logical ways, allowing for setup/teardowns before classes, test suites and groups in addition to test methods. It also provides easy functionality to run tests in parallel, as well as valiantly providing a way to give methods default parameter values. JUnit is a great framework for writing unit tests and TestNG is a great framework for writing any other kind of automated tests.

Cross-Platform: Say what you will, but this is at least trivially true. Even C code needs to be recompiled on different platforms to produce an executable.

Consistent Paradigm: Java was designed as an object-oriented (OO) language, which it continues to be. Even if you’re not a big fan of OO programming, there is value in a language having a consistent underlying paradigm. Almost every class is kept in its own file, and every object (probably) extends java.lang.Object. This makes projects and classes easier to grok and aids in producing usable design patterns. Even though Java has made lambda expressions a first-class language feature, and using Java sometimes means getting a gorilla along with your banana, at least with Java you know what you’re getting.

public static void main(): Honestly, being able to turn any class into an executable by adding a single static method is a surprisingly great feature. It provides a quick and easy way to check output or see what a piece of code is doing. I’m not sure if Java did this first, but it’s something I use a fair bit.
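For example, here is the kind of throwaway check I mean (the Temperature class is made up for illustration):

```java
// Sketch: drop a main() into any class to quickly poke at its behaviour.
class Temperature {
    private final double celsius;

    Temperature(double celsius) {
        this.celsius = celsius;
    }

    double toFahrenheit() {
        return celsius * 9.0 / 5.0 + 32.0;
    }

    // A quick sanity check, runnable without any test harness.
    public static void main(String[] args) {
        System.out.println(new Temperature(100).toFahrenheit()); // prints 212.0
    }
}
```

Once a proper unit test exists you can delete the main() - or leave it; it’s harmless.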

That’s it for now for my sort-of defense of Java. Like it or not, it’s here to stay.

Being Cool About Best Practices

In software testing, much ink has been spilled on the topic of “best practices”. This tweet really sums things up well for me.

Although I believe that much of software testing and quality is highly context-dependent and context-driven, some cool ideas do often work well in many situations and contexts. Note I didn’t say these ideas always work in all situations. That is important.

The closest I get to using “best practice” is “good practice”, usually written as in this sentence: “A good practice when working with widgets is to build a new sprocket for them first”. Instead of insisting there is a single best practice for testing scenarios (suggesting a unique, globally optimal approach), there are often good practices that work well (not unique and maybe only locally optimal some of the time). Hence a good practice instead of the best practice.

I think the “cool idea” expression takes things a step forward by removing any sense of formality of the circumstance. Cool ideas are pretty cool, but can become uncool as well. Of course, to appreciate this point, you have to be paying attention to the coolness of the situation. Paying attention is quite helpful when testing software.

Let me be among the first to promote this phrasing, coined by Michael Bolton, as part of the context-driven tester’s vocabulary. Get your team thinking about cool ideas in their work.

The “Build It After They Come” Anti-Pattern

Lately I’ve been thinking a lot about UI automation. There’s plenty to think about.

On the one hand, I’ve thought that UI automation kind of sucks. It’s an opinion that’s slightly radical but not completely unfounded.  

On the other hand, UI automation can be a great tool for developers, testers and software development teams. It can be helpful for a variety of reasons. Plus, there are some great tools and resources out there for UI automation.

I think part of the problem is that, like a lot of things in software development, UI automation isn’t itself bad but may be used badly or in improper situations. Like pointers in C++, using them and using them in an appropriate way can be very, very different things.

One way that UI automation can be used inappropriately, I think, is to delay it until late in the development cycle. The situation ends up like this:

  1. Software is developed with a specific release date in mind
  2. Development and testing proceed until close to the release
  3. At this point, automated UI tests are requested to look for issues

The reasoning for following such a pattern is to have the UI tests act as a last check before shipping. Automated tests are straightforward and defined ahead of time, and so act as a good final check.

Sounds pretty reasonable, right? Except that, in practice, there are a number of problems with this situation, such as:

Automated UI tests can be quite sensitive to small changes in the UI, leading to unreliable results: This is the classic case of a radio button list becoming a checkbox list. Small changes in the UI like this can cause unreliable results in automated UI tests. Trying to locate problems and repair them can be difficult, even more so under time pressure.

Test code is susceptible to code rot if not run often: Automated tests may be written and refactored ahead of time, but can degrade if they are not run regularly. This can lead to issues that need to be fixed when the tests are actually needed. See the previous point.

End-to-End UI tests can be relatively complex, meaning they may not be ready to be run in time if they’re not started early enough: One of the arguments for employing UI automation at any point in the development cycle is to save time. Machines are generally faster and more efficient than human beings. However, some scenarios are still complicated enough that machines take a lot of time to complete them, in addition to any setup and initialization that is required.

Automated tests in general can miss subtle bugs a human would find more easily: Computers can only do what they are told to do. It is often valuable to open up an app and take a human look before committing to a release.

From this perspective, running automated UI tests (or any automated tests) as a last step before shipping is a form of release testing, which is a development anti-pattern. Waiting until just before release to do any form of testing is not a terrific strategy, since finding issues can become precarious. Does a show-stopper bug push back the release? What about less critical issues? Or could these issues have been found earlier?

In this sense, instead of having automated UI tests run at the end of a production cycle, it makes sense to start them as soon as possible (possibly even the first thing that gets done). Not only do the test runs become more helpful at providing information, but the process of automating the app can be a source of information as well.

Yes, automated checks can help provide critical information in an efficient way, but there may often be a better way.

Actual Behaviour: It Stinks!

An important skill in software testing is being able to write a good bug report. Here’s a situation that’s been on my mind related to bug reports for the past while.

Is this a valid (albeit fictitious) bug report?

"After opening TestApp in my browser, I logged into my account and did the following:

-Went to the Widgets list
-Clicked on the Create Widget button
-Waited for the New Widget dialogue to appear
-Attempted to create a new widget

The expected result was that I created a new widget easily in a straightforward manner.

The actual result was that creating a new widget sucked, even if I was able to create the widget successfully.

I was able to reproduce this bug using other user accounts on IE, Chrome and Firefox. In all cases creating a new widget basically sucked."

This may seem frivolous; I even thought so myself when I first thought of it. After some reflection, I wondered if this is actually a good bug report.

I’ve often tested apps or features and been left frustrated. Even if the feature works as expected, I find myself being upset, or thinking things like “This workflow is so shitty. Why does it have to be like this?”. In these cases, I find myself thinking that yes, there is a problem here.

What’s interesting is that, by some conventional reasoning, there is no bug. The app works according to the spec. It follows the documentation as expected. The output is correct for the given input. There are no performance issues. Security is not an issue. In some cases, even automated approaches can be applied to this area without problem. Following this line of logic, there is no problem here. Hence, the bug report is closed as “Not a bug”.

But it still sucks.

I do believe that good software testing involves information gathering. Drawing attention to a particular area can be highly beneficial and have value even if there is no clear “problem”. Even if the above bug report is posted, it could trigger a discussion. Maybe there is a usability issue, or something even more subtle like localization or accessibility. There may also be misunderstanding: a particular app may have to be a particular way for reasons the bug report’s author hadn’t thought of. Or it could just be something to improve.

Starting these kinds of conversations can be difficult. Bug reports like the one above may or may not help depending on the team or the culture. However, it could also be a way to express something that is slightly intangible at first. It may even be a shared feeling.

For full disclosure, I’ve never posted a bug report like the example above at work. But I have considered it, and I think in some cases I could defend it. Whether it would be professional or acceptable is really the $64,000 question. I think it can be absolutely acceptable in the right environment. But that’s just me.