I love analogies. I really do. I use them constantly because they are so helpful in conveying a concept, particularly when discussing technical subject matter with team members, executive management, and customers.

There’s one rather dog-eared analogy I pull from my “PM Toolbox” when engaging in conversations with my customers about user acceptance testing (UAT) and new application rollouts to end users. This analogy occurred to me a few years ago in Brazil while meeting with the Director of IT of a major newspaper, one of my employer’s largest international customers. The topic of our discussion was an assessment of the newspaper’s readiness for an upcoming upgrade of their classified advertising system. The director and I were discussing (debating, really) the risks surrounding the upcoming UAT.

The director (I’ll call him “Mr. Oliveira”) felt his organization was ready for UAT despite having no formal testing plan in place. I disagreed. The upgrade included many feature changes as well as a platform migration, and for all intents and purposes it was more akin to a new application. The user experience would be significantly different, and the system had new features that were not necessarily intuitive to the user. The testers had never seen, touched, or been trained on the new user interface.

In short, I felt Mr. Oliveira’s overconfidence was misplaced.

In more than two decades as a software development and implementation project manager, I have learned one inevitable truth: sending people to test a new application or system without proper training and preparation does not usually end well. For any of the parties involved. And the testers usually bear the brunt of the negative experience.

As Mr. Oliveira and I debated the risks of throwing his untrained and apprehensive team into the upcoming testing cycle, his attitude was more or less “the users will do what we tell them to do. Period.” He went on to argue that the users, accustomed to their current system, could figure things out for themselves.

Which brings me to my analogy.

I asked Mr. Oliveira if he currently drove a manual transmission automobile (yes, he did). I asked if he had taken driver’s training to learn how to drive (yes, he had). I then prompted Mr. Oliveira to recall the experience of learning to drive a manual transmission, something we both agreed required a different skill set (and practice) than driving an automatic.

At that point I suggested that Mr. Oliveira think of the upcoming system upgrade as switching from driving an automatic transmission to driving a manual. From his perspective, this was obviously a more feature-rich improvement. But maybe not so much for the uninitiated and untrained.

I remember learning to drive on a stick shift; my parents insisted on it. Decades later I can still recall struggling to get the hang of shifting without making the car buck like a cranky Shetland pony. Or mastering the trick of starting from a dead stop, on an incline, at a stop sign, with the car behind us mere feet away. Suffice it to say there were a few times when I walked away from a driving lesson flustered and grumbling at my parents for not letting me learn to drive on an automatic. (Of course, now I am grateful for my parents’ foresight.)

So, what do you imagine the results would be if you put someone who has only driven an automatic behind the wheel of a vehicle sporting a manual transmission and asked them to test drive it? And let us also reasonably assume this poor individual applies the only knowledge and skills they have and attempts to drive the car like an automatic.

Yeah, yikes. You can practically hear the gears grinding …

It’s fair to say that it wouldn’t be reasonable to expect our hapless “volunteer” to give an objective review of the car, because they will naturally judge the car’s behavior and performance using the only experience they have: driving an automatic. They will also likely miss (or misjudge) the features that “three-pedal purists” swear by and consider to be what makes manual transmissions superior to automatics.

This would be the case with Mr. Oliveira’s testers. They would not necessarily be able to fairly evaluate the upgraded UI features compared to their previous system. Furthermore, they would most likely mistake new, intentional features for bugs, simply because they don’t know any better.

Feeling the joy of “clutch and shift” and the indescribable satisfaction of downshifting around a corner may be what three-pedal purists live for; but for many mainstream commuters navigating the daily grind of stop-and-go rush-hour traffic, not so much. It is subjective, a matter of perspective (and preference).

Mr. Oliveira got my point (more or less) and eventually agreed to facilitate a brief orientation for the testers. We discussed including an overview to point out the newest features and provide the testers with some background on how and why the new UI had been designed and configured the way it was. This helped narrow the gap between differing perceptions of “intended feature” vs. “bug”.

It may take some debate, and a good analogy (feel free to use mine), to help a customer understand and weigh the risks of transitioning into any new environment, test or real-world, without adequate training. It is certainly worth broaching the subject and at least trying.

Otherwise, sailing into UAT with misplaced overconfidence and underestimated risks brings to mind another analogy. One involving a very large passenger cruise liner and an iceberg…

Have you had challenges with prioritizing proper setup for user acceptance testing within your organization? If so, how has it impacted project and implementation stakeholders — and you? What tactics and techniques have you used to help your or your customers’ organization embrace the importance of properly preparing for user acceptance testing?