November 7, 2007
"Leading" the Test Effort
You're a PeopleSoft developer and you've just finished your project.
You have done your unit tests and everything is working perfectly.
(Or so you think.) Now it's time for a system or user acceptance test.
How much involvement should you have in this effort? Should you, in fact,
be leading it?
In my experience, answers to those questions cover a wide range.
Some organizations have multiple testing steps that are practically
firewalled from each other. At the other end of the spectrum, some
companies do not have any testing at all beyond what the developer does.
At those installations, developers tend to just move things into
production whenever they are ready—or make changes directly in
the production environment.
But let's ignore those two extremes and talk about the more common
situation. PeopleSoft support and operation are typically handled by
three groups of people, which I'll call technical, functional, and
end users. Where the functional people are assigned varies. They could
be a part of the support group, working side-by-side with the developers.
Or they could be in a separate group—maybe in an HRIS or QA organization,
while the developers are within IT.
In these cases, some very effective testing procedures can be established.
A project is assigned to the functional group for testing. Once it passes
that step, it is sent to the users for an acceptance test (often with
the assistance of the functional people). Assuming that a good test is
done, one of the benefits of this setup is that the independent tests can
uncover problems with the stated requirements or with the developer's
understanding of the requirements, and not simply problems with the
code or page design itself.
Even in this situation there is a danger of one group taking
on too much of a role in another group's test. For example, the functional
people can do so much hand-holding for the end users that the "acceptance test"
simply reproduces their own test. The user acceptance test is a good
opportunity to train the users on the new feature. But a training session
does not necessarily constitute a good test. The purpose of a test is to
verify that the feature meets the users' needs. This means that the users
must take responsibility for designing at least part of their own test or defining
what's important to them.
A similar situation can occur in the handoff from the developer to the
functional analyst or QA. If the developer specifically directs every aspect of the
test to be done ("click here, then there"), then the developer is simply
repeating the unit test and little value is added. The functional person should
know enough about the requirements to be able to design his or her own test,
and that test should cover different ground from the unit test. (If the functional
people are in a generic QA organization that handles all of a company's systems,
and they know little or nothing about PeopleSoft, again little value is added.)
A tester should be a knowledgeable skeptic. Unfortunately, over time, as coworkers get
to know and trust each other, testing often becomes concentrated in the hands of
one person. Everyone else either trusts the result of that person's test or
just repeats the same steps, meticulously documented with screenshots
and click-by-click instructions. This is, in
my opinion, another case where there can be too much documentation—and
quality suffers as a result.
Until next time...