The
end of the road for the test phase?
There is much debate about how
testing will be organised in the near future. The testing profession has
evolved from the days when developers tested their own code, through separate
test phases and independent test teams, to today's collaborative testing.
The waterfall method, which is still
being used by many organisations, prescribes an extensive final test phase.
Another significant feature of waterfall is that the software development
process is organised into several sequential periods, or test phases, in which
a dedicated group performs specialised tests. In this context, the term ‘test
phase’ can be defined as a group of jointly executed and controlled test
activities. The testing within each phase may be more or less structured and
formal, but the test activities within a test phase are bound by common
objectives and the same focus or system boundaries.
Afterthoughts
There is little value in a quality
assessment that comes too late. Within the waterfall method, testing often
continues until just before the deadline. Due to the high workload, the test
report is written afterwards, when the system is already in production. What is
the value of remarks and comments at this stage? What should the project do
with bugs that can no longer be fixed, since the deployment is already a fact?
Even when testing is done at earlier
stages, such as the system test, results still come in too late. The party
is already over: the programmers have done their work and want to start
something new, but must wait until the testers deliver their verdict on
quality, often accompanied by a litany of bugs.
Customer experience is a key
performance indicator (KPI) that is gaining popularity with our stakeholders.
Organisations put the customer rating at the centre of their dashboard.
Although bugs are a threat to customer experience, the perceived value of an
exhaustive bug register depreciates quickly.
The aim is not to demonstrate
deviations from the specification; it is to have a satisfied customer. Agile
development aims for ‘right first time’ and therefore focuses strongly on
early detection and quick resolution of errors. The user is involved in the
development, and cooperation is more important than following a formal
specification. This reduces the need for an independent quality judgment at a
later stage.
The development cycle is shortening
The life of software is becoming
shorter due to rapid innovation. Consequently, we must develop our software
faster as well. Kent Beck provided a clear prognosis at the USENIX Technical
Conference.
In the coming years the deployment
cycle will decrease from an average of once a quarter to releases on a daily or
hourly basis. For testing this has two direct consequences. First, testing must
be performed quickly: you cannot test for a month when the software is due to
be released next week. Secondly, the phasing of activities becomes blurred.
Testing is done continuously and by everyone. There is no longer room for a
separate testing phase.
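If releases ship daily or hourly, the checks that gate a release must themselves run in seconds. A minimal sketch of the kind of fast, automated check that can run on every commit; the function names and the compatibility rule are my illustrative assumptions, not something from this article:

```python
# Hypothetical fast release check that could gate every commit or deploy.
# The versioning scheme and compatibility rule are illustrative assumptions.

def parse_version(tag: str) -> tuple:
    """Split a release tag like '2.14.3' into comparable integers."""
    return tuple(int(part) for part in tag.split("."))

def is_compatible(client: str, server: str) -> bool:
    """A release may ship only if client and server share a major version."""
    return parse_version(client)[0] == parse_version(server)[0]

# Checks like these run in milliseconds, so they can be executed
# continuously, on every commit, rather than in a closing test phase.
assert is_compatible("2.14.3", "2.15.0")
assert not is_compatible("2.14.3", "3.0.1")
```

The point is not the rule itself but its cost: a check this cheap belongs in the pipeline, not in a phase.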
A shift to operational assurance
For many organisations the test
phase is still important, but in Agile organisations there is a shift in
emphasis. We see that testing is an activity conducted by many parties:
developers within the sprint, business architects during design and real users
who perform beta tests. Quality attributes like usability, durability and
security are increasingly important and get attention throughout the project.
This is in contrast to the traditional test phase, in which a group of
independent testers, urged to speed by a fast-approaching deadline, executes
its functional tests.
The above description shows a clear
shift from a formal test phase (especially one at the end of the development
process) to a continuous process involving many disciplines. Linda Hayes
indicates that there is a shift from quality assurance to operational
assurance, in which the quality assessment no longer holds the central place;
the support of the operational process does. On the basis of the above
arguments, it is clear that one of the ‘victims’ of this shift is the separate
testing phase.
Note that the above arguments
question the health of the separate test phase and argue that it is a dying
dinosaur. Between the lines you can read that testing as a discipline is far
from dead. It will be organised differently, and other disciplines are getting
involved. Although things are certainly changing, the above arguments are only
one side of the coin. Are there arguments that plead for a separate test phase
at the end of the cycle? Yes, there are!
Arguments for a test phase at the
end of the cycle
In the following paragraphs I will
share some arguments that plead for a separate test phase.
Although it is desirable to test as
early as possible, not all tests can be done upfront; often it is simply not
possible. Unit and system tests only check the quality up to a certain level,
and due to appification and an increase in system couplings, the system chains
are getting longer.
Using adequate simulations and
working with trusted components, many errors can be resolved before
integration, but these measures will never replace a true integration test.
Experience teaches us that when two systems interact for the first time,
unforeseen problems often arise. A test phase at the end prevents these
problems from surfacing in production.
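The gap between a simulated partner system and the real one can be sketched in a few lines. The `PaymentGateway` interface and its `charge()` method are hypothetical names of my own, and the stub uses Python's standard `unittest.mock`; this is an illustration of the principle, not the author's example:

```python
# A minimal sketch of testing against a simulated partner system.
# PaymentGateway and charge() are hypothetical names for illustration.
from unittest import mock

class PaymentGateway:
    def charge(self, amount_cents: int) -> str:
        # In reality this would call the partner system over the network.
        raise NotImplementedError("real integration required")

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    if amount_cents <= 0:
        raise ValueError("amount must be positive")
    return gateway.charge(amount_cents)

# Early, cheap test: replace the partner system with a stub.
stub = mock.Mock(spec=PaymentGateway)
stub.charge.return_value = "OK"
assert checkout(stub, 1999) == "OK"
stub.charge.assert_called_once_with(1999)
# What the stub cannot reveal: timeouts, encoding mismatches and contract
# drift between the two parties, exactly the surprises a true
# integration test exists to catch.
```

The stubbed test is fast and runs inside the sprint; the surprises it cannot catch are the ones that appear when the two real systems first meet.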
The supplier has other interests
Wherever development work is
outsourced, organisational boundaries arise. On either side of this boundary,
parties have their own interests, and these may differ or even conflict. For
political reasons or due to geographic spread, it is difficult for the
accepting party to have real insight into the activities of the supplier;
control and checking by the acceptor is therefore a necessity.
Preferably this is done during the
project and in cooperation with the supplier, but formal acceptance means that
there should be a critical examination once the goods are delivered. Although
the weight of this activity may vary with the trust one has in the supplier
and the risk involved, it pleads for at least a small test phase.
Politics rule
Apart from the question of
acceptance, when organising testing one has to have an eye for the role that
politics plays in the organisation. Increasingly, organisations are expected to
meet compliance standards like Basel, SOX and SEPA (just to name a few). This
forces compliance testing and makes demands on the formality of the testing
activities. Besides, it is often desirable to share responsibility and create
wide commitment. Both can be achieved by involving stakeholders and management
in testing. Such a test phase therefore has a political purpose.
As mentioned, quality attributes
such as security, performance, durability and user friendliness are becoming
increasingly important. Experience shows that these specialised tests are often
best organised separately. Usability, performance and especially reliability
testing seldom fit within a two-week sprint, and if you organise beta testing,
a longer run will lead to greater coverage. All reasons for these tests to be
organised in a – you guessed it – separate test phase.
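To see why such tests do not fit a sprint, consider a latency check: a percentile is only meaningful over a long measurement run. A small sketch, in which the 200 ms service-level objective is a hypothetical figure and the simulated samples stand in for hours of real load-test data:

```python
# Illustrative sketch of a latency check of the kind that rarely fits a
# two-week sprint, because it needs a long measurement run to be meaningful.
import random

def p95(samples):
    """95th percentile of a list of latency samples (milliseconds)."""
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

# Simulated measurements stand in for hours of real load-test data.
random.seed(42)
samples = [random.gauss(120, 15) for _ in range(10_000)]

# Hypothetical service-level objective: 95% of requests under 200 ms.
assert p95(samples) < 200
```

With ten samples the percentile is noise; with tens of thousands, collected over a long run, it becomes evidence, which is why this kind of test earns its own phase.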
Legacy
Almost all organisations have to
deal with legacy. Agile development, continuous integration and continuous
testing are all well and good, but bear in mind that not all software is
suitable for this mode of development. Legacy systems in particular are often
best adapted in a traditional way.
According to Kent Beck, different
types of systems require different testing approaches. It may therefore be
effective to let several test approaches coexist within the same organisation.
Besides legacy systems, there are also legacy organisations. In these
organisations it is not the technology that determines what is possible, but
the culture and the available knowledge. Agile development requires the right
expertise and mindset, and not every organisation is ready for this.
In life-critical systems the above
arguments apply even more strongly. If lives depend on it, the organisation is
bound to tackle as many problems as possible by fully integrating testing into
development, and to retain the necessary objective test moments. Thus the two
opinions merge and coexist side by side.
Best of both worlds
We have seen that there are
arguments for and against separate test phases within software development. I
do not think there is value in a forced decision for or against. We should not
cling to the known test phases just because we are familiar with them, but
neither is it desirable to throw away the old approaches. Current developments
will lead to many changes, and it is important for testers to track them and to
consider their consequences for the testing profession and the way we do our
work.
In my view, the job is getting more
colourful, versatile and challenging. We get new tools and options. Test
strategists will have to think about the contribution they want to make to the
organisation and the objectives they pursue with their activities. On this
basis we can make choices.
Let old and new ideas come together.
This will result in testers who sit beside developers to reduce and rapidly
detect errors, and in testers who work in separately organised test phases
whenever that is more efficient.
I can think of situations where, for
example, all activities related to a certain risk group are combined in a
dedicated test phase. Proper business alignment dictates that the output of our
test activities is closely related to the information needs of the business.
Regardless of the moment at which particular test activities are performed, it
can be rewarding to organise them separately. This holds for all testing
activities that contribute to the same insights and information. By doing so,
the test coordinator becomes the person who, on behalf of the business, ensures
intelligence and comfort for one or more key focus areas.
The test phase is far from dead, but
it will increasingly be defined and organised in a different manner. I think
that is just fine, as long as we stay aligned with the needs of the
organisation, continually challenge ourselves to deliver maximum added value
and contribute to operational excellence.
Happy testing
Prasad