
12 Software development
Key terms
Trace table – a table used when dry-running a program,
with a column recording the value of each variable as it
changes.
Run-time error – an error found in a program when it is
executed; the program may halt unexpectedly.
Test strategy – an overview of the testing required
to meet the requirements specified for a particular
program; it shows how and when the program is to be
tested.
Test plan – a detailed list showing all the stages of
testing and every test that will be performed for a
particular program.
Dry run – a method of testing a program that involves
manually working through a program, or a module from
a program, step by step.
Walkthrough – a method of testing a program; a formal
version of a dry run using pre-defined test cases.
Normal test data – test data that should be accepted by
a program.
Abnormal test data – test data that should be rejected
by a program.
Extreme test data – test data that is on the limits of the
range accepted by a program.
Boundary test data – test data that is on the limits of
the range accepted by a program, or just outside those
limits and therefore rejected by the program.
White-box testing – a method of testing a program that
tests the structure and logic of every path through a
program module.
Black-box testing – a method of testing a program that
tests a module’s inputs and outputs.
Integration testing – a method of testing a program
that tests combinations of program modules that work
together.
Stub testing – the use of dummy modules for testing
purposes.
Alpha testing – the testing of a completed or nearly
completed program in-house by the development team.
Beta testing – the testing of a completed program by a
small group of users before it is released.
Acceptance testing – the testing of a completed
program to prove to the customer that it works as
required.
Corrective maintenance – the correction of any errors
that appear during use.
Perfective maintenance – the process of making
improvements to the performance of a program.
Adaptive maintenance – the alteration of a program to
perform new tasks.
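
The four categories of test data above can be illustrated with a small example. The sketch below assumes a hypothetical validation module, accept_mark, that should accept whole-number percentage marks from 0 to 100 inclusive (the function name and range are assumptions for illustration, not from the text):

```python
def accept_mark(mark):
    """Return True if mark is a valid whole-number percentage (0 to 100)."""
    return isinstance(mark, int) and 0 <= mark <= 100

# Normal test data - values well inside the accepted range
assert accept_mark(50)

# Extreme test data - values on the limits of the accepted range
assert accept_mark(0) and accept_mark(100)

# Boundary test data - the limits and the values just outside them
assert accept_mark(0) and not accept_mark(-1)
assert accept_mark(100) and not accept_mark(101)

# Abnormal test data - values the module should reject
assert not accept_mark(250)
assert not accept_mark("fifty")
```

Testing each category exercises different faults: extreme and boundary values catch off-by-one mistakes in comparisons, while abnormal values check that invalid input is rejected rather than processed.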
12.3.1 Ways of avoiding and exposing faults in programs
Most programs written to perform a real task will contain errors, as
programmers are human and do make mistakes. The aim is to avoid making as
many mistakes as possible and then find as many mistakes as possible before
the program goes live. Unfortunately, this does not always happen and many
spectacular failures have occurred. More than one large bank has found that its
customers were locked out of their accounts for some time when new software
was installed. Major airlines have had to cancel flights because of programming
errors. For about 15 years, one prison service released
prisoners many days earlier than required because of a
faulty program.
Faults in an executable program are frequently faults in the design of the
program. Fault avoidance starts with the provision of a comprehensive and
rigorous program specification at the end of the analysis phase of the program
development lifecycle, followed by the use of formal methods such as structure
charts, state-transition diagrams and pseudocode at the design stage. At the
coding stage, the use of programming disciplines such as information hiding,
encapsulation and exception handling, as described in Chapter 20, all help to
prevent faults.
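
As a sketch of the kind of defensive coding mentioned above, the following hypothetical function uses exception handling to trap a potential run-time error (division by zero) so that the program does not halt unexpectedly (the function and its behaviour are assumptions for illustration):

```python
def mean(values):
    """Return the mean of a list of numbers, or None if the list is empty."""
    try:
        return sum(values) / len(values)
    except ZeroDivisionError:
        # An empty list would otherwise cause a run-time error here
        return None

print(mean([4, 8, 6]))   # 6.0
print(mean([]))          # None instead of a crash
```

Handling the exceptional case explicitly turns a potential run-time error into a defined, testable behaviour.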
Faults or bugs in a program are then exposed at the testing stage. Testing will
show the presence of faults to be corrected, but cannot guarantee that large,