- Testing is intended to show that a program does what it is intended to do and to discover program defects before it is put into use.
- To test, you execute a program using artificial data.
- The results of the test run are checked for errors, anomalies, or information about the program’s non-functional attributes.
- Testing can reveal the presence of errors, NOT their absence.
- Testing is part of a more general verification and validation process.
Program testing goals
- To demonstrate that software meets its requirements.
- To discover situations in which the behavior of the software is incorrect or undesirable.
Testing process goals
- Validation testing
–To demonstrate to the developer and the system customer that the software meets its requirements
–A successful test shows that the system operates as intended.
- Defect testing
–A successful test is a test that makes the system perform incorrectly.
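The distinction can be sketched with a small hypothetical example (the `word_count` function is invented for illustration): a validation test confirms that the system operates as intended, while a defect test deliberately probes awkward inputs likely to make it misbehave.

```python
def word_count(text):
    """Count whitespace-separated words in a string."""
    return len(text.split())

# Validation test: demonstrates the intended behaviour.
assert word_count("two words") == 2

# Defect tests: try to force incorrect behaviour with awkward inputs.
assert word_count("") == 0                  # empty string
assert word_count("   ") == 0               # whitespace only
assert word_count("one\ntwo\tthree") == 3   # mixed separators
```

A defect test "succeeds" when it exposes a failure; here, a naive implementation that split on a single space would fail the mixed-separator case.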
Verification vs validation
- Verification: “Are we building the product right?”
–The software should conform to its specification.
- Validation: “Are we building the right product?”
–The software should do what the user really requires.
Inspections and testing
- Software inspections: Analyze the static system representation to discover problems (static verification)
–May be supplemented by tool-based document and code analysis.
- Software testing: Exercise and observe product behaviour (dynamic verification)
–The system is executed with test data and its operational behaviour is observed.
Stages of testing
- Development testing, where the system is tested during development.
- Release testing, where a separate testing team tests a complete version of the system before it is released.
- User testing, where users or potential users of a system test the system in their own environment.
- Development testing includes all testing activities that are carried out by the developers:
–Unit testing, which focuses on testing the functionality of individual objects or methods.
–Component testing, which combines objects into components and focuses on testing component interfaces.
–System testing, in which the components are integrated and the system is tested as a whole, focusing on component interactions.
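A minimal unit-testing sketch (class and test names are invented for illustration): a unit test exercises one object or method in isolation, with one behaviour per test case.

```python
import unittest

class Stack:
    """A tiny example class under test."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    # Each test checks a single behaviour of the unit.
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()

# Run the tests programmatically; exit=False lets the script continue.
unittest.main(argv=["stack-tests"], exit=False)
```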
Testing Quality Dimensions
Content testing has two important objectives:
- to uncover syntactic errors (e.g. grammar mistakes) in text-based documents, graphical representations, and other media
- to uncover semantic errors (i.e., errors in the accuracy or completeness of information) in any content object presented as navigation occurs
- Is the information accurate?
- Is the information concise and to the point?
- Is the layout of the content easy for the user to understand?
- Have proper references been provided for all information derived from other sources?
- Is the content offensive or misleading?
- Does the content infringe on existing copyrights or trademarks?
User Interface Testing
- The complete interface is tested against selected use-cases to uncover errors in the semantics of the interface.
- The interface is tested within a variety of environments (e.g., browsers) to ensure that it will be compatible.
Testing Interface Mechanisms
- Links—navigation mechanisms that link the user to some other content object or function.
- Forms—a structured document containing blank fields that are filled in by the user. The data contained in the fields are used as input to one or more WebApp functions.
- Client-side pop-up windows—small windows that pop up without user interaction. These windows can be content-oriented and may require some form of user interaction.
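Link testing can be partly automated. The sketch below (page content and target set are invented) extracts every `href` from a page and flags links whose targets are not among the known content objects:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def broken_links(page_html, known_targets):
    """Return hrefs that do not resolve to a known content object."""
    parser = LinkExtractor()
    parser.feed(page_html)
    return [href for href in parser.links if href not in known_targets]

page = '<a href="/home">Home</a> <a href="/ghost">Missing</a>'
print(broken_links(page, {"/home", "/about"}))  # ['/ghost']
```

A production link checker would instead issue HTTP requests and inspect status codes; the structure of the test is the same.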
Usability testing is performed at different levels of abstraction:
- the usability of a specific interface mechanism (e.g., a form) can be assessed
- the usability of a complete Web page (encompassing interface mechanisms, data objects and related functions) can be evaluated
- the usability of the complete WebApp can be considered.
Compatibility testing begins by defining a set of “commonly encountered” client-side computing configurations and their variants.
Create a tree structure identifying:
- each computing platform
- typical display devices
- the operating systems supported on the platform
- the browsers available
- likely Internet connection speeds
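The leaves of that tree define a configuration matrix. A small sketch (the concrete platforms, OSs, browsers, and speeds are illustrative, not from the text) enumerates every variant with a cross product:

```python
from itertools import product

# Example dimensions of the configuration tree (invented values).
platforms = ["desktop", "mobile"]
operating_systems = ["Windows", "Linux", "Android"]
browsers = ["Firefox", "Safari", "Opera"]
speeds = ["DSL", "cable"]

# Each tuple is one candidate test configuration.
configs = list(product(platforms, operating_systems, browsers, speeds))
print(len(configs))  # 2 * 3 * 3 * 2 = 36 variant configurations
```

In practice the full cross product grows quickly, so teams usually prune it to the “commonly encountered” subset before testing.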
Testing Navigation Semantics
- Is there a mechanism (other than the browser ‘back’ arrow) for returning to the preceding navigation node and to the beginning of the navigation path?
- Do mechanisms for navigation within a large navigation node (i.e., a long web page) work properly?
- Is every node reachable from the site map? Are node names meaningful to end-users?
- If a node is reached from some external source, is it possible to proceed to the next node on the navigation path? Is it possible to return to the previous node on the navigation path?
- Does the user understand his or her location within the content architecture?
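The reachability question can be checked mechanically. This sketch (the navigation graph is invented) uses a breadth-first traversal to find nodes that cannot be reached from the site map:

```python
from collections import deque

# Hypothetical navigation graph: node -> list of linked nodes.
site = {
    "sitemap": ["home", "products"],
    "home": ["products", "contact"],
    "products": ["home"],
    "contact": [],
    "orphan": [],  # not linked from anywhere
}

def unreachable_from(start, graph):
    """Return all nodes with no navigation path from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(set(graph) - seen)

print(unreachable_from("sitemap", site))  # ['orphan']
```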
Configuration testing: server-side issues
- Is the WebApp fully compatible with the server OS?
- Are system files, directories, and related system data created correctly when the WebApp is operational?
- Do system security measures (e.g., firewalls or encryption) allow the WebApp to execute and service users without interference or performance degradation?
- Has the WebApp been tested with the distributed server configuration (if one exists) that has been chosen?
- Is the WebApp properly integrated with database software? Is the WebApp sensitive to different versions of database software?
- Do server-side WebApp scripts execute properly?
- Have system administrator errors been examined for their effect on WebApp operations?
Configuration testing: client-side issues
- Hardware—CPU, memory, storage and printing devices
- Operating systems—Linux, Macintosh OS, Microsoft Windows, a mobile-based OS
- Browser software—Internet Explorer, Mozilla/Netscape, Opera, Safari, and others
- User interface components—Active X, Java applets and others
- Plug-ins—QuickTime, RealPlayer, and many others
- Connectivity—cable, DSL, regular modem
Security testing
- Designed to probe vulnerabilities of the client-side environment, the network communications that occur as data are passed from client to server and back again, and the server-side environment
- On the client-side, vulnerabilities can often be traced to pre-existing bugs in browsers, e-mail programs, or communication software.
- On the server-side, vulnerabilities include denial-of-service attacks and malicious scripts that can be passed along to the client-side or used to disable server operations
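One common security test checks that user-supplied content is neutralised before being echoed back, so a malicious script cannot be passed along to other clients. A minimal sketch (the `render_comment` function is invented for illustration):

```python
from html import escape

def render_comment(user_text):
    # Server-side rendering must escape markup in user input so that
    # injected <script> tags are displayed as text, not executed.
    return "<p>" + escape(user_text) + "</p>"

attack = '<script>alert("xss")</script>'
rendered = render_comment(attack)
assert "<script>" not in rendered  # the attack payload is defanged
print(rendered)
```

A real security test suite would feed many such payloads through every form field and URL parameter, not just one rendering function.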
Performance testing
- Does the system degrade ‘gently’ or does the server shut down as capacity is exceeded?
- Does server software generate “server not available” messages? More generally, are users aware that they cannot reach the server?
- Are transactions lost as capacity is exceeded?
- Is data integrity affected as capacity is exceeded?
- Does the server response time degrade to a point where it is noticeable and unacceptable?
- What system components are responsible for performance degradation?
- Does performance degradation have an impact on system security?
- Is WebApp reliability or accuracy affected as the load on the system grows?
- What happens when loads that are greater than maximum server capacity are applied?
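The "degrade gently" question can be explored by ramping offered load past capacity and observing what is dropped. A toy simulation (capacity and load figures are invented):

```python
CAPACITY = 100  # requests per second the simulated server can process

def served(load):
    """Requests actually served at a given offered load."""
    return min(load, CAPACITY)

# Ramp the load from below capacity to well beyond it.
for load in (50, 100, 150, 200):
    dropped = load - served(load)
    print(f"load={load:3d} served={served(load):3d} dropped={dropped:3d}")
```

Against a real server the same ramp is driven by a load-generation tool, and the measured quantities are response time, error rate, and lost transactions rather than a `min()` function.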
Load testing
The intent is to determine how the WebApp and its server-side environment will respond to various loading conditions:
- N, the number of concurrent users
- T, the number of on-line transactions per unit of time
- D, the data load processed by the server per transaction
Overall throughput, P, is computed in the following manner:
P = N x T x D
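A worked instance of the throughput formula (the numbers are illustrative, not from the text):

```python
N = 20      # concurrent users
T = 4       # online transactions per user per unit of time
D = 0.5     # data load (e.g., MB) processed by the server per transaction

P = N * T * D  # overall throughput, in data units per unit of time
print(P)  # 40.0
```

Load testing then varies N, T, and D independently to find the combinations at which response time or error rate becomes unacceptable.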
Characteristics of Testable Software
–Operability: the better it works (i.e., the better its quality), the easier it is to test.
–Observability: incorrect output is easily identified; internal errors are automatically detected.
–Controllability: the states and variables of the software can be controlled directly by the tester.
–Decomposability: the software is built from independent modules that can be tested independently.
–Simplicity: the program exhibits functional, structural, and code simplicity.
–Stability: changes to the software during testing are infrequent and do not invalidate existing tests.
–Understandability: the architectural design is well understood; documentation is available and organized.
Black & White-Box testing
- Black-box testing
–Knowing the specified function that a product has been designed to perform, test to see if that function is fully operational and error free
–Includes tests that are conducted at the software interface
–Not concerned with internal logical structure of the software
- White-box testing
–Knowing the internal workings of a product, test that all internal operations are performed according to specifications and all internal components have been exercised
–Involves tests that concentrate on close examination of procedural detail
–Logical paths through the software are tested
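The contrast can be illustrated on one small function (invented for illustration): black-box tests are derived from the specification alone, while white-box tests are chosen so that every logical path through the code is exercised.

```python
def triangle_kind(a, b, c):
    """Classify a triangle by its side lengths."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box tests: derived from the specified function only.
assert triangle_kind(3, 3, 3) == "equilateral"
assert triangle_kind(3, 4, 5) == "scalene"

# White-box tests: chosen to exercise every branch of the code,
# including each arm of the isosceles condition.
assert triangle_kind(2, 2, 5) == "isosceles"  # a == b
assert triangle_kind(5, 2, 2) == "isosceles"  # b == c
assert triangle_kind(2, 5, 2) == "isosceles"  # a == c
```

Note that the black-box tests alone would pass even if one arm of the isosceles check were missing; only the white-box tests guarantee every internal path has been exercised.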