Wednesday, September 12, 2007

Traceability Matrix

A traceability matrix is a powerful tool, useful to many audiences. Among other things, it:
  • clears confusion and settles disputes
  • shows coverage of requirements in specs, code, tests, etc., and exposes gaps
  • shows real project progress
  • is a great tool for managing change and assisting with project management
  • helps establish design, development, and test priorities
  • helps identify risk areas
  • helps determine what, if any, third-party technologies are needed
  • helps determine the tools needed for design, development, and testing

NOTES!
This is just an illustration in an MS-Word document. Ideally one would assemble such a matrix in a spreadsheet or database to allow for querying. Custom views can be created to show only those columns that fit the specific needs of the user.
Simply stated, as in a real-life application of such a matrix, the white space represents work to be completed.
Examples within are at varying levels of requirement decomposition. This is intentional. It shows the need for more work - decomposition. It also demonstrates a common challenge to test engineering. The challenge is that some of the requirements are untestable. Ideally, those requirements would be decomposed to a state of testability.
Columns can be modified, added, or deleted/hidden to fit specific purposes.
The following ten columns will be in the traceability matrix:
1. Requirement ID (the requirement ID provided in the SRS document)
2. Requirement (requirement description)
3. High-Level Design (document reference)
4. Implementation Design (implemented or not)
5. Source Code (component/class/program name)
6. User Documentation (preparation)
7. Unit Test Case ID (unit test case IDs)
8. Integration Test Case ID (integration test case IDs)
9. System Test Case ID (system test case IDs)
10. Release / Build Number (build release number)
This gives coverage of test cases at the different levels of testing.
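
As a rough sketch, such a matrix can be assembled in code as well as in a spreadsheet. The following Python example (the requirement IDs, document references, and test case IDs are all hypothetical) shows how the gaps, the "white space", can be queried:

# A minimal traceability matrix as a list of dictionaries.
# All IDs and references below are hypothetical.
matrix = [
    {
        "requirement_id": "REQ-001",
        "requirement": "User can log in with a valid account",
        "high_level_design": "HLD-2.1",
        "implementation_design": "implemented",
        "source_code": "LoginController",
        "user_documentation": "prepared",
        "unit_test_ids": ["UT-101", "UT-102"],
        "integration_test_ids": ["IT-201"],
        "system_test_ids": ["ST-301"],
        "release_build": "1.0.3",
    },
    {
        "requirement_id": "REQ-002",
        "requirement": "User can reset a forgotten password",
        "high_level_design": "HLD-2.2",
        "implementation_design": "",   # white space = work to be completed
        "source_code": "",
        "user_documentation": "",
        "unit_test_ids": [],
        "integration_test_ids": [],
        "system_test_ids": [],
        "release_build": "",
    },
]

def uncovered(matrix, column):
    """Return requirement IDs whose given column is still empty (a gap)."""
    return [row["requirement_id"] for row in matrix if not row[column]]

print(uncovered(matrix, "system_test_ids"))  # -> ['REQ-002']

A custom view is then just another query: select only the columns that fit the specific needs of the user.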


Wednesday, August 29, 2007

Test Bed

What is Test Bed? Who creates test bed?

Test Bed: A test bed is a test environment containing the hardware, instrumentation tools, simulators, software tools, and other support software necessary for testing a system or system component, together with a set of test files (including databases and reference files) in a known state. It is used with input test data to exercise one or more test conditions, measuring the results against expected results.

The test bed is created by the Test Manager or Test Lead during test planning (test strategy, test plan, and test bed). This varies from company to company.
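
As a minimal sketch of the "known state" idea, the following Python unittest example builds a tiny test bed: an in-memory SQLite database seeded with reference data before every test. The table and values are hypothetical.

import sqlite3
import unittest

class AccountTestBed(unittest.TestCase):
    """Each test starts from a database in a known state."""

    def setUp(self):
        # Build the test bed: an in-memory database seeded with reference data.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
        self.db.execute("INSERT INTO accounts VALUES (1, 100.0)")
        self.db.commit()

    def tearDown(self):
        self.db.close()

    def test_known_starting_balance(self):
        # Input test data is measured against an expected result.
        (balance,) = self.db.execute(
            "SELECT balance FROM accounts WHERE id = 1"
        ).fetchone()
        self.assertEqual(balance, 100.0)

if __name__ == "__main__":
    unittest.main()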

Tuesday, August 28, 2007

BVA and ECP


Boundary Value Analysis (BVA): BVA differs from equivalence partitioning in that it focuses on "corner cases", values at or just outside the range defined by the specification. This means that if a function accepts all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001. Values derived through BVA are also often used as a technique for stress, load, or volume testing. This type of validation is usually performed after positive functional validation has completed successfully, using requirements specifications and user documentation.

Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not involve combinations of inputs, but rather a single representative value per class. For example, a given function may accept several classes of input for positive testing. If the function expects an integer and receives an integer as input, this is considered a positive test assertion. On the other hand, if a character or any input class other than an integer is provided, this is considered a negative test assertion or condition.
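
The following Python sketch exercises both techniques against the negative 100 to positive 1000 range described above; accepts() stands in for a hypothetical function under test.

def accepts(value):
    """Hypothetical function under test: accepts integers in [-100, 1000]."""
    return isinstance(value, int) and -100 <= value <= 1000

# Boundary value analysis: test at and just beyond each boundary.
bva_cases = {
    -101: False,  # just below the lower boundary
    -100: True,   # lower boundary
    1000: True,   # upper boundary
    1001: False,  # just above the upper boundary
}

# Equivalence partitioning: one representative value per input class.
ecp_cases = {
    500: True,    # class: integer in range (positive assertion)
    5000: False,  # class: integer out of range (negative assertion)
    "a": False,   # class: non-integer input (negative assertion)
}

for value, expected in {**bva_cases, **ecp_cases}.items():
    assert accepts(value) == expected, value
print("all BVA/ECP cases passed")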


Saturday, August 25, 2007

Testing Models

V Model:

The V-model is a software development model that can be viewed as an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.
Evolution:
The V-model can be said to have developed as a result of the evolution of software testing. Various testing techniques were defined and various kinds of testing were clearly separated from each other which led to the waterfall model evolving into the V-model. The tests in the ascending (Validation) hand are derived directly from their design or requirements counterparts in the descending (Verification) hand. The ‘V’ can also stand for the terms Verification and Validation.
Verification Phases:
Requirements analysis
In this phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform. However, it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated. The user requirements document will typically describe the system’s functional, physical, interface, performance, data, and security requirements, etc., as expected by the user. It is the document the business analysts use to communicate their understanding of the system back to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase.
System Design
System engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. If any of the requirements is not feasible, the user is informed of the issue. A resolution is found and the user requirement document is edited accordingly.
The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows, and sample reports for better understanding. Other technical documentation, such as entity diagrams and a data dictionary, will also be produced in this phase. The documents for system testing are prepared in this phase.
Architecture Design
This phase can also be called high-level design. The baseline in selecting the architecture is that it should realize all the requirements within the given time, cost, and resources. Software architecture is commonly represented as a two-tier, three-tier, or multi-tier model, typically comprising the database layer, the user-interface layer, and the application layer. The modules and components representing each layer, their inter-relationships, subsystems, operating environment, and interfaces are laid out in detail.
The output of this phase is the high-level design document, which typically consists of the list of modules, the brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.
Module Design
This phase can also be called low-level design. The designed system is broken up into smaller units or modules, and each is explained so that the programmer can start coding directly. The low-level design document, or program specifications, will contain:
  • a detailed functional logic of the module, in pseudocode
  • database tables, with all elements, including their type and size
  • all interface details, with complete API references
  • all dependency issues
  • error message listings
  • complete inputs and outputs for the module
The unit test design is developed in this stage.
Coding
At the bottom of the V, the module designs are implemented in actual code; from this point, the process moves up the validation side of the model.
Validation Phases
Unit Testing
In the V-model of software development, unit testing is the first stage of the dynamic testing process. According to software development expert Barry Boehm, a fault discovered and corrected in the unit testing phase is more than a hundred times cheaper than if it is found after delivery to the customer.
It involves analysis of the written code with the intention of eliminating errors. It also verifies that the code is efficient and adheres to the adopted coding standards. Testing is usually white-box. It is done using the unit test design prepared during the module design phase. This may be carried out by software testers, software developers, or both.
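
As an illustration, here is a minimal white-box unit test in Python's unittest style; discount() is a hypothetical module under test, and each branch of its logic is exercised.

import unittest

def discount(price, rate):
    """Hypothetical module under test: apply a percentage discount."""
    if not 0 <= rate <= 100:
        raise ValueError("rate must be between 0 and 100")
    return price * (100 - rate) / 100

class DiscountUnitTest(unittest.TestCase):
    # White-box tests from the unit test design: both branches are covered.
    def test_normal_rate(self):
        self.assertEqual(discount(200, 25), 150)

    def test_invalid_rate_raises(self):
        with self.assertRaises(ValueError):
            discount(200, 101)

if __name__ == "__main__":
    unittest.main()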
Integration Testing
In integration testing, the separate modules are tested together to expose faults in the interfaces and in the interaction between integrated components. Testing is usually black-box, as the code is not directly checked for errors. It is done using the integration test design prepared during the architecture design phase. Integration testing is generally conducted by software testers.
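
A minimal sketch of the idea in Python: two hypothetical modules that would each pass unit testing in isolation are exercised together, so that a fault in the interface between them would be exposed.

import unittest

# Two hypothetical modules, unit-tested separately.
def parse_amount(text):
    """Module A: parse a money string such as '19.99' into cents."""
    return round(float(text) * 100)

def add_tax(cents, percent):
    """Module B: add tax to an amount expressed in cents."""
    return round(cents * (100 + percent) / 100)

class CheckoutIntegrationTest(unittest.TestCase):
    # Black-box test of the interaction between the two modules.
    def test_parse_then_tax(self):
        self.assertEqual(add_tax(parse_amount("19.99"), 10), 2199)

if __name__ == "__main__":
    unittest.main()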
System Testing
System testing compares the system specifications against the actual system. The system test design is derived from the system design documents and is used in this phase. Sometimes system testing is automated using testing tools. Once all the modules are integrated, several errors may arise; testing done at this stage is called system testing.
User Acceptance Testing
Acceptance testing checks the system against the requirements of the user. It uses black-box testing with real data, real people, and real documents to ensure the ease of use and functionality of the system. Users who understand the business functions run the tests as given in the acceptance test plans, including installation and online help. Hardcopies of user documentation are also reviewed for usability and accuracy. The testers formally document the results of each test and provide error reports and correction requests to the developers.
Benefits
The V-model deploys a well-structured method in which each phase can be implemented from the detailed documentation of the previous phase. Testing activities like test design start at the beginning of the project, well before coding, and therefore save a large amount of project time.

Waterfall Model:

In Royce's original waterfall model, the following phases are followed strictly in order:
1. Requirements specification
2. Design
3. Construction (aka: implementation or coding)
4. Integration
5. Testing and debugging (aka: verification)
6. Installation
7. Maintenance

A software project must be adaptable, and spending considerable effort on design and implementation based on the idea that requirements will never change is neither adaptable nor realistic when requirements do change.
A project using the waterfall model moves down a series of steps, starting from the initial idea and ending with the final product. At the end of each step, the project team holds a review to determine whether it is ready to move to the next step. If the project isn't ready to progress, it stays at that level until it is ready.
Notice three important things about the waterfall model:
a) There is a large emphasis on specifying what the product will be. Note that the development or coding phase is only a single block!
b) The steps are discrete; there's no overlap.
c) There's no way to back up. As soon as you're on a step, you need to complete the tasks for that step and then move on; you can't go back.
This may sound very limiting, and it is, but it works well for projects with a well-understood product definition and a disciplined development staff. The goal is to work out all the unknowns and nail down all the details before the first line of code is written. The drawback is that in today's fast-moving culture, with products being developed on Internet time, by the time a software product is so carefully thought out and defined, the original reason for its being may have changed.
From a testing perspective, the waterfall model offers one huge advantage over the other models presented so far. Everything is carefully and thoroughly specified. By the time the software is delivered to the test group, every detail has been decided on, written down, and turned into software. From that, the test group can create an accurate plan and schedule. They know exactly what they are testing, and there's no question about whether something is a feature or a bug. But with this advantage comes a large disadvantage: because testing occurs only at the end, a fundamental problem could creep in early on and not be detected until days before the scheduled product release.



Tuesday, August 21, 2007

Software Testing Definitions

General Testing Process


  • Obtain requirements, functional design, internal design specifications, and other necessary documents.
  • Obtain budget and schedule requirements.
  • Determine project-related personnel and their responsibilities, reporting requirements, and required standards and processes (such as release processes, change processes, etc.).
  • Identify the application's higher-risk aspects, set priorities, and determine the scope and limitations of tests.
  • Determine test approaches and methods: unit, integration, functional, system, load, usability tests, etc.
  • Determine test environment requirements (hardware, software, communications, etc.).
  • Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.).
  • Determine test input data requirements.
  • Identify tasks, those responsible for tasks, and labor requirements.
  • Set schedule estimates, timelines, and milestones.
  • Determine input equivalence classes, boundary value analyses, and error classes.
  • Prepare the test plan document and obtain the needed reviews/approvals.
  • Write test cases.
  • Obtain the needed reviews/inspections/approvals of test cases.
  • Prepare the test environment and testware; obtain needed user manuals/reference documents/configuration guides/installation guides; set up test tracking processes; set up logging and archiving processes; set up or obtain test input data.
  • Obtain and install software releases.
  • Perform tests.
  • Evaluate and report results.
  • Track problems/bugs and fixes.
  • Retest as needed.
  • Maintain and update test plans, test cases, the test environment, and testware throughout the life cycle.

Types of Non-functional Testing:

  • Performance Testing
  • GUI Testing
  • Recovery Testing
  • Memory Management
  • Reliability Testing
  • Maintainability
  • Configuration Testing

Defect age

Defect age is the time gap between when a bug is raised and when it is resolved. Defect age analysis suggests how quickly defects are resolved, by category. Defect age reports are a type of defect distribution report that shows how long a defect has been in a particular state, such as Open.
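
A small Python sketch of the calculation; the dates are hypothetical, and an open defect is aged against the current time:

from datetime import datetime

def defect_age_days(raised, resolved=None):
    """Age of a defect: time between when it was raised and when it was
    resolved (or now, if it is still open)."""
    end = resolved or datetime.now()
    return (end - raised).days

# Hypothetical defects: one resolved, one still open.
print(defect_age_days(datetime(2007, 8, 1), datetime(2007, 8, 15)))  # 14
print(defect_age_days(datetime(2007, 8, 20)))  # age of an open defect today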

Build Interval Period:

The time gap between two consecutive build versions is called the build interval period.

Risk Management:

An organized process to identify what can go wrong, to quantify and assess the associated risks, and to implement and control the appropriate approach for preventing or handling each risk identified.
Risk analysis is the process of evaluating risks, threats, controls, and vulnerabilities.
Risk: A potential loss to the organization.
Vulnerability: A design, implementation, or operational flaw.
Threat: Something that exploits a vulnerability and triggers a risk to become a loss.
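
One common way to quantify a risk is to multiply the probability of the loss event by the size of the potential loss, often called risk exposure; note this formula is an assumption of the sketch rather than something defined above. The risks and numbers below are hypothetical.

# Risk exposure = probability of the loss event x size of the potential loss.
# All risks and figures are hypothetical.
risks = [
    {"risk": "third-party API changes", "probability": 0.3, "loss": 40_000},
    {"risk": "key tester unavailable", "probability": 0.1, "loss": 15_000},
]

# Rank risks by exposure so the highest-exposure items are handled first.
for r in sorted(risks, key=lambda r: r["probability"] * r["loss"], reverse=True):
    exposure = r["probability"] * r["loss"]
    print(f'{r["risk"]}: exposure = {exposure:,.0f}')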