Monday, 20 January 2014

Practical aspects in integration testing

I hope you have read my blog post http://qcfromjagan.blogspot.in/2014/01/what-is-integration-testing.html on the definition and main classification of integration tests.

Here, let's discuss the practical aspects of integration tests:
In a typical software development setup, several teams are involved in building the software, and those teams often come from different organizations following different methodologies. Normally, people are not aware of anything happening beyond their own teams and hardly see the bigger picture. However, let's not forget that software is a series of deliverables from all the involved parties. You have to understand, to a reasonable extent, what the others are doing, so that you can plan your own work and deliver a good quality product.

During the development stage, teams focus on finding as many defects as possible. In the context of distributed development, and the contracts and bonus models involved, it often happens that teams focus only on defects in their own code rather than anyone else's. Naturally, the integration between components owned by different teams and organizations is at risk.

What is integration testing?

Testing the interfaces & interactions between different parts of a system is integration testing. It normally includes interactions with the various involved stacks, layers, components, the operating system, the file system etc. Integration testing can be divided into two main sub-types (or sub-labels, in certification terminology):

  • Component testing : Focus is on the interaction between different software components
  • System testing : Focus is on the interaction between different software systems, between hardware & software, across platforms etc.
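To make the component-testing flavour concrete, here is a minimal sketch (the classes and names are hypothetical, invented for illustration): the test exercises the interaction *between* two components, rather than either one in isolation.

```python
class TaxService:
    """Returns the tax rate for a country code (toy data for the sketch)."""
    def rate_for(self, country):
        rates = {"IN": 0.18, "DE": 0.19}
        return rates.get(country, 0.0)

class PriceCalculator:
    """Computes the gross price by collaborating with a TaxService."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def gross_price(self, net, country):
        return round(net * (1 + self.tax_service.rate_for(country)), 2)

def test_calculator_and_tax_service_together():
    # The interface between the two components is what is under test here,
    # not the arithmetic of either component on its own.
    calc = PriceCalculator(TaxService())
    assert calc.gross_price(100.0, "IN") == 118.0
    assert calc.gross_price(100.0, "XX") == 100.0  # unknown country: no tax

test_calculator_and_tax_service_together()
```

A system test, by contrast, would run the same scenario end to end against a deployed service rather than in-process objects.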

Inputs for component testing

Requirement documents
Design document
Code

In a waterfall model, the requirements and design documents are available. The real trick, however, is in the scrum-team world, where the focus is not on documents. Testers should develop some coding skills and knowledge. They have to work as a team with the developers, get over the dev-QA divide and synergize with them to understand the design. Code review is also a very useful technique for a test colleague in a scrum setup.

Flavours of component testing

Normally in component testing, the developer who writes the code also writes the unit tests. Very often - and this is what I consider the Achilles heel - the unit tests cover only the positive flow. Code coverage targets are typically set anywhere above 70%, and they are usually met with ease by coding just to the targets.

The absurdity of setting KPIs is that the KPIs eventually become the reality.

The other approach - less used, but much better for quality - is that the tester writes the unit tests. The difference between a developer coding the unit tests and a tester coding them is the same as the difference between how a developer and a tester test the product: the tests are more exhaustive, more meaningful, and cover the negative flows as well.
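The contrast can be sketched with a hypothetical function (invented for illustration): the first test is the typical developer-written positive flow that pushes coverage past the KPI, while the others are the negative flows a tester tends to add.

```python
def safe_divide(a, b):
    """Divide a by b, rejecting a zero divisor."""
    if b == 0:
        raise ValueError("divisor must be non-zero")
    return a / b

# Positive flow only -- already covers most of the lines:
def test_happy_path():
    assert safe_divide(10, 2) == 5

# Negative flows a tester is more likely to write:
def test_zero_divisor_rejected():
    try:
        safe_divide(1, 0)
        assert False, "expected ValueError"
    except ValueError:
        pass  # the guard clause worked

def test_negative_operands():
    assert safe_divide(-10, 2) == -5

test_happy_path()
test_zero_divisor_rejected()
test_negative_operands()
```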

Probably the most effective form of component testing is test-driven development in the agile world. In this approach, a unit test is written first and shown to fail. Then the actual code is written so that the failing test passes. Writing the unit test and the code happens in sequence, and it is highly effective when done by two different programmers working in pair mode.
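One TDD cycle might look like this (a hypothetical FizzBuzz-style example, chosen only to show the fail-then-pass sequence):

```python
# Step 1: the test is written first and fails, because the function
# does not exist yet:
#
#     def test_fizz():
#         assert fizzbuzz(3) == "Fizz"   # NameError: fizzbuzz not defined
#
# Step 2: just enough code is written to make the failing test pass:

def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3: the test now passes; refactor, then repeat with the next test.
def test_fizz():
    assert fizzbuzz(3) == "Fizz"

def test_buzz():
    assert fizzbuzz(5) == "Buzz"

test_fizz()
test_buzz()
```

In pair mode, one programmer writes the failing test and the other writes the code that makes it pass.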

Will write more about TDD in the days ahead. Enough for now.

What is component testing?

The act of verifying and finding defects in the individually verifiable components of a software system is called component testing, e.g. testing of individual modules, classes etc. It is popularly called unit testing and occasionally module testing. In the organizations I have come across, component testing usually requires access to the code and is done using unit test frameworks and by debugging. Automation using unit test frameworks is systematic and offers a lot of benefits, like ensuring code coverage, regression monitoring etc. Defects, if found, are normally fixed immediately, and sometimes get fixed from the code coverage reports if they do not meet the quality KPIs. However, one thing is certain - monitoring of these defects with a tool is not usually done. Once, we tried to track such defects via a shared Excel sheet; it was cumbersome, and the team concluded that the benefits were too small compared to the effort :). Whether or not I agree with this view is another matter, but the point is that monitoring is rare except for KPIs.

When to stop testing?

It is highly improbable that any software is 100% defect free. Defects always exist, and it is certainly possible to find new issues at any stage of the product life cycle. So what's the right time to stop testing? Logically, the right time to stop testing is when the cost of the test activities exceeds the return on investment you reap from them. The degree of this cost-benefit analysis is often subjective and varies between teams. However, you can regularize such variations between different teams in your organization by setting appropriate shipment criteria.
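One rough way to frame the stopping rule (the function and all the numbers below are made up for illustration, not a real estimation model): keep testing while the expected value of the defects the next test cycle would catch exceeds the cost of running that cycle.

```python
def keep_testing(expected_defects_next_cycle,
                 avg_cost_of_escaped_defect,
                 cost_per_test_cycle):
    """True while another test cycle is expected to pay for itself."""
    expected_benefit = expected_defects_next_cycle * avg_cost_of_escaped_defect
    return expected_benefit > cost_per_test_cycle

# Early on, defect discovery is high and testing clearly pays off:
print(keep_testing(20, 500, 4000))   # True  (benefit 10000 > cost 4000)

# Late in the cycle, few defects remain and it is time to stop:
print(keep_testing(5, 500, 4000))    # False (benefit 2500 < cost 4000)
```

In practice the inputs are estimates, which is exactly why the analysis stays subjective and why shipment criteria are needed to keep teams consistent.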

What is the need for testing?

Testing is done for a variety of reasons:
  • To uncover defects in the program
  • To meet contractual obligations
  • To measure quality
  • To reduce / mitigate / eliminate the risk of losses to customer
Let's not forget that it's a prime tool for decision making as well.