In our day-to-day life, testing is everywhere, whether in technology, garments, medicine, vehicles, real estate, or the food sector. In this article, we will learn about software testing in brief.
- What is testing
- Testing objectives
- Who performs testing
- When testing should start
- When to stop testing
- Difference between Quality Assurance and Testing
- Difference between Verification and Validation
- Difference between Error, defect, bug, fault, and failure
- Testing common beliefs
- Testing Types
- Testing Methods
- Testing Levels
- Different types of system testing
- Test design techniques
- Testing Documentation
What is Testing
Testing is the process of evaluating a system or its components with the intent to find whether it satisfies the specified requirements or not. In simple words, testing is executing a system in order to identify any gaps, errors, or missing requirements with respect to the actual requirements. Software testing is mainly of two types:
(a) Manual testing
(b) Automation Testing
Testing objectives
The test objectives provide a prioritized list of verification and validation objectives for the project.
Functional correctness: To verify that all functionalities work as per the requirements, identify defects, prevent defects, and build confidence in the software.
Authorization: To verify that actions and data are available only to those users with correct authorization.
Service level: To verify that the system has no performance issues at the required service level. This could be a load test, a responsiveness test, or an environment capacity test.
Usability: To verify how easy and pleasant the system is for users to use.
Who performs testing
Software Tester: Performs System testing, integration testing, and non-functional testing
Software Developer: Performs unit testing
Project Lead/Manager: Performs acceptance testing with client
End-User: Uses the software and gives feedback
When testing should start
- In Software Development Life Cycle (SDLC) testing can be started from the Requirements Gathering phase and lasts till the deployment of the software.
- Testing is done in different forms at every phase of the SDLC; for example, during the requirement gathering phase, the analysis and verification of requirements is also considered testing.
- Reviewing the design in the design phase with the intent to improve the design is also considered testing.
- Testing performed by a developer on completion of the code is categorized as unit testing.
When to stop testing
- It is difficult to determine when to stop testing, as testing is a never-ending process and no one can say that any software is 100% tested. The following aspects should be considered when deciding to stop testing:
- Testing Deadlines.
- Completion of test case execution.
- Completion of Functional and code coverage to a certain point.
- Bug rate falls below a certain level and no high priority bugs are identified.
- Management decision.
Difference between Quality Assurance and Testing
| QA (Quality Assurance) | QC (Quality Control / Testing) |
| --- | --- |
| QA makes sure you are doing the right things. | QC makes sure the results of what you’ve done are what you expected. |
| QA aims to prevent defects. | QC aims to identify and fix defects. |
| The QA process starts before executing the program. | QC is always involved after executing the program. |
| QA is a verification process. | QC is a validation process. |
| QA means planning a process. | QC means executing the planned process. |
| QA is the process to create the deliverables. | QC is the process to verify those deliverables. |
| QA is responsible for the full SDLC. | QC is responsible for the STLC. |
Difference between Verification and Validation
| Verification | Validation |
| --- | --- |
| Are you building it right? | Are you building the right thing? |
| Ensures that the software meets all the specified functionality. | Ensures that the functionality meets the intended behavior. |
| Verification takes place first and includes checking documentation, code, etc. | Validation occurs after verification and mainly involves checking the overall product. |
| It is part of quality assurance. | It is part of software testing. |
| It is part of static testing. | It is part of dynamic testing. |
| Verification can’t be an automated process. | Validation can be done by automation. |
Difference between Error, defect, bug, fault, and failure
Error: A mistake made by the developers during development is called an error.
Defect: When testers test a system and find issues that do not match the requirements, that is a defect.
Bug: When a defect is accepted by the developers, it is called a bug.
Fault: A fault is introduced into the software as the result of an error.
Failure: When a software system behaves unexpectedly against the specified requirements, that is called a failure.
Testing common beliefs
- Only those who can’t code become QA engineers
- QA engineers are low-paid
- Anyone can do testing
- Testing only starts when development is done
- Quality assurance and testing are the same
Testing Types
- Manual Testing: writing and executing test cases, feature testing, and bug reporting by hand.
- Automation Testing: using tools like Selenium, Appium, Cypress, etc. to automate application testing.
Testing Methods
Black box Testing: Black Box Testing is a software testing method in which the functionalities of software applications are tested without having knowledge of internal code structure, implementation details, and internal paths.
Black Box Testing mainly focuses on the input and output of software applications and it is entirely based on software requirements and specifications. It is also known as Behavioral Testing.
Grey box Testing: Grey Box Testing or Gray box testing is a software testing technique to test a software product or application with partial knowledge of the internal structure of the application.
The purpose of grey box testing is to search and identify the defects due to improper code structure or improper use of applications.
White box Testing: White Box Testing is a software testing technique in which internal structure, design, and coding of software are tested to verify the flow of input-output and to improve design, usability, and security.
In white-box testing, code is visible to testers so it is also called Clear box testing, Open box testing, transparent box testing, Code-based testing, and Glass box testing.
Testing Levels
There are mainly four levels of testing in software testing:
Unit Testing: checks whether each unit of the software fulfils its functionality. Mainly done by developers; automation engineers can also do it if they have code access.
Integration Testing: checks the data flow from one module to other modules. Mainly this is API testing.
System Testing: Evaluates both functional and non-functional needs for the testing.
Acceptance Testing: checks that the requirements of a specification or contract are met on delivery. Mainly tested by clients before the software product is released.
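As a minimal sketch of the lowest level, a unit test exercises one function in isolation. The function and test names below are hypothetical, chosen only for illustration:

```python
def add_to_cart(cart, item):
    """Unit under test: appends an item and returns the new cart size."""
    cart.append(item)
    return len(cart)

def test_add_to_cart():
    # A unit test checks a single function in isolation, with no other
    # modules, databases, or network calls involved.
    cart = []
    assert add_to_cart(cart, "book") == 1
    assert cart == ["book"]

test_add_to_cart()
```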
Different types of system testing
- Functional Testing
- Usability Testing: To check how comfortable users are with the system.
- Exploratory Testing: Exploring a feature for learning purposes
- Feature Testing: Simple feature testing
- Sanity Testing: Testing a feature and related features in the same module
- Regression Testing: Testing the other features in different modules to ensure that no other feature gets broken due to the new build.
- Smoke Testing: Testing the whole system, but on a priority basis
- Ad-hoc Testing: It aims to generate the bug or break the system at any cost.
- Monkey Testing: It is basically random testing
- Gorilla Testing: Testing a feature repeatedly to make sure that the system does not crash.
- Non Functional Testing
- Performance Testing: To check that the system remains stable and responsive while concurrent users hit it within a predefined load.
- Security Testing: To ensure that the system is not vulnerable and attackers can’t harm it.
Test design techniques
- Equivalence Partitioning
- Boundary Value Analysis
- Decision Table Testing
- State Transition Diagram
- Testing Coverage
Equivalence Partitioning
Equivalence partitioning means dividing the input data into sets of valid and invalid partitions based on the conditions, then testing one representative value from each partition.
Scenario: A software system accepts birthdays of members only from 18 to 100 years.
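For this scenario there are three partitions: below 18 (invalid), 18–100 (valid), and above 100 (invalid). A minimal sketch, with a hypothetical validator function, tests one representative value per partition:

```python
MIN_AGE, MAX_AGE = 18, 100

def is_valid_age(age):
    """Accept members aged 18 to 100 inclusive, per the scenario."""
    return MIN_AGE <= age <= MAX_AGE

# One representative value from each equivalence partition:
assert is_valid_age(10) is False   # partition: below the minimum
assert is_valid_age(50) is True    # partition: valid range
assert is_valid_age(150) is False  # partition: above the maximum
```

Testing one value per partition gives the same confidence as testing every value in it, with far fewer test cases.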
Boundary Value Analysis:
Boundary value analysis tests the values at the edges of the valid input range, using a minimal number of data points between the minimum and maximum.
Formula to get boundary value data:
min-1, min, min+1, max-1, max, max+1
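Applying the formula to the same 18–100 age scenario gives six boundary values. A minimal sketch (the validator function is a hypothetical stand-in):

```python
MIN_AGE, MAX_AGE = 18, 100

def is_valid_age(age):
    return MIN_AGE <= age <= MAX_AGE

# min-1, min, min+1, max-1, max, max+1
boundary_values = [MIN_AGE - 1, MIN_AGE, MIN_AGE + 1,
                   MAX_AGE - 1, MAX_AGE, MAX_AGE + 1]
expected = [False, True, True, True, True, False]

for value, exp in zip(boundary_values, expected):
    assert is_valid_age(value) is exp, f"unexpected result for age {value}"
```

Off-by-one bugs (e.g. using `<` instead of `<=`) are caught exactly at these boundary values.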
Decision Table Testing
Decision table testing is used to test system behavior with different input combinations. The number of possible combinations is 2^n, where n is the number of inputs. For a login form with two inputs (email and password), 2^2 = 4 combinations:

| Case | Email | Password | Login succeeds |
| --- | --- | --- | --- |
| Case 1 | valid | valid | true |
| Case 2 | valid | invalid | false |
| Case 3 | invalid | valid | false |
| Case 4 | invalid | invalid | false |
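The four cases above can be driven from a single table in code. This is a sketch with hypothetical credentials and a hypothetical login function:

```python
VALID_EMAIL = "user@example.com"   # hypothetical test credentials
VALID_PASSWORD = "s3cret"

def can_log_in(email, password):
    # Login succeeds only when BOTH conditions hold (AND of two inputs).
    return email == VALID_EMAIL and password == VALID_PASSWORD

# 2^2 = 4 combinations, one tuple per decision-table row.
cases = [
    (VALID_EMAIL, VALID_PASSWORD, True),          # Case 1
    (VALID_EMAIL, "wrong", False),                # Case 2
    ("bad@example.com", VALID_PASSWORD, False),   # Case 3
    ("bad@example.com", "wrong", False),          # Case 4
]
for email, password, expected in cases:
    assert can_log_in(email, password) is expected
```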
State Transition Diagram
State transition testing helps to analyze the behavior of an application for different input conditions. Any system where you get a different output for the same input, depending on what has happened before, is a finite state system.
Scenario: An ATM booth takes the correct PIN from the user and grants access. If the user gives an incorrect PIN, it prompts the user to give the correct PIN. If the user gives an incorrect PIN 3 times, then the account gets blocked.
A state transition diagram for this scenario shows how the system moves between states based on the user’s successive inputs.
Testing Coverage
Decision coverage is the number of decision outcomes covered by the test cases divided by the total number of possible decision outcomes in the code under test.
It helps to reduce redundant tests and provides high test coverage because it covers all statements and branches in the code.
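A minimal sketch of that ratio, using a hypothetical function with a single decision (which has two possible outcomes, the true branch and the false branch):

```python
def grant_access(age):
    # One decision (age >= 18) with two possible outcomes: True and False.
    if age >= 18:
        return "granted"
    return "denied"

# Each test case exercises one decision outcome.
covered_outcomes = set()
for age, expected in [(25, "granted"), (10, "denied")]:
    covered_outcomes.add(age >= 18)
    assert grant_access(age) == expected

total_outcomes = 2  # the True and False branches of the single decision
decision_coverage = len(covered_outcomes) / total_outcomes
print(f"Decision coverage: {decision_coverage:.0%}")
```

Dropping either test case would leave one branch unexecuted and the coverage at 50%.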
Path coverage ensures that every independent path through the code is executed at least once, so the output can be reached in every possible way.
Scenario: We can sign up to daraz.com.bd by submitting an email and password directly, or via Google or Facebook. Our main intention is to sign up, and we can do it in 3 different ways. Path coverage confirms that we can execute the feature through all possible paths and that no path is missed.
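The signup scenario can be sketched as a function with one path per signup method (plus an error path), and one test per path. The function and return strings are hypothetical:

```python
def sign_up(method, email=None, password=None):
    """Hypothetical sketch of the signup flow with three methods."""
    if method == "email":
        if not email or not password:
            return "error"             # email path with missing credentials
        return "signed up via email"
    elif method == "google":
        return "signed up via google"
    elif method == "facebook":
        return "signed up via facebook"
    return "error"

# One test per independent path ensures no way to sign up is missed.
assert sign_up("email", "a@b.com", "pw") == "signed up via email"
assert sign_up("google") == "signed up via google"
assert sign_up("facebook") == "signed up via facebook"
assert sign_up("email") == "error"
```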
Testing Documentation
Testing documentation involves documenting the artifacts that should be developed before or during the testing of the software.
Some commonly used documented artifacts related to software testing are:
1. Test Plan
2. Test Strategy
3. Test Case
4. Traceability Matrix (RTM)
Test plan contains the following things:
- Analyze the product
- Who will use the software?
- What is it used for?
- How will it work?
- Define the testing strategy
- Define Testing Scope
- Identify Testing Type
- Document risk and analysis
- Define the test objective
- List all the software features (functionality, performance, GUI…) which may need to be tested
- Define the target or the goal of the test based on the above features
- Define Test Criteria
- Suspension criteria
- Exit criteria
- Resource Planning
- Who will test
- When testing will start
- Plan Test Environment
- Performance testing
- Browser compatibility
- Test Estimation
- Determine Test Deliverables
- Document all kinds of test cases, RTM, Testing tools, Bug reports, release notes, etc.
Test Plan vs Test Strategy
| Test Plan | Test Strategy |
| --- | --- |
| In the test plan, the test focus and project scope are defined. It deals with test coverage, scheduling, features to be tested, features not to be tested, estimation, and resource management. | The test strategy is a guideline to be followed to achieve the test objectives and execute the test types mentioned in the test plan. It deals with test objectives, test environments, test approaches, automation tools and strategies, contingency plans, and risk analysis. |
| Defines the testing goal. | Describes the activities to achieve the goal. |
A TEST CASE is a set of actions executed to verify a particular feature or functionality of a software system.
A standard test case format typically contains:
1. Test case ID
2. Test case title/description
3. Preconditions
4. Test steps
5. Expected result
6. Actual result
7. Status (pass/fail)
Requirement Traceability Matrix (RTM)
A traceability matrix is a document that maps the requirements of a product to the test cases that cover them, along with their current status. It helps the testing team understand the level of testing that has been done for the product.
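As a minimal sketch, an RTM can be modeled as a mapping from requirements to covering test cases; the requirement and test-case IDs below are hypothetical:

```python
# Hypothetical RTM: each requirement maps to the test cases that cover it.
rtm = {
    "REQ-001 login": ["TC-01", "TC-02"],
    "REQ-002 signup": ["TC-03"],
    "REQ-003 logout": [],  # no test case yet -> a coverage gap
}

# Requirements with no covering test case are the gaps the RTM exposes.
uncovered = [req for req, tcs in rtm.items() if not tcs]
coverage = (len(rtm) - len(uncovered)) / len(rtm)
print(f"Requirement coverage: {coverage:.0%}, gaps: {uncovered}")
```

Scanning the matrix this way makes it immediately visible which requirements still lack tests.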