
Mastering Quality Assurance: A Guide to Software Testing

  • Author: Deepika Toshniwal
  • Published On: Apr 4, 2024
  • Category: Events

Software Testing Overview

1.  What is Software Testing?


Software testing is defined as an activity to check whether the actual results match the expected results and to ensure that the software system is defect-free. It involves the execution of a software component or system component to evaluate one or more properties of interest.


Software testing is a process which helps to identify the correctness, completeness, security and quality of developed software applications.


In simple terms, Software Testing means Verification of Application Under Test (AUT).

Or

Software testing is the systematic analysis of application code, employing various approaches to assess its usability and functionality. Its primary aim is to uncover bugs and to ensure that the development process aligns with quality standards.
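
As a minimal illustration of "actual result vs expected result", here is a hedged pytest-style sketch; the `calculate_total` function and its discount rule are assumptions made purely for this example:

```python
# A minimal pytest check of "actual result vs expected result".
# `calculate_total` and its discount rule are assumptions for this sketch.
def calculate_total(prices, discount=0.0):
    """Application code under test: sums prices and applies a discount."""
    return round(sum(prices) * (1 - discount), 2)

def test_calculate_total_applies_discount():
    expected = 90.0                                    # from the requirement
    actual = calculate_total([50, 50], discount=0.10)  # observed behaviour
    assert actual == expected                          # match => no defect here
```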





2.  Who Does the Testing?


It depends on the process and the associated stakeholders of the project(s). In the IT industry, large companies have a team responsible for evaluating the developed software in the context of the given requirements. Moreover, developers also conduct testing, which is called Unit Testing. In most cases, the following professionals are involved in testing a system within their respective capacities:


  •  Software Tester

  •  Software Developer

  •  Project Lead/Manager

  •  End User


Different companies have different designations for people who test the software on the basis of their experience and knowledge, such as Software Tester, Software Quality Assurance Engineer, QA Analyst, etc.

3.  When to Start Testing?

An early start to testing reduces the cost and time of rework and helps deliver error-free software to the client. In the Software Development Life Cycle (SDLC), testing can start as early as the Requirements Gathering phase and continue until the deployment of the software.

4.  When to Stop Testing?


It is difficult to determine when to stop testing, as testing is a never-ending process and no one can claim that software is 100% tested or bug-free.


The following aspects are to be considered for stopping the testing process:


  •  Testing Deadlines.

  •  Completion of test case execution.

  •  Completion of functional and code coverage to a certain point.

  •  Bug rate falls below a certain level and no high-priority bugs are identified.

  •  Management decision.


5.  Why is Software Testing Important?

Testing is important because software bugs can be expensive or even dangerous.

Software bugs can potentially cause monetary and human loss, and history is full of such examples.


  • In April 2015, the Bloomberg terminal in London crashed due to a software glitch, affecting more than 300,000 traders on financial markets and forcing the government to postpone a £3bn debt sale.

  • Nissan had to recall over 1 million cars from the market due to a software failure in the airbag sensor detectors. Two accidents were reported because of this failure.

  • Starbucks was forced to close about 60 percent of its stores in the U.S. and Canada due to a software failure in its POS system. At one point, stores served coffee for free because they were unable to process transactions.

  • Some of Amazon's third-party retailers saw their product prices reduced to 1p due to a software glitch, leaving them with heavy losses.

  • A vulnerability in Windows 10 enabled users to escape from security sandboxes through a flaw in the win32k system.

  • In May of 1996, a software bug caused the bank accounts of 823 customers of a major U.S. bank to be credited with 920 million US dollars.



6.  What is the Software Development Life Cycle?


- Software Development Life Cycle 


SDLC is a process comprising the various phases followed to develop a software application. It is the sequence of activities carried out by developers to design and develop high-quality software. Though SDLC uses the term 'Development', it does not involve just the coding tasks done by developers but also incorporates the tasks of all the other phases, from planning through maintenance.



- Different Phases of the SDLC Model


- Below are the descriptions of the SDLC phases:


  • Planning Phase: Defining project scope, objectives, and resources required.

  • Requirement Gathering Phase: Gathering and analysing requirements to understand user needs.

  • Analysis and Design Phase: Creating a blueprint for the system architecture and functionality.

  • Development/Implementation Phase: Writing code and developing the software based on design specifications.

  • Testing Phase: Assessing software quality through various testing methods to ensure it meets requirements.

  • Deployment Phase: Releasing the software into the production environment for use by end users.

  • Maintenance Phase: Addressing bugs, adding new features, and updating the software to ensure its continued functionality and relevance.


7. What is Software Testing Life Cycle (STLC)?

It is defined as a sequence of activities conducted to perform Software Testing. Contrary to popular belief, Software Testing is not just a single activity. It consists of a series of activities carried out methodically to help certify your software product.


- Different Phases of the STLC Model


- Below are the descriptions of the STLC phases:


  • Requirement Analysis: Understanding and analysing the testing requirements and objectives.

  • Test Planning: Developing a comprehensive test plan outlining test strategies, resources, and timelines.

  • Test Case Development: Creating detailed test cases based on requirements and design specifications.

  • Environment Setup: Establishing the necessary hardware, software, and network configurations for testing.

  • Test Execution: Running test cases and scripts to identify defects and validate the system's functionality.

  • Test Case Closure: Evaluating test results, generating reports, and concluding the testing process.


8. What is the Bug/Defect Life Cycle and How Does it Work in Testing?

1. What is a Bug?

In software testing, a bug is a deviation from the customer requirement. In simple language, it is a deviation between the expected result and the actual result in an application or a module, found by the testing team during the testing period.


2. What is a Defect?

If the functionality of an application is not working as per the customer's requirement, it is known as a defect. It is found during the development phase, while unit testing. Giving wrong input or any code error may lead to a defect.


3. What is an Error?

An error in software testing refers to a slip-up, misunderstanding, or mistake made by a software engineer. In the category of developer, we include software engineers, analysts, programmers, and testers. For example, a developer may misunderstand a design notation, or a programmer might type a variable name incorrectly, leading to an error. Errors are generated because of incorrect logic, loops, or syntax.


4. What is the Defect/Bug Life Cycle ?

In the Software Development Process, the Defect Life Cycle is the life cycle of a defect or bug, covering a specific set of states in its entire life. It mainly refers to the entire journey of a defect, starting from when a new defect is detected to when that defect is closed by the tester. It is also called the Bug Life Cycle.

5. Defect States 


  • New: When a defect is logged and posted for the first time, it is assigned the status NEW.

  • Assigned: Once the bug is posted by the tester, the tester's lead approves the bug and assigns it to the developer team.

  • Open: The developer starts analysing and works on the defect fix.

  • Fixed: When the developer makes the necessary code change and verifies it, the bug status is changed to "Fixed."

  • Pending Retest: Once the defect is fixed, the developer hands the code back to the tester for retesting. Since the retesting is still pending on the tester's end, the status assigned is "Pending Retest."

  • Retest: The tester retests the code at this stage to check whether the defect has actually been fixed by the developer and changes the status to "Retest."

  • Verified: The tester retests the bug after it is fixed by the developer. If no bug is detected in the software, the fix is confirmed and the status assigned is "Verified."

  • Reopened: If the bug persists even after the developer has fixed it, the tester changes the status to "Reopened," and the bug goes through the life cycle once again.

  • Closed: If the bug no longer exists, the tester assigns the status "Closed."

  • Duplicate: If the defect is reported twice or corresponds to an already-reported bug, the status is changed to "Duplicate."

  • Rejected: If the developer feels the defect is not a genuine defect, the status is changed to "Rejected."

  • Deferred: If the present bug is not of prime priority and is expected to be fixed in the next release, the status "Deferred" is assigned.

  • Not a Bug: If the reported issue does not affect the functionality of the application, the status assigned is "Not a Bug."
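
The states above can be pictured as a simple state machine. The sketch below models an illustrative subset of the transitions in Python; the transition map is inferred from the descriptions above and is an assumption for this sketch, not a standard:

```python
# An illustrative state machine for the defect life cycle described above.
# State names follow the article; the transition map is an assumption
# (Duplicate/Rejected/Deferred/Not-a-Bug branches are omitted for brevity).
from enum import Enum

class DefectState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    OPEN = "Open"
    FIXED = "Fixed"
    PENDING_RETEST = "Pending Retest"
    RETEST = "Retest"
    VERIFIED = "Verified"
    REOPENED = "Reopened"
    CLOSED = "Closed"

TRANSITIONS = {
    DefectState.NEW: {DefectState.ASSIGNED},
    DefectState.ASSIGNED: {DefectState.OPEN},
    DefectState.OPEN: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.PENDING_RETEST},
    DefectState.PENDING_RETEST: {DefectState.RETEST},
    DefectState.RETEST: {DefectState.VERIFIED, DefectState.REOPENED},
    DefectState.VERIFIED: {DefectState.CLOSED},
    DefectState.REOPENED: {DefectState.OPEN},  # the cycle repeats
}

def move(current: DefectState, target: DefectState) -> DefectState:
    """Advance a defect to `target`, rejecting transitions the cycle forbids."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target

# Example: a defect that is fixed, retested, and closed.
state = DefectState.NEW
for nxt in (DefectState.ASSIGNED, DefectState.OPEN, DefectState.FIXED,
            DefectState.PENDING_RETEST, DefectState.RETEST,
            DefectState.VERIFIED, DefectState.CLOSED):
    state = move(state, nxt)
print(state.value)  # -> Closed
```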


6. How the Defect/Bug Life Cycle Works





9. What are Severity and Priority in a Bug/Defect?


This information makes it easier for the developer to decide which bug to fix first.


1. Priority 

is determined by the impact of the bug on the business. For example, if an 'o' is missing from the word 'Google' on the Google home page, it does not impact any functionality, but it can impact the name and the business; therefore, this bug will be of higher priority.

- Type of Priority 


  1. High - Such defects need immediate attention, as they might lead to complete failure of the system, and should be resolved at the earliest.

  2. Medium - These problems don’t interfere with the system’s operation. These issues can be resolved concurrently with the testing and design phases. Although these flaws definitely require fixing, they do not require any immediate attention.

  3. Low - These bugs are at the lowest priority. These are fixed once the developer is done with the high and medium-priority bugs.


2. Severity 

is determined by the functional impact of the bug or impact on the application. For example, if a very commonly used button on the homepage of a website does not work, it is a bug of critical severity. Whereas if there is some spelling mistake in a disclaimer text which is present on the last page and is rarely seen by the user, then it will be of trivial severity.

- Type of Severity 


  1. Blocker: a bug that blocks further testing because the app or software crashes in a specific environment due to the bug.

  2. Critical: an error connected with security that leads to program shutdown, data loss or other serious damage. Such bugs disable the app's main functionality and are fixed first.

  3. Major: assigned to a bug which negatively affects large areas of the software checked through certain types of testing. For example, in localization testing, bugs of major severity can be non-displayed letters, systematic omission of spaces, text going beyond the screen bounds, untranslated text, etc.

  4. Minor: an error that does not influence the app's basic functions or the process of testing. This type of bug occurs when, for instance, the text does not fit in a separate bar, there is incorrect hyphenation, or a space is missing in a particular place.

  5. Low: a severity level with little impact on program functioning, generally found in the course of user interface testing. A low severity bug may be the wrong size of a button, a too-bright colour of an object, and so on.


3. Conditions of Priority and Severity 


These levels do not always coincide with the severity division. Bugs can be of:


  • high priority, blocker severity (e.g. the page is not displaying);

  • medium priority, major severity (e.g. the submission button does not work);

  • high priority, critical severity (e.g. the log-in field is missing);

  • lowest priority, low severity (e.g. the wrong colour of the submission button).




10. What is Verification in Testing?



- Verification 

The software application needs to conform to the predefined requirement specifications. As software development goes through different phases, it is necessary to ensure that the deliverable of each phase is as per the specification. Verification makes sure that the software application is being developed in the correct way with respect to the requirements -

Are we doing the job right?


- Verification process

During Verification, the contents of SDLC deliverables, including code (also called work products), are read and reviewed by one or more responsible team members to find defects in them. Walkthroughs and reviews are two important methods of conducting a verification activity.

Verification activities are conducted for the deliverables of the development phases shown on the left arm of the V Model.


Following are the different document deliverables for which verification activity is conducted and defects are reported. Documents or code are then revised to take care of the reported defects.

  • Document of understanding

  • Software requirement specification

  • High level design document

  • Detailed design document

  • Code

11. What is Validation in Testing?



- Validation

The software application should functionally do what it is supposed to and it should satisfy all the requirements set by the customer and end user. Validation is done during or at the end of the development process to determine whether the application satisfies specified requirements. Validation is done by executing the intended functionality on the developed application.

Validation and Verification processes go hand in hand, but the validation process starts after the verification process ends (after coding of the product ends).


12. Difference Between Verification and Validation 



  • Definition: Verification is the process of evaluating the documents/work products of a development phase. Validation is the process of evaluating the software application during or at the end of the development process.

  • Role: Verification ensures that work products meet their specified requirements. Validation demonstrates that the product fulfils its intended use when placed in its intended environment.

  • Slogan: Verification asks "Are we building the product right?" Validation asks "Are we building the right product?"

  • Evaluation: Verification evaluates plans, requirement specifications, design specifications, code, and test cases. Validation evaluates the actual product/software application.

  • Activities: Verification uses reviews and walkthroughs. Validation uses levels of testing.


13. What Does Quality Mean in Software Testing?

1. What Quality Refers to in Software Testing


Quality is meeting the customer requirements the first time and every time. A quality product is one which is fit for use, performing its intended functions at a reasonable cost and within time.


Quality means consistently meeting customer's needs with respect to:

• Requirements

• Cost

• Delivery Schedule 

• Services Offered 


2. How to Ensure Quality


To achieve quality consistently, the organisational processes need to be defined and implemented in a structured way.


1. Quality Management System

QMS can be defined as the systematic definition and utilisation of organisational procedures, processes, structure and resources to implement quality management. QMS includes defining the quality objectives and the processes to achieve quality in the organisation.


Quality Assurance (QA) and Quality Control (QC) are two key attributes of a Quality Management System (QMS).


2. Quality Assurance

QA is a set of procedures that should be followed while developing a product or service to assure good quality to the customers. It includes measuring the processes, identifying deficiencies or weaknesses in the processes, suggesting improvements, and refining the processes. Conducting quality audits based on the QMS is an example of a QA process.


3. Quality Control

QC is a set of activities performed to ensure that the products or services meet the requirements. It is done during the development process (verification) and also once the product is developed (validation). QC measures the product, identifies deficiencies/actual defects, and suggests improvements. It is an activity which verifies whether or not the product meets all standards. Testing (verification and validation) is an example of a QC activity.

14. Difference Between QA and QC



  • Approach: QA takes a preventive approach. QC takes a detective approach.

  • Activities: QA is work-process oriented. QC is work-product oriented.

  • Direct result of activities: QA activities result in changes to the process. QC activities result in changes to the product.

  • Changes: QA changes can range from better compliance with the process to an entirely new process. QC changes can range from a single-line code change to complete reworking.

  • Major focus: QA focuses on finding the process loopholes and identifying the root cause to overcome them. QC focuses on finding the defects in the work product and getting them fixed.

  • Inputs: QA takes organisational processes and feedback from QC as inputs. QC takes product or project requirements as inputs.


15. The Customer is the King!


The customer is the most important entity in the improvement process. The customer's feedback (suggestions/complaints) is the key attribute which helps process improvement. Organisations should dedicate efforts towards customer delight.


Customers are classified into two categories: Internal and External. Customers who are outsiders and not directly involved in the process are External customers, whereas team members, office staff, and corporate employees are Internal customers.


Customer satisfaction and delight should be true for both types of customers.


16. Tester's Contribution to the Quality of a Software Application


• The tester's role is not to build the quality of the product; testers measure the quality and help developers improve it.

• Share the responsibility of Quality improvement by effectively conducting quality control activities - verification and validation.

• Testers can provide inputs to quality processes.

• Frequent review of the development and testing artefacts (Requirement review, Test Case review, etc.) will help an organisation to maintain the quality of the work products.

• Tracing the requirements using RTM is a good practice to ensure the product quality.

• Effective defect logging should be the strength of a tester.

• Adequate use of checklists, tools, templates may help the process improvement.

• On-time escalation of issues helps resolve them at the earliest, drives quality improvement, and ensures quality in the test deliverables.


2. Software Testing Models



1. Waterfall Model 


This model works perfectly for small, less complicated projects and is built on a team's step-by-step progress through the development and test procedure. As it has fewer players and procedures to contend with, it can result in speedy project completion. But bugs are found at later phases, making them extremely costly to fix.


Levels of Waterfall Model 


2. V- Model 


The V Model is an extension of the Waterfall Model. The left arm of the V is a conventional waterfall development flow, and the right arm represents the corresponding levels of validation testing: the deliverable of each phase shown on the left arm undergoes verification. Validation is conducted at different levels, namely unit, integration, system, and acceptance testing. Each phase of development provides input to the respective test plan used in validation.


For example - Acceptance test plans can be made ready once user requirements are captured and verified.

Each verification activity, such as requirement specification verification, functional design verification, etc., has a corresponding validation activity, such as functional validation or testing, unit testing, system testing, etc.


Levels of V Model



3. Iterative Model 


The iterative model does not require a complete list of requirements before the start of the project. The development process begins with the functional requirements, which can be enhanced later. The procedure is cyclic and produces a new version of the software for each cycle. Every iteration develops a separate component of the system that adds to what has been delivered by earlier iterations.


In this SDLC model, requirements and solutions evolve through collaboration between various cross-functional teams. This is known as an iterative and incremental model.


Level of Iterative Model


4. Agile Model 


This model includes early and continuous testing, attention to collaboration among teams, constant interaction with the client, and a focus on flexibility, maintaining the quality of deliverables through continuous testing and validation.


Levels of Agile Model



5. Spiral Model


It is similar to the Agile model, but with more emphasis on risk analysis. The Spiral Model is a Software Development Life Cycle (SDLC) model that provides a systematic and iterative approach to software development. In its diagrammatic representation, it looks like a spiral with many loops. The exact number of loops is unknown and can vary from project to project. Each loop of the spiral is called a phase of the software development process.


  • The exact number of phases needed to develop the product can be varied by the project manager depending upon the project risks.

  • As the project manager dynamically determines the number of phases, the project manager has an important role in developing a product using the spiral model. 

  • It is based on the idea of a spiral, with each iteration of the spiral representing a complete software development cycle, from requirements gathering and analysis to design, implementation, testing, and maintenance.


Levels of Spiral Model



3. Test Design 

1. What is Test Design 


Test design is a process that defines how testing has to be done. It involves identifying the testing techniques, test scenarios, test cases, test data, and expected test results.


2. Process of Test Design 

1. Test Scenarios 

A test scenario can be defined as a top-level view of the functionalities under test, or what needs to be tested at a high level.

How to identify test scenarios?

• From use cases

• From a functionality breakdown

• From state changes of an entity in an application


Test scenarios represent what needs to be tested in an application. Test scenario Identification ensures coverage of all features of the application in testing. If test scenarios are not identified, it may result in a particular functionality not getting tested in detail or not getting tested at all. Test scenarios further help in developing End to End or combination scenarios which actually simulate the complete user interaction with Application. Though test scenarios are important to create, they need not be a formal deliverable always.

2. Test Case 

• Specifies "how" to test the particular functionality.

• Describes steps to be performed with input data and output expectations based on the user requirements.

• Test cases are used during test execution to check the actual behaviour of the application.

A test case provides a detailed procedure that helps to test a particular aspect or feature of an application.

1. Who creates the Test Cases 

Test cases are created by the tester who has sufficient knowledge about the application functionalities and user requirements. The purpose of writing a test case is to detail out a test scenario in a specified format to validate functionality by executing it.

2. Process of Test Case Creation

Test case creation includes identification of test conditions and then documenting the detailed process to test this test condition. Test conditions can be identified in the following two ways:

From Test Scenario breakdown - For every scenario, identify various paths - a normal or happy path, alternate path and error flow. Each path will lead to a unique test condition.

From Use Case or functionality - Check normal flow, alternative flow and error flow documented for use case or functionality. Each of them leads to a unique test condition.

3. How to Document a Test Case?

Every test case should contain the following details:

• Test case ID

• Test case objective

• Prerequisite

• Steps and Data

• Expected Result

• Actual Result

• Status

Test cases can be documented in many formats. The template used for test cases can vary from organisation to organisation. Many organisations use Excel sheets to document test cases, or test cases can be written directly in a test management tool.
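
As an illustration, the same fields can be captured in a structured record; the IDs and values below are invented for the example, and many teams keep identical columns in an Excel sheet or a test management tool:

```python
# The documentation fields listed above, captured as a structured record.
# IDs and values are invented for the example.
test_case = {
    "test_case_id": "TC_LOGIN_001",
    "objective": "Verify login with valid credentials",
    "prerequisite": "A registered user account exists",
    "steps_and_data": [
        "1. Open the login page",
        "2. Enter username 'demo_user' and password 'demo_pass'",
        "3. Click the Login button",
    ],
    "expected_result": "User lands on the dashboard page",
    "actual_result": "",   # filled in during test execution
    "status": "Not Run",   # e.g. Pass / Fail / Blocked
}
```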


3. Test Case Review 

1. Why Review Test Cases ?

• Test cases are the most important deliverable of the Test design phase and if needed, are shared with customers.

• To check correctness and completeness of documented Test cases


Once test cases are created by the tester, they need to undergo a review before they are actually used in validation or dynamic testing. Reviewers can be a peer (test team member), a superior (test lead), or an application expert. Sometimes, customers can also review the test cases.

4. Test Case Storage

 • Test cases should be maintained in the Test Management Tool or in a centralised controlled location

• Tools - Quality Center, TestLink, etc.

5. Test Data 

• Data used as an input to the application while testing

• Effectiveness of the test case depends on the use of correct test data while testing

Test case includes test data to be used during test case execution. One test case can be executed multiple times with different sets of test data to check the application behaviour under different conditions.
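
For instance, with pytest (one common Python test runner), `pytest.mark.parametrize` executes the same test case once per row of test data; the `login` function below is a hypothetical stand-in for real application code:

```python
# The same test case run against several sets of test data:
# pytest's parametrize repeats the steps once per data row.
import pytest

def login(username: str, password: str) -> bool:
    """Hypothetical authentication logic used only for this sketch."""
    return username == "demo_user" and password == "s3cret"

@pytest.mark.parametrize("username, password, expected", [
    ("demo_user", "s3cret", True),    # valid credentials
    ("demo_user", "wrong",  False),   # wrong password
    ("",          "s3cret", False),   # missing username
])
def test_login_with_various_data(username, password, expected):
    assert login(username, password) == expected
```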

6. What is the Purpose of Test Design?

Software testing, like all processes, is an investment and is considered successful when it provides a return on that investment (ROI). To ensure we are getting the ROI, it has to be tracked. One factor that impacts the ROI is the effectiveness of the test strategy, and an effective test design largely dictates the effectiveness of the testing strategy, and thus the ROI from the software testing process.

7. When to Create a Test Design?

Test design should be created once the test conditions are defined and adequate information is available to create test cases at both high and low levels.

3. Test Data Creation Techniques

The test data selection depends on the requirements. For any test condition, the number of possible data inputs can be very high. It is not possible to check the test case with all data values, but at the same time, testers need to ensure that the test cases cover all the possible combinations. Techniques used to achieve this are Equivalence Class Partitioning and Boundary Value Analysis; a small sketch of both follows.
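
As a small, assumption-laden sketch: for a field that accepts integers 18 to 60 (a range invented for illustration), ECP picks one representative value per partition, while BVA picks values at and just around the boundaries:

```python
# Equivalence Class Partitioning (ECP) and Boundary Value Analysis (BVA)
# for a field accepting integers 18..60 (range assumed for illustration).
VALID_MIN, VALID_MAX = 18, 60

# ECP: one representative per partition
# (below the range, inside the range, above the range).
equivalence_values = [VALID_MIN - 5, (VALID_MIN + VALID_MAX) // 2, VALID_MAX + 5]

# BVA: values at the boundaries and just outside them.
boundary_values = [VALID_MIN - 1, VALID_MIN, VALID_MAX, VALID_MAX + 1]

for value in equivalence_values + boundary_values:
    expected_valid = VALID_MIN <= value <= VALID_MAX
    print(f"input={value:>3}  expected_valid={expected_valid}")
```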


1. Boundary Value Analysis



2. Error Guessing 



3. Negative Testing 


4. Equivalence Class Partitioning (ECP)




5. Requirements Traceability Matrix (RTM)






4. Types of Software Testing



1. Manual Testing 

Manual testing is a type of testing in which we do not take the help of any (automation) tools to perform the testing. In this testing, testers create test cases for the code, test the software, and give a final report about that software. Manual testing is time-consuming because it is done by humans, and there is a chance of human error.


1. Types of Manual Testing



1. White Box Testing 

White box testing is done by the developer, who checks every line of code before giving it to the test engineer. Since the code is visible to the developer during testing, it is known as white box testing.


2. Gray Box Testing 

Gray box testing is a combination of white box and black box testing. It can be performed by a person who knows both coding and testing: if a single person performs white box as well as black box testing for the application, it is known as gray box testing.

3. Black Box Testing

Black box testing is done by the test engineer, who checks the functionality of an application or the software according to the customer/client's needs. The code is not visible while performing the testing; that is why it is known as black box testing.


1. Types of Black Box Testing


1. Functional Testing 


Functional testing is defined as a type of testing that verifies that each function of the software application works in conformance with the requirements and specification. This testing is not concerned with the source code of the application. Each functionality of the software application is tested by providing appropriate test input, expecting an output, and comparing the actual output with the expected output. This testing focuses on checking the user interface, APIs, database, security, client or server application, and the functionality of the Application Under Test. Functional testing can be manual or automated.


Types of Functional Testing, also known as Levels of Testing


1. Unit Testing 

Unit testing aims at testing each of the components that a system is built upon. As long as each of them works as defined, the application as a whole has a better chance of working when the units are put together. In procedure-oriented programming, a unit may be an individual program, function, or procedure. In object-oriented programming, the smallest unit is a class. A minimal sketch follows.
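
Here is a minimal unit-test sketch in Python using pytest conventions; the `ShoppingCart` class is a hypothetical unit assumed for the example:

```python
# A unit test: one small unit (a class) is checked in isolation.
# `ShoppingCart` is a hypothetical unit assumed for this sketch.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)

def test_cart_total():
    cart = ShoppingCart()
    cart.add("book", 12.50)
    cart.add("pen", 2.50)
    assert cart.total() == 15.00  # the unit works as defined
```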


2. Integration Testing 


In this testing, two or more unit-tested modules are integrated, i.e., the interacting components are combined, and the integrated modules are then verified to check whether they work as expected; interface errors are also detected. A short sketch follows.
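
Below, two hypothetical units (a tax calculator and an invoice builder) that would each pass their own unit tests are combined, and the test exercises the interface between them:

```python
# Integration sketch: the test checks the integrated behaviour and the
# values crossing the interface, rather than either unit alone.
def tax_for(amount: float, rate: float = 0.18) -> float:
    """Unit 1: computes the tax for an amount."""
    return round(amount * rate, 2)

def build_invoice(amount: float) -> dict:
    """Unit 2: builds an invoice, calling Unit 1 across the interface."""
    tax = tax_for(amount)
    return {"net": amount, "tax": tax, "gross": round(amount + tax, 2)}

def test_invoice_integrates_tax_calculation():
    invoice = build_invoice(100.0)
    # An interface defect (wrong rate, wrong field name) would surface here.
    assert invoice == {"net": 100.0, "tax": 18.0, "gross": 118.0}
```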



Types of Integration Testing


1. Big Bang Integration Testing Approach



2. Top Down Integration 



3. Bottom-Up Integration



3. System Testing 

 

In system testing, the complete, integrated software is tested, i.e., all the elements forming the system are tested as a whole to verify that the system meets its requirements.



Types of System Testing

1. End to end Testing 


End-to-end testing (E2E testing) is a software testing method that verifies the functionality of an application from start to finish, simulating real-world user scenarios. It tests the entire system, including all integrations and dependencies. 


2. Retesting and Regression


Retesting is done to determine whether the identified defect is successfully removed or resolved. It is also known as confirmation testing. Previously failed test cases are executed again in the next cycle to ensure removal of the earlier existing defect.


Regression Testing - As defects are fixed or new functionalities get added into the application, it becomes necessary to check that there is no impact of these changes on the previously working functionality. Regression testing is carried out to determine whether the changed component has affected the functionality of the unchanged component.


3. Smoke and Sanity Testing 


Smoke testing helps the tester assess the stability of the build and the test environment. When any build is received by the testing team, ideally it has to be installed first and needs to support the basic operations of the application. Smoke testing ensures that the build is good enough to conduct the entire execution cycle. If the build does not pass the smoke tests, further execution can be put on hold till the necessary corrections are made to the build.


Sanity testing is a type of software testing that aims to quickly evaluate whether the basic functionality of a new software build is working. It is a subset of regression testing and is performed to ensure that the code changes have not adversely affected the existing functionalities. One common way to keep a small smoke suite runnable against every new build is sketched below.
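
This sketch uses pytest markers; the marker name `smoke` is a team convention assumed here (register it in pytest.ini to silence warnings), and the test bodies are placeholders for real checks:

```python
# Smoke suite via pytest markers: tag critical-path tests and run only
# those against a fresh build with `pytest -m smoke`.
import pytest

@pytest.mark.smoke
def test_application_starts():
    assert True  # placeholder: the build installs and launches

@pytest.mark.smoke
def test_user_can_log_in():
    assert True  # placeholder: the most basic end-user operation works

def test_detailed_report_layout():
    assert True  # not tagged: runs only in the full regression cycle
```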


4. Acceptance Testing 


This is a kind of testing conducted to ensure that the requirements of the users are fulfilled before its delivery and that the software works correctly in the user’s working environment.




Types of Acceptance Testing




1. Alpha Testing 

is performed by the development team to identify and fix bugs before the software is released to the customer.


2. Beta Testing 

is performed by a group of selected customers to provide feedback on the software before it is released to the general public.


3. Customer Acceptance Testing 

is performed by the customer to verify that the software meets their requirements.


4. User Acceptance Testing 

is performed by the end users of the software to ensure that it meets their needs.


2. Non-Functional Testing


Non-functional testing can be performed using a variety of techniques, including performance testing, load testing, stress testing, and usability testing. These techniques help to identify and address potential issues with the software's performance, scalability, and user experience.



1. Security Testing 

It determines whether an application is capable of identifying security-related risks and averting possible attacks (e.g., virus attacks). It is extremely important to conduct security testing for applications handling critical information, sites representing government or military organisations, financial sites, and brand-conscious industries.


Tools - IBM Rational AppScan, HP WebInspect, Web search.


2. Performance Testing 

It determines how a system performs in terms of responsiveness under a particular workload. Performance testing checks whether an application provides the stipulated output in a stipulated time. It is carried out after functional testing.


Performance Testing Tools

• LoadRunner, Rational Performance Tester, Silk Performer, OpenSTA


Types of Performance Testing





1. Load Testing 

A load test is conducted to understand the behaviour of the system under a specific expected load, as per the requirements document. The load can be multiple users accessing the application concurrently.


For example, a web application may be used by a thousand users at a time. This is called the load on the application, and it is specified in the non-functional requirement specifications. All thousand users would not carry out the same task; they would access different features, like:

  • 600 users log in, browse, and then log off.

  • 250 users log in, add items to the cart, check out, and log off.

  • 150 users just log in without any subsequent activity.

A load-test sketch for this workload follows the list.
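
One way to express this workload is with Locust, a Python load-testing tool; the tool choice, host, and endpoints are assumptions for illustration, and the `@task` weights approximate the 600/250/150 split:

```python
# A sketch of the workload above using Locust. Host and endpoints are
# assumed for illustration. Run with: locust -f loadtest.py
from locust import HttpUser, task, between

class ShopUser(HttpUser):
    host = "http://localhost:8000"  # assumed application under test
    wait_time = between(1, 5)       # think time between user actions

    @task(12)  # ~600 of 1000 users: log in, browse, log off
    def browse(self):
        self.client.post("/login", json={"user": "u", "password": "p"})
        self.client.get("/products")
        self.client.post("/logout")

    @task(5)   # ~250 of 1000 users: log in, add to cart, check out, log off
    def purchase(self):
        self.client.post("/login", json={"user": "u", "password": "p"})
        self.client.post("/cart", json={"item_id": 1})
        self.client.post("/checkout")
        self.client.post("/logout")

    @task(3)   # ~150 of 1000 users: log in only
    def login_only(self):
        self.client.post("/login", json={"user": "u", "password": "p"})
```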


2. Scalability Testing 

The main goals of scalability testing are to determine the user limit for the web application and to ensure that the end-user experience under a high load is not compromised. One example is checking whether a web page can be accessed in a timely fashion with a limited delay in response.


3. Endurance Testing 


Endurance testing determines whether a system can sustain a continuous expected load for a long duration. A predefined load is applied for a longer period of time, and the application's performance is checked.


For example, Check the above web application flooded with a thousand users throughout the day.


4. Stress Testing 


Stress testing is used to evaluate the ability of an application to maintain a certain level of performance effectiveness under unfavourable conditions like -


• Overloading of the existing resources with excess jobs

• Application of load beyond the specified limits


For example, testing is conducted by increasing the number of users beyond the prescribed load and running several resource-intensive applications on a single computer at the same time.



5. Volume Testing 


It refers to testing an application with a huge amount of data and checking its limitations in terms of performance. Tests are conducted once the database is loaded with the required volume of data. It determines whether response limits are acceptable to meet the organisation's projected business.


This type of testing is needed mainly for transaction processing systems, like one capturing real-time sales.


6. Spike Testing 


Spike tests are useful when the system may experience events of sudden and massive traffic.


Examples of such events include ticket sales (Taylor Swift), product launches (PS5), broadcast ads (Super Bowl), process deadlines (tax declaration), and seasonal sales (Black Friday).

3. Usability Testing


Usability testing is a technique used in user-centred interaction design to evaluate a product by testing it on users. It is an irreplaceable part of the design process and can help identify usability problems early on.


Types of Usability Testing



1. Exploratory Testing 


This testing allows you to think outside the box and come up with use cases that might not be covered in a test case. 


For example, you might perform one test and then ask yourself, “What if I tried this? What if I didn't do that?”


2. Cross Browser Testing


It is an important part of the testing process in software development. It allows you to check whether your website works as intended across all web browsers, mobile devices, and operating systems, such as Windows, macOS, iOS, and Android.


3. Accessibility Testing 


Accessibility testing is another type of software testing, used to test an application from the point of view of users with disabilities. Here the disability could relate to old age, hearing impairment, colour blindness, and other conditions.


It is also known as 508 compliance testing. In this, we will test a web application to ensure that every user can access the website.


4. UI Testing 


UI comprises controls like text boxes, text areas, radio buttons, dropdown lists, checkboxes, etc. UI testing determines how user-friendly the application is from a look-and-feel perspective.


Examples of Usability Testing

• For a website: is navigation within the web pages provided, and are there home page and logout links on every page?

• Check for the use of correct icons and corresponding tooltips.

• Check that dropdown values are sorted correctly.


4. Compatibility Testing


Compatibility testing is a type of software testing that ensures that an application or software runs seamlessly across different hardware, software, networks, and browsers. It helps identify any compatibility issues that may arise when using the application in various environments. 


Types of Compatibility Testing


1. Backward Compatibility Testing 

signifies verifying the behaviour of the developed hardware/software with the older versions of the hardware/software.


2. Forward Compatibility Testing

verifies the behaviour of the developed hardware/software with the newer versions of the hardware/software.


When a QA team runs a compatibility test, the software is tested on many hardware systems under different conditions. For instance, the QA team will test your software on:


  • Different browsers like Firefox, Chrome, Internet Explorer, Safari, and Brave.

  • Operating systems, including different versions of Windows, Chrome OS, iOS, and Linux.

  • Various levels of computing capacity

  • Hardware peripherals

  • Multiple versions of system software

5. Installation Testing 

• Verify that all the necessary components of the application are getting installed correctly

• Identifies different ways in which installation procedure may cause errors

• Needs proper documented installation procedure


Installation testing is done after system testing. It is needed for all software applications, irrespective of their architecture. Smoke testing can be conducted after installation to confirm a correct installation.


6. Configuration Testing

Configuration means the minimum requirement of the following components for running the application:

• Hardware - RAM, hard disk, CD drive, type of monitor (e.g., VGA), video adapter, particular microprocessor.

• Other peripheral devices - printer.

• Software - operating system (e.g., Windows, Linux) and other prerequisite software.

When an application is developed, it is designed to work for a particular configuration. Configuration testing is done to assess an application's behaviour and performance on the range of hardware and software configurations for which it is designed.


It may include different hardware, processors, operating systems and peripherals.

2. Automation Testing 


Automated software testing involves the development of code/test scripts that carry out tests automatically. To ensure the program is reliable, testers create test scripts with the help of the right automation tools, such as Selenium, pytest, Jenkins, LambdaTest, Postman, and BrowserStack, for faster test execution.
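
For instance, a minimal automated check with Selenium WebDriver (one of the tools named above) might look like the sketch below; the site and the title assertion are illustrative assumptions, and running it requires the selenium package and a Chrome installation:

```python
# A minimal Selenium WebDriver script in pytest style. The URL and the
# title check are assumptions; Selenium Manager fetches the driver.
from selenium import webdriver

def test_home_page_loads():
    driver = webdriver.Chrome()  # launches a real Chrome browser
    try:
        driver.get("https://example.com")   # page under test (assumed)
        assert "Example" in driver.title    # basic smoke-level check
    finally:
        driver.quit()                       # always release the browser
```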

1. Difference Between Manual and Automation Testing




  • In manual testing, the test cases are executed by a human tester; in automation testing, the test cases are executed by software tools.

  • Manual testing requires investment in human resources; automation testing requires investment in tools and automation engineers.

  • In manual testing, the test results are recorded in an Excel sheet, so they are not readily available; in automation testing, the test results are readily available to all stakeholders in the automation tool's dashboard.

  • Manual testing is suitable for exploratory testing, usability testing, and ad hoc testing; automation testing is suitable for regression testing, load testing, and performance testing.



This guide is written by Deepika Toshniwal, Software Test Engineer at Hashtrust Technologies. Leveraging her expertise and experience in software testing, Deepika has curated this comprehensive resource to offer valuable insights and practical advice to readers. You can download this detailed testing guide from this link and access it offline too.  
https://bit.ly/SoftwareTestingGuide_HashtrustTechnologies