As QA professionals, our job is to uncover bugs before they reach the end user. To do this effectively, we incorporate QA as an ongoing process that spans the entire software development lifecycle. We like to compare software development to building a house, with QA serving as the various stages of inspection as the house is being built. We're involved from the initial planning stage to the final implementation, ensuring quality at every step.
But why is this continuous testing so crucial? Let's break it down:
Early bug detection reduces bug fix costs: The earlier we catch a bug, the easier and less expensive it is to fix. More about this in a previous blog post.
Configuration management: Software operates in various environments. Our job is to ensure that the product works across different environments and user configurations.
Avoiding unintended consequences: Sometimes, fixing one issue can inadvertently create another. That's why we can't assume that testing during feature development alone guarantees a bug-free release.
By utilizing continuous testing and employing a variety of testing types, we aim to catch bugs before they reach the end user.
The Foundation: Black Box vs. White Box Testing
Before we dive into specific testing types, let's clarify two fundamental approaches: black box and white box testing.
Black Box Testing: The User's Perspective
Black box testing involves using a product without knowledge of its internal workings. We focus solely on inputs and outputs, ignoring the internal code structure. This approach mimics how end-users interact with the software, putting us in the position of the user and allowing us to understand the product as the user does.
Key point: The majority of manual testing falls under the black box category. This is because it aligns closely with the user experience, allowing us to identify issues that directly impact usability and functionality.
White Box Testing: Understanding the Code
While black box testing allows the tester to test the product without intimate knowledge of the code, white box testing involves analyzing the structure of the product itself, from code to data structures. As this perspective is not available to the user, white box testing is infrequently used in manual testing. White box testing is often reserved for unit, integration, and automated regressions.
Below we'll explore types of manual testing within black box testing.
Types of Manual Testing
It is crucial to understand that while there is a general standard for QA terminology, individual organizations may use types of testing in different ways. As such, it is essential when beginning a partnership or development project to align on shared definitions of terminology and the usage for each testing type. What works for one organization may not necessarily work for another.
1. Regression Testing
What is it?
Regression testing ensures that new code changes haven't adversely affected existing functionality. Regressions are run from detailed test scripts, ranging from testing the entire product (a full regression) to focusing on core areas of risk in the product (a priority 1, or P1, regression).
When to use it:
Before release to the production environment
At LyonQA we recommend running a P1 regression at the end of each sprint, with a full regression run on a quarterly basis.
Ideally, the scope of the P1 regression is decided at the beginning of the sprint, ensuring all parties have an opportunity to weigh in on areas of risk and that the regression is tailored to the outcome of that individual sprint.
What goes into regression testing:
Regression script writing & editing
Regression planning for the specific regression being run
Regression testing based on the specific test scripts for that regression run
Bugs being logged
Bug fixes to be retested as UAT in subsequent sprint cycles
High priority/blocking bugs fixed before release and retested before second regression run
Regression findings shared with key stakeholders and documented
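To make the P1/full distinction concrete, here is a minimal sketch of priority-tagged regression checks. The feature function (`apply_discount`) and both tests are hypothetical stand-ins for real product code and real test scripts; the point is only to show how a run can be scoped to priority 1 or to the full suite.

```python
# Hypothetical sketch: priority-tagged regression checks.
REGISTRY = []  # (priority, name, test_fn)

def regression(priority):
    """Register a check with a priority so a run can be scoped (P1 vs full)."""
    def wrap(fn):
        REGISTRY.append((priority, fn.__name__, fn))
        return fn
    return wrap

def apply_discount(price, pct):
    # Hypothetical product code under test.
    return round(price * (1 - pct / 100), 2)

@regression(priority=1)
def test_discount_core():
    # Core revenue path: included in every P1 regression.
    assert apply_discount(100.0, 10) == 90.0

@regression(priority=2)
def test_discount_zero():
    # Edge case: covered only by the quarterly full regression.
    assert apply_discount(100.0, 0) == 100.0

def run(max_priority):
    """Run the P1 scope (max_priority=1) or the full suite (max_priority=2)."""
    results = {}
    for prio, name, fn in REGISTRY:
        if prio <= max_priority:
            fn()
            results[name] = "pass"
    return results
```

In a real project the same idea is usually expressed with your test runner's tagging mechanism (e.g., pytest markers) rather than a hand-rolled registry.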
2. Exploratory Testing
What is it?
Exploratory testing is a crucial tool in testing the product "as a user." Unbound by test scripts, testers are able to explore the product as a user would, investigating edge case user scenarios and various user configurations. While exploratory testing does not rely on test scripts, it should be based on a general test plan covering the areas to test (often based on areas of risk).
When to use it:
For new features where formal test cases haven't been developed yet
To complement automated tests by finding unexpected issues
To complement P1 regressions to cover areas of the product untouched by regression coverage
What goes into exploratory testing:
General planning around areas to test, aim of exploratory testing, and time allocated
Testing
Bugs being logged
Findings shared with key stakeholders and documented
3. User Acceptance Testing (Acceptance Testing)
What is it?
Acceptance testing verifies whether the new development meets the specified requirements and the Definition of Done (DoD).
We differentiate between User Acceptance Testing (UAT) and Acceptance Testing, though the terms are often used interchangeably. We recommend that UAT should include both functionality and usability testing. However, many organizations focus only on functionality at this stage, referring to it as Acceptance Testing. In these cases, a final round of testing with actual users may still be conducted before release, which they then call UAT. At LyonQA, we believe integrating user-centric testing throughout the Acceptance Testing phase allows for the earlier identification of critical usability issues. Addressing these issues during development is not only easier but also more cost-effective than waiting until after features are completed.
When to use it:
Throughout development: when a developer finishes a work ticket, the work is passed to QA for testing
When stakeholders need to sign off on the product
To ensure the software meets business requirements and user needs
It is important to note that User Acceptance Testing should be done continuously throughout the sprint, not reserved for when development is complete, right before the sprint ends. By integrating UAT throughout the sprint, all parties have a clearer picture of completeness and quality, instead of waiting until the last minute to know where things stand.
What goes into UAT:
Review of the development ticket, asking: what is missing, what is out of date, and where could misunderstanding arise?
Test script writing: writing user scenarios and test scripts for each acceptance criteria associated with the ticket, as well as related edge cases
Testing and reporting findings:
Passing or failing the ticket - including the reason why and documentation (screenshot/recording)
Logging related bugs
Giving sign off on DoD for the sprint
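The pass/fail verdict described above can be sketched as a per-criterion check for a single ticket. The criteria shown are hypothetical examples; in practice each entry would correspond to an acceptance criterion on the actual ticket.

```python
# Hypothetical sketch: a ticket passes UAT only when every acceptance
# criterion passes; failures become the documented reason for failing it.

def uat_verdict(criteria):
    """criteria: {criterion_description: passed?}. Returns verdict + reasons."""
    failed = [desc for desc, ok in criteria.items() if not ok]
    return {"result": "pass" if not failed else "fail", "failed": failed}
```

A failing verdict would then be attached to the ticket along with evidence (screenshots or recordings) and any related bugs logged.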
4. Smoke and Sanity Testing
What is it?
Smoke and sanity testing are often paired with more rigorous testing to quickly assess the stability of a build or to verify that a push to a higher environment or recent configuration changes have not caused any breakages.
When to use it:
After receiving a new build
Before proceeding with more rigorous testing
To quickly assess if a build is stable enough for further testing
After a push to prod, to verify key functionality (note: it is recommended that regression testing occur before this stage)
What goes into smoke and sanity testing:
Test planning
Testing
Reporting findings
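A smoke test is, at its core, a short list of fast checks that gate deeper testing. Here is a minimal sketch; the two checks are hypothetical stand-ins for real ones (the login page loads, an API health endpoint responds, and so on).

```python
# Hypothetical sketch: a handful of fast checks that gate deeper testing.

def check_login_page():
    return True  # stand-in for: page returns HTTP 200 and renders the form

def check_api_health():
    return True  # stand-in for: the health endpoint reports "ok"

SMOKE_CHECKS = [check_login_page, check_api_health]

def smoke_test():
    """Return (stable, failures); rigorous testing proceeds only if stable."""
    failures = [c.__name__ for c in SMOKE_CHECKS if not c()]
    return (len(failures) == 0, failures)
```

If any check fails, the build is bounced back before regression or UAT time is spent on it.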
5. Cross-Browser Testing
What is it?
Cross-browser testing ensures that web applications function correctly across different browsers and versions.
When to use it:
During the development of web applications
Before major releases
When supporting a wide range of browsers or updating browser compatibility
While cross-browser testing does uncover functional issues, the primary issues uncovered are user interface (UI) issues. In planning for cross-browser testing, it is important to review user data to identify which browsers are most used, tailoring testing to the top three.
Within cross-browser testing an organization may determine that version testing is also essential to verify stability and usability.
What goes into cross browser testing:
Test planning - often basing testing on regression, UAT, or exploratory testing
Testing
Reporting findings and tracking metrics on browsers that consistently show more or fewer issues, to further tailor testing
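The metric tracking mentioned above can be as simple as tallying logged bugs per browser to see which browsers consistently surface more issues. The bug records below are hypothetical examples.

```python
from collections import Counter

# Hypothetical sketch: count logged bugs by browser, most affected first,
# to guide where future cross-browser testing time goes.

bugs = [
    {"id": 101, "browser": "Safari"},
    {"id": 102, "browser": "Safari"},
    {"id": 103, "browser": "Chrome"},
]

def bugs_per_browser(bug_list):
    return Counter(b["browser"] for b in bug_list).most_common()
```

Combined with usage data, a tally like this helps decide which of the top browsers deserve deeper (e.g., version-level) coverage.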
6. Mobile Testing
What is it?
Mobile testing focuses on testing applications on various mobile devices, considering factors like screen sizes, operating systems, and hardware capabilities. It is important to note that all types of testing mentioned in this article can and should be done within the framework of mobile testing.
When to use it:
When developing mobile applications or web apps
To ensure consistent user experience across different devices
When updating apps for new mobile OS versions
As with cross-browser testing, it is important to review user data to identify which devices are most used and tailor testing to those. Both mobile and cross-browser testing can quickly become time-consuming if they call for extensive testing across devices and browsers.
If mobile testing is used for testing responsive websites, it is recommended that cross-browser testing be incorporated, ensuring functionality and usability across devices and browsers.
While mobile testing is important for both mobile applications and web apps, it is much more detailed for mobile applications.
Mobile app testing may include:
Notification testing
File upload/download
Sharing (through SMS, email, etc)
What goes into mobile testing:
Test planning - often basing testing on regression, UAT, or exploratory testing
Testing
Reporting findings and tracking metrics on devices that consistently show more or fewer issues, to further tailor testing
7. Accessibility Testing
What is it?
Accessibility testing ensures that products are accessible to all users, including those with disabilities. "Digital accessibility refers to the ability of people with disabilities/impairments to independently consume and/or interact with digital (e.g., web, mobile) applications and content." - GAAD
When to use it:
Throughout the development process
When aiming to comply with accessibility standards (e.g., WCAG)
To make the application inclusive for all users
Read more about accessibility testing in our blog post Embracing Digital Accessibility: A Call to Prioritize Inclusivity in Tech.
What goes into accessibility testing:
Test planning:
Identifying the specifics of the accessibility testing need (i.e., screen reader testing, full compliance with WCAG standards, etc.)
Basing testing on functional testing as outlined in either regression scripts, UAT scripts, or exploratory testing
Testing
Reporting findings and tracking metrics on areas that consistently show more or fewer issues, to further tailor testing
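Some narrow slices of accessibility testing can be automated. As an illustration, here is a tiny spot check for images missing alt text, using only the Python standard library. This covers one small WCAG concern; real accessibility testing still requires manual screen-reader and keyboard testing, and note that this simple check also flags empty `alt=""` attributes, which decorative images may legitimately use.

```python
from html.parser import HTMLParser

# Hypothetical sketch: count <img> tags with missing or empty alt text.

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = 0

    def handle_starttag(self, tag, attrs):
        # Flags missing *and* empty alt attributes; decorative images
        # with intentional alt="" would need to be excluded manually.
        if tag == "img" and not dict(attrs).get("alt"):
            self.missing += 1

def count_missing_alt(html):
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing
```

Checks like this belong alongside, not instead of, the manual testing outlined above.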
8. Ad Hoc Testing
What is it?
Ad hoc testing is unplanned, informal, and relies heavily on the tester's intuition and experience. It may be paired with other testing, allowing the tester to go "outside the bounds" of the testing they are engaged in. For example, while tracking down a bug related to a UAT ticket, a tester may uncover more issues in that area.
When to use it:
To quickly test a specific feature or function
When formal test cases might miss edge cases
As a complement to more structured testing methods
What goes into ad hoc testing:
Testing
Being aware of the scope for the ad hoc testing
Reporting findings
Note: in ad hoc testing it is especially important to keep scope in mind. It is easy to go down rabbit holes, tracking down issue after issue, and end up in a low-risk, low-usage area of the product that does not warrant the testing time unless previously approved.
Functional vs. Non-Functional Testing
It is important to understand that all of the above testing types can be functional, non-functional, or a combination of the two.
Functional Testing
Functional testing focuses on verifying that each function of the software works according to the specification. It's about answering the question, does the software do what it's supposed to do?
Non-Functional Testing
Non-functional testing examines the aspects of the software that aren't related to specific behaviors or functions, but rather to operational aspects. It answers questions like "How well does the software do what it does?"
It is helpful to see how these two types of testing differ and complement each other:
| Functional Testing | Non-Functional Testing |
| --- | --- |
| When a new user is created, the new user data displays on the user table | The user table displays the new user data without needing a page refresh |
| When a file is uploaded, the loader accepts the file | When large files are uploaded that require processing time, a loading bar clearly shows processing status |
| When a submission has all required data input, a save button becomes active | The save button matches the appearance of all other save buttons |
In creating test plans for anything from regression testing to exploratory testing, it is crucial that the intent behind the testing is understood by all, clearly defining whether testing is to be functional, non-functional, or both.
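The contrast can be sketched in code: a functional check of a hypothetical save operation asks whether it works, while a non-functional check of the same operation asks how well (here, how fast) it works. Everything below, including the time budget, is an illustrative assumption.

```python
import time

def save_submission(data):
    """Hypothetical save: succeeds only when all required fields are present."""
    required = {"name", "email"}
    return required.issubset(data)

def functional_check():
    # Functional: does the software do what it's supposed to do?
    return (save_submission({"name": "Ada", "email": "ada@example.com"})
            and not save_submission({"name": "Ada"}))

def non_functional_check(max_seconds=0.1):
    # Non-functional: how well does it do it? Here, within a time budget.
    start = time.perf_counter()
    save_submission({"name": "Ada", "email": "ada@example.com"})
    return (time.perf_counter() - start) < max_seconds
```

Both checks exercise the same feature; only the question being asked differs, which is exactly the distinction the table above draws.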
The Strategic Deployment of Manual Testing
Manual testing requires more than just the testing itself; it demands a deep understanding of the types of testing and when best to incorporate each type. To determine when to use each type of testing, it's key to understand your project's unique needs:
Project Phase: Early in development? Exploratory and smoke testing can provide quick feedback. Nearing release? User acceptance and regression testing ensure a polished, stable product.
Available Resources: Limited time or personnel? UAT may be the best option. Have a dedicated QA team? Comprehensive UAT and regression testing should be implemented.
Application Type: Developing a web app? Cross-browser testing becomes crucial. Mobile app? Mobile testing takes center stage.
User Base: Targeting a wide range of users? Accessibility testing becomes vital to ensure inclusivity.
Conclusion: A Comprehensive Approach
To provide truly effective quality assurance, it's essential to understand project needs and integrate types of manual testing on a tailored basis. Testing strategies for a mobile photo-sharing app differ greatly from those for a web-based fintech platform. By strategically deploying various manual testing types, we don't just catch bugs—we enhance the user experience and drive better overall performance.