
Exploring Manual Testing: Understanding Testing Types and the Human Element Within




What is Manual Software Testing and what does it encompass?


Manual Software Testing is an integral aspect of the quality assurance field, in that it brings empathy and critical thinking to software testing. This is where we get to serve as the voice of the user, testing beyond the standard question, “Does it function like it’s supposed to?” to, “Does it function in a way that is intuitive to the user?”


A number of articles will tell you, “Manual Testing is a type of software testing in which test cases are executed manually by a tester without using any automated tools.” Super helpful. That tells us nothing more than manual software testing does not use automated tools. We’re pretty sure that you were able to suss that out already.


To expand upon that insightful definition, we’d say “Manual software testing is a type of software testing and quality assurance that requires critical thinking and empathy, to:

  • Not only manually execute test cases, but do so while constantly evaluating the test cases themselves as well as the context in which each test case exists.

  • Question what’s missing, what’s out of date, or what’s obsolete.

  • Make sure the functionality and design align with the rest of the application.

  • Test wonky user flows for the users who might get a little lost.”


Manual software testing can be seen throughout the Software Development Lifecycle, from initial shaping and test plan prep to post-release smoke and sanity testing.


Read on to learn about steps & types of manual software testing!


Shaping:

In an ideal world, Manual Software Testing begins before our testers even interact with the code. It begins in shaping meetings for a new feature, where we can bring that critical thinking and empathy to the table and pair it with the developer’s engineering brain and Product’s creative, analytical brain. While acting as a communication bridge between the two parties, we can collectively identify what’s missing, the weird ways a user will use the feature, the implications of the new feature on the rest of the application, whether it will meet Accessibility standards, and whether we’re following design standards so the user has a seamless UI experience.


Test Case Writing:

After shaping, our team will go heads down reviewing the documentation of the new feature to write test cases for each user scenario. These test cases not only provide a guide for testing when we do get into the code but also clearly lay out to all parties what we are evaluating the new development against. We want Product to know what we are testing to make sure we aren’t missing something and to confirm we are reading their requirements correctly. We want Development to know what we are testing to make sure we are in alignment with the requirements for development and testing. Bonus points if Sales and key stakeholders know the detail to which their product is being tested, ensuring confidence in all parties.
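To make that concrete, here’s a minimal sketch of how one of these test cases might be structured. The feature, steps, and expected result are hypothetical, and your team’s template will almost certainly look different; the point is that each case ties back to a requirement and states what “done” looks like.

```python
from dataclasses import dataclass, field


@dataclass
class TestCase:
    """A single manual test case tied back to a requirement."""
    case_id: str
    requirement: str                      # which requirement or user story this verifies
    preconditions: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)
    expected_result: str = ""


# Hypothetical example for an imagined "save draft" feature.
save_draft_happy_path = TestCase(
    case_id="TC-101",
    requirement="US-42: Users can save an unfinished form as a draft",
    preconditions=["User is logged in", "Form page is open"],
    steps=[
        "Fill in the first two fields of the form",
        "Click 'Save draft'",
        "Log out, log back in, and reopen the form",
    ],
    expected_result="The two fields are pre-populated with the saved draft values",
)
```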


Acceptance Testing:

Acceptance Testing is known by many names, some of which are User Acceptance Testing, Application Testing, and End User Testing. Acceptance Testing is where we get to dive into the new development. Ideally, new work will be passed over to QA to be reviewed in small chunks.


There’s a lot out there in the Agile world about the importance of work being segmented into small and manageable pieces. We’re totally in support. Not only does it ensure intentional focus on each small aspect of the work, it also helps reduce confusion/irritation/friction between QA and Dev so things don’t get lost in a mass of requirements.


When Acceptance Testing, our team works through the test cases we have already written, evaluating each one against what we are seeing in the QA environment. Again, what did we miss? What’s out of date? What functions differently than we expected it to? Here we add to our test cases, ensuring all of what we are testing is clearly documented. As we test against our acceptance criteria/test cases, we’ll also be keeping an eye out for User Interface issues and Accessibility issues. Depending on the application, we may also be testing on multiple browsers and/or devices to verify the responsive nature of the product. For example, making sure there aren’t funky differences between Safari and Chrome. (← Which there always seem to be.)
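
As a rough illustration of how those per-browser results might be tracked, here’s a small sketch; the case IDs and browser list are made up, and in practice this is often just columns in a spreadsheet or test management tool.

```python
# Hypothetical record of acceptance-test results per browser.
results = {
    "TC-101": {"Chrome": "pass", "Safari": "fail", "Firefox": "pass"},
    "TC-102": {"Chrome": "pass", "Safari": "pass", "Firefox": "pass"},
}

# Flag any case that behaves differently between browsers.
for case_id, by_browser in results.items():
    if len(set(by_browser.values())) > 1:
        print(f"{case_id}: browser-specific behavior -> {by_browser}")
```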


Design Verification:

Design Verification is a very detail-oriented type of testing. In this testing, we drill down to exact design specifications, from how the hero image displays to verifying the correct font, pixel size, and color. The Design team focuses on understanding users’ needs, preferences, and behaviors to create user-centric designs that enhance the user experience. Design teams prioritize usability and accessibility to ensure the software meets the needs and expectations of the intended user. When we are testing, we use a source of truth provided by the Design team, like Figma, to guide us through each element on the page. Design verification is a really important part of software testing, ensuring the User Interface is visually appealing, cohesive with the rest of the application, and supportive of the brand identity.
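
One way to think about design verification is as a property-by-property comparison against that source of truth. The sketch below is purely illustrative; the property names and values are invented, not pulled from any real Figma file.

```python
# Hypothetical design spec (what the design file says) vs. what we observe on the page.
design_spec = {"font-family": "Inter", "font-size": "16px", "color": "#1A1A2E"}
observed = {"font-family": "Inter", "font-size": "15px", "color": "#1A1A2E"}

for prop, expected in design_spec.items():
    actual = observed.get(prop)
    status = "OK" if actual == expected else f"MISMATCH (got {actual})"
    print(f"{prop}: expected {expected} -> {status}")
```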


Accessibility Testing:

Accessibility Testing is a wildly important type of testing, and it is often overlooked during development. Ideally, Accessibility is at the forefront during the initial shaping meetings, making sure the experience is seamless for all user types. At its core, Accessibility is about fostering adaptability to accommodate diverse user needs. When we say Accessibility Testing, most often people think this only pertains to developing the site to be navigated by someone with a physical disability, someone who needs to use a screen reader for example. While that is a part of it, web accessibility also benefits people without disabilities by making the site easier to interact with. Some examples of what we test for: captions on videos, for the user who learns better by reading than by listening; color contrast of elements within the page, which may be restrictive for a user who is in direct sunlight; and responsive design, to imitate the user who reduces screen size to limit the amount of content they are seeing at once. One of our favorites is navigating through a site using only your keyboard. This scratches the surface of what is involved in Accessibility Testing, but it demonstrates how crucial it is for businesses to implement for user experience. There are also legal requirements for accessibility, but that’ll be another post.
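
Color contrast is one of the more mechanical of these checks, because WCAG defines the contrast-ratio calculation precisely. Here’s a small Python sketch of that formula; the hex colors are just example values.

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance for an sRGB hex color like '#1A73E8'."""
    hex_color = hex_color.lstrip("#")
    channels = [int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    # Linearize each sRGB channel per the WCAG definition.
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
              for c in channels]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(foreground: str, background: str) -> float:
    """Contrast ratio between two colors, from 1:1 (identical) up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)


# Example: mid-gray text on a white background.
ratio = contrast_ratio("#767676", "#FFFFFF")
print(f"{ratio:.2f}:1 -> {'passes' if ratio >= 4.5 else 'fails'} WCAG AA for normal text")
```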


Regression Testing:

Regression Testing is key in making sure that new functionality/new code didn’t break existing code. Our work in Regression Testing is to verify that no regression has occurred when new code is introduced. Regression Testing can be a small task, verifying one feature, or a large task testing the entire application.


Ideally, a high-priority regression is run at the end of each sprint. This high-priority regression covers features and functions that are of the highest value in terms of user experience and the highest risk of regression with the new code. A high-priority regression ensures that the primary user flows have not regressed and that the new development is good to release to the user.

While Acceptance Testing and high-priority regression testing may be occurring regularly, this does not mean eyes are on every aspect of the application. Some areas of the application may have minimal value or are only used by the user once or twice. With a quarterly full regression, we’ll spend time testing all areas of the application, making sure that although they may be little used, they are still working as intended.
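
A simple way to picture the split between the sprint-end high-priority regression and the quarterly full regression is tagging cases by priority and filtering. The case names and tags below are made up; they’re just a sketch of the idea.

```python
# Hypothetical regression suite tagged by priority.
regression_suite = [
    {"id": "TC-001", "name": "Login with valid credentials", "priority": "high"},
    {"id": "TC-014", "name": "Checkout with saved card", "priority": "high"},
    {"id": "TC-202", "name": "Export report as CSV", "priority": "low"},
    {"id": "TC-305", "name": "Update notification settings", "priority": "low"},
]

# End of sprint: run only the high-priority cases.
sprint_regression = [case for case in regression_suite if case["priority"] == "high"]

# Quarterly: run everything, including the little-used corners.
full_regression = regression_suite

print(f"Sprint regression: {len(sprint_regression)} cases; full regression: {len(full_regression)} cases")
```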


Smoke and Sanity Testing:

Remember that regression that we just ran to cover the high-priority areas of the application? This regression will have taken place in an environment that is not user-facing, ideally a Staging environment. This environment should be a replica of the Production (Prod) environment users are in, and as such any testing in Staging should reflect testing in Prod. While these two environments are as much of a match as possible, there may be settings or configurations in the Prod environment that were not replicated in Staging. Because of that, it may be good to run a “Smoke and Sanity” test in Prod to ensure there are no issues seen in Prod as a result of pushing the new code. These tests often take no more than an hour and are a quick run-through to verify functionality.
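
As a rough idea of the shape of that quick post-release run-through, here’s a hedged sketch that pings a handful of key pages and reports anything that doesn’t respond. The URLs are placeholders, and in practice much of a smoke and sanity pass is a human clicking through the critical flows rather than a script.

```python
import requests  # third-party library: pip install requests

# Placeholder URLs for the critical, user-facing pages to spot-check after a release.
smoke_pages = [
    "https://example.com/",
    "https://example.com/login",
    "https://example.com/pricing",
]

for url in smoke_pages:
    try:
        response = requests.get(url, timeout=10)
        status = "OK" if response.ok else f"HTTP {response.status_code}"
    except requests.RequestException as exc:
        status = f"ERROR: {exc}"
    print(f"{url} -> {status}")
```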


Exploratory Testing:

Exploratory Testing can take place at any stage in the Software Development Lifecycle. This testing more closely mimics the experience of a user. Without the constraints of a test case, the tester is intuitively feeling out how the application functions. Exploratory testing relies on the empathy of the tester and their ability to truly put themselves in the user’s shoes. This testing is intended to uncover bugs not yet found in other testing. Questions are asked along the lines of, “What happens if I progress to page three of sign up, go back to page one to fix something, refresh my screen on page one, and navigate back to page three? Is data retained as it should be?” These are scenarios in which we lean into boundary testing with unexpected, or as we call them, wonky user experiences.


In essence, manual software testing is the soul of quality assurance, intertwining critical thinking, empathy, and the human touch throughout the software development lifecycle. From shaping meetings to exploratory testing, it serves as the bridge between developers, designers, and end-users, ensuring not just functionality, but intuitiveness and seamless user experiences. Manual software testing is more than just executing test cases; it's about questioning, adapting, and empathizing with users to uncover potential issues and improve overall software quality.
