Manual Testing Interview Questions for Experienced

Manual Testing is like being a quality inspector for software. A tester, acting as a user, interacts with the software to find flaws. They follow a plan, but also improvise to explore different scenarios. They also compare how the software behaves to how it’s supposed to work, and report any issues they find. This helps ensure the software is functional, user-friendly, and ready for the real world.

Below, we’ve crafted scenario-focused interview questions and answers tailored for experienced candidates in the field of Manual Testing.

Question: Can you briefly describe your experience in Manual Testing?

Answer: In my [X] years of experience in manual testing, I have had the opportunity to work on a wide range of projects and applications. I’ve been involved in various phases of the software development life cycle, from requirement analysis to test planning, test case design, execution, defect tracking, and reporting.

My experience spans across different domains, including finance, healthcare, and e-commerce, which has given me a broad perspective on various testing challenges and strategies.

I’ve also worked with diverse testing methodologies, such as black-box, white-box, and gray-box testing, and have been actively involved in creating and executing test cases, ensuring the software’s quality and reliability.

Question: What exactly do you know about manual testing, and what sets it apart from automated testing?

Answer: Manual Testing is the process of evaluating software manually, without the aid of automated tools. Testers execute test cases, record results, and identify defects by interacting with the software.

Automated Testing involves using scripts and software tools to execute test cases. It’s faster for repetitive tasks and ideal for regression testing, but it requires script development and isn’t suitable for exploratory testing.

Question: What are the roles and responsibilities of a manual tester?

Answer: The roles and responsibilities of a manual tester include:

Test Planning: Creating test plans, defining test objectives, and estimating resources.

Test Case Design: Developing test cases, test scripts, and test data.

Test Execution: Running test cases, recording results, and verifying software functionality.

Defect Reporting: Identifying and documenting defects, including their severity and impact.

Regression Testing: Ensuring existing functionality remains intact after code changes.

Test Documentation: Maintaining test documentation for future reference.

Test Environment Setup: Configuring test environments to simulate production conditions.

Test Data Management: Creating and managing test data required for testing.

Quality Assurance: Ensuring the software meets quality and performance standards.

Collaboration: Working closely with developers, product managers, and stakeholders to deliver a high-quality product.

Question: What are the different levels of Manual Testing?

Answer: Manual testing can be categorized into several levels:

  1. Unit Testing: At this level, individual components or modules of the software are tested to verify their correctness and functionality.
  2. Integration Testing: In this phase, the interactions between different modules are tested to ensure that they work together as intended.
  3. System Testing: The entire system is tested as a whole to evaluate its compliance with specified requirements.
  4. Acceptance Testing: This level involves user acceptance testing (UAT), where the software is tested by end-users to confirm that it meets their expectations and requirements.
  5. Regression Testing: After any code changes or updates, regression testing ensures that existing functionalities haven’t been negatively affected.
  6. Exploratory Testing: This is more informal and unscripted testing, where testers explore the application to identify unexpected issues or defects.
  7. Ad-Hoc Testing: Similar to exploratory testing, but more focused on specific aspects of the application, typically without predefined test cases.

Question: How do you write a test case in Manual Testing? Give an example.

Answer: Writing effective test cases is crucial to a successful testing process. A test case typically includes the following components:

Test Case ID: A unique identifier for the test case.

Test Case Title: A descriptive title that summarizes the purpose of the test case.

Pre-conditions: Any conditions that must be met before executing the test case.

Test Steps: A step-by-step description of the actions to be taken during the test.

Expected Result: The expected outcome or behavior after performing the test steps.

Actual Result: The actual outcome observed during testing (filled in during execution).

Status: Pass, Fail, or Incomplete, indicating the result of the test.

Here’s an example of a test case for a login page:

Test Case ID: TC001

Test Case Title: Verify User Login

Preconditions: User must have a valid username and password.

Test Steps:

Open the application.

Navigate to the login page.

Enter a valid username.

Enter a valid password.

Click the “Login” button.

Expected Result: The system should grant access to the user and redirect them to the home page.

Actual Result: [To be filled in during execution]

Status: [To be filled in during execution]

This is a simple example of a test case. In practice, test cases can become more complex, covering different scenarios and edge cases, but the structure remains the same to ensure consistency and clarity in testing efforts.

Question: What is a test plan, test scenario, test data, test script, test closure, and test harness?

Answer: Here are the explanations for each term:

Test Plan: A document that outlines the scope, approach, resources, and schedule for the testing activities. It defines the test strategy, objectives, and deliverables, serving as a guide for the testing process.

Test Scenario: A high-level description of a functionality or situation to be tested, used to validate a particular aspect of the application under test. Test scenarios help in determining the expected behavior of the system under various conditions.

Test Data: The data used to test the software system. It includes both valid and invalid input data and is essential for verifying the correctness, completeness, and security of the application.

Test Script: A set of instructions, written in a programming or scripting language, used to automate the execution of test cases. Test scripts are created for automated testing and help simulate user interactions with the application (a short example follows these definitions).

Test Closure: The final phase of the testing process that involves evaluating the testing cycle against predefined objectives. Test closure includes activities such as creating a test summary report, gathering test metrics, documenting lessons learned, and ensuring that all testing-related tasks are completed.

Test Harness: A collection of software and test data configured to test a program unit by running it under varying conditions. It provides the necessary test environment to automate and run the test cases and collect the results for further analysis.
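
To make the Test Script definition above concrete, here is a minimal, hypothetical sketch of an automated login script using pytest and Selenium; the URL, element IDs, and credentials are assumptions for illustration, not taken from any real application.

```python
# Hypothetical test script for a login check (URL, element IDs, and
# credentials are illustrative assumptions).
# Requires: pip install pytest selenium (plus a local ChromeDriver)
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()


def test_valid_login_redirects_to_home(browser):
    browser.get("https://example.com/login")  # hypothetical login page
    browser.find_element(By.ID, "username").send_keys("valid_user")
    browser.find_element(By.ID, "password").send_keys("valid_password")
    browser.find_element(By.ID, "login-button").click()
    # Expected result: the user is redirected to the home page
    assert "/home" in browser.current_url
```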

Question: How would you define the concept of the average age of a defect in software testing?

Answer: The average age of a defect in software testing refers to the amount of time a defect exists in the software, typically measured from the moment it is introduced until it is discovered and resolved. It is a key metric for assessing the efficiency of the testing process and the overall software development cycle.
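
As a rough illustration of how this metric can be computed, the sketch below takes the mean of (resolved date minus introduced date) across a handful of defects; the dates are made up.

```python
# Illustrative calculation of average defect age (dates are made up).
from datetime import date

defects = [
    {"introduced": date(2024, 1, 5), "resolved": date(2024, 1, 20)},
    {"introduced": date(2024, 2, 1), "resolved": date(2024, 2, 11)},
    {"introduced": date(2024, 2, 15), "resolved": date(2024, 3, 1)},
]

ages_in_days = [(d["resolved"] - d["introduced"]).days for d in defects]
average_age = sum(ages_in_days) / len(ages_in_days)
print(f"Average defect age: {average_age:.1f} days")  # (15 + 10 + 15) / 3 = 13.3 days
```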

Question: Can you define Smoke Testing, Soak Testing, Sanity Testing, Test Coverage, Testbed, Bug Triage, Unit Testing, and Error?

Answer:

Smoke Testing: It is an initial, basic-level testing phase where the software build is quickly checked to verify that it is stable enough for more comprehensive testing. If the smoke test fails, it indicates critical issues, and further testing is postponed until those are fixed.

Soak Testing: It is a type of performance testing where the application is subjected to a continuous load for an extended period to evaluate its stability and performance over time. The goal is to identify memory leaks or performance degradation that may occur with prolonged use.

Sanity Testing: Sanity testing is a subset of regression testing that focuses on specific areas or functionalities of an application. It’s a quick check to ensure that recent changes or bug fixes haven’t adversely affected the basic functionality of the software.

Test Coverage: Test coverage measures the extent to which the code or functionality of the software has been exercised by testing, usually expressed as a percentage. It helps determine the effectiveness and thoroughness of the testing process (a small calculation sketch appears after these definitions).

Testbed: A testbed is an environment set up for testing, containing hardware, software, configurations, and data required to perform tests on an application or system.

Bug Triage: Bug triage is the process of evaluating, prioritizing, and categorizing reported defects (bugs) based on their severity, impact, and other factors. It also helps determine which issues should be fixed first.

Unit Testing: Unit testing is the practice of testing individual components or units of code in isolation. The purpose is to verify that each unit functions correctly as designed.

Error (in Manual Testing): An error is a human mistake made during development or testing. Errors can introduce defects (bugs) into the software, and when defective code is executed it may cause a failure, where the actual result of a test case does not match the expected result.
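
Returning to Test Coverage above: since it is usually reported as a percentage, here is a minimal calculation sketch; the requirement counts are made up.

```python
# Illustrative test-coverage calculation (requirement counts are made up).
total_requirements = 120    # requirements identified for the release
covered_requirements = 102  # requirements exercised by at least one test case

coverage_percent = covered_requirements / total_requirements * 100
print(f"Requirement coverage: {coverage_percent:.1f}%")  # 85.0%
```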

Question: On average, how many test cases can you write in a day?

Answer: On average, I can write approximately 5 to 10 test cases per day. The number can vary depending on factors like complexity, documentation availability, and the need for extensive data setup. Quality and thoroughness of test cases are always my priority over quantity.

Question: How do you know when to stop the testing process?

Answer: Knowing when to stop manual testing is crucial to balancing thorough testing against project timelines. You should consider stopping when:

  • All test cases have been executed, and their pass/fail status is documented.
  • The exit criteria defined in the test plan have been met.
  • A predetermined level of test coverage is achieved.
  • The cost and time for testing outweigh the potential benefits of finding additional defects.
  • The software meets the defined quality criteria and is ready for release.

Question: Explain the difference between Test-Driven Development (TDD) and Validation-Driven Development (VDD).

Answer: TDD is a development approach where tests are written before writing code, focusing on design and functionality. VDD, on the other hand, emphasizes validation or testing of the product after development, ensuring it meets user expectations.

TDD aims to guide the development process, while VDD aims to validate the final product against requirements.
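
A minimal TDD-style sketch in Python is shown below; the function and its behaviour are invented for illustration. In TDD the test would be written first (and fail), and only then would the production code beneath it be added to make it pass.

```python
# TDD sketch: the test class is written first and fails until the minimal
# production code (apply_discount) is implemented. Names are illustrative.
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Production code added after the test, just enough to make it pass."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)


if __name__ == "__main__":
    unittest.main()
```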

Question: What is the difference between a Test Driver and a Test Stub in Testing?

Answer: Difference between a Test Driver and a Test Stub:

Test Driver: A Test Driver is a temporary component or module that calls and exercises the module under test, standing in for a higher-level component that has not yet been implemented. It is typically used in bottom-up integration testing and is discarded once the real calling component is available.

Test Stub: A Test Stub is a minimal implementation of a lower-level component that the module under test depends on. It does not contain the full functionality of the actual component and returns predefined or hard-coded values, allowing the dependent module to be tested before the real component exists. Stubs are typically used in top-down integration testing.
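
The small sketch below shows both ideas in Python; the module names and values are invented. The stub stands in for a payment gateway that has not been implemented yet and returns a hard-coded response, while the driver is temporary code that calls the unit under test and checks the outcome.

```python
# Sketch of a test driver and a test stub (names and values are illustrative).

def payment_gateway_stub(amount: float) -> dict:
    """Stub: replaces the real, not-yet-available payment gateway and
    returns a hard-coded, predictable response."""
    return {"status": "approved", "amount": amount, "transaction_id": "TEST-0001"}


def checkout(amount: float, gateway) -> str:
    """Unit under test: depends on a lower-level payment gateway component."""
    response = gateway(amount)
    return "Order confirmed" if response["status"] == "approved" else "Payment failed"


def run_driver() -> None:
    """Driver: temporary code that calls the unit under test and verifies the result."""
    result = checkout(49.99, gateway=payment_gateway_stub)
    assert result == "Order confirmed", f"Unexpected result: {result}"
    print("Checkout passed with the stubbed gateway")


if __name__ == "__main__":
    run_driver()
```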

Question: What is the most interesting bug you have found in your testing?

Answer: In a previous project, one of the most interesting bugs I encountered was in a financial application. In a specific scenario, the system allowed users to make duplicate payments for the same invoice, resulting in overpayment.

The bug seemed harmless at first but had significant financial implications. It highlighted the importance of rigorous testing, even for seemingly minor features, in a financial context.

Question: How do you approach testing a product when the requirements are not yet finalized?

Answer: When requirements are not frozen, it’s important to adopt an agile and flexible testing approach. Begin by testing based on the available or evolving requirements and continuously adapt your test cases as requirements change.

Frequent communication with developers and stakeholders is key to ensure testing aligns with the evolving project scope.

Question: What is the distinction between validation and verification in software testing?

Answer: Verification is the process of evaluating whether the software meets the specified requirements. It ensures that the software has been built correctly. Validation, on the other hand, checks if the software fulfils the user’s needs and expectations, confirming that it’s fit for its intended purpose.

Question: What are some tools commonly used by manual testers?

Answer: Manual testers often use a variety of tools to assist in their testing efforts. Some common tools include:

  • TestRail for test case management
  • JIRA for issue tracking
  • Excel for test data management
  • Browser developer tools for web application testing
  • Snagit for capturing screenshots and documenting defects

Question: Which software is known for reducing the number of bugs during testing?

Answer: No single piece of software guarantees fewer bugs; bug reduction depends largely on the effectiveness of the overall testing process. However, automated testing frameworks (e.g., Selenium, JUnit), static analysis tools (e.g., SonarQube), and code review tools (e.g., Review Board) can all help identify and reduce bugs during testing.

Question: What are the possible Test Cases for Login Attempts?

Answer:

Valid Login: Enter correct username and password. The system should grant access.

Invalid Username: Use a valid password with an invalid username. The system should display an error message.

Invalid Password: Use a valid username with an incorrect password. The system should display an error message.

Empty Fields: Attempt login with both fields empty. The system should prompt for both username and password.

Caps Lock: Ensure the system handles caps lock being on during login.

Multiple Failed Attempts: Simulate multiple consecutive failed login attempts. The system should lock the account after a predefined number of failures.

Password Recovery: Test the “Forgot Password” feature and verify that it allows password recovery.

Account Locking: After multiple failed attempts, verify that the account is locked and can be unlocked through a proper process.
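
If some of these checks were later scripted, they could be written as a single data-driven test. The sketch below uses pytest with invented credentials and a hypothetical attempt_login helper standing in for the real application call.

```python
# Data-driven sketch of a few login checks. The credentials and the
# attempt_login helper are hypothetical and only illustrate the idea.
import pytest


def attempt_login(username: str, password: str) -> str:
    """Hypothetical helper standing in for the real application call."""
    if not username or not password:
        return "Please enter username and password"
    if username == "valid_user" and password == "valid_pass":
        return "Login successful"
    return "Invalid credentials"


@pytest.mark.parametrize("username, password, expected", [
    ("valid_user", "valid_pass", "Login successful"),        # valid login
    ("unknown_user", "valid_pass", "Invalid credentials"),   # invalid username
    ("valid_user", "wrong_pass", "Invalid credentials"),     # invalid password
    ("", "", "Please enter username and password"),          # empty fields
])
def test_login_attempts(username, password, expected):
    assert attempt_login(username, password) == expected
```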

Question: How do you debug an error in the login page?

Answer: To debug an error in the login page, I follow these steps:

Recreate the Error: First, replicate the issue by trying to log in using the same conditions that triggered the error.

Check Logs: Examine error logs or messages to gain insights into the issue’s root cause.

Inspect Code: Review the relevant code sections, especially those handling authentication, to identify coding errors or logic flaws.

Input Validation: Verify that input validation for usernames and passwords is functioning correctly.

Browser Compatibility: Ensure the error is not browser-specific by testing on different browsers.

Network and Server Issues: Verify network connectivity and server response times.

Database Queries: Review database queries related to user authentication for potential errors.

Testing Environment: Ensure the error is reproducible in a controlled testing environment.

Question: Can We Do Automation Testing Without Manual Testing?

Answer: Automation testing complements manual testing but does not replace it entirely. Manual testing is essential for exploratory testing, usability testing, and identifying issues that automation may overlook. While it is possible to automate test cases without prior manual testing, manual testing provides an understanding of the application’s behavior and can help prioritize what to automate.

A combination of both manual and automated testing is typically the most effective approach for comprehensive software quality assurance.

Question: What are the essential components to include when writing a bug report as a manual tester?

Answer:

A comprehensive bug report should include:

Title/Summary: A brief, descriptive title.

Description: A detailed explanation of the issue, including steps to reproduce.

Environment: Specify the test environment, including OS, browser, and device.

Severity/Priority: Assess the impact and importance of the bug.

Expected vs. Actual Results: Describe what was expected and what actually occurred.

Attachments: Include screenshots, logs, or additional files to aid in debugging.

Reproducibility: Note whether the issue is consistently reproducible or intermittent.

Test Data: Mention the input data used during testing.

Version Information: Specify the software version being tested.

Assigned To: Indicate the developer or team responsible for addressing the issue.

A well-documented bug report helps developers understand and resolve issues efficiently.

Question: When is it appropriate to automate test cases, and how do you decide which test cases to automate?

Answer:

I decide to automate test cases when:

  • Test cases are repetitive and prone to human error.
  • Test cases need to be executed across multiple configurations or environments.
  • There’s a need for frequent regression testing.
  • Test cases involve large data sets.
  • Test cases can be automated with a reasonable return on investment (ROI).

Choosing which test cases to automate involves assessing the potential benefits in terms of time saved, accuracy, and repeatability while considering the effort and cost of automation.

Question: In a scenario where a user inputs an invalid email address, explain the process you would use to test how the application handles and reports errors.

Answer: Testing Invalid Email Address Input

When testing how the application handles invalid email addresses, I follow these steps:

Test Planning: First, I review the requirements and design documents to understand the expected behavior when an invalid email is entered.

Test Case Design: I create test cases that cover various scenarios of invalid email inputs, including missing ‘@’, incorrect domain format, and special character misuse.

Test Data Preparation: I gather test data, including invalid email addresses, and ensure I have a variety of examples to cover potential cases.

Execution: I input the test data into the application as per the test cases, paying attention to the system’s response.

Verification: I verify that the application correctly identifies and reports errors related to invalid email addresses. This includes checking for error messages and ensuring the user isn’t allowed to proceed with invalid data.

Logging Defects: If I find any discrepancies or the application doesn’t handle invalid emails as expected, I log defects with detailed information and steps to reproduce.

Retesting: After developers fix the issues, I retest the scenarios to ensure the problems have been resolved.
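
As a complement to the manual steps above, the invalid-address cases could also be captured in a small data-driven check. The sketch below uses pytest; is_valid_email is a simplified stand-in for the application's real validation, not its actual implementation.

```python
# Data-driven sketch for invalid email input. is_valid_email is a
# simplified stand-in, not the application's real validator.
import re

import pytest

EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")


def is_valid_email(address: str) -> bool:
    return bool(EMAIL_PATTERN.match(address))


@pytest.mark.parametrize("address", [
    "user.example.com",       # missing '@'
    "user@example",           # incomplete domain
    "user name@example.com",  # space / special character misuse
    "@example.com",           # missing local part
])
def test_invalid_emails_are_rejected(address):
    assert not is_valid_email(address), f"{address} should be rejected"
```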

Question:  How would you handle the management and creation of test data for your test cases?

Answer: I manage and create test data by identifying data requirements for each test case, using a combination of manually generated and pre-existing data. I ensure data privacy and anonymization when needed.

Data is organized, version-controlled, and stored separately, making it readily available for testing and easy to update as needed for various test scenarios.
