Some interview questions for Manual Testers (Software Testing) in 2023

Anh Tester shares with you some interview questions for Manual Testers (Software Testing) in 2023.

💥 Interview questions for Manual Testers in 2023

 

1.  What is Software Testing?

Answer: Software testing is a process of evaluating a software application to identify defects and ensure that it meets specified requirements and functions as expected.

 

2.  What are the objectives of Software Testing?

Answer: The objectives of software testing include finding defects, ensuring software functionality, verifying requirements, enhancing software quality, and providing confidence in the software.

 

3.  What are the different levels of testing?

Answer: There are various levels of testing, including:

  • Unit Testing
  • Integration Testing
  • System Testing
  • User Acceptance Testing (UAT)

 

4.  What is the purpose of Regression Testing?

Answer: Regression testing verifies that recent code changes do not negatively impact existing functionality, ensuring that new features or fixes do not introduce new defects.

 

5.  Explain the term "Test Case"

Answer: A test case is a set of conditions, steps, and inputs that a tester executes to verify specific functionality in a software application. It includes expected results and preconditions.
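For illustration, the same information can also be captured as an automated check. The sketch below is a minimal pytest example; the `login` function, its credentials, and the test case ID are all hypothetical, not part of any real system.

```python
# A minimal sketch of a test case, assuming a hypothetical login(username, password)
# function that returns True on success. Names and values are illustrative only.

def login(username, password):
    # Stand-in for the system under test.
    return username == "demo_user" and password == "S3cret!"

def test_login_with_valid_credentials():
    # Test Case ID : TC_LOGIN_001
    # Precondition : user "demo_user" exists and is active
    # Steps        : 1) open the login page  2) enter credentials  3) submit
    # Test data    : username="demo_user", password="S3cret!"
    actual = login("demo_user", "S3cret!")
    # Expected result: login succeeds
    assert actual is True
```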

 

6.  What is the Test Plan, and why is it important?

Answer: A test plan is a document that outlines the approach, scope, objectives, resources, and schedule for testing. It's crucial for providing a roadmap for testing activities and ensuring alignment with project goals.

 

7.  What is a Test Scenario?

Answer: A test scenario is a high-level description of a testing situation that may consist of multiple related test cases. It outlines a specific testing condition.

 

8.  How do you prioritize Test Cases?

Answer: Test cases can be prioritized based on factors such as business impact, critical functionality, and risk assessment. High-priority test cases should be tested first.

 

9.  What is Smoke Testing?

Answer: Smoke testing is a quick, high-level test to determine whether the software build is stable enough for more extensive testing. It checks basic functionality and ensures the build is deployable.

 

10.  What is Sanity Testing?

Answer: Sanity testing is a type of software testing performed after minor changes or bug fixes to ensure that the specific areas affected by the changes are still functioning correctly. It is a subset of regression testing and focuses on verifying that the recent modifications have not adversely impacted the core functionality of the software.

 

11.  Explain the term "Defect Life Cycle"

Answer: The defect life cycle represents the stages that a defect goes through, from discovery to resolution. Common stages include New, Assigned, In Progress, Fixed, Verified, and Closed.

 

12.  What is a Test Environment, and why is it important?

Answer: A test environment is a setup that mimics the production environment. It's essential for testing because it ensures that software behaves as expected in real-world conditions.

 

13.  What is Boundary Testing?

Answer: Boundary testing examines values at the edge of the input domain. It aims to find defects related to boundary conditions, such as minimum and maximum input values.
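For example, if an age field is specified to accept values from 18 to 60, boundary testing focuses on 17, 18, 60, and 61. A minimal pytest sketch, assuming a hypothetical `is_valid_age` validator:

```python
import pytest

def is_valid_age(age):
    # Hypothetical validation rule: accept ages 18..60 inclusive.
    return 18 <= age <= 60

@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```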

 

14.  What is Compatibility Testing?

Answer: Compatibility testing ensures that the software functions correctly on various platforms, browsers, devices, and operating systems.

 

15.  What is Negative Testing?

Answer: Negative testing involves intentionally providing incorrect inputs or using invalid conditions to verify that the software can handle errors gracefully.
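A minimal pytest sketch of a negative test; the `withdraw` function and its rules are hypothetical:

```python
import pytest

def withdraw(balance, amount):
    # Hypothetical function under test: reject non-positive or excessive amounts.
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

def test_withdraw_rejects_negative_amount():
    # Negative test: deliberately invalid input, expecting a graceful error.
    with pytest.raises(ValueError):
        withdraw(balance=100, amount=-5)
```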

 

16.  Explain the term "Traceability Matrix"

Answer: A traceability matrix is a document that establishes a link between requirements and test cases. It helps ensure that all requirements are covered by test cases.
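In its simplest form, a traceability matrix maps requirement IDs to test case IDs; the sketch below uses made-up IDs to show how uncovered requirements stand out:

```python
# Hypothetical requirement-to-test-case mapping (all IDs are illustrative).
traceability = {
    "REQ-001": ["TC-001", "TC-002"],   # login
    "REQ-002": ["TC-003"],             # password reset
    "REQ-003": [],                     # report export - not yet covered
}

# Any requirement with no linked test case is a coverage gap.
uncovered = [req for req, cases in traceability.items() if not cases]
print("Requirements without test cases:", uncovered)  # ['REQ-003']
```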

 

17.  What is Ad-hoc Testing?

Answer: Ad-hoc testing is informal testing where testers explore the application without predefined test cases. It aims to discover defects through unscripted exploration.

 

18.  What is Exploratory Testing?

Answer: Exploratory testing is a testing approach where testers simultaneously learn about the application while designing and executing test cases. It's particularly useful for complex or poorly-documented systems.

See also: Exploratory Testing

 

19.  What is Alpha Testing and Beta Testing?

Answer: Alpha testing is performed by the development team in a controlled environment. Beta testing involves end-users testing the software in a real-world setting before the official release.

 

20.  What is Load Testing, and why is it important?

Answer: Load testing evaluates the performance of a system under expected load conditions. It helps identify bottlenecks and assesses system scalability.
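Dedicated tools such as JMeter or Locust are normally used for load testing, but the basic idea can be sketched with a thread pool firing concurrent requests and measuring response times; the endpoint URL below is hypothetical.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://api.example.com/health"  # hypothetical endpoint under test

def timed_request(_):
    # Issue one request and return how long it took.
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

# Simulate 50 concurrent users and report the average response time.
with ThreadPoolExecutor(max_workers=50) as pool:
    durations = list(pool.map(timed_request, range(50)))

print(f"Average response time: {sum(durations) / len(durations):.3f}s")
```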

 

21.  What is Stress Testing?

Answer: Stress testing evaluates how a system behaves under extreme conditions, often beyond normal operational limits. It helps identify system weaknesses.

 

22.  How do you handle a situation where a critical defect is found just before a release?

Answer: In such cases, the severity and impact of the defect should be communicated to stakeholders. A decision on whether to release or delay should be made based on risk assessment.

 

23.  Explain the term "Test Coverage"

Answer: Test coverage is a measure of how much of the application's functionality has been tested. It helps identify areas that may require additional testing.

 

24.  What is the purpose of a Test Execution Report?

Answer: A Test Execution Report summarizes the results of test case execution. It includes information about passed, failed, and blocked test cases.

 

25.  What is Usability Testing?

Answer: Usability testing evaluates how user-friendly a software application is by observing real users interacting with it. It helps identify user interface issues.

 

26.  What is meant by test coverage?

Answer: Test coverage is a quality metric that represents the amount (as a percentage) of testing that has been completed. It is relevant for both functional and non-functional testing activities, and it helps identify missing test cases that still need to be added.
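As a worked example of the metric, if 45 of 60 planned test cases have been executed, test coverage is 45 / 60 × 100 = 75%; the snippet below simply restates that arithmetic with illustrative numbers.

```python
# Illustrative numbers only: 45 executed out of 60 planned test cases.
executed_tests = 45
planned_tests = 60

coverage_percent = executed_tests / planned_tests * 100
print(f"Test coverage: {coverage_percent:.1f}%")  # -> Test coverage: 75.0%
```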

 

27.  What is the Entry and Exit Criteria in Software Testing?

Answer: Entry criteria define the conditions that must be met before testing can begin, while exit criteria specify when testing should be considered complete.

 

28.  What is a Test Management Tool, and why is it used?

Answer: A test management tool helps manage and organize test cases, track test execution, and generate reports. It enhances test efficiency and visibility.

 

29.  Explain the concept of Test Automation?

Answer: Test automation is the process of using automated scripts and testing tools to perform tests, execute test cases, and compare actual results with expected results.

 

30.  What are the benefits and limitations of Test Automation?

Answer: Benefits include repeatability, efficiency, and reduced human error. Limitations include initial setup time, maintenance effort, and unsuitability for some types of testing.

 

31.  What is Positive and Negative Testing?

Answer: Positive testing verifies that the system behaves as expected with valid inputs, while negative testing validates that the system handles invalid inputs or error conditions appropriately.

 

32.  What is Test Driven Development (TDD)?

Answer: Test Driven Development is a software development methodology where tests are written before writing the actual code. It helps ensure that the code meets the required functionality.
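A minimal TDD sketch: the test is written first (red), then just enough code is added to make it pass (green), and finally the code is refactored; the `slugify` function here is purely a hypothetical exercise.

```python
# Step 1 (red): write the test first; it fails because slugify does not exist yet.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Manual Testing Questions") == "manual-testing-questions"

# Step 2 (green): write the minimal implementation that makes the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): clean up the implementation while keeping the test green.
```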

 

33.  What is User Acceptance Testing (UAT)?

Answer: User Acceptance Testing is the final phase of testing where end-users validate whether the software meets their business requirements and is ready for production use.

 

34.  What is Behavior-Driven Development (BDD)?

Answer: Behavior-Driven Development (BDD) is a software development approach that extends the principles of Test-Driven Development (TDD) to include collaboration between developers, testers, and non-technical stakeholders. BDD places a strong emphasis on communication and aligning development with business goals by using natural language specifications that describe the expected behavior of the software.
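In practice, tools such as Cucumber, behave, or pytest-bdd bind Given/When/Then specifications to executable steps. The plain-pytest sketch below only mimics that structure with comments, using a hypothetical shopping-cart scenario.

```python
# A plain pytest sketch that mirrors a BDD scenario; a real BDD setup would keep
# the Gherkin text in a .feature file and bind steps with a tool such as
# pytest-bdd or behave.

def test_adding_an_item_updates_the_cart_total():
    # Given an empty shopping cart
    cart = {"items": [], "total": 0.0}

    # When the user adds an item priced at 9.99
    cart["items"].append({"name": "book", "price": 9.99})
    cart["total"] = sum(item["price"] for item in cart["items"])

    # Then the cart total should be 9.99
    assert cart["total"] == 9.99
```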



35. What is Regression Testing?

Regression Testing is a full or partial selection of already executed test cases that are re-executed to ensure existing functionalities work fine.

Steps involved:

  1. Re-testing: all of the tests in the current test suite are run again. This is both the most expensive and the most time-consuming option.
  2. Regression test selection: only a subset of the suite is chosen for re-execution, typically drawn from feature tests, integration tests, and end-to-end tests.
  3. Prioritization of test cases: the selected test cases are ranked according to their business impact and critical functionality.

See also: Regression Testing

36. What is Test Harness?

A test harness is a collection of software and test data used to exercise a program unit under various conditions (for example stress, load, or data-driven scenarios) while monitoring its behaviour and outputs.
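A very small harness might drive the unit under test with a batch of data records and log each outcome; in the sketch below, `process_record` and the test data are hypothetical.

```python
# Minimal test-harness sketch: drive a unit under test with several data records
# and collect pass/fail results. process_record is a hypothetical unit.

def process_record(record):
    return record["amount"] * (1 + record["tax_rate"])

test_data = [
    {"amount": 100.0, "tax_rate": 0.1, "expected": 110.0},
    {"amount": 50.0,  "tax_rate": 0.0, "expected": 50.0},
]

results = []
for case in test_data:
    actual = process_record(case)
    results.append((case, abs(actual - case["expected"]) < 1e-9))

print(f"{sum(ok for _, ok in results)}/{len(results)} cases passed")
```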


37. Differentiate between Positive and Negative Testing

Positive Testing:

  • Positive testing ensures that your software performs as expected with valid input. The test fails if an error occurs during positive testing.
  • In this testing, the tester always works with a valid set of data.

Negative Testing:

  • Negative testing guarantees that your app can gracefully deal with unexpected user behaviour or incorrect input.
  • Testers use as much ingenuity as possible when validating the app against erroneous data.

38. What is a Critical Bug?

A critical bug is one that affects a major portion of an application's functionality. It indicates that a significant piece of functionality or a critical system component is completely broken, with no workaround available. The application cannot be delivered to end users until the critical bug has been fixed.


39. What is Test Closure?

Test Closure is a document that summarises all of the testing performed throughout the software development life cycle, together with a detailed analysis of the defects found and fixed. The report includes metrics such as the total number of test cases, the total number of test cases executed, the total number of defects found, the total number of defects fixed, the total number of defects not fixed, the total number of defects rejected, and so on.


40. Explain the defect life cycle

A defect life cycle is the process by which a defect progresses through numerous stages over the course of its existence. The cycle begins when a defect is discovered and ends when the defect is closed, after the fix has been verified and it is confirmed that the defect does not reoccur.




41. What is the pesticide paradox? How to overcome it?

According to the pesticide paradox, if the same tests are run over and over, the same test cases will eventually stop finding new bugs. Developers also tend to be especially careful in areas where testers have already found many defects and may overlook other areas. Methods for overcoming the pesticide paradox include:

  • Creating an entirely new set of test cases to exercise different parts of the software.
  • Preparing new test cases and adding them to the existing test cases.

Using these methods, it is possible to detect more defects in areas where defect levels had previously dropped.


42. What is API testing?

API testing is a type of software testing that evaluates application programming interfaces (APIs) to check whether they meet functionality, reliability, performance, and security requirements. Simply put, API testing is designed to detect defects, inconsistencies, or deviations from an API's expected behaviour. Typically, applications are divided into three layers:

  • The Presentation Layer, or user interface.
  • The Business Layer, or application programming interface, for business logic processing.
  • The Database Layer, for modelling and manipulating data.

API testing is performed at the most critical layer of the software architecture, the Business Layer.
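A simple API-level check can be written with the `requests` library; the endpoint URL and response fields in the sketch below are hypothetical.

```python
import requests

def test_get_user_returns_expected_fields():
    # Hypothetical endpoint; replace with the API under test.
    response = requests.get("https://api.example.com/users/1", timeout=5)

    # Functional checks: status code and response body structure.
    assert response.status_code == 200
    body = response.json()
    assert "id" in body and "email" in body
```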


43. What is System testing?

System testing is a type of testing in which the entire software is tested. System testing examines the application's compliance with its business requirements.


44. What is Acceptance testing?

Acceptance testing is a type of testing done by a possible end-user or customer to see if the software meets the business requirements and can be used.


45. Differentiate between bug leakage and bug release

Bug Leakage - When tested software is pushed into the market and the end-user discovers defects, this is known as bug leakage. These are bugs that the testing team overlooked throughout the testing phase.

Bug Release - When a certain version of software is launched into the market with some known bugs that are expected to be fixed in later versions, this is known as a bug release. These are low-priority issues that are mentioned in the release notes shared with end-users.


46. What do you mean by Defect Triage?

Defect triage is a procedure in which defects are prioritised depending on a variety of characteristics such as severity, risk, and the amount of time it will take to fix the fault. The defect triage meeting brings together several stakeholders - the development team, testing team, project manager, BAs, and so on – to determine the order in which defects should be fixed.

 
 

47. What is Integration Testing? What are its types?

Integration testing is performed after unit testing. We test a group of linked modules in integration testing. Its goal is to identify faults with module interaction.

The following are the types of integration testing:

  • Big Bang Integration Testing — After all of the modules have been merged, big bang integration testing begins.
  • Top-down Integration Testing — In top-down integration, testing and integration begin at the top and work their way down.
  • Bottom-up Integration Testing — In bottom-up integration testing, lower-level modules are tested before moving up the hierarchy to higher-level modules.
  • Hybrid Integration Testing — Hybrid integration testing combines top-down and bottom-up integration testing techniques. The integration with this approach starts at the middle layer, and testing is done in both directions.


48. What is a stub?

Often, when top-down integration testing is performed, lower-level modules have not yet been developed while the top-level modules are being tested and integrated. In these situations, stubs (dummy modules) are used to simulate the behaviour of the missing modules by returning a hard-coded or predetermined result based on the input values.
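A small sketch of a stub: a dummy payment gateway returns a hard-coded result so that the higher-level `checkout` logic can be tested before the real module exists; both names are hypothetical.

```python
class PaymentGatewayStub:
    """Stub standing in for a lower-level module that is not yet developed."""
    def charge(self, amount):
        # Hard-coded, predictable response instead of a real payment call.
        return {"status": "approved", "amount": amount}

def checkout(cart_total, gateway):
    # Hypothetical higher-level module under test.
    result = gateway.charge(cart_total)
    return result["status"] == "approved"

def test_checkout_with_payment_stub():
    assert checkout(49.99, PaymentGatewayStub()) is True
```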


49.  What is code coverage?

Code coverage is the amount of application code exercised by the test scripts. It indicates how much of the application's code the test suite covers.


50. What is a cause-effect graph?

A cause-effect graph testing technique is a black-box test design technique that uses a graphical representation of the input (cause) and output (effect) to construct the test. This method employs a variety of notations to describe AND, OR, NOT, and other relationships between the input and output conditions.
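The cause-effect relationships are usually turned into a decision table and then into test cases. The sketch below encodes a hypothetical rule ("a discount applies only if the customer is a member AND the order total exceeds 100") as parametrized tests covering each combination of causes.

```python
import pytest

def discount_applies(is_member, order_total):
    # Hypothetical rule: member AND order_total > 100 => discount applies.
    return is_member and order_total > 100

@pytest.mark.parametrize("is_member, order_total, expected", [
    (True,  150, True),   # both causes true  -> effect true
    (True,   80, False),  # second cause false
    (False, 150, False),  # first cause false
    (False,  80, False),  # both causes false
])
def test_discount_decision_table(is_member, order_total, expected):
    assert discount_applies(is_member, order_total) == expected
```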


51. Explain equivalence class partitioning.

Equivalence class partitioning is a specification-based, black-box testing technique. The input data is divided into logically equivalent groups (partitions) that define different test conditions, such that testing with a single value from a group is considered equivalent to testing with every other value in that group.
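For example, if a quantity field accepts values from 1 to 100, the input domain splits into three partitions (below 1, 1 to 100, above 100), and one representative value per partition is enough; the pytest sketch below uses a hypothetical validator.

```python
import pytest

def is_valid_quantity(qty):
    # Hypothetical rule: quantities from 1 to 100 are valid.
    return 1 <= qty <= 100

@pytest.mark.parametrize("qty, expected", [
    (-7, False),   # representative of the invalid partition below the range
    (50, True),    # representative of the valid partition 1..100
    (250, False),  # representative of the invalid partition above the range
])
def test_quantity_partitions(qty, expected):
    assert is_valid_quantity(qty) == expected
```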


52. What is boundary value analysis?

Boundary value analysis is a test design technique in which the values at the boundaries of the equivalence partitions are used as inputs to the test cases.


53. What is your approach towards a severely buggy program? How would you handle it?

In such cases, the best course of action is for testers to go through the process of reporting any flaws or blocking-type issues that arise, with an emphasis on critical bugs. Because this sort of crisis might result in serious issues such as insufficient unit or integration testing, poor design, wrong build or release methods, and so on, management should be contacted and given documentation as proof of the problem.


54. What if an organization's growth is so rapid that standard testing procedures are no longer feasible? What should you do in such a situation?

This is a very prevalent issue in the software industry, especially with the new technologies that are being used in product development. In this case, there is no simple answer; however, you could:

  • Hire people who are good at what they do.
  • Quality issues should be 'fiercely prioritised' by management, with a constant focus on the client.
  • Everyone in the company should understand what the term "quality" implies to the end-user.


55. When can you say for sure that the code has met its specifications? 

Most businesses have coding "standards" that all developers are expected to follow, but opinions differ on what is best and on how many rules are too many or too few. Methods such as a requirements traceability matrix are available to guarantee that requirements are linked to test cases, and when all of those test cases pass, the code can be considered to have met its specifications.


56. What is the difference between manual testing and automation testing?

Manual testing is the process of manually testing software for defects. It requires a tester to manually execute the test steps and compare the actual and expected results. Automation testing uses special software to control the execution of tests and compare the results with the desired results. As a result, automation testing is much faster than manual testing and can reduce the time required to complete a test cycle.


57. When should you opt for manual testing over automation testing?

Manual testing should be preferred over automation testing when the tests are highly specific or require human interpretation. Manual testing is also better suited for exploratory testing, usability testing, and testing on multiple operating systems or unique hardware.


58. What are the phases involved in the Software Testing Life Cycle?

The phases involved in the Software Testing Life Cycle are:

  • Test Planning
  • Test Analysis
  • Test Design
  • Test Implementation
  • Test Execution
  • Test Results Analysis
  • Test Closure
 
 

59. What makes a good test engineer?

A good test engineer is detail-oriented and organized, has excellent problem-solving skills, and can produce high-quality work quickly and efficiently. They should also have strong communication and collaboration skills and be an outstanding team player. They also need to stay up to date on the latest technologies and software trends and be able to apply them to their testing process.


60. What is the difference between system testing and integration testing?

System testing is a type of software testing that evaluates a complete and fully integrated software product. It verifies that the software meets the requirements specified in the design and the system-level technical specifications. System testing also identifies any weaknesses, errors, or bugs.

Integration testing is software testing that verifies the interactions between two or more system components. It is performed after unit testing and before system testing. It checks how components interact with each other and how they fit together. Integration testing is necessary to ensure that the components of the system work together as expected.


61. What is Defect Cascading in Software Testing?

Defect cascading is a type of software testing issue in which the result of a defect in one part of the system causes other defects or problems to occur in other parts of the system. This cascading causes a chain reaction of errors, making it difficult to trace the source of the problem. Defect cascading can lead to many issues, from minor performance slowdowns to significant system crashes, making it a severe risk to software developers and testers.


62. What does the term 'quality' mean when testing?

Quality in testing refers to the degree to which a product meets its intended requirements, as well as the degree to which it satisfies customer needs and expectations. It includes both the functional and non-functional aspects of the product. Quality assurance focuses on the processes used to ensure the product meets its requirements, while quality control focuses on testing the product to verify that those requirements are met.


63. What are the Experience-based testing techniques?

Experience-based testing techniques include:

  • Exploratory Testing
  • Error Guessing
  • Ad-hoc Testing
  • Checklist-based Testing
  • Exploit-based Testing
  • Session-based Testing
  • Alpha Testing
  • Beta Testing
  • User Acceptance Testing
  • Usability Testing


64. What is a top-down and bottom-up approach in testing?

A top-down and bottom-up approach in testing refers to the order of testing.

  • Top-down testing begins at the highest level and works downward. Thus, each higher-level component is tested in isolation from the lower-level components.
  • Bottom-up testing starts at the lowest level and works upward. Thus, each lower-level component is tested in isolation from higher-level components.


65. What is the difference between smoke testing and sanity testing?

  • Smoke testing is a high-level test used to ensure the most critical functions of a software system are working correctly. It is a quick test that can be used to determine whether it is worth investing time and energy into further, more extensive testing. 
  • Sanity testing is a more specific test used to check that recent changes to a system have not caused any new, unwanted behavior. It ensures that basic features are still functioning as expected after minor changes have been made.

See also: Differences between Smoke Testing and Sanity Testing

66. What is the difference between static testing and dynamic testing?

  • Static testing is a type of testing performed without executing the code of a software application. Instead, it includes reviews, inspections, and walkthroughs.
  • Dynamic testing is a type of testing that involves executing the code of a software application to determine the results of certain functions and operations. It includes unit testing, integration testing, and acceptance testing.



67. How will you determine when to stop testing?

When testing, it is vital to determine when to stop so that resources are not wasted. When deciding when to stop testing, you should consider the following criteria:

  • Desired levels of quality 
  • Adherence to timelines and budget 
  • Number of defects found 
  • Number of test cases that have been completed 
  • Risk factors associated with the project

Once these criteria have been met, you can stop your testing.


68. How do you test a product if the requirements are yet to freeze?

When requirements are yet to freeze, the best approach is to use an agile development methodology, such as Scrum. 

  • The first step would be to hold requirements gathering meetings with all stakeholders to understand the product’s purpose and desired outcomes. The next step would be to break up the project into individual, manageable user stories. 
  • From there, we would prioritize the user stories and assign them to sprints for development.
  • As the project progresses, we continually test the product using techniques such as unit tests, integration tests, user acceptance tests, and system testing. In addition, as requirements change, we will update our tests to ensure the product meets the desired outcomes.


69. What are the cases when you'll consider choosing automated testing over manual testing?

Automated testing should be chosen over manual testing in the following cases:

  • When the test requires repetitive steps: automated testing is ideal for tests that run many iterations or repeat the same actions over and over.
  • When the test requires a large amount of data: automated testing can quickly insert large amounts of data into the system under test.
  • When the test requires multiple environments: automated testing can easily be configured to run against multiple operating systems, browsers, and devices.
  • When the test requires precise timing: automated tests can be programmed to run with precise timing, ensuring each step is performed exactly when it needs to be.
  • When the test requires multiple users: automated testing can simulate many users accessing the system simultaneously, allowing for more realistic testing.



70. What is configuration management ?

Configuration management is the process of managing, tracking, and controlling changes to a system's software, hardware, or network configuration. It helps maintain the integrity of a system and ensure that it is secure, stable, and compliant with organizational policies. The primary goals of configuration management are to ensure system reliability, maintain system availability, and improve system performance.


71. Is it true that we can do system testing at any stage?

No, system testing is typically carried out after integration testing and before user acceptance testing, once a complete, integrated build of the software is available.


72. What are some best practices that you should follow when writing test cases?

Here are the top 10 best test case practices:

  • Develop test cases that are clear, concise, and to the point.
  • Ensure that the test cases challenge the software's functionality in all dimensions.
  • Make sure that the test cases cover all the requirements.
  • Develop repeatable test cases that can be automated when necessary.
  • Develop test cases that are independent of each other.
  • Use meaningful and descriptive names for test cases.
  • Record the results of test cases for future reference.
  • Make sure that the test cases are modular and can be reused.
  • Perform reviews of the test cases to ensure accuracy and completeness.
  • Document the test cases in a standard format.


73. Why is it that the boundary value analysis provides good test cases?

Boundary value analysis provides good test cases because it ensures that the boundaries of input and output values are tested, making it easier to catch defects at edge cases. Testing these edge cases helps ensure that the system is robust and can handle unexpected input or output values.


74. Why is it impossible to test a program thoroughly or 100% bug-free?

It is impossible to test a program thoroughly or 100% bug-free because it is impossible to anticipate and test every possible combination of inputs, environments, and states a program might encounter.


75. Can automation testing replace manual testing?

No, automation testing cannot fully replace manual testing. Automation testing is designed to supplement manual testing, not replace it. Automation testing can automate repetitive, tedious test cases and make the testing process more efficient. However, it cannot replace manual testing completely, as some tests can only be performed manually. 

For example, exploratory testing, usability testing, and user experience testing are all tasks that require manual testing.


76. Mention a few advantages of Automated testing.

The following are some major advantages of automated testing:

  • Automated test execution is fast and saves a significant amount of time.
  • Carefully written test scripts reduce the chance of human error during testing.
  • Test execution can be scheduled for nightly runs with CI tools such as Jenkins, which can also be configured to send daily test results to key stakeholders.
  • Automation testing requires far fewer resources: once tests are automated, execution needs almost no QA time, so QA bandwidth can be spent on exploratory work instead.


77. Explain what is SDLC.

This is an acronym for Software Development Life Cycle and encompasses all of the stages of software development, including requirement gathering and analysis, designing, coding, testing, deployment, and maintenance.

 

78. What types of manual testing are there? Break them down.

Manual testing is broken down into:

  • Black Box
  • White Box
  • Integration
  • Unit
  • System
  • Acceptance

 

79. What’s the difference between verification and validation?

Verification evaluates the software during the development phase, ascertaining whether or not the product meets the specified requirements. Validation, on the other hand, evaluates the software after the development phase, making sure it meets the requirements of the customer.

 

80. When should testing end?

There are a few criteria for ending testing:

  • The bug rate has fallen below an agreed-upon level
  • The testing or release deadlines have arrived
  • The testing budget is out of funds
  • A certain percentage of test cases have passed
  • The alpha or beta testing periods have ended
  • Code, functionality, or requirements coverage has reached a declared level

 

I'll keep hunting for more good questions to add here and keep things interesting 😋

Further reading

https://www.simplilearn.com/manual-testing-interview-questions-and-answers-article

  • Anh Tester

    Though the road is hard, our feet must still walk on
    Though life is hard, our minds must still think things through