Software Testing

  • What is 'Software Quality Assurance'?
    Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.
  • What is 'Software Testing'?
    Testing involves operating a system or application under controlled conditions and evaluating the results. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't, or fail to happen when they should.
  • Does every software project need testers?
    It depends on the size and context of the project, the risks, the development methodology, the skill and experience of the developers. If the project is a short-term, small, low risk project, with highly experienced programmers utilizing thorough unit testing or test-first development, then test engineers may not be required for the project to succeed. For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually necessary. The use of personnel with specialized skills enhances an organization's ability to be successful in large, complex, or difficult tasks. It allows for both a) deeper and stronger skills and b) the contribution of differing perspectives.
  • What is Regression testing?
    Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
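    A regression suite can be sketched as ordinary automated tests pinned to previously fixed faults, re-run after every modification. The function name and bug number below are hypothetical, for illustration only:

```python
import unittest

def normalize_price(value):
    """Round a price to 2 decimal places.

    Hypothetical bug #123: negative values once crashed downstream code,
    so the fix (and its regression test) rejects them explicitly.
    """
    if value < 0:
        raise ValueError("price cannot be negative")
    return round(value, 2)

class RegressionTests(unittest.TestCase):
    # Re-run after every change to show old faults have not returned.
    def test_bug_123_negative_price_rejected(self):
        with self.assertRaises(ValueError):
            normalize_price(-1.0)

    def test_rounding_unchanged(self):
        self.assertEqual(normalize_price(19.999), 20.0)

# Run with: python -m unittest <this module>
```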
  • Why does software have bugs?
    Some of the reasons are:
    • Miscommunication or no communication.
    • Programming errors
    • Changing requirements
    • Time pressures
  • How can new Software QA processes be introduced in an existing Organization?
    It depends on the size of the organization and the risks involved.
    • For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects.
    • For larger groups, incremental, self-managed team approaches can be used to phase in new processes.
  • What is verification? Validation?
    Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed.
  • What is a 'walkthrough'? What's an 'inspection'?
    A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required. An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything.
  • What kinds of testing should be considered?
    Some of the basic kinds of testing involve:
    Black-box testing, white-box testing, integration testing, functional testing, smoke testing, acceptance testing, load testing, performance testing, and user acceptance testing.
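    As a hedged illustration of the first two kinds, the same (hypothetical) function can be tested black-box, from its stated pricing rules alone, and white-box, targeting a specific internal branch:

```python
def shipping_cost(weight_kg):
    """Hypothetical pricing: flat 4.99 under 5 kg, plus 0.50 per extra kg."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg < 5:            # internal 'light parcel' branch
        return 4.99
    return 4.99 + (weight_kg - 5) * 0.5

# Black-box case: derived from the published pricing rules only.
blackbox_ok = shipping_cost(10) == 4.99 + 5 * 0.5

# White-box case: chosen to hit the boundary of the internal branch.
whitebox_ok = shipping_cost(5) == 4.99 and shipping_cost(4.99) == 4.99
```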
  • What are 5 common problems in the software development process?
    • Poor requirements
    • Unrealistic Schedule
    • Inadequate testing
    • Changing requirements
    • Miscommunication
  • What are 5 common solutions to software development problems?
    • Solid requirements
    • Realistic Schedule
    • Adequate testing
    • Clarity of requirements
    • Good communication among the Project team
  • What is software 'quality'?
    Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.
  • What are some recent major computer system failures caused by software bugs?
    Trading on a major Asian stock exchange was brought to a halt in November of 2005, reportedly due to an error in a system software upgrade. A May 2005 newspaper article reported that a major hybrid car manufacturer had to install a software fix on 20,000 vehicles due to problems with invalid engine warning lights and occasional stalling. Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project.
  • What is 'good code'? What is 'good design'?
    'Good code' is code that works, is bug free, and is readable and maintainable. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements.
  • What is SEI? CMM? CMMI? ISO? Will it help?
    SEI is the Software Engineering Institute at Carnegie Mellon University; CMM (Capability Maturity Model) and its successor CMMI (Capability Maturity Model Integration) are process maturity models developed there; ISO is the International Organization for Standardization, whose ISO 9001 standard covers quality systems. Adopting such models and standards can help an organization identify best practices and increase the maturity of its processes.
  • What steps are needed to develop and run software tests?
    • Obtain requirements, functional design, and internal design specifications and other necessary documents
    • Obtain budget and schedule requirements.
    • Determine Project context.
    • Identify risks.
    • Determine testing approaches, methods, test environment, test data.
    • Set Schedules, testing documents.
    • Perform tests.
    • Perform reviews and evaluations
    • Maintain and update documents
  • What's a 'test plan'? What's a 'test case'?
    A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly.
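    The test case's input/action/expected-response structure can be sketched directly as data; the record fields and sample cases below are illustrative, not a standard schema:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestCaseRecord:
    case_id: str
    description: str
    action: Callable[[Any], Any]   # the operation under test
    given: Any                     # the input or event
    expected: Any                  # the expected response

    def passed(self):
        return self.action(self.given) == self.expected

cases = [
    TestCaseRecord("TC-001", "uppercase conversion", str.upper, "abc", "ABC"),
    TestCaseRecord("TC-002", "length of empty string", len, "", 0),
]

all_passed = all(case.passed() for case in cases)
```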
  • What should be done after a bug is found?
    The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create problems elsewhere.
  • Will automated testing tools make testing easier?
    It depends on the Project size. For small projects, the time needed to learn and implement them may not be worth it unless personnel are already familiar with the tools. For larger projects, or on-going long-term projects they can be valuable.
  • What's the best way to choose a test automation tool? Some of the points to consider before choosing a tool are:
    • Analyze the current, non-automated testing situation to determine which testing activities are being performed.
    • Identify testing procedures that are time-consuming and repetitive, since these benefit most from automation.
    • Consider the cost/budget of the tool, plus training and implementation factors.
    • Evaluate the chosen tool in a pilot effort to confirm the expected benefits.
  • How can it be determined if a test environment is appropriate?
    The test environment should match, as closely as possible, the hardware, software, network, data, and usage characteristics of the expected live environments in which the software will be used.
  • What's the best approach to software test estimation?
    The 'best approach' is highly dependent on the particular organization and project and on the experience of the personnel involved.
    Some of the approaches to consider are:
    • Implicit Risk Context Approach
    • Metrics-Based Approach
    • Test Work Breakdown Approach
    • Iterative Approach
    • Percentage-of-Development Approach
  • What if the software is so buggy it can't really be tested at all?
    The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs.
  • How can it be known when to stop testing?
    Common factors in deciding when to stop are:
    • Deadlines (release deadlines, testing deadlines, etc.)
    • Test cases completed, with a certain percentage passed
    • Test budget depleted
    • Coverage of code/functionality/requirements reaches a specified point
    • Bug rate falls below a certain level
    • Beta or alpha testing period ends
  • What if there isn't enough time for thorough testing?
    • Use risk analysis to determine where testing should be focused.
    • Determine the important functionalities to be tested.
    • Determine the high risk aspects of the project.
    • Prioritize the kinds of testing that need to be performed.
    • Determine the tests that will have the best high-risk-coverage to time-required ratio.
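    One hedged way to make the last point concrete is to rank candidate tests by estimated risk coverage per hour, so the highest-value tests run first when time is short. The names and figures below are invented:

```python
# Invented risk/effort estimates for illustration only.
candidate_tests = [
    {"name": "payment flow", "risk": 9, "hours": 3.0},
    {"name": "login", "risk": 8, "hours": 1.0},
    {"name": "settings page", "risk": 2, "hours": 2.0},
]

# Highest risk-coverage-to-time ratio first.
prioritized = sorted(candidate_tests, key=lambda t: t["risk"] / t["hours"], reverse=True)
order = [t["name"] for t in prioritized]
```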
  • What if the project isn't big enough to justify extensive testing?
    Consider the impact of project errors, not the size of the project. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.
  • How does a client/server environment affect testing?
    Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers, especially in multi-tier systems. Load/stress/performance testing may be useful in determining client/server application limitations and capabilities.
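    A minimal load-test shape can be sketched with concurrent workers; the handler below is a stand-in, since a real load test would issue network requests to the server under test:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # Stand-in for a real client/server round trip.
    return ("ok", request_id)

# Drive many concurrent "requests" and collect the outcomes.
with ThreadPoolExecutor(max_workers=8) as pool:
    responses = list(pool.map(handle_request, range(100)))

failures = [r for r in responses if r[0] != "ok"]
```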
  • How can World Wide Web sites be tested?
    Some of the considerations might include:
    • Testing the expected loads on the server
    • The performance expected on the client side
    • Testing that the required security measures are implemented and verified
    • Testing compliance with the HTML specification, and checking external and internal links
    • CGI programs, applets, JavaScript, ActiveX components, etc. that must be maintained, tracked, and controlled
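    One of those checks, link verification, can be sketched with the standard-library HTML parser. The sample page is inline so the sketch runs offline; a real test would then fetch each collected URL and assert on the response status:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets so each can later be fetched and verified."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

sample_page = '<html><body><a href="/home">Home</a> <a href="https://example.com">Docs</a></body></html>'
collector = LinkCollector()
collector.feed(sample_page)
```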
  • How is testing affected by object-oriented designs?
    Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. If the application was well-designed this can simplify test design.
  • What is Extreme Programming and what's it got to do with testing?
    Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. For testing (sometimes called 'extreme testing'), programmers are expected to write unit and functional test code first, before writing the application code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black-box testing.
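    A minimal test-first sketch of that practice (the fizzbuzz example is illustrative, not from XP literature): the unit test is written before the code it exercises.

```python
import unittest

# Step 1: written first, this test pins down the intended behavior
# before any application code exists.
class FizzBuzzTest(unittest.TestCase):
    def test_rules(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")

# Step 2: the simplest implementation that makes the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Run with: python -m unittest <this module>
```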
  • What makes a good Software Test engineer?
    A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.
  • What makes a good Software QA engineer?
    They must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.
  • What's the role of documentation in QA?
    QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. Change management for documentation should be used.
  • What is a test strategy? What is the purpose of a test strategy?
    It is a plan for conducting the test effort against one or more aspects of the target system.
    A test strategy needs to be able to convince management and other stakeholders that the approach is sound and achievable, and it also needs to be appropriate both in terms of the software product to be tested and the skills of the test team.
  • What information does a test strategy capture?
    It captures an explanation of the general approach that will be used and the specific types, techniques, and styles of testing.
  • What is test data?
    It is a collection of test input values that are consumed during the execution of a test, together with the expected results referenced for comparative purposes during that execution.
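    A sketch of that definition, with invented test data (input values plus expected results) driving a small date validator:

```python
from datetime import date

def is_valid_iso_date(text):
    """True if text is a valid YYYY-MM-DD calendar date."""
    try:
        date.fromisoformat(text)
        return True
    except ValueError:
        return False

# Test data: input values plus the expected results used for comparison.
test_data = [
    ("2024-01-31", True),    # ordinary valid date
    ("2024-02-30", False),   # day out of range for month
    ("not-a-date", False),   # malformed input
]

outcomes = [is_valid_iso_date(given) == expected for given, expected in test_data]
```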
  • What is Unit testing?
    It is implemented against the smallest testable elements (units) of the software, and involves testing the internal structure, such as logic and data flow, as well as the unit's function and observable behaviors.
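    A hedged sketch of a unit test against one small unit, with cases covering both of its internal branches (odd and even input lengths):

```python
import unittest

def median(values):
    """Median of a non-empty sequence of numbers (the unit under test)."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

class MedianUnitTest(unittest.TestCase):
    def test_odd_length_branch(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length_branch(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

# Run with: python -m unittest <this module>
```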
  • How can the test results be used in testing?
    Test results are used to record the detailed findings of the test effort and to subsequently calculate the different key measures of testing.
  • What is Developer testing?
    Developer testing denotes the aspects of test design and implementation most appropriate for the team of developers to undertake.
  • What is independent testing?
    Independent testing denotes the test design and implementation most appropriately performed by someone who is independent from the team of developers.
  • What is Integration testing?
    Integration testing is performed to ensure that the components in the implementation model operate properly when combined to execute a use case.
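    A minimal sketch: two hypothetical components, each unit-testable alone, exercised together to check that they operate properly when combined.

```python
class OrderParser:
    """Parses 'SKU,quantity' lines (hypothetical component A)."""
    def parse(self, line):
        sku, qty = line.split(",")
        return {"sku": sku.strip(), "qty": int(qty)}

class InMemoryRepository:
    """Stores parsed orders (hypothetical component B)."""
    def __init__(self):
        self.orders = []
    def save(self, order):
        self.orders.append(order)

def import_orders(lines, parser, repo):
    for line in lines:
        repo.save(parser.parse(line))

# Integration test: the real parser and the real repository, combined.
repo = InMemoryRepository()
import_orders(["A1, 2", "B7, 5"], OrderParser(), repo)
assert repo.orders == [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 5}]
```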
  • What is System testing?
    A series of tests designed to ensure that the modified program interacts correctly with other system components. These test procedures typically are performed by the system maintenance staff in their development library.
  • What is Acceptance testing?
    User acceptance testing is the final test action taken before deploying the software. The goal of acceptance testing is to verify that the software is ready, and that it can be used by end users to perform those functions and tasks for which the software was built.
  • What is the role of a Test Manager?
    The Test Manager role is tasked with the overall responsibility for the test effort's success. The role involves quality and test advocacy, resource planning and management, and resolution of issues that impede the test effort.
  • What is the role of a Test Analyst?
    The Test Analyst role is responsible for identifying and defining the required tests, monitoring detailed testing progress and results in each test cycle and evaluating the overall quality experienced as a result of testing activities. The role typically carries the responsibility for appropriately representing the needs of stakeholders that do not have direct or regular representation on the project.
  • What is the role of a Test Designer?
    The Test Designer role is responsible for defining the test approach and ensuring its successful implementation. The role involves identifying the appropriate techniques, tools, and guidelines to implement the required tests, and giving guidance on the corresponding resource requirements for the test effort.
  • What are the roles and responsibilities of a Tester?
    The Tester role is responsible for the core activities of the test effort, which involves conducting the necessary tests and logging the outcomes of that testing. The tester is responsible for identifying the most appropriate implementation approach for a given test, implementing individual tests, setting up and executing the tests, logging outcomes and verifying test execution, analyzing and recovering from execution errors.
  • What are the skills required to be a good tester?
    A tester should have knowledge of testing approaches and techniques, diagnostic and problem-solving skills, knowledge of the system or application being tested, and knowledge of networking and system architecture.
  • What is test coverage?
    Test coverage is the measurement of testing completeness, and it's based on the coverage of testing expressed by the coverage of test requirements and test cases or by the coverage of executed code.
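    Requirements coverage, the first measure mentioned, can be sketched as the fraction of test requirements traced to at least one test case. The requirement and test-case identifiers below are invented:

```python
# Invented traceability data for illustration.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
traced_by_test = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-2", "REQ-3"},
}

covered = set().union(*traced_by_test.values())
requirements_coverage = len(covered & requirements) / len(requirements)
uncovered = requirements - covered   # the gap the coverage number points at
```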
  • What is a test script?
    The step-by-step instructions that realize a test, enabling its execution. Test Scripts may take the form of either documented textual instructions that are executed manually or computer readable instructions that enable automated test execution.
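    The computer-readable form can be sketched as a script of logged steps; the 'application under test' here is just Python's str type, chosen so the sketch is self-contained:

```python
log = []

def step(description, check):
    """Execute one scripted step and log its outcome for later verification."""
    passed = bool(check())
    log.append((description, "PASS" if passed else "FAIL"))
    return passed

step("setup: fixture string is lowercase", lambda: "hello".islower())
step("action: title-casing gives the expected result", lambda: "hello".title() == "Hello")
step("verify: original fixture is unchanged", lambda: "hello" == "hello")
```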