Common QA Interview Questions and Answers (English)

Interview questions on WinRunner

1. How have you used WinRunner in your project? - I have been using WinRunner to create automated scripts for GUI, functional, and regression testing of the AUT.

2. Explain the WinRunner testing process? - The WinRunner testing process involves six main stages:
o Create the GUI map file so that WinRunner can recognize the GUI objects in the application being tested.
o Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
o Debug tests: run tests in Debug mode to make sure they run smoothly.
o Run tests: run tests in Verify mode to test your application.
o View results: determine the success or failure of the tests.
o Report defects: if a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

3. What is contained in the GUI map? - WinRunner stores the information it learns about a window or object in a GUI map. When WinRunner runs a test, it uses the GUI map to locate objects: it reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested. Each object in the GUI map file has a logical name and a physical description. There are two types of GUI map files. Global GUI map file: a single GUI map file for the entire application. GUI map file per test: WinRunner automatically creates a GUI map file for each test created.

4. How does WinRunner recognize objects in the application? - WinRunner uses the GUI map file to recognize objects in the application. When WinRunner runs a test, it uses the GUI map to locate objects: it reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested.

5. Have you created test scripts, and what is contained in the test scripts? - Yes, I have created test scripts. A test script contains statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.

6. How does WinRunner evaluate test results? - Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, and user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results in the Test Results window.

7. Have you performed debugging of the scripts? - Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into, and Step Out functionality provided by WinRunner.

8. How do you run your test scripts? - We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

9. How do you analyze results and report the defects? - Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, and user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results in the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

10. What is the use of the TestDirector software? - TestDirector is Mercury Interactive's software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.

11. Have you integrated your automated scripts with TestDirector? - When you work with WinRunner, you can choose to save your tests directly to your TestDirector database. Alternatively, while creating a test case in TestDirector, we can specify whether the script is automated or manual; if it is an automated script, TestDirector will build a skeleton for the script that can later be modified into one which can be used to test the AUT.

12. What are the different modes of recording? - There are two types of recording in WinRunner. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

13. What is the purpose of loading WinRunner add-ins? - Add-ins are used in WinRunner to load the functions specific to a particular add-in into memory. While creating a script, only the functions in the selected add-in will be listed in the Function Generator, and while executing the script, only the functions in the loaded add-in will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.

14. What are the reasons that WinRunner fails to identify an object on the GUI? - WinRunner can fail to identify an object in a GUI for various reasons: the object is not a standard Windows object, or, if the browser used is not compatible with the WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

15. What is meant by the logical name of an object? - An object's logical name is determined by its class. In most cases, the logical name is the label that appears on the object.

16. If the object does not have a name, then what will be the logical name? - If the object does not have a name, then the logical name could be the attached text.

17. What is the difference between the GUI map and GUI map files? - The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files. Global GUI map file: a single GUI map file for the entire application. GUI map file per test: WinRunner automatically creates a GUI map file for each test created. A GUI map file is a file which contains the windows and objects learned by WinRunner, with their logical names and physical descriptions.

18. How do you view the contents of the GUI map? - The GUI Map Editor displays the contents of a GUI map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. The GUI Map Editor displays the various GUI map files created, and the windows and objects learned into them with their logical names and physical descriptions.

19. When you create a GUI map, do you record all the objects or only specific objects? - If we are learning a window, then WinRunner automatically learns all the objects in the window. Otherwise, we identify only those objects in a window that need to be learned, since we will be working with only those objects while creating scripts.

LoadRunner interview questions

1. What is load testing? - Load testing is testing whether the application works correctly under the loads that result from a large number of simultaneous users and transactions, and determining whether it can handle peak usage periods.

2. What is performance testing? - Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi-user environment to determine the effect of multiple transactions on the timing of a single transaction.

3. Did you use LoadRunner? What version? - Yes. Version 7.2.

4. Explain the load testing process? - Step 1: Planning the test. Here we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives. Step 2: Creating Vusers. Here we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions. Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario, where we define the goal that our test has to achieve and LoadRunner automatically builds the scenario for us. Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers. Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors. Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.

5. When do you do load and performance testing? - We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, does it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.

6. What are the components of LoadRunner? - The components of LoadRunner are the Virtual User Generator, the Controller and the Agent process, LoadRunner Analysis and Monitoring, and LoadRunner Books Online.

7. Which component of LoadRunner would you use to record a script? - The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.

8. Which component of LoadRunner would you use to play back a script in multi-user mode? - The Controller component is used to play back a script in multi-user mode. This is done during a scenario run, where a Vuser script is executed by a number of Vusers in a group.

9. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, so that they may perform a task simultaneously. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
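As a minimal sketch (LoadRunner Vuser scripts are plain C; the rendezvous, transaction, and URL names here are hypothetical), the deposit step above might be scripted as:

    Action()
    {
        /* All Vusers pause here until the Controller's rendezvous
           policy releases them, so the deposits fire simultaneously. */
        lr_rendezvous("deposit_cash");

        lr_start_transaction("deposit");
        web_url("deposit",
            "URL=http://bankserver/deposit",    /* hypothetical URL */
            LAST);
        lr_end_transaction("deposit", LR_AUTO);

        return 0;
    }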

10. What is a scenario? - A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.

11. Explain the recording mode for a web Vuser script? - We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web-based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to monitor the communication between the application and the server, generate the required function calls, and insert the generated function calls into a Vuser script.
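For example (a sketch only; the step names, URL, and form fields are hypothetical stand-ins for what VuGen would actually capture), a recorded web Vuser script replays HTTP traffic rather than GUI actions:

    Action()
    {
        /* A recorded page navigation. */
        web_url("home",
            "URL=http://www.example.com/index.htm",
            "Resource=0",
            "Mode=HTML",
            LAST);

        /* A recorded form submission. */
        web_submit_data("login",
            "Action=http://www.example.com/login",
            "Method=POST",
            "Mode=HTML",
            ITEMDATA,
            "Name=user", "Value=jojo", ENDITEM,
            "Name=pwd",  "Value=bean", ENDITEM,
            LAST);

        return 0;
    }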

12. Why do you create parameters? - Parameters are like script variables. They are used to vary the input to the server and to emulate real users: different sets of data are sent to the server each time the script is run. This better simulates the usage model for more accurate testing from the Controller; one script can emulate many different users on the system.
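A minimal sketch of parameterization (here {username} is a hypothetical parameter defined in VuGen's parameter list, and the URL is made up):

    Action()
    {
        /* Each iteration substitutes the next value from the
           parameter's data file in place of {username}. */
        web_submit_data("login",
            "Action=http://www.example.com/login",
            "Method=POST",
            ITEMDATA,
            "Name=user", "Value={username}", ENDITEM,
            LAST);

        lr_output_message("Iteration ran as %s",
                          lr_eval_string("{username}"));
        return 0;
    }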

13. What is correlation? Explain the difference between automatic correlation and manual correlation? - Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (avoiding nested queries). Automatic correlation is where we set rules for correlation; it can be application-server specific. Here values are replaced by data created according to these rules. In manual correlation, we scan for the value we want to correlate and use create correlation to correlate it.

14. How do you find out where correlation is required? Give a few examples from your projects? - Two ways: first, we can scan for correlations and see the list of values which can be correlated, and from this pick a value to be correlated. Second, we can record two scripts and compare them, looking at the difference file for the values which need to be correlated. In my project, there was a unique ID generated for each customer; it was the insurance number, generated automatically, sequential, and unique. I had to correlate this value in order to avoid errors while running my script. I did it using scan for correlation.

15. Where do you set automatic correlation options? - Automatic correlation for the web can be set in the Recording Options, on the correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done by using show output window, scanning for correlation, picking the correlate query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value is to be created.

16. What function is used to capture dynamic values in a web Vuser script? - The web_reg_save_param function saves dynamic data to a parameter.
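A sketch of its use (the parameter name, boundaries, and URL are hypothetical; web_reg_save_param must be registered before the request whose response contains the dynamic value):

    /* Capture the text between the boundaries into {SessionID}. */
    web_reg_save_param("SessionID",
        "LB=sessionid=",            /* left boundary in the response */
        "RB=\"",                    /* right boundary */
        "Ord=1",
        LAST);

    web_url("login_page",
        "URL=http://www.example.com/login",   /* hypothetical URL */
        LAST);

    /* The captured value is now available as {SessionID}. */
    lr_output_message("Captured session: %s",
                      lr_eval_string("{SessionID}"));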

17. When do you disable logging in the Virtual User Generator, and when do you choose standard and extended logs? - Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard log option: when you select standard log, it creates a standard log of the functions and messages sent during script execution, to use for debugging. Extended log option: select extended log to create an extended log, including warnings and other messages; we can specify which additional information should be added to the extended log using the extended log options. Disable both options for large load-testing scenarios; when you copy a script to a scenario, logging is automatically disabled.

18. How do you debug a LoadRunner script? - VuGen contains two options to help debug Vuser scripts: the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution; the debug information is written to the Output window. We can manually set the message class within the script using the lr_set_debug_message function. This is useful if we want to receive debug information about only a small section of the script.
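For instance (a sketch; the step name and URL are hypothetical), the message class can be raised only around a suspect section and then restored, so the rest of the run stays quiet:

    /* Turn on extended logging for the suspect section only. */
    lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG, LR_SWITCH_ON);

    web_url("suspect_step",
        "URL=http://www.example.com/step",    /* hypothetical URL */
        LAST);

    lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG, LR_SWITCH_OFF);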

19. How do you write user-defined functions in LoadRunner? Give a few functions you wrote in your previous project? - Before we create a user-defined function, we need to create an external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we can assign the user-defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). GetVersion, GetCurrentTime, and GetPlatform are some of the user-defined functions used in my earlier project.
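A minimal sketch of such a function (the name and return value are hypothetical; the two char* parameters follow the required format above):

    /* Compiled into a DLL that is copied to the VuGen bin directory. */
    __declspec(dllexport) char *GetVersion(char *param1, char *param2)
    {
        return "1.0.3";    /* e.g., a build version string for the AUT */
    }

In the Vuser script, such a DLL would typically be loaded with lr_load_dll("version.dll") and the function then called like any other C function.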

20. What changes can you make in the run-time settings? - The run-time settings that we make are: a) Pacing, which includes the iteration count; b) Log, under which we have disable logging, standard log, and extended log; c) Think time, where we have options such as ignore think time and replay think time; d) General, where we can set the Vusers to run as a process or as multithreaded, and whether each step is a transaction.

21. Where do you set iterations for Vuser testing? - We set iterations in the Run-Time Settings of VuGen. The navigation for this is: Run-Time Settings, Pacing tab, set number of iterations.

22. How do you perform functional testing under load? - Functionality under load can be tested by running several Vusers concurrently. By increasing the number of Vusers, we can determine how much load the server can sustain.

23. What is ramp up? How do you set this? - This option is used to gradually increase the number of Vusers or the load on the server. An initial value is set, and a value to wait between intervals can be specified. To set ramp up, go to the 'Scenario Scheduling Options'.

24. What is the advantage of running a Vuser as a thread? - VuGen provides the facility to use multithreading, which enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, taking up a large amount of memory; this limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.

25. If you want to stop the execution of your script on error, how do you do that? - The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section, and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the 'Continue on error' option in Run-Time Settings.
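A sketch of such an error condition (the step name and URL are hypothetical; 'Continue on error' must be unchecked as noted above):

    Action()
    {
        int rc;

        rc = web_url("checkout",
            "URL=http://www.example.com/checkout",   /* hypothetical */
            LAST);

        /* On a hard failure, skip the rest of Action() and jump
           straight to vuser_end(). */
        if (rc != LR_PASS) {
            lr_error_message("Checkout failed, aborting Vuser");
            lr_abort();
        }
        return 0;
    }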

26. What is the relation between response time and throughput? - The Throughput graph shows the amount of data, in bytes, that the Vusers received from the server each second. When we compare this with the transaction response time, we notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time occur at approximately the same time.

27. Explain the configuration of your systems? - The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match the overall system configuration, which includes the network infrastructure, the web server, the database server, and any other components that go with this larger system, so as to achieve the load-testing objectives.

28. How do you identify performance bottlenecks? - Performance bottlenecks can be detected by using monitors: application server monitors, web server monitors, database server monitors, and network monitors. They help find the troubled area in our scenario which causes increased response time. The measurements made are usually response time, throughput, hits per second, network delay graphs, etc.

29. If the web server, database, and network are all fine, where could the problem be? - The problem could be in the system itself, in the application server, or in the code written for the application.

30. How did you find web server related issues? - Using Web resource monitors, we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.

31. How did you find database related issues? - By running the 'Database' monitor, and with the help of the 'Data Resource Graph', we can find database related issues. For example, you can specify the resource you want to measure before running the Controller, and then you can see the database related issues.

32. Explain all the web recording options?

33. What is the difference between an Overlay graph and a Correlate graph? - Overlay graph: it overlays the content of two graphs that share a common x-axis; the left y-axis on the merged graph shows the current graph's values, and the right y-axis shows the values of the graph that was merged. Correlate graph: it plots the y-axes of two graphs against each other; the active graph's y-axis becomes the x-axis of the merged graph, and the y-axis of the graph that was merged becomes the merged graph's y-axis.

34. How did you plan the load? What are the criteria? - The load test is planned to decide the number of users, what kinds of machines we are going to use, and from where they are run. It is based on two important documents, the Task Distribution Diagram and the Transaction Profile. The Task Distribution Diagram gives us information on the number of users for a particular transaction and the time of the load; peak usage and off-usage are decided from this diagram. The Transaction Profile gives us information about the transaction names and their priority levels with regard to the scenario we are deciding.

35. What does the vuser_init action contain? - The vuser_init action contains procedures to log in to a server.

36. What does the vuser_end action contain? - The vuser_end section contains log-off procedures.
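A skeleton showing how the three sections fit together (the URLs are hypothetical; Action() is the part that iterates):

    /* Runs once per Vuser: log in. */
    vuser_init()
    {
        web_url("login",
            "URL=http://www.example.com/login", LAST);
        return 0;
    }

    /* Runs once per iteration: the business steps under test. */
    Action()
    {
        return 0;
    }

    /* Runs once per Vuser: log off. */
    vuser_end()
    {
        web_url("logout",
            "URL=http://www.example.com/logout", LAST);
        return 0;
    }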

37. What is think time? How do you change the threshold? - Think time is the time that a real user waits between actions. For example, when a user receives data from a server, the user may wait several seconds to review the data before responding; this delay is known as the think time. Changing the threshold: the threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think-time threshold in the Recording Options of VuGen.
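In the generated script, think time appears as a call like the following (the 8-second pause is just an example; whether and how it is replayed is controlled in the Run-Time Settings):

    /* Emulate a user pausing 8 seconds to review the page. */
    lr_think_time(8);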

38. What is the difference between the standard log and the extended log? - The standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. It is mainly used during debugging, when we want information about parameter substitution, data returned by the server, and advanced trace.

39. Explain the following functions: - lr_debug_message: sends a debug message to the output log when the specified message class is set. lr_output_message: sends notifications to the Controller Output window and the Vuser log file. lr_error_message: sends an error message to the LoadRunner Output window. lrd_stmt: associates a character string (usually a SQL statement) with a cursor; this function sets a SQL statement to be processed. lrd_fetch: fetches the next row from the result set.
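A side-by-side sketch of the three message functions (the message texts are placeholders):

    lr_output_message("Goes to the Controller Output window and the Vuser log");
    lr_error_message("Goes to the Output window, flagged as an error");
    lr_debug_message(LR_MSG_CLASS_EXTENDED_LOG,
                     "Shown only when the extended message class is active");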

40. Throughput - If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.

41. Types of goals in a goal-oriented scenario - LoadRunner provides you with five different types of goals in a goal-oriented scenario:
o the number of concurrent Vusers
o the number of hits per second
o the number of transactions per second
o the number of pages per minute
o the transaction response time that you want your scenario to reach

42. Analysis scenario (bottlenecks): In the Running Vusers graph correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the check-itinerary transaction gradually increases; in other words, the average response time steadily increases as the load increases. At 56 Vusers there is a sudden, sharp increase in the average response time. We say that the test broke the server; that is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.


Software tester (SQA) interview questions

These questions are used for software tester or SQA (Software Quality Assurance) positions. Refer to The Real World of Software Testing for more information on the field.

1. Top management felt that whenever there are changes in the technology being used, development schedules, etc., it was a waste of time to update the test plan. Instead, they were emphasizing that you should put your time into testing rather than working on the test plan. Your project manager asked for your opinion. You have argued that the test plan is very important and that you need to update your test plan from time to time; it's not a waste of time, and testing activities are more effective when your plan is clear. Using some metrics, how would you support your argument that the test plan should be kept consistently updated?

2. The QAI is starting a project to put the CSTE certification online. They will use an automated process for recording candidate information, scheduling candidates for exams, keeping track of results, and sending out certificates. Write a brief test plan for this new project.

3. The project had a very high cost of testing. After looking into it in detail, someone found that the testers were spending their time on software that doesn't have many defects. How will you make sure that this is correct?

4. What are the disadvantages of overtesting?

5. What happens to the test plan if the application has functionality not mentioned in the requirements?

6. You are given two scenarios to test. Scenario 1 has only one terminal for entry and processing, whereas scenario 2 has several terminals where the data input can be made. Assuming that the processing work is the same, what specific tests would you perform in scenario 2 that you would not carry out in scenario 1?

7. Your customer does not have experience in writing an acceptance test plan. How will you write one in coordination with the customer? What will be the contents of the acceptance test plan?

8. How do you know when to stop testing?

9. What can you do if the requirements are changing continuously?

10. What is the need for Test Planning?

11. What are the various status reports you will generate for developers and senior management?

12. Define and explain any three aspects of code review?

13. Why do you need test planning?

14. Explain 5 risks in an e-commerce project. Identify the personnel that must be involved in the risk analysis of a project and describe their duties. How will you prioritize the risks?

15. What are the various status reports that you need to generate for developers and senior management?

16. You have been asked to design a defect tracking system. What fields would you specify in the defect tracking system?

17. Write a sample Test Policy?

18. Explain the various types of testing after arranging them in chronological order.

19. Explain what test tools you will need for client-server testing, and why.

20. Explain what test tools you will need for Web app testing, and why.

21. Explain the pros and cons of testing done by the development team versus testing by an independent team.

22. Differentiate Validation and Verification?

23. Explain Stress, Load and Performance testing?

24. Describe automated capture/playback tools and list their benefits?

25. How can software QA processes be implemented without stifling productivity?

26. How is testing affected by object-oriented designs?

27. What is extreme programming and what does it have to do with testing?

28. Write a test transaction for a scenario where a 6.2% tax deduction has to be applied to the first $62,000 of income.
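One way to approach this (a sketch, not the only valid answer) is to treat $62,000 as the boundary and build test data around it:

    #include <stdio.h>

    /* 6.2% is deducted from the first $62,000 of income only,
       so the deduction is capped at 62000 * 0.062 = 3844.00. */
    double tax_deduction(double income)
    {
        double capped = income < 62000.0 ? income : 62000.0;
        return capped * 0.062;
    }

    int main(void)
    {
        /* Boundary values around $62,000 plus the extremes. */
        double cases[] = { 0.0, 61999.99, 62000.0, 62000.01, 100000.0 };
        int i;

        for (i = 0; i < 5; i++)
            printf("income %10.2f -> deduction %8.2f\n",
                   cases[i], tax_deduction(cases[i]));
        return 0;
    }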

29. What would be the test objective for unit testing? What are the quality measurements to assure that unit testing is complete?

30. Prepare a checklist for the developers on unit testing before the application comes to the testing department.

31. Draw a pictorial diagram of a report you would create for developers to determine project status.

32. Draw a pictorial diagram of a report you would create for users and management to determine project status.

33. What 3 tools would you purchase for your company for use in testing? Justify the need.

34. Put the following concepts in order and provide a brief description of each:
o system testing
o acceptance testing
o unit testing
o integration testing
o benefits realization testing

35. What are two primary goals of testing?

36. If your company is going to conduct a review meeting, who should be on the review committee, and why?

37. Write any three attributes which will impact the Testing Process?

38. What activity is done in acceptance testing which is not done in system testing?

39. You are a tester for testing a large system. The system data model is very large, with many attributes, and there are a lot of inter-dependencies within the fields. What steps would you use to test the system, and what are the effects of the steps you have taken on the test plan?

40. Explain and provide examples for the following black box techniques:
o Boundary Value testing
o Equivalence testing
o Error Guessing

41. What are the product standards for:
o Test Plan
o Test Script and Test Report

42. You are the test manager starting on system testing. The development team says that, due to a change in the requirements, they will be able to deliver the system for SQA 5 days past the deadline. You cannot change the resources (work hours, days, or test tools). What steps will you take to be able to finish the testing in time?

43. Your company is about to roll out an e-commerce application. It's not possible to test the application on all types of browsers on all platforms and operating systems. What steps would you take in the testing environment to reduce the business risks and commercial risks?

44. In your organization, testers are delivering code for system testing without performing unit testing. Give an example of a test policy:
o Policy statement
o Methodology
o Measurement

45. Testers in your organization are performing tests on the deliverables even after significant defects have been found. This has resulted in unnecessary testing of little value, because re-testing needs to be done after defects have been rectified. You are going to update the test plan with recommendations on when to halt testing. What recommendations are you going to make?

46. How do you measure:
o Test Effectiveness
o Test Efficiency

47. You found out that the senior testers are making more mistakes than the junior testers, and you need to communicate this to the senior testers. Also, you don't want to lose these testers. How should one go about giving constructive criticism?

48. You are assigned to be the test lead for a new program that will automate take-offs and landings at an airport. How would you write a test strategy for this new program?


SQL Servers

1. What is a major difference between SQL Server 6.5 and 7.0, platform-wise? - SQL Server 6.5 runs only on Windows NT Server. SQL Server 7.0 runs on Windows NT Server, Workstation, and Windows 95/98.

2. Is SQL Server implemented as a service or an application? - It is implemented as a service on Windows NT Server and Workstation, and as an application on Windows 95/98.

3. What is the difference in login security modes between 6.5 and 7.0? - 7.0 doesn't have Standard mode; it has only Windows NT Integrated mode and Mixed mode, which consists of both the Windows NT Integrated and SQL Server authentication modes.

4. What is the traditional network library for SQL Server? - Named Pipes.

5. What is the default TCP/IP socket assigned to SQL Server? - 1433.

6. If you encounter this kind of error message, what do you need to look into to solve the problem? "[Microsoft][ODBC SQL Server Driver][Named Pipes]Specified SQL Server not found." - First, check whether the MS SQL Server service is running on the computer you are trying to log into. Second, check the Client Configuration utility: the client and server have to be in sync.

7. What are the two options the DBA has to assign a password to sa? - a) Use a SQL statement ('new_password' below is a placeholder):

Use master
EXEC sp_password NULL, 'new_password', 'sa'

b) Use the Query Analyzer utility.

8. What is the new philosophy for database devices in SQL Server 7.0? - There are no devices anymore in SQL Server 7.0; it uses the file system now.

9. When you create a database, how is it stored? - It is stored in two separate files: one file contains the data, the system tables, and the other database objects; the other file stores the transaction log.

10. Let's assume you have data that resides on SQL Server 6.5 and you have to move it to SQL Server 7.0. How are you going to do it? - You have to use the transfer command.

DirectConnect

1. Have you ever tested 3-tier applications?

2. Do you know anything about DirectConnect software? Who is the vendor of the software? - Sybase.

3. What platform does it run on? - UNIX.

4. How did you use it? What kinds of tools have you used to test the connection? - SQL Server or Sybase client tools.

5. How do you set up permissions for a 3-tier application? - Contact the system administrator.

6. What UNIX command do you use to connect to a UNIX server? - ftp <server name>.

7. Do you know how to configure the DB2 side of the application? - Set up an application ID, create a RACF group with the tables attached to this group, and attach the ID to this RACF group.

Web Application

1. What kinds of LAN types do you know? - Ethernet networks and token ring networks.

2. What is the difference between them? - With Ethernet, any device on the network can send data in a packet to any location on the network at any time. With Token Ring, data is transmitted in 'tokens' from computer to computer in a ring or star configuration.

Steve Dalton from ExchangeTechnology: "This is such a common mistake that people make about TR I didn't want it to be propagated further!" Token Ring is the IEEE 802.5 standard that connects computers together in a closed ring. Devices on the ring cannot transmit data until permission is received from the network in the form of an electronic 'token'. The token is a short message that is passed around the network; at any time, one node owns the token and is free to send messages. As with Ethernet, the messages are packetized: packet = start_flag + address + header + message + checksum + stop_flag. The message packets circulate around the ring until the addressed recipient receives them. When the sender has finished sending the full message (normally many packets), he sends on the token.

An Ethernet message is sent in packets too. The sending protocol goes like this (sketched in code after this list):
o Wait until you see no activity on the network.
o Begin sending your message packet.
o While sending, check simultaneously for interference (another node wants to send data).
o As long as all is clear, continue sending your message.
o If you detect interference, abort your transmission, wait a random length of time, and try again.
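A toy simulation of that loop (carrier_busy and collision_detected are hypothetical stand-ins for what the network card does in hardware; here they just return random outcomes):

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-ins for the NIC's carrier sense and collision detection. */
    static int carrier_busy(void)       { return rand() % 4 == 0; }
    static int collision_detected(void) { return rand() % 8 == 0; }

    void ethernet_send(const char *packet)
    {
        for (;;) {
            while (carrier_busy())        /* wait for no activity      */
                ;
            if (!collision_detected()) {  /* sent without interference */
                printf("sent: %s\n", packet);
                return;
            }
            /* Collision: abort and retry; real CSMA/CD waits a random
               backoff time here before trying again. */
        }
    }

    int main(void)
    {
        ethernet_send("packet 1 of the message");
        return 0;
    }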

Token Ring speed is 4/16 Mbit/sec; Ethernet is 10/100 Mbit/sec. For more info see http://www.flgnetworking.com/usefuli4.html

3. What protocol do both networks use? What does it stand for? - TCP/IP, which stands for Transmission Control Protocol/Internet Protocol.

4. How many bits does an IP address consist of? - An IP address is a 32-bit number.

5. How many layers is the TCP/IP protocol made of? - Five: Application, Transport, Internet, Data Link, and Physical.

6. How do you define testing of network layers? - Review with your developers to identify the layers of the network layered architecture that your Web client and Web server application interact with. Determine the hardware and software configuration dependencies for the application under test.

7. How do you test for proper TCP/IP configuration on a Windows machine? - Run IPCONFIG /ALL on Windows NT or WINIPCFG on Windows 95, then ping the machine by IP address or by host name.

8. What is a component-based architecture? How do you approach testing of a component-based application? -
· Define how many and what kinds of components your application has.
· Identify how the server-side components are distributed.
· Identify how the server-side software components interact with each other.
· Identify how Web-to-database connectivity is implemented.
· Identify how the processing load is distributed between client and server, to prepare for load, stress, and performance testing.
· Prepare for compatibility and reliability testing.

9. How do you maintain browser settings? - Go to Control Panel, Internet Options.

10. What kinds of testing considerations do you have to keep in mind for security testing? - In a client/server system, every component carries its own security weaknesses. The primary components which need to be tested are:
· the application software
· the database
· the servers
· the client workstations
· the network

How to Hire a QA Person

What criteria do people use to select QA engineers? It's natural to think that the right kinds of people to hire are people just like you, but this can be a mistake. In fact, every job requires its own unique set of skills and personality types, and the skills that make you successful in your field may differ significantly from the skills needed for the QA job.

If you read many job posting specifications for QA roles, you'll find that they commonly describe skills that are much more appropriate for a developer, including specific knowledge of the company's unique technology. Some specifications are so unique and lofty that it seems the only qualified candidates would be former heads of development!

Realistically, the QA person you seek should have the adaptability, intelligence, and QA-specific skills that will enable them to come up to speed on your project quickly. Relevant experience includes testing procedures, test writing, puzzle solving, follow-through, communication, and the "QA mindset."

Unless they are testing a programming interface or scripting language, a QA person's role is to test the product from the end user's perspective. Contrast this with developers, who look at the product from a code perspective. Consider the difference between being focused on making the code perform in a very specific way and wondering what would happen if you did "this" instead of "that" through the user interface.

It's remarkable that the people who are assigned to interview QA candidates tend to be anything but QA people themselves. Most often, developers and HR people do the bulk of the interviewing. Yet QA is a unique discipline, and candidates need to be evaluated from a QA point of view, as would accountants, advertising staff, and other specialized professionals. QA people often have the feeling that they need to have two sets of skills: those that interview well with development engineers, and those that they actually need once they get the job.

What Not to Do

The first mistake you can make is to assume that you don't really need a QA person. Code-based unit tests do not represent the end user's interaction with the product. If you tell your boss that you "just know" it works, or base your assumptions on unit tests, she probably won't feel reassured. Before the big rollout, she is going to want metrics, generated by a professional.

The second mistake is to conduct the interview as you would for a development position. Even though more and more QA people are getting into programming, most of them aren't developers. If you give most QA people a C++ test, they will fail.

Quite often, developers are tagged and thrown into a room with a QA candidate just to round out the interview process and make sure that everyone on the team feels comfortable with the choice. But many developers only know how to interview from a developer's perspective. When asked to interview someone, they will usually give them a programming test, which might eliminate the candidates who have the best QA skills.

Unless they are testing from the API level, most QA people don't go near the code. They approach the product from a user's perspective. You are not looking for a programmer; you are looking for someone to represent the user and evaluate the product from their perspective.

What QA People Do

If the actual requirements of QA almost never involve any experience with the programming language, environment, and operating system, and have very little to do with the type of program being created, what criteria should we be looking for? If QA people aren't programmers, what do they do?

1. They Are Sleuths. Perhaps most important, a QA person needs to be an investigator in order to define the job and understand the project. There may or may not be a product specification (spec) available defining the project. Too often the spec is nonexistent, bare bones, or woefully out of date. Furthermore, the difference between the original bare-bones spec and the current but undocumented reality is known and discussed only in private development meetings at which QA people are usually not present. QA is usually not deliberately excluded, just overlooked, because development's focus is to accomplish the task, not necessarily to share their information with everyone.

Thus a QA person needs to have the investigative skills to seek out information through all available means: manuals, specs, interviews, emails, and good old trial and error. What is the product really supposed to do? What is the customer expectation? How will management know when the product is ready to ship? What measurable standards must be met? What are the individual developers working on now, and what are they most concerned about? This investigation is the job of all QA people. Some experienced developers may find this in conflict with their experience, as some organizations set development tasks in a hierarchical way, with job specifications coming down from the architect and individual contributors dealing with specific focused subsets. It may seem natural to expect QA to work the same way, but in fact each QA person needs to be an independent investigator with a broad picture. Where developers are making code conform to specifications, QA people are looking for the needle-in-a-haystack problems and unexpected scenarios, in addition to verifying that the product actually works as expected.

2. They Know How to Plan. A QA person needs to plan and set priorities. There is a definable project to test. Given all the possible combinations of expected uses, as well as all the potential unexpected scenarios including human and mechanical failure, one can imagine an infinite number of possibilities. Organizing one's activity to get the most effective results in the (usually) limited time available is of paramount importance.

Further, this is an ever-changing evaluation. In ideal circumstances, QA is on pace with or even ahead of development. QA should be included in the earliest planning, so that at the same time developers are figuring out how to build the code, QA is figuring out how to test it, anticipating resource needs and planning training. But more likely, QA is brought to the project late in its development and is racing to catch up. This requires planning and prioritization with a vengeance.

Consider also that each new build represents a threat to established code. Code that worked in previous builds can suddenly fail because of coding errors, new conflicts, miscommunication, and even compiler errors introduced in the latest build. Therefore, each new build needs to be verified again to assure that good code remains good. A useful prioritization of tasks would be to:
o spot-check the new build for overall integrity before accepting it for general testing
o verify that new bug fixes have indeed been fixed
o exercise new code that was just added, as this is the area most likely to have problems
o revalidate the established code in general as much as you can before the next build is released

Outside of straightforward functional testing, there may be requirements for performance testing, platform testing, and compatibility testing that should run in environments separate from the standard development and test environment. That's a lot of testing to manage. QA people have to be able to react at a moment's notice to get on top of sudden changes in priority, then return to the game plan again after the emergency has been met.

3. They See the Big Picture. A QA person needs the "QA mindset." Generally, a development engineer needs to be a focused person who drives toward a specific goal with a specific portion of the larger program. Highly focused and detail-oriented persons tend to do well at this. QA, however, is not a good place for a highly focused person. QA in fact needs to have multiple perspectives and the ability to approach the task at many levels, from the general to the specific, not to mention from left field. A highly focused person could miss too many things in a QA role by exhaustively testing, say, the math functions, but not noticing that printing doesn't work.

4. They Know How to Document. A major portion of the QA role involves writing. Plans need to be written, both the master plan kind and the detailed test script kind. As the project evolves, these documents need to be updated as well. A good QA person can write testing instructions so that any intelligent person with basic user skills can pick up the script and test the product unaided. Bug reports are another major communication tool, and QA people need to have the ability to define the bug in steps that are easy to understand and reproduce. It would be good to ask a candidate to bring samples of bug reports and testing instructions to the interview. Lacking these, look for any technical writing samples that show that the candidate can clearly and economically communicate technical subject matter.

5. They Care About the Whole Project. It's also important for the candidate to have a passion for getting things right. Ultimately, QA is entrusted with watching the process with a big-picture perspective, to see that it all comes together as well as possible. Everyone has that goal, but most are too busy working on their individual trees to see how the forest is doing. QA candidates should exhibit a passion for making the project successful, for fighting for the right thing when necessary, yet with the practical flexibility to know when to let go and ship the project.

How to Hire Right

So how do you evaluate a complete stranger for QA skills?

Here's one idea. Find a simple and familiar window dialog, such as a print dialog, and ask your candidates to describe how they would go about writing a test for it. Look for thoroughness and for the ability to approach the items from many angles. A good QA person will consider testing that the buttons themselves work (good QA people don't trust things that are supposed to work without question), then that the functions are properly hooked up to the buttons. They should suggest various kinds of print jobs. They should suggest testing the same dialog on various supported platforms, and exception testing if the network is down or a printer is out of paper. They should mention the appearance and perhaps the working of the dialog. Performance testing may also come up, as well as the handling of various kinds of content. The more variations on a theme they come up with, the stronger a candidate they are.

Another idea is to present them with a function-testing scenario in which there is no specification from which to develop a test plan. Ask them how they would learn about the function and the test. Their answers should include documentation, old tests, marketing people, conversations with the developers, reading the bug database, trial and error, and writing up presumptions to be presented to developers for evaluation and correction. Again, look for variety and creativity in finding solutions.

QA people need to be creative problem solvers. They like to grab onto a problem and figure out the solution by whatever means they can. They will peek at the answers of a crossword puzzle to break a deadlock. They will come up with a new solution to an old problem. They are aware of the details when finding a solution, yet they have the ability to think outside the box, to appreciate new and experimental aspects. Some successful interviewers keep one or two "brain teaser" types of puzzles on hand for the candidates to work out. Candidates are asked to solve the problem and explain their thinking as they go. Whether they find the answer is not as important; listen to their thinking process as they work. If they are able to attack the problem from many directions and not give up after the first failures, they are showing the right thinking style. Particularly look to see if they dig into the problem with real enjoyment. A true QA person would.

Of course, QA people need to be intuitively technical. They can usually program a VCR and use most technical equipment without needing the instructions (at least for basic functionality). Other people go to them for help with technical things. Listen for examples of this in their conversation. For example, if they are computer inquisitive, they don't just use software, they tinker with it. They inquire into the details and obscure corners of functionality and try things to see how they work. They may have stories of some creative accomplishment using software differently than intended by the developers, such as using a spreadsheet to write documents.

Good QA people are always learning, whether advancing their technical skills or learning something entirely new. Listen for signs of self-directed learning in the interview.

Conclusion

Good QA people have a sense of ownership and follow-through in their work. They are directly responsible for their work and its contribution to the overall process. They are good at taking a general instruction and fleshing out the details of their work on their own. They will work long and hard at it. Let them tell stories of their achievements and successes in overcoming bad situations. Look for the passion, the ownership, and the pride.

The key thing to remember is that the kinds of skills and the mindset needed for QA work are different from those needed for other roles. Spend some time getting to know good QA people in your organization and learning what characteristics make them successful. Seek out their opinions on what to look for. Develop a consistent interviewing approach that you use over and over, so that you become familiar with the range of responses from various candidates. And for goodness' sake, use your own QA people, even the new ones, to evaluate new candidates.

196. About the Author

197. Bill Bliss is a QA manager and consultant whose clients include Lotus Development, Digital, and Dragon Systems. You can send him email at bill@sqaoutsource.com or visit his Web site at http://www.sqacenter.com.

198. Mitch Allen is an author and consultant whose many clients have included Fleet, Caterpillar, IBM, Lotus Development, and Dragon Systems. He is currently working on a book about Flash programming, due to be published by the end of 2002. You can send him an email at mitch@mitchallen.com or visit his Web site at http://www.mitchallen.com.

199. What is 'Software Quality Assurance'?

200. Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'. (See the Bookstore section's 'Software QA' category for a list of useful books on Software Quality Assurance.)

201. What is 'Software Testing'?

202. Testing involves operation of a system or application under controlled conditions and evaluating the results (eg, 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'. (See the Bookstore section's 'Software Testing' category for a list of useful books on Software Testing.)

203. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.

204. What are some recent major computer system failures caused by software bugs?

205. A September 2006 news report indicated problems with software utilized in a state government's primary election, resulting in periodic unexpected rebooting of voter check-in machines (which were separate from the electronic voting machines) and in confusion and delays at voting sites. The problem was reportedly due to insufficient testing.

206. In August of 2006 a U.S. government student loan service erroneously made public the personal data of as many as 21,000 borrowers on its web site, due to a software error. The bug was fixed and the government department subsequently offered to arrange for free credit monitoring services for those affected.

207. A software error reportedly resulted in overbilling of up to several thousand dollars to each of 11,000 customers of a major telecommunications company in June of 2006. It was reported that the software bug was fixed within days, but that correcting the billing errors would take much longer.

208. News reports in May of 2006 described a multi-million dollar lawsuit settlement paid by a healthcare software vendor to one of its customers. It was reported that the customer claimed there were problems with the software they had contracted for, including poor integration of software modules, and problems that resulted in missing or incorrect data used by medical personnel.

209. In early 2006 problems in a government's financial monitoring software resulted in incorrect election candidate financial reports being made available to the public. The government's election finance reporting web site had to be shut down until the software was repaired.

210. Trading on a major Asian stock exchange was brought to a halt in November of 2005, reportedly due to an error in a system software upgrade. The problem was rectified and trading resumed later the same day.

211. A May 2005 newspaper article reported that a major hybrid car manufacturer had to install a software fix on 20,000 vehicles due to problems with invalid engine warning lights and occasional stalling. In the article, an automotive software specialist indicated that the automobile industry spends $2 billion to $3 billion per year fixing software problems.

212. Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project. In March of 2005 it was decided to scrap the entire project.

213. In July 2004 newspapers reported that a new government welfare management system in Canada costing several hundred million dollars was unable to handle a simple benefits rate increase after being put into live operation. Reportedly the original contract allowed for only 6 weeks of acceptance testing and the system was never tested for its ability to handle a rate increase.

214. Millions of bank accounts were impacted by errors due to installation of inadequately tested software code in the transaction processing system of a major North American bank, according to mid-2004 news reports. Articles about the incident stated that it took two weeks to fix all the resulting errors, that additional problems resulted when the incident drew a large number of e-mail phishing attacks against the bank's customers, and that the total cost of the incident could exceed $100 million.

215. A bug in site management software utilized by companies with a significant percentage of worldwide web traffic was reported in May of 2004. The bug resulted in performance problems for many of the sites simultaneously and required disabling of the software until the bug was fixed.

216. According to news reports in April of 2004, a software bug was determined to be a major contributor to the 2003 Northeast blackout, the worst power system failure in North American history. The failure involved loss of electrical power to 50 million customers, forced shutdown of 100 power plants, and economic losses estimated at $6 billion. The bug was reportedly in one utility company's vendor-supplied power monitoring and management system, which was unable to correctly handle and report on an unusual confluence of initially localized events. The error was found and corrected after examining millions of lines of code.

217. In early 2004, news reports revealed the intentional use of a software bug as a counter-espionage tool. According to the report, in the early 1980's one nation surreptitiously allowed a hostile nation's espionage service to steal a version of sophisticated industrial software that had intentionally-added flaws. This eventually resulted in major industrial disruption in the country that used the stolen flawed software.

218. A major U.S. retailer was reportedly hit with a large government fine in October of 2003 due to web site errors that enabled customers to view one another's online orders.

219. News stories in the fall of 2003 stated that a manufacturing company recalled all their transportation products in order to fix a software problem causing instability in certain circumstances. The company found and reported the bug itself and initiated the recall procedure, in which a software upgrade fixed the problems.

220. In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage company could proceed; the lawsuit reportedly involved claims that the company was not fixing system problems that sometimes resulted in failed stock trades, based on the experiences of 4 plaintiffs during an 8-month period. A previous lower court's ruling that "...six miscues out of more than 400 trades does not indicate negligence." was invalidated.

221. In April of 2003 it was announced that a large student loan company in the U.S. made a software error in calculating the monthly payments on 800,000 loans. Although borrowers were to be notified of an increase in their required payments, the company will still reportedly lose $8 million in interest. The error was uncovered when borrowers began reporting inconsistencies in their bills.

222. News reports in February of 2003 revealed that the U.S. Treasury Department mailed 50,000 Social Security checks without any beneficiary names. A spokesperson indicated that the missing names were due to an error in a software change. Replacement checks were subsequently mailed out with the problem corrected, and recipients were then able to cash their Social Security checks.

223. In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.

224. A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems.

225. According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash.

226. In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings.

227. News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn't work.

228. In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district's CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.

229. A review board concluded that the NASA Mars Polar Lander failed in December 1999 due to software problems that caused improper functioning of retro rockets utilized by the Lander as it entered the Martian atmosphere.

230. In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.
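To make the lesson concrete: a mix-up between English and metric units is invisible when both quantities travel through the code as plain numbers. Below is a minimal, hypothetical C++ sketch (the type and function names are invented for illustration, not taken from any mission software) showing how distinct unit types turn this class of bug into a compile-time error:

    #include <iostream>

    // Illustrative strong types: with plain doubles, pound-seconds and
    // newton-seconds look identical; with distinct types, mixing them
    // fails to compile.
    struct PoundSeconds  { double value; };
    struct NewtonSeconds { double value; };

    // Explicit conversion: 1 lbf*s is approximately 4.448222 N*s.
    NewtonSeconds toMetric(PoundSeconds imperial) {
        return NewtonSeconds{imperial.value * 4.448222};
    }

    // Downstream code accepts only the metric type.
    void recordImpulse(NewtonSeconds impulse) {
        std::cout << "impulse: " << impulse.value << " N*s\n";
    }

    int main() {
        PoundSeconds raw{12.5};
        // recordImpulse(raw);           // would not compile: wrong unit type
        recordImpulse(toMetric(raw));    // conversion must be explicit
        return 0;
    }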

231. Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.

232. In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.

233. A small town in Illinois in the U.S. received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues.

234. In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready.

235. The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999, according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.

236. In April of 1998 a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug.

237. January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills.

238. In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.

239. A retail store chain filed suit in August of 1997 against a transaction processing system vendor (not a credit card company) due to the software's inability to handle credit cards with year 2000 expiration dates.

240. In August of 1997 one of the leading consumer credit reporting companies reportedly shut down their new public web site after less than two days of operation due to software problems. The new site allowed web site visitors instant access, for a small fee, to their personal credit reports. However, a number of initial users ended up viewing each other's reports instead of their own, resulting in irate customers and nationwide publicity. The problem was attributed to "...unexpectedly high demand from consumers and faulty software that routed the files to the wrong computers."

241. In November of 1996, newspapers reported that software bugs caused the 411 telephone information system of one of the U.S. RBOC's to fail for most of a day. Most of the 2000 operators had to search through phone books instead of using their 13,000,000-listing database. The bugs were introduced by new software modifications, and the problem software had been installed on both the production and backup systems. A spokesman for the software vendor reportedly stated that 'It had nothing to do with the integrity of the software. It was human error.'

242. On June 4 1996 the first flight of the European Space Agency's new Ariane 5 rocket failed shortly after launching, resulting in an estimated uninsured loss of a half billion dollars. It was reportedly due to the lack of exception handling for a floating-point error in a conversion from a 64-bit floating-point number to a 16-bit signed integer.
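The failure mode described - an out-of-range value silently converted to a 16-bit integer, with no exception handler to catch it - can be sketched in a few lines of C++ (a simplified illustration, not the actual flight code, which was written in Ada):

    #include <cstdint>
    #include <iostream>
    #include <limits>
    #include <stdexcept>

    // Convert a double to a 16-bit signed integer, throwing instead of
    // silently truncating when the value is out of range.
    int16_t toInt16Checked(double value) {
        if (value > std::numeric_limits<int16_t>::max() ||
            value < std::numeric_limits<int16_t>::min()) {
            throw std::range_error("value does not fit in 16 bits");
        }
        return static_cast<int16_t>(value);
    }

    int main() {
        try {
            // A value far beyond the 16-bit range (max 32767).
            int16_t converted = toInt16Checked(400000.0);
            std::cout << converted << "\n";
        } catch (const std::range_error& e) {
            // With no handler like this one, the program would abort.
            std::cout << "conversion rejected: " << e.what() << "\n";
        }
        return 0;
    }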

243. Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.

244. Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a '...funny feeling in my gut', decided the apparent missile attack was a false alarm. The filtering software code was rewritten.

245. Does every software project need testers?

246. While all projects will benefit from testing, some projects may not require independent test staff to succeed.

247. Which projects may not need independent test staff? The answer depends on the size and context of the project, the risks, the development methodology, the skill and experience of the developers, and other factors. For instance, if the project is a short-term, small, low-risk project, with highly experienced programmers utilizing thorough unit testing or test-first development, then test engineers may not be required for the project to succeed.

248. In some cases an IT organization may be too small or new to have a testing staff even if the situation calls for it. In these circumstances it may be appropriate to instead use contractors or outsourcing, or adjust the project management and development approach (by switching to more senior developers and agile test-first development, for example). Inexperienced managers sometimes gamble on the success of a project by skipping thorough testing or having programmers do post-development functional testing of their own work, a decidedly high-risk gamble.

249. For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually necessary. As in any business, the use of personnel with specialized skills enhances an organization's ability to be successful in large, complex, or difficult tasks. It allows for both a) deeper and stronger skills and b) the contribution of differing perspectives. For example, programmers typically have the perspective of 'what are the technical issues in making this functionality work?'. A test engineer typically has the perspective of 'what might go wrong with this functionality, and how can we ensure it meets expectations?'. Technical people who can be highly effective in approaching tasks from both of those perspectives are rare, which is why, sooner or later, organizations bring in test specialists.

250. Why does software have bugs?

251. miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).

252. software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity.

253. programming errors - programmers, like anyone else, can make mistakes.
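A classic example of the kind of small mistake that slips in easily is the off-by-one loop bound. The sketch below (illustrative only) shows the correct loop, with a comment on the common error:

    #include <iostream>

    int main() {
        const int kCount = 5;
        int totals[kCount] = {10, 20, 30, 40, 50};
        int sum = 0;
        // A common mistake is writing 'i <= kCount', which reads one
        // element past the end of the array (undefined behavior in C++).
        for (int i = 0; i < kCount; ++i) {
            sum += totals[i];
        }
        std::cout << "sum = " << sum << "\n";  // prints: sum = 150
        return 0;
    }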

254. changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control - see 'What can be done if requirements are changing continuously?' in the LFAQ. Also see information about 'agile' approaches such as XP, in Part 2 of the FAQ.

255. time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

256. egos - people prefer to say things like:

257. 'no problem', 'piece of cake', 'I can whip that out in a few hours', 'it should be easy to update that old code' - instead of: 'that adds a lot of complexity and we could end up making a lot of mistakes', 'we have no idea if we can do that; we'll wing it', 'I can't estimate how long it will take until I take a close look at it', 'we can't figure out what that old spaghetti code did in the first place'. If there are too many unrealistic 'no problem's', the result is bugs.

258. poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').

259. software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

260. How can new Software QA processes be introduced in an existing organization?

261. A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.

262. Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.

263. For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.

264. The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation, or, in 'agile'-type environments, extensive continuous coordination with end-users, (b) design inspections and code inspections, and (c) post-mortems/retrospectives.

265. Other possibilities include incremental self-managed team approaches such as 'Kaizen' methods of continuous process improvement, the Deming-Shewhart Plan-Do-Check-Act cycle, and others.

266. Also see 'How can QA processes be implemented without reducing productivity?' in the LFAQ section.

267. (See the Bookstore section's 'Software QA', 'Software Engineering', and 'Project Management' categories for useful books with more information.)

268. What is verification? validation?

269. Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

270. What is a 'walkthrough'?

271. A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

272. What's an 'inspection'?

273. An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for organizations to get serious about quality assurance?'. Their skill may have low visibility, but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.

274. What kinds of testing should be considered?

275. Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

276. White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.
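As a small illustration of the white-box approach, the hypothetical C++ sketch below chooses test inputs from the structure of the code itself, so that every branch of the condition (including the boundary value) is exercised:

    #include <cassert>

    // Function under test: free shipping for orders of $50.00 or more
    // (amounts in integer cents to keep comparisons exact).
    int shippingCents(int orderCents) {
        if (orderCents >= 5000) {
            return 0;    // branch A: free shipping
        }
        return 499;      // branch B: flat fee
    }

    int main() {
        // Inputs chosen from the code's internal logic: one case per
        // branch, plus the boundary value of the condition.
        assert(shippingCents(5000) == 0);    // boundary, branch A
        assert(shippingCents(9999) == 0);    // branch A
        assert(shippingCents(4999) == 499);  // branch B
        return 0;  // reaching here means all checks passed
    }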

277. unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
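A minimal sketch of such a test driver, using plain assert rather than any particular test framework (the function under test is invented for illustration):

    #include <cassert>
    #include <string>

    // Module under test: trims leading and trailing spaces.
    std::string trim(const std::string& s) {
        const auto first = s.find_first_not_of(' ');
        if (first == std::string::npos) return "";
        const auto last = s.find_last_not_of(' ');
        return s.substr(first, last - first + 1);
    }

    // Simple test driver: exercises the function in isolation with
    // normal, boundary, and degenerate inputs.
    int main() {
        assert(trim("  hello  ") == "hello");
        assert(trim("hello") == "hello");   // nothing to trim
        assert(trim("   ") == "");          // all spaces
        assert(trim("") == "");             // empty input
        return 0;  // all unit tests passed
    }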

278. incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

279. integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

280. functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

281. system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

282. end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

283. sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

284. regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

285. acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

286. load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
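The mechanics can be sketched generically: ramp up the number of concurrent clients and measure how long each batch of requests takes, looking for the point where response time degrades disproportionately. In the hypothetical C++ sketch below, handleRequest is a stand-in for the real operation under test:

    #include <chrono>
    #include <iostream>
    #include <thread>
    #include <vector>

    // Stand-in for the operation being load-tested.
    void handleRequest() {
        std::this_thread::sleep_for(std::chrono::milliseconds(2));
    }

    int main() {
        // Double the number of concurrent clients each round and time
        // a fixed amount of work per client.
        for (int clients = 1; clients <= 8; clients *= 2) {
            const auto start = std::chrono::steady_clock::now();
            std::vector<std::thread> workers;
            for (int c = 0; c < clients; ++c) {
                workers.emplace_back([] {
                    for (int i = 0; i < 50; ++i) handleRequest();
                });
            }
            for (auto& w : workers) w.join();
            const auto elapsed = std::chrono::duration_cast<
                std::chrono::milliseconds>(
                    std::chrono::steady_clock::now() - start).count();
            std::cout << clients << " concurrent clients: "
                      << elapsed << " ms\n";
        }
        return 0;
    }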

287. stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

288. performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

289. usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

290. install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

291. recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

292. failover testing - typically used interchangeably with 'recovery testing'.

293. security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.

294. compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

295. exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

296. ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

297. context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.

298. user acceptance testing - determining if software is satisfactory to an end-user or customer.

299. comparison testing - comparing software weaknesses and strengths to competing products.

300. alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

301. beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

302. mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
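The idea can be shown in miniature: deliberately introduce a small change (a 'mutant') and replay the existing tests; if the tests still pass, they were too weak to detect that bug. A hypothetical C++ sketch:

    #include <iostream>

    // Original code under test.
    bool isAdult(int age) { return age >= 18; }

    // A hand-made 'mutant': the >= deliberately changed to >.
    bool isAdultMutant(int age) { return age > 18; }

    // The existing test cases, replayed against either implementation.
    bool suitePasses(bool (*impl)(int)) {
        // A weak suite: the boundary value 18 is never checked.
        return impl(30) == true && impl(5) == false;
    }

    int main() {
        std::cout << "original passes: " << suitePasses(isAdult) << "\n";
        // If the mutant also passes, the suite failed to 'kill' it,
        // revealing the missing boundary test at age 18.
        std::cout << "mutant survives: " << suitePasses(isAdultMutant)
                  << "\n";
        return 0;
    }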

303. (See the Bookstore section's 'Software Testing' category for useful books on Software Testing.)

304. What are 5 common problems in the software development process?

305. poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.

306. unrealistic schedule - if too much work is crammed into too little time, problems are inevitable.

307. inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.

308. featuritis - requests to pile on new features after development is underway; extremely common.

309. miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.

310. (See the Bookstore section's 'Software QA', 'Software Engineering', and 'Project Management' categories for useful books with more information.)

311. What are 5 common solutions to software development problems?

312. solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements. In 'agile'-type environments, continuous close coordination with customers/end-users is necessary.

313. realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.

314. adequate testing - start testing early on, re-test after fixes or changes, and plan for adequate time for testing and bug-fixing. 'Early' testing ideally includes unit testing by developers and built-in testing and diagnostic capabilities.

315. stick to initial requirements as much as possible - be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. This will provide them a higher comfort level with their requirements decisions and minimize excessive changes later on.

316. communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - groupware, wikis, bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes and/or continuous communication with end-users if possible to clarify expectations.

317. (See the Bookstore section's 'Software QA', 'Software Engineering', and 'Project Management' categories for useful books with more information.)

318. What is software 'quality'?

319. Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free. (See the Bookstore section's 'Software QA' category for useful books with more information.)

320. What is 'good code'?

321. 'Good code' is code that works, is bug-free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards.

322. For C and C++ coding, here are some typical ideas to consider in setting rules/standards (a short illustrative code sketch follows the list); these may or may not apply to a particular situation:

323. minimize or eliminate use of global variables.

324. use descriptive function and method names - use both upper and lower case, avoid abbreviations, and use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.

325. use descriptive variable names - use both upper and lower case, avoid abbreviations, and use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.

326. function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.

327. function descriptions should be clearly spelled out in comments preceding a function's code.

328. organize code for readability.

330. use whitespace generously - vertically and horizontally.

330. each line of code should contain 70 characters max.

331. one code statement per line.

332. coding style should be consistent throughout a program (eg, use of brackets, indentations, naming conventions, etc.)

333. in adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.

334. no matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or if possible a separate flow chart and detailed program documentation.

335. make extensive use of error handling procedures and status and error logging.

336. for C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading.)

337. for C++, keep class methods small; less than 50 lines of code per method is preferable.

338. for C++, make liberal use of exception handlers.
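To tie several of these ideas together - a descriptive name, a header comment, small size, and explicit error handling - here is a short illustrative C++ fragment (the function and file names are invented for the example):

    #include <fstream>
    #include <iterator>
    #include <stdexcept>
    #include <string>

    // Reads the entire contents of a configuration file into a string.
    // Throws std::runtime_error (naming the file) if the file cannot
    // be opened, rather than silently returning an empty result.
    std::string readConfigurationFile(const std::string& configFilePath) {
        std::ifstream configFile(configFilePath);
        if (!configFile) {
            throw std::runtime_error(
                "cannot open configuration file: " + configFilePath);
        }
        return std::string(
            (std::istreambuf_iterator<char>(configFile)),
            std::istreambuf_iterator<char>());
    }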

339. What is 'good design'?

340. 'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. (See further discussion of functional and internal design in 'What's the big deal about requirements?' in FAQ #2.) For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help; some common rules-of-thumb include:

341. the program should act in a way that least surprises the user.

342. it should always be evident to the user what can be done next and how to exit.

343. the program shouldn't let the users do something stupid without warning them.

344. What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

345. SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.

346. CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.

347. Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes are in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was Software Quality Assurance.

348. ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.org/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://e-standards.asq.org/

349. ISO 9126 defines six high-level quality characteristics that can be used in software evaluation: functionality, reliability, usability, efficiency, maintainability, and portability.

350. IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.

351. ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

352. Other software development/IT management process assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.

353. See the 'Other Resources' section for further information available on the web.

354. What is the 'software life cycle'?

355. The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects. (See the Bookstore section's 'Software QA', 'Software Engineering', and 'Project Management' categories for useful books with more information.)
