Tuesday, 12 March 2013

Skills and knowledge of a game tester




To become a computer games tester you will need:
  • excellent analytical and problem-solving skills
  • a passion for playing computer games and an ability to play at high levels
  • good written and spoken communication skills
  • an understanding of quality assurance processes
  • the ability to work well as part of a team
  • a tactful manner
  • good negotiation skills
  • the ability to work under pressure and meet deadlines
  • patience and persistence, for repetitive work
  • a methodical and disciplined approach
  • excellent attention to detail
  • a good knowledge of the games market
  • a willingness to work flexibly
  • good office computer skills.


Foreign language skills are also useful for testing games aimed at overseas markets.
Typical responsibilities of a game tester include:
  • coordinating with the programmers and producers of the game
  • performing tests on all new games throughout the development cycle
  • identifying errors in the game application and making the changes needed to resolve issues
  • managing all programs and conducting trial runs of applications, ensuring that results meet client requirements
  • documenting all program development procedures and any changes made to them
  • maintaining programs on a regular basis, such as storing and retrieving data
  • managing and tracking product inventory and controlling equipment.

What is the work of a game tester?



Testing is a vital part of producing a computer game. As well as finding and recording programming faults (bugs), you would also play the role of the game’s first public user. You would report on its playability and recommend improvements.

As part of a team of quality assurance (QA) testers, you would:
  • play games in detail and in as many ways as possible
  • test different levels and versions of a game
  • check its performance against what the designer intended
  • compare the game against others on the market
  • note problems and suggest improvements
  • try to work out what is causing a problem
  • try to recreate the problem, recording the steps you took
  • check accessibility options
  • check for spelling mistakes and copyright issues such as logos
  • check the text on packaging and in instruction manuals
  • enter each 'bug report' into a quality management system
  • work to strict deadlines.
You would work closely with programmers, artists and designers before a game is released, and with customer support teams after it is on the market. Some jobs may involve checking and translating in-game instructions and manuals for overseas markets.
A good games tester has the ability to work under pressure and meet deadlines. You will also need patience, persistence and good office computer skills.


Process of game testing


A typical bug report progresses through the testing process as shown below (a short sketch of a report entry follows the list):
  • Identification. Incorrect program behavior is analyzed and identified as a bug.
  • Reporting. The bug is reported to the developers using a defect tracking system. The circumstances of the bug and steps to reproduce are included in the report. Developers may request additional documentation such as a real-time video of the bug's manifestation.
  • Analysis. The developer responsible for the bug, such as an artist, programmer or game designer, checks the malfunction. This is outside the scope of game tester duties, although inconsistencies in the report may require more information or evidence from the tester.
  • Verification. After the developer fixes the issue, the tester verifies that the bug no longer occurs. Not all bugs are addressed by the developer, for example, some bugs may be claimed as features (expressed as "NAB" or "not a bug"), and may also be "waived" (given permission to be ignored) by producers, game designers, or even lead testers, according to company policy.
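As a rough illustration, the Python sketch below shows how one bug report entry might be represented in a defect tracking system. The field names and status values are assumptions for illustration only, not the format of any particular tool.

from dataclasses import dataclass, field
from enum import Enum


class BugStatus(Enum):
    # Status values mirroring the progression above; the names are illustrative assumptions
    IDENTIFIED = "identified"
    REPORTED = "reported"
    IN_ANALYSIS = "in analysis"
    FIXED = "fixed"
    VERIFIED = "verified"
    NOT_A_BUG = "NAB"
    WAIVED = "waived"


@dataclass
class BugReport:
    # One entry in a hypothetical defect tracking system
    summary: str
    steps_to_reproduce: list
    observed: str
    expected: str
    status: BugStatus = BugStatus.IDENTIFIED
    attachments: list = field(default_factory=list)  # e.g. a real-time video of the bug


# The tester files the report, the developer fixes the fault, and the tester verifies.
bug = BugReport(
    summary="Player falls through the floor in level 3",
    steps_to_reproduce=["Load level 3", "Sprint into the north-east corner", "Jump"],
    observed="Character clips through the geometry and respawns",
    expected="Character collides with the wall",
)
bug.status = BugStatus.REPORTED
# ...developer fixes the collision mesh and marks the issue fixed...
bug.status = BugStatus.VERIFIED  # tester confirms the bug no longer occurs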


What is game testing?


Game testing, a subset of game development, is a software testing process for quality control of video games. The primary function of game testing is the discovery and documentation of software defects (aka bugs). 

Interactive entertainment software testing is a highly technical field requiring computing expertise, analytic competence, critical evaluation skills, and endurance. 

In recent years the field of game testing has come under fire for being excessively strenuous and unrewarding, both financially and emotionally. 


In the early days of computer and video games, the developer was in charge of all the testing. No more than one or two testers were required due to the limited scope of the games. In some cases, the programmers could handle all the testing.
As games became more complex, a larger pool of QA resources, called "Quality Assessment" or "Quality Assurance", became necessary. Most publishers employ a large QA staff for testing various games from different developers. Despite the large QA infrastructure most publishers have, many developers retain a small group of testers to provide on-the-spot QA.
Most game developers now rely on their highly technical and game-savvy testers to find glitches and bugs in either the programming code or the graphic layers. Game testers usually have a background in playing a variety of games on a multitude of platforms. They must be able to note and reference any problems they find in detailed reports, meet assignment deadlines and have the skill to complete game titles on their most difficult settings. The position of game tester is usually highly stressful and competitive and pays little, yet it is highly sought after because it serves as a doorway into a rapidly growing industry.
A common misconception is that game testers simply enjoy playing alpha or beta versions of the game and occasionally report the bugs they happen to find. In reality, game testing is highly focused on finding bugs using established and often tedious methodologies before the alpha version.



Quality assurance is a critical component of game development, though the video game industry does not have a standard methodology; instead, developers and publishers have their own methods. Small developers generally do not have dedicated QA staff, whereas large companies may employ full-time QA teams. High-profile commercial games are professionally and efficiently tested by the publisher's QA department.



Wednesday, 27 February 2013

Testing FAQ



Testing presents an interesting anomaly for the software engineer. Earlier in the software process, the engineer attempts to build software from an abstract concept to a tangible implementation. Now comes testing. The engineer creates a series of test cases that are intended to “demolish” the software that has been built. In fact, testing is the one step in the software engineering process that could be viewed as destructive rather than constructive.
Software developers are by their nature constructive people. Testing requires that the developer discard preconceived notions of the correctness of the software just developed and overcome the conflict of interest that occurs when errors are uncovered.
Testing Principles
Davis suggests a set of testing principles, which have been adapted for use:
1.      All tests should be traceable to customer requirements.
2.      Tests should be planned long before testing begins.
3.      The Pareto principle applies to software testing.
4.      Testing should begin “in the small” and progress toward testing “in the large”.
5.      Exhaustive testing is not possible.
6.      To be most effective, testing should be conducted by an independent third party.
Pareto Principle: Simply put, the Pareto principle implies that 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules.
Testability
The checklist that follows provides a set of characteristics that lead to testable software.
1.      Operability – The better it works, the more efficiently it can be tested.
2.      Observability – What you see is what you test.
3.      Controllability – The better we can control the software, the more the testing can be automated and optimized.
4.      Decomposability – By controlling the scope of testing we can more quickly isolate problems and perform smarter retesting.
5.      Simplicity – The less there is to test, the more quickly we can test it.
6.      Stability – The fewer the changes, the fewer the disruptions to testing.
7.      Understandability – The more information we have, the smarter we will test.
Testing Methods
White Box Testing
White box testing, sometimes called glass box testing, is a test case design method that uses the control structure of the procedural design to derive test cases (a short sketch follows the list below). Using white box testing methods, the test engineer can derive test cases that:
1.      Guarantee that all independent paths within a module have been exercised at least once.
2.      Exercise all logical decisions on their true or false sides.
3.      Execute all loops at their boundaries and within their operational bounds.
4.      Exercise internal data structures to assure their validity.
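For example, here is a minimal Python sketch of deriving white box test cases from the control structure of a small function. The function and its test values are invented purely for illustration.

def classify_scores(scores):
    # Return "empty", "all passing" or "has failures" for a list of level scores.
    if not scores:                 # decision 1
        return "empty"
    for s in scores:               # loop
        if s < 50:                 # decision 2
            return "has failures"
    return "all passing"


# Test cases derived from the control structure rather than the specification:
assert classify_scores([]) == "empty"                # decision 1 exercised on its true side
assert classify_scores([60, 70]) == "all passing"    # decision 1 false, decision 2 never true
assert classify_scores([60, 40]) == "has failures"   # decision 2 exercised on its true side
assert classify_scores([55]) == "all passing"        # loop executed exactly once, at its boundary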
Basis Path Testing
Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.
Flow Graph Notation
The Flow Graph depicts logical control flow.
Cyclomatic Complexity
Cyclomatic Complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When this metric is used in the context of the basis path testing method, the value computed for Cyclomatic Complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
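As a small worked example on the illustrative classify_scores function sketched earlier, V(G) can be computed as E - N + 2 from its flow graph or, equivalently, as the number of decision statements plus one. The graph counts below are hand-drawn assumptions for that sketch.

def cyclomatic_complexity(edges, nodes, components=1):
    # V(G) = E - N + 2P for a flow graph with E edges, N nodes and P connected components
    return edges - nodes + 2 * components


# A hand-drawn flow graph of classify_scores has 7 nodes and 9 edges, so
# V(G) = 9 - 7 + 2 = 4. The same value follows from "decision statements + 1":
# three decisions (the outer if, the loop test and the inner if) plus one.
# At most four independent paths are therefore needed in the basis set,
# matching the four test cases listed in the earlier sketch.
assert cyclomatic_complexity(edges=9, nodes=7) == 4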
Black Box Testing
Black box testing focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black box testing is not an alternative to white box techniques.
Black box testing attempts to find errors in the following categories:
1.      Incorrect or missing functions.
2.      Interface errors.
3.      Errors in data structures or external data base access.
4.      Performance errors, and
5.      Initialization and termination errors.
Equivalence Partitioning
Equivalence Partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition (a short sketch follows the guidelines below).
Equivalence classes may be defined according to the following guidelines:
1.      If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2.      If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3.      If an input condition specifies a member of a set, one valid and one invalid equivalence classes are defined.
4.      If an input condition is Boolean, one valid and one invalid class are defined.
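A minimal sketch, assuming a hypothetical input condition that an age must lie in the range 18 to 65, of how guideline 1 yields one valid and two invalid classes (the function and the range are inventions for illustration):

def accept_age(age):
    # Toy function under test; the 18..65 range is an invented requirement
    return 18 <= age <= 65


# Guideline 1 (a range) gives one valid and two invalid equivalence classes;
# one representative value is drawn from each class.
equivalence_classes = {
    "valid: 18 <= age <= 65": 30,
    "invalid: age < 18": 10,
    "invalid: age > 65": 70,
}

for label, representative in equivalence_classes.items():
    print(label, "->", accept_age(representative))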
Boundary Value Analysis
Boundary Value Analysis (BVA) leads to a selection of test cases that exercise bounding values. BVA is a test case design technique that complements equivalence partitioning. Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the edges of the class.
The guidelines for BVA are similar in many respects to those provided for equivalence partitioning (a sketch follows these guidelines).
1.  If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and with values just above and just below a and b.
2.  If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below minimum and maximum are tested.
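Continuing the same hypothetical 18-to-65 range from the equivalence partitioning sketch, a boundary value analysis sketch picks values at, just below and just above the bounds:

def accept_age(age):
    # Same toy range check (18..65) as in the equivalence partitioning sketch
    return 18 <= age <= 65


# BVA exercises the bounds a = 18 and b = 65, plus the values just outside them.
for age, expected in [(17, False), (18, True), (19, True),
                      (64, True), (65, True), (66, False)]:
    assert accept_age(age) == expected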
Inspection
A manual testing technique in which program documents (requirements, design, source code, user manuals, etc.) are examined in a very formal and disciplined manner to discover errors, violations of standards and other problems. Checklists are a typical vehicle used in accomplishing this technique.
Walk Through
A manual error-detection technique in which program logic is traced by a group using a small set of test cases, while the state of program variables is manually monitored to analyze the programmer’s logic and assumptions.
Review
A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers or other interested parties for comment or approval. Types of review include code review, design review, requirements review, etc.
Cyclomatic Complexity
The number of independent paths through a program. The cyclomatic complexity of a program is equal to the number of decision statements plus 1.
Quality Control
The operational techniques and procedures used to achieve quality requirements.
Types of Testing
The following are the major types of testing.
1.      Integration Testing.
2.      System Testing.
3.      Usability Testing.
4.      Compatibility Testing.
5.      Reliability Testing.
6.      Test Automation.
7.      Performance Testing.
8.      Supportability Testing.
9.      Security and Access Control Testing.
10.   Content Management Testing.
11.   API Testing.
Let us look at some basic definitions of testing.
Testing
The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. (2) The process of analyzing a software item to detect the differences between existing and required conditions, i.e., bugs, and to evaluate the features of the software items. See: dynamic analysis, static analysis, software engineering.
Acceptance Testing
Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. Contrast with testing, development; testing, operational. See: testing, qualification, and user acceptance testing.
Alpha [a] Testing
Acceptance testing performed by the customer in a controlled environment at the developer's site. The software is used by the customer in a setting approximating the target environment with the developer observing and recording errors and usage problems.
Assertion Testing
A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes. See: assertion checking, instrumentation.
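A brief Python sketch of this idea follows; the routine, its variable names and the rules asserted are invented for illustration.

def apply_damage(health, damage):
    # Toy routine instrumented with assertions; names and rules are assumptions
    assert health >= 0 and damage >= 0, "inputs must be non-negative"
    new_health = max(health - damage, 0)
    # Assertion about the relationship between program variables,
    # checked as the program executes:
    assert 0 <= new_health <= health, "health may never increase or go negative"
    return new_health


apply_damage(100, 30)   # assertions hold
apply_damage(10, 50)    # assertions still hold: health is clamped at 0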
Beta [B] Testing
Acceptance testing performed by the customer in a live application of the software, at one or more end user sites, in an environment not controlled by the developer. (2) For medical device software such use may require an Investigational Device Exemption [IDE] or Institutional Review Board [IRB] approval.
Boundary Value Analysis (BVA)
A testing technique using input values at, just below, and just above the defined limits of an input domain; and with input values causing outputs to be at, just below, and just above the defined limits of an output domain. See: boundary value analysis; testing, stress.
Branch Testing
Testing technique to satisfy coverage criteria which require that for each decision point, each possible branch [outcome] be executed at least once. Contrast with testing, path; testing, statement. See: branch coverage.
Compatibility Testing
The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems. See: different software system analysis; testing, integration; testing, interface.
Formal Testing
Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.
Functional Testing
Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and execution conditions. (2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding predicted results. Syn: black-box testing, input/output driven testing. Contrast with testing, structural.
Integration Testing
An orderly progression of testing in which software elements, hardware elements, or both are combined and tested, to evaluate their interactions, until the entire system has been integrated.
Interface Testing.
Testing conducted to evaluate whether systems or components pass data and control correctly to one another. Contrast with testing, unit; testing, system. See: testing, integration.
Invalid case Testing
A testing technique using erroneous [invalid, abnormal, or unexpected] input values or conditions. See: equivalence class partitioning.
Mutation Testing
A testing methodology in which two or more program mutations are executed using the same test cases to evaluate the ability of the test cases to detect differences in the mutations.
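A small sketch of this idea, using an invented range check as the program and a single hand-made mutation of it:

def in_bounds(x, low, high):
    # Original program
    return low <= x <= high


def in_bounds_mutant(x, low, high):
    # Program mutation: "<=" changed to "<" at the upper bound
    return low <= x < high


# The same test cases are executed against both versions; a good test suite
# "kills" the mutant by producing a different result for at least one case.
cases = [(5, 0, 10), (10, 0, 10), (-1, 0, 10)]
for case in cases:
    if in_bounds(*case) != in_bounds_mutant(*case):
        print("mutant killed by input", case)   # (10, 0, 10) exposes the difference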


Operational Testing
Testing conducted to evaluate a system or component in its operational environment. Contrast with testing, development; testing, acceptance. See: testing, system.
Design based functional Testing.
The application of test data derived through functional analysis extended to include design functions as well as requirement functions. See: testing, functional.
Development Testing
Testing conducted during the development of a system or component, usually in the development environment by the developer. Contrast with testing, acceptance; testing, operational.
Exhaustive Testing
Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.
Parallel Testing
Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. Syn: parallel run.
Path Testing
Testing to satisfy coverage criteria that each logical path through the program be tested. Often paths through the program are grouped into a finite set of classes. One path from each class is then tested. Syn: path coverage. Contrast with testing, branch; testing, statement; branch coverage; condition coverage; decision coverage.
Performance Testing
Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements.
Qualification Testing.
Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: testing, acceptance; testing, system.


Regression Testing
Rerunning test cases which a program has previously executed correctly in order to detect errors spawned by changes or corrections made during software development and maintenance.
Special case Testing.  
A testing technique using input values that seem likely to cause program errors; e.g., "0", "1", NULL, empty string. See: error guessing.
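For instance, a short sketch with an invented parser showing error-guessing inputs of this kind (the function and its rules are assumptions for illustration):

def parse_level_name(raw):
    # Toy parser used only to illustrate error guessing; the rules are invented
    if raw is None or raw.strip() == "":
        raise ValueError("level name must not be empty")
    return raw.strip()


# Inputs that experience suggests are likely to cause errors:
suspicious_inputs = [None, "", "   ", "0", "1"]
for value in suspicious_inputs:
    try:
        print(repr(value), "->", parse_level_name(value))
    except ValueError as exc:
        print(repr(value), "-> rejected:", exc)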
Statement Testing
Testing to satisfy the criterion that each statement in a program be executed at least once during program testing. Syn: statement coverage. Contrast with testing, branch; testing, path; branch coverage; condition coverage; decision coverage; multiple condition coverage; path coverage.
Storage Testing
This is a determination of whether or not certain processing conditions use more storage [memory] than estimated.
Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. Syn: testing, boundary value.
Structural Testing
Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic driven testing.
System Testing.  
The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. Such testing may be conducted in both the development environment and the target environment.
Test Oracle
'Test Oracle' is a mechanism, different from the program itself, that can be used to check the correctness of the output of the program for the test cases.
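A minimal sketch of this idea, using a hand-rolled insertion sort as the program under test and Python's built-in sort as the oracle (both choices are illustrative assumptions):

import random


def insertion_sort(items):
    # Program under test: a hand-rolled sort (purely illustrative)
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result


def oracle(items):
    # A mechanism different from the program itself, used to check its output
    return sorted(items)


for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(20)]
    assert insertion_sort(data) == oracle(data), "output disagrees with the oracle"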


Unit Testing
Testing of a module for typographic, syntactic, and logical errors, for correct implementation of its design, and for satisfaction of its requirements. (2) Testing conducted to verify the implementation of the design for one software element; e.g., a unit or module; or a collection of software elements. Syn: component testing.
Usability Testing
Tests designed to evaluate the machine/user interface. Are the communication devices designed in a manner such that the information is displayed in an understandable fashion, enabling the operator to correctly interact with the system?
Valid case Testing
A testing technique using valid [normal or expected] input values or conditions. See: equivalence class partitioning.
Volume Testing
Testing designed to challenge a system's ability to manage the maximum amount of data over a period of time. This type of testing also evaluates a system's ability to handle overload situations in an orderly fashion.