Friday
Requirement traceability and its importance
In many organizations, testing starts after the execution/coding phase of the project. If the organization really wants to get the benefits of testing, then the tester should be involved right from the beginning of the requirement phase.
If the tester is involved right from the requirement phase, then requirement traceability is one of the important reports that can detail what kind of test coverage the test cases have.
The following table shows how we can measure the test coverage using requirement traceability.
We have extracted the important functionality from the requirement document and aligned it on the left-hand side of the table. Across the top, we have mapped the test cases to the requirements. With this we can ensure that all the requirements are covered by our test cases.
| requirement | testcase1 | testcase2 | testcase3 | testcase4 |
|---|---|---|---|---|
| should not allow duplicate receipt | # | # | | |
| amount field should be numeric | # | # | | |
| debit and credit should not be zero | # | | | |
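Since the matrix is just a mapping from each requirement to the test cases that exercise it, the coverage check itself can be automated. Here is a minimal Python sketch; the requirement texts and test case names simply mirror the hypothetical example above:

```python
# Minimal sketch of a coverage check over a traceability matrix.
# Requirements and test case names mirror the example table above.
traceability = {
    "should not allow duplicate receipt": ["testcase1", "testcase2"],
    "amount field should be numeric": ["testcase1", "testcase2"],
    "debit and credit should not be zero": ["testcase1"],
}

def uncovered(matrix):
    """Return the requirements that no test case covers."""
    return [req for req, cases in matrix.items() if not cases]

gaps = uncovered(traceability)
if gaps:
    print("Requirements with no test coverage:", gaps)
else:
    print("All requirements are covered by at least one test case.")
```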
Tuesday
Benefits of traceability matrix
- Demonstrates the relationship between the requirements and the system.
- Ensures the design is based on the established scope, business requirements or functional requirements.
- Ensures that the design documents are appropriately verified, and that functional requirements are appropriately validated.
- Tracks requirement changes and their impact on the system.
- Demonstrates that the system as built meets the customer's functionality and the end users' needs and expectations.
- Lets you quickly trace back to the affected functionality/design/test cases whenever a defect or change is introduced to the system.
Monday
Difference between load testing, performance testing and stress testing
Dear friend,
Yes, it is very confusing. I went through the same phase before knowing what it is.
Read on:
Load and stress testing are subsets of performance testing.
Performance testing measures how well something performs against a given benchmark. For example: how much time do you take to run 100 meters without carrying any load (no load is the benchmark)?
Load testing is also performance testing, but under various loads. Extending the previous example: how much time do you take to run the same 100 meters while carrying a load of 50 kilos, 100 kilos, and so on?
Stress testing is performance testing under stress conditions. Extending the same example: how much time do you take to run 100 meters, with or without a load, when a strong wind is blowing in the opposite direction?
Extending performance, load and stress testing to a software or hardware application, for example:
Performance: 1000 txns per minute with 1000 concurrent users.
Load: how many txns with 2000, 3000, 4000 concurrent users?
Stress: how many txns with 1000, 2000, 3000 concurrent users, under conditions like very low server memory, a poor data transmission line, etc.?
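As a rough illustration only, the Python sketch below repeats the same throughput measurement at increasing numbers of concurrent users; fake_transaction is a hypothetical stand-in, and a real load test would call the actual system instead:

```python
import threading
import time

def fake_transaction():
    """Stand-in for a real request; replace with a call to the system under test."""
    time.sleep(0.01)  # simulate a ~10 ms transaction

def throughput(concurrent_users, duration=2.0):
    """Each simulated user loops transactions until the deadline; return txns/sec."""
    counts = [0] * concurrent_users
    deadline = time.time() + duration

    def user(i):
        while time.time() < deadline:
            fake_transaction()
            counts[i] += 1

    threads = [threading.Thread(target=user, args=(i,)) for i in range(concurrent_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts) / duration

# Performance: throughput at the benchmark load; load testing repeats the
# same measurement at higher and higher user counts.
for users in (10, 20, 40):
    print(f"{users} concurrent users -> {throughput(users):.0f} txns/sec")
```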
Sunday
some important software testing documents
SOFTWARE TESTING INTRODUCTION:
CLICK HERE TO DOWNLOAD
OVERVIEW OF BLACK BOX TESTING:
CLICK HERE TO DOWNLOAD
SOME TESTING MISTAKES:
CLICK HERE TO DOWNLOAD
Tuesday
Testing Projects
Those who are interested in manual testing projects, please follow this site and send me your mail id. I have some manual testing projects, so hurry up.
Thanks
ADMIN
Monday
WATERFALL MODEL
waterfall model:
The waterfall model is a sequential design process in which progress flows downward. The phases are analysis, design, coding, testing and implementation. In this model one can move to the next phase only when the previous phase is completed. Because the process flows downward like a waterfall, it is called the waterfall model. Let us describe the phases.
1. ANALYSIS:
In this phase we analyze the system: requirements are gathered and discussed, and the time schedule is prepared. The BRS (Business Requirement Specification) is prepared in this phase. After that we move to the next phase.
2. DESIGN:
In this phase the UI screens are designed, the FRS (Functional Requirement Specification) is prepared and the back end is designed.
3. CODING:
In this phase the actual programming code is written. The purpose of coding is to produce a program that performs the specified functions. Tables in the database are created.
4. TESTING:
After the coding is finished we move to the next phase, where we test the whole software or application. If any error is found, we refer it to the developer for correction. Here we check whether the software is built as per the requirements or not.
5. IMPLEMENTATION:
Here the software is deployed at the client's end and then maintained.
ADVANTAGES:
1. If there is a fault or a missing requirement in the project, it is detected in the initial phases.
2. Strong documentation is available.
DISADVANTAGES:
1. A huge amount of time is wasted because it is a sequential process.
2. We cannot modify the requirements once the process has started.
3. The testing phase comes quite late.
Friday
what is severity and priority
Severity:
Severity means how intense the error is, i.e. how critical the error is for the product. If further testing is not possible because of the error, the severity is high; if the error does not affect the testing process, the severity is low.
Priority:
Priority means the order in which we fix the errors. We have to fix errors according to their criticality: if an error is very critical it is to be fixed first, and if an error is not so critical it can be fixed later.
Let's take some examples:
1. High severity and high priority
In this condition both the severity and the priority are high, so it is a very critical condition.
Examples:
1. The URL of the application is not working, so we cannot proceed with further testing; here severity and priority are both high.
2. Login to the application is not possible.
2. Low severity and high priority
Examples:
1. Password not encrypted: if the password is visible to everyone it is a very serious issue, yet it does not affect our testing process; we can continue testing with this error, but the priority is high, so it has to be fixed right away.
2. Forgot password is not working properly.
3. The company logo is not visible.
3. High severity and low priority
Example: say the Save button is not working, but we have an alternative option (Ctrl+A), so we can still proceed with the testing process. Here the severity is high and the priority is low.
4. Low severity and low priority
Example:
A spelling mistake in the confirmation message, like “You have registered success” instead of “successfully” (“success” is written).
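To make the fix order concrete, here is a small Python sketch that sorts a defect backlog by priority first and severity second; the defects are the hypothetical examples from this post:

```python
# Sketch: order a defect backlog by priority, then severity.
from dataclasses import dataclass

LEVELS = {"high": 0, "low": 1}  # lower number = handled earlier

@dataclass
class Defect:
    title: str
    severity: str  # how intense the error is
    priority: str  # how soon it should be fixed

backlog = [
    Defect("Spelling mistake in confirmation message", "low", "low"),
    Defect("Save button broken, Ctrl+A workaround exists", "high", "low"),
    Defect("Password shown in plain text", "low", "high"),
    Defect("Application URL not working", "high", "high"),
]

# Fix order: priority first, severity as the tie-breaker.
for d in sorted(backlog, key=lambda d: (LEVELS[d.priority], LEVELS[d.severity])):
    print(f"[{d.priority}/{d.severity}] {d.title}")
```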
Defect status
Defect status: In the defect life cycle, a defect has to go through these statuses:
Submitted
Accepted
Fixed
Rejected or Withdrawn
Postponed
Closed
Reopened or Resubmitted
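One way to picture the life cycle is as a set of allowed status transitions. The Python sketch below encodes one plausible transition table; the exact transitions vary by organization, so treat the table itself as an assumption:

```python
# Sketch of the defect life cycle as allowed status transitions.
TRANSITIONS = {
    "Submitted": {"Accepted", "Rejected", "Postponed"},
    "Accepted": {"Fixed", "Postponed"},
    "Fixed": {"Closed", "Reopened"},
    "Rejected": {"Resubmitted"},
    "Postponed": {"Accepted"},
    "Resubmitted": {"Accepted", "Rejected"},
    "Reopened": {"Fixed"},
    "Closed": {"Reopened"},
}

def move(status, new_status):
    """Validate one status change against the life cycle."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot move a defect from {status} to {new_status}")
    return new_status

status = "Submitted"
for step in ("Accepted", "Fixed", "Reopened", "Fixed", "Closed"):
    status = move(status, step)
    print("->", status)
```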
Test completion criteria
1. When all planned test cases have been executed and the bug rate falls below a certain accepted level
2. Running out of time
3. Running out of money
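These three criteria can be written down as a simple exit check; the numbers in this Python sketch are hypothetical:

```python
# Sketch of an exit-criteria check matching the three conditions above.
def testing_complete(executed, planned, bug_rate, days_left, budget_left,
                     accepted_bug_rate=0.5):
    """Return a short verdict; thresholds are hypothetical examples."""
    if executed >= planned and bug_rate < accepted_bug_rate:
        return "done: all planned test cases executed, bug rate acceptable"
    if days_left <= 0:
        return "stopped: ran out of time"
    if budget_left <= 0:
        return "stopped: ran out of money"
    return "keep testing"

print(testing_complete(executed=200, planned=200, bug_rate=0.2,
                       days_left=3, budget_left=1000.0))
```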
Thursday
Usability testing
This testing is done to check the 'user-friendliness' of the application, which depends on the targeted end user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
sanity testing or Smoke testing
This type of testing is done initially, to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing the system every 5 minutes or corrupting databases, the software may not be in a 'sound' condition to proceed with further testing in its current state.
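With pytest, for example, a smoke suite can be a handful of marked tests that run before the main effort; the checks below are hypothetical stand-ins for real build checks:

```python
# Sketch of a smoke suite using a pytest marker.
# Run the smoke tests first with: pytest -m smoke
# Register the marker in pytest.ini to avoid warnings:
#   [pytest]
#   markers = smoke: quick build-acceptance checks
import pytest

def app_starts():
    return True  # stand-in for launching the build under test

def database_reachable():
    return True  # stand-in for a database connectivity check

@pytest.mark.smoke
def test_application_starts():
    assert app_starts()

@pytest.mark.smoke
def test_database_connection():
    assert database_reachable()
```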
Types of software error and bugs ?
Types of errors with examples:
User Interface Errors: Missing/Wrong Functions, Doesn’t do what the user expects, Missing information, Misleading, Confusing information, Wrong content in Help text, Inappropriate error messages. Performance issues - Poor responsiveness, Can't redirect output, Inappropriate use of keyboard.
Error Handling: Inadequate - protection against corrupted data, tests of user input, version control; Ignores – overflow, data comparison, Error recovery – aborting errors, recovery from hardware problems.
Boundary related errors: Boundaries in loop, space, time, memory, mishandling of cases outside boundary.
Calculation errors: Bad Logic, Bad Arithmetic, Outdated constants, Calculation errors, Incorrect conversion from one data representation to another, Wrong formula, Incorrect approximation.
Initial and Later states: Failure to - set data item to zero, to initialize a loop-control variable, or re-initialize a pointer, to clear a string or flag, Incorrect initialization.
Control flow errors: Wrong returning state assumed, Exception handling based exits, Stack underflow/overflow, Failure to block or un-block interrupts, Comparison sometimes yields wrong result, Missing/wrong default, Data Type errors.
Errors in Handling or Interpreting Data: Un-terminated null strings, Overwriting a file after an error exit or user abort.
Race Conditions: Assumption that one event or task finishes before another begins, Resource races, A task starts before its prerequisites are met, Messages cross or don't arrive in the order sent.
Load Conditions: Required resources are not available, No available large memory area, Low priority tasks not put off, Doesn't erase old files from mass storage, Doesn't return unused memory.
Hardware: Wrong Device, Device unavailable, Underutilizing device intelligence, Misunderstood status or return code, Wrong operation or instruction codes.
Source, Version and ID Control: No Title or version ID, Failure to update multiple copies of data or program files.
Testing Errors: Failure to notice/report a problem, Failure to use the most promising test case, Corrupted data files, Misinterpreted specifications or documentation, Failure to make it clear how to reproduce the problem, Failure to check for unresolved problems just before release, Failure to verify fixes, Failure to provide summary report.
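As one concrete illustration from the list above, boundary-related errors are often off-by-one mistakes in range checks, and test inputs at the boundaries expose them immediately. A small hypothetical Python example:

```python
# Hypothetical boundary-related error: an off-by-one in a range check.
def in_working_hours(hour):
    # Bug: '<' excludes 17, even though hours are meant to run 9..17 inclusive.
    return 9 <= hour < 17

# Checking both edges of the boundary exposes the defect at hour == 17.
for hour, expected in [(8, False), (9, True), (17, True), (18, False)]:
    actual = in_working_hours(hour)
    verdict = "ok" if actual == expected else "BOUNDARY BUG"
    print(f"hour={hour}: expected={expected}, got={actual} -> {verdict}")
```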
what is incremental testing, beta testing and ad hoc testing
Incremental testing is the partial testing of an incomplete software product. The goal of this testing is to give earlier feedback to the development team.
Beta testing
Beta testing is a type of testing which is conducted by the customer and observed by the development team.
Ad hoc testing:
Ad hoc testing is the least formal testing approach; it is carried out without any recognized test design technique.
what is integration testing?
Integration testing is a phase of software testing in which individual software modules are combined and tested as a group. Integration testing is a black box testing activity; its purpose is to ensure that the different components of the application work together in accordance with the customer requirements. Test cases are developed with the purpose of exercising the interfaces between the components. This activity is carried out by the testing team.
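For illustration, here is a minimal integration-test sketch using Python's unittest; the two components (a tax helper and an invoice builder) are hypothetical, and the point is that they are exercised together through their interface rather than in isolation:

```python
# Minimal integration-test sketch: two hypothetical components are
# combined and tested as a group through their interface.
import unittest

def add_tax(amount, rate=0.10):
    """Component A: a tax helper."""
    return round(amount * (1 + rate), 2)

def build_invoice(items):
    """Component B: builds an invoice by calling component A."""
    return {"total": add_tax(sum(items))}

class InvoiceIntegrationTest(unittest.TestCase):
    def test_invoice_total_includes_tax(self):
        # The components must agree on the interface: item amounts in,
        # a dict with a taxed 'total' out.
        self.assertEqual(build_invoice([10.0, 20.0])["total"], 33.0)

if __name__ == "__main__":
    unittest.main()
```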
why should we do regression testing ?
Regression testing is selective retesting of the system with an objective to ensure that the bug fixes work and that those bug fixes have not caused any unintended effects in the system.
Goals:
The goals of regression testing are to verify the bug fixes made in the previous build and to ensure that those fixes have not caused any unintended defects in the same or other modules of the build under test.
The objectives of Regression testing are to:
1. Partially validate the system (i.e., determine whether it fulfils its operational requirements).
2. Cause failures concerning the operational requirements, which help identify defects that are not efficiently found during unit and integration testing.
3. Report these failures to the development teams so that the associated defects can be fixed.
4. Help determine the extent to which the system is ready for launch.
5. Check whether the bug fixes have caused further bugs in the same or other modules of the product.
6. Provide input to the defect trend analysis effort.
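In practice, each fixed bug typically gets a regression test that pins the fix, plus a check that existing behaviour is unchanged. A minimal Python sketch; the bug ID, the function and the original failure are hypothetical:

```python
def parse_amount(text):
    # The fixed version: an earlier (hypothetical) build failed on a leading '+'.
    return float(text.lstrip("+"))

def test_bug_1042_leading_plus_sign():
    """Regression: parse_amount('+42.50') used to fail before the fix."""
    assert parse_amount("+42.50") == 42.50

def test_existing_behaviour_unchanged():
    """The fix must not cause unintended effects on plain inputs."""
    assert parse_amount("42.50") == 42.50

test_bug_1042_leading_plus_sign()
test_existing_behaviour_unchanged()
print("regression checks passed")
```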
preconditions for regression testing
Regression test execution can begin when the following preconditions hold:
1. The operational requirements to be tested have been specified and fully implemented.
2. The relevant software components have passed unit testing and one cycle of functional testing.
3. Bugs from the earlier builds have been reported and acted upon.
4. The test team is adequately staffed and trained in functional and regression testing.
5. The test environment is ready.
6. The regression test set, as identified during the planning/test-case authoring phase, is ready.
7. The bug reporting procedure and communication protocol are ready.
Wednesday
what is Quality Standard in testing
QUALITY STANDARDS :
1. ISO - International Organization for Standardization (ISO 9000, 9001, 9062, etc.)
2. CMM - Capability Maturity Model
3. SEI - Software Engineering Institute
4. IEEE - Institute of Electrical and Electronics Engineers
5. ANSI - American National Standards Institute
6. ASQ - American Society for Quality
why quality standards are required:
1. To ensure quality
2. Key selling point
3. Reducing rework
4. To maintain discipline
5. To have transparency
Testing Limitations
- We can only test against system requirements.
  - May not detect errors in the requirements.
  - Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.
- Exhaustive (total) testing is impossible in the present scenario.
- Time and budget constraints normally require very careful planning of the testing effort.
  - Compromise between thoroughness and budget.
- Test results are used to make business decisions for release.
which test should we do first?
We can’t test everything. There is never enough time to do all the testing you would like, so what testing should you do?
Prioritize tests, so that whenever you stop testing, you have done the best testing possible in the time available.
Tips
Possible ranking criteria (all risk-based); a simple scoring sketch based on these criteria follows the list:
- Test where a failure would be most severe.
- Test where failures would be most visible.
- Take the help of the customer in understanding what is most important to him.
- Test what is most critical to the customer's business.
- Test the areas changed most often.
- Test the areas with the most problems in the past.
- Test the most complex or technically critical areas.
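Here is that scoring sketch: a Python example with hypothetical tests and equal weights, ranking tests so the highest-risk ones run first:

```python
# Sketch: rank tests by a risk score built from the criteria above.
# Tests and weights are hypothetical; a higher score means run earlier.
tests = [
    {"name": "payment flow", "severity": 5, "visibility": 5,
     "changes": 3, "past_defects": 4, "complexity": 4},
    {"name": "profile page styling", "severity": 1, "visibility": 3,
     "changes": 1, "past_defects": 1, "complexity": 1},
    {"name": "report export", "severity": 3, "visibility": 2,
     "changes": 4, "past_defects": 3, "complexity": 5},
]

def risk_score(t):
    # Equal weights for simplicity; tune them per project and customer input.
    return (t["severity"] + t["visibility"] + t["changes"]
            + t["past_defects"] + t["complexity"])

for t in sorted(tests, key=risk_score, reverse=True):
    print(f"{risk_score(t):2d}  {t['name']}")
```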
what is black box testing?
Black Box Testing
Black box testing is also called functionality testing. In this testing the user is asked to test the correctness of the functionality with the help of inputs and outputs. The user does not require knowledge of the software code.
Approach:
Equivalence Class:
- For each piece of the specification, generate one or more equivalence classes.
- Label the classes as “Valid” or “Invalid”.
- Generate one test case for each invalid equivalence class.
- Generate a test case that covers as many valid equivalence classes as possible.
Boundary Value Analysis:
- Generate test cases for the boundary values: Minimum Value, Minimum Value + 1, Minimum Value - 1; Maximum Value, Maximum Value + 1, Maximum Value - 1.
Error Guessing:
- Generate test cases against the specification.
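To make the two main techniques concrete, here is a Python sketch for a hypothetical "amount" field that accepts integers from 1 to 1000; the field, its limits and the accepts() stand-in are all assumptions:

```python
# Sketch of both techniques for a hypothetical "amount" field
# that should accept integers from 1 to 1000.
MIN_VALUE, MAX_VALUE = 1, 1000

def accepts(value):
    """Stand-in for the behaviour of the real input field."""
    return isinstance(value, int) and MIN_VALUE <= value <= MAX_VALUE

# Equivalence classes: one test per invalid class, one value from the valid class.
equivalence_cases = {
    "valid (1..1000)": 500,
    "invalid: below range": 0,
    "invalid: above range": 1001,
    "invalid: non-numeric": "abc",
}
for label, value in equivalence_cases.items():
    print(f"{label}: input={value!r} accepted={accepts(value)}")

# Boundary value analysis: min, min + 1, min - 1, max, max + 1, max - 1.
boundary_cases = [MIN_VALUE, MIN_VALUE + 1, MIN_VALUE - 1,
                  MAX_VALUE, MAX_VALUE + 1, MAX_VALUE - 1]
print("boundary inputs:", [(v, accepts(v)) for v in boundary_cases])
```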