Software testing: Difference between revisions

 
Testing can be categorized many ways.<ref>{{Cite book |last1=Kaner |first1=Cem |url=https://archive.org/details/lessonslearnedso00kane |title=Lessons Learned in Software Testing: A Context-Driven Approach |last2=Bach |first2=James |last3=Pettichord |first3=Bret |publisher=Wiley |year=2001 |isbn=978-0-471-08112-8 |pages=[https://archive.org/details/lessonslearnedso00kane/page/n55 31]–43 |url-access=limited}}</ref>
 
=== Automated testing ===
{{Excerpt|Test automation|paragraphs=1|only=paragraph}}
 
=== Levels ===
* [[Regression testing]]: It is common to maintain a small test suite, built from a subset of the full set of tests, that is run for each integration of new, modified, or fixed software to verify that the latest delivery has not broken existing functionality and that the software product as a whole still works correctly.
* Test Closure: Once the testing meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and project-related documents are completed; these artifacts are archived and used as a reference for future projects.
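The regression-testing idea above can be sketched as a small automated suite that is re-run on every integration. This is a minimal illustration using Python's built-in <code>unittest</code> module; the <code>slugify</code> function under test is hypothetical.

```python
import unittest

def slugify(title):
    # Hypothetical function under test: lowercase the title and join words with hyphens.
    return "-".join(title.lower().split())

class RegressionTests(unittest.TestCase):
    """A small subset of tests, re-run on every integration to catch regressions."""

    def test_basic_title(self):
        self.assertEqual(slugify("Software Testing"), "software-testing")

    def test_extra_whitespace(self):
        # A test added after a past bug fix; keeping it guards against reintroduction.
        self.assertEqual(slugify("  Unit   Testing  "), "unit-testing")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)  # all tests must pass
```

Because the suite is small and fully automated, it is cheap to run after every new, modified, or fixed delivery.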
 
== Automated testing ==
{{Main|Test automation}}
 
Many programming groups rely increasingly on [[Test automation|automated testing]], especially groups that use [[test-driven development]]. Tests can be written in any of numerous unit testing frameworks, and [[continuous integration]] software will run tests automatically every time code is checked into a [[version control]] system.
 
While automation cannot reproduce everything that a human tester can do (or all the ways a human thinks of doing it), it can be very useful for regression testing. To be truly useful, however, it requires a well-developed [[test suite]] of testing scripts.
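The continuous-integration step described above amounts to invoking the test runner on every check-in and gating the build on its exit status. A minimal sketch, assuming Python's standard <code>unittest</code> discovery (the temporary suite written here stands in for a project's real test directory):

```python
import pathlib
import subprocess
import sys
import tempfile

# A CI server typically re-runs the whole suite on every check-in and fails
# the build when the test runner exits with a nonzero status.  Here a tiny
# suite is written to a temporary directory and discovered, mimicking that step.
suite_dir = pathlib.Path(tempfile.mkdtemp())
(suite_dir / "test_example.py").write_text(
    "import unittest\n"
    "class T(unittest.TestCase):\n"
    "    def test_ok(self):\n"
    "        self.assertEqual(1 + 1, 2)\n"
)

result = subprocess.run(
    [sys.executable, "-m", "unittest", "discover", "-s", str(suite_dir)],
    capture_output=True,
    text=True,
)
build_passed = result.returncode == 0  # CI would gate the merge on this flag
```

Real CI systems wrap exactly this pattern: check out the revision, run the suite, and report the exit status back to the version control system.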
 
=== Testing tools ===
 
Program testing and fault detection can be aided significantly by testing tools and [[debugger]]s.
Testing/debug tools include features such as:
 
* Program monitors, permitting full or partial monitoring of program code, including:
** [[Instruction set simulator]], permitting complete instruction level monitoring and trace facilities
** [[Hypervisor]], permitting complete control of the execution of program code
** [[Program animation]], permitting step-by-step execution and conditional [[breakpoint]] at source level or in [[machine code]]
** [[Code coverage]] reports
* Formatted dump or [[symbolic debugging]] tools, allowing inspection of program variables on error or at chosen points
* Automated functional [[Graphical User Interface]] (GUI) testing tools, used to repeat system-level tests through the GUI
* [[Benchmark (computing)|Benchmark]]s, allowing run-time performance comparisons to be made
* [[Profiling (computer programming)|Performance analysis]] (or profiling tools) that can help to highlight [[hot spot (computer science)|hot spot]]s and resource usage
 
Some of these features may be incorporated into a single composite tool or an [[Integrated Development Environment]] (IDE).
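Among the tool categories listed above, profilers can also be driven programmatically. A minimal sketch using Python's built-in <code>cProfile</code> and <code>pstats</code> modules; the deliberately quadratic <code>busy</code> function is a stand-in for real application code with a hot spot:

```python
import cProfile
import io
import pstats

def busy(n):
    # Deliberately quadratic work so the profiler has a hot spot to report.
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

profiler = cProfile.Profile()
profiler.enable()
busy(200)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()  # top functions by cumulative time, i.e. the hot spots
```

The resulting report ranks functions by time spent, which is how a profiling tool highlights hot spots and resource usage.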
 
=== Capture and replay ===
 
'''Capture and replay testing''' consists of collecting end-to-end usage scenarios while interacting with an application and then turning these scenarios into test cases. Possible applications of capture and replay include the generation of regression tests. The SCARPE tool<ref>{{Cite book |last1=Joshi |first1=Shrinivas |last2=Orso |first2=Alessandro |title=2007 IEEE International Conference on Software Maintenance |chapter=SCARPE: A Technique and Tool for Selective Capture and Replay of Program Executions |date=October 2007 |chapter-url=https://ieeexplore.ieee.org/document/4362636 |pages=234–243 |doi=10.1109/ICSM.2007.4362636 |isbn=978-1-4244-1255-6 |s2cid=17718313}}</ref> selectively captures a subset of the application under study as it executes. JRapture captures the sequence of interactions between an executing Java program and components on the host system, such as files or events on graphical user interfaces. These sequences can then be replayed for observation-based testing.<ref>{{Cite journal |last1=Steven |first1=John |last2=Chandra |first2=Pravir |last3=Fleck |first3=Bob |last4=Podgurski |first4=Andy |date=September 2000 |title=jRapture: A Capture/Replay tool for observation-based testing |journal=ACM SIGSOFT Software Engineering Notes |language=en |volume=25 |issue=5 |pages=158–167 |doi=10.1145/347636.348993 |issn=0163-5948|doi-access=free }}</ref>
Saieva et al. propose to generate ad-hoc tests that replay recorded user execution traces in order to test candidate patches for critical security bugs.<ref>{{Cite book |last1=Saieva |first1=Anthony |last2=Singh |first2=Shirish |last3=Kaiser |first3=Gail |title=2020 IEEE 20th International Working Conference on Source Code Analysis and Manipulation (SCAM) |chapter=Ad hoc Test Generation Through Binary Rewriting |date=September 2020 |chapter-url=https://ieeexplore.ieee.org/document/9252025 |location=Adelaide, Australia |publisher=IEEE |pages=115–126 |doi=10.1109/SCAM51674.2020.00018 |isbn=978-1-7281-9248-2 |s2cid=219618921}}</ref>
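The capture-and-replay idea can be illustrated with a minimal sketch (this is not how SCARPE or JRapture are implemented; the <code>discount</code> function and the recording decorator are hypothetical): calls to an instrumented function are recorded during normal use, then replayed later and checked against the captured results.

```python
import functools

captured = []  # the recorded usage scenario: one entry per observed call

def capture(func):
    # Instrument a function so that each call and its result are recorded.
    @functools.wraps(func)
    def wrapper(*args):
        result = func(*args)
        captured.append({"args": args, "result": result})
        return result
    return wrapper

@capture
def discount(price, percent):
    # Hypothetical application logic being observed during a user session.
    return round(price * (1 - percent / 100), 2)

# "End-to-end usage": a user session exercising the application.
discount(100.0, 10)
discount(19.99, 5)

def replay(func, trace):
    # Re-run the recorded scenario and check each result, as a regression test.
    return all(func(*entry["args"]) == entry["result"] for entry in trace)

still_correct = replay(discount, list(captured))
```

The replay step turns the recorded session into a regression test: if a later change to <code>discount</code> altered any recorded result, the replay would fail.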
 
== Measurement in software testing ==