mirror of
https://github.com/EiffelSoftware/eiffel-org.git
synced 2025-12-06 14:52:03 +01:00
Author: halw
Date: 2009-02-18T23:10:46.000000Z git-svn-id: https://svn.eiffel.com/eiffel-org/trunk@187 abb3cda0-5349-4a8f-a601-0c33ac3a8c38
This commit is contained in:
@@ -11,7 +11,7 @@ Developers test their software with the goal of ensuring that the software meets
In parallel with the "agile" software development methods, a technique called Test-Driven Development (TDD) has emerged. In TDD, a software test is written before the code that it tests. In this way, the test serves not only as a test, but also as a portion of the specification for the code yet to be written.
:Q: What is software element X supposed to do?
:A: For one thing, it is supposed to pass this test!
So, in TDD, before each additional software element is written, its test is written. Once the software element is written, the test is run. If it fails, then the software (or possibly the test) needs to be fixed. If the test succeeds, then the programmer is free to write the test for the next element to be developed. In this way, the tests accumulate until eventually there is a whole suite of tests that can be run at anytime in order to ensure that the addition of any new software element causes a previously successful test now to fail. Some software development processes call for running a whole suite of tests after every increment of development activity. This type of testing is often referred to as regression testing, because it tends to expose software which had been satisfying its tests at one time, but because of some development activity has "regressed" to a failing state.
So, in TDD, before each additional software element is written, its test is written. Once the software element is written, the test is run. If it fails, then the software (or possibly the test) needs to be fixed. If the test succeeds, then the programmer is free to write the test for the next element to be developed. In this way, the tests accumulate until eventually there is a whole suite of tests that can be run at any time in order to ensure that the addition of any new software element does not cause a previously successful test now to fail. Some software development processes call for running a whole suite of tests after every increment of development activity. This type of testing is often referred to as ''regression testing'', because it tends to expose software which had been satisfying its tests at one time, but because of some development activity has regressed to a failing state.
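As a sketch of this cycle in Eiffel (the class <code>ACCOUNT</code> and its features are invented here for illustration, not taken from the article), a test written before the code it tests might look like:

<code>
class
	TEST_ACCOUNT

inherit
	EQA_TEST_SET

feature -- Tests

	test_deposit
			-- Written before `deposit' is implemented: it fails until
			-- ACCOUNT.deposit satisfies it, and then remains in the
			-- suite as a regression test.
		local
			l_account: ACCOUNT
		do
			create l_account.make
			l_account.deposit (100)
			assert ("balance_updated", l_account.balance = 100)
		end

end
</code>

Running the whole accumulated suite after each increment is exactly the regression testing described above: if a later change breaks <code>deposit</code>, this test exposes it.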
Creating, managing and running a large number of tests manually can be time-consuming, messy, and error-prone; hence the motivation for automated testing tools. Testing tools help programmers to create, maintain, and execute a suite of tests by automating these activities.
@@ -22,7 +22,7 @@ So it's important to understand that the tests in TDD serve, not just as verific
But in Eiffel, of course, the specification for a software element exists in its contract. Like the tests in TDD, Eiffel contracts are ordinarily written before the software elements that implement them.
With contracts enabled at runtime, the running software's behavior is constantly monitored against the contract's expectations. In other words, for routines, the precondition defines an acceptable state in which the routine can execute, and the postcondition defines the acceptable state after successful execution. The class invariant defines the constraints for the validity of instances of a class.
With contract checking enabled at run time, the running software's behavior is constantly monitored against the contract's expectations. In other words, for routines, the precondition defines an acceptable state in which the routine can execute, and the postcondition defines an acceptable state after successful execution. The class invariant defines the constraints necessary for instances of a class to be valid.
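As a sketch of these three kinds of contract (again using an invented <code>ACCOUNT</code> class, not an example from the article):

<code>
class
	ACCOUNT

create
	make

feature -- Access

	balance: INTEGER
			-- Current balance

feature -- Initialization

	make
		do
			balance := 0
		ensure
			starts_empty: balance = 0
		end

feature -- Operations

	deposit (amount: INTEGER)
			-- Add `amount' to the balance.
		require -- Acceptable state before execution
			amount_positive: amount > 0
		do
			balance := balance + amount
		ensure -- Acceptable state after successful execution
			balance_increased: balance = old balance + amount
		end

invariant -- Constraint for any valid instance
	balance_non_negative: balance >= 0

end
</code>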
A term commonly used in software testing is "oracle". Tests are generally seen as having two parts: the first is a mechanism that exercises (runs or calls) a particular software element in a given context; the second is the "oracle", whose responsibility is to determine whether the software element passes or fails the test. Not surprisingly, test oracles in other testing frameworks often look a lot like assertions in Eiffel. So the advantage for Eiffel is that the test oracles for all routines are already written, in the form of postconditions on routines and class invariants.
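Concretely, the exercising part of a test can then be as simple as a sequence of calls, with the contracts supplying the oracle at run time. A hedged sketch (the class <code>ACCOUNT</code> and its features <code>deposit</code> and <code>withdraw</code> are invented for illustration):

<code>
test_withdraw
		-- Exercising part: call `deposit' then `withdraw'.
		-- Oracle part: with contract checking enabled, the routines'
		-- postconditions and the class invariant are evaluated on
		-- every call; any violation raises an exception and fails
		-- the test, so no explicit assertion is needed here.
	local
		l_account: ACCOUNT
	do
		create l_account.make
		l_account.deposit (50)
		l_account.withdraw (20)
	end
</code>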
@@ -43,7 +43,7 @@ View --> Tools --> Testing Tool </code>
==Test classes and tests==
The Eiffel Testing Framework interface helps you to create and execute tests on the software you develop. The interface contains a wizard called the ''New Eiffel Test Wizard'' which helps you create or generate the types of tests you need. We'll learn more about the interface and the wizard as we go along. But first, let's look at what constitutes an Eiffel Testing Framework ''test''? For the Eiffel Testing Framework, we define the term ''test'' in the context of some other testing terminology:
The Eiffel Testing Framework interface helps you to create and execute tests on the software you develop. The interface contains a wizard called the '''New Eiffel Test Wizard''' which helps you create or generate the types of tests you need. We'll learn more about the interface and the wizard as we go along. But first, let's look at what constitutes an Eiffel Testing Framework ''test''. For the Eiffel Testing Framework, we define the term ''test'' in the context of some other testing terminology:
{{definition|Test class|An effective class that inherits from the class EQA_TEST_SET. }}
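A minimal example matching this definition, effective (non-deferred) and inheriting from <code>EQA_TEST_SET</code>, might look like the following; the class and feature names are our own, and the tested class <code>STRING</code> is from the standard library:

<code>
class
	MY_TEST_SET

inherit
	EQA_TEST_SET

feature -- Tests

	test_append
			-- `assert' is inherited from EQA_TEST_SET: its first
			-- argument tags the check, its second is the oracle.
		local
			s: STRING
		do
			create s.make_from_string ("foo")
			s.append ("bar")
			assert ("appended", s.same_string ("foobar"))
		end

end
</code>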
@@ -4,6 +4,8 @@
{{underconstruction}}
When you are running a system in the EiffelStudio debugger, you can ask the Eiffel Testing Framework to extract a new test class and test from the current executable context. Most often you would use this capability when you have experienced an unexpected failure or exception in one of your routines. It is possible, though, to extract at any point at which the system is paused.
The great advantage of extracted tests is that they provide a kind of snapshot, in the form of a test, that will reproduce the unexpected failure.