Author:halw

Date:2009-01-29T15:59:29.000000Z


git-svn-id: https://svn.eiffel.com/eiffel-org/trunk@174 abb3cda0-5349-4a8f-a601-0c33ac3a8c38
[[Property:title|The Eiffel Testing Framework]]
[[Property:link_title|Using AutoTest]]
[[Property:weight|-1]]
[[Property:uuid|6b900a65-85c6-9cd6-ef57-ccd4b8decbef]]
{{underconstruction}}
==Overview==
The Eiffel Testing Framework is a tool that helps you create, manage, and run tests against your software. It is accessible directly as a part of EiffelStudio, but works to a large extent behind the scenes so that it doesn't get in the way of your development activities. In other words, even though you may be accumulating a substantial collection of test software along with your project software, you can still run and deliver your project software without going to a lot of trouble separating the two. Tests managed by the Eiffel Testing Framework stay handy and can be run at any time to help make sure everything still stands up to the scrutiny of testing.
==Background and motivation for testing tools==
Developers test their software with the goal of ensuring that the software meets its specification. Informally, this means that tests hope to show that the software does what it is supposed to do, and only that. Programmers design tests that exercise software given certain inputs and execution state, then observe the outcome to see if the software produces the expected results. During the last few years, both tests and testing tools have become more sophisticated.
In parallel with the "agile" software development methods, a technique called Test-Driven Development (TDD) has emerged. In TDD, a software test is written before the code that it tests. In this way, the test serves not just as a test, but as a portion of the specification for the code yet to be written.
:Q: What is software element X supposed to do?
:A: For one thing, it is supposed to pass this test!
So, in TDD, before each additional software element is written, its test is written. Once the software element is written, the test is run. If it fails, then the software (or possibly the test) needs to be fixed. If the test succeeds, then the programmer is free to write the test for the next element to be developed. In this way, the tests accumulate until eventually there is a whole suite of tests that can be run at any time to ensure that the addition of a new software element has not caused a previously successful test to fail. Some software development processes call for running the whole suite of tests after every increment of development activity. This type of testing is often referred to as regression testing, because it tends to expose software which had been satisfying its tests at one time, but because of some development activity has "regressed" to a failing state.
Creating, managing and running a large number of tests manually can be time-consuming, messy, and error-prone, thus the motivation for automated testing tools. Testing tools help programmers to create, maintain, and execute a suite of tests by automating the activity.
==The Eiffel advantage in testing==
So it's important to understand that the tests in TDD serve, not just as verification tools, but as a part of the software specification.
But in Eiffel, of course, the specification for a software element exists in its contract. Like the tests in TDD, Eiffel contracts are ordinarily written before the software elements that implement them.
With contracts enabled at runtime, the running software's behavior is constantly monitored against the contract's expectations. In other words, for routines, the precondition defines an acceptable state in which the routine can execute, and the postcondition defines the acceptable state after successful execution. The class invariant defines the constraints for the validity of instances of a class.
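As a small illustration (the <code>ACCOUNT</code> class here is a hypothetical example, not part of the framework), a routine with a precondition and postcondition, in a class with an invariant, might look like this:
<code>
class
	ACCOUNT

create
	make

feature -- Initialization

	make
			-- Create an empty account.
		do
			balance := 0
		ensure
			balance_zero: balance = 0
		end

feature -- Access

	balance: INTEGER
			-- Current balance

feature -- Element change

	deposit (an_amount: INTEGER)
			-- Add `an_amount' to the balance.
		require
			amount_positive: an_amount > 0
		do
			balance := balance + an_amount
		ensure
			balance_increased: balance = old balance + an_amount
		end

invariant
	balance_non_negative: balance >= 0

end</code>
With assertion monitoring enabled, any call to <code>deposit</code> is checked against its precondition on entry, its postcondition on exit, and the invariant at both points.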
A term commonly used in software testing is "oracle". Tests are generally seen as having two parts: the first is a mechanism that exercises (runs or calls) a particular software element in a given context; the second is the "oracle", whose responsibility it is to determine whether the software element passes or fails the test. Not surprisingly, test oracles in other testing frameworks often look a lot like assertions in Eiffel. So the advantage for Eiffel is that the test oracles for all routines are already written, in the form of routine postconditions and class invariants.
The presence of preconditions provides another advantage. Preconditions make it possible to automate testing in ways unavailable in other environments. Because of preconditions, we already have information about the limits of valid inputs to routines. So it's possible to generate a call to a routine we want to test automatically and with a context that meets the routine's precondition.
==The Eiffel Testing Framework==
The Eiffel Testing Framework attempts to capitalize on the testing advantages inherent in Eiffel due to Design by Contract. The Eiffel Testing Framework consists of an interactive interface, and a library of classes which support testing activity.
The testing support classes are distributed with EiffelStudio and exist in the ''testing'' subfolder of the ''libraries'' folder. With the exception of one class which we will discuss soon, the classes in "testing" are not intended to be used directly by developers. They exist to support the functionality of the Eiffel Testing Framework.
The interface for the Eiffel Testing Framework is accessible through the EiffelStudio development environment. You may find it already resident as a tab in the right hand pane next to Clusters, Features, and Favorites. If it's not there, then you can bring it up by following the menu path:
<code lang=text>
View --> Tools --> Testing Tool </code>
==Test classes and tests==
The Eiffel Testing Framework interface helps you to create and execute tests on the software you develop. The interface contains a wizard called the ''New Eiffel Test Wizard'' which helps you create or generate the types of tests you need. We'll learn more about the interface and the wizard as we go along. But first, let's look at what constitutes an ETF ''test''. For the Eiffel Testing Framework, we define the term ''test'' in the context of some other testing terminology:
{{definition|Test class|An effective class that inherits from the class EQA_TEST_SET. }}
{{definition|Test|Any procedure defined within the text of a test class that is exported to <code>ANY</code> and which takes no arguments. }}
{{definition|Test set|The set of tests in a test class. }}
{{definition|Test suite|A set of test classes (and by implication the tests contained therein) which is designed to test some particular software system or library. }}
Whenever you use the Eiffel Testing Framework, it will find test classes, those classes that inherit from EQA_TEST_SET. When you run tests, it will execute all the tests in those classes, or a subset of tests that you choose. So, you have probably figured out that the one class from the testing library that you may need to know a little about is EQA_TEST_SET. But you don't have to know very much, because the Eiffel Testing Framework can help you construct your test classes.
==Types of Tests==
There are three different types of tests supported by the Eiffel Testing Framework:
* Manual tests
* Synthesized tests
* Extracted tests
Each test of any of these types is ultimately a feature of a class that inherits from EQA_TEST_SET. Ordinarily, though, the three types of tests won't be mixed in a test class. That is, any one particular test class will contain only one type of test. But from the point of view of the Eiffel Testing Framework, all types of tests are managed and run the same way. We will discuss these types of tests in more detail later, but for right now, let's just establish some definitions.
{{definition|Manual test|A test manually coded within a test class. }}
Manual tests are features, procedures in fact, of classes that inherit from EQA_TEST_SET. In many simple cases, test classes containing manual tests inherit directly from EQA_TEST_SET, but that's not a requirement. Occasionally it can be useful for test classes to inherit from a descendant of EQA_TEST_SET that provides additional functionality.
A manual test is "manual" in the sense that you code the essential procedural part of the test by hand. But you really don't have to deal with the more mundane business of creating the whole test class and ensuring the proper inheritance. The ''New Eiffel Test Wizard'' helps out by automatically creating the shell of a test class and the shell of a test for you to fill in. Then it's pretty easy to add new tests manually to an existing test class.
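As a sketch of what a hand-written test might look like once filled in (the class and feature names here are illustrative; <code>STRING</code> is the standard EiffelBase string class):
<code>
note
	testing: "type/manual"

class
	STRING_TEST_SET

inherit
	EQA_TEST_SET

feature -- Test routines

	test_append
			-- Appending to a STRING should yield the concatenation.
		local
			s: STRING
		do
			create s.make_from_string ("Hello")
			s.append (", world")
			assert ("appended_correctly", s.same_string ("Hello, world"))
		end

end</code>
The procedure <code>assert</code>, inherited from the testing library, records a failure under the given tag when its condition is False.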
{{definition|Synthesized test|A test that is the product of generating and running a series of randomly generated invocations of target routines. }}
The process of creating synthesized tests is sometimes known in the community as creating tests via ''AutoTest''. The randomly generated calls to target routines are discarded when the creation process completes. But from the results of these calls, a set of permanent tests is distilled. These are the synthesized tests.
Synthesized tests are made possible by Design by Contract. Hopefully, you remember that one thing that DbC gives us is the handy ability to assign blame when something goes wrong. When a test makes a call to a routine we want to test, if a contract violation occurs, it may be the fault of the called routine or it may be the fault of the caller ... and that depends upon what type of contract violation has occurred. The contract violations that are interesting to the Eiffel Testing Framework in the process of synthesizing tests are only those in which the called routine is at fault. That is, postcondition and invariant violations. The Eiffel Testing Framework will then create a synthesized test for every ''unique'' failure in which the called routine being tested was to blame.
{{definition|Extracted test|A test that has been created during the execution of a system as a result of a developer request or a failure of the system. Extracted with the test is the current runtime state. When run, the test will attempt to recreate the runtime context. }}
Extracted tests are convenient because they allow you to accumulate tests that are based on actual failures of your software (good for the software, not so good for your ego!). Once these tests are in your suite of tests, they are available from then on.
==Anatomy of a test==
{{definition|Target routine|A routine that is to be tested by a test. Sometimes called a "routine under test." }}
{{definition|Target class|A class that contains target routines. Sometimes called a "class under test." }}
In its simplest form, a test is a routine that issues a call to some routine you've developed in some class you've developed.
So the tests and the test classes are in the testing realm and are used to test the target routines in target classes which are the real product of your software development project.
Still, you should understand that the Eiffel Testing Framework will manage and run the tests in any test class whether or not they actually test any target routines. Of course, it would seem silly to keep a test around that doesn't test anything, but the important thing to understand is that the Eiffel Testing Framework will work with anything that matches the definitions of test and test class above. That is, once tests are created, the Eiffel Testing Framework doesn't really have a stake in what you are trying to test.
Take a look at the following test class:
<code>
note
	description: "[
		Eiffel tests that can be executed by testing tool.
	]"
	author: "EiffelStudio test wizard"
	date: "$Date$"
	revision: "$Revision$"
	testing: "type/manual"

class
	MY_TEST_CLASS

inherit
	EQA_TEST_SET

feature -- Test routines

	my_test
			-- New test routine
		do
			assert ("not_implemented", False)
		end

end</code>
This test class was created by the Eiffel Testing Framework's New Eiffel Test Wizard. It is about as simple a test class as there can be. It doesn't do anyone any good, except to illustrate the basic form of ETF tests. So, let's look at that form.
It is clear that <code>MY_TEST_CLASS</code> is an effective class that inherits from <code>EQA_TEST_SET</code>, so that makes it fit the definition of a test class. And, it's also clear that <code>my_test</code> is a feature of <code>MY_TEST_CLASS</code>, specifically a procedure, exported to <code>ANY</code> and requiring no arguments. That qualifies <code>my_test</code> as a test. If <code>MY_TEST_CLASS</code> is located in a test cluster of your project, then the Eiffel Testing Framework will find it and be able to run it whenever you request.
This test would always fail because of the <code>assert</code> that the wizard put in the implementation. So if you asked the Eiffel Testing Framework to run your tests, it would tell you that <code>my_test</code> was a failed test, for the reason: "not_implemented". The <code>assert</code> is not a necessary part of a test. The wizard puts it there to remind you that the test has not been implemented. If you removed the <code>assert</code> line from the test, then the test would always succeed, which would be nice, but it would be succeeding at testing nothing! We'll see more later about what it means for tests to succeed and fail.
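To turn the wizard's shell into something useful, you replace the placeholder <code>assert</code> with real checks. As a minimal sketch, assuming the standard EiffelBase class <code>ARRAYED_LIST</code> (the checks shown are illustrative), the body of <code>my_test</code> could become:
<code>
	my_test
			-- Check basic ARRAYED_LIST behavior.
		local
			l: ARRAYED_LIST [INTEGER]
		do
			create l.make (10)
			l.extend (42)
			assert ("count_is_one", l.count = 1)
			assert ("first_is_42", l.first = 42)
		end</code>
Each <code>assert</code> carries a tag that the framework reports if that particular check fails, so descriptive tags make failures easier to diagnose.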

Concretely, exceptions may result from the following events:
* Operating system signal: arithmetic overflow; no memory available for a requested creation or twin -- even after garbage collection has rummaged through everything to find some space. (But no C/C++-like "wrong pointer address", which cannot occur thanks to the statically typed nature of Eiffel.)
It is sometimes useful, when handling exceptions in <code>rescue</code> clauses, to ascertain the exact nature of the exception that got the execution there. For this it suffices to inherit from the Kernel Library class <code>EXCEPTIONS</code>, which provides queries such as <code>exception</code>, giving the code for the last exception, and symbolic names ( [[10 Other Mechanisms#Constant_attributes|"Constant attributes"]] ) for all such codes, such as <code>No_more_memory</code>. You can then process different exceptions differently by testing <code>exception</code> against various possibilities. The method strongly suggests, however, that exception handling code should remain simple; a complicated algorithm in a <code>rescue</code> clause is usually a sign that the mechanism is being misused. Class <code>EXCEPTIONS</code> also provides various facilities for fine-tuning the exception mechanism, such as a procedure <code>raise</code> that will explicitly trigger a "developer exception" with a code that can then be detected and processed. Exception handling helps produce Eiffel software that is not just correct but robust, by planning for cases that should not normally arise, but might out of Murphy's law, and ensuring they do not affect the software's basic safety and simplicity.
==Other applications of Design by Contract&#153;==