Renamed current as trunk.
git-svn-id: https://svn.eiffel.com/eiffel-org/trunk@1433 abb3cda0-5349-4a8f-a601-0c33ac3a8c38
@@ -0,0 +1,323 @@
|
||||
[[Property:title|Create a manual test]]
|
||||
[[Property:weight|2]]
|
||||
[[Property:uuid|e78f25e3-ed3a-f8fa-e71d-28a4dda1825f]]
|
||||
==A system to test==
|
||||
|
||||
For developing our manual test, let's use a simple system that contains a class modeling bank accounts. Here are the two classes that will make up our system. The first, <code>APPLICATION</code>, will be the root class of our system. <code>APPLICATION</code> really only serves to declare an attribute of type <code>BANK_ACCOUNT</code>, which is the class we will write a test against. <code>APPLICATION</code> looks like this:
|
||||
|
||||
<code>
|
||||
class
|
||||
APPLICATION
|
||||
|
||||
inherit
|
||||
ARGUMENTS
|
||||
|
||||
create
|
||||
make
|
||||
|
||||
feature {NONE} -- Initialization
|
||||
|
||||
make
|
||||
-- Run application.
|
||||
do
|
||||
create my_account
|
||||
end
|
||||
|
||||
my_account: BANK_ACCOUNT
|
||||
|
||||
end
|
||||
</code>
|
||||
|
||||
|
||||
And here's the class <code>BANK_ACCOUNT</code>:
|
||||
|
||||
<code>
|
||||
class
|
||||
BANK_ACCOUNT
|
||||
inherit
|
||||
ANY
|
||||
redefine
|
||||
default_create
|
||||
end
|
||||
feature
|
||||
default_create
|
||||
do
|
||||
balance := 0
|
||||
end
|
||||
|
||||
balance: INTEGER
|
||||
|
||||
deposit (an_amount: INTEGER)
|
||||
-- Deposit `an_amount'.
|
||||
require
|
||||
amount_large_enough: an_amount > 0
|
||||
do
|
||||
ensure
|
||||
balance_increased: balance > old balance
|
||||
deposited: balance = old balance + an_amount
|
||||
end
|
||||
|
||||
withdraw (an_amount: INTEGER)
|
||||
-- Withdraw `an_amount'.
|
||||
require
|
||||
amount_large_enough: an_amount > 0
|
||||
amount_valid: balance >= an_amount
|
||||
do
|
||||
balance := balance - an_amount
|
||||
ensure
|
||||
balance_decreased: balance < old balance
|
||||
withdrawn: balance = old balance + an_amount
|
||||
end
|
||||
|
||||
invariant
|
||||
balance_not_negative: balance >= 0
|
||||
end
|
||||
</code>
|
||||
|
||||
Don't let it worry you if you've noticed that the class <code>BANK_ACCOUNT</code> contains some flaws. We'll deal with these later.
|
||||
|
||||
If you want to work along with this tutorial, you should be able to copy the text of each of these classes from this page and paste it into the EiffelStudio editor pane. Build a system using these two classes, with <code>{APPLICATION}.make</code> as the root.
|
||||
|
||||
|
||||
{{note|If you are using EiffelStudio version 6.3, there are two things you will need to do to prepare your system for use with AutoTest. Both of these are done from the [[EiffelStudio: Project settings window]].<br/> 1) Set your project to be a console application in the [[Advanced options]].<br/>2) Set a value of <code>False</code> for the <code>Recursive</code> attribute of your project cluster in [[Group options]].}}
|
||||
|
||||
==Getting to the AutoTest interface==
|
||||
|
||||
If the AutoTest interface is not on a tab next to Clusters, Features, and Favorites, you can invoke it by following the menu path:
|
||||
<code lang=text>
|
||||
View --> Tools --> AutoTest
|
||||
</code>
|
||||
Depending upon your version and platform, the AutoTest interface should look something like this:
|
||||
|
||||
|
||||
[[Image:AutoTest empty tool 01]]
|
||||
|
||||
|
||||
==Creating a new test==
|
||||
|
||||
To begin the process of creating a new test, click the Create New Test button ( [[Image:create new tests]] ) on the interface's tool bar. When you click this button, by default AutoTest will set you up to create a new Manual test. To choose a different test type, click the small triangle to the right of the Create New Test button and you'll be presented with a drop-down menu of choices:
|
||||
|
||||
|
||||
[[Image: AutoTest create new test|Create new test drop-down menu]]
|
||||
|
||||
|
||||
For now, let's select Create Manual Test.
|
||||
|
||||
If this is the first time you've used the testing tool for this project, it is likely that you will be presented with a dialog box asking if you want to add the testing library classes to your project and recompile:
|
||||
|
||||
|
||||
[[Image:AutoTest add testing libraries dialog]]
|
||||
|
||||
You want EiffelStudio to do this before launching the wizard, so click "Yes". In a moment your system will have recompiled with the testing library classes available. Remember that you won't need to interact much with the testing classes, but AutoTest uses them, so they need to be available. As long as the testing classes stay available, you should not see this dialog again for the current project.
|
||||
|
||||
|
||||
==The Manual Test Pane==
|
||||
|
||||
After the compile completes, the first pane of the New Eiffel Test Wizard appears. It's the Manual Test pane, and it should look like this:
|
||||
|
||||
|
||||
[[Image:AutoTest Manual Test pane]]
|
||||
|
||||
|
||||
Here we will name our test. Let's say that we plan to write this test against the feature <code>{BANK_ACCOUNT}.deposit</code>. We'll give this test the name <code>test_deposit_01</code>. The name uses an ad hoc naming convention for tests. You can use this, or develop your own. The prefix <code>test_</code> comes before the feature name it will test, and the suffix <code>_01</code> follows, so that we have a framework for adding more tests against <code>deposit</code>. Again, you can choose any naming scheme that makes sense to you. You may want to try to describe the test in its name. For example, <code>test_deposit_very_large_amount</code>.
|
||||
|
||||
We're ready to click '''Next''', but before we do, let's look at the check boxes on this wizard pane. The two check boxes labeled '''Redefine `on_prepare'''' and '''Redefine `on_clean'''' have to do with the way that tests are run.
|
||||
|
||||
AutoTest runs each test as a three step process:
|
||||
# Preparation
|
||||
# Execution
|
||||
# Clean up
|
||||
|
||||
There are features in class <code>EQA_TEST_SET</code> named <code>prepare</code> and <code>clean</code> which accomplish steps 1 and 3 above. These features are <code>frozen</code>, so you cannot redefine them in a test class (i.e., a descendant of <code>EQA_TEST_SET</code>). However, the class does provide features that can be redefined so that you can include custom behavior before and/or after the execution of a test. These features are <code>on_prepare</code> and <code>on_clean</code>. So if you check one of these boxes, the test class that is built for you will include a redefined feature ready for you to implement. In this simple example, we'll leave both boxes unchecked.
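As an illustration, here is a minimal sketch of a test class that redefines both features; the class name and the feature bodies are hypothetical and only show where per-test setup and cleanup code would go:

<code>
class
	MY_PREPARED_TEST_SET

inherit
	EQA_TEST_SET
		redefine
			on_prepare, on_clean
		end

feature {NONE} -- Events

	on_prepare
			-- Hypothetical setup executed before each test routine.
		do
			create account
		end

	on_clean
			-- Hypothetical cleanup executed after each test routine.
		do
			-- Release or reset whatever `on_prepare' created.
		end

feature {NONE} -- Access

	account: BANK_ACCOUNT
			-- Account shared by the test routines of this class.

end
</code>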
|
||||
|
||||
|
||||
{{note|The check box labeled '''System level test''' is displayed here as disabled (not sensitive). This box is reserved for future system-level testing capability in AutoTest, so in versions up to and including 7.0 you can ignore it. }}
|
||||
|
||||
|
||||
Another thing to notice before we click '''Next''' is that at this point we could click '''Launch'''. '''Launch''' will immediately try to create the test with the information it has available. The idea is that if you are creating several similar tests, you can change the test routine name and leave the rest of the information as you had entered it on a previous test. This keeps you from having to traverse the wizard panes entering the same information repeatedly.
|
||||
|
||||
But in our case, we need to use the subsequent wizard panes, so let's click '''Next''' to go to the next one.
|
||||
|
||||
|
||||
==The Tags Pane==
|
||||
|
||||
|
||||
[[Image:AutoTest Tags pane empty|Tags pane]]
|
||||
|
||||
|
||||
With this pane, you identify tags for your test that allow you to manage your test set more easily in the future. Read more in [[#About Tags|About Tags]] below.
|
||||
|
||||
For this test, we will include only a tag that identifies the class and feature covered by the test. To do this we click '''Add tag for covered class/feature'''. When we do, we are presented with a dialog in which we can choose a class and feature.
|
||||
|
||||
|
||||
[[Image:Autotest test coverage tag dialog|Dialog for coverage tag]]
|
||||
|
||||
|
||||
We'll choose class <code>BANK_ACCOUNT</code> and feature <code>deposit</code>, then click '''OK'''.
|
||||
|
||||
Now you should see the coverage tag in the list of '''Tags used in new test'''.
|
||||
|
||||
|
||||
[[Image: AutoTest Tags pane|Tags pane]]
|
||||
|
||||
|
||||
That takes care of adding our coverage tag, so let's click '''Next''' to go to the next wizard pane, the '''General''' pane.
|
||||
|
||||
|
||||
==The General Pane==
|
||||
|
||||
|
||||
[[Image: AutoTest General pane empty|The General Pane]]
|
||||
|
||||
|
||||
We will use this wizard pane to name our test class and let AutoTest know where we want the test class to reside. You can give a test class any name you wish, as long as it doesn't conflict with another class name in your system. If you try to type in a class name that already exists, the wizard will let you know right away by changing the text color to red. There is a convention that has arisen around test class names: if possible, make the test class name the name of the target class, prefixed with <code>TEST_</code>. In our case, we want to build a test against a feature of the <code>BANK_ACCOUNT</code> class, so we will name our test class <code>TEST_BANK_ACCOUNT</code>.
|
||||
|
||||
Now, for the question of where the tests should be kept.
|
||||
|
||||
By default, tests will be stored in a subdirectory of the EIFGENs directory that is generated by the Eiffel compiler. Because it's the default, it's the quickest, easiest way to house tests. But it may not be the best for you in the long run. For example, if you manually delete the EIFGENs directory, which is occasionally necessary, you will lose your tests.
|
||||
|
||||
You could include them in the same cluster as some of your application classes. But there are some advantages to keeping the test classes in a '''test cluster''' separate from your target classes. For example, it will be easier for you to deliver your application or library classes if the testing classes aren't mixed with your domain classes. A '''test cluster''' is just a cluster of classes that EiffelStudio and AutoTest expect to contain test classes. So, in our case, let's create a new testing cluster as a subcluster of the cluster in which the classes <code>APPLICATION</code> and <code>BANK_ACCOUNT</code> reside.
|
||||
|
||||
First, uncheck the box labeled '''Use EIFGENs cluster'''.
|
||||
|
||||
Notice the '''New cluster''' link on the General pane. We click that link to add a new test cluster. The '''Add Cluster''' dialog box appears:
|
||||
|
||||
|
||||
[[Image:AutoTest Add Cluster dialog]]
|
||||
|
||||
|
||||
We can name our test cluster <code>tests</code>, the default, and make it a subcluster of our root cluster <code>accounts</code>. Notice that there is a '''test cluster''' check box on the dialog. It is checked and disabled, so at this point in the wizard you will always create a test cluster. Let's also check the box labeled '''recursive'''. Once the test cluster is created, we're back to the General pane, which now looks like this:
|
||||
|
||||
|
||||
[[Image:AutoTest General pane]]
|
||||
|
||||
|
||||
At this point we have provided all the information necessary for AutoTest to create the shell for a manual test on the <code>deposit</code> feature of the <code>BANK_ACCOUNT</code> class.
|
||||
|
||||
So, now we click '''Launch''', and AutoTest creates our test set and test.
|
||||
|
||||
|
||||
|
||||
==Writing a test==
|
||||
|
||||
Let's look at the class <code>TEST_BANK_ACCOUNT</code>:
|
||||
|
||||
<code>
|
||||
note
|
||||
description: "[
|
||||
Eiffel tests that can be executed by testing tool.
|
||||
]"
|
||||
author: "EiffelStudio test wizard"
|
||||
date: "$Date$"
|
||||
revision: "$Revision$"
|
||||
testing: "type/manual"
|
||||
|
||||
class
|
||||
TEST_BANK_ACCOUNT
|
||||
|
||||
inherit
|
||||
EQA_TEST_SET
|
||||
|
||||
feature -- Test routines
|
||||
|
||||
test_deposit_01
|
||||
-- New test routine
|
||||
note
|
||||
testing: "covers/{BANK_ACCOUNT}.deposit"
|
||||
do
|
||||
assert ("not_implemented", False)
|
||||
end
|
||||
|
||||
end
|
||||
</code>
|
||||
|
||||
We can see that the feature <code>test_deposit_01</code> exists, but doesn't really test anything. So, let's change that. We'll alter <code>test_deposit_01</code> so that it creates an instance of <code>BANK_ACCOUNT</code> and then makes a deposit to that account.
|
||||
|
||||
So, <code>test_deposit_01</code> now looks like this:
|
||||
|
||||
<code>
|
||||
test_deposit_01
|
||||
-- New test routine
|
||||
note
|
||||
testing: "covers/{BANK_ACCOUNT}.deposit"
|
||||
local
|
||||
l_ba: BANK_ACCOUNT
|
||||
do
|
||||
create l_ba
|
||||
l_ba.deposit (500)
|
||||
end
|
||||
</code>
|
||||
|
||||
Now we have created and written a manual test using AutoTest.
|
||||
|
||||
Next let's look into the notion of '''Tags''' in a little more detail, then see what it takes to execute a test.
|
||||
|
||||
|
||||
==About Tags==
|
||||
|
||||
The '''Tags''' pane allows us to associate our test with any AutoTest '''tags''' that we feel are appropriate.
|
||||
|
||||
'''Tags''' are simply names or otherwise meaningful strings of characters that are arranged hierarchically and can be associated with a test to help manage, maintain, execute, and monitor its results. Any one test can support many tags. It is quite likely that during the development process your system will eventually accumulate a great number of tests, and you may want to execute only a selected portion of those tests at any particular time. '''Tags''' allow you to do that with the help of AutoTest.
|
||||
|
||||
One of the most common types of tags specifies what class and feature a test covers. In our example, we wrote our test against the <code>deposit</code> procedure of the class <code>BANK_ACCOUNT</code>. The tag that we added to express this is:
|
||||
<code>
|
||||
covers/{BANK_ACCOUNT}.deposit
|
||||
</code>
|
||||
When we look at a tag in this notation, each hierarchical level is delimited by the forward slash. So the tag above specifies a root "covers" and its child "{BANK_ACCOUNT}.deposit". If this same test tested both <code>deposit</code> and <code>withdraw</code>, then its list of tags would be:
|
||||
<code>
|
||||
covers/{BANK_ACCOUNT}.deposit
|
||||
covers/{BANK_ACCOUNT}.withdraw
|
||||
</code>
|
||||
So whenever you ask to view or run all the tests that cover either <code>deposit</code> or <code>withdraw</code>, this test will show up in that set.
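For example (the routine name and body here are purely illustrative), a single test routine covering both features would carry both coverage tags in its <code>note</code> clause:

<code>
test_deposit_then_withdraw
		-- Hypothetical test exercising both `deposit' and `withdraw'.
	note
		testing: "covers/{BANK_ACCOUNT}.deposit"
		testing: "covers/{BANK_ACCOUNT}.withdraw"
	local
		l_ba: BANK_ACCOUNT
	do
		create l_ba
		l_ba.deposit (500)
		l_ba.withdraw (100)
	end
</code>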
|
||||
|
||||
The "covers" tags, as you saw earlier, can be generated by AutoTest's New Eiffel Test Wizard when you create a new test. But you could enter the tag manually, as well. For example if you had written a high level test that exercised all or most of the functionality of the class <code>BANK_ACCOUNT</code>, you could manually add a tag that expresses that, i.e., a "covers" tag for <code>BANK_ACCOUNT</code> that does not specify a particular routine:
|
||||
<code>
|
||||
covers/{BANK_ACCOUNT}
|
||||
</code>
|
||||
|
||||
Tags can be completely arbitrary, too. For example, if you were building software that you expected to run on multiple platforms, your test suite might include a test with the following tags:
|
||||
<code>
|
||||
platform/os/linux
|
||||
platform/architecture/i386
|
||||
</code>
|
||||
So this test would be specifically for Linux running on Intel architecture. When you were testing on that platform combination, you could select the appropriate tests to run using tags.
|
||||
|
||||
|
||||
===Associating tags with a new test===
|
||||
|
||||
Looking again at the '''Tags''' pane, you will see that there are two boxes under the label '''Tags used in new test'''. The first is just a display of the list of tags that you have added to the new test. The next box down allows you to add an arbitrary tag sequence like:
|
||||
<code>
|
||||
platform/os/linux
|
||||
</code>
|
||||
Below that box are links that allow you to add certain commonly used or predefined tag types. One of these, '''Add tag for covered class/feature''' is the link we used to add the "covers" tag for our test on <code>{BANK_ACCOUNT}.deposit</code>.
|
||||
|
||||
|
||||
===Other predefined tags===
|
||||
|
||||
In addition to '''Add tag for covered class/feature''', choices for other predefined tags are shown as links. For example, '''Add tag to run test in private evaluator''' and '''Add tag to run test serially'''.
|
||||
|
||||
Selecting '''Run test in private evaluator''' will insert the tag:
|
||||
<code>
|
||||
execution/isolated
|
||||
</code>
|
||||
|
||||
When tests are executed, they do so within the context of '''evaluator processes'''. Normally, evaluator processes are reused for multiple test executions. But if you select '''Run in private evaluator''', the tag added to your test guarantees that this test will be run in a fresh evaluator process that terminates when the test completes. This can be helpful, for example, when you don't want your test to enter or leave the evaluator process with the effects of "once" routines or any other action that might affect the efficacy of other tests. For example, if your test executes external routines which might have a damaging effect on memory, you should run the test in a private evaluator.
|
||||
|
||||
If you select '''Run test serially''', the following tag will be inserted:
|
||||
<code>
|
||||
execution/serial
|
||||
</code>
|
||||
|
||||
A test tagged with this tag will not run while any other similarly tagged test is running.
|
||||
|
||||
You can extend the serial execution tag with arbitrary terms that will differentiate groups of tagged tests. For example, if some of your tests are tagged like this:
|
||||
<code>
|
||||
execution/serial/group_1
|
||||
</code>
|
||||
and some are tagged:
|
||||
<code>
|
||||
execution/serial/group_2
|
||||
</code>
|
||||
then AutoTest will not run any <code>group_1</code> tagged test concurrently with any other <code>group_1</code> test, and likewise for tests tagged <code>group_2</code>.
|
||||
|
||||
|
||||
@@ -0,0 +1,110 @@
|
||||
[[Property:title|Execute tests]]
|
||||
[[Property:weight|3]]
|
||||
[[Property:uuid|d0515cb1-0792-3028-2a24-a71b56506959]]
|
||||
In the previous section we coded a manually created test. AutoTest will allow us to execute that test, or, in more practical terms, any set of tests that we select. But before we execute our test, let's take a look at what we will get out of such an execution.
|
||||
|
||||
==About test results==
|
||||
|
||||
It is important to understand that for AutoTest, test results are solely determined by whether an exception occurs during the execution of a test, and, in cases in which an exception does occur, what kind of exception it is. So, with AutoTest, it is not necessary for you to write anything special into a test that propagates the test's results.
|
||||
|
||||
When AutoTest executes a test, the result will be one of only three possibilities:
|
||||
# The test is '''successful'''
|
||||
# The test is '''failing'''
|
||||
# The test result is '''unresolved'''
|
||||
|
||||
These possibilities are defined as follows.
|
||||
|
||||
{{definition|Successful test|A test which has executed without causing an exception to occur. }}
|
||||
|
||||
|
||||
{{definition|Failing test|A test which has caused an exception to occur during its execution, specifically during the execution of a target routine. }}
|
||||
|
||||
|
||||
{{definition|Unresolved test result|A test which has caused an exception to occur during its execution, but exclusive of the execution of a target routine. }}
|
||||
|
||||
|
||||
So, successful tests are easy enough to understand. The test executed with no exception.
|
||||
|
||||
Failing tests and unresolved test results both mean that an exception occurred during the execution of the test. The distinction is made based on the location of the feature that causes the exception.
|
||||
|
||||
When we execute our test <code>{TEST_BANK_ACCOUNT}.test_deposit_01</code>, we know that <code>test_deposit_01</code> will make a call to <code>{BANK_ACCOUNT}.deposit</code>. If the exception occurs during the execution of a target routine (i.e., in <code>{BANK_ACCOUNT}.deposit</code>), then the test is considered failing. If the exception occurs anywhere else in the execution of <code>{TEST_BANK_ACCOUNT}.test_deposit_01</code>, then the test is considered to have an unresolved result.
|
||||
|
||||
|
||||
{{note|Be aware that some early versions of AutoTest reported some unresolved test results as failing tests. }}
|
||||
|
||||
This behavior can be helpful to us as testers. A failing test indicates that there is something amiss in the target routine. The routine has not completed in a state that satisfies its postcondition and class invariant, or is dealing with an unresolved exception from some routine that it has called. An unresolved test result indicates that something is amiss in our test. Something went wrong in the setup or cleanup of the test or perhaps the test called a target routine from a state that did not satisfy the target routine's precondition.
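As a purely illustrative sketch of the distinction, the hypothetical test routine below calls <code>deposit</code> with an amount that violates its precondition <code>amount_large_enough</code>. The exception is charged to the caller, that is, to the test itself rather than to the target routine, so AutoTest would report the result as unresolved rather than failing:

<code>
test_deposit_invalid_amount
		-- Hypothetical test whose result would be unresolved.
	note
		testing: "covers/{BANK_ACCOUNT}.deposit"
	local
		l_ba: BANK_ACCOUNT
	do
		create l_ba
			-- Violates `amount_large_enough: an_amount > 0', so the
			-- precondition violation is blamed on the test, not on `deposit'.
		l_ba.deposit (0)
	end
</code>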
|
||||
|
||||
|
||||
==The AutoTest tool==
|
||||
|
||||
In the last section, we created a manual test. The AutoTest tool shows us the new test in the '''Tests''' column. So, now the tool should look something like this:
|
||||
|
||||
|
||||
[[Image:AutoTest tool with test]]
|
||||
|
||||
|
||||
==Test execution==
|
||||
|
||||
You see under "Tests" the project cluster <code>accounts</code>, the test cluster <code>tests</code>, the test class <code>TEST_BANK_ACCOUNT</code>, and the test <code>test_deposit_01</code>. You might have to expand some of the elements to see everything as shown above.
|
||||
|
||||
You see that the '''Status''' of <code>test_deposit_01</code> is "not tested", and that the '''Last executed''' date is empty.
|
||||
|
||||
To execute tests we use the "Run" button ( [[Image:debug-run-icon]] ) on the interface toolbar. By default, the Run button will run all tests matching the tags in the '''Filter''' box. However, there is a list of run options that you can access by clicking the black triangle just to the right of Run. You can choose to run all tests, only those with failing status, a filtered set of tests, or only those tests that you have selected in the tree below. We'll cover filtering a little later. For now, life is simple: we have only one test, so just selecting '''Run all''' will execute it.
|
||||
|
||||
==Examining test results==
|
||||
|
||||
The test runs in the background, and the AutoTest interface now looks like this:
|
||||
|
||||
|
||||
[[Image:AutoTest tool with failed test]]
|
||||
|
||||
|
||||
It's pretty clear that our test has failed. Its status is now marked with the Failing icon ( [[Image:general-error-icon]] ) and in the box below the '''Execution''' tab we see that the status also includes a tag: <code>balance_increased</code>. More detail is provided in the Testing pane of the Outputs tool, as shown below.
|
||||
|
||||
|
||||
[[Image:AutoTest Outputs tool after run 01]]
|
||||
|
||||
|
||||
We see that <code>balance_increased</code> is a postcondition tag on the target routine <code>{BANK_ACCOUNT}.deposit</code>. Upon examination of the code:
|
||||
|
||||
<code>
|
||||
deposit (an_amount: INTEGER)
|
||||
-- Deposit `an_amount'.
|
||||
require
|
||||
amount_large_enough: an_amount > 0
|
||||
do
|
||||
ensure
|
||||
balance_increased: balance > old balance
|
||||
deposited: balance = old balance + an_amount
|
||||
end
|
||||
</code>
|
||||
|
||||
we realize that there is no implementation here. So we add the code to implement <code>deposit</code>:
|
||||
|
||||
<code>
|
||||
...
|
||||
do
|
||||
balance := balance + an_amount
|
||||
ensure
|
||||
...
|
||||
</code>
|
||||
|
||||
After compiling, we can execute the test again. We could do this by selecting '''Run all''' as we did last time, or by selecting '''Run failing'''. Once the test executes we see now that it was successful:
|
||||
|
||||
|
||||
[[Image:AutoTest tool with passed test]]
|
||||
|
||||
|
||||
This time we see that the test is successful, as indicated by the Success icon ( [[Image:general-tick-icon]] ) in the Status column.
|
||||
|
||||
==The beginnings of a test suite==
|
||||
|
||||
Of course we would not have had to use AutoTest to find that bug in <code>{BANK_ACCOUNT}.deposit</code>. We could have just written a simple class to exercise instances of <code>BANK_ACCOUNT</code> and the truth would have come out.
|
||||
|
||||
The advantage of using AutoTest is that the test that we wrote to cover <code>{BANK_ACCOUNT}.deposit</code> can stay with us throughout the lifecycle of class <code>BANK_ACCOUNT</code>. We can expand the <code>TEST_BANK_ACCOUNT</code> class with additional manual tests and run them after every development increment to ensure that all tests that were once successful are still successful.
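For example, a second manual test added by hand to <code>TEST_BANK_ACCOUNT</code> might look like the sketch below (the routine name, the amounts, and the assertion tag are illustrative choices, not generated by the wizard):

<code>
test_deposit_02
		-- Hypothetical additional manual test: two successive deposits.
	note
		testing: "covers/{BANK_ACCOUNT}.deposit"
	local
		l_ba: BANK_ACCOUNT
	do
		create l_ba
		l_ba.deposit (500)
		l_ba.deposit (250)
		assert ("balance_is_sum", l_ba.balance = 750)
	end
</code>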
|
||||
|
||||
==Manual test summary==
|
||||
|
||||
We have seen how to create and execute a manual test. You will find that manual tests form the backbone of your test suite. But there are two other types of tests available in AutoTest. Next let's take a look at these test types and in what ways they can be used.
|
||||
|
||||
|
||||
@@ -0,0 +1,30 @@
|
||||
[[Property:title|Using AutoTest]]
|
||||
[[Property:weight|-1]]
|
||||
[[Property:uuid|6b900a65-85c6-9cd6-ef57-ccd4b8decbef]]
|
||||
{{note|The following few pages contain the AutoTest tutorial. This tutorial uses a different software example than the bulk of the EiffelStudio Guided Tour. If this is your first time through, you may want to delay the AutoTest tutorial until you have completed the rest of the Guided Tour, then come back to it when you're feeling more familiar with EiffelStudio.}}
|
||||
|
||||
|
||||
{{note| '''To users of V6.6 and later:''' As of V6.6, the New Eiffel test wizard panes have changed somewhat from this documentation. V6.6 introduces the ability to store certain preferred values for creating tests. The advantage is that one need not enter this information on wizard panes each time a test is created. Because preferred values can be stored, the panes containing the values more likely to change between test creations are presented earlier than other panes. In previous versions, these panes were presented later, as shown in this documentation. The documentation will be updated in the future to reflect the newer wizard sequences and pane layouts. }}
|
||||
|
||||
|
||||
==Introduction==
|
||||
|
||||
AutoTest is a tool that helps you to create, manage, and run tests against your software. AutoTest is accessible directly as a part of EiffelStudio, but works to a large extent behind the scenes so that it doesn't get in the way of your development activities. In other words, even though you may be accumulating a substantial collection of test software along with your project software, you can still run and deliver your project software without going to a lot of trouble to separate the two. Tests managed by AutoTest stay handy and can be run any time to help make sure everything always stands up to the scrutiny of testing.
|
||||
|
||||
This tutorial will guide you through the use of AutoTest. A [[AutoTest|reference section]] for AutoTest is also available.
|
||||
|
||||
|
||||
{{Recommended|At least on your first viewing of this tutorial, take the sections in the order in which they are presented. There are three different types of tests supported by AutoTest. Each type of test is discussed on its own page. But to avoid repetition, the pages for the second and third types of tests omit some of the detail in the first and assume a familiarity with the example. }}
|
||||
|
||||
|
||||
{{Caution|<br/> 1) At this time, AutoTest will work '''only''' for project targets in the '''classic Eiffel''' environment. This means that projects targeted to Microsoft .NET will not be able to use AutoTest.<br/> 2) Currently, the use of AutoTest should be '''restricted to projects built without void-safe settings'''.}}
|
||||
|
||||
|
||||
{{Recommended|During the transition to void-safe Eiffel, projects can be built using '''experimental''' mode. This mode is as stable as '''non-experimental''' mode, but includes some facilities that might break existing code in a few circumstances. However, since version 6.5, EiffelStudio itself is built in experimental mode, so '''we recommend that you use AutoTest only on projects also built using experimental mode'''. Experimental mode can be invoked by using the "-experiment" option from the command line, or on Microsoft Windows by following the '''Start''' menu path to EiffelStudio and selecting experimental mode. As of version 6.6, the mode that was '''experimental''' in previous versions, becomes the '''default''' mode.}}
|
||||
|
||||
|
||||
{{SeeAlso|<br />[[AutoTest]] reference }}
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -0,0 +1,104 @@
|
||||
[[Property:title|Managing tests]]
|
||||
[[Property:weight|9]]
|
||||
[[Property:uuid|f1e7f63a-dc86-fefb-e669-3e3ea178c596]]
|
||||
The previous sections cover the basics of testing and what it takes to create and use each of the test types supported by AutoTest. This section will finish things up with some miscellaneous information about testing strategy and hints on using AutoTest.
|
||||
|
||||
|
||||
|
||||
==Favor manual tests==
|
||||
|
||||
|
||||
It is worth repeating that currently, manual tests should form the majority of your testing suite. As you have seen, extracted and synthesized tests use more complex setup and execution mechanisms. These mechanisms make tests less robust and readable than manual tests. So using extracted and synthesized tests as a guide to produce manual tests with the same coverage is, at this time, the best way to work. You will probably be able to do this easily enough with synthesized tests. Extracted tests attempt to recreate the context at a specific point in time, which may make it more difficult to write a manual test that is equivalent to an extracted test.
|
||||
|
||||
Because manual tests are more easily readable than either of the automatically generated test types, you should be able to understand more quickly what has happened when a test produces failing results.
|
||||
|
||||
|
||||
==Deleting unneeded tests==
|
||||
|
||||
|
||||
At some point and for various reasons, you will probably want to delete tests from your test suite. This is easy enough to do. Remember that test sets are actually just classes with certain characteristics, and that tests are actually just specialized routines of test classes.
|
||||
|
||||
If you want to delete a single test, you can delete that feature from its test class.
|
||||
|
||||
If you want to remove a whole test set, then [[Removing a class|delete the class]] that defines that test set.
|
||||
|
||||
|
||||
==Using Filters==
|
||||
|
||||
Filtering is provided to help view, manage, and run the tests in a test suite.
|
||||
|
||||
Filtering controls which tests are visible in the AutoTest interface and how the view is organized. You can display tests organized by the test classes that contain them, by the classes they target, by their type, by their most recent results, or by any scheme you set up using a system of [[Create a manual test#About tags|tags]].
|
||||
|
||||
Filtering helps you manage which tests get run during a given execution. You can select certain tests to be run from those visible in the AutoTest interface, or you can choose to run all tests visible through a filter.
|
||||
|
||||
|
||||
===The Filter box===
|
||||
|
||||
The Filter box in the AutoTest interface can be used to enter filter text which will allow only certain tests to be visible.
|
||||
|
||||
Filter text can be a string of characters occurring in a specific test class name or test routine name, or it can be a [[Create a manual test#About tags|tag]] or a portion of a tag hierarchy. The Filter box also supports regular expressions, so you can filter with more granularity.
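For example (both filter strings below are illustrative), entering the first would show tests whose class or routine names contain "deposit", while the second would narrow the view to tests tagged as covering <code>BANK_ACCOUNT</code>:

<code lang=text>
deposit
covers/{BANK_ACCOUNT}
</code>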
|
||||
|
||||
It is important to bear in mind that the Filter box works with the system of [[Create a manual test#About tags|tags]] described in the section on creating manual tests. Tags are hierarchically structured names that are applied to tests through the <code>note</code> clause. When you use the Filter box to display a set of tests, you specify that set by the tags on the tests. Some of the tags are implicit, in the sense that AutoTest accounts for them even though they are not explicitly coded in <code>note</code> clauses. This should become clear when we look at some examples.
|
||||
|
||||
When the filter text is cleared, the AutoTest interface will display tests accessible through all tag roots.
|
||||
|
||||
As of version 6.5 of EiffelStudio, the tag root words used are:
|
||||
|
||||
|
||||
{| border="2"
|
||||
|-
|
||||
| class || Tests organized by test classes
|
||||
|-
|
||||
| covers || Tests organized by target classes/routines
|
||||
|-
|
||||
| result || Tests organized by the results of their most recent execution
|
||||
|-
|
||||
| user || Tests organized by type (manual, extracted, generated) and by user-added tag hierarchies
|
||||
|}
|
||||
|
||||
|
||||
{{note|The tag roots will appear only if there are tests that can be categorized under them. For example, if you have not run any tests, then '''result''' will not appear. }}
|
||||
|
||||
|
||||
Notice that the Filter box has a drop-down with a list of options:
|
||||
|
||||
|
||||
[[Image:AutoTest filter drop down]]
|
||||
|
||||
|
||||
These options are shortcuts to the various tag roots listed above:
|
||||
|
||||
#'''Test classes''' displays the sub-tree under the tag root '''class'''
|
||||
#'''Classes under test''' displays the sub-tree under the tag root '''covers'''
|
||||
#'''Results''' displays the sub-tree under the tag root '''result'''
|
||||
#'''User-defined tags''' displays the sub-tree under the tag root '''user'''
|
||||
|
||||
|
||||
Any tagging system that you devise will show up under the '''user''' tag root.
|
||||
|
||||
For example, consider a manual test containing a '''testing:''' note name with a user-defined tag as in the following code.
|
||||
|
||||
<code>
|
||||
test_deposit_01
|
||||
-- New test routine
|
||||
note
|
||||
testing: "covers/{BANK_ACCOUNT}.deposit"
|
||||
testing: "my_tag_root" -- My new tag root
|
||||
local
|
||||
l_ba: BANK_ACCOUNT
|
||||
do
|
||||
create l_ba
|
||||
l_ba.deposit (500)
|
||||
end
|
||||
</code>
|
||||
|
||||
This will cause the new user-defined tag and its associated tests to be visible in the AutoTest interface.
|
||||
|
||||
[[Image: AutoTest user defined tag root]]
|
||||
|
||||
|
||||
{{seealso|The [[The AutoTest Interface#Filtering|Filtering]] section in [[The AutoTest interface]].}}
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -0,0 +1,147 @@
|
||||
[[Property:title|Testing: Background and basics]]
|
||||
[[Property:weight|0]]
|
||||
[[Property:uuid|12c2a2d4-9bf2-ba73-6647-cb9900666de1]]
|
||||
==Background and motivation for testing tools==
|
||||
|
||||
Developers test software in the hope that the testing process will expose faults in the software they've developed. Most developers also realize that no amount of testing will ever prove software to be bug free. So while testing is a virtuous activity that we dare not neglect, we are wise to temper our expectation of the practical value of testing.
|
||||
|
||||
A test is designed to exercise a software element given certain inputs and execution state. The state is observed after the test execution to see if the software element has behaved in a manner that is consistent with its specification.
|
||||
|
||||
As a body of software is developed and tested, a large number of tests may accumulate. This large suite of tests can be run at any time in order to ensure that a change or the addition of a new software element does not cause a previously successful test now to fail. Some software development processes call for running a whole suite of tests after every increment of development activity. This type of testing is often referred to as ''regression testing'', because it tends to expose software which had been satisfying its tests at one time, but because of some development activity has regressed to a failing state.
|
||||
|
||||
Creating, managing and running a large number of tests manually can be time-consuming, messy, and error-prone, thus the motivation for automated testing tools. Testing tools help programmers to create, maintain, and execute a suite of tests by automating the activity. During the last few years, both testing methods and tools have become more sophisticated.
|
||||
|
||||
|
||||
==The Eiffel advantage in testing==
|
||||
|
||||
Some of today's development methods require tests to be written before the software elements they test. Then the tests are included as a part of the software specification. But tests can only reflect a very small subset of the possible execution cases. Testing can never replace a comprehensive software specification.
|
||||
|
||||
The great advantage you have with Eiffel, of course, is that the specification for a software element exists in its contract. Like the tests mentioned above, contracts for software are written prior to implementation. So, importantly, tests are ''not'' a part of a software specification in Eiffel.
|
||||
|
||||
With contract checking enabled at run time, the running software's behavior is constantly monitored against the contract's expectations. In other words, for routines, the precondition defines an acceptable state in which the routine can execute, and the postcondition defines an acceptable state after successful execution. The class invariant defines the constraints necessary for instances of a class to be valid.
|
||||
|
||||
A term commonly used in software testing is "oracle". Tests are generally looked at as having two parts: the first is a mechanism that exercises (runs or calls) a particular software element in a given context; the second is the "oracle", whose responsibility it is to determine whether the software element passes or fails the test. Not surprisingly, test oracles in other testing frameworks often look a lot like assertions in Eiffel. So the advantage for Eiffel is that the test oracles for all routines are already written, in the form of postconditions on routines and class invariants.
|
||||
|
||||
The presence of preconditions provides another advantage. Preconditions make it possible to automate testing in ways unavailable in other environments. Because of preconditions, we already have information about the limits of valid inputs to routines. So it's possible to generate a call to a routine we want to test automatically and with a context that meets the routine's precondition.
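To make this concrete, here is a sketch based on the <code>deposit</code> routine used elsewhere in this tutorial; the comments are illustrative annotations, not part of any AutoTest API:

<code>
deposit (an_amount: INTEGER)
		-- Deposit `an_amount'.
	require
			-- Defines the valid input domain, so a test (or a test
			-- generator) knows which argument values are acceptable.
		amount_large_enough: an_amount > 0
	do
		balance := balance + an_amount
	ensure
			-- Serves as the test oracle: checked automatically
			-- after every monitored call.
		deposited: balance = old balance + an_amount
	end
</code>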
|
||||
|
||||
|
||||
==AutoTest==
|
||||
|
||||
AutoTest attempts to capitalize on the testing advantages inherent in Eiffel due to Design by Contract. AutoTest consists of an interactive interface, and a library of classes which support testing activity.
|
||||
|
||||
The testing support classes are distributed with EiffelStudio and exist in the ''testing'' subfolder of the ''libraries'' folder. With the exception of one class which we will discuss soon, the classes in "testing" are not intended to be used directly by developers. They exist to support the functionality of AutoTest.
|
||||
|
||||
The interface for AutoTest is accessible through the EiffelStudio development environment. You may find it already resident as a tab in the right hand pane next to Clusters, Features, and Favorites. If it's not there, then you can bring it up by following the menu path:
|
||||
|
||||
<code lang=text>
|
||||
View --> Tools --> Testing Tool
</code>
|
||||
|
||||
|
||||
==Test classes and tests==
|
||||
|
||||
The AutoTest interface helps you to create and execute tests on the software you develop. The interface contains a wizard called the '''New Eiffel Test Wizard''' which helps you create or generate the types of tests you need. We'll learn more about the interface and the wizard as we go along. But first, let's look at what constitutes a ''test''. For AutoTest, we define the term ''test'' in the context of some other testing terminology:
|
||||
|
||||
|
||||
{{definition|Test class|An effective class that inherits from the class EQA_TEST_SET. }}
|
||||
|
||||
|
||||
{{definition|Test|Any procedure of a test class that satisfies all of the following conditions: <br/>1) Is exported to <code>ANY</code><br/>2) Is immediate (i.e., introduced within the text of the test class)<br/>3) Takes no arguments }}
|
||||
|
||||
|
||||
{{definition|Test set|The set of tests in a test class. }}
|
||||
|
||||
|
||||
{{definition|Test suite|A set of test classes (and by implication the tests contained therein) which is designed to test some particular software system or library. }}
|
||||
|
||||
|
||||
Whenever you use AutoTest, it will find your test classes, those classes that inherit from EQA_TEST_SET. When you run tests, it will execute all the tests in those classes, or a subset of tests that you choose. So, you have probably figured out that the one class from the testing library that you may need to know a little about is EQA_TEST_SET. But you don't have to know very much, because AutoTest can help you construct your test classes.
|
||||
|
||||
|
||||
==Types of tests==
|
||||
|
||||
There are three different types of tests supported by AutoTest:
|
||||
* Manual tests
|
||||
* Extracted tests
|
||||
* Generated tests
|
||||
|
||||
Each test of any of these types is ultimately a feature of a class that inherits from EQA_TEST_SET. Ordinarily, though, the three types of tests won't be mixed in a test class. That is, any one particular test class will contain only one type of test. But from the point of view of AutoTest, all types of tests are managed and run the same way. We will discuss these types of tests in more detail later, but for right now, let's just establish some definitions.
|
||||
|
||||
|
||||
{{definition|Manual test|A test manually coded within a test class. }}
|
||||
|
||||
|
||||
Manual tests are features, procedures in fact, of classes that inherit from EQA_TEST_SET. In many simple cases, test classes containing manual tests inherit directly from EQA_TEST_SET, but that's not a requirement. Occasionally it can be useful for test classes to inherit from a descendant of EQA_TEST_SET that provides additional functionality.
|
||||
|
||||
A manual test is "manual" in the sense that you code the essential procedural part of the test by hand. But you really don't have to deal with the more mundane business of creating the whole test class and ensuring the proper inheritance. The ''New Eiffel Test Wizard'' helps out by automatically creating the shell of a test class and the shell of a test for you to fill in. Then it's pretty easy to add new tests manually to an existing test class.
|
||||
|
||||
|
||||
{{definition|Extracted test|A test that has been created during the execution of a system as a result of a developer request or a failure of the system. Extracted with the test is the current runtime state. When run, the test will attempt to recreate the runtime context. }}
|
||||
|
||||
|
||||
Extracted tests are convenient because they allow you to accumulate tests that are based on actual failures of your software (good for the software, not so good for your ego!). Once these tests are in your suite of tests, they are available from then on.
|
||||
|
||||
|
||||
{{definition|Generated test|A test that is the product of generating and running a series of randomly generated invocations of target routines. }}
|
||||
|
||||
|
||||
The process of creating generated tests is sometimes known in the community as creating tests via ''AutoTest''. The randomly generated calls to target routines that were created and run are discarded when the process completes. But from the results of those calls, a set of permanent tests is distilled. These are the generated tests.
|
||||
|
||||
Generated tests are made possible by Design by Contract. Hopefully, you remember that one thing that DbC gives us is the handy ability to assign blame when something goes wrong. When a test makes a call to a routine we want to test, if a contract violation occurs, it may be the fault of the called routine or it may be the fault of the caller ... and that depends upon what type of contract violation has occurred. The contract violations that are interesting to AutoTest in the process of synthesizing tests are only those in which the called routine is at fault. That is, postcondition and invariant violations. AutoTest will then create a generated test for every ''unique'' failure in which the called routine being tested was to blame.
|
||||
|
||||
|
||||
|
||||
==Anatomy of a test==
|
||||
|
||||
Here are two more definitions:
|
||||
|
||||
{{definition|Target routine|A routine that is to be tested by a test. Sometimes called a "routine under test." }}
|
||||
|
||||
|
||||
{{definition|Target class|A class that contains target routines. Sometimes called a "class under test." }}
|
||||
|
||||
|
||||
In its simplest form, a test is a routine that issues a call to some routine you've developed in some class you've developed.
|
||||
|
||||
So the tests and the test classes are in the realm of testing and are used to test the target routines in target classes which are the real product of your software development project.
|
||||
|
||||
AutoTest will manage and run the tests in any test class whether or not they actually test any target routines. Even though the test shown below doesn't test anything, it still qualifies as a test. Naturally, it would seem silly to keep a test around that doesn't test anything, but the important thing to understand is that AutoTest will work with anything that matches the definitions of test and test class above. That is, once tests are created, AutoTest doesn't really have a stake in what you are trying to test.
|
||||
|
||||
|
||||
|
||||
<code>
|
||||
note
|
||||
description: "[
|
||||
Eiffel tests that can be executed by testing tool.
|
||||
|
||||
]"
|
||||
author: "EiffelStudio test wizard"
|
||||
date: "$Date$"
|
||||
revision: "$Revision$"
|
||||
testing: "type/manual"
|
||||
|
||||
class
|
||||
MY_TEST_CLASS
|
||||
|
||||
inherit
|
||||
EQA_TEST_SET
|
||||
|
||||
feature -- Test routines
|
||||
|
||||
my_test
|
||||
-- New test routine
|
||||
do
|
||||
assert ("not_implemented", False)
|
||||
end
|
||||
|
||||
end
</code>
|
||||
|
||||
|
||||
This test class was created by AutoTest's New Eiffel Test Wizard. It is about as simple a test class as there can be. Its only value is to illustrate the basic form of AutoTest tests. So, let's look at that form.
|
||||
|
||||
It is clear that <code>MY_TEST_CLASS</code> is an effective class that inherits from <code>EQA_TEST_SET</code>, so it fits the definition of a test class. And it's also clear that <code>my_test</code> is a feature of <code>MY_TEST_CLASS</code>, specifically a procedure, exported to <code>ANY</code> and requiring no arguments. That qualifies <code>my_test</code> as a test. If <code>MY_TEST_CLASS</code> is located in a test cluster of your project, then AutoTest will find it and be able to run it whenever you request.
|
||||
|
||||
This test would always fail because of the <code>assert</code> that the wizard put in the implementation. So if you asked AutoTest to run your tests, it would tell you that <code>my_test</code> was a failed test, for the reason: "not_implemented". The <code>assert</code> is not a necessary part of a test. The wizard puts it there to remind you that the test has not been implemented. If you removed the <code>assert</code> line from the test, then the test would always succeed, which would be nice, but it would be succeeding at testing nothing! We'll see more later about what it means for tests to succeed and fail.
|
||||
|
||||
But first let's get some exposure to the AutoTest interface, by building a manual test for a routine in a simple class.
|
||||
|
||||
|
||||
@@ -0,0 +1,146 @@
|
||||
[[Property:title|Using extracted tests]]
|
||||
[[Property:weight|5]]
|
||||
[[Property:uuid|bebd4f28-9818-80f0-a69a-e9ce867723f4]]
|
||||
==About extracted tests==
|
||||
|
||||
At any time that you are running a system in the EiffelStudio debugger and your system is paused, you can ask AutoTest to extract a new test class and test from the current execution context. Most often you would use this capability when you have experienced an unexpected failure or exception in one of your routines. It is possible, though, to extract a test at any point at which the system is paused.
|
||||
|
||||
The value of extracted tests is that they provide a kind of snapshot, in test form, that will reproduce the unexpected failure. An extracted test attempts to reproduce the context in which the offending routine executed. So extracted tests supplement your manual tests: they serve to cover situations for which you just may not have written manual tests.
|
||||
|
||||
Extracted tests are intended to supplement the suite of manual tests that you have created to do the bulk of your testing. So, usually when you create an extracted test, it happens as a result of your being surprised. You will notice that each time you create an extracted test, you get a new test class, too. This is in contrast to manual tests, in which you might use the wizard to create a new test class and one new test to cover a particular target class and target routine. Then you might manually create, in that same test class, many additional tests covering the routine behavior of the same or other target routines in the same target class.
|
||||
|
||||
|
||||
==Creating an extracted test==
|
||||
|
||||
Let's use the same test system we used for manual tests to demonstrate the creation of an extracted test. The example will be slightly contrived because it will find a problem that certainly we would already have discovered had we written a comprehensive set of manual tests against the <code>BANK_ACCOUNT</code> class. Still, the simplicity should help keep things clear.
|
||||
|
||||
If you remember, the root class for the example application was not very interesting, just a root procedure with a single instruction and a declaration <code>my_account</code> of type <code>BANK_ACCOUNT</code>:
|
||||
|
||||
|
||||
<code>
|
||||
make
|
||||
-- Run application.
|
||||
do
|
||||
create my_account
|
||||
end
|
||||
|
||||
my_account: BANK_ACCOUNT
|
||||
</code>
|
||||
|
||||
|
||||
Now, let's add some code into the <code>make</code> procedure that will make use of <code>my_account</code>:
|
||||
|
||||
<code>
|
||||
make
|
||||
-- Run application.
|
||||
do
|
||||
create my_account
|
||||
my_account.deposit (500)
|
||||
my_account.withdraw (100)
|
||||
end
|
||||
</code>
|
||||
|
||||
|
||||
If we run the application from EiffelStudio, we see that it stops when it incurs a postcondition violation in <code>{BANK_ACCOUNT}.withdraw</code>:
|
||||
|
||||
|
||||
[[Image:AutoTest extracted 01]]
|
||||
|
||||
|
||||
When we look at the feature pane, it's pretty easy to see where the problem is:
|
||||
|
||||
|
||||
[[Image:AutoTest extracted 02]]
|
||||
|
||||
|
||||
There is an error in the specification for <code>withdraw</code>. In the postcondition tagged <code>withdrawn</code>, the plus sign should have been a minus sign. Therefore, the assertion should read like this:
|
||||
|
||||
<code>
|
||||
withdrawn: balance = old balance - an_amount
|
||||
</code>
|
||||
|
||||
Certainly we will fix this, but AutoTest gives us the opportunity to extract a test based on this particular failure. So, let's do that.
|
||||
|
||||
So we go to the AutoTest tool, click the triangle next to the '''Create new tests''' button, and select '''Extract tests from debugger''' from the drop-down menu. Because we are paused in the debugger, the drop-down menu appears with the '''Extract tests from debugger''' choice enabled this time:
|
||||
|
||||
|
||||
[[Image:AutoTest create new test 02]]
|
||||
|
||||
|
||||
When we select '''Extract tests from debugger''', we are presented with the New Eiffel Test Wizard's '''Test Extraction''' pane. This wizard pane shows a depiction of the current call stack and asks us for which feature(s) on the stack we want to create the test:
|
||||
|
||||
|
||||
[[Image:AutoTest test extraction pane|Test extraction pane]]
|
||||
|
||||
|
||||
The choice for <code>withdraw</code> is the selection we want. We can deselect the stack frame for <code>make</code> if it is pre-selected. If we clicked '''Next''' at this point, we would be taken to the '''Tags''' pane, and from there to the '''General''' pane. But we really don't need to do this. AutoTest will sense that we are extracting a test for <code>{BANK_ACCOUNT}.withdraw</code> and tag the test properly. It will use the same test class name from the '''General''' pane, but add a numerical suffix. So all we need to do now is to click '''Launch''' from the '''Test Extraction''' pane.
|
||||
|
||||
AutoTest creates the new test and returns us to the debugger, where our system is still on hold. We can stop execution and compile to include the new test.
|
||||
|
||||
Now we see the new test class and test in the AutoTest tool windows.
|
||||
|
||||
|
||||
==Run the tests, fix a problem, run the tests==
|
||||
|
||||
We run our tests using '''Run all''', and we see that the test on <code>withdraw</code> is still failing:
|
||||
|
||||
|
||||
[[Image:AutoTest tool after run]]
|
||||
|
||||
|
||||
If we fix the error in the postcondition in <code>withdraw</code>, recompile, and then re-execute the test, we find that it is successful.
|
||||
|
||||
|
||||
==A closer look at an extracted test==
|
||||
|
||||
Look at the code that was generated for the extracted test after the assertion violation occurred:
|
||||
|
||||
<code>
|
||||
note
|
||||
description: "Regression tests reproducing application state of a previous execution."
|
||||
author: "Testing tool"
|
||||
|
||||
class
|
||||
TEST_BANK_ACCOUNT_EXTRACTED_WITHDRAW_01
|
||||
|
||||
inherit
|
||||
EQA_EXTRACTED_TEST_SET
|
||||
|
||||
feature -- Test routines
|
||||
|
||||
test_withdraw
|
||||
note
|
||||
testing: "type/extracted"
|
||||
testing: "covers/{BANK_ACCOUNT}.withdraw"
|
||||
do
|
||||
run_extracted_test (agent {BANK_ACCOUNT}.withdraw, ["#1", {INTEGER_32} 100])
|
||||
end
|
||||
|
||||
feature {NONE} -- Access
|
||||
|
||||
context: !ARRAY [!TUPLE [type: !TYPE [ANY]; attributes: !TUPLE; inv: BOOLEAN]]
|
||||
-- <Precursor>
|
||||
do
|
||||
Result := <<
|
||||
[{BANK_ACCOUNT}, [
|
||||
"balance", {INTEGER_32} 400
|
||||
], False]
|
||||
>>
|
||||
end
|
||||
|
||||
end
|
||||
</code>
|
||||
|
||||
|
||||
You probably noticed immediately that it doesn't look much like the code that we wrote for our manual test in the previous section.
|
||||
|
||||
One reason for the difference is that the class does not inherit directly from <code>EQA_TEST_SET</code> as our manual test did. Instead, it inherits from <code>EQA_EXTRACTED_TEST_SET</code> which itself is a descendant of <code>EQA_TEST_SET</code>. <code>EQA_EXTRACTED_TEST_SET</code> provides additional functionality for extracted tests.
|
||||
|
||||
Notice that the call to the target routine <code>{BANK_ACCOUNT}.withdraw</code> is effected in the routine <code>test_withdraw</code> which passes an agent representing <code>{BANK_ACCOUNT}.withdraw</code> to the procedure <code>run_extracted_test</code>. The second argument to <code>run_extracted_test</code> is a <code>TUPLE</code> with the argument values which were used in the call to <code>withdraw</code> which caused the original assertion violation.
|
||||
|
||||
Another thing worth noting is the function <code>context</code>. This is how AutoTest recreates the state of the instance of <code>BANK_ACCOUNT</code> at the time of the assertion violation.
|
||||
|
||||
{{caution|The extracted test recreates the state at the point at which execution has halted. So, in the case of a postcondition or invariant violation, the values of the attributes will reflect any changes that have been made during the execution of the routine. (In the example, the value of balance is set to 400, rather than 500 as it would have been when routine <code>withdraw</code> began execution.) This could make a difference in whether the test extracted after an exception is a valid recreation of the original failure. One way of dealing with this, at least in simple cases like this one, is to change the test class code to reflect the proper value. A safer way would be, rather than extracting the test after the exception, to restart the system, stop execution as it enters the failing routine, and then extract the test at that point. }}



@@ -0,0 +1,142 @@
[[Property:title|Using generated tests]]
[[Property:weight|7]]
[[Property:uuid|c17ebddf-5d35-76c1-4912-d9f1ca3770a5]]
==About generated tests==

Generated tests fill a different role from either extracted or manual tests. The idea behind generated tests is that because we specify software through its contracts, and because compliance of the software with those contracts can be actively monitored at runtime, we can know two things necessary for building tests:
#For any routine, what argument values are valid
#For the execution of any routine, what resulting states are acceptable

The first bit of knowledge comes from the ''preconditions'' of target routines. The second comes from the ''postconditions'' of target routines and the ''invariants'' of target classes. Armed with this knowledge, we should be able to generate a series of invocations of target routines using random argument values and evaluate the results. This is what is done by an internal facility of AutoTest that builds generated tests (a facility that is itself often referred to as AutoTest). After many of these randomly generated invocations, AutoTest attempts to synthesize the results of these feature calls into new test classes. The tests in these new test classes contain the calls leading up to, and including, the calls that fail. AutoTest will attempt to create only one test for each unique type of failure, so that your test directory doesn't get loaded with lots of duplicate tests.
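
As a rough illustration, consider the hypothetical routine below (it is not part of the tutorial's <code>BANK_ACCOUNT</code> class, and the attribute <code>overdraft_limit</code> is invented for the example). Its contracts carry exactly the information AutoTest relies on: the precondition bounds which random arguments form a valid call, and the postcondition decides whether the resulting state counts as a pass or a failure.

<code>
set_overdraft_limit (a_limit: INTEGER)
-- Set the overdraft limit to `a_limit'.
require
-- Tells AutoTest which randomly chosen arguments constitute a valid call.
limit_not_negative: a_limit >= 0
do
overdraft_limit := a_limit
ensure
-- Tells AutoTest which resulting states are acceptable; anything else is reported as a failure.
limit_set: overdraft_limit = a_limit
end
</code>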

You may look at a generated test class and think that it seems very long and contains a lot of material whose relevance you doubt. This is a fair assessment. The processes that AutoTest uses to build and minimize generated tests are constantly being improved, but for now generated tests, although useful, retain a certain amount of the randomness that was used in their creation.

So for the time being, unlike manual and extracted tests, you should not make generated tests a part of your permanent test suite. Rather, you should consider them a disposable means to an end. Use each generated test as a guide for building an effective and readable manual test.


==Creating generated tests==


If you've been through the discussion of the creation of [[Create a manual test|manual]] and [[Using extracted tests|extracted]] tests, then it should come as no surprise to learn that you use the '''New Eiffel test wizard''' to create generated tests. And much of this process will seem familiar now.

In the drop-down menu for the '''Create new test''' button, choose the item '''Generate tests for custom types'''.


[[Image:AutoTest create new test]]


At this point, you'll see the '''Test Generation''' wizard pane. This pane allows you to specify which classes you want to generate tests for. You can also adjust the values of certain parameters used in the test generation.

Let's type the class name <code>BANK_ACCOUNT</code> into the box labeled '''Class or type name''' and click the "'''+'''" button to add it to the list. Of course, you can remove an item from the list by selecting it and clicking "'''-'''".


[[Image:AutoTest Test Generation pane]]


The rest of the pane is used to configure certain options for the test generation process.

'''Cutoff (minutes)''' lets you specify the number of minutes for which AutoTest will run random invocations of the routines in your target class(es).

'''Cutoff (invocations)''' lets you control how long AutoTest will run random invocations by declaring a specific number of invocations.

'''Routine timeout''' sets an upper limit on how long AutoTest will wait for a random feature call to complete.

'''Random number generation seed''' provides a way for you to control the seeding of the random number generator used by AutoTest. When the value is '''0''', as shown here, the seed is created from the system clock. This is adequate in most cases, but the option is provided because you might sometimes want to reproduce a previous test generation run; to do that, you would set the seed to the same value for each run.

The two check boxes '''Slice minimization''' and '''DDmin for minimization''' let you select the approach used for minimizing the size of generated tests; generally, the default value is adequate. '''Slicing''' and '''ddmin''' are two different ways of doing minimization. Tests are generated after running many randomly generated calls to routines in your target class, and a test is generated for each call that fails, so there may have been many randomized calls leading up to the failed call. Minimization helps to eliminate most of the unrelated randomly generated calls, leaving the test code as short as possible. You will notice that minimization processing is memory and processor intensive.

The last check box, '''HTML statistics''', gives you the option of having AutoTest record the results of a test generation run in a set of files that you can review with a web browser.

We can leave all of these options at their default values, with one exception: let's check the '''HTML statistics''' box.



During the test generation you can watch the random invocations of your class's routines being logged in the Testing pane of the Outputs tool. When the generation completes, AutoTest directs you to the location of the results:


[[Image:AutoTest testing pane after generation]]


The file <code>statistics.txt</code> contains a summary of the generation run. If you checked the '''HTML statistics''' box, you can open the file <code lang="text">index.html</code> with your browser and view formatted summary and detail information.


{{note|The '''result''' directory includes files that summarize the whole generated testing process. Some of these are lengthy because they contain information on the test cases used for each target routine. }}


If we try to generate tests on the class <code>BANK_ACCOUNT</code> in which we have already fixed two bugs after the manual and extracted tests, we will see results much like the following:


[[Image:AutoTest generated results pass]]


The important thing to notice here is the status: '''pass'''. There were no randomly generated cases that failed. So every valid invocation of a routine for class <code>BANK_ACCOUNT</code> completed satisfactorily. Therefore, no generated test class was created.

If we re-introduce the bug into the <code>deposit</code> procedure of class <code>BANK_ACCOUNT</code> (i.e., remove the line <code>balance := balance + an_amount</code>), and then request generated tests again, we get different results:


[[Image:AutoTest generated results fail]]


This time, as we expected, failures were encountered, and a generated test class was created.


==A look at a generated test==


The generated test class looks like this:


<code>
note
description: "Generated test created by AutoTest."
author: "Testing tool"

class
TEST_BANK_ACCOUNT_GENERATED_001

inherit
EQA_SYNTHESIZED_TEST_SET

feature -- Test routines

generated_test_1
note
testing: "type/generated"
testing: "covers/{BANK_ACCOUNT}.deposit"
local
v_6: BANK_ACCOUNT
v_7: INTEGER_32
do
execute_safe (agent: BANK_ACCOUNT
do
create {BANK_ACCOUNT} Result
end)
if {l_ot1: BANK_ACCOUNT} last_object then
v_6 := l_ot1
end
v_7 := {INTEGER_32} 3

-- Final routine call
set_is_recovery_enabled (False)
execute_safe (agent v_6.deposit (v_7))
end

end
</code>


{{note|If you've been following along by doing these examples in EiffelStudio, you may notice that your generated class looks slightly different. }}


This test is written in a way that is a little different from both the manual test we wrote and the extracted test, but it's not too hard to figure out what's going on. An object of type <code>BANK_ACCOUNT</code> will be created (local <code>v_6</code>) and the <code>deposit</code> feature will be applied to it with an argument value of <code>3</code> (local <code>v_7</code>).
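
Stripped of the AutoTest plumbing (<code>execute_safe</code>, <code>last_object</code>, the recovery switch), the generated test boils down to roughly the following. This sketch is only a reading aid, not code that AutoTest itself produces:

<code>
local
v_6: BANK_ACCOUNT
v_7: INTEGER_32
do
-- Create the target object and choose the (randomly generated) argument value.
create v_6
v_7 := 3
-- The call whose contracts are checked; with the bug present, the postcondition of `deposit' fails.
v_6.deposit (v_7)
end
</code>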

You can see that this test, although it is implemented differently, is about the same as the manual test we wrote covering <code>{BANK_ACCOUNT}.deposit</code>. Because we have re-introduced the bug in <code>BANK_ACCOUNT</code>, if we run all tests, we see that both our manual test and the generated test are failing ... only the extracted test covering <code>{BANK_ACCOUNT}.withdraw</code> is successful:


[[Image:AutoTest interface after run 05]]


If we restore the line we had removed from <code>{BANK_ACCOUNT}.deposit</code> and then re-execute the tests, all are successful.