[[Property:title|Execute tests]]
[[Property:weight|3]]
[[Property:uuid|d0515cb1-0792-3028-2a24-a71b56506959]]

In the previous section we coded a manually created test. AutoTest allows us to execute that test or, in more practical terms, any set of tests that we select. But before we execute our test, let's take a look at what we will get out of such an execution.
==About test results==

It is important to understand that, for AutoTest, test results are determined solely by whether an exception occurs during the execution of a test and, when one does occur, by what kind of exception it is. So, with AutoTest, you do not need to write anything special into a test to propagate the test's result.

When AutoTest executes a test, the result will be one of only three possibilities:
# The test is '''successful'''
# The test is '''failing'''
# The test result is '''unresolved'''

These possibilities are defined as follows.
{{definition|Successful test|A test which has executed without causing an exception to occur. }}

{{definition|Failing test|A test which has caused an exception to occur during its execution, specifically during the execution of a target routine. }}

{{definition|Unresolved test result|A test which has caused an exception to occur during its execution, but outside the execution of a target routine. }}

So, successful tests are easy enough to understand: the test executed with no exception.
Failing tests and unresolved test results both mean that an exception occurred during the execution of the test. The distinction is based on where the exception originates.

When we execute our test <code>{TEST_BANK_ACCOUNT}.test_deposit_01</code>, we know that <code>test_deposit_01</code> will make a call to <code>{BANK_ACCOUNT}.deposit</code>. If the exception occurs during the execution of a target routine (i.e., in <code>{BANK_ACCOUNT}.deposit</code>), then the test is considered failing. If the exception occurs anywhere else in the execution of <code>{TEST_BANK_ACCOUNT}.test_deposit_01</code>, then the test is considered to have an unresolved result.
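
As a reminder of the shape of such a test, here is a minimal sketch of what the test class from the previous section might look like. The inheritance from <code>EQA_TEST_SET</code>, the creation instruction, and the deposited amount are assumptions for illustration; your actual class text may differ.

<code>
class
    TEST_BANK_ACCOUNT

inherit
    EQA_TEST_SET

feature -- Test routines

    test_deposit_01
            -- Exercise the target routine {BANK_ACCOUNT}.deposit.
        local
            l_account: BANK_ACCOUNT
        do
            create l_account
                -- An exception raised inside the call below, i.e. in the
                -- target routine `deposit', makes the test failing; an
                -- exception raised anywhere else in this routine makes
                -- the result unresolved.
            l_account.deposit (100)
        end

end
</code>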
{{note|Be aware that some early versions of AutoTest reported some unresolved test results as failing tests. }}

This behavior can be helpful to us as testers. A failing test indicates that something is amiss in the target routine: the routine has not completed in a state that satisfies its postcondition and class invariant, or an exception raised by some routine that it called has gone unhandled. An unresolved test result indicates that something is amiss in our test: something went wrong in the setup or cleanup of the test, or perhaps the test called a target routine from a state that did not satisfy the target routine's precondition.
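
For example, a hypothetical test routine that calls <code>deposit</code> with a non-positive amount would violate the routine's precondition. The violation arises at the point of call, inside the test routine rather than inside the target routine, so AutoTest would report the result as unresolved:

<code>
    test_deposit_invalid
            -- Hypothetical test: calling `deposit' with a non-positive
            -- amount violates its precondition. The violation occurs
            -- at the point of call, in this test routine rather than
            -- in the target routine, so the result is unresolved.
        local
            l_account: BANK_ACCOUNT
        do
            create l_account
            l_account.deposit (0)
        end
</code>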
==Refreshing the test suite==

In the last section, we created a manual test. The AutoTest tool shows us a log of that action in the '''Creation''' tab of the lower pane. So now the tool should look something like this:
[[Image:AutoTest tool Creation and Refresh 01]]

To make the new test visible in the upper pane, we refresh the test suite by clicking the refresh button ( [[Image:refresh-icon]] ) indicated by the cursor in the screenshot above.
{{tip|You can force the test suite to be refreshed automatically after each compilation by clicking the triangle to the right of the refresh button and checking the '''Refresh after compilation''' option in the list. }}
==Test execution==

After refreshing the test suite, the new test becomes visible in the top pane of the AutoTest tool.
[[Image:AutoTest interface 01]]

Here you see under "Tests" the project cluster <code>accounts</code>, the test cluster <code>tests</code>, the test class <code>TEST_BANK_ACCOUNT</code>, and the test <code>test_deposit_01</code>.
You can see that the '''Status''' of <code>test_deposit_01</code> is "not tested", and that the '''Last executed''' date is empty.

To execute tests we use the '''Run''' button ( [[Image:debug-run-icon]] ) on the interface toolbar. The Run button runs all tests by default, but a list of run options is available by clicking the black triangle just to the right of it. You can choose to run all tests, only those with failing status, a filtered set of tests, or only those tests that you have selected in the tree below. We'll cover filtering a little later. For now, life is simple: we have only one test, so just clicking the Run button will execute it.
==Examining test results==

The test runs in the background, and the AutoTest interface now looks like this:
[[Image:AutoTest interface after run 01]]

It's pretty clear that our test has failed. Its status is now marked with the Failing icon ( [[Image:general-error-icon]] ), and in the box below the '''Execution''' tab we see that the status also includes a tag: <code>balance_increased</code>. More detail is provided in the Testing pane of the Outputs tool, as shown below.
[[Image:AutoTest Testing pane after run 01]]

We see that <code>balance_increased</code> is a postcondition tag on the target routine <code>{BANK_ACCOUNT}.deposit</code>. Upon examination of the code:
<code>
deposit (an_amount: INTEGER)
        -- Deposit `an_amount'.
    require
        amount_large_enough: an_amount > 0
    do
    ensure
        balance_increased: balance > old balance
        deposited: balance = old balance + an_amount
    end
</code>

we realize that there is no implementation here. So we add the code to implement <code>deposit</code>:
<code>
    ...
    do
        balance := balance + an_amount
    ensure
    ...
</code>
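
Putting the two fragments together, the repaired routine reads:

<code>
deposit (an_amount: INTEGER)
        -- Deposit `an_amount'.
    require
        amount_large_enough: an_amount > 0
    do
        balance := balance + an_amount
    ensure
        balance_increased: balance > old balance
        deposited: balance = old balance + an_amount
    end
</code>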
After compiling, we execute the test again and view the result:
[[Image:AutoTest interface after run 03]]

This time we see that the test is successful, as indicated by the Success icon ( [[Image:general-tick-icon]] ) in the '''Status''' column.
==The beginnings of a test suite==

Of course, we would not have had to use AutoTest to find that bug in <code>{BANK_ACCOUNT}.deposit</code>. We could have just written a simple class to exercise instances of <code>BANK_ACCOUNT</code>, and the truth would have come out.

The advantage of using AutoTest is that the test we wrote to cover <code>{BANK_ACCOUNT}.deposit</code> can stay with us throughout the lifecycle of class <code>BANK_ACCOUNT</code>. We can expand <code>TEST_BANK_ACCOUNT</code> with additional manual tests, such as the sketch below, and run them after every development increment to ensure that all tests that were once successful are still successful.
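
For instance, a second test might check that consecutive deposits accumulate. The test name, the amounts, and the assumption that a newly created account has a zero balance are all hypothetical; <code>assert</code> is the assertion feature provided by the testing framework's test-set class.

<code>
    test_deposit_02
            -- Two consecutive deposits accumulate in `balance'.
        local
            l_account: BANK_ACCOUNT
        do
            create l_account
            l_account.deposit (100)
            l_account.deposit (50)
            assert ("balance_accumulated", l_account.balance = 150)
        end
</code>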
==Manual test summary==

We have seen how to create and execute a manual test. You will find that manual tests form the backbone of your test suite. But there are two other types of tests available in AutoTest. Next, let's take a look at these test types and the ways in which they can be used.