merge changes from branch 17.05 onto trunk

git-svn-id: https://svn.eiffel.com/eiffel-org/trunk@1941 abb3cda0-5349-4a8f-a601-0c33ac3a8c38
This commit is contained in:
eiffel-org
2018-02-06 13:41:39 +00:00
parent 0b47147042
commit 265a446dab
14 changed files with 293 additions and 307 deletions

View File

@@ -1,7 +1,6 @@
[[Property:title|EiffelBase Tutorial]]
[[Property:weight|-2]]
[[Property:uuid|d540615d-802b-8e12-af74-4d01d1fc4760]]
==EiffelBase Tutorial==
Learn about the EiffelBase library.

View File

@@ -1,136 +1,136 @@
[[Property:title|Asynchronous Calls]]
[[Property:weight|6]]
[[Property:uuid|d3d3873c-5c84-7566-547e-1ede38544081]]
==Overview==
As we've seen in [[Separate Calls]], feature calls to a non-separate target are always synchronous.
Furthermore, queries are always synchronous, because the caller has to wait for the result.
{| border="1"
|-
! Target
! Query
! Command
|-
| non-separate
| synchronous
| synchronous
|-
| separate
| synchronous
| potentially asynchronous
|}
Asynchronous calls can therefore only happen on commands with a separate target.
Indeed, such calls are by default executed asynchronously, but there are some important exceptions to this rule.
A command to a separate target is executed synchronously if any of the following applies:
* The client (caller) and supplier (target) region are the same.
* The target region is passive.
* The callee needs a lock currently held by the caller (lock passing).
* The caller holds the locks of the callee (separate callbacks).
== Triggers for Synchronization ==
=== Same Regions ===
The first case arises when a reference is declared separate but happens to be non-separate. It follows directly from the type system: a non-separate type <code>A</code> always conforms to its separate variant <code>separate A</code>. At run time, such cases can be detected with an object test:
<code>
sep_object: separate A
-- ...
if attached {A} sep_object as non_sep_object then
	-- `non_sep_object' is non-separate here: calls on it are synchronous.
end
</code>
=== Passive Regions ===
In the SCOOP model, a passive region does not have a processor attached to it.
This means that clients of a passive region have to apply the features logged against it themselves.
The logical consequence is that all calls to a passive region, including commands, are executed synchronously.
=== Lock Passing ===
Lock passing is another source of synchronization. It is one of the trickiest issues to detect, and to fully understand it we must first introduce a few more definitions.
In [[Exclusive Access]] we learned that an object is ''controlled'' if it appears as a formal argument of the enclosing routine. SCOOP, however, always grants exclusive access over a whole region. We therefore introduce the new term ''Lock'':
{{definition|Lock|Exclusive access to a SCOOP region and all objects therein.}}
Note the difference between ''controlled'' and ''locked'':
* ''Controlled'' applies to a single object, whereas ''locked'' applies to a region.
* The ''controlled'' property can be determined statically at compile-time, whereas ''locked'' is determined at runtime.
* The set of ''controlled'' objects of a processor is always a subset of the set of objects in ''locked'' regions.
{{note|In terms of implementation, a ''lock'' corresponds to an open call queue to a region.}}
Now consider two small classes <code>HASH_STORAGE</code> and <code>EXAMPLE</code>:
<code>
class HASH_STORAGE feature
	hash_code: INTEGER

	set_hash_code (a_string: separate STRING)
		do
			hash_code := a_string.hash_code
		end
end

class EXAMPLE feature
	run (a_hash_storage: separate HASH_STORAGE; a_string: separate STRING)
		do
			a_hash_storage.set_hash_code (a_string)
			io.put_integer (a_hash_storage.hash_code)
		end
end
</code>
You might notice a problem here:
In the feature <code>{EXAMPLE}.run</code>, exclusive access to <code>a_hash_storage</code> and <code>a_string</code> is guaranteed by the SCOOP semantics.
In other words, the corresponding regions are ''locked''. The feature <code>{HASH_STORAGE}.set_hash_code</code>, however, needs access to <code>a_string</code> as well.
In the SCOOP model, as seen so far, this would result in a deadlock. The handler of the <code>HASH_STORAGE</code> object waits for exclusive access on the string object, and the <code>EXAMPLE</code> object waits for the query <code>{HASH_STORAGE}.hash_code</code> to return.
To resolve this problem, SCOOP implements a technique called ''Lock Passing''.
Locks on regions can be passed to the handler of the target of a separate call.
Lock passing happens whenever the client processor (the handler of the <code>EXAMPLE</code> object) has locked a region that holds an object which is passed as an actual argument to a separate call. Note that this also includes non-separate reference objects, because a processor always holds a lock over its own region.
When a client has passed its locks to the supplier processor, it cannot continue execution until the called feature has been applied by the supplier processor, and the supplier processor has given back the locks to the client. Therefore, this type of call must be synchronous.
{{note|During lock passing, a processor gives away all the locks that it currently holds, including the lock on itself.}}
{{note|Lock passing happens for every synchronous call, in particular for queries and for calls to passive processors.}}
The advantage of lock passing is that it enables some very common programming patterns without triggering a deadlock. The disadvantage is that it can be hard to tell '''when''' it happens. There are, however, a few cases in which lock passing is guaranteed to happen (see the sketch after this list), namely when the actual argument passed to a separate call is
* a formal argument of the enclosing routine,
* of a non-separate reference type or
* <code>Current</code>.
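To make these rules concrete, here is a minimal sketch; the classes <code>WORKER</code> and <code>DATA</code> and their features are hypothetical, not taken from any library:
<code>
work (a_worker: separate WORKER; a_count: INTEGER; a_data: DATA)
		-- `a_worker' is controlled, so its region is locked by the current processor.
	do
		a_worker.set_count (a_count)
			-- Expanded argument only: no lock passing, potentially asynchronous.
		a_worker.process (a_data)
			-- `a_data' is a non-separate reference: locks are passed, the call is synchronous.
		a_worker.register (Current)
			-- `Current' is passed: locks are passed, the call is synchronous.
	end
</code>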
There are, however, some cases where it is not immediately obvious that lock passing happens.
For example, a region might be locked because of a controlled argument somewhere further up the call stack (i.e. not in the enclosing routine, but in its caller), or because the object passed as an argument happens to be in the same region as one of the controlled objects.
There is a workaround to disable lock passing for a specific call:
<code>
async_call (a_procedure: separate PROCEDURE [TUPLE])
	do
		a_procedure.call (Void)
	end

example (a_object: separate ANY)
	do
		async_call (agent a_object.some_feature (Current))
	end
</code>
The feature <code>async_call</code> can be defined once somewhere in the project and reused. The downside is that an agent needs to be created, but no lock passing happens, because all arguments to the agent are closed and only <code>Void</code> is passed to the separate call, which cannot trigger lock passing.
However, this mechanism should be used with some care, because it is easy to run into one of the deadlocks mentioned above.
=== Separate Callbacks ===
The last source of synchronous calls is closely related to lock passing. If a processor '''A''' has passed a non-separate reference argument to another processor '''B''', and has thus passed its locks away, it cannot proceed with its execution. Sometimes, however, processor '''B''' has to log some calls back to '''A'''; such a call is known as a ''separate callback''.
{{definition|Separate Callback | A separate call where the caller holds the locks of the callee. }}
During a separate callback processor '''B''' has to give back the locks it has previously received from '''A'''.
This in turn means '''B''' has to wait until '''A''' has finished its execution of the separate callback and returned the locks, which effectively makes the call synchronous.
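A minimal sketch of such a callback; the classes <code>JOB</code> and <code>CLIENT</code> and their features are hypothetical:
<code>
class JOB feature
	process (a_client: separate CLIENT)
			-- Called by the client as `job.process (Current)', which passes
			-- the client's locks along with `Current'; the client then waits.
		do
			-- ... perform the work ...
			a_client.notify_done
				-- Separate callback: this processor holds the client's lock, so it
				-- hands the lock back and the call is executed synchronously.
		end
end
</code>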

View File

@@ -1,13 +1,9 @@
[[Property:title|SCOOP implementation]]
[[Property:weight|9]]
[[Property:uuid|eeb07907-e036-f3d6-5297-a7c44cfd1900]]
The implementation of SCOOP within EiffelStudio varies from the definition as it has been presented in publications during the model's evolution.
Some of the differences stem from the fact that SCOOP systems can be optimized for performance, including maximizing concurrency. For example, even though the SCOOP rules state that a separate call to a query is synchronous (i.e., the calling processor waits until the query completes before proceeding), if a static analysis can show that the wait is not necessary for the proper functioning of the remainder of the calling routine, then the call to the query can be made asynchronously.
In other ways, particularly for early versions, the EiffelStudio implementation may not cover all the goals of SCOOP as stated in the various publications. Certain elements that are defined in the SCOOP descriptions may not yet be present, or only partially implemented in the EiffelStudio implementation.
The differences between the EiffelStudio implementation of SCOOP and current and previous definitions of the SCOOP model are shown below.
The implementation of SCOOP is the result of many years of design and refinement of the model. This page describes specific properties of its current state, in particular ways in which it differs from some of the earlier publications about the model.
=Known limitations=
@@ -15,44 +11,18 @@ The differences between the EiffelStudio implementation of SCOOP and current and
==Supported concurrency mechanisms==
Although the SCOOP model can support virtually any underlying concurrency mechanism, the initial SCOOP implementation in EiffelStudio version 6.8 supports only one executable, using multiple process threads as SCOOP [[Concurrent programming with SCOOP#Processors|processors]].
At the core of the SCOOP model lies the notion of [[Concurrent programming with SCOOP#Processors|processor]]: a mechanism that can execute instructions sequentially. As a general model covering many forms of concurrency, SCOOP allows many possible implementations of this concept. In the EiffelStudio implementation, processors are implemented as <em>threads</em> of the operating system.
==Maximum number of SCOOP processors==
In the initial release, the allowable maximum number of SCOOP processors per system is 1024.
The maximum number of SCOOP processors per system is currently 1024.
==Separate anchored types==
Applicable prior to revision number 86657:
Separate status for anchored types is not supported properly. So declarations of the form:
<code>
my_entity: separate like my_query
</code>
and
<code>
my_entity: separate like Current
</code>
should be used only if you are using revision 86657 or later.
Also, if you use an anchored declaration such as:
<code>
my_entity: like my_query
</code>
and the type of <code>my_query</code> is <code>separate</code>, you should make sure you are using revision 86657 or later.
==Agents targeted to objects of separate types==
In version 6.8, agents targeted on separate objects are not supported.
Agents targeted on separate objects are currently not supported.
=Workarounds=
@@ -101,7 +71,7 @@ If the agent does not take any arguments, you must pass Void, otherwise the comp
is on the same processor as the caller and thus triggers lock passing (see [[Asynchronous Calls]]):
<code>
do_call (proc: separate PROCEDURE [ANY, TUPLE])
do_call (proc: separate PROCEDURE [TUPLE])
	do
		proc.call (Void)
	end
@@ -110,7 +80,7 @@ do_call (proc: separate PROCEDURE [ANY, TUPLE])
If the agent does take arguments, things get a bit more tricky. If the call must be asynchronous, you have to do a workaround with the feature <code>{ROUTINE}.empty_operands</code> like this:
<code>
do_call (a_procedure: separate PROCEDURE [ANY, TUPLE[separate STRING]]; a_string: separate STRING)
do_call (a_procedure: separate PROCEDURE [TUPLE[separate STRING]]; a_string: separate STRING)
	local
		l_tuple: separate TUPLE [separate STRING]
	do

View File

@@ -36,7 +36,7 @@ Note that for performance reasons it is usually better to use predefined criteri
===Agent Criteria===
An agent criterion will filter the objects according to the result of an agent applied to them.
The criterion is initialized with an agent of type <e>PREDICATE [ANY, TUPLE [ANY]]</e>.
The criterion is initialized with an agent of type <e>PREDICATE [TUPLE [ANY]]</e>.
There should be either an open target or a single open argument, and the type of the objects in the query result should conform to the agent's open operand.
==Creating criteria objects==
@@ -57,7 +57,7 @@ feature -- Creating a criterion
-- containing either a single PREDICATE or three
-- values of type [STRING, STRING, ANY].
new_agent (a_predicate: PREDICATE [ANY, TUPLE [ANY]]): PS_CRITERION
new_agent (a_predicate: PREDICATE [TUPLE [ANY]]): PS_CRITERION
-- Creates an agent criterion.
new_predefined (object_attribute: STRING; operator: STRING; value: ANY): PS_CRITERION

View File

@@ -3,4 +3,15 @@
[[Property:uuid|4ddb235c-fea6-ee00-05af-6493e2c652a7]]
== Network communication solutions==
There are a few network-related libraries, either included in the EiffelStudio delivery or available as iron packages:
* '''EiffelNet''':
: manipulation of sockets and network addresses (IPv4 and IPv6), plus basic, limited implementations of a few protocols. <code lang="shell">$ISE_LIBRARY/library/net/net.ecf</code>
* '''ZeroMQ''':
: a wrapper for the [http://zeromq.org/ ZeroMQ] distributed messaging library. [https://svn.eiffel.com/eiffelstudio/trunk/Src/library/zeromq]
* '''http_client''':
: a simple web client to send HTTP requests (GET, POST, ...) and receive the associated responses. <code lang="shell">$ISE_LIBRARY/contrib/library/network/http_client/http_client.ecf</code>

View File

@@ -7,13 +7,13 @@ The implementation of agents is an advanced topic, but you do not have to unders
==Objects that Represent Operations==
Object technology is based on the idea that when we model real world objects, we model them based on the things that they have and what they can do ... their properties and their operations. The operations in Eiffel are the routines, i.e., the functions and procedures, of classes. Operations are not objects.
Object technology is based on the idea that we model systems in terms of objects representing the "things" they manipulate. As for the operations on these objects, they appear in the corresponding classes as routines (functions and procedures). Operations are not objects.
Having said that, it is sometimes desirable for us to model operations. We do this in the same fashion that we model other concepts: statically as classes, and as objects at runtime.
Sometimes, on the other hand, the "things" we model with our objects could represent operations. For example, we might want to build a list of tasks to be performed later; each task is defined by a routine. Each of the objects in the list will represent the corresponding routine.
An object that represents an operation is called an agent.
Such an object, representing an operation, is called an agent.
If we can have a runtime object that represents an operation, then we can place the object in the structure of another object, where at some later time, a client can cause the associated operation to execute.
If we can have a run-time object that represents an operation, then we can place the object in the structure of another object, where at some later time, a client can cause the associated operation to execute.
This is a very desirable model for event-driven processing, such as graphical user interfaces. The operations that are executed when a user takes some action, like clicking a button, can be represented by agents. When the user interface element is initialized, agents representing the action routines are stored within the interface element. Then, when an event such as a button click occurs, the agents for that event are retrieved and their associated operations are executed.
@@ -23,9 +23,9 @@ Another area in which agents are commonly used is in traversing data structures.
We know that there are two types of routines in Eiffel: functions and procedures.
Not surprisingly, the implementation of agents relies on three classes in the Base Library. Class <code>ROUTINE</code>, and its heirs <code>FUNCTION</code> and <code>PROCEDURE</code>.
The implementation of agents correspondingly relies on three classes in the Base Library: class <code>ROUTINE</code> for the general notion, and its heirs <code>FUNCTION</code> and <code>PROCEDURE</code>. In addition, <code>PREDICATE</code>, an heir of <code>FUNCTION</code>, covers the particular case of a function returning a boolean result.
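For example, agent entities of these types can be declared as follows (a sketch; the open-argument tuples shown are only illustrative):
<code>
p: PROCEDURE [TUPLE]                    -- wraps a procedure with no open arguments
f: FUNCTION [TUPLE [INTEGER], STRING]   -- wraps a function from INTEGER to STRING
b: PREDICATE [TUPLE [STRING]]           -- wraps a boolean-valued function on STRING
</code>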
When you use an agent from a client routine, you will be building an instance of either <code>FUNCTION</code> or <code>ROUTINE</code>. This happens implicitly as you will see.
When you use an agent from a client routine, you will be building an instance of either <code>FUNCTION</code> or <code>PROCEDURE</code>.
==Using Agents==
@@ -42,10 +42,10 @@ It is important to understand that <code>step_forward</code> does not get applie
In this example, the routine <code>step_forward</code> on which the agent is based takes no arguments. If you drilled down into the workings of this example, you would find that the class that implements the feature <code>extend</code> is class <code>EV_NOTIFY_ACTION_SEQUENCE</code>. You would also see that the signature of the feature <code>extend</code> is essentially as follows.
<code>
extend (v: PROCEDURE [ANY, TUPLE])
extend (v: PROCEDURE [TUPLE])
</code>
We don't have to know too much about the workings of agents to see that "<code>extend</code>" takes an argument <code>v</code> which is of type <code>PROCEDURE</code>. It turns out that the actual generic parameter <code>TUPLE</code> represents the set of "open" arguments. In this case, extend is expecting an agent which has no open arguments.
We don't have to know too much about the workings of agents to see that "<code>extend</code>" takes an argument <code>v</code> which is of type <code>PROCEDURE</code>. The actual generic parameter <code>TUPLE</code> represents the set of "open" arguments. In this case, <code>extend</code> is expecting an agent with no open arguments.
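As a concrete illustration, registering such an agent typically looks like this (a sketch assuming an EiffelVision 2 button <code>my_button</code> and a no-argument procedure <code>step_forward</code>):
<code>
my_button.select_actions.extend (agent step_forward)
	-- `step_forward' will be executed each time the button is pressed.
</code>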
===Open and Closed Arguments===
@@ -53,7 +53,7 @@ It is this business of open and closed arguments which really makes agents remar
Suppose a class has a feature declared as shown below.
<code>
my_procedure: PROCEDURE [ANY, TUPLE]
my_procedure: PROCEDURE [TUPLE]
</code>
Then what can be assigned to <code>my_procedure</code>? An agent, of course. Say the class has procedures as follows.
@@ -76,14 +76,14 @@ Then the following assignment is valid.
my_procedure := agent no_argument_procedure
</code>
What this means is that the agent created and associated with the procedure <code>no_argument_procedure</code> must conform to the type <code>PROCEDURE [ANY, TUPLE]</code>. The feature <code>my_procedure</code> (which is of type <code>PROCEDURE [ANY, TUPLE]</code>) can be attached at runtime to an agent representing a procedure with no open arguments, which indeed is what <code>no_argument_procedure</code> is.
What this means is that the agent created and associated with the procedure <code>no_argument_procedure</code> must conform to the type <code>PROCEDURE [TUPLE]</code>. The feature <code>my_procedure</code> (which is of type <code>PROCEDURE [TUPLE]</code>) can be attached at runtime to an agent representing a procedure with no open arguments, which indeed is what <code>no_argument_procedure</code> is.
Now let's turn our attention to the other procedure, <code>two_argument_procedure</code>. You might think that, because it takes two arguments, you would not be able to build an agent from it that could be assigned to the attribute <code>my_procedure</code>. But you can do it by closing the two arguments at the time the agent is created, as in the following.
<code>
my_procedure := agent two_argument_procedure (1, 2) -- Is Valid
</code>
What happens here is that values are fixed for those arguments at the time that the agent, an object of type <code>PROCEDURE [ ANY, TUPLE]</code> is created.
What happens here is that values are fixed for those arguments at the time the agent, an object of type <code>PROCEDURE [TUPLE]</code>, is created.
So this is the wonderful thing about agents. A routine which will be represented as an agent does not have to be an exact fit for the expected signature. By closing some arguments at agent creation, you have effectively produced a new and conforming routine.
@@ -93,14 +93,14 @@ To leave an argument open, you hold its place with a question mark. If you inten
<code>
my_procedure := agent two_argument_procedure (?, 2) -- Argument 1 left open
my_procedure := agent two_argument_procedure (?, ?) -- Both arguments left open
my_procedure := agent two_argument_procedure -- Both arguments left open
my_procedure := agent two_argument_procedure -- Both arguments left open
</code>
If an argument is open, it means that a value is not provided for that argument at the time the agent is created. The implication is that the value must be provided at some time before the agent's associated routine gets executed. A precondition to executing a routine associated with an agent is that the agent has a valid set of arguments (called operands within the <code>ROUTINE</code> classes) for the call. If you were to leave one or both of the arguments to <code>two_argument_procedure</code> open as in the examples above, the assignment would still work due to the rules governing <code>TUPLE</code> conformance. But at runtime, unless the remaining arguments had been provided, the <code>valid_operands</code> precondition would be violated.
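At call time, the open operands are then supplied as a tuple; a minimal sketch (the argument values are only illustrative):
<code>
my_procedure := agent two_argument_procedure (?, 2)
my_procedure.call ([1])
	-- The tuple [1] supplies the single open operand, so `valid_operands' holds
	-- and `two_argument_procedure (1, 2)' is executed.
</code>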
Let's see an example in which we leave a target open. Suppose we have a class that has a feature coded as below:
<code>
my_strings: LINKED_LIST [STRING]
my_strings: LINKED_LIST [STRING]
</code>
and some code to put some strings in <code>my_strings</code>:
@@ -133,7 +133,7 @@ Now suppose we want to print the values of all the strings in <code>my_strings</
The availability of agents gives us new options. <code>LINKED_LIST</code> has a feature <code>do_all</code> which comes to it from its ancestor <code>LINEAR</code>. The <code>do_all</code> feature's signature looks like this:
<code>
do_all (action: PROCEDURE [ANY, TUPLE [G]])
do_all (action: PROCEDURE [TUPLE [G]])
</code>
As an argument, <code>do_all</code> takes an agent based on a procedure with one open argument, whose type is the same as that of the list items (in this class, <code>G</code> is the formal generic parameter representing the type of the items being stored). It then traverses the list, executing the routine associated with that agent and using the current list item to satisfy the open argument.
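For example, printing every string in the list becomes a single call (a sketch assuming the <code>print_on_new_line</code> procedure used further down on this page):
<code>
my_strings.do_all (agent print_on_new_line (?))
	-- Calls `print_on_new_line' once per list item, the open argument
	-- being filled with the current string.
</code>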
@@ -190,8 +190,14 @@ Here again we'll use a feature of the <code>LINKED_LIST</code> class. There is a
my_list.do_if (agent print_on_new_line(?), agent {STRING}.has('!'))
</code>
The agent for the action is the same as we used earlier. We've added an agent for the test. It represents applying the <code>has</code> feature of the <code>STRING</code> class. Here the target is left open, because we want each of the strings in the list to be the target of <code>has</code>.
==Compatibility note==
Versions of the Kernel Library classes <code>ROUTINE</code>, <code>PROCEDURE</code>, <code>FUNCTION</code> and <code>PREDICATE</code> prior to EiffelStudio 17.05 had an extra generic parameter at the initial position; the usual actual generic parameter was <code>ANY</code>. It has been removed. The compiler has been engineered so that in almost all cases it will still accept the old style.
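As an illustration of the change (reflecting the declarations updated throughout this commit):
<code>
-- Pre-17.05 style (still accepted by the compiler in most cases):
my_procedure: PROCEDURE [ANY, TUPLE]
-- Current style:
my_procedure: PROCEDURE [TUPLE]
</code>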