create 20.05 branch

git-svn-id: https://svn.eiffel.com/eiffel-org/trunk@2240 abb3cda0-5349-4a8f-a601-0c33ac3a8c38
This commit is contained in:
jfiat
2020-05-22 15:25:48 +00:00
parent 352128f476
commit 0ef958966d
2943 changed files with 62390 additions and 0 deletions

[[Property:modification_date|Sun, 20 Jan 2019 12:52:42 GMT]]
[[Property:publication_date|Mon, 26 Nov 2018 12:09:44 GMT]]
[[Property:uuid|FF51774B-2EB9-4EDF-8A0C-0F71A96D391F]]
[[Property:weight|0]]
[[Property:title|SCOOP_tutorial]]
[[Property:link_title|SCOOP Tutorial (hands-on): a small concurrent email system]]
This tutorial comes with a full-fledged example: a simple email client system. It is hands-on, inviting you to look at the code of the sequential version of the system, then guiding you step-by-step to make it concurrent, and hence much nicer to use.
The Eiffel compiler helps you throughout: at each step of the transformation from sequential to concurrent, the error messages tell you what remains to be done.
For reading convenience the tutorial is in PDF format. It includes all the links to the code to be downloaded.
Read the tutorial by clicking [https://www.eiffel.org/files/uploads/scoop_tutorial_%282%29.pdf this link].
You should also note the [https://learning.acm.org/webinars/coop ACM webinar on SCOOP] which relies on this example to provide a general introduction to SCOOP.

[[Property:uuid|7A9673B8-2FFC-4C9B-A055-29EB1CDD653F]]
[[Property:weight|10]]
[[Property:title|SCOOP papers]]
[[Property:link_title|papers]]
= Papers related to SCOOP and Concurrency =
== Concurrency patterns in SCOOP (Master's thesis by Roman Schmocker) ==
* Permanent link: [https://doi.org/10.3929/ethz-a-010255889]
* Local file: [[file:eth-46802-01.pdf|document as pdf]].

[[Property:link_title|SCOOP]]
[[Property:title|Concurrent programming with SCOOP]]
[[Property:weight|1]]
[[Property:uuid|5FE312E0-0AC6-465C-AD3B-D5D73AAE566F]]
==Overview==
SCOOP is ''Simple Concurrent Object-Oriented Programming''. SCOOP allows developers to create object-oriented software systems that take advantage of multiple, concurrently active execution engines, while providing strong guarantees that let programmers reason much as they would about sequential programs. Read further to get a better idea of what all this means, but for now, the primary message should be: SCOOP is concurrent software development made easy. The basic SCOOP ideas were first published as early as 1993. Since then, considerable research and development have refined SCOOP into the model that is implemented in EiffelStudio today.
{{Note|As you begin to use SCOOP you should make sure that you correctly set up your project. Information on how to do this can be found at [[Getting Started with SCOOP]]. Also consider compiling and working with some of the many [[SCOOP examples]].}}
==Concurrency==
Concurrency in computation is a situation in which a running computer system has multiple computations executing simultaneously, in a controlled fashion, to achieve the goals of the system. The simultaneous executions can be handled by widely diverse computational engines: separate networked computers, separate processors in the same machine, separate processor cores on a single chip, separate processes on the same CPU, or separate threads within a process.
Concurrent systems would not cause much trouble if the portions of the systems on different processors, processes, or threads were completely independent, that is, they shared no resources. But that would be a rare case indeed. In a concurrent system, simultaneously executing software elements can and do share resources and communicate with each other. This is where the problems can arise; problems in the form of various synchronization issues such as [http://en.wikipedia.org/wiki/Race_condition#Computing race conditions], [http://en.wikipedia.org/wiki/Atomicity_(programming) atomicity] violations, and [http://en.wikipedia.org/wiki/Deadlock deadlocks]. The issues boil down to two essential problems in allowing access to shared resources:
:# '''Provide Safety''': Make certain that nothing bad ever happens, like two threads that access the same memory in no defined order, or an invalid interleaving of operations that causes the program to crash.
:# '''Ensure Progress''': Make certain that every participating thread eventually gets the opportunity to execute. Possible problems in this category are deadlocks, starvation (a thread keeps a lock forever, causing another one to wait), and lack of fairness.
Concurrency control is a rich research area in computer science, and many schemes have been designed to control concurrent computation. SCOOP is one such model; it differs in some notable ways from other research efforts.
First, it is a goal of SCOOP to abstract the notion of concurrency to a level above the tools and techniques that are currently available. If you were writing a system with multiple process threads, you could do that without SCOOP, using the tools commonly used in multi-threaded programming, like semaphores and mutexes; or you could write it using only the SCOOP mechanisms. Likewise, a system intended to run on multiple processors or multiple processor cores can be written using those same SCOOP mechanisms that you used for the multi-threaded system.
Second, the SCOOP model depends primarily upon Design by Contract with slightly changed contract semantics, and a single new keyword <code>separate</code> added to the Eiffel programming language. As you will see, the semantics of preconditions differ with concurrent execution versus sequential. Also, there are other underlying concepts and rules that need to be understood, but the point is that concurrent Eiffel using SCOOP will look a lot like sequential Eiffel.
Third, SCOOP uses the common act of argument passing to identify the necessity for guaranteeing exclusive access.
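As a first taste of this third point, here is a minimal, hypothetical sketch (the class <code>ACCOUNT</code> and its features <code>withdraw</code> and <code>deposit</code> are invented for illustration, not taken from any library): a routine that takes two separate objects as formal arguments and thereby obtains exclusive access to both for the duration of the call.
<code>
transfer (a_source, a_target: separate ACCOUNT; an_amount: INTEGER)
		-- Move `an_amount' from `a_source' to `a_target'.
		-- Because both accounts are formal arguments, SCOOP guarantees
		-- exclusive access to both of them while `transfer' executes.
	do
		a_source.withdraw (an_amount)
		a_target.deposit (an_amount)
	end
</code>
No locks, semaphores, or mutexes appear in the text: listing the accounts as arguments is what requests exclusive access.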
We will examine the details of how all this fits together and what it means to you as you begin to build concurrent software in Eiffel using SCOOP.

[[Property:modification_date|Thu, 13 Feb 2020 16:00:53 GMT]]
[[Property:publication_date|Thu, 13 Feb 2020 16:00:53 GMT]]
[[Property:title|Asynchronous Calls]]
[[Property:weight|6]]
[[Property:uuid|d3d3873c-5c84-7566-547e-1ede38544081]]
==Overview==
As we've seen in [[Separate Calls]], feature calls to a non-separate target are always synchronous.
Furthermore, queries are always synchronous as well, because the caller has to wait for the result.
{| border="1"
|-
! Target
! Query
! Command
|-
| non-separate
| synchronous
| synchronous
|-
| separate
| synchronous
| potentially asynchronous
|}
Asynchronous calls can therefore only happen on commands with a separate target.
Indeed, such calls are by default executed asynchronously, but there are some important exceptions to this rule.
A command to a separate target is executed synchronously if any of the following applies:
* The client (caller) and supplier (target) region are the same.
* The target region is passive.
* The callee needs a lock currently held by the caller (lock passing).
* The caller holds the locks of the callee (separate callbacks).
== Triggers for Synchronization ==
=== Same Regions ===
The first case happens when a reference is declared separate, but happens to be non-separate. This case follows directly from the type system: a non-separate type <code>A</code> always conforms to its variation <code>separate A</code>. At run-time such cases can be detected with an object test:
<code>
sep_object: separate A
	-- ...

if attached {A} sep_object as non_sep_object then
	-- ...
end
</code>
=== Passive Regions ===
In the SCOOP model, a passive region does not have a processor attached to it.
This means that clients of the passive region need to apply features logged against a passive region themselves.
The logical consequence of this is that all calls to a passive region, including commands, are executed synchronously.
=== Lock Passing ===
Lock passing is another source of synchronization. It is one of the trickiest issues to detect, and to fully understand it we must first introduce a few more definitions.
In [[Exclusive Access]] we have learned that an object is ''controlled'' if it appears as a formal argument of the enclosing routine. SCOOP however always grants exclusive access over a whole region. We therefore introduce the new term ''Lock'':
{{definition|Lock|Exclusive access to a SCOOP region and all objects therein.}}
Note the difference between ''controlled'' and ''locked'':
* ''Controlled'' applies to a single object, whereas ''locked'' applies to a region.
* The ''controlled'' property can be determined statically at compile-time, whereas ''locked'' is determined at runtime.
* The set of ''controlled'' objects of a processor is always a subset of the set of objects in ''locked'' regions.
{{note|In terms of implementation, a ''lock'' corresponds to an open call queue to a region.}}
Now consider two small classes, <code>HASH_STORAGE</code> and <code>EXAMPLE</code>:
<code>
class HASH_STORAGE feature
	hash_code: INTEGER

	set_hash_code (a_string: separate STRING)
		do
			hash_code := a_string.hash_code
		end
end

class EXAMPLE feature
	run (a_hash_storage: separate HASH_STORAGE; a_string: separate STRING)
		do
			a_hash_storage.set_hash_code (a_string)
			io.put_integer (a_hash_storage.hash_code)
		end
end
</code>
You might notice a problem here:
In the feature <code>{EXAMPLE}.run</code>, exclusive access to <code>a_hash_storage</code> and <code>a_string</code> is guaranteed by the SCOOP semantics.
Or in other words, the corresponding regions are ''locked''. The feature <code>{HASH_STORAGE}.set_hash_code</code>, however, needs access to <code>a_string</code> as well. In the SCOOP model as seen so far, this would result in a deadlock: the handler of the <code>HASH_STORAGE</code> object waits for exclusive access to the string object, while the handler of the <code>EXAMPLE</code> object waits for the query <code>{HASH_STORAGE}.hash_code</code> to return.
To resolve this problem, SCOOP implements a technique called ''Lock Passing''. Locks on regions can be passed to the handler of the target of a separate call. Lock passing happens whenever the client processor (the handler of the <code>EXAMPLE</code> object) has locked a region that holds an object which is passed as an actual argument to a separate call. Note that this also includes non-separate reference objects, because a processor always holds a lock over its own region.
When a client has passed its locks to the supplier processor, it cannot continue execution until the called feature has been applied by the supplier processor, and the supplier processor has given back the locks to the client. Therefore, this type of call must be synchronous.
{{note|During lock passing, a processor gives away all the locks that it currently holds, including the lock on itself.}}
{{note| Lock passing happens for every synchronous call, in particular also for queries and for calls to passive regions.}}
The advantage of lock passing is that it enables some very common programming patterns without triggering a deadlock. The disadvantage is that it can be hard to tell '''when''' it happens. There are, however, a few cases where lock passing is guaranteed to happen, namely when the actual argument passed to a separate call is
* a formal argument of the enclosing routine,
* of a non-separate reference type or
* <code>Current</code>.
There are, however, some cases where it's not immediately obvious that lock passing happens.
For example, a region might be locked because of a controlled argument somewhere further up in the call stack (i.e. not the enclosing routine, but the caller of that routine), or because an object is passed as an argument which happens to be on the same region as one of the controlled objects.
There is a workaround to disable lock passing for a specific call:
<code>
async_call (a_procedure: separate PROCEDURE [TUPLE])
		-- Apply `a_procedure' without passing any locks.
	do
		a_procedure.call (Void)
	end

example (a_object: separate ANY)
	do
		async_call (agent a_object.some_feature (Current))
	end
</code>
The feature <code>async_call</code> can be defined once somewhere in the project and reused. The downside is that an agent needs to be created; but no lock passing takes place, because all arguments to the agent are closed and only <code>Void</code> is passed to the separate call, which cannot trigger lock passing.
This mechanism should be used with some care, however, because it is easy to run into one of the deadlocks mentioned above.
=== Separate Callbacks ===
The last occurrence of synchronous calls is closely related to lock passing. If a processor '''A''' has passed a non-separate reference argument to another processor '''B''', and thus has passed its locks away, it cannot proceed with its execution. Sometimes, however, processor '''B''' has to log some calls back to '''A'''; such a call is known as a ''separate callback''.
{{definition|Separate Callback | A separate call where the caller holds the locks of the callee. }}
During a separate callback processor '''B''' has to give back the locks it has previously received from '''A'''.
This in turn means '''B''' has to wait until '''A''' has finished its execution of the separate callback and returned the locks, which effectively makes the call synchronous.
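As a hedged illustration (the class and feature names here are invented for the sketch), a separate callback can arise when a client passes <code>Current</code> to a separate call and the supplier calls back on that argument:
<code>
class CLIENT feature
	start (a_worker: separate WORKER)
		do
				-- Passing `Current' passes our locks to the handler of `a_worker'.
			a_worker.do_work (Current)
		end

	notify
		do
			io.put_string ("done%N")
		end
end

class WORKER feature
	do_work (a_client: separate CLIENT)
		do
				-- Separate callback: we (the caller) hold the locks of
				-- `a_client''s handler, so this call is synchronous and
				-- the locks travel back for its duration.
			a_client.notify
		end
end
</code>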

[[Property:title|Design by Contract in SCOOP]]
[[Property:link_title|Design by Contract]]
[[Property:weight|5]]
[[Property:uuid|f8df5904-d1ee-31d7-f618-fb8bf1ddc876]]
The backbone of the Eiffel Method is design by contract. Preconditions, postconditions, and class invariants are used in Eiffel for extending software interfaces into software specification. This is essentially the same in concurrent Eiffel with SCOOP as it is in traditional, sequential Eiffel. However, because of the concurrent nature of processing under SCOOP, the runtime semantics of the elements of Design by Contract are different for concurrent systems.
==Preconditions==
The role of the precondition is somewhat different in SCOOP than in sequential Eiffel. In non-concurrent Eiffel we view the precondition of a routine as defining a set of obligations on potential callers of the routine. That is, the set of conditions that must be true before correct execution of the routine can be expected. So, we could look at the precondition clauses in sequential Eiffel as '''correctness conditions'''. A typical example might be a square root routine that returns the square root of a passed argument value. A precondition clause, i.e., a correctness condition, for this routine will be that the argument must be non-negative. It is the responsibility of the caller to ensure that this property of the argument holds at the time of the feature call.
In concurrent Eiffel, the same correctness conditions are still valid, but there are cases in which we must view the client's role a little differently.
In the case of a precondition clause that depends on a separate object, even if the client tests the condition ahead of the call, there is no assurance that some other concurrent processor has not invalidated the precondition clause between the time that the check was made and the time that the feature application takes place.
In SCOOP preconditions can therefore additionally take the role of a '''wait condition'''.
Wait conditions are useful for cases where the caller can't guarantee that a property on an object is true at the time of the call, but it knows that it will eventually become true.
If a wait condition fails, the current processor will stall its execution, release the locks on its arguments, and wait until the precondition is fulfilled.
A typical example is a <code>CONSUMER</code> object trying to dequeue an item from a shared <code>BUFFER</code>.
In the following example, the precondition in <code>{CONSUMER}.consume</code> is treated as a wait condition:
<code>
class CONSUMER feature
	consume (a_buffer: separate BUFFER): INTEGER
		require
			not_empty: not a_buffer.is_empty
		do
			Result := a_buffer.item
			a_buffer.remove
		end
end
</code>
A precondition clause is only treated as a wait condition when it contains a separate call.
The opposite is not true, however: not all precondition clauses with a separate call are treated as wait conditions.
The rule is a bit tricky to understand, but as a general rule of thumb, a precondition violation is raised when the SCOOP runtime detects that there is no possibility of the precondition clause becoming true in the future, even if the routine releases its exclusive access.
{{Info| Alright, here are the exact rules: The decision depends on the context of the '''caller''' of the routine.
If one of the separate objects used as a target in a precondition clause is ''locked'' (see definition in [[Asynchronous Calls]]) in the context of the caller, the precondition is treated as a correctness condition. Otherwise, it is a wait condition.}}
{{SeeAlso|The <code>{PRODUCER}.store</code> feature in the [[Producer-consumer|producer-consumer]] example. When called by <code>{PRODUCER}.produce</code> it becomes a '''wait condition'''.}}
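A plausible shape for such a producer feature is sketched below. This is only a sketch mirroring the consumer above; the features <code>is_full</code> and <code>put</code> of <code>BUFFER</code> are assumptions, and the actual example code may differ.
<code>
class PRODUCER feature
	store (a_buffer: separate BUFFER; an_item: INTEGER)
			-- Store `an_item' once there is room in `a_buffer'.
		require
			not_full: not a_buffer.is_full
		do
			a_buffer.put (an_item)
		end
end
</code>
When the buffer is full and its region is not already locked by the caller's context, the producer's handler waits on <code>not_full</code> rather than raising a precondition violation.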
==Postconditions==
As with preconditions the effect of concurrent execution can make a difference in how postconditions are viewed.
If a routine has executed correctly, then the postcondition of the routine will hold at the time that it terminates. This is true whether or not concurrency is involved. However, when a postcondition involves separate calls, clients must be cautious about how they depend upon the state guaranteed by postconditions.
==Class invariants==
The '''separate argument''' rule in [[Separate Calls]] tells us that a separate call is valid only on a target which is a formal argument of the enclosing routine. Because class invariants are not routines and therefore have no arguments, separate calls are not allowed in class invariants.
{{Info|Technically, it should be possible in a class invariant to code an inline agent that gets passed arguments of separate types, then execute separate calls within the inline agent. But generally, it can be assumed that class invariants contain no separate calls. }}
The semantics of class invariants are the same as in sequential Eiffel, precisely because invariants must include only non-separate calls. To put it in SCOOP terms, the class invariant ensuring the validity of any particular object will be evaluated entirely by the processor handling that object.

[[Property:title|Baboon crossing]]
[[Property:weight|-10]]
[[Property:uuid|8b48b7bb-baa8-41a2-2d93-1d9667b05323]]
=Description=
In the baboon crossing problem, a number of baboons are located on the edges of a deep canyon. There are baboons on both sides and, as you have probably guessed, some of the baboons on the left side want to get to the right side, and vice versa. Fortunately, a large rope has been stretched across the abyss allowing baboons to cross by brachiation: hanging from the rope and swinging hand-over-hand to the other side.
The baboon crossing policy must be compatible with the constraints of the situation:
# If two baboons meet in the middle of the rope, then all activity stops, and no other baboons can cross (deadlock). So, at any given time, all the baboons on the rope must be going the same direction.
# The rope can hold only a certain number of baboons at a time. A baboon cannot be allowed on the rope if the rope is already at full capacity.
# A continuous stream of baboons from one direction could prevent baboons wanting to go the opposite direction from ever being able to cross (unfairness, starvation).
=Highlights=
The rope is modeled in this example by class <code>ROPE</code>, and is the primary shared resource. The fact that, at any given moment, the rope is considered unavailable to baboons wanting to cross the canyon in the direction opposite the current flow of baboons adds an interesting twist ... as does the maximum limit of baboon flesh that the rope can support. The <code>ROPE</code> class has features to manage its state:
: <code>capacity</code> is an integer representing the maximum number of baboons that can be supported at one time.
: <code>baboons</code> is an integer which is the number of baboons currently traversing the rope.
: <code>direction</code> is the current direction of baboon flow, represented as a boolean.
: <code>changing</code> is an indicator of whether the direction of the rope is currently in the process of changing.
: <code>is_secure</code> is a boolean indicating whether the rope is in such a state that a baboon may be allowed to mount the rope.
<code>ROPE</code> also includes procedures <code>mount</code> and <code>dismount</code> which a baboon uses to get on and off the rope.
There are two more interesting features of <code>ROPE</code>, <code>directions</code> and <code>announce</code>, which will be discussed below.
The baboons (modeled by class <code>BABOON</code>), as they are created, are determined at random to want to go either left or right. As each baboon is created by the root class of the example, it is launched into life. For the purposes of this example, the baboon life is short and consists of four steps:
: 1) Announcing one's desired direction to the rope. Doing this involves the feature <code>{ROPE}.announce</code> that was mentioned above. The rope keeps a queue of these desired directions as announced by the baboons in its <code>directions</code> feature. It is by keeping this queue that the rope can make its decisions to change the direction of the flow of baboons.
: 2) Mounting the rope. The baboon calls its own <code>mount</code> feature (which in turn calls <code>{ROPE}.mount</code>). The baboon's <code>mount</code> procedure includes two wait conditions: one makes sure the rope is safe (<code>{ROPE}.is_secure</code>), and the second makes sure that the current direction of flow is the same as that of the baboon wanting to mount. When these conditions are met, the baboon is able to grab a spot on the rope and start to cross the canyon.
: 3) Traversing the canyon. Once on the rope, the baboon takes a short time to make it to the other side.
: 4) Dismounting the rope. After crossing the canyon, the baboon gets off the rope ... and dies immediately after attaining what was apparently its only goal in life.
Although it was mentioned above, it bears repeating that the rope is not just a shared resource, but an active and important player in this example. Indeed, it is the rope that controls the direction of the flow of baboons across the canyon, and ensures fairness.
When a baboon announces himself as a candidate for crossing the canyon in a particular direction, the rope queues this information. When the rope allows a baboon to mount and cross, the desired direction of the next baboon in the directions queue is queried. If the next baboon wants to go in a different direction, then the state of the rope is set to "changing", and no more baboons are allowed on the rope until the current stream finishes crossing. When the last baboon of that stream dismounts the rope, the direction of flow is changed and the "changing" state is cleared, allowing baboons wanting to cross in the opposite direction to have a go.
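The two wait conditions on mounting described above might be sketched as follows. This is only a sketch built from the feature names given in the description (<code>is_secure</code>, <code>direction</code>, <code>mount</code>); the actual example code may differ, and the assumption that <code>BABOON</code> has its own <code>direction</code> attribute is ours.
<code>
mount (a_rope: separate ROPE)
		-- Get on `a_rope' once it is secure and flowing in our direction.
	require
		rope_is_secure: a_rope.is_secure
		same_direction: a_rope.direction = direction
	do
		a_rope.mount
	end
</code>
Because both precondition clauses target the separate argument <code>a_rope</code>, they act as wait conditions: the baboon's handler stalls until the rope is secure and the flow matches.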

[[Property:title|Barbershop]]
[[Property:weight|-11]]
[[Property:uuid|3a6f929f-17a2-c9d8-1b88-0aad83db4160]]
=Description=
The [http://en.wikipedia.org/wiki/Sleeping_barber_problem barbershop], sometimes called ''the sleeping barber'', models a barbershop with one barber, one barber's chair, and a waiting room with several chairs. The alternative name of the problem comes from the fact that if there is no one waiting for a haircut, the barber flops in his chair and falls asleep.
Also involved are a number of shaggy-haired customers. A customer entering the barbershop looks around and assesses the situation. If all the waiting room chairs are occupied, the customer leaves the shop to return again at a time when hopefully the queue will be shorter. If there is an open chair the customer sits down and waits for the barber.
Once the barber has finished cutting a customer's hair, the customer leaves the shop, which allows another customer to enter the shop (if all the chairs had been occupied) and the next customer in the queue (if there is one) to get a haircut.
In this example, even after getting their hair cut, the customers come back to the shop until they have had their hair cut some prescribed number of times.
=Highlights=
The example contains classes that model the barber (<code>BARBER</code>), the customers (<code>CUSTOMER</code>), and the shop with its waiting area (<code>SHOP</code>). The root class creates as separate objects the barber, the shop, and the set of customers. As each customer is created, it is launched on its lifecycle of getting haircut after haircut until haircuts are no longer needed.
The <code>SHOP</code> includes features <code>enter</code> and <code>leave</code>. Customers call the <code>enter</code> feature to find out if there is a chair available in the shop. Customers call <code>leave</code>, after their hair has been cut. Both of these calls are "wrapped" as separate calls in the class <code>CUSTOMER</code>.
A typical customer lives in this way: As long as he still needs haircuts, he repeatedly does the following steps: He tries to enter the shop. If he's unsuccessful because all the chairs are occupied, he goes off for a while (in the implementation, his processor sleeps and then comes to the end of this step). If he is able to enter the shop, then he puts himself in the queue for a haircut. Once his haircut is complete, he reduces his number of haircuts needed by one, and leaves the shop.
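The customer's life cycle described above might be sketched like this. All names below (<code>try_to_enter</code>, <code>get_haircut</code>, <code>leave_shop</code>, <code>wander</code>, <code>haircuts_needed</code>) are hypothetical; the real example wraps the separate calls differently.
<code>
live
		-- Get haircuts until no more are needed.
	do
		from until haircuts_needed = 0 loop
			if try_to_enter (shop) then
					-- A chair was free: queue up and get a haircut.
				get_haircut (shop)
				haircuts_needed := haircuts_needed - 1
				leave_shop (shop)
			else
					-- All chairs occupied: go away and try again later.
				wander
			end
		end
	end
</code>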

[[Property:title|Counter]]
[[Property:weight|-14]]
[[Property:uuid|ef5e7a86-3c2d-6a55-07b5-395f30bf8f96]]
=Description=
Unlike many of the other examples, this one is not really a problem to be solved. Rather, the Counter example uses multiple instances of the simple class <code>COUNTER</code> to explore various concurrent scenarios. Each instance of <code>COUNTER</code> has a unique identifier, a current value, and a speed. A counter's speed is the time that it takes to perform a single <code>increment</code>. You will see that some of the tests start multiple counters at different speeds. Class <code>COUNTER</code> has a procedure <code>run</code> which takes an integer as an argument and increments the counter that many times. The example's root class contains the code to create counter instances and run the various tests.
=Highlights=
There are six tests that can be called from the root procedure:
<code>
make
		-- Test counters.
	do
		print ("Counter application%N")
			-- Leave one of the following lines uncommented to perform testing.
		test_1 -- start just one counter
		-- test_2 -- start two counters
		-- test_3 -- start many counters
		-- test_4 -- start counter_1 twice
		-- test_5 -- start counter_1 with precondition
		-- test_6 -- start counter_1 separately and counter_2 non-separately
	end
</code>
You can uncomment the test that you want to run and leave the rest commented. When you run the example, you can watch the output in the console window as the test progresses.
Have a look at the source code for each test before you run it, so that you can reconcile the output you see with your expectation of the execution.

[[Property:title|Dining philosophers]]
[[Property:weight|-12]]
[[Property:uuid|569f012e-7913-fbdf-7ad7-cd17d82e64aa]]
=Description=
In the [http://en.wikipedia.org/wiki/Dining_philosophers_problem dining philosophers] problem, a number of philosophers (five, in our example) are seated at a round table. On the table are five plates of food, one in front of each philosopher, and five forks, one between each adjacent pair of plates. So each philosopher has a plate in front of him, a fork to his left, and a fork to his right.
The philosophers spend all their time in either of only two states: they are thinking or they are eating. The philosophers may be mental giants, but apparently manual dexterity is not their strong suit. This is evidenced by the fact that in order to eat, any philosopher must be able to pick up both of the forks positioned next to his plate (which he can do, so long as neither of the philosophers next to him is currently eating). So, while eating he must have possession of both forks, and while thinking, he has put down any forks that he had previously used. Therefore, any particular philosopher has the opportunity to eat only when the two philosophers on either side of him are thinking and have made their forks available.
Apart from any negative consequences from the questionable sanitary practices described above, the dining philosophers can, in improperly designed solutions, encounter problems related to concurrency. For example, if all philosophers were to pick up the fork to their right and then wait for the fork to their left to become available (or vice versa), the philosophers would be caught in a [http://en.wikipedia.org/wiki/Deadlock deadlock]. If, because of a lack of fairness in the solution, some of the philosophers get stuck in thinking mode because they can never secure the two forks necessary to eat, then those philosophers so affected would suffer a condition known as [http://en.wikipedia.org/wiki/Resource_starvation resource starvation].
=Highlights=
This example includes three classes relevant to the problem: <code>DINING_PHILOSOPHERS</code>, <code>PHILOSOPHER</code>, and <code>FORK</code>. Class <code>DINING_PHILOSOPHERS</code> sets the problem in motion by creating the forks and philosophers all typed as <code>separate</code>, and then applying the feature <code>live</code> to each philosopher after creation.
Class <code>PHILOSOPHER</code> models the philosophers. The totality of a philosopher's exciting activities is modeled by the feature <code>step</code>:
<code>
step
		-- Perform tasks.
	do
		think
		eat (left_fork, right_fork)
	end
</code>
This feature is called by <code>{PHILOSOPHER}.live</code> repeatedly until the philosopher has eaten a prescribed number of times.
The feature <code>think</code> requires no access to shared objects, but the feature <code>eat</code> depends upon the philosopher's ability to secure access to both of the forks adjacent to his plate. Because all forks are separate objects, each call to <code>eat</code> waits until the processors for both the left and right forks are available (in accordance with the [[Concurrent programming with SCOOP#Access to shared resources|Wait rule]]).
The text of <code>{PHILOSOPHER}.eat</code> is also interesting in its own right. If you look at this feature
<code>
eat (l, r: separate FORK)
		-- Eat, having grabbed `l' and `r'.
	do
		io.put_string ("Philosopher " + id.out + ": taking forks%N")
		times_eaten := times_eaten + 1
		io.put_string ("Philosopher " + id.out + ": eating%N")
		io.put_string ("Philosopher " + id.out + ": putting forks back%N")
	end
</code>
and you're not wearing your SCOOP glasses, this could look a little odd to you. Here is a routine that takes two arguments <code>l</code> and <code>r</code> representing the left and right forks. But then <code>l</code> and <code>r</code> are never used in the body of the routine!
However, with SCOOP in mind, we realize that the fork objects are shared resources to which exclusive access must be secured before a philosopher can eat. In this example the fork objects themselves don't really do anything except serve that purpose. (Take a look at the <code>FORK</code> class, and you'll see that it has no features.)
In real world concurrency problems, it is likely that shared resources would play a more active role than the forks of the dining philosophers, but here it's just not necessary.

[[Property:title|Dining savages]]
[[Property:weight|-6]]
[[Property:uuid|ecd618f3-14f4-1a06-7f9c-be57623a9889]]
=Description=
The problem of the dining savages (an allusion to the classic dining philosophers) is based on the arguably tasteless analogy of a number of members of a primitive culture, hereinafter called the "savages", sharing a meal from a single pot. The primary abstractions are the savages themselves, a cook, and the pot. The pot contains a certain number of servings of savage sustenance (the nature of which will be left to your imagination). Each of the savages can freely remove a serving from the pot so long as the pot is not empty. So before taking a serving, a savage must check to make sure the pot is not empty. If the pot is empty, the savage must wake up the cook to refill the pot, after which the feast continues. The savages, then, can eat only when the pot is not empty, and the cook can fill the pot only when the pot is empty.
=Highlights=
The primary shared resource here is the pot, represented by class <code>POT</code>, which is accessed for different purposes by both the savages and by the cook. <code>POT</code> has queries <code>is_empty</code> and <code>is_full</code> that can be used by savages (modeled by class <code>SAVAGE</code>) and the cook (class <code>COOK</code>). <code>POT</code> also has a feature to allow refilling of the pot. This feature is exported only to <code>COOK</code>. Another feature, this one exported only to <code>SAVAGE</code>, allows the removal of a serving from the pot.
The cook can also be viewed as a resource shared among all the savages. Whenever a savage executes the feature that checks the pot, he must have exclusive access to both the pot and the cook. If the pot is empty then the savage uses his access to the cook to request a refill. If the pot is not empty, then the savage exits the routine, and goes on to execute the routine that removes a serving from the pot.
In the root class, you can adjust the number of savages, the size (in servings) of the pot, and how hungry the savages are. The hunger index indicates how many times a savage will take a serving from the pot and eat it before becoming sated. So if the pot holds 20 servings and there are 5 savages with hunger index of 4, then the pot will become empty just as the last savage takes his last serving, meaning that the pot will not require refilling. In the same scenario, if the hunger index were 10, then 50 servings total would be required, resulting in the need for the cook to be notified to refill the pot 2 times ... and 10 servings leftover ... presumably for tomorrow's breakfast.
The root class creates the pot, then the cook, then some number of savages. As the savages are created, their lives are launched. To occupy themselves, they repeatedly check the pot, take a serving, and eat. They give this all up once they have eaten the number of servings prescribed by their hunger index. During the check of the pot, if the pot is empty, a separate call is made to the cook requesting that the pot be refilled, and the savage goes on about the business of removing a serving from the pot. It is possible that when the savage then tries to get a serving from the pot, the pot will still be empty. In this case the precondition on <code>get_serving_from_pot</code> will cause the savage to wait until such time as the pot is no longer empty.
Whenever the cook is requested to refill the pot, the <code>{COOK}.cook</code> procedure is called. The procedure takes as an argument the pot, which of course is declared as separate. So access to the pot must be guaranteed before <code>cook</code> can execute. The <code>cook</code> procedure has a precondition that causes it to wait if the pot is not currently empty.
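Under stated assumptions (the queries <code>is_empty</code> and <code>is_full</code> come from the description above; the feature name <code>fill</code> is illustrative), the cook's routine might be sketched as:
<code>
cook (a_pot: separate POT)
		-- Refill `a_pot' once it is empty.
	require
		pot_empty: a_pot.is_empty
	do
		a_pot.fill
	ensure
		pot_full: a_pot.is_full
	end
</code>
The uncontrolled precondition <code>a_pot.is_empty</code> acts as a wait condition: the cook's processor waits until the savages have emptied the pot before refilling it.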

[[Property:title|Faneuil Hall]]
[[Property:weight|-5]]
[[Property:uuid|93132084-5eb9-c7d9-d58c-7b5c3f7508f8]]
=Description=
The Faneuil Hall example is one of several examples that come to us from Allen Downey's book ''[http://greenteapress.com/semaphores/ The Little Book of Semaphores]''. Downey credits Grant Hutchins as the originator of the example. [http://en.wikipedia.org/wiki/Faneuil_Hall Faneuil Hall] itself is a historic building in Boston which dates from 1742 and has served as a public meeting place and marketplace.
The scenario in the Faneuil Hall example involves a number of immigrants waiting to have their naturalizations confirmed by a judge and to receive their certificates of citizenship. Immigrants entering the Hall wait in line to check in, then they wait to take their oaths and receive their certificates of citizenship. Meanwhile, a number of spectators can also enter the building. Once the judge enters the Hall, no one else may enter. Spectators may leave, but immigrants may not. Once the immigrants in the Hall have checked in, their naturalization can be confirmed by the judge. Once confirmed, the immigrants can pick up their certificates. At some point after the confirmation, the judge leaves the Hall. At that point, spectators can enter again, and immigrants can leave as soon as they have picked up their certificates. The judge will make successive trips into the Hall until all the immigrants expected during the day have been confirmed.
=Highlights=
The primary actors here are the immigrants, the judge, and the spectators, modeled by classes <code>IMMIGRANT</code>, <code>JUDGE</code>, and <code>SPECTATOR</code>, respectively. In addition to the actor classes, there is a class <code>HALL</code> that represents Faneuil Hall itself, and a root class that sets everything up and starts the processing. There is only one judge. But there can be many immigrants and spectators. Their numbers are limited to certain maximums specified by constants in the root class. The specific number of immigrants and spectators varies at random from one execution to the next. You can experiment with larger or smaller maximum numbers for immigrants and spectators by changing the values for the constants <code>{FANEUIL_HALL}.Max_immigrants</code> and <code>{FANEUIL_HALL}.Max_spectators</code>.
Although not really considered an actor here, the class <code>HALL</code> plays a critical role in synchronizing the concurrent actions of the immigrants, spectators, and the judge. <code>HALL</code> includes many status queries which, when used in preconditions of features of the other actors, constitute [[Concurrent programming with SCOOP#Preconditions|uncontrolled precondition clauses]] which, when false, will cause the calling processor to wait until the condition becomes true. For example, consider the following status query from class <code>HALL</code>:
<code>
immigrants_ready: BOOLEAN
-- Are immigrants ready?
do
Result := present_immigrant_count = ready_immigrant_count
end
</code>
This query is used by the <code>JUDGE</code> when preparing to sit and administer oaths to the immigrants:
<code>
take_place (a_hall: separate HALL)
-- Prepare to confirm.
require
immigrants_ready: a_hall.immigrants_ready
do
print (out + " taking place%N")
a_hall.sit_judge
end
</code>
The judge will take his place only when all the immigrants present have checked in and are ready to take the oath.
Another thing to note about this example is that immigrants and spectators obey slightly different rules when coming and going in the hall. Neither immigrants nor spectators may enter the hall if the judge is in the hall. Immigrants may not leave until the judge has left, but spectators may leave at any time. So when you compare the <code>leave</code> features of the two classes, you'll see a precondition that serves as a wait condition on <code>{IMMIGRANT}.leave</code> that is not present on <code>{SPECTATOR}.leave</code>.
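A sketch of the two <code>leave</code> features illustrates the difference. The names used here (<code>judge_present</code>, <code>immigrant_leaves</code>, <code>spectator_leaves</code>) are assumptions for illustration and need not match the distributed example exactly:
<code>
-- In class IMMIGRANT:
leave (a_hall: separate HALL)
		-- Leave the hall, but only after the judge has left.
	require
		judge_gone: not a_hall.judge_present
	do
		a_hall.immigrant_leaves
	end

-- In class SPECTATOR:
leave (a_hall: separate HALL)
		-- Leave the hall; possible at any time.
	do
		a_hall.spectator_leaves
	end
</code>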

[[Property:title|SCOOP examples]]
[[Property:weight|8]]
[[Property:uuid|75ddd9e0-3baf-655a-748f-ea8765a1d06d]]
The examples for SCOOP that are distributed with EiffelStudio are solutions to classic and not-so-classic concurrency problems.

[[Property:title|Observer pattern]]
[[Property:weight|-4]]
[[Property:uuid|72c53c25-6fa5-6787-0762-cfa3d1c814c5]]
=Description=
The Observer pattern example should be considered a work in progress. During the development of SCOOP for EiffelStudio, Eiffel Software engineers began to think in terms of the impact that SCOOP might have on our own software. One area that emerged was the parsing of Eiffel software text during compilation. You know that Eiffel systems are composed of modules called classes. In a non-concurrent compilation process, the classes are parsed one after another. However, there is no reason why parsing could not take place on multiple, concurrently executing SCOOP processors.
You may remember seeing, as you compile an Eiffel system, the different degrees of compilation counting down. Degree 5 is a phase of compilation that deals with parsing classes and creating an abstract syntax tree. The Observer pattern example tries to imagine concurrent Degree 5 parsing in the presence of SCOOP.
{{note|You should understand that the example doesn't really parse any Eiffel code or, for that matter, involve any real code files. Rather, it just tries to show what the structure of such a concurrent parser might look like, and the parsing step just involves a short wait to simulate the time that parsing would take.}}
=Highlights=
The name of this example is Observer pattern, but it's not a classic example of the [http://en.wikipedia.org/wiki/Observer_pattern Observer design pattern] as commonly known. But it does have elements of the observer pattern, as you will see below.
The important classes here are <code>DEGREE_5</code>, <code>EIFFEL_PARSER_POOL</code>, and <code>EIFFEL_PARSER</code>. <code>DEGREE_5</code> represents Eiffel compilation degree five, parsing of classes. In the example, <code>DEGREE_5</code> uses an instance of <code>EIFFEL_PARSER_POOL</code> to manage a pool of instances of <code>EIFFEL_PARSER</code> which actually do the (simulated) parsing. The <code>EIFFEL_PARSER</code>s are declared <code>separate</code> so that they can work concurrently, parsing different files.
When <code>DEGREE_5</code> creates the <code>EIFFEL_PARSER_POOL</code>, it provides a maximum number of parsers to be held in the pool and a function agent which the pool can use to create a new parser instance. Then when <code>DEGREE_5</code> asks the pool to parse a file, it provides references to the file itself and two procedure agents: one for pre-parse processing and one for post-parse processing.
The pre-parse processing agent is associated with a routine that is used to set up a parser before asking it to parse a file.
When an <code>EIFFEL_PARSER</code> finishes with a file, it calls the agent for post-parse processing. In this way, it notifies the instance of <code>DEGREE_5</code> that it is done with that file.
So, it is here that elements of the observer pattern become evident, just in a slightly atypical way. In more typical observer pattern examples, there is one observed object and a set of one or more observers. But here, there is one observer, the instance of <code>DEGREE_5</code>, and many observable objects, the parsers. When parsers complete their work, they notify their observer (the <code>DEGREE_5</code>) that they are done by executing the routine associated with the post-parse agent. So there's another difference, too. Instead of making calls directly to the observer, the observed objects apply the agents that have been provided by the observer.
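The overall flow might be sketched like this. All feature, agent, and class names below (<code>class_files</code>, <code>set_up_parser</code>, <code>on_file_parsed</code>, <code>CLASS_FILE</code>) are assumptions chosen for illustration, not the exact names from the example:
<code>
-- In class DEGREE_5:
process (a_pool: separate EIFFEL_PARSER_POOL)
		-- Hand every class file to the parser pool, together with
		-- a pre-parse and a post-parse agent.
	do
		across class_files as f loop
			a_pool.parse (f.item, agent set_up_parser, agent on_file_parsed)
		end
	end

on_file_parsed (a_file: separate CLASS_FILE)
		-- Callback applied by a parser (via the post-parse agent)
		-- when it has finished with `a_file'.
	do
		parsed_count := parsed_count + 1
	end
</code>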

[[Property:title|Producer-consumer]]
[[Property:weight|-13]]
[[Property:uuid|03739be2-e0d5-f5f0-b405-0bb75c8fee0f]]
=Description=
The [http://en.wikipedia.org/wiki/Producer-consumer_problem producer-consumer] problem is a classic software concurrency problem. The problem features one or more "producers" and one or more "consumers". All producers and consumers must share access to a "buffer" into which producers insert the products they produce, and from which consumers take the products they consume. The shared buffer is "bounded", that is, it has a maximum capacity.
So at any time, the buffer could be empty, precluding any consumer from withdrawing a product. Or the buffer could be full, which would mean that no producer could produce a new product until a consumer had first consumed a product, making space in the buffer. To avoid concurrency related problems, producers and consumers can access the buffer only at times when no other producer or consumer is accessing it, and only when it is in the proper state for the particular type requesting access (i. e., not empty for consumers and not full for producers).
=Highlights=
The root class of the example creates the bounded product buffer and a number of producers and consumers, all given <code>separate</code> types. It requests the producers to create a number of products, and the consumers, in the aggregate, to consume that same number of products.
==Separate argument rule==
Notice that the root class uses a feature <code>launch_producer</code> (and a corresponding feature <code>launch_consumer</code>) for instructing the producers and consumers on how many products to handle. <code>launch_producer</code> looks like this:
<code>
launch_producer (a_producer: separate PRODUCER)
-- Launch `a_producer'.
do
a_producer.produce (900)
end
</code>
It might occur to you that it would be easier, simpler, and clearer just to include this feature's single procedural line:
<code>
a_producer.produce (900)
</code>
in place of the call to <code>launch_producer</code>, and dispense with the <code>launch_producer</code> feature entirely. But that is not possible in this case.
The reason is that <code>a_producer.produce (900)</code> is a [[Concurrent programming with SCOOP#Separate types and separate calls|separate call]] (i.e., the target <code>a_producer</code> is declared of a separate type), and according to the [[Concurrent programming with SCOOP#Access to shared resources|separate argument rule]], a call on a separate object is valid only when its target is a formal argument of the enclosing routine.
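A minimal contrast may make the rule concrete. Here <code>producer</code> stands for a hypothetical <code>separate</code> attribute of the root class:
<code>
-- Invalid: `producer' is an uncontrolled separate attribute,
-- so a direct call on it is rejected by the compiler:
--     producer.produce (900)

-- Valid: inside `launch_producer' the target is a formal
-- argument of the enclosing routine, so the call is controlled:
launch_producer (producer)
</code>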
==Wait condition==
This example also shows an [[Concurrent programming with SCOOP#Preconditions|uncontrolled precondition]] serving as a "wait condition". In the class <code>PRODUCER</code> we see the buffer declared as a class attribute with a <code>separate</code> type:
<code>
buffer: separate BOUNDED_BUFFER [INTEGER]
-- Shared product buffer.
</code>
The feature <code>store</code> contains a precondition which ensures that the shared buffer is not full when <code>store</code> gets applied:
<code>
store (a_buffer: separate BOUNDED_BUFFER [INTEGER]; an_element: INTEGER)
-- Store `an_element' into `a_buffer'.
require
not a_buffer.is_full
do
a_buffer.put (an_element)
ensure
not a_buffer.is_empty
a_buffer.count = old a_buffer.count + 1
end
</code>
The <code>store</code> routine is called by the routine <code>produce</code>, passing a reference to the <code>separate</code> attribute <code>buffer</code> like this:
<code>
store (buffer, l_element)
</code>
Because <code>buffer</code> is considered uncontrolled in the context of <code>produce</code>, the precondition of <code>store</code> becomes a wait condition rather than a correctness condition. This means that if the buffer is full, the application of the feature <code>store</code> will wait until some consumer removes a product from the buffer. The removal of a product makes the precondition hold again, and the application of <code>store</code> can proceed.
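The consumer side is symmetric. Under stated assumptions (the buffer features <code>item</code> and <code>remove</code> and the attribute <code>last_element</code> are illustrative names), it might look like:
<code>
consume (a_buffer: separate BOUNDED_BUFFER [INTEGER])
		-- Remove one element from `a_buffer' and keep it.
	require
		not a_buffer.is_empty
	do
		last_element := a_buffer.item
		a_buffer.remove
	ensure
		a_buffer.count = old a_buffer.count - 1
	end
</code>
Here the wait condition is <code>not a_buffer.is_empty</code>: a consumer blocks on an empty buffer just as a producer blocks on a full one.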

[[Property:title|Quicksort]]
[[Property:weight|-7]]
[[Property:uuid|61632685-20ad-293b-44b6-907b15d0447a]]
=Description=
The quicksort example is a concurrent implementation of the well-known [http://en.wikipedia.org/wiki/Quicksort quicksort] sorting algorithm developed by computer scientist [http://en.wikipedia.org/wiki/C._A._R._Hoare C. A. R. Hoare]. Quicksort uses a "divide and conquer" strategy to sort a structure. It applies a basic algorithm to the structure which leads to a division of the elements into two substructures. Then it applies the same algorithm to each of the substructures, and so on, until the whole structure is sorted. Because of the repetitive application of the same algorithm to evolving parts of the structure, the quicksort is often used in computer science classes to provide students with experience in [http://en.wikipedia.org/wiki/Recursion_(computer_science) recursive] computation.
In the SCOOP example, instead of recursive calls, substructures are handled (within limits) by separate [[Concurrent programming with SCOOP|SCOOP processors]] running concurrently.
=Highlights=
The quicksort example sorts a randomly generated container of integers. The set-up for this example is done in the root class. It is interactive in the sense that when you run the example, you get to choose how many elements will be sorted (within certain limits) and you get to provide a seed for the random number generator which will be used to produce the unsorted structure.
The quicksort algorithm is embodied in the class <code>QUICKSORTER</code>, primarily in its routine <code>sort</code>. Instances of <code>QUICKSORTER</code> declared as <code>separate</code> are spawned to sort substructures as the algorithm progresses.
The structures acted upon by <code>QUICKSORTER</code> are managed in instances of class <code>DATA</code>. <code>DATA</code> is a class designed specifically to support the quicksort example.
When the example runs, separate <code>QUICKSORTER</code> instances are used for the recursive sorts up until a certain depth of recursion is reached. The limit is defined by the <code>NATURAL</code> constant <code>{QUICKSORTER}.max_recursion_depth</code>.
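The depth test might be sketched as follows. This is a simplification for illustration: the argument <code>depth</code> and the branch contents are assumptions, not the actual routine text:
<code>
sort (a_data: separate DATA; depth: NATURAL)
		-- Sort the substructure held in `a_data'.
	do
		if depth < max_recursion_depth then
				-- Partition, then hand each of the two substructures
				-- to a fresh separate QUICKSORTER, so that the two
				-- halves are sorted concurrently.
		else
				-- Too deep: sort the substructure sequentially
				-- on the current processor instead.
		end
	end
</code>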

[[Property:title|Search-insert-delete]]
[[Property:weight|-9]]
[[Property:uuid|2c09ce66-f3be-1e31-eac8-f06ad3d6fc3a]]
=Description=
The Search-insert-delete example involves a shared data structure that is being accessed concurrently by three types of independent actors. ''Searchers'' access the list without changing it, so any number of concurrent searchers can be accessing the structure safely. ''Inserters'' have the ability to add new elements to the end of the structure. Only one inserter can access the structure at any given time, but it can work concurrently with any number of searchers. Lastly, ''Deleters'' can remove items from any position in the structure. As such, a deleter demands exclusive access to the structure: while a deleter has access, no other deleter, inserter, or searcher is allowed access.
=Highlights=
The primary actors are modeled by classes <code>SEARCHER</code>, <code>INSERTER</code>, and <code>DELETER</code>. Additionally, some common features are abstracted into a class <code>ACTOR</code> from which the effective actor classes inherit. Each actor lives only to access the data structure one time. The access looks similar in the different actor classes, and consists of executing a procedure to start the action, then waiting a random time interval, then executing a procedure to end the action.
The shared data structure is modeled by the class <code>SHARED_LIST</code>. Because the point of this example is to demonstrate the different allowable types of concurrent access by the different types of actors, it should be said that features that support that goal are all that you will find in this class. In other words, <code>SHARED_LIST</code> doesn't really maintain a data structure, it only provides the features necessary to coordinate safe concurrent access by searchers, inserters, and deleters.
<code>SHARED_LIST</code> provides features in three feature clauses, each explicitly exported to one of the types of accessors. For example, the feature clause exported to clients of type <code>SEARCHER</code> includes the query <code>can_search</code> and the procedures <code>start_search</code> and <code>end_search</code>. The features available to inserters and deleters are nearly identical. Because of the different concurrency requirements of each type of actor, though, the implementations for <code>can_search</code> and <code>can_delete</code> are different. Also different are the implementations for starting and ending actions for the various actor types.
<code>SHARED_LIST</code> keeps certain state attributes:
<code>
searchers: INTEGER_32
-- How many searchers are there?
inserting: BOOLEAN
-- Is someone inserting?
deleting: BOOLEAN
-- Is someone deleting?
</code>
These are used in the <code>can_search</code>, <code>can_insert</code>, and <code>can_delete</code> queries, which are in turn used in the preconditions for the corresponding <code>start_xxxx</code> features. For example, <code>start_delete</code> is constrained by a precondition <code>can_delete</code>, which is implemented like this:
<code>
can_delete: BOOLEAN
-- Can delete?
do
Result := not deleting and not inserting and searchers = 0
end
</code>
For the deleter calling <code>{SHARED_LIST}.start_delete</code>, the precondition clause <code>can_delete</code> is an [[Concurrent programming with SCOOP#Preconditions|uncontrolled precondition]]. This means that the deleter will wait until <code>can_delete</code> becomes true before the feature application of <code>start_delete</code> occurs.
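Given the state attributes shown above, the searcher's bookkeeping can be sketched as follows. The bodies are assumptions for illustration, built only from the attributes already described:
<code>
start_search
		-- Register one more active searcher.
	require
		can_search
	do
		searchers := searchers + 1
	end

end_search
		-- Unregister a searcher.
	do
		searchers := searchers - 1
	end
</code>
With this bookkeeping, <code>can_delete</code> remains false as long as any searcher or inserter is active, so a waiting deleter proceeds only once the structure is quiet.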

[[Property:title|Senate bus]]
[[Property:weight|-8]]
[[Property:uuid|cfce3b31-bb8d-8259-a02b-73fd1495fce9]]
=Description=
According to Allen Downey in his text ''[http://greenteapress.com/semaphores/ The Little Book of Semaphores],'' the Senate bus example was inspired by the [http://www.wellesley.edu/Transportation/senate.html Senate Bus at Wellesley College]. Passengers come to a bus stop to catch the Senate bus. The bus can hold 50 passengers. When the bus stops at the bus stop, the waiting passengers board. If the bus fills up, then any passengers who cannot board must wait until the bus shows up again. Likewise, any passenger who arrives at the stop while the bus is boarding must also wait until the next cycle.
=Highlights=
The root class for this example creates the bus stop, the bus, and the passengers all typed as separate.
The bus stop, modeled by class <code>STATION</code>, has features that can be used by the bus and by passengers. Access to these
features is restricted to the appropriate client classes through the clients part of the feature clause. Clients of type <code>PASSENGER</code> can access <code>{STATION}.pass_enter</code>. A client of type <code>BUS</code> can access <code>{STATION}.bus_enter</code>, <code>{STATION}.pick_up</code>, and <code>{STATION}.leave</code>, as well as a status feature <code>{STATION}.bus_is_waiting</code> and two passenger queues <code>{STATION}.waiting_list</code> and <code>{STATION}.checked_in_list</code>.
The lifecycle of a passenger is simple: enter the bus stop. This is accomplished by making a [[Concurrent programming with SCOOP#Separate types and separate calls|separate call]] to <code>{STATION}.pass_enter</code> and passing <code>Current</code> (the passenger object itself) as an argument.
The lifecycle of the bus is slightly more complex: enter the bus stop, pick up passengers, leave the bus stop, wait for a short time. The bus repeats this sequence forever. The routines in class <code>BUS</code> for entering the bus stop, picking up passengers, and leaving the bus stop all accept as an argument the separate bus stop object (<code>a_station: separate STATION</code>) and make a [[Concurrent programming with SCOOP#Separate types and separate calls|separate call]] to the corresponding routine in <code>STATION</code>.
Features of the bus stop (class <code>STATION</code>) manage the queues for waiting and checked-in passengers and whether a bus is at the bus stop. Passengers are added to the waiting queue when they arrive at the station. When the bus leaves the station, any waiting passengers are transferred to the checked-in queue. When the bus arrives at the station, the passengers on the checked-in queue are allowed to board the bus (up to the first 50 passengers, that is), and the boarding passengers are then removed from the checked-in queue.
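The transfer at pick-up time might be sketched as follows. The loop body and the boarding mechanics are assumptions based on the description above, not the exact code of the example:
<code>
pick_up
		-- Let up to 50 checked-in passengers board the waiting bus.
	local
		boarded: INTEGER
	do
		from
		until
			checked_in_list.is_empty or boarded >= 50
		loop
				-- Board the first checked-in passenger, then
				-- remove it from the checked-in queue.
			checked_in_list.start
			checked_in_list.remove
			boarded := boarded + 1
		end
	end
</code>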

[[Property:title|Single-element producer-consumer]]
[[Property:weight|-15]]
[[Property:uuid|25d3e585-0eb6-efa8-ada9-8ee596df5ada]]
=Description=
The single-element producer-consumer is a simpler variant of the classic [http://en.wikipedia.org/wiki/Producer-consumer_problem producer-consumer] problem. A producer produces products, in this case integers, into a single-element inventory. The products are then consumed from inventory by a consumer. The producer, consumer, and inventory are managed by separate [[Concurrent programming with SCOOP#Processors|processors]], so any access they have to one another must be synchronized through SCOOP mechanisms.
=Highlights=
In the single-element producer-consumer only a single producer and single consumer are created, and there is only storage allowing for a single instance of the product. So, effectively in this example, the bounded buffer of the classic producer-consumer problem has a size of one.
The classes modeling the different actors have obvious names: <code>PRODUCER</code>, <code>CONSUMER</code>, and <code>INVENTORY</code>. The root class of the example creates one <code>separate</code> instance of each of these, and then brings the producer and consumer to life.
The <code>PRODUCER</code> class supports a procedure <code>produce</code> in which a product is produced and stored in the single-element <code>INVENTORY</code>. The producer can only produce an element if the inventory is currently empty. Class <code>INVENTORY</code> exports a boolean query <code>has_item</code> which is the indicator of whether a product has been produced and is available for consumption. So <code>{PRODUCER}.produce</code> has a precondition that depends upon <code>{INVENTORY}.has_item</code> being false. Because the inventory is handled by a separate processor, this precondition is [[Concurrent programming with SCOOP#Preconditions|uncontrolled]] and will cause the producer to wait until the condition is true to proceed.
The <code>CONSUMER</code> class works in a way that is largely the symmetrical opposite of <code>PRODUCER</code>. The consumer tries to <code>consume</code> the item from the <code>INVENTORY</code>. But this is only possible if an item has been produced and is available. So <code>{CONSUMER}.consume</code> has a "wait" precondition based on the query <code>{INVENTORY}.has_item</code>.
So the heart of the problem is the synchronization between producer and consumer sharing a single inventory. If there's already a product in inventory, the producer cannot produce more and must wait. Only when the consumer consumes the current product can the producer produce again. For the consumer's part, if there's a product currently in inventory, then the consumer can consume that product. Otherwise, the consumer must wait until the producer produces a new product. The synchronization is handled by the SCOOP mechanism of uncontrolled (wait) preconditions on the inventory.
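The whole synchronization can be condensed into a sketch of <code>INVENTORY</code>. Only <code>has_item</code> is named in the description above; the features <code>item</code>, <code>put</code> and <code>remove</code> are assumed names for illustration:
<code>
class INVENTORY feature
	has_item: BOOLEAN
			-- Is a product available for consumption?
	item: INTEGER
			-- The stored product.
	put (a_product: INTEGER)
			-- Store `a_product'.
		require
			not has_item
		do
			item := a_product
			has_item := True
		end
	remove
			-- Consume the stored product.
		require
			has_item
		do
			has_item := False
		end
end
</code>
When <code>put</code> and <code>remove</code> are applied through separate calls, their preconditions are uncontrolled and therefore act as the wait conditions described above.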

[[Property:title|Exceptions in SCOOP]]
[[Property:link_title|Exceptions]]
[[Property:weight|7]]
[[Property:uuid|4f760d0c-ff3b-5f8a-7d43-9be855cef17a]]
== Introduction ==
Exceptions are a rather nasty issue in concurrency.
In a shared-memory system, an exception can leave the system in an inconsistent state, for example because it jumps over an unlock operation.
In message-passing systems, on the other hand, exceptions can introduce unnecessary synchronization just to make sure that no exception happened, or they can create havoc because the recipient of an exception message is no longer in a state where it can handle it.
For SCOOP, the exception mechanism was carefully designed with the following goals in mind:
* Comprehensibility: the mechanism should be easy to understand.
* Compatibility: it should be compatible with exceptions in sequential programs.
* Consistency: an exception should not leave objects in a tainted state.
* Asynchrony: exceptions should not restrict the level of concurrency.
== Exception Propagation ==
Within a single processor, exceptions propagate just like in a sequential program.
When a routine encounters an exception, the <code>rescue</code> clause is entered, and if no <code>retry</code> statement is present, the exception is propagated to the caller.
This ensures backwards compatibility with sequential programs, because when there's only the root processor, the semantics are exactly the same.
Furthermore, this mechanism has proven itself useful for restoring invariants after an exception in order to bring objects to a consistent state.
The interesting case is when an exception propagates between regions, which happens during a separate call.
In that case there are two possibilities:
* The call is synchronous: The exception is propagated to the client region.
* The call is asynchronous: The exception is not propagated, because the client is busy executing something else. Instead, the supplier region is marked as ''dirty''.
This decision was mostly made to ensure comprehensibility.
Propagating an exception to the client in an asynchronous call would be really hard to handle.
The client would have to be ready to handle an exception at any point in time, and there would have been a need for an additional language mechanism to protect critical sections.
For these reasons, SCOOP restricts exception propagation to synchronous calls only.
== Dirty Regions ==
A region marked as dirty has suffered an exception in an asynchronous call, which could not be propagated to its client.
The dirty mark has a big impact on future separate calls.
{{rule|name=Semantics of Dirty Regions|text= <br/>
1) All calls logged to a dirty region, whether synchronous or asynchronous, are ignored. <br/>
2) A synchronous separate feature call to a dirty region immediately triggers an exception in the client. Afterwards, the region is clean again.}}
The reason for these rules is that a series of commands and a subsequent query often depend on each other.
For example, a first call may instruct the target region to open a file, the next call to append a string to it, followed by a query to get the new size of the file.
If the first call already fails, there's no point in executing subsequent calls.
Even worse, it can make recovery from exceptions very hard to do in the client if it has no idea which calls have been successfully executed after the first exception.
The dirty mark will also vanish when a region is unlocked.
{{rule|name=Unlocking Dirty Regions|text=After releasing a lock on a dirty region, the region is clean again.}}
This is probably the most controversial design decision, because '''it allows for exceptions to be lost'''.
During the design of the exception mechanism, there was a choice of two other solutions.
One would have been to add an automatic "safeguard" synchronization whenever an unlock operation happens, during which exceptions could be propagated. The obvious downside is that it severely limits the uses of asynchrony.
Another solution would have been to preserve the exception, and raise it in the client that next logs a call.
The last solution only partially solves the problem (there might be no next client logging a call at all), and it introduces a new problem: a processor can receive an exception completely out of context.
However, the main reason to choose the "forget-upon-unlock" solution over the other two is that it's easy to simulate the behaviour manually (as you'll see in the next section), while it's impossible to have a "forget-upon-unlock" semantics if one of the other models is used.
{{info|Upon lock passing, the dirtiness of a region is preserved.}}
== Preventing Exception Loss ==
One way to prevent exceptions from being lost is to add a synchronous query at the end of a routine:
<code>
put_character (c: CHARACTER; a_file: separate MY_FILE)
local
l_sync: POINTER
do
a_file.open
a_file.put_character (c)
a_file.close
-- This ensures that exceptions are propagated:
l_sync := a_file.default_pointer
end
</code>
Another possibility is to store the failure in the separate object:
<code>
class MY_FILE feature
is_tainted: BOOLEAN
open
do
-- Open a file.
rescue
is_tainted := True
end
-- other features
end
class CLIENT feature
put_character (c: CHARACTER; a_file: separate MY_FILE)
do
if a_file.is_tainted then
-- Handle exception in `a_file'.
end
a_file.open
a_file.put_character (c)
a_file.close
end
end
</code>

[[Property:title|Exclusive Access]]
[[Property:weight|4]]
[[Property:uuid|7f5adf71-7169-c54e-9bed-079c84aee8d3]]
The topic of exclusive access was already briefly touched upon in [[Separate Calls]].
In this chapter, we'll have a deeper look at the exclusive access guarantees given in SCOOP, and what it means in practice.
== Reasoning Guarantees ==
In the SCOOP model, there are two very important guarantees - ''Order Preservation'' and ''Exclusive Access''.
{{definition|Order Preservation | The guarantee that, between any two regions, the order of separate feature calls logged by one region is the same as the order of feature applications by the other region.}}
{{definition|Exclusive Access (to a region) | The guarantee that no intervening features logged by other processors are executed.}}
The first guarantee is always valid.
A SCOOP processor is never allowed to reorder two feature calls when both have been logged by the same region.
However, the order preservation guarantee only holds between two processors - it is not valid in a global context.
For example, when a region A first logs a command to C, and then another region B logs a command to C, you generally don't know which feature will be applied first.
And also the other way around: You have no guarantee on the order of feature application when A logs both an (asynchronous) separate call to B and C.
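As a sketch of the order preservation guarantee, consider the following routine (the class <code>LOGGER</code> and its feature <code>put</code> are hypothetical, chosen only for illustration):
<code>
log_messages (a_logger: separate LOGGER)
		-- Both calls below are logged by the current region, so SCOOP
		-- guarantees that "first" is applied before "second".
		-- Nothing is guaranteed about their order relative to calls
		-- logged to `a_logger' by other regions.
	do
		a_logger.put ("first")
		a_logger.put ("second")
	end
</code>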
Exclusive access on the other hand is only valid in certain contexts.
When an object is ''controlled'', SCOOP guarantees exclusive access to the region that contains the object.
We'll define the term ''controlled'' a bit later, but you've already seen an example in [[Separate Calls]]:
A separate object which is passed as an argument.
These two basic guarantees in the SCOOP model are important to reach one of the main goals:
The ability to reason about a program should be preserved in a concurrent program.
SCOOP guarantees that a ''controlled'' object behaves just like in a sequential program,
meaning that pre- and postconditions remain valid in between feature calls because no other processor can interfere.
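As an illustration, consider the following sketch (the class <code>COUNTER</code> and its features are hypothetical). Because <code>a_counter</code> is controlled for the duration of the routine, no other processor can interleave calls, and the postcondition holds just as it would in a sequential program:
<code>
increment_twice (a_counter: separate COUNTER)
		-- Increment `a_counter' by two.
		-- `a_counter' is controlled here: no other processor can
		-- modify it between the two commands.
	do
		a_counter.increment
		a_counter.increment
	ensure
		a_counter.value = old a_counter.value + 2
	end
</code>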
== Inline Separate ==
Passing a separate object as an argument is not the only way to gain exclusive access in SCOOP.
While in theory the mechanism would be sufficient, experience has shown that it's often bulky and not very elegant to write a new routine whenever one wants to call a feature on a separate target.
Therefore SCOOP introduces the ''Inline Separate'' block to make things a bit easier.
<code>
person: separate PERSON
show_separate_block
do
separate person as l_person do
print (l_person.age)
end
end
</code>
The separate block evaluates the expression on the left-hand side of the <code>as</code> keyword, assigns it to the new read-only local variable <code>l_person</code> on the right-hand side,
and does whatever is necessary to provide ''exclusive access'' to the region that handles <code>l_person</code>.
You can think of the inline separate block as syntactic sugar that creates a new wrapper routine (although inline separate still lets you access local variables of the enclosing routine):
<code>
person: separate PERSON
show_separate_block
do
anonymous_wrapper (person)
end
anonymous_wrapper (l_person: separate PERSON)
do
print (l_person.age)
end
</code>
== Controlled Objects ==
The Eiffel compiler introduces the concept of a ''controlled'' object to support the exclusive access guarantee.
{{definition|Controlled/uncontrolled object|An object is '''controlled''' if it is attached to a reference that has one of the following properties:<br/>
1) It is of a non-separate type. <br/>
2) It is of a separate type and it appears as a formal argument of the enclosing routine. <br/>
3) It is a local variable of an inline separate block.
<br/><br/>Otherwise it is '''uncontrolled'''.}}
An object is always ''controlled'' with respect to the processor handling <code>Current</code>, and the meaning is that the current processor has ''Exclusive Access'' to the region that holds the object. In particular, this means that no other processor can access or modify a controlled object.
In chapter [[Separate Calls]] we've already mentioned that SCOOP places a restriction on separate calls.
A separate call is allowed if the target appears as a formal argument of the enclosing routine.
While this rule is correct, it does not cover all of the cases.
With the above definition however, we can both simplify and extend the previous ''Separate Argument'' rule:
{{rule|name=Controlled Target|text=A separate call is valid if its target is controlled.}}
The compiler checks this property at compile-time and throws an error if a separate call happens on an uncontrolled target.
{{note| Query results with a non-separate return type are placed on the same processor as the target. This means that, if a processor controls the target, it is also safe to use these query results. The compiler tries to exploit this fact a little by treating such query results as controlled as well within a single expression. This allows multi-dot calls on separate targets, such as <code>sep_person.spouse.name.out</code>. <br/>
Note that this does not yet cover all cases where ''controlled'' can be inferred, and the compiler may become smarter in the future.}}
== Exclusive Access to multiple regions ==
It is possible to gain exclusive access to multiple regions simultaneously.
This can either be done by passing multiple arguments to a region, or with an ''inline separate'' block.
<code>
balance_with_arguments (incr: separate INCREMENTER; decr: separate DECREMENTER): INTEGER
do
Result := incr.value + decr.value
end
balance_with_inline_separate: INTEGER
do
separate incrementer as incr, decrementer as decr do
Result := incr.value + decr.value
end
end
incrementer: separate INCREMENTER
decrementer: separate DECREMENTER
</code>
Exclusive access to the arguments is granted '''atomically''' in this case.
This ensures that no deadlock can occur when two regions want to gain access to the same regions, provided that they both use the "multiple-arguments-mechanism" of SCOOP and don't lock regions one after the other by hand.
{{SeeAlso|The dining philosopher example makes use of this fact. The feature <code>{PHILOSOPHER}.eat</code> has two separate arguments, which are guaranteed to be locked in one atomic step by the SCOOP runtime. If this were not the case, a deadlock might occur.}}
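A sketch of the idea, with a hypothetical <code>FORK</code> class (the actual example shipped with EiffelStudio differs in detail):
<code>
eat (left, right: separate FORK)
		-- Exclusive access to the regions of `left' and `right' is
		-- gained atomically when this routine starts, so two
		-- philosophers can never each hold one fork while waiting
		-- for the other.
	do
		left.pick_up
		right.pick_up
		-- ... eat ...
		left.put_down
		right.put_down
	end
</code>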
== Waiting for regions ==
You may wonder how the exclusive access guarantee is implemented in SCOOP.
A simple solution would be to have a lock for every region, and to gain exclusive access, one has to acquire the lock.
Before the 15.01 release, this was indeed how it was implemented.
However, the implementation proved to be flawed because it caused a lot of synchronization and waiting.
In EiffelStudio 15.01 this has changed.
When a processor needs exclusive access to another one, it opens a new ''queue'' to log calls.
There can be multiple open queues to a processor, but the processor will always work on only one queue.
This has an important effect in practice: '''Gaining exclusive access to a region is always non-blocking.'''
Note that this is also true for exclusive access to multiple regions.
Previously it may have been necessary to wait for another processor to relinquish the lock, but now a processor can just proceed with logging calls, which will eventually be executed by the supplier processor.
You may now wonder where the waiting happens, as it isn't possible to both guarantee exclusive access and have fully non-blocking behaviour.
Well, it happens during synchronous calls.
When a client gains exclusive access and starts logging calls, the supplier may not care about it yet because it is busy with something else.
However, as soon as a synchronous call is logged, the client needs to wait for the result.
During that time, the supplier will finish its other tasks and eventually start to execute the calls logged by the suspended client, at which point it will wake up again.
{{note|Due to this, the only way to experience a deadlock in SCOOP is when a processor is stuck in a separate query.}}
To summarize:
* Gaining exclusive access, also to multiple regions, is always non-blocking.
* An asynchronous call is also non-blocking.
* Only a call to a query may be blocking, and thus a place where a processor can be stuck (e.g. in case of deadlock or starvation).

[[Property:title|Getting Started with SCOOP]]
[[Property:link_title|Getting Started]]
[[Property:weight|-1]]
[[Property:uuid|87f78898-2bbb-b134-0128-e55401a61a05]]
=Introduction=
SCOOP adds only a single keyword to the Eiffel programming language: <code>separate</code>. In any case in which SCOOP is not enabled, the <code>separate</code> keyword is ignored, and the SCOOP-specific validity rules are not applied.
In order to enable SCOOP processing in your project, you use the project settings. There are two mandatory settings. One of them is the option ''Concurrency'', and the value you need is ''SCOOP'', as shown in the section of the Project Settings dialog below.
[[Image:SCOOP project setting]]
The second setting is the precompiled library. This should either be '''base-scoop-safe''', or nothing at all. A precompiled library which is not built with SCOOP support will not work.
{{caution | When you use a precompiled library, the ''Concurrency'' setting for that library trumps whatever ''Concurrency'' value you have set for the project. So, if you use the "base-safe.ecf" precompiled library, your project will not be SCOOP-enabled regardless of the setting of ''Concurrency'' in the Target. Likewise, using the "base-scoop-safe.ecf" precompiled library always produces a SCOOP-enabled project, even if the Target ''Concurrency'' setting is ''None''.}}
=Creating a new SCOOP project=
== Step 1: Create a new project ==
Create a new project with the EiffelStudio new project wizard by selecting ''Basic application'' from the choices under ''Create project'' when EiffelStudio opens.
The created project will not have SCOOP enabled however (this is true in all EiffelStudio releases so far, but may change in the future).
Do not compile the newly created project or precompile a library if asked - it will just be a waste of time.
== Step 2: Convert the project ==
To enable SCOOP, you'll have to do some work on the project settings. Follow the steps in the next section.
= Convert an existing project to SCOOP =
== Step 1: Adjust Concurrency setting ==
Open the project settings dialogue (Project -> Project settings...).
Select your favorite target (usually there's only one) and expand the section ''Advanced''.
In there you'll find the setting ''Concurrency'', which should have a value ''None'' for sequential projects. Change it to ''SCOOP''.
In addition to the ''Concurrency'' project setting, it is recommended that for new projects you use [[Creating a new void-safe project#Project settings for void-safe projects|the settings that guarantee void-safety]].
== Step 2: Remove the precompiled library ==
Expand the node ''Groups'' in the project settings on the left.
If your project uses a precompiled library, you should be able to see and expand the node ''Precompile'' as well.
Right click on the item (usually something like "base-safe_precompile") in the category ''Precompile'', then select ''Remove''.
== Step 3: (Optional) Add precompiled base-scoop-safe to the project ==
Select the node ''Groups'' on the left, and then press Ctrl + P on the keyboard (or select ''Add Precompile'' from the toolbox).
Note that this only works when there's no existing precompiled library and when the node ''Groups'' is highlighted.
In the dialog that opens, select "precomp_base-scoop-safe", then close the dialog by clicking ''OK''.
== Step 4: Clean compile ==
Changing the ''Concurrency'' setting or the precompiled library requires a [[Clean compile|clean compile]] of your system in order to take effect. To do so, close EiffelStudio and reopen it.
In the dialog that appears, select the previously modified project and then select the ''Compile'' action and add a tick to ''Clean''.
At this point, your project should be void-safe and SCOOP enabled.
== Step 5: Double-check ==
Before doing anything after the clean compile, check the warnings section. If EiffelStudio is complaining that some settings could not be applied, something might be wrong with the precompiled library.
To make sure that SCOOP is really enabled, you can insert the following code snippet somewhere in your code:
<code>
check_scoop (a_string: separate STRING)
local
str: STRING
do
str := a_string
end
</code>
This should trigger a compiler error. If it doesn't, then SCOOP is not correctly set up in your project.

[[Property:title|SCOOP implementation]]
[[Property:weight|9]]
[[Property:uuid|eeb07907-e036-f3d6-5297-a7c44cfd1900]]
The implementation of SCOOP is the result of many years of design and refinement of the model. This page describes specific properties of its current state, in particular ways in which it differs from some of the earlier publications about the model.
=Known limitations=
==Supported concurrency mechanisms==
At the core of the SCOOP model lies the notion of [[Concurrent programming with SCOOP#Processors|processor]]: a mechanism that can execute instructions sequentially. As a general model covering many forms of concurrency, SCOOP allows many possible implementations of this concept. In the EiffelStudio implementation, processors are implemented as <em>threads</em> of the operating system.
==Maximum number of SCOOP processors==
The maximum number of SCOOP processors per system is currently 1024.
==Agents targeted to objects of separate types==
Agents targeted on separate objects are currently not supported.
=Workarounds=
In this first implementation of SCOOP, some things that we do commonly in sequential Eiffel become less fluid. Although these are not strictly limitations in the implementation of SCOOP principles, they are areas that should be improved in future releases to make SCOOP programming easier. In the meantime, there are workarounds for some of these situations.
==Printing a separate STRING ==
Suppose you have declared a class attribute of type <code>separate STRING</code>:
<code>
my_separate_string: separate STRING = "Hello Eiffel World!"
</code>
and you want to output that string using <code>io.put_string</code>. The solution you might use from sequential Eiffel would be:
<code>
io.put_string (my_separate_string)
</code>
But the statement above results in a compile error because the argument type (<code>separate STRING</code>) is not compatible with the type (<code>STRING</code>) that <code>put_string</code> is expecting.
In order to make it possible to print the content of separate instances of <code>STRING</code>, a creation procedure, <code>make_from_separate</code>, is available in the string classes; it allows initialization of a non-separate instance of <code>STRING</code> from a separate <code>STRING</code>.
So, to print <code>my_separate_string</code>, you could create a non-separate instance of <code>STRING</code>, then print the non-separate instance, as shown below.
<code>
local
l_temp: STRING
...
create l_temp.make_from_separate (my_separate_string)
io.put_string (l_temp)
</code>
Or use a creation expression and avoid declaring the local variable:
<code>
io.put_string (create {STRING}.make_from_separate (my_separate_string))
</code>
== Calling a separate agent ==
Calling a separate agent is a bit tricky, especially if it's a <code>PROCEDURE</code> which should be executed asynchronously.
If the agent does not take any arguments, you must pass Void, otherwise the compiler will generate an empty <code>TUPLE</code> which
is on the same processor as the caller and thus triggers lock passing (see [[Asynchronous Calls]]):
<code>
do_call (proc: separate PROCEDURE [TUPLE])
do
proc.call (Void)
end
</code>
If the agent does take arguments, things get a bit trickier. If the call must be asynchronous, you have to use a workaround with the feature <code>{ROUTINE}.empty_operands</code> like this:
<code>
do_call (a_procedure: separate PROCEDURE [TUPLE[separate STRING]]; a_string: separate STRING)
local
l_tuple: separate TUPLE [separate STRING]
do
l_tuple := a_procedure.empty_operands
set_tuple_string (l_tuple, a_string)
a_procedure.call (l_tuple)
end
set_tuple_string (a_tuple: separate TUPLE [str: separate STRING]; a_string: separate STRING)
do
a_tuple.str := a_string
end
</code>
=Implementation dependent behavior=
==The Wait Rule==
{{note | This only applies to EiffelStudio releases prior to 15.01}}
The [[Concurrent programming with SCOOP#Access to shared resources|Wait Rule]] says: ''A routine call with separate arguments will execute when all corresponding processors are available and hold them exclusively for the duration of the routine.''
In the EiffelStudio implementation prior to 15.01, a routine will not necessarily wait for all processors associated with its separate arguments to be available before it ''begins'' execution. The waiting on processors occurs in a "lazy" manner. Execution will only wait on the availability of one of the processors when it actually needs to use the argument associated with that processor. This means that if there are several instructions ahead of the first instruction that references a separate argument, then those several instructions will be executed immediately. Only at the point at which the separate argument's processor is needed will the routine pause and wait for the availability of the processor.

[[Property:title|Regions and Processors]]
[[Property:weight|2]]
[[Property:uuid|974a41dd-0e36-4d75-edd1-ead6ea4b372d]]
== Regions ==
One of the key ideas in SCOOP is to prohibit simultaneous access to shared memory.
In order to reach this goal, the SCOOP model partitions the heap memory into ''regions''.
{{definition|Region|A set of objects in the heap. The set of all regions in a program is a partition of the heap.}}
Every object in an Eiffel program belongs to exactly one ''region''.
A ''region'' is by itself sequential, meaning that at most one routine is being executed on its objects at any time.
A SCOOP program, however, can contain multiple regions.
[[Image:SCOOP regions]]
{{info|SCOOP is classified as a message passing concurrency model, because there is no shared memory.}}
{{note|A sequential program is a special case for a SCOOP program that has only one region.}}
A direct access from one region into another is not allowed.
If one wishes to perform a command or a query in an object of a different region, a message has to be sent.
You'll see how this can be done in chapter [[Separate Calls]].
The simple trick of splitting the heap into several regions, where each region by itself is sequential,
prevents one of the trickiest problems in concurrency: data races.
In SCOOP you are guaranteed that a data race, meaning a read and write access to the same memory with nondeterministic ordering, can never happen.
== Processors ==
In the SCOOP model, a ''processor'' is used as the engine for execution.
{{definition|Processor| An autonomous thread of control capable of sequential execution of instructions.}}
A ''processor'' is always attached to exactly one region, and is responsible for performing operations on all its objects.
The term ''handler of an object'' is used to denote the processor attached to the region on which the object is placed.
As mentioned earlier, a ''processor'' cannot access or perform operations on an object in a different ''region''
and has to send a message to the other handler instead.
{{info|''Processor'' is an abstract notion and does not mean the physical silicon chip which is present in every computer.
In SCOOP we think of it as a thread of control capable of applying features to objects.
In theory processors are not restricted to any particular type of hardware or software, for example they could correspond to threads, processes, hardware processors, or machines in a network.
Currently however a SCOOP processor is implemented as a thread.}}
== Separate Types ==
To support the concept of regions in a program text, SCOOP extends the type system by introducing a single new keyword: <code>separate</code>.
The <code>separate</code> keyword is used to annotate a reference and means that the object attached to it may be in a different region.
{{definition|Separate type|A type which has been declared including the keyword <code>separate</code>.}}
If an entity uses the keyword <code>separate</code> in its declaration, such as:
<code>
my_x: separate X
</code>
it indicates that the handler of <code>my_x</code> may be different from the handler of <code>Current</code>.
This in turn means that it is forbidden to access and modify <code>my_x</code> directly.
To perform any operation on <code>my_x</code>, a message should be sent to the other handler.
Note that the SCOOP type system allows attaching an object to a separate reference, even if it is actually in the same region as <code>Current</code>.
But it is not possible the other way around: An object in a different region must always be of a separate type.
This is reflected in the type checker, which treats a regular type <code>A</code> as a subtype of the separate type <code>separate A</code>.
In the image above, the three references that cross a processor boundary must be declared separate.
The single reference in Region 2, which stays in the same region, can be of a non-separate type.
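The conformance rule can be illustrated with a small sketch (the class <code>X</code> is hypothetical):
<code>
x: X
sep_x: separate X

show_conformance
	do
		create x
		sep_x := x
			-- Valid: X conforms to separate X. The object simply
			-- happens to reside in the same region as Current.
		-- x := sep_x
			-- Invalid: separate X does not conform to X, since the
			-- attached object may be in a different region.
	end
</code>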
== Creating regions and processors ==
In order to turn a sequential program into a concurrent program, one has to create new regions and put objects into them.
The means to achieve this is the creation instruction on an entity whose type is separate.
<code>
my_x: separate X
-- ...
create my_x.make
</code>
The instruction <code>create my_x.make</code> does a lot of things at the same time:
* It creates a new region.
* It creates a new processor for the new region.
* It creates a new object of type X which is placed into the newly created region.
With this new knowledge we can create a small program that generates the object and region graph shown above:
<code>
class APPLICATION create make feature
person: separate PERSON
thing: separate ANY
make
do
create person.make
create thing
end
end
class PERSON create make feature
name: STRING
thing: separate ANY
make
do
create name.make_from_string ("John Doe")
create thing
end
end
</code>

[[Property:title|Separate Calls]]
[[Property:weight|3]]
[[Property:uuid|c030d521-1420-1570-b63e-9035332a3e26]]
==Separate Calls==
In the [[Regions and Processors | previous chapter]] we've learned that a concurrent SCOOP program consists of several ''regions'' that communicate via message passing.
You may wonder how it is possible to pass a message to another processor, and the answer is simple:
With the good old feature call.
{{definition|Separate call|A feature call whose target is of a separate type.}}
The ''separate call'' is the SCOOP way to pass messages from one region to another.
A call such as this
<code>
do_call (my_x: separate X)
do
my_x.f (42)
end
</code>
roughly corresponds to a message to the handler of <code>my_x</code> with the content:
"Execute feature f on target my_x with argument 42."
Note that there can be a difference between the time a message is sent and the time the feature is executed.
In SCOOP we therefore distinguish between a ''feature call'' and a ''feature application''.
{{definition|Feature call|Register a feature to be executed.}}
{{definition|Feature application|Execute a feature.}}
==Access to shared resources==
The main issue with concurrent systems is the proper control of access to resources that can be shared among simultaneously executing processors.
Traditional solutions to the problem involve the use of “critical sections” of code.
These are sections of code in which the shared resource is accessed.
Only one thread is allowed to be executing the critical section at a time.
So if one thread wants to execute the critical section and another is already doing so, then the first must wait.
Thread synchronization schemes ensure this “mutual exclusion” of access to the critical section.
Rather than using critical sections, SCOOP relies on the mechanism of argument passing to assure exclusive access.
As a result, there is a restriction placed on separate calls.
{{Rule|name=Separate argument|text=A separate call <code>a_x.f (a)</code> is valid if <code>a_x</code> is an argument of the enclosing routine.}}
So, according to this rule, for a separate call to be valid, the target of the call must be a formal argument of the routine in which the call occurs.
The code below contains both an invalid separate call and a valid one.
<code>
my_separate_attribute: separate SOME_TYPE
...
calling_routine
-- One routine
do
my_separate_attribute.some_feature -- Invalid call: Feature call on separate attribute
enclosing_routine (my_separate_attribute) -- Separate attribute passed as argument
end
enclosing_routine (a_arg: separate SOME_TYPE)
-- Another routine
do
a_arg.some_feature -- Valid call: Feature call on separate argument
end
</code>
In the code above, <code>my_separate_attribute</code> is a class attribute declared as a separate type. In the first line in <code>calling_routine</code> a direct feature call is made to apply <code>some_feature</code> to <code>my_separate_attribute</code>. This is an invalid separate call. The second line calls feature <code>enclosing_routine</code> and passes <code>my_separate_attribute</code> as an argument. <code>enclosing_routine</code> takes an argument of type <code>separate SOME_TYPE</code>. Within <code>enclosing_routine</code> it is valid to call <code>some_feature</code> on <code>a_arg</code>.
{{info|Congratulations! You've just gained enough knowledge to write a small SCOOP program.
You may want to have a look at the [[Dining Philosophers]] example, which does not use any of the advanced concepts.}}
{{SeeAlso|The <code>launch_producer</code> feature of the [[Producer-consumer|producer-consumer]] example, a feature which exists for the purpose of compliance with the separate argument rule.}}
==Synchronous and asynchronous feature calls==
When we think of the execution of sequential Eiffel, we tend to equate feature call and feature application. That is, it is expected that for a sequence of two feature calls:
<code>
x.f
y.g
</code>
that the feature application of <code>x.f</code> will complete before <code>y.g</code> begins.
In concurrent Eiffel with SCOOP things are different. This is because a particular feature call, <code>x.f</code>, may occur on one processor, and the consequent feature application (of feature <code>f</code> to <code>x</code>) may occur on a different processor.
{{definition|Synchronous feature call|A feature call in which the execution of the calling client does not proceed until the feature application has completed. }}
{{definition|Asynchronous feature call|A feature call which causes the “logging” of a request by a client for the associated feature application to be executed by the supplier's processor. }}
After an asynchronous feature call, the execution of the client proceeds immediately, possibly in parallel with the application of the feature on some other processor.
===What makes a call synchronous or asynchronous?===
First, every feature call is either a synchronous feature call or an asynchronous feature call. For a particular call, the following rules determine which it is:
A feature call is always '''synchronous''' in the following cases:
:S1 It is a non-separate call.
:S2 It is a separate call to a query.
A feature call may be '''asynchronous''' in the following case:
:A1 It is a separate call to a command.
Let's look a little closer at those cases determining synchronous calls.
Case S1 is the case of typical sequential Eiffel, where all calls are non-separate, and therefore synchronous. Of course, even in concurrent Eiffel with SCOOP, plenty of non-separate calls will occur, and these will be synchronous.
Case S2 says that if a separate call is a query it must be synchronous. This is because even though the feature application will probably occur on a different processor, the instructions following the query will likely depend upon the result of the query, so they must wait until the feature application completes. This situation is known as ''wait by necessity''.
The case A1 is the only case where asynchrony is involved. But be careful with the wording - it says the call '''may be asynchronous''', because there are some exceptions. The exact rules are a bit complex and are described in [[Asynchronous Calls]]. As a general rule of thumb, a separate call is executed asynchronously when the client does not have exclusive access over an object which is needed by the target region.
Asynchronous execution means that when a client processes the call to the command, it “logs” the need for its associated feature application. But then, rather than waiting for the feature application to complete, the client continues execution of instructions beyond the asynchronous call.
It is in this case that concurrent computation is achieved. The processor of the client object is free to continue processing while the processor handling the target of the asynchronous feature call applies that feature.
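A small sketch (the class <code>LOG</code> and its features are hypothetical) sums up the two separate cases:
<code>
run (a_log: separate LOG)
	local
		n: INTEGER
	do
		a_log.put ("start")
			-- Separate call to a command: may be asynchronous;
			-- the client proceeds immediately (case A1).
		n := a_log.count
			-- Separate call to a query: synchronous; the client
			-- waits for the result (case S2).
	end
</code>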

[[Property:title|EiffelThread Class Reference]]
[[Property:weight|0]]
[[Property:uuid|e1d3e50a-e67a-a375-7165-8a7856b7b969]]
==View the [[ref:libraries/thread/reference/index|EiffelThread Class Reference]]==

[[Property:title|Compilation of multithreaded systems]]
[[Property:weight|3]]
[[Property:uuid|24e474b5-3cb7-16ff-365f-73be8e73bd56]]
{{UpdateNeeded}}
{{ReviewRequested}}
==Settings==
* Set the Concurrency option to EiffelThread in the project settings under Advanced.
* Do not use a non-multi-threaded precompiled library. The corresponding multi-threaded libraries of base, WEL and Vision2 should be located in $ISE_EIFFEL/precomp/spec/$PLATFORM/ with the mt prefix.
* You may want to include the thread library (located at "$ISE_EIFFEL/library/thread") in your project. This setting can be done in [[Group Options| the groups section of the project settings window]] .
* When using external C libraries, be sure that they are MT-safe on the platforms where the Eiffel Threads are available. For example, the multi-threaded versions of [[WEL]] and [[EiffelNet]] can be recognized by the <code>mt</code> prefix.
==Compilation==
Just launch the compilation: the Eiffel compiler will generate and link the multi-threaded executable. If the project was already compiled, [[Clean compile|clean the project and recompile it from scratch]].
==External C files==
The C files that you link with a multi-threaded Eiffel application must be compiled in multi-threaded mode. In addition, you should follow some basic guidelines, which you can find in any documentation on threads. Here are the main requirements:
* Check that what you are importing is safe: you cannot arbitrarily enter non-threaded code in a multi-threaded program. Check your include files and the libraries that you use for linking to decide if they can be used in multi-threaded mode.
* If you are using CECIL in multi-threaded mode, you must compile your C files with the same defined symbols as those used to compile the generated C-code in multi-threaded mode.
* Threaded code can safely refer to unsafe code ONLY from the initial thread.
{{note|If you use the libraries EiffelNet and MEL in multi-threaded mode, you should use libmtnet.a and libmtmel.a. When using MEL, be aware that Motif 1.2 is not thread-safe (i.e. not reentrant); Motif 2.x is thread-safe.}}
==Thread Safe Eiffel libraries==
Most libraries are not thread safe. This means that even though you can use them in a multi-threaded environment, you will need to add the necessary protection to ensure that objects are accessed only by one thread at a time.
Most of the issues are related to the use of EiffelBase containers which were designed at a time when threads were not present. In the future we will certainly provide a version of EiffelBase which will be designed to be thread safe, in the meantime, you have to use basic protection mechanisms such as mutexes to guarantee the safe execution of your programs.
Here is a summary of what you need to include in your project settings when using our libraries:
* EiffelBase: not thread safe, no externals required.
* WEL: partly thread safe regarding creation of graphical objects that belong to two different threads, but since it relies on EiffelBase, be aware of what has been said above. Other libraries have the same dependency on EiffelBase and this warning will not be repeated.
* MEL: be aware that Motif 1.2 is not thread safe and that Motif 2.x is. Therefore you can use MEL in a multi-threaded program only if you have Motif 2.x; in this case, you need to include libmtmel.a instead of libmel.a.
* EiffelNet: thread safe, with the above limitation on EiffelBase.
* COM: not thread safe.

View File

@@ -0,0 +1,17 @@
[[Property:title|EiffelThread Tutorial]]
[[Property:weight|-1]]
[[Property:uuid|c1965bc3-23cf-25d6-02c1-717b7035d41c]]
Multithreaded applications provide a flexible and efficient way of using the power of modern computer systems. EiffelThread supports a multithreaded model that is both fast and easy to use.
The EiffelThread library is mapped onto the native thread library of your platform, such as Windows threads, POSIX threads, Solaris threads or UNIX International threads.
This document explains how to use the EiffelThread library. To take full advantage of it, you should know the basics of multithreaded systems.
To enable multithreading in the EiffelStudio environment, you will need to do some changes in your project settings.
There are special considerations for using "once" features in an Eiffel multithreaded environment.
If you encounter problems, take a look at the FAQ list.

View File

@@ -0,0 +1,24 @@
[[Property:title|Multithread FAQ]]
[[Property:weight|6]]
[[Property:uuid|a0e1ddf6-cc19-b6f8-0e05-075908ddd347]]
===I've launched several threads and they do not seem to be executed:===
The thread that launched them may have terminated before its children finished. Put a <code>join_all</code> at the end of the parent thread.
===Two threads can lock the same mutex, however none of them unlocks it:===
Same problem as above. Maybe the first thread that locked the shared mutex died before the second tried to lock it: thus, the first one automatically unlocked it when it died. You should put a join_all or an infinite loop in the parent thread.
===I've added the option multithreaded in the project settings and it crashes:===
If you have already compiled your system in non-MT mode, you cannot change the mode of compilation and simply relaunch your compilation (the generated C-code would be incompatible). Delete all your object files in your W_code or F_code directory and freeze or finalize the system again.
===My once function changed during my MT-Eiffel-program:===
Once functions are once per thread in multithreaded mode. Hence, each once function is thread-specific and is initialized the first time it is called in a given thread.
You can obtain a once per process by following [[Once features in multithreaded mode|these instructions]].

View File

@@ -0,0 +1,87 @@
[[Property:title|Once features in multithreaded mode]]
[[Property:weight|4]]
[[Property:uuid|5578da29-7603-b501-1a7d-305d20fd6485]]
==Manipulating Once features in multithreaded mode==
Eiffel introduced the powerful mechanism of once routines. A once routine has a body that will be executed only once, at the first call. Subsequent calls will have no further effect and, in the case of a function, will return the same result as the first. This provides a simple way of sharing objects in an object-oriented context.
For multithreaded applications, the appropriate semantics is that once routines must be called once per thread (rather than once per process). This is the semantics supported by EiffelThread.
A once feature is thus initialized not once per process but once per thread: its body will be executed again the first time it is called in each new thread.
==Once per Process/Thread==
By default, once features are once per thread. This means that when a once feature is called in a thread, the Eiffel run-time will check whether it has been already computed in this thread. If not, the once feature will be initialized and computed. This corresponds to the most commonly desired behavior for once features in multithreaded mode: most of the time, a once called in a thread is not likely to share its result with other threads.
This is only the default, however: you may at times need to use "once per process" versus "once per thread". Objects created "once per process" in multithreading mode can be shared among threads. You can use a [[ET: Once routines and shared objects#Adjusting once semantics with "once keys"|'''once key''']] to indicate the mode you wish to use.
==Specifying once per thread or once per process ==
As mentioned above, in multithreaded mode, the default <code>once</code> syntax will ensure "once per thread", as in this example:
<code>
object_per_thread: OBJECT
		-- Once per thread.
	once
		create Result.make
	end
</code>
You could obtain the same effect by explicitly coding the "THREAD" once key:
<code>
object_per_thread: OBJECT
		-- Once per thread.
	once ("THREAD")
		create Result.make
	end
</code>
To ensure that a once function is executed only once per process, you would use the "PROCESS" once key:
<code>
object_per_process: OBJECT
		-- New 'object' (once per process)
		-- that could be shared between threads
		-- without reinitializing it.
	once ("PROCESS")
		create Result.make
	end
</code>
The same concepts apply to once procedures.
{{note|Remember the effect of exceptions on the execution of once routines. If an exception occurs during the execution of a once routine, then '''that same exception will be raised again on each subsequent call''' to the once routine.}}
===Obsolete syntax===
The syntax shown above is the current standard syntax. However in Eiffel code written for previous versions, you may run across once keys for multithreaded systems which are expressed in a different syntax. Specifically, the obsolete syntax used a feature's <code>note</code> clause to specify a once key, as in the following example.
<code>
object_per_process: OBJECT
		-- Obsolete syntax
		-- New 'object' (once per process)
		-- that could be shared between threads
		-- without reinitializing it.
	note
		once_status: global
	once
		create Result.make
	end
</code>
==Using once per object in a multithreaded system==
It is possible to use a once key to specify a once function that is executed [[ET: Once routines and shared objects#Adjusting once semantics with "once keys"|once per object]]. In a multithreaded system, however, no automatic synchronization of access occurs in this case, hence the following caution.
{{caution|You should understand that in a multithreaded system, you must synchronize access to the result of a function executed once per object in the same way that you would synchronize access to class attributes. }}
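As an illustration, here is a minimal sketch of such protection using a <eiffel>MUTEX</eiffel> from EiffelThread. The feature names (<code>cache</code>, <code>cache_lock</code>, <code>record</code>) are illustrative, not part of any library:
<code>
cache: ARRAYED_LIST [STRING]
		-- Per-object list, shared by all threads using this object.
	once ("OBJECT")
		create Result.make (10)
	end

cache_lock: MUTEX
		-- Lock protecting `cache' (also once per object).
	once ("OBJECT")
		create Result.make
	end

record (a_item: STRING)
		-- Thread-safe insertion into `cache'.
	do
		cache_lock.lock
		cache.extend (a_item)
		cache_lock.unlock
	end
</code>
Every access to the result of <code>cache</code>, from any thread, must go through the same lock, exactly as you would protect an ordinary class attribute.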
{{SeeAlso|The Eiffel Tutorial section on [[ET: Once routines and shared objects]] .}}

View File

@@ -0,0 +1,199 @@
[[Property:title|Thread library overview]]
[[Property:weight|1]]
[[Property:uuid|2bdeeb91-1917-f443-ebfc-4f877107edd7]]
This is only a quick overview of the EiffelThread library. For the complete interface, see the class reference for this library.
==Creating and launching threads: the class THREAD (deferred)==
Your thread is represented by a class that inherits from the deferred class <eiffel>THREAD</eiffel>: <br/>
<code>
class
	MY_THREAD

inherit
	THREAD
	...

feature

	execute
			-- Define the deferred feature from THREAD.
		do
			...
		end
	...

end -- class MY_THREAD
</code>
Creating a thread is like creating an Eiffel object:
<code>
my_thread: MY_THREAD
-- MY_THREAD inherits from THREAD and defines
-- the deferred procedure `execute'
...
create my_thread
</code>
{{note|You have created a thread object but have not started the thread itself yet. <br/>
To run the thread, use the feature <eiffel>launch</eiffel> from <eiffel>THREAD</eiffel>. }}
<code>
my_thread.launch</code>
On the Eiffel side, the procedure <eiffel>execute</eiffel> will be launched. This procedure is deferred in class <eiffel>THREAD</eiffel>; you have to effect it in <eiffel>MY_THREAD</eiffel>.
On the C side, a C thread will be created and launched.
{{caution|Call <eiffel>join_all</eiffel> at the end of the execution of the parent thread if you do not want it to die before its children; otherwise they may terminate prematurely. }}
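For example, a parent routine might launch several threads and then wait for all of them. This is a sketch: the loop bound is arbitrary, <code>MY_THREAD</code> is the class defined above, and the enclosing class is assumed to have access to <eiffel>join_all</eiffel> (inherited from <eiffel>THREAD</eiffel> or <eiffel>THREAD_CONTROL</eiffel>):
<code>
launch_workers
		-- Launch five threads, then wait for all of them.
	local
		t: MY_THREAD
		i: INTEGER
	do
		from i := 1 until i > 5 loop
			create t
			t.launch
			i := i + 1
		end
		join_all
			-- Wait for all child threads before returning.
	end
</code>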
==The class MUTEX==
The implementation of class <eiffel>MUTEX</eiffel> is mapped onto the C standard thread library. An instance of class <eiffel>MUTEX</eiffel> can be shared between different threads.
<eiffel>my_mutex.pointer</eiffel> is the pointer to the nested C mutex of <eiffel>my_mutex</eiffel>.
* Declaration of the mutex:
<code>
my_mutex: MUTEX</code>
* Creation of mutex:
<code>
create my_mutex.make</code>
* Locking the mutex:
<code>
my_mutex.lock</code>
* Unlocking the mutex:
<code>
my_mutex.unlock</code>
* <eiffel>try_lock</eiffel>: if the mutex is not locked yet, locks it and returns True; otherwise returns False.
<code>
my_mutex.try_lock</code>
* Is my mutex initialized?
<code>
my_mutex.is_set</code>
{{note|On Windows, <eiffel>MUTEX</eiffel> objects are recursive, while on Unix they are not. A recursive mutex can be locked twice by the same thread. }}
{{caution|Be sure that a mutex is unlocked when it is disposed. }}
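Putting these features together, here is a sketch of a thread that updates a shared counter under a mutex. The class and feature names are illustrative; the shared cell and mutex are assumed to be created by the parent and passed in at creation:
<code>
class
	COUNTER_THREAD

inherit
	THREAD

create
	make

feature

	make (a_mutex: MUTEX; a_count: CELL [INTEGER])
			-- Share `a_mutex' and `a_count' with the creator.
		do
			mutex := a_mutex
			count := a_count
		end

	mutex: MUTEX
	count: CELL [INTEGER]

	execute
			-- Increment the shared counter 1000 times,
			-- locking the mutex around each increment.
		local
			i: INTEGER
		do
			from i := 1 until i > 1000 loop
				mutex.lock
				count.put (count.item + 1)
				mutex.unlock
				i := i + 1
			end
		end

end
</code>
Without the lock/unlock pair around the increment, two such threads could interleave their read-modify-write sequences and lose updates.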
==The class SEMAPHORE==
Like those of <eiffel>MUTEX</eiffel>, the features of this class are mapped onto the C thread library. An instance of class <eiffel>SEMAPHORE</eiffel> can be shared between threads.
* Declaration of the semaphore:
<code>
my_sem: SEMAPHORE</code>
* Creation of the semaphore: initialize it with <code>nb_tokens</code> tokens (requires <code>nb_tokens >= 0</code>):
<code>
create my_sem.make (nb_tokens)</code>
* Wait for a token:
<code>
my_sem.wait</code>
* Give back a token:
<code>
my_sem.post</code>
* <eiffel>try_wait</eiffel>: similar to <eiffel>try_lock</eiffel> from <eiffel>MUTEX</eiffel>; if a token is available, takes it and returns <code>True</code>, otherwise returns <code>False</code>.
<code>
my_sem.try_wait</code>
{{caution|Be sure that a semaphore is not waiting for a token when it is disposed. }}
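For instance, a semaphore created with three tokens limits access to a pool of three resources. This is a sketch: <code>pool_guard</code> and <code>use_resource</code> are illustrative names, not library features:
<code>
pool_guard: SEMAPHORE
		-- Allows at most three threads into the guarded section.

	...

create pool_guard.make (3)

	...

pool_guard.wait
	-- Blocks while all three tokens are taken.
use_resource
pool_guard.post
	-- Give the token back for other threads.
</code>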
==The class CONDITION_VARIABLE==
This class makes condition variables available in Eiffel. An instance of class <eiffel>CONDITION_VARIABLE</eiffel> can be shared between threads.
* Declaration of the condition variable:
<code>
my_cond: CONDITION_VARIABLE</code>
* Creation:
<code>
create my_cond.make</code>
* Wait for a signal (sent by <eiffel>signal</eiffel>). You need to use a mutex:
<code>
my_mutex: MUTEX
...
create my_mutex.make
</code>
<code>my_mutex</code> must be locked by the calling thread before <eiffel>wait</eiffel> is called. <eiffel>wait</eiffel> atomically unlocks <code>my_mutex</code> and waits for the condition variable <code>my_cond</code> to receive a signal. As soon as <code>my_cond</code> receives a signal, <eiffel>wait</eiffel> locks <code>my_mutex</code> again before returning.
<code>
my_mutex.lock
-- You must lock `my_mutex' before calling wait.
my_cond.wait (my_mutex)
-- Here the critical code to execute when `my_cond' received a signal.
my_mutex.unlock
-- Unlock the mutex at the end of the critical section.
</code>
* Send a signal to one thread blocked on the condition variable `my_cond'.
<code>
my_cond.signal</code>
* Send a signal to all the threads blocked on the condition variable `my_cond'.
<code>
my_cond.broadcast</code>
{{caution|Be sure that a condition variable is unblocked when it is disposed. }}
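The signaling side mirrors the waiting code shown above: update the shared state under the same mutex, then signal. Here <code>data_ready</code> is an illustrative shared attribute, not part of the library:
<code>
my_mutex.lock
data_ready := True
	-- Change the state the waiting thread is interested in.
my_mutex.unlock
my_cond.signal
	-- Wake up one thread blocked in `my_cond.wait (my_mutex)'.
</code>
In the waiting thread, it is good practice to re-test the condition after <eiffel>wait</eiffel> returns, since condition variables in the underlying C libraries may wake up spuriously.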
==Miscellaneous classes==
The class <eiffel>THREAD_ATTRIBUTES</eiffel> defines the attributes of an Eiffel thread, such as its scheduling policy and priority.
==Controlling execution: THREAD_CONTROL==
* <eiffel>yield</eiffel>: the calling thread yields its execution in favor of another thread of the same priority.
* <eiffel>join_all</eiffel>: the calling thread waits for all its child threads to finish.
* A parent thread can wait for the termination of a single child thread through the feature <eiffel>join</eiffel> of class <eiffel>THREAD_CONTROL</eiffel> (inherited by <eiffel>THREAD</eiffel>):
<code>
thr: MY_THREAD
...
thr.launch
...
thr.join
</code>

View File

@@ -0,0 +1,11 @@
[[Property:link_title|EiffelThreads]]
[[Property:title|EiffelThreads]]
[[Property:weight|0]]
[[Property:uuid|AAF0CEF9-7268-492F-9119-872164995898]]
==EiffelThreads Library==
Type: Library<br/>
Platform: Any
The EiffelThreads library includes the main components needed to build multi-threaded systems.

View File

@@ -0,0 +1,26 @@
[[Property:modification_date|Sat, 19 Jan 2019 20:11:49 GMT]]
[[Property:publication_date|Tue, 20 Nov 2018 12:44:24 GMT]]
[[Property:link_title|Concurrency]]
[[Property:title|Concurrency]]
[[Property:weight|-10]]
[[Property:uuid|E76EF4EE-0D90-4AEE-8521-0293A0086AA2]]
== Building concurrent applications in Eiffel ==
'''Concurrency''' is a system's ability to perform several tasks at a time, as with an email client that can download new messages while you are scrolling through previously downloaded ones.
Many applications need concurrency, either for convenience or out of sheer necessity. Operating systems provide a concurrency mechanism in the form of "threading": a program can start several concurrent lines of control, or threads, which run in parallel.
In most programming languages, the way to obtain threaded applications is to rely on a threading library. Eiffel offers this possibility through the [[Eiffelthreads|EiffelThreads library]].
Thread libraries are at a lower level of abstraction than modern programming languages, requiring you to manage the interaction of threads manually through such techniques as mutual exclusion semaphores. Eiffel offers a higher-level mechanism: [[SCOOP]] (Simple Concurrent Object-Oriented Programming), which greatly simplifies the writing of concurrent applications and avoids many of the typical pitfalls of concurrency such as "data races". SCOOP is the recommended approach to concurrent Eiffel programming.
For details see:
* [[SCOOP_tutorial|SCOOP tutorial (hands-on)]]: build a small concurrent email system, step by step.
* The [[SCOOP|SCOOP documentation]] for the recommended approach to concurrent programming in Eiffel.
* The [[EiffelThreads|EiffelThreads documentation]] if you need to exert fine control on the execution and synchronization of threads.