Updated SCOOP documentation.

(Author: Roman Schmocker)

git-svn-id: https://svn.eiffel.com/eiffel-org/trunk@1428 abb3cda0-5349-4a8f-a601-0c33ac3a8c38
This commit is contained in:
eiffel-org
2015-08-05 13:25:04 +00:00
parent 8901c4aca4
commit 36d1c639a8
13 changed files with 807 additions and 316 deletions

(New binary image file added; 25 KiB; not shown.)

@@ -0,0 +1,3 @@
title=SCOOP Regions
text=The memory of a SCOOP program, split into regions.
path=content/scoop-regions


@@ -3,275 +3,29 @@
[[Property:uuid|151838da-80f0-5381-d557-5c5b7727647d]]
==Overview==
SCOOP is ''Simple Concurrent Object-Oriented Programming''. SCOOP allows developers to create object-oriented software systems which will take advantage of multiple, concurrently active execution engines while providing strong guarantees that allow programmers to reason much as they would about sequential programs. Read further to get a better idea of what all this means, but for now, the primary message should be: SCOOP is concurrent software development made easy. The basic SCOOP ideas were first published as early as 1993. Since that time, considerable research and development has refined SCOOP into the model that is implemented in EiffelStudio today.
{{Note|As you begin to use SCOOP, you should make sure that you correctly set up your project. Information on how to do this can be found at [[Getting Started with SCOOP]]. Also consider compiling and working with some of the many [[SCOOP examples]].}}
==Concurrency==
Concurrency in computation is a situation in which we can expect that a running computer system will have multiple computations executing simultaneously in a controlled fashion to achieve the goals of the system. The simultaneous executions can be handled by widely diverse computational engines: separate networked computer systems, separate processors in the same CPU, separate processor cores on a single chip, separate processor threads within a process, separate processes on the same CPU, etc.
Concurrent systems would not cause much trouble if the portions of the systems on different processors, processes, or threads were completely independent, that is, they shared no resources. But that would be a rare case indeed. In a concurrent system, simultaneously executing software elements can and do share resources and communicate with each other. This is where the problems can arise; problems in the form of various synchronization issues such as [http://en.wikipedia.org/wiki/Race_condition#Computing race conditions], [http://en.wikipedia.org/wiki/Atomicity_(programming) atomicity] violations, and [http://en.wikipedia.org/wiki/Deadlock deadlocks]. The issues boil down to two essential problems in allowing access to shared resources:
:# '''Provide Safety''': Make certain that nothing bad ever happens, such as two threads accessing the same memory in an undefined order, or an invalid interleaving of operations that causes the program to crash.
:# '''Ensure Progress''': Make certain that every participating thread eventually gets the opportunity to execute. Possible problems in this category are deadlocks, starvation (a thread keeps a lock forever, causing another one to wait), lack of fairness, etc.
Concurrency control is a rich research area in computer science. Consequently, many schemes have been designed to control concurrent computation.
SCOOP is such a model of concurrent computation, and it differs in some areas from other research efforts.
First, it is a goal of SCOOP to abstract the notion of concurrency to a level above the tools and techniques that are currently available. What this means is that if you were writing a system with multiple process threads, you could do that without SCOOP, using the tools that are currently used in multi-threaded programming, like semaphores and mutexes. Or you could write it in SCOOP using only the SCOOP mechanisms. Likewise with SCOOP, a system intended to run on multiple processors or multiple processor cores also could be written using only those same SCOOP mechanisms that you used for the multi-threaded system.
Second, the SCOOP model depends primarily upon Design by Contract with slightly changed contract semantics, and a single new keyword <code>separate</code> added to the Eiffel programming language. As you will see, the semantics of preconditions differ with concurrent execution versus sequential. Also, there are other underlying concepts and rules that need to be understood, but the point is that concurrent Eiffel using SCOOP will look a lot like sequential Eiffel.
Third, SCOOP uses the common act of argument passing to identify the necessity for guaranteeing exclusive access.
We will examine the details of how all this fits together and what it means to you as you begin to build concurrent software in Eiffel using SCOOP.
==Object-oriented computation==
Eiffel's familiar model for object-oriented computation:
<code>
x.f (a)
</code>
continues to be valid in SCOOP. But the way we understand the model differs slightly. In sequential Eiffel we would refer to this as a ''feature call'', with a client calling feature <code>f</code> on a supplier object (the call's target) currently attached to the entity <code>x</code>, and possibly passing arguments represented by <code>a</code>. We might alternatively refer to <code>x.f (a)</code> as a ''feature application''; specifically the application of feature <code>f</code> to the object associated with <code>x</code>. This is fine in sequential Eiffel, but as you will see, in SCOOP we have to make a distinction between ''feature call'' and ''feature application''. The distinction will become clear as we discuss the notions of processors and separate calls.
==Processors==
In the context of SCOOP, ''processor'' is an abstract notion.
{{definition|Processor|An autonomous thread of control capable of the sequential execution of instructions on one or more objects.}}
{{info|About processors:
<br/>1) Processor is an abstract notion. As such it does not imply any specific concurrency implementation.
<br/>2) At any given time, every runtime object is handled by exactly one processor. A single processor can handle any number of objects.}}
In traditional, sequential Eiffel, although we realize that there is some processor which executes our systems, we don't usually give it much thought. When we do, we generally regard it as a hardware entity on which our software can run.
The term ''processor'' (or, interchangeably, ''handler'') is vital to SCOOP and thought of in a slightly different way than in traditional Eiffel, i.e., not just as a hardware processor. In a concurrent system, there may be any number of ''processors''. Here the term is used in a more abstract sense than before. In SCOOP we think of a processor as any autonomous thread of control capable of applying features to objects. At the level of the SCOOP model, processors are not restricted to a particular type of hardware or software. So, if you were writing software for a hardware implementation with multiple processors, those real processors might correspond to the ''processors'' of SCOOP. But if you were writing a system using multiple process threads, then those threads might correspond to SCOOP ''processors''.
{{note|In the initial SCOOP implementation in EiffelStudio version 6.8, multiple process threads are the only supported concurrency mechanism. So, each SCOOP processor is implemented as a separate thread of execution in a single process. }}
Multiple processors in SCOOP come into play when feature calls on a particular object may actually be applied by a different processor than the one on which the feature call was issued. Of course, this is the important distinction between feature call and feature application that was mentioned above. In SCOOP, the processor which does the feature application may be different from the one that does the feature call. So you can think of feature call as being the ''logging'' or ''queuing'' of a request to have a feature applied.
==Separate types and separate calls==
SCOOP introduces the notion of ''separateness''.
{{definition|Separate type|A type which has been declared including the keyword <code>separate</code>.}}
{{definition|Separate call|A feature call whose target is of a separate type.}}
The determining factor for the use of multiple processors is the use of ''separate types'' and ''separate calls''. In a running system, every object is handled by a processor, but in the case in which there are no separate types or separate calls in a system, then only one processor will be used during execution, i.e., all calls will be non-separate and, consequently, there is no SCOOP-based concurrent processing present.
If an entity uses the keyword <code>separate</code> in its declaration, such as:
<code>
my_x: separate X
</code>
it indicates that the application of features to an object attached to <code>my_x</code> may occur on a different processor than the one on which the feature call was made. Such calls, <code>my_x.f</code>, would be considered separate calls. Additionally, the type of <code>my_x</code> is the separate type <code>separate X</code>.
A feature call on <code>my_x</code>, say:
<code>
my_x.f
</code>
would generally be considered a separate call, simply because it is a feature call on an object of a separate type, and therefore could be applied on a different processor. You will see now that separate calls are valid only in certain contexts.
==Access to shared resources==
As mentioned above, the main issue with concurrent systems is the proper control of access to resources that can be shared among simultaneously executing processors.
Traditional solutions to the problem involve the use of “critical sections” of code. These are sections of code in which the shared resource is accessed. Only one processor is allowed to be executing the critical section at a time. So if one process wants to execute the critical section and another is already doing so, then the first must wait. Process synchronization schemes ensure this “mutual exclusion” of access to the critical section.
Rather than using critical sections, SCOOP relies on the mechanism of argument passing to assure controlled access. As a result, there is a restriction placed on separate calls.
{{Rule|name=Separate argument|text=A separate call <code>a_x.f (a)</code> is valid only if <code>a_x</code> is an argument of the enclosing routine.}}
So, according to this rule, for a separate call to be valid, the target of the call must be a formal argument of the routine in which the call occurs. The code below contains both an invalid separate call and a valid one.
<code>
my_separate_attribute: separate SOME_TYPE

    ...

calling_routine
        -- One routine
    do
        my_separate_attribute.some_feature          -- Invalid call: Feature call on separate attribute
        enclosing_routine (my_separate_attribute)   -- Separate attribute passed as argument
    end

enclosing_routine (a_arg: separate SOME_TYPE)
        -- Another routine
    do
        a_arg.some_feature    -- Valid call: Feature call on separate argument
    end
</code>
In the code above, <code>my_separate_attribute</code> is a class attribute declared as a separate type. In the first line in <code>calling_routine</code> a direct feature call is made to apply <code>some_feature</code> to <code>my_separate_attribute</code>. This is an invalid separate call. The second line calls feature <code>enclosing_routine</code> and passes <code>my_separate_attribute</code> as an argument. <code>enclosing_routine</code> takes an argument of type <code>separate SOME_TYPE</code>. Within <code>enclosing_routine</code> it is valid to call <code>some_feature</code> on <code>a_arg</code>.
{{SeeAlso|The <code>launch_producer</code> feature of the [[Producer-consumer|producer-consumer]] example, a feature which exists for the purpose of compliance with the separate argument rule.}}
In <code>calling_routine</code> above, the call to <code>enclosing_routine</code> has a separate argument:
<code>
enclosing_routine (my_separate_attribute) -- Separate attribute passed as argument
</code>
Because the argument <code>my_separate_attribute</code> is of a separate type, it may be handled by a processor different from the one on which the call to <code>enclosing_routine</code> occurs. As a result, the execution of <code>enclosing_routine</code> will be delayed until the processor which handles <code>my_separate_attribute</code> is available for exclusive access. This type of delay is described by the ''Wait rule''.
{{Rule|name=Wait|text=A routine call with separate arguments will execute when all corresponding processors are available and hold them exclusively for the duration of the routine.}}
{{SeeAlso|<br/>
1) The <code>{PHILOSOPHER}.eat</code> feature of the [[Dining philosophers|dining philosophers]] example. This feature has two separate arguments, and will wait to execute until the processors associated with both are available.<br/>
2) [[SCOOP Implementation#The Wait Rule|How the ''Wait rule'' is implemented]] in EiffelStudio. }}
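To make the rule concrete, here is a minimal sketch in the spirit of the dining philosophers example (the class <code>FORK</code> and the features <code>pick_up</code> and <code>put_down</code> are hypothetical names, not necessarily those of the distributed example):
<code>
eat (a_left, a_right: separate FORK)
        -- Starts executing only once the processors handling `a_left' and
        -- `a_right' are both available, and holds exclusive access to both
        -- of them until the routine returns.
    do
        a_left.pick_up
        a_right.pick_up
            -- ... eat ...
        a_left.put_down
        a_right.put_down
    end
</code>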
===Controlled expressions===
Valid targets for separate calls, like <code>a_arg</code> in <code>enclosing_routine</code> above are said to be ''controlled''.
{{definition|Controlled/uncontrolled expression|An expression is '''controlled''' if it is attached and either:<br/>1) It is of a non-separate type<br/>2) It is of a separate type and it is handled by the same processor as one of the separate arguments to the enclosing routine.<br/><br/>Otherwise it is '''uncontrolled'''.}}
What the definition of ''controlled expression'' means is that such an expression is controlled with respect to the processor handling the context in which the expression is used (the current context) ... and that means that all objects necessary to the expression are under control of (available for exclusive access by) the current processor and cannot be modified by other processors.
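As a small sketch of this definition (the class <code>PERSON</code> and its feature <code>age</code> are hypothetical), the same separate attribute is uncontrolled when used directly but controlled once passed as a formal argument:
<code>
person: separate PERSON

print_age_directly
    do
        -- print (person.age)    -- Invalid: `person' is uncontrolled in this context.
        print_age (person)       -- Valid: pass the attribute as an argument instead.
    end

print_age (a_person: separate PERSON)
        -- `a_person' is controlled here: it is a separate formal argument, so the
        -- current processor holds its handler exclusively for the duration of the routine.
    do
        print (a_person.age)
    end
</code>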
==Synchronous and asynchronous feature calls==
As stated above, when we think of the execution of sequential Eiffel, we tend to equate feature call and feature application. That is, it is expected that for a sequence of two feature calls:
<code>
x.f
y.g
</code>
that the feature application of <code>x.f</code> will complete before <code>y.g</code> begins.
In concurrent Eiffel with SCOOP things are different. This is because a particular feature call, <code>x.f</code>, may occur on one processor, and the consequent feature application (of feature <code>f</code> to <code>x</code>) may occur on a different processor.
{{definition|Synchronous feature call|A feature call in which the execution of the calling client does not proceed until the feature application has completed. }}
{{definition|Asynchronous feature call|A feature call which causes the “logging” of a request by a client for the associated feature application to be executed by the supplier's processor. }}
After an asynchronous feature call, the execution of the client proceeds immediately, possibly in parallel with the application of the feature on some other processor. We'll revisit this point after a look at what it takes for a call to be synchronous or asynchronous.
===What makes a call synchronous or asynchronous?===
First, every feature call is either a synchronous feature call or an asynchronous feature call. For a particular call, the following rules determine which it is:
A feature call is '''synchronous''' in the following cases:
:S1 It is a non-separate call.
:S2 It is a separate call:
::S2.1 To a query, or
::S2.2 To a command which has at least one actual argument which is of a reference type and either
:::S2.2.1 A separate argument of the enclosing routine, or
:::S2.2.2 <code>Current</code>.
A feature call is '''asynchronous''' in the following case:
:A1 It is a separate call to a command with no arguments, or arguments not meeting the criteria of S2.2 above.
Let's look a little closer at those cases determining synchronous calls.
Case S1 is the case of typical sequential Eiffel, where all calls are non-separate, and therefore synchronous. Of course, even in concurrent Eiffel with SCOOP, plenty of non-separate calls will occur, and these will be synchronous.
Case S2.1 says that if a separate call is a query it must be synchronous. This is because even though the feature application will probably occur on a different processor, the instructions following the query will likely depend upon the result of the query, so they must wait until the feature application completes. This situation is known as ''wait by necessity''.
Case S2.2 describes a situation in which a call provides at least one actual argument that is <code>Current</code> or is a separate formal argument of the call's enclosing routine. In this case the client is calling a procedure and passing arguments which are ''controlled'' in the context of the calling routine. That is, the actual arguments are objects upon which the client processor has exclusive access in the enclosing routine. In order for the supplier processor to be able to apply the feature (presumably accessing the argument objects in the process), the client must pass its exclusive access to these objects on to the supplier. This is done through a mechanism called ''access passing''. Because the client has passed its exclusive access to the supplier processor, it cannot continue execution until the called feature has been applied by the supplier processor, and the supplier processor has restored the exclusive access back to the client. Therefore, this type of call must be synchronous.
Now consider the only case, Case A1, determining asynchronous calls.
Separate calls to commands are asynchronous (except as in case S2.2). This means that when a client executes an asynchronous feature call, it “logs” the need for its associated feature application. But then rather than waiting for the feature application to complete, the client routine continues execution of instructions beyond the asynchronous call.
It is in this case that concurrent computation is achieved. The processor of the client object is free to continue processing while the processor handling the target of the asynchronous feature call applies that feature.
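The following sketch puts cases S1, S2.1, and A1 side by side (the class <code>PRINTER</code> and its features <code>print_line</code> and <code>line_count</code> are hypothetical):
<code>
run (a_printer: separate PRINTER)
    local
        l_count: INTEGER
    do
        a_printer.print_line ("hello")     -- A1: asynchronous command; execution continues immediately,
                                           -- possibly in parallel with the printing.
        l_count := a_printer.line_count    -- S2.1: separate query; the client waits for the result
                                           -- (wait by necessity).
        io.put_integer (l_count)           -- S1: non-separate call; synchronous, as in sequential Eiffel.
    end
</code>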
==Design by Contract and SCOOP==
The backbone of the Eiffel Method is design by contract. Preconditions, postconditions, and class invariants are used in Eiffel for extending software interfaces into software specification. This is essentially the same in concurrent Eiffel with SCOOP as it is in traditional, sequential Eiffel. However, because of the concurrent nature of processing under SCOOP, the runtime semantics of the elements of Design by Contract are different for concurrent systems.
===Preconditions===
The role of the precondition is somewhat different in SCOOP than in sequential Eiffel. In non-concurrent Eiffel we view the precondition of a routine as defining a set of obligations on potential callers of the routine. That is, the set of conditions that must be true before correct execution of the routine can be expected. So, we could look at the precondition clauses in sequential Eiffel as '''correctness conditions'''. A typical example might be a square root routine that returns the square root of a passed argument value. A precondition clause, i.e., a correctness condition, for this routine will be that the argument must be non-negative. It is the responsibility of the caller to ensure that this property of the argument holds at the time of the feature call.
In concurrent Eiffel, the same correctness conditions are still valid, but there are cases in which we must view the client's role a little differently. In the case of a precondition clause that depends upon an [[Concurrent Eiffel with SCOOP#Controlled expressions|uncontrolled]] object, even if the client tests the condition ahead of the call, there is no assurance that some other concurrent processor has not invalidated the precondition clause between the time that the check was made and the time that the feature application takes place. So, the client cannot be held responsible for establishing that this clause holds. This type of precondition clause is called an '''uncontrolled precondition clause'''.
{{definition|Controlled/uncontrolled assertion''' (precondition or postcondition) '''clause| A precondition or postcondition clause for a feature <code>f</code> is controlled if, after substitution of actual arguments for formal arguments, the clause involves only calls on entities which are controlled in the context of the routine which is the caller of <code>f</code>. <br/><br/>Otherwise it is an '''uncontrolled assertion''' (precondition or postcondition) '''clause'''. }}
So, the determination of whether a particular precondition or postcondition clause is controlled or uncontrolled depends upon the context of the calling routine. That means that a particular clause on feature <code>f</code> might be considered controlled when <code>f</code> is called by one caller, but uncontrolled when called by a different caller.
Uncontrolled precondition clauses demand an adaptation of precondition semantics:
{{Info|A violation of a '''controlled precondition clause''' will cause an exception in the caller as soon as the violation is detected. A violation of an '''uncontrolled precondition clause''' does not result in an exception in the caller. Rather, the feature application waits to execute until such time as the precondition clause holds. }}
So, the client's responsibility is limited to those precondition clauses that are controlled. Uncontrolled precondition clauses become '''wait conditions'''.
{{SeeAlso|The <code>{PRODUCER}.store</code> feature in the [[Producer-consumer|producer-consumer]] example. When called by <code>{PRODUCER}.produce</code> it becomes a '''wait condition'''.}}
===Postconditions===
As with preconditions the effect of concurrent execution can make a difference in how postconditions are viewed.
If a routine has executed correctly, then the postcondition of the routine will hold at the time that it terminates ... this is true whether or not concurrency is involved. However, when a postcondition involves separate calls or entities, clients must be cautious about how they depend upon the state guaranteed by postconditions.
===Class invariants===
The '''separate argument rule''' above tells us that separate calls are valid only on targets which are formal arguments of their enclosing routines. Because class invariants are not routines and therefore have no arguments, separate calls are not allowed in class invariants.
{{Info|Technically, it should be possible in a class invariant to code an inline agent that gets passed arguments of separate types, then execute separate calls within the inline agent. But generally, it can be assumed that class invariants contain no separate calls. }}
The semantics of class invariants will be the same as in sequential Eiffel, precisely because invariants must include only non-separate calls. To put it in the terms of SCOOP, the class invariant ensuring the validity of any particular object will be evaluated entirely by the processor handling that object.


@@ -0,0 +1,136 @@
[[Property:title|Asynchronous Calls]]
[[Property:weight|6]]
[[Property:uuid|d3d3873c-5c84-7566-547e-1ede38544081]]
==Overview==
As we've seen in [[Separate Calls]], feature calls to a non-separate target are always synchronous.
Furthermore, queries are always synchronous as well, because the caller has to wait for the result.
{| border="1"
|-
! Target
! Query
! Command
|-
| non-separate
| synchronous
| synchronous
|-
| separate
| synchronous
| potentially asynchronous
|}
Asynchronous calls can therefore only happen on commands with a separate target.
Indeed, such calls are by default executed asynchronously, but there are some important exceptions to this rule.
A command to a separate target is executed synchronously if any of the following applies:
* The client (caller) and supplier (target) region are the same.
* The target region is passive.
* The callee needs a lock currently held by the caller (lock passing).
* The caller holds the locks of the callee (separate callbacks).
== Triggers for Synchronization ==
=== Same Regions ===
The first case arises when a reference is declared as separate but the attached object happens to be non-separate. This case follows directly from the type system: a non-separate type <code>A</code> always conforms to its separate counterpart <code>separate A</code>. At run-time such cases can be detected with an object test:
<code>
sep_object: separate A
    -- ...

if attached {A} sep_object as non_sep_object then
    -- ...
end
</code>
=== Passive Regions ===
In the SCOOP model, a passive region does not have a processor attached to it.
This means that clients of a passive region need to apply the features logged against it themselves.
The logical consequence is that all calls to a passive region, including commands, are executed synchronously.
=== Lock Passing ===
Lock passing is another source of synchronization. It is one of the trickiest issues to detect, and to fully understand it we must first introduce a few more definitions.
In [[Exclusive Access]] we have learned that an object is ''controlled'' if it appears as a formal argument of the enclosing routine. SCOOP however always grants exclusive access over a whole region. We therefore introduce the new term ''Lock'':
{{definition|Lock|Exclusive access to a SCOOP region and all objects therein.}}
Note the difference between ''controlled'' and ''locked'':
* ''Controlled'' applies to a single object, whereas ''locked'' applies to a region.
* The ''controlled'' property can be determined statically at compile-time, whereas ''locked'' is determined at runtime.
* The set of ''controlled'' objects of a processor is always a subset of the set of objects in ''locked'' regions.
{{note|In terms of implementation, a ''lock'' corresponds to an open call queue to a region.}}
Now consider the two small classes <code>HASH_STORAGE</code> and <code>EXAMPLE</code>:
<code>
class HASH_STORAGE feature

    hash_code: INTEGER

    set_hash_code (a_string: separate STRING)
        do
            hash_code := a_string.hash_code
        end

end

class EXAMPLE feature

    run (a_hash_storage: separate HASH_STORAGE; a_string: separate STRING)
        do
            a_hash_storage.set_hash_code (a_string)
            io.put_integer (a_hash_storage.hash_code)
        end

end
</code>
You might notice a problem here:
In the feature <code>{EXAMPLE}.run</code>, exclusive access to <code>a_hash_storage</code> and <code>a_string</code> is guaranteed by the SCOOP semantics.
Or in other words, the corresponding regions are ''locked''. The feature <code>{HASH_STORAGE}.set_hash_code</code> however needs access to <code>a_string</code> as well.
In the SCOOP model, as seen so far, this would result in a deadlock. The handler of the <code>HASH_STORAGE</code> object waits for exclusive access on the string object, and the <code>EXAMPLE</code> object waits for the query <code>{HASH_STORAGE}.hash_code</code> to return.
To resolve this problem, SCOOP implements a technique called ''Lock Passing''.
Locks on regions can be passed to the handler of the target of a separate call.
Lock passing happens whenever the client processor (the handler of the <code>EXAMPLE</code> object) has locked a region that holds an object which is passed as an actual argument to a separate call. Note that this also includes non-separate reference objects, because a processor always holds a lock over its own region.
When a client has passed its locks to the supplier processor, it cannot continue execution until the called feature has been applied by the supplier processor, and the supplier processor has given back the locks to the client. Therefore, this type of call must be synchronous.
{{note|During lock passing, a processor gives away all the locks that it currently holds, including the lock on itself.}}
{{note| Lock passing happens for every synchronous call, in particular also for queries and for calls to passive regions.}}
The advantage of lock passing is that it enables some very common programming patterns without triggering a deadlock. The disadvantage is that it's hard to tell '''when''' it happens. There are a few cases where lock passing is guaranteed to happen, namely when the actual argument passed to a separate call is
* a formal argument of the enclosing routine,
* of a non-separate reference type or
* <code>Current</code>.
There are, however, some cases where it's not immediately obvious that lock passing happens.
For example, a region might be locked because of a controlled argument somewhere further up in the call stack (i.e. not the enclosing routine, but the caller of that routine), or because an object passed as an argument happens to be in the same region as one of the controlled objects.
There is a workaround to disable lock passing for a specific call:
<code>
async_call (a_procedure: separate PROCEDURE [ANY, TUPLE])
    do
        a_procedure.call (Void)
    end

example (a_object: separate ANY)
    do
        async_call (agent a_object.some_feature (Current))
    end
</code>
The feature <code>async_call</code> can be defined somewhere in the project and reused. The downside is that an agent needs to be created, but no lock passing happens, because all arguments to the agent are closed and only <code>Void</code> is passed to the separate call, which cannot trigger lock passing.
However, this mechanism should be used with some care, because it's easy to run into one of the above-mentioned deadlocks.
=== Separate Callbacks ===
The last occurrence of synchronous calls is closely related to lock passing. If a processor '''A''' has passed a non-separate reference argument to another processor '''B''', and thus has passed its locks away, it cannot proceed with its execution. Sometimes, however, processor '''B''' has to log some calls back to '''A'''; such a call is known as a ''separate callback''.
{{definition|Separate Callback | A separate call where the caller holds the locks of the callee. }}
During a separate callback processor '''B''' has to give back the locks it has previously received from '''A'''.
This in turn means '''B''' has to wait until '''A''' has finished its execution of the separate callback and returned the locks, which effectively makes the call synchronous.
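A minimal sketch of such a callback, using the hypothetical classes <code>WORKER</code> and <code>MASTER</code>:
<code>
class WORKER feature

    run (a_master: separate MASTER)
        do
            -- Passing the non-separate reference `Current' passes this
            -- processor's locks to the handler of `a_master', so the call
            -- is synchronous and this processor waits.
            a_master.process (Current)
        end

    notify
        do
            print ("called back%N")
        end

end

class MASTER feature

    process (a_worker: separate WORKER)
        do
            -- Separate callback: this processor (the caller) already holds the
            -- locks of `a_worker's region, handed over by the waiting worker,
            -- so the call is synchronous and the locks are temporarily given back.
            a_worker.notify
        end

end
</code>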


@@ -0,0 +1,57 @@
[[Property:title|Design by Contract in SCOOP]]
[[Property:link_title|Design by Contract]]
[[Property:weight|5]]
[[Property:uuid|f8df5904-d1ee-31d7-f618-fb8bf1ddc876]]
The backbone of the Eiffel Method is design by contract. Preconditions, postconditions, and class invariants are used in Eiffel for extending software interfaces into software specification. This is essentially the same in concurrent Eiffel with SCOOP as it is in traditional, sequential Eiffel. However, because of the concurrent nature of processing under SCOOP, the runtime semantics of the elements of Design by Contract are different for concurrent systems.
==Preconditions==
The role of the precondition is somewhat different in SCOOP than in sequential Eiffel. In non-concurrent Eiffel we view the precondition of a routine as defining a set of obligations on potential callers of the routine. That is, the set of conditions that must be true before correct execution of the routine can be expected. So, we could look at the precondition clauses in sequential Eiffel as '''correctness conditions'''. A typical example might be a square root routine that returns the square root of a passed argument value. A precondition clause, i.e., a correctness condition, for this routine will be that the argument must be non-negative. It is the responsibility of the caller to ensure that this property of the argument holds at the time of the feature call.
In concurrent Eiffel, the same correctness conditions are still valid, but there are cases in which we must view the client's role a little differently.
In the case of a precondition clause that depends on a separate object, even if the client tests the condition ahead of the call, there is no assurance that some other concurrent processor has not invalidated the precondition clause between the time that the check was made and the time that the feature application takes place.
In SCOOP preconditions can therefore additionally take the role of a '''wait condition'''.
Wait conditions are useful for cases where the caller can't guarantee that a property on an object is true at the time of the call, but it knows that it will eventually become true.
If a wait condition fails, the current processor will stall its execution, release the locks on its arguments, and wait until the precondition is fulfilled.
A typical example is a <code>CONSUMER</code> object trying to dequeue an item from a shared <code>BUFFER</code>.
In the following example, the precondition in <code>{CONSUMER}.consume</code> is treated as a wait condition:
<code>
class CONSUMER feature

    consume (a_buffer: separate BUFFER): INTEGER
        require
            not_empty: not a_buffer.is_empty
        do
            Result := a_buffer.item
            a_buffer.remove
        end

end
</code>
A precondition clause is only treated as a wait condition when there's a separate call.
However, the opposite is not true - not all precondition clauses with a separate call are treated as wait conditions.
The rule is a bit tricky to understand, but as a general rule of thumb: a precondition violation is raised when the SCOOP runtime detects that the clause has no chance of becoming true later, because the relevant objects remain locked by the caller even while the routine waits.
{{Info| Alright, here are the exact rules: The decision depends on the context of the '''caller''' of the routine.
If one of the separate objects used as a target in a precondition clause is ''locked'' (see definition in [[Asynchronous Calls]]) in the context of the caller, the precondition is treated as a correctness condition. Otherwise, it is a wait condition.}}
{{SeeAlso|The <code>{PRODUCER}.store</code> feature in the [[Producer-consumer|producer-consumer]] example. When called by <code>{PRODUCER}.produce</code> it becomes a '''wait condition'''.}}
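To make the distinction concrete, here is a sketch of two hypothetical callers of <code>{CONSUMER}.consume</code> from the example above (assumed to be additional features of <code>CONSUMER</code>, together with a hypothetical attribute <code>buffer</code>):
<code>
buffer: separate BUFFER

consume_when_ready: INTEGER
        -- `buffer' is uncontrolled here, so the precondition of `consume'
        -- acts as a wait condition: the call waits until the buffer is non-empty.
    do
        Result := consume (buffer)
    end

consume_or_fail (a_buffer: separate BUFFER): INTEGER
        -- `a_buffer' is locked in this context, so the precondition of `consume'
        -- acts as a correctness condition: an empty buffer raises an exception.
    do
        Result := consume (a_buffer)
    end
</code>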
==Postconditions==
As with preconditions the effect of concurrent execution can make a difference in how postconditions are viewed.
If a routine has executed correctly, then the postcondition of the routine will hold at the time that it terminates. This is true whether or not concurrency is involved. However, when a postcondition involves separate calls, clients must be cautious about how they depend upon the state guaranteed by postconditions.
==Class invariants==
The '''separate argument''' rule in [[Separate Calls]] tells us that a separate call is valid only on a target which is a formal argument of the enclosing routine. Because class invariants are not routines and therefore have no arguments, separate calls are not allowed in class invariants.
{{Info|Technically, it should be possible in a class invariant to code an inline agent that gets passed arguments of separate types, then execute separate calls within the inline agent. But generally, it can be assumed that class invariants contain no separate calls. }}
The semantics of class invariants will be the same as in sequential Eiffel, precisely because invariants must include only non-separate calls. To put it in the terms of SCOOP, the class invariant ensuring the validity of any particular object will be evaluated entirely by the processor handling that object.


@@ -1,8 +1,5 @@
[[Property:title|SCOOP examples]]
[[Property:weight|8]]
[[Property:uuid|75ddd9e0-3baf-655a-748f-ea8765a1d06d]]
The examples for SCOOP that are distributed with EiffelStudio are solutions to classic and not-so-classic concurrency problems.


@@ -0,0 +1,107 @@
[[Property:title|Exceptions in SCOOP]]
[[Property:link_title|Exceptions]]
[[Property:weight|7]]
[[Property:uuid|4f760d0c-ff3b-5f8a-7d43-9be855cef17a]]
== Introduction ==
Exceptions are a rather nasty issue in concurrency.
In a shared-memory system, an exception can leave the system in an inconsistent state, for example because it jumps over an unlock operation.
In message-passing systems, on the other hand, exceptions can introduce unnecessary synchronization just to make sure that no exception happened, or they can create havoc because the recipient of an exception message is no longer in a state where it can handle it.
For SCOOP, the exception mechanism was carefully designed with the following goals in mind:
* Comprehensibility: the mechanism should be easy to understand.
* Compatibility: it should be compatible with exceptions in sequential programs.
* Consistency: an exception should not leave objects in a tainted state.
* Asynchrony: exceptions should not restrict the level of concurrency.
== Exception Propagation ==
Within a single processor, exceptions propagate just like in a sequential program.
When a routine encounters an exception, the <code>rescue</code> clause is entered, and if no <code>retry</code> statement is present, the exception is propagated to the caller.
This ensures backwards compatibility with sequential programs, because when there's only the root processor, the semantics are exactly the same.
Furthermore, this mechanism has proven itself useful for restoring invariants after an exception in order to bring objects to a consistent state.
The interesting case is when an exception propagates between regions, which happens during a separate call.
In that case there are two possibilities:
* The call is synchronous: The exception is propagated to the client region.
* The call is asynchronous: The exception is not propagated, because the client is busy executing something else. Instead, the supplier region is marked as ''dirty''.
This decision was mostly made to ensure comprehensibility.
Propagating an exception to the client in an asynchronous call would be really hard to handle.
The client would have to be ready to handle an exception at any point in time, and there would be a need for an additional language mechanism to protect critical sections.
Because of these reasons SCOOP restricts exception propagation to synchronous calls only.
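The difference shows up directly in code. In the following sketch (the class <code>TASK</code> and its features <code>risky_command</code> and <code>status</code> are hypothetical), a failure in the asynchronous command never reaches the client directly, whereas the synchronous query can propagate one:
<code>
trigger (a_task: separate TASK)
    local
        l_status: INTEGER
    do
        a_task.risky_command        -- Asynchronous: a failure here is not propagated;
                                    -- the task's region is marked as dirty instead.
        l_status := a_task.status   -- Synchronous: an exception during this application,
                                    -- or a dirty mark on the region, raises an exception
                                    -- in this client.
    end
</code>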
== Dirty Regions ==
A region marked as dirty has suffered an exception in an asynchronous call which could not be propagated to its client.
The dirty mark has a big impact on future separate calls.
{{rule|name=Semantics of Dirty Regions|text= <br/>
1) All calls logged to a dirty region, whether synchronous or asynchronous, are ignored. <br/>
2) A synchronous separate feature call to a dirty region immediately triggers an exception in the client. Afterwards, the region is clean again.}}
The reason for these rules is that a series of commands and a subsequent query often depend on each other.
For example, a first call may instruct the target region to open a file, the next call to append a string to it, followed by a query to get the new size of the file.
If the first call already fails, there's no point in executing subsequent calls.
Even worse, it can make recovery from exceptions very hard to do in the client if it has no idea which calls have been successfully executed after the first exception.
The dirty mark will also vanish when a region is unlocked.
{{rule|name=Unlocking Dirty Regions|text=After releasing a lock on a dirty region, the region is clean again.}}
This is probably the most controversial design decision, because '''it allows for exceptions to be lost'''.
During the design of the exception mechanism, two other solutions were considered.
One would have been to add an automatic "safeguard" synchronization whenever an unlock operation happens, during which exceptions could be propagated. The obvious downside is that it severely limits the uses of asynchrony.
Another solution would have been to preserve the exception, and raise it in the client that next logs a call.
The latter solution only partially solves the problem (there might be no next client logging a call at all), and it introduces a new one: a processor can receive an exception completely out of context.
However, the main reason to choose the "forget-upon-unlock" solution over the other two is that it is easy to simulate its behaviour manually (as you'll see in the next section), while it is impossible to obtain "forget-upon-unlock" semantics if one of the other models is used.
{{info|Upon lock passing, the dirtiness of a region is preserved.}}
== Preventing Exception Loss ==
One way to prevent exceptions from being lost is to add a synchronous query at the end of a routine:
<code>
put_character (c: CHARACTER; a_file: separate MY_FILE)
    local
        l_sync: POINTER
    do
        a_file.open
        a_file.put_character (c)
        a_file.close
            -- This ensures that exceptions are propagated:
        l_sync := a_file.default_pointer
    end
</code>
Another possibility is to store the failure in the separate object:
<code>
class MY_FILE feature

    is_tainted: BOOLEAN

    open
        do
            -- Open a file.
        rescue
            is_tainted := True
        end

    -- other features

end

class CLIENT feature

    put_character (c: CHARACTER; a_file: separate MY_FILE)
        do
            if a_file.is_tainted then
                -- Handle exception in `a_file'.
            end
            a_file.open
            a_file.put_character (c)
            a_file.close
        end

end
</code>


@@ -0,0 +1,145 @@
[[Property:title|Exclusive Access]]
[[Property:weight|4]]
[[Property:uuid|7f5adf71-7169-c54e-9bed-079c84aee8d3]]
The topic of exclusive access was already briefly touched upon in [[Separate Calls]].
In this chapter, we'll have a deeper look at the exclusive access guarantees given in SCOOP, and what they mean in practice.
== Reasoning Guarantees ==
In the SCOOP model, there are two very important guarantees - ''Order Preservation'' and ''Exclusive Access''.
{{definition|Order Preservation | The guarantee that, between any two regions, the order of separate feature calls logged by one region is the same as the order of feature applications by the other region.}}
{{definition|Exclusive Access (to a region) | The guarantee that no intervening features logged by other processors are executed.}}
The first guarantee is always valid.
A SCOOP processor is never allowed to reorder two feature calls when both have been logged by the same region.
However, the order preservation guarantee only holds between two processors - it is not valid in a global context.
For example, when a region A first logs a command to C, and then another region B logs a command to C, you generally don't know which feature will be applied first.
The same holds the other way around: you have no guarantee on the order of feature application when A logs an (asynchronous) separate call to both B and C.
Exclusive access on the other hand is only valid in certain contexts.
When an object is ''controlled'', SCOOP guarantees exclusive access to the region that contains the object.
We'll define the term ''controlled'' a bit later, but you've already seen an example in [[Separate Calls]]:
A separate object which is passed as an argument.
These two basic guarantees in the SCOOP model are important to reach one of the main goals:
The ability to reason about a program should be preserved in a concurrent program.
SCOOP guarantees that a ''controlled'' object behaves just like in a sequential program,
meaning that pre- and postconditions will remain valid in between feature calls, because no other processor can interfere.
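A small sketch of what these guarantees buy you in practice (the class <code>LOGGER</code> and its feature <code>put</code> are hypothetical):
<code>
log_twice (a_logger: separate LOGGER)
        -- Both calls are logged by the current region, so the handler of
        -- `a_logger' applies them in exactly this order; and because
        -- `a_logger' is controlled here, no calls from other regions are
        -- interleaved between them.
    do
        a_logger.put ("first")
        a_logger.put ("second")
    end
</code>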
== Inline Separate ==
Passing a separate object as an argument is not the only way to gain exclusive access in SCOOP.
While in theory the mechanism would be sufficient, experience has shown that it's often bulky and not very elegant to write a new routine whenever one wants to call a feature on a separate target.
Therefore SCOOP introduces the ''Inline Separate'' block to make things a bit easier.
<code>
person: separate PERSON

show_separate_block
    do
        separate person as l_person do
            print (l_person.age)
        end
    end
</code>
The separate block evaluates the expression on the left hand side of the <code>as</code> keyword, assigns it to the new read-only local variable <code>l_person</code> on the right hand side,
and does whatever is necessary to provide ''exclusive access'' to the region that handles <code>l_person</code>.
You can think of the inline separate block as syntactic sugar that creates a new wrapper routine (although inline separate still lets you access local variables of the enclosing routine):
<code>
person: separate PERSON

show_separate_block
    do
        anonymous_wrapper (person)
    end

anonymous_wrapper (l_person: separate PERSON)
    do
        print (l_person.age)
    end
</code>
== Controlled Objects ==
The Eiffel compiler introduces the concept of a ''controlled'' object to support the exclusive access guarantee.
{{definition|Controlled/uncontrolled object|An object is '''controlled''' if it is attached to a reference that has one of the following properties:<br/>
1) It is of a non-separate type. <br/>
2) It is of a separate type and it appears as a formal argument of the enclosing routine. <br/>
3) It is a local variable of an inline separate block.
<br/><br/>Otherwise it is '''uncontrolled'''.}}
An object is always ''controlled'' with respect to the processor handling <code>Current</code>, and the meaning is that the current processor has ''Exclusive Access'' to the region that holds the object. In particular, this means that no other processor can access or modify a controlled object.
In chapter [[Separate Calls]] we've already mentioned that SCOOP places a restriction on separate calls.
A separate call is allowed if the target appears as a formal argument of the enclosing routine.
While this rule is correct, it does not cover all of the cases.
With the above definition however, we can both simplify and extend the previous ''Separate Argument'' rule:
{{rule|name=Controlled Target|text=A separate call is valid if its target is controlled.}}
The compiler checks this property at compile-time and throws an error if a separate call happens on an uncontrolled target.
{{note| Query results with a non-separate return type are placed on the same processor as the target. This means that, if a processor controls the target, it is also safe to use these query results. The compiler tries to exploit this fact a little by treating such query results as controlled as well within a single expression. This allows multi-dot calls on a separate target, such as <code> sep_person.spouse.name.out</code>. <br/>
Note that this does not yet cover all cases where ''controlled'' can be inferred, and the compiler may become smarter in the future.}}
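As a sketch of that note (assuming <code>spouse</code> and <code>name</code> are queries of <code>PERSON</code> with non-separate return types), the intermediate results live in the same region as the controlled target, so the compiler accepts the chained call:
<code>
show_spouse_name (a_person: separate PERSON)
        -- `a_person' is controlled; `a_person.spouse' and its `name' are handled
        -- by the same region, so they are treated as controlled within this expression.
    do
        print (a_person.spouse.name)
    end
</code>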
== Exclusive Access to multiple regions ==
It is possible to gain exclusive access to multiple regions simultaneously.
This can either be done by passing multiple arguments to a region, or with an ''inline separate'' block.
<code>
balance_with_arguments (incr: separate INCREMENTER; decr: separate DECREMENTER): INTEGER
    do
        Result := incr.value + decr.value
    end

balance_with_inline_separate: INTEGER
    do
        separate incrementer as incr, decrementer as decr do
            Result := incr.value + decr.value
        end
    end

incrementer: separate INCREMENTER

decrementer: separate DECREMENTER
</code>
Exclusive access to the arguments is granted '''atomically''' in this case.
This ensures that no deadlock can occur when two regions want to gain access to the same regions, provided that they both use the "multiple-arguments-mechanism" of SCOOP and don't lock regions one after the other by hand.
{{SeeAlso|The dining philosophers example makes use of this fact. The feature <code>{PHILOSOPHER}.eat</code> has two separate arguments, which are guaranteed to be locked in one atomic step by the SCOOP runtime. If this were not the case, a deadlock might occur.}}
== Waiting for regions ==
You may wonder how the exclusive access guarantee is implemented in SCOOP.
A simple solution would be to have a lock for every region, and to gain exclusive access, one has to acquire the lock.
Before the 15.01 release, this was indeed how it was implemented.
However, the implementation proved to be flawed because it caused a lot of synchronization and waiting.
In EiffelStudio 15.01 this has changed.
When a processor needs exclusive access to another region, it opens a new ''queue'' on that region to log calls.
There can be multiple open queues to a processor, but the processor will always work on only one queue.
This has an important effect in practice: '''Gaining exclusive access to a region is always non-blocking.'''
Note that this is also true for exclusive access to multiple regions.
Previously it may have been necessary to wait for another processor to relinquish the lock, but now a processor can just proceed with logging calls, which will eventually be executed by the supplier processor.
You may start to wonder now where waiting happens, as it isn't possible to both have a guarantee for exclusive access and fully non-blocking behaviour.
Well, it happens during synchronous calls.
When a client gains exclusive access and starts logging calls, the supplier may not care about it yet because it is busy with something else.
However, as soon as a synchronous call is logged, the client needs to wait for the result.
During that time, the supplier will finish its other tasks and eventually start to execute the calls logged by the suspended client, at which point the client will wake up again.
{{note|Due to this, the only way to experience a deadlock in SCOOP is when a processor is stuck in a separate query.}}
To summarize:
* Gaining exclusive access, also to multiple regions, is always non-blocking.
* An asynchronous call is also non-blocking.
* Only a call to a query may be blocking, and thus a place where a processor can be stuck (e.g. in case of deadlock or starvation).


@@ -0,0 +1,74 @@
[[Property:title|Getting Started with SCOOP]]
[[Property:link_title|Getting Started]]
[[Property:weight|-1]]
[[Property:uuid|87f78898-2bbb-b134-0128-e55401a61a05]]
=Introduction=
SCOOP adds only a single keyword to the Eiffel programming language: <code>separate</code>. In any case in which SCOOP is not enabled, the <code>separate</code> keyword is ignored, and the SCOOP-specific validity rules are not applied.
In order to enable SCOOP processing in your project, you use the project settings. There are two mandatory settings. One of them is the option ''Concurrency'', and the value you need is ''SCOOP'', as shown in the section of the Project Settings dialog below.
[[Image:SCOOP project setting]]
The second setting is the precompiled library. This should either be '''base-scoop-safe''', or nothing at all. A precompiled library which is not built with SCOOP support will not work.
{{caution | When you use a precompiled library, the ''Concurrency'' setting for that library trumps whatever ''Concurrency'' value you have set for the project. So, if you use the "base-safe.ecf" precompiled library, your project will not be SCOOP-enabled regardless of the setting of ''Concurrency'' in the Target. Likewise, using the "base-scoop-safe.ecf" precompiled library always produces a SCOOP-enabled project, even if the Target ''Concurrency'' setting is ''No concurrency''.}}
=Creating a new SCOOP project=
== Step 1: Create a new project ==
Create a new project with the EiffelStudio new project wizard by selecting ''Basic application'' from the choices under ''Create project'' when EiffelStudio opens.
The created project will not have SCOOP enabled, however (this is true in all EiffelStudio releases so far, but may change in the future).
Do not compile the newly created project or precompile a library if asked - it will just be a waste of time.
== Step 2: Convert the project ==
To enable SCOOP, you'll have to do some work on the project settings. Follow the steps in the next section.
= Convert an existing project to SCOOP =
== Step 1: Adjust Concurrency setting ==
Open the project settings dialogue (Project -> Project settings...).
Select your favourite target (usually there's only one) and expand the section ''Advanced''.
In there you'll find the setting ''Concurrency'', which should have a value ''No concurrency'' for sequential projects. Change it to ''SCOOP''.
In addition to the ''Concurrency'' project setting, it is recommended that for new projects you use [[Creating a new void-safe project#Project settings for void-safe projects|the settings that guarantee void-safety]].
== Step 2: Remove the precompiled library ==
Expand the node ''Groups'' in the project settings on the left.
If your project uses a precompiled library, you should be able to see and expand the node ''Precompile'' as well.
Right click on the item (usually something like "base-safe_precompile") in the category ''Precompile'', then select ''Remove''.
== Step 3: (Optional) Add precompiled base-scoop-safe to the project ==
Select the node ''Groups'' on the left, and then press Ctrl + P on the keyboard (or select ''Add Precompile'' from the toolbox).
Note that this only works when there's no existing precompiled library and when the node ''Groups'' is highlighted.
In the dialog that opens, select "precomp_base-scoop-safe", then close the dialog by clicking ''OK''.
== Step 4: Clean compile ==
Changes to the ''Concurrency'' setting or to the precompiled libraries take effect only after a [[Clean compile|clean compile]] of your system. To do so, close EiffelStudio and reopen it.
In the dialog that appears, select the previously modified project and then select the ''Compile'' action and add a tick to ''Clean''.
At this point, your project should be void-safe and SCOOP enabled.
== Step 5: Double-check ==
Before doing anything after the clean compile, check the warnings section. If EiffelStudio is complaining that some settings could not be applied, something might be wrong with the precompiled library.
To make sure that SCOOP is really enabled, you can insert the following code snippet somewhere in your code:
<code>
check_scoop (a_string: separate STRING)
    local
        str: STRING
    do
        str := a_string
    end
</code>
This should trigger a compiler error. If it doesn't, then SCOOP is not correctly set up in your project.

View File

@@ -1,5 +1,5 @@
[[Property:title|SCOOP implementation]]
[[Property:weight|2]]
[[Property:weight|9]]
[[Property:uuid|eeb07907-e036-f3d6-5297-a7c44cfd1900]]
The implementation of SCOOP within EiffelStudio varies from the definition as it has been presented in publications during the model's evolution.
@@ -60,7 +60,7 @@ In version 6.8, agents targeted on separate objects are not supported.
With the first implementation of SCOOP, some things that we do commonly in sequential Eiffel become less fluid. Although these are not strictly limitations of the implementation of SCOOP principles, they are areas that should be improved in future releases in order to make SCOOP programming easier. In the meantime, there are workarounds for some of these situations.
==Printing a <code>separate STRING</code> ==
==Printing a separate STRING ==
Suppose you have declared a class attribute of type <code>separate STRING</code>:
@@ -94,16 +94,46 @@ Or use a creation expression and avoid declaring the local variable:
io.put_string (create {STRING}.make_from_separate (my_separate_string))
</code>
== Calling a separate agent ==
Calling a separate agent is a bit tricky, especially if it's a <code>PROCEDURE</code> which should be executed asynchronously.
If the agent does not take any arguments, you must pass <code>Void</code>; otherwise the compiler will generate an empty <code>TUPLE</code>
on the same processor as the caller, which triggers lock passing (see [[Asynchronous Calls]]):
<code>
do_call (proc: separate PROCEDURE [ANY, TUPLE])
    do
        proc.call (Void)
    end
</code>
If the agent does take arguments, things get more involved. If the call must be asynchronous, you have to use a workaround based on the feature <code>{ROUTINE}.empty_operands</code>, like this:
<code>
do_call (a_procedure: separate PROCEDURE [ANY, TUPLE [separate STRING]]; a_string: separate STRING)
    local
        l_tuple: separate TUPLE [separate STRING]
    do
        l_tuple := a_procedure.empty_operands
        set_tuple_string (l_tuple, a_string)
        a_procedure.call (l_tuple)
    end

set_tuple_string (a_tuple: separate TUPLE [str: separate STRING]; a_string: separate STRING)
    do
        a_tuple.str := a_string
    end
</code>
=Implementation-dependent behavior=
==The Wait Rule==
{{note | This only applies to EiffelStudio releases prior to 15.01}}
The [[Concurrent Eiffel with SCOOP#Access to shared resources|Wait Rule]] says: ''A routine call with separate arguments will execute when all corresponding processors are available and hold them exclusively for the duration of the routine.''
In the EiffelStudio implementation, a routine will not necessarily wait for all processors associated with its separate arguments to be available before it ''begins'' execution. The waiting on processors occurs in a "lazy" manner. Execution will only wait on the availability of one of the processors when it actually needs to use the argument associated with that processor. This means that if there are several instructions ahead of the first instruction that references a separate argument, then those several instructions will be executed immediately. Only at the point at which the separate argument's processor is needed will the routine pause and wait for the availability of the processor.
In the EiffelStudio implementation prior to 15.01, a routine will not necessarily wait for all processors associated with its separate arguments to be available before it ''begins'' execution. The waiting on processors occurs in a "lazy" manner. Execution will only wait on the availability of one of the processors when it actually needs to use the argument associated with that processor. This means that if there are several instructions ahead of the first instruction that references a separate argument, then those several instructions will be executed immediately. Only at the point at which the separate argument's processor is needed will the routine pause and wait for the availability of the processor.
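To illustrate this lazy behavior, here is a minimal sketch (the class <code>X</code> and its feature <code>f</code> are hypothetical):
<code>
lazy_demo (a_x: separate X)
    do
        print ("Executed right away; no wait for the handler of a_x yet.%N")
        a_x.f
            -- Only here may execution block, waiting for the processor of a_x to become available.
    end
</code>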

View File

@@ -1,52 +0,0 @@
[[Property:title|SCOOP practical matters]]
[[Property:weight|-1]]
[[Property:uuid|87f78898-2bbb-b134-0128-e55401a61a05]]
=Introduction=
''SCOOP practical matters'' is a collection of advice for using SCOOP effectively. Here you will find practical information that supplements the rules and definitions of the basic [[Concurrent Eiffel with SCOOP|SCOOP model]], or that is unique to the implementation of SCOOP in EiffelStudio.
=SCOOP project settings=
SCOOP adds only a single keyword to the Eiffel programming language: <code>separate</code>. In any case in which SCOOP is not enabled, the <code>separate</code> keyword is ignored, and the SCOOP-specific validity rules are not applied.
In order to enable SCOOP processing in your project, you use the project settings. Actually, there's just one mandatory setting: ''Concurrency'', and the value you need is ''SCOOP'', as shown in the section of the Project Settings dialog below. Still, in addition to the ''Concurrency'' project setting, remember that it is recommended that for new projects you use [[Creating a new void-safe project#Project settings for void-safe projects|the settings that guarantee void-safety]].
[[Image: SCOOP project setting]]
{{note|The ''Concurrency'' setting requires a [[Clean compile|clean compile]] to be done in order for a change of value to take effect. }}
==A SCOOP project from the new project wizard==
If you create a project using the EiffelStudio new project wizard (for example, selecting "Basic application" from the choices under "Create project" when EiffelStudio opens), the created project will have neither void-safety nor SCOOP enabled. [This is true in EiffelStudio versions 6.8 and 7.0, but may change in future versions.] So, to enable void-safety and SCOOP in such a default project, you'll have to do some work on the project settings.
===Void-safety settings===
As mentioned above, you'll need to enable [[Creating a new void-safe project#Project settings for void-safe projects|the settings that guarantee void-safety]].
===Libraries and precompiled libraries===
Notice in the note in [[Creating a new void-safe project#Project settings for void-safe projects|that section of the documentation]], that for a period of transition, you should select the "''libraryname''-safe.ecf" version of libraries for void-safe projects. There is no special SCOOP version of libraries, so if you are not using a precompiled library, then selecting "base-safe.ecf" should be sufficient.
However, there is a special version of the precompiled EiffelBase library for SCOOP. So, if you are using precompiled EiffelBase, you should remove the precompile that the wizard included, and add the precompile with file name "base-scoop-safe.ecf".
{{note|When you use a precompiled library, the ''Concurrency'' setting for that library trumps whatever ''Concurrency'' value you have set for the project. So, if you use the "base-safe.ecf" precompiled library, your project will not be SCOOP-enabled regardless of the setting of ''Concurrency'' in the Target. Likewise, using the "base-scoop-safe.ecf" precompiled library always produces a SCOOP-enabled project, even if the Target ''Concurrency'' setting is ''No concurrency''.}}
===The SCOOP Concurrency setting===
Then, you should select ''SCOOP'' as a value for the ''Concurrency'' setting for your project.
===Clean compile===
Changing the ''Concurrency'' setting, the ''Void-safety'' setting, and precompiled libraries each would cause you to have to do a [[Clean compile|clean compile]] of your system in order to become effective. So, next you should do a clean compile. At this point, your project should be void-safe and SCOOP enabled.

View File

@@ -0,0 +1,114 @@
[[Property:title|Regions and Processors]]
[[Property:weight|2]]
[[Property:uuid|974a41dd-0e36-4d75-edd1-ead6ea4b372d]]
== Regions ==
One of the key ideas in SCOOP is to prohibit simultaneous access to shared memory.
In order to reach this goal, the SCOOP model partitions the heap memory into ''regions''.
{{definition|Region|A set of objects in the heap. The set of all regions in a program is a partition of the heap.}}
Every object in an Eiffel program belongs to exactly one ''region''.
A ''region'' is by itself sequential, meaning that at any time at most one routine is being executed on its objects.
A SCOOP program can, however, contain multiple regions.
[[Image:SCOOP regions]]
{{info|SCOOP is classified as a message-passing concurrency model, because there is no shared memory.}}
{{note|A sequential program is the special case of a SCOOP program with only one region.}}
Direct access from one region into another is not allowed.
If one wishes to perform a command or a query on an object of a different region, a message has to be sent.
You will see how this is done in the chapter [[Separate Calls]].
The simple trick of splitting the heap into several regions, each of which is by itself sequential,
prevents one of the trickiest problems in concurrency: data races.
In SCOOP you are guaranteed that a data race, meaning a read and a write access to the same memory location with nondeterministic ordering, can never happen.
== Processors ==
In the SCOOP model, a ''processor'' is used as the engine for execution.
{{definition|Processor| An autonomous thread of control capable of sequential execution of instructions.}}
A ''processor'' is always attached to exactly one region and is responsible for performing operations on all of its objects.
The term ''handler of an object'' denotes the processor attached to the region in which the object is placed.
As mentioned earlier, a ''processor'' cannot access or perform operations on an object in a different ''region'';
it has to send a message to the object's handler instead.
{{info|''Processor'' is an abstract notion and does not mean the physical silicon chip present in every computer.
In SCOOP we think of it as a thread of control capable of applying features to objects.
In theory, processors are not restricted to any particular type of hardware or software; for example, they could correspond to threads, processes, hardware processors, or machines in a network.
Currently, however, a SCOOP processor is implemented as a thread.}}
== Separate Types==
To support the concept of regions in a program text, SCOOP extends the type system by introducing a single new keyword: <code>separate</code>.
The <code>separate</code> keyword is used to annotate a reference and means that the object attached to it may be in a different region.
{{definition|Separate type|A type which has been declared including the keyword <code>separate</code>.}}
If an entity uses the keyword <code>separate</code> in its declaration, such as:
<code>
my_x: separate X
</code>
it indicates that the handler of <code>my_x</code> may be different from the handler of <code>Current</code>.
This in turn means that it is forbidden to access or modify <code>my_x</code> directly.
To perform any operation on <code>my_x</code>, a message must be sent to the other handler.
Note that the SCOOP type system allows attaching an object to a separate reference even if it is actually in the same region as <code>Current</code>.
The reverse is not possible: a reference to an object in a different region must always be of a separate type.
This is reflected in the type checker, which treats a regular type <code>A</code> as a subtype of the separate type <code>separate A</code>.
In the image above, the three references that cross a processor boundary must be declared separate.
The single reference in Region 2, which stays in the same region, can be of a non-separate type.
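The following sketch illustrates this conformance rule (the class <code>X</code> is hypothetical):
<code>
conformance_demo (a_x: X)
        -- Sketch: X conforms to separate X, but not the other way around.
    local
        l_plain: X
        l_sep: separate X
    do
        l_sep := a_x
            -- Valid: the non-separate type X conforms to separate X.
        -- l_plain := l_sep
            -- Invalid: separate X does not conform to X; the compiler would reject this assignment.
    end
</code>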
== Creating regions and processors ==
In order to turn a sequential program into a concurrent program, one has to create new regions and put objects into them.
The means to achieve this is the creation instruction on an entity whose type is separate.
<code>
my_x: separate X
-- ...
create my_x.make
</code>
The instruction <code>create my_x.make</code> does a lot of things at the same time:
* It creates a new region.
* It creates a new processor for the new region.
* It creates a new object of type <code>X</code>, which is placed in the newly created region.
With this new knowledge we can create a small program that generates the object and region graph shown above:
<code>
class APPLICATION create make feature
    person: separate PERSON
    thing: separate ANY

    make
        do
            create person.make
            create thing
        end
end

class PERSON create make feature
    name: STRING
    thing: separate ANY

    make
        do
            create name.make_from_string ("John Doe")
            create thing
        end
end
</code>

View File

@@ -0,0 +1,126 @@
[[Property:title|Separate Calls]]
[[Property:weight|3]]
[[Property:uuid|c030d521-1420-1570-b63e-9035332a3e26]]
==Separate Calls==
In the [[Regions and Processors | previous chapter]] we learned that a concurrent SCOOP program consists of several ''regions'' that communicate via message passing.
You may wonder how it is possible to pass a message to another processor, and the answer is simple:
with the good old feature call.
{{definition|Separate call|A feature call whose target is of a separate type.}}
The ''separate call'' is the SCOOP way to pass messages from one region to another.
A call such as this
<code>
do_call (my_x: separate X)
    do
        my_x.f (42)
    end
</code>
roughly corresponds to a message to the handler of <code>my_x</code> with the content:
"Execute feature f on target my_x with argument 42."
Note that there can be a difference between the time a message is sent and the time the feature is executed.
In SCOOP we therefore distinguish between a ''feature call'' and a ''feature application''.
{{definition|Feature call|The registration of a feature for later execution.}}
{{definition|Feature application|The actual execution of a feature.}}
==Access to shared resources==
The main issue with concurrent systems is the proper control of access to resources that can be shared among simultaneously executing processors.
Traditional solutions to the problem involve the use of “critical sections” of code.
These are sections of code in which the shared resource is accessed.
Only one thread is allowed to be executing the critical section at a time.
So if one thread wants to execute the critical section and another is already doing so, then the first must wait.
Thread synchronization schemes ensure this “mutual exclusion” of access to the critical section.
Rather than using critical sections, SCOOP relies on the mechanism of argument passing to ensure exclusive access.
As a result, there is a restriction placed on separate calls.
{{Rule|name=Separate argument|text=A separate call <code>a_x.f (a)</code> is valid if <code>a_x</code> is an argument of the enclosing routine.}}
So, according to this rule, for a separate call to be valid, the target of the call must be a formal argument of the routine in which the call occurs.
The code below contains both an invalid separate call and a valid one.
<code>
my_separate_attribute: separate SOME_TYPE

...

calling_routine
        -- One routine
    do
        my_separate_attribute.some_feature -- Invalid call: Feature call on separate attribute
        enclosing_routine (my_separate_attribute) -- Separate attribute passed as argument
    end

enclosing_routine (a_arg: separate SOME_TYPE)
        -- Another routine
    do
        a_arg.some_feature -- Valid call: Feature call on separate argument
    end
</code>
In the code above, <code>my_separate_attribute</code> is a class attribute declared as a separate type. In the first line in <code>calling_routine</code> a direct feature call is made to apply <code>some_feature</code> to <code>my_separate_attribute</code>. This is an invalid separate call. The second line calls feature <code>enclosing_routine</code> and passes <code>my_separate_attribute</code> as an argument. <code>enclosing_routine</code> takes an argument of type <code>separate SOME_TYPE</code>. Within <code>enclosing_routine</code> it is valid to call <code>some_feature</code> on <code>a_arg</code>.
{{info|Congratulations! You've just gained enough knowledge to write a small SCOOP program.
You may want to have a look at the [[Dining Philosophers]] example, which does not use any of the advanced concepts.}}
{{SeeAlso|The <code>launch_producer</code> feature of the [[Producer-consumer|producer-consumer]] example, a feature which exists for the purpose of compliance with the separate argument rule.}}
==Synchronous and asynchronous feature calls==
When we think of the execution of sequential Eiffel, we tend to equate feature call and feature application. That is, it is expected that for a sequence of two feature calls:
<code>
x.f
y.g
</code>
that the feature application of <code>x.f</code> will complete before <code>y.g</code> begins.
In concurrent Eiffel with SCOOP things are different. This is because a particular feature call, <code>x.f</code>, may occur on one processor, and the consequent feature application (of feature <code>f</code> to <code>x</code>) may occur on a different processor.
{{definition|Synchronous feature call|A feature call in which the execution of the calling client does not proceed until the feature application has completed. }}
{{definition|Asynchronous feature call|A feature call which causes the “logging” of a request by a client for the associated feature application to be executed by the supplier's processor. }}
After an asynchronous feature call, the execution of the client proceeds immediately, possibly in parallel with the application of the feature on some other processor.
===What makes a call synchronous or asynchronous?===
First, every feature call is either a synchronous feature call or an asynchronous feature call. For a particular call, the following rules determine which it is:
A feature call is always '''synchronous''' in the following cases:
:S1 It is a non-separate call.
:S2 It is a separate call to a query.
A feature call may be '''asynchronous''' in the following case:
:A1 It is a separate call to a command.
Let's look a little closer at the cases determining synchronous calls.
Case S1 is the case of typical sequential Eiffel, where all calls are non-separate, and therefore synchronous. Of course, even in concurrent Eiffel with SCOOP, plenty of non-separate calls will occur, and these will be synchronous.
Case S2 says that if a separate call is a query it must be synchronous. This is because even though the feature application will probably occur on a different processor, the instructions following the query will likely depend upon the result of the query, so they must wait until the feature application completes. This situation is known as ''wait by necessity''.
The case A1 is the only case where asynchrony is involved. But be careful with the wording: it says the call '''may be asynchronous''', because there are some exceptions. The exact rules are a bit complex and are described in [[Asynchronous Calls]]. As a general rule of thumb, a separate call is executed asynchronously when the client does not have exclusive access over an object which is needed by the target region.
Asynchronous execution means that when a client processes the call to the command, it “logs” the need for its associated feature application. But then, rather than waiting for the feature application to complete, the client continues execution of instructions beyond the asynchronous call.
It is in this case that concurrent computation is achieved. The processor of the client object is free to continue processing while the processor handling the target of the asynchronous feature call applies that feature.
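To summarize, here is a minimal sketch of the two cases (the class <code>COUNTER</code> with features <code>increment</code> and <code>item</code> is hypothetical):
<code>
use_counter (a_counter: separate COUNTER)
    local
        l_value: INTEGER
    do
        a_counter.increment
            -- A1: separate call to a command; may be executed asynchronously.
        io.put_string ("This line may run while the increment is still pending.%N")
        l_value := a_counter.item
            -- S2: separate call to a query; the client waits for the result (wait by necessity).
    end
</code>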