From 78ff0134c75ca91bbd34ec02bec54eee3dd54e5d Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 03:27:42 -0700 Subject: [PATCH 01/35] Created Using the policy driven framework (markdown) --- Using-the-policy-driven-framework.md | 9 +++++++++ 1 file changed, 9 insertions(+) create mode 100644 Using-the-policy-driven-framework.md diff --git a/Using-the-policy-driven-framework.md b/Using-the-policy-driven-framework.md new file mode 100644 index 00000000..a0e5836d --- /dev/null +++ b/Using-the-policy-driven-framework.md @@ -0,0 +1,9 @@ +# Using the policy driven framework + +## Introduction + +The aim of the policy-driven framework is to allow authors of web-servers to concentrate on the business logic (e.g., in the case of a GET request, generating the content), without having to worry about the details of the HTTP protocol (such as headers and response codes). However, there are so many possibilities in the HTTP protocol, that it is impossible to correctly guess what to do in all cases. Therefore the author has to supply policy decisions to the framework, in areas such as caching decisions. These are implemented as a set of deferred classes for which the author needs to provide effective implementations. + +## Mapping the URI space + +The authors first task is to decide which URIs the server will respond to (we do this using [URI templates](http://tools.ietf.org/html/rfc6570) ) and which methods are supported for each template.This is done in the class that that defines the service (which is often the root class for the application). This class must be a descendant of WSF_ROUTED_SKELETON_SERVICE. \ No newline at end of file From 7dd36014cc987575b1c964efd80effa42b05e9cb Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 03:56:52 -0700 Subject: [PATCH 02/35] Updated Using the policy driven framework (markdown) --- Using-the-policy-driven-framework.md | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/Using-the-policy-driven-framework.md b/Using-the-policy-driven-framework.md index a0e5836d..077ddfd2 100644 --- a/Using-the-policy-driven-framework.md +++ b/Using-the-policy-driven-framework.md @@ -4,6 +4,12 @@ The aim of the policy-driven framework is to allow authors of web-servers to concentrate on the business logic (e.g., in the case of a GET request, generating the content), without having to worry about the details of the HTTP protocol (such as headers and response codes). However, there are so many possibilities in the HTTP protocol, that it is impossible to correctly guess what to do in all cases. Therefore the author has to supply policy decisions to the framework, in areas such as caching decisions. These are implemented as a set of deferred classes for which the author needs to provide effective implementations. +We aim to provide unconditional compliance [See HTTP/1.1 specification](http://www.w3.org/Protocols/rfc2616/rfc2616-sec1.html#sec1) for you. + ## Mapping the URI space -The authors first task is to decide which URIs the server will respond to (we do this using [URI templates](http://tools.ietf.org/html/rfc6570) ) and which methods are supported for each template.This is done in the class that that defines the service (which is often the root class for the application). This class must be a descendant of WSF_ROUTED_SKELETON_SERVICE. 
\ No newline at end of file
+The author's first task is to decide which URIs the server will respond to (we do this using [URI templates](http://tools.ietf.org/html/rfc6570)) and which methods are supported for each template. This is done in the class that defines the service (which is often the root class for the application). This class must be a descendant of WSF_ROUTED_SKELETON_SERVICE. Throughout this tutorial, we will refer to the restbucksCRUD example application, which can be found in the EWF distribution in the examples directory. Its root class, RESTBUCKS_SERVER, inherits from WSF_ROUTED_SKELETON_SERVICE, as well as WSF_DEFAULT_SERVICE. The latter class means that you must specify in the ECF which connector you will use by default. This means you can easily change connectors just by changing the ECF and recompiling.
+
+### Declaring your URI templates
+
+In order to map your URI space to handlers (which you will write), you need to implement the routine setup_router. You can see in the example that the ORDER_HANDLER handler is associated with two URI templates. The URI /order is associated with the POST method (only). Any requests to /order with the GET method (or any other method) will result in an automatically generated compliant response being sent on your behalf to the client. The other principal methods (you get compliant responses to the HEAD method for free whenever you allow the GET method) are associated with the URI template /order/{orderid}. Here, orderid is a template variable. Its value for any given request is provided to your application as {WSF_REQUEST}.path_parameter ("orderid"). If the client passes a URI of /order/21, then you will see the value 21. If the client passes /order/fred, you will see the value fred. 
But if the client passes /order/21/new, he will see a compliant error response generated by the framework. \ No newline at end of file +In order to map your URI space to handlers (which you will write), you need to implement the routine setup_router. You can see in the example that the ORDER_HANDLER handler is associated with two URI templates. The URI /order is associated with the POST method (only). Any requests to /order with the GET method (or any other method) will result in an automatically generated compliant response being sent on your behalf to the client. The other principle methods (you get compliant responses to the HEAD method for free whenever you allow the GET method) are associated with the URI template /order/{orderid}. Here, orderid is a template variable. It's value for any given request is provided to your application as {WSF_REQUEST}.path_parameter ("orderid"). If the client passes a URI of /order/21, then you will see the value 21. If the client passes /order/fred, you will see the value fred. But if the client passes /order/21/new, he will see a compliant error response generated by the framework. + +## Declaring your policy in responding to OPTIONS + +WSF_ROUTED_SKELETON_SERVICE inherits from WSF_SYSTEM_OPTIONS_ACCESS_POLICY. This policy declares that the framework will provide a compliant default response to OPTIONS * requests. If you prefer to not respond to OPTIONS * requests (and I am doubtful if it is fully compliant to make that choice), then you can redefine +is_system_options_forbidden. + +## Declaring your policy on requiring use of a proxy server + +WSF_ROUTED_SKELETON_SERVICE also inherits from WSF_PROXY_USE_POLICY. This determines if the server will require clients to use a proxy server. By default, it will do so for HTTP/1.0 clients. This is a sensible default, as the framework assumes an HTTP/1.1 client throughout. If you are sure that you will only ever have HTTP/1.1 clients, then you can instead inherit from WSF_NO_PROXY_POLICY, as RESTBUCKS_SERVER does. If not, then you need to implement proxy_server. From 0c4a410ac0d118a9596ec0dcb948959a3db89d8a Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 05:51:11 -0700 Subject: [PATCH 04/35] Created Writing the handlers (markdown) --- Writing-the-handlers.md | 3 +++ 1 file changed, 3 insertions(+) create mode 100644 Writing-the-handlers.md diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md new file mode 100644 index 00000000..b59fbd38 --- /dev/null +++ b/Writing-the-handlers.md @@ -0,0 +1,3 @@ +# Writing the handlers + +Now you have to implement each handler. You need to inherit from WSF_SKELETON_HANDLER (as ORDER_HANDLER does). This involves implementing a lot of deferred routines. There are other routines for which default implementations are provided, which you might want to override. \ No newline at end of file From 84c3039806c5ae74b0856e7f9fc0ddc40b47d507 Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 05:51:54 -0700 Subject: [PATCH 05/35] Updated Using the policy driven framework (markdown) --- Using-the-policy-driven-framework.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/Using-the-policy-driven-framework.md b/Using-the-policy-driven-framework.md index 115804e0..2999ec9a 100644 --- a/Using-the-policy-driven-framework.md +++ b/Using-the-policy-driven-framework.md @@ -22,3 +22,5 @@ is_system_options_forbidden. ## Declaring your policy on requiring use of a proxy server WSF_ROUTED_SKELETON_SERVICE also inherits from WSF_PROXY_USE_POLICY. 
This determines if the server will require clients to use a proxy server. By default, it will do so for HTTP/1.0 clients. This is a sensible default, as the framework assumes an HTTP/1.1 client throughout. If you are sure that you will only ever have HTTP/1.1 clients, then you can instead inherit from WSF_NO_PROXY_POLICY, as RESTBUCKS_SERVER does. If not, then you need to implement proxy_server. + +Next you have to [write your handler(s)](./Writing-the-handlers) \ No newline at end of file From bf0a8e8efbb7a0f3dd83b52c79645f2ff01fefb8 Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 05:57:49 -0700 Subject: [PATCH 06/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index b59fbd38..d1fd9324 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -1,3 +1,11 @@ # Writing the handlers -Now you have to implement each handler. You need to inherit from WSF_SKELETON_HANDLER (as ORDER_HANDLER does). This involves implementing a lot of deferred routines. There are other routines for which default implementations are provided, which you might want to override. \ No newline at end of file +Now you have to implement each handler. You need to inherit from WSF_SKELETON_HANDLER (as ORDER_HANDLER does). This involves implementing a lot of deferred routines. There are other routines for which default implementations are provided, which you might want to override. This applies to both routines defined in this class, and those declared in the three policy classes from which it inherits. + +## Implementing the routines declared directly in WSF_SKELETON_HANDLER + +TODO + +## Implementing the policies + +TODO \ No newline at end of file From f3849679e896771fa5c531c2957925c5fdf6d301 Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 06:00:37 -0700 Subject: [PATCH 07/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index d1fd9324..61be5110 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -8,4 +8,8 @@ TODO ## Implementing the policies +* WSF_OPTIONS_POLICY +* WSF_PREVIOUS_POLICY +* WSF_CACHING_POLICY + TODO \ No newline at end of file From 45fd51b4b5de38887850ca28b4bc9842ab7496b2 Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 06:02:24 -0700 Subject: [PATCH 08/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index 61be5110..081c4a59 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -8,7 +8,7 @@ TODO ## Implementing the policies -* WSF_OPTIONS_POLICY +* [WSF_OPTIONS_POLICY](edit this) * WSF_PREVIOUS_POLICY * WSF_CACHING_POLICY From 7bc09bda8fb84d7d359b842246819b9b28e0ade5 Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 06:02:42 -0700 Subject: [PATCH 09/35] Created WSF_OPTIONS_POLICY (markdown) --- WSF_OPTIONS_POLICY.md | 1 + 1 file changed, 1 insertion(+) create mode 100644 WSF_OPTIONS_POLICY.md diff --git a/WSF_OPTIONS_POLICY.md b/WSF_OPTIONS_POLICY.md new file mode 100644 index 00000000..30404ce4 --- /dev/null +++ b/WSF_OPTIONS_POLICY.md @@ -0,0 +1 @@ +TODO \ No newline at end of file From 33d523e5bf2beb0ef52323df780a1fbe9745581e Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 
2013 06:03:13 -0700
Subject: [PATCH 10/35] Updated Writing the handlers (markdown)

---
 Writing-the-handlers.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md
index 081c4a59..598a05fc 100644
--- a/Writing-the-handlers.md
+++ b/Writing-the-handlers.md
@@ -8,7 +8,7 @@ TODO
 
 ## Implementing the policies
 
-* [WSF_OPTIONS_POLICY](edit this)
+* [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY)
 * WSF_PREVIOUS_POLICY
 * WSF_CACHING_POLICY
 
 TODO
\ No newline at end of file
From 090e294f1060cd5f3db82fdd231f75e86058bbfa Mon Sep 17 00:00:00 2001
From: colin-adams
Date: Wed, 7 Aug 2013 06:06:15 -0700
Subject: [PATCH 11/35] Updated WSF_OPTIONS_POLICY (markdown)

---
 WSF_OPTIONS_POLICY.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/WSF_OPTIONS_POLICY.md b/WSF_OPTIONS_POLICY.md
index 30404ce4..2056c430 100644
--- a/WSF_OPTIONS_POLICY.md
+++ b/WSF_OPTIONS_POLICY.md
@@ -1 +1,3 @@
-TODO
\ No newline at end of file
+# Implementing routines in WSF_OPTIONS_POLICY
+
+This class provides a default response to OPTIONS requests other than OPTIONS *. So you don't have to do anything. The default response just includes the mandatory Allow headers for all the methods that are allowed for the request URI. If you want to include body text, or an additional header, then you should redefine this routine.
\ No newline at end of file
From 7815557f840514613ac37514e2349169989d788f Mon Sep 17 00:00:00 2001
From: colin-adams
Date: Wed, 7 Aug 2013 06:14:24 -0700
Subject: [PATCH 12/35] Updated Writing the handlers (markdown)

---
 Writing-the-handlers.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md
index 598a05fc..2021f1ae 100644
--- a/Writing-the-handlers.md
+++ b/Writing-the-handlers.md
@@ -9,7 +9,7 @@ TODO
 ## Implementing the policies
 
 * [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY)
-* WSF_PREVIOUS_POLICY
+* [WSF_PREVIOUS_POLICY](./WSF_PREVIOUS_POLICY)
 * WSF_CACHING_POLICY
 
 TODO
\ No newline at end of file
From c261f02c8472d6217a9a17fe7a2a96ba9651b6f6 Mon Sep 17 00:00:00 2001
From: colin-adams
Date: Wed, 7 Aug 2013 06:24:45 -0700
Subject: [PATCH 13/35] Created Wsf previous policy (markdown)

---
 Wsf-previous-policy.md | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
 create mode 100644 Wsf-previous-policy.md

diff --git a/Wsf-previous-policy.md b/Wsf-previous-policy.md
new file mode 100644
index 00000000..37c965c5
--- /dev/null
+++ b/Wsf-previous-policy.md
@@ -0,0 +1,20 @@
+# Implementing WSF_PREVIOUS_POLICY
+
+This class provides routines which enable the programmer to encode knowledge about resources that have moved (either temporarily, or permanently), or have been permanently removed. There are four routines, but only one is actually deferred.
+
+## resource_previously_existed
+
+By default, this routine says that the resource currently doesn't exist and never has existed. You need to redefine this routine to return True for any URIs that you want to indicate used to exist, and either no longer do so, or have moved to another location.
+
+## resource_moved_permanently
+
+If you have indicated that a resource previously existed, then it may have moved permanently, temporarily, or just ceased to exist. In the first case, you need to redefine this routine to return True for such a resource.
+## resource_moved_temporarily
+
+If you have indicated that a resource previously existed, then it may have moved permanently, temporarily, or just ceased to exist. 
In the second case, you need to redefine this routine to return True for such a resource. + +## previous_location + +You need to implement this routine. It should provide the locations where a resource has moved to. There must be at least one such location. If more than one is provided, then the first one is considered primary. + +If the preconditions for this routine are never met (as is the case by default), then just return an empty list. \ No newline at end of file From 3b517d3c53d1bd3836f84e25305c3c5c343d6d6f Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 06:25:31 -0700 Subject: [PATCH 14/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index 2021f1ae..256296e2 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -10,6 +10,4 @@ TODO * [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY) * [WSF_PREVIOUS_POLICY](./WSF_PREVIOUS_POLICY) -* WSF_CACHING_POLICY - -TODO \ No newline at end of file +* [WSF_CACHING_POLICY](./WSF_CACHING_POLICY) From 259815467c86758253c6ec3de84592910ea607dd Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 07:05:24 -0700 Subject: [PATCH 15/35] Created Wsf caching policy (markdown) --- Wsf-caching-policy.md | 52 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 52 insertions(+) create mode 100644 Wsf-caching-policy.md diff --git a/Wsf-caching-policy.md b/Wsf-caching-policy.md new file mode 100644 index 00000000..e2c5d6ae --- /dev/null +++ b/Wsf-caching-policy.md @@ -0,0 +1,52 @@ +# Implementing WSF_CACHING_POLICY + +This class contains a large number of routines, some of which have sensible defaults. + +## age + +This is used to generate a **Cache-Control: max-age** header. It says how old the response can before a cache will consider it stale (and therefore will need to revalidate with the server). Common values are zero (always consider it stale) and Never_expires (never always mean up to one year) and 1440 (one day). + +## shared_age + +This defaults to the same as age, so you only have to redefine it if you want a different value. If different from age, then we generate a **Cache-Control: s-max-age** header. This applies to shared caches only. Otherwise it has the same meaning as age. This overrides the value specified in age for shared caches. + +## http_1_0_age + +This generates an **Expires** header, and has the same meaning as age, but is understood by HTTP/1.0 caches. By default it has the same value as age. You only need to redefine this if you want to treat HTTP/1.0 caches differently (you might not trust them so well, so you might want to return 0 here). + +## is_freely_cacheable + +This routine says whether a shared cache can use this response for all client. If True, then it generates a **Cache-Control: public** header. If your data is at all sensitive, then you want to return False here. + +## is_transformable + +Non-transparent proxies are allowed to make some modifications to headers. If your application relies on this _not_ happening, then you want to return False here. This is the default, so you don't have to do anything. This means a **Cache-Control: no-transform** header will be generated. +But most applications can return True. + +## must_revalidate + +Some clients request that their private cache ignores server expiry times (and so freely reuse stale responses). 
If you want to force revalidation anyway in such circumstances, then redefine to return True. In which case, we generate a **Cache-Control: must-revalidate** header. + +## must_proxy_revalidate + +This is the same as must_revalidate, but only applies to shared caches that are configured to serve stale responses. If you redefine to return True, then we generate a **Cache-Control: proxy-revalidate** header. + +## private_headers + +This is used to indicate that parts (or all) of a response are considered private to a single user, and should not be freely served from a shared cache. You must implement this routine. Your choices are: + +1. Return Void. None of the response is considered private. +1. Return and empty list. All of the response is considered private. +1. Return a list of header names. + +If you don't return Void, then a **Cache-Control: private** header will be generated. + +## non_cacheable_headers + +This is similar to private_headers, and you have the same three choices. the difference is that it is a list of headers (or the whole response) that will not be sent from a cache without revalidation. + +If you don't return Void, then a **Cache-Control: no-cache** header will be generated. + +## is_sensitive + +Is the response to be considered of a sensitive nature? If so, then it will not be archived from a cache. We generate a **Cache-Control: no-store** header. \ No newline at end of file From ce04737d46b5eab52818125c2444d42328ed82c8 Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 07:18:35 -0700 Subject: [PATCH 16/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index 256296e2..5834c9ed 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -4,7 +4,17 @@ Now you have to implement each handler. You need to inherit from WSF_SKELETON_HA ## Implementing the routines declared directly in WSF_SKELETON_HANDLER -TODO +### is_chunking + +HTTP/1.1 supports streaming responses (and providing you have configured your server to use a proxy server in WSF_PROXY_USE_POLICY, this framework guarantees you have an HTTP/1.1 client to deal with). It is up to you whether or not you choose to make use of it. If so, then you have to serve the response one chunk at a time (but you could generate it all at once, and slice it up as you go). In this routine you just say whether or not you will be doing this. So the framework n=knows which other routines to call. + +## includes_response_entity + +The response to a DELETE, PUT or POST will include HTTP headers. It may or may not include a body. It is up to you, and this is where you tell the framework. + +## conneg + +[The HTTP/1.1 specification](http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12.1) defines server-driven content negotiation. Based on the Accept* headers in the request, we can determine whether we have a format for the response entity that is acceptable to the client. You need to indicate what formats you support. The framework does the rest. Normally you will have the same options for all requests, in which case you can use a once object. 
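As an illustration of the once-object idiom just mentioned, here is a minimal sketch of one of the supporting lists described in the next sections (mime_types_supported). The return type and exact signature are assumptions, so check the declaration in WSF_SKELETON_HANDLER before copying it; the same idiom applies to languages_supported, charsets_supported and encodings_supported.

```eiffel
mime_types_supported: LIST [STRING]
		-- Media types this handler can serve.
		-- Sketch only: the declared return type in WSF_SKELETON_HANDLER may differ.
	once
			-- Computed once and shared by every request, since the
			-- supported formats do not vary from request to request.
		create {ARRAYED_LIST [STRING]} Result.make (2)
		Result.extend ("application/json")
		Result.extend ("application/xml")
	end
```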
## Implementing the policies From 9395e31c5343b3fee2929e4520537d833c98e8b7 Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 08:50:31 -0700 Subject: [PATCH 17/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 45 ++++++++++++++++++++++++++++++++++++++++- 1 file changed, 44 insertions(+), 1 deletion(-) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index 5834c9ed..8f958854 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -16,8 +16,51 @@ The response to a DELETE, PUT or POST will include HTTP headers. It may or may n [The HTTP/1.1 specification](http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12.1) defines server-driven content negotiation. Based on the Accept* headers in the request, we can determine whether we have a format for the response entity that is acceptable to the client. You need to indicate what formats you support. The framework does the rest. Normally you will have the same options for all requests, in which case you can use a once object. +## mime_types_supported + +Here you need to indicate which media types you support for responses. One of the entries must be passed to the creation routine for conneg. + +## languages_supported + +Here you need to indicate which languages you support for responses. One of the entries must be passed to the creation routine for conneg. + + +## charsets_supported + +Here you need to indicate which character sets you support for responses. One of the entries must be passed to the creation routine for conneg. + + +## encodings_supported + +Here you need to indicate which compression encodings you support for responses. One of the entries must be passed to the creation routine for conneg. + +## additional_variant_headers + +The framework will write a Vary header if conneg indicates that different formats are supported. This warns caches that they may not be able to use a cached response if the Accept* headers in the request differ. If the author knows that the response may be affected by other request headers in addition to these, then they must be indicated here, so they can be included in a Vary header with the response. + +## predictable_response + +If the response may vary in other ways not predictable from the request headers, then redefine this routine to return True. In that case we will generate a Vary: * header to inform the cache that the response is not necessarily repeatable. + +## matching_etag + +An **ETag** header is a kind of message digest. Clients can use etags to avoid re-fetching responses for unchanged resources, or to avoid updating a resource that may have changed since the client last updated it. +You must implement this routine to test for matches **if and only if** you return non-Void responses for the etag routine. + +## etag + +You are strongly encouraged to return non-Void for this routine. See [Validation Model](http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.3) for more details. + +## modified_since + +You need to implement this. If you do not have information about when a resource was last modified, then return True as a precaution. Of course, you return false for a static resource. + +## treat_as_moved_permanently + +This routine when a PUT request is made to a resource that does not exist. See [PUT](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6) in the HTTP/1.1 specification for why you might want to return zero. 
+ ## Implementing the policies * [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY) * [WSF_PREVIOUS_POLICY](./WSF_PREVIOUS_POLICY) -* [WSF_CACHING_POLICY](./WSF_CACHING_POLICY) +* [WSF_CACHING_POLICY](./WSF_CACHING_POLICY) \ No newline at end of file From a552b8fcfa3fd27c82787df896e9811fb3a96633 Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 09:13:56 -0700 Subject: [PATCH 18/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index 8f958854..e82961ec 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -8,54 +8,54 @@ Now you have to implement each handler. You need to inherit from WSF_SKELETON_HA HTTP/1.1 supports streaming responses (and providing you have configured your server to use a proxy server in WSF_PROXY_USE_POLICY, this framework guarantees you have an HTTP/1.1 client to deal with). It is up to you whether or not you choose to make use of it. If so, then you have to serve the response one chunk at a time (but you could generate it all at once, and slice it up as you go). In this routine you just say whether or not you will be doing this. So the framework n=knows which other routines to call. -## includes_response_entity +### includes_response_entity The response to a DELETE, PUT or POST will include HTTP headers. It may or may not include a body. It is up to you, and this is where you tell the framework. -## conneg +### conneg [The HTTP/1.1 specification](http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html#sec12.1) defines server-driven content negotiation. Based on the Accept* headers in the request, we can determine whether we have a format for the response entity that is acceptable to the client. You need to indicate what formats you support. The framework does the rest. Normally you will have the same options for all requests, in which case you can use a once object. -## mime_types_supported +### mime_types_supported Here you need to indicate which media types you support for responses. One of the entries must be passed to the creation routine for conneg. -## languages_supported +### languages_supported Here you need to indicate which languages you support for responses. One of the entries must be passed to the creation routine for conneg. -## charsets_supported +### charsets_supported Here you need to indicate which character sets you support for responses. One of the entries must be passed to the creation routine for conneg. -## encodings_supported +### encodings_supported Here you need to indicate which compression encodings you support for responses. One of the entries must be passed to the creation routine for conneg. -## additional_variant_headers +### additional_variant_headers The framework will write a Vary header if conneg indicates that different formats are supported. This warns caches that they may not be able to use a cached response if the Accept* headers in the request differ. If the author knows that the response may be affected by other request headers in addition to these, then they must be indicated here, so they can be included in a Vary header with the response. -## predictable_response +### predictable_response If the response may vary in other ways not predictable from the request headers, then redefine this routine to return True. In that case we will generate a Vary: * header to inform the cache that the response is not necessarily repeatable. 
-## matching_etag +### matching_etag An **ETag** header is a kind of message digest. Clients can use etags to avoid re-fetching responses for unchanged resources, or to avoid updating a resource that may have changed since the client last updated it. You must implement this routine to test for matches **if and only if** you return non-Void responses for the etag routine. -## etag +### etag You are strongly encouraged to return non-Void for this routine. See [Validation Model](http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.3) for more details. -## modified_since +### modified_since You need to implement this. If you do not have information about when a resource was last modified, then return True as a precaution. Of course, you return false for a static resource. -## treat_as_moved_permanently +### treat_as_moved_permanently This routine when a PUT request is made to a resource that does not exist. See [PUT](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6) in the HTTP/1.1 specification for why you might want to return zero. From 10caa4c1dfb17ffef608729b3428a7ae5b9f897f Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 09:48:38 -0700 Subject: [PATCH 19/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index e82961ec..461e2a1b 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -7,6 +7,7 @@ Now you have to implement each handler. You need to inherit from WSF_SKELETON_HA ### is_chunking HTTP/1.1 supports streaming responses (and providing you have configured your server to use a proxy server in WSF_PROXY_USE_POLICY, this framework guarantees you have an HTTP/1.1 client to deal with). It is up to you whether or not you choose to make use of it. If so, then you have to serve the response one chunk at a time (but you could generate it all at once, and slice it up as you go). In this routine you just say whether or not you will be doing this. So the framework n=knows which other routines to call. +Currently we only support chunking for GET or HEAD routines. This might change in the future, so if you intend to return True, you should call req.is_get_head_request_method. ### includes_response_entity @@ -59,6 +60,33 @@ You need to implement this. If you do not have information about when a resource This routine when a PUT request is made to a resource that does not exist. See [PUT](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6) in the HTTP/1.1 specification for why you might want to return zero. +## allow_post_to_missing_resource + +POST requests are normally made to an existing entity. However it is possible to create new resources using a POST, if the server allows it. This is where you make that decision. + +If you return True, and the resource is created, a 201 Created response will be returned. + +## content_length + +If you are not streaming the result, the the HTTP protocol requires that the length of the entity is known. You need to implement this routine to provide that information. + +## finished + +If you are streaming the response, then you need to tell the framework when the last chunk has been sent. +To implement this routine, you will probably need to call req.set_execution_variable (some-name, True) in ensure_content_avaiable and generate_next_chunk, and call attached {BOOLEAN} req.execution_variable (some-name) in this routine. 
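A minimal sketch of the execution-variable pattern just described for finished follows. The argument list and the variable name ("Order_streaming_finished") are assumptions made for illustration; only execution_variable and set_execution_variable come from the text above.

```eiffel
finished (req: WSF_REQUEST): BOOLEAN
		-- Has the last chunk been generated?
		-- Sketch only: the real signature in WSF_SKELETON_HANDLER may differ.
	do
			-- "Order_streaming_finished" is a hypothetical name; it would be set to
			-- True by ensure_content_available/generate_next_chunk after the last chunk.
		Result := attached {BOOLEAN} req.execution_variable ("Order_streaming_finished") as l_done and then l_done
	end
```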
+ +## description + +This is for the automatically generated documentation that the framework will generate in response to a request that you have not mapped into an handler. + +## delete + +This routine is for carrying out a DELETE request to a resource. If it is valid to delete the named resource, then you should either go ahead and do it, or queue a deletion request somewhere (if you do that then you will probably need to call req.set_execution_variable (some-name-or-other, True). Otherwise you should call req.error_handler.add_custom_error to explain why the DELETE could not proceed (you should also do this if the attempt to delete the resource fails). +Of course, if you have not mapped any DELETE requests to the URI space of this handler, then you can just do nothing. + +## delete_queued + +If in the delete routine, you elected to queue the request, then you need to return True here. You will probably need to check the execution variable you set in the delete routine. ## Implementing the policies * [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY) From 7e4f51a7ceb4e0a774bfc2e0a94871ba74dea8bf Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 09:58:42 -0700 Subject: [PATCH 20/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index 461e2a1b..cd09a20d 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -60,33 +60,34 @@ You need to implement this. If you do not have information about when a resource This routine when a PUT request is made to a resource that does not exist. See [PUT](http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6) in the HTTP/1.1 specification for why you might want to return zero. -## allow_post_to_missing_resource +### allow_post_to_missing_resource POST requests are normally made to an existing entity. However it is possible to create new resources using a POST, if the server allows it. This is where you make that decision. If you return True, and the resource is created, a 201 Created response will be returned. -## content_length +### content_length If you are not streaming the result, the the HTTP protocol requires that the length of the entity is known. You need to implement this routine to provide that information. -## finished +### finished If you are streaming the response, then you need to tell the framework when the last chunk has been sent. To implement this routine, you will probably need to call req.set_execution_variable (some-name, True) in ensure_content_avaiable and generate_next_chunk, and call attached {BOOLEAN} req.execution_variable (some-name) in this routine. -## description +### description This is for the automatically generated documentation that the framework will generate in response to a request that you have not mapped into an handler. -## delete +### delete This routine is for carrying out a DELETE request to a resource. If it is valid to delete the named resource, then you should either go ahead and do it, or queue a deletion request somewhere (if you do that then you will probably need to call req.set_execution_variable (some-name-or-other, True). Otherwise you should call req.error_handler.add_custom_error to explain why the DELETE could not proceed (you should also do this if the attempt to delete the resource fails). Of course, if you have not mapped any DELETE requests to the URI space of this handler, then you can just do nothing. 
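A rough sketch of a delete implementation along the lines just described is shown below. The argument list and the execution-variable name are illustrative assumptions; only path_parameter, set_execution_variable and error_handler.add_custom_error are taken from this page, and the argument list of add_custom_error is itself an assumption.

```eiffel
delete (req: WSF_REQUEST)
		-- Sketch only: the real signature declared in WSF_SKELETON_HANDLER may differ.
	do
		if attached req.path_parameter ("orderid") as l_id then
				-- A real handler would check that the order exists and then delete it
				-- or queue the deletion; here we only record that it was queued.
			req.set_execution_variable ("Order_delete_queued", True)
		else
				-- Explain why the DELETE could not proceed.
			req.error_handler.add_custom_error (404, "Unknown order", "No such order exists.")
		end
	end
```

delete_queued would then read the "Order_delete_queued" variable back, as described above.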
-## delete_queued +### delete_queued If in the delete routine, you elected to queue the request, then you need to return True here. You will probably need to check the execution variable you set in the delete routine. + ## Implementing the policies * [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY) From 2415a57ab08d9eb3c2d85e10d233dcbc7310a73c Mon Sep 17 00:00:00 2001 From: colin-adams Date: Wed, 7 Aug 2013 23:30:49 -0700 Subject: [PATCH 21/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index cd09a20d..ce845fd8 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -88,6 +88,10 @@ Of course, if you have not mapped any DELETE requests to the URI space of this h If in the delete routine, you elected to queue the request, then you need to return True here. You will probably need to check the execution variable you set in the delete routine. +### deleted + +If delete_queued returns False, then deleted needs to indicate whether or not the delete succeeded. A default implementation is provided that should be satisfactory. + ## Implementing the policies * [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY) From bc976c37b14bd2398e094151660c6dcf9f87c3b4 Mon Sep 17 00:00:00 2001 From: colin-adams Date: Thu, 8 Aug 2013 00:26:56 -0700 Subject: [PATCH 22/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 32 ++++++++++++++++++++++++++++++-- 1 file changed, 30 insertions(+), 2 deletions(-) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index ce845fd8..be116a37 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -88,9 +88,37 @@ Of course, if you have not mapped any DELETE requests to the URI space of this h If in the delete routine, you elected to queue the request, then you need to return True here. You will probably need to check the execution variable you set in the delete routine. -### deleted +### ensure_content_available -If delete_queued returns False, then deleted needs to indicate whether or not the delete succeeded. A default implementation is provided that should be satisfactory. +This routine is called for GET and DELETE (when a entity is provided in the response) processing. It's purpose is to make the text of the entity (body of the response) available for future routines (if is_chunking is true, then only the first chunk needs to be made available, although if you only serve, as opposed to generate, the result in chunks, then you will make the entire entity available here). This is necessary so that we can compute the length before we start to serve the response. You would normally save it in an execution variable on the request object (as ORDER_HANDLER does). Note that this usage of execution variables ensures your routines can successfully cope with simultaneous requests. If you encounter a problem generating the content, then add an error to req.error_handler. + +As well as the request object, we provide the results of content negotiation, so you can generate the entity in the agreed format. If you only support one format (i.e. all of mime_types_supported, charsets_supported, encodings_supported and languages_supported are one-element lists), then you are guaranteed that this is what you are being asked for, and so you can ignore them. + +### content + +When not streaming, this routine provides the entity to the framework (for GET or DELETE). 
Normally you would just access the execution variable that you set in ensure_content_available. Again, the results of content negotiation are made available, but you probably don't need them at this stage. If you only stream responses (for GET), and if you don't support DELETE, then you don't need to do anything here.
+
+### generate_next_chunk
+
+When streaming the response, this routine is called to enable you to generate chunks beyond the first, so that you can incrementally generate the response entity. If you generated the entire response entity in
+ensure_content_available, then you do nothing here. Otherwise, you will generate the next chunk, and save it in the same execution variable that you use in ensure_content_available (or add an error to req.error_handler). If you don't support streaming, then you don't need to do anything here.
+
+### next_chunk
+
+When streaming the response, the framework calls this routine to provide the contents of each generated chunk. If you generated the entire response entity in ensure_content_available, then you need to slice it in this routine (you will have to keep track of where you are with execution variables). If instead you generate the response incrementally, then your task is much easier - you just access the execution variable saved in ensure_content_available/generate_next_chunk.
+As in all these content-serving routines, we provide the results of content negotiation. This might be necessary, for instance, if you were compressing an incrementally generated response (it might be more convenient to do the compression here rather than in both ensure_content_available and generate_next_chunk).
+
+### read_entity
+
+This is called for PUT and POST processing, to read the entity provided in the request. A default implementation is provided. This assumes that no decoding (e.g. decompression or character set conversion) is necessary. It saves the entity in the execution variable REQUEST_ENTITY.
+
+Currently the framework provides very little support for PUT and POST requests (so you may well need to redefine this routine). There are several reasons for this:
+
+1. I personally don't have much experience with PUT and POST.
+1. It has taken a long time to develop this framework, and to some extent I was working in the dark (I couldn't check what I was doing until the entire framework was written - it wouldn't even compile before then).
+1. The idea for the framework came from a code review process on servers I had written for the company that I work for. I had acquired a lot of knowledge of the HTTP protocol in the process, and some of it showed in the code that I had written. It was thought that it would be a good idea if this knowledge were encapsulated in Eiffel, so other developers would be able to write servers without such knowledge. So this framework has been developed in company time. However, at present, we are only using GET requests.
+
+Experience with converting the restbucksCRUD example to use the framework shows that it is certainly possible to do POST and PUT processing with it. But enhancements are needed, especially in the area of decoding the request entity. 
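Returning to the non-streaming GET path described above, ensure_content_available and content typically cooperate through an execution variable, roughly as in the sketch below. The signatures, the variable name and the literal representation are assumptions (the real routines also receive the content-negotiation results); ORDER_HANDLER in the example application shows the genuine pattern.

```eiffel
ensure_content_available (req: WSF_REQUEST)
		-- Sketch only: make the response entity available for later routines.
	do
		if not attached req.execution_variable ("Order_representation") then
				-- A real handler would build the representation of the requested resource here.
			req.set_execution_variable ("Order_representation", "<order><status>ready</status></order>")
		end
	end

content (req: WSF_REQUEST): STRING
		-- Entity previously stored by `ensure_content_available'.
	do
		if attached {STRING} req.execution_variable ("Order_representation") as l_text then
			Result := l_text
		else
			create Result.make_empty
		end
	end
```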
## Implementing the policies From e9013e548b56c1e64db7027187b3a8b8f93c862b Mon Sep 17 00:00:00 2001 From: colin-adams Date: Thu, 8 Aug 2013 00:56:14 -0700 Subject: [PATCH 23/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index be116a37..ab456856 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -120,6 +120,34 @@ Currently the framework provides very little support for PUT and POST requests ( Experience with converting the restbucksCRUD example to use the framework, shows that it is certainly possible to do POST and PUT processing with it. But enhancements are needed, especially in the area of decoding the request entity. +### is_entity_too_large + +If your application has limits on the size of entities that it can store, then you implement them here. + +### check_content_headers + +This is called after is_entity_too_large returns False. You are supposed to check the following request headers, and take any appropriate actions (such as setting an error, decompression the entity, or converting it to a different character set): + +* Content-Encoding +* Content-Language +* Content-MD5 +* Content-Range +* Content-Type + +At the moment, your duty is to set the execution variable CONTENT_CHECK_CODE to zero, or an HTTP error status code. A future enhancement of the framework might be to provide more support for this. + +### content_check_code + +This simply accesses the execution variable CONTENT_CHECK_CODE set in check_content_headers. if you want to use some other mechanism, then you can redefine this routine. + +### create_resource + +This routine is called when a PUT request is made with a URI that refers to a resource that does not exist (PUT is normally used for updating an existing resource), and you have already decided to allow this. +In this routine you have the responsibilities of: + +1. Creating the resource using the entity in REQUEST_ENTITY (or some decoded version that you have stored elsewhere). +1. Writing the entire response yourself (as I said before, support for PUT and POST processing is poor at present), including setting the status code of 201 Created or 303 See Other or 500 Internal server error). + ## Implementing the policies * [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY) From 2dac1ff6c9f187a1d21ef8840a127b3937afdd9d Mon Sep 17 00:00:00 2001 From: colin-adams Date: Thu, 8 Aug 2013 01:25:29 -0700 Subject: [PATCH 24/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index ab456856..922ed6cf 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -148,6 +148,30 @@ In this routine you have the responsibilities of: 1. Creating the resource using the entity in REQUEST_ENTITY (or some decoded version that you have stored elsewhere). 1. Writing the entire response yourself (as I said before, support for PUT and POST processing is poor at present), including setting the status code of 201 Created or 303 See Other or 500 Internal server error). +### append_resource + +This routine is called for POST requests on an existing resource (normal usage). + +In this routine you have the responsibilities of: + +1. 
Storing the entity from REQUEST_ENTITY (or some decoded version that you have stored elsewhere), or whatever other action is appropriate for the semantics of POST requests to this URI. +1. Writing the entire response yourself (as I said before, support for PUT and POST processing is poor at present), including setting the status code of 200 OK, 204 No Content, 303 See Other or 500 Internal server error). + +### check_conflict + +This is called for a normal (updating) PUT request. You have to check to see if the current state of the resource makes updating impossible. If so, then you need to write the entire response with a status code of 409 Conflict, and set the execution variable CONFLICT_CHECK_CODE to 409. +Otherwise you just set the execution variable CONFLICT_CHECK_CODE to 0. + +See [the HTTP/1.1 specification](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10) for when you are allowed to use the 409 response, and what to write in the response entity. If this is not appropriate then a 500 Internal server error would be more appropriate (and set CONFLICT_CHECK_CODE to 500 - the framework only tests for non-zero). + +### conflict_check_code + +This is implemented to check CONFLICT_CHECK_CODE from the previous routine. If you choose to use a different mechanism, then you need to redefine this. + +### check_request + +This is called for PUT and POST requests. You need to check that the request entity (available in the execution variable REQUEST_ENTITY) is valid for the semantics of the request URI. You should set the execution variable REQUEST_CHECK_CODE to 0 if it is OK. If not, set it to 400 and write the full response, including a status code of 400 Bad Request. + ## Implementing the policies * [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY) From 9c8a034a041d6277e9570174ed4eb1f843bfce0a Mon Sep 17 00:00:00 2001 From: colin-adams Date: Thu, 8 Aug 2013 01:32:31 -0700 Subject: [PATCH 25/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index 922ed6cf..252d6bdf 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -172,6 +172,14 @@ This is implemented to check CONFLICT_CHECK_CODE from the previous routine. If y This is called for PUT and POST requests. You need to check that the request entity (available in the execution variable REQUEST_ENTITY) is valid for the semantics of the request URI. You should set the execution variable REQUEST_CHECK_CODE to 0 if it is OK. If not, set it to 400 and write the full response, including a status code of 400 Bad Request. +### request_check_code + +This routine just checks REQUEST_CHECK_CODE. if you choose to use a different mechanism, then redefine it. + +### update_resource + +This routine is called for a normal (updating) PUT request. You have to update the state of the resource using the entity saved in the execution environment variable REQUEST_ENTITY (or more likely elsewhere - see what ORDER_HANDLER does). Then write the entire response including a status code of 204 No Content or 500 Internal server error. 
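To make the check_request/REQUEST_CHECK_CODE convention described above concrete, a sketch might look like the following. The signature and the trivial validation test are assumptions; only the REQUEST_ENTITY and REQUEST_CHECK_CODE execution variables come from this page, and writing the actual 400 response is omitted.

```eiffel
check_request (req: WSF_REQUEST)
		-- Sketch only: the real signature in WSF_SKELETON_HANDLER may differ.
	do
		if attached {STRING} req.execution_variable ("REQUEST_ENTITY") as l_entity and then not l_entity.is_empty then
				-- A real handler would parse and validate the entity here.
			req.set_execution_variable ("REQUEST_CHECK_CODE", 0)
		else
			req.set_execution_variable ("REQUEST_CHECK_CODE", 400)
				-- Write the full 400 Bad Request response here (omitted from this sketch).
		end
	end
```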
+ ## Implementing the policies * [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY) From bbbf958d7d10d73901b838c6cd9ed620ac314f2d Mon Sep 17 00:00:00 2001 From: colin-adams Date: Thu, 8 Aug 2013 02:41:27 -0700 Subject: [PATCH 26/35] Updated Using the policy driven framework (markdown) --- Using-the-policy-driven-framework.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/Using-the-policy-driven-framework.md b/Using-the-policy-driven-framework.md index 2999ec9a..af97acde 100644 --- a/Using-the-policy-driven-framework.md +++ b/Using-the-policy-driven-framework.md @@ -1,5 +1,7 @@ # Using the policy driven framework +**This describes a new facility that is not yet in the EWF release** + ## Introduction The aim of the policy-driven framework is to allow authors of web-servers to concentrate on the business logic (e.g., in the case of a GET request, generating the content), without having to worry about the details of the HTTP protocol (such as headers and response codes). However, there are so many possibilities in the HTTP protocol, that it is impossible to correctly guess what to do in all cases. Therefore the author has to supply policy decisions to the framework, in areas such as caching decisions. These are implemented as a set of deferred classes for which the author needs to provide effective implementations. From fe971d07ec6760bce9c7e5214228c39c1c67b05b Mon Sep 17 00:00:00 2001 From: colin-adams Date: Sun, 11 Aug 2013 23:55:50 -0700 Subject: [PATCH 27/35] Updated Using the policy driven framework (markdown) --- Using-the-policy-driven-framework.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Using-the-policy-driven-framework.md b/Using-the-policy-driven-framework.md index af97acde..bf85c9ef 100644 --- a/Using-the-policy-driven-framework.md +++ b/Using-the-policy-driven-framework.md @@ -6,7 +6,7 @@ The aim of the policy-driven framework is to allow authors of web-servers to concentrate on the business logic (e.g., in the case of a GET request, generating the content), without having to worry about the details of the HTTP protocol (such as headers and response codes). However, there are so many possibilities in the HTTP protocol, that it is impossible to correctly guess what to do in all cases. Therefore the author has to supply policy decisions to the framework, in areas such as caching decisions. These are implemented as a set of deferred classes for which the author needs to provide effective implementations. -We aim to provide unconditional compliance [See HTTP/1.1 specification](http://www.w3.org/Protocols/rfc2616/rfc2616-sec1.html#sec1) for you. +We aim to provide unconditional compliance [See HTTP/1.1 specification](http://www.w3.org/Protocols/rfc2616/rfc2616-sec1.html#sec1) for you. Note that byte-ranges are not yet supported. ## Mapping the URI space From 35224b1b171bfe06a43441feb5dfa9ae8b6d062f Mon Sep 17 00:00:00 2001 From: colin-adams Date: Mon, 12 Aug 2013 01:45:58 -0700 Subject: [PATCH 28/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index 252d6bdf..933c3a09 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -8,6 +8,7 @@ Now you have to implement each handler. You need to inherit from WSF_SKELETON_HA HTTP/1.1 supports streaming responses (and providing you have configured your server to use a proxy server in WSF_PROXY_USE_POLICY, this framework guarantees you have an HTTP/1.1 client to deal with). 
It is up to you whether or not you choose to make use of it. If so, then you have to serve the response one chunk at a time (but you could generate it all at once, and slice it up as you go). In this routine you just say whether or not you will be doing this. So the framework n=knows which other routines to call. Currently we only support chunking for GET or HEAD routines. This might change in the future, so if you intend to return True, you should call req.is_get_head_request_method. +Note that currently this framework does not support writing a trailer. ### includes_response_entity @@ -47,10 +48,14 @@ If the response may vary in other ways not predictable from the request headers, An **ETag** header is a kind of message digest. Clients can use etags to avoid re-fetching responses for unchanged resources, or to avoid updating a resource that may have changed since the client last updated it. You must implement this routine to test for matches **if and only if** you return non-Void responses for the etag routine. +Note that if you support multiple representations through content negotiation, then etags are dependent upon +the selected variant. Therefore you will need to have the response entity available for this routine. This can be done in check_resource_exists. ### etag You are strongly encouraged to return non-Void for this routine. See [Validation Model](http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.3) for more details. +Note that if you support multiple representations through content negotiation, then etags are dependent upon +the selected variant. Therefore you will need to have the response entity available for this routine. This can be done in check_resource_exists. ### modified_since @@ -94,6 +99,9 @@ This routine is called for GET and DELETE (when a entity is provided in the resp As well as the request object, we provide the results of content negotiation, so you can generate the entity in the agreed format. If you only support one format (i.e. all of mime_types_supported, charsets_supported, encodings_supported and languages_supported are one-element lists), then you are guaranteed that this is what you are being asked for, and so you can ignore them. +Note that if you support multiple representations through content negotiation, then etags are dependent upon +the selected variant. Therefore you will need to have the response entity available for this routine. In such cases, this will have to be done in check_resource_exists, rather than here, as this routine is called later on. + ### content When not streaming, this routine provides the entity to the framework (for GET or DELETE). Normally you would just access the execution variable that you set in ensure_content_available. Again, the results of content negotiation are made available, but you probably don't need them at this stage. If you only stream responses (for GET), and if you don't support DELETE, then you don't need to do anything here. From bf5bae803d8c8c30c772e4bf8e8fa3daabaa2398 Mon Sep 17 00:00:00 2001 From: colin-adams Date: Mon, 12 Aug 2013 01:49:11 -0700 Subject: [PATCH 29/35] Updated Writing the handlers (markdown) --- Writing-the-handlers.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md index 933c3a09..99c557c1 100644 --- a/Writing-the-handlers.md +++ b/Writing-the-handlers.md @@ -4,6 +4,12 @@ Now you have to implement each handler. 
## Implementing the routines declared directly in WSF_SKELETON_HANDLER

+### check_resource_exists
+
+Here you check for the existence of the resource named by the request URI. If it exists, then you need to call set_resource_exists on the helper argument.
+Note that if you support multiple representations through content negotiation, then etags are dependent upon
+the selected variant. If you support etags, then you will need to make the response entity available at this point, rather than in ensure_content_available.
+
### is_chunking

HTTP/1.1 supports streaming responses (and providing you have configured your server to use a proxy server in WSF_PROXY_USE_POLICY, this framework guarantees you have an HTTP/1.1 client to deal with). It is up to you whether or not you choose to make use of it. If so, then you have to serve the response one chunk at a time (but you could generate it all at once, and slice it up as you go). In this routine you just say whether or not you will be doing this, so that the framework knows which other routines to call.

From 9c8bc59224de4318558a92d66f54b47631459654 Mon Sep 17 00:00:00 2001
From: colin-adams
Date: Tue, 13 Aug 2013 00:54:45 -0700
Subject: [PATCH 30/35] Updated Writing the handlers (markdown)

---
Writing-the-handlers.md | 16 ++++++++++++++++
1 file changed, 16 insertions(+)

diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md
index 99c557c1..6af6d39f 100644
--- a/Writing-the-handlers.md
+++ b/Writing-the-handlers.md
@@ -2,6 +2,22 @@
Now you have to implement each handler. You need to inherit from WSF_SKELETON_HANDLER (as ORDER_HANDLER does). This involves implementing a lot of deferred routines. There are other routines for which default implementations are provided, which you might want to override. This applies to both routines defined in this class, and those declared in the three policy classes from which it inherits.

+## Communicating between routines
+
+Depending upon the connector (Nino, CGI, FastCGI etc.) that you are using, your handler may be invoked concurrently for multiple requests. Therefore it is unsafe to save state in normal attributes. WSF_REQUEST has a pair of getter/setter routines, execution_variable/set_execution_variable, which you can use for this purpose.
+Internally, the framework uses the following execution variable names, so you must avoid them:
+
+1. REQUEST_ENTITY
+1. NEGOTIATED_LANGUAGE
+1. NEGOTIATED_CHARSET
+1. NEGOTIATED_MEDIA_TYPE
+1. NEGOTIATED_ENCODING
+1. NEGOTIATED_HTTP_HEADER
+
+The first one makes the request entity from a PUT or POST request available to your routines.
+
+The next four make the results of content negotiation available to your routines. The last one makes an HTTP_HEADER available to your routines. You should use this rather than create your own, as it may contain a **Vary** header as a by-product of content negotiation.
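For illustration, here is a minimal Eiffel sketch of the execution-variable mechanism just described: a pair of helper features a handler could use to stash per-request state and read it back in a later routine. The ORDER type and the "MY_APP_ORDER" name are invented for the example, and the exact argument and result types of execution_variable / set_execution_variable are assumed rather than quoted from the library.

```eiffel
feature -- Per-request state (illustrative fragment of a handler class)

	store_order (req: WSF_REQUEST; an_order: ORDER)
			-- Remember `an_order' for later routines serving the same request.
			-- An execution variable, unlike a normal attribute, stays safe when
			-- the handler is invoked concurrently for several requests.
		do
			req.set_execution_variable ("MY_APP_ORDER", an_order)
		end

	stored_order (req: WSF_REQUEST): detachable ORDER
			-- Order previously remembered by `store_order', if any.
		do
			if attached {ORDER} req.execution_variable ("MY_APP_ORDER") as l_order then
				Result := l_order
			end
		end
```

The same pattern (an object test on the stored value) applies when reading the framework's own variables, such as REQUEST_ENTITY.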
## Implementing the routines declared directly in WSF_SKELETON_HANDLER

### check_resource_exists

From 123fc8252e62bf6edcbf3921be2d0e828c8dfa8c Mon Sep 17 00:00:00 2001
From: colin-adams
Date: Tue, 13 Aug 2013 08:24:05 -0700
Subject: [PATCH 31/35] Updated Writing the handlers (markdown)

---
Writing-the-handlers.md | 2 ++
1 file changed, 2 insertions(+)

diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md
index 6af6d39f..6bc01145 100644
--- a/Writing-the-handlers.md
+++ b/Writing-the-handlers.md
@@ -18,6 +18,8 @@ The first one makes the request entity from a PUT or POST request available to
The next four make the results of content negotiation available to your routines. The last one makes an HTTP_HEADER available to your routines. You should use this rather than create your own, as it may contain a **Vary** header as a by-product of content negotiation.

+All six names are defined as constants in WSF_SKELETON_HANDLER, to make it easier for you to refer to them.
+
## Implementing the routines declared directly in WSF_SKELETON_HANDLER

### check_resource_exists

From 5e62d82e9ce92acc15993d26b96129115c1794b1 Mon Sep 17 00:00:00 2001
From: colin-adams
Date: Wed, 14 Aug 2013 02:22:22 -0700
Subject: [PATCH 32/35] Updated Wsf previous policy (markdown)

---
Wsf-previous-policy.md | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/Wsf-previous-policy.md b/Wsf-previous-policy.md
index 37c965c5..1ba251d0 100644
--- a/Wsf-previous-policy.md
+++ b/Wsf-previous-policy.md
@@ -1,20 +1,19 @@
-# Implementing WSF_PREVIOUS_POLICY
+# WSF_PREVIOUS_POLICY

-This class provides routines which enable the programmer to encode knowledge about resources that have moved (either temporarily, or permanently), or have been permanently removed. There are four routines, but only one is actually deferred.
+This class deals with resources that have moved or gone. The default assumes no such resources. It exists as a separate class, rather than having the routines directly in WSF_SKELETON_HANDLER, as sub-classing it may be convenient for an organisation.

## resource_previously_existed

-By default, this routine says that currently doesn't exist, never has existed. You need to redefine this routine to return True for any URIs that you want to indicate used to exist, and either no longer do so, or have moved to another location.
+Redefining this routine is always necessary if you want to deal with any previous resources.

## resource_moved_permanently

-If you have indicated that a resource previously existed, then it may have moved permanently, temporarily, or just ceased to exist. In the first case, you need to redefine this routine to return True for such a resource.
+Redefine this routine for any resources that have permanently changed location. The framework will generate a 301 Moved Permanently response, and the user agent will automatically redirect the request to (one of) the new location(s) you provide. The user agent will use the new URI for future requests.
+
## resource_moved_temporarily

-If you have indicated that a resource previously existed, then it may have moved permanently, temporarily, or just ceased to exist. In the second case, you need to redefine this routine to return True for such a resource.
+This is for resources that have only been moved for a short period. The framework will generate a 302 Found response. The only substantial difference between this and resource_moved_permanently is that the agent will use the old URI for future requests.
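As a hedged illustration of these queries, a handler whose retired /old-menu URI has been permanently replaced by /menu might redefine them along the following lines. The request argument, the use of path_info, and the URIs are assumptions made for the example; previous_location (described next) would then supply the replacement URI(s).

```eiffel
feature -- Previous resources (illustrative fragment of a handler class)

	resource_previously_existed (req: WSF_REQUEST): BOOLEAN
			-- Did the requested URI exist in an earlier incarnation of the service?
			-- `/old-menu' is an invented example of a retired URI.
		do
			Result := req.path_info.same_string ("/old-menu")
		end

	resource_moved_permanently (req: WSF_REQUEST): BOOLEAN
			-- The retired URI has a permanent replacement, so the framework
			-- should answer with 301 Moved Permanently.
		do
			Result := resource_previously_existed (req)
		end
```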
## previous_location

-You need to implement this routine. It should provide the locations where a resource has moved to. There must be at least one such location. If more than one is provided, then the first one is considered primary.
-If the preconditions for this routine are never met (as is the case by default), then just return an empty list.
\ No newline at end of file
+When you redefine resource_moved_permanently or resource_moved_temporarily, the framework will generate a Location header for the new URI, and a hypertext document linking to the new URI(s). You **must** redefine this routine to provide those locations (the first one you provide will be in the Location header).
\ No newline at end of file

From bcdfcdd468bd948453b1c83369edf46be477de90 Mon Sep 17 00:00:00 2001
From: colin-adams
Date: Wed, 14 Aug 2013 02:23:07 -0700
Subject: [PATCH 33/35] Updated Writing the handlers (markdown)

---
Writing-the-handlers.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md
index 6bc01145..37c43630 100644
--- a/Writing-the-handlers.md
+++ b/Writing-the-handlers.md
@@ -215,5 +215,5 @@ This routine is called for a normal (updating) PUT request. You have to update t
## Implementing the policies

* [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY)
-* [WSF_PREVIOUS_POLICY](./WSF_PREVIOUS_POLICY)
+* [WSF_PREVIOUS_POLICY](./Wsf-previous-policy)
* [WSF_CACHING_POLICY](./WSF_CACHING_POLICY)
\ No newline at end of file

From aff7948c65088d58eb208ddf91bffd04cb39d284 Mon Sep 17 00:00:00 2001
From: colin-adams
Date: Wed, 14 Aug 2013 02:23:53 -0700
Subject: [PATCH 34/35] Updated Writing the handlers (markdown)

---
Writing-the-handlers.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md
index 37c43630..50770637 100644
--- a/Writing-the-handlers.md
+++ b/Writing-the-handlers.md
@@ -216,4 +216,4 @@ This routine is called for a normal (updating) PUT request. You have to update t
* [WSF_OPTIONS_POLICY](./WSF_OPTIONS_POLICY)
* [WSF_PREVIOUS_POLICY](./Wsf-previous-policy)
-* [WSF_CACHING_POLICY](./WSF_CACHING_POLICY)
+* [WSF_CACHING_POLICY](./Wsf-caching-policy)
\ No newline at end of file

From b2d9fe1a4b7e9d8ff60b9a2ea58a649de17857c8 Mon Sep 17 00:00:00 2001
From: colin-adams
Date: Wed, 14 Aug 2013 02:47:12 -0700
Subject: [PATCH 35/35] Updated Writing the handlers (markdown)

---
Writing-the-handlers.md | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/Writing-the-handlers.md b/Writing-the-handlers.md
index 50770637..87e68a2c 100644
--- a/Writing-the-handlers.md
+++ b/Writing-the-handlers.md
@@ -13,12 +13,16 @@ Internally, the framework uses the following execution variable names, so you mu
1. NEGOTIATED_MEDIA_TYPE
1. NEGOTIATED_ENCODING
1. NEGOTIATED_HTTP_HEADER
+1. CONFLICT_CHECK_CODE
+1. CONTENT_CHECK_CODE
+1. REQUEST_CHECK_CODE

The first one makes the request entity from a PUT or POST request available to your routines.

-The next four make the results of content negotiation available to your routines. The last one makes an HTTP_HEADER available to your routines.
+The next four make the results of content negotiation available to your routines. The sixth one makes an HTTP_HEADER available to your routines. You should use this rather than create your own, as it may contain a **Vary** header as a by-product of content negotiation.
+The last three are for reporting the results from check_conflict, check_content and check_request.
-All six names are defined as constants in WSF_SKELETON_HANDLER, to make it easier for you to refer to them.
+All names are defined as constants in WSF_SKELETON_HANDLER, to make it easier for you to refer to them.

## Implementing the routines declared directly in WSF_SKELETON_HANDLER
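Finally, a short sketch of how a handler might read negotiation results back out of the request. The literal names come from the list above, but the types of the stored values and the corresponding constant names in WSF_SKELETON_HANDLER are assumed here; in real code, prefer the class constants over string literals.

```eiffel
feature -- Negotiation results (illustrative fragment of a handler class)

	negotiated_media_type (req: WSF_REQUEST): detachable READABLE_STRING_8
			-- Media type agreed by content negotiation, if the framework stored one
			-- (assumed here to be kept as a string under "NEGOTIATED_MEDIA_TYPE").
		do
			if attached {READABLE_STRING_8} req.execution_variable ("NEGOTIATED_MEDIA_TYPE") as l_type then
				Result := l_type
			end
		end

	negotiated_header (req: WSF_REQUEST): detachable HTTP_HEADER
			-- Header object prepared by the framework; reuse it rather than creating
			-- your own, since it may already carry a Vary header.
		do
			if attached {HTTP_HEADER} req.execution_variable ("NEGOTIATED_HTTP_HEADER") as l_header then
				Result := l_header
			end
		end
```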