Building Node Structures

Introduction

Node structures are built by server-side processes using various built-in functions to retrieve, aggregate and group managed instances. A node structure is built for an application-specific purpose and its make-up reflects the relationships between constituent typedef instances. Unlike static languages where relationships are often bound into the code, in Inq they are only loosely defined by the configuration of native and foreign keys. Building a particular structure at run-time establishes a set of relationships at that time and for a particular purpose.

Managed instances were introduced in the section on typedefs, and the keys used to retrieve them in the section on I/O keys.

The read Function

Managed instances are read from their I/O source by applying a value of a particular key using the read function. It has the following syntax:

"read" "("
        <type-reference> ","
        <key-value>
        ("," ( "keyname" "=" <expression>
             | "target"  "=" <expression>
             | "alias"   "=" <expression>
             | "setname" "=" <expression>
             | "merge"   "=" <expression>
             | "rowname" "=" <expression>
             | "child"   "=" <expression>
             | "max"     "=" <expression>
             )
         ...
        )
       ")"

<type-reference> = ( [ <name-space>":" ]<identifier>
                   | "typeof" "(" <expression> ")"
                   )

<name-space> = ( <package-specification>
               | <package-import-alias>
               )

<key-value> = <expression>

The read function takes at least two arguments:

  1. The typedef to read - this can either be a symbolic reference to the typedef name, with an explicit package if required, or the typedef carried by an instance, returned by the typeof function.
  2. A key value - this can be any map that contains values by the same names as the fields of the key being used. There are a number of ways in which the desired key can be specified and these are discussed further below.

If the key being used is unique then read returns a single instance of the specified typedef, or null when no instance satisfies the key value. For non-unique keys, read creates and returns a node set, described further below.
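
For example, here is a minimal sketch of a unique-key read and its null check, anticipating the Cpty typedef and key examples of the next section (the field name and value are assumed):

any k = new(data.static:Cpty.pkey);
k.Cpty = "ACME";                       // assumed key field and value
any cpty = read(data.static:Cpty, k);  // returns the instance or null
if (isnull(cpty))
  writeln($catalog.system.out, "no such counterparty");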

Specifying The Key

The term key in the context of managed instance retrieval means any one of the keys defined within a typedef. A key value is created using the new function and specifying the key name. Referring to the keys defined in the illustration from the I/O Keys section, here are some examples:

any k = new(Cpty.ByEntityType); // Creates an uninitialised value of key "ByEntityType"

any pk = new(Cpty.pkey, ik); // Creates an instance of the primary key.

In the second example, the instance of the primary key (which is always named pkey) is initialised by the value at $stack.ik. Note that key values are maps comprising the fields of the key definition. Provided a key value contains the required fields, any map will suffice; it does not have to have been created using new.
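
As a sketch of this, assuming Cpty.pkey comprises the single field Cpty, an ordinary map can serve as the key value:

smap k;
string k.Cpty = "ACME";     // the field pkey requires (assumed)
read(data.static:Cpty, k);  // k carries no key name, so pkey is used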

Referring back to the definition of the read function, the key read will apply is determined in one of the following ways:

  1. If the key was created with new then the key carries its name.
  2. The key can be specified explicitly with the keyname argument, which must evaluate to a string. If present, the keyname argument overrides any implicit name carried in the key value.
  3. If neither of the above apply the primary key is used.
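
As a sketch of case 2, a map that carries no key name (or a different one) can be applied with an explicitly named key, provided it contains the fields that key requires:

read(data.static:Cpty, k, keyname="ByEntityType");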

If the typedef Cpty is defined in the package data.static and given a key value k, here is an example of the simplest form of read:

read(data.static:Cpty, k);

The value k is often an instance of the typedef itself, received from another Inq environment. In this case, the purpose of the read is to retrieve this environment's managed instance. As we saw when discussing a generic mutation service, k is used as a primary key value. Not having been created as an actual key, read defaults to using the typedef's primary key, as in case 3 above.

Note
Even though an instance carries its typedef, there is no single-argument invocation of read. Instead, the typedef can be returned with the typeof function, leading to the generic and fairly economical read(typeof(k), k).

The target Argument

The read function is always the first step in building a structure of managed instances, which overall is a working set of data required for some application functionality. As such, read is told where to put the node(s) it will marshal - this is the purpose of the optional target argument.

If absent the default target is the stack. When all we are interested in is reading a single instance by its primary or other unique key then read usage is often the simple case shown above. Referring to the Cpty example, assuming an appropriate key k the example statement creates $stack.Cpty:

read(data.static:Cpty, k);

The alias Argument

When Inq places an instance in the target container it does so using the typedef's name or name override if one has been defined. The alias argument allows this default to be overridden.

If a specified alias evaluates to a string then conventional path forms can be used to access it:

smap m;
read(data.static:Entity, k, alias="foo", target=m);
writeln($catalog.system.out, m.foo.LongName);

However, a path component does not have to be a string, it can be any type in the Inq language. The only restriction imposed is that it must be a unique value in its parent container's name set. Path components that are not strings are expressed as element substitutions:

smap m;
smap p;  // path component is a map
string p.x = "foo-number";
int    p.y = 1;

read(data.static:Entity, k, alias=p, target=m);
writeln($catalog.system.out, m.{p}.LongName);

In this example we have used a map as the path component. A map used as a map key in its parent container will violate the unique key set of that container if it compares equal with any already present.

Note
When the alias argument is a string it is not tokenised as discussed in complex path usage. Alias strings must therefore not contain path special characters (for example period or asterisk), or the path will be interpreted differently later and the node will be inaccessible.

Inq uses maps as path components when it creates structures called node sets. In this case, the maps are the primary key values of the instances themselves, discussed further below.

The Node Set Structure

When read applies a non-unique key it creates a node set structure. Such a structure comprises a top-level container; a number of second-level containers (also called node set children), one for each instance being returned; and the instances themselves, as shown:

[Diagram: the node set structure]

When the node set is created by read the top-level container is tagged internally by Inq with the typedef that was used, called Trade in this example. This tag marks Trade as the primary typedef within the node set structure, a topic we return to when discussing how Inq processes deletion events, below.

The node set children act as containers within which instances (or instance sets) of other application types can be placed to express the intended relationships the structure is modeling. Again, this subject is covered further when discussing aggregation.

The target Argument and Seed Map Type

Using a unique key can yield zero or one managed instance, so Inq does not need to build any kind of structure. It simply places any returned instance into the designated target with the specified or default alias.

When read builds a node set structure it needs to determine what type of map container to use. In the same way that declarations can cause the creation of intermediate containers, the containers within the node set structure are of the same type as the target. Application script will determine, on the basis of what the structure will be used for, what type of container is suitable. If the application desires an event-live structure then a seed map of type hmap must be provided:

hmap m;
read(Trade, k, target=m, setname="list");

This use of read will place the node set top-level container into m as m.list and build the structure out of hmaps. Note that when non-unique keys are used the setname argument is mandatory.

If a non-event-live structure is required and in the absence of any specific target, the stack is a suitable seed map and the target argument can be omitted.
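
For example, here is a minimal sketch of such a non-event-live read, with the stack as the implicit seed:

read(Trade, k, setname="list");
// The node set is now at $stack.list, built out of the stack's map type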

Names Within The Structure

As the structure is built, read must place the nodes within it under suitable names. When the instances are placed in the node set child containers the typedef's default alias (or override) is used. As in the unique key case, it is possible to override this name using the alias argument to read.

The node set children are placed in the top-level container using the primary key value of the primary typedef instance beneath (shown as <pk> in the above diagram). This is a default that satisfies the requirement that the name set within a given map is unique, however it can be overridden using the rowname argument. If supplied, this argument must be a function variable whose return value is used as the map name. Here is an example:

hmap tmp;

cfunc rowNameF = { set k = ($loop.Security,
                            $loop.Book,
                            $loop.RowDate);
                 };

read(xy:CEPos, filter,
     target  = tmp,
     setname = "reconList",
     rowname = rowNameF);

A rowname function variable is executed with the following environment:

  • $loop refers to the instance being placed in the node structure.
  • The current stack frame is unchanged and the statements have access to anything currently on the stack.
  • The instance and its node set child are placed in the structure after the rowname function completes.

This example returns a set containing the Security, Book and RowDate fields of the instance. Of course, the combination of these fields must be unique across the set or the building of the node set will fail.

A rowname function is useful when a number of node sets are built and then processed to produce some sort of output structure. When all the node sets use the same name set, iterating over any one of them yields names that can be used in paths to navigate the others. We will cover an example of this when looking at the groupby function.
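
As a sketch of this idea, assuming both node sets were built with the same rowname function and that @name resolves to the current child's map name during iteration:

foreach(posList)
{
  // The name of this child is also a valid name in reconList
  writeln($catalog.system.out, reconList.{@name}.CEPos.Security);
}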

Merging Node Set Structures

When no single invocation of read generates the appropriate set of instances, subsequent uses of read can merge their results with an existing node set, as in this fragment:

any f = new(xy:Trade.FilterTrade);
f = filter;

hmap nodeSet;

// if we haven't specified any ProductType, then just load CFDs and ES
if (isnull(f.ProductType))
{
  f.ProductType = enum(xy:ProductType, CFD);
  read (xy:Trade, f, target = nodeSet, setname = "list");
  f.ProductType = enum(xy:ProductType, ES);
  read (xy:Trade, f, target = nodeSet, setname = "list", merge=true);
}

The node sets are merged according to the map name for the second-level containers, so if a rowname function is used it should be the same one for all invocations of read.

The Node-Set Child Map Type

Sometimes it is useful for the top-level and node set child containers to be different map types. If the structure is made entirely of smaps then no event is generated when the structure is added to the context. This means that no propagation will occur and no model-view-controller dispatch will happen. This completely event-dead structure may not be what is required.

If the top-level container is an hmap (determined by the seed) and the node set children are smaps then the result is a structure that dispatches events from its root but not from anywhere beneath. This is enough to enable propagation and MVC while thereafter leaving the structure stable, if that is the requirement.

The child argument must evaluate to a map of the chosen type and is used as the prototype instance for the node set children. Here is an example:

hmap m;

// Read relevant PosExposure set
read(xy:PosExposure, filter, target=m, setname="posExposures", child = smap dummyMap);

// Aggregate the Instrument
aggregate(xy:Instrument, m.posExposures[@first].PosExposure);

add(remove(m.posExposures), atPath);

This script also introduces the aggregate function discussed in the next section, and the add function, which is covered in the Events section.

Setting a Maximum Node Set Size

If it is possible for a key to return a very large number of items then a limit can be imposed to ensure the node set is restricted to a given maximum. This might be applicable to filter keys when most or all of the fields are left as null.

The max argument must evaluate to an integer and specifies one more than the maximum number of node set children the parent will be filled with: Inq has to read one instance beyond those it returns in order to determine that there are indeed more instances that would have been returned.

Note
Only non-cached keys can be subjected to a read cap.

If the read operation is capped in this way then the node set parent contains the integer @capped whose value is the number of children. Using the max argument overrides any value set when the key was declared.
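
A minimal sketch, assuming a suitable filter key f and that isnull can be used to test for the presence of @capped:

hmap m;
read(xy:Trade, f, target=m, setname="list", max=100);

if (!isnull(m.list.@capped))
  writeln($catalog.system.out, "capped at " + m.list.@capped + " rows");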

The aggregate Function

Having retrieved a single instance or constructed a node set using read, the aggregate function can be used to join related instances of other typedefs into the node space. It has the following syntax:

"aggregate" "("
        <type-reference> ","
        <instance-from>
        ("," ( "keyname"  "=" <expression>
             | "setname"  "=" <expression>
             | "alias"    "=" <expression>
             | "mustjoin" "=" <expression>
             | "key"      "=" <expression>
             | "foreach"  "=" <expression>
             | "rowname"  "=" <expression>
             | "max"      "=" <expression>
             )
         ...
        )
       ")"

<type-reference> = [ <name-space>":" ]<identifier>

<name-space> = ( <package-specification>
               | <package-import-alias>
               )

<instance-from> = <expression>

The aggregate function is similar to read in that it operates with a type reference, a key name and a key value. In aggregate's case, however, the key value is (or is derived from) an instance in the node space. In addition, aggregate uses information in the node space to determine whether it is operating on a single instance or on multiple instances contained within a node set.

Aggregating From A Single Instance

Using a Unique Key

If a unique key has been applied in a use of read then aggregate can place a related instance into the same parent container. Given an existing path of vars.Trade the following diagram shows an example of the before and after state of a node space:

[Diagram: unique aggregation]

In this case, assuming the primary key of the data.static:Security typedef comprises the field Security, which Trade carries as a foreign key, the following statement accomplishes this:

aggregate(data.static:Security, vars.Trade);

Like read, the simplest form requires a type reference and a key value; however, aggregate differs in the following ways:

  1. The key value being applied (in this example to data.static:Security) is an instance of the typedef we are aggregating from.
  2. The target node is implicitly the parent container of the instance. The result is placed in the same node as a sibling.

By default, the typedef's primary key is used and this will be satisfied by the source instance if its foreign key field(s) use the same name(s). Using any unique key extends the node structure in the same way.

Alternative keys are specified with the keyname argument. The common case is to use an existing instance to satisfy the key, and because such a value does not carry a key name, the keyname argument must be used whenever a key other than the primary is required. The exception to this is where a function is supplied using the key argument, discussed below.

The result is placed in the container node under the typedef's name or name override as its map name. This can be overridden with the alias argument in the same way as for read, which is useful when there are multiple foreign keys to the same typedef, an example of which is given below.

Using Non-unique Keys

Specifying a non-unique key causes aggregate to generate a node set and join it into the node space as a sibling of the source instance as shown:

[Diagram: non-unique aggregation]

Considering the earlier example of a one-to-many relationship between Entity and Cpty, the following statement does this:

aggregate(Cpty,
          vars.Entity,
          keyname="ByEntity",
          setname="cptys");

In the non-unique case the setname argument must be specified and is the map name of the node set in the parent container. Like read, aggregate creates the node set in the following way:

  1. The node set's top-level container is created and is of the same type as the parent of the source instance.
  2. The top-level container is tagged with the typedef being aggregated to.
  3. The node set children are created likewise and added using the primary key of the instance beneath unless overridden using a rowname function.
  4. The instances are placed in the node set child containers with the map name of their typedef's name or name override.

Aggregating From A Node Set

When aggregate resolves the <instance-from> argument it notes whether the path reference passes through a node set top-level container by checking whether the node is tagged as such. If so, the aggregation is carried out at all child containers. The illustration below shows how the node space is affected by the unique key aggregation:

[Diagram: node set aggregation]

The statement performing this aggregation is:

aggregate(Currency,
          vars.[@first].Trade);

Note that a path of this form, using vector access, can only be applied successfully if the node set top-level container is of an appropriate map type. Generally, node sets are built to be displayed in graphical tables or require sorting for order-specific processing, both of which need vector access support; however, the exact instance resolved by aggregate is not important. Inq only uses the last path component, Trade in this example, to retrieve each <instance-from> as it iterates across the top-level container's children. If the node set is not built with types that permit vector access then the following statement can be used:

aggregate(Currency,
          vars*Trade);

The mustjoin and foreach Arguments

When aggregating from a node set the mustjoin and foreach arguments are applicable. The mustjoin argument is converted to a boolean and defaults to false. If the aggregation fails, that is, the key when applied to the target typedef yields no instance(s), aggregate behaves as follows:

mustjoin=false
The node set child remains in the node set and no new children are added to it.
mustjoin=true
The node set child container is removed from the node set.

If a given join within the node set does not fail, or it fails but the node set child is not removed, aggregate runs any function variable provided through the foreach argument. A foreach function variable is executed after the aggregate result, if any, has been added to the node space. It runs with the following environment:

  • $loop resolves to the current node set child
  • The current stack frame is unchanged and the statements have access to anything currently on the stack.
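
Here is a sketch combining the two arguments, assuming Trade carries Qty and Price fields:

aggregate(xy:Security,
          listRoot[@first].Trade,
          mustjoin = true,   // remove rows that have no Security
          foreach  = cfunc f = {
                       // Runs after Security has been joined to this child
                       any $loop.Consideration = $loop.Trade.Qty * $loop.Trade.Price;
                     });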

Nested Node Sets

When aggregating from a node set using a non-unique key, aggregate creates further node sets and places them in the child containers. The following diagram shows a structure of this form:

[Diagram: nested node set]

Foreign Key Creation

In the examples presented so far the <instance-from> argument has satisfied the field requirements of the specified (or primary) key of the target typedef directly. Typically, foreign key fields in one typedef have the same names as native fields in another - this is a natural consequence if the typedef fields are declared using references. However, consider the following relationship example, where the two counterparties involved in a transaction are modeled by an instance of Parties:

[Diagram: the Parties relationship]

The Parties type has the Buyer and Seller fields that are foreign keys to the Cpty type. There are two issues that need to be considered:

  1. A key (in this case Cpty.pkey) must be initialised from Parties.Seller or Parties.Buyer.
  2. If both the Buyer and Seller instances are required in the same container node then they must be aliased.

The following script example shows how this can be achieved, assuming we are aggregating from m.Parties:

aggregate(data.static:Cpty,
          m.Parties,
          alias = "Seller"
          key = cfunc f0 = {
                             any k = new(data.static:Cpty.pkey);
                             k.Cpty = $loop.Parties.Seller;
                             k;
                           }
          );

and similarly for the Buyer. The key argument requires a function variable whose return value is the key value to be applied. Creating a genuine key value (as opposed to a map that satisfies the required key fields) has the effect of specifying the key of the target typedef, so the keyname argument is not required.

From the example above we can see that the function variable's statement runs with $loop resolving to the container of the <instance-from> argument (a node set child in the case of a node set). Of course, unlike the foreach argument, a key statement runs before any node or node set is joined into the node space and is executed regardless of whether aggregation is taking place from a single instance or a node set. As such, it can be an appropriate place to perform any other required manipulation of the node space, as in this example:

aggregate(data.static:FXConvention,
          listRoot[@first].Trade,
          key = cfunc f0 = {
                             boolean $loop.Flags.Dirty;
                             any k = new(data.static:FXConvention.unique);
                             k.FromCurrency = $loop.Security.Currency;
                             k.ToCurrency   = $loop.Trade.FXCurrency;
                             k;
                           });

Here the statement creates Flags.Dirty underneath the aggregate parent as a "side-effect", for some further application purpose. In the Seller/Buyer example, to avoid repeating the aggregate statement the following line can be added:

aggregate(data.static:Cpty,
          m.Parties,
          alias = "Seller"
          key = cfunc f0 = {
                             any k = new(data.static:Cpty.pkey);
                             k.Cpty = $loop.Parties.Buyer;
                             read(data.static:Cpty, k, target=$loop, alias="Buyer");
                             k.Cpty = $loop.Parties.Seller;
                             k;
                           }
          );

Using the function arguments in this way takes advantage of the environment set up by the aggregate function. In particular, this technique exploits the implicit iteration, allowing additional structure manipulation in a single pass.

The groupby Function

The groupby function processes a node space to create distinct groupings and optionally executes statements during and at the end of the iteration. It has the following syntax:

"groupby" "("
            <node-root> ","
            <distinct-func> ","
            <start-func>
            ("," ( "foreach"  "=" <expression>
                 | "end"      "=" <expression>
                 )
             ...
            )
          ")"

<node-root> = <expression>

<distinct-func> = <expression>

The <distinct-func> must resolve to a function variable and is executed with each child of the node at <node-root> as $loop. Its return value is used by groupby to determine whether the sub-structure (rooted at the current child) is distinct from any other yet processed. If so, the <start-func> is executed, but it will not be executed again for any subsequent children that return the same value from <distinct-func>.

The <start-func> does not need to return any meaningful value as such; however, whatever value is returned is retained against the corresponding value from <distinct-func> for later use when executing any end argument, discussed below.

The remaining arguments, both function variables, are optional. The foreach argument is executed for all children of the <node-root> with the child node as $loop. The end argument is executed for each distinct value returned by <distinct-func> and with $loop set to the value returned by the corresponding execution of <start-func>.

The various function variables of groupby run with the following special paths available:

@name
The value returned by <distinct-func> when <start-func> and any foreach and end statements are executed.
@count
A counter that is incremented from zero as <distinct-func> and any foreach statement are executed and while any end statement is executed.

Here is a script fragment using groupby:

hmap grouped;

groupby(list,
        cfunc distinctF = { set k = ($loop.SwapPos.Security,
                                     $loop.SwapPos.Book,
                                     $loop.SwapPos.RowDate);
                          },
        cfunc startF  =
        {
          // Create an empty SwapPos, initialise important fields and
          // store under the given name
          any newSwapPos = new(xy:SwapPos);
          newSwapPos.Security   = $loop.SwapPos.Security;
          newSwapPos.Book       = $loop.SwapPos.Book;
          newSwapPos.RowDate    = $loop.SwapPos.RowDate;
          newSwapPos.TradePosn      =
            newSwapPos.SettlePosn   =
            newSwapPos.UnsettlePosn =
            newSwapPos.ClosePosn    = 0;
          any grouped.{@name}.SwapPos = newSwapPos;

        },
        foreach = cfunc foreachF =
        {
          // sum the quantities within the current group;
          grouped.{@name}.SwapPos.TradePosn    += $loop.SwapPos.TradePosn;
          grouped.{@name}.SwapPos.SettlePosn   += $loop.SwapPos.SettlePosn;
          grouped.{@name}.SwapPos.UnsettlePosn += $loop.SwapPos.UnsettlePosn;
          grouped.{@name}.SwapPos.ClosePosn    += $loop.SwapPos.ClosePosn;
        });

This example creates a new node structure under the hmap called grouped that contains one child for each distinct combination of the fields Security, Book and RowDate. The map key of each child node is the combination of these fields, accessed using @name. Further aping the node set structure, the child is a second-level container for an unmanaged instance of the type SwapPos in which the desired results are accumulated.
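
The end argument was not needed above. As a further sketch (assuming the same list structure and that strings concatenate with +), it can report each group once the iteration completes:

hmap counts;

groupby(list,
        cfunc distinctF = { $loop.SwapPos.Book; },
        cfunc startF    = { int counts.{@name} = 0; },
        foreach = cfunc foreachF = { counts.{@name} += 1; },
        end     = cfunc endF =
        {
          // $loop is the value startF returned for this group
          writeln($catalog.system.out, "Book " + @name + ": " + counts.{@name});
        });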

The read and aggregate functions only appear in server-side script as they use the configured I/O mechanism to retrieve managed instances. However, groupby only iterates over and transforms a structure; it is equally valid in the server and client environments.

Sorting Node Structures

Any node structure can be sorted below a point where the immediate children are held within a container that supports ordering. The sort function has the following syntax:

"sort" "("
         <node-root> ","
         <expression>
         [ ( "," <expression> ) ... ]
         [ ( "," "ignorecase" = <expression> ) ]
         [ ( "," "descending" = <expression> ) ]
        ")"

The sort function operates the algorithm implemented by Java's Collections.sort. Each <expression> specified is applied with a <node-root> child as $loop to yield a value for comparison as less-than, equal-to or greater-than another being considered by the sort. Multiple expressions are applied in sequence until the first that yields other than equality. Thus, earlier expressions have the effect of grouping the structure within the ordering specified by later ones.

Usually the expressions are simple node paths to values within the child structures, however any statement is acceptable so long as the value it yields is one of the value types, as only these types are valid for the less-than and greater-than operations.

Without further qualification sort will order the structure according to the value types' natural collating sequence. For the string value type, this means case-sensitive lexicographic ordering as defined by java.lang.String. If the optional descending argument converts to boolean true then this ordering is reversed for all expressions.

To effect descending collation for individual expressions the unary minus operator can be applied. In the following example, the node structure is sorted according to instances of a Trade application type, by ascending currency and trade date, most recent first:

sort(tradeList, $loop.Trade.Ccy, -$loop.Trade.TradeDate);

Notice that (although meaningless in any other context) negation is supported on date values simply for reverse collation. This technique can also be used for strings; however, it is not recommended and cannot be combined with ignorecase, see below.

Ordering strings

There are different ways in which sort can refine the collation of strings. The ignorecase argument is the simplest and crudest, providing locale-independent case-insensitive ordering as defined by String.compareToIgnoreCase. The ignorecase and descending arguments can be combined.

A better way to control string ordering is to use the collator data type and collate function. A collator, c, for the current locale and with default strength and decomposition properties is created as follows:

collator c;

The collate function requires a collator and up to two string arguments:

  • one string argument when used with sort;
  • two string arguments when used otherwise.

Creating a collator and setting its supported properties of rules, strength and decomposition allows string ordering to be finely specified. To illustrate use of the collate function with sort the example in sort.inq uses a simple array of strings. Depending on the prevailing locale the following output can be expected:

inq -in sort.inq
Unsorted array: [hello, World, again]
Default sorting (case sensitive): [World, again, hello]
Crude case insensitive: [again, hello, World]
Basic descending: [hello, again, World]
Default collation: [again, hello, World]
Default collation, descending: [World, hello, again]
Inq done
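
For reference, the calls behind output of this kind might be sketched as follows, over a string array a (the exact script in sort.inq may differ):

sort(a, $loop);                                 // default, case sensitive
sort(a, $loop, ignorecase = true);              // crude case insensitive
sort(a, $loop, descending = true);              // basic descending
collator c;
sort(a, collate(c, $loop));                     // default collation
sort(a, collate(c, $loop), descending = true);  // default collation, descending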

When used as an ordering expression, collate only requires a single string argument because sort is using it to resolve successive items within the structure for comparison. However, collate can be used elsewhere to compare two strings, returning -1, 0 or 1 respectively when the first argument is less-than, equal-to or greater-than the second according to the specified collator:

collator c;
collate(c, "hello", "world");
-1
collate(c, "zhello", "world");
1
collate(c, "world", "world");
0
collate(c, "World", "world");
1
c.properties.strength = STRENGTH_SECONDARY;
collate(c, "World", "world");
0
Inq done

Aggregate Functions

Inq has a number of aggregate functions for common operations which offer more succinct expression than using foreach. These are usually applied to a node set because they expect a homogeneous structure.

sum()

"sum" "(" <node-root> "," <expression> ")"

The sum() function computes the sum of the values returned by <expression>, which is applied at each child of <node-root>. Here is an example:

Order.TotalPrice = sum(items, $loop.LineItem.UnitPrice * $loop.LineItem.Qty);

$loop resolves to the node-set child for each iteration. An exception is thrown if any evaluation of <expression> does not resolve.

avg()

"avg" "(" <node-root> "," <expression> ")"

Computes the average of the values returned by <expression>, which is applied at each child of <node-root>. Equivalent to

sum(<node-root>, <expression>) / count(<node-root>)
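
A sketch in the style of the sum example above (names assumed):

Order.AvgUnitPrice = avg(items, $loop.LineItem.UnitPrice);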

wavg()

"wavg" "(" <node-root> "," <sum-expression> "," <weight-expression>")"

Computes the weighted average of the values returned by <sum-expression>, weighting each value by <weight-expression>.

The sum of <sum-expression> multiplied by <weight-expression> is calculated and the result is divided by the sum of <weight-expression>.
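
For example, a volume-weighted average price over a node set of fills might be sketched as follows (typedef and field names assumed):

// Equivalent to sum(Price * Qty) / sum(Qty) over the set
any vwap = wavg(fills, $loop.Fill.Price, $loop.Fill.Qty);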

Node Sets and Deletion Events

Event propagation through a node space was introduced in the discussion of transactions and is covered further in the section on events. Worth noting here, however, is the internal handling of typedef instance delete events passing through a node set.

When a process deletes a typedef instance and its transaction successfully commits, a deletion event is dispatched to all observers, that is, event-live utility containers. As discussed earlier, Inq ensures there is only one physical reference for a given instance, so this event passes through any node space in which the instance has been placed, irrespective of which process owns that node space and which process deleted the instance.

Events pass up event-live containment hierarchies, possibly being dispatched to listeners placed at any level and, in the case of server-side user processes and unless consumed, propagating to the peer client process.

When a delete event passes through a node space Inq performs the following processing:

  1. At a node set top-level container, Inq checks if the event originated from an instance of the typedef the container is tagged with. If so, the node set child is removed from the node set. Inq considers that, because the primary typedef instance has been deleted, the sub-structure is no longer viable as a "row". Inq raises a node-removed event on the node set child to signal this; this event is dispatched before the deletion event.
  2. At any other container (including node set children) the instance is removed from the container. Inq does not raise additional events in this case as the deletion event is sufficient.

This processing ensures that in the most common cases there is no requirement to handle deletion events in application script. Common observers of node set structures such as GUI tables, for example, will respond appropriately in the face of instance life-cycle deletion events.

Node Structures and Instance Relationships

As noted above, relationships between application types are loosely defined by foreign key dependencies. Relationships between application type instances are defined by their relative positions in a particular node space.

Instances contained within the same parent (referred to as siblings) are related with a bounded cardinality, often one-to-one, though not necessarily only one, as illustrated in the Parties example above. A one-to-many relationship is expressed when an instance and a node set are at the same level. A node set itself represents tabular data, whereas node sets at successively deeper levels model a tree, either to a bounded maximum or to an arbitrary depth when the application type is recursively defined.

Such structures are built as necessary to meet a particular application requirement. Only a small number of scripted steps are needed and a scripted algorithm is applicable to any node space that satisfies all its references.