Post by Ross Finlayson
Then of course obligatory about C++, or, C/C++,
it's about OS threads and, "guarantees".
it's a co-routine, though instead of suspending
it just quits, and instead of switching it just
builds its own monad for the recursion, and
instead of having callbacks, it's always callbacks,
and instead of having futures everywhere,
it's futures everywhere. Try, try again."
In the process model in the runtime though,
it's mostly about what services the bus DMA.
"Systems", programming.
Boxes is nodes, ....
Re-Routines
So, the idea of the re-routine, is a sort of co-routine. That is, it
fits the definition of being a co-routine, though when the
asynchronous filling of the memo of its operation is unfulfilled, it
quits by throwing an exception, then is as expected to be called again,
when its filling of the memo is fulfilled, thus that it returns.
The idea is that re-routines are originated in an origination or
initiation context, an original re-routine, which then invokes either other
re-routines, or plain code with an adaptor to keep the routine going,
side routines, then as with regards to exit routines, and return routines.
It's sort of in the language of the comedy routine, yet is a paradigm in
the parallel and concurrent process model of the cooperating
multithreading, and the co-routine. It's a model of cooperative
multithreading, with the goal of being guaranteed by the syntax of the
language, and requiring little discipline.
The syntax and language of the re-routine is a subset of the ordinary
syntax and language of the runtime.
The basic expectation is that a ready result is a "usable" object or
value, and behaves entirely ordinarily, while an unready result is an
"un-usable" object or value, that can be assigned as an lvalue in the
usual definition, or added to a collection, yet when de-referenced,
accessed, or inspected, is determined "un-usable", with a defined
behavior, to throw an exception, or throwable, and with the idea that
it's not even necessarily a declared or checked exception. In languages
with exception handling, exceptions un-wind the stack context of their
invocation from where they were thrown ("throw", "raise") until they are
caught ("catch", "rescue").
It's figured that copying and moving around an "un-usable" object is
ordinary, then that any sort of access to its object or value throws, and
that any re-routine has it so that whatever it results returning is only
"usable" objects or values, or collections of usable objects or
values. It's figured that collections or any other holders are OK with
un-usable objects, only that effectively de-referencing the object or
value that is un-usable, throws an un-usable exception.
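These semantics can be sketched as a small holder type, here in Java; the names `Unready` behavior, `Holder`, and `UnusableException` are illustrative assumptions, not any library's API.

```java
// Minimal sketch of the "usable"/"un-usable" value idea: the holder can be
// assigned, copied, or stored in collections freely, but de-referencing an
// unready value throws. All names here are illustrative.
class UnusableException extends RuntimeException {}

class Holder<T> {
    private T value;
    private boolean ready;    // false means "un-usable" / unready

    void fulfill(T v) { value = v; ready = true; }

    // Accessing the value is the only operation that can throw.
    T get() {
        if (!ready) throw new UnusableException();
        return value;
    }

    boolean isReady() { return ready; }
}
```

Storing or passing an unready `Holder` is entirely ordinary; only `get()` on it throws.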
So, in languages like Java and C#, which run in a runtime that
interprets the objects and values, and where there's a reserved value
assignable to any object or value type, un-ready results are just
"null". In languages like C++, where there are no un-usable objects of
this type, where the semantics of assignment may be complicated, and
where de-referencing "null" causes a segfault or machine error instead
of an exception that can be caught, re-routines return the type of a
value-holder for the object or value, called a "future", so that any
access to the object or value through the holder can "throw" if
un-ready, or return the value or object when ready. The
"future" is a library type, mostly common in all languages, and
already has these loose semantics, then that it's un-necessary in Java
or C# because why bother cluttering the signature, if these are
re-routines, though maybe it's a good idea anyways, whether catching
"NullPointerExceptions" is any more involved than catching
"FutureExceptions".
The re-routine thusly meets only a Java interface declaration, or a pure
abstract C++ method: its method signature is any usual signature that
expects usable arguments, throws any exception specification, and
returns usable return values, though never throwing the exceptions
that indicate the unfulfilled, that is the pending-fulfilled, "un-usable
exceptions".
Then, the result of a re-routine is its usable return value, or what
usable exceptions arise from the normal flow-of-control in the normal
syntax of the language, and the behavior is defined thusly exactly the
same. The originator of a re-routine gets called-back with the result,
either the return value or an exception that it can itself re-throw,
swallow, or translate; re-routines that call re-routines just get the
return values or have the exceptions thrown; re-routines that call
side-routines must have that the side-routine calls back what is
otherwise an "exit" routine, from the re-routine.
(As a matter of style, then indicating that re-routines are re-routines,
there's one original re-routine, it calls re-routines or side-routines,
and if it calls side-routines, then it's figured that it's an exit
routine, yet not in the usual sense that "exit" means to "exit" the
runtime, just that it expects the side-routine to invoke a call-back it
provides, to re-enter the re-routine.)
The context of the origination, then, is that a thread is expected to
pick up these re-routines as a task, from a queue or provider, supplier,
of the tasks. The idea is that each re-routine is a class instance, i.e.
it's an object as defined by a class, an instance of the class, and the
instance has associated its memo, from the origin.
In languages like Java, there's a "ThreadLocal", and in C++ there's a
storage specification, "thread_local". When the task supplier is
invoked, it's in the calling context of the thread, a worker or task
worker. The body of "get()" sets the values of the otherwise static or
global ThreadLocal or thread_local to the task's memo, so that as
long as the thread is working on the task, the re-routine instance's
access to the memo is specific to the original re-routine, and the
re-routines it calls, all in the context of the same thread. It's
figured that any state of the re-routine is specific to the instance of
the re-routine, and the thread, and its scope, its thread locals, and
globals. The re-routine instance may be one of a bunch of re-routine
instances, so their memo is the thread-local memo. The memo's nowhere
part of the method signatures of the re-routines.
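The thread-local memo described above can be sketched in Java with a plain ThreadLocal; the `Memo` and `ReRoutineContext` names and their shape are illustrative assumptions, not from any library.

```java
// Sketch of the thread-local memo: the task worker installs the task's memo
// before running the re-routine, so the re-routine and everything it calls
// read the same memo, without the memo appearing in any method signature.
// All names here are illustrative assumptions.
import java.util.HashMap;
import java.util.Map;

class Memo {
    // keyed by the re-routine's position (ancestry path) in the call tree
    final Map<String, Object> results = new HashMap<>();
}

class ReRoutineContext {
    static final ThreadLocal<Memo> CURRENT = new ThreadLocal<>();

    // The worker sets the memo for the duration of the task, then clears it.
    static void runTask(Memo memo, Runnable task) {
        CURRENT.set(memo);
        try { task.run(); } finally { CURRENT.remove(); }
    }
}
```

The point of the sketch is that `task.run()` and anything it calls on the same thread reach the memo through `ReRoutineContext.CURRENT`, not through arguments.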
Callers calling Re-Routines
This is the origination of a re-routine: it's basically an exit-routine
from the caller, to the submission to the task queue of the re-routine
originator, with calling back the caller.
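One way to sketch that origination: the caller hands the original re-routine plus a callback to a task queue and returns immediately; the worker later runs the task and calls the caller back. The queue and worker names here are illustrative assumptions, not any particular library's API.

```java
// Sketch of origination: submitting the original re-routine to a task queue,
// an exit-routine from the caller's point of view, with a call-back for the
// result. All names are illustrative.
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;
import java.util.function.Supplier;

class Originator {
    static final Queue<Runnable> TASKS = new ConcurrentLinkedQueue<>();

    // Submit the original re-routine; the worker calls back the caller
    // with the result when the task runs.
    static <T> void originate(Supplier<T> reRoutine, Consumer<T> callback) {
        TASKS.add(() -> callback.accept(reRoutine.get()));
    }

    // A worker picks up one task from the queue, if any.
    static void workOne() {
        Runnable task = TASKS.poll();
        if (task != null) task.run();
    }
}
```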
Re-Routines calling Re-Routines
Re-routines, basically have an interceptor or layer, an aspect, before
invoking the body of the routine.
A: step -1) if the usable return value or usable exception is already in
the memo, return it (throw it respectively)
B: step 0) if any of the arguments, or any of the held values, are
un-usable, then throw an un-usable exception
C: step 1) invoke the body of the re-routine, and
C1: if calling a side-routine, put an un-usable return value in the
memo, and invoke the side-routine, and return an un-usable object
C2: if calling a re-routine, it's these same semantics, the re-routines
keep the same semantics going
C9: when eventually returning a usable value or throwing a usable
exception, put it in the memo
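The steps A through C9 above can be sketched as a wrapper around the routine's body; the memo map, path key, and `UnusableException` names are illustrative assumptions.

```java
// Sketch of the interceptor around a re-routine body, following steps A-C:
// A) a memoized usable result is returned (or rethrown) directly;
// B) un-usable inputs throw when accessed inside the body, and propagate;
// C) otherwise the body runs and its usable outcome is memoized (C9).
// All names here are illustrative.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

class Intercept {
    static class UnusableException extends RuntimeException {}

    static final Map<String, Object> MEMO = new HashMap<>();

    static Object call(String path, Supplier<Object> body) {
        // A: usable return value or usable exception already in the memo?
        if (MEMO.containsKey(path)) {
            Object prior = MEMO.get(path);
            if (prior instanceof RuntimeException) throw (RuntimeException) prior;
            return prior;
        }
        try {
            // C: invoke the body; step B happens on access inside the body,
            // where an un-usable argument or held value throws.
            Object result = body.get();
            MEMO.put(path, result);       // C9: memoize the usable return
            return result;
        } catch (UnusableException unready) {
            throw unready;                // not memoized: still pending
        } catch (RuntimeException usable) {
            MEMO.put(path, usable);       // C9: memoize the usable exception
            throw usable;
        }
    }
}
```

On a re-entry of the routine, already-fulfilled calls hit the memo at step A and the body is not run again.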
Re-Routines calling Side-Routines
It's figured that re-routines calling side-routines makes the
re-routine an exit-routine, then that when it's called back by the
side-routine, it is to initiate the exit-routine as an
exit-re-enter-routine. The idea is that the exit routine provides a
callback, and invokes whatever function in whatever thread context, and
the specification of the callback is that the original initiator or
originator supplier has a way to re-submit the task, of the exit-routine.
Then there isn't really a strong compile-time guarantee that
side-routines call back their exit-re-enter-routine. It's figured that
side-routines must accept a signature of the call-back, and it's figured
they do, thus that the side-routines call back with the return value or
exception, and the callback body puts the return value or exception on
the memo, translated as necessary to a usable object or a usable
exception, or a translation of an unusable object or exception as a
usable exception, and re-submits what was the exit-routine, which as the
exit-re-enter-routine now completes, about how to re-enter the routine,
that the re-routine is re-entrant.
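That call-back contract can be sketched as follows: the exit-routine hands the side-routine a callback whose body records the result on the memo and re-submits the routine's task. The queue, memo, and method names are illustrative assumptions.

```java
// Sketch of the side-routine call-back: recording the result as a usable
// value on the memo, then re-submitting the exit-re-enter-routine so it can
// run through again. All names are illustrative.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

class SideRoutine {
    static final Deque<Runnable> QUEUE = new ArrayDeque<>();
    static final Map<String, Object> MEMO = new HashMap<>();

    // The exit-routine provides this callback to the side-routine; the
    // side-routine is trusted (not compile-time guaranteed) to invoke it.
    static Consumer<Object> callbackFor(String path, Runnable reEnter) {
        return result -> {
            MEMO.put(path, result);   // translate/record as a usable value
            QUEUE.add(reEnter);       // re-submit the exit-re-enter-routine
        };
    }
}
```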
Re-Routines are Re-Entrant
The idea here is that the routine that's unready, when it's fulfilled,
can either call all over again the entire original re-routine, or it
can just invoke itself, on completion invoking its re-routine
caller and so on and so forth, about whether a re-routine can act as a
side-routine in this manner, or it just always calls the original
re-routine, which runs all the way through.
The idea is that a re-routine is entirely according to the flow of
control, and also that all its iterations are ordered, or, anything
un-ordered must be the last thing that comes out of a
re-routine, so that in the memo, the entire call-graph of a
re-routine is just a serial ordering in the siblings, and a very simple
tree, what results a breadth-first traversal, in the access to the memo,
the organization of the memo, and the maintenance of the memo.
As the Re-routine is going along, there is that, in the normal
flow-of-control of each re-routine, it's serial, so, the original
re-routine has an entry-point, and that is the root of the memo-tree.
Then, whatever re-routines it calls are siblings. The idea is that as a
data structure, when a sibling is created, a root for its children is
also created. So, the siblings get added to the root for the children,
and each has added an empty root for its children.
tree-node, is a holder of either an object or exception, of the
re-routine, to be initially populated with unusable object and
exception. The tree-node is created, on the entry point of the re-routine.
Then, the behavior of re-routines calling re-routines, basically has to
establish for a given invocation of the re-routine, what is its root.
This is basically a path as a list of integers, the n'th child's n'th
child's n'th child's n'th child.
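That path-as-integers addressing can be sketched as follows; the `Ancestry` class and the dotted-key flattening are illustrative assumptions, one possible encoding among many.

```java
// Sketch of addressing the memo tree by ancestry path: the n'th child's n'th
// child's ... is a list of integers; here also flattened to a dotted key for
// use in a map-backed memo. All names are illustrative.
import java.util.ArrayList;
import java.util.List;

class Ancestry {
    final List<Integer> path = new ArrayList<>();

    // Entering the index'th sibling call extends the path by one integer.
    Ancestry child(int index) {
        Ancestry c = new Ancestry();
        c.path.addAll(path);
        c.path.add(index);
        return c;
    }

    // Flatten to a memo key, e.g. [0, 2, 1] -> "0.2.1"; the root is "".
    String key() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < path.size(); i++) {
            if (i > 0) sb.append('.');
            sb.append(path.get(i));
        }
        return sb.toString();
    }
}
```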
Then, about the maintenance of the tree, is to make it so that it
is thread-safe, as any re-routine can write on the memo at any
time. It needs to be thread-safe without any blocking by locking, if
possible, and the only way it can block is by locking, and it can't
deadlock.
The count of siblings is un-known, then as whether to just make a memory
organization, that the re-routine knows its ancestry integers, so the
memo is just a sequence of zero-terminated integer sequences with
object-exception pairs; those are just concatenated to the memo,
and lookup is linear in the memo.
(Though, the first encountered values are first, and search can run from
both sides taking turns.) I.e., the re-routine knows its ancestry
integers when its value results from a re-routine or
exit-re-enter-routine, then it updates the memo by concatenating its
ancestry integers and the usable value/exception.
Now why is this any good when the stack of the usual runtime just does
this all already? When the references to the locals are local and on the
stack an offset away? When the runtime will just suspend the entire
stack and block the thread and wait as a co-routine?
Well, the idea is that there isn't unboundedly many threads anyways, and
somehow a conscientious approach to cooperative multithreading, must
arrive at performing as well as preemptive multithreading, and somehow a
model of non-blocking routine, must arrive at performing as well as
blocking routine,
and this doesn't do either but runs as non-blocking code and looks like
blocking code and also launches unfulfilled siblings in parallel
automatically without blocking when their inputs are fulfilled, without
synchronizing or declaring how they meet and join in the satisfaction
of their dependencies, because it's automatically declared in the language.
So, that seems the best thing: as far as sibling calls, for example
each of a list of a lot of items is independent, they all get launched
in order with their coordinates the ancestry index, then as they come
back they load up the memo; the original caller needn't know nor care,
except to write "for each". They don't block, and also launch in
parallel, courtesy of having usable values at all.
About the memo and maintaining the memo, is that eventually a
re-routine returns. Then, it doesn't matter what the re-routines it
calls return: once all of a re-routine's sub-re-routines return, and it
puts its value on the memo, all their values can be zeroed out,
resulting, on the eventual conclusion of the re-routine, in a memo of
all zeros.
Or, you know, neatening it up and logging it.
A most usual idea is that routines start as plain old routines
implementing an interface or base class. Now so far these are interfaces
with only one method, with regards to that otherwise what gets stored in
the memo is ancestry/method/memoized instead of just ancestry/memoized.
Then, plain old routines are sub-classed, overriding and hiding the
routine's methods, providing the default implementation, then just
calling the superclass implementation. The issue then gets into that
re-routines and side-routines get separated, which would result in a big
mess, as to whether the thread_local should implement this passage of
the state, _without changing the signature of the routines_.