new wd-40 test #51

Open

NodixBlockchain opened this issue Jun 26, 2020 · 602 comments

Comments

@NodixBlockchain

No description provided.

@NodixBlockchain
Author

Original Author: @shelby3
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-315910415
Original Date: Jul 17, 2017, 6:06 PM CDT


not shared_ptr as the default over GC

the inferior (except for prompt/deterministic finalization) reference counting smart pointer paradigm

deterministically destruct instances of a type that has a destructor¹, which is a major feature lacking from GC languages … noting that such types can not be used with GC references, directly or indirectly, as a member of a data structure that is GC referenced

We need optional GC and reference counting. The latter excels at deterministic, prompt finalization (which is required for non-main memory finalization) but has the trade-offs quoted below:

@shelby3 wrote:

However, reference counting deallocation is prompt and more deterministic than GC mark-and-sweep (although I rebutted there, “But please note that reference counting can’t break cyclical references but GC can. And reference counting can cause a cascade/domino effect pause that would not be limited to a maximum pause, as hypothetically with V8’s incremental mark-and-sweep GC.”). Incremental GC schemes, when not overloaded with allocation that escapes the generational GC, decrease high-latency stalls.

Has combining reference counting with GC ever been tried? Apparently so:

Edaqa Mortoray wrote:

Reference counting with cycle detection adds an interesting twist. I believe both D and Python do this. In order to eliminate memory loops objects are “sometimes” scanned for reference loops. This involves a lot more work than simply touching a reference count — though still not nearly as much as scanning all allocated memory. The impact of this loop detection depends heavily on how often it is scanned, and what the triggers are.

Given our single-threaded model, we do not need atomics on the reference counting, and given the proposal of the OP to emphasize borrowing and RAII, we would not get the benefits of a generational GC scheme. So reference counting would give us determinism except in the case of cyclical references, and then GC would kick in to break cycles. However, when we know cyclical references are probable (e.g. any kind of cyclic graph such as the DOM or a doubly-linked list), it is wasted overhead to employ reference counting, although the cycle-tracing overhead could be very tiny (i.e. infrequent tracing) if we only want to catch unexpected cycles (i.e. as an insurance policy, and we’d want to log these and consider them bugs).

I suppose we could rate-limit reference counting finalization, to prevent the rare aberrant high latency stall.
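
For illustration, here is a minimal sketch of such rate-limiting (invented names, single-threaded, purely hypothetical): decrements that reach zero push the object onto a queue, and each tick finalizes at most a fixed budget of objects, trading promptness for a bounded pause.

```cpp
#include <cstddef>
#include <deque>

// Hypothetical header for a counted object; the finalizer is the virtual destructor.
struct RcObject {
    std::size_t count = 1;
    virtual ~RcObject() = default;
};

// Objects whose count hit zero wait here instead of being destructed immediately.
static std::deque<RcObject*> g_pending_free;

void rc_release(RcObject* obj) {
    if (obj && --obj->count == 0)
        g_pending_free.push_back(obj);      // defer the actual finalization
}

// Called once per tick/frame: finalize at most `budget` objects, bounding the pause.
// Destructors that release members re-enqueue them rather than cascading.
void rc_process_pending(std::size_t budget) {
    while (budget > 0 && !g_pending_free.empty()) {
        RcObject* obj = g_pending_free.front();
        g_pending_free.pop_front();
        delete obj;
        --budget;
    }
}
```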

@fauigerzigerk wrote:

@rbehrends wrote:

It is well-known that the amortized cost of naive reference counting is considerably higher than that of a modern tracing garbage collector (regardless of whether you also support cycle collection). It is also well-known that non-naive RC-based approaches generally make it easier to achieve lower pause times (though cycle collection may throw a spanner in the works).

True, but I think these comparisons are often done under the assumption that memory utilisation remains fairly low. Tracing GCs react very badly to growing memory utilisation in my experience. This is of course ultimately an economic question and maybe it should be modelled that way.

@SKlogic wrote:

@jules wrote:

@jules wrote:

Tracing GC is not always faster. GC time is proportional to the size of the allocated memory, reference counting time is not.

It's not about running out of memory altogether, it's about the heap filling up with dead objects while the GC doesn't yet know that they are dead. Of course things are great when they are great and the GC can keep up with the mutator. Yet failure does happen whether or not you think it should happen, hence all the efforts that I mentioned that are trying to fix the issues.

As I said, in such a case RC would behave even worse.

@david-given wrote:

Regarding latency spikes: that's not entirely true. If you drop the last reference to an object, then freeing that object can cause the last reference to other objects to be dropped, etc.

So it's possible that dropping a reference can cause a large amount of work, which if it happens during a critical path can cause a latency spike. Particularly if finalisers are involved. If your object ownership graph is straightforward this isn't a problem, but if you have large objects with shared ownership it's very easy to be surprised here.

@SKlogic wrote:

It is really hard to get real time guarantees with RC - given that time for cleaning up depends on a size of a structure.

Just imagine it freeing a 10Gb linked list.

Real time pure mark and sweep GCs are easier.
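
To make the cascade concrete, a hedged sketch (a toy counted list, nobody's production code): dropping the head of a long list triggers a chain of destructions proportional to its length, whereas an explicit iterative teardown bounds the work done per step.

```cpp
#include <memory>

struct Node {
    int value = 0;
    std::shared_ptr<Node> next;   // dropping a node drops its successor, recursively
};

// Dropping `head` triggers a destructor chain proportional to the list length --
// the latency spike (and potential stack overflow) described in the quotes above.
void drop_all_at_once(std::shared_ptr<Node> head) {
    head.reset();
}

// Iterative teardown: detach one node per step, so each step does O(1) work and a
// caller could stop after a budget and resume later.
void drop_incrementally(std::shared_ptr<Node>& head) {
    while (head) {
        std::shared_ptr<Node> next = std::move(head->next);
        head = std::move(next);   // the old head dies here, freeing exactly one node
    }
}
```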

@rbehrends wrote:

Just keep in mind the trade-offs that you are making.

For starters, you are adding a considerable performance overhead to local pointer assignments; you're replacing a register-register move (and one that can be optimized away in many situations by a modern compiler through register renaming, turning it into a no-op) with a register-register move plus two memory updates plus a branch. Even where you have cache hits and the branch can be predicted correctly almost always, you're dealing with cache and BTB pollution; these effects may not fully show up in micro-benchmarks, but only completely manifest in large programs.
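
To make that overhead concrete, a hedged sketch (a toy `rc_ptr`, only the assignment path shown) of what a single counted pointer assignment expands to, compared with the plain register move a raw pointer assignment can compile down to:

```cpp
// A raw pointer assignment `p = q;` is a single move the optimizer can often elide.
// The counted equivalent touches two objects' counters and branches.
struct Counted {
    long count = 1;
    // payload...
};

struct rc_ptr {                              // toy smart pointer, assignment path only
    Counted* p = nullptr;

    rc_ptr& operator=(const rc_ptr& other) {
        if (other.p) ++other.p->count;       // memory write #1 (and a likely cache miss)
        Counted* old = p;
        p = other.p;                         // the register-register move we started with
        if (old && --old->count == 0)        // memory write #2 plus a branch
            delete old;                      // and possibly a cascade of further frees
        return *this;
    }
};
```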

@barrkel wrote:

There's no discussion of thread safety and how that affects reference counting. That's probably the biggest problem; you need to use memory barriers to make reference counting safe with multiple threads, and that can kill performance.

@vardump wrote:

Reference counting is fast with one CPU core. But it gets worse and worse as you add cores. Inter-CPU synchronization is slow and the buses can saturate. With enough cores, at some point that'll be all the system is doing.
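
A hedged sketch of why that happens: once the count must be shared across threads it becomes an atomic read-modify-write, and every core copying or dropping a reference serializes on the same cache line.

```cpp
#include <atomic>

struct SharedObject {
    std::atomic<long> count{1};   // every core copying/dropping a reference contends here
    // payload...
};

void retain(SharedObject* o) {
    // Atomic increment: the cache line ping-pongs between the cores holding references.
    o->count.fetch_add(1, std::memory_order_relaxed);
}

void release(SharedObject* o) {
    // The decrement needs acquire/release ordering so the delete sees all prior writes.
    if (o->count.fetch_sub(1, std::memory_order_acq_rel) == 1)
        delete o;
}
```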

@iofj wrote:

@rbehrends wrote:

It is well-known that…

I submit this is only true for an equivalent number of things to keep track of. In practice this is not the case. Languages with GC [typically] go completely overboard with GC, using it completely everywhere, even when not strictly necessary, and Java certainly does.

In C++, if you have a map with items, you use move semantics and you have either 0 or 1 refcounts to keep track of, the one for the map itself. The rest is still "refcounted" but without ever touching any integer, by the normal scoping rules. Same goes for Rust code. That's ignoring the fact that a Java object is, at minimum, 11 bytes larger than a C++ object. Given Java's boxing rules and string encoding, the difference in object sizes becomes bigger with bigger objects. Because out-of-stackframe RTTI is a basic necessity of tracing garbage collectors, this is unavoidable and cannot be fixed in another language. Bigger object sizes also mean more memory needed, more memory bandwidth needed, ... And Java's constant safety checks also mean

In Java, the same map will give the GC 3 items to keep track of (minimum) per entry in the map, plus half a dozen for the map itself. One for the object that keeps the key and the value, one for the key and one for the value. That's assuming both key and value are boxed primitives, not actual Java objects. In that case, it'll be more.

@majke wrote:

Having that in mind, there are a couple of problems with refcnt:

  • Cache spill. Increasing/decreasing counter values in random places in memory kills the cache. I remember one time I separated the "string object" data structure in Python away from the immutable (!) string value (Python usually has the C struct object followed in memory by the value blob). The Python program suddenly went way faster - immutable (readonly) string values in memory were close to each other, and the mutated refcnts were close to one another. Everyone was happier.

For this last quoted issue, I am contemplating whether the ability to employ the unused bits (c.f. also) of 64-bit pointers (64-bit also provides a huge virtual memory space, c.f. also) would enable storing an index into a contiguous array which has the reference counts as array elements, so they would be in contiguous pages. If the number of available unused bits is insufficient to count all allocated objects, then perhaps each compiled object type could employ a separate array (i.e. a zero-runtime-cost compile-time array selection), presuming the cache thrashing issue is due to intra-page-size fragmentation, not lack of inter-page contiguity. Also, then we do not need to create the array element until the second reference. This is applicable when mostly passing around objects and not accessing all the data of the object as frequently.
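
A rough sketch of the idea (entirely hypothetical layout: it assumes 16 currently-unused high pointer bits and at most 2^16 live counted objects per compiled type, and it ignores the canonical-address rules a real implementation would have to respect): the pointer carries an index into a contiguous per-type array of counts, so the mutated counts stay in a few hot pages instead of being scattered next to the objects.

```cpp
#include <array>
#include <cstdint>

// Hypothetical: all reference counts for one compiled object type live in one
// contiguous array, indexed by a 16-bit tag kept in the (assumed unused) top bits.
static std::array<std::uint32_t, 1u << 16> g_counts;

constexpr int           kTagShift = 48;
constexpr std::uint64_t kAddrMask = (std::uint64_t(1) << kTagShift) - 1;

inline void*         address_of(std::uint64_t tagged) { return reinterpret_cast<void*>(tagged & kAddrMask); }
inline std::uint16_t index_of(std::uint64_t tagged)   { return std::uint16_t(tagged >> kTagShift); }

inline std::uint64_t make_tagged(void* p, std::uint16_t idx) {
    return (reinterpret_cast<std::uint64_t>(p) & kAddrMask) | (std::uint64_t(idx) << kTagShift);
}

inline void retain(std::uint64_t tagged)  { ++g_counts[index_of(tagged)]; }
inline bool release(std::uint64_t tagged) { return --g_counts[index_of(tagged)] == 0; }  // caller frees via address_of()
```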

Weighted reference counting can be employed to reduce the accesses to the reference count by half (since the source pointer has to be accessed any way), but it requires some unused bits in the pointer to store the weight.
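
A hedged sketch of that scheme (illustrative only; a real implementation keeps the weight in spare pointer bits and must handle the weight bottoming out at 1, e.g. via an indirection node): copying a pointer splits its weight in half without touching the object at all, and only dropping a pointer writes to the shared total.

```cpp
#include <cstdint>

struct Weighted {
    std::uint32_t total_weight = 0;   // sum of the weights held by all live pointers
    // payload...
};

struct WeightedPtr {
    Weighted*     obj    = nullptr;
    std::uint32_t weight = 0;         // would really live in unused pointer bits
};

constexpr std::uint32_t kInitialWeight = 1u << 16;

WeightedPtr make_ref(Weighted* obj) {             // first reference to a fresh object
    obj->total_weight = kInitialWeight;
    return {obj, kInitialWeight};
}

WeightedPtr copy_ref(WeightedPtr& src) {          // note: no access to src.obj at all
    std::uint32_t half = src.weight / 2;          // (handling weight == 1 is elided)
    src.weight -= half;
    return {src.obj, half};
}

void drop_ref(WeightedPtr& p) {                   // only dropping touches the object
    if (p.obj && (p.obj->total_weight -= p.weight) == 0)
        delete p.obj;
    p.obj = nullptr;
}
```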


David Ireland wrote:

Don’t get me wrong, Apple must have had a good reason to choose reference counting for memory management. That reason is actually pretty obvious: Objective-C is not a safe language, so you can’t move objects. If you can’t move objects, you have to use something like malloc to minimise fragmentation: you pay a big up-front cost to allocate an object in the right place. In most GC schemes, allocation is free – you just increment a pointer. This is fine, because you are going to iterate through all your objects every now and again, and it doesn’t cost a whole lot more to move some of them around at the same time. You get to compact away all the fragmentation this results in. This makes GC fairly miserly with memory too. In iOS, if you want to allocate something bigger than the largest free fragment, you die. With GC, you have to wait a while for the GC to compact everything, after which memory will be compacted 100% efficiently and there will be a single memory extent as big as all available memory. Objective-C can’t have any of these benefits, so Apple chose a different route for iOS, and put the best possible spin on it.

Mobile apps won’t be fast until browsers are re-written in a fast, safe programming language that can share a good GC with ECMAScript. Of course browsers won’t be secure until then either, but I’m not holding my breath. There is no such language that is sufficiently popular, and Google fluffed its lines with Go. Maybe ECMAScript implementations will gain so much in performance that all code that manipulates the DOM will be written in ECMAScript, and we can finally use a high performance GC.


Edit Nov. 18, 2017 Distinguish between non-deterministic reference counting and deterministic RAII.

@NodixBlockchain
Author

Original Author: @shelby3
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-315958730
Original Date: Jul 18, 2017, 12:02 AM CDT


@anonymous wrote in private:

Progress on the language design is great to see.

I need progress on code, not language design, but I am trying to have a paradigm I can be thinking about when structuring my code in TypeScript + C, so that it can be an easy transition if ever we complete a new PL, and also to drive my insights on that potential language.

Concretely, I think it means I will write wrapper classes with getters and setters in TypeScript that take an ArrayBuffer for data and do my malloc in Emscripten and eliminate marshalling that way. Which will be a precursor to what a compiler can do automatically (and later with better integration by outputting ASM.js or WASM directly and maybe one day our own VM since the browser VMs have limitations).
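
As a hedged illustration of the native half of that plan (invented names and layout; the real record would be whatever the TypeScript wrappers agree on): a flat struct allocated with malloc in the Emscripten/WASM linear memory, with exported accessors that the TypeScript getters/setters would call, so no per-field marshalling is needed.

```cpp
#include <cstdint>
#include <cstdlib>

// Hypothetical record living in linear memory; the TypeScript wrapper holds only the
// pointer (an offset into the heap ArrayBuffer) and forwards to these accessors.
extern "C" {

struct Account {
    std::uint32_t id;
    double        balance;
};

Account* account_new(std::uint32_t id) {
    auto* a = static_cast<Account*>(std::malloc(sizeof(Account)));
    a->id = id;
    a->balance = 0.0;
    return a;
}
void          account_free(Account* a)                  { std::free(a); }
std::uint32_t account_get_id(const Account* a)          { return a->id; }
double        account_get_balance(const Account* a)     { return a->balance; }
void          account_set_balance(Account* a, double v) { a->balance = v; }

}  // extern "C"
```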

@NodixBlockchain
Author

Original Author: @shelby3
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-325094328
Original Date: Aug 26, 2017, 1:40 AM CDT


Rust’s leadership may be falling into the political hell hole.


Look what Eric wrote about Rust:

http://esr.ibiblio.org/?p=7711&cpage=1#comment-1913931
http://esr.ibiblio.org/?p=7724#comment-1916909
http://esr.ibiblio.org/?p=7303
http://esr.ibiblio.org/?p=7294
http://esr.ibiblio.org/?p=7294#comment-1797560
http://esr.ibiblio.org/?p=7294#comment-1797743
http://esr.ibiblio.org/?p=7711&cpage=1#comment-1911853

They also argue (similar to a point I made to @keean) that type system complexity can kill the economics of a language (this complexity budget has to be minimized):

http://esr.ibiblio.org/?p=7724#comment-1912884

And everyone agrees OOP (virtual inheritance and subtyping) are anti-patterns:

http://esr.ibiblio.org/?p=7724#comment-1912640

Comments about where Rust has overdrawn its complexity budget:

@NodixBlockchain
Author

Original Author: @shelby3
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-345494394
Original Date: Nov 18, 2017, 11:58 PM CST


I wrote:

We need optional GC and reference counting. The latter excels at deterministic, prompt finalization (which is required for non-main memory finalization) but has the trade-offs quoted below:

[…]

Edit Nov. 18, 2017 Distinguish between non-deterministic reference counting and deterministic RAII.

Re-reading the prior linked comments…

First realization is that reference counting is non-deterministic and is inferior to GC in every facet (for the applicable use cases of non-deterministic resource management and thus the inherent lack of deterministic finalization) except for interop with non-GC objects as explained below (but potentially at the cost of loss of compaction and loss of automatic cyclical references memory leak detection).

Afaics, a useful facility to add in addition to GC is the zero-cost1 abstraction of deterministic RAII block scope allocated/deallocated on the stack, because it (probably improves performance and) adds determinism of finalization (aka destructors) and interacts with the optimization of low-level performance as explained below.

To achieve such zero-cost resource management abstractions requires typing/tracking the lifetimes of borrows of references so the compiler can prove a resource has no live borrow when it goes out of block scope. I had proposed the compile-time enforced “is stored” annotation on function inputs for when an input will be (even transitively) borrowed beyond the lifetime of the function application. That is a separate concern from the issue of tracking borrows (à la Rust’s total ordering which assumes multithreaded preemption everywhere, or my proposed partial ordering in conjunction with a single-threaded event model) for the purpose of preventing concurrent race-condition access (which afair we discussed in depth in the Concurrency thread). This granular typing of the context of side-effects seems more straightforward than some grand concept of typing everything as a (cordoned context of imperative) side-effect??2

One of the key advantages/requirements of GC is the ability to move objects so that memory can be periodically compacted (with the compaction being amortized over many implicit/delayed deallocation events which would otherwise be explicit/immediate/non-deterministic-latency with reference counting), which enables allocation to be as low-cost as incrementing a pointer to the end of allocated objects in a contiguous memory address space.
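
That allocation path, as a hedged sketch (a toy bump allocator over one contiguous region; a real collector adds the evacuation/compaction step that keeps the region from filling with dead objects):

```cpp
#include <cstddef>
#include <cstdint>

// Toy bump allocator: a moving GC can afford this because compaction periodically
// squeezes the live objects back toward the start of the region.
struct BumpRegion {
    std::uint8_t* base;
    std::uint8_t* top;     // next free byte
    std::uint8_t* limit;

    void* alloc(std::size_t size) {
        size = (size + 7) & ~std::size_t(7);        // keep 8-byte alignment
        if (top + size > limit) return nullptr;     // a real GC would collect here
        void* p = top;
        top += size;                                // the entire allocation fast path
        return p;
    }
};
```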

As an aside, for highly optimized low-level code to run at maximum performance, it must be possible to freeze the position of objects in the virtual memory address space, else the objects must be copied. The former has a concurrency livelock and deadlock issue w.r.t. the said compaction.

For RAII managed data structures that contain pointers to data (i.e. unboxed) which is not RAII managed at the current or containing block scope (i.e. for GC pointers, since pointers to data RAII managed in a contained block scope would exhibit use-after-free errors), these pointers have to be mutable by the compaction engine and have to be recorded in the list of references which the GC must trace. To have full interoperability between GC instances and RAII (stack allocated data structure) instances without copying data incurs either a performance penalty of reloading pointers on each access (in case they’ve been moved by the compactor) or else the aforementioned concurrency livelock and deadlock issue w.r.t. the said compaction (which is an onerous, undesirable can-of-worms).

Thus, I’m leaning towards the view that stack allocated data structures should not interop with GC instances, except via copying, because (other than deterministic finalization) performance is their primary raison d'être. And interop with GC would (as GC does) also encourage unboxed data structures, which are known to thrash caches and degrade performance. Additionally, I think it should be allowed to have GC instances which are unboxed data structures (e.g. transpiled to ArrayBuffer in JavaScript) so they can be most efficiently copied (i.e. no transformation overhead). One might ponder whether to also have reference counted allocation on the heap, so that these immovable objects would not need to be copied when accessed by low-level performant algorithms (and note that the reason ASM.js didn’t allow very small ArrayBuffers is because there’s no presumption that what was compiled to ASM.js did bounds checking, although not a hindrance because our compiler could do static bounds checking in many cases and insert dynamic checks in the others), but the loss of compaction and the lack of automatic cyclical references detection are onerous. Any attempt to overcome these onerous trade-offs with “stop the world” of the code that is operating on non-GC instances would suffer from “a concurrency livelock and deadlock issue” as well (because, if you think about it, just stopping the threads is not sufficient, as the state can be invalid when the threads are restarted; instead they must be stopped at points where the current thread state has no such dependencies).

However, is memory compaction even necessary on 64-bit virtual address spaces wherein the hardware can map the virtual address pages to compacted, available physical pages? Well I guess yes unless most objects are much larger than the ~4Kb page size.

However, afaics perhaps the compaction issue could be significantly mitigated by allocating same-size objects from a contiguous virtual address space significantly larger than ~4KB in the reference counting algorithm. Thus allocations would first be applied to deallocated slots in the address space, if any, per the standard reference counting allocation and deallocation algorithm, which employs a linked-list of LIFO free slots.3
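
A hedged sketch of that per-size-class scheme (illustrative constants; slots are assumed at least pointer-sized): each size class reserves one large contiguous span, frees push slots onto a LIFO free list threaded through the slots themselves, and allocation pops from that list before bumping into fresh space.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// One pool per object size class, carved from a single contiguous span so that
// same-sized objects (and their reused slots) stay address-local.
struct FixedPool {
    std::size_t   slot_size;               // assumed >= sizeof(void*)
    std::uint8_t* span;                    // contiguous backing memory
    std::uint8_t* bump;                    // first never-used slot
    std::uint8_t* span_end;
    void*         free_list = nullptr;     // LIFO list threaded through freed slots

    FixedPool(std::size_t slot, std::size_t span_bytes)
        : slot_size(slot),
          span(static_cast<std::uint8_t*>(std::malloc(span_bytes))),
          bump(span),
          span_end(span + span_bytes) {}

    void* alloc() {
        if (free_list) {                   // reuse the most recently freed slot first
            void* slot = free_list;
            free_list = *static_cast<void**>(slot);
            return slot;
        }
        if (bump + slot_size > span_end) return nullptr;   // span exhausted
        void* slot = bump;
        bump += slot_size;
        return slot;
    }

    void release(void* slot) {             // push the slot onto the LIFO free list
        *static_cast<void**>(slot) = free_list;
        free_list = slot;
    }
};
```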

If compaction is required, then I contemplate whether unsafe low-level code which manually employs malloc and free should still be allowed for cases where there’s no other way to achieve performance with dynamic memory allocation.

1 Zero runtime cost because, due to the compile-time determinism, we know at compile time when the deallocation will occur.

2 How does Oleg’s insight conflict with @keean’s prior stance (in our discussions) against AST macros? I vaguely remember that he was against macros because they obfuscated debugging and the type system.

Is this the paper? http://okmij.org/ftp/Computation/having-effect.html#HOPE

I’m unable to readily grok Oleg’s work because he communicates in type theory and Haskell (in neither of which am I highly fluent). Afaics he explains with details/examples instead of conceptually and generatively from ground-level concepts. And I do not have enough time to apply arduous effort. So he limits his audience to those who are fluent in every facet of the prior art he builds on and/or who can justify the time and effort.

Essentially all side-effects distill to the generative essence of shared state. If state is not shared, then there are no interactions of effects between those processes.

Afaics, a Monad controls the threading of access to an orthogonal context across function application. That orthogonal context can be state in the State monad. Correct? That is why Monads can’t generally compose because they cordon orthogonal contexts.

But what is the goal to achieve? If we model higher-level concepts as imperative building blocks with contexts, we can do unlimited paradigms, but not only will our code become impossible to read because of the overhead of the necessary syntax and type system to allow for such generality (which was essentially my complaint against trying to model everything, e.g. even subtyping, as typeclasses as you were proposing before), but we have an explosion of issues with the interaction of too many paradigms. Afaics, this isn’t headed towards a simple language.

There will not be a perfect language. Type theory will never be complete and consistent. Design is driven by goals.

Thoughts?

3 Richard Jones. Garbage Collection Algorithms For Automatic Dynamic Memory Management, §2.1 The Reference Counting Algorithm

@NodixBlockchain
Author

Original Author: @NodixBlockchain
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-345981613
Original Date: Nov 21, 2017, 4:20 AM CST


If I may :)

For me the main problem I would have with garbage collectors is that they are harder to debug and test.

The nightmare I have is that you test the functionality on a simple program, it works, and you keep adding functions and race conditions, and two months later you realize there is a memory leak or memory corruption somewhere, and you're in for days or weeks of debugging to trace the problem.

With reference counting, it's very easy to trace problems, because you know where the freeing must happen; the point where the last reference to an object is released can be tracked easily, and that makes it very easy to track problems, whether they are memory corruption due to memory being released prematurely, or memory leaks.

With a garbage collector that parses the whole tree every now and then, it will traverse thousands of pointers allocated over a few minutes of running, and it can become much harder to track where the problem is if it frees memory that it shouldn't, or doesn't free memory that it should.

Especially if you go for multi-layered scopes, with closures and asynchronous code execution, it can become very hard to make sure the GC will catch everything right. And bugs will generally appear when the program starts to become more complex, rather than in easy vanilla cases when the program is small and simple.

And if you need a plugin system, i.e. code loaded at runtime that can manipulate references itself, it can also become impossible to track the reference tree deterministically at compile time, and the host program might have no clue at all what the plugin is doing with the references passed to it. And it's the same problem with distributed applications.

For me now, when I want to add a new feature, especially for memory management, my first concern is not only 'how to make it work', but also how easy it will be to detect problems, and to track what causes them.

With reference counting, it's easy to set a breakpoint when a reference gets freed, and have the debugger pause everything and inspect all the context at the moment it happens.

With GC, the analysis is delayed, and all the context of execution that changed the state of the object is lost, so it makes it very hard to debug.

For me, to make GC programming easy, it needs something like the Android SDK, where the whole application is structured at a macro level with activities, intents, and special classes for asynchronous/background execution, making it so that the lifetimes of all objects are predictable due to the structuring of the application, declaration of resources in XML, etc.

https://developer.android.com/guide/components/activities/process-lifecycle.html

They have a whole lot of classes to build the application on, to make resource management and object lifecycles more deterministic.

It imposes a certain structure on all application classes and components to make memory management fit into cases that the GC can easily deal with.

Making a general-purpose GC deal with a complex language with multi-layered scopes and asynchronous execution, without giving any hint of the pattern the object is likely to follow by inheriting some class, and without structuring the application with built-in classes with predictable behavior, seems very hard.

Reference counting is much easier to deal with for the general case.

@NodixBlockchain
Author

Original Author: @keean
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-346013299
Original Date: Nov 21, 2017, 6:34 AM CST


My preference is to manage as much as possible from the stack as you can. There are some simple tricks that can make this work. If all allocation is RAII and you have procedures not functions, then to return some data you simply pass a collection (set/map/array etc) into the procedure.

Edit: where this gets complicated is you then end up with two kinds of pointers: "owning pointers", which must form a Directed-Acyclic-Graph, and non-owning (or reference) pointers (and links between data-structures). The simple solution is to make all 'references' weak-references, so that you have to null-test them when dereferencing to check if the resource they point to has gone away. Then we simply delete things when they go out of scope, removing all resources pointed to by owning-pointers in the DAG. No GC, no reference counting, and a simple non-zero check on dereference. Of course 'nullable' pointers are not considered a good thing, so we can then look for static techniques where we do not allow references to outlive the object they reference. This is where Rust's lifetimes come from.
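
One way to get that "check on dereference" behaviour without counts or tracing is a generation-checked handle; a hedged sketch of the idea (an arena formulation with invented names, not necessarily the scheme intended above): the owner destroys the object whenever it chooses, and any stale reference simply fails its validity test.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

struct Widget { int value = 0; };

// Owners hold slots in an arena; "references" are (index, generation) handles.
// Destroying an object bumps the slot's generation, so stale handles fail the check.
struct Arena {
    struct Slot {
        std::optional<Widget> obj;
        std::uint32_t generation = 0;
    };
    std::vector<Slot> slots;

    struct Handle { std::uint32_t index; std::uint32_t generation; };

    Handle create() {                              // slot reuse is elided for brevity
        slots.push_back({Widget{}, 0});
        return {std::uint32_t(slots.size() - 1), 0};
    }

    void destroy(Handle h) {                       // the owner decides when this happens
        slots[h.index].obj.reset();
        ++slots[h.index].generation;               // invalidates all outstanding handles
    }

    Widget* deref(Handle h) {                      // the null test on every dereference
        Slot& s = slots[h.index];                  // (returned pointer is for immediate use)
        return (s.generation == h.generation && s.obj) ? &*s.obj : nullptr;
    }
};
```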

@NodixBlockchain
Author

Original Author: @shelby3
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-346107470
Original Date: Nov 21, 2017, 11:51 AM CST


@NodixBlockchain (aka IadixDev @ BCT) wrote:

If I may :)

I can’t speak for @keean (and this is his project area but this is my thread), but personally I’d prefer “please don’t” (or post once for every 10 posts by others in this thread, so as to keep the S/N ratio of the thread higher), i.e. that you would comment in your thread “Saluations! :)” instead (or at least not in this/my thread), unless you have something new+important+accurate+well-researched to point out which you have not written already in the past.

If you quote from this thread, and post in your thread, I will probably read (presuming I have time) and perhaps even reply there. I’m trying to avoid this thread becoming another 300+ posts that I can’t wade through when I want to refresh my own memory of what my (and @keean’s) analyses was/is.

@keean and I have been doing backtalking on LinkedIn to reduce clutter in these issues thread. You’re welcome to add me there.

With reference counting, it's very easy to trace problems, because you know where the freeing must happen; the point where the last reference to an object is released can be tracked easily, and that makes it very easy to track problems, whether they are memory corruption due to memory being released prematurely, or memory leaks.

I’m wondering whether that is a superstitious, anecdotal belief (i.e. aliasing error) or an assertion you actually studied, measured, and verified to be the case with in depth effort for comparing reference counting and tracing garbage collection. Because inherently reference counting doesn’t track the directed graph of memory allocation (i.e. all we have are reference counts and no back pointers to the references that comprise those counts); thus, there’s no feasible way to see which data structures have leaked because they’re no longer reachable from the untraced roots of the said graph in a reference counting GC system— for example due to cyclical references. Whereas, in theory a tracing garbage collector could provide such a graph for debugging.

With reference counting, it's easy to set a breakpoint when a reference gets freed, and have the debugger pause everything and inspect all the context at the moment it happens.

With GC, the analysis is delayed, and all the context of execution that changed the state of the object is lost, so it makes it very hard to debug.

You can’t see which/where are all the references to the shared object that had its reference count decremented, because it’s the implicit nature of non-deterministic memory allocation to not know which pointers would be the candidates, and the reference counting scheme provides no back pointers from the said object to the said sharing pointers (as aforementioned).

I agree it wouldn’t normally be possible to set a runtime breakpoint on the change in number of references to a specific object with a tracing GC, because the assignment of pointers doesn’t touch the referenced object (although write-guards for generational GC might get you as close as the virtual page but that might not be helpful). Thus, there’s no way to know which assignment to set a breakpoint on for testing the address of the said shared object. Yet comparably, the more optimized variants of reference counting also have the same debugging problem because they don’t even update the said common object (as explained below) because they keep the reference count in the pointers (and additionally there is the Weighted Reference Counting). In both cases, a sophisticated debugger and language could in theory be designed to set such conditional breakpoints on all pointer assignments (conditionally breaking only if the said shared object address is encountered), if such a facility is helpful for debugging memory leaks. And the tracing GC would have more information about the memory allocation graph.
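
As a sketch of the kind of facility being described (purely illustrative; `g_watched_object` would be set by a test or from the debugger), the counted release path can expose a hook so a conditional breakpoint or log fires only for a particular shared object's address:

```cpp
#include <cstdio>

struct Counted { long count = 1; };

// Set from a test or a debugger to the address being investigated.
static const void* g_watched_object = nullptr;

inline void on_refcount_change(const Counted* obj, long new_count) {
    if (obj == g_watched_object)                     // conditional "breakpoint"
        std::fprintf(stderr, "refcount of %p is now %ld\n",
                     static_cast<const void*>(obj), new_count);
}

inline void release(Counted* obj) {
    long n = --obj->count;
    on_refcount_change(obj, n);
    if (n == 0) delete obj;
}
```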

And if you need a plugin system, i.e. code loaded at runtime that can manipulate references itself, it can also become impossible to track the reference tree deterministically at compile time

It seems you’re presuming that reference counting models deterministic memory allocation, but that is not generally the case and (per my reply to @keean below), if that were generally the case then in theory it could be replaced by zero-runtime-cost compile-time deterministic allocation mgmt (albeit with tsuris of a complex compile-time typing system). Afaics, you continue to repeat this belief system of yours (presumably confirmation biased by your presumed preference for your low-level framework you discussed in your thread “Saluations! :)” and per my comments to @keean about C/C++ below) without refuting the points that have been made to you numerous times previously. I’ll recap and augment… (and hoping you’ll keep the not-so-thoroughly-contemplated-studied noise to a minimum in this/my thread please).

https://developer.android.com/guide/components/activities/process-lifecycle.html

They have a whole lot of classes to build the application on, to make resource management and object lifecycles more deterministic.

I don’t see how the process lifecycle has anything to do with the generalized problem that, in general, memory allocation is non-deterministic.

It imposes a certain structure on all application classes and components to make memory management fit into cases that the GC can easily deal with.

Programming paradigms for avoiding memory leaks are useful regardless of the memory allocation scheme chosen.

I previously mentioned the following disadvantages for reference counting to you (either in the Concurrency thread and/or the other thread you created). And note the first one below is a form of memory leak inherent in reference counting.

Reference counting (aka ARC, which technically is a form of “direct”1 garbage collection) can’t garbage collect cyclical references. And according to an expert book I’m reading there’s no known means of weak pointers or other paradigm which can reliably solve the problem of cyclical references in all scenarios.2 This makes sense because reference counting (RC) doesn’t contemplate the entire directed graph of objects holistically. Thus only “indirect, tracing”1 garbage collection which deals with the entire directed graph will reliably detect and garbage collect cyclical references. However, note that probabilistic, localized tracing combined with RC decrements is comparable to mark-sweep (MS)3 for (even apparently asymptotic) throughput computational complexity costs, yet with an upper bound (“6 ms”) on pause times due to locality of search. Yet presumably pause times are eliminated with a concurrent MS (and as cited below, RC has less throughput than BG-MS).

According to this expert book4, reference counting has significantly worse throughput (except compared to the pathological asymptotic huge heap case for tracing GC which can be avoided15, and perhaps multi-threaded/multi-core/parallelism but I think I need to study that more) because of the significant additional overhead for every pointer assignment. This overhead as a percentage is worse for highly optimized, low-level languages (e.g. C/C++) compared to interpreted languages which have high overhead any way. Note that a claimed throughput parity for RC with MS5 (and it’s just culling young generation overhead similar to prior art16) is not parity with generational GC combined with MS (BG-MS).6 Additionally the adjustment of the reference counts on each pointer assignment thrashes the cache7 further reducing performance, except for optimizations which generally have to be combined with a tracing GC any way (and they diminish the afaics mostly irrelevant claimed deterministic finality benefit).8 Additionally for multithreaded code, there is additional overhead due to contention over the race condition when updating reference counts, although there are potential amortization optimizations which are also potentially superior for parallelized and/or distributed paradigms9 (but they diminish the afaics mostly irrelevant claimed deterministic finality benefit). Additionally memory allocation management on the heap is non-deterministic, so the determinism of the immediacy of deallocation for reference counting is afaics more or less irrelevant. Additionally reference counting (especially when executing destructors) can cause a domino cascade of latency (aka “stop the world”) and the amortization optimizations10 again diminish the afaics mostly irrelevant claimed deterministic finality benefit.

However, in my next post in this thread, I will propose a variant of deferred reference counting11 in conjunction with a clever generational GC scheme (apparently similar to prior art16), for avoiding the pathological asymptotic case of tracing GC. See my further comments about this in my reply to @keean below.

Especially if you go for multi-layered scopes, with closures and asynchronous code execution, it can become very hard to make sure the GC will catch everything right. And bugs will generally appear when the program starts to become more complex, rather than in easy vanilla cases when the program is small and simple.

In general, more complexity in the code will increase the probability of semantic memory leaks, even semantic leaks of the form that automatic garbage collection can’t fix because the error is semantic. Reference counting (a form of automated GC) will also suffer these semantic memory leaks. The issue of complexity of semantics is more or less not peculiar to the type of automatic GC scheme chosen. Thus I don’t view this as a valid complaint against tracing GC in favor of reference counting.

1 Richard Jones (1996). Garbage Collection Algorithms For Automatic Dynamic Memory Management, §1 Introduction, pg. 2.

2 Richard Jones (1996). Garbage Collection Algorithms For Automatic Dynamic Memory Management, §3.5 Cyclic reference counting, pg. 56 – 62, §3.6 Issues to consider: Cyclic data structures, pg. 71.

3 D. F. Bacon and V. T. Rajan (2001). Concurrent cycle collection in reference counted systems. In J. L. Knudsen, editor, Proceedings of 15th European Conference on Object-Oriented Programming, ECOOP 2001, Budapest, Hungary, June 18-22, volume 2072 of Lecture Notes in Computer Science, pp. 207–235. Springer-Verlag, 2001.

4 Richard Jones (1996). Garbage Collection Algorithms For Automatic Dynamic Memory Management, §2.1 The Reference Counting Algorithm, pg. 19 – 25, §3 Reference Counting, pp 43 – 74.

5 R. Shahriyar, S. M. Blackburn, X. Yang, and K. S. McKinley (2012), "Taking Off the Gloves with Reference Counting Immix," in OOPSLA ‘13: Proceeding of the 24th ACM SIGPLAN conference on Object oriented programming systems languages and applications, 2013. Originally found on Steve Blackburn’s website.

6 Stephen Blackburn; Kathryn McKinley (2003). "Ulterior Reference Counting: Fast Garbage Collection without a Long Wait". Proceedings of the 18th annual ACM SIGPLAN conference on Object-oriented programing, systems, languages, and applications. OOPSLA 2003, Table 3 on pg. 9.

7 Richard Jones (1996). Garbage Collection Algorithms For Automatic Dynamic Memory Management, §2.1 The Reference Counting Algorithm: The strengths and weakness of reference counting, pg. 22.

8 Richard Jones (1996). Garbage Collection Algorithms For Automatic Dynamic Memory Management, §3.3 Limited-field reference counts, pp. 50 – 55, §3.3 Limited-field reference counts: One-bit reference counts, pg. 52, §3.6 Issues to consider: Space for reference counts & Locality of references, pg. 71.

9 Richard Jones (1996). Garbage Collection Algorithms For Automatic Dynamic Memory Management, §8.4 Concurrent Reference Counting, pp. 200 – 201, §3.7 Notes, pg. 74, §4.1 Comparisons with reference counting, pg. 77.

10 Richard Jones (1996). Garbage Collection Algorithms For Automatic Dynamic Memory Management, §3.1 Non-recursive freeing, pg. 44.

11 Richard Jones (1996). Garbage Collection Algorithms For Automatic Dynamic Memory Management, §3.2 Deferred reference counting: The Deutsch-Bobrow algorithm, pg. 46.


@keean wrote:

My preference is to manage as much as possible from the stack as you can. There are some simple tricks that can make this work. If all allocation is RAII and you have procedures not functions then to return some data you simply pass a collection (set/map/array etc) into the procedure.

This is conceptually broadly analogous/related to deferred reference counting12 schemes which rely on compile-time deterministic tracing of the deallocation, such as via linear (aka uniqueness) types.

Although I was obviously also contemplating similarly to you in my prior posts in this thread (after you had reminded me about RAII in our previous discussions and my recollection of programming in C++ in the late 1990s), after reading a book on garbage collection and thinking more about the issues, I’m going to further argue against the compile-time deterministic memory mgmt language design strategy in my upcoming detailed post, because I believe it’s not as generally flexible, generally sound, nor as highly comprehensible a solution.

These stack memory mgmt paradigms you’re referring to, have narrow applicability because they don’t handle the general case of non-deterministic heap management.

Afaics, it’s a special case methodology that leads to a very complex set of corner cases, such as for C/C++, which tries to pile on a bunch of different sorts of compile-time optimizations and keep as much memory mgmt on the stack as possible, but leads to a brittle clusterfuck of a language with a 1500 page manual that virtually no one understands entirely (not even the C++ creator Bjarne Stroustrup nor the STL creator Alexander Stepanov, as they both admitted).

And those stack paradigms don’t marry well holistically with a bolt-on tracing garbage collection appendage13 (i.e. one not designed holistically with the language), although I have not yet read a more recent research paper on combining tracing GC with Rust.14 To even attempt to apply these compile-time deterministic memory management schemes holistically requires a complex typing tsuris akin to Rust’s total ordering of borrow lifetimes with lifetime annotation parameters for transitivity of borrows from inputs to outputs, and of course the programmer must violate the invariants and create unsafe code in order to handle non-deterministic (i.e. runtime randomized) memory mgmt, which defeats the entire point, as then the total order is no longer checked and the unsafety can leak back into the code the compiler thinks is safe.

I apply the 80/20 or 90/10 Pareto principle to prioritization. Although all that special case tsuris might be able to obtain an extra 10% of performance, it has the cost of 90% more effort on debugging, readability of the code, maintenance of the code, complexity of the code, etc.. When you really need that 10% boost (or in the case where the more generalized GC paradigm is much slower for some reason), then go write a module in C++ or Rust. I see no need to invest my effort in trying to recreate (or improve upon) those languages, because there are millions of man-hours already in those ecosystems. Our only prayer of creating something useful in this decade with our resources, is to 90/10% prioritize on clean abstractions that are elegant and generalized and 90% performant with 10% of the effort and complexity.

EDIT: on the coding for readability point, “Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.”

Additionally, perhaps my GC design ideas (to be outlined in my next post) might possibly be compelling (if I haven’t made some errors in my thought process, which is entirely possible given I need to eat dark chocolate to even ephemerally coax my brain out of the 105 IQ brain fog gutter and function non-lethargically, presumably due to an ongoing TB infection). Note however, that the prior art for GC is apparently vast and seemingly exhaustive in terms of the low hanging fruit of obvious, simple variants16 (i.e. fertile ground presumably only in the realm of the esoteric, high complexity, and/or abstruse). Thus, it is unlikely I discovered something (after a day of reading and contemplation) that hadn’t been considered previously in the prior art, although there appears to be potential prior art similar to my ideas16 that has appeared later than the book I read. Note I independently arrived at my ideas (after about a day of contemplating the 1996 book) and have not yet read that 2003 research paper16 to compare.

Total ordering compile-time paradigms (including attempting to type check everything) are afaics brittle paradigms that break because the programmer must necessarily break out of them because the entropy of the universe is not bounded. I commented on this previously numerous times in these issues threads. The non-determinism (i.e. unbounded randomness and entropy) of the universe (i.e. the real world in I/O) can’t be statically modeled.

My thought is we need to be circumspect about the benefits versus trade-offs of every paradigm and feature we choose in a programming language. And the target use cases (and acumen/preferences of the target programmers demographics) of the programming language will also factor into the decision process.

For example, Haskell programmers favor brevity and mathematical expression with trade-offs such as hidden (2nd, 3rd, and 4th order) details in the form of implicit typing (which the mathematically minded model in their mind without reading them explicitly in the code), which drastically reduces the demographics of programmers who can read the code and participate. Some snobbishly claim this is a good filter to keep the less intelligent programmers away. Haskell has other trade-offs (necessarily to enable the mathematical modelling) which we’ve discussed else where are outside the scope of this concise summary, including but not limited to (and my descriptions may not be entirely accurate given my limited Haskell fluency) the debugging non-determinism tsuris of lazy evaluation (although the requirement for lazy evaluation is disputed), the bottom type populating all types, a total ordering (or algebra) on implementation of each data type for each typeclass interface, a total ordering on the sequential (monadic) threading of I/O processing in the code (requiring unsafe code to fully accommodate the non-determinism of I/O), etc..

Edit: where this gets complicated is you then end up with two kinds of pointers: "owning pointers", which must form a Directed-Acyclic-Graph, and non-owning (or reference) pointers (and links between data-structures). The simple solution is to make all 'references' weak-references, so that you have to null-test them when dereferencing to check if the resource they point to has gone away. Then we simply delete things when they go out of scope, removing all resources pointed to by owning-pointers in the DAG. No GC, no reference counting, and a simple non-zero check on dereference. Of course 'nullable' pointers are not considered a good thing, so we can then look for static techniques where we do not allow references to outlive the object they reference. This is where Rust's lifetimes come from.

Human managed weak references are an incomplete and human-error prone strategy. Also as you noted then you leak the non-determinism from deallocation to how to handle error conditions of null pointers, leaking the non-determinism into program logic! That is horrible. Whereas, a provably sound weak pointer algorithm has exponential asymptotics and thus is no better or worse than tracing GC mark-sweep.2

12 Richard Jones (1996). Garbage Collection Algorithms For Automatic Dynamic Memory Management, §3.2 Deferred reference counting, pg. 45.

13 Richard Jones (1996). Garbage Collection Algorithms For Automatic Dynamic Memory Management, §9 Garbage Collection for C, pg. 227.

14 Y. Lin, S. M. Blackburn, A. L. Hosking, and M. Norrish (2016), "Rust as a Language for High Performance GC Implementation," in Proceedings of the Sixteenth ACM SIGPLAN International Symposium on Memory Management, ISMM ‘16, Santa Barbara, CA, June 13, 2016, 2016.

15 Richard Jones (1996). Garbage Collection Algorithms For Automatic Dynamic Memory Management, Preface: The audience, pg. xxiv, Preface: The Bibliography and the World-Wide Web, pg. xxv.

16 Stephen Blackburn; Kathryn McKinley (2003). "Ulterior Reference Counting: Fast Garbage Collection without a Long Wait". Proceedings of the 18th annual ACM SIGPLAN conference on Object-oriented programing, systems, languages, and applications. OOPSLA 2003. I originally found this on Wikipedia’s page for Reference Counting.


A Kindle version of the book is available on Amazon for less than $20 if you don’t want to turn your monitor sideways to read the “free” (copyright violation theft) pdf linked above.

EDIT: a more concise overview resource:

@NodixBlockchain wrote:

https://openresearch-repository.anu.edu.au/bitstream/1885/9053/1/02WholeThesis_Garner.pdf

This paper covers it all :D

@NodixBlockchain
Author

Original Author: @keean
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-346114141
Original Date: Nov 21, 2017, 12:14 PM CST


The claims made about GC efficiency in those kinds of books never turn out to be true in reality. Look at how JavaScript is slow, yet emscripten can get within a factor of 2 of native 'C', yet both run on the same VM. The difference is that asm.js and WebASM are strongly typed and manually manage the memory. In theory the JIT can make well-typed JS code as fast as strongly typed code, so well-typed TypeScript should be as fast as the compiled asm.js/WebASM, but it's not. The remaining difference is mostly down to memory management in my opinion.

@NodixBlockchain
Author

Original Author: @NodixBlockchain
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-346116255
Original Date: Nov 21, 2017, 12:21 PM CST


Edit: where this gets complicated is you then end up with two kinds of pointers: "owning pointers", which must form a Directed-Acyclic-Graph, and non-owning (or reference) pointers (and links between data-structures). The simple solution is to make all 'references' weak-references, so that you have to null-test them when dereferencing to check if the resource they point to has gone away. Then we simply delete things when they go out of scope, removing all resources pointed to by owning-pointers in the DAG. No GC, no reference counting, and a simple non-zero check on dereference. Of course 'nullable' pointers are not considered a good thing, so we can then look for static techniques where we do not allow references to outlive the object they reference. This is where Rust's lifetimes come from.

I wonder if this solution would work if the object instance referenced can change during the lifetime of the reference.

Because it would mean you might need to delete the referenced object before the reference goes out of scope, in case a new object is assigned to it during its lifetime.

@NodixBlockchain
Author

Original Author: @barrkel
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-346119442
Original Date: Nov 21, 2017, 12:33 PM CST


You can see the efficiency of most GCs really easily: just look at % time in GC. For programs designed with generational GC in mind, it won't be above 10% and will more likely be < 2%. GC isn't the culprit for Javascript running slower than webasm. Types are part of it, but it's more about abstraction level. Higher abstraction means the runtime needs to perform more work to get to machine code, and will be using generalisations (aka baked in assumptions about trade-offs) to get there - and on a JIT time budget too. Whereas webasm is designed to be efficiently and unambiguously converted to machine code, so there will be fewer trade-offs, and the compiler targeting webasm can spend much more time analysing the trade-offs involved in reducing abstraction level.

I don't think there's any theory that JS would ever be as fast as webasm under any reasonable set of assumptions, because the time budgets need to be taken into consideration. JIT means deadlines.

And that's without getting into the costs of semantic fidelity. You can't simply pretend JS's prototype inheritance doesn't exist given TypeScript source; all the work needs to be done anyhow, just in case, since the web is a distributed system and thus non-deterministic.



@NodixBlockchain
Author

Original Author: @keean
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-346122168
Original Date: Nov 21, 2017, 12:43 PM CST


TypeScript can force you to stick to classes for objects, allowing the JIT to efficiently optimise these. Really, for JS where there are no allocations going on, it is as fast as asm.js. GC hides all sorts of costs, like the extra level of pointer indirection.

Probably a major factor is that GC hides the cost of allocation and deallocation, so the programmer does not realise they are thrashing object creation.

What is the fastest garbage collected language?

@NodixBlockchain
Author

Original Author: @keean
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-346126086
Original Date: Nov 21, 2017, 12:56 PM CST


GC is even worse with multiple threads, because it has to stop all the threads to do the collection. This leads to GC pauses and bad user interface jank, plus programs crashing due to fragmentation...

@NodixBlockchain
Author

Original Author: @keean
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-346126913
Original Date: Nov 21, 2017, 12:59 PM CST


The cache issue for me I don't think it's a problem, as the pointer and reference count are supposed to always be in the same cache line

The reference count needs to be in the object referenced, not the pointer, so it won't be in the same cache line as the pointer. When you have multiple pointers to the same object they all need to increment/decrement the same counter.
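
A hedged layout sketch of that point: the count lives in the referenced object's header (or a control block reachable from it), so every pointer copy writes to the object's cache line, never to something local to the pointer.

```cpp
// The count is per object and shared by every pointer to that object.
struct Object {
    long refcount = 1;      // in the object's header, not in any pointer
    // payload...
};

struct Ref {
    Object* target;         // many Ref values, possibly in many different cache lines,
};                          // all update target->refcount -- the object's cache line

inline void retain(const Ref& r)  { ++r.target->refcount; }
inline void release(const Ref& r) { if (--r.target->refcount == 0) delete r.target; }
```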

@NodixBlockchain
Author

Original Author: @keean
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-346128500
Original Date: Nov 21, 2017, 1:05 PM CST


See: https://softwareengineering.stackexchange.com/questions/203745/demonstration-of-garbage-collection-being-faster-than-manual-memory-management

The key fact for me is that managed memory usage is always significantly worse than unmanaged. Yes, GC can be fast providing you have twice the RAM available. Typically on a multi-tasking machine this means less RAM available for other processes. This results in real slowdown of things like the disk-caching layer, because it's losing pages to the application.

The end result is that the user experience of using a system with GC is significantly worse than using a system with manual allocation - providing the software does not crash due to bad memory management :-)

@NodixBlockchain
Author

Original Author: @keean
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-346131548
Original Date: Nov 21, 2017, 1:15 PM CST


It cannot be with the pointer, because when you create a new pointer to the array you need to increment the reference count.

@NodixBlockchain
Author

Original Author: @barrkel
Original URL: https://github.com/keean/zenscript/issues/35#issuecomment-346132506
Original Date: Nov 21, 2017, 1:19 PM CST


The argument in favour of GC isn't speed of execution, it's speed of development at very low costs - an extra 2 to 10% or so. Speed of development in memory safe languages is significantly higher; and if you don't like GC, GC isn't the only way to get to memory safety, but it is the easier way.

Reference counting is GC. Arguably, arena allocation, or any other scheme more sophisticated than paired alloc & free, is GC. But those schemes are usually chosen not because they perform better (though they often do - free() isn't "free") - but because developer productivity is higher. And none of those schemes are what make something like JS slower than something like webasm.

Yes, GC scales because it trades space for time. If you're running on a small embedded device, it can be a poor tradeoff; if you're trying to maximize compute throughput across multiple machines, it can cost real dollars. But these are extremes, and most people aren't on the extremes.

I mostly just butted in because blaming GC for JS / TypeScript being slower than webasm seemed a bit outlandish. GC overhead is directly measurable; it's trivially disproved for anyone inclined to measure.

I'm gonna unsubscribe from this thread now. I got linked in by an @ reference.



@NodixBlockchain

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-346135291
Original Date: Nov 21, 2017, 1:29 PM CST


I mostly just butted in because accusing GC for JS / Typescript being
slower than webasm seemed a bit outlandish. GC overhead is directly
measurable; it's trivially disproved for anyone inclined to measure.

Actually the difference in performance between JS and WebASM is only about 10-20% anyway, which is on the same order as the 2%-10% overhead you claimed for the GC, so for some algorithms the GC could account for 100% of the difference...
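
(For illustration, taking those figures at face value with made-up numbers: if the WASM version of a routine runs in 100ms and the JS version in 110-120ms, then a GC overhead of 2-10% of the JS run is roughly 2-12ms, which is the same order as the 10-20ms gap itself, so for some algorithms it could indeed account for most or even all of the difference.)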

@NodixBlockchain

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-346138363
Original Date: Nov 21, 2017, 1:40 PM CST


Interestingly I am about to port a bunch of asm.js to TypeScript, as the memory allocation for asm.js / webasm does not work well with the main JS garbage collector, and allocating the heap for the webasm is causing fragmentation of the JS heap. In the end it turns out that the better memory usage of keeping all the memory managed by the GC is better for application stability than the small performance gain of having the code in asm.js.

Following this argument to its logical conclusion, the better memory usage of manually managed memory for the whole application would be better overall.

Edit: it's probably worth pointing out that this application uses a lot of memory, and has some periods when it is performing rapid allocations. This leads to memory usage for a single tab of about 1GB (even though memory buffers are 16MB, so with manual allocation it would get nowhere near that level). Frequently when usage gets above 1GB the web-browser crashes, even though most of that memory is probably free and just waiting to be collected.

@NodixBlockchain

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-346186551
Original Date: Nov 21, 2017, 4:47 PM CST


@NodixBlockchain wrote:

What makes me think that cyclic dependencies are not a problem with the way I handle them is that I have a case of cyclic dependency like this.

This is off-topic. Explicit, unchecked (i.e. “unsafe”) manual mgmt is not what we are interested in here. The topic was about automatic (or compiler checked deterministic) mgmt. As you know, both @keean and I are significantly into compiler-assisted checking.

I understand you want to participate. But you really have not wrapped your mind around all the concepts, and so you should be more circumspect and lurk more. I said if you want to post to your thread, then I will try to participate occasionally.

Please be respectful and recognize your limitations.

I'm aware of the potential downsides, but for me they remain benign for the use cases I have for the moment.

After briefly learning about it, I’m no longer interested in your low-level framework. Your monotheistic intent on pushing your low-level preferences diverges the focus of my thread, because you don’t have an open-mind attempting to be objective about research results. Your low-level framework focus is irrelevant to the topic of this thread. Please talk about that in your thread which you entitled “Saluations! :)”. This/my thread is not for trying to convince yourself and/or the world that your low-level framework paradigm is great. Your thread is for that. I wouldn’t ban you even if this were my repository, but I believe curation (not absolute censorship) is valuable. Trust me, if you write anything in your thread which I happen to read, that I think is important enough to be in this thread, I will quote it over in this thread. Also I presented a guideline given that I find your posts have so many errors and low informational value, that you could post after every 10 posts by others in the thread (so as to force you to focus yourself more and keep the S/N ratio here reasonable).

I considered the low-level implications as even more evident by the post which follows this one. And realized that your preferences are a dinosaur. See my next post.

I am discussing here about general purpose, high-level programming language design that employs compiler checking and/or automated runtime GC to facilitate lower costs of development and guard against human error. With some thought given to performance optimization and some perhaps C-like low-level capabilities as long as the result is clean, simple, and elegant and not explicit, manual, bugs-prone human mgmt of memory.

@NodixBlockchain

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-346199127
Original Date: Nov 21, 2017, 5:53 PM CST


@keean wrote:

The difference is that asm.js and WebASM are strongly typed and manually manage the memory. In theory the JIT can make well typed JS code as fast as strongly typed code, so well typed TypeScript should be as fast as the compiled asm.js/WebASM, but its not. The remaining difference is mostly down to memory management in my opinion.

TypeScript transpiles to JavaScript (ES6), not ASM.js nor WASM.

(As you know, TypeScript aims to be round-trip compatible with and a typed superset of JavaScript— meaning it should pass through ES6 code more or less unchanged.)

Also, C code which is transpiled to ASM.js/WASM employs different abstractions, such as not boxing every damn thing (e.g. every element of every array).

The VM can’t always statically type objects, so methods and type coercions can require dynamic overhead.

Also I believe JS may be passing all function arguments on the heap (the prototype chain of objects which form the closures we previously discussed). The throughput and locality (i.e. contiguity for minimizing cache and virtual-paging thrashing) are similar in both cases: allocating contiguously on the stack versus on the generational heap costs only an extra end-of-area pointer increment per generational-area allocation, compared to one SP adjustment per function call. However, the volume of objects accumulated before a collection (i.e. before the end-of-area pointer is reduced; although presumably all of a function’s arguments could be allocated as one data structure, thus at equivalent cost) is much greater than for stack-passed arguments, and thus perhaps causes additional thrashing.

C code rarely employs hashmap lookup, yet JS literally encourages it since all object properties and non-numeric array indices employ it.

There are many more details and we would need to study all the reasons:

https://news.ycombinator.com/item?id=7943303
https://www.html5rocks.com/en/tutorials/speed/v8/
https://blog.sessionstack.com/how-javascript-works-inside-the-v8-engine-5-tips-on-how-to-write-optimized-code-ac089e62b12e

This thread is in part my attempt to get the best of low-level ASM.js/WASM married to a better GC scheme that is more compatible with (some of) those lower-level abstractions!

[…] GC hides all sorts of costs, like the extra level of pointer indirection.

Probably a major factor is that GC hides the cost of allocation and deallocation, so the programmer does not realise they are thrashing object creation.

Agreed, e.g. they do not realize they’re boxing everything, forcing an extra pointer indirection for every read/write of every array element.
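
For illustration, a hedged TypeScript sketch (made-up example, not from the thread) of the boxed layout JS encourages versus the unboxed, contiguous layout that ASM.js/WASM-style code uses:

```typescript
// Sketch: boxed vs. unboxed layout (illustrative only).

// Boxed: each element is a separate heap object; reading p.x follows a
// pointer and performs a property lookup (hidden class or dictionary mode).
interface PointBoxed { x: number; y: number; }
const boxed: PointBoxed[] = Array.from({ length: 1000 }, (_, i) => ({ x: i, y: i }));

// Unboxed: one contiguous buffer with fields interleaved; reading x is a
// plain indexed load with no per-element allocation and no property lookup.
const unboxed = new Float64Array(1000 * 2);     // [x0, y0, x1, y1, ...]
for (let i = 0; i < 1000; i++) {
  unboxed[2 * i] = i;       // x
  unboxed[2 * i + 1] = i;   // y
}

function sumXBoxed(ps: PointBoxed[]): number {
  let s = 0;
  for (const p of ps) s += p.x;                 // pointer chase per element
  return s;
}
function sumXUnboxed(buf: Float64Array): number {
  let s = 0;
  for (let i = 0; i < buf.length; i += 2) s += buf[i];  // sequential reads
  return s;
}

sumXBoxed(boxed);
sumXUnboxed(unboxed);
```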

Actually the difference in performance between JS and WebASM[ASM.js] is only about 10-20% anyway, which is on the same order as the 2%-10% overhead you claimed for the GC, so for some algorithms the GC could account for 100% of the difference...

Huh? JS is several times slower than C, and ASM.js is afair purportedly within about 40% of native C speed:

https://julialang.org/benchmarks/
https://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=node&lang2=gpp

Perhaps you’re referring to JS highly optimized to avoid features which impair aforelinked V8’s optimization?

Yes the choice is between static or runtime checking. The best is to use the stack, so there is no need for checking, but that does not work for all scenarios.

Generational GC is very efficient nearly as much as the stack (as I explained) and I am going to propose an algorithm that I am thinking might make it even more efficient.

What he fails to realise is that 2-10% is only true when you have enough memory.

It is true that the asymptotic computational complexity (i.e. performance) of tracing GC is pathological both as live memory usage approaches/exceeds the bound of physical memory and as total memory consumption increases for computers with more GBs (or TBs) of DRAM. That is why I cited that research for footnote16 in my prior post—which btw my independently derived idea seems to be roughly analogous to—for removing the pathological asymptotics by leveraging an optimized form of RC for the older generations. I will write my detailed post about this soon.

And that is why I allude in my response to @barrkel to the time vs. space trade-off approaching parity between automated GC and manual/stack/compile-time deterministic paradigms for memory management. The reason that the deterministic paradigms you espouse will not exceed the space vs. time trade-offs for the throughput (nor maximum latency pauses) of state-of-the-art automated GC in general is because I repeat that memory allocation mgmt is inherently non-deterministic (thus requires runtime management)!

Also there is research1 (which I haven’t read yet) which seems to indicate that as the number of (perhaps also non-deterministic) factors (e.g. power and energy management issues) that need to be managed increases, then automatic algorithms may become the most plausible (reasonable cost, time-to-market, stability, maintenance) for most applications with hand-tuning being further, further relegated to the esoteric that demand extreme tuning (e.g. the NASA space shuttle).

Yes GC can be fast providing you have twice the RAM available.

Even manual memory mgmt will have virtual memory paging out to disk as the physical memory is exhausted.

That half memory deal is afaik due to the “semi-space” of a copy collector that is not generational. Afaics, there are better alternatives now. See footnote16 in my prior post. I need to study this more thoroughly though.

Of course manual (both unchecked and compiler checked) mgmt can be a bit faster or correct some pathological cases (as well create potentially other ones ;), but at great cost as I wrote in my prior post as follows:

I apply the 80/20 or 90/10 Pareto principle to prioritization. Although all that special case tsuris might be able to obtain an extra 10% of performance, it has the cost 90% more effort on debugging, readability of the code, maintenance of the code, complexity of the code, etc.. When you really need that 10% boost (or in the case where the more generalized GC paradigm is much slower for some reason), then go write a module in C++ or Rust. I see no need to invest my effort in trying to recreate (or improve upon) those languages, because there are millions of man-hours already in those ecosystems. Our only prayer of creating something useful in this decade with our resources, is to 90/10% prioritize on clean abstractions that are elegant and generalized and 90% performant with 10% of the effort and complexity.

And you alluded to the great cost in terms of bugs and/or tsuris:

using a system with manual allocation - providing the software does not crash due to bad memory management :-)

In my prior post, I had alluded to how attempting to have the compiler check memory mgmt is forcing it into a deterministic, total order, which will necessarily cause the programmer to violate the compiler checks and leads to bugs (unless perhaps that compile-time paradigm is somehow limited to only deterministic instances, but from my study thus far it appears that the cross-pollution/interopt between young and older generation allocation makes non-deterministic, runtime automated algorithm interop with a deterministic young generation have worse characteristics than a runtime, automated generational non-deterministic collector for the younger generation):

To even attempt to apply these compile-time deterministic memory management holistically requires a complex typing tsuris akin to Rust’s total ordering of borrow lifetimes with lifetime annotation parameters for transitivity of borrows from inputs to outputs, and of course the programmer must violate the invariants and create unsafe code in order to handle non-deterministic (i.e. runtime randomized) memory mgmt, which defeats the entire point as then the total order is no longer checked and the unsafety can leak back into the code the compiler thinks is safe.

[…]

Total ordering compile-time paradigms (including attempting to type check everything) are afaics brittle paradigms that break because the programmer must necessarily break out of them because the entropy of the universe is not bounded. I commented on this previously numerous times in these issues threads. The non-determinism (i.e. unbounded randomness and entropy) of the universe (i.e. the real world in I/O) can’t be statically modeled.

1 T. Cao, S. M. Blackburn, T. Gao, and K. S. McKinley, "The Yin and Yang of Power and Performance for Asymmetric Hardware and Managed Software," in ISCA ‘12: The 39th International Symposium on Computer Architecture, 2012. Originally found on Steve Blackburn’s website. Also T. Cao’s master’s thesis prior art.


@barrkel wrote:

The argument in favour of GC isn't speed of execution, it's speed of
development at very low costs - an extra 2 to 10% or so. Speed of
development in memory safe languages is significantly higher

+1.

Cited:

Hertz and Berger [2005] explore the tradeoff between automatic and manual memory management in an artificial setting, using a memory trace from a previous run of the benchmark to determine when objects become unreachable and inserting explicit calls to free objects at the moment they become unreachable. This analysis, while interesting in itself, ignores the enormous programming effort and overhead (as in talloc above) required to track the ownership of objects and determine when they are no longer used. The most frequently cited study is Rovner [1985], who found that Mesa programmers spent 40% of their time implementing memory management procedures and finding errors related to explicit storage management.

Also it is about consistency of memory mgmt, i.e. avoiding the pitfalls of the average programmer or rushed development team.

Yes, GC scales because it trades space for time. If you're running on a
small embedded device, it can be a poor tradeoff; if you're trying to
maximize compute throughput across multiple machines, it can cost real
dollars. But these are extremes, and most people aren't on the extremes.

Well yes, but as I mentioned above in my reply to @keean, and I presume you’re also alluding to, that the differences in space vs. time tradeoffs between automatic and manual memory mgmt are getting closer to low double digit (or even single digit) percentages (i.e. “extremes” of optimization) and thus heavily favoring automatic GC for most use cases. Note I’m not conflating this metric with the 2 - 10% figure for throughput differences that you asserted.

I mostly just butted in because accusing GC for JS / Typescript being
slower than webasm seemed a bit outlandish. GC overhead is directly
measurable; it's trivially disproved for anyone inclined to measure.

I'm gonna unsubscribe from this thread now.

Note though that the metric you propose (although significantly representative given all other factors equivalent) doesn’t account for all impacts of the GC scheme chosen, such as cache and virtual paging locality (contiguity) to minimize thrashing which can significantly impact performance. Note however, that deterministic compile-time (manual explicit or for stack argument conventions and such thus assisted partially implicit) memory mgmt doesn’t necessarily achieve a panacea for this ancillary factor and could actually degrade it because generational and copy GC increase locality.

I appreciate your assistance in helping to explain. And I agree that is an important clarification to emphasize. Please feel free to participate here as often or infrequently as you want.

Btw, @keean is expert in other areas such as C/C++, Haskell, Prolog, type theory, and library algebras. Afaik, he has only been developing seriously with JS/TypeScript for a relatively much shorter period of time. So that might explain it. Or it could also be rushing and trying to answer late in the evening after a long day of managing his company.

@barrkel wrote:

Types are part of it, but it's more about
abstraction level. Higher abstraction means the runtime needs to perform
more work to get to machine code, and will be using generalisations (aka
baked in assumptions about trade-offs) to get there - and on a JIT time
budget too. Whereas webasm is designed to be efficiently and unambiguously
converted to machine code, so there will be fewer trade-offs, and the
compiler targeting webasm can spend much more time analysing the trade-offs
involved in reducing abstraction level.

[…]

And that's without getting into the costs of semantic fidelity. You can't
simply pretend JS's prototype inheritance doesn't exist given typescript
source; all the work needs to be done anyhow, just in case, since the web
is a distributed system and thus non-deterministic.

+1.

@NodixBlockchain

Original Author: @NodixBlockchain
Original URL: https:/keean/zenscript/issues/35#issuecomment-346301603
Original Date: Nov 22, 2017, 4:01 AM CST


I’m wondering whether that is a superstitious, anecdotal belief (i.e. aliasing error) or an assertion you actually studied, measured, and verified to be the case with in depth effort for comparing reference counting and tracing garbage collection. Because inherently reference counting doesn’t track the directed graph of memory allocation (i.e. all we have are reference counts and no back pointers to the references that comprise those counts); thus, there’s no feasible way to see which data structures have leaked because they’re no longer reachable from the untraced roots of the said graph in a reference counting GC system— for example due to cyclical references. Whereas, in theory a tracing garbage collector could provide such a graph for debugging.

If you consider a reference as an object with a delete operator rather than only a pointer with a free, the destructor can release references to children before releasing the reference itself.

It's not anecdotal. I have a fully working application server with its own memory management and object-oriented script language, which deals with thousands of objects, and I never spent more than 1h tracking a memory leak in the whole development cycle because I have a clear methodology to track them.

The script system is actually close to JavaScript, with a tree of objects on several layers, so I could make a generational GC very easily, but there are too many corner cases with multi-threads and plugins for me to really consider it over simple reference counting, which works for every single possible case with a very small code base.

I've been doing tons of experiments with different models of memory management.

And I also have much experience with GC languages such as AS3, the Android SDK and JavaScript, and there are always many instances where I wish there was manual memory management, because the garbage collector won't really release the memory when it 'should', and it's very often macro-managed, with the GC only releasing things once in a while, which is far from optimal if there is a need for lots of dynamic object creation inside a component that needs to run for a long time allocating lots of new objects.

And it's not even multi-threaded, and there are already these kinds of problems that are recurrent in every GC language out there.

@NodixBlockchain

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-346319285
Original Date: Nov 22, 2017, 5:09 AM CST


I’m trying to preempt another 1000+ post thread. We already had extensive discussions of your ideas, framework, perspective in the Concurrency and your “Saluations! :)” threads. We don’t need to repeat that again. I’m not trying to be dictatorial or unappreciative of discussion. I’m very interested right now in automated GC, for the reasons I’m explaining. I want to see if we can make it work optimally with the lower-level abstractions and resolve the pathological asymptotics issue.

@NodixBlockchain wrote:

If you consider a reference as an object with a delete operator rather than only a pointer with a free, the destructor can release references to children before releasing the reference itself.

I already told you that explicitly asking the programmer to remember to break cycles is off-topic (i.e. the destructor will not be automatically fired if there is a cyclic reference preventing the parent-most object's reference count from going to 0; see the sketch after the quote below):

This is off-topic. Explicit, unchecked (i.e. “unsafe”) manual mgmt is not what we are interested in here. The topic was about automatic (or compiler checked deterministic) mgmt. As you know, both @keean and I are significantly into compiler-assisted checking.
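
To make the cycle point concrete, a minimal sketch with hypothetical names (illustrative only):

```typescript
// Minimal sketch (hypothetical, not from the thread): manual reference
// counts on two objects that point at each other. The cycle keeps both
// counts at >= 1 forever, so neither "destructor" ever runs unless the
// programmer explicitly breaks the cycle (e.g. sets parent.child = null).
class Counted {
  refCount = 0;
  child: Counted | null = null;   // a counted reference to another object
  constructor(public name: string) {}
}

function retain(o: Counted): Counted { o.refCount++; return o; }
function release(o: Counted): void {
  if (--o.refCount === 0) {
    console.log("destructor runs for", o.name);
    if (o.child) release(o.child);       // destructor releases children
  }
}

const parent = retain(new Counted("parent"));   // count 1 (our local ref)
const child  = retain(new Counted("child"));    // count 1 (our local ref)
parent.child = retain(child);                   // child count 2
child.child  = retain(parent);                  // parent count 2 (the cycle)

release(parent);   // parent count 1 (still held by child)
release(child);    // child count 1 (still held by parent)
// Neither destructor ever runs: plain reference counting cannot reclaim
// the cycle, while a tracing collector would.
```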

Although you’ve deleted (or moved to your thread?) those numerous prior off-topic posts I asked you to (thank you), you want to repeat the same points of what I perceive to be off-topic noise w.r.t. the focus I want in this thread. I understand you think you have something important to say, but you apparently did not even pay attention or grok that I had already stated that it (i.e. the same point you’re repeating again) was/is off-topic.

@NodixBlockchain wrote:

but there are too many corner cases with multi-threads

We already discussed in the prior threads that (your and Rust’s paradigm of allowing/presuming) willy-nilly multithreading everywhere is not the paradigm I’m interested in. I’m looking at the single-threaded event model. I will be discussing GC and multithreading in this thread.

@NodixBlockchain wrote:

It's not anecdotal. I have a fully working application server with its own memory management and object-oriented script language, which deals with thousands of objects, and I never spent more than 1h tracking a memory leak in the whole development cycle because I have a clear methodology to track them.

How many times am I going to have to repeat myself that I am not interested in discussing further the anecdotal claims of your low-level framework. You’re apparently overlooking the more holistic point that:

  • We’re (or at least I’m) not interested in your manual “unsafe” (i.e. neither checked by the compiler nor enforced at runtime) paradigm. Been there, done that in the 1980s and early 1990s. In the late 1990s I adopted simplistic reference counting with C++ for CoolPage. Now I’m advancing further. IMO, you’re a dinosaur.

  • My reasons for not being interested are detailed in my prior response to @keean and @barrkel.

    Your preferences and experience with your framework are not indicative of the development costs and other trade-offs for the community-at-large with such a paradigm, i.e. your arguments are anecdotal. When you have a large community and a lot of research/surveys/metrics amongst many general-purpose applications refuting the exposition I have made based on more extensive sources, then we can discuss yours. No doubt that if you’re personally very dedicated and expert in hand-tuning with your framework specifically designed for the blockchain application you’re building, then you can probably get outstanding metrics as an anecdotal example application, but that isn’t provably (and IMO unlikely to be) relevant in the larger scheme of things w.r.t. general applications and general developer productivity, maintenance, stability, bugginess, etc. Again please refer to my prior response to @keean and @barrkel for my reasoning on why manual, human-error-prone management of resource factors is a computer science cul-de-sac. Whether my perspective is correct or not, afaics it’s the current focus of this thread. Even the owner of this repository @keean, who expressed some preference for compile-time deterministic memory mgmt, afaics doesn’t want to allow cyclic-reference memory leaks that require human intervention, as you’re apparently proposing.

    Additionally having your own methodology for your own handcrafted framework is not a valid comparison to what is possible but has not yet been so handcrafted for tracing GC (so as to be able to compare apples-to-apples for a diverse ecosystem of users and use cases), as I had already explained in a prior response as follows:

    In both cases, a sophisticated debugger and language could in theory be designed to set such conditional breakpoints on all pointer assignments (conditionally breaking only if the said shared object address is encountered), if such a facility is helpful for debugging memory leaks. And the tracing GC would have more information about the memory allocation graph.

If you’re interested in your paradigm in spite of what I perceive to be the convincing exposition in my prior posts, then please feel free of course to discuss in a different thread. I may occasionally read there. If you make your best arguments there (not in this thread) and if I see anything I agree with, then I will quote it over here (and invite/entice you to discuss here). Please do not pollute my thread (at least respect the one post for every 10 others request, or 6 if all the intervening posts are lengthy and infrequent) with what I currently perceive to be off-topic discussion. I do not want an interactive chat-style discussion in this thread. We can chat back-and-forth realtime in private or another thread. I want focused, carefully thought out discussion in this thread that is easy to re-read and condensed enough for others to follow. Because this is my thread wherein I am trying to sort out what language paradigms I need, and this is a very serious issue I am in a rush to complete, I do not want the distraction of what I perceive to be a boisterous dinosaur (repetitively arguing the same points over and over) who is entirely incorrect.

Essentially you’re trying to route me away from extensive sources to your pet project. That is very myopic.

You’re apparently a prolific and very productive programmer (unlike my pitiful self reduced to rubble because still apparently suffering from a perhaps resistant strain of Tuberculosis). You and I aren’t able to collaborate much because you and I are ostensibly headed in different directions w.r.t. programming paradigms, while maintaining very low overhead on throughput.

@NodixBlockchain wrote:

And I also have much experience with GC languages such as AS3, the Android SDK and JavaScript, and there are always many instances where I wish there was manual memory management, because the garbage collector won't really release the memory when it 'should', and it's very often macro-managed, with the GC only releasing things once in a while, which is far from optimal if there is a need for lots of dynamic object creation inside a component that needs to run for a long time allocating lots of new objects.

It “should” only do so in a balance of throughput, asymptotics, pause latency, and (time & space) locality. If you’re preferring priorities of time locality (immediacy), then not only are you attempting the impossible of defying the non-determinism of memory mgmt (i.e. immediacy is irrelevant when the period of allocation is non-deterministic), you’re also not optimizing the entire equation for memory mgmt. This sort of myopia might not manifest as measured problems in your handcrafted project due to the confirmation bias of handcrafted design decisions, but may in a more diverse and general ecosystem of use cases and developers. Your exposition has the hallmarks of an amateur lacking academic study.

You’re referring to automated GC implementations from a decade or more ago. Even Google’s JavaScript V8’s GC is only incremental, generational with a standard pathological asymptotics mark-sweep for the older generation.

I wrote already in this thread that new research has drastically improved this issue:

What he fails to realise is that 2-10% is only true when you have enough memory.

It is true that the asymptotic computational complexity (i.e. performance) of tracing GC is pathological both as live memory usage approaches/exceeds the bound of physical memory and as total memory consumption increases for computers with more GBs (or TBs) of DRAM. That is why I cited that research for footnote16 in my prior post—which btw my independently derived idea seems to be roughly analogous to—for removing the pathological asymptotics by leveraging an optimized form of RC for the older generations. I will write my detailed post about this soon.

And that is why I allude in my response to @barrkel to the time vs. space trade-off approaching parity between automated GC and manual/stack/compile-time deterministic paradigms for memory management. The reason that the deterministic paradigms you espouse will not exceed the space vs. time trade-offs for the throughput (nor maximum latency pauses) of state-of-the-art automated GC in general is because I repeat that memory allocation mgmt is inherently non-deterministic (thus requires runtime management)!

You make me repeat myself because you don’t assimilate all the details I write. Presumably you’re so glued to your mental model of programming that you don’t notice subtleties that paradigm-shift the presumptions.


@keean wrote:

The reference count needs to be in the object referenced, not the pointer, so it won't be in the same cache line as the pointer. When you have multiple pointers to the same object they all need to increment/decrement the same counter.

Agreed, but note in my prior post with the many footnotes about GC, I cited some research (Weighted Reference Counting) that places the count in the pointers for allocation and only has to decrement the shared count on deallocation. Also I cited research about employing reference counting with only a single bit stored in the pointer, in a reference-counting generation scheme. Also in an earlier post I suggested pulling all the counts closer together in contiguous locality to minimize cache-line and virtual-page faults. There’s also the concept of buffering the increments and decrements to amortize the cost.
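
For readers unfamiliar with weighted reference counting, here is a minimal sketch of the scheme as described above (hypothetical names; illustrative only, not the cited paper’s exact algorithm):

```typescript
// Sketch of weighted reference counting. Each pointer carries a weight and
// the object stores only the total. Copying a pointer splits its weight in
// half and never touches the object; dropping a pointer subtracts its weight
// from the object's total, which reaches zero exactly when the last pointer
// is gone.
const INITIAL_WEIGHT = 1 << 16;

class WeightedBox<T> {
  totalWeight = INITIAL_WEIGHT;
  constructor(public value: T, public onFree: (v: T) => void) {}
}

class WeightedRef<T> {
  constructor(private box: WeightedBox<T>, private weight = INITIAL_WEIGHT) {}
  get(): T { return this.box.value; }

  clone(): WeightedRef<T> {
    if (this.weight === 1) {
      // Out of weight: top up the shared total (one touch of the object).
      this.box.totalWeight += INITIAL_WEIGHT;
      this.weight += INITIAL_WEIGHT;
    }
    const half = this.weight >> 1;        // split weight between the copies
    this.weight -= half;
    return new WeightedRef(this.box, half);
  }

  drop(): void {
    this.box.totalWeight -= this.weight;  // only write to the object on drop
    if (this.box.totalWeight === 0) this.box.onFree(this.box.value);
  }
}
```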

@NodixBlockchain

Original Author: @NodixBlockchain
Original URL: https:/keean/zenscript/issues/35#issuecomment-346323307
Original Date: Nov 22, 2017, 5:27 AM CST


The thing is, my paradigm is the basis for a high-level language, which works fully with JS/JSON and can generate binary data for WebGL apps or other kinds of dynamic applications.

Even Node.js or a JavaScript VM uses this kind of code internally. You can't have any kind of high-level language without this kind of code underlying it.

And for the moment it gives more features than JavaScript, as it works in a multi-threaded environment and memory can be freed safely when it's not used directly.

The script language can already process HTML data, including form variables and query variables, and interact 100% with JS apps, and I think it's still cleaner than PHP.

For example: https:/NodixBlockchain/nodix/blob/master/export/web/nodix.site#L555

It can parse POST variables, including files, and generate JS variables and code with live variables from the node, etc.

The low-level mechanism is only to be seen as a minimalistic VM to run the script language, which either handles network events or generates HTML pages with dynamic data.

Any kind of advanced high-level language will need this sort of code to manage memory, references, multi-threading, etc.

The lexer/VM can be very simple because the low-level code already has advanced features to deal with objects, memory, scope and evaluation of object data.

@NodixBlockchain

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-346336143
Original Date: Nov 22, 2017, 6:27 AM CST


@NodixBlockchain, you’re repeating the same points I already discussed with you in the Concurrency and your “Saluations! :)” threads. I refuse to repeat my explanations again to you as to why what you think is applicable, is not applicable to what I am doing. It’s not my job to repeat myself over and over, just because you think I didn’t already refute you before in those other threads.

Please stop. Please.

You can't have any kind of high-level language without this kind of code underlying it.

Any kind of advanced high-level language will need this sort of code to manage memory, references, multi-threading, etc.

Incorrect. And I do not want to explain why again.

And for the moment it gives more features than JavaScript, as it works in a multi-threaded environment

Your paradigm for multithreading is not applicable to the direction I’m headed, as I will repeat what I wrote before quoted as follows:

but there are too many corner cases with multi-threads

We already discussed in the prior threads that (your and Rust’s paradigm of allowing/presuming) willy-nilly multithreading everywhere is not the paradigm I’m interested in. I’m looking at the single-threaded event model. I will be discussing GC and multithreading in this thread.

I’m not trying to be a jerk. I have clear reasons for the focus that I’m pursuing. I hope you also understand my available time is very limited (and no that is not a valid reason that I should employ your framework).

Sometimes I ponder if the trolls at BCT paid you to come and disrupt me. I’m having enough difficulty as it is making forward progress, and I need to be taken off on repetitive tangents like I need another hole in my head in addition to the one I already have from a hammer.

@NodixBlockchain

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-346367257
Original Date: Nov 22, 2017, 8:34 AM CST


@keean wrote:

Interestingly I am about to port a bunch of asm.js to TypeScript, as the memory allocation for asm.js / webasm does not work well with the main JS garbage collector, and allocating the heap for the webasm is causing fragmentation of the JS heap. In the end it turns out that the better memory usage of keeping all the memory managed by the GC is better for application stability than the small performance gain of having the code in asm.js.

Edit: it's probably worth pointing out that this application uses a lot of memory, and has some periods when it is performing rapid allocations. This leads to memory usage for a single tab of about 1GB (even though memory buffers are 16MB, so with manual allocation it would get nowhere near that level). Frequently when usage gets above 1GB the web-browser crashes, even though most of that memory is probably free and just waiting to be collected.

Agreed afaik you appear to be making a valid and important point, which dovetails with my initial peek into analyses about GC for proper integration with the low-level coding.

The problem you’re experiencing perhaps has something to do with the fact that most browsers are limited to presuming a least common denominator of a 32-bit virtual address space and this gets very complicated as to how they split this up between processes. They also have to protect against writing outside of bounds efficiently. I dug a little bit into the issues (c.f. the last 2 links in quoted text below) but did not completely climb down the rabbit hole.

@shelby3 wrote:

For this last quoted issue, I am contemplating that the ability to employ the unused bits (c.f. also) of 64-bit pointers (64-bit also provides a huge virtual memory space, c.f. also) would enable storing an index into a contiguous array which has the reference counts as array elements, so they would be in contiguous pages.

I had mentioned to you in private discussion my idea about storing older-generation objects in memory pages of same-sized objects in order to minimize fragmentation within a 64-bit virtual address space. This would rely on less frequent compaction pauses for old-generation objects, and would reuse old-generation slots instead of waiting to compact them (i.e. not primarily a copy collector).
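
A minimal sketch of that size-class page idea, with hypothetical names (illustrative only, not the actual design):

```typescript
// Sketch: each page holds slots of a single size class over one ArrayBuffer,
// so freed slots are reused in place and allocations of different sizes
// never interleave on a page (no internal fragmentation from mixed sizes).
class SizeClassPage {
  private buffer: ArrayBuffer;
  private freeList: number[] = [];   // indices of previously freed slots
  private nextFresh = 0;             // next never-used slot

  constructor(public slotSize: number, public slotCount: number) {
    this.buffer = new ArrayBuffer(slotSize * slotCount);
  }

  alloc(): number | null {
    if (this.freeList.length > 0) return this.freeList.pop()!;  // reuse a hole
    if (this.nextFresh < this.slotCount) return this.nextFresh++;
    return null;                                                // page is full
  }

  view(slot: number): DataView {     // access the slot's bytes
    return new DataView(this.buffer, slot * this.slotSize, this.slotSize);
  }

  free(slot: number): void {
    this.freeList.push(slot);        // slot becomes reusable; no compaction
  }
}

// Usage: 16-byte old-generation objects all live in 16-byte pages,
// 64-byte objects in 64-byte pages, and so on.
const page16 = new SizeClassPage(16, 4096);
const slot = page16.alloc()!;
page16.view(slot).setUint32(0, 42);
page16.free(slot);
```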

Also note that very large or small objects are typically handled differently by GC algorithms, so there may be an issue there w.r.t. the 16MB objects you’re allocating.

I agree that the memory allocation seems to have corner cases around the interaction of the GC in JS and the allocation of the ArrayBuffer for ASM.js/WASM. Also of course objects have to be copied from one to the other; they’re not interoperable.

However, as I pointed out in prior posts, just because automated GC has a corner issue with a particular scenario, doesn’t enable the presumption that manual memory mgmt is a panacea in terms of the holistic equation of memory mgmt optimization. You gain some control in one area and lose something in other axis of the optimization space (e.g. more human-error prone, less throughput, or worse asymptotics, etc).

@shelby wrote:

It “should” only do so in a balance of throughput, asymptotics, pause latency, and (time & space) locality. If you’re preferring priorities of time locality (immediacy), then not only are you attempting the impossible of defying the non-determinism of memory mgmt (i.e. immediacy is irrelevant when the period of allocation is non-deterministic), you’re also not optimizing the entire equation for memory mgmt. This sort of myopia might not manifest as measured problems in your handcrafted project due to the confirmation bias of handcrafted design decisions, but may in a more diverse and general ecosystem of use cases and developers. Your exposition has the hallmarks of an amateur lacking academic study.

@shelby3 wrote:

Note though that the metric you propose (although significantly representative given all other factors equivalent) doesn’t account for all impacts of the GC scheme chosen, such as cache and virtual paging locality (contiguity) to minimize thrashing which can significantly impact performance. Note however, that deterministic compile-time (manual explicit or for stack argument conventions and such thus assisted partially implicit) memory mgmt doesn’t necessarily achieve a panacea for this ancillary factor and could actually degrade it because generational and copy GC increase locality.

I’m contemplating whether I can design my own programming language which models what I want, will transpile tolerably to JS/TypeScript (or something else) easily enough to allow me to use it now, and then rewrite the VM and compiler later to be optimal. This is what I’m analysing now and the reason I’m doing research on GC.

@keean wrote:

Following this argument to its logical conclusion, the better memory usage of manually managed memory for the whole application would be better overall.

How do you arrive at this conclusion? Have the posts I have made since you wrote that, changed your mind?

Afaics, the optimum solution is to better design the GC and VM with the low-level coding in mind. JavaScript was designed by Brendan Eich in 10 days. I presume that since then backward compatibility demands have prevented any holistic overhaul.

Also I’m prepared to deprecate 32-bit OSes as a viable target. I’m future-directed because we’re not going to get something popularized within a couple of years at best anyway. Afaik, even mobile is moving towards 64-bit rapidly. Microsoft Windoze may be a bit resistant, but 64-bit Home is available for those who care when they buy a machine. And anyone still running that spyware without dual booting or running Linux in a VirtualBox is really behind the curve.

@NodixBlockchain

Original Author: @NodixBlockchain
Original URL: https:/keean/zenscript/issues/35#issuecomment-346379971
Original Date: Nov 22, 2017, 9:15 AM CST


Sometimes I ponder if the trolls at BCT paid you to come and disrupt me.

Just have to comment on this, you're paranoiac dude lol I don't have anything to do with anyone on BCT lol

@NodixBlockchain

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-346390097
Original Date: Nov 22, 2017, 9:47 AM CST


How do you arrive at this conclusion? Have the posts I have made since you wrote that, changed your mind?

Because I am good enough to get the manual memory management right, and it will use a lot less memory, which will result in less fragmentation and fewer crashes, and therefore a better user experience.

I am still convinced that static analysis and region based memory management is a sweet spot for high performance languages.

I think GC is useful for simple coding, but there are some big problems:

  1. it encourages object thrashing
  2. it suffers from fragmentation
  3. it causes user-interface jank due to the 'stop-the-world' nature of GC.

If you have a system with all three of the above problems, there is nothing you can do as a programmer within the language to solve the problem.

You can however solve problems (2) and (3) if you can avoid (1) by limiting object creation. If you can force pre-allocation of resources, and then use effects to restrict new object creation, so that the compiler will prevent you using idioms that thrash the GC then it could be acceptable. The problem is that you need all the library APIs to respect this too, and provide a way for the user to pass the result buffer into the function/procedure. As we know if you give people multiple ways to solve a problem they will probably pick the wrong way.
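
As a concrete illustration of the buffer-passing style described above (a hedged sketch with made-up names, not an API from the thread):

```typescript
// Allocating style: every call creates a fresh array for the GC to chase.
function scaleAlloc(src: Float64Array, k: number): Float64Array {
  const out = new Float64Array(src.length);
  for (let i = 0; i < src.length; i++) out[i] = src[i] * k;
  return out;
}

// Buffer-passing style: the caller pre-allocates once and reuses it, so a
// hot loop performs no allocation at all and cannot thrash the GC.
function scaleInto(src: Float64Array, k: number, out: Float64Array): void {
  for (let i = 0; i < src.length; i++) out[i] = src[i] * k;
}

const src = new Float64Array(1024);
const out = new Float64Array(1024);      // allocated once, up front
for (let frame = 0; frame < 60; frame++) {
  scaleInto(src, 1.5, out);              // no per-frame garbage
}
```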

This is why I am interested in a language that forces people to do things the right way, so that then library APIs will always be written the correct way. Of course this might not make the language very popular, but I think it is an interesting approach to explore.

@NodixBlockchain

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-346426095
Original Date: Nov 22, 2017, 11:49 AM CST


@NodixBlockchain wrote:

Sometimes I ponder if the trolls at BCT paid you to come and disrupt me.

Just have to comment on this, you're paranoiac dude lol I don't have anything to do with anyone on BCT lol

It’s half-joke and half observing that the trolling of BCT seems to follow me over here. However, it must be stated that you’re not insincere. You truly believe (or believed) your work is applicable to what I want (or to the balanced presentation for other readers/participants here). I just don’t know how much verbiage I have to put out in order to get you to understand my side. Seems like perhaps you understand now. Again I am very much willing to read anything you write in the thread you created on @keean’s repository, if I have time. But I hope you will understand my point in this thread. I suggest re-reading the thread, as I made so many edits to my posts over the past 8+ hours, including my prior responses to you in this thread.

Again thank you for cooperating on my request. That was very honorable. I’ll try to be mutually respectful in your thread.


@keean wrote:

Because I am good enough to get the manual memory management right, and it will use a lot less memory, which will result in less fragmentation and fewer crashes, and therefore a better user experience.

That appears to ignore the arguments @barrkel and I have made about the 80/20 rule of nature. Those who ignore that rule perpetually fail in business and life. Of course you can do anything with infinite programming man-hour resources (assuming it doesn’t end up in a maintenance clusterfucked hairball), but that is not a valid argument.

That appears to be the hubris of anecdotal experience not backed by a sound analysis of the factors involved: which I will revisit below since apparently my prior writings in this thread are being ignored.

No one should dictate to you which programming language you should use. Likewise, you can’t dictate to the rest of the world what it prefers in terms of trade-offs in costs and outcomes for choice of paradigms in the programming languages. Java, JavaScript, Python, PHP, and Haskell are very popular programming languages which all use tracing GC (and apparently not even the state-of-the-art GC):

@shelby3 wrote:

Even Google’s JavaScript V8’s GC is only incremental, generational with a standard pathological asymptotics mark-sweep for the older generation.

C/C++ are also popular, but afaik they do not match the aggregate market share of the others. As well, C/C++ these days is typically limited to programs which require low-level control.

My quest in @keean’s repository has been to get the advantage of GC and perhaps some other aspects of FP such as first-class functions and closures in a simple language that also can do some low-level things such as unboxed data structures. The goal is to further erode the use cases where C/C++ would be chosen, to relegate that clusterfucked C++ language tsuris (and perhaps Rust also, but the jury is still out on that) more and more to the extreme fringe of use cases. For example, it is ridiculous that I must employ a cumbersome Node.js Buffer library to build an unboxed data structure.

Also I think you may be underestimating the trade-offs (at many different levels such as development time, maintenance, worst case failure modes, scalability, etc) of avoiding the assistance of runtime algorithms. Essentially since non-deterministic resource mgmt is an invariant of programming, you’ll end up writing a runtime GC algorithm to handle it, so I can’t see how your stance makes any sense whatsoever:

@shelby3 wrote:

Yes the choice is between static or runtime checking. The best is to use the stack, so there is no need for checking, but that does not work for all scenarios.

Generational GC is very efficient nearly as much as the stack (as I explained) and I am going to propose an algorithm that I am thinking might make it even more efficient.

What he fails to realise is that 2-10% is only true when you have enough memory.

It is true that the asymptotic computational complexity (i.e. performance) of tracing GC is pathological both as live memory usage approaches/exceeds the bound of physical memory and as total memory consumption increases for computers with more GBs (or TBs) of DRAM. That is why I cited that research for footnote16 in my prior post—which btw my independently derived idea seems to be roughly analogous to—for removing the pathological asymptotics by leveraging an optimized form of RC for the older generations. I will write my detailed post about this soon.

And that is why I allude in my response to @barrkel to the time vs. space trade-off approaching parity between automated GC and manual/stack/compile-time deterministic paradigms for memory management. The reason that the deterministic paradigms you espouse will not exceed the space vs. time trade-offs for the throughput (nor maximum latency pauses) of state-of-the-art automated GC in general is because I repeat that memory allocation mgmt is inherently non-deterministic (thus requires runtime management)!

Also there is research1 (which I haven’t read yet) which seems to indicate that as the number of (perhaps also non-deterministic) factors (e.g. power and energy management issues) that need to be managed increases, then automatic algorithms may become the most plausible (reasonable cost, time-to-market, stability, maintenance) for most applications with hand-tuning being further, further relegated to the esoteric that demand extreme tuning (e.g. the NASA space shuttle).

[…]

1 T. Cao, S. M. Blackburn, T. Gao, and K. S. McKinley, "The Yin and Yang of Power and Performance for Asymmetric Hardware and Managed Software," in ISCA ‘12: The 39th International Symposium on Computer Architecture, 2012. Originally found on Steve Blackburn’s website. Also T. Cao’s master’s thesis prior art.

@barrkel wrote:

The argument in favour of GC isn't speed of execution, it's speed of
development at very low costs - an extra 2 to 10% or so. Speed of
development in memory safe languages is significantly higher

+1.

Also it is about consistency of memory mgmt, i.e. avoiding the pitfalls of the average programmer or rushed development team.

Yes, GC scales because it trades space for time. If you're running on a
small embedded device, it can be a poor tradeoff; if you're trying to
maximize compute throughput across multiple machines, it can cost real
dollars. But these are extremes, and most people aren't on the extremes.

Well yes, but as I mentioned above in my reply to @keean, and I presume you’re also alluding to, that the differences in space vs. time tradeoffs between automatic and manual memory mgmt are getting closer to low double digit (or even single digit) percentages (i.e. “extremes” of optimization) and thus heavily favoring automatic GC for most use cases. Note I’m not conflating this metric with the 2 - 10% figure for throughput differences that you asserted.

In my prior post, I had alluded to how attempting to have the compiler check memory mgmt is forcing it into a deterministic, total order, which will necessarily cause the programmer to violate the compiler checks and leads to bugs (unless perhaps that compile-time paradigm is somehow limited to only deterministic instances, but from my study thus far it appears that the cross-pollution/interopt between young and older generation allocation makes non-deterministic, runtime automated algorithm interop with a deterministic young generation have worse characteristics than a runtime, automated generational non-deterministic collector for the younger generation):

To even attempt to apply these compile-time deterministic memory management holistically requires a complex typing tsuris akin to Rust’s total ordering of borrow lifetimes with lifetime annotation parameters for transitivity of borrows from inputs to outputs, and of course the programmer must violate the invariants and create unsafe code in order to handle non-deterministic (i.e. runtime randomized) memory mgmt, which defeats the entire point as then the total order is no longer checked and the unsafety can leak back into the code the compiler thinks is safe.

[…]

Total ordering compile-time paradigms (including attempting to type check everything) are afaics brittle paradigms that break because the programmer must necessarily break out of them because the entropy of the universe is not bounded. I commented on this previously numerous times in these issues threads. The non-determinism (i.e. unbounded randomness and entropy) of the universe (i.e. the real world in I/O) can’t be statically modeled.


@keean wrote:

I am still convinced that static analysis and region based memory management is a sweet spot for high performance languages.

For the highest quality software (i.e. mission critical stuff) perhaps yes (although I think perhaps algorithmically managed non-determinism will become more reliable than handcrafted), but I see already from my study of the research that gradually (even suddenly) state-of-the-art GC will continuously improve and cannibalize most of it until you’re relegated to programming languages that most people have no interest in.1 You always knew I was interested in mainstream programming languages. So if you remain glued to that stance then we're likely headed in different directions. I thought I was going to be supporting static memory allocation analysis for Lucid, until I realized that generational allocation can be as performant as compile-time stack based for the frequent young objects (and I have a new algorithm in mind to make it more so by eliminating most of the copying, i.e. “sudden” may be about to happen in a few hours).

Btw, I’m known for writing prophetic predictions. There’s some hubris back at ya. kissing_heart

I think GC is useful for simple coding, but there are some big problems:

it encourages object thrashing
it suffers from fragmentation
it causes user-interface jank due to the 'stop-the-world' nature of GC.

This has all been solved as of today. I will be writing down the details shortly.

This is why I am interested in a language that forces people to do things the right way, so that then library APIs will always be written the correct way. Of course this might not make the language very popular, but I think it is an interesting approach to explore.

You mean @keean’s way. I will prove to you that is not the right way w.r.t. this issue, although I do believe you have many important insights into programming.

1 However, I think it may be correct to allow some form of manual control over memory allocation for programmers to work around corner case issues with the automated GC.

@NodixBlockchain

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-346441236
Original Date: Nov 22, 2017, 12:48 PM CST


If we are targeting JavaScript, this is irrelevant, as we have to live with the web-browsers GC. Mozilla's work with their Rust browser has shown you get better performance by bringing the DOM into the JavaScript GC, rather than having it reference counted.

I think if you have GC the best model is to have everything managed by the GC, otherwise there seem to be more corner cases. For example mixing JavaScript and WebAsm seems to be bad for memory performance.

So either we compile to web-assembly and manage the memory ourselves, or we should use the web-browser's GC. If we have any interaction with GC-managed systems we are probably better off using the web-browser's GC.

However when compiling to native, it would be nice to have a static analysis so you don't need a runtime for the binary. I would really like it if a program to add two numbers consisted of a couple of loads, an add, and a store, and nothing else.

As stated above, the benefit of GC is easy development, and freedom from some errors. Unfortunately people don't seem to realise you can still leak memory with GC, it eliminates some simple error cases, but you can still write programs that leak memory like a sieve until they crash.

In my experience it is much harder to debug memory leaks with GC than it is with manual allocation, because it is invisible to the programmer. So GC makes some easy coding easier, and some hard coding harder, but the easy cases are common and the hard ones rarer.

@NodixBlockchain

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-346452322
Original Date: Nov 22, 2017, 1:33 PM CST


@keean wrote:

If we are targeting JavaScript, this is irrelevant, as we have to live with the web-browsers GC.

I’m considering abandoning JS because of its crappy GC which doesn’t interop with low-level programming and appears to have poor asymptotics and some corner cases around fragmentation. Working on the detailed analyses now.

Originally I was advocating targeting JS because I thought it could become the mainstream VM, but there appears to be severe fundamental flaws, which is ostensibly why WASM has appeared on the scene. Java’s VM is also flawed, such as the lack of native support for unsigned data types.

I think if you have GC the best model is to have everything managed by the GC, otherwise there seem to be more corner cases. For example mixing JavaScript and WebAsm seems to be bad for memory performance.

That seems to agree with what I’ve learned thus far.

So either we compile to web-assembly and manage the memory ourselves

This is the direction I’m more strongly considering now based on the aforementioned realization. But I would want to use an automated GC running on top of either WASM or perhaps native.

If we have any interaction with GC-managed systems we are probably better off using the web-browser's GC.

No. JS’s GC sucks (doesn’t interop with low-level and isn’t even a state-of-the-art design). Must be replaced.

Web browsers are dinosaurs being disintermediated by apps anyway. That opens the door to replacing JS with WASM as the universal VM, unless WASM is poorly designed as well, in which case something else might rise up. WASM apparently doesn’t support a 64-bit virtual address space yet.

However when compiling to native, it would be nice to have a static analysis so you don't need a runtime for the binary.

That has no relevance in the mass market use cases I’m targeting. I’m not targeting for example the most stringent, realtime, embedded market that Rust/C++ is.

As stated above, the benefit of GC is easy development, and freedom from some errors.

And avoiding maintenance hairballs, and many other things I have already explained. You’re understating the massive, pervasive benefits.

Unfortunately people don't seem to realise you can still leak memory with GC; it eliminates some simple error cases, but you can still write programs that leak memory like a sieve until they crash.

Irrelevant. “Programming can have bugs” is not an argument.

GC addresses a fundamental and massive problem in programming, notwithstanding that it does not eliminate all programming bugs.

In my experience it is much harder to debug memory leaks with GC than it is with manual allocation, because the memory management is invisible to the programmer.

Create better analysis tools then. Refer to the response I already made to @NodixBlockchain on that issue. I don’t think your assumption will remain true. The state-of-the-art GC algorithm employs RC for the older-generation objects anyway. Everything you can analyse with manual memory management can be analysed here too; we just need the right tools.

It’s a confirmation bias to find excuses to argue for an inferior programming paradigm.

EDIT: remember we have a complexity budget. I would rather expend that static typing budget on more useful typing than waste it on what a state-of-the-art GC can handle well. TypeScript demonstrates that a modicum of static typing can be very popular. Simplification is nearly always a win for popularity.

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647547069
Original Date: Jun 22, 2020, 9:15 AM CDT


@keean wrote:

There is no semantic requirement to keep the handle after you have called 'finalize'... What are you keeping a closed handle for? It does not make any sense.

I replied before he wrote it:

@keean might argue that he can refactor to remove that issue above but he will not be able to in every case without removing the degrees-of-freedom in general purpose programming, or at least not without needlessly syntactically obfuscating the essential logic of the code. And any refactoring is essentially being explicit, which is what I have advocated so would be agreeing with me (other than trying to argue that explicitness and ARC lifetime congruence can be made semantically isomorphic, but... see next example...).


However those inner braces '{}' are unnecessary because the compiler will call the destructor of 'handle' after the last use in a function automatically (this would be part of the language specification).

That breaks the requirement that cleanup sometimes needs to be ordered, and the compiler has no way of knowing the order without explicit finalize() calls. You can argue for explicitly setting references to null to cause the reference count to go to zero so that the destructor is invoked, and then you are just agreeing with me about the need to be explicit. But by conflating ARC and resource lifetimes you still haven’t resolved the encapsulation issue, in that you may have multiple owners of (references to) some objects, each of which may share ownership of (references to) some resource (whether intentionally by the programmer or inadvertently due to making it so easy for the programmer to conflate concerns by conflating ARC with resource lifetimes in your PL design), and those owners may have ARC lifetimes related to diverse functionality which is not solely dependent on the functionality provided by the said resource. So how do you propose to be explicit about releasing the said resource earlier than the said ARC lifetimes?
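
As an aside, a minimal Rust sketch of the order-sensitive cleanup being argued about here (the Cache and Journal types are hypothetical stand-ins): the default scope-exit order is reverse declaration order, and when that default is not the order the resources require, the programmer has to say so explicitly.

struct Journal;   // must be closed only after the cache has flushed into it
struct Cache;     // must be flushed/closed first

impl Drop for Journal {
    fn drop(&mut self) { println!("journal closed"); }
}
impl Drop for Cache {
    fn drop(&mut self) { println!("cache flushed"); }
}

fn main() {
    let journal = Journal;
    let cache = Cache;

    // ... use both ...

    // Being explicit documents the required order, and it survives refactors
    // that would otherwise silently change the implicit scope-exit order.
    drop(cache);
    drop(journal);
}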

@NodixBlockchain
Copy link
Author

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-647552704
Original Date: Jun 22, 2020, 9:25 AM CDT


@shelby3

might argue that he can refactor to remove that issue above but he will not be able to in every case without removing the degrees-of-freedom in general purpose programming,

Actually I would argue that you can always refactor, and that refactoring will always make the program's purpose clearer and easier to read.

Weak references should only be used for 'cache' data that may be removed at any time.

Now you could write a "file-handle" cache that would store an open file handle and periodically purge all open handles (a bit like your code above), but then would re-open the handle if you tried to access it after a close.

In any case, the refactor for the second example above is to store a weak reference to the RAII file handle, and then delete the file handle leaving the weak reference, so the type would be like this:

var handle : Weak<File>

This is the value you store in your records. Weak would provide a 'delete' function that deletes the value stored in it, which would call the destructor on the RAII file handle.
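
A rough Rust sketch of the file-handle cache idea mentioned above (all names are illustrative, not a real API): handles are opened lazily, can all be purged at any time, and are transparently re-opened on the next access.

use std::collections::HashMap;
use std::fs::File;
use std::io;

struct FileCache {
    open: HashMap<String, File>,
}

impl FileCache {
    fn new() -> Self {
        FileCache { open: HashMap::new() }
    }

    // Returns an open handle, re-opening the file if it was purged earlier.
    fn get(&mut self, path: &str) -> io::Result<&mut File> {
        if !self.open.contains_key(path) {
            let f = File::open(path)?;
            self.open.insert(path.to_string(), f);
        }
        Ok(self.open.get_mut(path).unwrap())
    }

    // Periodic purge: dropping the File values closes the OS handles.
    fn purge(&mut self) {
        self.open.clear();
    }
}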

@NodixBlockchain
Copy link
Author

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-647556315
Original Date: Jun 22, 2020, 9:31 AM CDT


@shelby3

function bar(Record record2, Data data) {
   Record record = new Record();
   record.handle = new Weak<RecordHandle>();
   record.callbacks.add(record2.callback);
   record2.handle = record.handle;
   data.record = record2;
   bar2(record); 
   // do something with `record.handle`
   record.handle.delete(); // destroy contents of weak reference.
   delete record.handle; 
   someAsyncFunction();
}

Fixed :-)

@NodixBlockchain
Copy link
Author

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-647557073
Original Date: Jun 22, 2020, 9:32 AM CDT


@shelby3 It's much better to make the 'Weak' references explicit; otherwise you have no idea whether you need to test if the contents exist, what data can disappear, or what data/resources you can rely on.
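
For what the "test if the contents exist" obligation looks like in practice, here is a minimal sketch using Rust's standard Rc/Weak (an analogue only, not the language being proposed here): every dereference of a weak reference must go through a check that can fail.

use std::rc::{Rc, Weak};

fn main() {
    let strong: Rc<String> = Rc::new("config".to_string());
    let weak: Weak<String> = Rc::downgrade(&strong);

    // While a strong reference exists, upgrade() succeeds.
    if let Some(value) = weak.upgrade() {
        println!("still alive: {}", value);
    }

    drop(strong); // last strong reference gone; the String is freed

    // Every later use must check again, because the data may have disappeared.
    match weak.upgrade() {
        Some(value) => println!("unexpectedly alive: {}", value),
        None => println!("gone: the caller must handle the missing case"),
    }
}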

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647562558
Original Date: Jun 22, 2020, 9:42 AM CDT


Inserted:

However those inner braces '{}' are unnecessary because the compiler will call the destructor of 'handle' after the last use in a function automatically (this would be part of the language specification).

That breaks the requirement that cleanup sometimes needs to be ordered, and the compiler has no way of knowing the order without explicit finalize() calls. You can argue for explicitly setting references to null to cause the reference count to go to zero so that the destructor is invoked, and then you are just agreeing with me about the need to be explicit. But by conflating ARC and resource lifetimes you still haven’t resolved the encapsulation issue, in that you may have multiple owners of (references to) some objects, each of which may share ownership of (references to) some resource (whether intentionally by the programmer or inadvertently due to making it so easy for the programmer to conflate concerns by conflating ARC with resource lifetimes in your PL design), and those owners may have ARC lifetimes related to diverse functionality which is not solely dependent on the functionality provided by the said resource. So how do you propose to be explicit about releasing the said resource earlier than the said ARC lifetimes?

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647565361
Original Date: Jun 22, 2020, 9:46 AM CDT


So after all that @keean has now effectively agreed I was correct from the start, lol. (Although he may not realize it, lol)

@keean so you’ve adopted strong and weak references with a paradigm for being explicit about finalization. You have regurgitated what I claimed:

As for punting to strong and weak finalizer references (not to be conflated with strong and weak references), this is a semantic necessity of dynamic resolution of resource lifetimes which can’t be statically analyzed. And the fact that semantic resource lifetimes are not programmatic ARC lifetimes even if you attempt to force them to be.

@NodixBlockchain
Copy link
Author

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-647569317
Original Date: Jun 22, 2020, 9:53 AM CDT


@shelby3 You said RAII couldn't do 'X'... I just tried to understand what 'X' was, and when I did I presented the solution (explicit Weak references). So we have shown that RAII does not have the problems you were claiming.

My point that resource lifetimes are isomorphic to handle lifetimes still stands, because the weak reference lets you destroy the handle.

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647571922
Original Date: Jun 22, 2020, 9:57 AM CDT


@keean

My point that resource lifetimes are isomorphic to handle lifetimes still stands, because the weak reference lets you destroy the handle.

Nope. Firstly because a weak reference does not let you relinquish the resource — only the strong reference can relinquish it. Also because by conflating concerns you will force inner encapsulated concerns to become propagated to outer separate concerns:

But by conflating ARC and resource lifetimes you still haven’t resolved the encapsulation issue, in that you may have multiple owners of (references to) some objects, each of which may share ownership of (references to) some resource (whether intentionally by the programmer or inadvertently due to making it so easy for the programmer to conflate concerns by conflating ARC with resource lifetimes in your PL design), and those owners may have ARC lifetimes related to diverse functionality which is not solely dependent on the functionality provided by the said resource. So how do you propose to be explicit about releasing the said resource earlier than the said ARC lifetimes?

IOW, the by-default release due to ARC ownership is presenting to the programmer an assumption of isomorphic lifetime between ARC and resources, which causes the programmer to become lazy about thinking about optimal resource lifetimes. And when the programmer does decide to think about it, they realize the semantics have nothing to do with the ARC lifetime in these cases and thus they use explicit mechanisms to override ARC. So ARC was the wrong paradigm to start with; it puts the programmer in the wrong mental model. Even the cases where you said the programmer could use explicit braced blocks or the compiler could automatically infer are a conflation of the fact that resource lifetimes and ARC lifetimes are not isomorphic (i.e. I pointed out that the automatic inference case can’t choose between an ambiguous order when there’s more than one expiring in the same inferred scope, thus an unprincipled paradigm with heuristic outcomes).

Also a non-ARC, GC’ed language can’t intermix ARC’ed resource handles into non-ARC, GC’ed data structures — you create a What Color Is Your Function bifurcation clusterfuck.

@NodixBlockchain
Copy link
Author

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-647576794
Original Date: Jun 22, 2020, 10:05 AM CDT


@shelby3

Nope. Because by conflating concerns you will force inner encapsulated concerns to become propagated to outer separate concerns:

Explicit weak references are better than implicit weak references. How will you tell if a reference is weak or not? If all references are weak, then you must test the reference is still valid before every dereference. That sounds like a lot of boilerplate.

Also a non-ARC, GC’ed language can’t intermix ARC’ed resource handles into non-ARC, GC’ed data structures — you create a What Color Is Your Function bifurcation clusterfuck.

True, and this is the point I was making much further up: a GC language can only have finalizers, and that is not good enough, so you must have explicit close statements for resources. If this is the way you choose to go, then using the type system to statically track whether a file handle is open will help, because it will make sure you don't use-after-close statically, and can enforce a type-guard check in the dynamic case.

My point all along has been that deterministic destructors avoid the need for all that boilerplate, and can provide all the same functionality (with explicit weak references), but you need to use RC for your automatic memory management. I commented that I thought this was why Apple went with RC for Swift, because it prevents more resource bugs than just GC which only prevents memory bugs. However I have shown how you can avoid resource bugs with GC by using the type system, so it just comes down to code clutter/boilerplate.
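
One way to get the use-after-close-is-a-compile-error property described above, sketched in Rust rather than in the hypothetical type system under discussion (the Handle wrapper and file name are assumptions for illustration): close() consumes the handle by move, so the compiler statically rejects any later use.

use std::fs::File;
use std::io::{self, Read};

struct Handle(File);

impl Handle {
    fn open(path: &str) -> io::Result<Handle> {
        Ok(Handle(File::open(path)?))
    }

    fn read_all(&mut self) -> io::Result<String> {
        let mut s = String::new();
        self.0.read_to_string(&mut s)?;
        Ok(s)
    }

    // Takes ownership; dropping the inner File closes the descriptor.
    fn close(self) {}
}

fn main() -> io::Result<()> {
    let mut f = Handle::open("example.txt")?;
    let text = f.read_all()?;
    f.close();
    // f.read_all()?;  // compile error: use of moved value `f`
    println!("{} bytes", text.len());
    Ok(())
}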

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647578953
Original Date: Jun 22, 2020, 10:09 AM CDT


Inserted:

IOW, the by-default release due to ARC ownership is presenting to the programmer an assumption of isomorphic lifetime between ARC and resources, which causes the programmer to become lazy about thinking about optimal resource lifetimes. And when the programmer does decide to think about it, they realize the semantics have nothing to do with the ARC lifetime in these cases and thus they use explicit mechanisms to override ARC. So ARC was the wrong paradigm to start with; it puts the programmer in the wrong mental model. Even the cases where you said the programmer could use explicit braced blocks or the compiler could automatically infer are a conflation of the fact that resource lifetimes and ARC lifetimes are not isomorphic (i.e. I pointed out that the automatic inference case can’t choose between an ambiguous order when there’s more than one expiring in the same inferred scope, thus an unprincipled paradigm with heuristic outcomes).

@NodixBlockchain
Copy link
Author

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-647584001
Original Date: Jun 22, 2020, 10:17 AM CDT


@shelby3 I think implicit weak references are more problematic than RAII for handles. It means that some references (to memory) behave differently (as strong references) from other references (to resources, as weak references).

By including explicit weak references we can treat any explicitly weak reference (to a resource or to memory) the same way, as a cache that we can clear at any time, and we can rely on any reference that is not explicitly weak being a strong reference. This seems a more consistent approach to me.

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647611857
Original Date: Jun 22, 2020, 10:54 AM CDT


@shelby3

Nope. Because by conflating concerns you will force inner encapsulated concerns to become propagated to outer separate concerns:

Explicit weak references are better than implicit weak references. How will you tell if a reference is weak or not?

It does seem that the optimal answer, which will also accommodate a GC’ed PL, is being explicit about which single reference is the strong one, so then the type system can attempt to track it to ensure it is explicitly released. Thus this is orthogonal to ARC. ARC is a heuristic way of getting automatic release in many cases, but it isn’t complete because it is not isomorphic, and explicitness of the strong reference is required in order to not be heuristic (in some scenarios) and not be oblivious (to inadvertent resource starvation) in other scenarios, because the programmer becomes too comfortable relying on implicit ARC instead of thinking about the explicit resource lifetime.

If all references are weak, then you must test the reference is still valid before every dereference. That sounds like a lot of boilerplate.

Or throw an exception. The point is to make the programmer think about when he releases his single strong reference, so that he is sure that all the weak references are no longer in use, even if they still exist. Otherwise with ARC, he must communicate to the owners of all those weak references to tell them to assign null to them, which I think breaks encapsulation in some cases.

One could argue to just make all references strong and make sure you structure your code so that all references have died when you want the resource lifetime to end. But my thought is that this is like trying to turn the Titanic. You have encouraged the programmer to scatter references to the wind (e.g. inside layers of encapsulation) and now you want him to unconflate all that in order to conflate it with ARC but in a way that optimizes resource lifetimes instead of ARC lifetimes. Also I am wary of your proposed optimization that the resource lifetime ends implicitly upon the last access to the resource handle. There was my out-of-order heuristic point, which admittedly may be a bit contrived or overemphasized. What if I have some callback set up to inform me when the resource’s buffer has flushed to disk? So then the inferred ARC optimization you proposed closes the file before the flush, but let’s say I have another callback set up to do something after closing the file which depends on that buffer being flushed to disk. Maybe that is also overly contrived.

Also a non-ARC, GC’ed language can’t intermix ARC’ed resource handles into non-ARC, GC’ed data structures — you create a What Color Is Your Function bifurcation clusterfuck.

True, and this is the point I was making much further up: a GC language can only have finalizers, and that is not good enough, so you must have explicit close statements for resources. If this is the way you choose to go, then using the type system to statically track whether a file handle is open will help, because it will make sure you don't use-after-close statically, and can enforce a type-guard check in the dynamic case.

That is related to my last point upthread to @sighoya that by having only one strong reference responsible for the explicit resource lifetime, the compiler could perhaps track it via escape analysis and warn if the programmer has not placed an explicit intent (e.g. a variant of using that allows optional escape) when there are code paths where it doesn’t escape. Or it could just release the resource implicitly, analogous to ARC, if I were subsequently convinced to abandon being explicit. IOW, a single strong reference seems to be equivalent to the best ARC can accomplish if ARC will also use only a single strong reference for resource handles. Thus the unification I was seeking from the start of this discussion yesterday.

That proposal is probably much less intrusive than some complex linear typing to statically check for resource lifetimes with a proliferation of strong references — which I presume will not handle the range of dynamism in general programming language usage.

And remember I wrote yesterday at the very start of this tangential discussion that my original motivation was unifying the issue with non-ARC, GC’ed PLs. So conflating with ARC isn’t even an option for my goal anyway.

My point all along has been that deterministic destructors avoid the need for all that boilerplate, and can provide all the same functionality (with explicit weak references), but you need to use RC for your automatic memory management.

If you presume the strong reference is always optimally congruent with implicit ARC destruction then you seem to have forgotten already my (even recent) points about it being non-isomorphic. Ultimately we need to be explicit with the strong reference sometimes. And the other times I argue it would be best to be explicit about defaulting to stack scope. Even your map example is being explicit because it assigns null to the reference to resource handle object, although it is conflated with removing the object from the map so the programmer is permitted to entirely forget about whether that was the optimal lifetime or that he even released the lifetime at that juncture (with multiple strong references he may not even know if he is or not, until he thinks about it and refactors his code to make the optimal release of the resource lifetime obvious but potentially cluttering the original intent of his essential logic because he has to make the conflated ARC congruent with the resource lifetime). So therefore the implicit stack scoped aspect of ARC was overridden.

I commented that I thought this was why Apple went with RC for Swift, because it prevents more resource bugs than just GC which only prevents memory bugs. However I have shown how you can avoid resource bugs with GC by using the type system, so it just comes down to code clutter/boilerplate.

I think with a single strong reference there does not need to be more clutter/boilerplate with non-ARC, GC (ARC is also a form of GC), but I am still not yet convinced that requiring curt explicitness is worse than letting the compiler release implicitly. IOW a single strong reference with escape analysis should, as @sighoya pointed out, be implicitly resolvable at compile-time (although somewhat dubiously I argue not perfectly optimally…yet I would agree that a single strong reference can at least catch all the absence of explicitness at compile-time?).

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647617655
Original Date: Jun 22, 2020, 11:04 AM CDT


Note the edits:

Even your map example is being explicit because it assigns null to the reference to resource handle object, although it is conflated with removing the object from the map so the programmer is permitted to entirely forget about whether that was the optimal lifetime or that he even released the lifetime at that juncture (with multiple strong references he may not even know if he is or not, until he thinks about it and refactors his code to make the optimal release of the resource lifetime obvious but potentially cluttering the original intent of his essential logic because he has to make the conflated ARC congruent with the resource lifetime). So therefore the implicit stack scoped aspect of ARC was overridden.

@NodixBlockchain
Copy link
Author

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-647618628
Original Date: Jun 22, 2020, 11:06 AM CDT


@shelby3 I think we are coming to a consensus. Let me try and summarise:

Even your map example is being explicit because it assigns null to the reference to resource handle object.

No it does not. My example language has non-nullable references. A reference, if it exists, must be valid (a strong reference). An explicit weak reference you must test (reference.isValid()) before you can dereference, so there is no null here either.

until he thinks about it and refactors his code to make it obvious). So therefore the implicit stack scoped aspect of ARC was overridden.

Again no, the explicit Weak reference tells the programmer that the resource can disappear at any time. All other references are strong.

A clarification: GC and RC are both forms of automatic memory management. RC is not GC because Garbage Collection is a specific algorithm that is not Reference Counting.

We both agree that RC + Weak-References allows RAII to be used for all cases.

We both agree that a combination of static checks and dynamic tests can make GC + explicit destruction safe, and that finalizers are not good enough.

What seems to remain is a preference for a stylistic difference: I prefer explicit weak references with implicit destructors, you prefer explicit destructors with implicit weak references?

@NodixBlockchain
Copy link
Author

Original Author: @NodixBlockchain
Original URL: https:/keean/zenscript/issues/35#issuecomment-647622063
Original Date: Jun 22, 2020, 11:12 AM CDT


Am I correct to say shelby's issue is along the lines of: for example, if you have an image class that contains a file handle, the lifetime of the image class might be longer than the minimum required lifetime of the file handle? And thus the file handle might remain open longer than it should if based only on RAII?

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647630119
Original Date: Jun 22, 2020, 11:27 AM CDT


Note another edit:

Nope. Firstly because a weak reference does not let you relinquish the resource — only the strong reference can relinquish it. Also because by conflating concerns you will force inner encapsulated concerns to become propagated to outer separate concerns:

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647634163
Original Date: Jun 22, 2020, 11:35 AM CDT


@NodixBlockchain

Am I correct to say shelby's issue is along the lines of: for example, if you have an image class that contains a file handle, the lifetime of the image class might be longer than the minimum required lifetime of the file handle? And thus the file handle might remain open longer than it should if based only on RAII?

That’s not the complete conceptualization, because of course the image class could set its copy of the file handle to null and RAII would call the destructor if that was the final reference to the file handle object. The issue is that the knowledge about when to optimally release may be outside of and orthogonal to the owner of the reference, so then you have to move this hulking mass of a Titanic of an ARC object graph to make it coincide with the semantics of the resource lifetime. And if instead you only have one strong reference anyway, then you probably don’t need ARC and can accomplish it statically with implicit escape analysis. Additionally I am claiming that by conflating resource and ARC lifetimes, the programmer will become too lazy to resist creating scenarios where the ARC lifetimes and the resource lifetimes become conflated in ways that aren’t necessarily congruent with optimal resource lifetime cleanup. So if the programmer is going to think about and correct that, then in effect he is going to be doing either some form of explicit organization for resource lifetimes (which has thus become orthogonal to implicit stack scope ARC lifetimes) or he is going to ensure only one strong reference, in which case we do not need ARC and can infer it statically, which is more efficient and is compatible with non-ARC, GC’ed PLs.

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647652254
Original Date: Jun 22, 2020, 12:09 PM CDT


@keean

Even your map example is being explicit because it assigns null to the reference to resource handle object.

No it does not. My example language has non-nullable references.

I see you alluded to move semantics for C++ but that is still being explicit. You move the reference out of the map (and what do you leave in its place except a null in the linked-list for the bucket, or do you decrement the size of the bucket array and move all the buckets?), then the block or stack frame scope destructs the reference you moved (i.e. assigned) it to. That is entirely semantically equivalent to assigning to null for the context of my point. What is the point of this needless verbiage? [Actually I see a valid point, which is that by moving out, the implicit ARC action happens at the stack frame scope rather than at the assignment of a null, and thus in some sense does not conflate the removal from the map with the implicit destruction of the handle. Note though that your example did not take this reference as an output from the map.delete, thus the implicit conflation remained in your example. However, I find this point interesting because it seems to indicate how to facilitate explicit resource lifetimes per what I am advocating.]

A reference, if it exists, must be valid (a strong reference). An explicit weak reference you must test (reference.isValid()) before you can dereference, so there is no null here either.

until he thinks about it and refactors his code to make it obvious). So therefore the implicit stack scoped aspect of ARC was overridden.

Again no, the explicit Weak reference tells the programmer that the resource can disappear at any time. All other references are strong.

I was referring to the implicit release of the resource upon the destruction of the strong reference in the Map of your upthread example code, which is conflated with your code’s explicit removal from the Map (regardless of whether you achieve it by assigning null or by explicit move semantics, which assigns a null and destructs on stack frame scope). You’re ostensibly not fully assimilating my conceptualization of the holistic issue, including how semantic conflation leads to obfuscation of intent and even encourages the programmer to ignore any intent at all (leading to unreadable code and worse, such as very difficult-to-refactor code with insidious resource starvation).

A clarification: GC and RC are both forms of automatic memory management. RC is not GC because Garbage Collection is a specific algorithm that is not Reference Counting.

I cited the literature near the top of this issues thread #35 that conflicts with your desired taxonomy. Even Rust’s docs use the term generally:

However, the second part is different. In languages with a garbage collector (GC), the GC keeps track and cleans up memory that isn’t being used anymore, and we don’t need to think about it. Without a GC, it’s our responsibility to identify when memory is no longer being used and call code to explicitly return it, just as we did to request it.

Although I agree your taxonomy seems to make some sense (actually not), the legacy literature seems to use the term GC to refer to all forms of automatic memory management deallocation and to use the adjective ‘tracing’ GC to distinguish from ARC. But there is GC which is not ARC and is not tracing, such as my new ALP proposal, so I would prefer your taxonomy, although then I am not sure whether I should refer to the collection of garbage in my ALP idea as GC or only as AMM? So then we have two terms with the same meaning.

IOW, memory has no more life if it can’t be referenced anymore and it continues to have a life for as long as any reference can access it. Whereas non-memory resources live on beyond (or need to die before) the life of the reference to the handle to the resource. The reference to the file handle is not isomorphic with the life of the resource referred to by the file handle.
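
A small Rust sketch of that non-isomorphism, with a hypothetical ImageSource type invented for illustration: the handle object can stay reachable (alive as memory) long after the OS resource behind it has been explicitly released.

use std::fs::File;
use std::io::{self, Read};

struct ImageSource {
    file: Option<File>, // Some(..) while the OS file is open
    path: String,
}

impl ImageSource {
    fn open(path: &str) -> io::Result<Self> {
        Ok(ImageSource { file: Some(File::open(path)?), path: path.to_string() })
    }

    // Ends the *resource* lifetime now; the ImageSource value itself lives on
    // for as long as anything still references it.
    fn release(&mut self) {
        self.file = None; // dropping the File closes the descriptor
    }

    fn read_bytes(&mut self) -> io::Result<Vec<u8>> {
        match self.file.as_mut() {
            Some(f) => {
                let mut buf = Vec::new();
                f.read_to_end(&mut buf)?;
                Ok(buf)
            }
            None => Err(io::Error::new(
                io::ErrorKind::Other,
                format!("{}: resource already released", self.path),
            )),
        }
    }
}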

We both agree that RC + Weak-References allows RAII to be used for all cases.

But it semantically conflates, for example, explicit removal from a Map with implicit release of a resource, which leads the programmer down a trail of potentially spaghetti conflation of concerns. ARC is for freeing access to a reference (thus it follows that the memory resource can be freed because no reference can access it anymore), not optimally semantically congruent with freeing the (non-memory) resources that the reference used to know about. They are not semantically equivalent.

We both agree that a combination of static checks and dynamic tests can make GC + explicit destruction safe, and that finalizers are not good enough.

Even GC + implicit destructors if we use static escape analysis with a single explicitly typed strong reference.

What seems to remain is a preference for a stylistic difference: I prefer explicit weak references with implicit destructors, you prefer explicit destructors with implicit weak references?

Well we could even have a single explicit strong reference (thus any weak references are implicitly typed by default, although of course access to a weak reference always requires either explicit conditional test or an implicit runtime exception when use-after-destructed/use-after-finalized) with implicit (or explicit) destructors and make it work with either ARC or non-ARC, GC. Thus I see no advantage to conflating with ARC? And I do conceive disadvantages to conflating it with ARC, not only including that it won’t work with non-ARC, but also because conflating with ARC encourages the programmer to conflate reference access lifetimes with resource lifetimes which are not semantically isomorphic.

So yes I think you are getting closer to understanding my stance, but there still appears to be some amount of disconnect?

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647668285
Original Date: Jun 22, 2020, 12:24 PM CDT


Inserted:

IOW, memory has no more life if it can’t be referenced anymore and it continues to have a life for as long as any reference can access it. Whereas non-memory resources live on beyond (or need to die before) the life of the reference to the handle to the resource. The reference to the file handle is not isomorphic with the life of the resource referred to by the file handle.

@NodixBlockchain
Copy link
Author

Original Author: @sighoya
Original URL: https:/keean/zenscript/issues/35#issuecomment-647669696
Original Date: Jun 22, 2020, 12:27 PM CDT


@shelby3

You can't require the compiler to detect in the interior when to release/finalize a resource, only at the terminus. But the latter can't be required for dynamic cases either, i.e. a finalization must either be aborted if some other entity is blocking your resource, or you force the disruption of access to that resource and cause the other entities to possibly fail in their obligations.

@NodixBlockchain
Copy link
Author

Original Author: @thickAsABrick
Original URL: https:/keean/zenscript/issues/35#issuecomment-647670179
Original Date: Jun 22, 2020, 12:28 PM CDT


Hello: I'll confirm that I, @thickAsABrick, did not post anything to, or contribute to, this conversation. Not sure why I am receiving these notifications.

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647670826
Original Date: Jun 22, 2020, 12:30 PM CDT


@thickAsABrick

Hello: I'll confirm that I, @thickAsABrick, did not post anything to, or contribute to, this conversation. Not sure why I am receiving these notifications.

Hahaha. Sorry I did not check to see you really exist. I guess I should have taken the visual cue when it was bolded in the comment. Apologies. I will not do it again. I hope you are not offended if I find this to be funny.

Hey I wish I had thought of that username before you did.

And @Ichoran thought I was chasing away new community, lol. (He is correct)

(as I try to scurry off stage hoping nobody will notice facepalm )

@NodixBlockchain
Copy link
Author

Original Author: @thickAsABrick
Original URL: https:/keean/zenscript/issues/35#issuecomment-647672022
Original Date: Jun 22, 2020, 12:32 PM CDT


No problem. Just making sure my account was not hacked. Quite honestly, I do not know much about the subject matter that you are discussing.

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647673108
Original Date: Jun 22, 2020, 12:34 PM CDT


@thickAsABrick

No problem. Just making sure my account was not hacked. Quite honestly, I do not know much about the subject matter that you are discussing.

Thanks for the understanding. And your last sentence fits perfectly then — not that you should be expected to jump into an esoteric 601+ comments issues thread about highly obscure, abstruse programming language design topics and be able to assimilate it.

I actually like the Jethro Tull song even though it received poor reviews.

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647687433
Original Date: Jun 22, 2020, 1:04 PM CDT


@sighoya

You can't require the compiler to detect in the interior when to release/finalize a resource, only at the terminus. But the latter can't be required for dynamic cases either, i.e. a finalization must either be aborted if some other entity is blocking your resource, or you force the disruption of access to that resource and cause the other entities to possibly fail in their obligations.

If there is only one reference (or at least only one strong reference) to the resource handle then the compiler should be able to reason (by tracking the implicit linear type) about when that reference dies, either upon exit from its terminal stack scope or upon assignment of null, i.e. Rust does this, but hopefully it can be accomplished without lifetime annotations? (If not, then @Ichoran may be correct to cite that as a utility advantage for Rust’s borrow checker.) With multiple strong references to the same resource handle I presume the static analysis required by the compiler would require total, dependent type knowledge a la Coq and thus wouldn’t be viable for a general purpose PL. For multiple strong references I think only a conflation with ARC would be viable, but I have posited that comes with other meta-level baggage.

I was not proposing to track only in the interior (to stack frame scope) when to release. I was proposing to track in the interior whether any code paths don’t escape the stack frame scope, and thus make sure the programmer has been explicit about whether to release those automatically or has explicitly released them — or alternatively we could discuss whether the compiler should just release them implicitly by default without any using-like explicit intent from the programmer. I argued that being explicit has some advantages. I will think that over again and decide if I have changed my mind about the worth of those posited advantages in light of the discussion that has occurred since I made that claim.
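
A rough Rust sketch of that escape analysis, leaning on move semantics (the registry parameter and the decision flag are hypothetical): on one code path the handle escapes into a longer-lived registry, on the other it never escapes and its destructor runs when it goes out of scope.

use std::fs::File;
use std::io;

fn process(path: &str, keep_open: bool, registry: &mut Vec<File>) -> io::Result<()> {
    let f = File::open(path)?; // single owning (strong) reference

    if keep_open {
        registry.push(f); // escapes: ownership moves into the registry
    }
    // If it did not escape, `f` is still owned by this scope, so File's
    // destructor (which closes the descriptor) runs when the scope ends.

    // Any use of `f` at this point is rejected at compile time, because the
    // compiler tracks that `f` may have been moved on the branch above.
    Ok(())
}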

@NodixBlockchain
Copy link
Author

Original Author: @shelby3
Original URL: https:/keean/zenscript/issues/35#issuecomment-647730310
Original Date: Jun 22, 2020, 2:35 PM CDT


Note the edit:

@keean

Even your map example is being explicit because it assigns null to the reference to resource handle object.

No it does not. My example language has non-nullable references.

I see you alluded to move semantics for C++ but that is still being explicit. You move the reference out of the map (and what do you leave in its place except a null in the linked-list for the bucket, or do you decrement the size of the bucket array and move all the buckets?), then the block or stack frame scope destructs the reference you moved (i.e. assigned) it to. That is entirely semantically equivalent to assigning to null for the context of my point. What is the point of this needless verbiage? [Actually I see a valid point, which is that by moving out, the implicit ARC action happens at the stack frame scope rather than at the assignment of a null, and thus in some sense does not conflate the removal from the map with the implicit destruction of the handle. Note though that your example did not take this reference as an output from the map.delete, thus the implicit conflation remained in your example. However, I find this point interesting because it seems to indicate how to facilitate explicit resource lifetimes per what I am advocating.]

@NodixBlockchain
Copy link
Author

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-647732299
Original Date: Jun 22, 2020, 2:40 PM CDT


@shelby3 I can find a few references to automatic memory management and GC vs ARC, but it seems not widespread. To avoid confusion we should probably say that MS (mark-sweep) and RC (reference counting) are forms of AMM (automatic memory management), and avoid use of the term GC altogether.

Whereas non-memory resources live on beyond (or need to die before) the life of the reference to the handle to the resource. The reference to the file handle is not isomorphic with the life of the resource referred to by the file handle.

You can always refactor so the life of the handle is the life of the resource. If the resource needs to be shorter lived you can use a Weak reference. If the resource needs to be longer lived you can store it in a global container (global array, hash map, singleton object etc).

ARC is for freeing access to a reference (thus it follows that the memory resource can be freed because no reference can access it anymore), not optimally semantically congruent with freeing the (non-memory) resources that the reference used to know about. They are not semantically equivalent.

There is no reason for them not to have the same scope. If I have access to the handle, I should be able to read from the file. If I don't need access any more destroy the handle. There is no need to have a handle to a closed file.

Even GC + implicit destructors if we use static escape analysis with a single strong reference.

No, because a strong reference must never be invalid, so you cannot close a strong reference to a file. A strong reference to a file must be RAII. Basically with a strong reference you must always be able to call read on the handle without an error.

If you want to be able to explicitly close the handle, it is by definition a weak reference to a file. So with GC the close is explicit and the weakness of the handle is implicit. You have no choice over this (unless you want unsoundness).

Well we could even have a single explicit strong reference (thus any weak references are implicit by default) with implicit (or explicit) destructors and make it work with either ARC or non-ARC, GC

To repeat the above: you should not use a strong reference to the resource with GC, because that would rely on finalizers to release the handle, and that can lead to resource starvation. It's not safe.

Basically mark-sweep = weak resource handle only.
ARC = strong or weak resource handles are possible.

Edit: Regarding C++, yes you are right you would swap a null there, but that's C++, which is not an ARC language. This would imply that "Weak" is defined:

type Weak<T> = T | null

And therefore Weak would be a nullable reference to the strong file handle. However you would not be allowed to just write a null. Weak is an abstract datatype, so the value is private, and you would have to call weak.delete() which calls the destructor on the object contained, and then replaces it with a null.
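
A rough Rust sketch of that nullable Weak wrapper (names are illustrative, not the actual proposed design): the payload is private, the only way to empty the slot is delete(), which runs the contained value's destructor, and callers must check validity before use.

struct WeakSlot<T> {
    inner: Option<T>, // private: the only public way to write "null" is delete()
}

impl<T> WeakSlot<T> {
    fn new(value: T) -> Self {
        WeakSlot { inner: Some(value) }
    }

    fn is_valid(&self) -> bool {
        self.inner.is_some()
    }

    // Explicitly destroy the contents; later accesses see an empty slot.
    fn delete(&mut self) {
        self.inner = None; // drops (destructs) the contained value here
    }

    fn get(&self) -> Option<&T> {
        self.inner.as_ref()
    }
}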

@NodixBlockchain
Copy link
Author

Original Author: @keean
Original URL: https:/keean/zenscript/issues/35#issuecomment-647735836
Original Date: Jun 22, 2020, 2:49 PM CDT


@shelby3

Edit: Regarding C++, yes you are right you would swap a null there, but that's C++, which is not an ARC language. This would imply that "Weak" is defined:

type Weak<T> = T | null

And therefore Weak would be a nullable reference to the strong file handle. However you would not be allowed to just write a null. Weak is an abstract datatype, so the value is private, and you would have to call weak.delete() which calls the destructor on the object contained, and then replaces it with a null.

@NodixBlockchain
Copy link
Author

Original Author: @Ichoran
Original URL: https:/keean/zenscript/issues/35#issuecomment-647744776
Original Date: Jun 22, 2020, 3:10 PM CDT


@shelby3 - Um, your supposed code counterexample isn't a counterexample. In fact, the disproof of it being a counterexample was in my Rust example! Here it is again:

https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=2baade67d8ce2cc9df628de2b753f0e6

Here is the code, in case you want to see it here instead of running the runnable example there:

struct Str {
    string: String
}

impl Drop for Str {
    fn drop(&mut self) { println!("Dropping!"); }
}

fn main() {
    let mut s = String::new();
    s.push_str("salmon");
    let ss = Str{ string: s };
    let l = ss.string.len();
    std::mem::drop(ss);
    println!("I was size {}", l);
}

Rust drops at block closure, but it resolves lifetimes to last use when needed ("non-lexical lifetimes"), so it could do it automatically at the point where it says std::mem::drop.

So your first example would magically just work. If not, the explicit drop with move semantics (so you can't use it again) would non-magically work.

Note that

using(new Handle){ handle =>
  somethingThatSecretlyBlocksForever()
}

also ties up resources forever. Because the blocking is secret (forgotten), it doesn't look like a problem.

But

let handle = Handle::new();
useHandle(handle);
somethingThatSecretlyBlocksForever();

with automatic dropping of handle when it can no longer be accessed, is safe.

You don't even have to know the secret. It just works. It's just correct.

So using is an antipattern.

Here's another problem with using--the requirement to nest scope.

using("textures".path.stream){ stream =>
  val textures = readSomeTextures(stream)
  using(new GraphicsResource){ g =>
    g.load(textures)
    if (userWantsMoreTextures()) {
      val moreTextures = readMoreTextures(stream)
      g.load(moreTextures)
    }
    g.doThings()
    g.doMoreThings()
  }
}

The problem? You've still got that file handle open, because you needed it live when you acquired some other resource.

In contrast, RAII handles this flawlessly.
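
To illustrate, a Rust sketch of the same shape with stub types standing in for the hypothetical stream and graphics resource: RAII plus an explicit early drop releases the file handle mid-scope, with no extra nesting.

struct Stream;
struct GraphicsResource;

impl Drop for Stream {
    fn drop(&mut self) { println!("stream closed"); }
}
impl Drop for GraphicsResource {
    fn drop(&mut self) { println!("graphics resource released"); }
}

impl Stream {
    fn read_textures(&mut self) -> Vec<u8> { vec![1, 2, 3] }
}
impl GraphicsResource {
    fn load(&mut self, _textures: &[u8]) {}
    fn do_things(&mut self) {}
}

fn user_wants_more_textures() -> bool { false }

fn main() {
    let mut stream = Stream;
    let textures = stream.read_textures();

    let mut g = GraphicsResource;
    g.load(&textures);
    if user_wants_more_textures() {
        let more = stream.read_textures();
        g.load(&more);
    }
    drop(stream); // file handle released here, mid-scope, without nesting

    g.do_things();
} // g released at end of scope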

Having to remember to deal with resources is a pain. If the compiler can figure out that they need to be closed, they should just plain be closed. If you need help understanding why your life is so easy now, then an IDE could help you out.

@NodixBlockchain
Copy link
Author

Original Author: @thickAsABrick
Original URL: https:/keean/zenscript/issues/35#issuecomment-647751447
Original Date: Jun 22, 2020, 3:25 PM CDT


My apologies for asking: is there something I can do at my end to stop these emails? I tried unsubscribe, but it does not seem to be helping?
