Approaches to Resource Disposal

posted in swiftcoding
Published May 23, 2014
I'm working on a novel programming language, working title 'aevum'. As part of that process, I'll be writing a series of articles about various aspects of language design and development.

Every long-running computer program needs to obtain a variety of resources, and those resources are almost always finite. Memory, file handles, threads, GPU resources - all of these are relatively scarce, and exhausting the available supply will have dire consequences, anywhere from killing the program to crashing the computer on which it runs.

Given this scarcity, it is essential that we can dispose of these resources as soon as we are finished using them (or at least, before they are needed elsewhere). Although that sounds simple enough, it turns out that there are a couple of hurdles to overcome.

The first hurdle relates to ownership. As long as every resource is owned exactly once (i.e. a member variable of one object, or a local variable to one function), then disposal is trivial - a resource is disposed of as soon as its parent is disposed of. But requiring single ownership of every object comes with disadvantages of its own: with strict single ownership you can't easily maintain cyclic data structures such as bi-directional lists, graphs or trees.

On the other hand, if you elect to allow multiple ownership, you are then faced with the problem of how to determine when a resource is actually no longer being used. Obviously you can't dispose of it as long as even a single owner still exists, but how do you determine that the last owner is gone? You can explicitly keep track of the list of owners for each resource (a la reference counting), at the expense of both storage and performance, or you can at regular intervals scan the entire system to determine objects without owners (a la tracing garbage collectors), at the cost of determinism and performance.


Manual resource disposal

Manual resource disposal was once a staple of imperative languages, and while one might hope that it would be included here as a mere historical footnote, that is sadly not the case. The majority of garbage-collected languages (including Java, Python and C#, to name but a few) make little explicit provision for the disposal of scarce non-memory resources. While they do offer some support for locally-scoped resources (Python's with statement, or C#'s using statement), long-lived or shared resources have to be managed by manual calls to a dispose method, potentially augmented by a manual reference counting system.

Why is this less than ideal? Primarily because it places the burden of resource disposal squarely on the programmer. Not only does it require a significant amount of additional code, but forgetting to call the disposal methods in one of any number of places will prevent the disposal of those resources.
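To make that failure mode concrete, here is a minimal sketch of the pattern (written in Rust purely for illustration; the Connection type and its methods are invented, and idiomatic Rust would implement Drop rather than expose a manual close):

```rust
// A hypothetical handle that must be closed manually. The names are
// invented for illustration; a real Rust type would implement Drop instead.
struct Connection {
    id: u32,
}

impl Connection {
    fn open(id: u32) -> Connection {
        println!("connection {} opened", id);
        Connection { id }
    }

    // The programmer must remember to call this on *every* exit path.
    fn close(&self) {
        println!("connection {} closed", self.id);
    }
}

fn handle_request(conn_id: u32, fail_early: bool) {
    let conn = Connection::open(conn_id);
    if fail_early {
        // Oops: this early return skips close(). In a language offering
        // only manual disposal, the underlying handle is now leaked.
        return;
    }
    // ... do some work with conn ...
    conn.close();
}

fn main() {
    handle_request(1, false); // closed correctly
    handle_request(2, true);  // never closed
}
```

Every early return, exception path and error branch needs its own matching close call, which is exactly the kind of bookkeeping that tends to slip through code review.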


Tracing garbage collection
Tracing garbage collectors are the bread and butter of modern programming languages. I'm not going to describe the workings in detail - there are many resources on the subject. Suffice it to say that at regular intervals we trace through all reachable objects, mark them as live, and dispose of everything else. The typical criticism of garbage collectors is that they are not deterministic, and collection may occur at any time, interrupting the normal execution of the program. While that represents a serious problem in hard real-time applications, there are a variety of ways to work around the problem, and I am mostly interested in a more subtle manifestation of the same issue.

The tracing portion of a garbage collection cycle has to touch all reachable memory, and the collection phase has to free every orphaned object, both of which may take a significant amount of time. For this reason, garbage collection is typically delayed until the system detects that it is about to run out of memory. That's all very well, but what if it isn't memory we are about to run out of? The garbage collector doesn't know anything about, say, file handles, so even if you run out of file handles, as long as there is plenty of memory (and modern machines have plentiful memory) garbage collection won't be triggered.

The typical solution to this problem is to require manual disposal of non-memory resources, which results in the same drawbacks we have already discussed above.


Reference counting
Reference counting is the darling of the C++ world, and sees pretty wide use even in other languages. Again, I'm not going to describe the workings in detail, but the basics are to attach a reference count to each object, increment that count each time a reference to the object is created, and decrement the count each time such a reference is destroyed. If the reference count drops to zero you know that there are no references to the object, and it can be deleted immediately.
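Rust's standard-library Rc<T> pointer makes the mechanism easy to watch in action: cloning a handle increments the count, dropping one decrements it, and the object is destroyed the instant the count reaches zero. A small sketch (the Texture type is invented for illustration):

```rust
use std::rc::Rc;

struct Texture {
    name: String,
}

impl Drop for Texture {
    // Runs the moment the last reference goes away.
    fn drop(&mut self) {
        println!("disposing texture '{}'", self.name);
    }
}

fn main() {
    let a = Rc::new(Texture { name: "bricks".to_string() });
    println!("count = {}", Rc::strong_count(&a)); // 1

    let b = Rc::clone(&a); // a reference is created: count goes up
    println!("count = {}", Rc::strong_count(&a)); // 2

    drop(b); // a reference is destroyed: count goes down
    println!("count = {}", Rc::strong_count(&a)); // 1

    drop(a); // count hits zero: Texture::drop runs immediately
    println!("texture is already gone");
}
```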

Reference counting offers one key advantage over garbage collection: all objects can be disposed as soon as they are no longer needed. This is excellent from the perspective of disposing non-memory resources, but it unfortunately goes hand-in-hand with a number of flaws.

The first flaw is that reference counting is vulnerable to cycles, where objects A and B either directly or indirectly refer to each other, thereby preventing the reference count of either from ever reaching zero. This flaw is further compounded by the fact that many common data structures (doubly-linked lists, bi-directional trees, cyclical graphs, etc.) involve such circular references. We can mitigate this by making the programmer define which references actually confer ownership (strong vs weak references), but this adds significant mental overhead for the programmer, and just emulates the ideal case of single ownership. We can also allow cycles, and run cycle detection to break them, but that is roughly equivalent to garbage collection, and shares the same drawbacks.
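Rust's Rc/Weak pair demonstrates both the problem and the strong-vs-weak mitigation. In the sketch below (the Node type is invented for illustration), the child points back at its parent through a Weak reference, so the cycle never keeps a strong count alive; had both links been strong Rc pointers, neither node would ever be freed:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    name: &'static str,
    // Strong pointer down to the child, weak back-pointer up to the parent.
    child: RefCell<Option<Rc<Node>>>,
    parent: RefCell<Weak<Node>>,
}

impl Drop for Node {
    fn drop(&mut self) {
        println!("dropping node '{}'", self.name);
    }
}

fn main() {
    let parent = Rc::new(Node {
        name: "parent",
        child: RefCell::new(None),
        parent: RefCell::new(Weak::new()),
    });
    let child = Rc::new(Node {
        name: "child",
        child: RefCell::new(None),
        parent: RefCell::new(Weak::new()),
    });

    // Link the two nodes: strong downwards, weak upwards.
    *parent.child.borrow_mut() = Some(Rc::clone(&child));
    *child.parent.borrow_mut() = Rc::downgrade(&parent);

    // Both Drop implementations run when main() ends. If the back-pointer
    // were a strong Rc as well, neither count could reach zero: a leak.
}
```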

The second flaw is that not only does the need to attach a reference count to every object consume additional memory, but the near-constant increment/decrement of reference counts also puts a considerable strain on the cache. This can be reduced by careful optimization of where reference counts are updated, and by deferring updates so as to batch them together, but the former adds to programmer complexity, and the latter drastically reduces the benefit of immediate disposal.


ARC (automatic reference counting)
Apple deployed reference counting as a crucial part of their API design with the advent of Mac OS X and their Objective-C frameworks. Initially this reference counting had to be done through manual calls to retain/release methods, and with the addition of some supporting constructs (such as autorelease pools) and a strongly documented convention for programmers to follow, this was very successful (albeit a little tedious).

After a brief foray into traditional garbage collection (which failed to meet the needs of the fast-growing iPhone ecosystem), they hit on a simpler idea: what if the compiler could perform reference counting for you? Mechanically following the same conventions provided to the programmer, and augmented by a couple of language annotations to influence behaviour, the compiler can guarantee to implement correct reference counting, and the optimiser can strip back out most redundant calls to remove much of the overhead thus introduced.

In general it is a very neat system, the major drawback being that it still relies on the programmer to annotate weak references correctly, and there remains some overhead in maintaining the necessary reference counts.


Rust
There are a number of interesting resource management ideas knocking around in Mozilla's Rust, namely unique pointers, borrowed references, and automatic region management.

Unique pointers are declared with a ~ (tilde character), and they uniquely own the object they point to. As in, no other object holds a pointer to it. If you assign one unique pointer to another, the contents are transferred (not copied) and the original no longer points to the owned object. Unique pointers make resource management a piece of cake, because if the pointer uniquely owns its contents, then we can destroy the contents as soon as the pointer goes out of scope.
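The ~T syntax was later replaced by Box<T>, but the semantics described here are unchanged: assignment moves ownership rather than copying it, and the owned value is destroyed the moment its unique pointer goes out of scope. A quick sketch (the Buffer type is invented for illustration):

```rust
struct Buffer {
    label: &'static str,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        println!("freeing buffer '{}'", self.label);
    }
}

fn main() {
    // Written ~Buffer in the Rust of this article; Box::new in later Rust.
    let a = Box::new(Buffer { label: "vertices" });

    // Assignment *moves* ownership: `b` now uniquely owns the buffer,
    // and `a` can no longer be used.
    let b = a;
    // println!("{}", a.label); // compile error: value moved out of `a`

    println!("using buffer '{}'", b.label);
} // `b` goes out of scope here and the buffer is freed immediately
```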

Of course, as I mentioned in the introduction, there is a whole class of data structures which are very hard to create with only unique pointers, and that's where borrowed references come in. Rust lets you declare a reference with the & (ampersand character), and this can be used to 'borrow' a reference from a unique pointer. The borrowed reference refers to the same thing the unique pointer does, but it does not own the referenced object, and the compiler guarantees that the reference will not outlive the unique pointer (thus never impacting resource collection at all).
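A sketch of borrowing against a uniquely owned value (again with the invented Buffer type): the reference can be used freely, but the compiler rejects any attempt to let it outlive the owner.

```rust
struct Buffer {
    label: &'static str,
}

// Takes a borrowed reference: it may read the buffer,
// but it does not own it and will never free it.
fn describe(buf: &Buffer) {
    println!("buffer '{}'", buf.label);
}

fn main() {
    let owner = Box::new(Buffer { label: "indices" });
    describe(&owner); // borrowed only for the duration of the call

    // A borrow that would outlive its owner is rejected at compile time:
    // let dangling: &Buffer;
    // {
    //     let short_lived = Buffer { label: "temp" };
    //     dangling = &short_lived; // error: `short_lived` does not live long enough
    // }
    // println!("{}", dangling.label);
}
```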

Since our references must be statically guaranteed not to outlive the matching unique pointer, we'd be quite limited in what we could do with these references. For example, we wouldn't be able to store such a reference in a container, because the lifetime of the container might outlast our unique pointer. And this is why we need automatic region management: regions define scopes within which lifetimes exist, and by limiting the reference to the region containing the unique pointer, we guarantee that the reference cannot outlive the pointer. But regions are hierarchical, and automatically created for every scope, so that as long as a container is owned by a child region of the region holding our unique pointer, we can add references to that container freely, secure in the knowledge that the container too will not outlive the original unique pointer.
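In later Rust these regions surface as lifetimes, and the container case sketches out roughly like this (invented names again): because the vector of references lives in a scope nested inside the owner's scope, the compiler can prove that no reference outlives the data it points at.

```rust
struct Buffer {
    label: &'static str,
}

fn main() {
    // The owning data lives in the outer scope (the outer "region").
    let owned = vec![
        Buffer { label: "vertices" },
        Buffer { label: "indices" },
    ];

    {
        // The container of borrowed references lives in an inner scope,
        // so it provably dies before the data it refers to.
        let mut views: Vec<&Buffer> = Vec::new();
        for buf in &owned {
            views.push(buf);
        }
        for view in &views {
            println!("viewing '{}'", view.label);
        }
    } // `views` is gone here; `owned` is still perfectly valid.

    println!("still holding {} buffers", owned.len());
}
```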

And the best part is that the compiler can determine most of this statically, and hand you back nice little errors when you violate regions, thus avoiding most of the runtime overhead. There are of course limitations: the programmer still has to be cognisant of the difference between unique pointers and borrowed references, and attempts to move referenced objects will occasionally induce head-scratching compiler errors. But overall, it is a very solid approach to purely deterministic resource disposal with minimal overhead.


Can we do better?
Maybe.

Apple's ARC and Rust's unique/borrowed/region system are both very promising approaches to improving the accuracy and performance of resource disposal, while lowering the cognitive load on the programmer and reducing the incidence of resource leaks. And they both avoid the crippling restrictions on programmers imposed by classical research approaches, such as complete immutability of data or linear types. However, both continue to have some cognitive overhead, and both are relatively new approaches with the possibility of as yet unforeseen complications.

But for now, the trend of compiler-centric resource disposal approaches seems to be here to stay.




Comments

JeffCarp

Thank you for writing the article! I found it quite enlightening. :-)

Cheers!

May 24, 2014 02:20 PM
jefferytitan

Thanks for the article. Personally I think that none of the systems are necessarily wrong (except where it allows you to access freed memory). My preference would be allowing the programmer to choose which resource management approach to apply to which resource. Different resources have different requirements. If you're looking at files for example, a settings file, a log file, and a database file all have differing needs based on how often they're accessed, how long they stay open for, and how many processes will be accessing them. I think that marking certain objects as having different management strategies, for example making allocation/deallocation manual for an object pool, would mean that garbage collection would run much faster as it would be working with a smaller object graph.

June 03, 2014 02:18 AM