Short-term Memory Lifespan Estimation and Runtime

Funded by MINCyT-BMWF grant AU1019

Christoph Kirsch, University of Salzburg

Sergio Yovine, University of Buenos Aires

Motivation and goals

Implicit and automatic dynamic memory reclamation, also known as garbage collection (GC), is clearly beneficial for developing robust software: it relieves the programmer of the highly error-prone task of explicitly freeing objects that are no longer used, a cause of difficult-to-find bugs with a tremendous impact on software cost and time-to-market. However, GC comes with unpredictable overheads in time and space, making it poorly suited for critical resource-constrained real-time systems, where predictability is a must. That is why dynamic memory has largely been ignored by this sector of the software industry in the past. This picture is rapidly and thoroughly changing today, however, as object-oriented languages gain momentum for programming such software; one example is the use of real-time Java in time-critical financial applications.

Both groups have been steadily working on different approaches to the problem with the aim of developing innovative techniques to overcome the unpredictable behavior of GC. They have produced satisfactory, though partial, results.

The Salzburg group focused on the system aspects of the problem. It has proposed a new memory model called short-term memory for managing short-lived objects on the heap. In contrast to the traditional persistent memory model, objects in short-term memory expire after a finite amount of time, which makes deallocation unnecessary. Instead, the expiration of an object may be extended, if necessary, by refreshing it. A concurrent, incremental, and non-moving implementation of short-term memory for explicit refreshing, called self-collecting mutators, has been developed. It is based on programmer-controlled time and has been integrated into state-of-the-art runtimes of three programming languages: C, Java, and Go. All memory management operations are lock-free and run in constant time modulo the underlying allocators. The implementation does not require any additional heap management threads. Expired objects may be collected anywhere between one at a time, for maximal incrementality, and all at once, for maximal throughput and minimal memory consumption. The integrated systems are heap management hybrids with persistent memory as the default and short-term memory as an option. Legacy code runs without any modifications, with negligible runtime overhead and constant per-object space overhead. Legacy code can also be modified to take advantage of short-term memory by having some, but not all, objects allocated in short-term memory and managed by explicit refreshing. An experimental study on single- and multi-threaded use cases has been carried out in all three languages. The results showed that using short-term memory (1) simplifies heap management in a state-of-the-art H.264 encoder with no additional time overhead and only minor space overhead, and (2) improves, at the expense of safety, memory management throughput, latency, and space consumption by reducing the number of GC runs.

The group at UBA focused on the programming and analysis aspects of the problem. Indeed, an appealing alternative to per-object GC is scope-based memory management, where objects with similar lifetimes are allocated in regions and freed all together when the scope (e.g., a method call) expires. This approach yields time- and space-predictability, at the expense of making programming much harder and more error prone. The group has developed a tool that automatically transforms a program using a garbage-collected dynamic memory model into one using regions. The approach combines a static points-to and escape analysis, which approximates object lifetimes, with a static memory estimation analysis, which outputs parametric estimations of region sizes. The tool supports a two-step methodology. First, memory scopes are automatically inferred with the aid of the points-to and escape analysis. Second, the resulting memory layout is refined and fine-tuned using a region editing tool and explicit program annotations. Region sizes are expressed as functions that are evaluated at runtime to determine the actual sizes. In addition, the tool predicts a symbolic bound on the dynamic memory footprint of a method, which makes it possible to check whether the method can be safely executed without running out of memory. The benefits of the approach have been validated on an aircraft collision-detector benchmark and a real-life banking application. The experiments showed how the tool-chain can help analyze, at compile time, the effects of several possible memory organizations in order to choose the most space-efficient one.
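As a toy illustration of the region discipline described above, the sketch below creates a region whose capacity comes from a parametric size function evaluated at runtime, bump-allocates within it, and drops everything at once when the region goes out of scope. This is a hedged sketch in Go with hypothetical names (Region, NewRegion, Alloc); the actual tool targets Java, and the size expression 8*n stands in for what a static estimator might emit for a method allocating n eight-byte objects.

```go
package main

import "fmt"

// Region: a bump-pointer arena whose contents are freed all together
// when its scope exits, as in scope-based memory management.
type Region struct {
	buf  []byte
	next int
}

// NewRegion evaluates the statically derived size function for the given
// parameter and reserves the whole region up front.
func NewRegion(sizeFn func(n int) int, n int) *Region {
	return &Region{buf: make([]byte, sizeFn(n))}
}

// Alloc hands out the next chunk; a real runtime would fail or fall back
// to the GC heap if the static estimate were exceeded.
func (r *Region) Alloc(size int) ([]byte, bool) {
	if r.next+size > len(r.buf) {
		return nil, false // static estimate exceeded
	}
	p := r.buf[r.next : r.next+size]
	r.next += size
	return p, true
}

func main() {
	sizeFn := func(n int) int { return 8 * n } // parametric estimate: 8n bytes
	r := NewRegion(sizeFn, 3)                  // evaluated at runtime: 24 bytes
	for i := 0; i < 3; i++ {
		_, ok := r.Alloc(8)
		fmt.Println(ok) // each of the three expected allocations fits
	}
	_, ok := r.Alloc(8)
	fmt.Println(ok) // false: beyond the estimated footprint
	// Leaving the scope drops the entire region at once.
}
```

The point of the sketch is that the size function, not the allocation sites, is what the static analysis must get right: once it is sound, every allocation within the scope is predictable in both time and space.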

The goal of this project is to integrate these two complementary approaches into an end-to-end solution to the problem, with automated support at programming, analysis and system levels. This will enable automated code modifications to take advantage of the performance benefits of the short-term memory model without sacrificing safety.


The work plan consists of improving and extending the techniques described above in order to integrate them into a full-fledged automated approach to guaranteeing predictable time and space behavior. The idea is to combine the proven benefits of both the short-term memory model and the safe estimation of object lifetimes by static program analysis, while overcoming their drawbacks: potentially unsafe expirations in the case of short-term memory management, and overly conservative approximations in the case of lifetime analysis.

Briefly, the short-term memory model proposed by the Salzburg group works as follows. Objects may be flagged as short-term any time after their allocation, e.g., when an exact or at least reasonable expiration date is known. If it later turns out that the object would expire too early, it may be explicitly refreshed with a later expiration date. An object may be refreshed by multiple threads, multiple times, at any time. As a result, an object may have multiple expiration dates (for one thread but also across different threads), since each refresh creates a new expiration date for the object. The expiration semantics is nevertheless simple: an object in short-term memory expires when all of its expiration dates have expired, and only then is it deallocated by the memory manager.
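This expiration semantics can be captured in a few lines. The following Go sketch is an illustrative model only, not the actual self-collecting-mutators API (the names Object, Refresh, and Expired are invented here): each refresh records a new expiration date against a logical, programmer-controlled clock, and the object counts as expired only once every recorded date has passed.

```go
package main

import "fmt"

// Object in short-term memory: each refresh records a new expiration
// date; the object expires only when *all* recorded dates have passed.
type Object struct {
	expirations []int // logical expiration times, one per refresh
}

// Refresh extends the object's life by recording a new expiration date.
// Multiple threads may each record their own dates.
func (o *Object) Refresh(date int) {
	o.expirations = append(o.expirations, date)
}

// Expired reports whether every recorded expiration date lies in the past
// relative to the given logical time.
func (o *Object) Expired(now int) bool {
	for _, d := range o.expirations {
		if d >= now {
			return false // at least one date still valid: not expired
		}
	}
	return true // all dates have passed
}

func main() {
	o := &Object{}
	o.Refresh(2) // expires at logical time 2
	o.Refresh(5) // a second refresh, possibly by another thread
	fmt.Println(o.Expired(3)) // false: the date 5 is still valid
	fmt.Println(o.Expired(6)) // true: all dates have passed
}
```

Note how deallocation never appears in the program: once the logical clock passes the last recorded date, reclaiming the object is entirely the memory manager's business.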

The explicit refreshing mechanism has been proven to be more general than region-based memory management, because not-yet-expired objects can be refreshed individually at any time, by the same thread and even by different threads. Still, explicit refreshing requires knowing an upper bound on the lifetime of objects, which may be arbitrarily large as long as it is finite, and it may therefore be used incorrectly. One example of incorrect use is missing refreshing information, which causes memory to be reclaimed too early and thus creates dangling pointers. Another is refreshing across expiration, i.e., refreshing already expired objects, which may lead to an object being deallocated multiple times.
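The second error mode can be made explicit with a small guard. The Go sketch below is a hypothetical checked variant of refreshing, not part of the actual implementation: it refuses to extend an object whose last expiration date has already passed, so a buggy caller cannot resurrect a reclaimed object and trigger a second deallocation.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrExpired signals an attempt to refresh an already-expired object.
var ErrExpired = errors.New("refresh of already-expired object")

// STObject tracks the latest expiration date granted so far.
type STObject struct {
	expiration int
}

// CheckedRefresh extends the expiration date only if the object is still
// alive at the current logical time; otherwise it reports an error instead
// of silently resurrecting the object.
func (o *STObject) CheckedRefresh(now, newDate int) error {
	if now > o.expiration {
		return ErrExpired // object already reclaimed: refuse the refresh
	}
	if newDate > o.expiration {
		o.expiration = newDate
	}
	return nil
}

func main() {
	o := &STObject{expiration: 3}
	fmt.Println(o.CheckedRefresh(2, 10))  // nil: still alive, extended to 10
	fmt.Println(o.CheckedRefresh(12, 20)) // error: already expired at time 10
}
```

A check like this trades some of the model's speed for safety at runtime; the project's goal, described next, is instead to discharge such checks statically.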

For the group at UBA, the research agenda will consist of extending and improving its static analyses to automatically detect short-lived objects that can safely be placed in short-term memory. This means predicting object expiration dates so as to avoid the errors explained above. Moreover, the group will seek to produce estimations that are as precise as possible, in order to minimize or even completely eliminate the need for multiple refreshing. In contrast to previous work, this requires the analysis to provide not only a bound on the lifetime of an object but also a frequency of refreshes; to the best of our knowledge, no such analysis exists in the literature. Indeed, this implies solving two difficult challenges. The first is the need to improve the granularity of the analyses, notably by looking at finer scopes, object relationships, and access patterns in order to identify refreshing frequencies. The memory consumption estimation will be extended to count not only the number of objects but also the number of explicit refreshes. The second challenge consists of extending the analyses to handle multithreaded code.

The Salzburg group will focus on integrating compile-time information obtained by the UBA analysis into short-term memory management. Ideally, the resulting system will be simple, fully incremental, and fast, yet minimize memory consumption and remove memory leaks while still providing safety. Existing systems such as real-time garbage collectors are safe and incremental, but they are slow and extremely complex, require more memory than non-real-time designs, and still suffer from reachable memory leaks. The key challenge is thus to deallocate the memory of no-longer-needed yet possibly reachable objects earlier (decreasing maximum memory consumption and removing reachable memory leaks) and in bulk (increasing throughput as well as decreasing latency and fragmentation). An important milestone of the cooperation is therefore to identify the right level of abstraction and the interface between the UBA analysis and the Salzburg runtime.

The overall expected outcome is to integrate the results of this cooperative research into an end-to-end tool-chain that produces safe and efficient embedded real-time code with time- and space-predictable dynamic memory usage based on the short-term model. Both groups will jointly work on integrating their techniques and software developments, and also on the experimental evaluation of the resulting tool on several benchmarks in order to demonstrate its benefits.


Christoph Kirsch, PI
Ana Sokolova, Post-doc
Andreas Haas, PhD student
Michael Lippautz, PhD student
Martin Aigner, PhD student

Sergio Yovine, PI
Diego Garbervetsky, Researcher
Sam Hym, Researcher
Gervasio Pérez, PhD student
Diego Hara, undergraduate student
Gonzalo Winniczuk, undergraduate student