Android’s GC vs. ARC on iOS – which is best?
As everyone in programming circles is aware, the two leading mobile platforms of today, iOS and Android, both offer automatic memory management to make developers’ lives easier and to reduce the time it takes to develop software products of ever-increasing complexity.
Nevertheless, the respective memory management engines differ in nature: Android relies on a classical Java tracing garbage collector (hereafter GC), while iOS uses a simpler automatic reference counting approach (ARC).
What’s the difference between the two?
Put simply, a GC determines whether an object can be disposed of by tracing chains of references from certain “root” objects (per generation). If no chain reaches an object, it is marked “dead” and can be collected at any moment the GC decides it’s time to free up resources.
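The tracing idea can be sketched in a few lines of Java. This is a toy model of the “mark” phase only, under my own illustrative names (`HeapObject`, `mark`); real collectors like ART’s are generational, concurrent, and far more involved:

```java
import java.util.*;

public class MarkSketch {
    // Toy heap object: just a name plus outgoing references (the edges of the graph).
    static class HeapObject {
        final String name;
        final List<HeapObject> references = new ArrayList<>();
        HeapObject(String name) { this.name = name; }
    }

    // Walk the reference graph from the roots; everything visited is "live",
    // everything else is "dead" and eligible for collection.
    static Set<HeapObject> mark(List<HeapObject> roots) {
        Set<HeapObject> live = new HashSet<>();
        Deque<HeapObject> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            HeapObject obj = stack.pop();
            if (live.add(obj)) {
                stack.addAll(obj.references);
            }
        }
        return live;
    }

    public static void main(String[] args) {
        HeapObject root = new HeapObject("root");
        HeapObject a = new HeapObject("a");
        HeapObject b = new HeapObject("b");           // reachable only through a
        HeapObject orphan = new HeapObject("orphan"); // nothing points to it
        root.references.add(a);
        a.references.add(b);
        orphan.references.add(a); // pointing AT live objects doesn't make you live

        Set<HeapObject> live = mark(List.of(root));
        System.out.println(live.contains(b));      // true: root -> a -> b
        System.out.println(live.contains(orphan)); // false: dead, collectible
    }
}
```

Note the last detail: the `orphan` holds a reference to a live object, yet it is still dead, because liveness flows from the roots outward, not the other way around.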
When it comes to ARC, it’s rather simple: the runtime counts the references to an object held by other objects. As soon as an object’s reference count reaches zero it can be destroyed; in some ARC variants, however, it is instead added to a list of “unreferenced” objects that is processed periodically.
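The counting idea, too, fits in a short sketch. This is my own toy model in Java, not Apple’s ARC, which inserts the retain/release calls automatically at compile time:

```java
// Toy reference-counting object: retain() bumps the count, release() drops it,
// and the object is destroyed the instant the count hits zero.
public class RefCounted {
    private int refCount = 1;        // the creator holds the first reference
    private boolean destroyed = false;

    public void retain() {
        refCount++;
    }

    public void release() {
        if (--refCount == 0) {
            destroy();               // deterministic: happens right here, right now
        }
    }

    private void destroy() {
        destroyed = true;
    }

    public boolean isDestroyed() {
        return destroyed;
    }

    public static void main(String[] args) {
        RefCounted obj = new RefCounted(); // count = 1
        obj.retain();                      // count = 2, a second owner appears
        obj.release();                     // count = 1, still alive
        System.out.println(obj.isDestroyed()); // false
        obj.release();                     // count = 0, destroyed immediately
        System.out.println(obj.isDestroyed()); // true
    }
}
```

The contrast with the GC sketch is the point: there is no graph walk and no separate collection pass, just a counter that triggers destruction the moment the last owner lets go.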
Each approach has its advantages and downsides; my modest take on them follows.
Note that I’ve focused mainly on comparing the JVM-style GC used on Android to Apple’s ARC implementation on iOS; there are many other garbage collection implementations on the market that offer different feature sets and, consequently, different strengths and weaknesses.
Garbage Collector Advantages
1) Cleaning up entire generations of objects, thus potentially leaving a less fragmented heap in the end
2) Allocating an object once the GC has compacted and optimized the heap after a collection is a rather quick and inexpensive operation
3) Collecting cross-referenced objects (so-called “retain cycles”) is fully automatic and, contrary to ARC, does not cause memory leaks
ARC Advantages
1) Objects are destroyed in a fully deterministic way, as soon as they become unused (“unreferenced” would be the better term in the ARC world)
2) As object destruction happens in the foreground, ARC is generally more efficient on systems with limited resources such as mobile and embedded devices
3) Another direct consequence of the foreground mode is that object destruction happens continuously, so the dreaded “resource spikes” that often bring Android apps to a near halt while another object graph is taken care of are avoided
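The retain-cycle point from the GC list above is worth making concrete. Extending the toy counter with a strong reference field (again my own illustrative sketch, not ARC internals) shows the leak: two objects that retain each other never reach a count of zero, even after every external owner has let go, whereas a tracing GC starting from the roots would see neither and reclaim both.

```java
// Toy illustration of a retain cycle under reference counting:
// two objects that strongly reference each other keep both counts
// above zero forever, so neither is ever destroyed.
public class RetainCycle {
    static class Node {
        int refCount = 1;        // created with one external owner
        Node partner;            // a "strong" reference to another node
        boolean destroyed = false;

        void retain() {
            refCount++;
        }

        void release() {
            if (--refCount == 0) {
                destroyed = true;
                if (partner != null) {
                    partner.release(); // destruction drops our reference to the partner
                }
            }
        }
    }

    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.partner = b; b.retain();   // a strongly references b
        b.partner = a; a.retain();   // b strongly references a: the cycle

        // Both external owners let go...
        a.release();
        b.release();

        // ...but each node still holds the other, so both counts sit at 1.
        System.out.println(a.destroyed); // false: leaked
        System.out.println(b.destroyed); // false: leaked
    }
}
```

In real ARC code the programmer breaks such cycles by hand, e.g. by marking one side of the relationship `weak` in Swift so that it does not contribute to the count.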
Looking at the lists of pros and cons above, a natural question arises: what, in the end, is the best approach to object lifecycle management, and to memory management in general, on the mobile platforms?
As compelled as I feel to give a straightforward answer, it really depends on the circumstances and the platform itself.
That said, when it comes to technical excellence and performance on systems with relatively large memory resources, current research and practical evidence point to modern tracing GCs, such as the one used in .NET’s CLR, as superior.
However, GC often falls short with unmanaged programming languages, where moving existing objects around the heap is a complicated and costly operation; this is the case with iOS’s Objective-C and even with the much more modern Swift.
The latter fact, along with ARC’s lower memory requirements, is most likely behind Apple’s decision to drop GC support in OS X 10.8 and continue with ARC; in fact, their mobile platform, iOS, never supported GC in the first place.
As for Android, it inherited much of the existing Java stack from the desktop world, so GC was (and still is) the only option since day one.
Therefore, the choice of memory management system is rather multi-faceted, and the current state of things in the mobile world rests largely on historical reasons.