Whether you test your code or not, you’ve probably heard of generational analysis. In short, it’s an invaluable tool that allows you to detect whether you have a retain cycle or a leak — some nasty objects not being released when you expect them to be. The way you can test it is:

- run a logically complete flow several times;
- call -[LeaksInstrument measure] after each run;
- assert on -[LeaksInstrument hasLeaksInRepresentativeSession] being falsy.
What is a logically complete flow? That depends on what your app is doing. Say you’re building one more note-taking app: you have a list of notes, and tapping on a note takes you to the editing screen. In this case one of the possible flows would be: open the list of notes, tap a note to open the editing screen, then navigate back to the list.
Why would we want to run this several times? Well, depending on your app, the first time you might need to warm up some caches and such, while on the last run some objects might naturally still persist in memory — either due to autorelease scope or particular framework quirks. That’s why you want to take into account all the runs except the first one and the last one.
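The reasoning above can be sketched in code. Here is a minimal illustration of how the instrument could derive the "representative" subset, assuming it keeps an ordered `allSessions` array (the property names are taken from later in this post; the implementation itself is my sketch, not necessarily the instrument's):

```objc
// Sketch: keep every measured session except the first (cache warm-up)
// and the last (objects that may legitimately still be in scope).
- (NSArray *)representativeSessions {
    NSArray *all = self.allSessions;
    if (all.count <= 2) {
        return @[]; // not enough runs to draw any conclusion
    }
    return [all subarrayWithRange:NSMakeRange(1, all.count - 2)];
}
```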
As a good practice, you should fire up this tool once in a while and walk through your application, checking whether you have introduced any retain cycles. While very useful, this can be quite annoying. Once, an iOS engineer told me how he wrote a test checking for a retain cycle in a particular scenario. While we didn’t discuss any specifics, from what I understood there was a particular reference (let’s assume a view controller) that was not released. My assumption is that the test was simply checking for a particular weak variable to turn nil:
__weak id obj = ...; // this is what we expect to become nil if there are no retain cycles
[self _executeScenario1]; // our typical user-flow
XCTAssertNil(obj);
Some time ago I stumbled upon a brilliant library by Richard Heard called Objective Beagle. It is a great tool for debugging: it searches all allocated instances and finds those matching a specified class. I figured that this was just what I needed. After a slight refactoring, I had a running prototype; here is how to use it:
- (void)testLeakingExample {
    XCTestExpectation *leaksExpectation = [self expectationWithDescription:@"No leaks detected"];
    [self _runFlowNTimes:5 progressHandler:^{
        [self.instrument measure];
    } completionHandler:^{
        XCTAssertFalse(self.instrument.hasLeaksInRepresentativeSession, @"%@", self.instrument);
        [leaksExpectation fulfill];
    }];
    [self waitForExpectationsWithTimeout:10 handler:nil];
}
The flow runs asynchronously, hence the waitForExpectationsWithTimeout:handler: call. The instrument’s hasLeaksInRepresentativeSession returns YES if at least one leak was found. So all you need to do is implement your flow (KIF or any BDD library might come in handy), make sure you return to the starting point, measure the leaks after each run, and assert on leaks once you finish.
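For completeness, here is one possible shape of the `_runFlowNTimes:progressHandler:completionHandler:` driver used in the test above. The flow itself is app-specific (KIF steps, a BDD scenario, etc.), so it is reduced to a single `_executeScenario1` call here; everything besides the XCTest API is an assumption of mine:

```objc
// Sketch of a recursive driver: run the scenario, measure, repeat.
// A real UI flow would typically complete asynchronously; this sketch
// keeps the same block-based shape so it slots into the test above.
- (void)_runFlowNTimes:(NSUInteger)n
       progressHandler:(dispatch_block_t)progressHandler
     completionHandler:(dispatch_block_t)completionHandler {
    if (n == 0) {
        completionHandler();
        return;
    }
    [self _executeScenario1]; // navigate through the flow and back
    progressHandler();        // e.g. [self.instrument measure]
    [self _runFlowNTimes:n - 1
         progressHandler:progressHandler
       completionHandler:completionHandler];
}
```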
Why do I believe this is a great test to have? As time passes, you will add more features to your app or simply redesign the underlying architecture. But as long as the flow exists, this test will make sure that your refactoring did not introduce any leaks.
Now a bit more about the way it works internally. Every recorded session is diffed against its predecessors, so it contains only the newly added leaks. While you can access allSessions to get the list of leaks from all the measured sessions, most of the time you want to use representativeSessions instead. As mentioned above, it returns only the meaningful measurements, i.e. allSessions excluding the first and the last one.
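The diffing step described above could look roughly like this. The `LeaksSession` class and its `instances` set are hypothetical names of mine; the point is only that set subtraction against all previous sessions leaves just the newly surfaced objects:

```objc
// Sketch: diff a freshly captured set of instances against every
// previous session, so the new session records only new suspects.
NSMutableSet *newLeaks = [currentInstances mutableCopy];
for (LeaksSession *previous in self.allSessions) {
    [newLeaks minusSet:previous.instances];
}
// Whatever is left appeared for the first time during this run.
```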
Currently, leaks are stored as weakly referenced objects in an NSHashTable. I’m still experimenting with it, but the current approach is that the instrument will not extend the lifecycle of an object, whether it’s leaking or not. However, you might see hasLeaksInRepresentativeSession returning YES while enumerating the leaks in representativeSessions returns nothing.
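This behavior falls out of how zeroing-weak hash tables work. A minimal sketch (the variable names are mine):

```objc
// Sketch: storing suspects without extending their lifetime.
// Entries in a weak NSHashTable are zeroed out once the object is
// deallocated, which is why a leak can be *counted* at measure time
// yet be gone by the time you enumerate the session.
NSHashTable *leaks = [NSHashTable weakObjectsHashTable];
[leaks addObject:suspectedLeak];
// ... later, deallocated entries have silently disappeared:
for (id object in leaks) {
    NSLog(@"still alive: %@", object);
}
```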
The original implementation in Objective Beagle goes to great lengths to avoid using private or potentially unsafe classes1. In the current implementation, I decided to work around this problem by limiting classes to those coming from the [NSBundle mainBundle]. It is both an improvement and a limitation: e.g. the current implementation will ignore classes from shared frameworks.
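One way such a filter could be expressed — this is my sketch of the idea, not necessarily the instrument's actual code — is by asking Foundation which bundle defines a given class:

```objc
// Sketch: keep only classes defined in the app's own binary, skipping
// system and shared-framework classes entirely.
static BOOL XYZClassBelongsToMainBundle(Class cls) {
    return [NSBundle bundleForClass:cls] == [NSBundle mainBundle];
}
```

The trade-off mentioned above is visible here: any class living in an embedded shared framework fails this check and is never scanned.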
One of the biggest improvements possible would be to allow a more flexible measure
call. E.g. if I know that every run of my scenario produces X
objects of cache, I could specify something like
[self.instrument measureIgnoring:@{
    (id<NSCopying>)[XYZImageCache class]: [NSValue valueWithRange:NSMakeRange(0, 3)]
}];
where the passed dictionary maps classes to the range of instances I expect to persist.2 In this case, I expect from 0 to 3 instances of XYZImageCache
to survive each run.
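Inside the instrument, honoring such an ignore map could amount to a per-class bounds check. All names here (`countByClass`, `ignoreMap`) are hypothetical — this is only a sketch of how the proposed API might be consumed:

```objc
// Sketch: countByClass maps each class to the number of instances that
// survived the run; ignoreMap is the caller-provided dictionary of
// allowed ranges. Anything over its allowed maximum counts as a leak.
BOOL leaked = NO;
for (Class cls in countByClass) {
    NSUInteger survived = [countByClass[cls] unsignedIntegerValue];
    NSValue *allowedValue = ignoreMap[cls];
    NSRange allowed = allowedValue ? allowedValue.rangeValue : NSMakeRange(0, 0);
    if (survived > NSMaxRange(allowed)) {
        leaked = YES; // more instances survived than the caller allows
    }
}
```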