Unit Testing Flurry Analytics With OCMock

A Flurry Use Case

Our company’s framework uses Flurry as our default analytics provider, yet we could use Google Analytics or any other for that matter. An interface is used to shield this choice, as well as to provide business-language methods within our framework, i.e.

@protocol AJMAnalyticsController <NSObject>
- (void)facebookActivatedWithId:(NSString *)facebookId andEmail:(NSString *)email;
- (void)viewPageNamed:(NSString *)pageName;
@end

A Flurry implementation of this protocol might be contained inside an implementing class AJMFlurryAnalyticsController.

Changing Analytics

When the analytics requirements change, or new ones are added, how can we be sure they have been implemented correctly? We could run the app, trigger the events and then check Flurry’s dashboard for those recorded events.

There are two problems with this. The first is the delay between the events being sent and becoming visible online, which can take up to 24 hours. The second is that it relies on a correct implementation of the analytics session, principally when a session ends.

Unit tests

Although the Flurry library exposes a singleton class, Flurry, as the mechanism for logging events, we can still test that events are logged using OCMock and a popular cheat called method swizzling.

Method swizzling involves swapping out the implementation of one method with that of another. I’ve created some helper methods to do so:

#import <objc/runtime.h>

+ (void)swapClassMethodImplementation:(SEL)methodSelector
                             forClass:(Class)fromClass
                            withClass:(Class)toClass {
    method_exchangeImplementations(class_getClassMethod(fromClass, methodSelector),
                                   class_getClassMethod(toClass, methodSelector));
}

+ (void)swapInstanceMethodImplementation:(SEL)methodSelector
                                forClass:(Class)fromClass
                               withClass:(Class)toClass {
    method_exchangeImplementations(class_getInstanceMethod(fromClass, methodSelector),
                                   class_getInstanceMethod(toClass, methodSelector));
}

We know that our Flurry events will be logged through one of the class methods defined in Flurry.h, namely one of:

+ (void)logEvent:(NSString *)eventName;
+ (void)logEvent:(NSString *)eventName withParameters:(NSDictionary *)parameters;
//... and more

The vast majority of Flurry calls are to [Flurry logEvent:withParameters:]. This is the method we will swizzle out and replace with a method that the testing class controls. But there is no reason why you can’t swap out more than one method at a time.

Below is the initial setup of the test class.

//  AJMTestFlurryController.m
//  Created by Alasdair on 31/08/2013.
//  Copyright (c) 2013 AJ McCall Ltd. All rights reserved.
#import <SenTestingKit/SenTestingKit.h>
#import "OCMock.h"
#import "Flurry.h"
#import "AJMAnalyticsController.h"
#import "AJMFlurryAnalyticsController.h"

static NSString *actualLogEventName;
static NSString *expectedLogEventName;
static NSDictionary *actualLogEventParameters;
static NSDictionary *expectedLogEventParameters;

@interface AJMTestFlurryController : SenTestCase
@property (nonatomic, strong) id<AJMAnalyticsController> analyticsController;
@end

@implementation AJMTestFlurryController

- (void)setUp {
    [super setUp];
    [self setupFlurrySwizzleMethods];
    self.analyticsController = [[AJMFlurryAnalyticsController alloc] init];
}

- (void)tearDown {
    self.analyticsController = nil;
    [self tearDownFlurrySwizzleMethods];
    [super tearDown];
}

- (void)setupFlurrySwizzleMethods {
    [AJMTestFlurryController swapClassMethodImplementation:@selector(logEvent:withParameters:)
                                                  forClass:[Flurry class]
                                                 withClass:[AJMTestFlurry class]];
}

- (void)tearDownFlurrySwizzleMethods {
    STAssertEqualObjects(actualLogEventName, expectedLogEventName, @"Expected log event name is different to actual log event name");
    STAssertEqualObjects(actualLogEventParameters, expectedLogEventParameters, @"Expected log event parameters are different to actual log event parameters");
    actualLogEventName = nil;
    expectedLogEventName = nil;
    actualLogEventParameters = nil;
    expectedLogEventParameters = nil;
    [AJMTestFlurryController swapClassMethodImplementation:@selector(logEvent:withParameters:)
                                                  forClass:[AJMTestFlurry class]
                                                 withClass:[Flurry class]];
}

Then, to test our analytics controller (AJMTestFlurry is a small helper class whose own logEvent:withParameters: implementation simply records its arguments into the static actual* variables above):

- (void)testFacebookActivated {
    expectedLogEventName = @"FL_FACEBOOK_STARTED";
    expectedLogEventParameters = @{@"facebookId": kFacebookID,
                                   @"email": kEmail};
    [self.analyticsController facebookActivatedWithId:kFacebookID andEmail:kEmail];
}

- (void)testViewPageNamed {
    expectedLogEventName = @"FL_PAGE_VIEWED";
    expectedLogEventParameters = @{@"page_viewed": kPageName};
    [self.analyticsController viewPageNamed:kPageName];
}

@end

We have now tested that any call from within our application to either facebookActivatedWithId:andEmail: or viewPageNamed: makes Flurry log the right event names and parameters in the correct format.

The meat of the work is actually done inside tearDownFlurrySwizzleMethods. Breaking it down:

  • First, the expected event name and actual event names are checked.
  • Second, the expected event parameters and actual event parameters are checked.
  • Remember, both the expected name and parameters are set in the test methods.
  • Lastly, some common unit-test clean-up: the static variables are nilled out for the next test and the Flurry method is swizzled back in.

Putting all the logic inside tearDown means we can write simpler, easier-to-read test methods.

This test class shows how our developers can use TDD when new events are added or event names/parameters change.


Picasa Uploader Stalls – A Command Line Solution

I’ve recently decided to move my precious, oh so very precious, pictures to the cloud. In the past 12 years I have kept my digital snaps safe and catalogued on my PC. Order was first achieved via an obsessive joy in downloading photos into correctly named folders. In the last 3 years I’ve been using Google’s Picasa to do this for me with an equal amount of joy. Safety of my photos, a far more important requirement, was achieved by first copying them onto DVDs, and then, when HD space got cheaper and DVD sizes stagnated, backing them up onto external storage devices. Now, since owning a Mac, I’ve been using Apple’s Time Machine to automate this process.

Although the external devices are kept separate from the laptop, deep inside my linen chest of drawers (it throws burglars off the scent!), the end result is that I now have almost 90G of memories dating back to 2001. I would simply curl up and cease to live if they were accidentally lost forever.

To The Cloud

So I decided to move my photos to the cloud, signing up for an extra 100G with Google. Google is nice enough to increase your GMail to 25G as well. Holy smokes! Back in 2005 when GMail was launched, I never thought I would need, let alone be given, a 1G inbox. God bless you Google. Since all my photos are organised via Picasa, I would use it to upload to Google Drive. And so onto the crux of the problem. Picasa, and especially the Picasa uploader*, is rather shit. At least on my Mac it is; I have no anecdotal evidence on PC to suggest otherwise. Don’t get me wrong, I love how Picasa organises my photos, its decent range of touch-up tools and its face recognition algorithm. But for uploading large numbers of photos to the web, the Picasa uploader stalls. Upon restarting, the upload kicks off again from where it left off, so at least that’s not too bad.

But I want to run Picasa overnight, when my co-workers’ Internet usage won’t be swamped by my 90G upload, and I don’t want to remote desktop in to check on the progress constantly. Basically I want to start, wait some time, stop and then restart Picasa. Over and over again, until Google has even more of me in its (non-evil) belly.

To The Command Line

If you got to this article looking for a solution to a problem similar to mine, and made it this far through the ramblings about my digital archive, then you’ll be pleased to know I’m almost finished. There might be a Perl script you could dash out, or some fancier solution, but why bother? Copy and paste this into your terminal and voilà! Picasa is restarted every 10 minutes.

while true
do
    sleep 600
    killall Picasa
    sleep 2
    open -a Picasa
done

Just remember not to close this terminal, as the restart loop will then terminate.

I hope this helps anyone else who is having similar issues. Let me know if you’re seeing this on PC too. *I’ve had problems with the Facebook uploader as well, but I don’t upload to Facebook except for to allow my mum to see a few pics.

Using Cocos2D with Box2D: A Tutorial to Blow Leaves Across a Screen

Recently I was asked to write a quick proof of concept of an interactive iOS magazine advert. The brief rundown was to have something autumnal and interactive. “Perhaps we could have some leaves that you could, kinda, ‘blow’ off the screen,” he said. Sounds gimmicky, but it seemed like the perfect project to learn the Cocos2D and Box2D libraries. I’ve documented the process below in case anyone is interested.


Firstly you’ll need to download and install Cocos2d for iOS development.

The Cocos2D Box2D Template

In Xcode create a new project from the Cocos2D templates ‘cocos2d v2.x’ -> ‘cocos2d iOS with Box2d’. This gives us a working, but pretty much the worst, iOS game imaginable. We’re not going to change anything here, but a quick look at the AppDelegate lets us note a few things. Firstly there’s this thing called a director, and it appears to be a view controller that is used to display scenes.

director_ = (CCDirectorIOS*) [CCDirector sharedDirector];
// and add the scene to the stack. The director will run it automatically when the view is displayed.
[director_ pushScene: [IntroLayer scene]]; 
// Create a Navigation Controller with the Director
navController_ = [[UINavigationController alloc] initWithRootViewController:director_];
navController_.navigationBarHidden = YES;

We can see that the director acts as a kind of navigation controller, while scenes are something analogous to ordinary view controllers. The IntroLayer scene simply shows Cocos2D’s splash screen and pushes another scene, aptly named HelloWorldLayer, into view. Also note that all our ‘.m’ files are in fact ‘.mm’, as Box2D is written in C++ rather than Objective-C. Remember this!

Hello Physics

The HelloWorldLayer files contain the logic for a demo which you can run beforehand. However, stripping it bare for our purposes, we should have a HelloWorldLayer.h looking like:

#import "cocos2d.h"
#import "Box2D.h"
#import "GLES-Render.h"

//Pixel to metres ratio. Box2D uses metres as the unit for measurement.
//This ratio defines how many pixels correspond to 1 Box2D "metre"
//Box2D is optimized for objects of 1x1 metre therefore it makes sense
//to define the ratio so that your most common object type is 1x1 metre.
#define PTM_RATIO 32

// HelloWorldLayer
@interface HelloWorldLayer : CCLayer {
    // our world object, wherein all other objects are created, added and destroyed
    b2World* world;        // strong ref
    // draws some very useful debug stats and shapes on top of our world
    GLESDebugDraw *m_debugDraw;        // strong ref
}
// returns a CCScene that contains the HelloWorldLayer as the only child, used when this layer
// is pushed onto the director's "view stack"
+(CCScene *) scene;
@end

And a HelloWorldLayer.mm

#import "HelloWorldLayer.h"
#import "AppDelegate.h"
#import "PhysicsSprite.h"

#pragma mark - HelloWorldLayer

@interface HelloWorldLayer()
-(void) initPhysics;
@end

@implementation HelloWorldLayer

+(CCScene *) scene{
	// 'scene' is an autorelease object.
	CCScene *scene = [CCScene node];
	// 'layer' is an autorelease object.
	HelloWorldLayer *layer = [HelloWorldLayer node];
	// add layer as a child to scene
	[scene addChild: layer];
	// return the scene
	return scene;
}

-(id) init{
	if( (self=[super init])) {
		// enable events, we will need those later
		self.isTouchEnabled = YES;
		self.isAccelerometerEnabled = YES;
		// init physics
		[self initPhysics];
		//IMPORTANT: Begins the main run loop which calls @selector(update:)
		[self scheduleUpdate];
	}
	return self;
}

-(void) dealloc{
	delete world;
	world = NULL;
	delete m_debugDraw;
	m_debugDraw = NULL;
	[super dealloc];
}

-(void) initPhysics{
	// gravity!
	b2Vec2 gravity;
	gravity.Set(0.0f, -10.0f);
	// NB: World created here
	world = new b2World(gravity);
	// Do we want to let bodies sleep? Allows us to optimise for objects that
	// could become inactive.
	world->SetAllowSleeping(true);
	// shows run stats in the lower left hand corner, should be
	// disabled when running in production
	m_debugDraw = new GLESDebugDraw( PTM_RATIO );
	world->SetDebugDraw(m_debugDraw);
	// debug flags used when drawing the debug view. see [self draw]
	uint32 flags = 0;
	flags += b2Draw::e_shapeBit;
	//		flags += b2Draw::e_jointBit;
	//		flags += b2Draw::e_aabbBit;
	//		flags += b2Draw::e_pairBit;
	//		flags += b2Draw::e_centerOfMassBit;
	m_debugDraw->SetFlags(flags);
}

-(void) draw{
	// This is only for debug purposes & it is recommended to disable it
	[super draw];
	ccGLEnableVertexAttribs(kCCVertexAttribFlag_Position);
	world->DrawDebugData();
}

//the main run loop
-(void) update: (ccTime) dt{
	//It is recommended that a fixed time step is used with Box2D for stability
	//of the simulation, however, we are using a variable time step here.
	//You need to make an informed choice, the following URL is useful
	int32 velocityIterations = 8;
	int32 positionIterations = 1;
	// Instruct the world to perform a single step of simulation. It is
	// generally best to keep the time step and iterations fixed.
	world->Step(dt, velocityIterations, positionIterations);
}

@end

There are some comments from the Cocos2D boys as well as some extra comments from myself to help explain a few sections of code.

Adding some leaves

So now on to the interesting stuff, adding some leaves. First download these lovely leaf images [leaf1, leaf2] and add them to your project. Then add the following method to your HelloWorldLayer.mm

-(void) addLeaf {
    NSString *leafFileName = [NSString stringWithFormat:@"leaf%dx100.png", 1 + arc4random() % 2];
    CGSize winSize = [[CCDirector sharedDirector] winSize];
    CGPoint pos = CGPointMake(arc4random() % (int) winSize.width, arc4random() % (int) winSize.height);
    //create a sprite
    PhysicsSprite *sprite = [PhysicsSprite spriteWithFile:leafFileName];
    [self addChild:sprite];
    CGSize imageSize = [UIImage imageNamed:leafFileName].size;
    // Define the sprite body.
    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position.Set(pos.x/PTM_RATIO, pos.y/PTM_RATIO);
    b2Body *body = world->CreateBody(&bodyDef);
    //create sprite shape
    CGFloat radius = imageSize.width < imageSize.height ? imageSize.width/2 : imageSize.height/2;
    b2CircleShape circle;
    circle.m_radius = radius/PTM_RATIO * (2.0f/3.0f);
    //create sprite fixture
    b2FixtureDef fixtureDef;
    fixtureDef.shape = &circle;
    fixtureDef.density = 0.5f;
    fixtureDef.friction = 0.4f;
    //add the fixture to the body
    body->CreateFixture(&fixtureDef);
    //rotate sprite randomly (SetTransform expects radians)
    body->SetTransform(body->GetPosition(), CC_DEGREES_TO_RADIANS(arc4random() % 360));
    //link sprite to physics body
    [sprite setPhysicsBody:body];
}

Breaking the above code down into its various sections, we first create a sprite. Although it’s more efficient to use sprite sheets (*see Monkey Jump), for my demo purposes I simply used UIImages.

Second we create the body object, an object containing the physical manifestation of a body in our 2D world. We do this by defining a body definition, a collection of values that define the body. First make sure to set the type as b2_dynamicBody, meaning the physics engine will interact with the body. Its position is randomly generated, and then the body is added to the world via the world->CreateBody method call. We’ll need the body later and hence hold a reference to it.

This body exists but has no shape or form yet. To give it some, we add b2FixtureDefs to the body, with each fixture giving the body some shape and physical properties such as density, friction etc. Following the code, we define a circle shape for our leaf. Again this isn’t accurate, but it’s fine for our demo purposes (*see Monkey Jump). Then we define a fixture as having the shape of a circle, a density of 0.5f and a friction of 0.4f. You may play around with these values for better results. We then add the fixture to our body with the call body->CreateFixture. Finally we rotate the leaf in a random direction, so they don’t all look the same.

Lastly and quite importantly, add the body to the sprite. Now we’ve coupled the image in Cocos2D space with the object in Box2D space.

Let’s run that with a few calls to [self addLeaf] inside your init method and hopefully a few falling leaves will appear on screen.

Moving Leaves

Next we want a top-down view of the leaves and the ability to “blow” them across the screen. To simulate a top-down view we need to remove gravity. In the initPhysics method we can change gravity to:

    // gravity! 
    b2Vec2 gravity;
    gravity.Set(0.0f, 0.0f);

Now, since our “floor” (screen) has no friction, when a force is applied to our leaves they will appear to move as if they were on ice. We can mimic floor resistance by setting the linear and angular damping (how quickly the respective velocities decrease) for each leaf, in the body definition section of addLeaf:

    //give the leaf some resistance (my damping values, play around with them)
    body->SetLinearDamping(1.0f);
    body->SetAngularDamping(1.0f);
    //link sprite to physics body
    [sprite setPhysicsBody:body];

Again this isn’t perfect but suitable for my demo purposes.

Now we need to detect whether a user has blown into the microphone. I used this excellent worked example as my guide. Please read it to understand the code below in full detail.

Add the following global variables to your interface:

#import <AVFoundation/AVFoundation.h>
@interface HelloWorldLayer() {
    AVAudioRecorder *recorder;
    NSTimer *levelTimer;
    double lowPassResults;
}
-(void)levelTimerCallback:(NSTimer *)timer;
@end

And add the following init method to your implementation; don’t forget to call it inside your main init method.

- (void)initAudioRecorder {
    // The primary function of AVAudioRecorder is, as the name implies, to record
    // audio. As a secondary function it provides audio-level information. So here
    // we discard the audio input by dumping it to the /dev/null bit bucket -- while
    // I can't find any documentation to support it, the consensus seems to be that
    // /dev/null will perform the same as on any Unix -- and explicitly turn on
    // audio metering.
    NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];
    NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                              [NSNumber numberWithFloat: 44100.0],                 AVSampleRateKey,
                              [NSNumber numberWithInt: kAudioFormatAppleLossless], AVFormatIDKey,
                              [NSNumber numberWithInt: 1],                         AVNumberOfChannelsKey,
                              [NSNumber numberWithInt: AVAudioQualityMax],         AVEncoderAudioQualityKey,
                              nil];
    NSError *error;
    recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];
    if (recorder) {
        [recorder prepareToRecord];
        recorder.meteringEnabled = YES;
        [recorder record];
        levelTimer = [NSTimer scheduledTimerWithTimeInterval: 0.03 target: self selector: @selector(levelTimerCallback:) userInfo: nil repeats: YES];
    } else {
        NSLog(@"%@", [error description]);
    }
}
- (void)levelTimerCallback:(NSTimer *)timer {
	[recorder updateMeters];
	const double ALPHA = 0.05;
	double peakPowerForChannel = pow(10, (0.05 * [recorder peakPowerForChannel:0]));
	lowPassResults = ALPHA * peakPowerForChannel + (1.0 - ALPHA) * lowPassResults;
	if (lowPassResults > 0.80) {
		[self blowDetected:lowPassResults];
	}
}

In a nutshell, we initialise the audio recorder, start a timer to periodically check the audio level, and then, using a low-pass filter, determine whether the user has blown hard enough into the mic. In the worked example the lowPassResults threshold was set to 0.95. I lowered it a little, as I didn’t want my users popping a vein. It is consequently more sensitive to background noise. However, this advert is supposed to be embedded in a reading app… SSSSSHHHHT, quiet please!

So on to blowing the leaves.

- (void)blowDetected:(float) level {
    CGSize winSize = [[CCDirector sharedDirector] winSize];
    CGFloat winMidY = winSize.height / 2;
    CGFloat winX = winSize.width;
    for(b2Body *b = world->GetBodyList(); b; b = b->GetNext()){
        // we don't want every leaf being blown every time, in an attempt at a bit of realism
        if(arc4random() % 10 < 3){
            //get body position
            b2Vec2 bPos = b->GetPosition();
            // convert into cocos2d co-ordinates
            bPos *= PTM_RATIO;
            // get the vector from the leaf to the microphone
            b2Vec2 vectorFromBlow = b2Vec2(winX - bPos.x, winMidY - bPos.y);
            // normalise the vector and multiply it with a force
            vectorFromBlow.Normalize();
            vectorFromBlow *= -10 * level;
            // add the impulse to the body
            b->ApplyLinearImpulse(vectorFromBlow, b->GetPosition());
        }
    }
}

I hope the comments on each line help explain what we’re doing here. Perhaps the only thing to flesh out is that we want the leaves to be blown away from the microphone. In order to do this, we create a vector between the microphone’s position ([winX, winMidY] on most devices) and the leaf’s position. We then normalise it, so that when we apply the force (a.k.a. a wolf-strength blow from the user), all leaves will seem to move away from the microphone at the same speed.

Remember that our damping values set in addLeaf will slow the leaf down after each blow. Run this and see what happens.

Finishing up

So it’s not perfect, but we have something that works for demonstration purposes. There are loads of extra improvements we could make, such as:

  • Use the lowPassResults threshold to make the leaves “shake” when it’s between two values lower than a full breath.
  • Experiment with the leaf body properties as well as the “blowing” force to make them seem more realistic.
  • Give the leaves proper definitions rather than circles.

You can find the source at github.

I made extensive use of the following online resources to make this demo.


I cannot recommend the Monkey Jump tutorial enough if you’re serious about making a 2D game. Also, the tools mentioned in it would be very helpful in creating sprite sheets and realistic body fixtures.

Xcode Snippets

Using Xcode snippets will save you time. For instance, I like to have #pragma mark documentation. Here’s my code snippet:

#pragma mark - <#Label#>

which adds a #pragma mark line whenever I type the letters ‘ppp’. Here’s another that automatically generates a property synthesize and lazy-loaded getter method when I type ‘syn’.*

@synthesize <#parameter#> = _<#parameter#>;

- (<#class#> *)<#parameter#> {
    if(!_<#parameter#>) {
        _<#parameter#> = [[<#class#> alloc] init];
    }
    return _<#parameter#>;
}

What next? I’d really like to simplify the predefined ‘init’ code snippet and remove the unnecessary extra first line in it. The code snippet could be redefined as:

- (id)init{
    if (self = [super init]) {
    }
    return self;
}

You can’t, however, simply edit the snippets that come installed with Xcode without a little hack. This isn’t in itself a bad thing. I find it’s the little hacks that help me better understand the system I’m working with.

Xcode snippets are stored in a plist file at


which may be edited (remember to back up first) to fit your needs. Editing XML, however, is tedious and error-prone. I prefer this lovely solution from Matt Wilding on Stack Overflow: a script that copies and redefines all the predefined Xcode snippets as user-defined ones, making them editable from within Xcode.

Now we have full control over snippets, and hopefully we can write or modify one or two to save ourselves a little more time and write a little less boilerplate code.

*Unfortunately, in Xcode 4 Apple removed the code snippet feature whereby two snippet placeholders (defined as <#pHolder#>) with the same name are updated at the same time. Why would Apple do this? Who the hell knows!

A Little About Me & Then Abelo

Ah sweet Lord, it’s incredibly difficult starting your own blog, especially when it’s supposed to be a tech blog showing off all that you know and do. I’m not sure what I know, nor what I’m going to be doing. All I know is that I’ve written the opening post several times before and never hit the ‘publish’ button. I suppose short and to the point would be the best approach in light of previous failed attempts.