Hello Human Kindness – Rise & Smile App

  • Client: Dignity Health
  • Agency: Eleven Inc
  • Role: Developer

Hello Human Kindness
Nice article on Mashable

Update (05/2014):

Found the links, so they must have made their tweaks to the code and pushed it live.
Rise & Smile on iTunes
Rise & Smile on Google Play

Original Post:

A small part of a much larger project. I built an app for iOS and Android mobile devices where a user can wake up to random, 10-second-long, user-generated videos. The app lets you set alarms that wake you up with one of these videos, or record your own video and upload it to the main site for moderation, in the hope that it will be added to the pool of user-generated content used in the app.

For reasons that haven't been shared with me, I was asked to pack up the code and send it to the agency, who will push it live when they see fit. The app was pretty much complete when I did this, with only some minor UI changes left. No idea what will become of it.

Rise & Smile Image

Most of the development time on this app went into recording video. For budgetary reasons (and before any video recording functionality had been added), it was decided that the app would be built as an Adobe Air app. Unfortunately, by the time I joined the project to do the actual build and told the team there was no way to achieve the look and feel required for recording using Air, it was too late. There were other problems with the local notifications needed for the alarms, but I'm going to ignore those for now.

Adobe Air is great for cross-platform development, but one thing it does spectacularly badly is video, whether recording or playing it. Playback is better on iOS, though still nowhere near as good as doing things natively. The problem was that the designs all had video recording happening inside the app's custom skin, whereas Adobe Air only lets you call up and display the native camera UI, which is overlaid on top of your app, so we couldn't do any custom skinning.
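
For anyone who hasn't fought with it, the native route Air offers is roughly the minimal CameraUI sketch below: you launch the device's own camera app over the top of your content and get a finished file handed back, with no hook anywhere for custom chrome.

    import flash.media.CameraUI;
    import flash.media.MediaPromise;
    import flash.media.MediaType;
    import flash.events.MediaEvent;

    // CameraUI hands control to the device's built-in camera app, which is drawn
    // over the entire Air stage - there is nowhere to put a custom skin.
    if (CameraUI.isSupported)
    {
        var cameraUI:CameraUI = new CameraUI();
        cameraUI.addEventListener(MediaEvent.COMPLETE, onVideoCaptured);
        cameraUI.launch(MediaType.VIDEO); // the native UI appears; our app waits underneath
    }

    function onVideoCaptured(event:MediaEvent):void
    {
        // All we get back is a MediaPromise pointing at the recorded file - no say
        // over duration limits, and certainly no tap-and-hold segments.
        var promise:MediaPromise = event.data;
        trace("Recorded:", promise.file ? promise.file.nativePath : "in-memory data");
    }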

Rise & Smile Image

Popping the user out to the native camera was a horrible solution, because we needed a 10-second limit on recordings, and we wanted Instagram-like tap-and-hold recording of separate shorter segments that add up to the 10 seconds. I was determined to find a way to record natively, in app, with the app's custom UI and that functionality intact. I wish I hadn't been so pig-headed.
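
For what it's worth, the segment bookkeeping was never the hard part. Something along these lines (the record button and the capture start/stop hooks are made-up names, just to sketch the idea) covers the tap-and-hold, 10-second-cap behaviour:

    import flash.events.MouseEvent;
    import flash.utils.getTimer;

    const MAX_RECORD_MS:int = 10000;   // hard 10-second limit across all segments
    var recordedMs:int = 0;            // total duration of finished segments
    var segmentStart:int = -1;         // getTimer() value when the current hold began

    // recordButton is whatever display object acts as the record control.
    recordButton.addEventListener(MouseEvent.MOUSE_DOWN, startSegment);
    recordButton.addEventListener(MouseEvent.MOUSE_UP, endSegment);

    function startSegment(e:MouseEvent):void
    {
        if (recordedMs >= MAX_RECORD_MS) return;   // already full
        segmentStart = getTimer();
        // startCapturingFrames();  whichever recording path is in use
    }

    function endSegment(e:MouseEvent):void
    {
        if (segmentStart < 0) return;
        var total:int = recordedMs + (getTimer() - segmentStart);
        recordedMs = total > MAX_RECORD_MS ? MAX_RECORD_MS : total;
        segmentStart = -1;
        // stopCapturingFrames();  each press/release pair becomes one segment
    }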

Air gives you access to the Camera class, which lets you display the camera feed on screen in your app and stream it to a server, but not record it locally. The first hurdle was that the camera feed was rotated, and Adobe's own documentation flat out says not to use the camera in portrait mode in mobile apps, which is not only a stupid restriction but incredibly unhelpful:

A Camera instance captures video in landscape aspect ratio. On devices that can change the screen orientation, such as mobile phones, a Video object attached to the camera will only show upright video in a landscape-aspect orientation. Thus, mobile apps should use a landscape orientation when displaying video and should not auto-rotate.
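
Ignoring that advice, Air will still let you attach the feed to a Video object and counter-rotate it yourself. A minimal sketch of the idea; the offsets and the conditional mirroring are exactly the per-platform fiddling it turned out to be:

    import flash.media.Camera;
    import flash.media.Video;

    var camera:Camera = Camera.getCamera();   // default camera; picking the front one means walking Camera.names
    camera.setMode(320, 240, 24);             // capture is always landscape, whatever you ask for

    var video:Video = new Video(320, 240);
    video.attachCamera(camera);

    // Rotate the landscape feed 90 degrees so it stands upright in a portrait app,
    // then shift it back into view (rotation happens around the top-left corner).
    video.rotation = 90;
    video.x = 240;   // width and height are effectively swapped after rotating

    // The front camera arrives mirrored on iOS but not on Android, so one platform
    // also needs a scaleX = -1 flip plus a matching offset, left out of this sketch.

    addChild(video);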

So the plan was to manually flip, rotate and scale the camera feed until it looked correct on both iOS and Android (the front camera is mirrored on iOS but not on Android, sigh), then save the image data of each frame into memory, encode it in AS3 as an FLV video, and upload it to the server. I did a test of this on my iPhone 4: I created 5 seconds' worth of random 240×320 pixel frames at 24 fps and attempted to convert them to FLV on the device. After 10 minutes of waiting for it to finish, I gave up on that idea.
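
Capturing the frames themselves was straightforward enough: draw the on-screen Video into a BitmapData on every frame and keep the raw pixels in memory (this sketch assumes the video instance from the snippet above). The encoding step afterwards is what blew the time budget.

    import flash.display.BitmapData;
    import flash.events.Event;
    import flash.utils.ByteArray;

    const FRAME_W:int = 240;
    const FRAME_H:int = 320;
    var frames:Vector.<ByteArray> = new Vector.<ByteArray>();
    var capturing:Boolean = false;

    addEventListener(Event.ENTER_FRAME, captureFrame);

    function captureFrame(e:Event):void
    {
        if (!capturing) return;

        // Draw the rotated Video as it appears on screen into a portrait bitmap...
        var snapshot:BitmapData = new BitmapData(FRAME_W, FRAME_H, false, 0x000000);
        snapshot.draw(video, video.transform.matrix);

        // ...and keep the raw ARGB pixels. Each 240x320 frame is roughly 300 KB, so a
        // 10-second clip at 24 fps is around 70 MB in memory, hence the iPhone 4 misery.
        frames.push(snapshot.getPixels(snapshot.rect));
        snapshot.dispose();
    }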

We then created a native extension to do the conversion. A native extension is basically a block of native iOS or Android code that runs behind the AS3 code of your Adobe Air app to do something that your AS3 code can't; it's useful for accessing features of the device that Adobe hasn't got round to adding to Air yet. The catch is that we'd need to write one for iOS and one for Android, as each needs its own native code. The idea was that I would pass an array of frames from AS3 to the native extension along with the mic audio, and the extension would stitch the whole lot together and spit out a nice H.264 video for me. This worked relatively well on an iPhone 5/5s, but poorly on Android and the iPhone 4/4S: conversion was slow, audio was out of sync, and we could capture maybe 10-12 frames per second. On the iPhone 4 the app would crash if you tried to record a second video, due to memory use, even after the memory used in the last recording had been released. Displaying the camera in Adobe Air just carried too much overhead.
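
On the AS3 side, talking to an extension is just a series of ExtensionContext calls. Something along these lines, where the extension ID and the function names are invented for illustration:

    import flash.external.ExtensionContext;
    import flash.filesystem.File;
    import flash.utils.ByteArray;

    // Placeholder ID: the real ANE had separate Objective-C and Java implementations
    // sitting behind the same AS3 interface.
    var encoder:ExtensionContext =
        ExtensionContext.createExtensionContext("com.example.VideoEncoder", null);

    function encodeRecording(frames:Vector.<ByteArray>, micAudio:ByteArray,
                             frameWidth:int, frameHeight:int):void
    {
        var outputPath:String =
            File.applicationStorageDirectory.resolvePath("riseandsmile.mp4").nativePath;

        // Hand every captured frame across the AS3-to-native bridge...
        for (var i:int = 0; i < frames.length; i++)
        {
            encoder.call("addFrame", frames[i], frameWidth, frameHeight);
        }

        // ...then ask the native side to stitch frames and mic audio into an H.264 file.
        // Marshalling a few hundred frames across the bridge is a big part of why this
        // stayed slow and memory-hungry on older hardware.
        encoder.call("encode", micAudio, outputPath);
    }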

At this point I had had enough. I decided that if we wanted the results we'd expected, I would have to find a way around the restrictive Adobe Air sandbox and get the app's UI on top of the native camera. I thought about recreating the app's UI natively just for the camera parts of the app, but that would mean any change to the menu, fonts and so on would have to be recreated in Air, Android and iOS, not to mention that I had already built the custom camera UI and would have to do it again from scratch, twice.

I tried using a mixture of native code and Air code to insert the native camera between the layers of the Air app, but the camera always ended up either on top, covering the whole Air UI, or below it and completely hidden. In the end I came up with what I think, at least, was a genius solution: use native code to show the camera and do the recording, and show a picture of the custom Air app UI, captured in real time, on top of the native camera feed.

I had my UI, and I had code already written to send image data to the native extension, so I modified it to put the native camera on top of everything. I then repeatedly sent a screenshot of the Air UI to the native code, which pasted that screenshot on top of the camera and updated it as new screenshots came in, so it looked like you were still in the Air app, animations and all, when in fact you were completely in a native UI. The nice thing about doing this on iOS was that when the user touched the screen, the touch events passed through the native images of the UI and the camera and were caught automatically by the Air app I had originally coded underneath, without any extra code. Unfortunately that wasn't true on Android, but it was simply a case of catching the touch events in the native code and passing them to the Air app, which then used existing code to react accordingly.
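
The AS3 half of that trick boiled down to screenshotting the stage every frame and pushing the pixels across the bridge, where the native side painted them over its camera preview. A rough sketch, again with a made-up extension ID and method names:

    import flash.display.BitmapData;
    import flash.events.Event;
    import flash.external.ExtensionContext;
    import flash.utils.ByteArray;

    var overlay:ExtensionContext =
        ExtensionContext.createExtensionContext("com.example.CameraOverlay", null);

    // Native code puts the real camera preview (and the recording logic) above
    // everything; the Air stage keeps running underneath, out of sight.
    overlay.call("startNativeCamera");

    addEventListener(Event.ENTER_FRAME, pushUiSnapshot);

    function pushUiSnapshot(e:Event):void
    {
        // Screenshot the live Air UI - menus, animations, the lot...
        var shot:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight, true, 0x0);
        shot.draw(stage);

        // ...and ship the pixels across so the native view paints them on top of the
        // camera feed. On iOS, touches fell straight through to the Air app below; on
        // Android the native side had to catch them and forward them back.
        var pixels:ByteArray = shot.getPixels(shot.rect);
        overlay.call("updateOverlay", pixels, shot.width, shot.height);
        shot.dispose();
    }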

With that crowbarred solution in place, the performance problems, crashes and audio sync issues all disappeared. We had lightning-fast, Instagram-style recording, and you'd never know the insanity of how we achieved it.