I’m working on an Android app at the moment, and for a bit of fun I decided to add a startup sound to brighten the day of every user who launches it. This gives me another opportunity to present some of the advanced language features in Oxygene that make threading such a breeze.
First of all, playing sounds on Android is simplicity itself, especially when the audio is provided as resources (files) within the application package.
The Android SDK prescribes the layout of certain folders within an application package. Most of these folders reside under a res folder (for “resource”). In the case of audio resources the resource folder used is “raw”.
Despite this name, the sound resources (files) placed in this folder must not be raw audio data, but one of the Android supported file formats (of which there are many). In my case I was using a Vorbis .ogg format file. This is quickly and easily added to the Oxygene solution in Visual Studio:
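For reference, the resource ends up in a layout along these lines (a sketch only – I’m assuming the file is named ping.ogg, to match the R.raw.ping identifier used below):

  res/
    raw/
      ping.ogg    // the Vorbis audio file, exposed to code as R.raw.ping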
Playing this audio is then simply a matter of initialising a MediaPlayer instance, preparing it with the required audio resource file, and playing it. As I said, I want this to be played when my application is launched, so I add this code to my MainActivity.onCreate() method:
var media := MediaPlayer.create(ApplicationContext, R.raw.ping);
media.start;
The audio resource is referenced via a bit of Android framework magic which exposes the res folders – and their contents – to the code as referencable identifiers. Oxygene picks these up and presents them in code completion lists, as you would expect:
And within the raw folder, the ping file (actually, in the context of the code, it’s an integer identifier that corresponds to – and identifies – the file resource itself):
The first line of my code declares an inline variable whose type (MediaPlayer) is inferred. To this is assigned the result of the create() factory method on the MediaPlayer class.
var media := MediaPlayer.create(ApplicationContext, R.raw.ping);
The factory method overload employed accepts a context and a resource file ID. This factory method both instantiates the MediaPlayer and prepares it with the specified file. All that remains is to actually initiate the playback of the file, with the call to start().
media.start();
Job done. But this implementation is dangerously simplistic.
First of all, it is strongly emphasised in the MediaPlayer documentation that any MediaPlayer instance should be release()‘d once no longer required as it may be holding on to scarce – and expensive – resources that other applications may need.
Also, this implementation performs the loading of the audio resource in my UI thread. For a small audio file such as this, any blocking of the thread that might arise is negligible, but I should take care of this in preparation for the day that I upgrade the audio resource to a 7.1 channel lossless HD format…
The playback of the audio is performed in a thread by the MediaPlayer, so my UI startup isn’t blocked by that. But this in itself will cause a problem as we will see in a moment.
But first things first. Let’s move all of this into a background thread. For this, I will employ an anonymous class, creating a sub-class of Thread and overriding the run() method:
var audioThread := new class Thread(
  run := method
  begin
    var media := MediaPlayer.create(ApplicationContext, R.raw.ping);
    media.start;
  end);
audioThread.start;
So, my media player code is moved to the body of the run() method of my anonymous thread class. Similar inline variable declaration and inferred typing provides me a reference to the new thread, which I start().
Great, now my audio initialisation and playback is all occurring in a background thread and my UI thread can continue to initialise and run my activity while my startup sound plays in the background. Superb!
But still there is the issue of release()‘ing the media player.
We cannot simply call release() immediately after calling start() on the media player, like this:
media.start;
media.release;   // NOT a good idea
Remember, the media player already uses a background thread for playback, so start() returns almost instantly and if we call release() at that point the media player will simply stop playing the audio. What we need is some mechanism that will notify us when playback is complete.
And what do you know? The MediaPlayer provides exactly such a mechanism, in the form of an OnCompletion callback: a reference to an OnCompletionListener interface implementation, assigned to the media player. At the risk of stating the obvious, the media player will call the onCompletion() method when playback is complete.
Fortunately, as well as anonymous classes, Oxygene allows us to declare inline, anonymous interface implementations, which we can exploit to install a listener for the completion callback. The syntax for an anonymous interface implementation is similar to that for an anonymous class (after all, an interface has to be implemented by a class). When implementing an interface in this way we declare an anonymous class, but rather than naming a super-class we instead identify the interface we are implementing.
In this case that listener interface type is declared as a member of the MediaPlayer class itself, so our anonymous inline class implementing this interface takes the initial form:
new class MediaPlayer.OnCompletionListener( ... );
The methods we then assign provide the implementation of the interface methods (as opposed to overriding inherited methods in the case of an anonymous sub-class). In this case there is just the one method to implement, onCompletion(), which is passed a reference to the media player that has completed playback. We will use this to release() that player:
var audioThread := new class Thread(
  run := method
  begin
    var media := MediaPlayer.create(ApplicationContext, R.raw.ping);
    media.OnCompletionListener := new class MediaPlayer.OnCompletionListener(
      onCompletion := method(aPlayer: MediaPlayer)
      begin
        aPlayer.release();
      end);
    media.start;
  end);
audioThread.start;
And that’s it.
As you can see, indentation can rapidly get out of hand when using this technique, but fortunately you can also see how this entire implementation could be wrapped up inside a neat convenience static method on some utility class for subsequent re-use:
class method AsyncAudio.play(aFileID: Integer);
begin
  var audioThread := new class Thread(
    run := method
    begin
      var media := MediaPlayer.create(ApplicationContext, aFileID);
      media.OnCompletionListener := new class MediaPlayer.OnCompletionListener(
        onCompletion := method(aPlayer: MediaPlayer)
        begin
          aPlayer.release();
        end);
      media.start;
    end);
  audioThread.start;
end;

// As an example of use, you could then do:
AsyncAudio.play(R.raw.ping);
And that’s it. Not bad for an hour’s tinkering. 🙂
Such a helper method could also hide the implementation specifics of Android and provide an equivalent mechanism on other platforms, thus providing a consistent cross-platform API for playing audio in the background on any of the platforms supported by the Elements compiler(s): Android, iOS or .NET.
class method AsyncAudio.play(aFileID: Integer);
begin
  {$if COOPER}
    // Android specifics ..
  {$elseif NOUGAT}
    // iOS (actually, Cocoa generally so further conditions may
    // be needed to discriminate between iOS (Cocoa Touch) and OS X (Cocoa, um, not Touch)) ..
  {$elseif ECHOES}
    // .NET / WinPhone / WinRT etc ..
  {$endif}
end;
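To make that a little more concrete, here is a sketch of how the COOPER branch might simply wrap the Android implementation from earlier in the post; the other branches are left as placeholders, since their specifics are beyond the scope of this post:

class method AsyncAudio.play(aFileID: Integer);
begin
  {$if COOPER}
    // Android: exactly the implementation shown above
    var audioThread := new class Thread(
      run := method
      begin
        var media := MediaPlayer.create(ApplicationContext, aFileID);
        media.OnCompletionListener := new class MediaPlayer.OnCompletionListener(
          onCompletion := method(aPlayer: MediaPlayer)
          begin
            aPlayer.release();
          end);
        media.start;
      end);
    audioThread.start;
  {$elseif NOUGAT}
    // iOS / Cocoa specifics ..
  {$elseif ECHOES}
    // .NET / WinPhone / WinRT etc ..
  {$endif}
end;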
Worth noting is that a class such as this, implemented in Oxygene (ObjectPascal), is directly usable in Hydrogene (RemObjects C#) and vice-versa (you can mix and match Oxygene and Hydrogene within a project, if you feel so inclined).
And if a Java developer starts casting covetous eyes at your convenience method, you could build them a jar and share it with them as well (though they only get to play with the Java version, obviously)!
But, such things are for another time. 🙂
Great post! I’m an Oxygene user too, and I find every day how easy it is to get samples for the platform and implement them in the right way.
So to play a sound you are creating two threads. If MediaPlayer already uses a background thread why create another? I have not done much in this area so I am trying to understand if there is a benefit to this.
MediaPlayer performs playback in a thread, but the initial loading and preparation of the media file is performed in the calling thread. In this specific case the file was small (< 20KB) so the impact was tiny, and sometimes appeared to have no impact at all. But occasionally it was noticeable (to my critical eye), albeit a minor "stutter" (a tiny fraction of a second) in the starting of the app, rather than a lengthy delay. Quite apart from implementing the best, general approach, in the context of the native Android UX I considered it worth addressing, especially given that it was easy to do so. Moving all of this to a background thread completely eliminates any start-up delay.
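For completeness, MediaPlayer can also do its preparation asynchronously itself, via prepareAsync() and an OnPreparedListener, which avoids the extra thread altogether. Here is a sketch of that alternative (not the approach used in the post; it assumes the code runs inside the Activity, where ApplicationContext and PackageName are available, that the resource can be addressed via an android.resource:// URI, and that Oxygene surfaces the listener setters as properties, just as it does for OnCompletionListener):

var media := new MediaPlayer();
media.setDataSource(ApplicationContext,
  android.net.Uri.parse('android.resource://' + PackageName + '/' + Integer.toString(R.raw.ping)));
media.OnPreparedListener := new class MediaPlayer.OnPreparedListener(
  onPrepared := method(aPlayer: MediaPlayer)
  begin
    aPlayer.start();  // called once the file is ready
  end);
media.OnCompletionListener := new class MediaPlayer.OnCompletionListener(
  onCompletion := method(aPlayer: MediaPlayer)
  begin
    aPlayer.release();
  end);
media.prepareAsync();  // returns immediately; preparation happens in the background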