
This blog is created and maintained by the technical team at Hook in an effort to preserve and share the insights and experience gained during the research and testing phases of our development process. Often, much of this information is lost or hidden once a project is completed. These articles aim to revisit, expand and/or review the concepts that seem worth exploring further. The site also serves as a platform for releasing tools developed internally to help streamline ad development.


Hook is a digital production company that develops interactive content for industry leading agencies and their brands. For more information visit


Google Hangouts || AS3 Bridge and Testbed

Posted on July 9th, 2012 by Jake

Das Intro
Oh the Google, so huge and so many new things going on at the same time. Recently we have had a bit of time to play around with one of Google’s newer experiences, the Google Hangout. If you don’t know what that is you can get a bit of a feel for it by watching their video here:

The long and the short of it is that you can create or join a hangout to video chat with up to 9 other people, making it a max of 10 people in the same hangout at the same time. This is also, of course, part of the Google+ platform, so you will need an account there to participate in all of the hangout goodness.

Before we get into it all, here are the goods:
(Source On GitHub):

Start a new Hangout session with the Testbed app:

Start a Hangout

Of course video chatting is neat and all, but it's not all that new of a concept, so Hangouts doesn't stop there. It also includes (at least for the moment) 4 other major features.

1) Hangouts On Air
- You can have lots of viewers of the hangout while still being limited to 10 participants.
- Anything sent through the video/audio stream will be recorded and posted on YouTube.

2) Screen Sharing
- Allows you to share the video from your desktop/window with the rest of the participants in the hangout. This is of course video out only, so it's not like others can control your mouse.

3) Hangout Extensions
- Haven't looked too much into these, but I believe you can add extensions that sit beside the hangout, much like the chat.

4) Hangout Apps Platform
- Developers can write their own apps that interact with the hangout API. This post will focus primarily on the Apps Platform.

The Hangout Platform affords us some pretty great opportunities really. If you think a bit past the obvious, what Google has really built is a pretty robust Multiplayer Lobby. It takes care of all of the authentication and notifications for you. You can invite people in and they can all talk over video, audio, or even text chat. It also provides us with ways of shipping generic data back and forth between participants, as well as maintaining a single persistent state object that is shared between all participants. So all of the network communication stuff is also handled for you, and honestly, that's a heck of a head start to building interesting multiplayer experiences. So, let's get started doing just that.

Introducing the Hangout Testbed and ActionScript bridge:

If you think that page had a lot of buttons, just wait for page 2:

The Testbed is a Hangout application that sits inside of the Hangout page.

The goals of this project are:
- provide a way to verify that specific features of the Hangout platform are running as we expect them to
- provide examples of how to use each feature in the Hangouts API
- provide a place to easily add ways of playing with any new features that will be added to the platform
- an excuse to develop a JavaScript/AS3 bridge that allows access to the API from Flash

So with that out of the way, we can start digging into all of this a bit more.

The Anatomy of a Hangout
Though the whole Hangout concept seems rather straightforward on the surface, there are quite a few technical details that need to be understood in order to make interesting things with it.

The first thing to note is that each time you start a hangout a new “session” is started. This session id (embedded in the url of your current hangout) is what allows others to join. So when you start a hangout and invite someone, that invite link has your session id in it. If it did not, all that would happen is the other person would start their own session, and the two of you wouldn’t be in the same Hangout at the same time.

Another bit of trivia about the hangout URL is that if you append &gid=my_app_id to the hangout link, it will automatically load your app when the user joins the hangout session. If that gid parameter is missing, the new participant will see the current video stream from the hangout instead of loading the app. If that user then clicks “apps” at the bottom, your app should be listed, assuming that someone else is using the app in the hangout at that time.

The app selection would look like this:
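The gid (and gd) URL handling described above can be sketched as a small helper. This is an illustrative sketch only; the function name and signature are ours, not part of the Hangouts SDK.

```javascript
// Sketch: append your app's gid parameter to a hangout invite link so the
// app auto-loads for joiners. The appId values below are placeholders.
function buildInviteUrl(hangoutUrl, appId, startData) {
  var url = hangoutUrl + (hangoutUrl.indexOf("?") === -1 ? "?" : "&");
  url += "gid=" + encodeURIComponent(appId);
  if (startData) {
    // Optional start data, surfaced later via the Get Start Data API
    url += "&gd=" + encodeURIComponent(startData);
  }
  return url;
}

var link = buildInviteUrl("https://plus.google.com/hangouts/_/abc123", "1234567890");
```

Anyone following `link` joins your session with the app loading automatically; without the gid, they land in the plain video view and must pick the app from the "apps" list.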

Let's talk about that video stream a bit, shall we? This stream is the key to getting stream data to all of the other participants in the hangout. Yes, I realize that sounds obvious, but it's not just passing webcam data around. Everything you see in those feeds is processed server-side and then redistributed back to all participants. So you don't even see your own webcam feed raw; the video data from the camera is sent to the server first.

This is an important distinction to keep in mind, because things like face-tracked overlays (like the pirate hat in the video from above) are not applied locally and then sent out. The face tracking itself is actually done server-side, and the resultant images are sent back to each client. So for instance, when you add a face-tracked overlay, the server composites the overlay on top of the video data from the camera before sending it back to the clients. If you watch closely, when the video feed's bit-rate drops and everything gets all blocky, you will see that the overlays get blocky as well, meaning they are being streamed back to the client. The same holds true for the static overlays. They are sent to the server, and composited server-side.

Another feature tightly connected with this thinking is the OnAir functionality. While the hangout itself is technically being recorded, the app is not. It's simply the streams being sent back to the clients that are recorded. Therefore if you have an application open, the only thing that will be recorded and posted to YouTube is the webcam feeds. But as stated before, overlays will end up in these recordings because they are part of the stream. The only way to record an app that is loaded in the center of the hangout is to use the Screen Sharing features and select the hangout browser tab to share. This is not ideal, but because the screen share gets pumped through the stream, it does get recorded and passed to all clients.

When you request a screen share session you pick a window to share:

Once shared it replaces your face in the webcam stream:

But what happens if you screenshare the shared screen? hmm……
A ton of awesome as it turns out :)

“Big fat hairy deal” you say (don’t try to fool me, I know you talk that way), “why do I care how it works, as long as it does?”. The main reason you care is that for right now the Hangout feature set and API are in flux. They are changing quite a lot at a fairly rapid pace. This means that the more you know how the communication systems and the plugin (did I mention you have to install the Google Talk plugin for Hangouts to work?) work, the better you will be able to predict how these changes will affect your application.

Take for example the “Avatar” system. You can set/get/enable/disable an avatar for any participant that is in the hangout. However, this avatar image is not sent through the webcam stream, and is client side only. This means none of the other clients will see the avatar when enabled.

The last thing I want to mention about these streams is the sound effects. At the moment there is a set of APIs that allow you to load/play/stop/loop a 16-bit PCM wav sound file. Currently when you load and play one of those sound files, you may notice that it doesn't sound as good as when played locally straight from the .wav file. Also, you will notice that the sound itself is only played for the local client. At first this may sound like they are simply giving you a way to play sound files in a cross-browser sort of way, by allowing you to offload sound effects to the GTalk plugin. For now, I would agree with that. However, the fact that the sound quality dips so much on playback has me thinking that the sound might be processed server-side as well. Meaning that maybe, just maybe, sometime in the near future, we will be able to play a sound through that stream so that all of the other participants will hear it as well. I don't have any facts (pff, who needs facts) to back this up, but if I were a betting man, I'd put money on it. But since I'm not a betting man, take it with a grain of salt :) . Also, it could just be sent to the server for echo cancellation, but where's the fun in that? With that, the A/V portion of this tour has concluded; next stop, generic data and messaging!

Test sound is LaserRocket2.wav, created by EcoDTR.

If you check out the package in the docs, you will see functions and events that deal with two different types of data communication. The first is the “State” data, and the other is the “Message” data. The “State” data was already implemented in the v1.0 release of the API, while the “Message” data came later in the v1.1 release. These are two very different systems.

The “State” data is a shared and persistent object that all enabled participants have access to for read and write. If you are familiar with the Flash Media Server's Remote Shared Object, it works pretty much the same as that. A client sends a change to the state, and some time later, all of the clients get an event saying that the state changed. Within that StateChangedEvent you have access to which keys were added or removed, as well as a full copy of the current state and state metadata. One quick note on that: it does take time to update the state, usually in the neighborhood of 1 to 2 full seconds from the time the state is changed to the time a client receives the notice. Also be aware that not all of the clients will get this notification at the same time. It will be close, but not exact.

This means that two clients can update the same data at nearly the same time. In the pre-v1.0 releases, if two updates happened close enough together, the clients would sometimes only get one notification, and the first update would simply be lost. This was fixed in v1.0, so now two separate events are fired, one for each update, and the client can then decide what to do with the data instead of just missing it. Google has set a limit on how often you can make changes to the State within a given period of time. This system is not designed to be used at high frequency, so sending a client's current mouse position to all other clients with this method is a no-go. Since the latency is so high anyway, it's not likely to be useful for that either.

However, this “State” object is persistent and guaranteed to go through to all clients. This means that when a new participant joins the Hangout, that client has the latest State. That is immensely useful for maintaining the more global setup and state of your application. In v1.0 this was the only option for sharing generic data between clients, but that changed in v1.1.
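A common pattern with the State system is keeping a local mirror of the shared object and applying each change event to it. The sketch below assumes an event shape with addedKeys (key/value entries) and removedKeys (key names), which is our reading of the StateChangedEvent described above; verify against the live docs before relying on it.

```javascript
// Sketch: maintain a local mirror of the shared State by applying deltas
// from each StateChangedEvent. The event field names are an assumption.
var localState = {};

function onStateChanged(event) {
  // Keys that were added or updated in this change
  event.addedKeys.forEach(function (entry) {
    localState[entry.key] = entry.value;
  });
  // Keys that were removed from the shared state
  event.removedKeys.forEach(function (key) {
    delete localState[key];
  });
}

onStateChanged({
  addedKeys: [{ key: "TestKey", value: "Awesome Test Value" }],
  removedKeys: []
});
```

In a real app you would register this callback with the SDK's state-changed event dispatcher rather than calling it by hand.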

The Messaging system that was implemented in v1.1 fills in the gaps present in the State system. Messages are meant to be used when you need to send data to all clients at a high frequency. So for instance, if you wanted to send mouse positions to all clients all the time, this is the way to go. You can send any string you want between clients, which means you can send objects with JSON; now isn't that handy. I'm sure there is some limit on how much data you can send per message, but we haven't explored those limits yet. If you do, please let us know what they are :) .
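Since the message channel carries plain strings, the JSON trick mentioned above is just a stringify/parse pair. In this sketch, sendFn stands in for the SDK's send-message call (an assumption for illustration), and the event's message property holds the raw string.

```javascript
// Sketch: ship objects over the string-only messaging channel via JSON.
function sendObject(sendFn, obj) {
  sendFn(JSON.stringify(obj));
}

function parseMessage(event) {
  // event.message is the raw string delivered to the other participants
  return JSON.parse(event.message);
}

// Usage with a capture function in place of the real send call:
var sentStrings = [];
sendObject(function (s) { sentStrings.push(s); }, { type: "move", x: 10, y: 20 });
var received = parseMessage({ message: sentStrings[0] });
```

The round trip preserves the object, so each client can dispatch on a field like `type` to route incoming messages.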

So we can send messages at a much higher frequency than with the State object, but how much faster can we send these messages exactly? That question is exactly what the Ping Tester portion of the Testbed is designed to answer. On our machines here (all on the same LAN, if that matters) we see a round-trip ping time in the neighborhood of 140ms. That means one client sends a ping to the group, and 140ms later that same client receives a response from another participant. That's pretty good, especially considering that there is some cost to processing the ping on the remote side and sending the pong response back to the group. While it's not something you can update every frame, it should be fast enough to send position and player control updates to the group and maintain decent lag numbers. Of course, all of this speed comes with a price. The message is of the non-persistent, single-shot variety, and it's not guaranteed to get to all clients every time. During our fairly light testing of the messaging we have yet to drop a message, but the docs state that it's possible. Consider yourself warned :) .
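The ping/pong bookkeeping behind those numbers can be sketched as below. This is illustrative only (the names and structure are ours, not the Testbed's actual implementation): track when each ping went out, and average per-participant round-trip times as pongs come back.

```javascript
// Sketch: measure round-trip times for pings sent over the message channel.
function PingTracker() {
  this.sent = {};    // pingId -> send timestamp (ms)
  this.times = {};   // participantId -> list of round-trip times (ms)
}

// Record the send time of an outgoing ping.
PingTracker.prototype.ping = function (pingId, now) {
  this.sent[pingId] = now;
};

// A pong came back from a participant; record and return the round trip.
PingTracker.prototype.pong = function (pingId, participantId, now) {
  var rtt = now - this.sent[pingId];
  (this.times[participantId] = this.times[participantId] || []).push(rtt);
  return rtt;
};

// Average round-trip time for one participant (0 if none recorded yet).
PingTracker.prototype.average = function (participantId) {
  var list = this.times[participantId] || [];
  var sum = list.reduce(function (a, b) { return a + b; }, 0);
  return list.length ? sum / list.length : 0;
};
```

Hooked up to the send/receive events, this yields the per-participant stats the Testbed's data grid shows in realtime.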

Deploying The Testbed App
I know, I know, enough with the theory, how about actually getting to DO something? Ok, let’s get the testbed app deployed so that you can play with it.

Edit the Testbed config files
- The first script tag loads the Google Hangout SDK for JavaScript. The important part is the “v=1.1” part. That is the version of the Hangout API you want to load, so change as needed.

- The next 5 script tags load the various bits of our Hangout Bridge; change the URLs to point to wherever you uploaded the files. If you do not have an SSL cert and are loading the files over straight HTTP, Chrome will warn about loading insecure data and force you to reload the hangout. When this happens, you will need to manually start your application from the button at the top.

- Change the appSettings.appID to match your new hangout app ID. You can get this ID from the API console by right-click → copy link address on the “Enter A Hangout!” link at the bottom of the page. Paste that someplace, and copy the part after “gid=”. That string of numbers is your AppID. Fill that in as the appSettings.appID variable in the settings.js file.

- Change the rest of the settings to match your deploy environment: protocol (http/https), base domain, and base path (starting with a “/”). The base path is the root folder where your app is hosted.

- Upload the files to a regular old web server at the URL you specified in the app.xml and settings.js file
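Pulled together, the settings described above might look like the fragment below. Only appSettings.appID is named explicitly in this post; the other property names and all of the values here are placeholders, so check them against the settings.js that ships with the Testbed.

```javascript
// Sketch of a filled-out settings.js; every value is a placeholder.
var appSettings = {
  appID: "1234567890",           // the number after "gid=" from the API console link
  protocol: "https://",          // http or https, to match your hosting
  baseDomain: "example.com",     // the server the Testbed files were uploaded to
  basePath: "/hangout-testbed"   // root folder of the app, starting with "/"
};
```

With real values in place, the URLs in app.xml and the script tags in the host page should all resolve against the same protocol/domain/path.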

- Setting up a new project in the Google API console
- go here:

- from the dropdown menu in the upper left of the page, choose “Create New Project”

- Give your project a name, and click “Create Project”

- This will create a new project for you, and drop you into the “Services” section.

- Scroll down until you see “Google+ Hangouts API”, and click the button to turn it on.

- Select “Hangouts” from the list in the upper left.

- Fill in the Application URL, which is the URL to your app.xml file.

- Make sure “a Main application” is selected for Application Type

- Fill out the name of your app in the Title section. (this shows up while loading)

- The rest is optional, so for now just click “Save” at the very bottom of the page.

- If all is well, you should be able to click “Enter A Hangout!” and start a new hangout with
your app loaded!

- Click on “Team” in the menu in the upper left hand corner and add the email addresses
of anyone you want to invite to use this app. These are only needed when the app is being used in the Developer Sandbox. Once you take the app public you won’t need to manually add people.

That’s it! You should be able to play around with the multitude of buttons now!

Playing Around With The Buttons
Now that all the boring (but no less important) stuff is out of the way, it’s time to click on stuff!


The app is broken down into two pages, and each page has a bunch of sections. The first page, which is available by default, has most of the newer API features in it.

Tracked Overlays Section
The tracked overlays section highlights all of the things you can do with the face tracked webcam feed overlays. These are things like the pirate hat from the video at the top of the post. Basically you tell the API to load an image from a url (or even a base64 encoded data url if you like) and tell it which facial feature you want to attach it to. For example, if you check the “Left Eye” checkbox a red circle with the letter “L” (bet you can’t guess what that stands for) in it will show up in the webcam feed over your left eye.


Yeah, yeah, I hear you; even if you didn't say it, you thought it :) This is simply because the webcam feed is mirrored on your client. All of the remote clients will see the feed in the “correct” orientation. Trust me, it feels really strange when you don't mirror the feed, so I think it's a good thing. However, this does add a bit of mindbendiness when positioning stuff, but we will get to that in a bit.

Just in case you are curious right now, this is part of the FaceTrackingOverlay classes in the av.effects package:

We will get into more specifics about how to implement the AS3 portion of the hangout bridge a little later, so for now, just click buttons and see what happens :)

Not surprisingly, checking the “Right Eye”, “Nose”, and “Mouth” checkboxes turns on more face tracking overlays. As noted before, these are just PNGs loaded from a URL.

The rest of the buttons in this section affect the Mouth overlay. So turn that on, and click the “Get Offset” button. This will report to you, in the log display at the bottom, the current offset of the mouth overlay. Initially this will be 0 for x and 0 for y, no matter where your face is located in the frame. These offsets are simply offsets from the root position given by the tracking data, so the only way these numbers change is if you change them through the API. So let's try that. Click the arrows on the value steppers in the right column under “Mouth Overlay Settings”. As you change those values, the mouth will be offset from the MOUTH_CENTER tracking feature. With those changed, now click “Get Offset” again, and it will report the offset you just set. The same goes for the “Get Rotation” and “Get Scale” buttons. These report an offset in relation to their base, not the final absolute number. You can play around with the scale and rotation values with the sliders in the right-hand column.

The other two checkboxes “Rotate With Face” and “Scale With Face” turn on and off the feature where the overlay gets bigger if you get closer to the camera, and rotates if you tilt your face. These can be turned on and off to suit the needs of your application.

Lastly there is the Tracking Feature. This is what you set on the overlay to tell it which part of the face to stick to. There are 13 unique spots on the face that you can assign to an overlay. To play with this feature, make sure the “Mouth” overlay is turned on, and then change the option in the combobox at the bottom of the second column labeled “Tracking Feature”.
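In code, attaching a tracked overlay boils down to loading an image resource and binding it to a tracking feature. The gapi object below is a minimal stand-in so the sketch runs outside a hangout; in a real app you would use the SDK's global gapi directly, and the method names here (createImageResource, createFaceTrackingOverlay, setVisible) are our reading of the effects API, not guaranteed signatures.

```javascript
// Minimal stand-in for the plugin environment, for illustration only.
var gapi = { hangout: { av: { effects: {
  FaceTrackingFeature: { MOUTH_CENTER: "MOUTH_CENTER" },
  createImageResource: function (url) {
    return {
      url: url,
      createFaceTrackingOverlay: function (params) {
        return {
          url: url, params: params, visible: false,
          setVisible: function (v) { this.visible = v; }
        };
      }
    };
  }
} } } };

// Sketch: load a PNG and stick it to the mouth, scaling/rotating with the face.
var effects = gapi.hangout.av.effects;
var resource = effects.createImageResource("https://example.com/mouth.png");
var overlay = resource.createFaceTrackingOverlay({
  trackingFeature: effects.FaceTrackingFeature.MOUTH_CENTER,
  scaleWithFace: true,
  rotateWithFace: true
});
overlay.setVisible(true);
```

Checking a box in the Testbed effectively toggles setVisible on an overlay built this way.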

Static Overlays
In contrast to the “Tracked Overlays” section, there is a “Static Overlays” section. These are overlays that you can inject into the stream and manually position anywhere you like. These would be good for times when you want to signify that a specific participant is part of a specific group, or maybe if you wanted to “raise your hand” in the group, or even something like the “My Time” clock app that sits in the corner showing your local time. The “My Time” clock is interesting because it makes use of the position, scale, and rotation features of the static overlays.

To demonstrate those features in the Testbed, select the “Gold Star” checkbox to show it in the stream. Now you can play with its position, scale and rotation with the value steppers and sliders. As before, clicking on any of the “Get” buttons below the sliders will report the returned object in the log display at the bottom of the app.

Basic Hangout Info
The section on the bottom left of the first page, displays some basic info about the current hangout session, and the local participant ID.

These text displays are hooked up to the related event dispatchers in SDK/API. The first line deals with an OnAir session. It first tells you if you are in a session that is an OnAir session, and then tells you if the session is currently being broadcast. Next, it checks if the current hangout is set for public viewing (selected when you start a hangout) and lastly what your participant ID is for this session.

The buttons for this section are fairly self explanatory. You can get the current state object, which prints out in the debug log at the bottom of the app. You can also copy the url for the hangout, which includes the appID, which allows you to send that link to others that you want to join the hangout. Lastly, you can Disable/Enable logging (both the JavaScript Logging and the internal app logging) as well as simply clear the log.

Data Testing
Moving over to the “Data Testing” section, we can play around with the Shared State and Messaging systems. First up is the submitDelta API method. This will add or update the value (text field on the right) for the key (text field on the left) in the Shared State Object. When you fill out the key/value pair and click Submit Delta, a request is made to the system to change the state. When the state is changed, an event is fired and caught by the bridge. You will see an object description of the State object printed out in the logging area at the bottom; to be more specific, the state MetaData is what's actually being displayed. If you have other clients logged in and running the Testbed app, they will also get notifications that the State has changed, and that object will be displayed in their log display as well. It is worth noting that in our tests the round trip time for requesting an update and getting the event is usually between 1 and 1.5 seconds. Also, this Shared State Object is persistent, so even latecomers to the hangout will get a proper copy when they join.

Next up is the new Messaging feature. This is designed to be a low-latency way of sending strings to all clients in the group. In contrast to the Shared State Object, this information is not persistent, and those that have not joined the hangout when the message is sent will never get the message. The payoff for this is message distribution at the speed of 10s and 100s of milliseconds, as opposed to the 1000s required for a state update. To test a single message, simply type a string into the “Message Data” text field and click “Send Message”. All other participants will get a MessageReceivedEvent from the API, and that object will be displayed in the logging section at the bottom of the app.

In order to test the actual latency of this system, we set up a “Message Ping Test” section. First you select how many pings you want to send, and then how long to wait between sends in milliseconds. The pings will go out to all participants, and then each participant will send back a pong. The Testbed will measure the time it took for that ping to come back from each client, and average the times calculated from each participant. A list in the data grid at the bottom will be generated, showing you the stats in realtime as they happen.

A quick thing to note is that the Testbed does a ton of logging, especially out to the JavaScript console, which puts more stress on the CPU when dealing with rapid pings and could throw off the time it takes to return the ping from the other participants. I suggest turning off the logging during the test to get the most accurate results.

Face Tracking Data
In my opinion, the really cool stuff is in the land of the floating green dots on the right. Yep that’s right, each dot represents a spot on the face that the API is giving you the location of, in (mostly) realtime. Plus, it doesn’t stop there, the API will also give you discrete pan, roll, and tilt values for the face! This means you don’t need to do that math yourself, as it gets processed and calculated for you by default *grin*.

When the tracking data changes (which is very often) a FaceTrackingDataChanged event is fired. When this event is fired, a FaceTrackingData object is passed to the callback. This object contains a list of tracked features with their corresponding x,y coords. It's worth noting that the x/y coords range from -1 to 1, where 0,0 is the center of the feed. This pattern is used for all offsets for the overlays as well. Additionally, rotation, pan, tilt, and roll are all specified in radians.

Lastly on this FaceTrackingData object, there is a hasFace property. This property is important, as it indicates whether or not the system has detected and is tracking a face. If this property is false, no other properties are set on the FaceTrackingData object for that change event.
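Mapping that -1..1 tracking space (0,0 at the feed center) into pixels is a one-liner worth writing down. In the sketch below, which vertical edge -1 maps to is an assumption (check against the live feed), and the noseTip property name on the data object is hypothetical, used only to show the hasFace guard described above.

```javascript
// Sketch: convert normalized tracking-space coords into pixel positions
// for a feed of the given size. (0,0) in tracking space is the feed center.
function trackingToPixels(point, feedWidth, feedHeight) {
  return {
    x: (point.x + 1) / 2 * feedWidth,
    y: (point.y + 1) / 2 * feedHeight
  };
}

// Guard on hasFace before trusting any of the tracking data, per the notes above.
function onFaceTrackingData(data, feedWidth, feedHeight) {
  if (!data.hasFace) return null;
  return trackingToPixels(data.noseTip, feedWidth, feedHeight);
}
```

The same conversion works for overlay offsets, since they use the identical -1..1 convention.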

Also displayed in this section is a list of the current volumes from the microphones of all of the participants in the hangout. The API returns a list of participants IDs and their current volumes.

What’s that you say? Not enough buttons? Well we have a cure for what ails you then :) Click the “More Buttons” button, and take a gander at page 2!

Moar Buttons!
Page 2 is all about getting access to the basic hangout functions and changing or reporting on their status. For instance, on this page you can get information about the participants in the hangout, control the mute states of the microphone and camera, and show and hide the app and its various ancillary panels, like chat. Clicking any of the buttons on this page that return data will cause the “Result” panel to be updated with a description of the returned object, with all of its properties and values.

Hangout Info
The “Hangout Info” section has all of the buttons needed to give you a good understanding of the state of the hangout and its various properties. The first two buttons are “Get Participants” and “Get Enabled Participants”. Clicking these will return a list of participant objects. The first gives you a list of participants in the hangout, whereas the second gives you a list of participants that have the app enabled.

The “Get Hangout Url” and “Get Hangout Id” buttons will return information about this particular hangout session. The first button will give you the link to this particular hangout session, and the second will return the hangout ID. If you send someone the hangout url, it will send them to your current session, making them part of your hangout. This however, will not automatically load the application for them. For that you need to append “&gid=MY_APP_ID” to the end of the link, obviously replacing MY_APP_ID with the ID of your hangout app.

For completeness, “Get Locale” is in there, and it simply returns the locale code for the local participant.

“Get Start Data” returns the data passed into the hangout from the “&gd=MY_DATA” parameter in the hangout url. So for instance:
If you used that URL to start the hangout, it would auto load the Hangout Testbed, and pass in “SweetTestData” as the start data. So when you click the “Get Start Data” button you would simply get “SweetTestData” as a string in return.

“Get State” and “Get State Metadata” return two different representations of the Shared State Object. “Get State” returns the simplest version of the data by just returning an object with each key/value pair set as properties of that object. So for instance the result pane would show something like this:

Object: [object Object]
	- TestKey1: Awesomer Test Data
	- TestKey: Awesome Test Value

The “Get State Metadata” button will return the full state object, including timing information for each key. So the same Shared State Object would look like this:

- Object: [object Object]
	- Object(TestKey1): [object Object]
		- timestamp: 1339784315409
		- timediff: 0
		- value: Awesomer Test Data
		- key: TestKey1
	- Object(TestKey): [object Object]
		- timestamp: 1339784135668
		- timediff: 0
		- value: Awesome Test Value
		- key: TestKey

Lastly in this section the “Hide App” and “Get Is App Visible” buttons do exactly what they say. You can hide the app, and determine if the app is hidden or not. When you hide the app, the current webcam stream will be displayed in large form in the center where the app used to be. To bring the app back, select “Apps” from the bottom of the hangout and choose the app you want to show again. If you look in the log display at the bottom of the app, you will see that the application was notified of these events, as well as many other events. So keep an eye on it to see what types of events are used to notify the application of changes.

Hangout AV
The “Hangout AV” section deals with all of the camera and microphone bits. The “Local” column deals with all of the things that affect the local client, and the “Remote” column has all of the things that you can do to affect the other participants.

The “Set/Get/Clear Avatar” buttons require that there is another participant in the hangout with you. These allow you to load an image over the webcam feed at the bottom of the hangout, but it does not ship that image up the stream to the other clients. These images are local only, but you can apply the image to any of the participants in the current hangout session.

The next two buttons, “Toggle Camera Mute” and “Get Camera Mute”, will turn the video feed on and off. So when you toggle the Camera Mute ON, your video feed will show up as a black box for everyone in the hangout. Toggle the mute OFF and the video feed returns. The “Get Camera Mute” button simply returns a boolean indicating whether the camera is muted or not.

The “Toggle Microphone Mute” and “Get Microphone Mute” buttons are very similar. The difference is that they only affect the microphone. This simply means that toggling this ON will block sound from your mic from getting to the rest of the hangout participants. However, the volumes of your microphone are still tracked locally, even when muted, but the other clients are not notified of your volume changes.

Just a quick note about the “Clear” buttons. This is a nice feature of the API as it allows you to return the Mic or Camera to the state that the user last set manually. So if the user mutes the local microphone, and you mess with it from the app, you can simply call the clearMicrophoneMute() function to return it back to the user selected Muted state.
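A toy model makes that clear-vs-toggle distinction concrete. This is an illustration of the semantics described above only, not the plugin's actual implementation: the app can override the mic mute, and clearing the override returns to whatever the user last chose manually.

```javascript
// Sketch: model the "clear returns to the user's last manual setting" semantics.
function MicState() {
  this.userMuted = false;  // last value the user set manually
  this.appMuted = null;    // app override, or null when cleared
}

MicState.prototype.userSetMute = function (muted) { this.userMuted = muted; };
MicState.prototype.appSetMute = function (muted) { this.appMuted = muted; };
MicState.prototype.clearMute = function () { this.appMuted = null; };

// The app override wins while present; otherwise the user's choice applies.
MicState.prototype.isMuted = function () {
  return this.appMuted !== null ? this.appMuted : this.userMuted;
};
```

So if the user mutes, the app unmutes, and then the app clears, the mic ends up muted again, exactly the behavior the clear buttons expose.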

The last three buttons in that column, “Get Has Speakers”, “Get Has Microphone”, and “Get Has Camera”, all return a simple boolean, either true or false. It's worth noting that when a participant joins a hangout, all of those may initially return false until they have fully joined the hangout. There are events to listen for when these change for participants. I would suggest listening for those so you always know the current state of those settings, rather than relying on the initial state.

In the “Remote” column, we can modify settings for the other participants in the hangout session as seen and heard by the local participant. So to be clear, these only affect the environment as experienced by the local participant.

The first two buttons, “Get/Set Participant Audio Levels”, adjust the volume of the sound from the other participants as heard by the local participant. Clicking Set will toggle the volume between 1 (the default) and 10 (the max). Any number below 1 will reduce the volume of the participant, and any number above 1 will raise it.

Next, the “Get Participant Volume” button will retrieve the current volume level of a remote participant as picked up by their microphone. This number runs from 0 to 5. If the remote participant is speaking (or other noise is going on) the number will be greater than 0. If the remote participant is muted, the number will be 0.

The next two buttons, “Toggle Participant Audible” and “Is Participant Audible”, will mute and unmute the remote participant as heard by the local participant. In other words, muting a participant in this fashion will still allow all other participants to hear that person while they are muted for the local client. The “Is Participant Audible” button will return a boolean set to true if the participant is not muted, and false if they are (locally only).

The “Request Participant Mute” button will pop open a notification on the local client asking for confirmation to mute a remote participant. If you click “Mute Now” in the notification, the remote client is physically muted.

The next two buttons “Toggle Participant Visible” and “Is Participant Visible” will turn off the video of a remote participant for the local client only. The state of visibility will be returned as a simple boolean when “Is Participant Visible” is clicked.

Lastly there is the “Get Volumes” button. This returns an object where the keys are the participant ids and the values are the current volume of that participant.
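Since the keys are participant ids, iterating the result is just a plain object loop. A quick sketch (the ids and volume numbers here are made up):

```javascript
// Hypothetical volumes result: participant id -> volume (0-5).
var volumes = { 'participant/abc123': 0, 'participant/def456': 3 };

// Collect the ids of everyone currently making noise (volume > 0).
var speaking = [];
for (var id in volumes) {
  if (volumes[id] > 0) speaking.push(id);
}
console.log(speaking); // → ['participant/def456']
```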

Hangout Layout
This section is fairly cut and dry. The buttons all do what they say, no exceptions or fancy funny business :)

“Display Notice” displays a native notice at the top of the hangout app. You will notice in the log that an event is caught every time a notice is raised or dismissed.

“Dismiss Notice” makes that previously set notice go away :) . Again an event is dispatched to the bridge.

“Get Has Notice” returns true if there is a notice up, and false if not.

“Toggle Chat Pane” shows and hides the chat pane on the left side.

“Get Is Chat Pane Visible” returns true if the chat pane is open and false if not, sneaky right?

Hangout Sound
The hangout plugin has the ability to load and play 16-bit PCM WAV files back to the local client. The one nice thing about playing a sound through the plugin is that it gets echo-cancelled, so if you are not using headphones and the sound leaks from your speakers back into your mic, it will stop there and not endlessly feed back through the speakers.

There are basic things you can do with a sound.
- Play it with the “Play Sound” button
- Stop it with the “Stop Sound” button
- Turn the sound looping on and off with the “Toggle Loop Button”
- Toggle the volume up and down with the “Toggle Volume Button”
- Get the current isLooped status with the “Get Is Looped Button”
- Get the current volume of the sound with the “Get Volume Button”

And with that we have finally covered all of the buttons in the Testbed. I’m sure there are more features to come from Google for the Hangout platform, and as they arrive, we will try our best to get more buttons crammed into this thing to show you how those features work, and to test the system when you think something might be borked. It’s also pretty handy for checking out the format of an object that is returned when making requests to the Hangout API.

Hangout AS3/JS Bridge
If you are anything like us, you are going to want to take advantage of all of the fancy Hangout features while simultaneously generating a stunning interactive experience with flash. The Hangouts API/SDK consists of a JavaScript library that allows communication with the Hangouts/GTalk browser plugin. Unfortunately there isn’t an AS3 library from Google that will let us use flash to talk to the Hangout platform, so we decided to make our own :) And here it is, in all of its first-version glory (Source On GitHub):

Start a new Hangout session with the Testbed app:

Start a Hangout

Our library/bridge was designed with a couple of layers of abstraction. Each layer adds convenience features that make the basic Hangout functionality more easily accessible, while still giving the developer as much raw access to the API as they want.

To get started, we need to understand a little bit about the Hangout boot-up process. When a new Hangout session is started, or a new user is trying to join a hangout already in progress, the first thing that happens is the Hangout endpoint website checks whether the plugin is installed. If it’s not, it redirects to a page with a download link. If it is installed, the plugin is loaded and the Hangout UI is displayed. In the center, the plugin asks the user to check their camera, mic, and hair, and gives them a button to click when they are ready to join. At this point, none of the other participants in the hangout are aware this person exists yet. When the new user clicks “join”, all of the other participants are notified that a new participant has joined. Be aware that this participant is just a Hangout participant, and does not yet have an app loaded. If the GID of an app is specified in the url, the application should autoload for that person. If not, they simply see the other participants’ webcam streams. At this point they can either select an app from the list to load, or the app will be auto-loaded, and this is where the fun starts.

The information about your application is set in the app.xml file that was uploaded during the Deployment section at the top of this post. It’s an XML file whose contents are parsed into an HTML page that is loaded into an iFrame in the center of the Hangout window. In our case, where we are trying to load a flash application, our setup is pretty standard, with the addition of our Bridge scripts and swfobject.js for embedding.

When our app is loaded, the app.xml file loads the scripts in this order:
1) settings.js
- simply holds an object with our start-up settings in it, such as paths to the various types of files that we will be loading later.

2) swfobject.js
- your friend and mine, the almighty swfobject, helps to embed swfs.

3) jsBridge.js
- this is our base bridge framework, with some simple convenience methods for talking to javascript from our swf.

4) hangoutBridge.js
- this is the heart of the beast. It contains functions that are specific to the Hangout API.
- calls from the swf are eventually processed here, sent to the API, and the results are shuffled back to the swf.

5) main.js
- news up the hangoutBridge
- does the actual swf embed
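For reference, a stripped-down app.xml following that load order might look something like this. Hangout apps use the OpenSocial gadget XML format, with the HTML content wrapped in a CDATA block; the script paths and title here are hypothetical placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Module>
  <ModulePrefs title="Hangout Testbed">
    <Require feature="rpc" />
    <Require feature="views" />
  </ModulePrefs>
  <Content type="html"><![CDATA[
    <!-- the Hangout API script itself, then our scripts in load order -->
    <script src="//"></script>
    <script src=""></script>
    <script src=""></script>
    <script src=""></script>
    <script src=""></script>
    <script src=""></script>
    <!-- swfobject replaces this div with the embedded swf -->
    <div id="flashContent"></div>
  ]]></Content>
</Module>
```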

Once those are loaded, our swf is finally embedded, and we can complete the boot process. If you take a gander at the class in the root of the src folder, specifically at the init() method, you will see where we start our handshake with the javascript bridges:

jsBridge = new JSBridge();
jsBridge.addEventListener(JSBridgeEvent.BRIDGE_READY, handleJSBridgeReady);
jsBridge.initJS(false, false);

We new up the AS3 half of the JSBridge, wait for it to be ready, and start the handshake. initJS() registers a callback with the page, to be called from JS when the bridge is ready, and then calls initFromSwf() on the javascript bridge. This tells the bridge that the swf is ready and that it should go find it in the dom. Once it does, it stores a reference to that swf and calls notifyJSReady() on it, which was the callback previously registered from initJS(). Once the JSBridge is up, we new up the AS3 side of the HangoutBridge and call init() on it while passing in a reference to the JSBridge. And those are the only required steps. The JSBridge is the absolute lowest and most raw portion of the system, and I wouldn’t suggest using it directly. The next level up is the HangoutBridge itself; it gives you clean access to the hangout API, but still requires knowledge of the Hangout API itself. However, this is where you would make calls to new features that haven’t yet been implemented in the HangoutManager, which is our next level of abstraction.
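If the handshake choreography is hard to follow in prose, here it is as a tiny simulation. This is a plain-JavaScript mock of the two halves, not the actual bridge source:

```javascript
// Simplified simulation of the swf <-> js handshake:
// the swf registers its ready-callback, pokes the JS bridge,
// the JS bridge "finds" the swf and calls back notifyJSReady().
function makeJsBridge() {
  var swfRef = null;
  return {
    initFromSwf: function (swf) { // called from AS3's initJS()
      swfRef = swf;               // store a reference to the swf
      swfRef.notifyJSReady();     // complete the handshake
    }
  };
}

function makeSwfSide(jsBridge) {
  var ready = false;
  var swf = {
    // the callback that initJS() registers with the page
    notifyJSReady: function () { ready = true; }
  };
  return {
    initJS: function () { jsBridge.initFromSwf(swf); },
    isReady: function () { return ready; }
  };
}

var js = makeJsBridge();
var as3 = makeSwfSide(js);
as3.initJS();
console.log(as3.isReady()); // → true
```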

When all parts of the bridge are up and running, we new up an instance of the HangoutManager class. This hides many of the dirty specifics of the Hangout API itself, while giving you all the modern conveniences like code completion :) . The HangoutManager will do a bunch of things for you automatically, such as keep track of the participants, watch for shared state changes, consolidate all of the events from the different packages in the Hangouts API, and keep relevant data handy and accessible, such as your local participantID. This basically gives you a single point of entry to deal with the Hangout, and a convenient place to listen for events. The supported events are enumerated in the HangoutManagerEvent class. There are a ton of things you can do straight from the HangoutManager, so to get started, I would start looking through that class. Nearly all of the current hangout API calls can be accessed conveniently through the HangoutManager.

If you take a peek into the app.ui.Page1UI class and the app.ui.Page2UI class, you will see examples of how to do just about anything you can do with the Hangout API.

For instance if we want to get the current Shared State Object metadata we can simply do something like this:

var stateMeta:Object = hangoutManager.getStateMetadata();

Easy as delicious Hangout pie… mmmm… pie….
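For reference, the metadata comes back as a map from each shared-state key to an entry holding the value plus timing info. The shape below follows the gapi.hangout.data docs; the sample key and values are made up:

```javascript
// Hypothetical shared-state metadata: key -> { key, value, timestamp, ... }
var stateMeta = {
  score: {
    key: 'score',
    value: '42',                // shared-state values are strings
    timestamp: 1341859200000    // when this key was last set
  }
};
console.log(stateMeta.score.value); // → '42'
```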

Let’s say we want to do something more involved, like loading up and applying a face-tracking overlay.

We first create the resource, which loads the image. Then we create the overlay itself, passing an optional settings object (for example, an offset of { x:0, y:0 }).

If you pop over to the docs for the API’s createFaceTrackingOverlay, you can see all of the options that you can pass into the optional settings object.

Once the overlay is set up, all that’s left to do is show it.


You can also pass in an optional settings object to change the settings of the overlay during the show call.

And lastly, hiding it is equally simple.


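Putting the whole create/show/hide flow in one place, here it is expressed against the underlying JavaScript API shape (gapi.hangout.av.effects, which our bridge wraps). The gapi object below is a tiny mock so the flow runs standalone, and the image URL and visibility methods are assumptions for illustration:

```javascript
// Mock of the gapi.hangout.av.effects package so the overlay
// flow can run outside a real hangout.
var gapi = { hangout: { av: { effects: {
  createImageResource: function (url) {
    return {
      url: url,
      createFaceTrackingOverlay: function (opts) {
        var visible = false;
        return {
          offset: opts.offset,
          setVisible: function (v) { visible = v; },
          isVisible: function () { return visible; }
        };
      }
    };
  }
} } } };

// 1) create the resource, which loads the image (URL hypothetical)
var resource = gapi.hangout.av.effects.createImageResource('');
// 2) create the overlay itself, with an optional settings object
var overlay = resource.createFaceTrackingOverlay({ offset: { x: 0, y: 0 } });
// 3) show it...
overlay.setVisible(true);
// 4) ...and later, hide it again
overlay.setVisible(false);
console.log(overlay.isVisible()); // → false
```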
Now let’s say you want a bit more control over how things are called. One more level down in the HangoutManager there is a group of methods that will call functions by name on a particular package in the Hangout API.

These all take a function name as a string as their first parameter, as well as a set of optional parameters. The bridge will locate that function name on the Hangouts API and call it, passing along whatever parameters you put in the optional parameter set. So for instance:

hangoutManager.callOnHangoutData("sendMessage", "sweet message to all clients");

That line will do the equivalent of making this call on the javascript API:"sweet message to all clients");

Basically, this helps to future proof the bridge a bit. This way when new features are added to the package, you can make a call like the one above in AS3 and it will just work with the Hangout API without having to make any changes to the bridge.
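Under the hood, that by-name dispatch is just a property lookup plus apply(). A simplified sketch, with a mock object standing in for the real data package:

```javascript
// Mock standing in for; records what was sent.
var hangoutData = {
  sent: [],
  sendMessage: function (msg) { this.sent.push(msg); }
};

// Call-by-name dispatch: look the function up on the target package
// and apply whatever params were passed along.
function callOnPackage(pkg, functionName, params) {
  if (typeof pkg[functionName] !== 'function') {
    throw new Error('Unknown function: ' + functionName);
  }
  return pkg[functionName].apply(pkg, params);
}

callOnPackage(hangoutData, 'sendMessage', ['sweet message to all clients']);
console.log(hangoutData.sent[0]); // → 'sweet message to all clients'
```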

Cool so far right? Great, let’s go a touch deeper.

If we leave the safety and convenience of the HangoutManager and head straight for the HangoutBridge, we get more low level access. For instance if Google adds a completely new package to the API we can use this lower level access to make use of that as well, without making changes to the bridge with the use of the callToHangout() method:

public function callToHangout($package:String, $functionName:String, ...$params):Object

For instance (*Spoiler Alert*):

hangoutBridge.callToHangout("", "openTwoWayPortal", currentLocation.address, theMoon.address);

That would call openTwoWayPortal() on the package, passing in two parameters, in this case the start location for the portal and the end location for the portal.

Dear Google,
Please implement the package.
I think it would be swell!

- Jake

So far all we have been doing is shouting at the API from the swf, demanding that it do things for us, and I think it’s time to listen a bit. Besides, all of this function calling is making me thirsty for a tall frosty glass of carbonated (dare I say bubbled?) events.

As with all things HangoutBridge, there are two ways to do this, the easy way, and the flexible way. Let’s start with the easy way.

Many if not all of the current Hangout API events are replicated as a single dispatch point in the HangoutManager. For a complete list, check out the HangoutManagerEvent class.

To listen for these, all we need to do is add an event listener like normal:

hangoutManager.addEventListener(HangoutManagerEvent.FACE_TRACKING_DATA_CHANGED, handleTrackingData);

aaaaannndddd that’s it; we just build our handler for it:

private function handleTrackingData(e:HangoutManagerEvent):void
{
	_view.hasFaceText.text = String(;
	if ( == true)
	{
		_view.panText.text = String(;
		_view.tiltText.text = String(;
		_view.rollText.text = String(;
		_leftEyeDot.x = ( * GLOBAL_X_SCALE) + GLOBAL_X_OFFSET;
		_leftEyeDot.y = ( * GLOBAL_Y_SCALE) + GLOBAL_Y_OFFSET;
	}
}

All of the events handled this way stick any data passed to the JS callback in the Hangout API into the variable, as you can see here:

_view.hasFaceText.text = String(;

Really that’s all there is to it. The HangoutManager takes care of doing all of the setup, and redispatching of the events from the API.

In the event (see what I did there?) of new features being added, or if you simply want more control over the events coming from the API, we can add and remove listeners manually through the HangoutBridge itself using:

public function addListenerToHangout($package:String, $eventName:String, $callbackName:String, $callback:Function, $passBackPropertyName:String = ""):void

public function removeListenerFromHangout($package:String, $eventName:String, $callbackName:String):void

This lets us do things like:

hangoutBridge.addListenerToHangout("","onAppVisible", "handleAppVisible", handleAppVisible);

Now anytime the hangout fires off an onAppVisible callback, our internal AS3 handleAppVisible function is called, with the event object passed in as a single parameter to the handler:

private function handleAppVisible($data:Object):void
{
	if ($data.isAppVisible == true)
		trace("Yeah, you know it, you loooove looking at this app!");
	else
		trace("Y U no like dis app?");
}

Now if you are asking yourself where the “isAppVisible” property came from, I say check the docs.

*Caveat Alert*
One caveat to this event system is that the $callbackName is global. Meaning, if you add two event listeners from two different classes that both use the $callbackName “handleAppVisible”, then when any event fires that triggers a JS call to the swf with “handleAppVisible”, only the last listener added will actually have its callback called. So just be aware that you need to create unique names for the $callbackName. The actual name of the callback function in AS3 can be the same, but the name given out to JS needs to be unique.
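Here is the caveat in miniature: a single global name-to-callback map means the second registration clobbers the first (simplified registry sketch, not the bridge’s actual code):

```javascript
// Global callbackName -> callback map, like the JS side keeps.
var callbacks = {};
function registerCallback(name, fn) { callbacks[name] = fn; } // last one wins
function fireCallback(name, data) { callbacks[name](data); }

var hits = [];
registerCallback('handleAppVisible', function () { hits.push('classA'); });
registerCallback('handleAppVisible', function () { hits.push('classB'); }); // clobbers classA
fireCallback('handleAppVisible', {});
console.log(hits); // → ['classB']
```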

The last thing you may have noticed is the final optional param in the addListenerToHangout() call, something called $passBackPropertyName. In all honesty, the chance that you would actually need this is fairly slim, but it could be helpful in a couple of situations. Instead of passing back the entire event object from the API, it passes the handler only the data associated with the property on the event object that matches this name.

For instance, if you add a listener for the “onVolumesChanged” event in the API, the callback will be passed an object with a property entitled volumes. Let’s say that for some reason you didn’t want to have to do in the handler, but instead just wanted the list of volumes to be This is when you would add the listener like this:

hangoutBridge.addListenerToHangout("","onVolumesChanged", "handleVolumesChanged", handleVolumesChanged, "volumes");

Now would just be the array of volumes. Sweet right? sure…

That’s all folks!
By now I’m certain that you are tired of hearing my voice in your head as you read this. And honestly, I’m tired of hearing me too, for now… Which means we are at the end of this post, which, let’s face it, is way too long. But if we are honest, it is fairly complete, I think… except for all the stuff I forgot, didn’t know about, or plainly left out. I’m sure you’ll fill me in on everything in the comments ;)

Have a good one, and make some Hangout Apps!!

Also feel free to check us out on G+:

4 Responses to “Google Hangouts || AS3 Bridge and Testbed”
  1. I’m trying to build a Google Hangout App as a game, but I’m unsure with the current Google Hangout API how to target a specific user. So for instance, if I want a specific participant to be the only person that can see a div in the app, how would I go about limiting based on a specific user ID?

  2. Rickey Olson says:

    Is it possible to intercept audio data using the google hangout api? I’m writing an app using g hangout for android and I would like to process the audio. To be precise, I want to denoise speech and use speech-to-text (e.g. google search, sphinx) to make basic voice commands.
