Bloom’s newest iPad app, Biologic, is now available as a free download from the App Store. Biologic shows your contacts from Twitter, Facebook and LinkedIn as a collection of biological cells filled with their most recent updates. CNET’s Daniel Terdiman has an interview with Ben about the thinking behind the release and our plans for future versions of the app.
We’re excited to share this with the world (and we look forward to trying it out on the new iPad next week!), and we have plans to release several new features in the coming weeks. An update featuring an improved UI for settings and item selection is already awaiting App Store approval, and we’d love to hear your suggestions for what comes next. You can talk to us on Twitter, on Facebook, or using the comments on this post.
We’re working on a new back-end to support our visualizations of personal data from services like Twitter, Facebook and LinkedIn. These services now routinely offer https, the secure form of http, as an option to protect users from eavesdropping on open networks. It took the high-profile release of a simple tool, Firesheep, to make users aware that they should switch it on, and many users still don’t know they need to. A few months later, https is “table stakes” for online services that handle personal data, and it makes sense to activate it by default in new products. For example, your server must support https if you want to use Foursquare’s new push APIs or display content on Facebook after October 1st. Other services will surely follow suit.
If, like me, you’ve never needed to activate https, you might not know what’s involved or what to look for. I’m still learning, but I thought I’d share what I’ve found so far.
To enable https you need to make sure your web server host supports it and you need an SSL certificate. SSL certificates are “just” text files of data that represent a chain of trust from a certificate authority through the certificate seller and down to you. The certificate authority’s credentials are installed in your browser to allow it to verify the certificates of websites you visit. Ideally there are identity checks at all stages of the certification process, though in practice it’s probably just an automated phone message and an email by the time it gets to you. Of course you need a credit card to pay for a certificate; those are table stakes too.
SSL on Heroku
We’re using Heroku for our hosting and they support a few different options for SSL in their add-ons section. If you’re using a *.heroku.com or *.herokuapp.com domain for your app you can just activate “piggy-back” SSL for free and stop reading. I’m told by support that this will be activated by default for new apps and that it definitely supports the new herokuapp.com Cedar stack, which wasn’t clear from the add-on page.
Otherwise, the differences between SSL options are fairly well explained on the SSL add-on page and from Heroku’s perspective it’s a matter of pushing you towards solutions that are most compatible with cloud hosting. Securing every subdomain is usually what you want, but in the cloud this means a dedicated IP address for your domain and you will be charged accordingly. We went with hostname SSL, which is a bit more convenient in the cloud but still requires a load balancer step, so Heroku currently charges $20/month for this option. Hostname SSL enables us to secure a single subdomain like secure.example.com.
Note: if you’re using Heroku their graphical tools are beautifully designed but not especially feature-complete, especially for SSL. It’s better to jump straight to the technical documentation and complete tasks on the command line. Their documentation for SSL support is clear and up-to-date.
Choosing a Provider
After asking around we received a couple of recommendations for an SSL certificate. Most of the useful articles you’ll find with Google talk about using GoDaddy, but we’re not big fans of their up-sell process or tools, so we wanted to try another seller. Gandi.net looked good but wouldn’t support our .io domain name, so we went with RapidSSL.
The main thing we were looking for was straight talk and focus, which is why we chose RapidSSL. Our DNS provider also sells SSL certificates, but their documentation seemed out of date and links to help documents were broken. But if you like one of your existing providers then of course check with them first.
Note: RapidSSL is an entry-level reseller of certificates from GeoTrust. If you’re not on a budget, or you want Extended Validation which will show your name in the address bar in modern browsers, then go directly to GeoTrust and pay a little bit more. We’re not conducting credit card transactions and our API traffic is largely behind the scenes, so we’re more interested in privacy and security than user experience at the moment.
Making a key and Certificate Signing Request
To acquire an SSL certificate you need to create a private key for your server and use it to create a Certificate Signing Request which you upload during the purchase process.
Heroku uses Nginx to serve your pages (no matter which language you’re using to serve responses, Nginx sees the requests first and forwards them to your app). RapidSSL didn’t have instructions for creating a key/CSR for Nginx but the instructions for Apache worked fine. I used the openssl command in my Mac terminal, as follows:
- Generate a new key with
openssl genrsa -des3 -out example.com.key 2048
- Generate a CSR with
openssl req -new -key example.com.key -out example.com.csr
– note that Common Name will be the name of the subdomain you’re securing.
- Copy/paste the CSR into RapidSSL’s form as part of the purchase process
- Complete the phone confirmation step. I also needed to activate a new forwarding email address on our domain to handle the admin email, because our .io domain couldn’t provide the relevant admin contacts automatically.
- Await your certificates by email. This wasn’t as rapid as we’d hoped; it took about an hour.
- A web server usually requires an unlocked key, so do
openssl rsa -in example.com.key -out example.com.unlocked.key
to create that. In hindsight, you could leave out the -des3 in step one and skip this step.
- Save the keys you receive by email. For filenames we used example.com.crt for ours and intermediate.crt for the certificate authority’s certificate.
- Combine your key and the intermediate key using
cat example.com.crt intermediate.crt > example.com.pem
(.pem and .crt seem to be used interchangeably, please let us know if this is incorrect).
- Upload the certificate and unlocked key to Heroku for your app:
heroku ssl:add example.com.pem example.com.unlocked.key --app example-app
- Activate Hostname SSL, and consent to being charged $20/month:
heroku addons:add ssl:hostname --app example-app
- Lastly, update the CNAME DNS record for your subdomain to point to the new hostname provided to you in an email from Heroku. This arrived as quickly as I could check my email.
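Once DNS has propagated, it’s worth sanity-checking the deployed certificate from the client side. The sketch below uses Python’s standard library to connect, verify the chain against the system CA store, and read back the certificate subject; `secure.example.com` is a placeholder for whichever subdomain you secured.

```python
import socket
import ssl

def fetch_certificate_subject(hostname, port=443):
    """Connect over TLS, verify the certificate chain against the
    system CA store, and return the peer certificate's subject fields."""
    context = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            # "subject" is a tuple of RDN tuples of (name, value) pairs
            return dict(field for rdn in cert["subject"] for field in rdn)

# Example (requires network access):
# print(fetch_certificate_subject("secure.example.com")["commonName"])
```

If the intermediate certificate is missing from your combined .pem, this verification will fail even though some browsers (which cache intermediates) may still show the site as secure, so it’s a useful catch.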
That’s it! Phewf! A simple 11-step process. There’s a full transcript of this on GitHub if you want to see what the responses look like, and once you have a certificate Heroku’s SSL docs are excellent. Good luck!
We’re proud to announce that version 2.0 of Planetary, our celestial music player for iPad, is now available for download in the iTunes App Store!
Planetary 2.0 features a full graphical update: new galaxy detail, solar flares, eclipses, atmospheric glow, accretion disks and much more fine detail at planet and moon level. Additional detail is available for iPad 2.
To address your most popular feature requests we’ve added in-app support for playlist selection and shuffle/repeat modes.
Other new features include optional automatic camera motion (great for long playlists!), gyroscope support for iPad 2 and scale and speed sliders to adjust the universe to your tastes.
Several small bugs with version one are also fixed and Planetary 2.0 has much better interactive performance, especially when flying between planets and when interacting with the playhead slider.
Many thanks for all the feedback and kind words we received about version 1 – keep it coming!
Beginnings and Endings
This second release of Planetary also marks the departure of our colleague Robert Hodgin. You might remember that we announced Robert had joined us as Creative Director earlier this year. With hindsight we realize it would have been more appropriate for him to join us as artist-in-residence for the duration of the Planetary project. In practice that’s what his role has been, and he’s made the choice to step away at this point.
Robert returns to pursuing his own projects and plans to increase his contributions to the Cinder library. We’ll continue to use Cinder in future Bloom projects too so we’re very excited about this and wish him well in all his future endeavors. Thanks Robert, it’s been a fantastic journey!
We hope to play host to more artists-in-residence in the future, without blending that process with a more traditional production role unless that’s the right thing to do. If you’re an ambitious digital artist interested in working with Bloom in this capacity, and helping us define it, please get in touch.
We’re also considering creating a more traditional Creative Director role within our team. If you’re interested in helping us define this role, please get in touch.
As with all our openings, we’re not interested in hearing from recruiters or agencies at this time.
During the last couple years, I have become addicted to science shows: particularly shows about astronomy. Everything about the subject fascinates me. Shortly after I joined Bloom, we started working on a new way to visualize and explore your iPad music collection. While coming up with concepts to explore, I stumbled upon the following image.
This photo, taken by the Cassini probe, was the original inspiration for the Planetary app. We decided to model the iPad music library as a solar system. Several solar systems, in fact. Each star in our universe will be an artist in your music library. Each of these stars will be orbited by planets representing albums. Each planet will have a moon system with each moon representing a track.
After establishing this initial premise, many more details fell into place. All of the artists in your collection would be a galaxy. Groupings of stars, either by filter or by playlist, would be constellations. The look of the planets would be derived from genre information mixed with album art. The speed of the orbit would be related to track length. So many options!
The first hurdle was to get some content onto the iPad. Happily, Ryan Alexander had already been working on an iPad music library. We ended up using Cinder for the job, partly because it’s the platform I’ve been working with for a couple of years, but also because Cinder plays quite well with the iPad.
I needed to get up to speed with doing recursive node structures so I coded up a project that would put a dot on the screen. When you tapped this dot, it would create a bunch of orbiting child-dots. These children could also be tapped, creating even more child nodes. This prototype took less than a day to create and I naively thought we would be done with the whole thing in a week, max. Silly me.
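The actual prototype was written in Cinder (C++), but the core of that tap-to-spawn structure is simple enough to sketch in a few lines. This is an illustrative reconstruction, not the original code; the child counts and sizing ratios are made up.

```python
import math

class Node:
    """A dot that spawns orbiting child dots when tapped."""
    def __init__(self, x, y, radius, depth=0):
        self.x, self.y = x, y
        self.radius = radius
        self.depth = depth
        self.children = []

    def tap(self, num_children=5):
        """Spawn children evenly spaced on an orbit around this node.
        Tapping an already-expanded node just returns its children."""
        if self.children:
            return self.children
        orbit = self.radius * 3.0
        for i in range(num_children):
            angle = 2.0 * math.pi * i / num_children
            child = Node(self.x + orbit * math.cos(angle),
                         self.y + orbit * math.sin(angle),
                         self.radius * 0.4,      # children shrink each level
                         self.depth + 1)
            self.children.append(child)
        return self.children

root = Node(0.0, 0.0, 50.0)
grandchildren = root.tap()[0].tap()   # tap the root, then its first child
```

The recursion is what makes the artist/album/track hierarchy fall out for free: a star, a planet and a moon are all just nodes at different depths.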
Two Dimensions is plenty of dimensions
A couple days later, we implemented the iPad music library code and had our first working music library visualization. It was top-down, two dimensional, the lighting was baked into the texture, it used placeholder 2D planet graphics, and the text layout was far from elegant. But it worked. And even in this initial stage, it started to show its potential.
Someone raised the point that they thought it could be 3D. I got nervous. I had already quietly considered this option, then abandoned it. In my experience, going from 2D to 3D makes the experience approximately 15000% harder. There are so many new things to worry about. You have to make things look good at many different scales. You have to create a robust camera model for moving through the space. You have to implement a 3D picker. But for me, the bit that was going to be the hardest was coming up with a solution for rendering the planets and moons.
With a top-down 2D simulation, you can load in a ton of pre-made 2D graphics and you never have to worry about what might happen if the user viewed the planet from a different angle. With the 2D version, you can easily use flat circular graphics to represent the planets. With 3D, you would have to shift to drawing spheres. You could use billboarded 2D graphics (these are images that always align their face towards the camera so you never see the 2D graphic edge-on) but you would then lose a lot of control over the lighting. And since we were going to show bright stars against a dark background, the lighting model would have to be highly tuned to keep from looking flat or dated. Plus, there is a ton of music out there. If we wanted to represent every single album ever made with a unique graphic, we could not rely solely on pre-made planet images.
The Era of Additive Blending
A quick and lazy solution to many of the problems associated with the switch from 2D to 3D is to simply make everything glow. I made the planets and moons glow as brightly as the stars. In fact, it would be pointless to even refer to them as planets and moons anymore. I enabled additive blending and everything became a star.
With additive blending, you can use flat images of glowing circles instead of spheres. You can render the content in whatever order you want. Best of all, the idea of a ‘planet surface’ goes completely away and you save yourself weeks of work. Everything is all aglow and mission accomplished.
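In OpenGL terms this is just enabling blending with an additive blend function (e.g. `glBlendFunc(GL_ONE, GL_ONE)`); the sketch below models the per-channel math of that mode, which is why draw order stops mattering — addition is commutative.

```python
def additive_blend(dst, src):
    """Additive blending as in glBlendFunc(GL_ONE, GL_ONE): the source
    and destination colors are summed per channel and clamped.
    Colors are (r, g, b) tuples in the range [0.0, 1.0]."""
    return tuple(min(d + s, 1.0) for d, s in zip(dst, src))

# Overlapping dim glows brighten toward white where they stack:
halo = (0.5, 0.25, 0.125)
additive_blend(halo, halo)   # → (1.0, 0.5, 0.25)
```

Because channels only ever get brighter, overdraw looks like accumulating light rather than occlusion, which is exactly the effect you want for a sky full of stars and exactly the wrong effect for an opaque planet surface.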
Enter friend and original beta-tester, Tom Coates.
Tom : “You can’t have stars orbiting other stars. That just doesn’t happen.”
Me : “But some star systems have multiple stars. They orbit each other.”
Tom: “Yes, but those are multi-star systems. They aren’t stars orbiting other stars orbiting still other stars. This is stupid. You guys are stupid.”
My inaccurate re-imagining of Tom Coates had a point.
Return of the spheres
There were many unknowns but the one that made me the most worried was whether or not the iPad could handle drawing so many spheres. We quickly swapped in a drawSphere() where it was previously drawing a 2D graphic for each track and planet.
Happily, the iPad handled it like a pro. Even with albums featuring 30 or more tracks, the frame rate stayed at a happy 60. More importantly, it started to look fantastic.
It was time to start thinking more about the look of the planets and moons. The stars were easy. At this point, they were still just 2D graphics. There was still no reason to render the stars as spheres; it would have been overkill. A star in space looks like a glowing circle no matter what angle you view it from. But the planets did not glow. They needed to reflect light from the star. They needed to have a dark side. They needed to have a rich surface texture. It would be great if they could have clouds. And they might each have many orbiting moons which all needed to have the same attention to detail.
And speaking of detail, we also needed to figure out a good system for controlling the level of detail based on distance from the camera. A planet that is really far away and has a screen radius of a couple pixels does not require a high resolution sphere. It would be a waste of computing power. Additionally, any planets with cloud layers would require us to draw two spheres, one for the planet surface and a slightly larger one for the cloud layer. But again, with planets that are really far away, the cloud layer didn’t need to be rendered at all because you just wouldn’t see it.
In order to speed things up, we created 4 different vertex arrays representing 4 different resolutions of sphere. This helped a great deal and even with many high resolution nearby planets, the frame rate kept humming along. We did run into some errors along the way. In my first attempt at coding a vertex array sphere from scratch, I messed something up and these fantastic abstract forms filled the sky.
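The selection logic itself is tiny: project the planet's radius into screen space, then bucket it into one of the four prebuilt mesh resolutions. The thresholds and segment counts below are illustrative, not Planetary's actual values.

```python
def sphere_lod(screen_radius_px, levels=(8, 16, 32, 64)):
    """Pick one of four sphere mesh resolutions (segments per ring)
    based on projected screen radius in pixels. Thresholds are
    illustrative stand-ins for tuned values."""
    if screen_radius_px < 4:
        return levels[0]      # a few pixels: coarsest mesh is invisible anyway
    if screen_radius_px < 16:
        return levels[1]
    if screen_radius_px < 64:
        return levels[2]
    return levels[3]          # filling the screen: highest resolution

def draw_cloud_layer(screen_radius_px, threshold_px=24):
    """The separate, slightly larger cloud sphere is only worth drawing
    once the planet is big enough on screen to show it."""
    return screen_radius_px >= threshold_px
```

The same screen-radius test gates the cloud sphere, so a distant planet costs one coarse mesh instead of two fine ones.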
Surfacing the planets
Now that we had our spheres, what to put on them? We considered just slapping on the album art but we knew that would just look odd. And what would we do for albums that have no album art? It was time to do a bit of pixel by pixel manipulation.
The creation of the planet surface is a 4 step process.
1) Grab a rectangular block of pixels from around the center of the album art. The size and position of the block you grab would be based on the number of tracks that album has combined with an integer representation of the album name. This way, the planet representing Stateless’s album ‘Matilda’ should look the same on my iPad as it does on yours. If the album art is missing, we skip right to step 3.
2) Make it a mirror image by doubling the image but horizontally reversing one side. This is a quick and easy way to deal with the seam line that would appear if you just used the non-mirrored version. The seams are still there but their impact is minimized by the mirroring.
3) To make the planet surface a bit more rugged and rough, we add in some additional texture directly on the image. This additional texture is derived from images provided by NASA. We use combinations of photos taken of the surface of Mercury, Venus, Earth, Mars and the cloud patterns from Jupiter. These extra details are burned into our album art graphic.
4) Finally, we add a cloud layer. As we haven’t found a fast enough way to do dynamic clouds (maybe on the iPad 3 or 4?) we ended up using prepared graphics of the earth’s cloud layer, some modified images from NASA’s Blue Marble venture, and some zoomed and cropped textures from our own Flickr images. This cloud layer is actually rendered as a separate, slightly larger sphere so it can rotate independently of the planet.
The moons are done similarly. The moon textures are even smaller crops from different parts of the original album art. So with the above example for the Matilda album, you might end up with moons that are solid red orbiting next to a moon that is purple with a blue stripe.
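Steps 1 and 2 above can be sketched as two small functions. The hash and jitter scheme here is an illustrative stand-in for whatever Planetary actually uses; the important property is only that it's deterministic, so ‘Matilda’ yields the same crop on every iPad.

```python
def crop_rect(album_name, num_tracks, art_w, art_h, block=64):
    """Deterministically choose a crop block near the center of the
    album art, seeded by the track count and a simple integer hash of
    the album name. Same album in, same rectangle out, on any device."""
    seed = sum(ord(c) for c in album_name) + 31 * num_tracks
    cx, cy = art_w // 2, art_h // 2
    # jitter the block within the middle half of the artwork
    x = cx - block // 2 + (seed % (art_w // 4)) - art_w // 8
    y = cy - block // 2 + ((seed // 7) % (art_h // 4)) - art_h // 8
    return x, y, block, block

def mirror_rows(rows):
    """Double each pixel row by appending its reverse, so the left and
    right edges match when the strip wraps around a sphere."""
    return [row + row[::-1] for row in rows]
```

The mirrored strip guarantees the wrap seam lines up pixel-for-pixel, which is why the seam's impact is minimized even though it technically still exists.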
After implementing this 4 step process, the planets still felt like they were missing something. They felt really small. Like marbles. And they also felt very lifeless. They had a barren quality. These planets needed an implication of complexity which would hint at the possibility of life. These planets needed atmosphere.
This was a little tricky. It isn’t as easy as rendering a slightly larger semi-transparent sphere around the planet. You would still end up with a hard edge instead of a fuzzy glow. The solution we chose uses a billboarded graphic of a fuzzy glowing ring and it is drawn at the same position and screen size as the planet. Not only does this give the appearance of an atmosphere, but it helps to mask the aliased edges of the sphere (anti-aliasing is not currently supported on the iPad). With this process, we were able to create a really lush variety of different looks for the planets and moons.
In order to keep the atmosphere graphic aligned with the silhouette of the planet sphere, we had to use spherical billboarding. With regular billboarding (as seen in the 1.0 release), as the planet moved to the edge of the screen it would begin to warp. The FOV of your camera model dictates just how much warping occurs. And once the sphere is warped, the billboard atmosphere stops aligning with the sphere itself and the illusion is broken. With the soon-to-be-updated version of Planetary, spherical billboarding corrects this misalignment by also warping the billboard texture.
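The difference between the two billboarding modes comes down to which direction the quad faces: screen-aligned billboards share one orientation for everything, while a spherical billboard is oriented per object, toward the camera position. A minimal sketch of the per-object basis computation (plain vector math, not Cinder's API):

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def spherical_billboard_axes(planet_pos, camera_pos, world_up=(0.0, 1.0, 0.0)):
    """Build right/up/look axes for a quad that faces the camera
    *position* (spherical billboarding). Because the quad faces the
    actual view ray to this planet, its silhouette stays aligned with
    the sphere even near the screen edges."""
    look = normalize(tuple(c - p for c, p in zip(camera_pos, planet_pos)))
    right = normalize(cross(world_up, look))
    up = cross(look, right)
    return right, up, look
```

A screen-aligned billboard would instead reuse the camera's own forward axis for every planet, which is exactly the shortcut that caused the 1.0 misalignment at the edges of a wide-FOV view.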
The iPad allows you to access most of the metadata that is available in the iPod app which gets its data when you sync to iTunes on the desktop. We use the playcount to control the size of the moon. The higher the playcount, the larger the radius of the moon. The track that is played the most on an album will have a cloud layer and a denser atmosphere. Additionally, the time it takes for a moon to make one full revolution is equal to the duration of the track.
The planets are similarly affected by the iPod library data. The distance at which a planet orbits around its star is related to the release date of the album and the size of the planet is based on the number of tracks on the album.
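Those metadata mappings are easy to express directly. The radius formula below is an illustrative guess (a log curve keeps a 1,000-play track from dwarfing everything); the orbital-period rule, one revolution per track duration, is straight from the text.

```python
import math

def moon_radius(play_count, base=1.0, scale=0.15):
    """Moon radius grows with play count. The log curve is an
    illustrative choice to keep heavily-played tracks in proportion."""
    return base + scale * math.log1p(play_count)

def orbit_angle(track_duration_s, elapsed_s):
    """Orbital angle in radians: one full revolution takes exactly the
    track's duration, per the mapping described above."""
    return 2.0 * math.pi * (elapsed_s % track_duration_s) / track_duration_s
```

So a four-minute track's moon is halfway around its orbit after two minutes, and an unplayed track's moon sits at the base radius.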
There has been some serious code optimization since Planetary was released. The nice thing about optimizing code is it frees up some CPU load to add more visual effects. An effect I was anxious to try coding was a realistic eclipse effect. I have never seen an eclipse in real life, but rest assured I will be ready when a total solar eclipse sweeps across the US from Oregon to Georgia in 2017. I researched the effect by watching different videos of an eclipse taking place.
I knew there needed to be a coronal flare. I knew the foreground planet should be in total darkness with a bloom of light around the edge. These were fairly easy to code up once you figure out the math for doing circle/circle area of intersect. The part I hadn’t anticipated being difficult was simulating a nice (faked) HDR light effect. This is a common effect in modern immersive first-person shooters. As you look at the sun or other super bright objects, the rest of the view needs to dim and adjust contrast to simulate what it would be like to look at something much much brighter than anything a computer monitor could represent.
This ended up taking a lot of trial and error. In the end, most of the effect is controlled by adjusting the size of the graphic representing the star’s outer glow. As an eclipse begins, this glow slowly increases. When the eclipse reaches near totality, the outer glow drops abruptly, as does the amount of light falling on the eclipsing planet. Together with some extra dust and smoke textures, a nice eclipse effect is achieved.
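The circle/circle area-of-intersection mentioned above is the standard lens-area formula. A sketch, with a hypothetical `eclipse_coverage` helper showing how it might drive the glow and dimming curves (the actual tuning in Planetary is hand-adjusted):

```python
import math

def circle_overlap_area(r1, r2, d):
    """Area of intersection of two circles with radii r1, r2 whose
    centers are distance d apart (the standard lens-area formula)."""
    if d >= r1 + r2:
        return 0.0                          # no overlap
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2   # one circle inside the other
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    # subtract the kite formed by the two centers and intersection points
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def eclipse_coverage(star_r, planet_r, d):
    """Fraction of the star's disk covered by the planet, in [0, 1].
    Illustrative helper: this fraction could drive the glow size and
    the scene dimming as totality approaches."""
    return circle_overlap_area(star_r, planet_r, d) / (math.pi * star_r ** 2)
```

Coverage rises smoothly from 0 to 1 as the planet's disk slides across the star's, which gives a natural parameter for both the swelling outer glow and the abrupt drop near totality.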
And as Tom continued to squeeze more FPS out of the app, I continued to refine this effect by adding shadows cast onto the accretion disk. I didn’t want to do actual cast shadows because it would be too heavy an implementation for the minimal effect I was looking for. So instead, I researched circle/circle tangents to figure out how to draw the eclipse if it were two circles instead of two spheres. Keeping in the world of 2D made the math pretty trivial. I then just draw a few textured triangles with very low opacity and you get nice but very subtle shadows, complete with distinct umbra and penumbra sections (as shown below in outlines).
We are working hard to put out an update to Planetary which will include a few of the most requested features as well as a revamping of the graphics. We are definitely excited about eventually switching to OpenGL ES 2.0 so that we can use GLSL shaders to handle many of the effects but with much more zeal and gumption. Stay tuned for more information about the release date and feature list.
Our first iPad app Planetary launched yesterday and can be downloaded for free from the App Store. We had a great day responding to feedback and requests from our first users and watching the waves of positive responses roll in on Twitter and elsewhere. We’re both humbled and proud to be getting all this attention, especially on such a busy news day.
CNET kicked things off for us yesterday with an IM interview with Ben, a good insight into some of the thinking that’s gone into the app. Gizmodo granted us a coveted App of the Day badge with a lovely video review. We’ve also received welcome mainstream coverage from the New York Times, Time and Wired, specialist reportage from TUAW, AppAdvice and Creative Applications as well as reports from our three favorite visualization blogs Infosthetics, Flowing Data and DataVisualisation.ch.
Our friends at Laughing Squid were among several blogs to feature our introductory video. It takes a village to produce a video and we couldn’t have done it without Scott Schiller, Tomas Apodaca, Owen Granich-Young and Yoz Grahame. We had great fun making it so here it is again for those who haven’t had a chance to catch it yet:
The video is also a great opportunity to showcase the music of Zoë Keating. Robert is lucky enough to have collaborated with Zoë in live performances such as last year’s In The Trees from Zer01 San Jose. If you like what you hear in our video then be sure to try the full album.