Bloom Announces Seed Funding Round, Led by Betaworks

Playful visual discovery application platform secures seed funding from Betaworks, SV Angel, and individual investors

San Francisco, CA.  April 11, 2011

Bloom Studio Inc. today announced the closing of a seed round of funding led by Betaworks with participation from SV Angel. Additional investors include Stewart Butterfield, co-founder of Flickr. The terms of the financing were not disclosed.

“Betaworks is more than just an investor,” said Ben Cerveny, founder and president of Bloom Studio. “Their insights into the landscape at the intersection of culture and technology help us focus on the big opportunities.”

Bloom is rolling out a series of engaging and playful applications on iOS and web platforms that make social media and streaming media datasets easier to explore and understand. Its first applications will be available in the iOS App Store later this quarter.

“We are very excited about the vision of Bloom,” said John Borthwick, CEO of betaworks. “At betaworks we have built some very large real-time data sets, including bitly, chartbeat, and SocialFlow. Bloom’s vision of turning real-time social data into dynamic visual objects is a vital part of the future. These objects will let users understand, navigate, play with, and use datasets in whole new ways.”

Bloom has already begun to demonstrate the kind of fluid, playful discovery its applications will bring to everyday online activities on Twitter, Facebook, and Instagram. These preview experiences can be found on our website. Forthcoming “instruments”, as the small, playful applications are called, will focus on discovery within streaming audio and video services.

About Bloom Studio:

Bloom Studio was founded in July of 2010 by data science and visualization experts Ben Cerveny, Tom Carden, and Jesper Andersen. Ben has designed products and services ranging from mobile operating systems to massively multiplayer online games to amusement park experiences, and brings a 15-year industry perspective on dynamic visual application design. Tom Carden was previously a lead visualization designer at Stamen Design, a celebrated visualization studio. Jesper Andersen comes from Trulia, where he was Product Manager for Data and Econometrics. They are joined by Creative Director Robert Hodgin, who was a founder of the Barbarian Group and has created some of the most compelling generative artworks in the medium. His portfolio can be seen online.

Please visit our website for more information and follow us on Twitter at @databloom


Web 2.0 Expo Slides

Here are the slides and notes from my Web 2.0 Expo talk Data Visualization for Web Designers:

You can also download the PDF version. Both are exports from Keynote including extra notes and links that should cover most of the material from the session.

Upcoming Bloom appearances: Web 2.0 Expo SF and Geeky by Nature NYC

On Tuesday (March 29th) I’ll be speaking in San Francisco at Web 2.0 Expo. My talk is Data Visualization for Web Designers: You Already Know How to Do This, here’s the overview:

Today’s web developer is armed with a powerful suite of tools optimized for writing network-aware, data-driven, interactive graphical applications. Modern web browsers provide a powerful, flexible programming language (JavaScript), an expressive and elaborate styling system (CSS), and two robust, battle-tested document models (HTML and SVG). In the rare cases these aren’t enough, new technologies like WebGL and Canvas can fill the gaps, and old standbys like Flash haven’t gone anywhere. You know how to do this!

In this session we’ll:

  • take a look at the best examples of interactive, web-based data visualization and talk about how they work and what they achieve (and where they fail)
  • explore the tools, techniques and resources out there for today’s web developers and designers working with graphical presentations of data (e.g. Processing JS, Protovis, D3, Google Maps, etc.)
  • look to the future of data visualization online and what features new technologies like WebGL will offer that we haven’t seen before
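To give a flavor of the talk’s premise, that the plain browser stack is already enough for data visualization, here is a minimal sketch (not from the talk itself; all values and names are illustrative) that builds a bar chart as an SVG string with nothing but vanilla JavaScript:

```javascript
// Hedged sketch: a bar chart rendered as an SVG string using only plain
// JavaScript and SVG, the "you already know how to do this" toolkit.
function barChartSVG(values, width, height) {
  const max = Math.max(...values);
  const barW = width / values.length;
  const bars = values.map((v, i) => {
    const h = (v / max) * height;        // scale value to pixel height
    const x = i * barW;
    const y = height - h;                // SVG's y-axis grows downward
    return `<rect x="${x}" y="${y}" width="${barW - 2}" height="${h}" fill="steelblue"/>`;
  });
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
         bars.join('') + '</svg>';
}

const svg = barChartSVG([4, 8, 15, 16, 23, 42], 300, 120);
// In a browser, assign it to a container: document.body.innerHTML = svg;
```

Swapping the string-building for DOM calls, CSS transitions, or Canvas drawing is a small step from here, which is exactly the point of the session.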

On Friday (April 1st) Bloom’s Robert Hodgin is speaking in New York at Geeky by Nature. His talk is Practice Makes Perfect, So What Are You Practicing?, here’s the overview:

Six years ago, I created my first programmed magnetic repulsion effect. In the years that followed, I continued to fine tune and explore the phenomenon of electrostatic fields and gravitational forces.

I used the necessary equations for these invisible forces to create audio visualizations, natural simulations, and artistic interpretations. Everything fell into place, as things tend to do when you are dealing with specific formulas in controlled situations.

Recently, however, my fascination with these invisible forces has started to work its way into my day-to-day life. I can’t close my eyes to sleep without seeing charged particles spreading out in the blackness. I can’t walk down a crowded sidewalk without thinking about how repulsive forces lead to collision avoidance. I can’t look at a flowering tree without considering the complex mathematics and infinite iterations it would take to create such intense beauty and variety.

In my presentation, I will discuss these forces at greater length and show some implementations and unexpected uses.
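For readers curious what such a simulation boils down to, here is a hypothetical, minimal sketch (not Hodgin’s actual code; constants and names are assumptions) of an inverse-square repulsion step, the same falloff that governs electrostatic and gravitational forces:

```javascript
// Hypothetical sketch: one Euler step of pairwise inverse-square repulsion.
// Each particle pushes every other away with force ~ strength / distance^2.
function repulsionStep(particles, strength, dt) {
  for (const a of particles) {
    let fx = 0, fy = 0;
    for (const b of particles) {
      if (a === b) continue;
      const dx = a.x - b.x, dy = a.y - b.y;
      const d2 = dx * dx + dy * dy + 1e-6;   // epsilon avoids divide-by-zero
      const d = Math.sqrt(d2);
      const f = strength / d2;               // inverse-square magnitude
      fx += (dx / d) * f;                    // unit direction away from b
      fy += (dy / d) * f;
    }
    a.vx += fx * dt;                         // integrate force into velocity
    a.vy += fy * dt;
  }
  for (const p of particles) {               // then velocity into position
    p.x += p.vx * dt;
    p.y += p.vy * dt;
  }
}
```

Run it every animation frame over a few hundred particles and the “charged particles spreading out in the blackness” picture emerges on its own.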

If you’re at either event (or both!) please don’t hesitate to say hi and ask us more about Bloom!

And then there was Cartagram

We’re all obsessed with recording not just the hard facts of the cities we live in, but also the soft ambiance of our experience within them.  At least that’s the implication we see from the mass acceptance of geo-social tools and the content you, the user, create with these tools.  We’ve tried to examine these shared experiences and how they define location with Cartagram, a map of collective experiences through Instagram photos.

Screenshot of the map

As wonderful as these collected experiences are, though, we’ve been limited in the tools we can use to explore this data of personal experience.  Too often the data arrives in a one-dimensional stream designed to help us catch up with what our friends are up to, or as a snapshot of what’s happening precisely at that moment — but because they are so fragmented and linearly organized, none of them tell us much about the world as a whole. Even our favorite photo-sharing sites that support geo-coded photos — like Flickr and Instagram — are heavily biased towards a time-series view of the data instead of geographic or otherwise experiential, exploratory views.  Because of this, we’re forced to rely on memory if we want to understand the trends and significance of a collection of images.

Compare this to the tools available to view the hard facts of cities — crowd-sourced street and architectural information, and so forth — and you can begin to see the large gap between traditional visualization tools and personal, expressive data visualization tools. We are lucky here at Bloom Studio that Ben and Tom, two of our co-founders, have spent years refining the theory and practice of mapping cities and geography with hard facts.  As such, there’s a rich toolset for discussing and presenting data — and with Cartagram we’ve applied this technology stack to present you with the collective experience of Instagram users.

One of Bloom’s central theses is that experiential and personal data can be transformed into an expressive format using the same tools we’ve become expert in for traditional factual data.  So can we use visualization tools to provide new insight into an already rich experience?  In our current social and experiential toolkits, location is an element of context used to understand the photo.  What would happen if you inverted this relationship?  What would happen if you used the photo to provide context for a given location?  That’s the question we’ve tried to examine with Cartagram, which attempts to provide a glimpse into the collective experience of Instagram users.

We’ve initially created maps that present a collective view, focusing on what’s “interesting” within a given area.  Cartagram is actually a cartogram — it truly measures a variable over a geographical area.  In this case we’re using the notion of “interestingness” to define what characterizes an area, and using this variable we select which photos to show at a larger size than others.  We’re not restricting ourselves to a completely linear mapping from interestingness to size, so that we can present users with some larger, recognizable photos at any zoom level.
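One way to picture the non-linear interestingness-to-size idea described above: rank the photos visible in an area and promote the top few to a large, recognizable tier regardless of their absolute scores. This is only a hypothetical sketch of that behavior (the function and tier names are our assumptions, not Bloom’s actual code):

```javascript
// Hypothetical sketch: rank-based (non-linear) sizing. The top-ranked
// photos in view get a large, recognizable size at any zoom level;
// everything else shares a small size.
function assignSizes(photos, bigCount, bigSize, smallSize) {
  const ranked = [...photos].sort((a, b) => b.interestingness - a.interestingness);
  return ranked.map((photo, rank) => ({
    ...photo,
    size: rank < bigCount ? bigSize : smallSize,  // tiered, not linear
  }));
}

const sized = assignSizes(
  [{ id: 'a', interestingness: 0.9 },
   { id: 'b', interestingness: 0.2 },
   { id: 'c', interestingness: 0.7 }],
  1, 128, 48);
// 'a' lands in the large tier; 'b' and 'c' in the small tier
```

A purely linear scale would let one viral photo dwarf everything else; tiering by rank keeps every viewport visually balanced.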

This, we hope, gives you a glimpse into the value of Cartagram and of examining experience geographically in a broad way.  Over time we will expand this capability, allowing you not just to view all public data, but also to restrict it to your own geographical experiences and those of your friends (as defined by your social network participation), making it more personally relevant — your own social (or personal) map of what matters in the world.

On the technology side, Cartagram was written using ModestMaps.js for the tile mapping and SimpleGeo for the location services, and the labels are the Acetate labels from FortiusOne and Stamen.  We’ve extended this stack somewhat to support richer experiences than were available to us out of the box, but have tried to keep all of these extensions as general as possible.  Tile maps are certainly common experiences now, but we chose them because we’re trying to explore the possibilities available to data visualizers if they can simply swap one data source for another — would there be sweet spots of rich experience if we encouraged playing with the data sources?  The tile generation itself was bespoke, and something we’ll look into generalizing further over time as computation restrictions are relaxed.
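Under any tile-mapping stack like the one above sits the same standard Web Mercator “slippy map” arithmetic. As a point of reference (this is the common public formula, not Bloom’s code), here is how a latitude/longitude pair maps to the x/y index of the tile that contains it at a given zoom level:

```javascript
// Standard Web Mercator tile math: convert lon/lat and zoom into the
// x/y index of the 256px map tile containing that point.
function lonLatToTile(lon, lat, zoom) {
  const n = Math.pow(2, zoom);             // tiles per axis at this zoom
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y, zoom };
}

// San Francisco at zoom 12
const tile = lonLatToTile(-122.42, 37.77, 12);
```

Because every tile provider speaks this same addressing scheme, “swapping out the data source” is largely a matter of pointing the same x/y/zoom requests at a different tile server.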

Introducing Fizz

The response to our website and company launch on Monday has been great. We’re already hearing from people who are as excited about our vision for data expression as we are and we’re getting great feedback on our initial offerings, Fizz and Cartagram.

We’re also sensing a blend of curiosity and hope, especially from our friends at blogs like Infosthetics and Flowing Data. We’re working hard to fulfill that hope!

Our long-term plan is to build a product that offers many different visualizations that can be applied to a wide variety of data sources. We’re building the product one piece at a time, starting with Fizz.

Fizz shows recent updates from your network on Facebook or Twitter. Large circles are people, small circles are their updates. Typing in the search box highlights matching terms:

Fizz can connect to data from two places right now: Twitter and Facebook. Both of these are personalized to present recent updates from your own network of connections. We plan to add more data sources soon.
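The search behavior described above, typing highlights the circles whose text matches, can be pictured with a small sketch. This is a hypothetical illustration of the matching logic only (the function and field names are our assumptions, not Fizz’s actual code):

```javascript
// Hypothetical sketch: flag which circles match a search term.
// Large circles are people; small circles are their updates. A person's
// circle lights up if any of their updates contain the query.
function highlightMatches(people, query) {
  const q = query.toLowerCase();
  return people.map(person => {
    const updates = person.updates.map(text => ({
      text,
      highlighted: q.length > 0 && text.toLowerCase().includes(q),
    }));
    return {
      name: person.name,
      highlighted: updates.some(u => u.highlighted),
      updates,
    };
  });
}

const result = highlightMatches(
  [{ name: 'ada', updates: ['shipping maps today', 'coffee break'] },
   { name: 'tom', updates: ['reviewing tiles'] }],
  'maps');
// ada's circle lights up (one update matches); tom's does not
```

In the real visualization the renderer would then redraw highlighted circles each frame, but the data-side question is just this containment test.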

Designing Personal Data Visualizations

The personal nature of the data immediately presents an interesting design problem. How do we show you what Fizz is and does without knowing who you are and what data is relevant to you? We’ve introduced a wireframe mode to the visualization as one possible answer to this question.

Fizz is the first of many visualizations we’re building. It’s adapted from a fairly common chart, the bubble chart (well implemented by our friends at Many Eyes and offered in open-source libraries like Protovis), but we’ve made it more dynamic and playful. It’s a different way to look at textual information, like tweets, and as we develop it we’ll add extra layers of relationships and connections onto the foundation it provides. It’s also a nice stress test of the new features in browsers like Safari, Chrome, and Firefox, and we’re using the Processing JS library to handle the drawing and animation.

As we add new data sources to Fizz we don’t want to be tied to a lowest-common-denominator treatment of that data. For example, if we add LinkedIn as a data source it might be easy to limit Fizz to showing people and status updates as it does for Facebook and Twitter, but we might also want to represent people and companies instead. Ultimately, the Bloom platform will allow these choices to be made by anyone, but for now we’re exploring them one by one. That’s why we’ve begun simply with Fizz; it will gain in flexibility and expressiveness as we develop our tools.

If you have thoughts on features or inputs we should add to Fizz next then please let us know in the comments, on Twitter, or by filling out our feedback form on Google Docs.

In Bloom

Welcome to Bloom!  Our mission to bring you a new type of visual discovery experience is already underway. We’re building a series of bite-sized applications that bring the richness of game interactions and the design values of motion graphics to the depth and breadth of social network activity, locative tools, and streaming media services.  These new ‘visual instruments’ will help you explore your digital life more fluidly and see patterns and rhythms in the online services you care about. And they’re coming to a tablet, media console, or modern web browser near you!

Fizz on

We’re excited to invite you into our newly redesigned site, where we’ll be showcasing the first instances of the experiences we’re designing, starting with Fizz and Cartagram.  What is important to realize about these, as with all of our coming applications, is that they are the foundations of a constant flow of ongoing iterative development, much like video game franchises.  As a participant in the Bloom Network, you’ll be presented with an ever-changing, ever-increasing variety of views onto the world’s most popular web services like Facebook, Twitter, Gmail, YouTube, Netflix, Dropbox, Instagram, and so forth.  Some of these instruments will be lyrical, some playful, some analytic, many of them combinations of all three, but all will provide compelling and engaging handles on the information that matters to you most, each one evolving and improving over time, building on your understanding of its performance. Starting later this year, you’ll find these instruments on iOS and Android devices, like tablets, phones, and media consoles in the home.

In order to make these experiences possible, many of which will use the latest in 3D graphics, simulation, and data modeling frameworks, we’ve brought together quite a team.  I’m particularly excited to announce that we’ve most recently been joined by our new Creative Director, Robert Hodgin.  For those not familiar with his work, please take a moment to browse his portfolio and blog, and you’ll quickly see why we’re so enthused to welcome him to the team.

still from Solar Rework by Robert Hodgin

His dynamic visuals have backed musicians performing on tour like Peter Gabriel and Aphex Twin, and he designed the popular Magnetosphere music visualizer in Apple’s iTunes.  He is the foremost implementer of Cinder, a C++ graphics framework that underlies much of our work, and is engaged in constant investigations of emergent complexity in generative design, much like the rest of us here at Bloom.

We’re very excited about what lies ahead in the coming years.  The ways in which people interact with computation are changing swiftly as we move into more casual relationships with our digital services on tablets, big screens, and across social networks.  We believe we have some compelling answers about how digital experiences will evolve into these new contexts.  Please, follow along with us and explore these playful, dynamic instruments of discovery together.


This is the blog for Bloom. We’re just getting started. There are a few notes on our work and team on the site, but please check back here for more news soon. Thanks!

