Event Review: Big Data in the Big City
I am always honored when asked to blog about conferences for The NextWomen. When the Big Data Innovation Summit in Las Vegas was announced, I jumped at it, mainly because I understood so little about Big Data.
My training as a financial auditor and management consultant led me to believe that I can basically operate in any industry and size does not matter. We used to say that the difference between small and large is just additional zeros at the end of the numbers.
What is Big Data?
I did not have to look far for a definition. The editorial in the preview copy of Big Data available at the conference set out the following:
Big Data is data that exceeds the processing capability of conventional database systems. The data is too big, moves too fast, or doesn’t fit the structures of your database architectures. To gain value from this data, you must choose an alternative way to process it.1
Previously, we looked at our computer data as a replacement for paper and a time-saver. Now the volume of user activity and the associated data streams create petabytes (PB) of data to be analyzed. PB was a new term to me: it's 10^15 bytes, or 1 million gigabytes.
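For readers who, like me, had to look the prefixes up, here is a quick back-of-the-envelope sketch of the arithmetic (decimal SI units assumed, where each step up is a factor of 1,000):

```python
# Decimal (SI) storage prefixes: each step up is a factor of 1,000.
prefixes = ["byte", "kilobyte", "megabyte", "gigabyte",
            "terabyte", "petabyte", "exabyte", "zettabyte"]
for power, name in enumerate(prefixes):
    print(f"1 {name} = 10^{power * 3} bytes")

# Sanity check: a petabyte really is a million gigabytes.
petabyte = 10 ** 15
gigabyte = 10 ** 9
print(petabyte // gigabyte)  # prints 1000000
```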
A few presenters mentioned that companies continue to juggle many different technologies to handle big data, a problem that still needs solving.
The wow factor of sheer database volume was evident in Andy Edmonds' eBay presentation: 200 million items in inventory! As we approach zettabytes (10^21 bytes) of data, this may be an opportunity for entrepreneurial data gurus.
On the other hand, health care and big data is a bit of a contradiction, because bigger is not necessarily better. Dr. Yan Chow of Kaiser Permanente discussed the importance of the proper context for the plethora of patient data. With the context stripped away, the data may become meaningless. On a positive note, Google was better at predicting the recent outbreaks of flu than health authorities in the US.
In the manufacturing sector, the new term is the Industrial Internet, as opposed to the social Internet. Over lunch, a General Electric executive explained the usefulness of the data and analytics gathered on industrial products such as turbines for their customers.
To me, Big Data is an intersection of psychology and data.
Choices, patterns, and behaviors are tracked and analyzed. For example, in the case of Barnes & Noble, data was used to measure customer loyalty.
We are past monitoring what customers do on our own sites. Bitly presenter Anna Smith explained how their solution now reports what your customers are accessing on other sites.
Outside of Caesar’s Palace
I could not go to Las Vegas without a Zappos tour. I am a long-time Tony Hsieh fan and, as my UPS driver knows, I am doing my part to jumpstart the economy by shopping online. I am not alone; one in every sixty UPS packages delivered in the US is a Zappos box!
An interesting use of Big Data: just before my flight, I tweeted that I was thrilled to attend Big Data but super-excited about the Zappos tour, and I received a tweet back from Zappos. How do they do that?
Topsy's Jamie de Guerre shed some light on that with a presentation titled "How to analyze 100 billion tweets in milliseconds." His analogy of data as a high-speed train outlined three ways to analyze what is happening on the train: use a sampling methodology as the train hurtles by; take the train off the track and examine it; or use the real-time information the train generates. In reality, the train is 430 million tweets per day. The Topsy solution follows the real-time model and analyzes all Twitter traffic in about 200 milliseconds.
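As a non-engineer, I found the train analogy easier to grasp as a toy sketch. Here is a minimal, illustrative version in Python; to be clear, nothing below is Topsy's actual code, and the function names and numbers are made up for illustration:

```python
import random

def sample(stream, rate=0.01):
    """Option 1: sample tweets as the train hurtles by - cheap but approximate."""
    return [t for t in stream if random.random() < rate]

def batch(stream):
    """Option 2: take the train off the track - store everything, analyze later."""
    stored = list(stream)   # land the full stream first...
    return len(stored)      # ...then analyze it at leisure

def realtime(stream, handle):
    """Option 3: the real-time model - analyze every item as it arrives."""
    for t in stream:
        handle(t)           # e.g. update counts, trends, or alerts on the fly

# A toy stream standing in for the ~430 million tweets per day.
tweets = (f"tweet {i}" for i in range(1000))
seen = []
realtime(tweets, seen.append)
print(len(seen))  # prints 1000: every tweet handled, none sampled away
```

The trade-off the analogy captures: sampling loses detail, batch processing loses timeliness, and the real-time model keeps both, at the cost of a much harder engineering problem.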
The best example was the Obama for America campaign's use of superior data and social media. The campaign's Deputy CAO described how data was used to micro-target voters: the best method to reach each voter, whether social media, direct mail, or a phone call, was selected based on the data. A powerful and successful use of big data.
I was struck early on in the conference by two thoughts. First, not only is big data mind-boggling in terms of scale, but discussions can quickly become extremely technical.
Several times I felt as though I were in an episode of the sitcom The Big Bang Theory, and not as one of the scientist characters.
However, many of the presenters described the approach as a traditional project plan, which always led back to assessing client needs and tailoring the data and resulting analytics.
Second, I was thinking that this data gathering and analytics has the potential to spiral out of control. Balaji Thiagarajan from Motorola summed up the necessary approach nicely: 70% of the time should be spent on what data to gather and only 30% on how to gather it. As a former C-level executive, I would agree with that allocation of effort, but I would also ask why: why do you want this data, and does that particular data actually answer the question or solve the problem?
Overall, a great set of presentations, but I was left thinking: how do we really figure out what data matters?
1. Edd Dumbill, "Making Sense of Big Data," Big Data Preview Articles, p. BD1.
This article was written by Mary Juetten, founder of Traklight.com, a site that provides inventors, creators, startups, and small businesses with tools to identify and protect their intellectual property. Visit www.traklight.com and take the RISK QUIZ to assess your risk of losing your IP, or upload your ideas into the IP VAULT.