Why you don’t need petabytes for a big data opening

Published April 11, 2014  |  Toby Wolpe

Big data isn’t necessarily big and can be as much about the complexities of processing information as about volumes or data types.

Personal genetic-profiling services such as 23andMe, which charges $99 for an individual DNA analysis, illustrate the point, according to Forrester principal analyst Mike Gualtieri.

The data from one individual's sequenced DNA comes to only about 800MB, he told an audience at last week's Hadoop Summit in Amsterdam.

“That’s not a lot. Would you call that big data? If I said 800MB is big data, I’d get laughed out of the room,” Gualtieri said.

“But within that, there are four billion pieces of information and lots of patterns. So it’s a big processing challenge, it’s a big compute challenge. You don’t have to have petabytes of data to have a big-data opportunity or issue.”
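The numbers are roughly consistent. As a back-of-envelope illustration (not anything from the talk, and assuming a compact two-bits-per-base encoding of the four DNA letters, which is not necessarily the format 23andMe uses), an 800MB file can hold billions of base pairs:

    # Rough check: how many DNA bases could an 800MB file hold?
    # Assumes a 2-bit-per-base encoding (A/C/G/T -> 00/01/10/11);
    # this is an illustrative encoding, not 23andMe's actual file format.
    file_size_bytes = 800 * 1024 * 1024          # ~800MB
    bits_per_base = 2                            # four letters fit in two bits
    bases = (file_size_bytes * 8) // bits_per_base
    print(f"{bases:,} bases")                    # 3,355,443,200 -> in the same
                                                 # ballpark as the ~3 billion base
                                                 # pairs of a human genome

Either way the arithmetic is sliced, the challenge lies in the pattern-matching compute across those billions of data points, not in the storage footprint.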

Big data, in other words, is a relative concept: Gualtieri described it as the frontier of an individual company's ability to store, process and access data to achieve its business outcomes, and those outcomes are mostly about understanding and serving customers.
