Organizations are only just digging into big data, and already portents of change are forming.
This time, the change appears to be a rethinking by HPC (high-performance computing) vendors of which hardware best serves big data payloads that must be processed and analyzed as quickly as possible.
Big data’s HPC heritage is grounded in universities and research institutes sharing supercomputers with immense footprints and operating costs. But this isn’t the space where enterprises and SMBs (small and medium-sized businesses) operate. Instead, these organizations want affordable, scalable computing power for their big data that can be budgeted into their data centers. Unless they opt for cloud service providers to host and run their big data processing and analytics, they’re also going to be looking at bare-metal (not virtualized) HPC platforms, because HPC and big data workloads don’t tend to perform well in virtualized environments.
Thus far, the platform of choice for big data processing in the enterprise data center has been the x86 server. Part of the reason is the ease with which x86 servers scale into big data processing clusters as organizations expand their capabilities. Another reason is the comparative affordability of commodity x86 machines, even though they must be specially configured to apply HPC-style parallel processing to big data in an analytics operation. The catch is that as big data processing in enterprises grows, so do expectations.