There’s no doubt big data is a game-changer, thanks to its ability to store and process huge volumes of data rapidly. Yet despite its resounding success in the technical arena, business adoption of big data has largely languished, held back by siloed applications, a lack of integration with the rest of the enterprise’s technical assets and the burden on business users of learning its lingo: Pig, Hive and the like. But I predict that in 2016 big data will emerge from IT’s shadow and be embraced by business users. How? With the help of big data virtualization, a technology that virtually combines big data with other enterprise data.
Businesses are beginning to power enterprise-wide use cases, such as transforming revenue models from perpetual licensing to subscription licensing, by bridging the gap between big data and back-office systems using data virtualization. In 2016, I expect this momentum to double the rate of adoption of big data technologies, especially in three aspects.
1. Adoption of business applications using telemetry and IoT will double
Businesses will double their adoption of business applications using telemetry data along with existing enterprise data to power new and expanded use cases.
Consider the case of a heavy equipment manufacturing company that uses machine-related IoT sensor data to predict service needs and provide proactive parts replacement. The sensors monitor the health of various parts within each machine and transmit the useful data, which is stored in a big data system. The company derives actionable insights by combining that sensor data with enterprise data about parts availability, dealers and locations to ensure that it provides timely customer service. Data virtualization combines the IoT data stored in a Hadoop system with the customer service records stored in back-office systems and provides predictive intelligence to business users.
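The join at the heart of this use case can be sketched in a few lines. This is a minimal illustration, not the company's actual system: the record layouts, part names, dealers and wear threshold below are all hypothetical.

```python
# Hypothetical telemetry rows, as they might land in a Hadoop store:
# per-machine wear readings transmitted by onboard sensors.
sensor_readings = [
    {"machine_id": "M-101", "part": "hydraulic-pump", "wear_pct": 87},
    {"machine_id": "M-102", "part": "track-roller", "wear_pct": 35},
    {"machine_id": "M-103", "part": "hydraulic-pump", "wear_pct": 92},
]

# Hypothetical back-office enterprise data: parts availability by dealer.
parts_inventory = {
    "hydraulic-pump": {"dealer": "Dealer-West", "in_stock": 4},
    "track-roller": {"dealer": "Dealer-East", "in_stock": 12},
}

WEAR_THRESHOLD = 80  # assumed cutoff for "likely to fail soon"

def proactive_replacements(readings, inventory, threshold=WEAR_THRESHOLD):
    """Join worn-part readings with inventory to plan proactive service."""
    plan = []
    for r in readings:
        if r["wear_pct"] >= threshold:
            stock = inventory.get(r["part"], {})
            plan.append({
                "machine_id": r["machine_id"],
                "part": r["part"],
                "dealer": stock.get("dealer", "unknown"),
                "available": stock.get("in_stock", 0) > 0,
            })
    return plan

print(proactive_replacements(sensor_readings, parts_inventory))
```

A virtualization layer performs essentially this join on the fly, without copying either data set out of its home system.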
As the above use case demonstrates, the ability to combine enterprise data with streaming data in real time or near real time from devices will spur the increased adoption of telemetry data and, in fact, the IoT itself.
2. Insights-as-a-Service will become the next business frontier through real-time access to data
In 2016, insights-as-a-service delivered through real-time reporting by seamlessly integrating hot and cold data will pick up steam to provide business users with deeper, historical analysis.
Operating a data warehouse has always been expensive. In 2015, companies chose to partially offload historical data onto cheaper Hadoop stores as a natural way to drive costs down. However, introducing Hadoop alongside an existing data warehouse created a new problem with no simple way to bridge the gap: two separate data stores with very different modes of access, protocols, data formats, and performance and security capabilities. These data silos presented several challenges, such as creating a unified report that combines the data from the two systems.
I predict that in 2016 businesses will use data virtualization to solve this problem by creating a virtual data layer on top of both the data warehouse and the Hadoop store, essentially abstracting the access to both systems and seamlessly combining the disparate data into a unified view.
The combined data will be delivered through real-time reporting as a service. Adoption of this insights-as-a-service will accelerate as businesses realize the dual benefits of lowering data warehousing operational costs with a Hadoop cold data store and providing real-time reporting through a virtual unified view of data that transparently spans both systems.
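The idea of a virtual unified view can be sketched as follows. The "hot" warehouse rows and "cold" Hadoop rows, the order schema and the date ranges are all assumptions for illustration; the point is that one query transparently spans both stores.

```python
from datetime import date

# Hypothetical stores: recent "hot" rows kept in the warehouse,
# older "cold" rows offloaded to Hadoop.
warehouse_hot = [
    {"order_id": 3, "day": date(2016, 1, 5), "revenue": 120.0},
    {"order_id": 4, "day": date(2016, 1, 6), "revenue": 80.0},
]
hadoop_cold = [
    {"order_id": 1, "day": date(2014, 6, 1), "revenue": 200.0},
    {"order_id": 2, "day": date(2015, 3, 9), "revenue": 150.0},
]

def unified_revenue(start, end):
    """Virtual view: one query over both stores, regardless of
    where each row physically lives."""
    rows = [r for r in warehouse_hot + hadoop_cold
            if start <= r["day"] <= end]
    return sorted(rows, key=lambda r: r["day"])

# A single report that reaches across hot and cold data.
report = unified_revenue(date(2014, 1, 1), date(2016, 12, 31))
total = sum(r["revenue"] for r in report)
print(total)  # 550.0
```

A real virtualization layer would push the date filter down to each store and merge the results, rather than pulling all rows into memory, but the abstraction presented to the report writer is the same.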
3. Big Data will emerge from technologists’ doldrums and become a reality with business user adoption
Until now, only technologists such as data scientists and data architects have embraced big data. Despite all the buzz, it has been far from usable by business users. Data virtualization, through its familiar interface, will increase big data adoption by business users by sparing them the need to learn Pig, Hive and other highly technical tools.
At a large member-owned Group Purchasing Organization (GPO), product, supplier, member and other data was siloed across multiple data sources. The company created a Hadoop data lake to consolidate the disparate data so that business users could discover related data and provide services to members. However, this required business users to learn new big data technologies like Pig and Hive, which would have meant additional training, slowed the time to benefits and hurt buy-in for the new technology. Drawing on its previous success with data virtualization, powered by Denodo, the company instead used data virtualization to let business users discover data with familiar SQL, abstracting away direct access to Hadoop.
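To see why a SQL interface lowers the barrier, here is a small sketch using Python's standard sqlite3 module as a stand-in for the virtual SQL layer. The product and supplier rows, and the idea that they were extracted from a Hadoop data lake, are hypothetical; the takeaway is that the user writes ordinary SQL, not Pig or Hive scripts.

```python
import sqlite3

# Hypothetical rows as they might be surfaced from a Hadoop data lake.
products = [("P-1", "Syringe", "S-1"), ("P-2", "Gauze", "S-2")]
suppliers = [("S-1", "Acme Medical"), ("S-2", "MedSupply Co")]

# The virtual layer exposes the lake's data through a SQL interface.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id TEXT, name TEXT, supplier_id TEXT)")
conn.execute("CREATE TABLE supplier (id TEXT, name TEXT)")
conn.executemany("INSERT INTO product VALUES (?, ?, ?)", products)
conn.executemany("INSERT INTO supplier VALUES (?, ?)", suppliers)

# A business user writes familiar SQL instead of learning Pig or Hive.
rows = conn.execute(
    """SELECT p.name, s.name
       FROM product p JOIN supplier s ON p.supplier_id = s.id
       ORDER BY p.name"""
).fetchall()
print(rows)
```

In a product like Denodo, the SQL is translated under the hood into whatever each underlying source understands, which is precisely the abstraction the GPO's business users benefited from.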
Use cases like those described above will simplify data management, enable faster data discovery and increase business-user adoption of this great technology.
Taming the big elephant in the room
The biggest obstacle for any new technology in crossing the chasm is the business users’ adoption of that technology. In my 25 years of experience in the software industry, I’ve seen IT spend millions of dollars buying and implementing software only to bury it in the technology graveyard because of lack of business-user adoption.
Big data is promising, but it has yet to cross the chasm for business-user adoption. But that’s about to change in 2016. Big data virtualization, by acting as the glue across business users, big data and back-office technologies, will deliver on the promise of doubling the business adoption of yet another great technology.
Ravi Shankar is the chief marketing officer at Denodo Technologies. He is responsible for Denodo’s global marketing efforts, including product marketing, demand generation, communications and partner marketing. Ravi brings to his role more than 25 years of proven marketing leadership from enterprise software leaders such as Oracle and Informatica.