Buyer’s Guide to Big Data Integration
The rationale for a hybrid architecture is that analytics solutions that run directly on Hadoop are still evolving and do not necessarily support the full breadth of production use cases. In particular, many SQL-like tools on Hadoop perform well in certain use cases, but don’t always deliver the highly interactive analysis performance that the market is used to with relational data sources.
Taking a solution approach that combines the best of Hadoop (extreme scale processing and refinement of diverse data) with the best of analytic databases (speed of thought analysis on large volumes of relational data) often makes more sense.
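The hybrid split described here can be pictured as a simple routing rule: interactive, relational queries go to the analytic database, while large-scale refinement jobs go to Hadoop. The class names and the 10 GB cutoff in this sketch are illustrative assumptions, not any vendor's API:

```python
# Sketch of a hybrid routing layer for the architecture described above.
# AnalyticDB-vs-Hadoop dispatch is decided by latency needs and data volume.
# All names and thresholds here are illustrative, not a product's interface.

from dataclasses import dataclass

@dataclass
class Query:
    kind: str          # "interactive" or "batch"
    scan_bytes: int    # estimated data scanned by the query

# Assumed cutoff for "speed of thought" analysis on the analytic database
INTERACTIVE_SCAN_LIMIT = 10 * 1024**3  # 10 GB

def route(query: Query) -> str:
    """Pick a backend: low-latency relational store or scale-out Hadoop."""
    if query.kind == "interactive" and query.scan_bytes <= INTERACTIVE_SCAN_LIMIT:
        return "analytic_db"   # highly interactive analysis on relational data
    return "hadoop"            # extreme-scale refinement of diverse, raw data

# A dashboard drill-down stays on the analytic database, while a
# full-history refinement job lands on Hadoop.
```

The point of the sketch is only the division of labor: neither backend has to be good at everything.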
In such a solution approach, it’s important to be able to deliver data sets and analytics to the business on demand. This can be helped by automating data modeling processes and using parameterized data integration workflows that can adapt to the ever-changing business questions that analysts are asking. The goal is to create a process or framework once, avoiding repeated requests that result in manual work and lengthen time to decision.
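A parameterized workflow along these lines might look like the following minimal sketch: the transformation is defined once, and analysts re-run it with new parameters instead of filing a fresh manual request each time. The field names and sample data are hypothetical, for illustration only:

```python
# Sketch of a parameterized integration workflow: build once, re-run with
# new parameters as the business question changes. Field names are assumed.

def build_sales_extract(rows, region, start, end):
    """Filter and reshape raw rows for one region and date range.

    Dates are ISO-formatted strings, so lexicographic comparison is valid.
    """
    return [
        {"date": r["date"], "region": r["region"], "revenue": r["units"] * r["price"]}
        for r in rows
        if r["region"] == region and start <= r["date"] <= end
    ]

raw = [
    {"date": "2024-01-05", "region": "EMEA", "units": 3, "price": 10.0},
    {"date": "2024-02-01", "region": "APAC", "units": 2, "price": 5.0},
]

# The same workflow answers a new question just by changing parameters:
emea_jan = build_sales_extract(raw, "EMEA", "2024-01-01", "2024-01-31")
```

Swapping the region or date range requires no new development work, which is exactly the "create once" goal described above.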
Analytics Support

It is well known among data analysts in any domain that as much as 80 percent of the work to get an answer or to create an analytical application is done up front to clean and prepare the data. Data integration technology has long been the workhorse of analysts who seek to accelerate the process of cleaning and massaging data.
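The up-front cleaning work this refers to can be illustrated with a small sketch: trimming, normalizing, and de-duplicating records before any analysis begins. The field names and rules are assumptions made for the example:

```python
# Sketch of typical up-front data preparation: normalize formats, drop
# unusable records, and de-duplicate. Field names are illustrative.

def clean(records):
    seen, out = set(), []
    for r in records:
        email = r.get("email", "").strip().lower()
        if not email or email in seen:
            continue              # drop blanks and duplicates
        seen.add(email)
        out.append({"email": email, "name": r.get("name", "").strip().title()})
    return out

messy = [
    {"email": " Ana@Example.com ", "name": "ana lopez"},
    {"email": "ana@example.com", "name": "Ana Lopez"},   # duplicate after normalizing
    {"email": "", "name": "nobody"},                     # unusable record
]

cleaned = clean(messy)
```

Even this toy version shows why preparation dominates the effort: every rule encodes a judgment about what "clean" means for the data at hand.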
In the realm of big data, this means that all of the capabilities mentioned so far must be present: easy-to-use mechanisms for defining transformations, the ability to capture and reuse transformations, the ability to create and manage canonical data stores, and the ability to execute queries. All of this needs to be present for big data repositories as well as for repositories that combine all forms of data.
Supporting analysts in cleaning and distilling data with machine learning, and in sharing the results, accelerates the process of answering questions, building apps, and supporting visualizations.
But analysts will face other problems unique to big data. As we pointed out earlier, big data is often dirty and noisy. Machine learning is needed to ferret out the signal. But machine learning techniques are often difficult to use.
The best big data integration technology will offer a guided experience in which machine learning suggests transformations and analysts then steer the results in the right direction.
This guided approach is required because so many machine learning and advanced analytical techniques are available for many different types of data. The machine learning techniques used to create predictive models from streaming data are far different from those used for categorizing unstructured text.
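One way to picture this guided pattern is a scorer that suggests a parse for each column while the analyst retains the final say. The heuristic below is a deliberately simple stand-in for real machine learning, and all names are assumptions for the sketch:

```python
# Sketch of the "guided" pattern: the system suggests, the analyst steers.
# The confidence scoring is a toy heuristic, not a vendor's ML technique.

def suggest_type(values):
    """Suggest 'number', 'date', or 'text' by the share of values that fit."""
    def frac(pred):
        return sum(1 for v in values if pred(v)) / len(values)

    def is_number(v):
        try:
            float(v)
            return True
        except ValueError:
            return False

    def is_date(v):
        parts = v.split("-")
        return len(parts) == 3 and all(p.isdigit() for p in parts)

    scores = {"number": frac(is_number), "date": frac(is_date)}
    best, score = max(scores.items(), key=lambda kv: kv[1])
    return best if score >= 0.8 else "text"   # low confidence falls back to text

def confirm(suggestion, analyst_override=None):
    """The analyst's override wins; otherwise the suggestion stands."""
    return analyst_override or suggestion
```

The design choice worth noting is the division of labor: the machine proposes cheaply over many columns, while the human corrects only the uncertain or wrong cases.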
Once an analyst has created a useful, clean data set, the value of the work can be amplified by allowing sharing and reuse. Right now, environments to support sharing and collaboration are emerging. Some environments support architected blending of big data at the source to enable easier use and optimal storage of big data. Big data integration technology should support such environments.
Preferred Technology Architecture

The ideal system for big data integration is different at every company. The most data-intensive firms will likely need every capability mentioned. Most companies will need quite a few of them, and more as time goes on.
The best way to provision the capabilities for big data integration is to acquire as few systems as possible that have the needed features. Most of the capabilities mentioned are stronger when they are built to work together.
The ideal big data integration technology should simplify complexity, be future-proof through abstractions, and invite as many people and systems as possible to make use of data.
A fact of life in the world of data analysis is that everything is going to change. The best technology will insulate you as much as possible from those changes. It should be the vendor’s responsibility to create easy-to-use, powerful abstractions and maintain them going forward.
The fact that big data technologies are evolving should not be your problem. Neither should the inevitable shakeout that will occur as various forms of technology and vendors fade away. Does this represent a form of lock-in? Of course, but in the end, it is better to be married to a higher level of abstraction than to a lower one.
Open source is and has been leading the way in big data innovation. A large part of the innovation in Hadoop and other big data ecosystem components has come via open source projects, not proprietary or closed approaches. Open source leads to a virtuous cycle of greater technology adoption and community-driven improvements. It is therefore key to look for data integration tools that embrace open source innovation and align with its capabilities. At the same time, open technologies tend to be more flexible and extensible than proprietary products. In an immature big data integration and analytics landscape, where no one vendor can provide a complete out-of-the-box solution to meet all anticipated needs, support for the flexibility provided by open standards, open APIs, and well-developed SDKs is paramount.
By choosing technology that supports visual data modeling, it is possible to avoid a skills bottleneck. Programming knowledge should not be required for transforming, modeling, and blending data sources. Simplified environments allow more people to interact with data directly, and in turn accelerate progress.
One key financial factor in choosing the right technology is the license model. Depending on how your software is deployed and the internal skill set for supporting software, there can be vast differences in the cost to acquire various capabilities. It is important to understand the benefits and drawbacks of traditional licenses, open source software licenses, and various hybrid offerings.
Select solutions based on real-world use cases, not hypothetical applications. Look for integration vendors that have helped customers achieve success specific to big data use cases, most importantly with Hadoop or NoSQL data. While the majority of vendors claim to work with big data, the reality is that many are new to the market or are older vendors that have had success with traditional use cases, but not big data use cases. Another thing to look for is deep services offerings and expertise. Solving major business problems with big data requires best practice architectures, proven project plans, hands-on training, and expert support.
Finally, the best systems for big data integration are built to be embedded into business processes and workflows. It should be possible to point the simplified forms of transformation at big data sources or at SQL repositories and to use them inside MapReduce jobs or applications. Data integration tools should enable transformed big data to be accessed through familiar BI tools and used to feed web pages, mobile apps, or enterprise applications.
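The embeddable-transformation idea can be sketched as a pure, row-level function that is reused unchanged in both a batch map step and an application call. All names and fields below are illustrative assumptions:

```python
# Sketch of an embeddable transformation: a pure, row-level function that
# runs identically inside a MapReduce-style map step or behind an app's
# API response. Names and fields are illustrative.

def transform(row):
    """Normalize one record; pure, so it can run wherever rows flow."""
    return {"user": row["user"].lower(), "spend_usd": round(row["cents"] / 100, 2)}

# Batch context: the same function as the map step over a big data source.
batch_out = list(map(transform, [{"user": "ALICE", "cents": 1999}]))

# Application context: the same function serving a web or mobile response.
api_out = transform({"user": "Bob", "cents": 250})
```

Keeping the transformation pure and source-agnostic is what lets one definition feed BI tools, web pages, and enterprise applications alike.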
The Rewards of Getting Big Data Integration Right

Data does no good unless it is presented to a human who can somehow benefit from it or unless it is used in an automated system that a human designed. The point of big data integration is to make it as easy as possible to access, understand, and make use of data.
The rewards of getting big data integration right are the benefits that come from greatly expanded and timely use of all available data. Reducing delays, eliminating skill bottlenecks, and getting fresh data to analysts and applications means that an organization can move faster and more effectively.
By purchasing components and systems that are part of a coherent vision while at the same time leveraging ongoing open source innovation, it is possible to minimize cost and avoid compromising on needed capabilities.
The questions we started with should now be easier to answer:
What to buy? As few systems as possible that provide the capabilities you need now and in the future, in a way that is easy to use and future-proof.
What is the coherent whole? A vision of big data integration that incorporates existing forms and sources of data into a new system that supports all phases of a responsive, dynamic data supply chain.
Solving Big Data Integration Challenges With Pentaho

Pentaho’s big data integration and analytics platform provides broad connectivity to any type or source of data, with native support for Hadoop, NoSQL, and analytic databases.
Pentaho’s complete visual big data integration tools eliminate coding in SQL or writing MapReduce Java functions, and empower you to architect big data blends at the source for more complete and accurate analytics. Learn more at www.pentaho.com.
This paper was created by CITO Research and sponsored by Pentaho
CITO Research

CITO Research is a source of news, analysis, research, and knowledge for CIOs, CTOs, and other IT and business professionals. CITO Research engages in a dialogue with its audience to capture technology trends that are harvested, analyzed, and communicated in a sophisticated way to help practitioners solve difficult business problems.
Visit us at http://www.citoresearch.com