Big Data Amplifies Need for Quality Data and MDM-Driven Process

Editor’s note: Big Data is amplifying the problems resulting from companies having a large amount of bad data. A recent Forrester Research report found that as much as 50 percent of a typical IT budget may be spent on “information scrap and re-work.” Mathew Manathara, vice president of Global MDM Solutions at Software AG, points out that the data points companies are collecting regarding customer behavior and shopping patterns go haywire if the master data quality isn’t right. In this SandHill interview he discusses data quality and governance as well as trends in Master Data Management (MDM) solutions.  

SandHill.com: Businesses have had MDM solutions available for many years. Why is there still a problem with data quality? 

Mathew Manathara: While MDM solutions have been available for a while, I would say that the majority of enterprises have yet to implement a lasting MDM program. Those that have embarked on the MDM journey are more likely to have done tactical MDM projects, as opposed to a strategic MDM program that eventually becomes a way of life in the organization. So sustained data quality is still very much a work in progress.

If you look at data quality issues, they may have been introduced right from the start when the system was being put in place. Or maybe even though somebody made the effort to clean up the data, it still deteriorated over time because they didn’t have the right processes, frameworks and tools in place to ensure ongoing quality.
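The kind of ongoing quality process described above can be pictured as a small set of validation rules applied to every record as it enters a system, rather than a one-time cleanup. The field names and rules below are hypothetical illustrations, not any specific product's checks:

```python
# Sketch: a minimal ongoing data-quality check, run as records arrive.
# Field names and rules are hypothetical illustrations.
import re

RULES = {
    "email": lambda v: bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v or "")),
    "country": lambda v: v in {"US", "DE", "IN", "JP"},  # ISO-style codes
    "name": lambda v: bool(v and v.strip()),
}

def validate(record):
    """Return the list of fields that fail their quality rule."""
    return [field for field, rule in RULES.items() if not rule(record.get(field))]

good = {"name": "Acme Corp.", "email": "ap@acme.example", "country": "US"}
bad = {"name": " ", "email": "not-an-email", "country": "USA"}

assert validate(good) == []
assert validate(bad) == ["email", "country", "name"]
```

The point of the sketch is that quality is enforced continuously at the point of entry; a one-time cleanup without such a gate lets the data deteriorate again.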

The problem is that companies look at poor-quality data as more of an annoyance as opposed to being a symptom of more serious process degradation. So they tend to focus on short-term fixes. They may put in place some point-to-point integration or mapping between systems to get data in sync rather than trying to resolve the root cause of the issue.

The business side of the house may look at data quality as an IT responsibility. But in reality, business or functional users need to take a more active interest in managing data quality. They are the custodians of the data; IT is just the enabler. Systematic quality checks can only work if the underlying processes that generate data are correct.

SandHill.com: Why is there still so much talk about data not being visible? Is this just a matter of data being in silos and therefore not accessible to everyone? 

Mathew Manathara: Data in individual silos may be visible. However, information, meaning data consolidated across platforms into a more cohesive and comprehensive picture, may not be. This could be the case due to several factors:

  • Inability to consolidate data due to a lack of the required “glue.” Master data forms the glue between different applications and processes. If you do not have a single source of master data or do not have a way of mapping to a common source, then you can consolidate only to a certain extent.
  • Inability to consolidate due to incomplete data set collected at the source.
  • Even if consolidation is possible, it may suffer from inaccuracies due to non-standardized definitions.
  • Lack of a proper information delivery framework. This is not MDM per se; rather, it is related to aspects such as data warehouse and business intelligence platforms. 
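The "glue" role of master data in the first point above can be sketched with a small cross-reference table. The system names, record IDs, and fields are hypothetical; the idea is that each application keeps its own local IDs, and only a shared master ID lets the records be joined:

```python
# Sketch: master data as the "glue" between applications.
# System names, IDs, and fields here are hypothetical illustrations.

crm_records = {
    "CRM-101": {"name": "Acme Corp.", "phone": "555-0100"},
    "CRM-102": {"name": "Globex", "phone": "555-0111"},
}
billing_records = {
    "BIL-9": {"name": "ACME Corporation", "balance": 1200.0},
    "BIL-7": {"name": "Initech", "balance": 350.0},
}

# Master cross-reference: each system's local ID mapped to a golden ID.
xref = {
    ("crm", "CRM-101"): "M-1",
    ("billing", "BIL-9"): "M-1",   # same real-world customer as CRM-101
    ("crm", "CRM-102"): "M-2",
    ("billing", "BIL-7"): "M-3",
}

def consolidate():
    """Merge attributes from all systems under each golden master ID."""
    golden = {}
    for (system, local_id), master_id in xref.items():
        source = crm_records if system == "crm" else billing_records
        golden.setdefault(master_id, {}).update(source[local_id])
    return golden

view = consolidate()
# "M-1" now combines CRM contact data with the billing balance;
# without the xref "glue", the two records could not be matched.
```

Without the cross-reference (or a single mastered source), "Acme Corp." and "ACME Corporation" remain two unrelated records, which is exactly the partial-consolidation problem described above.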

SandHill.com: From your observation, where are the benefits of having a data steward or a chief data officer (CDO)? 

Mathew Manathara: The chief data officer isn’t the same as a data steward. The data steward would be somebody on the business side who knows the data well enough to recognize quality issues. A steward doesn’t necessarily own the data; rather, it’s someone who is responsible for the quality of the data.

MDM is less about technology and more about having the right processes and teams in place. It requires direction and “push” from the top and it requires good incentives. That’s where the C-level person or chief data officer comes in. The role of that person is to get people together for organization-wide process improvement.   

SandHill.com: What are the top three best practices for data governance and for implementing an MDM solution? 

Mathew Manathara: There are so many things that need to be addressed. First is having the right team. Data governance is not easy. There is a lot of added work, a lot of documenting and implementing policies. It’s also about communicating the value and the necessity of data governance so that people don’t look at it as a nuisance. Having the right team also means having support from the top and ensuring, where needed, the team is cross-functional and cross-business units.

Second is establishing the scope. It's a lot of work, so you need to pick some key areas, whether you approach it at a process level or at a business-unit level. Take small steps at a time.

Third, having key data stewards is really critical. Companies need to formalize this role, get the right person in place and give the steward not only the right responsibilities but also the power and accountability for the data quality.

SandHill.com: Implementing an MDM solution requires business process reengineering. Therefore, having effective change management is crucial to success, correct? 

Mathew Manathara: Absolutely. The biggest effort, and the number one challenge, is change management. How do you get all these different groups, sometimes functioning at cross-purposes, to work together? That's necessary to do MDM right and capture the long-term benefits.

There are exceptions; companies can certainly justify tactical, short-term MDM projects in some cases. But the benefits from these projects tend to have a smaller scope.

The majority of MDM projects, especially if you're doing it the right way and for the long haul, will require process changes. That's where the change management comes in. It's not just about access to the data; it's also about getting people to work together. And each group might have a reason for having their different definitions. So that's where the C-level person comes in, to get the definitions and processes aligned with the bigger picture.

SandHill.com: Do you have an example of great benefits that were achieved by implementing an MDM solution because they had the right team and right scope? 

Mathew Manathara: Sure. A large financial services unit of a conglomerate has a multi-year MDM program that started in 2008 and is ongoing. They have multiple initiatives under the program umbrella lined up for the next two years. One such initiative was to consolidate master data about different types of business counterparties and trading partners. They had multiple trading systems that had counterparty information, and there was no method to how that information was created and updated.

Another big issue was they lacked a full understanding of risk exposure to any one counterparty, because they did not know if a counterparty was related to another, such as being a subsidiary or having a common parent. With the MDM solution, they now have a single view of the counterparty relationships and a complete picture of counterparty risk.
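The counterparty-risk problem described here can be pictured as a rollup over a mastered parent/subsidiary hierarchy. The counterparty names, relationships, and exposure figures below are hypothetical:

```python
# Sketch: aggregating exposure across related counterparties.
# Counterparty names, hierarchy, and amounts are hypothetical.

parent_of = {          # subsidiary -> parent (mastered relationship data)
    "BankCo Securities": "BankCo Holdings",
    "BankCo Leasing": "BankCo Holdings",
}
exposure = {           # exposure booked in separate trading systems
    "BankCo Securities": 40_000_000,
    "BankCo Leasing": 25_000_000,
    "BankCo Holdings": 10_000_000,
}

def ultimate_parent(name):
    """Walk up the hierarchy to the top-level counterparty."""
    while name in parent_of:
        name = parent_of[name]
    return name

def total_exposure_by_group():
    """Roll individual exposures up to each ultimate parent."""
    totals = {}
    for counterparty, amount in exposure.items():
        group = ultimate_parent(counterparty)
        totals[group] = totals.get(group, 0) + amount
    return totals
```

Any single trading system here shows at most $40M of exposure to one name, but the mastered hierarchy reveals $75M of combined exposure to the group, which is the "single view of counterparty risk" the MDM solution provides.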

Another project under the MDM program consolidated data from various commercial market data feeds, such as Reuters and Bloomberg. This allowed them to cut down on multiple subscriptions for the same data feed from different parts of the company. The market data was loaded into the MDM hub and deployed to the subscribing users, resulting in significant cost savings.
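The savings mechanism in this feed-consolidation project amounts to deduplicating subscriptions by vendor and dataset once a central hub redistributes the data internally. Department names, feeds, and costs below are made up for illustration:

```python
# Sketch: finding duplicate market-data subscriptions across departments.
# Department names, feeds, and costs are hypothetical.

subscriptions = [
    ("Equities Desk", "Bloomberg", "EquityRef", 50_000),
    ("Risk",          "Bloomberg", "EquityRef", 50_000),
    ("Fixed Income",  "Reuters",   "BondRef",   30_000),
    ("Risk",          "Reuters",   "BondRef",   30_000),
]

# With an MDM hub, each (vendor, dataset) feed is licensed once and
# redistributed internally; every duplicate subscription is savings.
seen, savings = set(), 0
for dept, vendor, dataset, cost in subscriptions:
    key = (vendor, dataset)
    if key in seen:
        savings += cost
    else:
        seen.add(key)

print(f"Annual savings: ${savings:,}")
```

In this toy example two of the four subscriptions are redundant, so consolidating through the hub saves $80,000 a year.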

This company could have taken a short-term approach and restricted the scope to just consolidating the counterparty information from multiple systems, which was the immediate need. They could have gone through the cleansing process and stopped at that point. But they wouldn’t have gotten the tremendous upside without the broader scope of the overarching MDM program. 

SandHill.com: What about the cloud? How does it figure into MDM solutions? 

Mathew Manathara: A few vendors have introduced cloud offerings, but the MDM market has not adopted cloud in the way other solution areas have. Large customers prefer to run MDM behind the firewall. Smaller companies that lack the internal infrastructure and resources needed to manage the solution themselves will be interested.

SandHill.com: In what way do you think MDM solutions are going to evolve over the next two to three years?

Mathew Manathara: MDM solutions are still very much evolving. Whereas customer MDM solutions typically were used for "downstream" or after-the-fact consolidation for reporting purposes (Customer Data Integration, or CDI), today we see a lot more real-time integration and up-front data management as opposed to downstream. Technologies and platforms have improved, partly due to the business consolidation that has happened in the larger data management and data quality space over the years.

Vendors continue to build out horizontal functionalities and enhance what they already have. We are also seeing broader trends impacting MDM to varying degrees: cloud-enablement, as just mentioned, social data integration, and mobile access.

Further out, I see MDM getting embedded into operational systems. Rather than having specialized products, MDM capabilities would be embedded into business processes and systems. This has the potential to have a huge impact and value, making clean master data available throughout the enterprise.

Mathew Manathara is vice president of Global MDM Solutions at Software AG, where he is responsible for enabling adoption of Software AG's master data management offerings in the marketplace. Previously, Mathew was co-founder and CEO of Data Foundations Inc., which was acquired by Software AG in 2010. The company was a pioneer of the multi-domain approach to MDM with its flagship product OneData and quickly built a customer base of some of the world's most prominent organizations.

Kathleen Goolsby is managing editor at SandHill.com

Comments

By Karl Waldman

Mathew is spot on – a common challenge our customers have is that they “dress” their retailer data with their own corporate master data, only to discover that the master data set is not complete or is simply wrong. This is a big challenge particularly in larger organizations as ownership is not always clear.

This is particularly challenging as it will look like the retailer data is “wrong,” yet it really is that the corporate master data is not fully defined (missing “flavors” or other attributes that are not defined for all UPCs).

