Rethinking Benchmarks as a Platform for Software Innovation

By Mathias Golombek | October 21, 2014

If you’re managing or leading a startup, how do you differentiate yourself from the other, perhaps larger, vendors dominating your industry? That question has most likely crossed your mind as an entrepreneur, or it will at some point in your career. Especially in Silicon Valley, where innovation is everywhere and almost every industry imaginable is incredibly noisy, it’s important to find ways to stand out from the competition. One tried-and-true way to do so is through third-party benchmarks.

Third-party benchmarks are seen as the most credible validation of an organization’s claims. One benchmark within the Big Data industry is TPC-H, from the Transaction Processing Performance Council (TPC). TPC-H models decision support systems that examine large volumes of data, execute queries with a high degree of complexity and answer critical business questions. The performance metric reported by TPC-H reflects multiple aspects of a system’s ability to process such queries.

But let’s not get too technical with TPC-H. Put simply, it measures how quickly a system can run complex queries over large volumes of data and deliver answers to business-critical questions. In other words, it demonstrates which databases and architectures are fastest.

For CIOs and CTOs who make technology buying decisions, and for analysts and other industry visionaries constantly scanning the landscape for companies disrupting the space, third-party benchmarks provide audited validation of a vendor’s claims.

Without that validation, buyers are left with unsubstantiated marketing claims, which essentially turn into a battle of “he-said, she-said.” Credible data matters to businesses of all sizes, so don’t you think independent, hard data on which to base buying decisions is critical to a startup’s success?

Big Data vendors realize that businesses today hold copious amounts of data and need analytics on that data that are fast and able to scale at a moment’s notice. Customers who find themselves waiting too long for answers will ultimately move to a product that can deliver them more quickly.

Customers also realize that businesses need to explore and analyze more data, and do it faster, in order to make better decisions and provide better service. Speed is a sustainable competitive advantage. Business data is expanding rapidly, and before committing to a solution, decision makers need proof that it can process terabytes of data at very high speed. Trailblazers in the Big Data industry recognize that this is where benchmarks come in.

In this industry, vendors of all shapes and sizes need third-party benchmarks to evaluate technologies accurately, because without them there is no objective standard. Every vendor claims its product is 10 to 100 times faster than all the rest.

Vendors also make confusing claims about relative speed based on smaller data sets, claims that don’t hold up once you are facing terabyte-scale data challenges.

Part of the reason third-party audited benchmarks are reliable is that every aspect of the hardware and software configuration is specified and priced. You can tell how large the data set was and how many nodes were in the cluster, among many other details. Benchmarks were created to provide uniform and meaningful measures of performance, and the resulting comparisons between systems allow customers to make informed decisions about applications such as business intelligence and data analytics.

And if a startup’s claims are backed by a third-party benchmark, it may just unseat the established leaders in the space. Of course, most customers will still want to test with their own data and volumes, but audited benchmarks give them a sound basis for shortlisting the best candidates.

The technology community, and startups in particular, is better off when results are measured in a reliable and repeatable fashion. In the Big Data industry, only then can we get on with the business of implementing data architectures that support real-time business.

If you run a software company, validated benchmarks such as TPC-H give you confidence that you deliver on your promises and set you apart from other vendors.

When all is said and done, don’t you want your startup’s claims validated by a third party so you can take on your competitors? Weigh in with your thoughts and experiences on third-party benchmarks in the comments section below.

Mathias Golombek is chief technology officer of EXASOL. He joined the board of EXASOL AG in November 2013 and is currently in charge of the company’s technology department. Golombek began his career as a software developer for EXASOL in 2004. He first headed the data-optimization team before taking charge of research and development in 2006.
