CEOs and boards are pushing senior managers throughout their businesses to embrace big data. Case studies carrying huge ROI numbers circulate quickly and broadly, spurring the expectation to follow, or even lead, innovations in data-driven processes.
Companies have already invested trillions of dollars in data and databases. Today, there are certainly opportunities to bring that data to bear in reducing cost and improving performance, but most of those strategies take time and substantial effort. With leadership looking for results now, the race is on to deliver a win for big data and buy time for other strategies to develop.
Good news for supply chain executives: the best answer may be procurement optimization. By investing strategically in big data analytics, procurement executives can find significant value in optimizing smaller, under-the-radar vendors. By simply organizing and analyzing data that up to now was not worth the effort, companies could find up to four times the savings they get out of procurement optimization for their biggest suppliers.
New Data Tools
While the supply chain can be considered a “data-rich” environment, the disparate nature of the underlying data makes unified analysis exceedingly difficult, obscuring long-tail opportunities to improve efficiency and save money. Further, traditional “top-down” and “rule-based” approaches to solving this problem quickly become impractical at the scale and variety of data in enterprise ERP, PLM and supply chain systems – precisely where procurement data typically lives. This is why analysts can spend as much as 80 percent of their time preparing data, leaving only 20 percent for actual analysis and making that analysis cost-effective only in the largest cases.
Our data shows that in most companies, high-spend suppliers represent only 20 to 25 percent of the opportunity for procurement optimization. Those contracts are already highly consolidated and competitively bid. Invoicing, quality control and delivery times are closely watched. These are not relationships that typically improve significantly with big data analysis.
In the long tail of smaller suppliers, however, inefficiencies creep in, and the work of investigating and rectifying them has rarely been cost-effective. Policies that control contracts on factors like cost or quality can expose companies to inefficiencies in inventory commitments and delivery delays. The problem is not identifying causes of waste, but building a comprehensive enough view of these relationships to cut waste without damaging performance on other axes.
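The head-versus-tail split described above can be made concrete with a simple cumulative-spend calculation. The sketch below is illustrative only; the supplier names and figures are invented, not drawn from the survey data:

```python
# Illustrative long-tail spend analysis: rank suppliers by annual spend
# and compute each one's cumulative share of the total.
# Supplier names and amounts are hypothetical.
spend = {
    "Supplier A": 4_000_000,
    "Supplier B": 3_000_000,
    "Supplier C": 1_500_000,
    "Supplier D": 800_000,
    "Supplier E": 400_000,
    "Supplier F": 300_000,
}

total = sum(spend.values())
cumulative = 0
head, tail = [], []
for name, amount in sorted(spend.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += amount
    # Suppliers inside the top 80% of spend form the closely managed "head";
    # everyone after that is the long tail where inefficiencies hide.
    (head if cumulative / total <= 0.80 else tail).append(name)

print(f"head: {head}")   # the few large, already-optimized contracts
print(f"tail: {tail}")   # the many small vendors worth a unified look
```

In real use, the spend dictionary would be replaced by cleansed data pulled from the ERP and procurement systems, but the ranking logic is the same.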
KPMG recently reported in “The Power of Procurement: A Global Survey of Procurement Trends” that among surveyed procurement leaders, half believe their analytics tools fail to meet basic needs and fewer than 20 percent use predictive analytics at all. Why should they? The systems used to capture this information are often dedicated to single functions or geographies, creating silos of data that are difficult to integrate and forcing downstream analytics to fight with both hands tied behind their backs. The push to apply big data is a boon to procurement managers who know inefficiencies exist, but also know that their current analysis tools are no match for the long tail of vendor relationships on their own.
This is a problem tailor-made for big data, once a handful of key challenges standing between the raw data and the relevant insights are solved. Among them:
- Seeing all of the data: Up to 90 percent of the data in any organization is “dark” – unknown to anyone but its primary user. Discovering and unifying all data sources is a prerequisite for many big data projects, and it certainly applies to procurement.
- Seeing all of the data for what it is: It is nearly impossible to enforce uniform data formats across an organization, and it is often perilous to adopt a single “golden record” that supersedes its source documents. Differences like “model #” versus “model” versus “model number” seem trivial, but can matter a great deal to downstream applications and users.
- Herding the masses and culling the outliers: Executives – or even interns – should not have to churn through tens of thousands of data sources to ensure that “ST” in an address always lands in the “state” field, but not every call is that easy to make. Assigning data to the right set requires both the brute power of machine learning and the deft hand of subject matter experts.
- Minding the gaps: Where critical data is missing, invalid analysis may not be the biggest problem. Bad or missing payment dates, for example, can corrupt forecasts and reports, or even trigger substantial fees from late payments.
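The simplest of these steps – harmonizing field-name variants and flagging missing critical fields – can be sketched in a few lines. This is a hypothetical example; the aliases and records are invented for illustration, and a production system would add machine-learned matching and expert review on top:

```python
# Map the naming variants different source systems use for the same field
# onto one canonical name, then flag records with critical fields missing.
# Aliases and sample records are hypothetical.
ALIASES = {
    "model #": "model_number",
    "model": "model_number",
    "model number": "model_number",
    "pay date": "payment_date",
    "payment dt": "payment_date",
}

def unify(record: dict) -> dict:
    """Rename known field-name variants to their canonical name."""
    return {ALIASES.get(k.strip().lower(), k.strip().lower()): v
            for k, v in record.items()}

def missing_critical(record: dict, required=("payment_date",)) -> list:
    """List required fields that are absent or empty after unification."""
    return [f for f in required if not record.get(f)]

rows = [
    {"Model #": "X-100", "Pay Date": "2024-03-01"},
    {"model number": "X-200", "payment dt": ""},   # gap: no payment date
]
clean = [unify(r) for r in rows]
print(clean[0])                      # keys: model_number, payment_date
print(missing_critical(clean[1]))    # ['payment_date']
```

The point of the sketch is the shape of the pipeline: unify first, then audit for gaps, so that every downstream report sees one consistent schema.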
Insights on Parade
Once the data is readied for analysis, excellent tools are available to quickly evaluate it and produce candidate strategies for cost reduction, supply chain reorganization, vendor optimization and more. In short, these tools let you treat the great mass of smaller relationships as you would your largest supplier, improving your overall position with vendors on cost, flexibility and risk.
For example, in a matter of weeks, the board and CEO demanding big data efficiencies could be reading reports on:
- Finding the highest-quality part, at the lowest price, across all of our suppliers.
- Optimizing supplier payment terms across all business units and territories.
- Reducing supplier risk by matching internal data with external sources on vendor viability.
The first step is embracing the reality of extreme data variety within the supply chain, enabling analysts to evaluate spend opportunities using data generated across all supply chain systems.
As organizations continue to invest heavily in information systems, unified datasets will prove critical to answering some of the most impactful P&L questions across the business. But with proper data preparation, procurement could leap out of the gate, delivering the big win for big data that puts company leadership at ease knowing they rule over a truly modern, data-driven enterprise.