Why do we need the data supply chain?
The complexity of digitalization projects is constantly increasing. Today, they are about much more than selecting and implementing new technologies and transferring existing work processes to a new digital environment: corporate reality is networked and full of dependencies. This is especially true for central software systems such as ERP, PIM or CRM. A change of PIM system touches every key business area, affects numerous data processes and has consequences for all relevant interfaces.
This makes a clear focus on the individual data processes all the more important. Where, how and by whom is the data in question created, imported, manipulated, enriched or used? Which data quality rules apply, and why? Who ensures that they are adhered to, and how are cross-system data processes and differing data models handled within the company? Companies in the midst of their transformation into data-driven organizations are asking themselves all of these questions, because they no longer view digitalization projects in isolation, limited to a narrow set of business processes.
What is the data supply chain?
The data supply chain is a modern implementation method for digitalization projects of all kinds. To take account of the networked reality of the company, it begins with a comprehensive analysis of the system architecture, the central data models and the individual data flows. It is important not only to document the current state of these data flows precisely, but also to define the target processes together with all stakeholders at the start of the project. The result is a clear, detailed target picture that guides the project team and against which the new processes and solutions are measured after implementation.
The data supply chain therefore encompasses all important data processes in a company. Like a map, it visualizes the central use cases of the individual business areas and shows connections, synergies and interactions. This creates a degree of transparency that is essential today for making the right decisions within the digitalization strategy and for measuring the success of projects comprehensively. The individual data flows must be described very precisely: it must be clear what happens with which data at which point, and which software systems are affected by each data process. Before we go into detail about how success is measured this way, let us briefly explain the difference between the information supply chain and the data supply chain.
Data Supply Chain vs. Information Supply Chain
While the data supply chain traces the individual data flows, the information supply chain describes a company's system landscape in three levels: data procurement, data production and data distribution. Typical data procurement systems are ERP, PIM, DAM, MDM, CRM and business intelligence; this is where central company data is procured from the various data sources or created for the first time, and made available to other systems and data recipients. In data production, data is transformed or localized for global markets – typical systems here are multi-language management, data-driven publishing, marketing resource management or workflow management systems. Data distribution is about transferring the data to the various communication channels – these can be store systems, CRM systems, online marketplaces, mobile apps, a GDSN data pool or syndication applications such as feed management tools.
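To make the three levels tangible, here is a minimal sketch of how a company could catalogue its systems by level. The `Level` enum, the mapping and the helper function are illustrative assumptions for this article, not part of any specific product:

```python
from enum import Enum

class Level(Enum):
    PROCUREMENT = "data procurement"    # data is sourced or created for the first time
    PRODUCTION = "data production"      # data is transformed, translated or enriched
    DISTRIBUTION = "data distribution"  # data is pushed to communication channels

# Illustrative mapping of system types to information supply chain levels
SYSTEM_LEVELS = {
    "ERP": Level.PROCUREMENT,
    "PIM": Level.PROCUREMENT,
    "DAM": Level.PROCUREMENT,
    "multi-language management": Level.PRODUCTION,
    "data-driven publishing": Level.PRODUCTION,
    "online store": Level.DISTRIBUTION,
    "feed management": Level.DISTRIBUTION,
}

def systems_at(level: Level) -> list[str]:
    """Return all catalogued systems that sit at the given level."""
    return [name for name, lvl in SYSTEM_LEVELS.items() if lvl == level]

print(systems_at(Level.PRODUCTION))  # ['multi-language management', 'data-driven publishing']
```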
A data flow can pass through several of these systems and data levels. For example, a data flow at a retail company can look like this (see the code sketch after this list):
I. Product data is imported from the supplier portal and transferred to the company’s own data model.
II. The product data is then translated into the languages of the three target markets.
III. The product information is published in the online store together with the product images.
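As a hedged illustration of this three-step flow, the following sketch wires the steps together as a simple pipeline. All names (`import_from_supplier_portal`, `translate`, `publish_to_store`, the `Product` fields) are hypothetical placeholders for whatever connectors a real PIM landscape provides:

```python
from dataclasses import dataclass, field

@dataclass
class Product:
    sku: str
    name: str  # source-language name
    translations: dict[str, str] = field(default_factory=dict)
    image_urls: list[str] = field(default_factory=list)

def import_from_supplier_portal(raw: dict) -> Product:
    """Step I: map a supplier record onto the company's own data model."""
    return Product(sku=raw["article_no"], name=raw["title"])

def translate(product: Product, target_markets: list[str]) -> Product:
    """Step II: translate the product name for each target market.
    A real pipeline would call a translation management system here."""
    for lang in target_markets:
        product.translations[lang] = f"[{lang}] {product.name}"  # placeholder translation
    return product

def publish_to_store(product: Product) -> None:
    """Step III: publish the product information together with its images."""
    print(f"Publishing {product.sku}: {product.translations} images={product.image_urls}")

# Run the three-step data flow for one hypothetical supplier record
record = {"article_no": "A-1001", "title": "Trekking backpack 30 l"}
publish_to_store(translate(import_from_supplier_portal(record), ["de", "fr", "it"]))
```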
Measuring the success of digitalization projects
The data supply chain therefore provides the perfect basis for continuously measuring and optimizing the efficiency of internal data flows. This is increasingly important, because conventional measures of success such as return on investment (ROI) are based solely on monetary metrics and therefore fall far short of the mark.
Important key performance indicators for measuring the data performance value are:
I. Process transparency
II. Efficiency benefits
III. Impact on quality
IV. Competitive advantage
V. Company transparency
VI. Data excellence
VII. Cost savings
These KPIs are collected regularly and documented at three levels (see the sketch after this list):
I. Original performance
II. Expected performance
III. Current performance
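A minimal sketch, assuming a simple in-memory structure, of how these measurements could be recorded per KPI. The KPI names come from the list above; the `PerformanceRecord` class, its fields and the 0-100 scale are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class PerformanceRecord:
    kpi: str         # e.g. "process transparency" or "cost savings"
    original: float  # baseline measured before the project started
    expected: float  # target value defined with the stakeholders
    current: float   # latest regularly collected measurement

    def gap_to_target(self) -> float:
        """How far the current value still is from the expected value."""
        return self.expected - self.current

# Illustrative readings on an assumed 0-100 scale
records = [
    PerformanceRecord("process transparency", original=35, expected=80, current=62),
    PerformanceRecord("cost savings", original=0, expected=15, current=4),
]

for r in records:
    print(f"{r.kpi}: {r.current} of {r.expected} (gap {r.gap_to_target()})")
```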
In this way, the development of the entire data supply chain can be tracked and documented clearly and transparently. Inefficiencies and weak data flows are identified directly, and appropriate measures can be taken. For example, a review of the data performance value may reveal that the data does not meet the required quality at a certain point. The concept of the data supply chain makes it easy to identify the causes of such quality problems and thus to eliminate them quickly.
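To illustrate how quality problems can be localized along a data flow, here is a hedged sketch: the same quality rules are evaluated after each stage of a flow, so the first stage whose output fails a rule points to the cause. The rules, stage names and records are invented for this example:

```python
# Quality rules: name -> predicate over a product record (illustrative)
RULES = {
    "has description": lambda p: bool(p.get("description")),
    "has at least one image": lambda p: len(p.get("images", [])) > 0,
}

# Snapshots of one product record after each stage of a data flow
snapshots = {
    "after import": {"description": "Trekking backpack", "images": []},
    "after media enrichment": {"description": "Trekking backpack", "images": ["front.jpg"]},
}

for stage, record in snapshots.items():
    failed = [name for name, rule in RULES.items() if not rule(record)]
    print(f"{stage}: {'OK' if not failed else 'FAILED: ' + ', '.join(failed)}")
# The image rule fails directly after import, which localizes the cause of the problem.
```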
Another finding may be that no data flow in the company actually leads to a competitive advantage – for example, because the go-to-market is too slow or innovation cycles are not provided for in the process landscape at all. Those responsible can respond by modeling corresponding processes or by taking measures to shorten the time to market, for example with the help of workflow automation or AI.
How is your data supply chain doing?
Do you know the central data processes in your product content lifecycle management? Make a non-binding appointment with our experts and find out how the Data Supply Chain can support you on your way to becoming a data-driven company!