A matter of balance

Optimized supply chains need to be designed for disruptions. By Emile Naus

The supply chain industry has been using the word ‘optimization’ for a very long time. Some of the earliest applications of Operational Research were to optimize supply chains and networks, from clever heuristics through to mathematical optimization modelling.

Over the past few years, we have seen substantial disruption to our supply chains, from trade barriers, natural disasters and conflicts to key transport routes getting blocked. There is a key lesson we should take from this: disruption happens, and it is typically unforeseen and not included in the optimization exercises.

For the past 30 years, the global economy has been dominated by a low cost of capital, easing trade restrictions and a relentless focus on lowest cost. Whilst there are clear exceptions at a local level, this has driven a behaviour where the core objective of the optimization has been to reduce costs.

As a consequence, we now have long lead times to allow us to source based on lowest cost per unit, driving significant inventory levels and supply chains with many nodes. Chains break at their weakest link, and by increasing both the number of nodes and the size of those nodes, we have created fundamentally fragile supply chains. It took a single container vessel to halt the flow of goods through the Suez Canal in 2021. When Covid started, many countries couldn’t react quickly because they didn’t have the infrastructure required for PPE manufacturing and distribution. And the war in Ukraine has highlighted our reliance on core materials, such as oil and gas, wheat and steel.

We will need to fundamentally rethink how we design our supply chains. Using increasingly sophisticated models, with simpler user interfaces, has led to black-box solutions where data goes in and the answer comes out. But these models are only as good as the data that is fed into them and the assumptions made by the modelers and software developers.

We need to create more resilience and flexibility, which means substantially rethinking how we optimize our supply chains. There are a number of factors to consider:

We need to change the mindset. Supply chains contain significant risk elements, and they need to be factored into the decision process. Simplistic models that just look at cost and capacity miss these elements.

We need to model for disruption. Sensitivity analysis must go beyond the typical ‘what if costs go up by 20 percent?’; we need to build serious risk analysis into the models.

We need to change the optimization logic to include risk and flexibility. This goes against the black-box nature of many of these models, but the ability to understand, and change, the inherent logic of the models is critical.

Supply chains have significant ‘external’ (and often invisible) costs, such as environmental and social impacts. It is easy to ignore them, but at some stage these costs will become ‘internal’ and therefore very visible. Some supply chain models explicitly include carbon footprint in their costs, but wider social elements around sourcing need to be considered as well.

When we are evaluating supply chain models, we need to consider the working capital that results from the design. Longer supply chains with inherent risks will carry more inventory. The cost of this inventory (both in terms of working capital and obsolescence risk) is typically not included in the models and is often ignored.

All this does not mean that we have to stop using the optimization models; it means we need to enhance how these models work, and we need to enhance our understanding and interpretation of the results. We need to take a wider view of what data and logic need to go into the models, and we need a mindset that is much broader than reducing the (narrowly defined) operating costs. There are different options for this. We can simply convert various factors to costs and include them in the optimization logic. At a simple level, we can charge for working capital and for carbon footprint. Disruption is harder to quantify, but it is feasible to test multiple scenarios with varying degrees of cost impact.
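To make the idea concrete, here is a minimal sketch of charging for working capital and carbon footprint alongside the landed cost of goods. All suppliers, prices, lead times and the capital rate are hypothetical figures invented for illustration, not data from any real model.

```python
# Illustrative sketch: extending a lowest-unit-cost comparison with a
# carbon charge and a working-capital charge. All figures are hypothetical.

def total_cost(unit_cost, units, carbon_kg_per_unit, carbon_price_per_kg,
               lead_time_days, annual_capital_rate=0.08):
    """Cost of goods plus a carbon charge and a working-capital charge.

    The working-capital charge approximates the cost of inventory in the
    pipeline: stock value held for the lead time, financed at an assumed
    annual cost of capital.
    """
    goods = unit_cost * units
    carbon = carbon_kg_per_unit * units * carbon_price_per_kg
    working_capital = goods * annual_capital_rate * (lead_time_days / 365)
    return goods + carbon + working_capital

# Two hypothetical suppliers: a distant low-unit-cost source with a long
# lead time and large footprint, and a nearer, slightly pricier one.
far  = total_cost(unit_cost=10.0, units=1000, carbon_kg_per_unit=2.5,
                  carbon_price_per_kg=0.25, lead_time_days=60)
near = total_cost(unit_cost=10.3, units=1000, carbon_kg_per_unit=0.8,
                  carbon_price_per_kg=0.25, lead_time_days=10)
print(f"far:  {far:,.0f}")
print(f"near: {near:,.0f}")
```

With these (invented) numbers, the supplier that wins on unit cost no longer wins once the carbon and working-capital charges are included — which is exactly the shift in the objective function being argued for here.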

Scenario modelling is often ignored in network design, but the ability to simulate the network under different conditions would be a powerful way to include these elements. Running a large number (thousands if not millions) of scenarios is perfectly feasible with modern infrastructure and would allow each one to be optimized. The overall result would then need to include a balanced view across all scenarios.
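A toy sketch of that scenario approach, with entirely hypothetical designs, costs and disruption probability: two candidate networks are evaluated across many randomly sampled disruption scenarios, and the balanced view combines the average cost with the cost in the worst tail of scenarios.

```python
# Illustrative sketch of scenario modelling: compare two hypothetical
# network designs (single-source vs dual-source) across many randomly
# sampled disruption scenarios, then take a balanced view of the results.
import random

random.seed(42)  # reproducible sampling for the illustration

def annual_cost(design, disrupted):
    """Hypothetical annual cost of a design under one scenario.

    'single' is cheaper to run but incurs a large expediting/lost-sales
    penalty when its sole source is disrupted; 'dual' pays a premium for
    redundancy but absorbs disruption cheaply.
    """
    base = {"single": 100.0, "dual": 112.0}[design]
    penalty = {"single": 80.0, "dual": 6.0}[design]
    return base + (penalty if disrupted else 0.0)

def evaluate(design, n_scenarios=100_000, p_disruption=0.15):
    """Mean cost and expected cost in the worst 5% of scenarios."""
    costs = sorted(annual_cost(design, random.random() < p_disruption)
                   for _ in range(n_scenarios))
    mean = sum(costs) / len(costs)
    tail = costs[int(0.95 * len(costs)):]   # worst 5% of scenarios
    return mean, sum(tail) / len(tail)

for design in ("single", "dual"):
    mean, tail_cost = evaluate(design)
    print(f"{design:6s} mean={mean:6.1f}  worst-5%={tail_cost:6.1f}")
```

Under these made-up numbers the two designs have similar average costs, but the single-source network looks far worse in the tail — the kind of trade-off a cost-only optimization would never surface.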

The end result will be much more balanced decision making that considers long-term risk, environmental considerations and working capital as well as operating costs.

Emile Naus is Partner in Operations at BearingPoint in London. BearingPoint supports businesses from the development of supply chain strategy through to the implementation and benefit realization.