
On GPU Depreciation
Over the last couple of months there has been strong commentary from investors, many of whom we admire, surrounding the current depreciation schedules of GPUs; that is, questions as to whether current 5-year depreciation schedules are accurate. The prevailing argument amongst value-skewed investors is that these schedules are, in the worst case, completely out of distribution and, in the best case, at the very edge of the feasible range. Fear coursed through FinTwit and the AI/semis community alike, especially around articles warning that if server lives were reduced to one to two years, it could shave $2 trillion to $4 trillion of market cap off current big tech valuations. That says nothing of the theoretical effect on smaller datacenters and neoclouds that own clusters. Many of these players don’t have locked-in take-or-pay contracts, and don’t have the same balance sheet flexibility that hyperscalers have to update their existing compute base.
We believe many of these fears are misplaced and stem from a core misunderstanding of the concept in question: what depreciation truly means.
Depreciation, at its core, is an accounting model for the periodic economic cost of capital expenditures. In layman's terms, it’s a proxy for the cost of using your “fixed” resources (GPUs, networking equipment, etc.). This cost flows into a company’s income statement, the point of which is to map the current economic standing of a business. To that end, operating profit (EBIT) and earnings are both proxies for economic profit, before and after taxes and financing costs, respectively. And because earnings are, in theory, our best model for economic profit, investors care a lot about them, particularly in public markets. This is part of the reason stocks tend to trade on an EV/EBIT(DA) or P/E multiple basis. So, when depreciation schedules are longer than the true useful life, they overstate earnings and therefore inflate valuations in public markets.

To put some meat on the bones, consider a company that generates $10 in revenue and owns a $6 asset depreciated straight-line over 3 years, i.e. a $2 annual depreciation cost (with no other costs). Say public company A trades at 5x EV/EBIT; it would then be valued at $40. Now, if the true useful life were 2 years, the annual depreciation cost should have been $3, and the company is suddenly (on the same multiple) worth $35. Same company, same cash flows, different valuation in public markets.
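A minimal sketch of that arithmetic, assuming straight-line depreciation on the $6 asset and using only the illustrative numbers above:

```python
# Illustrative only: toy numbers from the example above, not a valuation model.
def implied_ev(revenue: float, asset_cost: float,
               useful_life_years: int, ev_ebit_multiple: float) -> float:
    """Enterprise value implied by a straight-line depreciation schedule."""
    annual_depreciation = asset_cost / useful_life_years
    ebit = revenue - annual_depreciation  # no other operating costs assumed
    return ebit * ev_ebit_multiple

# Same $6 asset, same $10 of revenue, same 5x multiple; only the schedule changes.
print(implied_ev(revenue=10, asset_cost=6, useful_life_years=3, ev_ebit_multiple=5))  # 40.0
print(implied_ev(revenue=10, asset_cost=6, useful_life_years=2, ev_ebit_multiple=5))  # 35.0
```

Nothing about the business changed between the two lines; only the assumed useful life did.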
We have strong conviction that the way depreciation is being thought about in public markets today is disconnected from the hardware reality. Namely, we suspect that physical hardware degradation is less a driver of the decrease in GPU value than the technical obsolescence that comes with new architecture releases and training regimes. Per Applied Conjecture, “an older, fully depreciated A100, while slower than a new B200 for a single, latency-sensitive query, can be highly cost-effective for throughput-sensitive workloads. When running large, batched workloads, the A100 can be driven to high utilization delivering a lower TCO for that workload than a brand new, expensive B200 that might be under-utilized”. Yes, A100s decrease in value, but it’s new tech that drives this, not hardware failure. We see similar real-world examples today as well: “Azure announced the retirement of its original NC, NCv2, and ND-series VMs (powered by Nvidia K80, P100, and P40 GPUs) for August/September 2023. Given these GPUs were launched between 2014 and 2016, this implies a useful service life of 7-9 years. And more recently, the retirement of the NCv3-series (powered by Nvidia V100 GPUs) was announced for September 2025, approximately 7.5 years after the V100’s launch.”
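To make the TCO intuition concrete, here is a toy sketch; the hourly costs, token throughputs, and utilization figures are placeholder assumptions, not benchmarks:

```python
# Toy sketch of the throughput-TCO argument. Every number below is a placeholder
# assumption (hourly cost, tokens/sec, utilization), not a measured benchmark.
def cost_per_million_tokens(hourly_cost: float, tokens_per_second: float,
                            utilization: float) -> float:
    """Dollar cost to serve one million tokens at a given average utilization."""
    effective_tokens_per_hour = tokens_per_second * 3600 * utilization
    return hourly_cost / effective_tokens_per_hour * 1_000_000

# A fully depreciated A100 kept busy on large, batched workloads...
a100 = cost_per_million_tokens(hourly_cost=0.80, tokens_per_second=1500, utilization=0.90)
# ...versus a new, pricier B200 that is faster per query but sits partly idle.
b200 = cost_per_million_tokens(hourly_cost=5.00, tokens_per_second=6000, utilization=0.35)
print(f"A100: ${a100:.2f}/M tokens vs. B200: ${b200:.2f}/M tokens")  # ~$0.16 vs. ~$0.66
```

Under these assumed figures the older card wins on cost per token simply because it is cheap to run and fully utilized; the point is the shape of the trade-off, not the specific numbers.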
All Models Are Wrong, But Some Are Useful
Many operators and investors alike have taken the usefulness of depreciation as an accounting construct and extrapolated the idea that it is even an approximate proxy for the economic depreciation of a GPU as it is used over time. But as mentioned earlier, the reality is that most of the economic depreciation comes from tech obsolescence, which is extraordinarily difficult to predict. Depreciation (the accounting metric) then has no real grounding in the GPU use case; it’s more of a hand-wavy relationship companies are required to slap on in the name of GAAP accounting and to appease financiers. Where depreciation is useful is for equipment that doesn’t suffer from the same idiosyncratic tech risk that GPUs do: equipment whose loss of value is driven by usage.
This is not to say that GPU accounting depreciation is completely meaningless. We already know that in public markets it can drive earnings and therefore prices. But even on the private side, depreciation very critically provides a tax shield: it is recognized as an accounting cost and is therefore deductible. Accelerated depreciation in early years increases the present value of those tax shields, improving cash flow timing and investment returns. Datacenters are therefore incentivized toward depreciation schedules skewed to whatever they believe is most beneficial from a fiscal reporting standpoint. And indeed, true GPU depreciation is a massive risk for datacenters. But attempting to use accounting depreciation schedules to model it accurately is an impossible challenge.
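As a rough sketch of the timing effect, assuming a 21% tax rate, a 10% discount rate, and an illustrative front-loaded schedule (all of these are assumptions for the example, not a recommendation):

```python
# Sketch of the tax-shield timing effect. Tax rate, discount rate, and the
# front-loaded schedule are assumptions for illustration only.
def pv_of_tax_shields(depreciation_by_year: list[float], tax_rate: float,
                      discount_rate: float) -> float:
    """Present value of the tax savings a depreciation schedule generates."""
    return sum(
        d * tax_rate / (1 + discount_rate) ** (year + 1)
        for year, d in enumerate(depreciation_by_year)
    )

asset_cost = 100.0
straight_line = [asset_cost / 5] * 5            # 5-year straight line
front_loaded = [40.0, 24.0, 14.4, 10.8, 10.8]   # same $100 total, recognized earlier

print(pv_of_tax_shields(straight_line, tax_rate=0.21, discount_rate=0.10))  # ~15.9
print(pv_of_tax_shields(front_loaded, tax_rate=0.21, discount_rate=0.10))   # ~17.0
```

Both schedules expense the same $100; the front-loaded one simply pulls the tax savings forward, which is worth more in present-value terms.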
Concerns over GPU depreciation initially arose among public investors worried about overstated earnings. That debate then spiraled into one about the useful life of GPUs. We believe both debates miss the point: depreciation is simply an accounting metric, and the useful life of GPUs today is driven principally by future tech, not hardware malfunction. As a result, depreciation can be used to datacenters’ benefit (within reason) from an accounting standpoint. And should datacenters be concerned about their true economic GPU depreciation, they should explore financial products to manage that risk.