Ornn is expanding its derivatives platform to include memory futures. As the market leader in compute derivatives, we've built the infrastructure that defines how the industry prices, trades, and hedges GPU compute. Now we're extending that same institutional-grade framework to memory, the other critical input in AI infrastructure economics.

Memory has long been one of the most volatile components of the technology supply chain, with price swings of 250% or more within a single year. Yet until now, buyers and sellers have had no standardized way to hedge this exposure. Our memory futures contracts are already trading with deep liquidity, giving data center operators, semiconductor firms, and financial participants the tools to manage risk and express views on memory pricing.

Our clients have been asking for this. The same organizations using Ornn to hedge GPU compute exposure are managing billions in memory procurement with no derivatives market to turn to. Modern AI workloads are increasingly memory-bound. Training large language models requires not just GPU compute but massive amounts of high-bandwidth memory, and the servers housing these chips require terabytes of system memory. The AI memory wall is real, and procurement teams are feeling it.
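To make the scale of that memory footprint concrete, here is a rough back-of-envelope sketch. It assumes the common rule of thumb of roughly 16 bytes of GPU memory per parameter for mixed-precision training with Adam (weights, gradients, master weights, and optimizer state), and ignores activation memory and parallelism strategy, so treat the numbers as illustrative rather than definitive.

```python
# Back-of-envelope estimate of the memory needed just to hold model state
# during LLM training. Assumes ~16 bytes per parameter (fp16 weights and
# gradients plus fp32 master weights and Adam optimizer state); activations
# and the chosen parallelism strategy add more on top.

def training_memory_gb(num_params: float, bytes_per_param: int = 16) -> float:
    """Rough high-bandwidth memory required for model state, in gigabytes."""
    return num_params * bytes_per_param / 1e9

if __name__ == "__main__":
    for params in (7e9, 70e9, 400e9):
        print(f"{params / 1e9:>5.0f}B params -> ~{training_memory_gb(params):,.0f} GB of model state")
```

Even a mid-sized model quickly outgrows a single accelerator's memory, which is why procurement teams end up buying memory at the same scale, and the same urgency, as compute.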
The supply side makes volatility inevitable. The memory market is dominated by three manufacturers: Samsung, SK Hynix, and Micron. This concentration creates supply dynamics that amplify price swings. Capacity decisions made years in advance collide with unpredictable demand from AI buildouts, consumer electronics cycles, and data center expansion.
Memory and GPU prices often move together during AI demand spikes, but they respond to different supply constraints. Offering derivatives on both allows participants to trade the spread, hedge more precisely, and construct positions that match their actual infrastructure exposure. This is the natural next step in building out the full financial stack for compute infrastructure.
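As an illustration of what trading the spread looks like, here is a minimal sketch of a long-memory, short-compute position. The contract names, prices, and lot sizes are hypothetical placeholders, not Ornn contract specifications; the point is only that each leg settles independently and the combined P&L expresses a relative-value view.

```python
# Illustrative compute/memory spread position. All names, prices, and lot
# sizes are hypothetical, not Ornn's published contract specifications.

from dataclasses import dataclass

@dataclass
class FuturesLeg:
    name: str
    entry_price: float   # price per unit at which the leg was opened
    lot_size: float      # units per contract
    contracts: int       # positive = long, negative = short

    def pnl(self, current_price: float) -> float:
        # Cash P&L of this leg at the current mark.
        return (current_price - self.entry_price) * self.lot_size * self.contracts

# Long memory futures, short GPU-compute futures: the position profits if
# memory pricing strengthens relative to compute pricing.
memory_leg = FuturesLeg("memory-index", entry_price=4.10, lot_size=10_000, contracts=+5)
compute_leg = FuturesLeg("gpu-hour-index", entry_price=2.40, lot_size=1_000, contracts=-20)

marks = {"memory-index": 4.55, "gpu-hour-index": 2.45}
spread_pnl = memory_leg.pnl(marks[memory_leg.name]) + compute_leg.pnl(marks[compute_leg.name])
print(f"Spread P&L at current marks: ${spread_pnl:,.0f}")
```

A hedger would size the two legs against their actual procurement mix rather than against a view, but the mechanics are the same.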
We're not launching into a vacuum. Ornn's existing network of market makers, institutional traders, and infrastructure operators gives us immediate liquidity in new products. The relationships we've built as the central venue for compute derivatives translate directly into memory. The participants are the same, the hedging needs are adjacent, and the trust is already established.
Contract specifications follow the same principles that made our GPU products successful: monthly settlement, cash settlement against transparent spot pricing, and lot sizes calibrated to real procurement volumes. Whether you're a hyperscaler locking in next quarter's memory costs or a fund expressing a view on the semiconductor cycle, the product is built for serious participants.
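For a sense of how cash settlement works for a buyer locking in costs, here is a minimal sketch. The prices, lot size, and settlement index below are hypothetical examples, not Ornn's published specifications; the takeaway is that the futures gain offsets the higher spot bill, so the buyer's effective price stays at the level they locked in.

```python
# Minimal sketch of cash-settled hedging for a memory buyer. All numbers and
# index names are hypothetical, not Ornn's published contract specifications.

def settlement_pnl(entry_price: float, settlement_price: float,
                   lot_size_gb: float, contracts: int) -> float:
    """Cash-settled P&L: (final index - entry price) * lot size * contracts."""
    return (settlement_price - entry_price) * lot_size_gb * contracts

# A buyer expects to procure 1,000,000 GB of memory next quarter and buys
# 10 contracts of 100,000 GB each at $3.80/GB. The spot index settles at $4.30/GB.
entry, settle = 3.80, 4.30
lot_size_gb, contracts = 100_000, 10

futures_gain = settlement_pnl(entry, settle, lot_size_gb, contracts)
extra_spot_cost = (settle - entry) * lot_size_gb * contracts

print(f"Futures gain:        ${futures_gain:,.0f}")
print(f"Extra spot cost:     ${extra_spot_cost:,.0f}")
print(f"Net effective price: ${entry:.2f}/GB")  # the hedge locks in the entry price
```

If spot instead falls, the futures leg loses what the procurement bill saves, which is exactly the certainty a hedger is buying.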