Verification Alone Is Not Enough — What the KelpDAO Exploit Reveals About Crypto Data Infrastructure


Key takeaways:

  • No crypto system should rely on infrastructure that creates a single point of failure
  • Data accuracy matters just as much: projects need a canonical system of record for blockchain data to be fully protected
  • Verification didn’t fail in isolation. The issue was the state it operated on
💡
Def box: What is a system of record (in crypto)?
A system of record is a data layer that produces consistent outputs from blockchain activity, independent from any single provider or interpretation.

Verification is often treated as the foundation of crypto infrastructure. The Kelp exploit shows that it isn’t.

Systems can execute perfectly and still produce the wrong outcome if they are relying on incorrect data. The real problem here is the absence of a reliable system of record.

The Kelp Exploit: When “Verified” Data Was Still Wrong

What Happened

On April 18, 2026, North Korean hackers drained over $293.7 million from Kelp DAO.

Kelp DAO, a liquid staking protocol, uses LayerZero’s infrastructure for verification. During the attack, bad actors tricked LayerZero’s cross-chain messaging layer into releasing 116,500 rsETH into their control. Normally, every unit of rsETH corresponds to real ETH deposited on Kelp and restaked through EigenLayer. With this hack, almost $293.7 million worth of rsETH was essentially created out of thin air with zero backing.

The impact extended beyond KelpDAO. The attacker used the unbacked rsETH in downstream protocols, borrowing WETH against it on Aave and draining millions of dollars from that protocol as well.

The Deeper Failure: Verified Doesn’t Mean True

This was not a traditional “hack” in the sense that a system was breached, broke down, or acted incorrectly.

Instead, from LayerZero’s perspective, nothing unusual had happened to trigger red flags. The message to create the unbacked rsETH followed the required conditions, was accepted, and execution followed.

The issue sits upstream of verification, at the level of the data being fed into the system. The state being acted on did not reflect reality, but there was no mechanism in place to challenge it. Once the attackers’ input cleared verification, every dependent system treated it as valid. The immediate cause was KelpDAO’s 1/1 DVN configuration, which created a single point of failure at the verification layer.

As CoinDesk writes:

“The exploit did not rely on breaking encryption or bypassing smart contracts. Instead, it exposed how fragile systems can become when they depend on layered assumptions.

In simple terms, the tools worked as designed. The way they were configured did not.”

But configuration alone doesn’t fully explain the outcome. Even well-configured systems ultimately depend on the data they ingest.

Why Verification Fails Without Independent Data Layers

Most infrastructure layers in crypto are designed to check whether data conforms to a set of rules:

  • Bridges validate messages
  • Oracles validate feeds
  • RPCs return chain state

Those checks are scoped to the system performing them — but they don’t establish whether the data is consistent across sources, or whether it holds up under recomputation. In practice, this means that multiple systems can process the same underlying activity and arrive at different answers, all while passing their own verification steps.
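A minimal sketch of this distinction, using hypothetical names: a cross-chain message that carries the required attestations passes verification even when the state it asserts is false. The payload, signer names, and 1-of-1 configuration below are illustrative, not LayerZero’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Message:
    payload: str       # e.g. "mint 116500 rsETH"
    signers: set[str]  # verifiers that attested to this message

# Hypothetical 1-of-1 verifier configuration: a single DVN must sign.
REQUIRED_SIGNERS = {"dvn-1"}

def verify(msg: Message) -> bool:
    """Rule conformance: does the message carry the required attestations?
    This says nothing about whether the payload reflects real deposits."""
    return REQUIRED_SIGNERS <= msg.signers

# A forged message signed by the one compromised verifier still "verifies".
forged = Message(payload="mint 116500 rsETH", signers={"dvn-1"})
assert verify(forged)  # passes the rules, yet asserts state that is false
```

The check is internally consistent and behaves exactly as configured; nothing in it establishes whether the asserted state holds up against an independent source.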

The Kelp DAO exploit is big enough to draw attention, but we’ve seen the underlying pattern before. Systems depend on upstream data they don’t independently reconstruct and different providers expose different interpretations of state. When those interpretations diverge, there’s no default mechanism to reconcile them.

Verification ensures consistency within a system. It does not guarantee consistency across systems.

The Real Problem: Crypto Has No Default System of Record

Many observers blame the Kelp DAO hack on reliance on a system that created a single point of failure. Avoiding a single centralized vulnerability is sound advice for any crypto project, but an accurate data layer is just as necessary a consideration.

Good crypto architecture is about more than just following best practices for multiple data integration points — a reliable, auditable, and canonical system of record for blockchain data should be a requirement of any secure project.

Blockchains Are Ledgers, Not Interpretations

Blockchains record what happened. They do not define what that activity means in a consistent or reusable way.

A transaction can be observed directly. But balances across chains, positions in protocols, or the state of an asset over time all require interpretation on top of those raw events. That interpretation is not built into the chain itself: it’s constructed by the systems that sit on top of it.

Every Layer Reinterprets the Same Data

Most infrastructure doesn’t operate directly on raw blockchain data. It depends on intermediate systems that ingest, structure, and expose that data in usable forms.

Each of those systems makes its own choices about how to represent state. Those choices shape how assets are tracked, how activity is grouped, and how edge cases are handled. The differences are often subtle, but they compound quickly. As a result, two systems can process the same underlying activity and arrive at different answers without either one being obviously wrong.

When Systems Diverge, There’s No Canonical Answer

When those differences surface, there isn’t a default mechanism to resolve them.

One system may show a balance that another does not. One may treat an asset as fully backed while another reflects a gap. Downstream systems still need to act, so they select a source and proceed.

At that point, the question becomes which version of the data was trusted, and whether there was any way to validate it independently.

That’s the gap the Kelp exploit exposed. The attackers’ malicious node forged a message to Kelp’s DVN while reporting accurate state to every other querier to avoid triggering security flags (and simultaneously conducting DDoS attacks on other LayerZero RPCs).

The system had no separate reference point against which to check the state it was acting on, so the attack succeeded: transactions that never took place were confirmed, and nearly $300 million in unbacked rsETH was drained.

What a System of Record Actually Requires

Independent Data, Not a Single Upstream Source

A system of record needs to build its own view of the data, rather than relying on a single upstream provider.

That means ingesting raw blockchain data and processing it through independent pipelines. The goal is to avoid inheriting errors or assumptions from any one source without a way to detect them.

Cross-Chain Normalization and Consistent Schemas

Once data is ingested, it needs to be structured in a consistent way across chains and protocols.

Without that consistency, the same concept can take on different meanings depending on where it comes from. A balance, a position, or a transfer may be defined slightly differently across systems, which makes it difficult to compare or reconcile them.

Normalization creates a shared structure so that different datasets can be evaluated against the same definitions.
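As a sketch of what normalization buys you, the snippet below maps two hypothetical provider-specific records of the same transfer into one shared schema. The field names and providers are invented for illustration; the point is that comparison only becomes possible after both records use the same definitions.

```python
# Hypothetical raw records from two providers describing the same transfer.
provider_a = {"tx": "0xabc", "from_addr": "0x1", "to_addr": "0x2",
              "value_wei": "1000000000000000000", "asset": "ETH"}
provider_b = {"hash": "0xabc", "sender": "0x1", "recipient": "0x2",
              "amount": 1.0, "symbol": "ETH"}

def normalize_a(r: dict) -> dict:
    # Provider A reports amounts as wei strings; convert to ETH.
    return {"tx_hash": r["tx"], "sender": r["from_addr"],
            "recipient": r["to_addr"],
            "amount_eth": int(r["value_wei"]) / 1e18, "asset": r["asset"]}

def normalize_b(r: dict) -> dict:
    # Provider B already reports ETH-denominated floats.
    return {"tx_hash": r["hash"], "sender": r["sender"],
            "recipient": r["recipient"],
            "amount_eth": float(r["amount"]), "asset": r["symbol"]}

# Under a shared schema, the two records can be compared field by field.
assert normalize_a(provider_a) == normalize_b(provider_b)
```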

Replayable and Auditable State

A reliable system needs to be able to recompute its outputs from underlying events. That includes reconstructing balances from transactions, rebuilding positions over time, and validating results at a specific point in time. If a number can’t be reproduced from its inputs, it can’t be independently verified.

Replayability turns outputs into something that can be checked, rather than something that has to be accepted.
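The idea can be sketched in a few lines: given raw transfer events, every balance is recomputed from scratch rather than read from a reported total, and the replay can be cut off at any point in time. The event shape below is a simplified assumption, not any particular chain’s format.

```python
from collections import defaultdict

# Hypothetical transfer events, ordered oldest first.
events = [
    {"block": 1, "from": None,  "to": "0x1", "amount": 10.0},  # mint
    {"block": 2, "from": "0x1", "to": "0x2", "amount": 4.0},
    {"block": 3, "from": "0x2", "to": "0x3", "amount": 1.5},
]

def replay(events, as_of_block=None):
    """Recompute every balance from raw events, optionally up to a
    specific block. Outputs are derived, never merely reported."""
    balances = defaultdict(float)
    for e in events:
        if as_of_block is not None and e["block"] > as_of_block:
            break
        if e["from"] is not None:
            balances[e["from"]] -= e["amount"]
        balances[e["to"]] += e["amount"]
    return dict(balances)

# A reported balance can now be checked against a recomputed one,
# including at a historical point in time.
assert replay(events)["0x2"] == 2.5
assert replay(events, as_of_block=2)["0x2"] == 4.0
```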

Detecting When Data Sources Disagree

Even with consistent structure, differences between sources will still occur. A system of record needs to surface those differences instead of silently choosing one version of the data. That creates a point where inconsistencies can be examined or resolved before they propagate further.

Without that step, systems continue to operate on whichever version of state they happen to receive.
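A simple sketch of that surfacing step, with invented data: instead of picking one provider’s view, the check diffs two views of the same balances and reports every account where they disagree. A non-empty diff becomes a point to halt and reconcile before acting.

```python
def diff_sources(source_a: dict, source_b: dict, tolerance: float = 0.0):
    """Surface every account where two providers disagree, instead of
    silently choosing one version of state."""
    discrepancies = {}
    for account in source_a.keys() | source_b.keys():
        a = source_a.get(account, 0.0)
        b = source_b.get(account, 0.0)
        if abs(a - b) > tolerance:
            discrepancies[account] = {"source_a": a, "source_b": b}
    return discrepancies

# Hypothetical balance views from two independent pipelines.
rpc_view     = {"0x1": 6.0, "0x2": 2.5}
indexer_view = {"0x1": 6.0, "0x2": 2.5, "0xatk": 116500.0}  # unbacked mint

# The divergence is flagged rather than absorbed downstream.
assert diff_sources(rpc_view, indexer_view) == {
    "0xatk": {"source_a": 0.0, "source_b": 116500.0}
}
```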

Execution Was Correct. The Data Was Not.

The Kelp exploit didn’t require anything to break at the execution layer.

Each system in the chain processed the inputs it received and behaved accordingly. The issue was that those inputs were adversarially manipulated before reaching the systems that verified them.

Once that state cleared verification, there was no independent reference point to challenge it. The system had no way to distinguish between a real transaction and one constructed through compromised infrastructure.

That’s a structural limitation. Most systems in crypto still depend on upstream data they don’t reconstruct or cross-check. When that data is adversarially shaped, the error doesn’t stay isolated. It moves through every layer that depends on it.

A system of record introduces a separate basis for comparison. It allows state to be recomputed, validated across sources, and checked before it is acted on.

This is the direction data infrastructure is moving toward. Platforms like Allium are built around this model, providing an independent, cross-chain data layer that reconstructs state from raw blockchain data and standardizes it across systems.

That doesn’t replace verification. It changes where correctness is established in the stack.

Without that layer, systems will continue to operate on whichever version of state they receive. The Kelp exploit has made that dependency all too visible.

As the environments crypto systems operate in become more adversarial, correctness cannot rely on any single verification layer or data source.

A system of record is not an optimization. It becomes a requirement.

The question is no longer whether systems verify data. It’s whether they have an independent way to prove that data is true.
