DCF Attacks & Valuation Litigation

This analysis is based, in part, on my forthcoming article, “Why Does Pseudoscience Still Thrive Under Daubert? The Case of Discounted Cash Flow,” Hofstra Journal of International Business and Law, Volume 25, Issue 1 (2025).

Discounted cash flow models are a problem in high-stakes litigation.

You know it, and I know it.

Courts know it too, with one observing recently that “the parties’ experts presented such wildly divergent discounted cash flow models that, in the end, the models were unhelpful to the court.” Fir Tree Value Master Fund, LP v. Jarden Corp., 236 A.3d 313, 315 (Del. 2020).

Judicial opinions expressing this frustration aren’t hard to find:

  • In Manichaean Capital, LLC v. Sourcehov Holdings, Inc., C.A. No. 2017-0673-JRS, 2020 BL 32903 (Del. Ch. Jan. 30, 2020):

After completing their valuation analyses based on several approaches, the experts agree that a discounted cash flow analysis [] is the most reliable tool…. Of course, they disagree on multiple crucial inputs in their DCF analyses, and these disagreements have placed the Court in the now familiar position of grappling with expert-generated valuation conclusions that are solar systems apart. Good times….

  • In re Jarden Corp., No. 12456-VCS, 2019 BL 267465 (Del. Ch. July 19, 2019):

At the close of the trial, I observed, “[w]e are in the classic case where . . . very-well credentialed experts are miles apart. . . . There's some explaining that is required here to understand how it is that two very well-credentialed, I think, well-intended experts view this company so fundamentally differently.”

  • In Wheelabrator Bridgeport, LP v. City of Bridgeport, 133 A.3d 402 (Conn. 2016), the Supreme Court of Connecticut affirmed a trial court’s unwillingness to accept discounted cash flow because the approach lacked credibility:

[If] the discounted cash flow … approach process were credible, then two experienced and knowledgeable appraisers who are given the same basic facts and who use the same income approach would not be over $200,000,000 apart in their valuation of the subject property.

DCF Isn’t Measurement—It’s Counterfactual Storytelling

Discounted cash flow models don’t measure value—they create it. That distinction matters.

In litigation, the pretense that DCF is a method of financial “measurement” cloaks it in scientific respectability it hasn’t earned. But DCF is not an instrument like a ruler or a thermometer. It’s a thought experiment—a counterfactual exercise.

What DCF does is construct a narrative: that this company, under these projections, subject to these adjustments, and discounted at this rate, would generate a stream of future cash flows worth exactly this much today.

But those future cash flows haven’t happened—and almost surely never will.

Good counterfactual reasoning—as philosopher David Lewis explained in his 1973 book Counterfactuals—is constrained by principles of plausibility and proximity to reality.

That is, not all hypotheticals are created equal. Some are closer to the real world than others. The key idea is that a good counterfactual reflects what might have happened, not what a modeler wishes had happened or what makes the math cleanest.

DCF doesn’t ask: What would have happened under slightly different circumstances? It asks: What happens if we believe everything the modeler assumes? That’s not reasoning. That’s make-believe.

In litigation, surrendering to that make-believe can be worth billions. Courts are presented with “expert” DCFs that deviate wildly—not because one is right and one is wrong, but because both are built on untested assumptions selected to support a side. When those assumptions differ—about future growth, margins, terminal values, discount rates—the entire construct shifts.

There’s no gravitational pull back toward reality, because DCF doesn’t orbit any empirical anchor.

The Math Doesn’t Save It

DCF seduces with structure. It wears the costume of precision: formulas, spreadsheets, cascading line items, and discount factors calculated to the third decimal place. But none of that math redeems the model’s fundamental unreliability. It conceals it.

The basic DCF formula—value equals the sum of future expected cash flows discounted back to the present—is clean and elegant. But as every litigator and investor knows, elegance in theory often collapses in practice.

The problem isn't the math itself. It's that the math is used to process guesses. Future revenue? A guess. Future margins? A guess. Terminal value? Often just a hand-waving perpetuity. Discount rate? A soup of assumptions about “risk,” beta, capital structure, and more.

Even a slight change in any of these inputs can move a DCF result by hundreds of millions of dollars. Yet the formula keeps humming, as if the math somehow purifies the speculation.

It doesn’t. The math here is like arithmetic applied to invented quantities—it works internally, but it doesn’t produce truth.

In scientific terms, DCF is non-identifiable: there’s no unique set of inputs that corresponds to a verifiable output. Multiple combinations of plausible assumptions can produce radically different results, none of which can be tested or confirmed against ground truth. In finance as in science, if a model can’t be tested, it can’t be trusted.
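That fragility is easy to demonstrate. Below is a minimal sketch of a two-stage DCF in Python; every number in it is a hypothetical assumption chosen for illustration, not data from any real company or case. Shifting the discount rate and terminal growth rate by amounts any expert could plausibly defend moves the answer by more than half, and nothing in the model can say which answer, if either, is right.

```python
# A minimal two-stage DCF, written to demonstrate the sensitivity and
# non-identifiability described above. Every input here is a
# hypothetical assumption chosen for illustration, not real data.

def dcf_value(cash_flows, rate, terminal_growth):
    """Sum of discounted projected cash flows plus a Gordon-growth
    terminal value -- the standard textbook structure."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (rate - terminal_growth)
    return pv + terminal / (1 + rate) ** len(cash_flows)

projections = [100.0, 110.0, 121.0, 133.0, 146.0]   # five-year forecast ($M)

low  = dcf_value(projections, rate=0.11, terminal_growth=0.015)
high = dcf_value(projections, rate=0.09, terminal_growth=0.030)

# Two sets of facially "plausible" inputs, two very different answers --
# and no observable ground truth to say which (if either) is correct.
print(f"low:    {low:,.0f}")
print(f"high:   {high:,.0f}")
print(f"spread: {high / low - 1:.0%}")
```

Note that every line of the arithmetic is correct. The spread comes entirely from judgment calls that no court can verify.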

DCF thrives in this ambiguity. Experts present models that are internally coherent, mathematically “correct,” and yet wildly disconnected from each other. Judges see this all the time—two seasoned economists, two sets of cash flows, two discount rates, and a gulf of billions between them. But because the math appears sound, these forecasts are treated as rigorous evidence instead of speculative fiction.

We need to stop confusing calculation with reliability. Courts don’t need more decimal places—they need fewer illusions. And legal teams don’t need experts who “tune” a DCF until it fits their case. They need someone who can explain why the math doesn’t make it science.

DCF at Its Worst Is Pseudoscience

DCF is often treated as a financial science. But in litigation, it too frequently behaves more like pseudoscience.

That’s not just rhetoric—it’s a specific claim about how DCF operates: with the appearance of rigor but without the characteristics that make a model scientifically credible.

Pseudoscience resists falsification. It cannot be disproven because it can be adapted to accommodate nearly any outcome. That’s DCF in litigation. If the plaintiff’s case needs a higher valuation, growth rates rise, margins widen, or terminal values stretch. If the defense wants something lower, discount rates climb, projections become conservative, and the terminal value compresses. All of this happens inside the DCF framework without breaking the model. That’s the point: the model is designed not to fail.

In this way, DCF resembles frameworks long recognized as pseudoscientific—astrology, psychoanalysis, even alchemy. Each has detailed methods, technical terminology, and internal consistency. What they lack is empirical constraint. No matter how many times they miss, their proponents can adjust the inputs and try again. That’s not analysis. That’s a confidence game.

What keeps DCF alive in litigation isn’t performance, but persistence. Courts keep accepting it. Lawyers keep offering it. Experts keep billing for it. But that’s not scientific legitimacy—that’s institutional inertia. Much of what passes for “accepted methodology” in litigation economics survives not because it works, but because it doesn’t break down under cross-examination fast enough.

That’s why I call DCF a pseudoscientific framework. It uses the trappings of science—math, structure, expert authority—to generate an illusion of objectivity. But remove the surface polish, and you’re left with a storytelling machine disguised as a valuation tool. The question isn’t whether DCF can be used responsibly. The question is whether, under adversarial conditions, it ever is.

For investors, GCs, insurers, and trial teams, this isn’t academic. When your side accepts a DCF, you’re entering a theater where the scenery looks scientific, but the script is written for effect.

CAPM and Beta Are Dead—Let’s Stop Pretending Otherwise

If DCF is the house of cards, then CAPM and beta are the shaky foundation on which it rests. And that foundation has already collapsed—just not in the courtroom.

In finance, the capital asset pricing model (CAPM) is functionally obsolete. As far back as the 1990s, Eugene Fama and Kenneth French—whose names are practically synonymous with empirical asset pricing—made it clear in their paper “The CAPM Is Wanted, Dead or Alive” that the model doesn’t work in the real world. CAPM’s core idea—that risk is captured by beta, a single number comparing a stock’s movement to the market—is both elegant and wrong. The data has moved on. Finance has moved on. The courts have not.

In litigation, CAPM continues to masquerade as a rigorous way to calculate a discount rate. Experts routinely pull historical betas from Bloomberg or Value Line, adjust them with hand-waving techniques (“we unlevered and relevered the beta…”) and drop them into a formula to produce a “cost of equity.” That number—however fragile—then becomes a key input to the DCF. And once it’s in, it sticks.

I have proposed it myself, simply because that is how generally accepted practice works.

But CAPM does not predict returns. We know this. Beta does not explain risk. These points are not controversial in finance—they are resolved. If a PhD finance student used CAPM as the backbone of their dissertation today, they would fail. But that same model, built on disproven assumptions and abandoned by academia, is still welcomed into litigation under the guise of expert methodology.

It’s worth stating clearly: no serious financial economist believes beta is an adequate or reliable measure of risk. That isn’t fringe—it’s consensus. Even textbook authors now acknowledge that CAPM is useful mainly as a historical artifact or a conceptual benchmark. It belongs in lectures about the history of asset pricing—not in federal courtrooms.

Yet because CAPM is entrenched in DCF practices, it continues to be used. This is how pseudoscientific models survive: they become embedded in procedures and normalized through repetition. The danger isn’t just bad math. It’s the institutional acceptance of flawed methodology as a kind of legal fiction—convenient, familiar, and quietly destructive.

Here’s the consequence: billions of dollars turn on a risk adjustment that has no credible grounding in modern finance. Courts are asked to believe that a number—beta—has predictive value, that this number can be transformed into a cost of capital, and that the resulting discount rate is meaningfully accurate. But none of that is true.


What the Courts Say When They’re Paying Attention

Courts are often polite in their skepticism—but sometimes, their frustration breaks through. And when it does, the truth about DCF becomes hard to miss: judges are weary of it. Not because they misunderstand it, but because they understand it too well. They've seen the wild divergences, the castles built on speculative sand, and the experts straining for credibility under cross. The record is filled with judicial fatigue.

The opinions I set out at the beginning of this analysis are not outliers. They’re snapshots of a pattern: courts confronted with sophisticated-sounding models producing completely irreconcilable results. At some point, the problem is not the inputs—it’s the method.

But here’s the tension: despite these acknowledgments, DCF persists. Courts may grumble, but they rarely exclude it. Instead, they “weigh” it, often splitting the difference, or triangulating between flawed models as if the average of speculation might be truth.

This is where litigants must be more assertive. Don’t just accept that DCF will be part of the case. Challenge it. Make the court look hard at its internal contradictions. Use these cases to show that DCF is not a neutral instrument—it’s a tactical device whose reliability depends entirely on who’s wielding it.

And when your case turns on value—whether you're an investor, insurer, in-house counsel, or lead trial lawyer—that challenge can be the difference between a loss and a win.

Daubert Was Supposed to Fix This

When the Supreme Court decided Daubert v. Merrell Dow Pharmaceuticals in 1993, it gave federal courts a mandate: act as gatekeepers. Admit only expert evidence that is both relevant and reliable. No more deference to credentials alone. No more cloaking speculation in the language of science. The Court laid out a clear framework—testability, error rates, peer review, general acceptance—and gave judges the discretion and duty to enforce it.

So why, more than 30 years later, is DCF still gliding through courtrooms mostly unchallenged?

It’s not because it meets the Daubert standard. It’s because litigants and judges alike have mistaken its familiarity for reliability. The method is “generally accepted,” they reason—after all, it’s taught in finance classes, appears in business textbooks, and has been used in valuation for decades. But that’s not what Daubert meant by general acceptance. The gatekeeping test isn't about popularity—it's about whether the method is anchored in empirical rigor and scientific reasoning.

DCF fails on both counts.

It is not testable—its outputs can't be verified against reality, because they're based on hypothetical futures that never occurred. It has no known error rate—how could it, when every valuation is unique and unrepeatable? And while it has certainly been written about and applied, DCF is not subject to meaningful peer review in the contexts where it matters most—litigation models constructed for courtroom performance, not academic scrutiny.

In Kumho Tire Co. v. Carmichael, the Supreme Court extended Daubert to technical and specialized knowledge, emphasizing that courts must assess how an expert’s method is applied—not just what the method is. In other words, courts shouldn’t rubber-stamp a technique just because it has a name. They must look at the logic, the execution, and the reliability of the application in the specific case.

Yet in valuation disputes, that scrutiny is rare. DCF is often admitted without challenge, or challenged only on inputs rather than on admissibility. That’s a missed opportunity. Because Daubert isn’t just a shield to block bad science—it’s a sword to cut through the illusion of it.

Here’s the strategic point: DCF models don’t deserve to be admitted under Daubert. They can’t be tested. They aren’t reliable in practice. And their continued use reflects habit, not scientific validation. That’s not gatekeeping. That’s abdication.

How DCF Wins by Immunizing Itself from Cross

DCF doesn’t survive litigation because it’s strong. It survives because it’s slippery. Once admitted, DCF resists attack—not by withstanding scrutiny, but by absorbing it. That’s what makes it so dangerous.

By design, DCF diffuses responsibility across a fog of assumptions. You can’t cross-examine a formula, and you rarely have enough ammunition to demolish all the inputs. Did the expert overestimate long-term growth? Maybe. Use an aggressive terminal value? Perhaps. Apply an unjustifiably high discount rate? Arguably. But even if you land a few punches, the expert shrugs: “That’s why I tested multiple scenarios.” And the model stays standing.

DCF creates a kind of rhetorical armor. Because it appears so technical—so intricate and exhaustive—judges often assume that any problems must be minor, subjective tweaks rather than fundamental flaws. But the truth is the opposite. The entire model turns on judgment, and small shifts in that judgment can swing outcomes by billions. Yet because those judgments are embedded in a forest of spreadsheets and citations, they’re rarely seen for what they are: litigation strategy masquerading as finance.

This is how DCF immunizes itself. It creates the illusion that it can be corrected through adversarial challenge—when in fact it’s engineered to survive challenge. Cross-examination becomes a ritual, not a reckoning. The expert concedes a few inputs, explains the tradeoffs, reiterates the model’s robustness, and walks off the stand with the valuation intact. Meanwhile, the factfinder is left to “weigh the evidence” between competing DCFs that disagree wildly—but look equally polished.

That’s not a fair fight. That’s theatrics.

The answer isn’t to cross harder. It’s to reframe the battle. The goal should not be to nitpick the inputs—it should be to expose the structure. To show the judge or jury that DCF isn’t a model that can be patched with better numbers. It’s a storytelling device, and in litigation, it’s always telling the version of the story that helps the side paying for it.

This is why clients bring me in. Not to fine-tune a forecast. To challenge the very premise that speculative projection—no matter how carefully constructed—deserves to be treated as economic fact. And to give legal teams the language and leverage to make that case compelling.

DCF doesn’t win because it’s good. It wins because it’s trusted. And trust is not reliability.

What Should Replace DCF in the Courtroom?

If DCF is broken, what comes next?

Not nothing. Courts still need valuation evidence. But they need evidence grounded in reality—not in hypotheticals. That means putting empirical anchors back at the center of the conversation. It means favoring what markets actually did over what experts think they might have done. And it means recognizing that in litigation, proximity to real-world transactions is more valuable than the polish of a model.

Start with the most obvious replacement: market prices. When markets are active and informed, the trading price of a company’s stock or bonds is often the best indicator of value. Yes, markets can be noisy. But they are observable. Testable. Constrained. You can’t adjust them into alignment with your theory. That’s what makes them powerful.

Next: comparable transactions. When market prices aren’t available or reliable, look at what similar companies have sold for in arm’s-length deals. This approach isn’t perfect—nothing is—but it brings valuation back to a place of revealed preference, where actual buyers and sellers are putting real capital behind their estimates. That’s infinitely more credible than a five-year forecast written for litigation.
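A comparables check has another virtue: the whole calculation fits on a page and can be audited line by line. The sketch below uses hypothetical deal multiples and a hypothetical EBITDA figure; observed prices go in, an implied value comes out, and there is nothing hidden to tune.

```python
# A sketch of a comparable-transactions check: value the subject company
# off the median EV/EBITDA multiple paid in observed deals.
# All figures are hypothetical placeholders, not real transaction data.
from statistics import median

# Multiples actually paid in arm's-length deals -- observable, not forecast.
observed_multiples = [8.2, 9.0, 7.5, 8.8, 9.4]

subject_ebitda = 250.0          # subject company's trailing EBITDA ($M)
m = median(observed_multiples)  # robust to a single outlier deal
implied_ev = m * subject_ebitda

print(f"median multiple: {m:.1f}x, implied enterprise value: {implied_ev:,.0f}")
```

The median is used rather than the mean so one outlier deal can't drag the answer; that choice, like the deal set itself, is visible and contestable rather than buried in a spreadsheet.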

If counterfactual reasoning is unavoidable—as it sometimes is in damages cases or bankruptcy scenarios—then discipline must come from logic, not modeling flourish. Good counterfactuals are close to the actual world, as David Lewis and decades of philosophical work have emphasized. The best way to evaluate a counterfactual claim is to ask: What else would have happened, and how close is that to reality as we know it?

In that framework, we don’t need DCF. We need reasoned scenarios constrained by real evidence—actual financial performance, past behavior by similar firms, or known industry dynamics. And we need to make explicit which assumptions are carrying the weight. No hiding behind formulas. No burying the key driver in footnotes. Transparent assumptions, logical structure, real-world discipline.

In short: courts and litigants should replace DCF with valuation methods that lose flexibility but gain credibility. The goal isn’t to predict the future. The goal is to estimate value in a way that holds up when someone says, “Show me how you got there.”

DCF can’t be fixed. But the valuation conversation can be reset.

Litigation Strategy: Attack the Model, Not Just the Inputs

If you’re heading into litigation where valuation matters—and you’re up against a DCF—do not fall into the trap of arguing over inputs. That’s the battlefield DCF was designed to win.

DCF invites you to haggle over assumptions: growth rates, margins, discount rates, terminal values. It tempts you to nibble at the edges. But that’s a mistake. Because once the model itself is accepted, the fight becomes one of inches—when what you need is a veto.

The real strategy is to attack the model itself. Not for being “aggressive” or “unrealistic,” but for being inherently unreliable. For failing the standards that courts are supposed to apply to expert evidence. For pretending to be measurement when it’s just speculation in a lab coat.

This isn’t a matter of rhetoric—it’s admissibility. When a DCF is central to the other side’s case, don’t just cross the expert. Move to exclude. Use Daubert. Force the court to confront the fact that DCF, as deployed in litigation, is not scientific, not testable, and not constrained by anything resembling empirical rigor.

That motion won’t always win. But even if it doesn’t, you’ve reframed the narrative. You’re not saying, “Their number is wrong.” You’re saying, “Their whole method doesn’t belong here.” That changes the tone in the courtroom—and it sharpens the court’s skepticism when the expert testifies.

Litigators who do this well don’t treat DCF like a tool that needs recalibration. They treat it like an untrustworthy witness. They don't argue about how a valuation was reached—they argue about whether that approach can reach anything reliable at all.

That’s what I help clients do. I bring the technical fluency to dissect the model, the legal judgment to make the admissibility case, and the credibility to stand toe-to-toe with the opposing expert—whether on the stand, in a brief, or behind the scenes helping you shape the strategy.

Valuation disputes are often the hinge in high-stakes litigation. If DCF walks in unchallenged, it usually walks out victorious. But when you challenge it head-on—not just for how it was used, but for what it is—you give the court a way out. You give your client a shot at winning the valuation war, not just the valuation skirmish.

So the next time a DCF lands on your desk, don’t sigh and accept the game. Flip the board. Stop accepting DCF without a fight.