The Variable Investors Aren’t Pricing In

Why policy is still treated as a constraint when it should shape investment design from the outset.

Capital is moving rapidly into emerging technologies, from artificial intelligence to advanced energy systems to neurotechnology. The investment case is compelling, but the risks are less straightforward, and when underweighted, their downstream consequences can far exceed what was originally modeled.

Sophisticated investors already account for regulatory risk. What remains unclear is whether policy risk is integrated early enough to influence how capital is deployed, rather than addressed after key decisions have already been made.

Incorporating these variables before decisions are locked in introduces real complexity: longer timelines, greater uncertainty, and harder tradeoffs. As a result, policy is often deferred rather than integrated at the outset.

That pattern is familiar and can be costly, not only for investors, but for the people and industries these technologies are intended to serve.

When Policy Catches Up to Capital

The issue is not what drives disruption, but how quickly innovation scales and how far the governance that follows lags behind.

Consider opioids. The drug proved to be a highly lucrative market opportunity, but how many firms modeled the regulatory backlash that followed? Firms that mass-prescribed a highly addictive substance failed to account for the policy constraints and harms that prescribing would generate. The result was catastrophic: approximately 806,000 lives lost between 1999 and 2023, and litigation costs reaching over $50 billion.

Social media followed a similar trajectory. Platforms expanded to billions of users before leadership fully grappled with content liability and human cost, particularly among young people. Internal research from Meta documented deteriorating mental health outcomes in adolescents years before it became public. The policy response—over 45 states introducing legislation restricting youth access in 2025 alone—arrived a decade late and at significant cost.

These examples do not justify curtailing investment in major societal advancements. They weren’t failures of innovation, but failures of anticipation. The risks of rapidly scaling transformative technologies without adequate governance systems, across both immediate and long-term horizons, are real, costly, and often predictable.

The technologies examined below present a different challenge. These are not bad-faith cases but areas of genuine uncertainty, in a landscape where regulatory frameworks have not kept pace, which is precisely why policy integration at the front end is so valuable.

Balancing Innovation & Governance

Geothermal’s baseload case is strong. If scalable and cost-effective, it could materially expand the global clean energy mix and strengthen the U.S. position in energy competition. But fracturing granite at industrial scale raises unresolved questions around aquifer impact, induced seismicity, and land use rights. These questions do not remain technical for long. They become political, and when they do, they shape the speed at which projects move forward. According to the International Energy Agency, financing for next-generation geothermal reached nearly $2.2 billion in 2025, an 80 percent increase year over year. Capital is accelerating ahead of the regulatory oversight required to support it.

Artificial intelligence is eliminating entire categories of work. Goldman Sachs estimates that 300 million jobs are exposed to automation globally, and the World Economic Forum projects that 92 million could be displaced by 2030. Whether AI ultimately replaces more jobs than it creates is contested. The policy response, reflected in bipartisan proposals like the AI Workforce PREPARE Act, is already forming as if the answer is yes.

Accountability is also a critical factor. As AI systems begin to influence determinations of liability, hiring, and even criminal sentencing, the question becomes more fundamental: who is responsible when life-altering decisions are no longer fully made by humans?

Neurotechnology may be the least governed frontier of all, and perhaps the most consequential for quality of life. Research published in Neurology projects brain disorders will increase globally by 22 percent by 2050. The upside is enormous: restoring function, treating neurological disease, and expanding human capability.

Capital is being deployed globally to advance these efforts. In Texas, the state has committed $3 billion over a ten-year period to catalyze brain health research through the Dementia Prevention Research Institute of Texas. Companies like Neuralink are advancing early human trials. OpenAI has co-funded a brain-computer interface startup, Merge Labs, with an expected valuation of $850 million. The United Kingdom’s Advanced Research and Innovation Agency has committed £69 million to a precision neurotechnology program focused on interfacing with neural circuits. China has announced ambitions to become a global leader in BCI technology by 2027, with a competitive industry by 2030. 

Investment is clearly increasing, but the frameworks to govern it are still underdeveloped.

Neuralink’s trials have already surfaced the kinds of questions that were not resolved at the investment stage, including deficiencies flagged by the FDA, electrode migration complications, and federal investigations into animal welfare practices. No comprehensive structure yet addresses neural data privacy, informed consent, or device failure liability. Who owns neural data? Can it be subpoenaed? Can it be sold? Can it be used by governments or insurers?

These risks are visible in advance, yet technology is being deployed directly into the human nervous system faster than governance can keep pace. The question is how seriously those risks are incorporated into investment decisions.

Continuous monitoring of the policy environment remains essential, but by the time risks are actively managed in execution, many of the core design choices have already been made. In emerging technologies, not all longer-term implications can be modeled with precision. That is not a realistic standard. What matters is whether policy considerations guide investment at the design stage, before capital is committed and options narrow.

Early integration matters, but sustained attention is equally important as technologies mature and regulatory environments respond. In the opioid and social media cases examined above, earlier and more continuous engagement with emerging governance signals could have meaningfully reduced adverse outcomes.

The Bottom Line

Policy foresight is not a one-time assessment. It is a continuous discipline.

The opioid crisis did not emerge because the product failed. It emerged because the system around it was not fully accounted for. Social media did not falter because the technology was flawed. It faltered because governance was not built alongside it.

That pattern has the potential to repeat. With emerging technologies like geothermal, neurotechnology, and AI, the tradeoffs are more complicated, not less. But unresolved questions do not disappear. When they are deferred, they tend to resurface later, under greater pressure. In many cases, those risks could have been addressed earlier in the investment process.

Anticipating policy does not slow innovation. It determines whether innovation survives contact with reality. 
