Eliminating Vaccine Mandates


If you’re following the news lately, you know that Democrats, and a few Republicans, are up in arms about eliminating vaccine mandates. Some fear they won’t be able to get vaccines when they need them. But no one is being denied vaccines. The government is simply allowing people to make their own decisions, without mandates that force everyone to comply. What’s the science behind this new approach?

Dr. Marty Makary, the new Commissioner of the Food and Drug Administration, explains in an op-ed published in The Wall Street Journal. Here’s his explanation:

“The Food and Drug Administration last week approved Covid-19 vaccines for adults over 65 and for people 6 months and older who have one or more risk factors that put them at high risk of severe Covid. This regulatory framework brings the U.S. in line with peer nations—in France such vaccines are recommended for people over 80 and in the U.K., for people over 75. Although the world has moved on to a risk-tiered approach, some in the American medical establishment are maintaining their blind faith in a strategy of boosters for all in perpetuity. They should consider these six points:

First, the FDA can approve products only if we believe there is substantial certainty that the benefits outweigh the risks. Currently, we don’t have that confidence for, say, a seventh Covid shot for a healthy 12-year-old girl who recently recovered from Covid.

Second, if your doctor deems that you need a Covid vaccine, you can still get one. The FDA can’t regulate the practice of medicine. The FDA grants marketing authorizations, but doctors are able to prescribe drugs off label to people at low risk. In a few states, pharmacists may require a prescription.

Third, as part of our approval the FDA is demanding that all companies run a clinical study assessing whether the new Covid shots improve outcomes in healthy subjects. These post-market randomized trials will use a true placebo control (salt water) so we will learn the side-effect profile. The FDA has asked, and the companies have agreed, to monitor if the spike protein—the molecule the vaccine creates in the body—persists after vaccination. Several studies have shown the persistence of spike protein, which some believe contributes to adverse events. The FDA’s mandate is to compel companies to generate evidence to answer questions Americans have.

Fourth, some argue that Covid shots are needed for those who live with or take care of immunocompromised or elderly people. But no company has ever submitted data to the FDA to prove that Covid shots, which do not halt transmission, prevent caregivers from infecting vulnerable people. Accordingly, the FDA has never allowed companies to advertise that claim.

Fifth, some claim that Covid shots mean kids are less likely to miss school and adults more likely to attend work. No vaccine manufacturer has ever submitted data to the FDA to validate these claims either. Similarly, some argue that repeat doses prevent long Covid. Again, the FDA has no data to support this claim and has never allowed manufacturers to make it. As part of our new framework, we have asked sponsors to measure long Covid symptoms.

Sixth, the FDA’s new framework essentially ends mandates. Mandates were contentious, and ultimately a policy error, pushing hundreds of thousands of people out of work, without clear evidence of helping the public. Since the FDA isn’t approving a vaccine for the healthy school-age and working population, college and school mandates will be legally impossible. Accordingly, the FDA is revoking the emergency-use authorization for Covid vaccines. The emergency is over. The FDA will now return to an evidence-based standard.

Ultimately, no one knows how many shots a healthy person should get in his natural life, or if a healthy person who already had Covid benefits from a seasonal dose. The FDA’s new framework preserves vaccines for those who might most plausibly benefit, while generating evidence for most Americans, who are voting with their feet (or their upper arms) against an eternal annual booster campaign.”

Price Controls and the Pharmaceutical Industry – Part II


In Part I of this series, we discussed the history of the pharmaceutical industry and learned that Europe used to dominate the market. Only since the turn of the twenty-first century has the United States come to dominate the industry. Today, we will discuss how this change came about.

Sally Pipes’s new book “The World’s Medicine Chest: How America Achieved Pharmaceutical Supremacy―and How to Keep It” (2025, Encounter Books), examines the policies and economic forces that helped the United States overtake Europe as the global leader in drug innovation.

Jonathan Miltimore, writing in The Epoch Times, explains what happened. He says, “It’s no mystery why this happened. As Charlie Munger famously said, ‘Show me the incentive, and I’ll show you the outcome.’”

The quote highlights a point that Pipes understands well. She argues that the decline of Europe’s pharmaceutical industry was “a direct and predictable result” of price control schemes.

Companies are far less likely to spend billions on R&D if they aren’t allowed to charge enough to recover their investment and fund future breakthroughs. With the average R&D cost of new drugs hovering at about $2 billion, even modest limits on expected returns can tip the risk-reward balance and deter investment in promising but uncertain therapies. (Pipes, citing journalist Matthew Herper, notes that the median cost is closer to $6 billion.)

While “obscene” pharmaceutical profits dominate today’s headlines, Pipes points out that 90 percent of all U.S. prescriptions are for generic drugs—the highest percentage in the world. Meanwhile, strong patent protection and flexible pricing have fueled innovation, leading to a steady pipeline of new treatments and medical breakthroughs. This framework, as Pipes describes it, is a “collaboration between public and private entities,” which preserves financial incentives for pharmaceutical companies while harnessing the strengths of both sectors to drive innovation and bring new therapies to market.

While the U.S. currently enjoys pharmaceutical hegemony, Pipes points out that the U.S. appears driven to make the same errors as Europe. The Inflation Reduction Act, passed in 2022, “green-lit price controls in the Medicare market for the first time.” Beginning in 2026, the federal government will set prices on select medicines. At the same time, the pharmaceutical industry is growing increasingly unpopular, and not just with Democrats hostile to corporate profits.

“Just 13 percent of Republicans supported the pharmaceutical industry in 2023, down from 45 percent in 2020,” Pipes writes.

All of this, Pipes contends, stands to destroy the pharmaceutical innovation that for generations has improved the lives of Americans (and countless people who reside in countries that freeload on America’s entrepreneurial system while capping prices at home).

Pipes rightly warns that federal price controls would be a disaster. Undermining the incentive structure that has produced so many life-saving drugs would be a grave mistake. But it’s worth asking whether such a path becomes inevitable once the federal government moves beyond its constitutional role of protecting individual rights and becomes a primary funder of medical research and a dominant purchaser of treatments.

That was Europe’s trajectory—something that Pipes makes clear—and it’s hard to see how the United States avoids a similar outcome without deeper structural reforms than the ones proposed in this book. The real question is which is more important: cheaper drugs, or more innovative drugs that can treat diseases that were untreatable yesterday?

Price Controls and the Pharmaceutical Industry – Part I


Everyone knows that the United States is the leading developer of new pharmaceutical drugs. But I was surprised to learn it was not always that way. In fact, in the 1980s, Europe dominated the global pharmaceutical market. How did this change?

Jonathan Miltimore, writing in The Epoch Times, gives us an important history lesson on the pharmaceutical industry. He says, “It may surprise some today that a European company launched the world’s first AIDS treatment, but in the 1980s, it made sense: Europe dominated the global pharmaceutical market, introducing 129 novel drugs in the late ’80s, compared with just 77 in the United States.”

Europe no longer rules the Rx roost, however. Today, the U.S. dominates pharmaceutical innovation and sales, accounting for roughly half of all new drugs, compared with just 22 percent for European firms.

Sally Pipes’s new book “The World’s Medicine Chest: How America Achieved Pharmaceutical Supremacy―and How to Keep It” (2025, Encounter Books), examines the policies and economic forces that helped the United States overtake Europe as the global leader in drug innovation.

To say that the U.S. won pharmaceutical supremacy might be generous. Pipes, president of the free-market Pacific Research Institute, spends the first few chapters of her book showing how European countries fumbled away their dominance through bad policies, particularly a fondness for price controls.

Price controls have been failing for thousands of years, and Pipes shows in painstaking detail how these policies destroyed the incentive structure necessary for drug innovation.

To be fair to Europeans, their decision to resort to price controls didn’t happen in a vacuum. They were largely the byproduct—a “natural consequence,” in Pipes’s words—of a different government scheme: universal health care.

Within a decade of establishing the National Health Service, the U.K. introduced the Voluntary Price Regulation Scheme (1957), a policy framework designed to control the prices of prescription drugs. The bureaucracy’s role soon expanded to include determining what was a “reasonable” amount of profit for a company to make. France and Germany followed, passing their own price control schemes in the late 1980s and early 1990s, and by 2004, every European country had price controls on prescription drugs.

Pipes shows that the sky didn’t fall in Europe immediately. For a while, everything seemed fine. Europeans enjoyed cheaper drugs, although accessibility was occasionally an issue because of price controls. Meanwhile, the continent continued to enjoy pharmaceutical dominance, outpacing the United States in pharmaceutical research and development (R&D) spending, employment, and sales.

As late as 1995, European companies still made up half of the world’s 20 largest pharmaceutical firms by revenue. Yet the writing was already on the wall. The golden age of European pharma was coming to an end, a development that even the EU had anticipated. “It is hard to escape the conclusion that the United States, rather than Europe, is now the main base for pharmaceutical research and development for therapeutic innovation,” the European Commission glumly concluded in 1994.

The EU Commission was not incorrect in its pessimism. By 2002, U.S. companies held claim to 60 percent of global pharmaceutical profits, compared with less than 20 percent for European companies. By 2004, the United States was attracting 80 percent of total R&D spending.

To her credit, Pipes considers other possible explanations for Europe’s pharmaceutical stagnation, including the idea that it stems from the United States’ “world-leading university system.” But as she notes, the data complicate that theory: European scientists published 120,000 pharmaceutical papers from 2017 to 2019—far more than the 72,000 published in the United States—yet relatively little of that research translates into viable drugs or commercial breakthroughs (at least in the European market).

Why did this happen? We’ll discuss that in our next post.