Chapter 2 in the R.M. Dolin book, “Truth and Trust in Crisis,” 2021
COVID is the first viral crisis of the digital age, with unprecedented access to data that should, in theory, result in high quality information driving good policy and decision making. COVID burst into our lives in February of 2020, with two academic models shocking our senses with dire predictions of death on a scale not experienced in generations. These models are quickly accepted by the government and media as canonical sources for truth and trust with no serious scientific peer review, and around them questionable COVID narratives develop. It is well understood in COVID postmortems that these models are so badly flawed their only possible utility had to be to promote panic and hysteria. We will go back to the essays and analyses I generated in 2020 to demonstrate what peer review should have and could have looked like if anyone in government, media, or the science communities had bothered with due diligence. Keep in mind as we critique the build-up to this crisis that all the analyses we work through not only offer a more reasonable, science-based examination of the data, but were made available to both government and media sources in real time, yet were ignored.
In the digital age, society increasingly relies on computer models and simulations; there's nothing magical about them, they're like opinions, anyone can construct one to match any desired outcome. The challenge is developing models that realistically capture a phenomenon's behavior in a manner that facilitates reasonable and responsible clairvoyance. Computer models are rule-based, meaning they follow a set of programming rules that, when executed, guide a simulation toward its solution. Programmers are free to build whatever set of rules suits them and are not bound by the constraints of the real world. For example, it's an easy programming feat to stipulate that when the temperature during a storm rises above eighty degrees, rainwater flows uphill, even though this is physically impossible. Any program containing this rule will produce models predicting summer storms cause rainwater to flow uphill.
Computer models have no inherent ability to recognize when they’re outrageous, absurd, or outright wrong, just as there are no constraints regarding how the results of a computer simulation are interpreted. When I taught graduate courses in numerical methods at the University of New Mexico, I’d begin my first lecture by introducing students to what I called the “Dolin Paradox,” which states
“All software programs, regardless of how poorly or well written, render an answer and that answer, regardless of how pristine the input data is and how perfectly the program is executed, is always wrong.”
To drive the implications of this paradox home, I'd pose the following problem: "Suppose a computer program has two input variables, a and b. The algorithm consists of execution code that multiplies a and b together (i.e., Output = a*b). The algorithm also contains a rule which states, 'If a is greater than zero, Output is positive.' Suppose a=2 and b=-3; what's the Output value? Discuss your assertion in terms of accuracy and precision."
We know mathematically that a * b = -6; however, the model's output is positive 6 due to our rule. A somewhat more obtuse point is that the model's output is not really 6. Numerical programs do not understand abstract concepts like mathematics; instead, they rely on the accuracy of the computer, the software compiler, and the input parameters to arrive at a value that is approximately 6, which leads to a discussion of roundoff error.
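To make the exercise concrete, here is a minimal sketch of that classroom example in Python; the function name and structure are my own illustration rather than anything from the course, but the flawed rule is the one stated in the problem.

```python
# A minimal sketch of the classroom example; the flawed sign rule is deliberate.

def toy_model(a: float, b: float) -> float:
    """Multiply a and b, then apply the rule: 'if a is greater than zero, Output is positive'."""
    output = a * b
    if a > 0:
        output = abs(output)  # the rule overrides the mathematically correct sign
    return output

print(toy_model(2, -3))  # prints 6, even though 2 * -3 = -6
```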
Then there's the matter of precision, which measures how often the same inputs, a=2 and b=-3, generate the same output to a specified level of accuracy. A lot of what's wrong in the academic models used by the government and media during COVID is captured in this simple example. At times the academic models are poorly written, at other times poorly executed, and they generally contain erroneous rules. Almost always, the simulation results are poorly interpreted.
In our programming problem, the software operates flawlessly even though the output is logically inconsistent (i.e., wrong). This is analogous to the slippery slope the COVID crisis found itself on. As software increases in complexity, the rules governing its behavior also increase due to efforts by programmers to capture every imagined caveat. As more rules are added, additional input parameters are needed to manage the caveats. These additional parameters can be thought of as knobs that can be "tweaked" to achieve desired outcomes. It's no different than the knobs used to adjust home stereos; by manipulating volume, balance, bass, and treble, listeners achieve their interpretation of stereophonic perfection.
Once a program is successfully compiled, each time it's executed, the operator tweaks input parameters to achieve desired outputs. In the case of the academic models used during COVID, one set of input parameters (i.e., assumptions) might output results indicating two million people die in the next month, while another set of assumptions could just as easily predict death takes a holiday and no one dies next month. In the hands of unqualified analysts, even well written programs can render flawed interpretations; a phenomenon witnessed repeatedly during COVID as unqualified academics and bureaucrats pretended to practice science. An analogy would be trusting me to overhaul my Cummins diesel: yeah, I'm an engineer, and yeah, I know a thing or two about engines, but I guarantee you any attempt on my part to overhaul an engine does not end well for me or the engine, because I'm not a skilled diesel mechanic.
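To see how far knob-tweaking can swing a projection, consider the toy sketch below. It is not the UK, UW, or any real COVID model; it is a generic exponential projection whose parameter values are hypothetical knobs chosen only to show how the same program can predict either catastrophe or nearly nothing.

```python
# A toy projection illustrating parameter sensitivity; not any real COVID model.
# Every value below is a hypothetical "knob," not a measured quantity.

def projected_deaths(initial_infected: float, daily_growth: float,
                     fatality_rate: float, days: int) -> float:
    """Naive projection: infections grow by a fixed daily rate, and a fixed fraction of the infected die."""
    infected = initial_infected * (1 + daily_growth) ** days
    return infected * fatality_rate

# Same program, two knob settings, two very different "truths":
print(projected_deaths(10_000, 0.40, 0.010, 30))  # roughly 2.4 million deaths in a month
print(projected_deaths(10_000, 0.02, 0.001, 30))  # roughly 18 deaths in a month
```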
The two early academic models used during COVID and treated as truth were never challenged, never peer reviewed, never even assessed to determine if they were logically consistent. The first model to become COVID gospel came from the prestigious Imperial College of London[1], which predicts in early March that, by October, 500,000 Brits and two million Americans will die from COVID. This alarming prediction causes widespread panic as we implicitly trust model projections from academic institutions as prestigious as the Imperial College of London.
To put their prediction in perspective, the UK population in 2020[2] is 67.9 million, while the US population is 331 million. This means the UK model projects 0.74% of the British population dies from COVID in the next seven months while 0.6% of the US population dies. Forget for a moment there's little evidence supporting this assertion and consider the difference between 0.74% of the British population and 0.6% of the U.S. population dying. From a practical standpoint, it could perhaps be argued that the British population is more concentrated, which should lead to higher death rates since there are 660 people per square mile in England[3] while America has only 435 people per square mile[4], and the social distancing imperative was heavily promoted. If we use population density as the causal factor, then to be consistent with 500,000 Brits dying by October, only 1.62 million Americans would die in the same period, not the two million the model predicts, which means population density is not a likely causal factor, and a fair question to have asked is why the inconsistency.
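For readers who want to check the arithmetic, the short Python sketch below reproduces those percentages and the density-scaled figure; the populations, projections, and densities are simply the numbers cited above.

```python
# Worked arithmetic for the figures cited above.

uk_pop, us_pop = 67.9e6, 331e6          # 2020 populations
uk_proj, us_proj = 500_000, 2_000_000   # Imperial College projections for deaths by October

uk_rate = uk_proj / uk_pop              # ~0.74% of the British population
us_rate = us_proj / us_pop              # ~0.60% of the US population

# If population density were the causal factor, scale the UK rate by the density ratio:
uk_density, us_density = 660, 435       # people per square mile, as cited above
us_deaths_if_density_driven = uk_rate * (us_density / uk_density) * us_pop

print(f"{uk_rate:.2%}  {us_rate:.2%}  {us_deaths_if_density_driven/1e6:.2f} million")
# -> 0.74%  0.60%  1.61 million, well short of the two million the model actually projects
```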
It could perhaps be reasoned that COVID had a stronger presence in the U.S. when the UK model was run, and that over time the predicted death rates between the two countries would equalize. The fallacy of that assertion is that Europe had a head start on COVID casualties. It could maybe be argued that Brits are healthier and more robust than Americans, which is why their death rate is predictably lower, but I think we put that nonsense to rest a couple hundred years ago. The bottom line is that these are the kind of questions that should have been posed to challenge the validity of the UK model, and the Imperial College of London should have been obligated to respond by proving their model was worthy of our trust.
This early stage of the pandemic is perhaps where the greatest travesty of the crisis occurs: the utter abdication of academia and the science community to challenge these dire predictions. Nowhere is this more acute than in the STEM departments of the world's universities and in organizations like the Royal Society of London. Had peer reviews been held, we'd understand how the UK model concluded one death rate for the UK and a significantly different death rate for the U.S., and what evidence was used to tweak the input parameters (i.e., knobs) to the point where the prediction became so dire. The input assumptions behind the UK model could have been challenged, and perhaps we could have averted what became a catastrophe of data mismanagement and misrepresentation. Imagine how different things might have turned out had rigorous peer reviews been performed by intellectuals unafraid of applying unbiased scientific methods to such a vexing problem. The sad reality, however, is that academia and the scientific community long ago ceded truthful rigor for populist narratives in areas ranging from climate change to vaccine efficacy.
Once historians begin their unbiased review of the COVID crisis, it will become increasingly clear that governments and media coalesced around common narratives, which, as we previously discussed, they're legally allowed to do even when those narratives are propagandized. Historians will conclude that what is equally, if not more, contemptible is the utter abdication of responsibility by academia and the scientific community, who should have challenged established narratives, exposed distortions of factual evidence, and fought for truth.
But for now, to put the UK model's startling predictions further into perspective: on average, 36,000 Americans die each year from flu[5], and in 2017, 61,000 Americans died from flu. In March of 2020, however, the UK model asserts that two million Americans will die from COVID by October. This equates to 285,714 American deaths per month in addition to the 191,667 Americans per month already expected to die from all other causes like heart disease, cancer, diabetes, etc. Consider the impact this would have on the healthcare and funeral industries if demand suddenly jumps 150% beyond capacity, and you begin to understand the sudden panic this poorly executed and interpreted UK model causes.
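The monthly figures work out as shown in the sketch below; the 2.3 million annual baseline is my own inference from the 191,667-per-month number above, so treat it as an assumption rather than a sourced statistic.

```python
# Worked arithmetic for the monthly death figures cited above.

covid_projection = 2_000_000    # UK model projection for US COVID deaths by October
months_remaining = 7            # March through October
baseline_annual = 2.3e6         # assumed annual US deaths from all other causes (~191,667/month)

covid_per_month = covid_projection / months_remaining   # ~285,714
baseline_per_month = baseline_annual / 12               # ~191,667
jump = covid_per_month / baseline_per_month             # ~1.49, i.e. roughly a 150% surge in demand

print(round(covid_per_month), round(baseline_per_month), f"{jump:.0%}")
```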
Faced with such a catastrophic forecast from a prestigious institution, governments on both sides of the pond accept the UK projection unequivocally and use it as a basis for emergency policy planning, which includes closing the borders, forcing healthy people into quarantine, shutting down schools, and closing small businesses while big box stores are allowed to thrive. Meanwhile, no one asked the most fundamental question of the crisis: "Where's the evidence supporting this projection, and has your model been fully vetted by qualified scientists both in and out of academia?"
While government and media proficiently generate public panic, there's no corroborating evidence supporting these predictions, just opinion, conjecture, and the lawful ability to spin any propagandized story they decide to sell. In a digital age that increasingly relies on models and projections as sources of truth, your right to have important matters be based on quantifiable evidence should be absolute.
Another interesting aside at this moment in the crisis is that if anyone in government truly believes deaths will skyrocket 150% in the next seven months, why is the level of contingency planning and preparation not on par with the hysteria being promoted? For example, there is no increase in coffin production, no mobile crematoriums are being set up, and the few military field hospitals sporadically established come nowhere near representing a 150% increase in demand. Many things in these early stages of COVID are not adding up to even a casual observer.
The most probable explanation is that American politicians and bureaucrats don’t fully believe the UK model predictions but must appear to be doing something. What they currently lack but desperately need is an equally convincing American model that can be elevated to canonical truth; enter the second academic model of the COVID crisis, courtesy of the University of Washington (UW). For reasons UW administrators have never been asked to explain, they allow non-scientists in their medical school to develop and promote what became the primary U.S. government and media standard for COVID crisis truth. By not demanding a thorough peer review, the UW STEM departments abandon their responsibilities to both the university and the nation, and the impact of their abdication is catastrophic.
In the early 1990s, when virtual reality (VR) is still a concept, I led the development of VR capabilities at Los Alamos to simulate nuclear weapons accident response, which is referred to as a broken arrow event. My research involved teleporting senior scientists into the field to support a response, and for that I partnered with medical communities interested in using VR for telemedicine. I visited the University of Washington as their medical school was involved with this fascinating application of advanced technology and was impressed not only with their campus, but with the close collaboration the medical school and engineering departments shared. It's disappointing that the UW STEM departments abdicated their academic responsibility during the COVID crisis.
Perhaps you think I'm being unduly harsh on academia and UW, but know that in science, trying is not doing, and doing is what matters. We are told academia is where the brightest, most free-thinking minds reside; if so, then there's a quid pro quo obligation academics have for the public's largess. Consider this: if you're about to board a newly built aircraft and learn UW engineers designed the cabin decor but deferred structural and flight dynamic design to the medical school, would you board the aircraft? That's basically what happened during COVID, only the UW model is more dangerous because UW allows it to be used at a national level for policy planning, and they had to have been competent enough to understand their predictions were complete nonsense. As you will later learn, demonstrating the absurdity of their model is neither difficult nor time consuming.
Let's perform a cursory review of the UW COVID model. Keep in mind, this occurs in the early stages of COVID, when we still implicitly trust CDC data. To set the stage, it's mid-March 2020, and like many others I'm getting suspicious COVID is human engineered. The basis for my belief is that back in 2004, the world was bracing for another Chinese pandemic, only this Avian Flu virus[6] was far more lethal than COVID. At the time, I was responsible for communicating the Los Alamos National Laboratory pandemic model to Department of Homeland Security officials. I was also involved with developing the New Mexico state pandemic plan and informing the White House federal pandemic plan. Like the previously discussed Imperial College of London and University of Washington models, a Los Alamos model was used for policy planning, but since the Avian Flu never achieves sustainable human-to-human transmission, the validity of the Los Alamos model is never quantified.
A stark difference between the Avian Flu and COVID is that the death rate for COVID was reportedly 0.17%, while the death rate of the Avian Flu was 68%. To put this another way, 1.7 people die from COVID for every 1,000 people infected, while the Avian Flu killed 680 people per 1,000 infections. You can imagine the planning and policy decisions made based on the Los Alamos model were crucial not only for our survival as individuals, but also for our nation's survival, and as a result Los Alamos conducted rigorous peer reviews of their model.
There's nothing new or unique about COVID in terms of pandemics, except for the possibility of it being engineered and its occurring at the dawn of the digital age. Evidence exists dating back thousands of years that approximately three global pandemics occur every century. Last century's three major pandemics[7] were the Spanish Flu of 1918, which killed between fifty and one hundred million people depending on how you count, the Asian Influenza of 1957, which killed 2 million people, and the Hong Kong Influenza of 1968, which killed 4 million people.
One of the many mischaracterizations the media promoted during COVID was that pandemics are somehow worse than epidemics. Another mischaracterization is that rates of infection are a viable metric for measuring mortality. Infection and death rates have nothing to do with elevating an epidemic to pandemic status, and neither do morbidity and mortality ratios. The differential between an epidemic and a pandemic is strictly geographical. For example, suppose people get sick in San Diego and the death rate climbs to COVID levels, namely 1.7 deaths for every 1,000 infections. If the illness is contained to San Diego, the CDC classifies the incident as an epidemic.
Suppose the illness spreads to Tijuana, Mexico. Then we'd be dealing with a pandemic because it now involves two geographic regions. Pandemics can be less severe than epidemics but spread over a wider region. This becomes important as we move toward June of 2020, when the CDC downgrades COVID. Well, that is until the politics of COVID enter the fray and pandemic data becomes politicized. That's when cute marketing slogans like "follow the science" become a beguiling part of our lexicon. But that's getting ahead of the story as we're still back in March, and the UW model has just been christened as the government and media's single source for COVID truth and trust. Think of the UW model as a precursor to the role Anthony Fauci ultimately assumes for America.
The Merriam-Webster dictionary defines an epidemic as "An outbreak of disease that spreads quickly and affects many individuals at the same time."
Pandemics are defined as “a type of epidemic (one with greater range and coverage), an outbreak of a disease that occurs over a wide geographic area and affects an exceptionally high proportion of the population. While a pandemic may be characterized as a type of epidemic, you would not say that an epidemic is a type of pandemic.”
Like a balloon that expands and then contracts, an outbreak starts out as an epidemic, expands into a larger geographical area becoming a pandemic, and then contracts again to a smaller region, thus returning to epidemic status. When we talk about a worldwide outbreak, it's a pandemic. When we talk only about that outbreak within a confined region, such as the U.S., it's an epidemic. The term epidemic is derived from the Greek words epi, which means "upon or above," and demos, which means "people." The term was first used by Homer but took on its current meaning when Hippocrates used it to describe a collection of clinical syndromes, such as coughs or diarrhea, occurring and propagating at a given location. Epidemics tend to be relatively short-lived, while a constant presence of an infection or disease in a population is called "endemic;" this is also referred to as a baseline, which is crucial for determining when an epidemic starts and when it ends.
There's an official threshold defining the start of an epidemic. In general, epidemics occur when an infection or disease (i.e., an agent) plus susceptible hosts are present in adequate numbers, and the agent can be effectively transmitted from a source to susceptible hosts. Epidemics[8] result from
- A recent increase in amount or virulence of the agent.
- The recent introduction of the agent into a setting where it has not previously been.
- An enhanced mode of transmission so that more susceptible persons are exposed.
- A change in the susceptibility of the host response to the agent.
- Factors that increase host exposure or involve introduction through new portals of entry.
Epidemics end when “the number of new reported illnesses [/deaths] drops back to the number normally expected.” The CDC has formulas for determining the start and end of epidemics and pandemics. The end of an influenza outbreak is reached when the number of infections or deaths, depending on the metric you’re measuring, drops to a level at or below the number for endemic influenza. For example, the U.S. experiences 36,000 flu deaths during an average flu season, which includes both pneumonia and influenza. When flu deaths drop below a rate of 36,000 deaths per year, the outbreak is over. This number can be adjusted to a seasonal rate, or an evenly distributed annual rate depending on the circumstances and who’s doing the math.
The baseline death rate for events such as flu typically ranges from 5% to 7% above what's expected at the height of flu season. According to the CDC, the COVID epidemic will be over in America when deaths from pneumonia, influenza, and COVID (PIC) drop below 5.9% above what's expected in a normal year. It's important to point out this is a combined PIC number and not solely a COVID number.
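As a sketch of how such a threshold test can be applied, the snippet below checks whether excess PIC deaths have fallen back under the 5.9% figure quoted above; the function name and the weekly counts are hypothetical, intended only to illustrate the mechanics.

```python
# A minimal sketch of an epidemic-threshold check using the 5.9% excess-death figure cited above.
# The death counts used in the examples are hypothetical.

EPIDEMIC_THRESHOLD = 0.059  # epidemic is over when excess PIC deaths fall below this fraction

def epidemic_over(observed_pic_deaths: float, expected_deaths: float) -> bool:
    """Return True when observed PIC deaths are less than 5.9% above the expected baseline."""
    excess = (observed_pic_deaths - expected_deaths) / expected_deaths
    return excess < EPIDEMIC_THRESHOLD

print(epidemic_over(observed_pic_deaths=62_000, expected_deaths=60_000))  # ~3.3% excess -> True
print(epidemic_over(observed_pic_deaths=70_000, expected_deaths=60_000))  # ~16.7% excess -> False
```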
As we auger into the COVID crisis, we'll develop a model that accurately predicts when that death rate milestone is achieved, but I warn you, it will be in shocking contrast to the UK and UW models, as well as to other academic and government models that rise in prominence. The model we develop contains a mere four lines of Excel code, was written in April of 2020, accurately predicts the monthly death rates for six consecutive months, and accurately predicts when the COVID epidemic/pandemic ends. Our model does all that while government and media models collectively predict continuing escalation.
The UW model that the media relies on to promote hysteria, and the government exploits to justify lockdowns and mandates, will be compared against our model. Along the way, other academic institutions with no credible STEM credentials, such as the University of Pennsylvania's Wharton School of Business and Harvard University, will enter the COVID modeling sweepstakes, as will the government's Federal Emergency Management Agency (FEMA). We will assess all these models against our little four-line program to demonstrate how easy it can be to model when sound scientific practices are followed.
We'll begin by jumping into our "way-back machine" and teleporting to March of 2020 so we can follow COVID using World Health Organization (WHO) and CDC data to make bold predictions and to assess the performance of academic and government models. It's important to use data from the time of assessment because the CDC has a history of revising data to meet the needs of prescribed narratives, which means we can't look up the data they report today for a previous date because it's likely not the same as the data the models were relying on when they made their projections. It's confusing but critical for our assessments. Keep in mind that the data presented for a particular period was developed at that time; there is no postmortem modeling in our analyses; everything presented was modeled in real time.
Our model projections and evidentiary assertions will run counter to what's being espoused by politicians, the media, and medical experts, who each have narratives needing to be satisfied. We, however, are far less interested in consensus opinion than in augering through the noise of narratives. As scientists, we understand that so long as our mathematics is sound and our hypotheses clearly stated, we can withstand the pundit criticisms certain to come. It won't always be easy, as I'll challenge you to trust where the math takes us, even when powerful and loud voices demand you stick to prescribed scripts, but that is the real power and obligation of scientists seeking truth.
Along the way we'll investigate what can happen when simple foundational science is followed. We'll explore what real research looks like in the post-Al Gore world of consensus science. You'll experience the hard road that people of true conviction must take when they are compelled to stand up and say, "these are my science-backed assertions, and I'm willing to defend them." It'll seem at times that even the data is suggesting we're wrong, but then the data we rely on is revised, revealing we were right all along, and when we arrive at that moment of vindication, you'll understand the profound responsibility science must embrace to be the arbiter of truth and trust in the digital age.
[1] https://www.dailysignal.com/2020/05/16/the-failures-of-an-influential-covid-19-model-used-to-justify-lockdowns/
[2] https://www.worldometers.info/world-population/uk-population/
[3] https://worldpopulationreview.com/countries/united-kingdom-population
[4] https://www.states101.com/populations
[5] https://usafacts.org/articles/how-many-people-die-flu/
[6] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC343850/
[7] https://www.atlas-mag.net/en/article/20th-and-21st-century-s-major-pandemics
[8] Kelsey JL, Thompson WD, Evans AS. Methods in observational epidemiology. New York: Oxford University Press; 1986. p. 216.