How Mathematicians Changed Medical Practice

Rajeev Kurapati MD, MBA
10 min read · Sep 15, 2018

British physician Richard Lower became a central figure of the Royal Society in the 1660s when he demonstrated that shaking an open glass tube of venous blood turned its dark purplish color bright red as the blood mixed with air.

Because this happened right before the audience’s eyes, Lower demonstrated conclusively how venous blood becomes arterial. Then, through methodical dissections, he traced the circulation of blood as it passed through the lungs and heart. Working like a chemist, he showed that the blood in the test tube behaved like the blood in our bodies as it passes through the lungs.

Although it’s obvious that the blood in our bodies and blood in a test tube will not exhibit identical properties, the physicochemical activities of the human body were treated no differently than a set of chemical reactions studied in laboratories under controlled conditions. This crude approximation became a necessity for exploring the mechanisms of bodily functions without relying on myth and speculation.

Using controlled experiments, scientists began to uncover isolated biological processes that had until then remained obscure: blood circulation, digestion, respiration, and so on. After studying them independently, they set out to piece these vital functions together through further experiments. But physiologists soon realized that this exercise was uniquely challenging. While analyzing individual biological phenomena seemed straightforward, interlinking the various dynamically changing biochemical processes within our bodies proved a formidable task for the human mind.

The discovery of new biological mechanisms was accompanied by the conception of new methods for treating illness. As new ideas entered medical practice, physicians desperately needed a tool that could evaluate the strength of evidence for or against a particular treatment. Without a way to resolve such ambiguities, this budding science risked spiraling back into the dark ages of metaphysical and speculative theories.

It was mathematicians who came to physicians’ aid. The man whose theories would forever change the study of experimental medicine was the very same man who discovered the laws of gravity.

During his studies, Newton developed extensive methods for calculating the properties of changing quantities. He called his technique the method of “fluxions”; it would later come to be known as calculus. Another mathematician, the German Gottfried Leibniz, developed calculus independently at around the same time.

Until that time, if we wanted to study how two variables influenced each other, there was no quick way to intuitively interpret their dynamic interaction. Calculus opened the door, allowing us to track the rates at which quantities such as force, mass, length, or temperature change over time. The relationships between many quantifiable entities in nature could now be understood using these mathematical tools, which eventually became the language of experimentation.
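
As a minimal illustration in modern notation (Newton’s own dot-notation for fluxions looked different), the derivative is exactly this rate of change; for a temperature T that varies with time t:

```latex
\frac{dT}{dt} \;=\; \lim_{\Delta t \to 0} \frac{T(t + \Delta t) - T(t)}{\Delta t}
```

The limit gives the instantaneous rate at which T is changing, which is what Newton’s “fluxion” of a quantity denoted.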

Before the seventeenth century, the medical profession had no interest in the collection and analysis of data, and physicians therefore saw no value in using calculations in their treatments. It was not a physician or even a mathematician but a tradesman by the name of John Graunt who first brought statistics into the medical field.

Even with no formal education, Graunt became fascinated with mortality statistics. He compiled a book based on his research into the Bills of Mortality, a collection of vital statistics about the citizens of London spanning more than 70 years. Published in England in 1662, Graunt’s book Natural and Political Observations Mentioned in a following Index, and made upon the Bills of Mortality (referred to as Observations) illustrated the accounts that were kept as London deaths rose from the plague, which in 1625 alone killed a large share of the city’s population.

Observations attempted to create a system to warn of the onset and spread of plague, and it paved the way for the use of data in medicine. Over the next hundred years, statistical thinking began to pervade every area of medicine, from research to policy. These influences came largely from outside healthcare: it was two mathematicians, Thomas Bayes and Pierre-Simon Laplace, who changed the face of medicine forever.

By the late 1700s, the French mathematician and astronomer Pierre-Simon Laplace was adamant that statistics should be applied to the entire system of human knowledge, not just physics or chemistry. In regard to medicine, he believed that as the number of observations (data points) increased, the best treatments would reveal themselves.
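
Laplace’s intuition is captured by what is now called the law of large numbers, stated here in modern notation that postdates him: the average of n independent observations of an outcome converges to its true expected value μ as the sample grows.

```latex
\bar{X}_n \;=\; \frac{1}{n}\sum_{i=1}^{n} X_i \;\longrightarrow\; \mu \qquad \text{as } n \to \infty
```

In medical terms: with enough comparable patients, the observed recovery rate under a treatment settles toward that treatment’s true effectiveness, letting the best therapies reveal themselves.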

Laplace encouraged the use of statistical analysis to determine the validity of therapies. Before this, the common practice was to rely merely on the passed-down experience of senior physicians and anecdotal evidence, meaning there was little consistency across the field. Physicians felt that treatments were best chosen based on expert opinion rather than quantitative analysis. The introduction of statistics into medicine put this time-honored tradition to the test.

Additional studies to legitimize the utility of medical practices followed. The pinnacle was reached when one study questioned the usefulness of bloodletting, a practice that had gone unchallenged for nearly nineteen centuries. In the aftermath of the French Revolution, the Parisian doctor François Joseph Victor Broussais (1772–1838) claimed that all fevers had the same origin: they were manifestations of inflamed organs. Accordingly, leeches were applied to the surface of the body corresponding to the site of the inflammation, and the resultant bloodletting was deemed an effective treatment. Such theories were highly regarded by contemporary French physicians. The scale of this influence can be gauged economically: in 1825, France exported ten million leeches, and by 1833 it was importing forty million more.

Unsatisfied with the lack of evidence for the common practice of bloodletting, the prominent physician Pierre Charles Alexandre Louis conducted a study of typhoid fever, collecting data for five years in the 1800s to assess the efficacy of the practice.

Among 52 fatal cases, he observed that 75 percent had undergone bloodletting. The results perplexed fellow physicians. Louis’s numerical analysis showed that bloodletting increased, rather than decreased, mortality. He used the same method to study the efficacy of bloodletting in the treatment of pneumonitis and tonsillitis, and found no evidence to support its ability to treat these illnesses either. He encouraged fellow physicians to utilize quantitative analysis rather than blindly follow unproven theories. His analysis was practically heretical to the medical community, but to Louis, assumptions meant nothing. Facts were facts, and they were the only key to any truth, including in medicine.
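
The 75 percent figure alone, of course, could not establish harm; what made Louis’s numerical method persuasive was comparing outcomes between patients who were bled and those who were not. Here is a minimal sketch of such a comparison in Python. The group totals are invented for illustration (only the 39-versus-13 split of the 52 deaths follows from the text), and the chi-squared test itself postdates Louis; it was devised by Karl Pearson, who appears later in this story.

```python
from scipy.stats import chi2_contingency

# Hypothetical cohort for illustration only. The article reports that
# 75 percent of the 52 fatal cases (39 of 52) had been bled; the group
# totals of 100 below are invented to complete the comparison.
#            [died, survived]
bled     = [39, 61]
not_bled = [13, 87]

# Test whether mortality differs between the two groups.
chi2, p_value, dof, expected = chi2_contingency([bled, not_bled])

print(f"Mortality among bled patients:   {bled[0] / sum(bled):.0%}")
print(f"Mortality among unbled patients: {not_bled[0] / sum(not_bled):.0%}")
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
```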

At last, the prospect of developing methods to validate a therapy aided by the use of mathematical models seemed bright. But it would take well over another century to convince the medical profession to incorporate statistical methods in their practice.

Many held stubbornly to the belief that mathematics was unfit to analyze therapeutic practices. The medical community, for instance, didn’t even realize that postpartum mothers in the doctors’ wards of Vienna General Hospital’s First Obstetrical Clinic had a mortality rate of a whopping 35 percent until the Hungarian physician Ignaz Philipp Semmelweis measured it in 1847. To everyone’s surprise, he discovered that the doctors’ wards had mortality rates three times as high as the midwives’ wards. His finding that hand washing could reduce mortality rates to below 1 percent was published, but the medical community did not embrace the practice, and Semmelweis’s observations were rejected. In fact, some doctors even took offense at the idea that they needed to sanitize before attending to patients. Sticking to these antiquated, flawed practices had terrible consequences: many newborns were left motherless.

Influenced by the French chemist Louis Pasteur and his germ theory, Joseph Lister walked the wards of his Glasgow hospital and came to believe that microbes carried in the air were causing diseases to spread in operating rooms. In response, he developed a machine to pump a fine mist of carbolic acid into the operating room during surgeries. After the introduction of Lister’s antiseptic methods, the mortality rate for all surgical procedures performed from 1867–70 fell to 15 percent. Lister published the findings of his groundbreaking work on antiseptic surgery in 1870.

While Louis Pasteur and Robert Koch developed the germ theory of disease, it was Lister who put the theory into practice, introducing hygienic surgical methods that markedly decreased the spread of disease during procedures. The division of surgical history into pre- and post-Listerian eras speaks volumes about his impact on fighting infections. But without statistical confirmation of the dramatic drop in mortality, Lister could not have convinced his colleagues to consider his antiseptic theory or his approach to surgical sanitation.

While these improvements were starting to alter the landscape of medicine, resistance from physicians was far more prevalent than a willingness to adopt new practices. But not for long.

Twenty-seven-year-old Karl Pearson was no ordinary mathematician. As the newly appointed professor of mathematics at University College London, he believed he could explain almost everything about “plants, animals, and men” through the application of statistics. “There is no sphere of inquiry which lies outside the legitimate field of science,” he wrote in his acclaimed book on statistics, aptly titled The Grammar of Science. He developed statistical methodology and attempted to convince the world that it was the way forward for analyzing problems in biology. His statistical approach found few receptive audiences, though: the Royal Society rejected his papers, as biologists found it preposterous for mathematicians to interfere in their space.

The medical profession remained divided over the use of statistical methodology. Those who viewed medicine as an “art” couldn’t digest the idea of crunching numbers to study human biology. Others argued that medicine was a “science” and saw statistics as a means of more objective observation. It would be a student of Pearson’s who finally won statistics a broad following in medicine.

Major Greenwood, perhaps Pearson’s most devoted follower, was a trained doctor working in London in 1905. He chose to study under Pearson despite the obvious financial shortcomings of life as a statistician. Pearson had the opportunity to turn Greenwood into the first medical statistician, and he took full advantage, training him into a numerical wizard who could apply statistical methodology across medical practice.

Greenwood’s persistence paid off. He was soon appointed the head of the newly established Department of Statistics at the Lister Institute of Preventive Medicine in London. His reputation spread across the Atlantic, and he began to work with American counterparts to investigate the application of mathematics in the study of human disease. His methods were slowly gaining popularity in the most elite inner circles.

The medical community began to agree that studying statistics was a vital part of training to be a physician, as it ensured that medicine was grounded in science. Greenwood managed to influence some of the young physicians of the time, but it was one of his apprentices — a non-physician by the name of Austin Bradford Hill — who would go on to become the primary driver of using standardized research studies to evaluate the effectiveness and safety of medical devices or drugs. Today, these are known as clinical trials.

Hill was a trailblazer, convincing the medical community on a massive scale to accept the utility of statistics in therapeutics. He was also a visionary, firmly believing that those in medicine should not limit themselves to curing the sick but also had a responsibility to advance the understanding of health and disease. This meant that doctors should take up research as a career alongside their clinical practice. By doing so, he believed, doctors could incorporate the latest scientific knowledge into medical treatments.

Hill was also lucky. The timing was perfect, as new and potent drugs were being industrially produced in the post-war era. He was able to rally supporters for the use of statistical methods to study the efficacy of newly discovered drugs on humans, a concept that had never before been considered.

Prior to World War II, research was very small-scale, generally consisting of a few doctors working independently, using their own patients, families, and neighbors as subjects. Most of these studies were conducted for immediate, self-serving purposes, such as finding a treatment plan for a particular case. The work of a few scientists, such as Joseph Lister’s on antisepsis, proved useful, but the small scope of such research typically had little impact on the overall practice of medicine.

It was Hill’s experiments in the 1940s that laid the groundwork for larger studies in which both the insight of physicians and the statistical design of professional statisticians were combined. Laplace’s vision of using calculus and statistics to explain biological phenomena was finally actualized.

The basic model of statistical methodology was now universally applied to almost every aspect of medicine. Select a topic to investigate, observe and measure the phenomenon, collect data, translate it into equations to be solved and interpreted, and then draw a conclusion. It was working. Larger and larger populations were put to the test by researchers undertaking some of the most expensive experiments in history.
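
To make that recipe concrete, here is a minimal sketch in Python of a modern two-arm trial following those steps. Every name and number is invented for illustration; this is a simulation, not any particular historical study.

```python
import random
from scipy.stats import chi2_contingency

random.seed(7)

# Step 1: the question -- does a hypothetical new drug lower the rate
# of a bad outcome compared with standard care?
TRUE_RATE_CONTROL = 0.30   # assumed event rate without the drug
TRUE_RATE_TREATED = 0.20   # assumed event rate with the drug

# Steps 2-3: observe and measure -- randomize each patient to an arm
# and record whether the event occurred.
counts = {"treated": [0, 0], "control": [0, 0]}  # [events, non-events]
for _ in range(1000):
    arm = random.choice(["treated", "control"])
    rate = TRUE_RATE_TREATED if arm == "treated" else TRUE_RATE_CONTROL
    had_event = random.random() < rate
    counts[arm][0 if had_event else 1] += 1

# Steps 4-5: translate the data into a test statistic and draw a conclusion.
chi2, p_value, _, _ = chi2_contingency([counts["treated"], counts["control"]])
print(counts)
print(f"p = {p_value:.4f}:",
      "evidence of an effect" if p_value < 0.05 else "no clear evidence")
```

The random assignment of patients to arms in this sketch is the key ingredient of the randomized controlled trials discussed next.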

Statistical methods radically transformed the doctor’s practice. Individual opinions and anecdotal evidence were relegated to the status of least reliable information, a 180-degree turn from medicine’s earlier history, when these had been the most valid references. Randomized controlled trials were elevated to the gold standard, and all physicians were expected to abide by them. It is now accepted that virtually no drug, surgical therapy, or diagnostic test can enter clinical practice without a demonstration of its efficacy in clinical trials.

In their work, doctors and researchers have typically been careful to frame statistical evidence within the context of a particular situation. But when communicated to the public, these same results often get passed along in terms that are more black and white. Headlines blast sensational news that a medication has been found to be completely safe or unsafe, and panic ensues. Like most findings in the medical world, though, the safety of a drug, such as hormone replacement therapy for menopause, is far more nuanced. The decision to take estrogen supplements falls along a spectrum of pros and cons, a balance of benefits to be weighed against potential risks. The science of statistics allows for the development of general guidelines; the “art” of medicine then provides for the individualization of that generality.

A condensed excerpt from Physician: How Science Transformed the Art of Medicine (Greenleaf Books, February 2018).


Rajeev Kurapati MD, MBA writes about health, wellness, and self-discovery. He is an award-winning author.