To save any of his marching bandmates, Steve Marx says, he would run into onrushing traffic with no hesitation. It’s the kind of language often heard from former army buddies, not musicians, but Marx brings up the scenario to show the strength of his feelings about this group. The marching band director at Gettysburg College in Pennsylvania has been participating in musical ensembles for more than 20 years, since he was in high school, and says that “the sort of bonding that you form is extremely strong. It’s like a family.” Everyone is in matching uniforms, musical instruments in hands, marching forward in perfect harmony, left leg, right leg, movements and sounds so synchronized that individuals blur into the greater group. The allure is not even that much about music, he admits. Marching, for him, is mostly about the sense of kinship.
It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show’s host Neil deGrasse Tyson had just explained the simulation argument—the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time—much like a video game optimized to render only the parts of a scene visible to a player. “Maybe that’s why we can’t travel faster than the speed of light, because if we could, we’d be able to get to another galaxy,” said Nice, the show’s co-host, prompting Tyson to gleefully interrupt. “Before they can program it,” the astrophysicist said, delighting at the thought. “So the programmer put in that limit.”
Such conversations may seem flippant. But ever since Nick Bostrom of the University of Oxford wrote a seminal paper about the simulation argument in 2003, philosophers, physicists, technologists and, yes, comedians have been grappling with the idea of our reality being a simulacrum. Some have tried to identify ways in which we can discern if we are simulated beings. Others have attempted to calculate the chance of us being virtual entities. Now a new analysis shows that the odds that we are living in base reality—meaning an existence that is not simulated—are pretty much even. But the study also demonstrates that if humans were to ever develop the ability to simulate conscious beings, the chances would overwhelmingly tilt in favor of us, too, being virtual denizens inside someone else’s computer. (A caveat to that conclusion is that there is little agreement about what the term “consciousness” means, let alone how one might go about simulating it.)
In 2003 Bostrom imagined a technologically adept civilization that possesses immense computing power and needs only a fraction of that power to simulate new realities with conscious beings in them. Given this scenario, his simulation argument showed that at least one proposition in the following trilemma must be true: First, humans almost always go extinct before reaching the simulation-savvy stage. Second, even if humans make it to that stage, they are unlikely to be interested in simulating their own ancestral past. And third, the probability that we are living in a simulation is close to one.
Before Bostrom, the movie The Matrix had already done its part to popularize the notion of simulated realities. And the idea has deep roots in Western and Eastern philosophical traditions, from Plato’s cave allegory to Zhuang Zhou’s butterfly dream. More recently, Elon Musk gave further fuel to the concept that our reality is a simulation: “The odds that we are in base reality is one in billions,” he said at a 2016 conference.
“Musk is right if you assume [propositions] one and two of the trilemma are false,” says astronomer David Kipping of Columbia University. “How can you assume that?”
To get a better handle on Bostrom’s simulation argument, Kipping decided to resort to Bayesian reasoning. This type of analysis uses Bayes’s theorem, named after Thomas Bayes, an 18th-century English statistician and minister. Bayesian analysis allows one to calculate the odds of something happening (called the “posterior” probability) by first making assumptions about the thing being analyzed (assigning it a “prior” probability).
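In its generic textbook form (standard notation, not anything specific to Kipping's model), the theorem ties those two probabilities together:

```latex
% Bayes's theorem: posterior probability of hypothesis H given data D.
% P(H) is the prior; P(D | H) is the likelihood of the data under H.
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
```

The prior encodes what you assume before looking at any evidence; the posterior is what you are entitled to believe afterward.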
Kipping began by turning the trilemma into a dilemma. He collapsed propositions one and two into a single statement, because in both cases, the final outcome is that there are no simulations. Thus, the dilemma pits a physical hypothesis (there are no simulations) against the simulation hypothesis (there is a base reality—and there are simulations, too). “You just assign a prior probability to each of these models,” Kipping says. “We just assume the principle of indifference, which is the default assumption when you don’t have any data or leanings either way.”
So each hypothesis gets a prior probability of one half, much as if one were to flip a coin to decide a wager.
The next stage of the analysis required thinking about “parous” realities—those that can generate other realities—and “nulliparous” realities—those that cannot simulate offspring realities. If the physical hypothesis were true, then the probability that we were living in a nulliparous universe would be easy to calculate: it would be 100 percent. Kipping then showed that even under the simulation hypothesis, most of the simulated realities would be nulliparous. That is because as simulations spawn more simulations, the computing resources available to each subsequent generation dwindle to the point where the vast majority of realities will be those that lack the computing power necessary to simulate offspring realities capable of hosting conscious beings.
Plug all these into a Bayesian formula, and out comes the answer: the posterior probability that we are living in base reality is almost the same as the posterior probability that we are a simulation—with the odds tilting in favor of base reality by just a smidgen.
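As a rough illustration of where that smidgen comes from, here is a minimal sketch in Python. It is a toy version of the reasoning, not Kipping's published calculation, and the count of simulated realities is an arbitrary assumption chosen only to show the limiting behavior:

```python
# Toy model: posterior probability of living in base reality under the
# principle of indifference. The number of simulated realities below is a
# made-up illustrative figure, not a value from Kipping's paper.
p_phys = 0.5  # prior on the physical hypothesis (no simulations exist)
p_sim = 0.5   # prior on the simulation hypothesis (simulations exist)

n_simulated = 1_000            # hypothetical number of simulated realities
n_total = n_simulated + 1      # one base reality plus all its simulations

# Chance of being in base reality under each hypothesis: certain under
# the physical hypothesis, one in n_total under the simulation hypothesis.
p_base_given_phys = 1.0
p_base_given_sim = 1.0 / n_total

# Law of total probability: weight each conditional chance by its prior.
p_base = p_phys * p_base_given_phys + p_sim * p_base_given_sim
print(f"P(base reality) = {p_base:.6f}")  # just above 0.5
```

However large the assumed number of simulations grows, the answer never dips below one half; it only creeps toward it from above, which is the even-odds result in qualitative form.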
These probabilities would change dramatically if humans created a simulation with conscious beings inside it, because such an event would change the chances that we previously assigned to the physical hypothesis. “You can just exclude that [hypothesis] right off the bat. Then you are only left with the simulation hypothesis,” Kipping says. “The day we invent that technology, it flips the odds from a little bit better than 50–50 that we are real to almost certainly we are not real, according to these calculations. It’d be a very strange celebration of our genius that day.”
The upshot of Kipping’s analysis is that, given current evidence, Musk is wrong about the one-in-billions odds that he ascribes to us living in base reality. Bostrom agrees with the result—with some caveats. “This does not conflict with the simulation argument, which only asserts something about the disjunction,” the idea that one of the three propositions of the trilemma is true, he says.
But Bostrom takes issue with Kipping’s choice to assign equal prior probabilities to the physical and simulation hypotheses at the start of the analysis. “The invocation of the principle of indifference here is rather shaky,” he says. “One could equally well invoke it over my original three alternatives, which would then give them one-third chance each. Or one could carve up the possibility space in some other manner and get any result one wishes.”
Such quibbles are valid because there is no evidence to back one claim over the others. That situation would change if we could find evidence of a simulation. So could you detect a glitch in the Matrix?
Houman Owhadi, an expert on computational mathematics at the California Institute of Technology, has thought about the question. “If the simulation has infinite computing power, there is no way you’re going to see that you’re living in a virtual reality, because it could compute whatever you want to the degree of realism you want,” he says. “If this thing can be detected, you have to start from the principle that [it has] limited computational resources.” Think again of video games, many of which rely on clever programming to minimize the computation required to construct a virtual world.
For Owhadi, the most promising way to look for potential paradoxes created by such computing shortcuts is through quantum physics experiments. Quantum systems can exist in a superposition of states, and this superposition is described by a mathematical abstraction called the wave function. In standard quantum mechanics, the act of observation causes this wave function to randomly collapse to one of many possible states. Physicists are divided over whether the process of collapse is something real or just reflects a change in our knowledge about the system. “If it is just a pure simulation, there is no collapse,” Owhadi says. “Everything is decided when you look at it. The rest is just simulation, like when you’re playing these video games.”
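A standard two-state example from textbook quantum mechanics (generic, not tied to Owhadi's proposed experiments) makes the language concrete:

```latex
% A superposition of two basis states, with normalized amplitudes:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
% Measurement collapses the state to |0> with probability |alpha|^2
% or to |1> with probability |beta|^2.
```

The dispute Owhadi alludes to is over whether that collapse is a physical event or mere bookkeeping; a resource-limited simulation, on his reading, would behave as if nothing is decided until someone looks.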
To this end, Owhadi and his colleagues have worked on five conceptual variations of the double-slit experiment, each designed to trip up a simulation. But he acknowledges that it is impossible to know, at this stage, if such experiments could work. “Those five experiments are just conjectures,” Owhadi says.
Zohreh Davoudi, a physicist at the University of Maryland, College Park, has also entertained the idea that a simulation with finite computing resources could reveal itself. Her work focuses on strong interactions, or the strong nuclear force—one of nature’s four fundamental forces. The equations describing strong interactions, which hold together quarks to form protons and neutrons, are so complex that they cannot be solved analytically. To understand strong interactions, physicists are forced to do numerical simulations. And unlike any putative supercivilizations possessing limitless computing power, they must rely on shortcuts to make those simulations computationally viable—usually by considering spacetime to be discrete rather than continuous. The most advanced result researchers have managed to coax from this approach so far is the simulation of a single nucleus of helium that is composed of two protons and two neutrons.
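Schematically, the discretization amounts to replacing calculus with arithmetic on a grid. The expression below is a generic finite-difference approximation, shown only to illustrate the idea; the actual lattice formulation of the strong force is far more elaborate:

```latex
% Central-difference approximation of a field derivative on a lattice
% with spacing a; continuum physics is recovered only as a -> 0.
\frac{\partial \phi}{\partial x} \;\approx\; \frac{\phi(x+a) - \phi(x-a)}{2a}
```

The lattice spacing is the shortcut: a computer can handle only finitely many grid points, and the finer the grid, the closer the simulation comes to continuous spacetime, at rapidly growing computational cost.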
“Naturally, you start to ask, if you simulated an atomic nucleus today, maybe in 10 years, we could do a larger nucleus; maybe in 20 or 30 years, we could do a molecule,” Davoudi says. “In 50 years, who knows, maybe you can do something the size of a few inches of matter. Maybe in 100 years or so, we can do the [human] brain.”
Davoudi thinks that classical computers will soon hit a wall, however. “In the next maybe 10 to 20 years, we will actually see the limits of our classical simulations of the physical systems,” she says. Thus, she is turning her sights to quantum computation, which relies on superpositions and other quantum effects to make tractable certain computational problems that would be impossible through classical approaches. “If quantum computing actually materializes, in the sense that it’s a large scale, reliable computing option for us, then we’re going to enter a completely different era of simulation,” Davoudi says. “I am starting to think about how to perform my simulations of strong interaction physics and atomic nuclei if I had a quantum computer that was viable.”
All of these factors have led Davoudi to speculate about the simulation hypothesis. If our reality is a simulation, then the simulator is likely also discretizing spacetime to save on computing resources (assuming, of course, that it is using the same mechanisms as our physicists for that simulation). Signatures of such discrete spacetime could potentially be seen in the directions high-energy cosmic rays arrive from: they would have a preferred direction in the sky because of the breaking of so-called rotational symmetry.
Telescopes “haven’t observed any deviation from that rotational invariance yet,” Davoudi says. And even if such an effect were to be seen, it would not constitute unequivocal evidence that we live in a simulation. Base reality itself could have similar properties.
Kipping, despite his own study, worries that further work on the simulation hypothesis is on thin ice. “It’s arguably not testable as to whether we live in a simulation or not,” he says. “If it’s not falsifiable, then how can you claim it’s really science?”
For him, there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.
Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.
On August 9, 2017, paleontologists at the American Museum of Natural History in New York City unveiled the largest animal ever to walk the earth. Dubbed Patagotitan mayorum, the reconstructed skeleton of the 100-million-year-old dinosaur was so huge that it didn’t even fit wholly inside the room in which it stood. The dinosaur’s long neck, bulging body and long tail stretched about 120 feet long, with the living animal estimated to weigh in at more than 70 tons. But now it’s shrunk.
In a new study of the available Patagotitan fossils, representing several individuals of differing ages, paleontologist Alejandro Otero and his colleagues have slimmed down Patagotitan to around 57 tons. The full length of the dinosaur is in question, too, especially as no complete skeleton is known. What was heralded as the largest dinosaur of all in 2014 has wound up in a neck-and-neck tie with several other dinosaurian giants such as Argentinosaurus. The shrinkage comes as part of a long history of supersized dinosaurs that have been downsized after their initial discovery. Incomplete fossils, evolving techniques, and the paleontological preoccupation with enormous dinosaurs have all played into the constant quest to find the biggest creature to walk the planet.
Although many dinosaurs lived large—the famous T. rex was 40 feet long and weighed nine tons—all of the very largest dinosaurs belonged to a group called sauropods. These quadrupedal herbivores are immediately recognizable by their tiny heads, long necks, hefty bodies and tapering tails. Dinosaurs such as Brontosaurus and Diplodocus conveyed the standard image of these plant eaters to museumgoers for more than a century. But even these enormous animals weren’t the largest of all.
“The fact that literally a handful of bones indicates that there truly were terrestrial titans of near mythic proportions leaves us in sheer awe,” says University of Toronto paleontologist Cary Woodruff. Not to mention that these dinosaurs are so strange, from the tip of their snouts to the end of their tapering tails. “With nothing quite like sauropods today,” says Macalester College paleontologist Kristi Curry-Rogers, “our work on these creatures is akin to studying aliens.”
One of the first front-runners was Brachiosaurus, a long-necked herbivore known from a paltry collection of bones uncovered in western Colorado in 1900. Even though only about 20 percent of the skeleton was found, comparisons with similar dinosaurs led to estimates that Brachiosaurus was more than 60 feet long and more than 40 feet tall, a giant that towered over the likes of Apatosaurus and Diplodocus.
But there were larger species out there. The great “dinosaur renaissance” that lasted from the 1970s through the 1990s saw a new bone rush that uncovered several ever-larger dinosaurs. Each was given a name befitting its stature, with “Ultrasaurus,” “Supersaurus,” “Seismosaurus,” and more all making news and documentary appearances as the biggest of the big. Yet the initial announcements from the field didn’t hold up once the fossils were brought back to the lab for study. In fact, some of the supposed giants—such as Ultrasaurus—turned out to be misidentified representatives of other species and not quite so exceptional as originally thought.
And then there are the lost giants. Part of a backbone described by fossil hunter E. D. Cope in the 19th century seemed to suggest a sauropod, known as Amphicoelias, that measured almost twice as long as any other. The problem is that the bone was mysteriously lost, and no other example has turned up over more than a century of fossil expeditions. Likewise a dinosaur from India named Bruhathkayosaurus was rumored to be the largest, but those fossils disintegrated and are no longer available to study.
Even among the giants that paleontologists have in hand, determining the winner is challenging. Part of the problem is that many of the largest dinosaur skeletons are incomplete. “When we imagine how unlikely it is for an entire adult sauropod skeleton to overcome the vagaries of the fossil record, it’s not at all surprising that complete specimens are hard to come by,” Curry-Rogers says. A huge amount of sediment was needed to bury the bodies, which were often ravaged by scavengers before burial. Add different analytic methodologies to the mix, and experts often have to revise their expectations. “Another huge problem, no pun intended, is the issue of exactly what is being measured or estimated,” she says, especially as some longer dinosaurs might be lighter than heavier, shorter dinosaurs, meaning there’s no single metric to determine a winner.
“We all can step on a scale today, but how do we weigh something that cannot be traditionally weighed?” Woodruff says. Paleontologists have tried a variety of methods, from dunking plastic models in water to estimate a dinosaur’s volume to relating the circumference of thigh and upper-arm bones to body mass. Experts continue to compare and refine these techniques, and one study published earlier this year found that different approaches are converging on similar results: over time, estimates of dinosaur size are becoming more refined and falling into accord with one another.
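The limb-circumference method, for instance, boils down to a power law fitted to living animals and extrapolated to fossils. The sketch below shows that arithmetic with placeholder coefficients and a hypothetical measurement; real studies derive their coefficients from measurements of hundreds of living species, and the numbers here are for illustration only:

```python
import math

def estimate_mass_kg(circumference_mm: float, slope: float, intercept: float) -> float:
    """Power-law mass estimate from combined limb-bone circumference:
    log10(mass in grams) = slope * log10(circumference in mm) + intercept.
    The coefficients passed in are placeholders, not published values."""
    mass_g = 10 ** (slope * math.log10(circumference_mm) + intercept)
    return mass_g / 1000.0  # grams -> kilograms

# Hypothetical inputs: a made-up slope/intercept pair and a combined
# femur-plus-humerus circumference plausible for a giant sauropod.
slope, intercept = 2.75, -1.0
circumference = 1500.0  # millimeters

print(f"Estimated mass: {estimate_mass_kg(circumference, slope, intercept):,.0f} kg")
# With these placeholder numbers the sketch returns roughly 54,000 kg,
# i.e., in the tens-of-tons ballpark the article describes.
```

Small changes to the fitted coefficients move the answer by tons, which is one reason published estimates keep getting revised.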
New realizations can change experts’ expectations, too. When paleontologists recognized that the vertebrae of sauropods were filled with air sacs that lightened the animals’ skeletons, Curry-Rogers notes, they had to adjust how they determine mass. “With more knowledge, whether in the form of better living models for comparison or better fossils, comes more precision,” she says.
But how to determine a winner? Various methods might find a difference of a few tons in sauropod size estimates. That’s a large mass for humans, Woodruff says, but “for an animal already weighing 30 to 40 tons, that’s not a terribly dramatic difference.” Still, those variations will likely only keep fueling the persistent quest to identify the largest animal of all. As Woodruff says, “everyone likes a winner.”
I am not the editor in chief of a propaganda farm disguised as a far-right breaking news outlet. But one day last February, just before the world shut down, I got to play one.
About 70 journalists, students and digital media types had gathered at the City University of New York to participate in a crisis simulation. The crisis at hand was the 2020 U.S. presidential election. The game was designed to illuminate how we, as reporters and editors, would respond to a cascade of false and misleading information on voting day—and how public discourse might respond to our coverage. The exercise was hosted by First Draft, a research group that trains people to understand and outsmart disinformation.
After a morning workshop on strategies for reporting on conspiracy theories and writing headlines that don’t entrench lies, the organizers split us up into groups of about 10 people, then gave each “newsroom” a mock publication name. Sitting around communal tables, we assigned ourselves the roles of reporters, editors, social media managers and a communications director. From our laptops we logged into a portal to access the game interface. It looked like a typical work desktop: There was an e-mail inbox, an intraoffice messaging system that functioned exactly like Slack, a microblogging platform that worked exactly like Twitter and a social feed that looked exactly like Facebook. The game would send us messages with breaking events, press releases and tips, and the feeds would respond to our coverage. Several First Draft staffers at a table were the “communications desk,” representing any agency, person or company we might need to “call” to answer questions. Other than that, we received no instruction.
My newsroom was mostly made up of students from C.U.N.Y.’s Craig Newmark Graduate School of Journalism and other local universities. The organizers gave us a few minutes to define our newsrooms’ identities and plan our editorial strategies. The room filled with nervous murmurings of journalists who wanted to fight the bad guys, to beat back misinformation and safeguard election day with earnest, clear-eyed coverage. But I had a different agenda, and I was the one in charge.
“Sorry, team,” I said. “We’re going rogue.”
Simulations should include extreme scenarios if they are to properly scare people into preparing for the unexpected—into updating protocols and rearranging resources or tripping certain automated processes when things go awry. Yet journalists and scientists tend to resist engaging with the outlandish. We dismiss sensational outcomes, aiming to wrangle expectations back into the realm of reason and precedent. In recent years that strategy has often left us reeling. A Nature article published this past August explained why the U.S. was caught flat-footed in its response to COVID-19: despite the fact that government officials, academics and business leaders have participated in dozens of pandemic simulations over the past two decades, none of the exercises “explored the consequences of a White House sidelining its own public health agency,” wrote journalists Amy Maxmen and Jeff Tollefson.
The success of any scenario game, then, depends on the questions it raises. The game doesn’t need to predict the future, but it does need to pry players away from the status quo, to expand their sense of what is possible. And to stress-test the preparedness of a newsroom on November 3, 2020, things needed to get weird.
Disinformation scholars often warn that focusing on the intent of influence operations or the sophistication of their techniques overestimates their impact. It’s true that many disinformation tactics are not robust in isolation. But the targeted victim is fragile; pervasive anxiety and a deep social divide in America make us vulnerable to attacks from afar and within. And because it’s cheap and easy for bad actors to throw proverbial spaghetti at social feeds, occasionally something sticks, leading to massive amplification by major news organizations. This was my goal as an editor in chief of unreality.
The simulation started off slowly. A tip came in through e-mail: Did we see the rumor circulating on social media that people can vote by text message?
As other newsrooms began writing explainers debunking SMS voting, I assigned a reporter to write a “tweet” that would enhance confusion without outright supporting the lie. After a quick edit, we posted: We’re hearing that it’s possible to vote by text message. Have you tried to vote by SMS? Tell us about your experience! It went up faster than any other content, but the social Web reacted tepidly. A couple of people called us out for spreading a false idea. So we dug in with another post: Text message voting is the way of the future—but Democrats shut it down. Why are elites trying to suppress your vote? Story coming soon!
We continued this pattern of baseless suggestions, targeted at whatever people on the feed seemed to already be worried or skeptical about. Eventually some of the other newsrooms caught on that we might not be working in good faith. At first they treated our manipulations as myths to debunk with fact-laden explainers. But our coverage kept getting dirtier. When an editor from a respectable outlet publicly questioned the integrity of my senior reporter, I threatened to take legal action against anyone who maligned her. “We apologize to no one!” I yelled to my team.
My staff was having fun wreaking havoc. The social platforms in the game were controlled by First Draft organizers (who, I later learned, meted out eight “chapters” of preloaded content), as well as manual input from the simulation participants in real time. We watched the feeds react with more and more outrage to the “news” we published. Our comms director stonewalled our competitors, who kept asking us to take responsibility for our actions, even forming a coalition to call us out.
Then a new tip appeared: someone on social media said there was an active shooter at her polling place. Everyone’s attention shifted. The first newsroom to get a comment from the “local police” posted it immediately: At this time, we are not aware of any active shooting threat or event. We are investigating. While other teams shared the message and went to work reporting, I saw a terrible opening in the statement’s inconclusiveness. “Let’s question the integrity of the cops,” I whispered maniacally to my team.
We sent out a post asking whether the report could be trusted. In a forest of fear, the suggestion that voters were at risk from violence was a lightning bolt. Social media lit up with panic. A celebrity with a huge following asked her fans to stay safe by staying home. My newsroom quietly cheered. We had found an editorial focus, and I instructed everyone to build on it. We “tweeted” a dozen times, occasionally promising an in-depth story that never arrived.
Once we were on a roll, I paused to survey the room. I watched the other teams spending all their energy on facts and framing and to-be-sures, scrambling to publish just one article debunking the misleading ideas we had scattered like dandelion seeds. We didn’t even need to lie outright: maybe there was an active shooter! In the fog of uncertainty, we had exploited a grain of possible truth.
Abruptly, the organizers ended the game. Ninety minutes had somehow passed.
I took stock of myself standing up, leaning forward with my hands pressed to the table, adrenaline rippling through my body. I had spent the previous year researching digital disinformation and producing articles on its history, techniques and impact on society. Intellectually I knew that people and groups wanted to manipulate the information environment for power or money or even just for kicks. But I hadn’t understood how that felt.
I scanned the faces of my “colleagues,” seeing them again as humans rather than foot soldiers, and flinched at the way they looked back at me with concern in their eyes.
Our debrief of the simulation confirmed that my newsroom had sabotaged the media environment on Election Day. “You sent the other newsrooms into a tailspin,” First Draft’s deputy director Aimee Rinehart later told me. She said I was the first person to co-opt the game as a “bad steward of the Internet,” which made me wonder if future simulations should always secretly assign one group the role of wily propagandist.
It took hard alcohol and many hours for my nervous system to settle down. The game had rewarded my gaslighting with amplification, and I had gotten to witness the spread of my power, not just in likes and shares but through immediate “real-world” consequences.
Playing the bad guy showed me how the design of platforms is geared toward controlling minds, not expanding them. I’d known this, but now I felt why journalism couldn’t compete against influence operations on the high-speed battlefield of social media—by taking up the same arms as the outrage machine, we would become them. Instead we could strengthen our own turf by writing “truth sandwich” headlines and service articles that anticipate the public’s need for clarity. Because ultimately the problem wasn’t about truths versus lies or facts versus falsehoods. It was about stability and shared reality versus disorientation and chaos. And in that day’s simulation of the 2020 election, chaos had won by suppressing the vote.
In August, Twitter CEO Jack Dorsey was interviewed on the New York Times podcast The Daily, where he was asked explicitly what his company will do if President Donald Trump uses Twitter to declare himself the winner of the 2020 election before the results have been decided. Dorsey paused, then provided a vague answer about learning lessons from the confusion that occurred in 2000 with the Florida recount and working with “peers and civil society to really understand what’s going on.” It was 88 days before the election, and my heart sank.
For those of us who study misinformation and investigate online efforts to interfere with democratic processes around the world, this election feels like our Olympics. It can be hard to remember just how different attitudes around the threat of false and misleading information were back in November 2016, when, two days after the presidential election, Mark Zuckerberg famously claimed that it was “crazy” to suggest that fake news had affected the outcome. Now a misinformation field has emerged, with new journals inspiring cross-disciplinary research, millions of dollars in funding spent on nonprofits and start-ups, and new forms of regulation from the European Union Code of Practice on Disinformation to U.S. legislation prohibiting so-called deepfakes.
Planning for the impact of misinformation on the 2020 election has taken the form of a dizzying number of conferences, research projects and initiatives over the past four years that warned us about the effects of rumors, conspiracies and falsehoods on democracies. Recent months were supposed to be the home stretch. So when Dorsey failed to give a concrete answer to a question about a highly likely scenario, it felt like watching a teammate fall on their face when they should have been nailing the dismount.
Every platform, newsroom, election authority and civil society group could have a detailed response plan for a number of anticipated scenarios—because we have seen them play out before. The most common form of disinformation is that which sows doubt about the election process itself: flyers promoting the wrong election date, videos of ballot boxes that look like they have been tampered with, false claims about being able to vote online circulating on social media and in closed groups on WhatsApp. The low cost of creating and disseminating disinformation allows bad actors to test thousands of different ideas and concepts—they are just looking for one that could do real damage.
We have not grappled with the severity of the situation. Social media platforms seem to have only recently recognized that this election might not end neatly on November 3. Nonprofits whose employees are exhausted after months of COVID-related misinformation work are still scrambling for resources. The public has not been adequately trained to manage the onslaught of misinformation polluting their feeds. Most newsrooms have not run through scenarios to practice how they will cover, say, bombshell leaks in the run-up to Election Day or after the election, when the outcome might be disputed. In the spring of 2017 France saw #macronleaks, the release of 20,000 e-mails connected to Emmanuel Macron’s campaign and financial history two days before the election. Because of a French law that prohibits media mentions of elections in the final 48 hours of the campaign, the impact was limited. The U.S. does not have such protections.
The panic is palpable now. My e-mail inbox is full of requests from platforms to join belatedly assembled task forces and from start-ups wondering whether some technology could be quickly built to “move the needle” on election integrity. There are near-daily updates to platform policies, but these amendments are not comprehensive, lack transparency and have not been independently assessed.
Ultimately the rise of misinformation, polarization and emotion-filled content is our new reality, and the biggest threat we face in this moment is voter suppression. So rather than “muting” friends and family members when they post conspiracy theories on Facebook, start a conversation about the serious damage that rumors and falsehoods are doing to our lives, our health, our relationships and our communities. Do not focus on the veracity of what is being posted; use empathetic and inclusive language to ask how people are voting. No one should be shamed for sharing misinformation because we are all susceptible to it—especially now, when our worlds have been turned upside down and many of us are operating in fight-or-flight mode. To avoid losing ourselves in the noise, we have to help one another adapt.
A truly great scientist not only makes significant technical contributions but also reshapes a discipline’s conceptual landscape through a commanding depth and breadth of vision. Theoretical physicist John D. Barrow, who passed away on September 26 at the age of 67, was one such individual. Barrow’s career spanned the golden age of cosmology, in which the subject was transformed from a scientific backwater to a mainstream precision science. He was both a player and a commentator in these heady times, producing several hundred research papers and scholarly articles, as well as a string of expository books, each a model of wit and clarity that made him a public intellectual worldwide.
A Londoner by birth, Barrow obtained a doctorate from the University of Oxford in 1977 under the direction of Dennis Sciama, joining the ranks of a formidable lineup of mentees that included Martin Rees and Stephen Hawking. This came at a time of crisis in cosmology. Although the big bang hypothesis for the origin of the universe was well established, the originating event itself remained a mystery; in particular, there was puzzlement about the initial conditions. Analysis of the cosmic microwave background radiation—the fading afterglow of the big bang discovered in the mid-1960s—indicated that the universe erupted into existence in an astonishingly uniform state. The expansion rate of the universe also matched its gravitating power to extraordinary precision. It looked like a fix. Barrow addressed these foundational questions in a series of papers on smoothing mechanisms applied to chaotic cosmological expansion, followed in later years by analyses that included extensions to Einstein’s general theory of relativity and various alternative theories of gravitation. The currently popular inflationary universe theory, which explains the “fix” as resulting from a sudden burst of accelerating expansion in the first split second of cosmic existence, provided additional fertile ground for Barrow’s theoretical explorations.
After a stint at the University of California, Berkeley, he took up a position at the then relatively new University of Sussex in the south of England, where he produced a stunning output of journal papers, soon making him something of a scientific celebrity. His research addressed issues as diverse as the asymmetry between matter and antimatter in the universe, the theory of black holes, the nature of dark matter and the origin of galaxies. His early preoccupation with initial cosmic conditions led Barrow to reinstate in physical science the ancient philosophical concept of teleology, which (in its various guises) takes into account final as well as initial states. The centerpiece of this approach was a remarkable book published in 1986 and co-authored with physicist Frank Tipler entitled The Anthropic Cosmological Principle. It built on the recognition that if the initial state of the universe or the fundamental constants of physics had deviated—in some cases, by just a tiny amount—from the values we observe, the universe would not be suitable for life. The book is a detailed and extensive compilation of such felicitous biofriendly “coincidences,” and it became a canonical reference text for a generation of physicists. It also provoked something of a backlash for flirting with notions of cosmic purpose and straying too close to theology in some people’s eyes. Nevertheless, its style of “anthropic” reasoning subsequently became a familiar part of the theorist’s arsenal, albeit a still contentious one.
More recently, Barrow was interested in the possibility that the fine-structure constant—an unexplained number that describes the strength of the electromagnetic force—might not be constant at all but rather vary over cosmological scales. He produced a theoretical basis for incorporating such a phenomenon in physical law while also remaining open-minded on the observational evidence. His adventurous choices of research problems typified Barrow’s intellectual style, which was to challenge the hidden assumptions underpinning cherished mainstream theories. Fundamental problems in physics and cosmology may appear intractable, he reasoned, because we are thinking about them the wrong way. It was a mode of thought that resonated with many colleagues, this writer included, who are drawn to reflect on the deepest questions of existence.
In 1999 Barrow moved to the University of Cambridge as a professor in the department of applied mathematics and theoretical physics and became a fellow of Clare Hall. In parallel, he completed two separate periods as a professor at the select Gresham College, founded in 1597 to organize free public lectures in London. Barrow’s Cambridge appointment included his directorship of the Millennium Mathematics Project, an educational program that caters to the needs of elementary and high school children in imaginative ways. But this demanding array of teaching responsibilities did not deter Barrow from his prodigious research output.
Barrow had many talents beyond the realm of theoretical physics and mathematics. In his younger years, he was an Olympic-standard middle-distance runner. Barrow followed sports in general, and running in particular, with undiminished enthusiasm throughout his life. He was a strikingly stylish dresser and regularly traveled to Italy for his sartorial purchases. He was also a connoisseur of fine dining, making him the ideal traveling companion. An engaging raconteur, Barrow boasted a fund of humorous stories about politics, academia and the humanities. Touch on almost any subject, and he would have something entertaining to say about it. Barrow’s scholarship and writing extended to art theory, musicology, history, philosophy and religion—a grasp of human culture aptly recognized by an invitation to deliver the prestigious Centenary Gifford Lectures at the University of Glasgow in 1989 and also by the 2006 Templeton Prize. These acknowledgments were in addition to many notable scientific and academic honors, including being made a Fellow of the Royal Society.
The Barrow family loved Italy, where its members maintained many professional and social contacts over the decades. It was in Milan that another remarkable John Barrow project culminated: the premiere of the stage play Infinities, which he had written. It duly received the Premio Ubu Italian theater prize. It was thus, with some poignancy, that John and his wife Elizabeth were able to make one last trip there just a few weeks ago, in the face of onerous coronavirus-related travel restrictions and the debilitating effects of treatment for his colon cancer. John Barrow died at home and is survived by his wife and three children.
Sometimes it is the strange similarities and symmetries of unrelated historical moments that most clearly display the patterns of human experience. Archives separated by oceans can be in dialogue with each other. A case in point: in the National Library of Scotland and the national archives in Cuba, you can find unsettling documents detailing the skull measurements of two renowned Black leaders of the 19th century. These peculiar archival records demonstrate the long relationship between scientific inquiry and racism. Together, they caution against the perennial problem of societal prejudices seeping into scientific “progress.”
Frederick Douglass, the American abolitionist orator and publisher, and Antonio Maceo, the celebrated military hero of the Cuban independence movement, are rarely if ever mentioned together. Yet these men experienced strikingly similar scrutiny about their mixed racial ancestry. Racist commentators asked whether these Black leaders’ achievements were attributable to their partial “European” or “white” blood. The primary objective of 19th-century “racial science” and ethnology was to stratify the human species into superior and inferior racial categories; such ideas could then be used to justify racial oppression.
But in the attacks levied against these two figures, another factor is in play: the erasure of nonwhite excellence. Douglass’s rhetorical mastery and Maceo’s courageous military exploits were testaments to Black artistry, intellect and leadership. By suggesting their triumphs stemmed from their having partial white ancestry, white critics attempted to rob them of their status as exemplars of Black genius.
Why skulls? Why were some 19th-century scientists so crazy for craniums? Samuel George Morton, a scientist from Philadelphia, epitomized this trend. Morton’s office, filled with skulls from around the world (many retrieved by grave robbers), was affectionately known as the “American Golgotha.” To Morton, skulls were the key: cranial characteristics dictated racial difference and supposedly proved Europeans were the pinnacle of human advancement. Morton’s Crania Americana perpetuated the ideas of cranial racial difference that had a sprawling hold on 19th-century medical and popular thought. Even the skulls of celebrated leaders like Douglass and Maceo did not escape racialized scrutiny.
Antonio Maceo deserved his title, “the Bronze Titan.” He achieved the rank of major general, fought in hundreds of military engagements against Spanish colonial authorities and refused to be slowed by the many wounds he suffered in the field. Maceo’s parents were classified as “pardos libres,” meaning they were free (not enslaved) and mixed-race. Given his Afro-Cuban parentage, Maceo took pride in his position as a public symbol of the potential for racial equality in Cuba. He led multiracial militias and famously rejected the terms of the 1878 Pact of Zanjón for not guaranteeing independence and the total abolition of slavery. Maceo was killed in battle on December 7, 1896, and came to symbolize the collective struggle of a multiracial Cuban population and a national future free from past racial injustices.
In September of 1899, his body was exhumed by Cuban authorities to reinter him at a monument in his honor. In an act that would have been unthinkable had he been white, his bones were measured and analyzed by an anthropological commission to see if he had been more European or African, more white than Black. Historian Marial Iglesias Utset details how the examination “combined, in a splendidly paradoxical way, the ‘patriotic’ motivation to glorify the memory of the independence hero with the application of techniques developed by … defenders of ‘scientific racism.’”
Henry Louis Gates, Jr. describes the scene cogently: “Imagine if Ulysses S. Grant had died during the Civil War. And imagine if scientists then decided to cut him up like a frog in biology class to find out if his skeleton looked more English, say, or Irish. This was scandalous.”
The “estudio antropológico” began with a study of Maceo’s skull. The scientists were impressed with the “lines” of the cranium. They assured the reader that, although the right side of the skull was slightly larger than the left, that was common. They noted happily that, because of its proportions, his cranium could be “confused” with a European’s. With that line, the purpose of the charade became clear. Maceo was being “scientifically” refashioned into a white Cuban hero.
Moving to the rest of the skeleton, the examiners recorded that Maceo appeared to be a man of Herculean strength, “un hombre de una fuerza hercúlea.” This no one doubted; he was the Bronze Titan after all. But then the report took another shameful turn. The scientists described how the mixture of white and Black—“el cruzamiento del blanco y del negro” —created a superior individual when the European side “predominated” and an “inferior” individual when it did not.
Maceo’s measurements were compared with those of “Blacks of Africa,” “Modern Parisians,” and “adult Europeans.” The examiners divined that while Maceo’s skeleton resembled that of a man of African descent, his cranium was more European. The conclusion declared that “given the race to which he belonged and the sphere in which he nurtured and pursued his activities, Antonio Maceo can, in all rightness, be considered as a truly superior man” (Utset’s translation). As this Cuban military hero was being publicly immortalized, white Cuban authorities made sure to whitewash his body, or at least his skull.
Douglass, too, had his head examined. While giving antislavery lectures in the British Isles in 1846, he met prominent phrenologist George Combe in Edinburgh (phrenology is the now-debunked pseudoscience of predicting mental and intellectual capacity by measuring the shape of the skull). Combe wrote in his diary: “Frederick Douglas[s] the self-liberated slave from Maryland U.S. breakfasted with us this morning.… The lower ridge & middle perpendicular line of the frontal lobe are large… The head is well balanced…” In his strange phrenological shorthand, Combe recorded all sorts of observations about Douglass’s head.
Combe viewed himself as antislavery, and his examination did not have the same racist and political motivations as the Maceo commission’s. Still, he was well acquainted with Morton and had publicized his Crania Americana in the U.K. In his publications, Combe subscribed to the sort of natural racial hierarchies Douglass despised. While visiting the United States in 1839, Combe expressed disgust at American slavery, yet he viewed the institution through his “scientific,” racialized lens: “the African has been deprived of freedom and rendered ‘property’ … because he is by nature a tame man…. In both [Africans and Native Americans], the brain is inferior in size, particularly in the moral and intellectual regions, to that of the Anglo-Saxon race, and hence the foundation of the natural superiority of the latter over both.” Combe’s journal reveals how impressed he was with Douglass. He must have seen him as a racial outlier.
While Combe did not dwell on Douglass’s mixed ancestry, others did. The orator had been born to an enslaved woman on the Eastern Shore of Maryland. His father, he presumed, was the white man who owned him and his mother. As with Maceo, racists explained his oratorical power by pointing to his “European” or “Anglo-Saxon” blood.
At a public meeting in 1850, Douglass debated a Dr. Grant who postulated that Blacks were of a different species. When Douglass refuted this convincingly, “the Doctor’s adherents cried out that Douglass was not a negro—he was half white—and the inference that they would draw was, that his logic and eloquence all came from his white father and none from his black mother.”
Douglass addressed such ideas in a speech entitled “The Claims of the Negro, Ethnographically Considered,” delivered in 1854. Taking direct aim at Morton, Douglass noted how Crania Americana dripped with “contempt for negroes.” He observed that “an intelligent black man is always supposed to have derived his intelligence from his connection with the white race. To be intelligent is to have one’s negro blood ignored.” Fascinatingly, Douglass argued “that intellect is uniformly derived from the maternal side.” This allowed him to credit his mother with his inherent intellectual fortitude. In his second autobiography, he said of his mother, “I am quite willing, and even happy, to attribute any love of letters I possess … not to my admitted Anglo-Saxon paternity, but to the native genius of my sable, unprotected, and uncultivated mother.”
Douglass’s and Maceo’s interactions with racial science were not identical. Yet their stories intersect in how their scrutinizers sought to decouple their exceptional achievements from the truth of their racial identities. By doing so, white Cubans could comfortably claim Maceo, and white Americans could comfortably dismiss Douglass.
So, what does racial science reveal about society? Today, the strange logics of these pseudosciences have receded, but they are not altogether gone. Racism, greed and fear of an “other” achieving greatness are still very much alive. These stories of Douglass and Maceo challenge us to assess how our society erases Black excellence today. To put it another way, how do predominantly white institutions claim credit for Black achievement?
To Douglass, the question at the root of all of this “scientific moonshine” was “whether the rights, privileges, and immunities enjoyed by some ought not to be shared and enjoyed by all.” Douglass and Maceo achieved greatness in the pursuit of these universal ideals and could not be eclipsed by those seeking to whitewash Black genius.
Instead of thinking about whether to vote Democratic or Republican in the upcoming U.S. election, think about voting to protect science rather than destroy it.
President Donald Trump’s abuse of science has been wanton and dangerous. It has also been well documented. Since the November 2016 election, Columbia Law School has maintained a Silencing Science Tracker that records the Trump Administration’s attempts to restrict or prohibit scientific research, to undermine science education or discussion, or to obstruct the publication or use of scientific information. By early October, the tracker had detailed more than 450 cases, including scientific bias and misrepresentation (123 instances), budget cuts (72), government censorship (145), interference with education (46), personnel changes (61), research hindrances (43) and suppression or distortion of information (19).
The Union of Concerned Scientists (UCS) also keeps a tracker of the administration’s attacks on science. It details antiscience rules, regulations and orders; censorship; politicization of grants and funding; restrictions on conference attendance; rollbacks of data collection or data accessibility; sidelining of science advisory committees; and studies that have been halted, edited or suppressed. The fact that so many types of abuse have occurred, and so often that they each warrant their own category, is scary.
Alarmingly, many of the attacks involve the most immediate and long-term threats to people on earth: the COVID-19 pandemic and climate change. In September, for example, Politico reported that Trump’s political appointees in the Department of Health and Human Services were editing weekly reports from the Centers for Disease Control and Prevention (CDC) about the pandemic prior to publication. Ten days later, U.S. Secretary of Energy Dan Brouillette asserted that “no one knows” whether human activities are causing climate change—a refrain that is so tired it has become silly.
Such declarations parrot Trump’s own words and actions. As was widely reported, when the president was touring the California wildfires in mid-September and was asked about the role of climate change, he said, “It’ll just start getting cooler, you just watch.” Wade Crowfoot, California’s secretary for natural resources, replied, “I wish science agreed with you.” To which Trump retorted: “Well, I don’t think science knows, actually.”
Moves by Trump Administration officials to block or alter scientific information have been particularly egregious. In 2018 National Park Service leaders deleted language about climate change in a report done by an agency scientist, Maria Caffrey. She filed a whistleblower complaint, the language was reinstated—and later she was terminated. In May 2019 the U.S. Geological Survey director ordered employees to use climate change models that only project impacts through 2040, cutting off consideration of severe consequences that are likely in the years beyond.
In June 2019 Politico reported that Department of Agriculture officials buried dozens of climate change studies, including one that revealed how rice grown worldwide in an atmosphere with more carbon dioxide would provide less nutrition. The next month, a State Department scientist resigned after the White House blocked him from submitting written testimony to the House Intelligence Committee on the national security dangers of climate change.
In July 2020 the nonpartisan Government Accountability Office revealed how the Trump administration artificially lowered estimates of climate damages to justify weakened climate policies, failing to listen to experts from the National Academies of Sciences, Engineering and Medicine.
Undercutting science has dangerous repercussions. New York Times contributor David Leonhardt, analyzing COVID-19 data from the World Bank and Johns Hopkins University, found that as of September 1, if the U.S. had the same rate of COVID-19 deaths as the world average, 145,000 fewer Americans would have died from the disease.
Trump’s dismissal of medical science is one reason for the awful excess. As Ben Santer, a researcher at the Lawrence Livermore National Laboratory and a member of the National Academies, wrote in a Scientific American article in June: “It was scientifically incorrect for Donald Trump to dismiss the coronavirus as no worse than the seasonal flu, as he did on February 26. It was incorrect to advise U.S. citizens to engage in business as usual, which he did as late as March 10. It was incorrect to imply, as he did in a press briefing on March 19, that the malaria drugs hydroxychloroquine and chloroquine are promising remedies for COVID-19.” (There was already evidence that the medications did not help, and additional findings soon led the U.S. Food and Drug Administration to revoke authorization for their use.) “Dissemination of such inaccurate information helped to spread the novel coronavirus in America faster by delaying the adoption of social distancing.”
Even after Trump became ill with COVID, he continued to mislead the public about the danger of the illness and the safety and efficacy of the experimental treatments he received—while the White House has declined to do the sort of extensive contact tracing public health experts consider vital.
The administration has also been “suppressing CDC reports on how to safely operate businesses, schools and houses of worship during the pandemic,” according to UCS research analyst Anita Desikan, in an August blog post on the organization’s Web site.
Disregard for science threatens people in other ways. Desikan noted that the Environmental Protection Agency (EPA) has discounted the human health effects of particulate air pollution, which numerous studies show contributes to asthma, lung damage and birth defects, and has ignored the dangers of asbestos, a known human carcinogen, raised by its own scientists. The EPA, she noted, has even downplayed harm from a chemical that damages the hearts of human fetuses. The agency’s leaders, appointed by Trump, have rolled back numerous regulations affecting endangered species, clean air, clean water and toxic chemicals—even the neurotoxin mercury—which will increase hazards to human health as well as emissions of greenhouse gases. These threats are particularly important for Black, Latinx and Indigenous communities, which suffer disproportionately from pollution as well as COVID-19.
Science, built on facts and evidence-based analysis, is fundamental to a safe and fair America. Upholding science is not a Democratic or Republican issue. There are plenty of people in red and blue states across the country who respect and need science. Industrial innovation, profitable farming, homeland security, a competitive economy and therefore good jobs all depend on it. But politicians of different stripes have to get on board to protect science from further demise. In May, for example, the U.S. House of Representatives passed the Scientific Integrity Act as part of the Heroes Act. It would require science-based federal agencies to have a scientific integrity policy ensuring that no one at the agency will “suppress, alter, interfere, or otherwise impede the timely release and communication of scientific or technical findings.” But the bill sits idle in the Senate.
On an individual basis, the most powerful action you can take to protect science is to vote out of office a president who is trying to gut it—and to encourage people you know to do likewise, especially in the battleground states. The same applies to the November elections for key U.S. Senate races. Most senators and representatives do prize facts and evidence-based thinking, yet too many of them remain silent about Trump’s abuse of science. Their silence is complicity. For that reason, the November 3 election should be a day of reckoning.
“By now, I’ve lost any hope of educating my senior colleagues,” I joked after a lengthy debate about the educational implications of COVID-19 at a recent online faculty meeting. “Oh, stop it,” others protested. And in fact, I do have hope. It rests squarely on the shoulders of the younger generation, which lacks the baggage of prejudice, scars from past battles or an entrenched agenda. Our junior colleagues are unbiased enough to carry the torch of innovation, energetic and fearless enough to shape reality as if it were clay, and innocent enough to believe that the future can be better than the past.
Hiring freezes at universities, triggered by COVID-19 budget squeezes, endanger this future. The reduction in job opportunities threatens the careers of postdoctoral fellows and junior faculty. Some faculty searches that were supposed to conclude in March 2020 were canceled without offers being made, and new searches have yet to be scheduled.
If the current financial austerity persists for a few more years, it may force a cohort of early-career scientists to leave academia. This would inflict an irreversible blow to science. The lesson learned after many scientists left the Soviet Union during glasnost was that it is difficult to resurrect an academic system following an abrupt dilution of its talent pool.
In the coming year, funding agencies such as the National Science Foundation, NASA and the Department of Energy, as well as private foundations, must extend emergency support to junior researchers who need another year or two of bridge funding. This will constitute a “bridge over troubled water” to our future in science and technology.
But the challenge to academia extends beyond postgraduate jobs to pregraduate education. Will the education system bounce back to in-person classes after a hoped-for vaccine suppresses the pandemic, or will students prefer to attend online classes from a location of their choice at a lower cost? The answer will depend on the supply-and-demand economics of online education in the coming years.
Fundamentally, the underlying question is whether learning is mostly about a transfer of information or about personal and social interactions as well. One could argue both ways, and a compromise might emerge in which the social interactions take place at a different location than the source of the online classes. Already, many students have rented shared apartments in their favorite location, be it on the beaches of Hawaii or the suburbs of their favorite city, while attending college online. Beyond this academic year, online teaching will advance partnerships between universities and Internet companies to expand enrollment dramatically by offering a more affordable hybrid of online and offline degrees.
COVID-19 forced a transition to enhanced household duties, including childcare and eldercare, which took an inescapable toll on the research productivity of many scientists. The impact was particularly acute for laboratory research, in which physical presence is essential. For some, confinement at home increased mental health needs. For many others, the pandemic deepened financial inequalities and threatened gender equity, with women bearing a greater burden of the extra workload in many homes. All of these factors must be addressed by academic planning committees that will adjust the promotion and tenure policies of colleges in order to ensure that diversity and equity endure.
But the past months have also offered a menu of minor benefits from online communications. For example, online conferences save on travel time and expenses. To engage the audience, the format of online conferences is shifting to debates and dialogues, with recorded lecture videos serving the supporting role of background materials. Long before Zoom, question and answer sessions were advocated by the wise philosopher Socrates as the best method for learning and for stimulating critical thinking.
In addition to the benefits for conferences, boring administrative meetings are no longer a waste of time, since passive participants can quietly pursue other activities on their computers while staying in listening mode. More generally, the lack of interruptions by unanticipated visitors to my work space has allowed me, for the first time, to reach a state of internal tranquility, ataraxia, recommended more than two millennia ago by the ancient Greek philosophers Pyrrho and Epicurus. With this aim in mind, I advocated social distancing long before it became trendy.
One way or another, academic life is likely to be forever changed. The damage caused by COVID-19 can be viewed optimistically as a fertile ground for establishing a better reality. A seismic change of this magnitude offers an opportunity to reboot the education system, aligning it better with our guiding principles and discarding its administrative inefficiencies. Doing so will require the drive and ideas of our younger colleagues, who will also be the primary bearers of the implications. Let’s make sure that they survive the current financial storm and construct a better version of academia, following the fine tradition of improvements since the initial version of the Platonic Academy.