For the first time, a quantum computer made from photons—particles of light—has outperformed even the fastest classical supercomputers.
Physicists led by Chao-Yang Lu and Jian-Wei Pan of the University of Science and Technology of China (USTC) in Shanghai performed a technique called Gaussian boson sampling with their quantum computer, named Jiŭzhāng. The result, reported in the journal Science, was 76 detected photons—far above and beyond the previous record of five detected photons and the capabilities of classical supercomputers.
Unlike a traditional computer built from silicon processors, Jiŭzhāng is an elaborate tabletop setup of lasers, mirrors, prisms and photon detectors. It is not a universal computer that could one day send e-mails or store files, but it does demonstrate the potential of quantum computing.
Last year, Google captured headlines when its quantum computer Sycamore took roughly three minutes to do what would take a supercomputer three days (or 10,000 years, depending on your estimation method). In their paper, the USTC team estimates that it would take the Sunway TaihuLight, the third most powerful supercomputer in the world, a staggering 2.5 billion years to perform the same calculation as Jiŭzhāng.
This is only the second demonstration of quantum primacy, the point at which a quantum computer exponentially outpaces any classical one, accomplishing a task that would otherwise be effectively impossible to compute. It is not just proof of principle; there are also some hints that Gaussian boson sampling could have practical applications, such as solving specialized problems in quantum chemistry and math. More broadly, the ability to control photons as qubits is a prerequisite for any large-scale quantum internet. (A qubit is a quantum bit, analogous to the bits used to represent information in classical computing.)
“It was not obvious that this was going to happen,” says Scott Aaronson, a theoretical computer scientist now at the University of Texas at Austin who along with then-student Alex Arkhipov first outlined the basics of boson sampling in 2011. Boson sampling experiments were, for many years, stuck at around three to five detected photons, which is “a hell of a long way” from quantum primacy, according to Aaronson. “Scaling it up is hard,” he says. “Hats off to them.”
Over the past few years, quantum computing has risen from obscurity to a multibillion-dollar enterprise recognized for its potential impact on national security, the global economy and the foundations of physics and computer science. In 2019, the U.S. National Quantum Initiative Act was signed into law to invest more than $1.2 billion in quantum technology over the next 10 years. The field has also garnered a fair amount of hype, with unrealistic timelines and bombastic claims about quantum computers making classical computers entirely obsolete.
This latest demonstration of quantum computing’s potential from the USTC group is critical because it differs dramatically from Google’s approach. Sycamore uses superconducting loops of metal to form qubits; in Jiŭzhāng, the photons themselves are the qubits. Independent corroboration that quantum computing principles can lead to primacy even on totally different hardware “gives us confidence that in the long term, eventually, useful quantum simulators and a fault-tolerant quantum computer will become feasible,” Lu says.
A LIGHT SAMPLING
Why do quantum computers have enormous potential? Consider the famous double-slit experiment, in which a photon is fired at a barrier with two slits, A and B. The photon does not go through A, or through B. Instead, the double-slit experiment shows that the photon exists in a “superposition,” or combination of possibilities, of having gone through both A and B. In theory, exploiting quantum properties like superposition allows quantum computers to achieve exponential speedups over their classical counterparts when applied to certain specific problems.
Physicists in the early 2000s were interested in exploiting the quantum properties of photons to make a quantum computer, in part because photons can act as qubits at room temperature, so there is no need for the costly task of cooling one’s system to a few kelvins (about –455 degrees Fahrenheit) as with other quantum computing schemes. But it quickly became apparent that building a universal photonic quantum computer was infeasible: it would require millions of lasers and other optical devices. As a result, quantum primacy with photons seemed out of reach.
Then, in 2011, Aaronson and Arkhipov introduced the concept of boson sampling, showing how it could be done with a limited quantum computer made from just a few lasers, mirrors, prisms and photon detectors. Suddenly, there was a path for photonic quantum computers to show that they could be faster than classical computers.
The setup for boson sampling is analogous to the toy called a bean machine, which is just a peg-studded board covered with a sheet of clear glass. Balls are dropped into the rows of pegs from the top. On their way down, they bounce off of the pegs and each other until they land in slots at the bottom. Simulating the distribution of balls in slots is relatively easy on a classical computer.
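For a sense of how easy the classical version is, here is a minimal sketch of a bean-machine simulation in Python. The ball count and number of peg rows are arbitrary, and collisions between balls are ignored for simplicity; a few lines of code reproduce the familiar bell-shaped pile of balls.

```python
import random
from collections import Counter

def galton_board(num_balls=10_000, num_rows=12):
    """Simulate a bean machine: at every peg a ball bounces left or right at
    random, and the slot it lands in is simply its number of rightward bounces.
    (Ball-on-ball collisions are ignored to keep the sketch short.)"""
    slots = Counter()
    for _ in range(num_balls):
        slot = sum(random.random() < 0.5 for _ in range(num_rows))
        slots[slot] += 1
    return slots

counts = galton_board()
for slot in range(13):  # 12 rows of pegs give 13 possible slots
    print(f"slot {slot:2d}: {'#' * (counts[slot] // 100)}")
```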
Instead of balls, boson sampling uses photons, and it replaces pegs with mirrors and prisms. Photons from the lasers bounce off of mirrors and through prisms until they land in a “slot” to be detected. Unlike the classical balls, the photons’ quantum properties lead to an exponentially increasing number of possible distributions.
The problem boson sampling solves is essentially “What is the distribution of photons?” A boson sampling device answers it simply by running: the detected photons are the distribution. A classical computer, by contrast, has to work out that distribution by computing what’s called the “permanent” of a matrix. For an input of two photons, this is just a short calculation with a two-by-two array. But as the number of photonic inputs and detectors goes up, the size of the array grows, exponentially increasing the problem’s computational difficulty.
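To see what that calculation involves, here is a minimal brute-force sketch in Python. It sums one product per permutation, which is the textbook definition of the permanent, so the number of terms grows as n factorial; real classical attacks use cleverer methods such as Ryser’s formula, but those still scale exponentially.

```python
from itertools import permutations
from math import prod

def permanent(matrix):
    """Brute-force permanent: like a determinant, but every term is added with
    a plus sign. The sum runs over all n! permutations of the columns, which is
    what makes large cases intractable."""
    n = len(matrix)
    return sum(
        prod(matrix[i][sigma[i]] for i in range(n))
        for sigma in permutations(range(n))
    )

# The two-photon case from the text: a two-by-two array is just two terms.
print(permanent([[1, 2],
                 [3, 4]]))  # 1*4 + 2*3 = 10
```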
Last year the USTC group demonstrated boson sampling with 14 detected photons—hard for a laptop to compute, but easy for a supercomputer. To scale up to quantum primacy, they used a slightly different protocol, Gaussian boson sampling.
According to Christine Silberhorn, a quantum optics expert at the University of Paderborn in Germany and one of the co-developers of Gaussian boson sampling, the technique was designed to avoid the unreliable single photons used in Aaronson and Arkhipov’s “vanilla” boson sampling.
“I really wanted to make it practical,” she says. “It’s a scheme which is specific to what you can do experimentally.”
Even so, she acknowledges that the USTC setup is dauntingly complicated. Jiŭzhāng begins with a laser that is split so it strikes 25 crystals made of potassium titanyl phosphate. After each crystal is hit, it reliably spits out two photons in opposite directions. The photons are then sent through 100 inputs, where they race through a track made of 300 prisms and 75 mirrors. Finally, the photons land in 100 slots where they are detected. Averaging over 200 seconds of runs, the USTC group detected about 43 photons per run. But in one run, they observed 76 photons—more than enough to justify their quantum primacy claim.
It is difficult to estimate just how much time would be needed for a supercomputer to solve a distribution with 76 detected photons—in large part because it is not exactly feasible to spend 2.5 billion years running a supercomputer to directly check it. Instead, the researchers extrapolate from the time it takes to classically calculate for smaller numbers of detected photons. At best, solving for 50 photons, the researchers claim, would take a supercomputer two days, which is far slower than the 200-second run time of Jiŭzhāng.
Boson sampling schemes have languished at low numbers of photons for years because they are incredibly difficult to scale up. To preserve the sensitive quantum arrangement, the photons must remain indistinguishable. Imagine a horse race where the horses all have to be released from the starting gate at exactly the same time and finish at the same time as well. Photons, unfortunately, are a lot more unreliable than horses.
As photons in Jiŭzhāng travel a 22-meter path, their positions can differ by no more than 25 nanometers. That is the equivalent of 100 horses going 100 kilometers and crossing the finish line with no more than a hair’s width between them, Lu says.
The USTC quantum computer takes its name, Jiŭzhāng, from Jiŭzhāng Suànshù, or “The Nine Chapters on the Mathematical Art,” an ancient Chinese text with an impact comparable to Euclid’s Elements.
Quantum computing, too, has many twists and turns ahead. Outspeeding classical computers is not a one-and-done deal, according to Lu, but will instead be a continuing competition to see if classical algorithms and computers can catch up, or if quantum computers will maintain the primacy they have seized.
Things are unlikely to be static. At the end of October, researchers at the Canadian quantum computing start-up Xanadu found an algorithm that quadratically cut the classical simulation time for some boson sampling experiments. In other words, if 50 detected photons sufficed for quantum primacy before, you would now need 100.
For theoretical computer scientists like Aaronson, the result is exciting because it helps give further evidence against the extended Church-Turing thesis, which holds that any physical system can be efficiently simulated on a classical computer.
“At the very broadest level, if we thought of the universe as a computer, then what kind of computer is it?” Aaronson says. “Is it a classical computer? Or is it a quantum computer?”
So far, the universe, like the computers we are attempting to make, seems to be stubbornly quantum.
Earthquakes send strong tremors through the earth’s crust, recorded by seismometers planetwide. Human bustle also creates an ongoing, high-frequency vibration—a background buzz—in the rock. After cities, states and countries implemented lockdowns to try to slow the spread of COVID-19 this past spring, the volume of human ground noise fell by up to 50 percent on average in various regions, as people stayed home instead of taking cars, buses and trains to work and school and as businesses and industries curtailed operations. The decline, evident for months, was recorded by seismometers as deep as 400 meters underground. “We were surprised,” says seismologist Stephen Hicks of Imperial College London, “that noise from daily human activity penetrated that far down.”
When a male sand-sifting sea star in the coastal waters of Australia reaches out a mating arm to its nearest neighbor, sometimes that neighbor is also male. Undaunted, the pair assume their species’ pseudocopulation position and forge ahead with spawning. Mating, pseudo or otherwise, with a same-sex neighbor obviously does not transfer a set of genes to the next generation—yet several sea star and other echinoderm species persist with the practice.
They are not alone. From butterflies to birds to beetles, many animals exhibit same-sex sexual behaviors despite their offering zero chance of reproductive success. Given the energy expense and risk of being eaten that mating attempts can involve, why do these behaviors persist?
One hypothesis, hotly debated among biologists, suggests this represents an ancient evolutionary strategy that could ultimately enhance an organism’s chances to reproduce. In results published recently in Nature Ecology & Evolution, Brian Lerch and Maria R. Servedio, from the University of North Carolina, Chapel Hill, offer theoretical support for this proposed explanation. They created a mathematical model that calculated scenarios in which mating attempts, regardless of partner sex, might be worth it. The results predicted that, depending on life span and mating chances, indiscriminate mating with any available candidates could in fact yield a better reproductive payoff than spending precious time and energy sorting out one sex from the other.
Although this study does not address sexual orientation or attraction, both of which are common among vertebrate species, it does get at some persistent evolutionary questions: when did animals start distinguishing mates by sex, based on specific cues, and why do some animals apparently remain indiscriminate in their choices?
“What is probably going to get lost in translation for most people is that same-sex sexual behavior doesn’t necessarily equal same-sex sexual orientation,” says Paul Vasey, professor of psychology and research chair at the University of Lethbridge in Alberta, who was not involved in the new study. Indiscriminate mating implies a random process between animals that do not transmit any sex-specific signals—whether chemical, sensory or behavioral.
Evolutionary biologists have proposed several explanations for indiscriminate mating attempts that include both same-sex and different-sex sexual behaviors, and Lerch and Servedio’s work adds a new theoretical underpinning to the literature. To predict how time, life span, and sex-specific cues might affect reproductive success, they established a model that had two sexes, one dubbed the “searcher” and one the “target.” They also set some adjustable factors: sex signals from the target could range from “nonexistent” to “always present,” and could be detectable by searchers in a range from “never” to “always.” If the signal were always present and the searcher always detected it, then indiscriminate mating would be nonexistent. But with no signals or weak ones, and with high risks involved in searching, mating with any available partner might tilt the scale toward evolutionary benefit.
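The published model is analytical, but its core trade-off can be caricatured in a few lines of code. The toy simulation below is not the Lerch–Servedio model, and every parameter value is invented: a “searcher” meets candidates one at a time, searching and each mating attempt both carry a small risk of death, and sex signals are only sometimes emitted and only sometimes detected. Whether the choosy or the indiscriminate strategy leaves more offspring depends on how reliable those signals are and how risky waiting is, which is the qualitative point of the paper.

```python
import random

def lifetime_matings(strategy, p_signal=0.5, p_detect=0.5,
                     p_death_search=0.05, p_death_attempt=0.05,
                     trials=100_000):
    """Toy simulation, not the published model: count successful matings
    (with a compatible-sex partner) over a searcher's lifetime."""
    total = 0
    for _ in range(trials):
        offspring = 0
        alive = True
        while alive:
            if random.random() < p_death_search:     # risk paid just for searching
                break
            compatible = random.random() < 0.5        # half of candidates are the other sex
            # A sex signal is only sometimes present and only sometimes noticed.
            signal_seen = compatible and random.random() < p_signal and random.random() < p_detect
            if strategy == "indiscriminate" or signal_seen:
                if compatible:
                    offspring += 1                     # only compatible matings produce offspring
                if random.random() < p_death_attempt:  # mating attempts are risky too
                    alive = False
        total += offspring
    return total / trials

for strategy in ("choosy", "indiscriminate"):
    print(f"{strategy:15s} average lifetime matings: {lifetime_matings(strategy):.2f}")
```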
The model also suggested an effect involving death and time: for species with short lives, the indiscriminate approach might be the best use of time, maximizing odds of at least one success. Species with the longest lives would likely have more mating opportunities. But indiscriminate mating might benefit them as well—with the luxury of time to take a gamble, these animals might boost reproductive success by taking every mating opportunity that comes along and still be able to compensate for misfires.
Long-lived echinoderms such as the sand-sifting sea star are perhaps an example of the latter. Echinoderms lie just outside the vertebrate family tree and are among the closest invertebrate relatives of animals with backbones. For this reason, scientists often use this animal group as the evolutionary stand-in for a common ancestor of vertebrates. These spiky creatures have rudimentary structures that can detect light, but it is unlikely that the animals send or receive sex-specific visual signals. And, Lerch and Servedio write, echinoderms show little evidence of emitting sex-specific chemical cues over a distance. The mathematical model would predict that animals like these, with a relatively long life and no apparent sex signals, might benefit from indiscriminate mating. In the case of the sand-sifting sea star, real-world observation confirms that at least some pair up for mating regardless of sex. Sea urchins do, too.
Lerch and Servedio’s model “gives us a deeper theoretical understanding to empirically test just how prevalent indiscriminate sexual behaviors are, [and] my guess is, pretty darn prevalent,” says Max Lambert, a conservation biologist and postdoctoral researcher at the University of California, Berkeley, who was not involved in the new work but co-authored a 2019 paper on the subject.
Yet single examples bearing out a theoretical prediction do not prove or disprove that same-sex sexual behaviors sit at the root of the vertebrate family tree. Evolution has been known to shape, dispense with, and yet again shape similar tactics and features over millions of years. It is possible that some species showing these behaviors, including many insects and spiders, might do so from mistaken identity—an imperfect read of a sex-specific signal—rather than by way of a reproductive gamble. After all, some of these animals even occasionally choose the wrong species. Yet another suggested explanation is the undertaking of noncommittal trial runs in preparation for the real thing.
Vasey says multiple evolutionary pathways probably generate same-sex sexual behaviors. “You can certainly see in a particular ecological context how that tactic—if it moves, mate with it—might be a good one,” Vasey says. But “whether you can jump from there to say it was the ancestral pattern? That’s another question that will require more real-world studies to answer.”
For the moment, debate on the topic continues. Lambert, with first author Julia Monk and colleagues, last year published a paper in Nature Ecology & Evolution proposing that some degree of same-sex activity could have been part of an ancestral repertoire of behaviors, after which sex-based signals arose and allowed for more selective mating.
But this proposition highlights a chicken-or-egg question about which came first: sex-specific signals, or different-sex sexual behavior. In the absence of eyewitnesses, the best scientists can do is start with plausible ideas and devise ways to test them. The testable parts of these ideas often rely on the kinds of mathematical models that Lerch and Servedio developed in their paper.
“We were hoping in part to stimulate exactly this type of research, so I was happy to see someone building that argument and testing it theoretically,” says Monk, a Ph.D. candidate and ecologist at the Yale School of the Environment.
A group of researchers wrote a critical response to the hypothesis that Monk and her colleagues proposed, calling for less theory and more testing. Vasey agrees about the need for more hard evidence. “I personally would like to see more data collection to accompany all of this theorizing,” he says. “We need to see what real animals are doing in the real world, because the models are only as good as the assumptions that underlie them.”
Join Scientific American for a conversation about the next steps in humanity’s reconnaissance of Mars. Featuring Casey Dreier, senior space policy adviser at The Planetary Society, and space & physics editor Lee Billings, this deep dive will begin with an overview of NASA’s upcoming Perseverance rover, slated to land on Mars in February 2021 to search for signs of past and present life and to gather samples for future return to Earth.
Dreier and Billings will also discuss the “post-Perseverance” future in which space agencies and private companies may pursue major shifts in Mars exploration strategies, and how those plans could forever change our understanding of—and relationship with—the Red Planet.
China has apparently landed on the moon again — and this time the country plans to bring home some souvenirs.
Chang’e 5, China’s first-ever sample-return mission, successfully touched down today (Dec. 1), according to state media reports. Details on the landing were not immediately available from the China National Space Administration, but the state-run CGTN news channel announced the landing success in a single-sentence statement.
Chang’e 5’s landing was expected to occur at about 10:13 a.m. EST (1513 GMT) near Mons Rümker, a mountain in the Oceanus Procellarum (“Ocean of Storms”) region of the moon.
Two pieces of the four-module, 18,100-lb. (8,200 kilograms) Chang’e 5 mission hit the gray dirt today — a stationary lander and an ascent vehicle. If all goes according to plan, the lander will spend the next few days collecting about 4.4 lbs. (2 kg) of lunar material, some of it dug from up to 6.5 feet (2 meters) beneath the lunar surface.
The sample will then be transferred to the ascent vehicle, which will launch to lunar orbit and meet up with the other two Chang’e 5 elements — an orbiter and an Earth-return craft. The return vehicle will haul the moon dirt and rocks to our planet, with a touchdown planned in Inner Mongolia in mid-December.
That will be a landmark event; pristine lunar samples have not been delivered to Earth since 1976, when the Soviet Union’s Luna 24 mission came home with about 6 ounces (170 grams) of material.
Chang’e 5 just launched on Nov. 23, so it’s packing a lot of action into a few short weeks. The compressed timeline is driven largely by the mission’s energy needs: The Chang’e 5 lander is solar powered, so it must get all of its work done in two Earth weeks at most, before the sun sets at Mons Rümker. (One lunar day lasts about 29 Earth days, so most moon sites receive two weeks of continuous sunlight followed by two weeks of darkness.)
Chang’e 5 is the latest mission in the Chang’e program of robotic lunar exploration, which is named after a moon goddess in Chinese mythology. The Chang’e 1 and Chang’e 2 orbiters launched in 2007 and 2010, respectively, and Chang’e 3 put a lander-rover duo down on the moon’s near side in December 2013.
The Chang’e 5 T1 mission sent a prototype return capsule around the moon and back to Earth in October 2014 to help prepare for Chang’e 5. And in January 2019, the Chang’e 4 lander-rover duo pulled off the first-ever soft touchdown on the moon’s mysterious, largely unexplored far side. Both Chang’e 4 robots remain operational today, as does the Chang’e 3 lander.
Though Chang’e 5’s operational life will be short, the mission is designed to have a long-lasting impact. After all, scientists are still studying the 842 lbs. (382 kg) of lunar material brought to Earth by NASA’s Apollo missions from 1969 to 1972.
Some of the Apollo material came from Oceanus Procellarum, a huge volcanic plain that Apollo 12 explored in late 1969. But Mons Rümker rocks formed just 1.2 billion years ago, whereas all of the samples collected by the Apollo astronauts are more than 3 billion years old.
Chang’e 5 therefore “will help scientists understand what was happening late in the moon’s history, as well as how Earth and the solar system evolved,” the nonprofit Planetary Society wrote in its description of the mission.
Chang’e 5 isn’t the only sample-return game in town. Japan’s Hayabusa2 mission is scheduled to deliver pieces of the asteroid Ryugu to Earth on Dec. 5, and NASA’s OSIRIS-REx probe collected samples of the space rock Bennu in late October. The Bennu samples are scheduled to come home in September 2023.
After two cable failures in the span of four months, Puerto Rico’s most venerable astronomy facility, the Arecibo radio telescope, has collapsed in an uncontrolled structural failure.
The U.S. National Science Foundation (NSF), which owns the site, decided in November to proceed with decommissioning the telescope in response to the damage, which engineers deemed too severe to stabilize without risking lives. But the NSF needed time to come up with a plan for how to safely demolish the telescope in a controlled manner.
Instead, gravity did the job this morning (Dec. 1) at about 8 a.m. local time, according to reports from the area.
“NSF is saddened by this development,” the agency wrote in a tweet. “As we move forward, we will be looking for ways to assist the scientific community and maintain our strong relationship with the people of Puerto Rico.”
The NSF added that no injuries had been reported, that the top priority was to maintain safety and that more details would be provided when confirmed.
“What a sad day for Astronomy and Planetary science worldwide and one of the most iconic telescopes of all time,” Thomas Zurbuchen, NASA’s associate administrator for science, wrote in a tweet. “My thoughts are with the staff members and scientists who have continued to do great science during the past years and whose life is directly affected by this.”
Images shared on Twitter by Deborah Martorell, a meteorologist for Puerto Rican television stations, compare views of the observatory taken yesterday — showing the 900-ton science platform strung up on cables above the massive dish — and today, when the observatory’s three supporting towers are bare.
None of the three towers collapsed fully, which was one of NSF’s key concerns about leaving the structure as it was. Martorell’s image does appear to show some damage in the knot of buildings at the base of one of the support towers, which includes administrative buildings and a public visitor’s center, although the buildings are still standing.
In an interview with local television station Noticentro, Jonathan Friedman, a physicist who works at Arecibo Observatory and lives nearby, said that he heard a loud rumble that he compared to a train or an avalanche — or to the earthquakes that plagued Puerto Rico in January. Friedman also confirmed that only the tips of the supporting towers broke off, as Martorell’s image suggested.
Since the first cable failure in August, Arecibo Observatory has enforced a safety zone at the facility, although its size changed as damage was incurred and evaluated, Ralph Gaume, director of NSF’s Division of Astronomical Sciences, said during a news conference held on Nov. 19, at which the NSF announced its decision to decommission the telescope.
Even during that news conference, the tenuous state of the telescope was clear. “The structure, as far as I know, is currently standing, so that implies that it is currently stable,” Gaume said at the time, while also noting the 2.5-month gap between the first cable failure and the second.
The massive radio dish has been at the forefront of atmospheric science, radio astronomy, and planetary radar capability for decades. It was also that rare telescope to become an icon in popular culture, thanks in part to its leading roles in the movies GoldenEye and Contact.
In addition to the telescope, Arecibo Observatory also includes a LIDAR instrument that scientists use to study the area where Earth’s atmosphere and space meet. When the NSF announced that it would decommission the telescope, officials emphasized that a key priority was ensuring Arecibo Observatory as a larger facility would continue.
At the time, the NSF couldn’t assess whether the telescope would be replaced.
“This is a really, really hard morning for Puerto Rico, for science, for our connection to the cosmos,” journalist Nadia Drake, whose father Frank Drake is a former director of Arecibo Observatory, wrote in a tweet. “RIP #Arecibo.”
COVID-19 has wreaked havoc on Black and Indigenous communities and other people of color, and U.S. medical institutions should be doing everything they can to root out and eliminate entrenched racial inequities. Yet many of the screening assessments used in health care are exacerbating racism in medicine, automatically and erroneously changing the scores given to people of color in ways that can deny them needed treatment.
These race-based scoring adjustments to evaluations are all too common in modern medicine, particularly in the U.S. To determine the chances of death for a patient with heart failure, for example, a physician following the American Heart Association’s guidelines would use factors such as age, heart rate and systolic blood pressure to calculate a risk score, which helps to determine treatment. But for reasons the AHA does not explain, the algorithm automatically adds three points to non-Black patients’ scores, making it seem as if Black people are at lower risk of dying from heart problems simply by virtue of their race. This is not true.
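To make the mechanics concrete, here is an illustrative sketch of how such an additive adjustment shifts a score. The point values assigned to age, heart rate and blood pressure below are invented; the only detail taken from the guideline described above is that three points are added automatically for non-Black patients.

```python
def heart_failure_risk_points(age, heart_rate, systolic_bp, is_black):
    """Hypothetical point values for the clinical variables (NOT the real
    scoring table); only the 3-point race adjustment mirrors the published rule."""
    points = 0
    points += max(0, (age - 40) // 10)            # invented age points
    points += max(0, (heart_rate - 80) // 10)     # invented heart-rate points
    points += max(0, (160 - systolic_bp) // 10)   # invented blood-pressure points
    if not is_black:
        points += 3                                # the automatic race adjustment
    return points

# Two patients with identical vital signs end up three points apart, which can
# be the difference between appearing "high risk" and "lower risk."
print(heart_failure_risk_points(65, 95, 130, is_black=False))
print(heart_failure_risk_points(65, 95, 130, is_black=True))
```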
A recent paper in the New England Journal of Medicine presented 13 examples of such algorithms that use race as a factor. In every case, the race adjustment results in potential harm to patients who identify as nonwhite, with Black, Latinx, Asian and Native American people affected to various degrees by different calculations. These “corrections” are presumably based on the long-debunked premise that there are innate biological differences among races. This idea persists despite ample evidence that race—a social construct—is not a reliable proxy for genetics: Every racial group contains a lot of diversity in its genes. It is true that some populations are genetically predisposed to certain medical conditions—the BRCA mutations associated with breast cancer, for instance, occur more frequently among people of Ashkenazi Jewish heritage. But such examples are rare and do not apply to broad racial categories such as “Black” or “white.”
The mistaken conflation of race and genetics is often compounded by outdated ideas that medical authorities (mostly white) have perpetuated about people of color. For example, one kidney test includes an adjustment for Black patients that can hinder accurate diagnosis. It gauges the estimated glomerular filtration rate (eGFR), which is calculated by measuring creatinine, a waste product of muscle metabolism that is normally cleared by the kidneys. Black patients’ scores are automatically adjusted because of a now discredited theory that greater muscle mass “inherent” to Black people produces higher levels of creatinine. This inflates the overall eGFR value, potentially disguising real kidney problems. The results can keep Black patients from getting essential treatment, including transplants. Citing these issues earlier this year, medical student Naomi Nkinsi successfully pushed the University of Washington School of Medicine to abandon the eGFR race adjustment. But it remains widely used elsewhere.
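As a concrete illustration, the sketch below uses the four-variable MDRD Study equation, one widely used eGFR formula (constants quoted from memory and shown only to illustrate how the race coefficient works; consult a clinical reference before relying on them). Multiplying by roughly 1.21 for Black patients raises the reported eGFR enough to lift some borderline results above the common threshold of 60, below which chronic kidney disease is typically flagged.

```python
def egfr_mdrd(creatinine_mg_dl, age, female, black):
    """Four-variable MDRD Study equation (illustrative; verify constants before
    any real use). A higher eGFR looks like healthier kidneys."""
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212   # the race adjustment inflates the estimate by about 21 percent
    return egfr

# Identical creatinine, age and sex: the race-adjusted value is about 21 percent
# higher, which can push a borderline result above the eGFR-of-60 cutoff.
print(round(egfr_mdrd(1.4, age=60, female=False, black=False), 1))
print(round(egfr_mdrd(1.4, age=60, female=False, black=True), 1))
```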
A recent study in Science examined an algorithm used throughout the U.S. health system to predict broad-based health risks. The researchers looked at one large hospital that used this algorithm and found that, based on individual medical records, white patients were actually healthier than Black patients with the same risk score. This is because the algorithm used health costs as a proxy for health needs—but systemic racial inequality means that health care expenditures are higher for white people overall, so the needs of Black people were underestimated. In an analysis of these findings, sociologist Ruha Benjamin, who studies race, technology and medicine, observes that “today coded inequity is perpetuated precisely because those who design and adopt such tools are not thinking carefully about systemic racism.”
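The mechanism is easy to reproduce with a toy simulation (all numbers below are made up; this is not the study’s algorithm or data). If spending trails true health need more for Black patients than for white patients, then ranking people by predicted cost forces Black patients to be sicker before they are flagged, which is exactly the pattern the Science paper reported.

```python
import random

random.seed(0)

def simulate_patients(n=100_000, spending_share_black=0.7):
    """Toy data: each patient has a true health need; observed spending tracks
    that need, but for Black patients only a fraction of need becomes spending,
    standing in for unequal access to care."""
    patients = []
    for _ in range(n):
        need = max(0.0, random.gauss(5, 2))   # true health need, arbitrary units
        black = random.random() < 0.5
        spending = need * (spending_share_black if black else 1.0) + random.gauss(0, 0.5)
        patients.append((need, black, spending))
    return patients

patients = simulate_patients()
# A "risk score" trained on cost effectively ranks patients by spending; flag
# the top 10 percent for extra care, as the real tool did.
patients.sort(key=lambda p: p[2], reverse=True)
flagged = patients[: len(patients) // 10]
black_needs = [need for need, black, _ in flagged if black]
white_needs = [need for need, black, _ in flagged if not black]
print(f"flagged Black patients: {len(black_needs):5d}, mean true need {sum(black_needs) / len(black_needs):.2f}")
print(f"flagged white patients: {len(white_needs):5d}, mean true need {sum(white_needs) / len(white_needs):.2f}")
```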
The algorithms that are harming people of color could easily be made more equitable, either by correcting the racially biased assumptions that inform them or by removing race as a factor altogether, when it does not help with diagnosis or care. The same is true for devices such as the pulse oximeter, which is calibrated to white skin—a particularly dangerous situation in the COVID pandemic, where nonwhite patients are at higher risk of dangerous lung infections. Leaders in medicine must prioritize these issues now, to give fair and often lifesaving care to people left most vulnerable by an inherently racist system.