SARS-CoV-2 mutations similar to those in the B.1.1.7 UK variant could arise in cases of chronic infection, where treatment over an extended period can give the virus multiple opportunities to evolve, say scientists.
Writing in Nature, a team led by Cambridge researchers report how they were able to observe SARS-CoV-2 mutating in the case of an immunocompromised patient treated with convalescent plasma. In particular, they saw the emergence of a key mutation also seen in the new variant that led to the UK being forced once again into strict lockdown, though there is no suggestion that the variant originated from this patient.
Using a synthetic version of the virus's spike protein created in the lab, the team showed that specific changes to its genetic code, including the mutation seen in the B.1.1.7 variant, made the virus twice as infectious to cells as the more common strain.
SARS-CoV-2, the virus that causes COVID-19, is a betacoronavirus. Its RNA, its genetic code, is made up of a series of nucleotides (chemical building blocks represented by the letters A, C, G and U). As the virus replicates itself, this code can be miscopied, leading to errors known as mutations. Coronaviruses have a relatively modest mutation rate, at around 23 nucleotide substitutions per year.
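To put that figure in context, a quick back-of-the-envelope calculation converts it into a per-site rate. The genome length of roughly 29,900 nucleotides is a widely cited figure assumed here, not stated in this article:

```python
# Rough per-site substitution rate for SARS-CoV-2.
# The ~29,900-nucleotide genome length is an assumption, not taken
# from the article; 23 substitutions/year is the figure cited above.
genome_length = 29_900   # nucleotides (approximate)
subs_per_year = 23       # genome-wide substitutions per year

rate_per_site = subs_per_year / genome_length
print(f"{rate_per_site:.1e} substitutions per site per year")  # ~7.7e-04
```

In other words, any given position in the genome changes less than once per thousand years on average, which is why individual mutations of interest can be tracked as they emerge.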
Of particular concern are mutations that might change the structure of the ‘spike protein’, which sits on the surface of the virus, giving it its characteristic crown-like shape. The virus uses this protein to attach to the ACE2 receptor on the surface of the host’s cells, allowing it entry into the cells where it hijacks their machinery to allow it to replicate and spread throughout the body. Most of the current vaccines in use or being trialed target the spike protein and there is concern that mutations may affect the efficacy of these vaccines.
UK researchers within the Cambridge-led COVID-19 Genomics UK (COG-UK) Consortium have identified a particular variant of the virus that includes important changes that appear to make it more infectious: the ΔH69/ΔV70 amino acid deletion in part of the spike protein is one of the key changes in this variant.
Although the ΔH69/ΔV70 deletion has been detected multiple times, until now scientists had not seen it emerge within an individual. However, in a study published today in Nature, Cambridge researchers document how this mutation appeared in a COVID-19 patient admitted to Addenbrooke’s Hospital, part of Cambridge University Hospitals NHS Foundation Trust.
The individual concerned was a man in his seventies who had previously been diagnosed with marginal B cell lymphoma and had recently received chemotherapy, meaning that his immune system was seriously compromised. After admission, the patient was provided with a number of treatments, including the antiviral drug remdesivir and convalescent plasma — that is, plasma containing antibodies taken from the blood of a patient who had successfully cleared the virus from their system. Despite his condition initially stabilizing, he later began to deteriorate. He was admitted to the intensive care unit and received further treatment, but later died.
During the patient’s stay, 23 viral samples were available for analysis, the majority from his nose and throat. These were sequenced as part of COG-UK. It was in these sequences that the researchers observed the virus’s genome mutating.
Between days 66 and 82, following the first two administrations of convalescent plasma, the team observed a dramatic shift in the virus population: a variant bearing the ΔH69/ΔV70 deletion, alongside a mutation in the spike protein known as D796H, became dominant. Although this variant initially appeared to die away, it re-emerged when the third course of remdesivir and convalescent plasma therapy was administered.
Professor Ravi Gupta from the Cambridge Institute of Therapeutic Immunology & Infectious Disease, who led the research, said: “What we were seeing was essentially a competition between different variants of the virus, and we think it was driven by the convalescent plasma therapy.
“The virus that eventually won out — which had the D796H mutation and ΔH69/ΔV70 deletions — initially gained the upper hand during convalescent plasma therapy before being overtaken by other strains, but re-emerged when the therapy was resumed. One of the mutations is in the new UK variant, though there is no suggestion that our patient was where they first arose.”
Under strictly controlled conditions, the researchers created and tested a synthetic version of the virus carrying the ΔH69/ΔV70 deletion and the D796H mutation, both individually and together. The combined mutations made the virus less sensitive to neutralization by convalescent plasma, though the D796H mutation alone appears to have been responsible for the reduced susceptibility to the antibodies in the plasma. On its own, however, D796H also caused a loss of infectivity in the absence of plasma, which is typical of mutations that viruses acquire in order to escape from immune pressure.
The researchers found that the ΔH69/ΔV70 deletion by itself made the virus twice as infectious as the previously dominant variant. They believe the role of the deletion was to compensate for the loss of infectivity caused by the D796H mutation. This is a classic viral pattern, in which escape mutations are accompanied or followed by compensatory mutations.
“Given that both vaccines and therapeutics are aimed at the spike protein, which we saw mutate in our patient, our study raises the worrying possibility that the virus could mutate to outwit our vaccines,” added Professor Gupta.
“This effect is unlikely to occur in patients with functioning immune systems, where viral diversity is likely to be lower due to better immune control. But it highlights the care we need to take when treating immunocompromised patients, where prolonged viral replication can occur, giving greater opportunity for the virus to mutate.”
Reference: “SARS-CoV-2 evolution during treatment of chronic infection” by Steven A. Kemp, Dami A. Collier, Rawlings P. Datir, Isabella A. T. M. Ferreira, Salma Gayed, Aminu Jahun, Myra Hosmillo, Chloe Rees-Spear, Petra Mlcochova, Ines Ushiro Lumb, David J. Roberts, Anita Chandra, Nigel Temperton, The CITIID-NIHR BioResource COVID-19 Collaboration, The COVID-19 Genomics UK (COG-UK) Consortium, Katherine Sharrocks, Elizabeth Blane, Yorgo Modis, Kendra Leigh, John Briggs, Marit van Gils, Kenneth G. C. Smith, John R. Bradley, Chris Smith, Rainer Doffinger, Lourdes Ceron-Gutierrez, Gabriela Barcenas-Morales, David D. Pollock, Richard A. Goldstein, Anna Smielewska, Jordan P. Skittrall, Theodore Gouliouris, Ian G. Goodfellow, Effrossyni Gkrania-Klotsas, Christopher J. R. Illingworth, Laura E. McCoy and Ravindra K. Gupta, 5 February 2021, Nature.
The research was largely supported by Wellcome, the Medical Research Council, the National Institute of Health Research, and the Bill and Melinda Gates Foundation.
How two new technologies will help Perseverance, NASA’s most sophisticated rover yet, touch down onto the surface of Mars this month.
After a nearly seven-month journey to Mars, NASA’s Perseverance rover is slated to land at the Red Planet’s Jezero Crater on February 18, 2021, a rugged expanse chosen for its scientific research and sample collection possibilities.
But the very features that make the site fascinating to scientists also make it a relatively dangerous place to land – a challenge that has motivated rigorous testing here on Earth for the lander vision system (LVS) that the rover will count on to safely touch down.
“Jezero is 28 miles wide, but within that expanse there are a lot of potential hazards the rover could encounter: hills, rock fields, dunes, the walls of the crater itself, to name just a few,” said Andrew Johnson, principal robotics systems engineer at NASA’s Jet Propulsion Laboratory in Southern California. “So, if you land on one of those hazards, it could be catastrophic to the whole mission.”
Enter Terrain-Relative Navigation (TRN), the mission-critical technology at the heart of the LVS that captures photos of the Mars terrain in real time and compares them with onboard maps of the landing area, autonomously directing the rover to divert around known hazards and obstacles as needed.
“For Mars 2020, LVS will use the position information to figure out where the rover is relative to safe spots between those hazards. And in one of those safe spots is where the rover will touch down,” explained Johnson.
If Johnson sounds confident that LVS will work to land Perseverance safely, that’s because it allows the rover to determine its position relative to the ground with an accuracy of about 200 feet or less. That low margin of error and high degree of assurance are by design, and the result of extensive testing both in the lab and in the field.
“We have what we call the trifecta of testing,” explained JPL’s Swati Mohan, guidance, navigation, and control operations lead for Mars 2020.
2014 flight tests on Masten’s Xombie VTVL system demonstrated the lander vision system’s terrain-relative navigation and fuel-optimal large divert guidance (G-FOLD) capabilities. The flights proved the system’s ability to autonomously change course to avoid hazards on descent and adopt a newly calculated path to a safe landing site. The successful field tests enabled the technology to be greenlighted for inclusion on NASA’s Mars 2020 mission. Credit: NASA/JPL-Caltech
Mohan said that the first two testing areas – hardware and simulation – were done in a lab.
“That’s where we test every condition and variable we can. Vacuum, vibration, temperature, electrical compatibility – we put the hardware through its paces,” said Mohan. “Then with simulation, we model various scenarios that the software algorithms may encounter on Mars – a too-sunny day, very dark day, windy day – and we make sure the system behaves as expected regardless of those conditions.”
But the third piece of the trifecta, the field tests, requires actual flights to put the lab results through further rigor and provide a high level of technical readiness for NASA missions. For LVS’s early flight tests, Johnson and team mounted the system on a helicopter and used it to automatically estimate the vehicle’s position as it was flying.
“That got us to a certain level of technical readiness because the system could monitor a wide range of terrain, but it didn’t have the same kind of descent that Perseverance will have,” said Johnson. “There was also a need to demonstrate LVS on a rocket.”
That need was met by NASA’s Flight Opportunities program, which facilitated two 2014 flights in the Mojave Desert on Masten Space Systems’ Xombie – a vertical takeoff and vertical landing (VTVL) system that functions similarly to a lander. The flight tests demonstrated LVS’s ability to direct Xombie to autonomously change course and avoid hazards on descent by adopting a newly calculated path to a safe landing site. Earlier flights on Masten’s VTVL system also helped validate algorithms and software used to calculate fuel-optimal trajectories for planetary landings.
“Testing on the rocket laid pretty much all remaining doubts to rest and answered a critical question for the LVS operation affirmatively,” said JPL’s Nikolas Trawny, a payload and pointing control systems engineer who worked closely with Masten on the 2014 field tests. “It was then that we knew LVS would work during the high-speed vertical descent typical of Mars landings.”
Johnson added that the suborbital testing raised the technology readiness level enough to earn the final green light for acceptance into the Mars 2020 mission.
“The testing that Flight Opportunities is set up to provide was really unprecedented within NASA at the time,” said Johnson. “But it’s proven so valuable that it’s now becoming expected to do these types of flight tests. For LVS, those rocket flights were the capstone of our technology development effort.”
With the technology accepted for Mars 2020, the mission team began to build the final version of LVS that would fly on Perseverance. In 2019, a copy of that system flew on one more helicopter demonstration in Death Valley, California, facilitated by NASA’s Technology Demonstration Missions program. The helicopter flight provided a final check on over six years of field tests.
But Mohan pointed out that even with these successful demonstrations, there will be more work to do to ensure a safe landing. She’ll be at Mission Control for the landing, monitoring the health of the system every step of the way.
“Real life can always throw you curve balls. So, we’ll be monitoring everything during the cruise phase, checking power to the camera, making sure the data is flowing as expected,” Mohan said. “And once we get that signal from the rover that says, ‘I’ve landed and I’m on stable ground,’ then we can celebrate.”
About Flight Opportunities
The Flight Opportunities program is funded by NASA’s Space Technology Mission Directorate (STMD) and managed at NASA’s Armstrong Flight Research Center in Edwards, California. NASA’s Ames Research Center in California’s Silicon Valley manages the solicitation and evaluation of technologies to be tested and demonstrated on commercial flight vehicles.
About Technology Demonstration Missions
Also under the umbrella of STMD, the program is based at NASA’s Marshall Space Flight Center in Huntsville, Alabama. The program bridges the gap between scientific and engineering challenges and the technological innovations needed to overcome them, enabling robust new space missions.
More About the Mission
A key objective for Perseverance’s mission on Mars is astrobiology, including the search for signs of ancient microbial life. The rover will characterize the planet’s geology and past climate, pave the way for human exploration of the Red Planet, and be the first mission to collect and cache Martian rock and regolith (broken rock and dust).
Subsequent missions, currently under consideration by NASA in cooperation with the European Space Agency, would send spacecraft to Mars to collect these cached samples from the surface and return them to Earth for in-depth analysis.
The Mars 2020 mission is part of a larger program that includes missions to the Moon as a way to prepare for human exploration of the Red Planet. Charged with returning astronauts to the Moon by 2024, NASA will establish a sustained human presence on and around the Moon by 2028 through NASA’s Artemis lunar exploration plans.
JPL, which is managed for NASA by Caltech in Pasadena, California, built and manages operations of the Perseverance rover.
New study finds grapes increased resistance to sunburn and reduced markers of UV damage.
A recent human study published in the Journal of the American Academy of Dermatology found that consuming grapes protected against ultraviolet (UV) skin damage. Study subjects showed increased resistance to sunburn and a reduction in markers of UV damage at the cellular level. Natural components found in grapes known as polyphenols are thought to be responsible for these beneficial effects.
The study, conducted at the University of Alabama at Birmingham and led by principal investigator Craig Elmets, M.D., investigated the impact of consuming whole grape powder (equivalent to 2.25 cups of grapes per day) for 14 days against photodamage from UV light. Subjects’ skin response to UV light was measured before and after the two weeks of grape consumption by determining the threshold dose of UV radiation that induced visible reddening after 24 hours, known as the Minimal Erythema Dose (MED). Grape consumption was protective: more UV exposure was required to cause sunburn following grape consumption, with the MED increasing by 74.8% on average. Analysis of skin biopsies showed that the grape diet was associated with decreased DNA damage, fewer skin cell deaths, and a reduction in inflammatory markers that, if left unchecked, can together impair skin function and potentially lead to skin cancer.
It is estimated that 1 in 5 Americans will develop skin cancer by the age of 70. Most skin cancer cases are associated with exposure to UV radiation from the sun: about 90% of nonmelanoma skin cancers and 86% of melanomas. Additionally, an estimated 90% of skin aging is caused by the sun.
“We saw a significant photoprotective effect with grape consumption and we were able to identify molecular pathways by which that benefit occurs — through repair of DNA damage and downregulation of proinflammatory pathways,” said Dr. Elmets. “Grapes may act as an edible sunscreen, offering an additional layer of protection in addition to topical sunscreen products.”
References:
“Dietary table grape protects against UV photodamage in humans: 1. clinical evaluation” by Allen S.W. Oak, MD; Rubina Shafi, PhD; Mahmoud Elsayed, MD; Sejong Bae, PhD; Leah Saag, CRNP; Casey L. Wang, MD; Mohammad Athar, PhD and Craig A. Elmets, MD, 20 January 2021, Journal of the American Academy of Dermatology.
DOI: 10.1016/j.jaad.2021.01.035
“Dietary table grape protects against UV photodamage in humans: 2. molecular biomarker studies” by Allen S.W. Oak, MD; Rubina Shafi, PhD; Mahmoud Elsayed, MD; Bharat Mishra, MS; Sejong Bae, PhD; Stephen Barnes, PhD; Mahendra P. Kashyap, PhD; Andrzej T. Slominski, MD, PhD; Landon S. Wilson, BS; Mohammad Athar, PhD and Craig A. Elmets, MD, 20 January 2021, Journal of the American Academy of Dermatology.
DOI: 10.1016/j.jaad.2021.01.036
“Skin Cancer Facts and Statistics” 26 January 2021, Skin Cancer Foundation website.
For decades, the speed of our computers has grown at a steady pace. The processor of the first IBM PC, released 40 years ago, operated at roughly 5 million clock cycles per second (4.77 MHz). Today, the processors in our personal computers run around 1,000 times faster.
However, with current technology, they’re not likely to get any faster than that.
For the last 15 years, the clock rate of single processor cores has stalled at a few Gigahertz (1 Gigahertz = 1 billion clock cycles per second). And the old and tested approach of cramming ever more transistors on a chip will no longer help in pushing that boundary. At least not without breaking the bank in terms of power consumption.
A way out of the stagnation could come in the form of optical circuits, in which information is encoded in light rather than in electrical signals. In 2019, an IBM Research team, together with partners from academia, built the world’s first ultrafast all-optical transistor capable of operating at room temperature. The team now follows up with another piece of the puzzle: a silicon waveguide that links up such transistors, carrying light between them with minimal losses.
Wiring up the transistors of an optical circuit with silicon waveguides is an important requirement to make compact, highly integrated chips. That’s because it’s easier to place other needed components such as electrodes in its close vicinity if the waveguide is made of silicon. The techniques used for that purpose have been refined for decades in the semiconductor industry.
However, silicon is a notoriously strong absorber of visible light, which makes it great for capturing sunlight in photovoltaic panels but a poor choice for a waveguide, where light absorption means signal loss.
Making a fence to confine light
So, the IBM researchers thought of ways to use the mature silicon technology while circumventing the absorption issue. Their solution involves nanostructures called high contrast gratings with a striking behavior that some of the team members had already discovered over 10 years ago, albeit for another application.
A high contrast grating consists of nanometer-sized “posts” lined up to form a sort of fence that prevents light from escaping. The posts are 150 nanometers in diameter and are spaced in such a way that light passing through the posts interferes destructively with light passing between them. Destructive interference is a well-known phenomenon by which waves oscillating out of sync cancel each other out at a point in space. It affects light, which is an electromagnetic wave, just as it does sound and other types of wave. In this case, the destructive interference ensures that no light can “leak” through the grating. Instead, most of the light gets reflected back inside the waveguide. The IBM researchers also showed that absorption of light inside the posts themselves is minimal. All this together translates into losses of only 13 percent along a 1-millimeter light path inside the waveguide. For comparison, over just one hundredth of that distance (10 micrometers), a pure silicon waveguide without the gratings would lose 99.7 percent of the light.
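Those two loss figures are easier to compare when converted into an attenuation coefficient in decibels per millimeter, using the standard dB formula. The transmission fractions below come from the figures above; everything else is just arithmetic:

```python
import math

def attenuation_db_per_mm(transmission: float, length_mm: float) -> float:
    """Convert a measured transmission fraction over a given path
    length into an attenuation coefficient in dB/mm."""
    return -10 * math.log10(transmission) / length_mm

# Grating waveguide: 13% loss over 1 mm, i.e. 87% transmitted.
grating = attenuation_db_per_mm(0.87, 1.0)

# Plain silicon: 99.7% loss over 10 micrometers (0.01 mm).
plain_si = attenuation_db_per_mm(0.003, 0.01)

print(f"grating waveguide: {grating:.2f} dB/mm")  # ~0.60 dB/mm
print(f"plain silicon: {plain_si:.0f} dB/mm")     # ~2500 dB/mm
```

On this measure the grating design improves on plain silicon by a factor of several thousand, which is why the fence-like structure matters so much.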
Simulations for precise grating design
On its face, the basic idea behind high contrast gratings looks simple. Yet it came as a genuine surprise when the researchers first found that they could keep light from being absorbed by a “dark” material like silicon.
Back in 2010, when they first observed the grating effect, it occurred in a laser microcavity, which helped because the light amplification by the laser would compensate for the losses. Also, the light hit the gratings at almost 90 degrees, which is a sweet spot for the grating effect to kick in. But keeping the losses low in a waveguide, without the benefit of the laser gain and at almost grazing incidence, was much more challenging.
To make sure their grating design would be up to the task, the team ran simulations showing how light propagation inside the waveguide would change with varying grating dimensions. They found that the grating would guide light efficiently over a broad band of wavelengths. All they needed to do was choose the right spacing between the grating posts and fabricate the posts themselves to the right thickness, within a precision margin of 15 nanometers. Using a standard silicon photonics fabrication process, those requirements proved manageable. In fact, the experiments confirmed what the simulations had predicted: low loss for visible light in the range between 550 and 650 nanometers.
Potential benefits for optical circuits and beyond
The team found some evidence through simulations that this design can be used not only to make straight waveguides but also to guide light around corners. They haven’t yet run the experiments to confirm this idea, however, and even if it proves feasible, further optimization will be needed to keep the additional losses low in that case. Looking ahead, a next step will be to engineer the efficient coupling of light out of the waveguides into other components. That will be a crucial step in the team’s multi-year exploratory research project, whose goal is to integrate the all-optical transistors demonstrated in 2019 into circuits capable of performing simple logic operations.
The team believes that their low-loss silicon waveguide could enable new photonic chip designs for use in biosensing and other applications that rely on visible light. It could also benefit the engineering of more efficient optical components such as lasers and modulators widely used in telecommunications.
Reference: “Low-loss optical waveguides made with a high-loss material” by Darius Urbonas, Rainer F. Mahrt and Thilo Stöferle, 12 January 2021, Light: Science & Applications.
A person’s intake of whole eggs and cholesterol was positively associated with their risk of death, while intake of egg whites or egg substitutes was negatively associated with mortality, in a new study published this week in PLOS Medicine by Yu Zhang of the Zhejiang University College of Biosystems Engineering and Food Science, Jingjing Jiao of the Zhejiang University School of Medicine, China, and colleagues.
Whether consumption of eggs and cholesterol is detrimental to cardiovascular health and longevity is highly debated, and data from large-scale cohort studies are scarce. In the new study, researchers used data on 521,120 participants from the NIH-AARP Diet and Health Study. Participants, aged 50 to 71 years, were 41.2% women and 91.8% non-Hispanic white, and were recruited from 6 states and 2 cities in the US between 1995 and 1996.
During a mean follow-up of 16 years, 129,328 deaths occurred in the cohort. Whole egg consumption, as reported in a food questionnaire, was significantly associated with higher all-cause mortality after adjusting for demographic characteristics and dietary factors (P<0.001), but not after further adjusting for cholesterol intake (P=0.64). Each additional 300 mg of dietary cholesterol per day was associated with a 19% higher all-cause mortality (95% CI 1.16-1.22), and each additional half a whole egg per day was associated with a 7% higher all-cause mortality (95% CI 1.06-1.08). In contrast, consumption of egg whites/substitutes was significantly associated with lower all-cause mortality (P<0.001). Replacing half a whole egg with an equivalent amount of egg whites/substitutes was associated with a 3% reduction in cardiovascular disease mortality.
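These hazard ratios are reported per fixed increment, so rescaling them to other intakes requires assuming the log-linear dose-response typical of such survival models; that assumption is ours, not a claim from the paper. A minimal sketch:

```python
def scaled_hazard_ratio(hr: float, reference_dose: float, dose: float) -> float:
    """Rescale a hazard ratio reported per reference_dose to a different
    dose, under an assumed log-linear dose-response relationship."""
    return hr ** (dose / reference_dose)

# Reported: HR 1.19 per additional 300 mg/day of dietary cholesterol.
# Implied HR for 100 mg/day under the log-linear assumption:
hr_100mg = scaled_hazard_ratio(1.19, 300, 100)    # ~1.06

# Reported: HR 1.07 per additional half egg per day.
# Implied HR for one whole egg per day:
hr_one_egg = scaled_hazard_ratio(1.07, 0.5, 1.0)  # ~1.14
```

So under that assumption, one whole egg per day corresponds to roughly 14% higher all-cause mortality risk, consistent with the per-half-egg figure reported above.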
“Our findings suggest limiting cholesterol intake and replacing whole eggs with egg whites/substitutes or other alternative protein sources for facilitating cardiovascular health and long-term survival,” the authors say.
Reference: 9 February 2021, PLOS Medicine.
Funding: Y.Z. is supported by the National Key Research and Development Program of China (2017YFC1600500). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
More preventive measures, such as shielding underneath tables and improving air conditioner filtration efficiency, could reduce exposure to COVID-19 within air-conditioned restaurants.
The detailed physical processes and pathways involved in the transmission of COVID-19 are still not well understood. Researchers decided to use advanced computational fluid dynamics tools on supercomputers to deepen understanding of transmission and provide a quantitative assessment of how different environmental factors influence transmission pathways and airborne infection risk.
A restaurant outbreak in China was widely reported as strong evidence of airflow-induced transmission of COVID-19. But it lacked a detailed investigation about exactly how transmission occurred.
Why did some people get infected while others within the same area did not? What specific role did ventilation and air conditioning play in disease transmission? Exploring these questions can help develop more pinpointed preventative measures to improve our safety.
In Physics of Fluids, from AIP Publishing, Jiarong Hong and colleagues at the University of Minnesota report using advanced simulation methods to capture the complex flows that occur when the cold airflow from air conditioners interacts with the hot plume rising from a dining table, as well as the transport of virus-laden particles within such flows.
“Our simulation captures various physical factors, including turbulent air flow, thermal effect, aerosol transport in turbulence, limited filtration efficiency of air conditioners, as well as the complex geometry of the space, all of which play a role in airborne transmission,” said Hong.
Although many computer simulation studies of airborne transmission of COVID-19 have been conducted recently, few directly link the prediction of high-fidelity computational fluid dynamics simulation with the actual infection outbreaks reported through contact tracing.
This work is the first to simulate a realistic outbreak case and link it directly with the infections reported through contact tracing.
“It was enabled by advanced computational tools used in our simulation, which can capture the complex flows and aerosol transport and other multiphysics factors involved in a realistic setting,” Hong said.
The results show a remarkable direct link between regions of high aerosol exposure index and the reported infection patterns within the restaurant, which provides strong support for airborne transmission in this widely reported outbreak.
By using flow structure analysis and reverse-time tracing of aerosol trajectories, the researchers further pinpointed two potential transmission pathways that are currently being overlooked: the transmission caused by aerosols rising from beneath a table and transmission due to reentry aerosols associated with limited filtration efficiency of air conditioners.
“Our work highlights the need for more preventive measures, such as shielding more properly underneath the table and improving the filtration efficiency of air conditioners,” Hong said. “More importantly, our research demonstrates the capability and value of high-fidelity computer simulation tools for airborne infection risk assessment and the development of effective preventive measures.”
Reference: “Simulation-based study of COVID-19 outbreaks associated with air-conditioning in a restaurant” by Han Liu, Sida He, Lian Shen and Jiarong Hong, 9 February 2021, Physics of Fluids.
A unique collaboration among experts from several areas within MSK leads to findings about how inflammation appears to be driving the neurologic effects seen in some COVID-19 patients.
One of the dozens of unusual symptoms that have emerged in COVID-19 patients is a condition that’s informally called “COVID brain” or “brain fog.” It’s characterized by confusion, headaches, and loss of short-term memory. In severe cases, it can lead to psychosis and even seizures. It usually emerges weeks after someone first becomes sick with COVID-19.
In the February 8, 2021, issue of the journal Cancer Cell, a multidisciplinary team from Memorial Sloan Kettering reports an underlying cause of COVID brain: the presence of inflammatory molecules in the liquid surrounding the brain and spinal cord (called the cerebrospinal fluid). The findings suggest that anti-inflammatory drugs, such as steroids, may be useful for treating the condition, but more research is needed.
“We were initially approached by our colleagues in critical care medicine who had observed severe delirium in many patients who were hospitalized with COVID-19,” says Jessica Wilcox, the Chief Fellow in neuro-oncology at MSK and one of the first authors of the new study. “That meeting turned into a tremendous collaboration between neurology, critical care, microbiology, and neuroradiology to learn what was going on and to see how we could better help our patients.”
Recognizing a Familiar Symptom
The medical term for COVID brain is encephalopathy. Members of MSK’s Department of Neurology felt well-poised to study it, Dr. Wilcox says, because they are already used to treating the condition in other systemic inflammatory syndromes. It is a side effect in patients who are receiving a type of immunotherapy called chimeric antigen receptor (CAR) T cell therapy, a treatment for blood cancer. When CAR T cell therapy is given, it causes immune cells to release molecules called cytokines, which help the body to kill the cancer. But cytokines can seep into the area around the brain and cause inflammation.
When the MSK team first began studying COVID brain, though, they didn’t know that cytokines were the cause. They first suspected that the virus itself was having an effect on the brain. The study in the Cancer Cell paper focused on 18 patients who were hospitalized at MSK with COVID-19 and were experiencing severe neurologic problems. The patients were given a full neurology workup, including brain scans like MRIs and CTs and electroencephalogram (EEG) monitoring, to try to find the cause of their delirium. When nothing was found in the scans that would explain their condition, the researchers thought the answer might lie in the cerebrospinal fluid.
MSK’s microbiology team devised a test to detect the COVID-19 virus in the fluid. Thirteen of the 18 patients had spinal taps to look for the virus, but it was not found. At that point, the rest of the fluid was taken to the lab of MSK physician-scientist Adrienne Boire for further study.
Using Science to Ask Clinical Questions
Jan Remsik, a research fellow in Dr. Boire’s lab in the Human Oncology and Pathogenesis Program and the paper’s other first author, led the analysis of the fluid. “We found that these patients had persistent inflammation and high levels of cytokines in their cerebrospinal fluid, which explained the symptoms they were having,” Dr. Remsik says. He adds that some smaller case studies with only a few patients had reported similar findings, but this study is the largest one so far to look at this effect.
“We used to think that the nervous system was an immune-privileged organ, meaning that it didn’t have any kind of relationship at all with the immune system,” Dr. Boire says. “But the more we look, the more we find connections between the two.” One focus of Dr. Boire’s lab is studying how immune cells are able to cross the blood-brain barrier and enter this space, an area of research that’s also important for learning how cancer cells are able to spread from other parts of the body to the brain.
“One thing that was really unique about Jan’s approach is that he was able to do a really broad molecular screen to learn what was going on,” Dr. Boire adds. “He took the tools that we use in cancer biology and applied them to COVID-19.”
The inflammatory markers found in the COVID-19 patients were similar, but not identical, to those seen in people who have received CAR T cell therapy. And as with CAR T cell therapy, the neurologic effects are sometimes delayed. The initial inflammatory response with CAR T cell treatment is very similar to the reaction called cytokine storm that’s often reported in people with COVID-19, Dr. Wilcox explains. With both COVID-19 and CAR T cell therapy, the neurologic effects come days or weeks later. In CAR T cell patients, neurologic symptoms are treated with steroids, but doctors don’t yet know the role of anti-inflammatory treatments for people with neurologic symptoms of COVID-19. “Many of them are already getting steroids, and it’s possible they may be benefitting,” Dr. Wilcox says.
“This kind of research speaks to the cooperation across the departments at MSK and the interdisciplinary work that we’re able to do,” Dr. Boire concludes. “We saw people getting sick, and we were able to use our observations to ask big clinical questions and then take these questions into the lab to answer them.”
Reference: “Inflammatory Leptomeningeal Cytokines Mediate COVID-19 Neurologic Symptoms in Cancer Patients” by Jan Remsik, Jessica A. Wilcox, N. Esther Babady, Tracy A. McMillen, Behroze A. Vachha, Neil A. Halpern, Vikram Dhawan, Marc Rosenblum, Christine A. Iacobuzio-Donahue, Edward K. Avila, Bianca Santomasso and Adrienne Boire, 16 January 2021, Cancer Cell.
Dr. Boire is an inventor on a patent related to modulating the permeability of the blood-brain barrier and is an unpaid member of the scientific advisory board of EVREN Technologies.
This work was funded by National Institutes of Health grant P30 CA008748, the Pew Charitable Trusts, the Damon Runyon Cancer Research Foundation, and the Pershing Square Sohn Cancer Research Alliance GC239280. It was also supported by the American Brain Tumor Association Basic Research Fellowship, the Terri Brodeur Breast Cancer Foundation Fellowship, and the Druckenmiller Center for Lung Cancer Research.
An Epidemiological Blueprint for Understanding the Dynamics of a Pandemic
Scientific and public health experts have been raising the alarm for decades, imploring public officials to prepare for the inevitability of a viral pandemic. Infectious epidemics seemingly as benign as “the flu” and as deadly as the Ebola virus provided ample warning, yet government officials seemed caught off guard and ill prepared for dealing with COVID-19. Three future-oriented researchers and policy experts map out an “Epidemiological Blueprint for Understanding the Dynamics of a Pandemic.”
Researchers around the world have become forensic, Sherlock Holmes-like “consulting detectives” for government officials and public health organizations. Handling tens of thousands of samples, epidemiologists, like ETH Zurich Professor Tanja Stadler, can now reconstruct the transmission of SARS-CoV-2 in areas where contact tracing is otherwise unavailable. Unlike the fictional Holmes, today’s researchers benefit from real-time statistical tools to decipher the genetic code of various viral strains.
Stadler, who serves on the Swiss National COVID Science Task Force, says, “Just like in humans, the genetic code of pathogens reveals a blueprint with information about the virus’ evolution and its origins. The blueprint enables us to understand the type and possible origin of the virus strains circulating within a country; identify new variants with novel characteristics; and determine their reproductive rate — the average number of secondary infections perpetuated by an infected person.”
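As a toy illustration of the reproductive rate Stadler describes, the figure can be estimated as the mean number of secondary infections per case. The counts below are invented for illustration only, not real contact-tracing data:

```python
# Reproductive rate R: the average number of secondary infections per
# infected person. These per-case counts are invented for illustration.
secondary_infections = [0, 2, 1, 3, 0, 1, 2, 0, 1, 2]

R = sum(secondary_infections) / len(secondary_infections)
print(R)  # 1.2 -> each case infects 1.2 others on average; R > 1 means growth
```

Real-world estimates are far more involved (accounting for reporting delays, generation intervals, and incomplete tracing), but the quantity being estimated is this same average.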
Stadler’s team monitors the spread of new variants within Switzerland and places the sequences in an international context. Prior to the discovery of the new B.1.1.7 variant in the United Kingdom, scientists used Stadler’s genomic data to identify another variant that spread rapidly throughout Europe over the summer of 2020. It was first detected in an agricultural region of Spain, and some possible super-spreading events led to its rapid expansion. Unlike B.1.1.7, the variant from Spain showed no transmission advantage over the original virus strain. Its outbreak coincided with the summer vacation period and, according to Stadler, the variant likely spread when foreign visitors returned home to Switzerland, the UK, and other countries.
Like many other viruses, SARS-CoV-2 mutates steadily, acquiring roughly one new mutation every two weeks. Scientists cannot yet determine how fast the virus adapts to the human immune system, or whether annual vaccinations will be necessary in the future. Currently, patient metadata and genomic sequencing are not connected, and this disconnect is one of the many missing links for fully understanding the dynamics of the pandemic. Stadler proposes that if scientists could connect this information, while of course ensuring patient privacy, they would be better able to answer important questions about new variants and their rates of transmission.
The Hunt for Animal X
Over the past quarter of a century, bats have been the source of some of the world’s deadliest outbreaks of zoonotic viruses. Since bats live in high-density colonies and are the only mammals that fly, they often pass viruses to intermediate animal hosts (horses, pigs, and even camels) or transmit them directly to humans. Professor Linfa Wang from the Duke-NUS Medical School explains that one of the concerning aspects of SARS-CoV-2 is the fact that humans can also transmit the virus to other species, as has been reported with minks and other animals. Animals can then re-transmit mutated strains of the virus back to humans in a process known as “spillback.”
Mitigating future viral pandemics has prompted international experts and scientists to hunt down “Animal X” to determine the origin of SARS-CoV-2. While the hunt may start in Wuhan, China, the high number of bat colonies in parts of Southeast Asia and Southern China leave experts suspecting that similar viruses may have been circulating in the human population of these regions for many years. Recent findings have confirmed such hypotheses. To the best of Professor Wang’s knowledge, bat colonies in North America currently do not carry any SARS-like viruses, but given the potential for spillback, Wang recommends a serological survey. Monitoring changes in bat populations could serve as an advance warning system for potential future public health threats.
In May 2020, just 70 days after Wang conceived the idea, he and his team developed and patented the first neutralizing antibody detection test for SARS-CoV-2 to receive U.S. Food and Drug Administration (FDA) approval. Known as the “cPass,” the test measures neutralizing antibodies that may prove valuable in developing a future “immunity passport.” Working with the World Health Organization (WHO), Wang is now creating a global surveillance protocol, an international standard measurement unit, and neutralizing antibody testing. These heroic feats in the face of a pandemic are, perhaps, what prompted his unofficial title as “The Batman of Singapore.”
Facing Existential Threat
Microbes existed long before the human species and they will likely be around long after we cease to exist. While it may not seem so in the midst of a pandemic, “In the modern world of medicine, we have (for the most part) won the battle against the microbes,” says Dr. Michael Osterholm, Director of the Center for Infectious Disease Research and Policy at the University of Minnesota. Osterholm also served on the Biden Transition Team’s COVID-19 Advisory Board. He has spent much of his career in a chess-like match anticipating microbial evolution’s next move and strategizing public health policies to address unimaginable threats.
A blueprint for understanding the dynamics of a pandemic requires a “creative imagination — an ability to anticipate the unthinkable and create a plausible public response,” says Osterholm. Referencing the death rate of U.S. soldiers during World War I, Osterholm indicates that nearly 7 out of 8 American soldiers died not from combat, but from the 1918 Spanish flu pandemic. With a historical knowledge of pandemics and outbreaks such as SARS, MERS, and Ebola, he asks, “Why did COVID-19 catch the world off guard, unprepared, and seemingly unable to fathom the sheer scale of the pandemic’s impact?” The current pandemic is most likely “not even the big one,” he suggests. “Another influenza pandemic, like the Spanish flu, could prove even more devastating than COVID-19.”
Infectious disease exposes the weaknesses in global societies, from the world’s food systems to demographic inequalities. Osterholm explained that in order to feed the nearly 8 billion people on Earth, we raise about 23 billion chickens and, as of 2020, 678 million pigs. While avian flu viruses generally do not infect humans, transmission can occur when chickens live in close proximity to pigs. Pigs can contract both human and bird viruses, enabling genetic exchange and new mutations transmissible to humans, with potentially deadly outcomes. Osterholm emphasized that ethnic groups and indigenous societies are suffering a disproportionate impact for myriad reasons — many of which stem from societal discrimination, inequality, and poverty.
Tanja Stadler, Linfa Wang, and Michael Osterholm agree and advocate for an internationally coordinated response to COVID-19. Osterholm expressed a need for understanding how public health practices interface with everyday life in various countries around the world. He says, “The greatest vaccines and the best tools in the world will be rendered ineffective unless we achieve public support and acceptance.”
Systems designed to detect deepfakes — videos that manipulate real-life footage via artificial intelligence — can be deceived, computer scientists showed for the first time at the WACV 2021 conference which took place online from January 5 to 9, 2021.
Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. The adversarial examples are slightly manipulated inputs that cause artificial intelligence systems such as machine learning models to make a mistake. In addition, the team showed that the attack still works after videos are compressed.
“Our work shows that attacks on deepfake detectors could be a real-world threat,” said Shehzeen Hussain, a UC San Diego computer engineering Ph.D. student and first co-author on the WACV paper. “More alarmingly, we demonstrate that it’s possible to craft robust adversarial deepfakes even when an adversary may not be aware of the inner workings of the machine learning model used by the detector.”
In deepfakes, a subject’s face is modified in order to create convincingly realistic footage of events that never actually happened. As a result, typical deepfake detectors focus on the face in videos: first tracking it and then passing on the cropped face data to a neural network that determines whether it is real or fake. For example, eye blinking is not reproduced well in deepfakes, so detectors focus on eye movements as one way to make that determination. State-of-the-art deepfake detectors rely on machine learning models to identify fake videos.
XceptionNet, a deep fake detector, labels an adversarial video created by the researchers as real. Credit: University of California San Diego
The extensive spread of fake videos through social media platforms has raised significant concerns worldwide, particularly hampering the credibility of digital media, the researchers point out. “If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it,” said Paarth Neekhara, the paper’s other first coauthor and a UC San Diego computer science student.
Researchers created an adversarial example for every face in a video frame. But while standard operations such as compressing and resizing video usually remove adversarial examples from an image, these examples are built to withstand these processes. The attack algorithm does this by estimating, over a set of input transformations, how the model ranks images as real or fake. From there, it uses this estimation to transform images in such a way that the adversarial image remains effective even after compression and decompression.
The modified face is then inserted into each video frame, and the process is repeated for every frame to create the adversarial deepfake video. The attack can also be applied to detectors that operate on entire video frames rather than just face crops.
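The core idea — averaging the attack gradient over random transformations so the perturbation survives compression — can be sketched with a toy linear “detector” and a quantization step standing in for video compression. Everything here is an illustrative assumption; the paper attacks real CNN detectors such as XceptionNet with far richer transformation sets:

```python
import numpy as np

# Toy stand-in for a deepfake detector: returns P(fake) for a feature vector.
# (Hypothetical; real detectors are deep CNNs operating on face crops.)
def detector(x, w):
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

# Stand-in for lossy video compression: quantize values to a coarse grid.
def quantize(x, step=0.1):
    return np.round(x / step) * step

def eot_attack(x, w, eps=1.5, steps=100, lr=0.05):
    """Push P(fake) down with sign-gradient steps, averaging the gradient
    over random transformations so the perturbation survives compression
    (the expectation-over-transformations idea)."""
    rng = np.random.default_rng(0)
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(8):  # average over several random transformations
            xt = quantize(x + delta + rng.normal(0.0, 0.01, x.shape))
            p = detector(xt, w)
            grad += -p * (1.0 - p) * w  # d/dx sigmoid(w.x) = p(1-p) w
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
    return x + delta

w = np.array([1.0, -2.0, 0.5])       # toy detector weights
x_fake = np.array([2.0, -1.0, 1.0])  # a "fake" frame the detector flags
x_adv = eot_attack(x_fake, w)

print(detector(quantize(x_fake), w) > 0.5)  # original is flagged as fake
print(detector(quantize(x_adv), w) < 0.5)   # adversarial frame slips through
```

The key design point is that the gradient is evaluated on *transformed* copies of the input, so the resulting perturbation does not depend on any single pixel pattern that compression would destroy.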
The team declined to release their code so it wouldn’t be used by hostile parties.
High success rate
Researchers tested their attacks in two scenarios: one where the attackers have complete access to the detector model, including the face extraction pipeline and the architecture and parameters of the classification model; and one where attackers can only query the machine learning model to figure out the probabilities of a frame being classified as real or fake.
In the first scenario, the attack’s success rate is above 99 percent for uncompressed videos. For compressed videos, it was 84.96 percent. In the second scenario, the success rate was 86.43 percent for uncompressed and 78.33 percent for compressed videos. This is the first work to demonstrate successful attacks on state-of-the-art deepfake detectors.
“To use these deepfake detectors in practice, we argue that it is essential to evaluate them against an adaptive adversary who is aware of these defenses and is intentionally trying to foil these defenses,” the researchers write. “We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector.”
To improve detectors, researchers recommend an approach similar to what is known as adversarial training: during training, an adaptive adversary continues to generate new deepfakes that can bypass the current state of the art detector; and the detector continues improving in order to detect the new deepfakes.
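That recommended loop — an adaptive adversary generating evasive fakes while the detector retrains on them — can be sketched with a toy logistic-regression “detector” over synthetic feature vectors. All data, features, and the simple sign-based adversary below are hypothetical stand-ins, not the paper’s method:

```python
import numpy as np

# Toy logistic-regression "detector" over 4 synthetic features per frame.
# Real detectors are deep CNNs; this only sketches the training loop.
rng = np.random.default_rng(1)

def train_detector(reals, fakes, epochs=200, lr=0.1):
    X = np.vstack([reals, fakes])
    y = np.array([0] * len(reals) + [1] * len(fakes))  # 1 = fake
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)  # logistic-regression gradient step
    return w

def adaptive_adversary(fakes, w, eps=0.5):
    # Nudge each fake against the detector's weights to try to evade it.
    return fakes - eps * np.sign(w)

reals = rng.normal(-1.5, 1.0, (50, 4))  # synthetic "real" features
fakes = rng.normal(+1.5, 1.0, (50, 4))  # synthetic "fake" features
orig_fakes = fakes.copy()

w = train_detector(reals, fakes)
for _ in range(5):                         # adversary and detector co-evolve
    adv_fakes = adaptive_adversary(fakes, w)
    fakes = np.vstack([fakes, adv_fakes])  # pool grows with evasive fakes...
    w = train_detector(reals, fakes)       # ...and the detector retrains

p_fake = 1.0 / (1.0 + np.exp(-orig_fakes @ w))
print(p_fake.mean())  # detector still scores the original fakes as fake
```

The point of the loop is the alternation: each retraining round folds the adversary’s newest evasions into the training set, mirroring adversarial training.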
Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples
Shehzeen Hussain, Malhar Jere, Farinaz Koushanfar, Department of Electrical and Computer Engineering, UC San Diego
Paarth Neekhara, Julian McAuley, Department of Computer Science and Engineering, UC San Diego
A new NASA paper provides the most detailed map to date of near-surface water ice on the Red Planet.
So you want to build a Mars base. Where to start? Like any human settlement, it would be best located near accessible water. Not only will water be crucial for life-support supplies, but it will also be used for everything from agriculture to producing the rocket propellant astronauts will need to return to Earth.
Schlepping all that water to Mars would be costly and risky. That’s why NASA has engaged scientists and engineers since 2015 to identify deposits of Martian water ice that could be within reach of astronauts on the planet’s surface. But, of course, water has huge scientific value, too: If present-day microbial life can be found on Mars, it would likely be found near these water sources as well.
A new study appearing in Nature Astronomy includes a comprehensive map detailing where water ice is most and least likely to be found in the planet’s northern hemisphere. Combining 20 years of data from NASA’s Mars Odyssey, Mars Reconnaissance Orbiter, and the now-inactive Mars Global Surveyor, the paper is the work of a project called Subsurface Water Ice Mapping, or SWIM. The SWIM effort is led by the Planetary Science Institute in Tucson, Arizona, and managed by NASA’s Jet Propulsion Laboratory in Southern California.
“The next frontier for Mars is for human explorers to get below the surface and look for signs of microbial life,” said Richard Davis, who leads NASA’s efforts to find Martian resources in preparation for sending humans to the Red Planet. “We realize we need to make new maps of subsurface ice to improve our knowledge of where that ice is for both scientific discovery and having local resources astronauts can rely on.”
In the near future, NASA plans to hold a workshop for multidisciplinary experts to assess potential human-landing sites on Mars based on this research and other science and engineering criteria. This mapping project could also inform surveys by future orbiters NASA hopes to send to the Red Planet.
NASA recently announced that, along with three international space agencies, it has signed a statement of intent to explore a possible International Mars Ice Mapper mission concept. The statement brings the agencies together to establish a joint concept team to assess mission potential as well as partnership opportunities among NASA, the Agenzia Spaziale Italiana (the Italian Space Agency), the Canadian Space Agency, and the Japan Aerospace Exploration Agency.
Location, Location, Location
Ask Mars scientists and engineers where the most accessible subsurface ice is, and most will point to the area below Mars’ polar region in the northern hemisphere. On Earth, this region is where you find Canada and Europe; on Mars, it includes the plains of Arcadia Planitia and glacier-filled valleys in Deuteronilus Mensae.
Such regions represent a literal middle ground between where to find the most water ice (the poles) and where to find the most sunlight and warmth (the equator). The northern midlatitudes also offer favorable elevations for landing. The lower the elevation, the more opportunity a spacecraft has to slow down using friction from the Martian atmosphere during its descent to the surface. That’s especially important for heavy human-class landers, since Mars’ atmosphere is just 1% as dense as Earth’s and thus provides less resistance for incoming spacecraft.
“Ultimately, NASA tasked the SWIM project with figuring out how close to the equator you can go to find subsurface ice,” said Sydney Do, the Mars Water Mapping Project lead at JPL. “Imagine we’ve drawn a squiggly line across Mars representing that ice boundary. This data allows us to draw that line with a finer pen instead of a thick marker and to focus on parts of that line that are closest to the equator.”
But knowing whether a surface is hiding ice isn’t easy. None of the instrument datasets used in the study were designed to measure ice directly, said the Planetary Science Institute’s Gareth Morgan, the SWIM-project co-lead and the paper’s lead author. Instead, each orbiter instrument detects different physical properties – high concentrations of hydrogen, high radar-wave speed, and the rate at which temperature changes in a surface – that can suggest the presence of ice.
“Despite having 20 years of data and a fantastic range of instruments, it’s hard to combine these datasets, because they’re all so different,” Morgan said. “That’s why we assessed the consistency of an ice signal, showing areas where multiple datasets indicate ice is present. If all five datasets point to ice – bingo.”
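Morgan’s consistency check can be sketched as a simple per-cell score: each dataset votes between -1 (inconsistent with ice) and +1 (consistent with ice), and the votes are averaged. The values and equal weighting below are invented for illustration, not SWIM’s actual numbers:

```python
import numpy as np

# Five per-dataset ice indications for three hypothetical map cells.
# Scores run from -1 (inconsistent with ice) to +1 (consistent with ice);
# 0 means the dataset has no diagnostic signal there. Values are invented.
datasets = {
    "neutron_spectrometer": np.array([1.0,  0.5, -0.5]),
    "thermal":              np.array([1.0,  0.0, -1.0]),
    "radar_surface":        np.array([1.0,  0.5,  0.0]),
    "radar_subsurface":     np.array([1.0, -0.5, -0.5]),
    "geomorphology":        np.array([1.0,  1.0, -1.0]),
}

# Equal-weight mean across the five datasets: +1 means all agree on ice.
consistency = np.mean(list(datasets.values()), axis=0)
print(consistency)  # cell 0: all five datasets point to ice -> 1.0
```

A score near +1 is the “bingo” case Morgan describes; a score near zero flags the ambiguous cells where the team had to examine which datasets disagreed and why.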
If, say, only two of them did, the team would try to suss out how consistent the signals were and what other materials could be creating them. While the different datasets weren’t always a perfect fit, they often complemented one another. For example, current radars peer deep underground but don’t see the top 30 to 50 feet (10 to 15 meters) below the surface; a neutron spectrometer aboard one orbiter measured hydrogen in the uppermost soil layer but not below. High-resolution photos revealed ice tossed onto the surface after recent meteorite impacts, providing direct evidence to complement radar and other remote-sensing indicators of water ice.
While Mars experts pore over these new maps of subsurface ice, NASA is already thinking about what the next steps would be. For one, blind spots in currently available data can be resolved by sending a new radar mission to Mars that could home in on the areas of greatest interest to human-mission planners: water ice in the top layers of the subsurface.
A future radar-focused mission targeting the near surface could also tell scientists more about the mix of materials found in the layer of rock, dust, and other material found on top of ice. Different materials will require specialized tools and approaches for digging, drilling, and accessing water-ice deposits, particularly in the extreme Martian environment.
Mapping efforts in the 2020s could help make human missions to Mars possible as early as the 2030s. But before that, there’ll be a robust debate about the location of humanity’s first outpost on Mars: a place where astronauts will have the local water-ice resources needed to sustain them while also being able to make high-value discoveries about the evolution of rocky planets, habitability, and the potential for life on worlds beyond Earth.
Reference: “Availability of subsurface water-ice resources in the northern mid-latitudes of Mars” by G. A. Morgan, N. E. Putzig, M. R. Perry, H. G. Sizemore, A. M. Bramson, E. I. Petersen, Z. M. Bain, D. M. H. Baker, M. Mastrogiuseppe, R. H. Hoover, I. B. Smith, A. Pathare, C. M. Dundas and B. A. Campbell, 8 February 2021, Nature Astronomy.