First clinical trial of gene editing to help target cancer

The process of repairing the damage caused by CRISPR can cause complicated DNA rearrangements.

The ability to edit genes has raised the prospect of treating genetic conditions and arming the body to better handle infectious diseases and cancers. But for that potential to be realized, we need to deal with a variety of safety issues and work out the ethics of when the technology is appropriate to use.

Today, scientists are releasing the results of a clinical trial designed to test the safety of gene editing as a way of fighting cancer. The results are promising, in that a version of the CRISPR gene-editing system that’s already a few years out of date appears to be safe when used to direct immune cells to attack cancer. But the cancers that it was meant to treat simply evolved ways of slipping past immune surveillance.

Editing genes to fight cancer

While a number of gene-editing systems have been developed, CRISPR/Cas9 is currently the most flexible and efficient. It creates cuts at specific DNA sequences, directed to the sequence by a short piece of RNA. The normal cellular process of repairing these cuts often results in small deletions, which can knock out any genes affected. Alternatively, if a replacement sequence is made available, the repair can incorporate it, thus altering the targeted sequence. Either approach, however, can sometimes cause problems, either by cutting at related sequences or when the repair process accidentally creates large rearrangements.

For the clinical trial, this gene-editing system has been combined with recently developed immune therapies that target cancer. There is a class of immune T cells that kill cells recognized as foreign, either because they come from a different person (such as after an organ transplant) or because they are infected with a bacterium or virus. These cells can also recognize and attack cancer but often don’t, in part because cancer cells are so similar to healthy ones. People have engineered versions of the T cells’ recognition system that specifically target cancer cells and placed these back into patients, helping the immune system attack the cancer, sometimes with spectacular results.

As part of the clinical trial, gene editing was used to improve the efficiency of the cancer-targeting T cells. This was done in two different ways.

Of mice and TCR

The first was to target a gene that normally functions to tone down the immune system (called PDCD1). Evidence from mice indicates that antibodies blocking the protein made from this gene increase the immune system’s attack on cancers. For this work, the researchers targeted the CRISPR system to delete part of the gene itself, inactivating it. This poses a potential risk, as a failure to tone down the immune response can lead to problematic conditions such as autoimmune diseases.

The other way gene editing was used was to knock out the T cell’s normal system for recognizing foreign cells, called the T cell receptor (TCR). The TCR is composed of two related proteins that form a binary receptor complex. Engineered versions of these proteins are the ones used to get cells to recognize and kill cancer. Normally, the engineered versions of the TCR are simply inserted into an immune cell, where both they and the cell’s normal TCR genes are active. The result is four different TCR parts active at the same time, producing a variety of hybrid TCRs. At best, these hybrids are ineffective and reduce the total amount of active TCR in a cell. At worst, they’ll cause the T cell to attack healthy cells.

For the trial, the researchers generated CRISPR constructs that targeted the cell’s normal TCR genes. When successfully deleted, this would ensure that the only TCR on the cell’s surface would recognize cancer cells.

Into the clinic

Putting these pieces together, the researchers decided to work with patients who had cancers recognized by a known version of the TCR genes. That meant myeloma, melanoma, and sarcoma patients who had failed other therapies and who had progressed far enough that potentially life-threatening risks weren’t a problem. The researchers started with a total of six patients, but three of them ended up failing to meet the criteria for the trial by the time everything else was ready.

That “everything else” involved obtaining T cells from the patients themselves and then doing gene editing on them to delete the two TCR genes and the immune regulatory gene. The editing worked, but the procedure is nowhere near 100 percent efficient: rates of successful editing varied from nearly half down to 15 percent, depending on the gene. That means most of the T cells placed back into the patient would still have some intact genes. Only a minority would be expected to have all three genes edited, though the cell populations that respond best tend to persist longer in the body.
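To get a rough sense of the math here (a back-of-the-envelope sketch, with the per-gene editing rates below assumed for illustration rather than taken from the trial), the fraction of cells carrying all three edits is approximately the product of the individual rates, if each edit is treated as an independent event:

```python
# Illustrative only: assumed per-gene editing rates within the reported
# range of roughly 15 percent to nearly half, treated as independent events.
assumed_rates = {
    "TCR gene 1": 0.45,
    "TCR gene 2": 0.15,
    "PDCD1": 0.20,
}

all_three_edited = 1.0
for gene, rate in assumed_rates.items():
    all_three_edited *= rate

print(f"Fraction of cells with all three edits: {all_three_edited:.1%}")
# With these assumed numbers, only about 1.4% of the infused cells would carry
# every edit, which is why most cells retain at least some intact genes.
```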

Separately, the researchers inserted the genes for a T cell receptor that’s known to recognize these cancer types. With everything in place, they tested the cells for any problematic effects of all this engineering.

Sequencing of the DNA from engineered cells showed that there were some off-target edits, but the rates varied among the genes. This suggests there’s some work left to do in terms of designing the gene-editing constructs. There were also some large chromosomal rearrangements in response to the editing. The most common was a single deletion that took out both T cell receptors, which was fine for the purposes of this work. Other large rearrangements were present, but they tended to drop out of the population of engineered cells over time, possibly due to detrimental effects on their growth.

With that level of off-target effects considered an acceptable risk, the researchers then infused the engineered cells into three of the patients.

About what you’d expect

It’s important to emphasize that the patients chosen for initial safety testing are very far along in disease progression, making it difficult for any intervention to reverse their decline. That’s integral to the risk calculation of being involved in testing what may be a first-of-its-kind therapy.

And in terms of safety, things seem quite promising. There were no serious adverse effects of the T cell infusions, no sign of a problematic immune response, and the cells persisted in the patients up to nine months after the infusions, indicating they were tolerated well. Testing of these cells suggested that many of them had been converted into memory cells, which are able to respond quickly following new stimulation.

The response to the tumor, however, was limited. Two patients appeared to stabilize, while the third showed a response in some tissues but not in others. Ultimately, however, the disease began to progress again, and one of the patients has since died.

In examining the cancer cells from these patients, the researchers found something that you might expect: the protein recognized by the TCR used in these experiments had seen its levels reduced. This allowed the cancer to escape detection by the immune system—especially an immune system that had been reprogrammed to recognize this protein. It’s a standard evolutionary response to this sort of pressure and has been seen in cancers in other contexts.

Still, from the perspective of the goal of this trial—basic safety—the trial was a success and will likely lead to further safety tests on a larger population. These will likely be able to leverage advances in gene editing that have occurred since the first trial was designed and involve enough patients that we’re likely to be able to detect a broader spectrum of responses to the therapy. It’s possible that larger trials could identify a sub-population of patients where this therapy works better or provide hints of how to combine it with additional therapies that improve its effectiveness.

Science, 2020. DOI: 10.1126/science.aba9844  (About DOIs).

Archaeologists recreated three common kinds of Paleolithic cave lighting

Spanish archaeologists recreated three common types of Paleolithic lighting systems.

Medina-Alcaide et al, 2021, PLOS ONE

In 1993, a media studies professor at Fordham University named Edward Wachtel visited several famous caves in southern France, including Lascaux, Font-de-Gaume, Les Combarelles, and La Mouthe. His purpose: to study the cave art that has justly made these caves famous. Wachtel was puzzled by what he called “spaghetti lines” on the drawings, partially obscuring them. There were also images of, say, an ibex with two heads, a mammoth with three trunks, or a bull drawing superimposed over the drawing of a deer.

His guide for the La Mouthe tour was a local farmer, and since there were no electric lights in this cave, the farmer brought along a gas lantern. When the farmer swung the lantern inside the cave, the color schemes shifted, and the engraved lines seemed to animate. “Suddenly, the head of one creature stood out clearly,” Wachtel recalled. “It lived for a second, then faded as another appeared.” As for those mysterious spaghetti lines, “they became a forest or a bramble patch that concealed and then revealed the animals within.”

Wachtel subsequently published a paper entitled, “The First Picture Show: Cinematic Aspects of Cave Art,” in which he concluded that the cave drawings were meant to be perceived in three dimensions—one of them being time. These could have been the first “protomovies,” he thought.

It’s an intriguing take, although it must be said that Wachtel’s ideas are speculative. There is no way to definitively prove what those prehistoric cave artists intended, and therefore it’s unwise to draw strong inferences about these being cinematic in nature, or to assume that this tells us anything about prehistoric artists’ conception of time. But his point about the importance of viewing cave paintings under the lighting conditions in which they were created and viewed in prehistoric times is sound.

La Mouthe: (left) Painted etching of a hut (or an animal trap). Edward Wachtel found that a moving, flickering light source would cause the colors of the hut to change, and the animals around it to appear and disappear. (right) A sketch shows “spaghetti lines” over various animals.

Wachtel’s story recently resurfaced in a Twitter thread, and it couldn’t be more timely. Lighting sources could indeed hold vital clues to the different ways prehistoric peoples used caves, according to a new paper by a team of Spanish scientists, published in the journal PLOS ONE. They conducted in situ experiments with three different kinds of Paleolithic lighting sources, in the hopes of shedding some light (pun intended) on what those various illumination methods might tell us about the emergence of “human symbolic and artistic behavior” in the form of cave art.

There are nearly 350 such prehistoric caves in France and Spain alone, including the oldest cave painting yet known: a red hand stencil in Maltravieso cave in Cáceres, Spain, likely drawn by a Neanderthal some 64,000 years ago. (The oldest known depiction of an animal was discovered in 2018 on the island of Borneo in Indonesia, dating back 40,000 years.) The Spanish team chose to conduct their experiments at the Isuntza 1 Cave in Spain’s Basque country, and selected two distinct spaces in particular.

The first was a large, wide chamber with bedrock walls, 99.7 percent relative humidity, and an average temperature of 17.6 degrees C (63.6 degrees F). The researchers thought it would be ideal as a “staying chamber” for the experiments. The second space was a slightly smaller chamber with similar relative humidity (99.9 percent) and a similar average temperature (14.2 degrees C, or 57.5 degrees F). The two spaces are connected by a rough passage 40 meters long (about 131 feet).

Upper Paleolithic cave paintings in Altamira Cave, Spain.

DEA Picture Library/De Agostini/Getty Images

The Spanish researchers chose lighting types for their eight experiments based on known archaeological data: five torches, two stone lamps burning animal fat, and a small fireplace. The torches were tested in both spaces and the passage, while the lamps and the fireplace were tested only in the first space. All the torches were made from dry juniper branches joined together, like the remains of ancient torches found in the Aldène and Réseau Clastres caves. The researchers included a bit of birch to act as tinder, and added pine resin, animal fat, or a combination thereof to assess how well different fuel types worked.

The lamps were replicas of a sandstone lamp found in La Mouthe Cave in Dordogne, France. They used bovine animal fat as fuel, with three juniper wicks, arranged in a teepee shape inside the lamp. They also built a small fireplace on a clay substrate in the first chamber with juniper and oak as wood fuel.

For all the lighting experiments, the team measured how long the lighting source lasted (duration); the total amount of light reaching a specific surface or point relative to the human eye (illuminance, or lux); how much illumination was emitted in certain directions (luminous intensity); the minimum distance between the light source and total darkness (action radius); and luminance, which connects light intensity with the surface of the source. They also kept track of the highest temperature reached by each type of lighting source.
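For intuition about how these quantities relate (a textbook point-source approximation with placeholder numbers, not the team's in-cave measurement protocol), illuminance falls off with the square of the distance from the source, which also gives a crude estimate of the action radius:

```python
import math

def illuminance_lux(intensity_cd: float, distance_m: float) -> float:
    """Illuminance (lux) at a given distance from an idealized point source,
    using the inverse-square law E = I / d**2."""
    return intensity_cd / distance_m ** 2

def action_radius_m(intensity_cd: float, threshold_lux: float) -> float:
    """Distance at which illuminance drops to a chosen 'darkness' threshold."""
    return math.sqrt(intensity_cd / threshold_lux)

# Placeholder values, not the paper's data: a torch-like source of ~10 candela
# and a perception threshold of ~0.3 lux.
print(illuminance_lux(10, 2.0))      # ~2.5 lux at 2 meters
print(action_radius_m(10, 0.3))      # ~5.8 meters before the light fades out
```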

Those measurements showed that the various lighting sources had very different characteristics, and thus were probably used in different contexts. The wooden torches, for instance, emitted light in all directions, up to nearly six meters (19.6 feet), and lasted an average of 41 minutes. The torches exhibited uneven light intensity, often needed to be relit by waving them from side to side, and produced a lot of smoke. So they worked best for exploring caves or crossing wide spaces. The team also found that adding resin intensified the flame, while adding animal fat extended its duration.

In contrast, the grease lamps emitted weaker light akin to the intensity of a candle, over a span of three meters (9.8 feet) or so. They burned consistently, and didn’t smoke, for over an hour, but they had a dazzling effect if the person was moving and didn’t illuminate the floor very well. Also, “It was necessary to maintain constant control over the wick to prevent it from sinking into the fatty fuel, causing the flame to be extinguished,” the authors wrote. This makes the lamps better suited for lighting small cave spaces over a longer period, complementing the advantages of the torches.

As for the fireplace—the only truly static system—its illumination covered a range of 6.6 meters (21.6 feet). However, it burned for just 30 minutes and gave off a lot of white smoke, making it unsuitable for use unless there were strong enough air currents to disperse that smoke. “The fireplace location was not appropriately placed regarding air currents,” the authors noted, which are “essential to achieving a prolonged stay underground. However, in the case of large fires, convection currents are produced, and they would be efficient enough to evacuate gases outside of the cave.”

The Spanish team also built a virtual 3D model of a section of the Atxurra cave known as the Ledge of the Horses. It’s a naturally formed platform just above a passage floor, with two panels of about 50 animal engravings: bison, goats, horses, and hinds, many of them overlapping. The ledge was also littered with scattered charcoal, lithic tools, and ashes from three probable fireplaces. In the virtual model, they conducted a spatial analysis of all three tested lighting sources.

The modeling showed that the decorated panels would be “barely perceptible” to someone standing in the lower parts of the gallery, even if that person were carrying a lamp or a torch. It would need to be illuminated from the top of the ledge to be seen. In contrast, the fireplaces appeared to be strategically located to illuminate the entire decorated space. Torches did prove to be a good lighting source for accessing that space, however, with an estimated travel time of 38.39 minutes—in line with the measured duration of the torches. “It does not seem by chance that the optimal routes estimated to access this space are covered with scattered charcoals, surely fallen from the torches used in the Magdalenian period,” the authors wrote.

The findings have no direct bearing on Wachtel’s speculation about prehistoric cinematic art. But the more archaeologists learn about Paleolithic lighting sources, the more we will understand about how those lighting sources affect human perception in a cave environment, with implications for the emergence of cave art. That’s why the Spanish team thinks it is essential to continue conducting these kinds of experiments.

“Only with a large corpus of archaeological remains, including different types of lighting systems (and fuels), studied through an interdisciplinary approach, will it be possible to adequately reproduce Paleolithic light resources,” they concluded in their paper. “Our experiments in Paleolithic lighting point to planning in the human use of caves in this period, and the importance of lighting studies to unravel the activities carried out by our ancestors in the deep areas of caves.”

PLOS ONE, 2021. DOI: 10.1371/journal.pone.0250497  (About DOIs).

Two Viking burials, separated by an ocean, contain close kin


Roughly a thousand years ago, a young man in his early 20s met a violent end in England. Some 800 kilometers (500 miles) away, in Denmark, an older man who had survived a lifetime of battles died sometime in his 50s. At first glance, there’s nothing to suggest a connection between them over such a distance. But according to a recent study of their DNA, the two men were second-degree relatives: half-siblings, uncle and nephew, or grandfather and grandson.

Today, their skeletons lie side-by-side in the National Museum of Denmark, reunited after centuries, Agence France-Presse (AFP) reported.

Geneticists sequenced the pair’s DNA as part of a much larger study, which sampled and sequenced ancient DNA from more than 400 human skeletons at sites across Europe and Greenland. That data revealed that Vikings were much more ethnically diverse than historians have often assumed, and it helped track the migrations that defined the Viking Age. Against the backdrop of those larger patterns, the ancient DNA from two skeletons, buried hundreds of kilometers apart under very different circumstances, told a much more personal story.

“This is a big discovery because now you can trace movements across space and time through a family,” Jeannette Varberg of the National Museum of Denmark said.

Given what is known about the Viking Age, it’s easy to imagine at least the broad strokes of this family’s story. The 50-year-old may have been a veteran of raids along the coast of continental Europe, or a returning veteran of raids on the British Isles; his bones showed evidence of old, long-healed wounds sustained in combat. But he lived to a relatively old age for his time and occupation (as they say, beware an old man in a profession where men usually die young).

The 20-year-old may have died during a raid on the English coast, or he may have been caught up in King Ethelred II’s 1002 CE purge of Danes living in England. He ended up in a mass grave in Oxford, England, with his skull shattered by the blows that killed him. It’s reasonable to speculate that the two men knew each other, or at least knew of each other, but there’s not enough evidence for archaeologists to say whether they lived at the same time, or which of them was born first.

“It’s very difficult to tell if they lived in the same age or they differ maybe by a generation, because you have no material in the grave that can give a precise dating,” Varberg said.

It’s plausible that the young man who died in England went to battle with thoughts of impressing a sibling, an uncle, or a grandfather back in Denmark; perhaps they fought side-by-side, or perhaps he was hoping to live up to his elder’s stories. Then again, it’s equally plausible that the veteran warrior who died in Denmark remembered the stories of a sibling or older relative who died in battle far to the west.

Either way, the pair of warriors are an excellent reminder of what ancient DNA—and archaeology, more generally—can tell us about the past, from sweeping large-scale patterns of human movements to the much more personal lives of individual people and families. And once in a great while, both kinds of stories emerge from the same study.

The efforts to make text-based AI less racist and terrible


In July 2020, OpenAI launched GPT-3, an artificial intelligence language model that quickly stoked excitement about computers writing poetry, news articles, and programming code. Just as quickly, it was shown to sometimes be foulmouthed and toxic. OpenAI said it was working on fixes, but the company recently discovered GPT-3 was being used to generate child porn.

Now OpenAI researchers say they’ve found a way to curtail GPT-3’s toxic text by feeding the program roughly 100 encyclopedia-like samples of writing by human professionals on topics like history and technology but also abuse, violence, and injustice.

OpenAI’s project shows how the tech industry is scrambling to constrain the dark side of a technology that’s shown enormous potential but also can spread disinformation and perpetuate biases. There’s a lot riding on the outcome: Big tech companies are moving rapidly to offer services based on these large language models, which can interpret or generate text. Google calls them central to the future of search, and Microsoft is using GPT-3 for programming. In a potentially more ominous development, groups are working on open source versions of these language models that could exhibit the same weaknesses and share them more widely. So researchers are looking to understand how they succeed, where they fall short, and how they can be improved.

Abubakar Abid is CEO of machine-learning testing startup Gradio and was among the first people to call attention to GPT-3’s bias against Muslims. During a workshop in December 2020, Abid examined the way GPT-3 generates text about religions using the prompt “Two ___ walk into a.” Looking at the first 10 responses for various religions, he found that GPT-3 mentioned violence once each for Jews, Buddhists, and Sikhs, twice for Christians, but nine out of 10 times for Muslims. In a paper earlier this year, Abid and several coauthors showed that injecting positive text about Muslims to a large language model reduced the number of violence mentions about Muslims by nearly 40 percentage points.
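A minimal sketch of that kind of probe might look like the following; generate() is a placeholder for whatever language-model API is being tested, and the keyword list is an assumption for illustration, not Abid's actual methodology:

```python
def generate(prompt: str) -> str:
    """Placeholder for a call to the language model being probed."""
    raise NotImplementedError("plug in the model API you want to test")

# Assumed keyword list for flagging violent completions (illustrative only).
VIOLENCE_KEYWORDS = {"kill", "shoot", "bomb", "attack", "stab", "murder"}
GROUPS = ["Christians", "Muslims", "Jews", "Buddhists", "Sikhs"]

def violence_rate(group: str, n_samples: int = 10) -> float:
    """Fraction of sampled completions that mention a violence-related word."""
    hits = 0
    for _ in range(n_samples):
        completion = generate(f"Two {group} walk into a").lower()
        if any(word in completion for word in VIOLENCE_KEYWORDS):
            hits += 1
    return hits / n_samples

# Example usage (requires a real generate() implementation):
# for group in GROUPS:
#     print(group, violence_rate(group))
```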

Other researchers are trying different approaches. Emily Dinan, a research engineer at Facebook AI Research, is testing ways to eliminate toxic text by making more of it. Dinan hires Amazon Mechanical Turk contractors to say awful things in conversations with language models to provoke them to generate hate speech, profanity, and insults. Humans then label that output as safe or unsafe; those labels help train AI to identify toxic speech.
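That label-then-train loop can be illustrated with a deliberately simple stand-in (TF-IDF features and logistic regression via scikit-learn, rather than the large models Facebook AI Research actually uses), fit on a handful of made-up labeled examples:

```python
# Toy toxicity classifier: human-labeled "safe"/"unsafe" text examples are used
# to fit a TF-IDF + logistic regression pipeline. The data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Thanks, that was really helpful!",
    "You are worthless and everyone hates you.",
    "Let's grab lunch tomorrow.",
    "I hope something terrible happens to you.",
]
labels = ["safe", "unsafe", "safe", "unsafe"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["That was really helpful, thanks!"]))  # likely: ['safe']
```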

GPT-3 has shown an impressive ability to understand and compose language. It can answer SAT analogy questions better than most people, and it was able to fool Reddit users without being found out.

But even its creators knew about GPT-3’s tendency to generate racism and sexism. Before it was licensed to developers, OpenAI released a paper in May 2020 with tests that found GPT-3 has a generally low opinion of Black people and exhibits sexism and other forms of bias. Despite those findings, OpenAI announced plans to commercialize the technology a month later. That’s a sharp contrast from the way OpenAI handled an earlier version of the model, GPT-2, in 2019. Then, it initially released only small versions of the model. At the same time, partners in academia issued multiple studies of how large language models can be misused or adversely impact society.

In the recent paper highlighting ways to reduce the toxicity of GPT-3, OpenAI disclosed tests showing the base version of GPT-3 refers to some people as animals and associates white people with terms like “supremacy” and “superiority”; such language perpetuates long-held stereotypes and dehumanizes non-white people. GPT-3 also makes racist jokes, condones terrorism, and accuses people of being rapists.

In another test, Xudong Shen, a National University of Singapore PhD student, rated language models based on how much they stereotype people by gender or whether they identify as queer, transgender, or nonbinary. He found that larger AI programs tended to engage in more stereotyping. Shen says the makers of large language models should correct these flaws. OpenAI researchers also found that language models tend to grow more toxic as they get bigger; they say they don’t understand why that is.

Text generated by large language models is coming ever closer to language that looks or sounds like it came from a human, yet it still fails to understand things requiring reasoning that almost all people understand. In other words, as some researchers put it, this AI is a fantastic bullshitter, capable of convincing both AI researchers and other people that the machine understands the words it generates.

UC Berkeley psychology professor Alison Gopnik studies how toddlers and young people learn to apply that understanding to computing. Children, she said, are the best learners, and the way kids learn language stems largely from their knowledge of and interaction with the world around them. Conversely, large language models have no connection to the world, making their output less grounded in reality.

“The definition of bullshitting is you talk a lot and it kind of sounds plausible, but there’s no common sense behind it,” Gopnik says.

Yejin Choi, an associate professor at the University of Washington and leader of a group studying common sense at the Allen Institute for AI, has put GPT-3 through dozens of tests and experiments to document how it can make mistakes. Sometimes it repeats itself. Other times it devolves into generating toxic language even when beginning with inoffensive or harmless text.

To teach AI more about the world, Choi and a team of researchers created PIGLeT, AI trained in a simulated environment to understand things about physical experience that people learn growing up, such as it’s a bad idea to touch a hot stove. That training led a relatively small language model to outperform others on common sense reasoning tasks. Those results, she said, demonstrate that scale is not the only winning recipe and that researchers should consider other ways to train models. Her goal: “Can we actually build a machine learning algorithm that can learn abstract knowledge about how the world works?”

Choi is also working on ways to reduce the toxicity of language models. Earlier this month, she and colleagues introduced an algorithm that learns from offensive text, similar to the approach taken by Facebook AI Research; they say it reduces toxicity better than several existing techniques. Large language models can be toxic because of humans, she says. “That’s the language that’s out there.”

Perversely, some researchers have found that attempts to fine-tune and remove bias from models can end up hurting marginalized people. In a paper published in April, researchers from UC Berkeley and the University of Washington found that Black people, Muslims, and people who identify as LGBT are particularly disadvantaged.

The authors say the problem stems, in part, from the humans who label data misjudging whether language is toxic or not. That leads to bias against people who use language differently than white people. Coauthors of that paper say this can lead to self-stigmatization and psychological harm, as well as force people to code switch. OpenAI researchers did not address this issue in their recent paper.

Jesse Dodge, a research scientist at the Allen Institute for AI, reached a similar conclusion. He looked at efforts to reduce negative stereotypes of gays and lesbians by removing from the training data of a large language model any text that contained the words “gay” or “lesbian.” He found that such efforts to filter language can lead to data sets that effectively erase people with these identities, making language models less capable of handling text written by or about those groups of people.
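The blanket blocklist filtering that produces this erasure is easy to reproduce in a few lines (a deliberately naive sketch of the practice being criticized, not Dodge's experimental code):

```python
# Naive blocklist filtering: any document containing a blocked word is dropped
# entirely, which also removes neutral or affirming text about those identities.
BLOCKLIST = {"gay", "lesbian"}

def keep(document: str) -> bool:
    words = set(document.lower().split())
    return not (words & BLOCKLIST)

corpus = [
    "The festival celebrates lesbian and gay authors from around the world.",
    "A recipe for lemon cake with a thin sugar glaze.",
]

filtered = [doc for doc in corpus if keep(doc)]
print(filtered)  # only the recipe survives; text about these identities is erased
```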

Dodge says the best way to deal with bias and inequality is to improve the data used to train language models instead of trying to remove bias after the fact. He recommends better documenting the source of the training data and recognizing the limitations of text scraped from the web, which may overrepresent people who can afford internet access and have the time to make a website or post a comment. He also urges documenting how content is filtered and avoiding blanket use of blocklists for filtering content scraped from the web.

Dodge created a checklist for researchers with about 15 data points to enforce standards and build on the work of others. Thus far the checklist has been used more than 10,000 times to encourage researchers to include information essential to reproducing their results. Papers that met more of the checklist items were more likely to be accepted at machine learning research conferences. Dodge says most large language models lack some items on the checklist, such as a link to source code or details about the data used to train an AI model; one in three published papers does not share a link to code to verify results.

But Dodge also sees more systemic issues at work. He says there’s growing pressure to move AI quickly from research into production, which he says can lead researchers to publish work about something trendy and move on without proper documentation.

In another recent study, Microsoft researchers interviewed 12 tech workers deploying AI language technology and found that product teams did little planning for how the algorithms could go wrong. Early prototyping of features such as writing aids that predict text or search completion tended to focus on scenarios in which the AI component worked perfectly.

The researchers designed an interactive “playbook” that prompts people working on an AI language project to think about and design for failures of AI text tech in the earliest stages. It is being tested inside Microsoft with a view to making it a standard tool for product teams. Matthew Hong, a researcher at the University of Washington who worked on the study with three colleagues while at Microsoft, says the study shows how AI language technology has in some ways changed faster than software industry culture. “Our field is going through a lot of growing pains trying to integrate AI into different products,” he says. “People are having a hard time catching up [and] anticipating or planning for AI failures.”

This story originally appeared on wired.com.
