Connect with us

Science

The efforts to make text-based AI less racist and terrible

Published

on

Getty Images

In July 2020, OpenAI launched GPT-3, an artificial intelligence language model that quickly stoked excitement about computers writing poetry, news articles, and programming code. Just as quickly, it was shown to sometimes be foulmouthed and toxic. OpenAI said it was working on fixes, but the company recently discovered GPT-3 was being used to generate child porn.

Now OpenAI researchers say they’ve found a way to curtail GPT-3’s toxic text by feeding the program roughly 100 encyclopedia-like samples of writing by human professionals on topics like history and technology but also abuse, violence, and injustice.

OpenAI’s project shows how the tech industry is scrambling to constrain the dark side of a technology that’s shown enormous potential but also can spread disinformation and perpetuate biases. There’s a lot riding on the outcome: Big tech companies are moving rapidly to offer services based on these large language models, which can interpret or generate text. Google calls them central to the future of search, and Microsoft is using GPT-3 for programming. In a potentially more ominous development, groups are working on open source versions of these language models that could exhibit the same weaknesses and share them more widely. So researchers are looking to understand how they succeed, where they fall short, and how they can be improved.

Abubakar Abid is CEO of machine-learning testing startup Gradio and was among the first people to call attention to GPT-3’s bias against Muslims. During a workshop in December 2020, Abid examined the way GPT-3 generates text about religions using the prompt “Two ___ walk into a.” Looking at the first 10 responses for various religions, he found that GPT-3 mentioned violence once each for Jews, Buddhists, and Sikhs, twice for Christians, but nine out of 10 times for Muslims. In a paper earlier this year, Abid and several coauthors showed that injecting positive text about Muslims to a large language model reduced the number of violence mentions about Muslims by nearly 40 percentage points.

Other researchers are trying different approaches. Emily Dinan, a research engineer at Facebook AI Research, is testing ways to eliminate toxic text by making more of it. Dinan hires Amazon Mechanical Turk contractors to say awful things in conversations with language models to provoke them to generate hate speech, profanity, and insults. Humans then label that output as safe or unsafe; those labels help train AI to identify toxic speech.

GPT-3 has shown impressive ability to understand and compose language. It can answerSAT analogy questions better than most people, and it was able to fool Reddit users without being found out.

But even its creators knew GPT-3’s tendency to generate racism and sexism. Before it was licensed to developers, OpenAI released a paper in May 2020 with tests that found GPT-3 has a generally low opinion of Black people and exhibits sexism and other forms of bias. Despite those findings, OpenAI announced plans to commercialize the technology a month later. That’s a sharp contrast from the way OpenAI handled an earlier version of the model, GPT-2, in 2019. Then, it initially released only small versions of the model. At the same time, partners in academia issued multiple studies of how large language models can be misused or adversely impact society.

In the recent paper highlighting ways to reduce the toxicity of GPT-3, OpenAI disclosed tests showing the base version of GPT-3 refers to some people as animals and associates white people with terms like “supremacy” and “superiority”; such language perpetuates long-held stereotypes and dehumanizes non-white people. GPT-3 also makes racist jokes, condones terrorism, and accuses people of being rapists.

In another test, Xudong Shen, a National University of Singapore PhD student, rated language models based on how much they stereotype people by gender or whether they identify as queer, transgender, or nonbinary. He found that larger AI programs tended to engage in more stereotyping. Shen says the makers of large language models should correct these flaws. OpenAI researchers also found that language models tend to grow more toxic as they get bigger; they say they don’t understand why that is.

Text generated by large language models is coming ever closer to language that looks or sounds like it came from a human, yet it still fails to understand things requiring reasoning that almost all people understand. In other words, as some researchers put it, this AI is a fantastic bullshitter, capable of convincing both AI researchers and other people that the machine understands the words it generates.

UC Berkeley psychology professor Alison Gopnik studies how toddlers and young people learn to apply that understanding to computing. Children, she said, are the best learners, and the way kids learn language stems largely from their knowledge of and interaction with the world around them. Conversely, large language models have no connection to the world, making their output less grounded in reality.

“The definition of bullshitting is you talk a lot and it kind of sounds plausible, but there’s no common sense behind it,” Gopnik says.

Yejin Choi, an associate professor at the University of Washington and leader of a group studying common sense at the Allen Institute for AI, has put GPT-3 through dozens of tests and experiments to document how it can make mistakes. Sometimes it repeats itself. Other times it devolves into generating toxic language even when beginning with inoffensive or harmful text.

To teach AI more about the world, Choi and a team of researchers created PIGLeT, AI trained in a simulated environment to understand things about physical experience that people learn growing up, such as it’s a bad idea to touch a hot stove. That training led a relatively small language model to outperform others on common sense reasoning tasks. Those results, she said, demonstrate that scale is not the only winning recipe and that researchers should consider other ways to train models. Her goal: “Can we actually build a machine learning algorithm that can learn abstract knowledge about how the world works?”

Choi is also working on ways to reduce the toxicity of language models. Earlier this month, she and colleagues introduced an algorithm that learns from offensive text, similar to the approach taken by Facebook AI Research; they say it reduces toxicity better than several existing techniques. Large language models can be toxic because of humans, she says. “That’s the language that’s out there.”

Perversely, some researchers have found that attempts to fine-tune and remove bias from models can end up hurting marginalized people. In a paper published in April, researchers from UC Berkeley and the University of Washington found that Black people, Muslims, and people who identify as LGBT are particularly disadvantaged.

The authors say the problem stems, in part, from the humans who label data misjudging whether language is toxic or not. That leads to bias against people who use language differently than white people. Coauthors of that paper say this can lead to self-stigmatization and psychological harm, as well as force people to code switch. OpenAI researchers did not address this issue in their recent paper.

Jesse Dodge, a research scientist at the Allen Institute for AI, reached a similar conclusion. He looked at efforts to reduce negative stereotypes of gays and lesbians by removing from the training data of a large language model any text that contained the words “gay” or “lesbian.” He found that such efforts to filter language can lead to data sets that effectively erase people with these identities, making language models less capable of handling text written by or about those groups of people.

Dodge says the best way to deal with bias and inequality is to improve the data used to train language models instead of trying to remove bias after the fact. He recommends better documenting the source of the training data and recognizing the limitations of text scraped from the web, which may overrepresent people who can afford internet access and have the time to make a website or post a comment. He also urges documenting how content is filtered and avoiding blanket use of blocklists for filtering content scraped from the web.

Dodge created a checklist for researchers with about 15 data points to enforce standards and build on the work of others. Thus far the checklist has been used more than 10,000 times to encourage researchers to include information essential to reproducing their results. Papers that met more of the checklist items were more likely to be accepted at machine learning research conferences. Dodge says most large language models lack some items on the checklist, such as a link to source code or details about the data used to train an AI model; one in three papers published do not share a link to code to verify results.

But Dodge also sees more systemic issues at work. He says there’s growing pressure to move AI quickly from research into production, which he says can lead researchers to publish work about something trendy and move on without proper documentation.

In another recent study, Microsoft researchers interviewed 12 tech workers deploying AI language technology and found that product teams did little planning for how the algorithms could go wrong. Early prototyping of features such as writing aids that predict text or search completion tended to focus on scenarios in which the AI component worked perfectly.

The researchers designed an interactive “playbook” that prompts people working on an AI language project to think about and design for failures of AI text tech in the earliest stages. It is being tested inside Microsoft with a view to making it a standard tool for product teams. Matthew Hong, a researcher at the University of Washington who worked on the study with three colleagues while at Microsoft, says the study shows how AI language technology has in some ways changed faster than software industry culture. “Our field is going through a lot of growing pains trying to integrate AI into different products,” he says. “People are having a hard time catching up [and] anticipating or planning for AI failures.”

This story originally appeared on wired.com.

Continue Reading

Science

Missouri AG wages war on masks as state blazes with delta cases

Published

on

Enlarge / Eric Schmitt, Missouri Attorney General.

Missouri has been one of the hardest-hit states so far in these early days of a delta-fueled COVID-19 surge. Cases increased nearly 500 percent since the start of July, while vaccinations stalled. Right now, with just 41 percent of the state fully vaccinated, 112 of the state’s 114 counties have high or substantial levels of coronavirus spread. Hospitalizations are up statewide, and some facilities have already run out of ventilators and seen intensive care units hit maximum capacity. Deaths are also increasing, with more than 300 people losing their lives this month since July 1. And the proportion of COVID-19 tests coming back positive is still rising, suggesting that things will likely only get worse in the weeks to come.

By nearly every metric, this entirely preventable surge is tragic. Yet, it hasn’t stopped the Show Me State’s Republican attorney general, Eric Schmitt, from waging war on local health restrictions aimed at trying to curb transmission. On Monday, Schmitt filed a lawsuit to stop St. Louis County and St. Louis City from enforcing mask mandates for fully vaccinated people and children, which took effect that day.

The timing of the lawsuit is awkward. It partly rests on now-outdated guidance from the Centers for Disease Control and Prevention that fully vaccinated people didn’t need to wear masks in most indoor settings. “The Mask Mandates are arbitrary and capricious because they require vaccinated individuals to wear masks, despite the CDC guidance that this is not necessary,” the lawsuit claims. The rest of the lawsuit didn’t argue that masks were ineffective at curbing transmission but rather claimed that they were unnecessary for children—despite that they are largely ineligible for vaccinations—and that requiring them is “unconstitutional.” Otherwise, the lawsuit nitpicked language of the mandates, such as alleging that they didn’t define the word “dwelling.”

The CDC reversed its mask policy Tuesday, citing evidence that even fully vaccinated people are catching and spreading the hypertransmissible delta coronavirus variant—though at much lower frequencies than unvaccinated people. The agency now recommends universal masking in K-12 schools and that fully vaccinated people mask in indoor public settings when local transmission is high or substantial. Both the city and county of St. Louis have high levels of COVID-19 transmission, as defined by the CDC.

Lies and freedoms

Still, Schmitt is not backing down. Though his office did not immediately respond to a request from Ars, Schmitt took to Twitter and Fox News to blast the CDC’s update.

“People are tired of being lied to by elites & the ruling class,” Attorney General Schmitt, who is also running for US Senate, tweeted on Tuesday evening. “We were told—get vaccinated and you don’t have to wear a mask. Now the vaccinated are forced to wear masks in St. Louis. Kids forced to wear masks too. The lies go on and on.”

On Wednesday, Kansas City’s Democratic Mayor Quinton Lucas announced that he, too, would reinstate an indoor mask mandate in the city for all persons aged five and older, regardless of vaccination status. And Schmitt quickly said that he would sue to stop that mandate as well.

“To the great people of Kansas City: I will be filing a lawsuit to protect your freedoms,” Schmitt tweeted Wednesday. This mask mandate is about politics & control, not science. You are not subjects but citizens of what has been the freest country in the world & I will always fight for you.”

It’s unclear how the lawsuits will pan out, but Mayor Lucas has already noted that he intends to put up a fight of his own. A press release from his office stated:

In light of recent litigation between the State of Missouri and the City and County of St. Louis, Mayor Lucas will also introduce a resolution in the weeks ahead for City Council support of emergency actions. Mayor Lucas stands with Mayor Tishaura Jones and County Executive Sam Page in protecting Missourians from the spread of COVID-19.

In St. Louis, County Executive Page also stood behind the mask mandate. The courts will decide its fate, he said, according to the St. Louis Post-Dispatch, but “until then, the law stands.” Page argued that masks are necessary to help lower transmission as more people get vaccinated. “These cases, and this curve is shooting straight up,” he said. “And if we don’t make some decisions fast, we’re going to be in a bad spot.”

Continue Reading

Science

A global index to track the health of tropical rainforests

Published

on

We’ve known for decades that tropical rainforests are special. They’re nearly unrivaled in biodiversity, and research has shown that they absorb more carbon dioxide than any other ecosystem. A recent study showed that the tropics sequester four times as much carbon dioxide as temperate and boreal ecosystems combined—and several studies have estimated that all terrestrial ecosystems combined sequester as much as 30 percent of the total carbon dioxide in the atmosphere each year.

We’ve also known for decades that these ecosystems are at risk of vanishing. As much as 20 percent of tropical rainforests have been cleared in the last 30 years, with an additional 10 percent lost to degradation. Beyond these direct threats, forests worldwide, and especially rainforests, are experiencing severe losses due to climate change—notably higher temperatures and drought.

Until now, there haven’t been means to systematically keep tabs on the health of these critical ecosystems. But a collaboration of nearly 50 institutions has recently developed a comprehensive index to measure the health and vulnerability of all tropical rainforests around the world. The result is a potential warning system that allows scientists and policymakers to monitor and prioritize which forests are at the highest risk of irreversible damage and loss.

“Rainforests regulate the Earth’s climate; if they cannot function well—the patterns of climate will change almost everywhere on Earth,” says lead author Sassan Saatchi of the Jet Propulsion Laboratory at the California Institute of Technology and the Institute of Environment and Sustainability at the University of California, Los Angeles. “If we lose the ability of the tropical forests to function normally, for example absorb the carbon from the atmosphere, it may mean that all of the efforts we’re doing in terms of climate change mitigation may become moot.”

Preventing tipping points

This new tropical forest vulnerability index (TFVI) combines close to a dozen data sets spanning up to 37 years of measurements (1982–2018). Past efforts have focused on limited regions or have largely relied on fieldwork, but the TFVI brings together a wide range of available measurements and models of rainforest stresses and responses.

Stress measurements included climate data about temperature, the dryness of the air (vapor pressure deficit), and the amount of water entering and leaving each system (the water balance). Responses included tree cover, carbon storage, above-ground biomass, productivity, and evapotranspiration—the amount of water that each system exchanges with the atmosphere during photosynthesis. Available measurements also included an existing biodiversity index and the temperature of the land surface or, in forests, the temperature of the tree and leaf surfaces. In addition, models and satellite data were calibrated with ground readings as much as possible.

When combined, these measurements give a historical record of the overall health and functioning of rainforests around the world over the past several decades. From this, it’s possible to see how much stress different forests have survived in the past and to flag current and future stress readings that are far outside of past variations. The researchers suggest that too many stresses may create feedback loops that lead to tipping points, beyond which a forest will no longer recover, and the damage will become irreversible. Such tipping points can be rapid and cause mass tree deaths, or they can trigger relatively gradual transitions to a different ecosystem type, like a savanna.

Global rainforest health check

Based on current measurements, the index shows that forests in the Amazon are particularly vulnerable, while African forests in the Congo Basin are more resilient. The authors suspect this may be due to less development in the Congo Basin, as well as a history of more frequent droughts there. Forests in Asia are particularly threatened by land use and forest fragmentation.

The intention of the index is to help the early identification of forests that are in the most need of additional protections, as well as to provide specific guidance on exactly which stress factors the forests are experiencing. While some interventions, like slowing climate change, may require longer-term solutions, others—like forest fragmentation—could be managed with restoration projects.

The authors were also careful to include measures of uncertainty throughout the index, and the code for the index is freely accessible so that anyone can use it and, hopefully, improve upon it as well. Every month, the index is updated with the latest data, and the team expects to launch an online version in the next year.

“This index is not the ultimate answer. One of the biggest caveats is that, as much as we know about the ecosystem, we are always surprised how the ecosystem works, and it can still become vulnerable to something that we don’t know about, or it may become resilient to something that we thought it has been vulnerable to,” says Saatchi. “But this is a work in progress, and the more knowledge we gain in terms of the function of these ecosystems, the better we can predict which direction they’ll go. The index will help us to continuously take the pulse of global rainforests as their health is changing.”

Science Advances, 2021. DOI: 10.1126/sciadv.abe9829

K.E.D. Coan is a freelance journalist covering climate and environment stories at Ars Technica. She has a PhD in chemistry and chemical biology.

Continue Reading

Science

Rocket Lab not yet close to profitability, proxy statement reveals

Published

on

Enlarge / Peter Beck, founder of Rocket Lab, is seen as essential to Rocket Lab’s success.

Rocket Lab

Running a rocket launch company is an expensive proposition. You need hundreds of employees, lots of expensive machines and tooling, plenty of hardware, and at least one launch site. To make matters worse, for a purely commercial launch firm like Rocket Lab, you typically only get paid when you deliver someone’s satellite into orbit.

So it is perhaps no surprise that the US-based company, which launches from New Zealand and has about 600 employees, has been losing a lot of money. According to a new proxy statement, Rocket Lab experienced net losses of $30 million and $55 million in 2019 and 2020, respectively. Given the company’s financial position, an independent auditor, according to the proxy statement, “expressed substantial doubt” about Rocket Lab’s “ability to continue as a going concern.”

These are the kinds of details we rarely see in the often financially opaque launch business, but as part of the process of converting into a publicly traded Special Purpose Acquisition Company, Rocket Lab had to make extensive financial disclosures. The full 712-page document can be downloaded here.

Rocket Lab reported revenue of $48 million in 2019 and $35 million in 2020. The decrease last year was due, in part, to the COVID-19 pandemic, the company said. It has contracts for 15 additional Electron launches for this year and beyond, valued at $127 million in launch and space systems revenue.

As of March 31 of this year, Rocket Lab has $34.2 million of cash and cash equivalents on hand. In addition to this, the company said it has access to both a $35 million revolving line of credit and a $100 million secured loan with Hercules Capital that is not repayable until June 2024. Rocket Lab acknowledged that there may be a fairly long pathway to profitability.

“We expect to continue to incur net losses for the next several years and we may not achieve or maintain profitability in the future,” the proxy statement says. “We believe there is a significant market opportunity for our business, and we intend to invest aggressively to capitalize on this opportunity.”

These financial losses may not cool the ardor of investors in Vector Acquisition Corporation, which is seeking to merge with Rocket Lab later this summer. Shareholders in Vector are due to vote on the proposed merger at a meeting on August 20. This merger will provide Rocket Lab with about $500 million in cash.

One reason investors will probably still be interested in Rocket Lab is that, unlike a lot of the space companies that have recently gone the SPAC route to become publicly traded, the launch company has solid revenue, demonstrated hardware, and a path toward growing its business.

Rocket Lab is already working to expand beyond small launch, including building its own satellites, performing satellite servicing in orbit, and building a medium-lift rocket called Neutron with a reusable first stage. In the proxy statement, Rocket Lab noted that Neutron has lift capacity of up to 8 metric tons to low-Earth orbit, 2 tons to the Moon, and 1.5 tons to Mars and Venus. Its first launch may occur as early as 2024.

Neutron, the company said, “will enable significantly higher revenue per launch with its capability to deploy larger spacecraft and greater numbers of spacecraft per launch as compared to our Electron launch vehicle, and will also be capable of supporting crewed flight and cargo resupply to the International Space Station.”

In terms of risks, the company cited the unexpected but potential loss of Peter Beck as its leader. Fiery, charismatic, and demanding of his employees, Beck has relentlessly promoted the Rocket Lab brand publicly and been a key driver of its technological innovation, the company said.

“We are highly dependent on the services of Peter Beck, our President, Chief Executive Officer and Chairman,” the proxy statement said. “Mr. Beck is the source of many, if not most, of the ideas and execution driving our company. If Mr. Beck were to discontinue his service to us due to death, disability or any other reason, we would be significantly disadvantaged.”

Continue Reading

Trending