

Waymo launches robotaxi app on Google Play


Waymo is making its ride-hailing app more widely available by putting it on the Google Play store as the self-driving car company prepares to open its service to more Phoenix residents.

The company, which spun out of Google’s self-driving car project to become a business under Alphabet, launched a limited commercial robotaxi service called Waymo One in the Phoenix area in December 2018. The Waymo One self-driving car service and its accompanying app were only available to Phoenix residents who were part of its early rider program, which aimed to bring vetted regular folks into its self-driving minivans.

Technically, Waymo has had Android and iOS apps for some time. But interested riders would only gain access to the app after first applying on the company’s website. Once accepted to the early rider program, they would be sent a link to the app to download to their device.

The early rider program, which launched in April 2017, had more than 400 participants the last time Waymo shared figures. Waymo hasn’t shared information on how many people have moved over to the public service, except to say “hundreds of riders” are using it.

Now, with Waymo One launching on Google Play, the company is cracking the door a bit wider. However, there will still be limitations to the service.

Interested customers with Android devices can download the app. But unlike with a traditional ride-hailing service such as Uber or Lyft, downloading it doesn’t mean users will get instant access. Instead, potential riders will be added to a waitlist. Once accepted, they will be able to request rides in the app.

These new customers will first be invited into Waymo’s early rider program before they’re moved to the public service. This is an important distinction, because early rider program participants have to sign non-disclosure agreements and can’t bring guests with them. These new riders will eventually be moved to Waymo’s public service, the company said. Riders on the public service can invite guests, take photos and videos, and talk about their experience.

“These two offerings are deeply connected, as learnings from our early rider program help shape the experience we ultimately provide to our public riders,” Waymo said in a blog post Tuesday.

Waymo has been creeping toward a commercial service in Phoenix since it began testing self-driving Chrysler Pacifica minivans in suburbs like Chandler in 2016.

The following year, Waymo launched its early rider program. The company also started testing empty self-driving minivans on public streets that year.

In May 2018, Waymo began allowing some early riders to hail a self-driving minivan without a human test driver behind the wheel. More recently, the company launched a public transit program in Phoenix focused on delivering people to bus stops and train and light-rail stations.



They plugged GPT-4 into Minecraft—and unearthed new potential for AI



The technology that underpins ChatGPT has the potential to do much more than just talk. Linxi “Jim” Fan, an AI researcher at the chipmaker Nvidia, worked with some colleagues to devise a way to set the powerful language model GPT-4—the “brains” behind ChatGPT and a growing number of other apps and services—loose inside the blocky video game Minecraft.

The Nvidia team, which included Anima Anandkumar, the company’s director of machine learning and a professor at Caltech, created a Minecraft bot called Voyager that uses GPT-4 to solve problems inside the game. The language model generates objectives that help the agent explore the game, and code that improves the bot’s skill at the game over time.

Voyager doesn’t play the game like a person, but it can read the state of the game directly, via an API. It might see a fishing rod in its inventory and a river nearby, for instance, and use GPT-4 to suggest the goal of doing some fishing to gain experience. It then asks GPT-4 to generate the code needed for the character to achieve that goal.

The most novel part of the project is the code that GPT-4 generates to add behaviors to Voyager. If the code initially suggested doesn’t run perfectly, Voyager will try to refine it using error messages, feedback from the game, and a description of the code generated by GPT-4.

Over time, Voyager builds a library of code in order to learn to make increasingly complex things and explore more of the game. A chart created by the researchers shows how capable it is compared to other Minecraft agents. Voyager obtains more than three times as many items, explores more than twice as far, and builds tools 15 times more quickly than other AI agents. Fan says the approach may be improved in the future with the addition of a way for the system to incorporate visual information from the game.
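
To make that loop concrete, here is a minimal Python sketch of the propose/generate/refine cycle described above. It is purely illustrative: the callables llm, observe_game, and run_in_game are hypothetical stand-ins for a GPT-4 API call, the Minecraft game state, and code execution, and RunResult is an assumed return type. It is not Voyager’s actual implementation.

```python
# Illustrative sketch of a propose/generate/refine agent loop (not Voyager's real code).
from collections import namedtuple

# Assumed shape of what run_in_game() returns: success flag, error text, game feedback.
RunResult = namedtuple("RunResult", "ok error feedback")

skill_library = {}  # goal -> working code, reused in later prompts


def voyager_step(llm, observe_game, run_in_game, max_retries=3):
    """One iteration: pick a goal, generate code for it, refine the code on failure."""
    state = observe_game()  # e.g. inventory contents, nearby blocks, biome

    # 1. Ask the language model to propose the next useful goal, given the game
    #    state and the skills already learned.
    goal = llm(
        f"Game state: {state}\n"
        f"Known skills: {list(skill_library)}\n"
        "Propose the next useful task to make progress."
    )

    # 2. Ask it to write code for that goal, retrying with feedback on failure.
    prompt = f"Write Minecraft control code to accomplish: {goal}"
    for _ in range(max_retries):
        code = llm(prompt)
        result = run_in_game(code)      # execute the generated code via the game API
        if result.ok:
            skill_library[goal] = code  # keep working code so it can be reused later
            return goal, code
        # 3. Feed error messages and game feedback back into the next attempt.
        prompt = (
            f"The code for '{goal}' failed.\nCode:\n{code}\n"
            f"Error: {result.error}\nGame feedback: {result.feedback}\n"
            "Fix the code."
        )
    return goal, None  # give up on this goal for now and try something else next step
```

In the real system the prompts and the skill library are far richer, but the basic cycle of proposing a goal, generating code, and repairing it from execution feedback is the one the researchers describe.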

While chatbots like ChatGPT have wowed the world with their eloquence and apparent knowledge—even if they often make things up—Voyager shows the huge potential for language models to perform helpful actions on computers. Using language models in this way could perhaps automate many routine office tasks, potentially one of the technology’s biggest economic impacts.

The process that Voyager uses with GPT-4 to figure out how to do things in Minecraft might be adapted for a software assistant that works out how to automate tasks via the operating system on a PC or phone. OpenAI, the startup that created ChatGPT, has added “plugins” to the bot that allow it to interact with online services such as grocery delivery app Instacart. Microsoft, which owns Minecraft, is also training AI programs to play it, and the company recently announced Windows 11 Copilot, an operating system feature that will use machine learning and APIs to automate certain tasks. It may be a good idea to experiment with this kind of technology inside a game like Minecraft, where flawed code can do relatively little harm.

Video games have long been a test bed for AI algorithms, of course. AlphaGo, the machine learning program that mastered the extremely subtle board game Go back in 2016, came from DeepMind, whose earlier game-playing systems cut their teeth on simple Atari video games. AlphaGo used a technique called reinforcement learning, which trains an algorithm to play a game by giving it positive and negative feedback, for example from the score inside a game.
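
As a tiny, concrete illustration of that kind of feedback, the following Python sketch runs tabular Q-learning on a made-up five-cell “walk to the goal” game, where the only feedback is a point for reaching the last cell. The game, constants, and variable names are all invented for illustration; the systems used for Go and Atari are vastly more sophisticated.

```python
# Minimal tabular Q-learning on an invented toy game: walk right along 5 cells,
# score 1 point for reaching the last cell. Purely illustrative.
import random

N_STATES = 5           # cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]     # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

# Q[state][action_index]: the agent's running estimate of future score
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise pick the action with the best estimate.
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0  # the "score" feedback
        # Positive/negative feedback nudges the estimate toward the observed outcome.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print("Learned preference for stepping right in each cell:",
      [round(Q[s][1] - Q[s][0], 2) for s in range(N_STATES - 1)])
```

Running it prints a growing preference for stepping toward the goal, which is the learned behavior in this toy setting.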

It is more difficult for this method to guide an agent in an open-ended game such as Minecraft, where there is no score or set of objectives and where a player’s actions may not pay off until much later. Whether or not you believe we should be preparing to contain the existential threat from AI right now, Minecraft seems like an excellent playground for the technology.

This story originally appeared on wired.com.



Google’s Android and Chrome extensions are a very sad place. Here’s why


Photo Illustration by Miguel Candela/SOPA Images/LightRocket via Getty Images

No wonder Google is having trouble keeping up with policing its app store. Since Monday, researchers have reported that hundreds of Android apps and Chrome extensions with millions of installs from the company’s official marketplaces have included functions for snooping on user files, manipulating the contents of clipboards, and injecting deliberately obfuscated code into webpages.

Google has removed many but not all of the malicious entries, the researchers said, but only after they were reported, and by then, they were on millions of devices—and possibly hundreds of millions. The researchers aren’t pleased.

A very sad place

“I’m not a fan of Google’s approach,” extension developer and researcher Wladimir Palant wrote in an email. In the days before Chrome, when Firefox had a bigger share of the browser market, real people reviewed extensions before making them available in the Mozilla marketplace. Google took a different approach by using an automated review process, which Firefox then copied.

“As automated reviews are frequently missing malicious extensions and Google is very slow to react to reports (in fact, they rarely react at all), this leaves users in a very sad place.”

Researchers and security advocates have long directed the same criticism at Google’s process for reviewing Android apps before making them available in its Play marketplace. The past week provides a stark reason for the displeasure.

On Monday, security company Dr.Web reported finding 101 apps with a reported 421 million downloads from Play that contained code allowing a host of spyware activities, including:

  • Obtaining a list of files in specified directories
  • Verifying the presence of specific files or directories on the device
  • Sending a file from the device to the developer
  • Copying or substituting the content of clipboards

ESET researcher Lukas Stefanko analyzed the apps reported by Dr.Web and confirmed the findings. In an email, he said that for the file snooping to work, users would first have to approve a permission known as READ_EXTERNAL_STORAGE, which, as its name implies, allows apps to read files stored on a device. While that’s one of the more sensitive permissions a user can grant, it’s required to perform many of the apps’ purported purposes, such as photo editing, managing downloads, and working with multimedia, browser apps, or the camera.

Dr.Web said that the spyware functions were supplied by a software development kit (SDK) used to create each app. SDKs help streamline the development process by automating certain types of commonly performed tasks. Dr.Web identified the SDK enabling the snooping as SpinOK. Attempts to contact the SpinOK developer for comment were unsuccessful.

On Friday, security firm CloudSEK extended the list of apps using SpinOK to 193 and said that of those, 43 remained available in Play. In an email, a CloudSEK researcher wrote:

The Android.Spy.SpinOk spyware is a highly concerning threat to Android devices, as it possesses the capability to collect files from infected devices and transfer them to malicious attackers. This unauthorized file collection puts sensitive and personal information at risk of being exposed or misused. Moreover, the spyware’s ability to manipulate clipboard contents further compounds the threat, potentially allowing attackers to access sensitive data such as passwords, credit card numbers, or other confidential information. The implications of such actions can be severe, leading to identity theft, financial fraud, and various privacy breaches.

The week didn’t go any better for Chrome users who obtain extensions from Google’s Chrome Web Store. On Wednesday, Palant reported 18 extensions that contained deliberately obfuscated code that reached out to a server located at serasearchtop[.]com. From there, the extensions injected mysterious JavaScript into every webpage a user viewed. In all, the 18 extensions had some 55 million downloads.

On Friday, security firm Avast confirmed Palant’s findings and identified 32 extensions with 75 million reported downloads, though Avast said the download counts may have been artificially inflated.

It’s unknown precisely what the injected JavaScript did because neither Palant nor Avast could view the code. While both suspect the purpose was to hijack search results and spam users with ads, they say the extensions went well beyond being just spyware and instead constituted malware.

“Being able to inject arbitrary JavaScript code into each and every webpage has enormous abuse potential,” he explained. “Redirecting search pages is only the one *confirmed* way in which this power has been abused.”



Air Force denies running simulation where AI drone “killed” its operator


An armed unmanned aerial vehicle on a runway, but orange. (Getty Images)

Over the past 24 hours, several news outlets reported a now-retracted story claiming that the US Air Force had run a simulation in which an AI-controlled drone “went rogue” and “killed the operator because that person was keeping it from accomplishing its objective.” The US Air Force has denied that any simulation ever took place, and the original source of the story says he “misspoke.”

The story originated in a recap published on the website of the Royal Aeronautical Society that served as an overview of sessions at the Future Combat Air & Space Capabilities Summit that took place last week in London.

In a section of that piece titled “AI—is Skynet here already?” the authors recount a presentation by USAF Chief of AI Test and Operations Col. Tucker “Cinco” Hamilton, who spoke about a “simulated test” where an AI-enabled drone, tasked with identifying and destroying surface-to-air missile sites, started to perceive human “no-go” decisions as obstacles to achieving its primary mission. In the “simulation,” the AI reportedly attacked its human operator, and when trained not to harm the operator, it instead destroyed the communication tower, preventing the operator from interfering with its mission.

The Royal Aeronautical Society quotes Hamilton as saying:

We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.

We trained the system—”Hey don’t kill the operator—that’s bad. You’re gonna lose points if you do that.” So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.

This juicy tidbit about an AI system apparently deciding to kill its simulated operator began making the rounds on social media and was soon picked up by major publications like Vice and The Guardian (both of which have since updated their stories with retractions). But soon after the story broke, people on Twitter began to question its accuracy, with some saying that by “simulation,” the military is referring to a hypothetical scenario, not necessarily a rules-based software simulation.

Today, Insider published a firm denial from the US Air Force, which said, “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Not long after, the Royal Aeronautical Society updated its conference recap with a correction from Hamilton:

Col. Hamilton admits he “misspoke” in his presentation at the Royal Aeronautical Society FCAS Summit, and the “rogue AI drone simulation” was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation, saying: “We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome.” He clarifies that the USAF has not tested any weaponized AI in this way (real or simulated) and says, “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”

The misunderstanding and quick viral spread of a “too good to be true” story show how easy it is to unintentionally spread erroneous news about “killer” AI, especially when it fits preconceived notions of AI malpractice.

Still, many experts called out the story as being too pat to begin with, and not just because of technical critiques explaining that a military AI system wouldn’t necessarily work that way. As a BlueSky user named “kilgore trout” humorously put it, “I knew this story was bullsh*t because imagine the military coming out and saying an expensive weapons system they’re working on sucks.”
