Nexar’s Live Map is like Street View with pictures from 5 minutes ago

We all rely on maps to get where we’re going or investigate a neighborhood for potential brunch places, but the data we’re looking at is often old, vague or both. Nexar, maker of dashcam apps and cameras, aims to put fresh and specific data on your map with images from the street taken only minutes before.

If you’re familiar with dash cams, and you’re familiar with Google’s Street View, then you can probably already picture what Live Map essentially is. It’s not quite as easy to picture how it works or why it’s useful.

Nexar sells dash cams and offers an app that turns your phone into one temporarily, and the business has been doing well, with thousands of active users on the streets of major cities at any given time. Each node of this network of gadgets shares information with the other nodes — warning of traffic snarls, potholes, construction and so on.

The team saw the community they’d enabled trading videos and sharing data derived from automatic analysis of their imagery, and, according to co-founder and CTO Bruno Fernandez-Ruiz, asked themselves: Why shouldn’t this data be available to the public as well?

Actually, there are a few reasons — privacy chief among them. Google has shown that, properly handled, this kind of imagery can be useful and only minimally invasive. But knowing where someone or some car was a year or two ago is one thing; knowing where they were five minutes ago is another entirely.

Fortunately, from what I’ve heard, this issue was front of mind for the team from the start. But it helps to see what the product looks like in action before addressing that.

Zooming in on a hexagonal map section, which the company has dubbed a “nexagon,” queries the service for everything it knows about that area. And the nature of the data makes for extremely granular information. Where something like Google Maps or Waze may say there’s an accident at this intersection, or construction causing traffic, Nexar’s map will show the locations of the orange cones to within a few feet, or how far into the lane that fender-bender protrudes.

You can also select the time of day, letting you rewind a few minutes or a few days — what was it like during that parade? Or after the game? Are there a lot of people there late at night? And so on.

Right now it’s limited to a web interface, and to New York City — the company has enough data to launch in several other areas in the U.S. but wants to do a slower roll-out to identify issues and opportunities. An API is on the way as well. (Europe, unfortunately, may be waiting a while, though the company says it’s GDPR-compliant.)

The service uses computer vision algorithms to identify a number of features, including signs (permanent and temporary), obstructions, even the status of traffic lights. This all goes into the database, which is updated any time a car with a Nexar node goes by. Naturally, the imagery isn’t 360-degree or high-definition — these are forward-facing cameras with decent but not impressive resolution. It’s for telling what’s in the road, not for zooming in to spot a street address.
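
Nexar hasn’t published its internals, but the shape of the system is easy to sketch: detection records keyed by hex cell, filterable by time. Here is a toy illustration (the schema and all names are hypothetical, not Nexar’s):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record type; Nexar hasn't published its schema.
@dataclass
class Detection:
    kind: str          # e.g. "cone", "fender_bender", "red_light"
    lat: float
    lon: float
    seen_at: datetime

# Toy in-memory store keyed by hex-cell ("nexagon") ID. A production
# service would use a real spatial index, appended to each time a
# camera-equipped car drives through the cell.
STORE: dict[str, list[Detection]] = {}

def query_nexagon(cell_id: str, window: timedelta) -> list[Detection]:
    """Everything known about one cell within a time window. Shifting
    or widening the window gives the 'rewind a few days' behavior."""
    cutoff = datetime.utcnow() - window
    return [d for d in STORE.get(cell_id, []) if d.seen_at >= cutoff]
```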

Detection Filtering

Of course, construction signs and traffic jams aren’t the only things on the road. As mentioned before, it’s a serious question of privacy to have constantly updating, public-facing imagery of every major street of a major city. Setting aside the broader argument over the right to privacy in public places and its attendant philosophical problems, it’s simply the ethical thing to do to minimize how much you expose people who don’t know they’re being photographed.

To that end, Nexar’s systems detect and blur out faces before any images are exposed to public view. License plates are likewise obscured so that neither cars nor people can be easily tracked from image to image. One could still note that the small red car on 4th is probably the same small red car seen on 5th a minute later. But systematic, rather than incidental, surveillance is far easier with an identifier like a license plate.

In addition to protecting bystanders, Nexar has to reckon with the fact that an image from a car by definition places that car at a location at a given time, allowing its driver to be tracked. And while the community essentially opts into this kind of location and data sharing when signing up for an account, it would be awkward if the public website let a stranger track a user all the way home or watch their movements all day.

“The frames are carefully random to begin with so people can’t be soloed out,” said Fernandez-Ruiz. “We eliminate any frames near your house and your destination.” As far as the blurring, he said that “We have a pretty robust model, on par with anything you can see in the industry. We probably are something north of 97-98% accurate for private data.”
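
Going by that description, a pre-publication filter would drop frames near a trip’s endpoints, randomly subsample the rest, and blur whatever the face and plate detectors flag. A minimal sketch with invented radii and rates (Nexar’s actual pipeline is not public):

```python
import math
import random

# All thresholds below are invented for illustration.
ENDPOINT_RADIUS_M = 200.0   # exclusion zone around trip start and end
SAMPLE_RATE = 0.1           # fraction of eligible frames made public

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def publishable(frame, trip_start, trip_end):
    """Decide whether a frame may appear on the public map."""
    # 1. Drop frames near the driver's origin or destination.
    for lat, lon in (trip_start, trip_end):
        if haversine_m(frame["lat"], frame["lon"], lat, lon) < ENDPOINT_RADIUS_M:
            return False
    # 2. Random subsampling, so no single car's route can be replayed.
    if random.random() > SAMPLE_RATE:
        return False
    # 3. Surviving frames still get faces and plates blurred before
    #    publication (the detection model itself is omitted here).
    return True
```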

So what would you do with this kind of service? There is, of course, something fundamentally compelling about being able to browse your city in something like real time.

“On Google, there’s a red line. We show you an actual frame — a car blocking the right lane right there. It gives you a human connection,” said Fernandez-Ruiz. “There’s an element of curiosity about what the world looks like, maybe not something you do every day, but maybe once a week, or when something happens.”

No doubt many of us are guilty of watching dash-cam footage or even Street View pictures of various events, pranks and other occurrences. But basic curiosity doesn’t pay the bills. Fortunately there are more compelling use cases.

“One that’s interesting is construction zones. You can see individual elements like cones and barriers — you can see where exactly they are, when they’re started etc. We want to work with municipal authorities, departments of transportation, etc. on this — it gives them a lot of information on what their contractors are doing on the road. That’s one use case that we know about and understand.”

In fact there are already some pilot programs in Nevada. And although it’s rather a prosaic application of a 24/7 surveillance apparatus, it seems likely to do some good.

But the government angle brings in an unsavory line of thinking: what if the police want unblurred dash-cam footage of a crime that just happened? It’s one of many situations where tech’s role has historically been a mixed blessing.

“We’ve given a lot of thought to this, and it concerns our investors highly,” Fernandez-Ruiz admitted. “There are two things we’ve done. One is we’ve differentiated what data the user owns and what we have. The data they send is theirs — like Dropbox. What we get is these anonymized blurred images. Obviously we will comply with the law, but as far as ethical applications of big data and AI, we’ve said we’re not going to be a tool of an authoritarian government. So we’re putting processes in place — even if we get a subpoena, we can say: This is encrypted data, please ask the user.”

That’s some consolation, but it seems clear that tools like this one are more a question than an answer. It’s an experiment by a successful company and may morph into something ubiquitous and useful or a niche product used by professional drivers and municipal governments. But in tech, if you have the data, you use it. Because if you don’t, someone else will.

You can test out Nexar’s Live Map here.

Apple and Google’s AI wizardry promises privacy—at a cost

Since the dawn of the iPhone, many of the smarts in smartphones have come from elsewhere: the corporate computers known as the cloud. Mobile apps sent user data cloudward for useful tasks like transcribing speech or suggesting message replies. Now Apple and Google say smartphones are smart enough to do some crucial and sensitive machine learning tasks like those on their own.

At Apple’s WWDC event this month, the company said its virtual assistant Siri will transcribe speech without tapping the cloud in some languages on recent and future iPhones and iPads. During its own I/O developer event last month, Google said the latest version of its Android operating system has a feature dedicated to secure, on-device processing of sensitive data, called the Private Compute Core. Its initial uses include powering the version of the company’s Smart Reply feature built into its mobile keyboard that can suggest responses to incoming messages.

Apple and Google both say on-device machine learning offers more privacy and snappier apps. Not transmitting personal data cuts the risk of exposure and saves time spent waiting for data to traverse the internet. At the same time, keeping data on devices aligns with the tech giants’ long-term interest in keeping consumers bound into their ecosystems. People who hear that their data can be processed more privately might become more willing to agree to share more data.

The companies’ recent promotion of on-device machine learning comes after years of work on technology to constrain the data their clouds can “see.”

In 2014, Google started gathering some data on Chrome browser usage through a technique called differential privacy, which adds noise to harvested data in ways that restrict what those samples reveal about individuals. Apple has used the technique on data gathered from phones to inform emoji and typing predictions and for web browsing data.
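
A classic building block behind such schemes (Google’s Chrome system, RAPPOR, is built on it) is randomized response: each individual report is noisy enough to be deniable, yet the noise cancels out in aggregate. A minimal sketch:

```python
import random

def randomized_response(truth: bool, p_honest: float = 0.75) -> bool:
    """Report the true bit with probability p_honest; otherwise flip a
    coin. Any individual report is plausibly deniable."""
    if random.random() < p_honest:
        return truth
    return random.random() < 0.5

def estimate_rate(reports: list[bool], p_honest: float = 0.75) -> float:
    """Invert the noise to estimate the population rate, using
    observed = p_honest * true + (1 - p_honest) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_honest) * 0.5) / p_honest
```

With p_honest at 0.75, any single “yes” could be a coin flip, but over thousands of reports the estimator recovers the true rate to within sampling error.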

More recently, both companies have adopted a technology called federated learning. It allows a cloud-based machine learning system to be updated without scooping in raw data; instead, individual devices process data locally and share only digested updates. As with differential privacy, the companies have discussed using federated learning only in limited cases. Google has used the technique to keep its mobile typing predictions up to date with language trends; Apple has published research on using it to update speech recognition models.
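
Neither company has published its full training stack, but the core loop of federated averaging is simple to sketch: each device takes a gradient step on its own data, and only the updated weights travel back to the server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01):
    """One device: a gradient step for least squares, computed on data
    that never leaves the phone."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, devices):
    """Server side: average the devices' updated weights. (Production
    federated averaging weights each update by its sample count.)"""
    return np.mean([local_update(weights, X, y) for X, y in devices], axis=0)
```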

Rachel Cummings, an assistant professor at Columbia who has previously consulted on privacy for Apple, says the rapid shift to do some machine learning on phones has been striking. “It’s incredibly rare to see something going from the first conception to being deployed at scale in so few years,” she says.

That progress has required not just advances in computer science but for companies to take on the practical challenges of processing data on devices owned by consumers. Google has said that its federated learning system only taps users’ devices when they are plugged in, idle, and on a free internet connection. The technique was enabled in part by improvements in the power of mobile processors.

Beefier mobile hardware also contributed to Google’s 2019 announcement that voice recognition for its virtual assistant on Pixel devices would be wholly on-device, free from the crutch of the cloud. Apple’s new on-device voice recognition for Siri, announced at WWDC this month, will use the “neural engine” the company added to its mobile processors to power machine learning algorithms.

The technical feats are impressive. It’s debatable how much they will meaningfully change users’ relationship with tech giants.

Presenters at Apple’s WWDC said Siri’s new design was a “major update to privacy” that addressed the risk associated with accidentally transmitting audio to the cloud, saying that was users’ largest privacy concern about voice assistants. Some Siri commands—such as setting timers—can be recognized wholly locally, making for a speedy response. Yet in many cases transcribed commands to Siri—presumably including from accidental recordings—will be sent to Apple servers for software to decode and respond. Siri voice transcription will still be cloud-based for HomePod smart speakers commonly installed in bedrooms and kitchens, where accidental recording can be more concerning.

Google also promotes on-device data processing as a privacy win and has signaled it will expand the practice. The company expects partners such as Samsung that use its Android operating system to adopt the new Private Compute Core and use it for features that rely on sensitive data.

Google has also made local analysis of browsing data a feature of its proposal for reinventing online ad targeting, dubbed FLoC and claimed to be more private. Academics and some rival tech companies have said the design is likely to help Google consolidate its dominance of online ads by making targeting more difficult for other companies.

Michael Veale, a lecturer in digital rights at University College London, says on-device data processing can be a good thing but adds that the way tech companies promote it shows they are primarily motivated by a desire to keep people tied into lucrative digital ecosystems.

“Privacy gets confused with keeping data confidential, but it’s also about limiting power,” says Veale. “If you’re a big tech company and manage to reframe privacy as only confidentiality of data, that allows you to continue business as normal and gives you license to operate.”

A Google spokesperson said the company “builds for privacy everywhere computing happens” and that data sent to the Private Compute Core for processing “needs to be tied to user value.” Apple did not respond to a request for comment.

Cummings of Columbia says new privacy techniques and the way companies market them add complexity to the trade-offs of digital life. Over recent years, as machine learning has become more widely deployed, tech companies have steadily expanded the range of data they collect and analyze. There is evidence some consumers misunderstand the privacy protections trumpeted by tech giants.

A forthcoming survey study from Cummings and collaborators at Boston University and the Max Planck Institute showed descriptions of differential privacy drawn from tech companies, media, and academics to 675 Americans. Hearing about the technique made people about twice as likely to report they would be willing to share data. But there was evidence that descriptions of differential privacy’s benefits also encouraged unrealistic expectations. One-fifth of respondents expected their data to be protected against law enforcement searches, something differential privacy does not do. Apple’s and Google’s latest proclamations about on-device data processing may bring new opportunities for misunderstandings.

This story originally appeared on wired.com.

Amazon joins Apple, Google by reducing its app store cut

Image: The Amazon Fire HD 8 tablet, which runs Amazon’s Fire OS.

Apparently following the lead of Apple and Google, Amazon has announced that it will take a smaller revenue cut from developers earning less than $1 million annually on the Amazon Appstore. The same applies to developers who are brand-new to the marketplace.

The new program from Amazon, called the Amazon Appstore Small Business Accelerator Program, launches in Q4 of this year, and it will reduce the cut Amazon takes from app revenue, which was previously 30 percent. (Developers making over $1 million annually will continue to pay the original rate.) For some, it’s a slightly worse deal than Apple’s or Google’s, and for others, it’s better.

Amazon’s new indie-friendly rate is 20 percent, in contrast to Apple’s and Google’s 15 percent. Amazon seeks to offset this difference by granting developers 10 percent of their Appstore revenue in the form of a credit for AWS. For certain developers who use AWS, it could mean that Amazon’s effective cut is actually 10 percent, not 15 or 20 percent.
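
The arithmetic, on a hypothetical $100,000 of annual Appstore revenue:

```python
revenue = 100_000                      # hypothetical annual Appstore revenue
amazon_cut = 0.20 * revenue            # $20,000 kept by Amazon
aws_credit = 0.10 * revenue            # $10,000 back, but only as AWS credit
cash_to_developer = revenue - amazon_cut   # $80,000 in actual cash

# The credit only behaves like cash for developers who would have bought
# that much AWS anyway; for them, the effective cut drops to 10 percent.
effective_cut = (amazon_cut - aws_credit) / revenue   # 0.10
print(cash_to_developer, effective_cut)
```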

But for others, the credit amounts to a coupon toward future purchases of Amazon services rather than cash in their pockets. It leaves small developers who aren’t already spending money on Amazon’s services with a worse deal than they’d get on Apple’s or Google’s marketplaces.

As with Apple’s program—but not Google’s—the lower rate applies only to developers who made $1 million or less in total during the previous year. Crossing that threshold means paying the old, higher rate on all earnings. Google, by contrast, always takes its smaller cut of the first $1 million in a given year and applies the bigger cut only to revenue beyond that.
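
A sketch of the two fee structures makes the difference concrete; the 15 percent small-developer rate is used for both models here (Amazon’s rate would be 20 percent):

```python
def cliff_fee(revenue: float) -> float:
    """Apple/Amazon style: the small-developer rate applies only while
    total revenue stays at or under $1M; crossing the threshold reprices
    ALL revenue at 30 percent."""
    return revenue * (0.30 if revenue > 1_000_000 else 0.15)

def marginal_fee(revenue: float) -> float:
    """Google style: 15 percent on the first $1M each year, 30 percent
    only on revenue beyond it."""
    first = min(revenue, 1_000_000)
    rest = max(revenue - 1_000_000, 0.0)
    return first * 0.15 + rest * 0.30

print(cliff_fee(1_100_000))     # 330000.0
print(marginal_fee(1_100_000))  # 180000.0
```

At $1.1 million, the cliff model charges $330,000 while the marginal model charges $180,000, which is why crossing the threshold stings so much more under the Apple and Amazon rules.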

The Amazon Appstore primarily exists as the app store for Amazon’s Android-based Fire OS software that runs on tablets. It’s also offered as an alternative App Store for users of other Android-based operating systems.

All three companies are facing various forms of regulatory scrutiny, and that scrutiny was likely a factor in Apple’s decision to cut the fees it applies to apps released by small developers on the Apple App Store. Google followed shortly afterward for its Google Play marketplace.


Microsoft’s Linux repositories were down for 18+ hours

Image: In 2017, Tux was sad that he had a Microsoft logo on his chest. In 2021, he’s mostly sad that Microsoft’s repositories were down for most of a day. (Credit: Jim Salter)

Yesterday, packages.microsoft.com—the repository from which Microsoft serves software installers for Linux distributions including CentOS, Debian, Fedora, OpenSUSE, and more—went down hard, and it stayed down for around 18 hours. The outage impacted users trying to install .NET Core, Microsoft Teams, Microsoft SQL Server for Linux (yes, that’s a thing), and more, as well as Azure’s own DevOps pipelines.

We first became aware of the problem Wednesday evening, when we saw 404 errors in the output of apt update on an Ubuntu workstation with Microsoft Teams installed. The outage is somewhat better documented in this .NET Core issue report on GitHub, with many users from around the world sharing their experiences and theories.

The short version is that the entire repository cluster serving all of Microsoft’s Linux packages was completely down, issuing HTTP 404 (Not Found) and 500 (Internal Server Error) responses for every URL, for roughly 18 hours. Microsoft engineer Rahul Bhandari confirmed the outage roughly five hours after it was initially reported, with a cryptic comment about the infrastructure team “running into some space issues.”
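
For what it’s worth, confirming this kind of outage takes only a few lines; the probe below is a hypothetical reconstruction of what affected users were doing, not Microsoft tooling:

```python
import urllib.error
import urllib.request

# During the outage, any path under packages.microsoft.com answered
# with 404 or 500; a healthy mirror answers 200.
URL = "https://packages.microsoft.com/"

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        print(URL, "->", resp.status)
except urllib.error.HTTPError as err:
    print(URL, "->", err.code)
except OSError as err:
    print(URL, "-> unreachable:", err)
```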

Eighteen hours after the issue was reported, Bhandari reported that the mirrors were once again available—although with temporarily degraded performance, likely due to cold caches. In this update, Bhandari said that the original cause of the outage was “a regression in [apt repositories] during some feature migration work that resulted in those packages becoming unavailable on the mirrors.”

We’re still waiting for a comprehensive incident report, since Bhandari’s status updates provide clues but no real explanations. The good news is, we can confirm that packages.microsoft.com is indeed up once again, and it is serving packages as it should.
