
Popular avatar app Boomoji exposed millions of users’ contact lists and location data


Popular animated avatar creator app Boomoji, with more than five million users across the world, exposed the personal data of its entire user base after it failed to put passwords on two of its internet-facing databases.

The China-based app developer left two ElasticSearch databases online without passwords: a U.S.-based database for its international customers and a Hong Kong-based database holding mostly Chinese users’ data, kept there in an effort to comply with China’s data security laws, which require Chinese citizens’ data to be stored on servers inside the country.

Anyone who knew where to look could access, edit or delete the databases using nothing more than a web browser. And because both databases were listed on Shodan, a search engine for exposed devices and databases, they were easy to find with a few keywords.
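For context, an ElasticSearch instance left open like this answers plain HTTP requests on its public port, so browsing or even deleting its contents requires nothing more than a URL. Here is a minimal sketch of what that access looks like, using a hypothetical host and index name rather than Boomoji’s actual endpoints:

```python
import requests

# Hypothetical exposed cluster; this is NOT Boomoji's real address.
BASE = "http://203.0.113.10:9200"

# List every index on the cluster, the same view a Shodan-style scan surfaces.
print(requests.get(f"{BASE}/_cat/indices?v").text)

# Pull a handful of documents from a hypothetical "users" index.
resp = requests.get(f"{BASE}/users/_search", params={"size": 5})
print(resp.json())

# With no authentication, destructive requests are just as easy:
# requests.delete(f"{BASE}/users") would drop the entire index.
```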

After TechCrunch reached out, Boomoji pulled the two databases offline. “These two accounts were made by us for testing purposes,” said an unnamed Boomoji spokesperson in an email.

But that isn’t true.

The database contained records on all of the company’s iOS and Android users — some 5.3 million users as of this week. Each record contained their username, gender, country and phone type.

Each record also included a user’s unique Boomoji ID, which was linked to other tables in the database. Those other tables recorded whether a user attends school and, if so, which one, a feature Boomoji touts as a way for users to get in touch with fellow students. That same ID was also linked to the precise geolocation of more than 375,000 users who had allowed the app to know their location at any given time.

Worse, the database contained every phone book entry of every user who had allowed the app access to their contacts.

One table had more than 125 million contacts, including names (as written in a user’s phone book) and phone numbers. Each record was linked to a user’s unique Boomoji ID, making it relatively easy to determine whose contact list belonged to whom.
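That kind of linkage means a single query can pull back one user’s entire address book. A minimal sketch of such a lookup, again using hypothetical index and field names rather than the actual schema:

```python
import requests

BASE = "http://203.0.113.10:9200"  # hypothetical exposed cluster, not the real one

# Hypothetical query: fetch contact entries tied to a single Boomoji user ID.
query = {"query": {"term": {"boomoji_id": 123456}}, "size": 100}
resp = requests.post(f"{BASE}/contacts/_search", json=query)

# Each hit pairs a phone-book name and number with the owning user's ID.
for hit in resp.json()["hits"]["hits"]:
    doc = hit["_source"]
    print(doc.get("name"), doc.get("phone_number"))
```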

Even if you never used the app, anyone who had your phone number stored on their device and used the app most likely uploaded your number to Boomoji’s database. To our knowledge, there’s no way to opt out or have your information deleted.

Given Boomoji’s response, we verified the contents of the database by downloading the app on a dedicated iPhone registered with a throwaway phone number and loaded with a few dummy but easy-to-search contact list entries. To find friends, the app matches your contacts against those registered with the app in its database. When we allowed the app access to our contacts list, the entire dummy contact list was uploaded instantly and was immediately viewable in the database.

So long as the app was installed and had access to the contacts, new phone numbers would be automatically uploaded.

Yet, none of the data was encrypted. All of the data was stored in plaintext.

Although Boomoji is based in China, it claims to follow California state law, where data protection and privacy rules are among the strongest in the U.S. We asked Boomoji whether it had informed or planned to inform California’s attorney general of the exposure, as required by state law, but the company did not answer.

Given the vast amount of European users’ information in the database, the company may also face penalties under the EU’s General Data Protection Regulation, which can impose fines of up to four percent of the company’s global annual revenue for serious breaches.

Given its China-based presence, however, it’s not clear what repercussions the company could actually face.

This is the latest in a series of exposures involving ElasticSearch, popular open source search and database software. In recent weeks, several high-profile data exposures have been reported as a result of companies’ failure to practice basic data security measures, including Urban Massage exposing its own customer database, Mindbody-owned FitMetrix forgetting to put a password on its servers, and Voxox, a communications company, leaking phone numbers and two-factor codes of millions of unsuspecting users.


Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.



Until further notice, think twice before using Google to download software


Searching Google for downloads of popular software has always come with risks, but over the past few months, it has been downright dangerous, according to researchers and a pseudorandom collection of queries.

“Threat researchers are used to seeing a moderate flow of malvertising via Google Ads,” volunteers at Spamhaus wrote on Thursday. “However, over the past few days, researchers have witnessed a massive spike affecting numerous famous brands, with multiple malware being utilized. This is not ‘the norm.’”

One of many new threats: MalVirt

The surge is coming from numerous malware families, including AuroraStealer, IcedID, Meta Stealer, RedLine Stealer, Vidar, Formbook, and XLoader. In the past, these families typically relied on phishing and malicious spam that attached Microsoft Word documents with booby-trapped macros. Over the past month, Google Ads has become the go-to place for criminals to spread malicious wares disguised as legitimate downloads, impersonating brands such as Adobe Reader, Gimp, Microsoft Teams, OBS, Slack, Tor, and Thunderbird.

On the same day that Spamhaus published its report, researchers from security firm SentinelOne documented an advanced Google malvertising campaign pushing multiple malicious loaders implemented in .NET. SentinelOne has dubbed these loaders MalVirt. At the moment, the MalVirt loaders are being used to distribute malware most commonly known as XLoader, available for both Windows and macOS. XLoader is a successor to the malware also known as Formbook. Threat actors use XLoader to steal contact data and other sensitive information from infected devices.

The MalVirt loaders use obfuscated virtualization to evade endpoint protection and analysis. To disguise real C2 traffic and evade network detection, MalVirt beacons to decoy command-and-control servers hosted at providers including Azure, Tucows, Choopa, and Namecheap. SentinelOne researcher Tom Hegel wrote:

As a response to Microsoft blocking Office macros by default in documents from the Internet, threat actors have turned to alternative malware distribution methods—most recently, malvertising. The MalVirt loaders we observed demonstrate just how much effort threat actors are investing in evading detection and thwarting analysis.

Malware of the Formbook family is a highly capable infostealer that is deployed through the application of a significant amount of anti-analysis and anti-detection techniques by the MalVirt loaders. Traditionally distributed as an attachment to phishing emails, we assess that threat actors distributing this malware are likely joining the malvertising trend.

Given the massive size of the audience threat actors can reach through malvertising, we expect malware to continue being distributed using this method.

Google representatives declined an interview. Instead, they provided the following statement:

Bad actors often employ sophisticated measures to conceal their identities and evade our policies and enforcement. To combat this over the past few years, we’ve launched new certification policies, ramped up advertiser verification, and increased our capacity to detect and prevent coordinated scams. We are aware of the recent uptick in fraudulent ad activity. Addressing it is a critical priority and we are working to resolve these incidents as quickly as possible.

Anecdotal evidence that Google malvertising is out of control isn’t hard to come by. Searches for software downloads are probably the most likely to turn up malvertising. Take, for instance, the results Google returned for a Thursday search for “visual studio download.”

Clicking the Google-sponsored link in those results redirected me to downloadstudio[.]net, a site that only a single endpoint provider on VirusTotal flagged as malicious.

On Thursday evening, the download this site offered was detected as malicious by 43 antimalware engines.
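For readers who want to reproduce that kind of check, VirusTotal exposes the same per-URL verdicts through its public v3 API. A minimal sketch, assuming you have an API key (the key below is a placeholder, and the domain is kept defanged as in the article):

```python
import base64
import requests

API_KEY = "YOUR_VIRUSTOTAL_API_KEY"   # placeholder; a free key works for lookups
url = "http://downloadstudio[.]net/"  # defanged as in the article; remove the
                                      # brackets to query the live entry

# VirusTotal v3 identifies a URL by its unpadded base64url encoding.
url_id = base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")

resp = requests.get(
    f"https://www.virustotal.com/api/v3/urls/{url_id}",
    headers={"x-apikey": API_KEY},
)
resp.raise_for_status()

# last_analysis_stats summarizes how many engines called the URL malicious.
stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(stats)
```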


ChatGPT sets record for fastest-growing user base in history, report says


A realistic artist’s depiction of an encounter with ChatGPT Plus. (Image credit: Benj Edwards / Ars Technica / OpenAI)

On Wednesday, Reuters reported that AI bot ChatGPT reached an estimated 100 million active monthly users last month, a mere two months from launch, making it the “fastest-growing consumer application in history,” according to a UBS investment bank research note. In comparison, TikTok took nine months to reach 100 million monthly users, and Instagram about 2.5 years, according to UBS researcher Lloyd Walmsley.

“In 20 years following the Internet space, we cannot recall a faster ramp in a consumer internet app,” Reuters quotes Walmsley as writing in the UBS note.

Reuters says the UBS data comes from analytics firm Similar Web, which states that around 13 million unique visitors used ChatGPT every day in January, doubling the number of users in December.

ChatGPT is a conversational large language model (LLM) that can discuss almost any topic at an almost human level. It reads context and answers questions easily, though sometimes not accurately (improving its accuracy is a work in progress). After launching as a free public beta on November 30, the GPT-3 powered AI bot has inspired awe, wonder, and fear in education, computer security, and finance. It’s shaken up the tech industry, prompting a $10 billion investment from Microsoft and causing Google to see its life flash before its eyes.

Also on Wednesday, OpenAI announced ChatGPT Plus, a $20 per month subscription service that will offer users faster response times, preferential access to ChatGPT during peak times, and priority access to new features. It’s an attempt to keep up with the intense demand for ChatGPT that has often seen the site deny users due to overwhelming activity.

Over the past few decades, researchers have noticed that technology adoption rates are quickening, with inventions such as the telephone, television, and the Internet taking shorter periods of time to reach massive numbers of users. Will generative AI tools be next on that list? With the kind of trajectory shown by ChatGPT, it’s entirely possible.


Netflix stirs fears by using AI-assisted background art in short anime film


A still image from the short film Dog and Boy, which uses image synthesis to help generate background artwork. (Image credit: Netflix)

Over the past year, generative AI has kicked off a wave of existential dread over potential machine-fueled job loss not seen since the advent of the industrial revolution. On Tuesday, Netflix reinvigorated that fear when it debuted a short film called Dog and Boy that utilizes AI image synthesis to help generate its background artwork.

Directed by Ryotaro Makihara, the three-minute animated short follows the story of a boy and his robotic dog through cheerful times, although the story soon takes a dramatic turn toward the post-apocalyptic. Along the way, it includes lush backgrounds apparently created as a collaboration between man and machine, credited to “AI (+Human)” in the end credit sequence.

In the announcement tweet, Netflix cited an industry labor shortage as the reason for using the image synthesis technology:

As an experimental effort to help the anime industry, which has a labor shortage, we used image generation technology for the background images of all the cuts in the three-minute video!

Netflix and the production company WIT Studio tapped Japanese AI firm Rinna for assistance with generating the images. They did not announce exactly what type of technology Rinna used to generate the artwork, but the process looks similar to a Stable Diffusion-powered “img2img” process that can take an image and transform it based on a written prompt.
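For readers unfamiliar with the technique, img2img starts from an existing image and re-renders it according to a text prompt, which is why a rough layout can come back as finished background art. Here is a minimal sketch using the open source diffusers library and a generic public Stable Diffusion checkpoint; Rinna’s actual pipeline has not been disclosed, so every name below is an assumption:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Generic public checkpoint for illustration; whatever Rinna actually used is unknown.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A rough hand-drawn layout supplies the composition.
layout = Image.open("background_sketch.png").convert("RGB").resize((768, 512))

# The prompt and the strength setting control how heavily the layout is repainted.
result = pipe(
    prompt="lush painted anime background, ruined city at dusk, soft light",
    image=layout,
    strength=0.6,        # 0.0 keeps the sketch as-is, 1.0 ignores it entirely
    guidance_scale=7.5,
).images[0]

result.save("background_final.png")
```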

The film is currently available to view for free on YouTube.

Netflix’s official Dog and Boy promotional video.

Almost immediately, Twitter users responded with a torrent of negative replies to Netflix’s tweet announcing the film, such as, “I know a ton of animators looking for work if you guys are struggling to find them (are you looking very hard?).” Several others quoted legendary Studio Ghibli animator Hayao Miyazaki as saying that AI-powered art “is an insult to life itself.”

In a news release, Netflix expressed its hopes that the new technology would assist with future animation productions (translated by Google Translate): “As a studio, Netflix focuses on supporting creators in the creation of works on a daily basis. As the shortage of human resources in the animation industry is seen as an issue, we hope that this initiative will contribute to the realization of a flexible animation production process through appropriate support for creators using the latest technology.”

It also looks like Makihara wanted to push boundaries in animation by using AI technology as part of the production process. The Netflix release quoted him as saying, “By combining tools and hand-drawn techniques, we can create something unique to humans … I think that the core of the story is ‘drawing a human being.’ I think that it will be possible to secure and return to its roots, which will eventually strengthen the strengths of Japanese animation and expand its possibilities.”

Labor shortage or not, AI assistance may speed up production times and lower production costs, allowing the creation of more animated content than ever before. But will people be happy about it? That remains to be seen.
