A critical iPhone and iPad bug that lurked for 8 years may be under active attack

A critical bug that has lurked in iPhones and iPads for eight years appears to be under active exploitation by sophisticated hackers seeking to compromise the devices of high-profile targets, a security firm reported on Wednesday.

The exploit is triggered by sending booby-trapped emails that, in some cases, require no interaction at all and, in other cases, require only that a user open the message, researchers from ZecOps said in a post. The malicious emails allow attackers to run code in the context of the default Mail app, which makes it possible to read, modify, or delete messages. The researchers suspect the attackers are combining the zero-day with a separate exploit that gives full control over the device. The vulnerability dates back to iOS 6, released in 2012. Attackers have been exploiting the bug since 2018 and possibly earlier.

Enormous scope

“With very limited data we were able to see that at least six organizations were impacted by this vulnerability—and the full scope of abuse of this vulnerability is enormous,” ZecOps researchers wrote. “We are confident that a patch must be provided for such issues with public triggers ASAP.”

Targets from the six organizations include:

  • Individuals from a Fortune 500 organization in North America
  • An executive from a carrier in Japan
  • A VIP from Germany
  • Managed security services providers in Saudi Arabia and Israel
  • A journalist in Europe
  • Suspected: An executive from a Swiss enterprise

Zero-days, or vulnerabilities that are known to attackers but not to the manufacturer or the general public, are rarely exploited in the wild against users of iPhones and iPads. Some of the only known incidents include a 2016 attack that installed spyware on the phone of a dissident in the United Arab Emirates, a WhatsApp exploit in May of last year that was transmitted with a simple phone call, and attacks that Google disclosed last August.

Apple has patched the flaw in the beta of iOS 13.4.5. At the time this post went live, the fix had not yet made it into a general release.

Malicious mails that trigger the flaw work by consuming device memory and then exploiting a heap overflow, a type of buffer overflow that occurs in the region of memory reserved for dynamic allocations. By filling the heap with junk data, the exploit is able to inject malicious code that then gets executed. The code leaves behind strings that include 4141…41 (repeated 0x41 bytes, the ASCII code for 'A'), a filler pattern commonly used by exploit developers. The researchers believe the exploit then deletes the malicious email.
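
ZecOps hasn't published the exploit itself, but the underlying bug class is easy to illustrate. Below is a minimal, hypothetical C sketch (not the MobileMail code) showing how a copy into an undersized heap allocation spills attacker-controlled bytes into neighboring memory; the 0x41 ('A') filler used here is the same pattern that shows up as the 4141…41 strings the researchers found.

/*
 * Hypothetical illustration of a heap buffer overflow -- not the
 * MobileMail exploit. An allocation is too small for the data copied
 * into it, so the copy spills into whatever the allocator placed next.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *message  = malloc(16);   /* undersized destination buffer */
    char *neighbor = malloc(16);   /* data the process later trusts */
    strcpy(neighbor, "trusted data");

    /* Attacker-controlled input: 32 bytes of 0x41 ('A'), twice the
     * destination's size. memcpy doesn't know the buffer is only 16
     * bytes, so the extra bytes land past the end of `message`. */
    char attacker_input[32];
    memset(attacker_input, 'A', sizeof(attacker_input));
    memcpy(message, attacker_input, sizeof(attacker_input)); /* overflow */

    /* Depending on heap layout, `neighbor` (or allocator metadata) is now
     * corrupted with 0x41 bytes -- or the program simply crashes. */
    printf("neighbor now starts with byte 0x%02x\n", (unsigned char)neighbor[0]);

    free(message);
    free(neighbor);
    return 0;
}

In a real attack, corrupting adjacent heap data is only the first step toward redirecting execution; according to ZecOps, the malicious emails set up this condition by first exhausting the device's memory.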

A protection known as address space layout randomization (ASLR) prevents attackers from knowing the memory location of this code and thus from executing it in a way that takes control of the device. As a result, the device or application merely crashes. To overcome this security measure, attackers must exploit a separate bug that reveals the hidden memory location.
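
The effect of ASLR is easy to observe. In the hypothetical C sketch below, the same program prints different stack, heap, and (when built as a position-independent executable) code addresses on every run, which is why an exploit that hard-codes an address usually just crashes the process instead of hijacking it.

/*
 * Demonstration of address space layout randomization (ASLR).
 * Run the program several times: the printed addresses change between
 * runs, so an exploit that relies on a fixed address lands in the
 * wrong place unless a separate bug leaks the real layout first.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int stack_var = 0;
    char *heap_var = malloc(64);

    printf("stack address: %p\n", (void *)&stack_var);
    printf("heap address:  %p\n", (void *)heap_var);
    printf("code address:  %p\n", (void *)&main);

    free(heap_var);
    return 0;
}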

Little or no sign of attack

The malicious mails need not be prohibitively large. Normal-size emails can consume enough RAM using rich text format documents, multi-part content, or other methods. Other than a temporary device slowdown, targets running iOS 13 aren’t likely to notice any signs that they’re under attack. In the event that the exploit fails on a device running iOS 12, meanwhile, the device will show a message that says “This message has no content.”

ZecOps said the attacks are narrowly targeted but provided only limited clues about the hackers carrying them out or targets who were on the receiving end.

“We believe that these attacks are correlative with at least one nation-state threat operator or a nation-state that purchased the exploit from a third-party researcher in a Proof of Concept (POC) grade and used ‘as-is’ or with minor modifications (hence the 4141..41 strings),” ZecOps researchers wrote. “While ZecOps refrain from attributing these attacks to a specific threat actor, we are aware that at least one ‘hackers-for-hire’ organization is selling exploits using vulnerabilities that leverage email addresses as a main identifier.”

The most visible third-party organization selling advanced smartphone exploits is Israel-based NSO Group, whose iOS and Android exploits over the past year have been found being used against activists, Facebook users, and undisclosed targets. NSO Group has come under sharp criticism for selling its wares in countries with poor human-rights records. In recent months, the company has vowed to serve only organizations with better track records.

It’s generally against security community norms to disclose vulnerabilities without giving manufacturers time to release security patches. ZecOps said it released its research ahead of a general-release fix because the zero-day alone isn’t enough to infect phones, because the bugs had already been revealed by the patch in the beta release, and because of the urgency created by the attacks on the six organizations the firm believes are being targeted.

To prevent attacks until Apple releases a general-availability patch, users can either install the iOS 13.4.5 beta or use an alternative email app such as Gmail or Outlook. Apple representatives didn’t respond to an email seeking comment for this post.

Deepfake celebrities begin shilling products on social media, causing alarm

A cropped portion of the AI-generated version of Hanks that the actor shared on his Instagram feed. (Image: Tom Hanks)

News of AI deepfakes spread quickly when you’re Tom Hanks. On Sunday, the actor posted a warning on Instagram about an unauthorized AI-generated version of himself being used to sell a dental plan. Hanks’ warning spread in the media, including The New York Times. The next day, CBS anchor Gayle King warned of a similar scheme using her likeness to sell a weight-loss product. The now widely reported incidents have raised new concerns about the use of AI in digital media.

“BEWARE!! There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it,” wrote Hanks on his Instagram feed. Similarly, King shared an AI-augmented video with the words “Fake Video” stamped across it, stating, “I’ve never heard of this product or used it! Please don’t be fooled by these AI videos.”

Also on Monday, YouTube celebrity MrBeast posted on social media network X about a similar scam that features a modified video of him with manipulated speech and lip movements promoting a fraudulent iPhone 15 giveaway. “Lots of people are getting this deepfake scam ad of me,” he wrote. “Are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem.”

A screenshot of Tom Hanks’ Instagram post warning of an AI-generated version of him selling a dental plan. (Image: Tom Hanks / Instagram)

We have not seen the original Hanks video, but from examples provided by King and MrBeast, it appears the scammers likely took existing videos of the celebrities and used software to change lip movements to match AI-generated voice clones of them that had been trained on vocal samples pulled from publicly available work.

The news comes amid a larger debate on the ethical and legal implications of AI in the media and entertainment industry. The recent Writers Guild of America strike featured concerns about AI as a significant point of contention. SAG-AFTRA, the union representing Hollywood actors, has expressed worries that AI could be used to create digital replicas of actors without proper compensation or approval. And recently, Robin Williams’ daughter, Zelda Williams, made the news when she complained about people cloning her late father’s voice without permission.

As we’ve warned, convincing AI deepfakes are an increasingly pressing issue that may undermine shared trust and threaten the reliability of communications technologies by casting doubt on someone’s identity. Dealing with it is a tricky problem. Currently, companies like Google and OpenAI have plans to watermark AI-generated content and add metadata to track provenance. But historically, those watermarks have been easily defeated, and open source AI tools that do not add watermarks are available.

A screenshot of Gayle King’s Instagram post warning of an AI-modified video of the CBS anchor. (Image: Gayle King / Instagram)

Similarly, attempts at restricting AI software through regulation may remove generative AI tools from legitimate researchers while keeping them in the hands of those who may use them for fraud. Meanwhile, social media networks will likely need to step up moderation efforts, reacting quickly when suspicious content is flagged by users.

As we wrote last December in a feature on the spread of easy-to-make deepfakes, “The provenance of each photo we see will become that much more important; much like today, we will need to completely trust who is sharing the photos to believe any of them. But during a transition period before everyone is aware of this technology, synthesized fakes might cause a measure of chaos.”

Almost a year later, with technology advancing rapidly, a small taste of that chaos is arguably descending upon us, and our advice could just as easily be applied to video and photos. Whether attempts at regulation currently underway in many countries will have any effect is an open question.

Researchers show how easy it is to defeat AI watermarks

Soheil Feizi considers himself an optimistic person. But the University of Maryland computer science professor is blunt when he sums up the current state of watermarking AI images. “We don’t have any reliable watermarking at this point,” he says. “We broke all of them.”

For one of the two types of AI watermarking he tested for a new study—“low perturbation” watermarks, which are invisible to the naked eye—he’s even more direct: “There’s no hope.”

Feizi and his coauthors looked at how easy it is for bad actors to evade watermarking attempts. (He calls it “washing out” the watermark.) In addition to demonstrating how attackers might remove watermarks, the study shows how it’s possible to add watermarks to human-generated images, triggering false positives. Released online this week, the preprint paper has yet to be peer-reviewed; Feizi has been a leading figure examining how AI detection might work, so it is research worth paying attention to, even in this early stage.

It’s timely research. Watermarking has emerged as one of the more promising strategies to identify AI-generated images and text. Just as physical watermarks are embedded on paper money and stamps to prove authenticity, digital watermarks are meant to trace the origins of images and text online, helping people spot deepfaked videos and bot-authored books. With the US presidential elections on the horizon in 2024, concerns over manipulated media are high—and some people are already getting fooled. Former US President Donald Trump, for instance, shared a fake video of Anderson Cooper on his social platform Truth Social; Cooper’s voice had been AI-cloned.

This summer, OpenAI, Alphabet, Meta, Amazon, and several other major AI players pledged to develop watermarking technology to combat misinformation. In late August, Google’s DeepMind released a beta version of its new watermarking tool, SynthID. The hope is that these tools will flag AI content as it’s being generated, in the same way that physical watermarking authenticates dollars as they’re being printed.

It’s a solid, straightforward strategy, but it might not be a winning one. This study is not the only work pointing to watermarking’s major shortcomings. “It is well established that watermarking can be vulnerable to attack,” says Hany Farid, a professor at the UC Berkeley School of Information.

This August, researchers at the University of California, Santa Barbara and Carnegie Mellon coauthored another paper outlining similar findings, after conducting their own experimental attacks. “All invisible watermarks are vulnerable,” it reads. This newest study goes even further. While some researchers have held out hope that visible (“high perturbation”) watermarks might be developed to withstand attacks, Feizi and his colleagues say that even this more promising type can be manipulated.

The flaws in watermarking haven’t dissuaded tech giants from offering it up as a solution, but people working within the AI detection space are wary. “Watermarking at first sounds like a noble and promising solution, but its real-world applications fail from the onset when they can be easily faked, removed, or ignored,” Ben Colman, the CEO of AI-detection startup Reality Defender, says.

“Watermarking is not effective,” adds Bars Juhasz, the cofounder of Undetectable, a startup devoted to helping people evade AI detectors. “Entire industries, such as ours, have sprang up to make sure that it’s not effective.” According to Juhasz, companies like his are already capable of offering quick watermark-removal services.

Others do think that watermarking has a place in AI detection—as long as we understand its limitations. “It is important to understand that nobody thinks that watermarking alone will be sufficient,” Farid says. “But I believe robust watermarking is part of the solution.” He thinks that improving upon watermarking and then using it in combination with other technologies will make it harder for bad actors to create convincing fakes.

Some of Feizi’s colleagues think watermarking has its place, too. “Whether this is a blow to watermarking depends a lot on the assumptions and hopes placed in watermarking as a solution,” says Yuxin Wen, a PhD student at the University of Maryland who coauthored a recent paper suggesting a new watermarking technique. For Wen and his co-authors, including computer science professor Tom Goldstein, this study is an opportunity to reexamine the expectations placed on watermarking, rather than reason to dismiss its use as one authentication tool among many.

“There will always be sophisticated actors who are able to evade detection,” Goldstein says. “It’s ok to have a system that can only detect some things.” He sees watermarks as a form of harm reduction, and worthwhile for catching lower-level attempts at AI fakery, even if they can’t prevent high-level attacks.

This tempering of expectations may already be happening. In its blog post announcing SynthID, DeepMind is careful to hedge its bets, noting that the tool “isn’t foolproof” and “isn’t perfect.”

Feizi is largely skeptical that watermarking is a good use of resources for companies like Google. “Perhaps we should get used to the fact that we are not going to be able to reliably flag AI-generated images,” he says.

Still, his paper is slightly sunnier in its conclusions. “Based on our results, designing a robust watermark is a challenging but not necessarily impossible task,” it reads.

This story originally appeared on wired.com.

Vulnerable Arm GPU drivers under active exploitation. Patches may not be available

Arm warned on Monday of active ongoing attacks targeting a vulnerability in device drivers for its Mali line of GPUs, which run on a host of devices, including Google Pixels and other Android handsets, Chromebooks, and hardware running Linux.

“A local non-privileged user can make improper GPU memory processing operations to gain access to already freed memory,” Arm officials wrote in an advisory. “This issue is fixed in Bifrost, Valhall and Arm 5th Gen GPU Architecture Kernel Driver r43p0. There is evidence that this vulnerability may be under limited, targeted exploitation. Users are recommended to upgrade if they are impacted by this issue.”

The advisory continued: “A local non-privileged user can make improper GPU processing operations to access a limited amount outside of buffer bounds or to exploit a software race condition. If the system’s memory is carefully prepared by the user, then this in turn could give them access to already freed memory.”

Getting access to system memory that’s no longer in use is a common mechanism for loading malicious code into a location an attacker can then execute. This code often allows them to exploit other vulnerabilities or to install malicious payloads for spying on the phone user. Attackers often gain local access to a mobile device by tricking users into downloading malicious applications from unofficial repositories. The advisory mentions drivers for the affected GPUs being vulnerable but makes no mention of microcode that runs inside the chips themselves.
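
Arm’s advisory doesn’t detail how the flaw is being exploited, but the bug class it names, use-after-free, follows a well-known pattern: a stale pointer keeps referring to memory the allocator has already handed back, so whoever grabs that memory next controls what the stale pointer sees. A deliberately simplified, hypothetical C sketch (not the Mali driver code):

/*
 * Hypothetical use-after-free illustration -- not the Mali driver code.
 * A pointer to freed memory is used after the allocator has recycled
 * the block for a new, attacker-controlled allocation.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct session {
    void (*on_close)(void);       /* function pointer the code later uses */
};

static void legitimate_handler(void)
{
    puts("legitimate handler ran");
}

int main(void)
{
    struct session *s = malloc(sizeof(*s));
    s->on_close = legitimate_handler;

    /* Bug: the object is freed, but the stale pointer `s` is kept. */
    free(s);

    /* An attacker who can trigger an allocation of the same size may get
     * the freed block back and fill it with bytes they control. */
    unsigned char *attacker = malloc(sizeof(struct session));
    memset(attacker, 0x41, sizeof(struct session));

    /* Use after free: `s` may now alias the attacker's allocation, so the
     * "function pointer" it holds is attacker-controlled. Calling it would
     * typically crash -- or, with a crafted value, run attacker code. */
    printf("stale on_close pointer is now %p\n", (void *)s->on_close);

    free(attacker);
    return 0;
}

In the Mali case, the freed memory belongs to the kernel-mode GPU driver rather than to an ordinary app, which is why a local, unprivileged user being able to reach it is so serious.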

The most prevalent platform affected by the vulnerability is Google’s Pixel line, one of the only Android families to receive security updates on a timely basis. Google patched Pixels against the vulnerability, which is tracked as CVE-2023-4211, in its September update. Google has also patched Chromebooks that use the vulnerable GPUs. Any device that shows a patch level of 2023-09-01 or later is immune to attacks that exploit the vulnerability. The device driver on patched devices will show as version r44p1 or r45p0.

CVE-2023-4211 is present in a range of Arm GPUs released over the past decade. The Arm chips affected are:

  • Midgard GPU Kernel Driver: All versions from r12p0 – r32p0
  • Bifrost GPU Kernel Driver: All versions from r0p0 – r42p0
  • Valhall GPU Kernel Driver: All versions from r19p0 – r42p0
  • Arm 5th Gen GPU Architecture Kernel Driver: All versions from r41p0 – r42p0

Devices believed to use the affected chips include the Google Pixel 7, Samsung S20 and S21, Motorola Edge 40, OnePlus Nord 2, Asus ROG Phone 6, Redmi Note 11 and 12, Honor 70 Pro, Realme GT, Xiaomi 12 Pro, Oppo Find X5 Pro and Reno 8 Pro, and some phones based on Mediatek chips.

Arm also makes drivers for the affected chips available for Linux devices.

Little is currently known about the vulnerability, other than that Arm credited discovery of the active exploitations to Maddie Stone, a researcher in Google’s Project Zero team. Project Zero tracks vulnerabilities in widely used devices, particularly when they’re subjected to zero-day or n-day attacks, which refer to those targeting vulnerabilities for which there are no patches available or those that have very recently been patched.

Arm’s Monday advisory disclosed two additional vulnerabilities that have also received patches. CVE-2023-33200 and CVE-2023-34970 both allow a non-privileged user to exploit a race condition to perform improper GPU operations to access already freed memory.

All three vulnerabilities are exploitable by an attacker with local access to the device, which is typically achieved by tricking users into downloading applications from unofficial repositories.

It’s currently unknown what other platforms, if any, have patches available. Until this information can be tracked down, people should check with the manufacturer of their device. Sadly, many vulnerable Android devices receive patches months or even years after the fixes become available, if at all.
