Microsoft bringing Dynamics 365 mixed reality solutions to smartphones

Last year Microsoft introduced several mixed reality business solutions under the Dynamics 365 enterprise product umbrella. Today, the company announced it would be moving these to smartphones in the spring, starting with previews.

The company announced Remote Assist on HoloLens last year. This tool allows a technician working onsite to show a remote expert what they are seeing. The expert can then walk the less-experienced employee through the repair. This is great for those companies that have equipped their workforce with HoloLens for hands-free instruction, but not every company can afford the new equipment.

Starting in the spring, Microsoft is going to help with that by introducing Remote Assist for Android phones. Just about everyone has a phone with them, and those with Android devices will be able to take advantage of Remote Assist capabilities without investing in HoloLens. The company is also updating Remote Assist to include mobile annotations, group calling and deeper integration with Dynamics 365 for Field Service, along with improved accessibility features on the HoloLens app.

iPhone users shouldn’t feel left out, though, because the company also announced a preview of Dynamics 365 Product Visualize for iPhone. This tool enables users to work with a customer to visualize what a customized product will look like. Think of a furniture seller working with a customer in their home to customize the color, fabric and design right in the room where the furniture will go, or a car dealer offering different options such as colors and wheel styles. Once a customer agrees to a configuration, the data gets saved to Dynamics 365 and shared in Microsoft Teams for greater collaboration across the group of employees working with that customer on a project.

Both of these features are part of the Dynamics 365 spring release and are going to be available in preview starting in April. They are part of a broader release that includes a variety of new artificial intelligence features such as customer service bots and a unified view of customer data across the Dynamics 365 family of products.

Deepfake celebrities begin shilling products on social media, causing alarm

A cropped portion of the AI-generated version of Hanks that the actor shared on his Instagram feed.

Tom Hanks

News of AI deepfakes spreads quickly when you’re Tom Hanks. On Sunday, the actor posted a warning on Instagram about an unauthorized AI-generated version of himself being used to sell a dental plan. Hanks’ warning spread through the media, including The New York Times. The next day, CBS anchor Gayle King warned of a similar scheme using her likeness to sell a weight-loss product. The now widely reported incidents have raised new concerns about the use of AI in digital media.

“BEWARE!! There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it,” wrote Hanks on his Instagram feed. Similarly, King shared an AI-augmented video with the words “Fake Video” stamped across it, stating, “I’ve never heard of this product or used it! Please don’t be fooled by these AI videos.”

Also on Monday, YouTube celebrity MrBeast posted on social media network X about a similar scam that features a modified video of him with manipulated speech and lip movements promoting a fraudulent iPhone 15 giveaway. “Lots of people are getting this deepfake scam ad of me,” he wrote. “Are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem.”

A screenshot of Tom Hanks’ Instagram post warning of an AI-generated version of him selling a dental plan.

Tom Hanks / Instagram

We have not seen the original Hanks video, but from examples provided by King and MrBeast, it appears the scammers likely took existing videos of the celebrities and used software to change lip movements to match AI-generated voice clones of them that had been trained on vocal samples pulled from publicly available work.

The news comes amid a larger debate on the ethical and legal implications of AI in the media and entertainment industry. The recent Writers Guild of America strike featured concerns about AI as a significant point of contention. SAG-AFTRA, the union representing Hollywood actors, has expressed worries that AI could be used to create digital replicas of actors without proper compensation or approval. And recently, Robin Williams’ daughter, Zelda Williams, made the news when she complained about people cloning her late father’s voice without permission.

As we’ve warned, convincing AI deepfakes are an increasingly pressing issue that may undermine shared trust and threaten the reliability of communications technologies by casting doubt on someone’s identity. Dealing with it is a tricky problem. Currently, companies like Google and OpenAI have plans to watermark AI-generated content and add metadata to track provenance. But historically, those watermarks have been easily defeated, and open source AI tools that do not add watermarks are available.

A screenshot of Gayle King’s Instagram post warning of an AI-modified video of the CBS anchor.

Gayle King / Instagram

Similarly, attempts at restricting AI software through regulation may remove generative AI tools from legitimate researchers while keeping them in the hands of those who may use them for fraud. Meanwhile, social media networks will likely need to step up moderation efforts, reacting quickly when suspicious content is flagged by users.

As we wrote last December in a feature on the spread of easy-to-make deepfakes, “The provenance of each photo we see will become that much more important; much like today, we will need to completely trust who is sharing the photos to believe any of them. But during a transition period before everyone is aware of this technology, synthesized fakes might cause a measure of chaos.”

Almost a year later, with technology advancing rapidly, a small taste of that chaos is arguably descending upon us, and our advice could just as easily be applied to video and photos. Whether attempts at regulation currently underway in many countries will have any effect is an open question.

Researchers show how easy it is to defeat AI watermarks


Soheil Feizi considers himself an optimistic person. But the University of Maryland computer science professor is blunt when he sums up the current state of watermarking AI images. “We don’t have any reliable watermarking at this point,” he says. “We broke all of them.”

For one of the two types of AI watermarking he tested for a new study—“low perturbation” watermarks, which are invisible to the naked eye—he’s even more direct: “There’s no hope.”

Feizi and his coauthors looked at how easy it is for bad actors to evade watermarking attempts. (He calls it “washing out” the watermark.) In addition to demonstrating how attackers might remove watermarks, the study shows how it’s possible to add watermarks to human-generated images, triggering false positives. Released online this week, the preprint paper has yet to be peer-reviewed; Feizi has been a leading figure examining how AI detection might work, so it is research worth paying attention to, even in this early stage.

It’s timely research. Watermarking has emerged as one of the more promising strategies to identify AI-generated images and text. Just as physical watermarks are embedded on paper money and stamps to prove authenticity, digital watermarks are meant to trace the origins of images and text online, helping people spot deepfaked videos and bot-authored books. With the US presidential elections on the horizon in 2024, concerns over manipulated media are high—and some people are already getting fooled. Former US President Donald Trump, for instance, shared a fake video of Anderson Cooper on his social platform Truth Social; Cooper’s voice had been AI-cloned.

This summer, OpenAI, Alphabet, Meta, Amazon, and several other major AI players pledged to develop watermarking technology to combat misinformation. In late August, Google’s DeepMind released a beta version of its new watermarking tool, SynthID. The hope is that these tools will flag AI content as it’s being generated, in the same way that physical watermarking authenticates dollars as they’re being printed.

It’s a solid, straightforward strategy, but it might not be a winning one. This study is not the only work pointing to watermarking’s major shortcomings. “It is well established that watermarking can be vulnerable to attack,” says Hany Farid, a professor at the UC Berkeley School of Information.

This August, researchers at the University of California, Santa Barbara and Carnegie Mellon coauthored another paper outlining similar findings, after conducting their own experimental attacks. “All invisible watermarks are vulnerable,” it reads. This newest study goes even further. While some researchers have held out hope that visible (“high perturbation”) watermarks might be developed to withstand attacks, Feizi and his colleagues say that even this more promising type can be manipulated.

The flaws in watermarking haven’t dissuaded tech giants from offering it up as a solution, but people working within the AI detection space are wary. “Watermarking at first sounds like a noble and promising solution, but its real-world applications fail from the onset when they can be easily faked, removed, or ignored,” Ben Colman, the CEO of AI-detection startup Reality Defender, says.

“Watermarking is not effective,” adds Bars Juhasz, the cofounder of Undetectable, a startup devoted to helping people evade AI detectors. “Entire industries, such as ours, have sprang up to make sure that it’s not effective.” According to Juhasz, companies like his are already capable of offering quick watermark-removal services.

Others do think that watermarking has a place in AI detection—as long as we understand its limitations. “It is important to understand that nobody thinks that watermarking alone will be sufficient,” Farid says. “But I believe robust watermarking is part of the solution.” He thinks that improving upon watermarking and then using it in combination with other technologies will make it harder for bad actors to create convincing fakes.

Some of Feizi’s colleagues think watermarking has its place, too. “Whether this is a blow to watermarking depends a lot on the assumptions and hopes placed in watermarking as a solution,” says Yuxin Wen, a PhD student at the University of Maryland who coauthored a recent paper suggesting a new watermarking technique. For Wen and his co-authors, including computer science professor Tom Goldstein, this study is an opportunity to reexamine the expectations placed on watermarking, rather than reason to dismiss its use as one authentication tool among many.

“There will always be sophisticated actors who are able to evade detection,” Goldstein says. “It’s ok to have a system that can only detect some things.” He sees watermarks as a form of harm reduction, and worthwhile for catching lower-level attempts at AI fakery, even if they can’t prevent high-level attacks.

This tempering of expectations may already be happening. In its blog post announcing SynthID, DeepMind is careful to hedge its bets, noting that the tool “isn’t foolproof” and “isn’t perfect.”

Feizi is largely skeptical that watermarking is a good use of resources for companies like Google. “Perhaps we should get used to the fact that we are not going to be able to reliably flag AI-generated images,” he says.

Still, his paper is slightly sunnier in its conclusions. “Based on our results, designing a robust watermark is a challenging but not necessarily impossible task,” it reads.

This story originally appeared on wired.com.

Vulnerable Arm GPU drivers under active exploitation. Patches may not be available


Arm warned on Monday of active ongoing attacks targeting a vulnerability in device drivers for its Mali line of GPUs, which run on a host of devices, including Google Pixels and other Android handsets, Chromebooks, and hardware running Linux.

“A local non-privileged user can make improper GPU memory processing operations to gain access to already freed memory,” Arm officials wrote in an advisory. “This issue is fixed in Bifrost, Valhall and Arm 5th Gen GPU Architecture Kernel Driver r43p0. There is evidence that this vulnerability may be under limited, targeted exploitation. Users are recommended to upgrade if they are impacted by this issue.”

The advisory continued: “A local non-privileged user can make improper GPU processing operations to access a limited amount outside of buffer bounds or to exploit a software race condition. If the system’s memory is carefully prepared by the user, then this in turn could give them access to already freed memory.”

Getting access to system memory that’s no longer in use is a common mechanism for loading malicious code into a location an attacker can then execute. This code often allows them to exploit other vulnerabilities or to install malicious payloads for spying on the phone user. Attackers often gain local access to a mobile device by tricking users into downloading malicious applications from unofficial repositories. The advisory mentions drivers for the affected GPUs being vulnerable but makes no mention of microcode that runs inside the chips themselves.
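
To make that mechanism concrete, here is a minimal, self-contained C sketch of the use-after-free pattern the advisory describes in general terms. It is purely illustrative, not code from the Mali driver; the buffer name, size, and contents are invented for the example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Allocate a buffer, roughly standing in for memory a driver
       manages on behalf of an application. */
    char *mapping = malloc(64);
    if (mapping == NULL)
        return 1;
    strcpy(mapping, "data the owner believes is private");

    /* The memory is released back to the allocator... */
    free(mapping);

    /* ...but a stale pointer is kept and dereferenced afterward.
       This is "access to already freed memory": the allocator may have
       handed the same region to other code, so reading or writing
       through the stale pointer is undefined behavior and can expose
       or corrupt whatever now occupies it. */
    printf("%s\n", mapping);   /* use-after-free */

    return 0;
}
```

In a kernel GPU driver, an attacker who can arrange for the freed region to be reallocated with contents they control can often turn that kind of stale access into information disclosure or code execution, which is the scenario described above.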

The most prevalent platform affected by the vulnerability is Google’s Pixel line, one of the only Android lines to receive security updates on a timely basis. Google patched Pixels in its September update against the vulnerability, which is tracked as CVE-2023-4211. Google has also patched Chromebooks that use the vulnerable GPUs. Any device that shows a patch level of 2023-09-01 or later is immune to attacks that exploit the vulnerability. The device driver on patched devices will show as version r44p1 or r45p0.

CVE-2023-4211 is present in a range of Arm GPUs released over the past decade. The Arm chips affected are:

  • Midgard GPU Kernel Driver: All versions from r12p0 – r32p0
  • Bifrost GPU Kernel Driver: All versions from r0p0 – r42p0
  • Valhall GPU Kernel Driver: All versions from r19p0 – r42p0
  • Arm 5th Gen GPU Architecture Kernel Driver: All versions from r41p0 – r42p0

Devices believed to use the affected chips include the Google Pixel 7, Samsung S20 and S21, Motorola Edge 40, OnePlus Nord 2, Asus ROG Phone 6, Redmi Note 11 and 12, Honor 70 Pro, Realme GT, Xiaomi 12 Pro, Oppo Find X5 Pro and Reno 8 Pro, and some phones from MediaTek.

Arm also makes drivers for the affected chips available for Linux devices.

Little is currently known about the vulnerability, other than that Arm credited discovery of the active exploitations to Maddie Stone, a researcher in Google’s Project Zero team. Project Zero tracks vulnerabilities in widely used devices, particularly when they’re subjected to zero-day or n-day attacks, which refer to those targeting vulnerabilities for which there are no patches available or those that have very recently been patched.

Arm’s Monday advisory disclosed two additional vulnerabilities that have also received patches. CVE-2023-33200 and CVE-2023-34970 both allow a non-privileged user to exploit a race condition to perform improper GPU operations to access already freed memory.

All three vulnerabilities are exploitable by an attacker with local access to the device, which is typically achieved by tricking users into downloading applications from unofficial repositories.

It’s currently unknown what other platforms, if any, have patches available. Until this information can be tracked down, people should check with the manufacturer of their device. Sadly, many vulnerable Android devices receive patches months or even years after the fixes become available, if at all.
