The Google Assistant gets more visual


Google today is launching a major visual redesign of its Assistant experience on phones. While the original vision of the Assistant focused mostly on voice, half of all interactions with the Assistant actually include touch. So with this redesign, Google acknowledges that and brings more and larger visuals to the Assistant experience.

If you’ve used one of the recent crop of Assistant-enabled smart displays, then some of what’s new here may look familiar. You now get controls and sliders to manage your smart home devices, for example. Those include sliders to dim your lights and buttons to turn them on or off. There are also controls for managing the volume of your speakers.

Even in cases where the Assistant already offered visual feedback — say, when you ask for the weather — the team has now redesigned those results and brought them more in line with what users already see on smart displays from the likes of Lenovo and LG. On the phone, though, that experience still feels a bit more pared down than on those larger displays.

With this redesign, which is going live on both Android and in the iOS app today, Google is also bringing a little bit more of the much-missed Google Now experience back to the phone. While you could already bring up a list of upcoming appointments, commute info, recent orders and other information about your day from the Assistant, that feature was hidden behind a rather odd icon that many users surely ignored. Now, after you long-press the home button on your Android phone, you can swipe up to get that same experience. I’m not sure that’s more discoverable than before, but Google is saving you a tap.

In addition to the visual redesign of the Assistant, Google also today announced a number of new features for developers. Unsurprisingly, one part of this announcement focuses on allowing developers to build their own visual Assistant experiences. Google calls these “rich responses” and provides developers with a set of pre-made visual components that they can easily use to extend their Assistant actions. And because nothing is complete without GIFs, they can now use GIFs in their Assistant apps, too.
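Google’s announcement doesn’t spell out the component API, but rich responses ultimately reach the Assistant as structured JSON returned by the developer’s fulfillment webhook. The snippet below is a rough sketch assuming the Dialogflow v2 webhook format, expressed as a Python dict; the field names reflect that format as documented, while the titles, text, and image URL are placeholder values, not part of Google’s announcement.

    # Rough sketch of a fulfillment webhook response (Dialogflow v2 format)
    # that asks the Assistant to render a basic card next to the spoken
    # reply. Content values are placeholders.
    webhook_response = {
        "payload": {
            "google": {
                "richResponse": {
                    "items": [
                        {"simpleResponse": {"textToSpeech": "Here's what I found."}},
                        {"basicCard": {
                            "title": "Example card",
                            "formattedText": "Rich responses can mix text, images, and buttons.",
                            "image": {
                                "url": "https://example.com/card.png",
                                "accessibilityText": "Example image",
                            },
                        }},
                    ]
                }
            }
        }
    }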

But in addition to these new options for creating more visual experiences, Google is also making it a bit easier for developers to take their users’ money.

While they could already sell physical goods through their Assistant actions, starting today, they’ll also be able to sell digital goods. Those can be one-time purchases for a new level in a game or recurring subscriptions. Headspace, which has long offered a very basic Assistant experience, now lets you sign up for subscriptions right from the Assistant on your phone, for example.

Selling digital goods directly in the Assistant is one thing, but those purchases have to sync across a developer’s other applications, too. So Google today is also launching a new sign-in service for the Assistant that lets users sign in and link their existing accounts.

“In the past, account linking could be a frustrating experience for your users; having to manually type a username and password — or worse, create a new account — breaks the natural conversational flow,” the company explains. “With Google Sign-In, users can now create a new account with just a tap or confirmation through their voice. Most users can even link to their existing accounts with your service using their verified email address.”

Starbucks has already integrated this feature into its Assistant experience to give users access to their rewards account. Adding the new Sign-In for the Assistant has almost doubled its conversion rate.


Adobe Stock begins selling AI-generated artwork


[Image: An AI-generated watercolor illustration, now eligible for inclusion in Adobe Stock. Credit: Benj Edwards / Ars Technica]

On Monday, Adobe announced that its stock photography service, Adobe Stock, would begin allowing artists to submit AI-generated imagery for sale, Axios reports. The move comes amid Adobe’s embrace of image synthesis and industry-wide efforts to grapple with the rapidly growing field of AI artwork in the stock art business, including earlier announcements from Shutterstock and Getty Images.

Submitting AI-generated imagery to Adobe Stock comes with a few restrictions. The artist must own (or have the rights to use) the image, AI-synthesized artwork must be submitted as an illustration (even if photorealistic), and it must be labeled with “Generative AI” in the title.

Further, each AI artwork must adhere to Adobe’s new Generative AI Content Guidelines, which require the artist to include a model release for any real person depicted realistically in the artwork. Artworks that incorporate illustrations of people or fictional brands, characters, or properties require a property release that attests the artist owns all necessary rights to license the content to Adobe Stock.

A stock photo odyssey

[Image: An example of AI-generated artwork available on Adobe Stock.]

Earlier this year, the arrival of image synthesis tools like Stable Diffusion, Midjourney, and DALL-E unlocked a seemingly unlimited fountain of generative artwork that can imitate common art styles in various media, including photography. Each AI tool allows an artist to create a work based on a text description called a prompt.

In September, we covered some early instances of artists listing AI artwork on stock photography websites. Shutterstock initially reacted by removing some generative art, according to reports, but later reversed course by partnering with OpenAI to generate AI artwork on the site. In late September, Getty Images banned AI artwork, fearing copyright issues that have not been fully tested in court.

Beyond those legal concerns, AI-generated artwork has proven ethically problematic among artists. Some criticized the ability of image synthesis models to reproduce artwork in the styles of living artists, especially since the AI models gained that ability from unauthorized scrapes of websites.

Despite those controversies, Adobe openly embraces the growing trend of image synthesis, which has shown no signs of slowing down.

“I’m confident that our decision to responsibly accept content made by generative AI serves both customers and contributors,” Sarah Casillas, Adobe Stock’s senior director of content, said in a statement emailed to Adobe Stock members. “Knowledge of stock, craft, taste, and imagination are critical to success on a stock marketplace where customers demand quality, and these are attributes that our successful contributors can continue to bring—no matter which tools they choose.”


No Linux? No problem. Just get AI to hallucinate it for you


[Image: An AI-generated illustration of an AI-hallucinated computer. Credit: Benj Edwards / Ars Technica]

Over the weekend, experimenters discovered that OpenAI’s new chatbot, ChatGPT, can hallucinate simulations of Linux shells and imagine dialing into a bulletin board system (BBS). The chatbot, based on a deep learning AI model, uses its stored knowledge to simulate Linux with surprising results, including executing Python code and browsing virtual websites.

Last week, OpenAI made ChatGPT freely available during a testing phase, which has led to people probing its capabilities and weaknesses in novel ways.

On Saturday, a DeepMind research scientist named Jonas Degrave worked out how to instruct ChatGPT to act like a Linux shell by entering this prompt:

I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.

On Monday, Ars found that the trick still works. After entering this prompt, instead of chatting, ChatGPT will accept simulated Linux commands. It then returns responses in “code block” formatting. For example, if you type ls -al, you’ll see an example directory structure.

[Image: After setting up the virtual Linux prompt in ChatGPT, typing “ls -al” returns a simulated directory structure. Credit: Benj Edwards]

ChatGPT can simulate a Linux machine because enough information about how a Linux machine should behave was included in its training data. That data likely includes software documentation (like manual pages), troubleshooting posts on Internet forums, and logged output from shell sessions.

ChatGPT generates responses based on which word is statistically most likely to follow the last series of words, starting with the prompt input by the user. It continues the conversation (in this case, a simulated Linux console session) by including all of your conversation history in successive prompts.
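As a rough illustration of that loop, consider this minimal Python sketch. The generate() function is a hypothetical stand-in for the model itself; the point is only that every new reply is predicted from the entire accumulated transcript, which is what keeps the simulated Linux session consistent from one command to the next.

    # Hypothetical sketch of the conversation loop. generate() stands in
    # for the language model, which continues whatever text it is given.
    def generate(prompt: str) -> str:
        # A real model returns the statistically likely continuation of
        # `prompt`; this stub just keeps the sketch self-contained.
        return "(model output would appear here)"

    history = "I want you to act as a Linux terminal. ...\n"
    while True:
        command = input("$ ")
        history += "User: " + command + "\n"
        reply = generate(history)  # the full transcript is sent every time
        history += "Assistant: " + reply + "\n"
        print(reply)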

Degrave found that the simulation goes surprisingly deep. Using its knowledge of the Python programming language (the same kind of code knowledge that powers GitHub Copilot), ChatGPT's virtual Linux machine can execute code too, such as this string created by Degrave as an example: echo -e "x = lambda y: y*5+3;print('Result: ' + str(x(6)))" > run.py && python3 run.py. According to Degrave, it returns the correct value of "33".
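Written out as plain Python, the one-liner is easy to verify by hand:

    # The code ChatGPT "ran" inside its simulated shell, unpacked:
    x = lambda y: y * 5 + 3
    print('Result: ' + str(x(6)))  # 6*5 + 3 = 33, so it prints "Result: 33"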

[Image: Executing Python code within the virtual ChatGPT Linux machine. Credit: Benj Edwards]

During our testing, we found you can create directories, change between them, install simulated packages with apt-get, and even Telnet into a simulated MUSH and build a room or connect to a MUD and fight a troll.

Whenever deficiencies emerge in the simulation, you can tell ChatGPT how you want it to behave using instructions in curly braces, as spelled out in the original prompt. For example, while “connected” to our simulated MUD, we broke character and asked ChatGPT to summon a troll attack. Combat proceeded as expected (keeping track of hit points properly) until the troll died at the hands of our twice-virtual sword.

[Image: While simulating a MUD (within Telnet, within Linux, within ChatGPT), you can adjust the simulation by giving ChatGPT suggestions. Credit: Benj Edwards]

In Degrave’s examples (which he wrote about in detail on his blog), he also built a Docker file, checked for a GPU, pinged a simulated domain name, browsed a simulated website with lynx, and more. The simulated rabbit hole goes deep, and ChatGPT can even hallucinate new Linux commands.

Dialing a hallucinated BBS

In a prompting maneuver similar to conjuring up an AI-hallucinated Linux shell, someone named gfodor on Twitter discovered that ChatGPT could simulate calling a vintage dial-up BBS, including initializing a modem, entering a chat room, and talking to a simulated person.

[Image: A Twitter user named gfodor discovered that ChatGPT can simulate calling a BBS.]

As long as the prompt does not trigger its built-in filters related to violence, hate, or sexual content (among other things), ChatGPT seems willing to go along with almost any imaginary adventure. People have also discovered it can play tic-tac-toe, pretend to be an ATM, or simulate a chat room.

In a way, ChatGPT is acting like a text-based Holodeck, where its AI will attempt to simulate whatever you want it to do.

We should note that while hallucinating copiously is ChatGPT’s strong suit (by design), returning factual information reliably remains a work in progress. Still, with AI like ChatGPT around, the future of creative gaming may be very fun.


Syntax errors are the doom of us all, including botnet authors


[Image: If you’re going to come at port 443, you best not miss (or forget to put a space between URL and port). Credit: Getty Images]

KmsdBot, a cryptomining botnet that could also be used for distributed denial-of-service (DDoS) attacks, broke into systems through weak secure shell credentials. It could remotely control a system, was hard to reverse-engineer, didn’t persist after a reboot, and could target multiple architectures. KmsdBot was complex malware with no easy fix.

That was the case until researchers at Akamai Security Research witnessed a novel solution: forgetting to put a space between an IP address and a port in a command. And it came from whoever was controlling the botnet.

With no error-checking built in, sending KmsdBot a malformed command—like its controllers did one day while Akamai was watching—created a panic crash with an “index out of range” error. Because there’s no persistence, the bot stays down, and malicious agents would need to reinfect a machine and rebuild the bot’s functions. It is, as Akamai notes, “a nice story” and “a strong example of the fickle nature of technology.”

KmsdBot is an intriguing modern malware. It’s written in Golang, partly because Golang is difficult to reverse engineer. When Akamai’s honeypot caught the malware, it defaulted to targeting a company that created private Grand Theft Auto Online servers. It has a cryptomining ability, though it was latent while the DDOS activity was running. At times, it wanted to attack other security companies or luxury car brands.

Researchers at Akamai were taking apart KmsdBot and feeding it commands via netcat when they discovered that it had stopped sending attack commands. That’s when they noticed that an attack on a crypto-focused website was missing a space. Assuming that command went out to every working instance of KmsdBot, most of them crashed and stayed down. Feeding KmsdBot an intentionally bad request would halt it on a local system, allowing for easier recovery and removal.
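To make that failure mode concrete, here is a minimal sketch in Python; KmsdBot itself is written in Go, where the equivalent out-of-bounds access panics with a runtime “index out of range” error. The command format and parsing function are illustrative assumptions, not KmsdBot’s actual code.

    # Illustrative only, not KmsdBot's real source. A well-formed command
    # like "attack victim.example.com 443" splits into three parts; drop
    # the space ("attack victim.example.com443") and parts[2] no longer
    # exists, so the access below raises IndexError: list index out of
    # range (the counterpart of Go's "index out of range" panic).
    def parse_attack(command: str) -> tuple[str, int]:
        parts = command.split(" ")
        target = parts[1]
        port = int(parts[2])  # crashes here on the malformed command
        return target, port

    parse_attack("attack victim.example.com 443")  # OK: ("victim.example.com", 443)
    parse_attack("attack victim.example.com443")   # IndexError: bot stays down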

Larry Cashdollar, principal security intelligence response engineer at Akamai, told DarkReading that almost all of the KmsdBot activity his firm was tracking has ceased, though the authors may try to reinfect systems. The best defense in the first place, however, is using public key authentication for secure shell connections, or at a minimum improving login credentials.
