Biz & IT

Fortnite for Android no longer requires an invite

Fortnite’s journey to Android has been an adventure unto itself. It first launched as a Samsung exclusive, alongside the Note 9, before circumventing the Play Store to arrive on Google’s mobile operating system.

Until now, however, actually getting the game required going to Epic’s site, signing up, and waiting for an invite. Epic announced today via Twitter that it’s finally cutting that red tape. While the company is still sidestepping the Play Store in order to keep its earnings to itself, downloading the game is now as simple as scanning a QR code from its site.

Not that any of those extra steps were hurting the game: the wildly popular title hit 15 million installs a mere three weeks after launching on the OS.


Adobe Stock begins selling AI-generated artwork

An AI-generated watercolor illustration, now eligible for inclusion in Adobe Stock.

Benj Edwards / Ars Technica

On Monday, Adobe announced that its stock photography service, Adobe Stock, would begin allowing artists to submit AI-generated imagery for sale, Axios reports. The move comes amid Adobe’s embrace of image synthesis and industry-wide efforts to grapple with the rapidly growing field of AI artwork in the stock art business, including earlier announcements from Shutterstock and Getty Images.

Submitting AI-generated imagery to Adobe Stock comes with a few restrictions. The artist must own (or have the rights to use) the image, AI-synthesized artwork must be submitted as an illustration (even if photorealistic), and it must be labeled with “Generative AI” in the title.

Further, each AI artwork must adhere to Adobe’s new Generative AI Content Guidelines, which require the artist to include a model release for any real person depicted realistically in the artwork. Artworks that incorporate illustrations of people or fictional brands, characters, or properties require a property release that attests the artist owns all necessary rights to license the content to Adobe Stock.

A stock photo odyssey

An example of AI-generated artwork available on Adobe Stock.

Earlier this year, the arrival of image synthesis tools like Stable Diffusion, Midjourney, and DALL-E unlocked a seemingly unlimited fountain of generative artwork that can imitate common art styles in various media, including photography. Each AI tool allows an artist to create a work based on a text description called a prompt.

In September, we covered some early instances of artists listing AI artwork on stock photography websites. Shutterstock reportedly initially reacted by removing some generative art, but later reversed course by partnering with OpenAI to generate AI artwork on the site. In late September, Getty Images banned AI artwork, fearing copyright issues that have not been fully tested in court.

Beyond those legal concerns, AI-generated artwork has proven ethically problematic among artists. Some criticized the ability of image synthesis models to reproduce artwork in the styles of living artists, especially since the AI models gained that ability from unauthorized scrapes of websites.

Despite those controversies, Adobe openly embraces the growing trend of image synthesis, which has shown no signs of slowing down.

“I’m confident that our decision to responsibly accept content made by generative AI serves both customers and contributors,” Sarah Casillas, Adobe Stock’s senior director of content, said in a statement emailed to Adobe Stock members. “Knowledge of stock, craft, taste, and imagination are critical to success on a stock marketplace where customers demand quality, and these are attributes that our successful contributors can continue to bring—no matter which tools they choose.”


No Linux? No problem. Just get AI to hallucinate it for you


An AI-generated illustration of an AI-hallucinated computer.

Benj Edwards / Ars Technica

Over the weekend, experimenters discovered that OpenAI’s new chatbot, ChatGPT, can hallucinate simulations of Linux shells and imagine dialing into a bulletin board system (BBS). The chatbot, based on a deep learning AI model, uses its stored knowledge to simulate Linux with surprising results, including executing Python code and browsing virtual websites.

Last week, OpenAI made ChatGPT freely available during a testing phase, which has led to people probing its capabilities and weaknesses in novel ways.

On Saturday, a DeepMind research scientist named Jonas Degrave worked out how to instruct ChatGPT to act like a Linux shell by entering this prompt:

I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.

On Monday, Ars found that the trick still works. After entering this prompt, instead of chatting, ChatGPT will accept simulated Linux commands. It then returns responses in “code block” formatting. For example, if you type ls -al, you’ll see an example directory structure.

After setting up the virtual Linux prompt in ChatGPT, typing “ls -al” returns a simulated directory structure.

Benj Edwards

ChatGPT can simulate a Linux machine because enough information about how a Linux machine should behave was included in its training data. That data likely includes software documentation (like manual pages), troubleshooting posts on Internet forums, and logged output from shell sessions.

ChatGPT generates responses based on which word is statistically most likely to follow the last series of words, starting with the prompt input by the user. It continues the conversation (in this case, a simulated Linux console session) by including all of your conversation history in successive prompts.
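That history-resubmission loop can be sketched in a few lines of Python. This is a hypothetical illustration, not OpenAI’s actual API: `simulate_shell_turn` and `fake_model` are made-up names, and the stand-in “model” fakes a single command so the sketch runs on its own.

```python
# Illustrative sketch: each turn of the "Linux session" is appended to a
# transcript, and the entire transcript is resubmitted so the model can
# stay in character and remain consistent with earlier output.
def simulate_shell_turn(model, transcript, command):
    """Append a command, query the model with the whole history, record the reply."""
    transcript.append(f"$ {command}")
    prompt = "\n".join(transcript)   # the model only ever sees this text
    reply = model(prompt)            # next-token prediction over the full history
    transcript.append(reply)
    return reply

# A stand-in "model" that fakes one command, just to make the sketch runnable.
def fake_model(prompt):
    return "/home/user" if prompt.endswith("$ pwd") else "(simulated output)"

history = []
print(simulate_shell_turn(fake_model, history, "pwd"))  # /home/user
print(len(history))  # 2 entries: the command and the reply
```

The key point of the design is that the model has no hidden state: the illusion of a running machine lives entirely in the text of the conversation so far.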

Degrave found that the simulation goes surprisingly deep. Using its knowledge of the Python programming language, ChatGPT’s virtual Linux machine can execute code too, such as this string created by Degrave as an example: echo -e "x = lambda y: y*5+3;print('Result: ' + str(x(6)))" > run.py && python3 run.py. According to Degrave, it returns the correct value of “33”.
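You can check that value by running the contents of Degrave’s run.py in a real Python interpreter:

```python
# The one-liner written into run.py, executed natively: 6 * 5 + 3 = 33.
x = lambda y: y * 5 + 3
print('Result: ' + str(x(6)))  # Result: 33
```

The hallucinated shell’s answer matches what actual Python produces.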

Executing Python code within the virtual ChatGPT Linux machine.

Benj Edwards

During our testing, we found you can create directories, change between them, install simulated packages with apt-get, and even Telnet into a simulated MUSH and build a room or connect to a MUD and fight a troll.

Whenever deficiencies emerge in the simulation, you can tell ChatGPT how you want it to behave using instructions in curly braces, as spelled out in the original prompt. For example, while “connected” to our simulated MUD, we broke character and asked ChatGPT to summon a troll attack. Combat proceeded as expected (keeping track of hit points properly) until the troll died at the hands of our twice-virtual sword.

While simulating a MUD (within Telnet, within Linux, within ChatGPT), you can adjust the simulation by giving ChatGPT suggestions.

Benj Edwards

In Degrave’s examples (which he wrote about in detail on his blog), he also built a Docker file, checked for a GPU, pinged a simulated domain name, browsed a simulated website with lynx, and more. The simulated rabbit hole goes deep, and ChatGPT can even hallucinate new Linux commands.

Dialing a hallucinated BBS

In a prompting maneuver similar to conjuring up an AI-hallucinated Linux shell, someone named gfodor on Twitter discovered that ChatGPT could simulate calling a vintage dial-up BBS, including initializing a modem, entering a chat room, and talking to a simulated person.

A Twitter user named gfodor discovered that ChatGPT can simulate calling a BBS.

As long as the prompt does not trigger its built-in filters related to violence, hate, or sexual content (among other things), ChatGPT seems willing to go along with almost any imaginary adventure. People have also discovered it can play tic-tac-toe, pretend to be an ATM, or simulate a chat room.

In a way, ChatGPT is acting like a text-based Holodeck, where its AI will attempt to simulate whatever you want it to do.

We should note that while hallucinating copiously is ChatGPT’s strong suit (by design), returning factual information reliably remains a work in progress. Still, with AI like ChatGPT around, the future of creative gaming may be very fun.


Syntax errors are the doom of us all, including botnet authors

If you’re going to come at port 443, you best not miss (or forget to put a space between URL and port).

Getty Images

KmsdBot, a cryptomining botnet that could also be used for distributed denial-of-service (DDoS) attacks, broke into systems through weak Secure Shell (SSH) credentials. It could remotely control a system, it was hard to reverse-engineer, it didn’t persist after a reboot, and it could target multiple architectures. KmsdBot was complex malware with no easy fix.

That was the case until researchers at Akamai Security Research witnessed a novel solution: forgetting to put a space between an IP address and a port in a command. And it came from whoever was controlling the botnet.

With no error-checking built in, sending KmsdBot a malformed command—like its controllers did one day while Akamai was watching—created a panic crash with an “index out of range” error. Because there’s no persistence, the bot stays down, and malicious agents would need to reinfect a machine and rebuild the bot’s functions. It is, as Akamai notes, “a nice story” and “a strong example of the fickle nature of technology.”
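The failure mode is easy to reproduce in miniature. The sketch below is hypothetical (KmsdBot itself is written in Go, where the equivalent crash is an “index out of range” panic; `parse_attack` is a made-up name), but it shows how a parser with no bounds checking dies when the space between target and port goes missing.

```python
# Hypothetical command parser illustrating KmsdBot's crash: an attack
# command like "attack 1.2.3.4 443" splits into three space-separated
# fields; drop the space and the port field simply isn't there.
def parse_attack(command):
    fields = command.split(" ")
    # No error checking, just like the bot: field 1 is the target,
    # field 2 is the port.
    return fields[1], fields[2]

print(parse_attack("attack 1.2.3.4 443"))   # ('1.2.3.4', '443')

try:
    parse_attack("attack 1.2.3.4443")       # missing space: only two fields
except IndexError as err:
    print("bot would crash here:", err)
```

One malformed command broadcast to every working instance, and the whole botnet falls over at once.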

KmsdBot is an intriguing piece of modern malware. It’s written in Golang, partly because Golang is difficult to reverse engineer. When Akamai’s honeypot caught the malware, it defaulted to targeting a company that created private Grand Theft Auto Online servers. It has a cryptomining ability, though that lay dormant while the DDoS activity was running. At times, it wanted to attack other security companies or luxury car brands.

Researchers at Akamai were taking apart KmsdBot and feeding it commands via netcat when they discovered that it had stopped sending attack commands. That’s when they noticed that an attack on a crypto-focused website was missing a space. Assuming that command went out to every working instance of KmsdBot, most of them crashed and stayed down. Feeding KmsdBot an intentionally bad request would halt it on a local system, allowing for easier recovery and removal.

Larry Cashdollar, principal security intelligence response engineer at Akamai, told DarkReading that almost all of the KmsdBot activity his firm was tracking has ceased, though the authors may be trying to reinfect systems. Even so, the best defense is to prevent the initial break-in: use public key authentication for Secure Shell connections, or at a minimum strengthen login credentials.
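That hardening maps onto a few standard OpenSSH server directives. A minimal sketch of an `/etc/ssh/sshd_config` fragment (exact policy choices will vary by environment):

```
# /etc/ssh/sshd_config -- prefer keys, reject password guessing
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin prohibit-password
```

After editing, reload the SSH daemon (e.g. `systemctl reload sshd`) for the settings to take effect; with password authentication off, credential-stuffing bots like KmsdBot have nothing to brute-force.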
