
Samsung spilled SmartThings app source code and secret keys

A development lab used by Samsung engineers was leaking highly sensitive source code, credentials and secret keys for several internal projects — including its SmartThings platform, a security researcher found.

The electronics giant left dozens of internal coding projects on a GitLab instance hosted on a Samsung-owned domain, Vandev Lab. The instance, used by staff to share and contribute code to various Samsung apps, services and projects, was spilling data because the projects were set to “public” and not properly protected with a password, allowing anyone to look inside each project and to access and download the source code.

Mossab Hussein, a security researcher at Dubai-based cybersecurity firm SpiderSilk who discovered the exposed files, said one project contained credentials that allowed access to the entire AWS account that was being used, including more than 100 S3 storage buckets that contained logs and analytics data.
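For context, confirming the scope of a leaked AWS key pair takes only a few lines of code. The sketch below is a minimal illustration using hypothetical placeholder credentials of the kind found in the exposed project; it is not Samsung’s or SpiderSilk’s actual tooling.

    import boto3

    # Hypothetical placeholder credentials, standing in for the keys found in the exposed project
    session = boto3.Session(
        aws_access_key_id="AKIAEXAMPLEEXAMPLE",
        aws_secret_access_key="exampleSecretKeyExampleSecretKey",
    )

    # List every S3 bucket the key pair can see; the exposed account reportedly held more than 100
    s3 = session.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])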

Many of the folders, he said, contained logs and analytics data for Samsung’s SmartThings and Bixby services. Several also held employees’ private GitLab tokens stored in plaintext, which expanded his access from 42 public projects to 135 projects, many of them private.
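A leaked personal access token is just as easy to exercise against GitLab’s REST API. The following is a rough sketch with made-up values (the instance URL and token are placeholders), showing how a single plaintext token can enumerate every project its owner can reach:

    import requests

    GITLAB_URL = "https://gitlab.example.com"  # placeholder for the exposed instance
    TOKEN = "glpat-xxxxxxxxxxxxxxxxxxxx"       # placeholder for a leaked personal access token

    # List all projects visible to the token's owner, private ones included
    resp = requests.get(
        f"{GITLAB_URL}/api/v4/projects",
        headers={"PRIVATE-TOKEN": TOKEN},
        params={"membership": True, "per_page": 100},
    )
    for project in resp.json():
        print(project["path_with_namespace"], project["visibility"])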

Samsung told him some of the files were for testing, but Hussein challenged that claim, saying source code found in the GitLab repository matched the code of the SmartThings Android app published on Google Play on April 10.

The app, which has since been updated, has more than 100 million installs to date.

“I had the private token of a user who had full access to all 135 projects on that GitLab,” he said, which could have allowed him to make code changes using a staffer’s own account.

Hussein shared several screenshots and a video of his findings for TechCrunch to examine and verify.

The exposed GitLab instance also contained private certificates for Samsung’s SmartThings iOS and Android apps.

Hussein also found several internal documents and slideshows among the exposed files.

“The real threat lies in the possibility of someone acquiring this level of access to the application source code, and injecting it with malicious code without the company knowing,” he said.

Through exposed private keys and tokens, Hussein documented a vast amount of access that, if obtained by a malicious actor, could have been “disastrous,” he said.

A screenshot of the exposed AWS credentials, allowing access to buckets with GitLab private tokens (Image: supplied)

Hussein, a white-hat hacker and data breach discoverer, reported the findings to Samsung on April 10. In the days following, Samsung began revoking the AWS credentials, but it’s not known if the remaining secret keys and certificates were revoked.

Samsung still hasn’t closed the case on Hussein’s vulnerability report, close to a month after he first disclosed the issue.

“Recently, an individual security researcher reported a vulnerability through our security rewards program regarding one of our testing platforms,” Samsung spokesperson Zach Dugan told TechCrunch when reached prior to publication. “We quickly revoked all keys and certificates for the reported testing platform and while we have yet to find evidence that any external access occurred, we are currently investigating this further.”

Hussein said Samsung took until April 30 to revoke the GitLab private keys. Samsung also declined to answer specific questions we had and provided no evidence that the Samsung-owned development environment was for testing.

Hussein is no stranger to reporting security vulnerabilities. He recently disclosed a vulnerable back-end database at Blind, an anonymous social networking site popular among Silicon Valley employees — and found a server leaking a rolling list of user passwords for scientific journal giant Elsevier.

Samsung’s data leak, he said, was his biggest find to date.

“I haven’t seen a company this big handle their infrastructure using weird practices like that,” he said.


Adobe Stock begins selling AI-generated artwork

An AI-generated watercolor illustration, now eligible for inclusion in Adobe Stock. (Image: Benj Edwards / Ars Technica)

On Monday, Adobe announced that its stock photography service, Adobe Stock, would begin allowing artists to submit AI-generated imagery for sale, Axios reports. The move comes amid Adobe’s embrace of image synthesis and industry-wide efforts to deal with the rapidly growing field of AI artwork in the stock art business, including earlier announcements from Shutterstock and Getty Images.

Submitting AI-generated imagery to Adobe Stock comes with a few restrictions. The artist must own (or have the rights to use) the image, AI-synthesized artwork must be submitted as an illustration (even if photorealistic), and it must be labeled with “Generative AI” in the title.

Further, each AI artwork must adhere to Adobe’s new Generative AI Content Guidelines, which require the artist to include a model release for any real person depicted realistically in the artwork. Artworks that incorporate illustrations of people or fictional brands, characters, or properties require a property release that attests the artist owns all necessary rights to license the content to Adobe Stock.

A stock photo odyssey

An example of AI-generated artwork available on Adobe Stock.

Earlier this year, the arrival of image synthesis tools like Stable Diffusion, Midjourney, and DALL-E unlocked a seemingly unlimited fountain of generative artwork that can imitate common art styles in various media, including photography. Each AI tool allows an artist to create a work based on a text description called a prompt.

In September, we covered some early instances of artists listing AI artwork on stock photography websites. Shutterstock reportedly initially reacted by removing some generative art, but later reversed course by partnering with OpenAI to generate AI artwork on the site. In late September, Getty Images banned AI artwork, fearing copyright issues that have not been fully tested in court.

Beyond those legal concerns, AI-generated artwork has proven ethically problematic among artists. Some criticized the ability of image synthesis models to reproduce artwork in the styles of living artists, especially since the AI models gained that ability from unauthorized scrapes of websites.

Despite those controversies, Adobe openly embraces the growing trend of image synthesis, which has shown no signs of slowing down.

“I’m confident that our decision to responsibly accept content made by generative AI serves both customers and contributors,” Sarah Casillas, Adobe Stock’s senior director of content, said in a statement emailed to Adobe Stock members. “Knowledge of stock, craft, taste, and imagination are critical to success on a stock marketplace where customers demand quality, and these are attributes that our successful contributors can continue to bring—no matter which tools they choose.”


No Linux? No problem. Just get AI to hallucinate it for you

An AI-generated illustration of an AI-hallucinated computer. (Image: Benj Edwards / Ars Technica)

Over the weekend, experimenters discovered that OpenAI’s new chatbot, ChatGPT, can hallucinate simulations of Linux shells and imagine dialing into a bulletin board system (BBS). The chatbot, based on a deep learning AI model, uses its stored knowledge to simulate Linux with surprising results, including executing Python code and browsing virtual websites.

Last week, OpenAI made ChatGPT freely available during a testing phase, which has led to people probing its capabilities and weaknesses in novel ways.

On Saturday, a DeepMind research scientist named Jonas Degrave worked out how to instruct ChatGPT to act like a Linux shell by entering this prompt:

I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.

On Monday, Ars found that the trick still works. After entering this prompt, instead of chatting, ChatGPT will accept simulated Linux commands. It then returns responses in “code block” formatting. For example, if you type ls -al, you’ll see an example directory structure.

After setting up the virtual Linux prompt in ChatGPT, typing "ls -al" returns a simulated directory structure. (Image: Benj Edwards)

ChatGPT can simulate a Linux machine because enough information about how a Linux machine should behave was included in its training data. That data likely includes software documentation (like manual pages), troubleshooting posts on Internet forums, and logged output from shell sessions.

ChatGPT generates responses based on which word is statistically most likely to follow the last series of words, starting with the prompt input by the user. It continues the conversation (in this case, a simulated Linux console session) by including all of your conversation history in successive prompts.
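At the time, ChatGPT was available only through its web interface, but the same looping behavior can be approximated with a short script against OpenAI’s later chat completions API. The sketch below is illustrative only (the model name and loop structure are assumptions, not anything Degrave published); it shows how the full history gets resent with every turn, which is what keeps the fake shell’s state consistent:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Abridged version of the terminal prompt; the full text is shown above
    PROMPT = ("I want you to act as a Linux terminal. I will type commands and you "
              "will reply with what the terminal should show. ... My first command is pwd.")

    # The whole conversation so far is resent on every request; that history is the
    # only "state" the simulated Linux machine has.
    messages = [{"role": "user", "content": PROMPT}]
    while True:
        reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        output = reply.choices[0].message.content
        print(output)
        messages.append({"role": "assistant", "content": output})
        messages.append({"role": "user", "content": input("next command> ")})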

Degrave found that the simulation goes surprisingly deep. Using its knowledge of the Python programming language, ChatGPT’s virtual Linux machine can execute code too, such as this string created by Degrave as an example: echo -e "x = lambda y: y*5+3;print('Result: ' + str(x(6)))" > run.py && python3 run.py. According to Degrave, it returns the correct value of "33".
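Unpacked, the one-liner just writes a two-line script and runs it, so the hallucinated answer is easy to check. Here is what the generated run.py contains and why 33 is correct:

    # Contents of run.py as written by the echo command above
    x = lambda y: y * 5 + 3
    print('Result: ' + str(x(6)))  # 6 * 5 + 3 = 33, so "Result: 33" is the right output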

Executing Python code within the virtual ChatGPT Linux machine. (Image: Benj Edwards)

During our testing, we found you can create directories, change between them, install simulated packages with apt-get, and even Telnet into a simulated MUSH and build a room or connect to a MUD and fight a troll.

Whenever deficiencies emerge in the simulation, you can tell ChatGPT how you want it to behave using instructions in curly braces, as spelled out in the original prompt. For example, while “connected” to our simulated MUD, we broke character and asked ChatGPT to summon a troll attack. Combat proceeded as expected (keeping track of hit points properly) until the troll died at the hands of our twice-virtual sword.

While simulating a MUD (within Telnet, within Linux, within ChatGPT), you can adjust the simulation by giving ChatGPT suggestions. (Image: Benj Edwards)

In Degrave’s examples (which he wrote about in detail on his blog), he also built a Docker file, checked for a GPU, pinged a simulated domain name, browsed a simulated website with lynx, and more. The simulated rabbit hole goes deep, and ChatGPT can even hallucinate new Linux commands.

Dialing a hallucinated BBS

In a prompting maneuver similar to conjuring up an AI-hallucinated Linux shell, someone named gfodor on Twitter discovered that ChatGPT could simulate calling a vintage dial-up BBS, including initializing a modem, entering a chat room, and talking to a simulated person.

A Twitter user named gfodor discovered that ChatGPT can simulate calling a BBS.

As long as the prompt does not trigger its built-in filters related to violence, hate, or sexual content (among other things), ChatGPT seems willing to go along with almost any imaginary adventure. People have also discovered it can play tic-tac-toe, pretend to be an ATM, or simulate a chat room.

In a way, ChatGPT is acting like a text-based Holodeck, where its AI will attempt to simulate whatever you want it to do.

We should note that while hallucinating copiously is ChatGPT’s strong suit (by design), returning factual information reliably remains a work in progress. Still, with AI like ChatGPT around, the future of creative gaming may be very fun.


Syntax errors are the doom of us all, including botnet authors

If you’re going to come at port 443, you best not miss (or forget to put a space between URL and port). (Image: Getty Images)

KmsdBot, a cryptomining botnet that could also be used for distributed denial-of-service (DDoS) attacks, broke into systems through weak Secure Shell (SSH) credentials. It could remotely control a system, it was hard to reverse-engineer, it didn’t stay persistent, and it could target multiple architectures. KmsdBot was a complex piece of malware with no easy fix.

That was the case until researchers at Akamai Security Research witnessed a novel solution: forgetting to put a space between an IP address and a port in a command. And it came from whoever was controlling the botnet.

With no error-checking built in, sending KmsdBot a malformed command—like its controllers did one day while Akamai was watching—created a panic crash with an “index out of range” error. Because there’s no persistence, the bot stays down, and malicious agents would need to reinfect a machine and rebuild the bot’s functions. It is, as Akamai notes, “a nice story” and “a strong example of the fickle nature of technology.”
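KmsdBot itself is written in Go, but the failure mode is language-agnostic: split a command on whitespace, assume all the pieces are there, and a missing space leaves you indexing past the end of the list. A rough Python analogue (not the botnet’s actual code) looks like this:

    def parse_attack_command(command: str):
        # Expects something like "attack 1.2.3.4 443"; no validation of the parts
        parts = command.split(" ")
        target = parts[1]
        port = int(parts[2])  # a missing space merges target and port, so parts[2] doesn't exist
        return target, port

    parse_attack_command("attack 1.2.3.4:443")  # raises IndexError -- the Python analogue of
                                                # Go's "index out of range" panic that downed the bot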

KmsdBot is an intriguing modern malware. It’s written in Golang, partly because Golang is difficult to reverse engineer. When Akamai’s honeypot caught the malware, it defaulted to targeting a company that created private Grand Theft Auto Online servers. It has a cryptomining ability, though it was latent while the DDOS activity was running. At times, it wanted to attack other security companies or luxury car brands.

Researchers at Akamai were taking apart KmsdBot and feeding it commands via netcat when they discovered that it had stopped sending attack commands. That’s when they noticed that an attack on a crypto-focused website was missing a space. Assuming that command went out to every working instance of KmsdBot, most of them crashed and stayed down. Feeding KmsdBot an intentionally bad request would halt it on a local system, allowing for easier recovery and removal.

Larry Cashdollar, principal security intelligence response engineer at Akamai, told DarkReading that almost all KmsdBot activity his firm was tracking has ceased, though the authors may be trying to reinfect systems. Still, the best defense in the first place is using public-key authentication for SSH connections or, at a minimum, strengthening login credentials.
