GitHub launches a mobile app, smarter notifications and improved code search

At its annual Universe conference today, Microsoft-owned GitHub announced a couple of new products, as well as the general availability of a number of tools that developers have been able to test for the last few months. The two announcements that developers will likely be most interested in are the launch of GitHub’s first native mobile app and an improved notifications experience. In addition, the company is taking GitHub Actions, its workflow automation and CI/CD solution, as well as GitHub Packages, out of beta. GitHub is also improving its code search, adding scheduled reminders and launching a pre-release program that will allow users to try out new features before they are ready for a wider rollout.

GitHub is also extending its sponsor program, which until now allowed you to tip individual open-source contributors for their work, to the project level. With GitHub Sponsors, anybody can help fund a project, and the members of that project then get to choose how to use the money. These projects have to be open source and have a corporate or nonprofit entity (and a bank account) attached to them.

“Developers are what’s driving us and we’re building the tools and the experiences to help them come together to create the world’s most important technologies and to do it on an open platform and ecosystem,” GitHub SVP of Product Shanku Niyogi told me. Today’s announcements, he said, are driven by the company’s mission to improve the developer experience. Over the course of the last year, the company launched well over 150 new features and enhancements, Niyogi stressed. For its Universe show, though, the company decided to highlight the new mobile app and the notification enhancements.

The new mobile app, which is now out in beta for iOS, with Android support coming soon, offers all of the basic features you’d want from a mobile app like this. The team decided to focus squarely on the kind of mobile use cases that make the most sense for a developer on the go, so you’ll be able to share feedback on discussions, review a few lines of code and merge changes. This isn’t meant to be a tool that replicates the full GitHub experience, though at least on the iPad, you do get a bit more screen real estate to work with.

“When you start to look at the tablet experience, that then extends out because you now got more space,” explained Niyogi. “You can look at the code, you can navigate some of that, we support some of the same key keyboard shortcuts that github.com does to be able to look at a larger amount of content and a larger amount of code. So the idea is the experience scales with the mobile devices you have, but it’s also designed for the things you’re likely to do when you’re not using your computer.”

Others have built mobile apps for GitHub before, of course, and it turns out that the developers of GitHawk, which was launched by a group of engineers from Instagram, recently joined GitHub to help the company in its efforts to get this new app off the ground.

The second major new feature is the improved notifications experience. As every GitHub user on even a medium-sized team knows, GitHub’s current set of notifications can quickly become overwhelming. That’s something the GitHub team was also keenly aware of, so the company decided to build a vastly improved system that includes filters, as well as an inbox for all of your notifications right inside of GitHub.

“The experience for developers today can result in an inbox in Gmail or whatever email client you use with tons and tons of notifications — and it can end up being kind of hard to know what matters and what’s just noise,” Kelly Stirman, GitHub’s VP of Strategy and Product Management, said. “We’ve done a bunch of things over the last year to make notifications better, but what we’ve done is a big step. We’ve reimagined what notifications should be.”

Using filters and rules, developers can zero in on the notifications that matter to them without flooding their inboxes with unnecessary noise. Developers can customize these filters to their hearts’ content. That’s also where the new mobile experience fits in well. “Many times, the notification will be sent to you when you’re not at your computer, when you’re not at your desktop,” noted Stirman. “And that notification might be somebody asking for your help to unblock something. And so it’s natural we think that we need to extend the GitHub experience beyond the desktop to a mobile experience.”
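
For developers who prefer to script their own triage, the same notification stream is already exposed through GitHub’s REST API. Here’s a minimal sketch; the token is a placeholder and the choice to filter on the reason field is just an illustration, not part of today’s announcement:

```python
import requests  # third-party HTTP client

# Hypothetical personal access token; replace with your own.
TOKEN = "ghp_example_token"

resp = requests.get(
    "https://api.github.com/notifications",
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github.v3+json",
    },
)
resp.raise_for_status()

# Keep only notifications where a review was explicitly requested from us.
for note in resp.json():
    if note.get("reason") == "review_requested":
        print(note["subject"]["title"], "-", note["repository"]["full_name"])
```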

Speaking of notifications: GitHub also announced today a new feature, currently in a limited preview, that adds a few more notifications to your inbox. You can now set up scheduled reminders for pending code reviews.

Among the rest of today’s announcements, the improved code search stands out because that’s definitely an area where some improvements were necessary. This new code search is currently in limited beta, but should roll out to all users over the next few months. It’ll introduce a completely new search experience, the company says, that can match special characters and casing, among other things.

Also new are code review assignments, now in public beta, and a new way to navigate code on GitHub.

Starlink dishes go into “thermal shutdown” once they hit 122° Fahrenheit

Image: Starlink satellite dish and equipment in the Idaho panhandle’s Coeur d’Alene National Forest.

A Starlink beta user in Arizona said he lost Internet service for over seven hours yesterday when the satellite dish overheated, demonstrating one of the drawbacks of SpaceX’s broadband service. When the user’s Internet service was disrupted, the Starlink app provided an error message saying, “Offline: Thermal shutdown.” The dish “overheated” and “Starlink will reconnect after cooling down,” the error message said.

The user, named Martin, posted a screenshot of the error message on Reddit. He contacted Starlink support, which told him, “Dishy will go into thermal shutdown at 122F and will restart when it reaches 104F.” Martin decided to give the dish a little water so it could cool down. He pointed a sprinkler at Dishy, and once it cooled enough to turn back on, “I immediately heard YouTube resume playback,” he wrote yesterday.
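
The support reply describes simple hysteresis: shut down at one threshold, come back once the dish cools to a lower one. Here’s a toy sketch of that logic; the two thresholds come from the support quote above, while everything else (the polling loop, the sample readings) is an assumption for illustration:

```python
SHUTDOWN_F = 122.0  # per Starlink support: dish powers down at 122°F
RESUME_F = 104.0    # ...and restarts once it cools to 104°F

def next_state(online: bool, temp_f: float) -> bool:
    """Return whether the dish should be online given the latest temperature."""
    if online and temp_f >= SHUTDOWN_F:
        return False          # thermal shutdown
    if not online and temp_f <= RESUME_F:
        return True           # cooled down enough to reconnect
    return online             # otherwise keep the current state

# Example: a hot afternoon followed by the sprinkler treatment.
state = True
for reading in [118, 123, 115, 106, 103]:
    state = next_state(state, reading)
    print(reading, "online" if state else "thermal shutdown")
```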

But the Internet restoration was short-lived, Martin told Ars in a chat today.

“The fix was temporary,” he told us. “When I stopped the sprinkler, [the dish] heated back up and would cycle back on for a few minutes and go back down for thermal shutdown. The overheating started that day about 11:30 am and came back for good about 7 pm… I’m currently headed to a hardware store to get materials to build a solar shade/sail around the dish to see if it doesn’t impact connection and speed.”

Martin sets up his dish on the ground behind his house because it is the only spot with no obstructions. But there’s “no shade to speak of,” he wrote in the Reddit comment thread.

Thermal shutdowns affect other users

Officially, SpaceX has said that “Dishy McFlatface” is certified to operate from 22° below zero up to 104° Fahrenheit. Temperatures reached about 120° yesterday in Martin’s town of Topock, near Arizona’s border with California, he said. Though Dishy doesn’t go into thermal shutdown until it hits 122°, the dish can obviously get hotter than the air temperature.

“I’m thinking the radiating heat from the ground is effectively cooking the bottom of the dish, [while] the top of the dish is cooked by the sun,” Martin told Ars. In addition to the shade he’s building, Martin said he is “waiting for permitting for a HAM radio tower” that would lift the dish off the ground to help keep it cool enough to operate.

Martin said he also had very short outages on several days since last week, but service came back before he had time to confirm whether they were caused by heat. SpaceX told users to expect periodic outages during beta, so Martin’s previous outages could have been due either to heat or satellite availability.

Another user in Virginia experienced a half-hour outage due to overheating on a day with temperatures in the low 80s, according to a Reddit post two months ago.

Martin’s post spurred a response from a beta user who also reported thermal shutdowns. “You’re not the only one. My Starlink is located 50 miles south of Grand Canyon in remote area,” one person wrote yesterday. “It’s been off and on also. It stopped today one hour after cool down period but quit again as [of] ~12:30. Last reported temp at my weather station was 103 degrees.”

The 122° F shutdown temperature was mentioned three weeks ago in a Reddit post by a user who had also been given the figure by Starlink support. “‘That’s it??’ was my thought. On a 90 degree day, the rooftop of my house can be around 125 degrees,” that user wrote.

“Are you sure that wasn’t Celsius?” another asked. (122° C converts to 251.6° F.)
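
For reference, the figures in the article convert as follows (a quick sketch using the standard Fahrenheit/Celsius formulas):

```python
def f_to_c(f): return (f - 32) * 5 / 9
def c_to_f(c): return c * 9 / 5 + 32

print(f_to_c(122))   # 50.0  -> the shutdown threshold is only 50°C
print(f_to_c(104))   # 40.0  -> the restart threshold
print(c_to_f(122))   # 251.6 -> what 122°C would be, per the Reddit exchange
```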

Like Martin, other Starlink users may have to find creative ways to keep their dishes cool as the summer months arrive.

Dishy’s heat management

As we wrote in December, a teardown of Dishy McFlatface showed some of its heat-management components, including a metal shield that’s peppered with blue dots made of thermally conductive material that conducts heat away from the PCB and into the shield.

Ken Keiter, the engineer who performed the teardown, was interviewed by Vice’s Motherboard section for a story about the Arizona resident today:

Keiter told Motherboard that while reasonable consideration was given to heat dissipation in Dishy’s design, he could see the potential for problems.

“The phased array assembly comprises a PCBA (printed circuit board assembly) adhered to an aluminum backplate which serves several purposes—acting as RF shielding, providing structural rigidity and, most relevantly, acting as a radiative thermal mass (heat sink) for the components on the PCBA,” Keiter said.

Heat is funneled from the circuit board to the aluminum backplate using a foam-like thermal interface material (TIM). The backplate itself resides in a weather-sealed cavity containing a small amount of air. As this backplate heats up, the air surrounding it also heats, transferring thermal energy via the plastic enclosure to the outside environment, Keiter said.

“Here’s the problem: at some point, the combined thermal energy being absorbed by Dishy’s face and being dumped by the components into the backplate, the air surrounding it, and the enclosure exceeds the amount that is being dissipated to the outside environment,” he noted.
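
Keiter’s description is essentially a lumped heat-balance problem: the dish keeps warming while absorbed power exceeds what the enclosure can shed to the surrounding air. Here’s a rough illustrative model; every number in it, from power draw to thermal resistance, is an assumed placeholder rather than a measured SpaceX figure:

```python
# Toy lumped thermal model: temperature rises while absorbed power exceeds
# what the enclosure dissipates. All parameters are illustrative assumptions.
AMBIENT_C = 49.0            # ~120°F afternoon air, as in Martin's town
ABSORBED_W = 120.0          # assumed solar load + electronics power
THERMAL_RESISTANCE = 0.35   # assumed °C per watt, enclosure to ambient
HEAT_CAPACITY = 4000.0      # assumed J/°C for backplate + enclosure
SHUTDOWN_C = 50.0           # 122°F threshold cited by Starlink support

temp = AMBIENT_C
for minute in range(61):
    if minute % 10 == 0:
        print(f"t={minute:2d} min  T={temp:5.1f}°C"
              + ("  -> thermal shutdown" if temp >= SHUTDOWN_C else ""))
    dissipated = (temp - AMBIENT_C) / THERMAL_RESISTANCE   # watts shed to the air
    temp += (ABSORBED_W - dissipated) * 60 / HEAT_CAPACITY  # one-minute Euler step
```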

Keiter said that software changes could “make the system more thermally efficient” but that it’s possible SpaceX will need to make “a significant hardware revision for the commercial launch.” He called it “a really tricky engineering problem with some insanely tight constraints.”

We contacted SpaceX today and will update this article if we get a response.

SpaceX seeks stability before exiting beta

The Starlink public beta began in October 2020, and there’s still no word on when exactly it will hit commercial availability. But general availability could come within months, as SpaceX CEO Elon Musk has said that Starlink will be available to “most of Earth” by the end of 2021 and the whole planet by next year. Still, SpaceX expects to have a limited number of slots in each geographic region because of capacity constraints.

SpaceX is seeking Federal Communications Commission permission to deploy up to 5 million user terminals in the US. Over 500,000 people have ordered Starlink, and Musk has said he expects all of those users to get service. But he also said that SpaceX will face “more of a challenge when we get into the several million user range.” The biggest limitation would be in densely populated urban areas; rural users would have better odds of getting service.

As noted earlier, Starlink warns beta users to expect “brief periods of no connectivity at all”—even if they don’t run into thermal shutdowns. “We still have a lot of work to do to make the network reliable,” SpaceX president and COO Gwynne Shotwell said in April. “We still have drops, not necessarily just because of where the satellites are in the sky.” SpaceX will keep the service in beta “until the network is reliable and great and something we’d be proud of,” Shotwell said.

The Verge reviewed Starlink last month and found frustrating reliability problems. “Like the similarly over-hyped mmWave 5G, Starlink is remarkably delicate. Even a single tree blocking the dish’s line of sight to the horizon will degrade and interrupt your Starlink signal,” The Verge wrote.

Starlink is only part of the solution

The service will surely become more stable by the time SpaceX moves it from beta to general availability, as Shotwell promised. Even in beta, Starlink is providing much-needed connectivity to people with no other options. If SpaceX brings reliable broadband to a few million users, that would be a success, but there may be tens of millions of Americans without access to high-speed broadband. Tens of millions of others have to pay whatever the cable company demands because there’s no competition where they live.

Widespread fiber-to-the-home deployment would make a bigger difference for more Internet users than Starlink. President Joe Biden pledged to lower prices and deploy “future-proof” broadband to all Americans, but he’s already scaled back his plan in the face of opposition from Republicans and incumbent ISPs. AT&T has been lobbying against nationwide fiber and funding for municipal networks, and AT&T CEO John Stankey expressed confidence last week that Congress will steer legislation in the direction that AT&T favors.

Nameless malware collects 1.2TB of sensitive data and stashes it online

Researchers have discovered yet another massive trove of sensitive data, a dizzying 1.2TB database containing login credentials, browser cookies, autofill data, and payment information extracted by malware that has yet to be identified.

In all, researchers from NordLocker said on Wednesday, the database contained 26 million login credentials, 1.1 million unique email addresses, more than 2 billion browser cookies, and 6.6 million files. In some cases, victims stored passwords in text files created with the Notepad application.

The stash also included over 1 million images and more than 650,000 Word and PDF files. Additionally, the malware took a screenshot after it infected the computer and took a picture using the device’s webcam. Stolen data also came from apps for messaging, email, gaming, and file-sharing. The data was extracted between 2018 and 2020 from more than 3 million PCs.

A booming market

The discovery comes amid an epidemic of security breaches involving ransomware and other types of malware hitting large companies. In some cases, including the May ransomware attack on Colonial Pipeline, hackers first gained access using compromised accounts. Many such credentials are available for sale online.

Alon Gal—co-founder and CTO of security firm Hudson Rock—said that in many cases, such data is first collected by stealer malware installed by an attacker attempting to steal cryptocurrency or commit a similar type of crime.

The attacker “will likely then try to steal cryptocurrencies, and once he is done with the information, he will sell to groups whose expertise is ransomware, data breaches, and corporate espionage,” Gal told me. “These stealers are capturing browser passwords, cookies, files, and much more and sending it to the [command and control server] of the attacker.”

NordLocker researchers said there’s no shortage of sources for attackers to secure such information.

“The truth is, anyone can get their hands on custom malware,” the researchers wrote. “It’s cheap, customizable, and can be found all over the web. Dark web ads for these viruses uncover even more truth about this market. For instance, anyone can get their own custom malware and even lessons on how to use the stolen data for as little as $100. And custom does mean custom—advertisers promise that they can build a virus to attack virtually any app the buyer needs.”

NordLocker hasn’t been able to identify the malware used in this case. Gal said that from 2018 to 2019, widely used malware included Azorult and, more recently, an info stealer known as Raccoon. Once infected, a PC will regularly send pilfered data to a command and control server operated by the attacker.

In all, the malware collected account credentials for almost 1 million sites, including Facebook, Twitter, Amazon, and Gmail. Of the 2 billion cookies extracted, 22 percent remained valid at the time of the discovery. The files can be useful in piecing together the habits and interests of the victim, and if the cookies are used for authentication, they give access to the person’s online accounts. NordLocker provides other figures here.

People who want to determine if their data got swept up by the malware can check the Have I Been Pwned breach notification service.
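
Have I Been Pwned also exposes the same lookups through an API, which is handy if you want to check a list of addresses rather than one at a time. A minimal sketch, assuming the v3 breached-account endpoint and an API key (the key and the address below are placeholders):

```python
import requests

API_KEY = "your-hibp-api-key"   # placeholder; HIBP requires a paid key for this endpoint
ACCOUNT = "user@example.com"    # placeholder address to look up

resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{ACCOUNT}",
    headers={"hibp-api-key": API_KEY, "user-agent": "breach-check-script"},
)

if resp.status_code == 404:
    print("No breach found for this address.")
else:
    resp.raise_for_status()
    for breach in resp.json():
        print(breach["Name"])   # breach names; the default response is truncated
```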

Hackers can mess with HTTPS connections by sending data to your email server

When you visit an HTTPS-protected website, your browser doesn’t exchange data with the webserver until it has ensured that the site’s digital certificate is valid. That prevents hackers with the ability to monitor or modify data passing between you and the site from obtaining authentication cookies or executing malicious code on the visiting device.

But what would happen if a man-in-the-middle attacker could confuse the browser into accidentally connecting to an email server or FTP server that uses a certificate valid for the same domain as the website?

The perils of speaking HTTPS to an email server

Because the domain name of the website matches the domain name in the email or FTP server certificate, the browser will, in many cases, establish a Transport Layer Security connection with one of these servers rather than the website the user intended to visit.

Because the browser is communicating in HTTPS and the email or FTP server is using SMTP, SFTP, or another protocol, the possibility exists that things might go horribly wrong—a decrypted authentication cookie could be sent to the attacker, for instance, or an attacker could execute malicious code on the visiting machine.
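
You can see the mismatch the researchers rely on by hand: open a TLS connection to a mail server’s implicit-TLS port and send an HTTP request, and the server answers in SMTP, not HTTP. A harmless sketch (the hostname is a placeholder; this only shows two sides speaking different protocols over one TLS session):

```python
import socket, ssl

HOST = "mail.example.com"   # placeholder mail server with implicit TLS on port 465
ctx = ssl.create_default_context()

with socket.create_connection((HOST, 465), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.recv(200).decode(errors="replace"))   # SMTP greeting: "220 ..."
        # A browser redirected here would send something like this instead of SMTP:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: mail.example.com\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))   # typically a "500 command unrecognized" error
```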

The scenario isn’t as farfetched as some people might think. New research, in fact, found that roughly 14.4 million webservers use a domain name that’s compatible with the cryptographic credential of either an email or FTP server belonging to the same organization. Of those sites, about 114,000 are considered exploitable because the email or FTP server uses software that’s known to be vulnerable to such attacks.

Such attacks are possible because TLS authenticates the server speaking HTTP, SMTP, or another Internet language but does not protect the integrity of the underlying TCP connection itself. Man-in-the-middle attackers can exploit this weakness to redirect TLS traffic from the intended server and protocol to another, substitute endpoint and protocol.

“The basic principle is that an attacker can redirect traffic intended for one service to another, because TLS does not protect the IP address or port number,” Marcus Brinkmann, a researcher at Ruhr University Bochum in Germany, told me. “In the past, people have considered attacks where the MitM attacker redirects a browser to a different web server, but we are considering the case where the attacker redirects the browser from the webserver to a different application server such as FTP or email.”

Cracks in the cornerstone

Typically abbreviated as TLS, Transport Layer Security uses strong encryption to prove that an end user is connected to an authentic server belonging to a specific service (such as Google or Bank of America) and not an impostor masquerading as that service. TLS also encrypts data as it travels between an end user and a server to ensure that people who can monitor the connection can’t read or tamper with the contents. With millions of servers relying on it, TLS is a cornerstone of online security.

In a research paper published on Wednesday, Brinkmann and seven other researchers investigated the feasibility of using what they call cross-protocol attacks to bypass TLS protections. The technique involves an MitM attacker redirecting cross-origin HTTP requests to servers that communicate over SMTP, IMAP, POP3, FTP, or another communication protocol.

The main components of the attack are (1) the client application used by the targeted end user, denoted as C; (2) the server the target intended to visit, denoted as Sint; and (3) the substitute server, denoted as Ssub, a machine that speaks SMTP, FTP, or another protocol that’s different from the one Sint uses but presents a TLS certificate listing the same domain.

The researchers identified three attack methods that MitM adversaries could use to compromise the safe browsing of a target in this scenario. They are:

Upload Attack. For this attack, we assume the attacker has some ability to upload data to Ssub and retrieve it later. In an upload attack, the attacker tries to store parts of the HTTP request of the browser (specifically the Cookie header) on Ssub. This might, for example, occur if the server interprets the request as a file upload or if the server is logging incoming requests verbosely. On a successful attack, the attacker can then retrieve the content on the server independently of the connection from C to Ssub and retrieve the HTTPS session cookie.

Download Attack—Stored XSS. For this attack, we assume the attacker has some ability to prepare stored data on Ssub and download it. In a download attack, the attacker exploits benign protocol features to “download” previously stored (and specifically crafted) data from Ssub to C. This is similar to a stored XSS vulnerability. However, because a protocol different from HTTP is used, even sophisticated defense mechanisms against XSS, like the Content-Security-Policy (CSP), can be circumvented. Very likely, Ssub will not send any CSP by itself, and large parts of the response are under the control of the attacker.

Reflection Attack—Reflected XSS. In a reflection attack, the attacker tries to trick the server Ssub into reflecting parts of C’s request in its response to C. If successful, the attacker sends malicious JavaScript within the request that gets reflected by Ssub. The client will then parse the answer from the server, which in turn can lead to the execution of JavaScript in the context of the targeted web server.

The MitM adversary can’t decrypt the TLS traffic, but there are still other things the adversary can do. Forcing the target’s browser to connect to an email or FTP server instead of the intended webserver, for instance, might cause the browser to write an authentication cookie to the FTP server. Or it could enable cross-site scripting attacks that cause the browser to download and execute malicious JavaScript hosted on the FTP or email server.

Enforcing ALPN and SNI protections

To prevent cross-protocol attacks, the researchers proposed stricter enforcement of two existing protections. The first is known as application layer protocol negotiation, a TLS extension that allows an application layer such as a browser to negotiate what protocol should be used in a secure connection. ALPN, as it’s usually abbreviated, is used to establish connections using the better-performing HTTP/2 protocol without additional round trips.

By strictly enforcing ALPN as it’s defined in the formal standard, connections created by browsers or other app layers that send the extension are not vulnerable to cross-protocol attacks.
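
In practice, strict enforcement means the non-HTTP server refuses the connection when the client’s ALPN offer doesn’t include a protocol it actually speaks. Here’s a minimal server-side sketch using Python’s ssl module; the certificate paths, port, and the “smtp” protocol identifier are placeholders for illustration, and depending on the OpenSSL build, a mismatched ALPN offer may already abort the handshake before the explicit check runs:

```python
import socket, ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # placeholder certificate files
ctx.set_alpn_protocols(["smtp"])                  # illustrative: this server only speaks SMTP

with socket.create_server(("0.0.0.0", 465)) as listener:
    conn, _ = listener.accept()
    try:
        tls = ctx.wrap_socket(conn, server_side=True)
    except ssl.SSLError:
        # Newer OpenSSL builds abort mismatched offers with a no_application_protocol alert.
        raise SystemExit("ALPN mismatch: refusing cross-protocol client")
    if tls.selected_alpn_protocol() != "smtp":
        # Client offered no ALPN (or something else entirely): don't fall back silently.
        tls.close()
    else:
        ...  # proceed with the SMTP session
```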

Similarly, use of a separate TLS extension called server name indication can protect against cross-hostname attacks if it’s configured to terminate the connection when no matching host is found. “This can protect against cross-protocol attacks where the intended and substitute server have different hostnames, but also against some same-protocol attacks such as HTTPS virtual host confusion or context confusion attacks,” the researchers wrote.
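
A similarly hedged sketch for the SNI side uses Python’s sni_callback hook to abort the handshake when the requested name isn’t one this server is supposed to answer for (the hostname list is a placeholder):

```python
import ssl

ALLOWED_NAMES = {"mail.example.com"}   # placeholder: names this server actually serves

def strict_sni(ssl_obj, server_name, ctx):
    # Returning a TLS alert description aborts the handshake; None lets it continue.
    if server_name not in ALLOWED_NAMES:
        return ssl.ALERT_DESCRIPTION_UNRECOGNIZED_NAME
    return None

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # placeholder certificate files
ctx.sni_callback = strict_sni
# ...then accept connections and wrap them with ctx, as in the ALPN sketch above.
```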

The researchers are calling their cross-protocol attacks ALPACA, short for “application layer protocols allowing cross-protocol attacks.” At the moment, ALPACA doesn’t pose a major threat to most people. But the risk posed could increase as new attacks and vulnerabilities are discovered or TLS is used to protect additional communications channels.

“Overall, the attack is very situational and targets individual users,” Brinkmann said. “So, the individual risk for users is probably not very high. But over time, more and more services and protocols are protected with TLS, and more opportunities for new attacks that follow the same pattern arise. We think it’s timely and important to mitigate these issues at the standardization level before it becomes a larger problem.”
