
Unfolding the Samsung Galaxy Fold – TechCrunch


The Galaxy Fold is real. I’ve held it in my hands — a few of them, actually. Samsung’s briefing this morning was littered with the things, in different colors and different states of unfolded. A month or so ago, this was anything but a given.

After eight years of teasing a folding device, Samsung finally pulled the trigger at its developer conference late last year. But the device was shrouded in darkness. Then in February, it took the stage as the Galaxy Fold, but there was no phone waiting for us to handle. Ditto for Mobile World Congress a week later, when the device was trapped behind glass like Han Solo in carbonite.

With preorders for the phone opening today, ahead of an expected April 26 sale, things were getting down to the wire for Samsung. But this morning, at an event in New York, the Galaxy Fold was on full display, ready to be put through its paces. We happily did just that in the hour or so we had with the product.

Once you get over the surprise that it’s real and about to ship, you find yourself pretty impressed with what Samsung’s done here. It’s easy to get frustrated about a product the company has essentially been teasing since it showed off its first flexible display in 2011, and a radically new form factor is an easy contender for first-generation woes. The Fold, on the other hand, is a device that’s been run through the wringer.

Samsung has already shown us what fold testing looks like in a promotional video that debuted a few weeks back. The handset was subjected to 200,000 of those machine folds, which amounts to well beyond the expected life of the product. And yes, before you ask, it was also put through drop testing, both open and closed: the same sort of violent abuse Samsung subjects the rest of its gadgets to.

Ditto for the eight-point battery test Samsung has subjected all of its devices to since the Note 7. That’s doubly important given that the Galaxy Fold sports twice the battery: 4,380 mAh in all, split into two cells, one on either side of the fold. That amounts to “all-day battery life,” according to Samsung, which is the same claim you’ll hear about most of these devices ahead of launch. The Fold presents an extra layer of ambiguity, though, given that the company isn’t entirely sure how people will actually use the thing once they get it in their hands.

The folding mechanism works well, snapping shut with a satisfying sound, thanks in part to some onboard magnets hidden near the edge. In fact, when the Fold is lying screen down, it tends to attract any bits of metal around it. I found myself absent-mindedly opening and closing the thing. When not in use, it’s like an extremely expensive fidget spinner.

Samsung’s done a remarkable job maintaining the design language from the rest of the Galaxy line. But for the odd form factor, the Fold looks right at home alongside the S10 and the like. The rounded metallic corners, the camera array and, yes, the Bixby button are all on board here.

The edges are split in two, with each screen getting its own half. When the Fold is open, they sit next to each other, with a small gap between the two. When the phone is folded, they pull apart, coming together at a 90-degree angle from the hinge. It’s an elegant solution, with a series of interlocking gears that allow the system to fold and unfold for the life of the product.

Unsurprisingly, Samsung tested a variety of different form factors, but said this was the most “intuitive” for a first-gen product like this. Of course, numerous competing devices have already taken different approaches, so it’s going to be fascinating watching what the industry ultimately lands on when more of these products are out in the world.

Unfolded, the device is surprisingly thin, a hair thinner than the iPhone XS. Folded, it’s a bit beefier than two iPhones stacked together, owing to a gap between the displays. While the edges of the device come into contact when closed, the two halves form a long, thin wedge, with a gap that grows as you move toward the middle.

Unfolded, the seam in the middle of the display is, indeed, noticeable. It’s subtle, though. You’ll really only notice it as your finger drags across it or when the light hits it the right way. That’s just part of life in the age of the folding phone, so get used to it.

The inner display measures 7.3 inches. Compare that to, say, the iPad Mini’s 7.9 inches. So it’s small for a tablet, but way too big to stick in your pocket without folding it up. The size of the interior display renders the notch conversation a bit moot, though there’s a pretty sizable cutout in the upper-right corner for the front-facing camera.

Samsung’s been working with Google and a handful of developers, including WhatsApp and Spotify, to create a decent experience for users at launch. There are two key places where this counts: app continuity and multi-app windows. The first lets you open an app on the small screen and pick up where you left off on the big one once it’s unfolded. The second makes it possible to have three apps open at once, something that’s become standard on tablets in the last couple of years.

Both work pretty seamlessly, though the functionality is limited to those companies that have enabled it. Samsung says it’s an easy addition, but the speed with which developers adopt it will depend largely on the success of these devices. The fact that Samsung has worked hand in hand with Google and the Android team on this, however, gives the company a big leg up on the competition.
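
For developers curious what enabling that continuity actually involves, the mechanism is most likely the standard Android one rather than anything Fold-specific. The sketch below is purely illustrative and makes that assumption; the DraftActivity and its saved draft text are hypothetical, not part of any Samsung or Google SDK.

```kotlin
// Illustrative sketch: an activity that survives the fold/unfold transition by
// saving and restoring its state. In the manifest it would also declare
// android:resizeableActivity="true" so the system can re-lay it out on the
// larger inner display. Hypothetical example, not Samsung's actual code.

import android.app.Activity
import android.os.Bundle
import android.widget.EditText

class DraftActivity : Activity() {

    private lateinit var draftField: EditText

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        draftField = EditText(this)
        setContentView(draftField)

        // When the phone is unfolded, the system may recreate the activity for
        // the new screen size; restoring the saved state is what makes the app
        // appear to pick up exactly where the user left off.
        savedInstanceState?.getString(KEY_DRAFT)?.let { draftField.setText(it) }
    }

    override fun onSaveInstanceState(outState: Bundle) {
        super.onSaveInstanceState(outState)
        // Persist in-progress work before the configuration change.
        outState.putString(KEY_DRAFT, draftField.text.toString())
    }

    companion object {
        private const val KEY_DRAFT = "draft_text"
    }
}
```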

All told, I’m pretty impressed with what amounts to a first-gen product. This thing was a long time in the making, and Samsung clearly wanted to get things right. The company admittedly had some of the wind taken out of its sails when Huawei announced its own folding device a few days later.

That product highlighted some of the Fold’s shortcomings, including the small front-facing screen and somewhat bulky design language. The Fold’s not perfect, but it’s a pretty solid first take on a new smartphone paradigm. And with a starting price of $1,980, it’s got a price to match: you’re essentially paying double for twice the screen.

Samsung, Huawei and the rest of the companies exploring the space know that they’re only going to sell so many of these things in the first go-round at this price point. Everyone’s still exploring aspects like folding mechanisms, essentially making early adopters guinea pigs this time out.

But while the Fold doesn’t feel like a device that’s achieved its final form, it’s a surprisingly well-realized first-generation phone.


Android’s winter update adds new features to Gboard, Maps, Books, Nearby Share and more – TechCrunch


Google announced this morning that Android phones will receive an update this winter bringing some half-dozen new features to devices, including improvements to apps like Gboard, Google Play Books, Voice Access, Google Maps, Android Auto and Nearby Share. The release is the latest in a series of update bundles that now allow Android devices to receive new features outside of the usual annual update cycle.

The bundles may not deliver Android’s latest flagship features, but they offer steady improvements on a more frequent basis.

One of the more fun bits in the winter update is a change to “Emoji Kitchen,” the feature in the Gboard keyboard app that lets users combine their favorite emoji to create new ones that can be shared as customized stickers. Users have remixed emoji over 3 billion times since the feature launched earlier this year, Google says. Now, the option is being expanded. Instead of offering hundreds of design combinations, it will offer over 14,000. You’ll also be able to tap two emoji to see suggested combinations, or double-tap a single emoji to see other suggestions.

This updated feature had been live in the Gboard beta app, but will now roll out to devices running Android 6.0 and above in the weeks ahead.

Another update will expand audiobook availability on Google Play Books. Now, Google will auto-generate narrations for books that don’t offer an audio version. The company says it worked with publishers in the U.S. and U.K. to add these auto-narrated books to Google Play Books. The feature is in beta but will roll out to all publishers in early 2021.

Voice Access, an accessibility feature that lets people use and navigate their phone with voice commands, will also be improved. The feature will soon leverage machine learning to understand interface labels on devices. That will allow users to refer to things like the “back” and “more” buttons, among many others, by name when speaking.

The new version of Voice Access, now in beta, will be available to all devices worldwide running Android 6.0 or higher.
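
For context on what those interface labels are: Android developers can already give any on-screen control a spoken name through its content description, which accessibility services such as Voice Access read back and respond to. Here is a minimal, hypothetical sketch using the standard View API; the PlayerActivity and “More” button are made up for illustration.

```kotlin
// Hypothetical example of the kind of label accessibility services rely on.
// A view's contentDescription gives an otherwise unlabeled control a name.

import android.app.Activity
import android.os.Bundle
import android.widget.ImageButton

class PlayerActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val moreButton = ImageButton(this)
        // With an explicit label, a Voice Access user can activate the control
        // by name. The new update aims to infer names like this via machine
        // learning when developers haven't supplied them.
        moreButton.contentDescription = "More"
        setContentView(moreButton)
    }
}
```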

An update for Google Maps will add a new feature to one of people’s most-used apps.

In a new (perhaps Waze-inspired) “Go Tab,” users will be able to more quickly navigate to frequently visited places — like a school or grocery store, for example — with a tap. The app will show directions, live traffic trends, disruptions on the route and an accurate ETA, without the user having to type in the actual address. Favorite places — or, in the case of public transit users, specific routes — can be pinned in the Go Tab for easy access. Transit users will be able to see things like accurate departure and arrival times, alerts from the local transit agency and an up-to-date ETA.

One potentially helpful use case for this new feature would be to pin both a transit route and driving route to the same destination, then compare their respective ETAs to pick the faster option.

This feature is coming to Google Maps on both Android and iOS in the weeks ahead.

Android Auto will expand to more countries over the next few months. Google initially said it would reach 36 countries, but then updated the announcement language as the timing of the rollout was pushed back. The company now isn’t saying how many countries will gain access in the months to follow, or which ones, so you’ll need to stay tuned for news on that front.

The final change is to Nearby Share, the proximity-based sharing feature that lets users share things like links, files, photos and more even when they don’t have a cellular or Wi-Fi connection available. The feature, which is largely designed with emerging markets in mind, will now allow users to share apps from Google Play with people around them, too.

To do so, you’ll access a new “Share Apps” menu in “Manage Apps & Games” in the Google Play app. This feature will roll out in the weeks ahead.

Some of these features will begin rolling out today, so you may receive them sooner than the “weeks ahead” timeframes suggest, but the progress of each update will vary.

iPhones can now automatically recognize and label buttons and UI features for blind users – TechCrunch


Apple has always gone out of its way to build features for users with disabilities, and VoiceOver on iOS is an invaluable tool for anyone with a vision impairment — assuming every element of the interface has been manually labeled. But the company just unveiled a brand new feature that uses machine learning to identify and label every button, slider and tab automatically.

Screen Recognition, available now in iOS 14, is a computer vision system that has been trained on thousands of images of apps in use, learning what a button looks like, what icons mean and so on. Such systems are very flexible — depending on the data you give them, they can become expert at spotting cats, facial expressions or, as in this case, the different parts of a user interface.

The result is that in any app now, users can invoke the feature and a fraction of a second later every item on screen will be labeled. And by “every,” they mean every — after all, screen readers need to be aware of everything a sighted user would see and be able to interact with, from images (which iOS has been able to create one-sentence summaries of for some time) to common icons (home, back) and context-specific ones like the “…” menus that appear just about everywhere.

The idea is not to make manual labeling obsolete. Developers know best how to label their own apps, but updates, changing standards and challenging situations (in-game interfaces, for instance) can lead to things not being as accessible as they could be.

I chatted with Chris Fleizach from Apple’s iOS accessibility engineering team, and Jeff Bigham from the AI/ML accessibility team, about the origin of this extremely helpful new feature. (It’s described in a paper due to be presented next year.)

“We looked for areas where we can make inroads on accessibility, like image descriptions,” said Fleizach. “In iOS 13 we labeled icons automatically – Screen Recognition takes it another step forward. We can look at the pixels on screen and identify the hierarchy of objects you can interact with, and all of this happens on device within tenths of a second.”

The idea is not a new one, exactly; Bigham mentioned a screen reader, Outspoken, which years ago attempted to use pixel-level data to identify UI elements. But while that system needed precise matches, the fuzzy logic of machine learning systems and the speed of iPhones’ built-in AI accelerators means that Screen Recognition is much more flexible and powerful.

It wouldn’t have been possible just a couple of years ago — the state of machine learning and the lack of a dedicated unit for executing it meant that something like this would have been extremely taxing on the system, taking much longer and probably draining the battery all the while.

But once this kind of system seemed possible, the team got to work prototyping it with the help of their dedicated accessibility staff and testing community.

“VoiceOver has been the standard bearer for vision accessibility for so long. If you look at the steps in development for Screen Recognition, it was grounded in collaboration across teams — Accessibility throughout, our partners in data collection and annotation, AI/ML, and, of course, design. We did this to make sure that our machine learning development continued to push toward an excellent user experience,” said Bigham.

The work was done by taking thousands of screenshots of popular apps and games, then manually labeling each element as one of several standard UI types. That labeled data was fed to the machine learning system, which soon became proficient at picking out those same elements on its own.

It’s not as simple as it sounds — as humans, we’ve gotten quite good at understanding the intention of a particular graphic or bit of text, and so often we can navigate even abstract or creatively designed interfaces. It’s not nearly as clear to a machine learning model, and the team had to work with it to create a complex set of rules and hierarchies that ensure the resulting screen reader interpretation makes sense.
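
To give a rough sense of what such a hierarchy might look like, here is a purely illustrative sketch (not Apple’s implementation, and written in Kotlin rather than anything iOS-specific): a screen-recognition pass hands the screen reader a tree of detected elements, each with a role, a label and a bounding box, which is then traversed in a predictable reading order.

```kotlin
// Purely illustrative data structures for a screen-recognition result: each
// detected element gets a role, a spoken label and a bounding box, and the
// tree is read back in a simple top-to-bottom, left-to-right order.

data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)

enum class Role { BUTTON, SLIDER, TAB, IMAGE, TEXT, CONTAINER }

data class DetectedElement(
    val role: Role,                                   // what the model thinks the pixels represent
    val label: String,                                // spoken name, e.g. "Back" or "Play"
    val bounds: Rect,                                 // where the element sits on screen
    val children: List<DetectedElement> = emptyList() // nested elements, e.g. buttons in a toolbar
)

// Flatten the hierarchy into the order a screen reader would announce it.
fun readingOrder(root: DetectedElement): List<DetectedElement> =
    listOf(root) + root.children
        .sortedWith(compareBy({ it.bounds.top }, { it.bounds.left }))
        .flatMap { readingOrder(it) }
```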

The new capability should help make millions of apps more accessible, or just accessible at all, to users with vision impairments. You can turn it on by going to Accessibility settings, then VoiceOver, then VoiceOver Recognition, where you can turn on and off image, screen, and text recognition.

It would not be trivial to bring Screen Recognition over to other platforms, like the Mac, so don’t get your hopes up for that just yet. But the principle is sound, though the model itself is not generalizable to desktop apps, which are very different from mobile ones. Perhaps others will take on that task; the prospect of AI-driven accessibility features is only just beginning to be realized.

VSCO acquires mobile app Trash to expand into AI-powered video editing – TechCrunch


VSCO, the popular photo and video editing app, today announced it has acquired AI-powered video editing app Trash, as the company pushes further into the video market. The deal will see Trash’s technology integrated into the VSCO app in the months ahead, with the goal of making it easier for users to creatively edit their videos.

Trash, which was co-founded by Hannah Donovan and Genevieve Patterson, cleverly uses artificial intelligence technology to analyze multiple video clips and identify the most interesting shots. It then stitches your clips together automatically to create a final product. In May, Trash added a feature called Styles that let users pick the type of video they wanted to make — like a recap, a narrative, a music video or something more artsy.

After Trash creates its AI-powered edit, users can opt to further tweak the footage using buttons on the screen that let them change the order of the clips, change filters, adjust the speed or swap the background music.

With the integration of Trash’s technology, VSCO envisions a way to make video editing even more approachable for newcomers, while still giving advanced users tools to dig in and do more edits, if they choose. As VSCO co-founder and CEO Joel Flory explains, it helps users get from that “point zero of staring at their Camera Roll…to actually putting something together as fast as possible.”

“Trash gets you to the starting point, but then you can dive into it and tweak [your video] to really make it your own,” he says.

The first feature to launch from the acquisition will be support for multi-clip video editing, expected in a few months. Over time, VSCO expects to roll out more of Trash’s technologies to its user base. As users make their video edits, they may also be able to save their collection of tweaks as “recipes,” like VSCO currently supports for photos.

“Trash brings to VSCO a deep level of personalization, machine learning and computer vision capabilities for mobile that we believe can power all aspects of creation on VSCO, both now and for future investments in creativity,” says Flory.

The acquisition is the latest in a series of moves VSCO has made to expand its video capabilities.

At the end of 2019, VSCO picked up video technology startup Rylo. A few months later, it leveraged that acquisition to debut Montage, a set of tools that allowed users to tell longer video stories using scenes, where they could also stack and layer videos, photos, colors and shapes to create a collage-like final product. The company also made a change to its app earlier this year to allow users to publish their videos to the main VSCO feed, which had previously only supported photos.

More recently, VSCO has added new video effects, like slowing down, speeding up or reversing clips, as well as new video capture modes.

As with its other video features, the new technology integrations from Trash will be subscriber-only features.

Today, VSCO’s subscription plan costs $19.99 per year, and provides users with access to the app’s video editing capabilities. Currently, more than 2 million of VSCO’s 100 million+ registered users are paid subscribers. And, as a result of the cost-cutting measures and layoffs VSCO announced earlier this year, the company has now turned things around to become EBITDA positive in the second half of 2020. The company says it’s on the path to profitability, and additional video features like those from Trash will help.

VSCO’s newer focus on video isn’t just about supporting its business model, however; it’s also about positioning the company for the future. While the app grew popular during the Instagram era, today’s younger users are more often posting videos to TikTok instead. According to Apple, TikTok was the No. 2 most downloaded free app of the year — ahead of Instagram, Facebook and Snapchat.

Though VSCO doesn’t necessarily envision itself as only a TikTok video prep tool, it does have to consider that growing market. Like TikTok’s, VSCO’s user base skews toward a younger, Gen Z demographic: 75% of its users are under 25, for example, and 55% of its subscribers are under 25 as well. Combined, its user base creates more than 8 million photos and videos per day, VSCO says.

As a result of the acquisition, Trash’s standalone app will shut down on December 18.

Donovan will join VSCO as Director of Product and Patterson as Sr. Staff Software Engineer, Machine Learning. Other Trash team members, including Karina Bernacki, Chihyu Chang and Drew Olbrich, will join as Chief of Staff, Engineering Manager and Sr. Software Engineer for iOS, respectively.

“We both believe in the power of creativity to have a healthy and positive impact on people’s lives,” said Donovan, in Trash’s announcement. “Additionally, we have similar audiences of Gen Z casual creators; and are focused on giving people ways to express themselves and share their version of the world while feeling seen, safe, and supported,” she said.

Trash had raised a total of $3.3 million — a combination of venture capital and $500,000 in grants — from BBG, Betaworks, Precursor and Dream Machine, as well as the National Science Foundation. (Multiple TechCrunch connections here: BBG is backed by our owner Verizon Media, while Dream Machine is the fund created by former TechCrunch editor Alexia Bonatsos.)

“Han and Gen and the Trash team have always paid attention to the needs of creators first and foremost. My hope is that the VSCO and Trash partnership will turn all of us into creators, and turn the gigabytes of latent videos on our phones from trash to treasures,” said Bonatsos, in a statement about the deal.

Flory declined to speak to the deal price, but characterized the acquisition as a “win-win for both the Trash team and for VSCO.”
