

Review: Apple’s iPhone XR is a fine young cannibal


This iPhone is great. It's most like the last iPhone, though not the last "best" iPhone; it's more like the last "not quite as good" iPhone. It's better than that one, though, just not as good as the newest best iPhone or the older best iPhone.

If you're upgrading from an iPhone 7 or iPhone 8, you're gonna love it and likely won't miss any current features, while also getting a nice update to a gesture-driven phone with Face ID. But don't buy it if you're coming from an iPhone X; you'll be disappointed, as there are some compromises from the incredibly high level of performance and quality in Apple's last flagship, which really was pushing the envelope at the time.

From a consumer perspective, this is offering a bit of choice that targets the same kind of customer who bought the iPhone 8 instead of the iPhone X last year. They want a great phone with a solid feature set and good performance but are not obsessed with ‘the best’ and likely won’t notice any of the things that would bug an iPhone X user about the iPhone XR.

On the business side, Apple is offering the iPhone XR to make sure there is no pricing umbrella underneath the iPhone XS and iPhone XS Max, and to make sure that the pricing curve is smooth across the iPhone line. It's not so much a bulwark against low-end Android; that's what the iPhone 8 and iPhone 7, which are sticking around at lower prices, are for.

Instead, it's offering an "affordable" option that's similar in philosophy to the iPhone 8's role last year, but with some additional benefits in terms of uniformity. Apple gets to move more of its user base to a fully gesture-oriented interface, as well as give them Face ID. It benefits from more of its pipeline being dedicated to devices that share a lot of components, like the A12 and TrueDepth camera system. It's also recognizing the overall move toward larger screens in the market.

If Apple was trying to cannibalize sales of the iPhone XS, it couldn’t have created a better roasting spit than the iPhone XR.

Screen

Apple says that the iPhone XR has ‘the most advanced LCD ever in a smartphone’ — their words.

The iPhone XR's screen is an LCD, not an OLED. This is one of the biggest differences between the iPhone XR and the iPhone XS models, and while the screen is one of the best LCDs I've ever seen, it's not as good as the other models'. Specifically, I believe that the OLED's ability to display true black and deeper color (especially in images taken on the new XR cameras in HDR) sets it apart easily.

That said, I have a massive advantage in that I am able to hold the screens side by side to compare images. Simply put, if you don’t run them next to one another, this is a great screen. Given that the iPhone XS models have perhaps the best displays ever made for a smartphone, coming in a very close second isn’t a bad place to be.

A lot of nice advancements have been made here over earlier iPhone LCDs. You get True Tone, faster 120Hz touch response and wide color support, all on a 326 ppi display that's larger than the iPhone 8 Plus's, in a smaller body. You also now get tap-to-wake, another way Apple is working hard to unify the design and interaction language of its phones across the lineup.

None of these advancements come for free on an LCD. A lot of time, energy and money was spent getting the older technology to work as closely as possible to the flagship models. It's rare to the point of non-existence for companies to put in the work to make their lower-end devices feel as well executed as the higher-end ones. For as much crap as Apple gets about withholding features to upsell people, there is very little of that happening with the iPhone XR; quite the opposite, really.

There are a few caveats here. First, 3D Touch is gone, replaced by "Haptic Touch," which Apple says works similarly to the MacBook's trackpad. It provides feedback from the iPhone's Taptic vibration engine to simulate a "button press" or trigger. In practice, it amounts to a fairly prosaic "long press to activate" more than anything else. It's used to trigger the camera and the flashlight on the lock screen, and Apple says it's coming to other places throughout the system as it sees appropriate and figures out how to make it feel right.

I'm not a fan. I know 3D Touch has its detractors, even among the people I've talked to who helped build it, but I think it's a clever utility with a nice snap to it when activating quick actions like the camera. In contrast, on the iPhone XR you must tap and hold the camera button for about a second and a half (no pressure sensitivity here, obviously) as the system figures out that this is an intentional press by looking at duration, touch shape, spread and so on, and then triggers the action. You still get the feedback, which is nice, but it feels disconnected and slow. It's the best-case scenario without the additional 3D Touch layer, but it's not ideal.
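To make that mechanism a bit more concrete, here is a rough Python sketch of the kind of long-press heuristic described above: a recognizer that treats a touch as an intentional press based on its duration, how far it drifts and how much the contact patch changes. The thresholds, field names and structure are assumptions for illustration only, not Apple's actual Haptic Touch implementation.

```python
from dataclasses import dataclass

@dataclass
class TouchSample:
    timestamp: float  # seconds since the touch began
    x: float          # touch centroid, in points
    y: float
    radius: float     # approximate contact radius, in points

def is_intentional_press(samples, min_duration=0.5, max_drift=10.0, max_spread_change=8.0):
    """Return True if a sequence of touch samples looks like a deliberate long press.

    The thresholds here are made up for illustration; a real recognizer would be
    carefully tuned, but the idea is the same: long enough, mostly stationary,
    and with a stable contact patch.
    """
    if len(samples) < 2:
        return False

    duration = samples[-1].timestamp - samples[0].timestamp
    if duration < min_duration:
        return False  # too short: treat it as a tap, not a press

    # The finger shouldn't wander far; otherwise it's a drag, not a press.
    drift = max(
        ((s.x - samples[0].x) ** 2 + (s.y - samples[0].y) ** 2) ** 0.5
        for s in samples
    )
    if drift > max_drift:
        return False

    # A wildly changing contact radius suggests an accidental brush rather than
    # a deliberate press on a single target.
    spread_change = max(s.radius for s in samples) - min(s.radius for s in samples)
    return spread_change <= max_spread_change
```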

I'd also be remiss if I didn't mention that the edges of the iPhone XR screen have a slight dimming effect that is best described as a "drop shadow." It's wildly hard to photograph, but imagine a very thin line of shadow around the edge of the phone that gets more pronounced as you tilt it and look at the edges. It's likely a side effect of the way Apple was able to get a nice, sharp black drop-off that gives the iPhone XR's screen its to-the-edges look.

Apple is already doing a ton of work rounding the corners of the LCD screen to make them look smoothly curved (this works great and is nearly seamless unless you bust out a magnifying loupe), and it's doing some additional work around the edge to keep it looking tidy. It has doubled the number of LEDs in the screen to make that dithering and edging possible.

Frankly, I don’t think most people will ever notice this slight shading of dark around the edge — it is very slight — but when the screen is displaying mostly white and it’s next to the iPhone XS it’s visible.

Oh, the bezels are bigger. It makes the front look slightly less elegant and screenful than the iPhone XS, but it’s not a big deal.

Camera

Yes, the portrait mode works. No, it’s not as good as the iPhone XS. Yes, I miss having a zoom lens.

All of those things are true, and they're easily the biggest reason I won't be buying an iPhone XR. However, in the theme of Apple working its hardest to make even its "lower end" devices work and feel as much like its best, it's really impressive what has been done here.

The iPhone XR’s front-facing camera array is identical to what you’ll find in the iPhone XS. Which is to say it’s very good.

The rear facing camera is where it gets interesting, and different.

The rear camera is a single lens and sensor that is both functionally and actually identical to the wide angle lens in the iPhone XS. It’s the same sensor, the same optics, the same 27mm wide-angle frame. You’re going to get great ‘standard’ pictures out of this. No compromises.

However, I found myself missing the zoom lens a lot. This is absolutely a your-mileage-may-vary scenario, but I take the vast majority of my pictures with the telephoto lens. Looking back at my year with the iPhone X, I'd say north of 80% of my pictures were shot with the telephoto, even if they were close-ups. I simply prefer the "52mm" equivalent, with its nice compression and tight crop. It's just a better way to shoot than a wide angle, as any photographer or camera company will tell you, because that's the standard (equivalent) lens that cameras have shipped with for decades.

Wide-angle lenses were always a kludge in smartphones, and it's only in recent years that we've started getting decent telephotos. If I had my choice, I'd default to the tele and have a button to zoom out to the wide angle; that would be much nicer.

But with the iPhone XR you’re stuck with the wide — and it’s a single lens at that, without the two different perspectives Apple normally uses to gather its depth data to apply the portrait effect.

So Apple got clever. iPhone XR portrait images still contain a depth map that determines foreground, subject and background, as well as the new segmentation map that handles fine detail like hair. While the segmentation maps are roughly identical, the depth maps from the iPhone XR are nowhere near as detailed or information-rich as the ones generated by the iPhone XS.

Comparing the two maps, the iPhone XR's depth map is far less aware of the scene's depth and of the separation between the "slices" of distance. It means that the overall portrait effect, while effective, is not as nuanced or aggressive.

In addition, the iPhone XR's portrait mode only works on people. You're also limited to just a couple of the portrait lighting modes: Studio and Contour.

In order to accomplish portrait mode without the twin-lens perspective, Apple is doing facial landmark mapping and image recognition work to determine that the subject you're shooting is a person. It acquires the depth map from a continuous real-time buffer of information coming from the focus pixels embedded in the iPhone XR's sensor, which it passes to the A12 Bionic's Neural Engine. Multiple neural nets analyze the data and reproduce the depth effect right in the viewfinder.

When you snap the shutter, it combines the depth data, the segmentation map and the image data into a portrait shot instantaneously. You're able to see the effect immediately. It's wild to see this happen in real time, and the mind boggles at the horsepower needed to do it. By comparison, the Pixel 3 does not do a real-time preview and takes a couple of seconds to even show you the completed portrait shot once it's snapped.
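As a rough illustration of what combining those three inputs can look like, here is a minimal Python sketch (using NumPy and OpenCV) that blends a sharp image with a blurred copy, weighting the blur by distance from a focus plane and zeroing it out wherever the segmentation map marks the subject. The array conventions, names and the single Gaussian blur are illustrative assumptions on my part; Apple's actual pipeline is far more sophisticated.

```python
import numpy as np
import cv2  # OpenCV, used here only for the Gaussian blur

def toy_portrait(image, depth_map, segmentation_map, focus_depth=0.2, kernel=21):
    """Toy approximation of a portrait-mode composite.

    image:            HxWx3 uint8 photo
    depth_map:        HxW float32 in [0, 1], 0 = near, 1 = far (assumed convention)
    segmentation_map: HxW float32 in [0, 1], 1 = subject (person, hair), 0 = background
    """
    # Blur strength grows with distance from the chosen focus plane.
    blur_weight = np.clip(np.abs(depth_map - focus_depth) * 2.0, 0.0, 1.0)

    # Anything the segmentation map marks as subject stays sharp, which is
    # what keeps fine detail like hair out of the background blur.
    blur_weight *= (1.0 - segmentation_map)

    blurred = cv2.GaussianBlur(image, (kernel, kernel), 0)

    # Per-pixel blend between the sharp original and the blurred copy.
    w = blur_weight[..., None]
    return (image * (1.0 - w) + blurred * w).astype(np.uint8)
```

A real implementation would render the blur across many depth "slices" and model actual lens bokeh rather than a single Gaussian, which is part of why the richer depth maps from the dual-lens iPhone XS produce smoother transitions between in-focus and out-of-focus areas.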

It’s a bravura performance in terms of silicon. But how do the pictures look?

I have to say, I really like the portraits that come out of the iPhone XR. I was ready to hate on the software-driven solution they’d come up with for the single lens portrait but it’s pretty damn good. The depth map is not as ‘deep’ and the transitions between out of focus and in focus areas are not as wide or smooth as they are on iPhone XS, but it’s passable. You’re going to get more funny blurring of the hair, more obvious hard transitions between foreground and background and that sort of thing.

And the wide-angle portraits are completely incorrect from an optical compression perspective (nose too large, ears too small). Still, they are kind of fun in an exaggerated way. Think of the way your face looks when you get too close to your front camera.

If you take a ton of portraits with your iPhone, the iPhone XS is going to give you a better chance of getting a great shot with a ton of depth that you can play with to get the exact look that you want. But as a solution that leans hard on the software and the Neural Engine, the iPhone XR’s portrait mode isn’t bad.

Performance

Unsurprisingly, given that it has the exact same A12 Bionic processor, the iPhone XR performs almost identically to the iPhone XS in tests. Even though it has 3GB of RAM to the iPhone XS' 4GB, the overall situation here is that you're getting a phone that is damn near identical as far as speed and capability go. If you care most about core features and not the camera or screen quirks, the iPhone XR does not offer many, if any, compromises here.

Size

The iPhone XR is the perfect size. If Apple were to make only one phone next year, it could just make it XR-sized and call it good. Though I am now used to the size of the iPhone X, a bit of extra screen real estate is much appreciated when you do a lot of reading and email. Unfortunately, the iPhone XS Max is a two-handed phone, period. The increase in vertical size is lovely for reading and viewing movies, but it's hell on reachability. Stretching to the corners with your thumb is darn near impossible, and for even simple actions like closing a modal view inside an app, it's often easiest (and most habitual) to just default to two hands.

For those users who are "Plus" addicts, the XS Max is an exercise in excess. It's great as a command center for someone who does most of their work on their iPhone, or in scenarios where it's their only computer. My wife, for instance, has never owned her own computer and hasn't really needed a permanent one in 15 years. For the last 10 years, she's been all iPhone, with a bit of iPad thrown in. I myself am now on an XS Max because I also do a huge amount of my work on my iPhone, and the extra screen size is great for big email threads and more general context.

But I don’t think Apple has done enough to capitalize on the larger screen iPhones in terms of software — certainly not enough to justify two-handed operation. It’s about time iOS was customized thoroughly for larger phones beyond a couple of concessions to split-view apps like Mail.

That's why the iPhone XR's size comes across as such a nice compromise. It's absolutely a one-handed phone, but you still get some extra real estate over the iPhone XS, and the exact same amount of information appears on the iPhone XR's screen as on the iPhone XS Max, in a phone that is short enough to be thumb-friendly.

Color

Apple’s industrial design chops continue to shine with the iPhone XR’s color finishes. My tester iPhone was the new Coral color and it is absolutely gorgeous.

The way Apple is doing colors is like nobody else. There's no comparison to holding a Pixel 3, for instance. The Pixel 3 is fun and photographs well, but it's super "cheap and cheerful" in its look and feel. Even though the XR is Apple's mid-range iPhone, the feel is very much that of a piece of nicely crafted jewelry. It's weighty, with a gorgeous seven-layer color process laminating the back of the rear glass, giving it a depth and sparkle that's just unmatched in consumer electronics.

The various textures of the blasted aluminum and glass are complementary, and it's a nice melding of the iPhone 8 and iPhone X design ethos. It's massively unfortunate that most people will be covering the color with cases, and I expect clear cases to explode in popularity when these phones start getting delivered.

It remains very curious that Apple is not shipping any first-party cases for the iPhone XR — not even the rumored clear case. I’m guessing that they just weren’t ready or that Apple was having issues with some odd quirk of clear cases like yellowing or cracking or something. But whatever it is, they’re leaving a bunch of cash on the table.

Apple's industrial design does a lot of heavy lifting here, as usual. It often goes un-analyzed just how well the construction of the device works in conjunction with marketing and market placement to help customers both justify and enjoy their purchase. It transmits to the buyer that this is a piece of quality kit that has had a lot of thought put into it, and makes them feel good about paying a hefty price for a chunk of silicon and glass. No one takes materials science anywhere near as seriously as Apple, and that continues to be on display here.

Should you buy it?

As I said above, it’s not that complicated of a question. I honestly wouldn’t overthink this one too much. The iPhone XR is made to serve a certain segment of customers that want the new iPhone but don’t necessarily need every new feature. It works great, has a few small compromises that probably won’t faze the kind of folks that would consider not buying the best and is really well built and executed.

"Apple's pricing lineup is easily its strongest yet competitively," Creative Strategies' Ben Bajarin puts it in a subscriber piece. "The [iPhone] XR in particular is well lined up against the competition. I spoke to a few of my carrier contacts after Apple's iPhone launch event and they seemed to believe the XR was going to stack up well against the competition and when you look at it priced against the Google Pixel ($799) and Samsung Galaxy 9 ($719). Some of my contacts even going so far to suggest the XR could end up being more disruptive to competitions portfolios than any iPhone since the 6/6 Plus launch."

Apple wants to fill the umbrella, leaving less room than ever for competitors. By launching a phone that's competitive on price, and that features an enormous amount of research and execution intended to make it as close a competitor as possible to its own flagship line, Apple has set itself up for a really diverse and interesting fiscal Q4.

Whether you help Apple boost its average selling price by buying one of the maxed out XS models or you help it block another Android purchase with an iPhone XR, I think it will probably be happy having you, raw or cooked.



They plugged GPT-4 into Minecraft—and unearthed new potential for AI



The technology that underpins ChatGPT has the potential to do much more than just talk. Linxi “Jim” Fan, an AI researcher at the chipmaker Nvidia, worked with some colleagues to devise a way to set the powerful language model GPT-4—the “brains” behind ChatGPT and a growing number of other apps and services—loose inside the blocky video game Minecraft.

The Nvidia team, which included Anima Anandkumar, the company’s director of machine learning and a professor at Caltech, created a Minecraft bot called Voyager that uses GPT-4 to solve problems inside the game. The language model generates objectives that help the agent explore the game, and code that improves the bot’s skill at the game over time.

Voyager doesn’t play the game like a person, but it can read the state of the game directly, via an API. It might see a fishing rod in its inventory and a river nearby, for instance, and use GPT-4 to suggest the goal of doing some fishing to gain experience. It will then use this goal to have GPT-4 generate the code needed to have the character achieve it.

The most novel part of the project is the code that GPT-4 generates to add behaviors to Voyager. If the code initially suggested doesn’t run perfectly, Voyager will try to refine it using error messages, feedback from the game, and a description of the code generated by GPT-4.

Over time, Voyager builds a library of code in order to learn to make increasingly complex things and explore more of the game. A chart created by the researchers shows how capable it is compared to other Minecraft agents. Voyager obtains more than three times as many items, explores more than twice as far, and builds tools 15 times more quickly than other AI agents. Fan says the approach may be improved in the future with the addition of a way for the system to incorporate visual information from the game.
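The loop described above (read the game state through an API, ask GPT-4 for a goal, have it generate code for that goal, refine the code with error messages and game feedback, and bank working code in a growing skill library) can be sketched roughly as follows. This is a hedged Python illustration: query_gpt4, game.get_state, game.run_code and the class itself are hypothetical stand-ins, not the real Voyager or Minecraft APIs.

```python
# Minimal sketch of a Voyager-style agent loop. The helpers used here
# (query_gpt4, game.get_state, game.run_code) are hypothetical stand-ins,
# not the actual Voyager or Minecraft interfaces.

def query_gpt4(prompt: str) -> str:
    """Placeholder for a call to a GPT-4-style language model."""
    raise NotImplementedError

class VoyagerStyleAgent:
    def __init__(self, game):
        self.game = game
        self.skill_library = {}  # goal description -> code that worked

    def step(self, max_retries: int = 3):
        # 1. Read the game state directly via an API (inventory, nearby blocks, etc.).
        state = self.game.get_state()

        # 2. Ask the language model to propose the next exploration goal.
        goal = query_gpt4(f"Given this game state, propose a useful next goal:\n{state}")

        # 3. Ask it to generate code for that goal, reusing previously learned skills.
        code = query_gpt4(
            f"Write code to achieve: {goal}\n"
            f"You may call these existing skills: {list(self.skill_library)}"
        )

        # 4. Run the code; on failure, feed the errors and game feedback back
        #    to the model and ask it to refine the code.
        for _ in range(max_retries):
            result = self.game.run_code(code)
            if result.success:
                # 5. Store the working code so future goals can build on it.
                self.skill_library[goal] = code
                return goal
            code = query_gpt4(
                f"The code for '{goal}' failed.\n"
                f"Error: {result.error}\nGame feedback: {result.feedback}\n"
                f"Previous code:\n{code}\nPlease fix it."
            )
        return None
```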

While chatbots like ChatGPT have wowed the world with their eloquence and apparent knowledge—even if they often make things up—Voyager shows the huge potential for language models to perform helpful actions on computers. Using language models in this way could perhaps automate many routine office tasks, potentially one of the technology’s biggest economic impacts.

The process that Voyager uses with GPT-4 to figure out how to do things in Minecraft might be adapted for a software assistant that works out how to automate tasks via the operating system on a PC or phone. OpenAI, the startup that created ChatGPT, has added “plugins” to the bot that allow it to interact with online services such as grocery delivery app Instacart. Microsoft, which owns Minecraft, is also training AI programs to play it, and the company recently announced Windows 11 Copilot, an operating system feature that will use machine learning and APIs to automate certain tasks. It may be a good idea to experiment with this kind of technology inside a game like Minecraft, where flawed code can do relatively little harm.

Video games have long been a test bed for AI algorithms, of course. AlphaGo, the DeepMind machine learning program that mastered the extremely subtle board game Go back in 2016, came from a lab whose earlier agents cut their teeth playing simple Atari video games. AlphaGo used a technique called reinforcement learning, which trains an algorithm to play a game by giving it positive and negative feedback, for example from the score inside a game.
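For readers who want to see what that feedback loop looks like mechanically, here is a minimal, hedged Python sketch of tabular Q-learning, one classic reinforcement learning rule, in which a reward (for example, a change in the in-game score) nudges the agent's value estimates up or down. It is illustrative only; AlphaGo's actual training combined deep neural networks, self-play and tree search rather than a simple lookup table like this.

```python
import random
from collections import defaultdict

# (state, action) -> estimated long-term score; unvisited pairs default to 0.0
Q = defaultdict(float)

def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.99):
    """Update one value estimate from a single piece of feedback.

    Positive rewards pull the estimate for the action taken up,
    negative rewards push it down.
    """
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def choose_action(Q, state, actions, epsilon=0.1):
    """Mostly pick the best-looking action, occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```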

It is more difficult for this method to guide an agent in an open-ended game such as Minecraft, where there is no score or set of objectives and where a player’s actions may not pay off until much later. Whether or not you believe we should be preparing to contain the existential threat from AI right now, Minecraft seems like an excellent playground for the technology.

This story originally appeared on wired.com.


Google’s Android and Chrome extensions are a very sad place. Here’s why


Photo Illustration by Miguel Candela/SOPA Images/LightRocket via Getty Images

No wonder Google is having trouble keeping up with policing its app store. Since Monday, researchers have reported that hundreds of Android apps and Chrome extensions with millions of installs from the company's official marketplaces have included functions for snooping on user files, manipulating the contents of clipboards, and injecting deliberately obfuscated code into webpages.

Google has removed many but not all of the malicious entries, the researchers said, but only after they were reported, and by then, they were on millions of devices—and possibly hundreds of millions. The researchers aren’t pleased.

A very sad place

“I’m not a fan of Google’s approach,” extension developer and researcher Wladimir Palant wrote in an email. In the days before Chrome, when Firefox had a bigger piece of the browser share, real people reviewed extensions before making them available in the Mozilla marketplace. Google took a different approach by using an automated review process, which Firefox then copied.

“As automated reviews are frequently missing malicious extensions and Google is very slow to react to reports (in fact, they rarely react at all), this leaves users in a very sad place.”

Researchers and security advocates have long directed the same criticism at Google’s process for reviewing Android apps before making them available in its Play marketplace. The past week provides a stark reason for the displeasure.

On Monday, security company Dr.Web reported finding 101 apps, with a combined 421 million downloads from Play, that contained code allowing a host of spyware activities, including:

  • Obtaining a list of files in specified directories
  • Verifying the presence of specific files or directories on the device
  • Sending a file from the device to the developer
  • Copying or substituting the content of clipboards

ESET researcher Lukas Stefanko analyzed the apps reported by Dr.Web and confirmed the findings. In an email, he said that for the file snooping to work, users would first have to approve a permission known as READ_EXTERNAL_STORAGE, which, as its name implies, allows apps to read files stored on a device. While that’s one of the more sensitive permissions a user can grant, it’s required to perform many of the apps’ purported purposes, such as photo editing, managing downloads, and working with multimedia, browser apps, or the camera.

Dr.Web said that the spyware functions were supplied by a software development kit (SDK) used to create each app. SDKs help streamline the development process by automating certain commonly performed tasks. Dr.Web identified the SDK enabling the snooping as SpinOK. Attempts to contact the SpinOK developer for comment were unsuccessful.

On Friday, security firm CloudSEK extended the list of apps using SpinOK to 193 and said that of those, 43 remained available in Play. In an email, a CloudSEK researcher wrote:

The Android.Spy.SpinOk spyware is a highly concerning threat to Android devices, as it possesses the capability to collect files from infected devices and transfer them to malicious attackers. This unauthorized file collection puts sensitive and personal information at risk of being exposed or misused. Moreover, the spyware’s ability to manipulate clipboard contents further compounds the threat, potentially allowing attackers to access sensitive data such as passwords, credit card numbers, or other confidential information. The implications of such actions can be severe, leading to identity theft, financial fraud, and various privacy breaches.

Chrome users who obtain extensions from Google's Chrome Web Store didn't fare any better this week. On Wednesday, Palant reported 18 extensions that contained deliberately obfuscated code that reached out to a server located at serasearchtop[.]com. Once there, the extensions injected mysterious JavaScript into every webpage a user viewed. In all, the 18 extensions had some 55 million downloads.

On Friday, security firm Avast confirmed Palant’s findings and identified 32 extensions with 75 million reported downloads, though Avast said the download counts may have been artificially inflated.

It's unknown precisely what the injected JavaScript did, because neither Palant nor Avast could view the code. While both suspect the purpose was to hijack search results and spam users with ads, they say the extensions went well beyond being spyware and instead constituted malware.

"Being able to inject arbitrary JavaScript code into each and every webpage has enormous abuse potential," Palant explained. "Redirecting search pages is only the one *confirmed* way in which this power has been abused."


Air Force denies running simulation where AI drone “killed” its operator


An armed unmanned aerial vehicle on a runway, but orange. (Image: Getty Images)

Over the past 24 hours, several news outlets reported a now-retracted story claiming that the US Air Force had run a simulation in which an AI-controlled drone “went rogue” and “killed the operator because that person was keeping it from accomplishing its objective.” The US Air Force has denied that any simulation ever took place, and the original source of the story says he “misspoke.”

The story originated in a recap published on the website of the Royal Aeronautical Society that served as an overview of sessions at the Future Combat Air & Space Capabilities Summit that took place last week in London.

In a section of that piece titled “AI—is Skynet here already?” the authors of the piece recount a presentation by USAF Chief of AI Test and Operations Col. Tucker “Cinco” Hamilton, who spoke about a “simulated test” where an AI-enabled drone, tasked with identifying and destroying surface-to-air missile sites, started to perceive human “no-go” decisions as obstacles to achieving its primary mission. In the “simulation,” the AI reportedly attacked its human operator, and when trained not to harm the operator, it instead destroyed the communication tower, preventing the operator from interfering with its mission.

The Royal Aeronautical Society quotes Hamilton as saying:

We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.

We trained the system—”Hey don’t kill the operator—that’s bad. You’re gonna lose points if you do that.” So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.

This juicy tidbit about an AI system apparently deciding to kill its simulated operator began making the rounds on social media and was soon picked up by major publications like Vice and The Guardian (both of which have since updated their stories with retractions). But soon after the story broke, people on Twitter began to question its accuracy, with some saying that by “simulation,” the military is referring to a hypothetical scenario, not necessarily a rules-based software simulation.

Today, Insider published a firm denial from the US Air Force, which said, “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Not long after, the Royal Aeronautical Society updated its conference recap with a correction from Hamilton:

Col. Hamilton admits he “misspoke” in his presentation at the Royal Aeronautical Society FCAS Summit, and the “rogue AI drone simulation” was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation, saying: “We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome.” He clarifies that the USAF has not tested any weaponized AI in this way (real or simulated) and says, “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”

The misunderstanding and quick viral spread of a “too good to be true” story show how easy it is to unintentionally spread erroneous news about “killer” AI, especially when it fits preconceived notions of AI malpractice.

Still, many experts called out the story as being too pat to begin with, and not just because of technical critiques explaining that a military AI system wouldn’t necessarily work that way. As a BlueSky user named “kilgore trout” humorously put it, “I knew this story was bullsh*t because imagine the military coming out and saying an expensive weapons system they’re working on sucks.”
