
Bendgate Pro? Apple’s new iPad is easily bent out of shape


Flexible. (Screenshot by ZDNet)

In recent years, new devices have been a little straight-laced.

There was a time, you see, when the most important thing about a new phone was whether it could be easily bent.


The iPhone 6 Plus, for example, was the subject of the famous Bendgate. Initial tests by instant examiners showed that it wasn’t wise to put the phone in your back pocket and sit on it.

Now, though, we may have a new, warped Apple problem.

Famed tester Zack Nelson, aka JerryRigEverything, got hold of the new iPad Pro and bent it completely out of shape.

“The iPad Pro doesn’t have any of that structural integrity stuff,” says Nelson in his usual, inflexibly deadpan mode.

He adds that it’s like “tinfoil wrapped around mashed potatoes.”

Well, it’s quite thin. And when things are quite thin, they’re often quite bendable and easily mashed.

What’s perhaps a little surprising, though, is how easily this new iPad Pro bends and breaks. It doesn’t seem as if Nelson exerts too much energy to achieve his destructive ends.

It is, of course, often entertaining when the first thing people want to do with a new product is attempt to destroy it.

It’s also, though, a wily reminder that when manufacturers try to give customers what they (think they) want, a few compromises might just occur along the way.

Ultimately, when you buy an extremely useful, extremely thin, and extremely light device, it’s worth being extremely responsible in the way you look after it.

Many people — dare I even suggest, most people — aren’t.


Phones are thrown into pockets and end up stuffed with lint.

Tablets are thrown into bags and their survival depends on what else is in the bag, how hard the bag is thrown, what sort of surface it strikes when the throwing is complete, and whether the bag is ever used as a cushion to sit on or as a goalpost for an impromptu game of soccer.

Of course, this just might contribute a little to Apple’s repair profits — I’m sure they get shoved under “services.”

With phones, most people display their lack of taste and wrap their phones in cases for added protection.

Though there are cases for the new iPad Pro, I’m not sure how much they can do to protect the entire structure.


Yes, this means you’re going to have to look after your new machine.

Can you cope with that?




New privacy bill would put major limits on targeted advertising – TechCrunch


A new bill seeks to dramatically reshape the online advertising landscape to the detriment of companies like Facebook, Google and data brokers that leverage deep stores of personal information to make money from targeted ads.

The bill, the Banning Surveillance Advertising Act, introduced by Reps. Anna Eshoo (D-CA) and Jan Schakowsky (D-IL) in the House and Cory Booker (D-NJ) in the Senate, would dramatically limit the ways that tech companies serve ads to their users, banning the use of personal data altogether.

Any targeting based on “protected class information, such as race, gender, and religion, and personal data purchased from data brokers” would be off-limits were the bill to pass. Platforms could still target ads based on general location data at the city or state level, and “contextual advertising” based on the content a user is interacting with would still be allowed.

The bill would empower the FTC and state attorneys general to enforce violations, with fines of up to $5,000 per incident for knowing violations.

“The ‘surveillance advertising’ business model is premised on the unseemly collection and hoarding of personal data to enable ad targeting,” Rep. Eshoo said. “This pernicious practice allows online platforms to chase user engagement at great cost to our society, and it fuels disinformation, discrimination, voter suppression, privacy abuses, and so many other harms.”

Sen. Booker called the targeted advertising model “predatory and invasive,” stressing how the practice exacerbates misinformation and extremism on social media platforms.

Privacy-minded companies including search engine maker DuckDuckGo and Proton, creator of ProtonMail, backed the legislation along with organizations including the Electronic Privacy Information Center (EPIC), the Anti-Defamation League, Accountable Tech and Common Sense Media.


Snapchat says it’s getting better at finding illicit drug dealers before users do – TechCrunch


Snapchat has faced increasing criticism in recent years as the opioid crisis plays out on social media, often with tragic results.

In October, an NBC investigation reported the stories of a number of young people aged 13 to 23 who died after purchasing fentanyl-laced pills on Snapchat. Snapchat parent company Snap responded by committing to improve its ability to detect and remove this kind of content and ushering users who search for drug-related content to an educational harm reduction portal.

Snapchat provided a glimpse at its progress against illicit drug sales on the platform, noting that 88 percent of the drug-related content it finds is now identified proactively by automated systems, with community reporting accounting for the other 12 percent. Snap says this number is up by a third since its October update, indicating that more of this content is being detected up front before being identified by users.

“Since this fall, we have also seen another important indicator of progress: a decline in community-reported content related to drug sales,” Snap wrote in a blog post. “In September, over 23% of drug-related reports from Snapchatters contained content specifically related to sales, and as a result of proactive detection work, we have driven that down to 16% as of this month. This marks a decline of 31% in drug-related reports. We will keep working to get this number as low as possible.”
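For readers parsing the percentages in the two paragraphs above: the 31% figure is a relative decline in the share of drug-related reports that involve sales (not a drop of 31 percentage points), and “up by a third” implies an earlier proactive-detection rate of roughly two-thirds. Here is a minimal sketch of that arithmetic using the rounded figures quoted above; the underlying report counts aren’t published, so the values are assumptions for illustration only.

```python
# Illustrative arithmetic behind the rounded figures Snap quotes above.
# The underlying report counts are not published; these values are assumptions.

# Share of drug-related content found proactively: "88 percent ... up by a third"
proactive_now = 0.88
implied_proactive_before = proactive_now / (1 + 1 / 3)
print(f"Implied earlier proactive rate: {implied_proactive_before:.0%}")  # ~66%

# Share of drug-related reports involving sales: "over 23%" down to "16%"
september_share = 0.23
current_share = 0.16
relative_decline = (september_share - current_share) / september_share
print(f"Relative decline: {relative_decline:.0%}")  # ~30%, consistent with the ~31% Snap cites
```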

The company says that it also recently introduced a new safeguard that prevents 13- to 17-year-old users from showing up in its Quick Add user search results unless they have friends in common with the person searching. That precaution is meant to discourage minors from connecting with users they don’t know, in this case to deter online drug transactions.

Snapchat is also adding information from the CDC on the dangers of fentanyl into its “Heads Up” harm reduction portal and partnering with the Community Anti-Drug Coalitions of America (CADCA), a global nonprofit working to “prevent substance misuse through collaborative community efforts.”

The company works with experts to identify new search terms that sellers use to get around its rules against selling illicit substances. Snapchat calls the work to keep its lexicon of drug sales jargon up to date “a constant, ongoing effort.”

The U.S. Drug Enforcement Administration published a warning last month about the dangers of pills purchased online that contain fentanyl, a synthetic opioid that is lethal at far smaller doses than heroin. Because fentanyl increasingly shows up in illicitly purchased drugs, including those purchased online, it can prove fatal to users who believe they are ingesting other substances.

DEA Administrator Anne Milgram called Snapchat and other social media apps “haven[s] for drug traffickers” in a December interview with CBS. “Because drug traffickers are harnessing social media because it is accessible, they’re able to access millions of Americans and it is anonymous and they’re able to sell these fake pills that are not what they say they are,” Milgram said.

While social media platforms dragged their feet about investing in proactive, aggressive content moderation, online drug sales took root. Companies have sealed up some of the more obvious ways to find illicit drugs online (a few years ago it was as simple as searching #painpills on Instagram, for instance) but savvy sellers adapt their practices to get around new rules as they’re made.

The rise of fentanyl is a significant factor exacerbating the American opioid epidemic and the substance’s prevalence in online sales presents unique challenges. In an October hearing on children’s online safety, Snap called the issue the company’s “top priority,” but many lawmakers and families affected by online drug sales remain skeptical that social media companies are taking their role in the opioid crisis seriously.

 


Twitter expands misinformation reporting feature to more international markets – TechCrunch


Last August, Twitter introduced a new feature in select markets, including the U.S., that invited users to report misinformation they encountered on its platform — including things like election-related or Covid-19 misinformation, for example. Now the company is rolling out the feature to more markets as its test expands. In addition to the U.S., Australia, and South Korea, where the feature had already gone live, Twitter is rolling out the reporting option to users in Brazil, Spain, and the Philippines.

The company also offered an update on the feature’s traction, noting that it has received more than 3.7 million user-submitted reports since the feature’s debut. For context, Twitter has around 211 million monetizable daily active users as of its most recent earnings, 37 million of which are U.S.-based and 174 million based in international markets.

According to Yoel Roth, Twitter’s head of site integrity, the “vast majority” of content the company takes action on for misinformation is identified proactively through automation (which accounts for more than 50% of enforcements) or proactive monitoring. User-submitted reports via the new feature, however, help Twitter identify patterns of misinformation, the area where Roth says the feature has delivered the most success so far. This is particularly true for non-text-based misinformation, such as media and URLs that link to content hosted off Twitter’s platform.

But he also noted that when Twitter reviewed a subset of individual reported tweets, only around 10% were considered “actionable” compared with 20-30% in other policy areas, as many tweets analyzed didn’t contain misinformation at all.

In markets where the feature is available, users can report misinformation by clicking the three-dot menu in the upper-right of a tweet, then choosing the “report tweet” option. From there, they’ll be able to click the option “it’s misleading.”

While Twitter already offered a way to report violating content on its platform before the addition of the new flagging option, its existing reporting flow didn’t offer a clear way to report tweets containing misinformation. Instead, users would have to pick from options like “it’s suspicious or spam” or “it’s abusive or harmful,” among others, before further narrowing down how the specific tweet was in violation of Twitter’s rules.

The ability to flag tweets as misinformation also allows users to more quickly and directly flag content that may not fit neatly into existing rules. The reports themselves are tied into Twitter’s existing enforcement flow, where a combination of human review and moderation is used to determine whether punitive action should be taken. Twitter had also said the reported tweets would be sorted for review based on priority, meaning tweets from accounts with a large following or those showing higher levels of engagement would be reviewed first.

The feature is rolling out at a time when social networks are being pressured to clean up the misinformation they’ve allowed to spread across their platforms, or risk regulation that will enforce such cleanups and perhaps even enact penalties for not doing so.

The flagging option is not the only way Twitter is working to fight misinformation. The company also runs an experiment called Birdwatch, which aims to crowdsource fact-checking by allowing Twitter users to annotate misleading tweets with factual information. This service is still in pilot testing and being updated based on user feedback.
