
The damage of defaults


Apple popped out a new pair of AirPods this week. The design looks exactly like the old pair of AirPods. Which means I’m never going to use them because Apple’s bulbous earbuds don’t fit my ears. Think square peg, round hole.

The only way I could rock AirPods would be to walk around with hands clamped to the sides of my head to stop them from falling out. Which might make a nice cut in a glossy Apple ad for the gizmo — suggesting a feeling of closeness to the music, such that you can’t help but cup; a suggestive visual metaphor for the aural intimacy Apple surely wants its technology to communicate.

But the reality of trying to use earbuds that don’t fit is not that at all. It’s just shit. They fall out at the slightest movement so you either sit and never turn your head or, yes, hold them in with your hands. Oh hai, hands-not-so-free-pods!

The obvious point here is that one size does not fit all — howsoever much Apple’s Jony Ive and his softly spoken design team believe they have devised a universal earbud that pops snugly in every ear and just works. Sorry, nope!

A proportion of iOS users — perhaps other petite women like me, or indeed men with less capacious ear holes — are simply being removed from Apple’s sales equation where earbuds are concerned. Apple is pretending we don’t exist.

Sure, we can just buy another brand of more appropriately sized earbuds. The in-ear, noise-canceling kind is my preference. Apple does not make ‘InPods’. But that’s not a huge deal. Well, not yet.

It’s true, the consumer tech giant did also delete the headphone jack from iPhones. Thereby deprecating my existing pair of wired in-ear headphones (if I ever upgrade to a 3.5mm-jack-less iPhone). But I could just shell out for Bluetooth wireless in-ear buds that fit my shell-like ears and carry on as normal.

Universal in-ear headphones have existed for years, of course. A delightful design concept. You get a selection of different sized rubber caps shipped with the product and choose the size that best fits.

Unfortunately Apple isn’t in the ‘InPods’ business though. Possibly for aesthetic reasons. Most likely because — and there’s more than a little irony here — an in-ear design wouldn’t be naturally roomy enough to fit all the stuff Siri needs to, y’know, fake intelligence.

Which means people like me with small ears are being passed over in favor of Apple’s voice assistant. So that’s AI: 1, non-‘standard’-sized human: 0. Which also, unsurprisingly, feels like shit.

I say ‘yet’ because if voice computing does become the next major computing interaction paradigm, as some believe — given how Internet connectivity is set to get baked into everything (and sticking screens everywhere would be a visual and usability nightmare; albeit microphones everywhere are a privacy nightmare… ) — then the minority of humans with petite earholes will be at a disadvantage vs those who can just pop in their smart, sensor-packed earbud and get on with telling their Internet-enabled surroundings to do their bidding.

Will parents of future generations of designer babies select for adequately capacious earholes so their child can pop an AI in? Let’s hope not.

We’re also not at the voice computing singularity yet. Outside the usual tech bubbles it remains a bit of a novel gimmick. Amazon has drummed up some interest with in-home smart speakers housing its own voice AI Alexa (a brand choice that has, incidentally, caused a verbal headache for actual humans called Alexa). Though its Echo smart speakers appear to mostly get used as expensive weather checkers and egg timers. Or else for playing music — a function that a standard speaker or smartphone will happily perform.

Certainly a voice AI is not something you need with you 24/7 yet. Prodding at a touchscreen remains the standard way of tapping into the power and convenience of mobile computing for the majority of consumers in developed markets.

The thing is, though, it still grates to be ignored. To be told — even indirectly — by one of the world’s wealthiest consumer technology companies that it doesn’t believe your ears exist.

Or, well, that it’s weighed up the sales calculations and decided it’s okay to drop a petite-holed minority on the cutting room floor. So that’s ‘ear meet AirPod’. Not ‘AirPod meet ear’ then.

But the underlying issue is much bigger than Apple’s (in my case) oversized earbuds. Its latest shiny set of AirPods is just an ill-fitting reminder of how many technology defaults simply don’t ‘fit’ the world as claimed.

Because if cash-rich Apple’s okay with promoting a universal default (that isn’t), think of all the less well resourced technology firms chasing scale for other single-sized, ill-fitting solutions. And all the problems flowing from attempts to mash ill-mapped technology onto society at large.

When it comes to wrong-sized physical kit I’ve had similar issues with standard office computing equipment and furniture. Products that seem — surprise, surprise! — to have been default designed with a 6ft strapping guy in mind. Keyboards so long they end up gifting the smaller user RSI. Office chairs that deliver chronic back pain as a service. Chunky mice that quickly rack the hand with pain. (Apple is a historical offender there too I’m afraid.)

The fix for such ergonomic design failures is simply not to use the kit. To find a better-sized (often DIY) alternative that does ‘fit’.

But a DIY fix may not be an option when the mismatch is embedded at the software level — and where a system is being applied to you, rather than you the human wanting to augment yourself with a bit of tech, such as a pair of smart earbuds.

With software, embedded flaws and system design failures may also be harder to spot because it’s not necessarily immediately obvious there’s a problem. Oftentimes algorithmic bias isn’t visible until damage has been done.

And there’s no shortage of stories already about how software defaults configured for a biased median have ended up causing real-world harm. (See for example: ProPublica’s analysis of the COMPAS recidivism tool — software it found incorrectly judging black defendants more likely to reoffend than white defendants. So software amplifying existing racial prejudice.)

Of course AI makes this problem so much worse.

Which is why the emphasis must be on catching bias in the datasets — before there is a chance for prejudice or bias to be ‘systematized’ and get baked into algorithms that can do damage at scale.

The algorithms must also be explainable. And outcomes auditable. Transparency as disinfectant; not secret blackboxes stuffed with unknowable code.
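
To make the auditing point concrete, here is a minimal, hypothetical sketch of the kind of outcome check an audit might run: comparing false positive rates across demographic groups to surface exactly the sort of disparity the COMPAS analysis exposed. It is written in Python with made-up field names and toy data, not ProPublica’s actual methodology.

```python
# Illustrative sketch only: hypothetical field names ('group', 'predicted',
# 'actual') and toy data, not any real tool's schema or methodology.
from collections import defaultdict

def false_positive_rates(records):
    """Return the false positive rate per group.

    'predicted' is True if the model flagged the person as high risk;
    'actual' is True if the person actually went on to reoffend.
    """
    false_positives = defaultdict(int)  # flagged high risk, did not reoffend
    negatives = defaultdict(int)        # everyone who did not reoffend
    for record in records:
        if not record['actual']:
            negatives[record['group']] += 1
            if record['predicted']:
                false_positives[record['group']] += 1
    return {g: false_positives[g] / negatives[g] for g in negatives}

# Toy data: a wide gap between groups is the red flag an audit looks for.
sample = [
    {'group': 'A', 'predicted': True,  'actual': False},
    {'group': 'A', 'predicted': False, 'actual': False},
    {'group': 'B', 'predicted': False, 'actual': False},
    {'group': 'B', 'predicted': False, 'actual': False},
]
print(false_positive_rates(sample))  # e.g. {'A': 0.5, 'B': 0.0}
```

The point of a check like this is timing: it needs to run on held-out data before a tool starts scaling decisions across a justice system, not after.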

Doing all this requires huge up-front thought and effort on system design, and an even bigger change of attitude. It also needs massive, massive attention to diversity. An industry-wide championing of humanity’s multifaceted and multi-sized reality — and to making sure that’s reflected in both data and design choices (and therefore the teams doing the design and dev work).

You could say what’s needed is a recognition that there’s never, ever a one-size-fits-all plug.

Indeed, that all algorithmic ‘solutions’ are abstractions that make compromises on accuracy and utility. And that those trade-offs can become viciously cutting knives that exclude, deny, disadvantage, delete and damage people at scale.

Expensive earbuds that won’t stay put are just a handy visual metaphor.

And while discussion about the risks and challenges of algorithmic bias has stepped up in recent years, as AI technologies have proliferated — with mainstream tech conferences actively debating how to “democratize AI” and bake diversity and ethics into system design via a development focus on principles like transparency, explainability, accountability and fairness — the industry has not even begun to fix its diversity problem.

It’s barely moved the needle on diversity. And its products continue to reflect that fundamental flaw.

Many — if not most — of the tech industry’s problems can be traced back to the fact that inadequately diverse teams are chasing scale while lacking the perspective to realize their system design is repurposing human harm as a de facto performance measure. (Although ‘lack of perspective’ is the charitable interpretation in certain cases; moral vacuum may be closer to the mark.)

As WWW creator, Sir Tim Berners-Lee, has pointed out, system design is now society design. That means engineers, coders, AI technologists are all working at the frontline of ethics. The design choices they make have the potential to impact, influence and shape the lives of millions and even billions of people.

And when you’re designing society a median mindset and limited perspective cannot ever be an acceptable foundation. It’s also a recipe for product failure down the line.

The current backlash against big tech shows that the stakes and the damage are very real when poorly designed technologies get dumped thoughtlessly on people.

Life is messy and complex. People won’t fit a platform that oversimplifies and overlooks. And if your excuse for scaling harm is ‘we just didn’t think of that’ you’ve failed at your job and should really be headed out the door.

Because the consequences for being excluded by flawed system design are also scaling and stepping up as platforms proliferate and more life-impacting decisions get automated. Harm is being squared. Even as the underlying industry drum hasn’t skipped a beat in its prediction that everything will be digitized.

Which means that horribly biased parole systems are just the tip of the ethical iceberg. Think of healthcare, social welfare, law enforcement, education, recruitment, transportation, construction, urban environments, farming, the military: the list of what will be digitized — and of manual or human-overseen processes that will get systematized and automated — goes on.

Software — runs the industry mantra — is eating the world. That means badly designed technology products will harm more and more people.

But responsibility for sociotechnical misfit can’t just be scaled away as so much ‘collateral damage’.

So while an ‘elite’ design team led by a famous white guy might be able to craft a pleasingly curved earbud, such an approach cannot and does not automagically translate into AirPods with perfect, universal fit.

It’s someone’s standard. It’s certainly not mine.

We can posit that a more diverse Apple design team might have been able to rethink the AirPod design so as not to exclude those with smaller ears. Or make a case to convince the powers that be in Cupertino to add another size choice. We can but speculate.

What’s clear is the future of technology design can’t be so stubborn.

It must be radically inclusive and incredibly sensitive. Human-centric. Not locked to damaging defaults in its haste to impose a limited set of ideas.

Above all, it needs a listening ear on the world.

Indifference to difference and a blindspot for diversity will find no future here.




Fitbits will soon lose the ability to sync with computers


The Fitbit Ionic currently lets you download music to the device. (Image credit: Valentina Palladino)

Fitbit owners who like to sync their fitness tracker with a computer to enable offline listening of downloaded music without a monthly fee will soon need to change their approach.

As spotted by 9to5Google on Saturday, Fitbit will no longer allow users to sync their devices over a computer starting in October.

“On October 13, 2022, we’re removing the option to sync your Fitbit device with the Fitbit Connect app on your computer,” a Fitbit support page reads. “Download and use the Fitbit app on your phone to sync your device.”

The Fitbit Connect desktop software lets you transfer music from your computer to the wearable if you have a supporting watch. Newer devices, like the Fitbit Sense and Versa 3, cannot store downloaded music.

After October, owners of older Fitbits, like the Fitbit Versa and Versa 2, will also have to rely on subscriptions to not-so-popular services for offline music. “After the Fitbit Connect app on your computer is deactivated, you can continue to transfer music to your watch through the Deezer app. Customers in the United States can also use the Pandora app,” Fitbit’s support page says.

Deezer and Pandora both require monthly subscriptions for music downloading and offline listening, with fees starting at $10 per month after any eligible trial periods.

Remember, Fitbits still don’t let you add music you’ve downloaded through streaming services, though you can control music on your smartphone with a Fitbit.

The new limitation shouldn’t last forever. When Google acquired Fitbit in 2021, the fitness tracker company confirmed that Fitbits running Google’s Wear OS are on the way. Wear OS has offline support for subscribers to Spotify Premium and YouTube Music.

The death of the Fitbit Connect desktop app will mean that Fitbit wearers who have managed to avoid the brand’s mobile app have fewer options. An increasingly subscription-focused marketplace has been coming for a long time now. Fitbit Connect is still downloadable on Windows 10 and Mac OS X, but the company says the Fitbit app for iPhone and Android provides the “best experience.”

For now, you can still download and listen to music from your Fitbit; you just won’t be able to add more songs after October.


Google hits back at Sonos with voice command patent lawsuit


Sonos Beam soundbar. (Image credit: Sonos)

Google and Sonos are headed back to court. After Google lost an earlier patent case over speaker volume controls, Google is now suing Sonos over voice control technology. Google confirmed the lawsuit to The Verge this morning, with the company saying it wants to “defend our technology and challenge Sonos’s clear, continued infringement of our patents.” Google alleges infringement of seven patents related to voice input, including hot-word detection and a system that determines which speaker in a group should respond to voice commands.

Sonos has typically supported the Google Assistant and Amazon Alexa for voice control, but Google and Amazon are also Sonos’s biggest speaker competitors. So Sonos launched its own voice assistant feature in May, opening it up to this new pile of Google patents. (For now, Sonos supports all three options.)

Google rarely uses patents offensively, but this is part of a multi-lawsuit battle that has sent the company’s smart speaker line reeling after Google lost a previous ruling in January. Rather than pay royalties to Sonos, Google decided to reach into customers’ homes and start breaking devices they had already bought. Google stripped Nest Audio and Google Home speakers of the ability to control volume for a speaker group, turning what was an effortless and common-sense task into an ordeal requiring a screen full of individual sliders. It’s hard to overstate how annoying this is for consumers, as volume control is a primary function of any speaker.

Sonos originated the connected speaker concept, but it has been facing competition from Big Tech giants in recent years. Sonos says it gave Google an inside look at its operations in 2013 while Sonos was asking for Google Play Music support and that Google used that access to “blatantly and knowingly copy” Sonos’s technology. Google’s first smart speaker launched three years later.


Rumors, delays, and early testing suggest Intel’s Arc GPUs are on shaky ground


Arc is Intel’s attempt to shake up the GPU market.

Almost a year ago, Intel made a big announcement about its push into the dedicated graphics business. Intel Arc would be the brand name for a new batch of gaming GPUs, pushing far beyond the company’s previous efforts and competing directly with Nvidia’s GeForce and AMD’s Radeon GPUs.

Arc is the culmination of years of work, going back to at least 2017, when Intel poached AMD GPU architect Raja Koduri to run its own graphics division. And while Intel would be trying to break into an established and fiercely competitive market, it would benefit from the experience and gigantic install base that the company had cultivated with its integrated GPUs.

Intel sought to prove its commitment to Arc by showing off a years-long road map, with four separate named GPU architectures already in the pipeline. Sure, the GPUs wouldn’t compete with top-tier GeForce and Radeon cards, but they would address the crucial mainstream GPU market, and high-end cards would follow once the brand was more established.

All of that makes Arc a lot more serious than Larrabee, Intel’s last effort to break into the dedicated graphics market. Larrabee was canceled late in its development because of delays and disappointing performance, and Arc GPUs are actual things that you can buy (if only in a limited way, for now). But the challenges of entering the GPU market haven’t changed since the late 2000s. Breaking into a mature market is difficult, and experience with integrated GPUs isn’t always applicable to dedicated GPUs with more complex hardware and their own pool of memory.

Regardless of the company’s plans for future architectures, Arc’s launch has been messy. And while the company is making some efforts to own those problems, a combination of performance issues, timing, and financial pressures could threaten Arc’s future.

Early turbulence

A year after its announcement, it seems that Arc is already on shaky ground. Intel has proven characteristically incapable of meeting its initial launch estimates, just barely managing to pull off a paper launch of two low-end laptop GPUs in Q1 (the original launch window) and failing to follow up with widely available desktop cards in Q2. The company has been very public about its struggles with drivers, which are hurting the cards’ performance in older but still widely played games. And the graphics division is losing money at a time when revenue is tumbling across the company.

And that’s just what is happening in public. A report from the German-language Igor’s Lab claims that Intel’s board partners (the ones who would be putting the Arc GPU dies on boards, packaging them, and shipping them out) and the OEMs who would be putting Arc GPUs into their prebuilt computers are getting frustrated with the delays and lack of communication.

A long, conspiratorial video from YouTuber Moore’s Law is Dead goes even further, suggesting (using a combination of “internal sources” and speculation) that people in Intel’s graphics division are “lying” to consumers and others in the company about the state of the GPUs, that the first-generation Alchemist architecture has fundamental performance-limiting flaws, and that Intel is having internal discussions about discontinuing Arc GPUs after the second-generation “Battlemage” architecture.

We’ve contacted Intel and several GPU manufacturers to see if they had anything to share on the matter; the short version is no—Intel has no news on release dates. Asus says it “[doesn’t] currently have anything in the pipeline for Intel Arc on the North America side,” and other companies haven’t responded yet. For his part, Intel graphics VP Raja Koduri has said publicly that “we are very much committed to our roadmap” and that there will be “more updates from us this quarter” and “four new product lines by the end of the year.”
