The 9 biggest questions about Google’s Stadia game streaming service – TechCrunch

Google’s Stadia is an impressive piece of engineering, to be sure: delivering high-definition, high-framerate, low-latency video to devices like tablets and phones is an accomplishment in itself. But the game streaming service faces serious challenges if it wants to compete with the likes of Xbox and PlayStation, or even plain old PCs and smartphones.

Here are our nine biggest questions about what the service will be and how it’ll work.

1. What’s the game selection like?

We saw Assassin’s Creed: Odyssey (a lot), Doom: Eternal and a few other titles running on Stadia, but otherwise Google’s presentation was pretty light on details about exactly which games we can expect to see on the service.

It’s not an easy question to answer, since this isn’t just a matter of “all PC games” or “all games from these six publishers.” Stadia requires that a game be ported, or partly recoded, to fit its new environment — in this case a Linux-powered PC. That’s not unusual, but it isn’t trivial either.

Porting is just part of the job for a major studio like Ubisoft, which regularly publishes on multiple platforms simultaneously, but for a smaller developer or a more specialized game it’s not so straightforward. Jade Raymond will be in charge of both first-party Stadia games and developer relations; she said that the team will be “working with external developers to bring all of the bleeding edge Google technology you have seen today available to partner studios big and small.”

What that tells me is that every game that comes to Stadia will require special attention. That’s not a good sign for selection, but it does suggest that anything available on it will run well.

2. What will it cost?

Perhaps the topic Google avoided the most was what the heck the business model is for this whole thing.

Do you pay a subscription fee? Is it part of YouTube or maybe YouTube Red? Do they make money off sales of games after someone plays the instant demo? Is it free for an hour a day? Will it show ads every 15 minutes? Will publishers foot the bill as part of their normal marketing budget? No one knows!

It’s a difficult play because the most obvious way to monetize also limits the product’s exposure. Asking people to subscribe adds a lot of friction to a platform where the entire idea is to get you playing within 5 seconds.

Inserting ads is an easy way to let people jump right in while still monetizing a little. You could even advertise the game itself and offer a one-time 10 percent discount or something, then mention that YouTube Red subscribers don’t see ads at all.

Sounds reasonable, but Google didn’t mention anything like this at all. We’ll probably hear more later this year closer to launch, but it’s hard to judge the value of the service when we have no idea what it will cost.

3. What about iOS devices?

Google and Apple are bitter rivals in a lot of ways, but it’s hard to get around the fact that iPhone owners tend to be the most lucrative mobile customers. Yet no iPhones appeared in the live demo, and no iOS availability was mentioned.

Depending on its business model, Google may have locked itself out of the App Store. Apple doesn’t let you essentially run a store within its store (as we have seen in cases like Amazon and Epic) and if that’s part of the Stadia offering, it’s not going to fly.

An app that just lets you play might be a possibility, but since none was mentioned, it’s possible Google is using Stadia as a platform exclusive to draw people to Pixel devices. That kind of puts a limit on the pitch that you can play on devices you already have.

4. What about games you already own?

A big draw of game streaming is to buy a game once and play it anywhere. Sometimes you want to play the big awesome story parts on your 60-inch TV in surround sound, but do a little inventory and quest management on your laptop at the cafe. That’s what systems like Steam Link offer.

But Google didn’t mention how its ownership system will work, or whether there would be a way to play games you already own on the service. This is a big consideration for many gamers.

It was mentioned that there would be cross-platform play, and perhaps even the ability to bring saves to other platforms, but how that would work was left to the imagination. Frankly, I’m skeptical.

Letting people show they own a game and giving them access to it is a recipe for scamming and trouble, but not supporting it is missing out on a huge application for the service. Google’s caught between a rock and a hard place here.

5. Can you really convert viewers to players?

This is a bit more of an abstract question, but it comes from the basic idea that people specifically come to YouTube and Twitch to watch games, not play them. Mobile viewership is huge because streams are a great way to kill time on a train or bus ride, or during a break at school. These viewers often don’t want to play at those times, and couldn’t if they did want to!

So the question is: are there really enough people watching gaming content on YouTube who will actually switch to playing, just like that?

To be fair, the idea of a game trailer that lets you play what you just saw five seconds later is brilliant. I’m 100 percent on board there. But people don’t watch dozens of hours of game trailers a week — they watch famous streamers play Fortnite and PUBG and do speedruns of Dark Souls and Super Mario Bros 1. These audiences are much harder to convert into players.

The potential is real: joining a game alongside a streamer, affecting them somehow, or picking up at the spot they left off to try fighting a boss on your own or to see how their character controls. But making that happen goes far, far beyond the streaming infrastructure Google has created here; it involves rewriting the rules on how games are developed and published. We saw attempts at this from Beam, later acquired by Microsoft, but the idea never really bloomed.

Streaming is a low-commitment, passive form of entertainment, which is kind of why it’s so popular. Turning that into an active, involved form of entertainment is far from straightforward.

6. How’s the image quality?

Games these days have mind-blowing graphics. I sure had a lot of bad things to say about Anthem, but when it came to looks, that game was a showstopper. And part of what made it great was the tiny detail in textures and the subtle gradations of light that have only recently become possible with advances in shaders, volumetric fog and so on. Will those details really come through in a stream?

Don’t get me wrong: I know a 1080p stream looks decent. But the simple fact is that high-efficiency HD video compression discards detail in a noticeable way. You just can’t perfectly recreate an image when you have to send it 60 times per second, with only a few milliseconds to compress and decompress each frame. That’s simply how video compression works.
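Some back-of-the-envelope arithmetic shows the scale of the problem (the figures below are illustrative assumptions, not Google’s published encoder settings):

```python
# Raw 1080p60 video versus a generous 20 Mbps stream.
# Illustrative assumptions only -- not Stadia's actual encoder settings.

width, height = 1920, 1080   # 1080p frame
bits_per_pixel = 24          # uncompressed 8-bit RGB
fps = 60

raw_bps = width * height * bits_per_pixel * fps   # ~2.99 Gbit/s uncompressed
stream_bps = 20e6                                 # 20 Mbit/s delivered

print(f"raw video:   {raw_bps / 1e9:.2f} Gbit/s")
print(f"stream:      {stream_bps / 1e6:.0f} Mbit/s")
print(f"compression: {raw_bps / stream_bps:.0f}:1")  # roughly 150:1
```

Squeezing the picture down by a factor of roughly 150, frame after frame, in real time, is exactly where fine texture and subtle gradients get smoothed away.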

For some people this won’t be a big deal. They really might not care about the loss of some visual fidelity — the convenience factor may outweigh it by a ton. But there are others for whom it may be distracting, those who have invested in a powerful gaming console or PC that gives them better detail at higher framerates than Stadia can possibly offer.

It’s not apples to apples, but Google has to consider these things, especially if the difference is noticeable enough that game developers and publishers start noting that a game is “best experienced locally” or something like that.

7. Will people really game on the go?

I don’t question whether people play games on mobile. That’s one of the biggest businesses in the world. But I’m not sure that people want to play Assassin’s Creed: Odyssey on their iPa… I mean, Pixel Slate. Let alone their smartphone.

Games on phones and tablets are frequently time-killers built around addictive, short play sessions. Even the bigger, more console-like games on mobile usually aim for shorter sessions. That may be changing in some ways, sure, but it’s a consideration, and AAA console games really just aren’t designed for 5-10 minute bursts.

Add to that the fact that you have to carry around what looks like a fairly bulky controller, and this becomes less of an option for planes, cafes, subway rides and so on. Even if you did bring it, could you be sure you’d get the 10 or 20 Mbps needed to sustain that 60 fps video? And don’t say 5G. If anyone says 5G again after the last couple months I’m going to lose it.

Naturally the counterpoint here is Nintendo’s fabulously successful and portable Switch. But the Switch plays both sides, providing a console-like experience on the go that makes sense because of its frictionless game state saving and offline operation. Stadia doesn’t seem to offer anything like that. In some ways it could be more compelling, but it’s a hard sell right now.

8. How will multiplayer work?

Obviously multiplayer gaming is huge right now and likely will be forever, so Stadia will surely support multiplayer one way or another. But multiplayer is also really complicated.

It used to be that someone just picked up the second controller and played Luigi. Now you have friend codes, accounts, user IDs, automatic matchmaking, all kinds of junk. If I want to play The Division 2 with a friend via Stadia, how does that work? Can I use my existing account? How do I log in? Are there IP issues and will the whole rigmarole of the game running in some big server farm set off cheat detectors or send me a security warning email? What if two people want to play a game locally?

Many of the biggest gaming properties in the world are multiplayer focused, and without a very, very clear line on this it’s going to turn a lot of people off. The platform might be great for it — but they have some convincing to do.

9. Stadia?

Branding is hard. Launching a product that aims to reach millions and giving it a name that not only represents it well but isn’t already taken is hard. But that said… Stadia?

I guess the idea is that each player is kind of in a stadium of their own… or that they’re in a stadium where Ninja is playing, and then they can go down to join? Certainly Stadia is more distinctive than stadium and less trademark-fraught than Colosseum or the like. Arena is probably out too.

If only Google already owned something that indicated gaming but was simple, memorable, and fit with its existing “Google ___” set of consumer-focused apps, brands, and services.

Oh well!


When the Earth is gone, at least the internet will still be working – TechCrunch


The internet is now our nervous system. We are constantly streaming and buying and watching and liking, our brains locked into the global information matrix as one universal and coruscating emanation of thought and emotion.

What happens when the machine stops though?

It’s a question that E.M. Forster was intensely focused on more than a century ago in a short story called, rightly enough, “The Machine Stops,” about a human civilization connected entirely through machines that one day just turn off.

Those fears of downtime are not just science fiction anymore. An outage no longer just means missing a must-watch TikTok clip. Hospitals, law enforcement, the government, every corporation — the entire spectrum of human institutions that constitute civilization now deeply relies on connectivity to function.

So when it comes to disaster response, the world has dramatically changed. In decades past, the singular focus could be roughly summarized as rescue and mitigation — save who you can while trying to limit the scale of destruction. Today though, the highest priority is by necessity internet access, not just for citizens, but increasingly for the on-the-ground first responders who need bandwidth to protect themselves, keep abreast of their mission objectives, and have real-time ground truth on where dangers lurk and where help is needed.

While the sales cycles might be arduous, as we learned in part one, and the data trickles have finally turned to streams, as we saw in part two, the reality is that none of that matters if there isn’t connectivity to begin with. So in part three of this series on the future of technology and disaster response, we’re going to analyze the changing nature of bandwidth and connectivity and how they intersect with emergencies, taking a look at how telcos are creating resilience in their networks while defending against climate change, how first responders are integrating connectivity into their operations, and finally, exploring how new technologies like 5G and satellite internet will affect these critical activities.

Wireless resilience as the world burns

Climate change is inducing more intense weather patterns all around the world, creating second- and third-order effects for industries that rely on environmental stability for operations. Few industries have to adapt to that changing context as much as telecom companies, whose wired and wireless infrastructure is regularly buffeted by severe storms. Resiliency of these networks isn’t just needed for consumers — it’s absolutely necessary for the very responders trying to mitigate disasters and get the network back up in the first place.

Unsurprisingly, no issue looms larger for telcos than access to power — no juice, no bars. So all three of America’s major telcos — Verizon (which owns TechCrunch’s parent company Verizon Media, although not for much longer), AT&T and T-Mobile — have had to dramatically scale up their resiliency efforts in recent years to compensate both for the demand for wireless and the growing damage wrought by weather.

Jay Naillon, senior director of national technology service operations strategy at T-Mobile, said that the company has made resilience a key part of its network buildout in recent years, with investments in generators at cell towers that can be relied upon when the grid cannot. In “areas that have been hit by hurricanes or places that have fragile grids … that is where we have invested most of our fixed assets,” he said.

Like all three telcos, T-Mobile pre-deploys equipment in anticipation of disruptions. So when a hurricane begins to swirl in the Atlantic Ocean, the company will strategically fly in portable generators and mobile cell towers ahead of potential outages. “We look at storm forecasts for the year,” Naillon explained, and do “lots of preventative planning.” They also work with emergency managers and “run through various drills with them and respond and collaborate effectively with them” to determine which parts of the network are most at risk for damage in an emergency. Last year, the company partnered with StormGeo to accurately predict weather events.

Predictive AI for disasters is also a critical need for AT&T. Jason Porter, who leads public sector and the company’s FirstNet first-responder network, said that AT&T teamed up with Argonne National Laboratory to create a climate-change analysis tool to evaluate the siting of its cell towers and how they will weather the next 30 years of “floods, hurricanes, droughts and wildfires.” “We redesigned our buildout … based on what our algorithms told us would come,” he said, and the company has been elevating vulnerable cell towers four to eight feet on “stilts” to improve their resiliency to at least some weather events. That, he said, “gave ourselves some additional buffer.”

AT&T has also had to manage the growing complexity of creating reliability amid the chaos of a climate-change-induced world. In recent years, “we quickly realized that many of our deployments were due to weather-related events,” and the company has been “very focused on expanding our generator coverage over the past few years,” Porter said. It’s also been very focused on building out its portable infrastructure. “We essentially deploy entire data centers on trucks so that we can stand up essentially a central office,” he said, emphasizing that the company’s national disaster recovery team responded to thousands of events last year.

Particularly on its FirstNet service, AT&T has pioneered two new technologies to try to get bandwidth to disaster-hit regions faster. First, it has invested in drones to offer wireless services from the sky. After Hurricane Laura hit Louisiana last year with record-setting winds, our “cell towers were twisted up like recycled aluminum cans … so we needed to deploy a sustainable solution,” Porter described. So the company deployed what it dubs the FirstNet One — a “dirigible” that “can cover twice the cell coverage range of a cell tower on a truck, and it can stay up for literally weeks, refuel in less than an hour and go back up — so long-term, sustainable coverage,” he said.

AT&T’s FirstNet One dirigible to offer internet access from the air for first responders. Image Credits: AT&T/FirstNet

Secondly, the company has been building out what it calls FirstNet MegaRange — a set of high-powered wireless equipment, announced earlier this year, that can project signals from miles away, say from a ship moored off a coast, to deliver reliable connectivity to first responders in the hardest-hit disaster zones.

As the internet has absorbed more of daily life, the norms for network resilience have become ever more exacting. Small outages can disrupt not just a first responder, but a child taking virtual classes and a doctor conducting remote surgery. From fixed and portable generators to rapid-deployment mobile cell towers and dirigibles, telcos are investing major resources to keep their networks running continuously.

Yet these initiatives are ultimately costs borne by telcos increasingly confronting a world burning up. Across conversations with all three telcos and others in the disaster response space, there was a general sense that utilities increasingly have to insulate themselves in a climate-changed world. For instance, cell towers need their own generators because — as we saw with Texas earlier this year — even the power grid itself can’t be guaranteed to be there. Critical applications need offline capabilities, since internet outages can’t always be prevented. The machine runs, but the machine stops, too.

The trend lines on the frontlines are data lines

While we may rely on connectivity in our daily lives as consumers, disaster responders have been much more hesitant to fully transition to connected services. It is precisely in the middle of a tornado, when the cell tower is down, that you realize a printed map might have been nice to have. Paper, pens, compasses — the old staples of survival flicks remain just as important in the field today as they were decades ago.

Yet, the power of software and connectivity to improve emergency response has forced a rethinking of field communications and how deeply technology is integrated on the ground. Data from the frontlines is extremely useful, and if it can be transmitted, dramatically improves the ability of operations planners to respond safely and efficiently.

Both AT&T and Verizon have made large investments in directly servicing the unique needs of the first responder community, with AT&T in particular gaining prominence with its FirstNet network, which it exclusively operates through a public-private partnership with the Department of Commerce’s First Responder Network Authority. The government offered a special spectrum license to the FirstNet authority in Band 14 in exchange for the buildout of a responder-exclusive network, a key recommendation of the 9/11 Commission, which found that first responders couldn’t communicate with each other on the day of those deadly terrorist attacks. Now, Porter of AT&T says that the company’s buildout is “90% complete” and is approaching 3 million square miles of coverage.

Why so much attention on first responders? The telcos are investing here because in many ways, the first responders are on the frontiers of technology. They need edge computing, AI/ML rapid decision-making, the bandwidth and latency of 5G (which we will get to in a bit), high reliability, and in general, are fairly profitable customers to boot. In other words, what first responders need today are what consumers in general are going to want tomorrow.

Cory Davis, director of public safety strategy and crisis response at Verizon, explained that “more than ever, first responders are relying on technology to go out there and save lives.” His counterpart, Nick Nilan, who leads product management for the public sector, said that “when we became Verizon, it was really about voice [and] what’s changed over the last five [years] is the importance of data.” He points to tools for situational awareness, mapping and more that are becoming standard in the field. Everything first responders do “comes back to the network — do you have the coverage where you need it, do you have the network access when something happens?”

The challenge for the telcos is that we all want access to that network when catastrophe strikes, which is precisely when network resources are most scarce. The first responder trying to communicate with their team on the ground or their operations center is inevitably competing with a citizen letting friends know they are safe — or perhaps just watching the latest episode of a TV show in their vehicle as they are fleeing the evacuation zone.

That competition is the argument for a completely segmented network like FirstNet, which has its own dedicated spectrum with devices that can only be used by first responders. “With remote learning, remote work and general congestion,” Porter said, telcos and other bandwidth providers were overwhelmed with consumer demand. “Thankfully we saw through FirstNet … clearing that 20 MHz of spectrum for first responders” helped keep the lines clear for high-priority communications.

FirstNet’s big emphasis is on its dedicated spectrum, but that’s just one component of a larger strategy to give first responders always-on and ready access to wireless services. AT&T and Verizon have made prioritization and preemption key operational components of their networks in recent years. Prioritization gives public safety users better access to the network, while preemption can include actively kicking off lower-priority consumers from the network to ensure first responders have immediate access.

Nilan of Verizon said, “The network is built for everybody … but once we start thinking about who absolutely needs access to the network at a period of time, we prioritize our first responders.” Verizon has prioritization, preemption, and now virtual segmentation — “we separate their traffic from consumer traffic” so that first responders don’t have to compete if bandwidth is limited in the middle of a disaster. He noted that all three approaches have been enabled since 2018, and Verizon’s suite of bandwidth and software for first responders comes under the newly christened Verizon Frontline brand that launched in March.
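To make the distinction concrete, here is a toy sketch of preemptive admission control (the priority tiers, the capacity unit and the session-level granularity are all invented for illustration; real LTE/5G schedulers allocate radio resources using standardized QoS priority levels rather than evicting whole sessions this crudely):

```python
import heapq

FIRST_RESPONDER, CONSUMER = 0, 1   # lower value = higher priority (illustrative tiers)

class Cell:
    """A capacity-limited cell that admits a high-priority user by
    preempting the lowest-priority active session when it is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.active = []   # min-heap of (-priority, session_id): most evictable on top

    def admit(self, session_id, priority):
        if len(self.active) < self.capacity:
            heapq.heappush(self.active, (-priority, session_id))
            return True
        worst_priority = -self.active[0][0]
        if priority < worst_priority:                    # newcomer outranks someone
            _, evicted = heapq.heappop(self.active)      # preemption
            heapq.heappush(self.active, (-priority, session_id))
            print(f"preempted {evicted} to admit {session_id}")
            return True
        return False   # cell is full and no active session ranks lower

cell = Cell(capacity=2)
cell.admit("viewer-1", CONSUMER)
cell.admit("viewer-2", CONSUMER)
cell.admit("medic-1", FIRST_RESPONDER)   # a consumer session gets bumped
```

In these terms, prioritization amounts to deciding who gets admitted first; preemption is the eviction step; and virtual segmentation, as Nilan describes it, keeps the two classes of traffic from contending at all.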

With increased bandwidth reliability, first responders are connected in ways that even a decade ago would have been unfathomable. Tablets, sensors, connected devices and tools — equipment that once would have been manual is now increasingly digital.

That opens up a wealth of possibilities now that the infrastructure is established. My interview subjects suggested applications as diverse as the decentralized coordination of response team movements through GPS and 5G; real-time maps that offer up-to-date risk analysis of how a disaster might progress; pathfinding for evacuees that’s updated as routes fluctuate; AI damage assessments even before the recovery process begins; and much, much more. Many of those possibilities, which in the past were only marketing-speak and technical promises, may finally be realized in the coming years.

Five, Gee

We’ve been hearing about 5G for years now, and even 6G every once in a while just to cause reporters heart attacks, but what does 5G even mean in the context of disaster response? After years of speculation, we are finally starting to get answers.

Naillon of T-Mobile noted that the biggest benefit of 5G is that it “allows us to have greater coverage” particularly given the low-band spectrum that the standard partially uses. That said, “As far as applications — we are not really there at that point from an emergency response perspective,” he said.

Meanwhile, Porter of AT&T said that “the beauty of 5G that we have seen there is less about the speed and more about the latency.” Consumers have often seen marketing around voluminous bandwidths, but in the first-responder world, latency and edge computing tend to be the most desirable features. For instance, devices can relay video to each other on the frontlines without necessarily needing a backhaul to the main wireless network. On-board processing of image data could allow for rapid decision-making in environments where seconds can be vital to the success of a mission.

That flexibility is allowing for many new applications in disaster response, and “we are seeing some amazing use cases coming out of our 5G deployments [and] we have launched some of our pilots with the [Department of Defense],” Porter said. He offered an example of “robotic dogs to go and do bomb dismantling or inspecting and recovery.”

Verizon has made innovating on new applications a strategic goal, launching a 5G First Responders Lab dedicated to guiding a new generation of startups to build at this crossroads. Nilan of Verizon said that the incubator has had more than 20 companies across four different cohorts, working on everything from virtual reality training environments to AR applications that allow firefighters to “see through walls.” His colleague Davis said that “artificial intelligence is going to continue to get better and better and better.”

Blueforce is a company that went through the first cohort of the Lab. The company uses 5G to connect sensors and devices together to allow first responders to make the best decisions they can with the most up-to-date data. Michael Helfrich, founder and CEO, said that “because of these new networks … commanders are able to leave the vehicle and go into the field and get the same fidelity” of information that they normally would have to be in a command center to receive. He noted that in addition to classic user interfaces, the company is exploring other ways of presenting information to responders. “They don’t have to look at a screen anymore, and [we’re] exploring different cognitive models like audio, vibration and heads-up displays.”

5G will offer many new ways to improve emergency responses, but that doesn’t mean that our current 4G networks will just disappear. Davis said that many sensors in the field don’t need the kind of latency or bandwidth that 5G offers. “LTE is going to be around for many, many more years,” he said, pointing to the hardware and applications taking advantage of LTE-M standards for Internet of Things (IoT) devices as a key development for the future here.

Michael Martin of emergency response data platform RapidSOS said that “it does feel like there is renewed energy to solve real problems” in the disaster response market, a dynamic he dubbed the “Elon Musk effect.” And that effect definitely does exist when it comes to connectivity, where SpaceX’s satellite bandwidth project Starlink comes into play.

Satellite uplinks have historically had horrific latency and bandwidth constraints, making them difficult to use in disaster contexts. Furthermore, depending on the particular type of disaster, satellite uplinks can be astonishingly challenging to set up given the ground environment. Starlink promises to shatter all of those barriers: easier connections, fat pipes, low latencies and a global footprint that would be the envy of any first responder. Its network is still under active development, so it is difficult to foresee precisely what its impact will be on the disaster response market, but it’s an offering to watch closely in the years ahead. If its promises pan out, it has the potential to completely upend the way we respond to disasters this century.

Yet, even if we discount Starlink, the change coming this decade in emergency response represents a complete revolution. The depth and resilience of connectivity is changing the equation for first responders from complete reliance on antiquated tools to an embrace of the future of digital computing. The machine is no longer stoppable.



Longevity startup Gero AI has a mobile API for quantifying health changes – TechCrunch


Sensor data from smartphones and wearables can meaningfully predict an individual’s ‘biological age’ and resilience to stress, according to Gero AI.

The ‘longevity’ startup — which condenses its mission to the pithy goal of “hacking complex diseases and aging with Gero AI” — has developed an AI model to predict morbidity risk using ‘digital biomarkers’ that are based on identifying patterns in step-counter sensor data which tracks mobile users’ physical activity.

A simple measure of ‘steps’ isn’t nuanced enough on its own to predict individual health, is the contention. Gero’s AI has been trained on large amounts of biological data to spot patterns that can be linked to morbidity risk. It also measures how quickly a person recovers from a biological stress — another biomarker that’s been linked to lifespan; i.e., the faster the body recovers from stress, the better the individual’s overall health prognosis.
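To illustrate the recovery-rate idea, here is a minimal sketch of the general concept (not Gero’s actual model, which is trained on far richer data): treat daily step counts as fluctuations around a personal baseline and measure how quickly deviations decay.

```python
import numpy as np

def recovery_time(daily_steps):
    """Fit an AR(1) process, x[t+1] - mu = phi * (x[t] - mu) + noise,
    to a step-count series.  The closer phi is to 1, the longer deviations
    from baseline persist; recovery time is roughly -1/ln(phi) days."""
    x = np.asarray(daily_steps, dtype=float)
    d = x - x.mean()                                       # deviations from baseline
    phi = np.dot(d[:-1], d[1:]) / np.dot(d[:-1], d[:-1])   # lag-1 autoregression
    phi = min(max(phi, 1e-6), 0.999)                       # clamp to a stable range
    return -1.0 / np.log(phi)                              # days to shed ~63% of a shock

# Simulate a year of steps for a walker whose deviations decay with phi = 0.7.
rng = np.random.default_rng(0)
steps = [8000.0]
for _ in range(364):
    steps.append(8000 + 0.7 * (steps[-1] - 8000) + rng.normal(0, 1500))

print(f"recovery time ≈ {recovery_time(steps):.1f} days")   # close to -1/ln(0.7) ≈ 2.8
```

A slower decay, with phi creeping toward 1, would show up as a longer recovery time: the loss of resilience Fedichev describes below.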

A research paper Gero has had published in the peer-reviewed biomedical journal Aging explains how it trained deep neural networks to predict morbidity risk from mobile device sensor data — and was able to demonstrate that its biological age acceleration model was comparable to models based on blood test results.

Another paper, due to be published in the journal Nature Communications later this month, will go into detail on its device-derived measurement of biological resilience.

The Singapore-based startup, which has research roots in Russia — founded back in 2015 by a Russian scientist with a background in theoretical physics — has raised a total of $5 million in seed funding to date (in two tranches).

Backers come from both the biotech and the AI fields, per co-founder Peter Fedichev. Its investors include Belarus-based AI-focused early stage fund, Bulba Ventures (Yury Melnichek). On the pharma side, it has backing from some (unnamed) private individuals with links to Russian drug development firm, Valenta. (The pharma company itself is not an investor).

Fedichev is a theoretical physicist by training who, after his PhD and some ten years in academia, moved into biotech to work on molecular modelling and machine learning for drug discovery — where he got interested in the problem of ageing and decided to start the company.

As well as conducting its own biological research into longevity (studying mice and nematodes), it’s focused on developing an AI model for predicting the biological age and resilience to stress of humans — via sensor data captured by mobile devices.

“Health of course is much more than one number,” emphasizes Fedichev. “We should not have illusions about that. But if you are going to condense human health to one number then, for a lot of people, the biological age is the best number. It tells you — essentially — how toxic is your lifestyle… The more biological age you have relative to your chronological age years — that’s called biological acceleration — the more are your chances to get chronic disease, to get seasonal infectious diseases or also develop complications from those seasonal diseases.”

Gero has recently launched a (paid, for now) API, called GeroSense, that’s aimed at health and fitness apps so they can tap up its AI modelling to offer their users an individual assessment of biological age and resilience (aka recovery rate from stress back to that individual’s baseline).
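For a rough sense of how an integration might look, here is a sketch of a client call. The endpoint, payload and response fields are entirely hypothetical, invented for illustration; Gero’s actual GeroSense documentation will differ:

```python
import json
import urllib.request

# Hypothetical endpoint and fields -- placeholders, not Gero's real API.
API_URL = "https://api.example.com/v1/biomarkers"
API_KEY = "YOUR_API_KEY"

payload = {
    "user_id": "user-123",
    "daily_steps": [8241, 7632, 9105, 4310, 6780],  # step counts from the phone's sensor
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode(),
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# Hypothetical response fields: a biological-age estimate and a resilience
# score (the user's recovery rate back to their own baseline).
print(result.get("biological_age"), result.get("resilience"))
```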

Early partners are other longevity-focused companies, AgelessRx and Humanity Inc. But the idea is to get the model widely embedded into fitness apps where it will be able to send a steady stream of longitudinal activity data back to Gero, to further feed its AI’s predictive capabilities and support the wider research mission — where it hopes to progress anti-ageing drug discovery, working in partnerships with pharmaceutical companies.

The carrot for the fitness providers to embed the API is to offer their users a fun and potentially valuable feature: A personalized health measurement so they can track positive (or negative) biological changes — helping them quantify the value of whatever fitness service they’re using.

“Every health and wellness provider — maybe even a gym — can put into their app for example… and this thing can rank all their classes in the gym, all their systems in the gym, for their value for different kinds of users,” explains Fedichev.

“We developed these capabilities because we need to understand how ageing works in humans, not in mice. Once we developed it we’re using it in our sophisticated genetic research in order to find genes — we are testing them in the laboratory — but, this technology, the measurement of ageing from continuous signals like wearable devices, is a good trick on its own. So that’s why we announced this GeroSense project,” he goes on.

“Ageing is this gradual decline of your functional abilities which is bad but you can go to the gym and potentially improve them. But the problem is you’re losing this resilience. Which means that when you’re [biologically] stressed you cannot get back to the norm as quickly as possible. So we report this resilience. So when people start losing this resilience it means that they’re not robust anymore and the same level of stress as in their 20s would get them [knocked off] the rails.

“We believe this loss of resilience is one of the key ageing phenotypes because it tells you that you’re vulnerable for future diseases even before those diseases set in.”

“In-house everything is ageing. We are totally committed to ageing: measurement and intervention,” adds Fedichev. “We want to [build] something like an operating system for longevity and wellness.”

Gero is also generating some revenue from two pilots with “top range” insurance companies — which Fedichev says it’s essentially running as a proof of business model at this stage. He also mentions an early pilot with Pepsi Co.

He sketches a link between how it hopes to work with insurance companies in the area of health outcomes with how Elon Musk is offering insurance products to owners of its sensor-laden Teslas, based on what it knows about how they drive — because both are putting sensor data in the driving seat, if you’ll pardon the pun. (“Essentially we are trying to do to humans what Elon Musk is trying to do to cars,” is how he puts it.)

But the nearer term plan is to raise more funding — and potentially switch to offering the API for free to really scale up the data capture potential.

Zooming out for a little context, it’s been almost a decade since Google-backed Calico launched with the moonshot mission of ‘fixing death’. Since then a small but growing field of ‘longevity’ startups has sprung up, conducting research into extending (in the first instance) human lifespan. (Ending death is, clearly, the moonshot atop the moonshot.) 

Death is still with us, of course, but the business of identifying possible drugs and therapeutics to stave off the grim reaper’s knock continues picking up pace — attracting a growing volume of investor dollars.

The trend is being fuelled by health and biological data becoming ever more plentiful and accessible, thanks to open research data initiatives and the proliferation of digital devices and services for tracking health, set alongside promising developments in the fast-evolving field of machine learning in areas like predictive healthcare and drug discovery.

Longevity has also seen a bit of an upsurge in interest in recent times as the coronavirus pandemic has concentrated minds on health and wellness, generally — and, well, mortality specifically.

Nonetheless, it remains a complex, multi-disciplinary business. Some of these biotech moonshots are focused on bioengineering and gene-editing — pushing for disease diagnosis and/or drug discovery.

Plenty are also — like Gero —  trying to use AI and big data analysis to better understand and counteract biological ageing, bringing together experts in physics, maths and biological science to hunt for biomarkers to further research aimed at combating age-related disease and deterioration.

Another recent example is AI startup Deep Longevity, which came out of stealth last summer — as a spinout from AI drug discovery startup Insilico Medicine — touting an AI ‘longevity as a service’ system which it claims can predict an individual’s biological age “significantly more accurately than conventional methods” (and which it also hopes will help scientists to unpick which “biological culprits drive aging-related diseases”, as it put it).

Gero AI is taking a different tack toward the same overarching goal — homing in on data generated by the activity sensors embedded in the everyday mobile devices people carry with them (or wear) as a proxy signal for studying their biology.

The advantage is that it doesn’t require a person to undergo regular (invasive) blood tests to get an ongoing measure of their own health. Instead, our personal devices can generate proxy signals for biological study passively — at vast scale and low cost. So the promise of Gero’s ‘digital biomarkers’ is that they could democratize access to individual health prediction.

And while billionaires like Peter Thiel can afford to shell out for bespoke medical monitoring and interventions to try to stay one step ahead of death, such high end services simply won’t scale to the rest of us.

If its digital biomarkers live up to Gero’s claims, its approach could, at the least, help steer millions towards healthier lifestyles, while also generating rich data for longevity R&D — and to support the development of drugs that could extend human lifespan (albeit what such life-extending pills might cost is a whole other matter).

The insurance industry is naturally interested — with the potential for such tools to be used to nudge individuals towards healthier lifestyles and thereby reduce payout costs.

For individuals who are motivated to improve their health themselves, Fedichev says the issue now is it’s extremely hard for people to know exactly which lifestyle changes or interventions are best suited to their particular biology.

For example fasting has been shown in some studies to help combat biological ageing. But he notes that the approach may not be effective for everyone. The same may be true of other activities that are accepted to be generally beneficial for health (like exercise or eating or avoiding certain foods).

Again those rules of thumb may have a lot of nuance, depending on an individual’s particular biology. And scientific research is, inevitably, limited by access to funding. (Research can thus tend to focus on certain groups to the exclusion of others — e.g. men rather than women; or the young rather than middle aged.)

This is why Fedichev believes there’s a lot of value in creating a measure that can address health-related knowledge gaps at essentially no individual cost.

Gero has used longitudinal data from UK Biobank, one of its research partners, to verify its model’s measurements of biological age and resilience. But of course it hopes to go further as it ingests more data.

“Technically it’s not properly different what we are doing — it just happens that we can do it now because there are such efforts like UK biobank. Government money and also some industry sponsors money, maybe for the first time in the history of humanity, we have this situation where we have electronic medical records, genetics, wearable devices from hundreds of thousands of people, so it just became possible. It’s the convergence of several developments — technological but also what I would call ‘social technologies’ [like the UK biobank],” he tells TechCrunch.

“Imagine that for every diet, for every training routine, meditation… in order to make sure that we can actually optimize lifestyles — understand which things work, which do not [for each person] or maybe some experimental drugs which are already proved [to] extend lifespan in animals are working, maybe we can do something different.”

“When we will have 1M tracks [half a year’s worth of data on 1M individuals] we will combine that with genetics and solve ageing,” he adds, with entrepreneurial flourish. “The ambitious version of this plan is we’ll get this million tracks by the end of the year.”

Fitness and health apps are an obvious target partner for data-loving longevity researchers — but you can imagine it’ll be a mutual attraction. One side can bring the users, the other a halo of credibility comprised of deep tech and hard science.

“We expect that these [apps] will get lots of people and we will be able to analyze those people for them as a fun feature first, for their users. But in the background we will build the best model of human ageing,” Fedichev continues, predicting that scoring the effect of different fitness and wellness treatments will be “the next frontier” for wellness and health (Or, more pithily: “Wellness and health has to become digital and quantitive.”)

“What we are doing is we are bringing physicists into the analysis of human data. Since recently we have lots of biobanks, we have lots of signals — including from available devices which produce something like a few years’ long windows on the human ageing process. So it’s a dynamical system — like weather prediction or financial market predictions,” he also tells us.

“We cannot own the treatments because we cannot patent them but maybe we can own the personalization — the AI that personalized those treatments for you.”

From a startup perspective, one thing looks crystal clear: Personalization is here for the long haul.

 


Following Apple’s launch of privacy labels, Google to add a ‘safety’ section in Google Play – TechCrunch


Months after Apple’s App Store introduced privacy labels for apps, Google announced its own mobile app marketplace, Google Play, will follow suit. The company today pre-announced its plans to introduce a new “safety” section in Google Play, rolling out next year, which will require app developers to share what sort of data their apps collect, how it’s stored, and how it’s used.

For example, developers will need to share what sort of personal information their apps collect, like users’ names or emails, and whether it collects information from the phone, like the user’s precise location, their media files or contacts. Apps will also need to explain how the app uses that information — for example, for enhancing the app’s functionality or for personalization purposes.

Developers who already adhere to specific security and privacy practices will additionally be able to highlight that in their app listing. On this front, Google says it will add new elements that detail whether the app uses security practices like data encryption; whether the app follows Google’s Families policy, related to child safety; whether the app’s safety section has been verified by an independent third party; whether the app needs data to function or allows users to choose whether or not to share data; and whether the developer agrees to delete user data when a user uninstalls the app in question.

Apps will also be required to provide their privacy policies.
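Pulling those requirements together, a developer’s disclosure might be structured roughly like the sketch below. Google had not published the actual submission format at announcement time, so every field name here is a hypothetical illustration:

```python
# Hypothetical shape of a Google Play safety-section disclosure.
# Field names are invented for illustration; the real format was not
# public when this was announced.

safety_declaration = {
    "data_collected": {
        "personal_info": ["name", "email"],
        "device_data": ["precise_location", "contacts", "media_files"],
    },
    "purposes": ["app_functionality", "personalization"],
    "security_practices": {
        "data_encrypted": True,
        "follows_families_policy": False,
        "independently_verified": False,
        "data_deleted_on_uninstall": True,
        "data_sharing_optional": True,
    },
    "privacy_policy_url": "https://example.com/privacy",
}
```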

While clearly inspired by Apple’s privacy labels, there are several key differences. Apple’s labels focus on what data is being collected for tracking purposes and what’s linked to the end user. Google’s additions seem to be more about whether or not you can trust that the data being collected is handled responsibly, by allowing the developer to showcase whether they follow best practices around data security, for instance. It also gives the developer a way to make a case for why it’s collecting data right on the listing page itself. (Apple’s “ask to track” pop-ups on iOS now force developers to beg inside their apps for access to user data.)

Another interesting addition is that Google will allow the app data labels to be independently verified. Assuming these verifications are handled by trusted names, they could help to convey to users that the disclosures aren’t lies. One early criticism of Apple’s privacy labels was that many were providing inaccurate information — and were getting away with it, too.

Google says the new features will not roll out until Q2 2022, but it wanted to announce its plans now in order to give developers plenty of time to prepare.


There is, of course, a lot of irony to be found in an app privacy announcement from Google.

The company was one of the longest holdouts on issuing privacy labels for its own iOS apps, as it scrambled to review (and re-review, we understand) the labels’ content and disclosures. After initially claiming its labels would roll out “soon,” many of Google’s top apps then entered a lengthy period where they received no updates at all, as they were no longer compliant with App Store policies.

It took Google months after the deadline had passed to provide labels for its top apps. And when it did, it was mocked by critics — like privacy-focused search engine DuckDuckGo — for how much data apps like Chrome and the Google app collect.

Google’s plan to add a safety section of its own to Google Play gives it a chance to shift the narrative a bit.

It’s not a privacy push, necessarily. They’re not even called privacy labels! Instead, the changes seem designed to allow app developers to better explain if you can trust their app with your data, rather than setting the expectation that the app should not be collecting data in the first place.

How well this will resonate with consumers remains to be seen. Apple has made a solid case that it’s a company that cares about user privacy, and it is adding features that put users in control of their data. It’s a hard argument to fight back against — especially in an era that’s seen too many data breaches to count, careless handling of private data by tech giants, widespread government spying, and a creepy adtech industry that grew to feel entitled to collecting user data without disclosure.

Google says when the changes roll out, non-compliant apps will be required to fix their violations or become subject to policy enforcement. It hasn’t yet detailed how that process will be handled, or whether it will pause app updates for apps in violation.

The company noted its own apps would be required to share this same information and a privacy policy, too.

 
