
Thomas Reardon and CTRL-Labs are building an API for the brain – TechCrunch



From Elon’s Neuralink to Bryan Johnson’s Kernel, a new wave of businesses are specifically focusing on ways to access, read and write from the brain.

The holy grail lies in how to do that without invasive implants, and how to do it for a mass market.

One company aiming to do just that is New York-based CTRL-labs, which recently closed a $28 million Series B. The team, comprising more than a dozen PhDs, is decoding individual neurons and developing an electromyography-based armband that reads the nervous signals travelling from the brain to the fingers. These signals are then translated into intended actions, enabling anything from thought-to-text to moving objects.

Scientists have known about electrical activity in the brain since Hans Berger first recorded it using an EEG in 1924, and the term “brain computer interface” (BCI) was coined as early as the 1970s by Jacques Vidal at UCLA. Since then most BCI applications have been tested in the military or medical realm. Although it’s still the early innings of neurotech commercialization, in recent years the pace of capital going in and company formation has picked up. 

For a conversation with Flux I sat down with Thomas Reardon, the CEO of CTRL-labs, to discuss his journey to founding the company. Reardon explains why New York is the best place to build a machine-learning-based business right now and how he recruits top talent. He shares what developers can expect when the CTRL-kit ships in Q1, and explains how a brain control interface may well make the smartphone redundant.

An excerpt is published below. Full transcript on Medium.

AMLG: I’m excited to have Thomas Reardon on the show today. He is the co-founder and CEO of CTRL-labs, a company building the next generation of non-invasive neural computing here in Manhattan. He’s just cycled from uptown — thanks for coming down here to Chinatown. Reardon was previously the founder of a startup called Avogadro, which was acquired by Openwave. He also spent time at Microsoft, where he was project lead on Internet Explorer. He’s one of the founders of the World Wide Web Consortium, a body that has established many of the standards that still govern the Web, and he’s one of the architects of XML and CSS. Why don’t we get into your background, how you got to where you are today and why you’re most excited to be doing what you’re doing right now.

TR: My background — well, I’m a bit of an old man so this is a longer story. I have a commercial software background. I didn’t go to college when I was younger. I started a company at 19 years old and ended up at Microsoft back in 1990, so this was before the Windows revolution stormed the world. I spent 10 years at Microsoft. The biggest part of that was starting up the Internet Explorer project and then leading the internet architecture effort at Microsoft; that’s how I ended up working on things like CSS and XML, terms some of the web nerds out there should be deeply familiar with. Then after doing another company that focused on the mobile Internet, and serving as CTO at Openwave, I got a bit tired of the Web. I got fatigued at the sense that the Web was growing up not to introduce any new technology experience or any new computer science to the world. It was just transferring bones from one grave to another. We were reinventing everything that had been invented in the 80s and early 90s and webifying it, but we weren’t creating new experiences. I got profoundly turned off by the evolution of the Web and what we were doing to put it on mobile devices. We weren’t creating new value for people. We weren’t solving new human problems. We were solving corporate problems. We were trying to create new leverage for the entrenched companies.

So I left tech in 2003. Effectively retired. I decided to go and get a proper college education. I went and studied Greek and Latin and got a degree in classics. Along the way I started studying neuroscience and was fascinated by the biology of neurons. This led me to grad school and a Ph.D., which I split across Duke and Columbia. I’d woken up some time in 2005 or 2006 and was reading an article in The New York Times. It was something about a cell, and I scratched my head and said: we all hear that term, we all talk about cells and cells in the body, but I have no idea what a cell really is. To the point where a New York Times article was too deep for me. That almost embarrassed me and shocked me, and it led me down this path of studying biology in a deeper, almost molecular way.

AMLG: So you were really in the heart of it all when you were working at Microsoft and building your startup. Now you are building this company in New York — we’ve got Columbia and NYU and there’s a lot of commercial industries — does that feel different for you, building a company here?

TR: Well let’s look at the kind of company we’re building. We’re building a company which is at its heart about machine learning. We’re in an era in which every startup tries to have a slide in their deck that says something about ML, but most of them are a joke in comparison. This is the place in the world to build a company that has machine learning at its core. Between Columbia and NYU and now Cornell Tech, and the unbelievably deep bench of machine learning talent embedded in the finance industry, we have more ML people at an elite level in New York than any place on earth. It’s dramatic. Our ability to recruit here is unparalleled. We beat the big five all the time. We’re now 42 people and half of them are Ph.D. scientists. For every single one of them we were competing against Google, Facebook, Apple.

AMLG: Presumably this is a more interesting problem for them to work on. If they want to go work at Goldman in AI they can do that for a couple of years, make some dollars and then come back and do the interesting stuff.

TR: They can make a bigger salary but they will work on something that nobody in the rest of the world will ever get to hear about. The reason why people don’t talk about all this ML talent here is that when it’s embedded in finance you never get to hear about it. It’s all secret. Underneath the waters. The work we’re doing, and this new generation of companies that have ML at their core — even a company like Spotify is, on the one hand, fundamentally a licensing and copyright arbitrage company, but on the other hand what broke out for Spotify was their ML work. It was fundamental to the offer. That’s the kind of thing that’s happening in New York again and again now. There are kinds of companies — like a hardware company — that would be scary to build in New York. We have a significant hardware component to what we’re doing. It is hard to recruit A-team, world-class hardware folks in New York, but we can get them. We recently hired the head of product from Peloton, who formerly ran MakerBot.

AMLG: We support that and believe there’s a budding pool here. And I guess the third bench is neuro, which Columbia is very strong in.

Larry Abbott helped found the Center for Theoretical Neuroscience at Columbia

TR: Yes, as is NYU. Neuroscience is in some sense the signature department at Columbia. The field breaks across two domains — the biological and the computational. Computational neuroscience is machine learning for real neurons: building operating computational models of how real neurons do their work. It’s the field that drives a lot of the breakthroughs in machine learning. We have these biologically inspired concepts in machine learning that come from computational neuroscience. Columbia has by far the top computational neuroscience group in the world and probably the top biological neuroscience group in the world. There are five Nobel Prize winners in the program, and Larry Abbott, the legend of theoretical neuroscience. It’s an unbelievably deep bench.

AMLG: How do you recruit people that are smarter than you? This is a question that everyone listening wants to know.

Patrick Kaifosh, Thomas Reardon and Tim Machado, the co-founders of CTRL-labs

TR: I’m not dumb, but I’m not as smart as my co-founder and I’m not as smart as half of the scientific staff inside the company. I affectionately refer to my co-founder as a mutant. Patrick Kaifosh, who’s our chief scientist, is one of the smartest human beings I’ve ever known. Patrick is one of those generational people that can change our concept of what’s possible, and he does that in a first-principles way. The recruiting part is to engage people in a way that lets them know you’re going to take all the crap away and let them work on the hardest problems with the best people.

AMLG: I believe it and I’ve met some of them. So what was the conversation with Kaifosh and Tim when you first sat down and decided to pursue the idea?

TR: So we were wrapping up our graduate studies, the three of us. We were looking at what it would be like to stay in academia, and the bureaucracy involved in trying to be a working scientist in academia and writing grants. We were looking around at the young faculty members we saw at Columbia and thought: it doesn’t look like they’re having fun.

AMLG: When you were leaving Columbia it sounds like there wasn’t another company idea. Was it clear that this was the idea that you wanted to pursue at that time?

TR: What we knew is we wanted to do something collaborative. We did not think, let’s go build a brain machine interface. We don’t actually like that phrase; we like to call them neural interfaces. We didn’t think about neural interfaces at all. The second idea we had, an ingredient we put into the stew and started mixing up, was that we wanted to leverage experimental technologies from neuroscience that hadn’t yet been commercialized. In some sense this was like when Genentech was starting in the mid 70s. We had found the structure of DNA back in the early 50s, there had been 30 years of molecular biology, we figured out DNA, then RNA, then protein synthesis, then the ribosome. Thirty years of molecular biology, but nobody had commercialized it yet. Then Genentech came along with this idea that we could make synthetic protein, that we could start to commercialize some of these core experimental techniques and do translation work and bring value back to humanity. It was all just sitting there on the shelf ready to be exploited.

We thought, OK, what are the technologies in neuroscience that we use at the bench that could be exploited? For instance spike sorting: the ability to listen with a single electrode to lots of neurons at the same time, see all the different electrical impulses and de-convolve them. You get this big noisy signal and you can see the individual neurons’ activity. So we started playing with that idea: let’s harvest the last 30 or 40 years of bench experimental neuroscience. What are the techniques that were invented that we could harvest?
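The spike-sorting idea Reardon describes can be sketched in a few lines. The toy below separates two synthetic "neurons" out of one noisy trace by peak amplitude; real sorters cluster entire waveform shapes (often after dimensionality reduction), and none of the names or numbers here come from CTRL-labs.

```python
import numpy as np

def sort_spikes(signal, threshold=1.0, window=5):
    """Toy spike sorting: find upward threshold crossings in a noisy
    trace, then bucket each event by peak amplitude, pretending each
    amplitude band is one neuron. Real spike sorting clusters full
    waveform shapes, but the principle (de-convolving one noisy trace
    into per-neuron spike trains) is the same."""
    rising = np.where((signal[1:] >= threshold) & (signal[:-1] < threshold))[0] + 1
    units = {}
    for idx in rising:
        peak = signal[max(0, idx - window): idx + window].max()
        label = "A" if peak > 2.0 else "B"   # crude two-unit "clustering"
        units.setdefault(label, []).append(int(idx))
    return units

# Synthetic trace: two "neurons" with different spike heights plus noise
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.1, 1000)
trace[[100, 400, 800]] += 3.0   # unit A fires three times
trace[[250, 600]] += 1.5        # unit B fires twice

units = sort_spikes(trace)
print({k: len(v) for k, v in sorted(units.items())})
```

With real electrode data the hard part is exactly what this sketch dodges: estimating how many units there are and what their waveforms look like.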

AMLG: We’ve been reading about these things and there’s been so much excitement about BMI, but you haven’t really seen things in market that people can hack around with. I don’t know why that gap hasn’t been filled. Does no one have the balls to take these off the shelf and try and turn them into something, or is it a timing question?

The brain has upper motor neurons in the cortex which map to lower motor neurons in the spinal cord, which send long axons down to contact the muscles. They release neurotransmitters that turn individual muscle fibres on and off. Motor units have a 1:1 correspondence with motor neurons. When motor neurons fire in the spinal cord, an output signal from the brain, you get a direct response in the muscle. If those EMG signals can be decoded, then you can decode the zeros and ones of the nervous system: action potentials

TR: Some of this is chutzpah and some of it is timing. The technologies that we are leveraging weren’t fully developed for how we’re using them. We had to do some invention since we started the company three years ago. But they were far enough along that you could imagine the gap and come up with a way to cross it. How could we, for instance, decode an individual neuron using a technology called electromyography? Electromyography has been around for probably over a century and that’s the ability to —

AMLG: That’s what we call EMG.

TR: EMG, yes. You can record the electrical activity of a muscle. EKG, electrocardiography, is basically EMG for the heart alone: you’re looking at the electrical activity of the heart muscles. We thought if you improve this legacy technology of EMG sufficiently, if you improve the signal to noise, you ought to be able to see the individual fibers of a muscle. If you know some neuroanatomy, what you figure out is that the individual fibers correspond to individual neurons. And by listening to individual fibers we can now reconstruct the activity of individual neurons. That’s the root of a neural interface: the ability to listen to an individual neuron.
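As a rough illustration of the signal-to-noise point, here is the classic first stage of EMG conditioning (center, rectify, smooth), which turns a raw trace into an activation envelope. Real pipelines band-pass filter first, and CTRL-labs goes much further to resolve individual fibers; every number here is illustrative.

```python
import numpy as np

def emg_envelope(raw, win=50):
    """Classic surface-EMG conditioning: remove the DC offset,
    full-wave rectify, then smooth with a moving average to get an
    activation envelope. (Real pipelines band-pass filter first;
    omitted here to keep the sketch short.)"""
    centered = raw - raw.mean()
    rectified = np.abs(centered)
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Synthetic recording: quiet baseline, a 200-sample muscle
# contraction (much higher variance), then quiet again
rng = np.random.default_rng(1)
quiet = rng.normal(0.0, 0.05, 400)
burst = rng.normal(0.0, 1.0, 200)
raw = np.concatenate([quiet, burst, quiet])

env = emg_envelope(raw)
print(round(env[:400].mean(), 3), round(env[400:600].mean(), 3))
```

The envelope during the contraction dwarfs the baseline, which is what lets a simple detector tell "muscle on" from "muscle off"; resolving which fiber fired, as described above, requires far better signal-to-noise than this.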

EEG toy “the Force Trainer”

AMLG: My family are Star Wars fans and we had a device one Christmas that we sat around playing with, the Force Trainer. If you put the device around your head and stare long enough the thing is supposed to move. Everything I’ve ever tried has been like that Force Trainer, a little frustrating —

TR: That’s EEG, electroencephalography. That’s when you put something on your skull and record the electrical activity: the waves of activity that happen in the cortex, in the outer part of your brain.

AMLG: And it doesn’t work well because the skull is too thick?

TR: There’s a bunch of reasons why it doesn’t work that well. The unfortunate thing is that when most people hear about it, that’s one of the first things they think: oh well, all my thinking is up here in the cortex right underneath my skull, and that’s what you’re interfacing with. That is actually —

AMLG: A myth?

TR: Both a myth and the wrong approach. I’m going to have to go deep on this one because it’s subtle but important. The first thing is let’s just talk about the signal qualities of EEG versus what we’re doing, where we listen to individual neurons and do it without having to drill into your body or place an electrode inside of you. EEG is trying to listen to the activity of lots of neurons all at the same time, tens of thousands, hundreds of thousands of neurons, and kind of get a sense of what the roar of those neurons is. I liken it to sitting outside of Giants Stadium with a microphone trying to listen to a conversation in Section 23, Row 4, Seat 9. You can’t do it. At best you can tell that one of the teams scored, because you hear the roar of the entire stadium. That’s basically what we have with EEG today. The ability to hear the roar. So for instance we say the easiest thing to decode with EEG is surprise. I could put a headset on you and tell if you’re surprised.

AMLG: That doesn’t seem too handy.

TR: Yup not much more than that. Turns out surprise is this global brain state and your entire brain lights up. In every animal that we do this in surprise looks the same — it’s a big global Christmas tree that lights up across the entire brain. But you can’t use that for control. And this cuts to the name of our company, CTRL-labs. I don’t just want to decode your state. I want to give you the ability to control things in the world in a way that feels magical. It feels like Star Wars. I want you to feel like the Star Wars Emperor. What we’re trying to do is give you control and a kind of control you’ve never experienced before.

The MYO armband by Canadian startup Thalmic Labs

AMLG: This is control over motion right? Maybe you can clarify — where I’ve seen other companies like MYO, which was an armband, it was really motion capture, where they were capturing how you intended to gesture rather than what you were thinking about?

TR: Yeah. In some sense we’re a successor to MYO (Thalmic Labs) — if Thalmic had been built by neuroscientists you would have ended up on the path that we’re on now.

Thomas Reardon demonstrating Myo control

We have two regimes of control: one we call Myo control and the other we call Neuro control. Myo control is our ability to decode what ultimately becomes your movements: the electrical input to your muscles that causes your muscles to contract, and then when you stop activating them they slowly relax. We can decode the electrical activity that goes into those muscles even before the movement has started and even before it ends, and recapitulate that in a virtual way. Neuro control is something else. It’s kind of exotic and you have to try it to believe it. We can get to the level of the electrical activity of neurons — individual neurons — and train you rapidly, on the order of seconds, to control something. So imagine you’re playing a video game and you want to push a button to hop, like you’re playing Sonic the Hedgehog. I can train you in seconds to turn on a single neuron in your spinal cord to control that little thing.

AMLG: When I came to visit your lab in 2016 the guy had his hand out here. I tried it — it was an asteroid field.

TR: Asteroids, the old Atari game.

Patrick Kaifosh playing Asteroids — example of Neuro Control [from CTRL-labs, late 2017]

AMLG: Classic. And you’re doing Fruit Ninja now too? It gets harder and harder.

TR: It does get harder and harder. So the idea here is that rather than moving, you can just turn these neurons on and off and control something. Really there’s no muscle activity at that point; you’re just activating individual neurons. They might release a little pulse, a little electrochemical transmission to the muscle, but the muscle can’t respond at that level. What you find out is rather than using your neurons to control, say, your five fingers, you can use your neurons to control 30 virtual fingers without actually moving your hand at all.

AMLG: What does that mean for neuroplasticity? Do you have to imagine the third hand, fourth hand, fifth hand, or your tail, like in Avatar?

TR: This is why I focus on the concept of control. We’re not trying to decode what you’re “thinking.” I don’t know what a thought is and there’s nobody in neuroscience who does know what a thought is. Nobody. We don’t know what consciousness is and we don’t know what thoughts are. They don’t exist in one part of the brain. Your brain is one cohesive organ and that includes your spinal cord all the way up. All of that embodies thought.

Inside Out (2015, Pixar). Great movie. Not how the brain, thoughts or consciousness work

AMLG: That’s a pretty crazy thought as thoughts go. I’m trying to mull that one over.

TR: It is. I want to pound that home. There’s not this one place. There’s not a little chair (to refer to Dan Dennett) there’s not like a chair in a movie theater inside your brain where the real you sits watching what’s happening and directing it. No, there’s just your overall brain and you’re in there somewhere across all of it. It’s that collection of neurons together that give you this sense of consciousness.

What we do with Neuro control and with CTRL-kit, the device that we’ve built, is give you feedback. We show you, by giving you direct feedback in real time, millisecond-level feedback, how to train a neuron to go move say a cursor up and down, to go chase something or to jump over something. The way this works is that we engage your motor nervous system. Your brain has a natural output port — a USB port if you will — that generates output. In some sense this is sad for people, but I have to tell you: your brain doesn’t do anything except turn muscles on and off. That’s the final output of the brain. When you’re generating speech, when you’re blinking your eyes at me, when you’re folding your hands and using your hands to talk to me, when you’re moving around, when you’re feeding yourself. Your brain is just turning muscles on and off. That’s it. There is nothing else. It does that via motor neurons. Most of those are in your spine. It’s not so much that those motor neurons are plastic; they’re adaptive. So motor control is this ability to use neurons for very adaptive tasks. Take a sip of water from that bottle right in front of you. Watch what you’re doing.
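The feedback loop Reardon describes can be caricatured in a few lines: the decoder reports a firing rate for one unit, the screen maps it to a cursor height, and the user adapts until the cursor sits in a target band. The "user" below is a stub that nudges its rate toward whatever reduces the on-screen error; every name and constant is made up for illustration and has nothing to do with CTRL-labs' actual system.

```python
def run_training(target=0.8, tol=0.05, max_ticks=200):
    """Closed-loop training caricature: show the user a cursor driven
    by their decoded firing rate, and let them adapt until the cursor
    holds the target. Returns the number of ticks until success, or
    None if the user never converges."""
    rate, step = 0.2, 0.05            # stub "user" starts far from target
    for tick in range(max_ticks):
        cursor = rate                 # feedback: cursor height = firing rate
        error = target - cursor
        if abs(error) <= tol:
            return tick               # target held -> "trained" in this many ticks
        rate += step if error > 0 else -step   # user adapts toward the target
    return None

ticks = run_training()
print(ticks)
```

The point of the sketch is the loop structure, not the numbers: with millisecond-latency feedback, the adaptation Reardon describes happens on the order of seconds.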

Intention capture — rather than going through devices to interact, CTRL-labs will take the electrical activity of the body and decode that directly, allowing us to use that high bandwidth information to interact with all output devices. [Watch Reardon’s full keynote at O’Reilly]

AMLG: Watch me spill it all over myself — 

TR: You’re taking a sip. Everything you just did with that bottle, you’ve never done before. You’ve never done that task. In fact you just did a complicated thing: you actually put it around the microphone and had to use one hand, then use the other hand, to take the cap off the bottle. You did all of that without thinking. There was no cognitive load involved in that. That bottle is different than any other bottle; it’s slippery, it’s got a certain temperature, the weight changes. Have you ever seen these robots try to pour water? It’s comical how difficult it is. You do it effortlessly, like you’re really good —

AMLG: Well I practiced a few times before we got here.

TR: Actually you did practice! The first year or two of your life, that’s all you were doing: practicing, to get ready for what you just did. Because when you’re born you can’t do that. You can’t control your hands, you can’t control your body. You actually do something called motor babbling, where you just shake your hands around and move your legs and wiggle your fingers, and you’re trying to create a map inside your brain of how your body works and to gain control. But gain flexible, adaptive control.

AMLG: That’s the natural training that babies do, which is sort of what you’re doing in terms of decoding?

TR: We are leveraging that same process you went through when you were a year to two years old to help you gain new skills that go beyond your muscles. So that was all about you learning how to control your muscles and do things. I want to emphasize: what you did again is more complex than anything else you do. It’s more complex than language, than math, than social skills. Of the eight billion people on earth that have a functioning nervous system, every single one of them, no matter what their IQ, can do it really well. That’s the part of the brain that we’re interfacing with: that ability to adapt in real time to a task, skillfully. That’s not plasticity in neuroscience. It’s adaptation.

AMLG: What does that mean in terms of the amount of decoding you’ve had to do? Because you’ve got a working demo. And I know that people have to train it for their own individual use, right?

Myo control attempts to understand what each of the 14 muscles in the arm is doing, then deconvolve the signal into individual channels that map to muscles. If they can build an accurate online map, CTRL-labs believes there is no reason to have a keyboard or mouse


TR: In Myo control it works for anybody right out of the box. With Neuro control it adjusts to you. In fact the model that’s built is custom to you; it wouldn’t work on anybody else, it wouldn’t work on your twin, because your twin would train it differently. DNA is not determinative of your nervous output. What you have to realize is we haven’t decoded the brain — there are 15 billion neurons there. What we’ve done is created a very reduced but highly functional piece of hardware that listens to neurons in the spinal cord and gives you feedback that allows you to individually control those neurons.

When you think about the control that you exploit every day, it’s built up of two kinds of things. What we call continuous control — think of that as a joystick: left and right, how much left, how much right. Those are continuous controls. Then we have discrete controls, or symbols. Think of that as button pushing or typing. Every single control problem you face, and that’s what your day is filled with, whether taking a sip of water, walking down the street, getting in a car, driving a car. All of the control problems reduce to some combination of continuous control (swiping) and discrete control (button pushing). We have this ability to get you to train these synthetic forms of up, down, left, right dimensions, if you will, that allow you to control things without moving, but then allow you to move beyond the five fingers on your hand and get access to, say, 30 virtual fingers. What does that open up? Well, think about everything you control.
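The two primitives described above (continuous "joystick" channels and discrete "button" symbols) suggest a tiny event model: any decoded output, five real fingers or thirty virtual ones, can be expressed as a set of such channels. This is a hypothetical sketch, not CTRL-labs' actual API; all names are invented.

```python
from dataclasses import dataclass

@dataclass
class Continuous:
    """A continuous ("joystick") channel: a named axis in [-1, 1]."""
    name: str
    value: float  # -1.0 = full left/down .. 1.0 = full right/up
    def __post_init__(self):
        self.value = max(-1.0, min(1.0, self.value))  # clamp to range

@dataclass(frozen=True)
class Discrete:
    """A discrete ("button") channel: a named symbol event."""
    name: str
    symbol: str   # e.g. "press", "release", a typed character

def route(events):
    """Dispatch a mixed stream of control events: continuous channels
    keep only their latest value; discrete events are queued in order."""
    axes, buttons = {}, []
    for ev in events:
        if isinstance(ev, Continuous):
            axes[ev.name] = ev.value
        else:
            buttons.append((ev.name, ev.symbol))
    return axes, buttons

axes, buttons = route([
    Continuous("cursor_x", 0.4),
    Continuous("cursor_x", 1.7),             # out of range -> clamped to 1.0
    Discrete("virtual_finger_23", "press"),  # one of the "30 virtual fingers"
])
print(axes, buttons)
```

Swiping is just a time series on a `Continuous` channel; typing is a stream of `Discrete` symbols, which is why the two primitives cover the control problems listed above.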

AMLG: I’m picturing 30 virtual fingers right now — and I do want to get into VR, there are lots of forms one can take in there. The surprising thing to me in terms of target uses, and there are so many uses you can imagine for this in clinical populations, was that you didn’t start the company for clinical populations or motor pathologies, right? A lot of people have been working on bionics. I have a handicapped brother — I’ve been to his school and have seen the kids with all sorts of devices. They’re coming along, and obviously in the army they’ve been working on this. But you are not coming at it from that approach?

TR: Correct. We started the company almost ruthlessly focused on eight billion people. The market of eight billion, not the market of a million or 10 million who have motor pathologies. In some sense this is the part that’s informed by my Microsoft time. In the academy, when you’re doing neuroscience research, almost everybody focuses on pathologies: things that break in the nervous system and what we can do to help people and work around them. They’ll work on Parkinson’s or Alzheimer’s or ALS for motor pathologies. What commercial companies get to do is bring new kinds of deep technology to mass markets, which then feeds back to clinical communities. By pushing and making this stuff work at scale across eight billion people, the problems that we have to solve will ultimately be the same problems that people who want to bring relief to those with motor pathologies need to solve. If you do it at scale, lots of things fall out that wouldn’t have otherwise.

AMLG: It’s fascinating because you’re starting with, we’re gonna go big. You’ve said you would like your devices, whether sold by you or by partners, to be on a million people within three or four years. A lot of things start in the realm of science but don’t get commercialized on a large scale. When you launched Internet Explorer, at one point it had 95 percent market share, so you’ve touched that many people before —

Internet Explorer browser market share, 2002–2016

TR: Yes and it’s addicting, when you’ve been able to put software into a billion plus hands. That’s the kind of scale that you want to work on and that’s the kind of impact that I want to have and the team wants to have.

AMLG: How do you get something like this to that scale?

TR: One user at a time. You pick segments in which there are serious problems to solve, proximal problems. You’ve talked about VR. We think we solve a key problem in virtual reality, augmented reality, mixed reality. These emerging, immersive computing paradigms. No immersive computing technology so far has won. There is no default. There’s no standard. Nobody’s pointing at anything and saying, “oh, I can already see how that’s the one that’s going to win.” It’s not Oculus, it’s not Microsoft HoloLens, it’s not Magic Leap. But the investment is still happening, and we’re now years into this new round of virtual realities. The investment is happening because people still have a hunger for it. We know we want immersive computing to work. What’s not working? It’s kind of obvious. We designed all of these experiences to get data, images, sounds into you. The human input problem. These immersive technologies do breakthrough work to change human input. But they’ve done nothing so far to change human output. That’s where we come in. You can’t have a successful immersive computing platform without solving the human output problem: how do I control this? How do I express my intentions? How do I express language inside of virtual reality? Am I typing or am I not typing?

AMLG: Everyone’s doing the iPad right now. You go into VR and you’re holding a thing that’s mimicking the real world.

TR: What we call skeuomorphic experiences that mimic real life, and that’s terrible. The first developer kits for the Oculus Rift you know shipped with an Xbox controller. Oh my god is that dumb. There’s a myth that the only way to create a new technology is to make sure it has a deep bridge to the past. I call bullshit on that. We’ve been stuck in that model and it’s one of the diseases of the venture world, “we’re Uber for neurons” and it’s Uber for this or that.

AMLG: Well ironically people are afraid to take risks in venture. If you suddenly design a new way of communicating or doing human output it’s, “that’s pretty risky, it should look more like the last thing.”

TR: I’m deeply thankful to the firms that stepped up to fund us, Spark and Matrix and most recently Lux and Google Ventures. We’ve got venture folks who want to look around the bend and make a big bet on a big future.



iPhone 14 – Things we know so far



Rumors about the iPhone 14 started popping up online before the iPhone 13 was announced — and though we don’t yet know what Apple has planned, there’s enough info floating around to speculate. The company is rumored to be working on a foldable iPhone, at least based on certain patents, but there’s no guarantee a folding model is in the pipeline at this time. In all likelihood, the next iteration of the iPhone will be called the iPhone 14 and it’ll stick to the trusted form factor from previous years.

Based on the rumors coming in, the iPhone 14 is likely to ditch the notch (which is thinner on the iPhone 13 lineup) and it may not have a camera bump on the rear. These are the two most interesting rumors among the many expectations worth watching.

iPhone 14 display and body


The iPhone 14 is likely to share the same flat-edged design as the iPhone 13 with some changes in the display and body. The iPhone 13 brought a 120Hz ProMotion display to the iPhone Pro variants and rumors suggest that all four iPhone 14 models may come with this display tech. This isn’t guaranteed, however, as The Elec reported that ProMotion will be a Pro model exclusive and the standard iPhone 14 could feature an LTPS OLED display without the 120Hz ProMotion option.

There is a twist to the possible models the iPhone 14 could arrive in, however. Reportedly, the smaller 5.4-inch iPhone 13 mini will not have a successor in 2022; Apple may leave this size out of the equation and focus on larger-screen options. Instead of the mini, Apple may release the iPhone 14 Max with a 6.7-inch display – same as that of the iPhone 13 Pro Max – delivering a bigger screen model with a possibly larger battery, as well.

This means there would be four models in the iPhone 14 lineup but without the smaller screen option available in the iPhone 12 and 13 product lines. Based on the leaks, the upcoming product line may feature a 6.1-inch iPhone 14 and 14 Pro, as well as a larger 6.7-inch iPhone 14 Max and 14 Pro Max.

When it comes to the body design, meanwhile, prominent leaker Jon Prosser believes Apple will eliminate the camera bulge on the back of the iPhone 14 by using a thicker chassis. Some allegedly leaked images of the iPhone 14 Pro show a design resemblance to the iPhone 4 right from its front and back to the flat sides and the circular volume buttons.

Additionally, rumors claim the iPhone 14 will feature a titanium alloy chassis, among them a JP Morgan Chase report noted by Patently Apple. Titanium, which is stronger and more scratch-resistant than aluminum, has already been introduced on the Apple Watch and may finally arrive on the iPhone line next year.

The notchless design

Image: Jon Prosser

If there is one thing that Apple fans want the iPhone to do away with, it’s the notch. The iPhone 13 was rumored to ditch this annoying design choice, but ultimately it remained — though its overall size was trimmed a bit from previous models.

With the launch of the iPhone 14, Apple is likely to herald the future of notch-less design, at least with the Pro models. Removal of the notch doesn’t mean a change in functionality, mind. Apple analyst Ming-Chi Kuo believes Apple will ditch the notch and replace it with a hole-punch selfie camera instead.

The facial scanning tech, meanwhile, will likely find a new home. Face ID on the iPhone 14, at least according to the rumors, will be placed under the display. Apple is believed to be working on under-display Face ID, a claim that has been echoed by multiple sources, including Mark Gurman of Bloomberg.

iPhone 14 camera

Image: Jon Prosser

A new iPhone is always launched with better camera technology and the iPhone 14 isn’t likely to be an exception. This model will reportedly feature a tweaked appearance with a bump-free rear camera module — it’ll be built flush into the glass body, the leaks allege.

Analyst Kuo believes the iPhone 14 Pro models could beef up the main camera to 48 megapixels. Also rumored are a periscope zoom lens and 8K video recording. Given how other OEMs are approaching the camera space, Apple could follow suit with a quad-camera array on the Pro models and a triple camera on the standard iPhone 14 models.

A powerful chip


Each new iPhone comes with a more powerful and efficient processor. With that in mind, the A16 Bionic chipset is expected to power the iPhone 14. This will reportedly be built on either a 3nm or 4nm process by TSMC. Initially, it was believed that the chip would be based on a 3nm process, but there’s reason to believe that plan may have changed.

TSMC has talked about a shortage of 3nm chips, which means the iPhone 14 could feature a chip built on the 4nm process. This would offer certain advantages over the 5nm A15 chip in the iPhone 13 (via Tom’s Guide).

Other notable possibilities



The iPhone 12 helped make 5G on smartphones mainstream. With the iPhone 13, it was all about network speed. Consumers have even bigger expectations for the iPhone 14. Apple could take on the challenge by utilizing the first 10-gigabit 5G modem – Snapdragon X65 – to offer improvements in both speed and connectivity.

Though the European Union has proposed mandatory USB-C on all devices – including iPhones – Apple is unlikely to adopt it. Instead, rumors indicate the company may eliminate the Lightning port entirely in favor of MagSafe charging.

With user safety in mind, Apple is also reportedly working on a crash detection feature for the iPhone 14. This alleged feature would detect an accident using the phone’s sensors and accelerometer, then instantly dial emergency services for help (via WSJ).

Final thoughts

Apple made some hearts skip a beat when it launched the new MacBook Pro models at almost twice the cost of their predecessors. Something similar is likely not in the works for the new iPhone, but things could change by the time the iPhone 14 is actually launched. The iPhone 14 lineup is expected to launch in September 2022 based on Apple’s history, but that may depend heavily on the wider industry’s status at that time and whether chip shortages remain an issue.


Microsoft’s DNA storage research just hit a huge milestone



Microsoft has detailed a major breakthrough in its work on synthetic DNA storage, specifically on improving data throughput. The proof-of-concept is the subject of a new study from Microsoft Research and a team at the University of Washington’s Molecular Information Systems Laboratory (MISL), paving the way for a future in which the world’s data is stored on lab-made DNA, not tapes and hard drives.

Image: Billion Photos/Shutterstock

Old tech still dominates

Microsoft has spent years working on synthetic DNA data storage, a promising technology that aims to address growing storage demands. The company paints an elaborate, if not mind-boggling, picture centered around present-day and future data needs — the huge quantity of information that already exists, the amount produced every day, and growth predictions over the next two years.

Assuming those predictions are accurate, there will be approximately 8.9 zettabytes of data in storage around the world by 2024, according to IDC. That works out to around 9 million petabytes of data, which is still more than the average person can visualize. Microsoft translates that figure into a more relatable context: a single zettabyte would be equivalent to installing Windows 11 on more than 15 billion computers.
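Those figures are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes decimal SI units (1 ZB = 10^21 bytes), which matches the conversion the article quotes:

```python
# Back-of-the-envelope check of the figures above, assuming
# decimal SI units (1 ZB = 10**21 bytes, 1 PB = 10**15 bytes).
ZB = 10**21
PB = 10**15

total_bytes = 8.9 * ZB
# 8.9 zettabytes expressed in millions of petabytes:
print(total_bytes / PB / 10**6)   # ≈ 8.9, i.e. about 9 million PB

# One zettabyte spread across 15 billion Windows 11 installs
# implies roughly 67 GB per install, a plausible install size.
print(ZB / (15 * 10**9) / 10**9)  # ≈ 66.7 GB per install
```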


Multiple types of data storage are commonly used, and though they seem positively archaic at this point, tape cartridges remain the most appealing commercial option due to their density (via IBM).

Magnetic tape has been around for several decades and offers some distinct benefits for companies that produce vast amounts of data: it helps keep information secured away from hackers and can pack hundreds of terabytes into a small form factor. IBM says one tape cartridge utilizing its latest tech has a 580TB capacity, which would require more than three-quarters of a million CDs to store.

Using tape cartridges for data archival is a practice that will stick around for years, but there’s strong demand for a modern alternative that offers even greater density while eliminating many of the old tech’s problems. That, Microsoft says, is where synthetic DNA data storage comes in.

Why DNA?

Tape cartridges need to be rewritten every three or so decades at most, which is a short period of time when it comes to long-term data archiving. Synthetic DNA, on the other hand, is far more durable, Microsoft says, with the potential to preserve data for thousands of years. On top of that, synthetic DNA will likely drastically reduce the environmental impact of data centers, with Microsoft citing evidence that indicates lower water and energy use, as well as decreased greenhouse gas emissions.

Synthetic DNA data storage can only be a viable option if certain big hurdles are addressed, however. The technology is currently limited by low data throughput, specifically the rate at which data can be written. This, Microsoft notes, is a big stumbling block to large-scale synthetic DNA storage, not to mention the costs associated with the tech at this stage.

DNA storage graph. Image: Microsoft Research

The newly announced breakthrough revolves around throughput, presenting a proof-of-concept molecular controller. The researchers describe this innovation as a “tiny DNA storage writing mechanism on a chip,” which drastically improves how tightly DNA-synthesis spots are packed. The result is proof that higher levels of writing throughput are possible.

At its core, synthetic DNA storage involves moving data back and forth from molecules to bits. Microsoft explains that two things are critical for making DNA a viable commercial-scale storage option:

The first is translating digital bits (ones and zeros) into strands of synthetic DNA that represent those bits, using encoding software and a DNA synthesizer. The second is reading and decoding the information back into digital form, using a DNA sequencer and decoding software.

The company goes into extensive detail about the new development and the wider processes involved in synthetic DNA storage in a new blog post. Storing data in DNA requires the information (in the form of digital bits) to be embedded in a DNA sequence’s A/C/T/G bases. The DNA chain is then synthesized, which typically involves a photochemical process.
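As a rough illustration of that bit-to-base mapping, here is a minimal Python sketch. The 2-bits-per-base scheme is an illustrative assumption; real systems like the one Microsoft describes also add error correction and avoid problematic sequences (such as long runs of the same base), which this omits:

```python
# Illustrative 2-bits-per-base mapping (an assumption, not
# Microsoft's actual encoding): 00→A, 01→C, 10→G, 11→T.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Translate digital bits into a strand of DNA bases."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand of A/C/G/T."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

# Round trip: bits in, bases out, bits recovered.
print(encode(b"DNA"))                 # a 12-base strand
print(decode(encode(b"DNA")) == b"DNA")
```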

Microsoft goes on to explain that electrochemical DNA synthesis side-steps some of the limitations inherent to photochemistry; it involves an array of electrodes acting as anodes and cathodes. The new work details a synthesis method that successfully increased the rate at which data is written to synthetic DNA, boosting throughput and, by extension, decreasing the costs associated with synthesizing the DNA.

Though synthetic DNA storage isn’t yet ready to replace magnetic tape, Microsoft sees this latest development as a key step toward that reality. In its blog post detailing the study, Microsoft explained:

A natural next step is to embed digital logic in the chip to allow individual control of millions of electrode spots to write kilobytes per second of data in DNA. From there, we foresee the technology reaching arrays containing billions of electrodes capable of storing megabytes per second of data in DNA. This will bring DNA data storage performance and cost significantly closer to tape.


Apple Watch Series 7 Review



Nobody can deny that the Apple Watch won the smartwatch wars, and the latest Apple Watch Series 7 only extends that lead. A collection of endearing enhancements rather than the all-out reinvention that some expected, 2021’s version blends a bigger display with the improvements of watchOS 8, for a result that, though predictable, is no less impressive for it.

Both watch and display are slightly larger, though the former’s mild growth is not something you’re going to notice day to day. The latter, though, is more obvious. The 41mm (from $399) and 45mm (from $429) versions have a screen that’s nearly 20-percent bigger than on the previous-generation Apple Watch. It’s still a beautiful OLED panel, crisp and easy to read, and Apple says the always-on mode – when the smartwatch is in standby rather than raised up – is brighter than before. Just how much brighter, a new algorithm decides.

Honestly, I’m not sure the Apple Watch display needed to be any bigger. Not for my (corrected) eyesight, anyway, though I’ll concede that if you typically wear reading glasses then the larger fonts of the Series 7 probably are an improvement. Still, it’s worth noting that you’ve been able to increase font size and weight in watchOS for some time now.

What’s turned out to make a bigger difference is, quite literally, the edge cases. The Apple Watch’s screen now continues under the curved sides of its cover glass; viewed off-angle, it gives a fascinating three-dimensional effect, akin to stacked physical complications on a mechanical watch face.

Unless you’re the sort of – brave – person who wears two watches at once, one on each wrist, most of us make a singular decision about what graces our arm. I have a few “nice” mechanical watches already, but I choose to wear the Apple Watch for a user experience the others can’t deliver. The trade-off is that the digital watch, with its accommodations to functionality, has never quite felt like a piece of charming jewelry in the same way that a traditional timepiece might.

Call me crazy, but the way the Apple Watch Series 7’s screen melds so interestingly into the curvature of the glass feels like a nod back to one of the key lures of old-school watches. Something that’s not necessarily a functional decision, but which elevates the smartwatch nonetheless. No, the dedicated Rolex or IWC owner may still not find that enough to make the switch, but it’s enough to have caught my eye when I glance down at my angled wrist.

The rest of the hardware feels very familiar. Apple says the front crystal is tougher than before, and the whole watch now has IP6X certification for dust resistance along with WR50 water resistance. It means you can take it swimming and wear it without concern on the beach, though I’d still – as with any watch – be cautious about banging it against hard objects.

There are aluminum, stainless steel, and titanium cases to choose from, in a variety of colors depending on the metal. Factor in the growing array of Apple’s own and third-party bands, and you can feasibly take your Apple Watch from the gym to the office to a fancy wedding without it looking out of place. I’m rather partial to the blue aluminum of my review unit, though the green version is striking, too.

Battery life is about the same – 18 hours of typical use – but there’s a new charger included in the box. That promises up to 33-percent speedier recharging, though only with the Series 7, since it also relies on changes Apple made inside the watch itself. We’re still not quite at supercharge levels yet, but it did trim down a top-up when I forgot to recharge the Apple Watch overnight as I normally would. If you’re in the habit of tracking sleep with the wearable then the improvements are likely even more useful; in the time it would take to have a leisurely shower, you could more than likely add sufficient juice for the rest of the day.

If there’s one place the larger screen pays dividends, it’s Apple’s addition of an on-screen QWERTY keyboard to watchOS. Until now, Siri and voice-to-text dictation were the primary text input methods for the Apple Watch, bar a handful of canned responses to messages and the like. Dictation works okay, but I doubt I’m the only person who feels self-conscious talking into their watch like a wannabe Dick Tracy. Or you could scribble a letter at a time, a quieter if more time-consuming system.

The on-screen keyboard offers another approach. It uses the same autocorrect as on iPhone, along with auto-complete, to minimize the amount of tapping and swiping you’ll need to do. You can peck at each letter, or drag your fingertip around and let the mighty algorithm do its decoding. Most of the time, I’ve found, it’s been accurate.

You’re not going to be sending lengthy emails or writing term papers this way, but it’s another welcome step toward the Apple Watch feeling like a standalone device in its own right, rather than an adjunct to the iPhone. It’s worth noting that only the Series 7 gets the QWERTY keyboard, one of a handful of watchOS 8 features exclusive to the newer, larger model.

One of the reasons I wear an Apple Watch daily is fitness tracking. I’m not a fan of working out, and so watchOS’ needling reminders to close my move, stand, and exercise rings are one of those things that I hate-appreciate. The array of sensors hasn’t really changed from last year’s watch: blood oxygen saturation, which is very dependent on where the Apple Watch is positioned on your wrist; heart rate tracking; ECG for signs of irregular heart rhythm; and an always-on altimeter that tracks elevation. Tempting as it is to think of the Apple Watch as a mini doctor on your wrist, though, it’s not a medical device.

Improvements in watchOS have made cycling tracking more accurate, Apple says, and better at figuring out just how much effort you’re actually putting in if you’ve got an e-bike. Fall detection should handle falls while cycling more intelligently, too. Since I’m usually clipped into a Peloton instead, however, I haven’t noticed those improvements in daily life.

Similarly, if I had one of the latest BMWs with support for digital key, I could use the Apple Watch Series 7 and its U1 chip to unlock the car when I got close. Sadly I do not, though Apple does say it’s working with other automakers on implementing the technology. Given the rate of change of the car industry versus the tech world, mind, you could probably wait for the Apple Watch Series 8 or 9 before there’s a much bigger choice of vehicles.

Attempting to aid your patience there is the new Mindfulness app. It absorbs the functionality of the old Breathe app – which would periodically, infuriatingly, remind you to breathe – and adds a Reflect mode, which encourages 1-5 minutes of meditation. I could probably do with taking time out for that as much as any other middle-aged man who spends too much of the day online, but there’s something about the Apple Watch’s prompts that pumps my blood pressure instead. You can, of course, turn those notifications off.

Apple Watch Series 7 Verdict

On the one hand, the Apple Watch Series 7 is another incremental upgrade. If you already have a Series 6 on your wrist, or even a Series 5, you could realistically sit 2021’s version out and simply upgrade watchOS for many of the newer improvements. All the same, it’s a testament to just how good the Apple Watch was, and is, that Apple hasn’t really needed to reinvent the wheel in order to maintain its lead. I know a fair few people who stick with their iPhone predominantly because they don’t want to give up their Apple Watch, and I can’t say I blame them.

If you’re in that group, then the new watchOS is probably the best place to start. Those yet to dive into Apple Watch ownership, however, should begin their journey here. Apple may not have made vast changes to this generation of wearable, but the Apple Watch is still the best smartwatch, and the Apple Watch Series 7 is the best of the best.
