Gadgets

Kiwi’s food delivery bots are rolling out to 12 more colleges – TechCrunch

If you’re a student at UC Berkeley, the diminutive rolling robots from Kiwi are probably a familiar sight by now, trundling along with a burrito inside to deliver to a dorm or apartment building. Now students at a dozen more campuses will be able to join this great, lazy future of robotic delivery as Kiwi expands to them with a clever student-run model.

Speaking recently at TechCrunch’s Robotics + AI Session at the Berkeley campus, Kiwi’s Felipe Chavez and Sasha Iatsenia discussed the success of their burgeoning business and the way they planned to take it national.

In case you’re not aware of the Kiwi model, it’s basically this: when you place an order online with a participating restaurant, you have the option of delivery via Kiwi. If you choose it, one of the company’s fleet of knee-high robots with insulated, locking storage compartments will swing by the restaurant, where your order is placed inside, and then bring it to your front door (or as close as it can reasonably get). You can even watch the last leg live from the robot’s perspective as it rolls up to your place.

The robots are what Kiwi calls “semi-autonomous.” This means that although they can navigate most sidewalks and avoid pedestrians, each has a human monitoring it and setting waypoints for it to follow, on average every five seconds. Iatsenia told me that they’d tried going full autonomous and that it worked… most of the time. But most of the time isn’t good enough for a commercial service, so they’ve got humans in the loop. They’re working on improving autonomy, but for now this is how it is.
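That supervisory loop (an operator dropping a waypoint every few seconds while the robot drives itself between them) can be sketched in a few lines. This is a toy model with invented names and numbers, not Kiwi’s actual software:

```python
import math

class SemiAutonomousBot:
    """Toy model of Kiwi-style supervision: a remote operator drops a
    waypoint roughly every few seconds; the robot handles the local
    driving between waypoints on its own."""

    def __init__(self, x=0.0, y=0.0, speed=1.0):
        self.x, self.y, self.speed = x, y, speed
        self.waypoint = None

    def set_waypoint(self, x, y):
        # Operator input, arriving on average every five seconds.
        self.waypoint = (x, y)

    def step(self, dt):
        # Autonomous motion between operator inputs.
        if self.waypoint is None:
            return
        wx, wy = self.waypoint
        dx, dy = wx - self.x, wy - self.y
        dist = math.hypot(dx, dy)
        if dist <= self.speed * dt:
            self.x, self.y = wx, wy
            self.waypoint = None  # reached; wait for the next waypoint
        else:
            self.x += self.speed * dt * dx / dist
            self.y += self.speed * dt * dy / dist

bot = SemiAutonomousBot()
bot.set_waypoint(3.0, 4.0)      # a point 5 m away, at 1 m/s
for _ in range(6):
    bot.step(1.0)               # six one-second ticks
print(round(bot.x, 2), round(bot.y, 2))  # prints: 3.0 4.0
```

The point of the split is that the operator supplies only coarse intent while the robot handles the fine-grained motion, which is what lets one person in the loop supervise several robots at once.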

That the robots are being controlled in some fashion by a team of people in Colombia (from where the co-founders hail) does take a considerable amount of the futurism out of this endeavor, but on reflection it’s kind of a natural evolution of the existing delivery infrastructure. After all, someone has to drive the car that brings you your food, as well. And in reality, most AI is operated or informed directly or indirectly by actual people.

That those drivers are in South America operating multiple vehicles at a time is a technological advance over your average delivery vehicle — though it must be said that there is an unsavory air of offshoring labor to save money on wages. That said, few people shed tears over the wages earned by the Chinese assemblers who put together our smartphones and laptops, or the garbage pickers who separate your poorly sorted recycling. The global labor economy is a complicated one, and the company is making jobs in the place it was at least partly born.

Whatever the method, Kiwi has traction: it’s done more than 35,000 deliveries at an increasing rate since it started two years ago (now over 10,000 per month), and the model seems to have proven itself. Customers are happy, they order more than ever once they get the app, and there are fewer and fewer incidents where a robot is kicked over or, you know, catches on fire. Notably, the founders said onstage, the community has really adopted the little vehicles, and should one overturn or be otherwise interfered with, it’s often set back on its way soon after by a passerby.

Iatsenia and Chavez think the model is ready to push out to other campuses, where a similar effort will have to take place — but rather than do it themselves by raising millions and hiring staff all over the country, they’re trusting the robotics-loving student groups at other universities to help out.

For a small and low-cash startup like Kiwi, it would be risky to overextend by taking on a major round and using that to scale up. They started as robotics enthusiasts looking to bring something like this to their campus, so why can’t they help others do the same?

So the team looked at dozens of universities, narrowing them down by factors important to robotic delivery: layout, density, commercial corridors, demographics and so on. Ultimately they arrived at the following list:

  • Northern Illinois University
  • University of Oklahoma
  • Purdue University
  • Texas A&M
  • Parsons
  • Cornell
  • East Tennessee State University
  • University of Nebraska-Lincoln
  • Stanford
  • Harvard
  • NYU
  • Rutgers

What they’re doing is reaching out to robotics clubs and student groups at those colleges to see who wants to take partial ownership of Kiwi operations there. Maintenance and deployment would still be handled by Berkeley students, but the local clubs would go through a certification process and then handle the day-to-day work, like righting a capsized bot and resolving on-site issues with customers and restaurants.

“We are exploring several options to work with students down the road, including rev share,” Iatsenia told me. “It depends on the campus.”

So far they’ve sent 40 robots to the 12 campuses listed and will roll out operations as each program moves forward on its own schedule. If your school isn’t on the list, don’t worry: if this goes the way Kiwi plans, it sounds like you can expect further expansion soon.

Shipping times for Apple’s $19 Polishing Cloth slip to late November

If you wanted to polish your Apple products, bad news: you’ll need to wait at least a month to get Apple’s Polishing Cloth. (Image: Apple)

Between ongoing supply chain issues, chip shortages, and pent-up demand, Apple’s new MacBook Pros were always going to be hard to get. They’ve been up for preorder for less than 24 hours, and if you order one now, you probably won’t get it before November or December.

But the new laptops aren’t Apple’s only in-demand product: The shipping times for Apple’s $19 microfiber Polishing Cloth have also already slipped back into mid to late November. Unfortunately, this means that your compatible iPhones, iPads, Macs, Apple Watches, and iPods will need to remain unpolished for at least a month. It’s unclear whether the delays are being caused by low supply, overwhelming demand, or some combination of both.

The Polishing Cloth, folded over in a visually appealing manner. Without testing, we can’t say whether the Apple logo is cosmetic or if it meaningfully improves the polishing experience. (Image: Apple)

The Polishing Cloth boasts support for an impressive range of Apple products, which Apple lists out in detail on the Cloth’s product page. The list includes iPhones as old as 2014’s iPhone 6, every generation of Apple Watch, and even the old iPod nano and iPod shuffle. Without testing, however, we can’t confirm whether the Polishing Cloth will adequately polish older unsupported devices or non-Apple gadgets like Android phones or the Nintendo Switch.

The Polishing Cloth isn’t a new Apple product—it has shipped with the company’s $5,000 Pro Display XDR since that monitor was released back in 2019. But this is the first time that Apple has offered its best, most premium polishing experience to the users of its other devices.

Note: Ars Technica may earn compensation for sales from links on this post through affiliate programs.

Listing image by Apple

The new MacBook Pro seems to have an HDMI 2.0 port, not 2.1

Farthest right: the HDMI port on the MacBook Pro. (Image: Lee Hutchinson)

The newly announced 14-inch and 16-inch MacBook Pro models have HDMI ports, but they have a limitation that could be frustrating for many users over the long term, according to Apple’s specs page for both machines and as noted by Paul Haddad on Twitter.

The page says the HDMI port has “support for one display with up to 4K resolution at 60 Hz.” That means users with 4K displays at 120 Hz (or less likely, 8K displays at 60 Hz) won’t be able to tap the full capability of those displays through this port. It implies limited throughput associated with an HDMI 2.0 port instead of the most recent HDMI 2.1 standard, though there are other possible explanations for the limitation besides the port itself, and we don’t yet know which best describes the situation.
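A back-of-the-envelope bandwidth calculation shows why the 2.0-versus-2.1 distinction matters. Ignoring blanking intervals and link-encoding overhead, the raw pixel data for 4K at 120 Hz already exceeds HDMI 2.0’s 18 Gbps link rate, while HDMI 2.1’s 48 Gbps handles it comfortably. The function below is a rough lower bound, not how HDMI actually budgets bandwidth:

```python
def raw_video_bitrate(width, height, fps, bits_per_pixel=24):
    """Uncompressed pixel data rate in bits/s (24 bpp = 8-bit RGB),
    ignoring blanking and encoding overhead: a loose lower bound on
    the link bandwidth a display mode needs."""
    return width * height * fps * bits_per_pixel

HDMI_2_0_GBPS = 18.0  # max TMDS link rate for HDMI 2.0
HDMI_2_1_GBPS = 48.0  # max FRL link rate for HDMI 2.1

for fps in (60, 120):
    gbps = raw_video_bitrate(3840, 2160, fps) / 1e9
    print(f"4K@{fps} Hz needs at least ~{gbps:.1f} Gbps of pixel data")
```

4K at 60 Hz comes out to roughly 11.9 Gbps, which fits within HDMI 2.0; 4K at 120 Hz needs roughly 23.9 Gbps, which does not, and the gap only widens once you add chroma, HDR bit depths, and encoding overhead.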

There aren’t many monitors and TVs that can do 4K at 120 Hz, and those that do are expensive. But they exist, and they’re only going to get more common. In fact, it seems a safe bet that within a few years, 4K@120 Hz will be approaching an industry standard.

So while this is an edge-case problem for only certain users with ultra-high-end displays right now, that won’t always be the case. The limitation could become frustrating for a much broader range of users sometime in the lifetime of a new MacBook Pro purchased today.

Of course, 4K@120 Hz is still achievable via the Thunderbolt port, and there are Thunderbolt-to-HDMI and Thunderbolt-to-DisplayPort adapters that will help users sidestep the issue. And the new MacBook Pro itself has a variable refresh rate screen that often refreshes at 120 Hz.

So if you want to connect the new MacBook Pro to a high-end display, no one’s stopping you. It just might cost more money to achieve, and the HDMI port might feel vestigial and useless to a lot of people in four or five years.

Before this week’s update to the MacBook Pro line, Apple went several years without offering HDMI ports on MacBook Pro computers at all, instead using only Thunderbolt. This redesign also saw Apple reintroduce the SD card slot, which was omitted in the last major MacBook Pro redesign in 2016.

The “Google Silicon” team gives us a tour of the Pixel 6’s Tensor SoC

A promo image for the Google Tensor SoC. (Image: Google)

The Pixel 6 is official, with a wild new camera design, incredible pricing, and the new Android 12 OS. The headline component of the device has to be the Google Tensor “system on chip” (SoC), however. This is Google’s first main SoC in a smartphone, and the chip has a unique CPU core configuration and a strong focus on AI capabilities.

Since when is Google a chip designer, though? What are the goals of the Tensor SoC? Why was it designed in its unique way? To get some answers, we sat down with members of the “Google Silicon” team—a name I don’t think we’ve heard before.

Google Silicon is the group responsible for Google’s mobile chips. The team designed the Titan M security chips in the Pixel 3 and later, along with the Pixel Visual Core in the Pixel 2 and 3. The group has been working on main SoC development for three or four years, but it remains separate from the Cloud team’s silicon work on things like YouTube transcoding chips and Cloud TPUs.

Phil Carmack is the vice president and general manager of Google Silicon, and Monika Gupta is the senior director on the team. Both were nice enough to tell us a bit more about Google’s secretive chip.

Most mobile SoC vendors license their CPU cores from Arm, which also offers some (optional) guidelines on how to design a chip around those cores. And, apart from Apple, most vendors stick pretty closely to these guidelines. This year, the most common design is a chip with one big Arm Cortex-X1 core, three medium Cortex-A78 cores, and four slower, lower-power Cortex-A55 cores for background processing.

Now wrap your mind around what Google is doing with the Google Tensor: the chip still has four A55s for the small cores, but it has two Arm Cortex-X1 CPUs at 2.8 GHz to handle foreground processing duties.

For “medium” cores, we get two 2.25 GHz A76 CPUs. (That’s A76, not the A78 everyone else is using—these A76s were the “big” CPU cores from last year.) When Arm introduced the A78 design, it said that the core—on a 5nm process—offered 20 percent more sustained performance in the same thermal envelope as the 7nm A76. Google is now using the A76 design on a 5nm process, so, going by Arm’s numbers, Google’s A76 should put out less heat than an A78 would. Google is basically spending more of its thermal budget on the two big cores and less on the medium cores.
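Laid side by side, the two cluster layouts described above look like this. The core counts come from the article; the snippet just tabulates them for comparison:

```python
# Cluster layouts described above: the common 2021 flagship design
# versus Google Tensor. Counts from the article; purely illustrative.
configs = {
    "typical 2021 flagship": [("Cortex-X1", 1), ("Cortex-A78", 3), ("Cortex-A55", 4)],
    "Google Tensor":         [("Cortex-X1", 2), ("Cortex-A76", 2), ("Cortex-A55", 4)],
}

for name, clusters in configs.items():
    layout = " + ".join(f"{count}x {core}" for core, count in clusters)
    total = sum(count for _, count in clusters)
    print(f"{name}: {layout} ({total} cores)")
```

Both are eight-core designs; the difference is entirely in how the thermal budget is split between the prime and medium tiers.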

So the first question for the Google Silicon team is: what’s up with this core layout?

Carmack’s explanation is that the dual-X1 architecture is a play for efficiency at “medium” workloads. “We focused a lot of our design effort on how the workload is allocated, how the energy is distributed across the chip, and how the processors come into play at various points in time,” Carmack said. “When a heavy workload comes in, Android tends to hit it hard, and that’s how we get responsiveness.”

This is referring to the “rush to sleep” behavior most mobile chipsets exhibit, where something like loading a webpage has everything thrown at it so the task can be done quickly and the device can return to a lower-power state quickly.

“When it’s a steady-state problem where, say, the CPU has a lighter load but it’s still modestly significant, you’ll have the dual X1s running, and at that performance level, that will be the most efficient,” Carmack said.

He gave a camera view as an example of a “medium” workload, saying that you “open up your camera and you have a live view and a lot of really interesting things are happening all at once. You’ve got imaging calculations. You’ve got rendering calculations. You’ve got ML [machine learning] calculations, because maybe Lens is on detecting images or whatever. During situations like that, you have a lot of computation, but it’s heterogeneous.”

A quick aside: “heterogeneous” here means using more bits of the SoC for compute than just the CPU, so in the case of Lens, that means CPU, GPU, ISP (the camera co-processor), and Google’s ML co-processor.

Carmack continued, “You might use the two X1s dialed down in frequency so they’re ultra-efficient, but they’re still at a workload that’s pretty heavy. A workload that you normally would have done with dual A76s, maxed out, is now barely tapping the gas with dual X1s.”
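Carmack’s three regimes (bursty work, steady medium load, light background load) can be sketched as a toy cluster-selection policy. This is an illustration only: the function, thresholds, and labels are invented for this sketch, not Google’s actual scheduler:

```python
def pick_cluster(load, burst):
    """Toy 'rush to sleep' policy: throw the big cores at bursty work
    so it finishes fast and the chip can sleep, run steady medium loads
    on the big cores dialed down in frequency, and leave light
    background work to the little cores. Thresholds are invented."""
    if burst or load > 0.8:
        return "2x Cortex-X1 @ 2.8 GHz"       # hit it hard, then sleep
    if load > 0.3:
        return "2x Cortex-X1, reduced clock"  # efficient steady state
    return "4x Cortex-A55"                    # background processing

print(pick_cluster(0.9, burst=True))    # e.g. loading a webpage
print(pick_cluster(0.5, burst=False))   # e.g. camera live view
print(pick_cluster(0.1, burst=False))   # idle background tasks
```

The interesting case is the middle branch: with two X1s available, a sustained medium workload that would max out a pair of A76s instead runs on underclocked big cores, which is where Carmack claims the efficiency win comes from.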

The camera is a great case study, since previous Pixel phones have failed at exactly this kind of task. The Pixel 5 and 5a both regularly overheat after three minutes of 4K recording. I’m not allowed to talk too much about this right now, but I did record a 20-minute 4K 60 fps video on a Pixel 6 with no overheating issues. (I got bored after 20 minutes.)

This is what the phone looks like, if you’re wondering. (Image: Google)

So, is Google pushing back on the idea that one big core is a good design? The idea of using one big core has only recently popped up in Arm chips, after all. We used to have four “big” cores and four “little” cores without any of this super-sized, single-core “prime” stuff.

“It all comes down to what you’re trying to accomplish,” Carmack said. “I’ll tell you where one big core versus two wins: when your goal is to win a single-threaded benchmark. You throw as many gates as possible at the one big core to win a single-threaded benchmark… If you want responsiveness, the quickest way to get that, and the most efficient way to get high-performance, is probably two big cores.”

Carmack warned that this “could evolve depending on how efficiency is mapped from one generation to the next,” but for the X1, Google claims that this design is better.

“The single-core performance is 80 percent faster than our previous generation; the GPU performance is 370 percent faster than our previous generation. I say that because people are going to ask that question, but to me, that’s not really the story,” Carmack explained. “I think the one thing you can take away from this part of the story is that although we’re a brand-new entry into the SoC space, we know how to make high-frequency, high-performance circuits that are dense, fast, and capable… Our implementation is rock solid in terms of frequencies, in terms of frequency per watt, all of that stuff. That’s not a reason to build an all-new Tensor SoC.”
