When a smartphone manufacturer reveals a new model to a captivated audience, what it’s trying to do is leverage the tools of fashion marketing to make subtle, and often semi-relevant, changes to its product line exciting and motivating. Look, the manufacturer beckons, we changed the way our corners are rounded, we relocated the button you don’t want to a place you won’t notice it, and we removed the one button you do want because, hey, it’s fashion!
Smartphones succeed or fail not because of the placement of their buttons or the smoothness of their corners, but as a result of how their operating platforms deliver services their users want. Windows Phone failed not because it was a bad phone (it wasn’t), but because it could not deliver the services users wanted, in the way they wanted them.
Service delivery is the make-or-break issue in the technology business. If your service fails to be both innovative and efficient — a pairing that’s much harder to achieve than it too often seems — it will fail in the market. Every successful technology product was built on a successful technology platform. The product that fails is the one whose platform was left behind when the service delivery model moved beyond it. Just ask BlackBerry.
Kubernetes is a service delivery engine. It takes a workload that produces a service, one engineered for people to use simply and methodically, and distributes it to the locations where that workload may run most efficiently. Just as importantly, Kubernetes now supports methods that make that workload more discoverable, both to the people whose applications are looking for it and to other workloads that may cooperate with it. As these methods are implemented, DNS — the system that resolves names to addresses on the Internet — may be rendered redundant, or even unnecessary.
The new definition of network automation
Brendan Burns, distinguished engineer at Microsoft and co-creator of Kubernetes, believes that developers of software and services will now begin paying attention to an ideal: the services everyone is building will need to play nicely with one another.
“I think a lot of what people are going to start automating is the ways in which services work together,” Burns told ZDNet. “Even just mundane things like access control — if you think about how you authenticate one service to another service, there’s a lot of very mechanistic stuff today in order to make that work, be it issuing and rolling certificates, or using an identity system. Those things are conceptually very easy. You say, ‘I want to have a new user named Scott, and I want him to be able to call this service.’ Actually putting that into an operable, managed system is not simple.
“That’s an example of the kind of stuff that’s required,” he continued, “but doesn’t get done because it’s too hard. Developers say, ‘Well, they’re all my systems, and we’re all friends, so we’re going to have one token and I’m not going to differentiate.’ And then somebody spins up a development mode test, they send it to production, and they take down production because they doubled the traffic on the production endpoint. Whereas if they’d had access controls to differentiate between production traffic and developer traffic, they could very easily shunt off that developer traffic.”
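In Kubernetes terms, the differentiation Burns describes can be expressed declaratively. The sketch below is a hypothetical NetworkPolicy (the service name and labels are illustrative, not from any system Burns described) that admits only pods labeled as production to a production endpoint, shunting off developer traffic at the platform layer:

```yaml
# Hypothetical policy: only pods labeled env=production may send
# traffic to the checkout pods; anything labeled env=dev is refused
# before it can double the load on the production endpoint.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: checkout-prod-only
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: checkout
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              env: production
```

Policies like this are the "mechanistic stuff" Burns refers to: conceptually simple, but only recently cheap enough to write down and automate.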
Burns’ example points to a problem with most network automation today, especially with a first-generation virtualization platform. Software developers need the means to test the efficacy of their services before making them available to general customers (“sending them to production”). Most organizations don’t have the resources to give developers their own fully isolated, scale-model networks with which to test their works in progress. So test traffic has to cohabit the same network as production traffic.
VMware has tried to implement a way to segregate network traffic by workload class, using a methodology it introduced called microsegmentation. Think of it as a system of software-based firewalls on the server side, enforcing access control policies and behavior management rules that apply to specifically identified services. Firewalls impose behavioral policies on communications systems that may not have “good behavior,” however that may be defined, built in — but they typically do so after the fact, once the services they marshal have already been deployed.
The more evolved system Burns envisions is one where rules of a sort can specify how the orchestrator should respond to these requests, fulfilling a role not unlike VMware’s microsegmentation. He points to Microsoft’s Azure Functions mechanism as a way of developing orchestrated responses to certain events, such as an increase in the size of an online storage bucket, or an incoming request for data. But he envisions less code, not more. The result would be an orchestration platform capable of moving a service, even while it’s running, to a part of the platform commensurate with the quantity and priority of the work it’s performing.
The culmination of Burns’ ideal system includes this concept of the service mesh. If you’re familiar with the idea of software-defined networking (SDN), you know that inside a data center, addresses can be applied to services and other workloads, not just servers and hardware. This is perhaps the catalyst for the entire containerization movement: the fact that a workload has its own address.
When the Internet first became the backbone of a commercial market, servers were given domains, and those domains were mapped to IP addresses. Those domain names typically identified the corporate owners of the servers, and subdomains identified the departments in charge of those servers. So addresses reflected the budgets of their corporate owners, not the work they did.
Up until recently, the destination point for a request from a service over an enterprise network happened to be the address of a virtual machine (VM) where that service was being hosted. Containerization changed that relationship. In enterprises where Kubernetes oversees this level of infrastructure, the orchestrator can direct that request toward the service itself. There may actually be many copies of that service running simultaneously, so this re-routing process now incorporates what older architectures still call load balancing.
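That mechanism is visible in a basic Kubernetes Service definition, sketched here with illustrative names rather than any particular deployment. Clients address the stable name; the orchestrator forwards each request to whichever replicas currently match the selector:

```yaml
# Clients call "inventory" at a stable virtual address; Kubernetes
# load-balances each request across every running pod whose labels
# match the selector, however many replicas exist at that moment.
apiVersion: v1
kind: Service
metadata:
  name: inventory
spec:
  selector:
    app: inventory
  ports:
    - port: 80         # port clients connect to
      targetPort: 8080 # port the pods actually listen on
```

The caller never learns, and never needs to learn, which VM or physical host is serving it at any given moment.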
What could replace DNS
The Domain Name System (DNS) of the Internet translates domain names — the names for the owners of network space — into the numeric addresses to which data packets are routed. Enterprises that conduct business and commerce online use these addresses as gateways: transfer points between the outside Internet and the inside of the data center. There, machines still have IP addresses, but they use a different logic than the system that supports the Web. In fact, many enterprise networks use overlays, which map one set of addresses onto another. The overlay map can be changed pretty much as necessary, enabling a system where a service may be reliably called using one address, and the request relayed to wherever that service happens to be today. This is one of the methods required to enable workloads to be relocated from one server to another, physical or virtual.
Using DNS to resolve which function belongs in what domain has always been a performance bottleneck. Containerization takes the first step in breaking that bottleneck. Service mesh takes a giant leap further. Because microservices are both highly portable and highly volatile, a service mesh employs active agents to locate where workloads have moved. Think of how the wireless telephone network must use logic to resolve where a customer’s device is located — logic the wireline network could never have employed — and you’ll get the basic idea.
Here’s where the revolution begins to do real damage to the old system. The way services on the Internet have traditionally worked required a sophisticated method of location called service discovery. (I’d compare it to a kind of telephone directory that had pages that were yellow, but I can’t just say “yellow pages” without potentially getting into a trademark dispute.) It was a way of leveraging DNS to resolve the issue of which IP address represents what service. In 2015, when containerization first caught fire, before the advent of Kubernetes, it seemed service discovery could be its ultimate, unresolvable bottleneck, the point where connecting the new world to the old world would prove impractical or even impossible.
As happens surprisingly frequently in the history of technology, service mesh architecture was created by a handful of different engineers simultaneously. At its outset, service mesh was a way for services distributed within a network to find each other and to make use of one another, especially so applications that essentially use the same library functions wouldn’t have to maintain duplicates of the same code. When a function inside a container has a dependency linking it to library code, that code need not be contained within the same unit at the same address — the service mesh can resolve dependencies such as this in real time. With Istio and other service mesh platforms, each service’s identity and access policies are maintained in an exclusive service registry, which is used instead of the conventional DNS lookup function. This way, in a perfectly meshed data center, all functions can be interoperable with one another. And if each service can find a way of declaring its own purpose, the service discovery problem could be solved — at least within the enterprise network boundaries.
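As one concrete illustration of such a registry, Istio lets operators add workloads to the mesh's internal service registry directly. The hostname and address below are hypothetical:

```yaml
# A hypothetical Istio ServiceEntry: it registers a workload in the
# mesh's own service registry, so sidecar proxies can route to it by
# name without depending on the conventional DNS lookup path.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: legacy-billing
spec:
  hosts:
    - billing.internal.example.com   # name the mesh will resolve
  location: MESH_EXTERNAL
  ports:
    - number: 443
      name: https
      protocol: TLS
  resolution: STATIC
  endpoints:
    - address: 10.0.12.7             # where the workload lives today
```

Once registered, the service is addressable by name everywhere in the mesh, and the mesh, not DNS, decides where that name points.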
Originally, the service mesh’s purpose was to help workloads inside a network make contact with one another. But communications networks throughout history rarely stay bottled up for long. Late last year, SDN tools provider Avi Networks began promoting the idea of leveraging its existing service platform, called Vantage, as a mechanism for extending service meshes such as Istio beyond customer premises and into multiple public cloud spaces. This architecture could enable cross-platform service discovery, which would arguably preclude the need for DNS — one of the defining services of the Internet — in many cases if not all.
If you recall the name “Avi Networks,” you’re a regular ZDNet reader. VMware acquired Avi last June, and announced the following August it had already integrated a good chunk of Avi’s engineering into its NSX network virtualization platform.
Like all technologies born from software-defined networking (SDN), a service mesh keeps its control plane separate from its data plane. This way, the controlling functions of the mesh are bound tightly together, giving applications their own address space and their own traffic flow. Think of a service mesh as the evolved form of a network overlay: a system where the routes are developed organically, and the policies for using those routes are determined and enforced along the way.
The Service Mesh Interface
VMware’s work follows up on innovations completed just weeks earlier at Microsoft. Last May, Microsoft’s Burns, along with colleague Gabe Monroy, introduced to the community a concept called the Service Mesh Interface (SMI): a way for different mesh platforms built around Kubernetes (there are quite a few) to connect with one another and share accessibility.
Monroy’s explanation at that time speaks to the tremendous implications for the evolution not only of data center networks, but network security:
“Today with the explosion of micro-services, containers, and orchestration systems like Kubernetes, engineering teams are faced with securing, managing, and monitoring an increasing number of network endpoints,” he wrote. “Service mesh technology provides a solution to this problem by making the network smarter, much smarter. Instead of teaching all your services to encrypt sessions, authorize clients, emit reasonable telemetry, and seamlessly shift traffic between application versions, service mesh technology pushes this logic into the network, controlled by a separate set of management APIs.”
It would turn the Internet inside out, at least insofar as its job as a provider of services is concerned.
Brendan Burns explained it this way: “The Service Mesh Interface is really more about interoperability and building an ecosystem than anything else. There’s two different personas in any ecosystem: tool vendors or utility vendors, and end users. In both cases, having an abstraction between those two makes sense. You see this all over computing: We have standards so that multiple vendors can sell the same thing, and they work with the user. A good example would be USB. Every single person who makes a Bluetooth headset or keyboard can build a USB connector, and know it will work for the user. For service mesh, that’s really important, because if you’re building a tool that, say, knows how to do canary releases, if you have to tightly bind it to a specific service mesh implementation, then you’re limiting your available customers to only those people using that service mesh. If you write a really great tool, but it only works with Linkerd, then everybody who uses Istio can’t use your tool, even if they love it.
“If I’m a user, especially in a new technology world, and buying something new, it’s scary if I’m wedding myself deeply to the implementation,” continued Burns.
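The canary-release tool Burns describes would, under SMI, speak a mesh-agnostic API such as TrafficSplit. Here is a minimal sketch, with hypothetical service names, that any SMI-conformant mesh (Istio with an adapter, Linkerd, or another) could enforce:

```yaml
# SMI TrafficSplit: a mesh-agnostic canary release. 90% of traffic
# goes to the stable backend, 10% to the new version; the resource
# itself is not tied to any one service mesh implementation.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: payments-canary
spec:
  service: payments          # root service that clients address
  backends:
    - service: payments-v1
      weight: 90
    - service: payments-v2
      weight: 10
```

A tool vendor writes against this one resource; the binding to Istio, Linkerd, or anything else happens underneath, which is the decoupling Burns is arguing for.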
So in the near term, SMI will enable service mesh implementations to be interchangeable, making services independent from their implementations. In the longer term, it could pave a route for a universal service mesh concept to bridge the gaps between these implementations, producing a kind of network of networks… which is, coincidentally, the image Vint Cerf had in mind when he first tried to explain his idea of Internet Protocol.
With this, a big chunk of the 20th century Web could find itself escorted out the back door.