Down an invisible road: Service mesh’s unpaved path to practicality

“Applications are going through a reimagining process,” declared Vijoy Pandey, Cisco’s vice president and CTO of cloud computing. “You’re taking an application, and breaking it down into smaller pieces: microservices, containers. We’re going thinner and thinner.”

Breaking down applications on a server into discrete, compartmentalized, containerized functions offers a wealth of benefits. It lets data centers distribute microservices to the processors and systems best suited for them. It enables massive services to evolve continually and incrementally. It improves security by enforcing policies that govern how these functions communicate, and by separating databases and data streams into virtual appliances that are guarded by gateways.

There is a tradeoff, and it’s at least as huge: It decomposes systems into colossal Petri dishes of seemingly indistinguishable micro-organisms.

Truth in visibility


Our “map” for this edition of Scale is an actual network connectivity plot of real microservices deployed at UK-based e-bank Monzo.

“We’ve fully embraced the idea of microservices,” wrote Calvin French-Owen, co-founder of data and analytics infrastructure provider Segment, in 2015. The reason, French-Owen explained, was visibility: “When we’re getting paged at 3 a.m. on a Tuesday, it’s a million times easier to see that a given worker is backing up compared to adding tracing through every single function call of a monolithic app.”

“In early 2017 we reached a tipping point with a core piece of Segment’s product,” wrote Segment software engineer Alexandra Noonan three years later. The reason, Noonan explained, was visibility. “It seemed as if we were falling from the microservices tree, hitting every branch on the way down.

“Instead of enabling us to move faster,” she continued, “the small team found themselves mired in exploding complexity. The essential benefits of this architecture became burdens. As our velocity plummeted, our defect rate exploded. Eventually, the team found themselves unable to make headway, with three full-time engineers spending most of their time just keeping the system alive. Something had to change.”

It was a whiplash effect that software engineers immediately noticed, including at Microsoft.


“Some companies certainly give up,” remarked Microsoft product manager Brian Redmond, during a session at KubeCon 2019 in San Diego. “If you go down a path of microservices, you’re probably going to need better procedures, and things to do automation around them.”

In addition to consolidated practices around testing and deployment, Redmond suggests a service mesh. Its principal benefit, as he described it, citing a lengthy blog post from William Morgan, co-creator of the service mesh called Linkerd (pronounced “linker · dee”), is that it decouples the functions that make connectivity possible in a microservices application from the actual jobs those microservices are depended upon to perform. The term “mesh” refers to a network topology that has no hierarchical or symmetrical structure whatsoever, where any node is theoretically capable of connecting with any other node.

Understanding why this bizarre-sounding tool has suddenly become so vitally necessary requires us to dial ourselves back in time over a half-century.

The subroutine resurfaces

In the 1960s, a computer program was an enumerated sequence of instructions. Reusable code is a necessity in every application, and the first reusable code in such a program was an instruction cluster called by the number of its first line, like GOSUB 900, ending with the statement RETURN, which handed control back to the caller. It was one way of making algorithms — functions that seek their solutions using repetition — feasible. Another was a repetitive loop clause, marked at the top with FOR and at the bottom with NEXT, but even here, subroutines were often called by the code in between.
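For readers who never wrote a line of BASIC, here is a minimal sketch, in Python rather than a 1960s dialect, of the same idea: a named block of reusable code that hands control back to its caller. The function name and values are invented for illustration.

```python
# Illustrative only: the pattern GOSUB 900 / RETURN expressed as a modern
# function. Names and numbers here are hypothetical.

def subtotal(prices):                 # the "subroutine": reusable, named, returns to its caller
    total = 0.0
    for price in prices:              # the FOR ... NEXT loop of that era
        total += price
    return total                      # RETURN hands control back to whoever called us

# The calling code reuses the routine instead of repeating its logic.
if __name__ == "__main__":
    print(subtotal([1.25, 2.50]))     # 3.75
    print(subtotal([10.00, 0.99]))    # 10.99
```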


As programming languages evolved, large blocks of reusable code were stacked into libraries. At first, these libraries were compiled directly into the object code, making executable files bigger and bulkier. Later, in the first “distributed programming” systems (the phrase means something much different today), separate library files could be dynamically linked with each other. This made multitasking feasible in Microsoft Windows. The most common method of calling between separate components came to be known as the remote procedure call (RPC).

But even those earliest models relied on binding platforms, such as a common operating system, or some shared stretch of infrastructure. In today’s networked environments, distributed functions no longer share anything but a protocol to contact one another — a common interface. So the methodologies for threading distributed functions and microservices together, to borrow a phrase that may have special meaning to regular readers of ZDNet Scale, have been all over the map.

“As we go thinner and thinner, the connectivity between these pieces of a singular application, is getting worse and worse,” remarked Cisco’s Pandey, speaking with ZDNet. “Things that were library calls within a monolithic app are now RPC calls across the globe, potentially. And the developer doesn’t give a damn.”

It’s not that the software developer does not care about the quality of service — that’s not what Pandey is implying here. He’s explaining that the glue that binds functions together into a singular application has been replaced over the years in evolutionary surges: first the collective binding of the source code, then multiple sets of dynamically linked functions, then Web services contacted using REST API calls, and now something else entirely — the network overlay. In this extremely distributed model, where every component is a virtual appliance with a network address, the gaps between networked functions may be immeasurably small, or they may be the size of the planet.
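To make that evolutionary surge concrete, here is a hedged sketch of the same lookup performed two ways: once as an in-process library call, once as a remote call to a microservice. The pricing service, its URL, and its response format are all assumptions made up for this example.

```python
# Hypothetical illustration of the shift Pandey describes: the same lookup,
# first as an in-process library call, then as a remote call to a service
# whose address ("https://pricing.example.internal") is entirely invented.

import requests  # third-party HTTP client; pip install requests

def price_of_locally(sku: str) -> float:
    """Monolith era: the pricing logic is linked into the same executable."""
    catalog = {"A100": 19.99, "B205": 4.50}
    return catalog[sku]

def price_of_remotely(sku: str) -> float:
    """Microservices era: the same logic lives behind a network address."""
    resp = requests.get(f"https://pricing.example.internal/price/{sku}", timeout=2.0)
    resp.raise_for_status()           # the network can now fail in ways a library call never did
    return resp.json()["price"]
```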

None of the activity that brings these functions together as a cohesive unit is the concern of any of the software developers responsible for these unlinked functions. In most cases, they invoke functions with little more concern than they had when they wrote GOSUB 900.


“For the developer. . . it’s just doing a single functional call and returning a value, or returning something. So it’s really thin,” Pandey went on, “and it’s trying to connect to another function, which could be anywhere — it could be on-prem, it could be in a different [cloud] region, it could be in a GCP [Google Cloud Platform] BigQuery — it could be anything! And rightfully so, the developer says, ‘Why should I care? I’m trying to build this application as quickly as possible, because your business, my business depends on this. Why should I care about the infrastructure?’”

Service rediscovery

Like “Web services” in 1999, “service mesh” has yet to be generally understood, even by people who may be using it right this moment. Enterprise networks are software-defined. They use a second set of addresses (an overlay) that are mapped not to appliances, switches, or routers, but to the gateways of applications and services. Right now, most of these overlays require Internet-style lookups of the Domain Name System (DNS) for any Web-routed request to locate the correct destination.
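In practice, that lookup is the familiar one below: a hypothetical sketch, using Python’s standard socket module, of resolving whatever addresses DNS happens to advertise for a name at the moment of the request.

```python
# A minimal look at what "Internet-style lookup" means in practice: resolving
# a name through DNS before any request can be routed. The hostname below is
# a stand-in; a real deployment would use its own overlay names.

import socket

def discover(hostname: str, port: int = 443) -> list[str]:
    """Return every address DNS currently advertises for a service name."""
    results = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return sorted({addr[4][0] for addr in results})

print(discover("example.com"))   # whatever addresses DNS answers with today
```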

A service mesh would enable addresses to be mapped to functions more directly, thus simplifying the addressing scheme. Service discovery (finding the right location for a requested function) could be accomplished by any number of more logical, more programmable methods, which may involve DNS or which could replace it entirely. These methods may be governed by policy, making them far more secure. What’s more, in a network using microservices where functions are scattered about and redistributed according to the unpredictable dicta of the orchestrator (in this case, very likely Kubernetes), service mesh may be necessary to resolve requests to mercurially changing addresses.
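What such a “more programmable method” might look like, reduced to a toy: a registry that the orchestrator updates as it reschedules workloads, and that callers query by service name instead of by fixed address. Nothing here is drawn from Linkerd, Istio, or Kubernetes internals; every name is invented.

```python
# A toy service registry: discovery as "ask a lookup table the orchestrator
# keeps current" rather than a fixed DNS record.

import random

class Registry:
    def __init__(self):
        self._endpoints: dict[str, list[str]] = {}

    def register(self, service: str, address: str) -> None:
        # The orchestrator calls this every time it (re)schedules a workload.
        self._endpoints.setdefault(service, []).append(address)

    def resolve(self, service: str) -> str:
        # Callers ask for a service by name and get whatever address is live now.
        return random.choice(self._endpoints[service])

registry = Registry()
registry.register("payments", "10.4.0.17:8080")
registry.register("payments", "10.4.3.22:8080")   # a second, newly scheduled replica
print(registry.resolve("payments"))                # either address; the caller never hard-codes one
```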

At one level, it’s a networking technique. Yet service mesh may become the switchboard for the next version of the cloud. If Kubernetes continues blazing trails for distributed computing systems and reforming the way cloud infrastructure is constructed, then service mesh will follow in its wake.

What has yet to be determined is the following: Which service mesh will it be? Who will ultimately be responsible for it? And how soon will the wide adoption of this technology in data center infrastructure change the way functions and services are utilized in every network, everywhere?


“We learned through the evolution of OpenStack and Kubernetes that there is a good mechanism of handling the design of a distributed system,” explained Brian “Redbeard” Harrington, Red Hat’s very accurately nicknamed principal product manager for his company’s Service Mesh platform, based around Istio. “Which is to say, there is a control plane and there is a data plane.”

Software-defined networking (SDN) brought forth the architectural principle of separating traffic related to the user application from traffic involving control and management of the system, into the two planes Redbeard described. Originally, this was to protect the control functions of the network from being disrupted by erroneous or even malicious functionality in the userspace. But in a distributed network where applications are containerized (using the highly portable virtualization system first made popular by Docker), and whose connectivity is enhanced with a service mesh, this plane separation makes feasible a new concept: a containerized function that does not need a map of the network it inhabits, or even to query something else for information from such a map, to behave as part of the network.
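A deliberately simplified sketch of that plane separation: the control plane owns the routing table, and the data-plane proxy forwards traffic by consulting it, without ever holding a map of the network itself. Class names and addresses are illustrative, not taken from any real mesh.

```python
# Toy model of control plane vs. data plane. Not drawn from Istio or Linkerd.

class ControlPlane:
    """Knows the network: which service name maps to which backend."""
    def __init__(self):
        self.routes: dict[str, str] = {}

    def set_route(self, service: str, backend: str) -> None:
        self.routes[service] = backend


class DataPlaneProxy:
    """Knows nothing about the topology; it only consults pushed config."""
    def __init__(self, control_plane: ControlPlane):
        self._cp = control_plane

    def forward(self, service: str, payload: str) -> str:
        backend = self._cp.routes[service]          # config owned by the control plane
        return f"sent {payload!r} to {backend}"     # stand-in for an actual network send


cp = ControlPlane()
cp.set_route("orders", "10.8.1.4:9090")
proxy = DataPlaneProxy(cp)
print(proxy.forward("orders", "order #42"))
```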

“Historically with service mesh, part of the purpose was to glue the intercommunications between services together,” continued Red Hat’s Redbeard, “handling the discovery and rerouting of traffic. With Istio, we are able to achieve that same goal.”


Service mesh architectural diagram for Linkerd.  [Courtesy servicemesh.io]

Each instance of a service in this distributed network may be equipped with a kind of proxy called a sidecar, which serves as the connectivity agent on the data plane, handling incoming and outgoing requests. For Linkerd, this sidecar is given the undramatic name linkerd-proxy, and is depicted residing in the data plane in the diagram above; for Istio, whose architecture actually differs very little, the sidecar proxy is provided by a CNCF project named Envoy. In this diagram, each blue rectangle represents a Kubernetes pod; each grey box within it is a container. Within a pod, its containers all share resources, so the application in the data plane has access to linkerd-proxy, but does not need to know or care about anything in the controller pod — to borrow Vijoy Pandey’s phrase, the developer doesn’t give a damn. All the network and service configuration functions, along with monitoring and visibility, are handled by containers stationed on the control plane, while the sidecar travels with the application through the system.
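Roughly what a sidecar does on the application’s behalf, sketched in Python under assumed names and ports: the app makes a plain local call, and retries, timeouts, and metrics live in the proxy rather than in application code.

```python
# A rough sketch of the role a sidecar such as linkerd-proxy or Envoy plays.
# The local port, upstream address, and retry policy below are invented.

import requests

class SidecarProxy:
    def __init__(self, upstream: str, retries: int = 3):
        self.upstream = upstream      # resolved for the app by the mesh, not by the app
        self.retries = retries
        self.requests_sent = 0        # the kind of counter Prometheus would later scrape

    def call(self, path: str) -> str:
        last_error = None
        for _ in range(self.retries):                 # retry policy lives in the sidecar
            try:
                self.requests_sent += 1
                resp = requests.get(self.upstream + path, timeout=1.0)
                resp.raise_for_status()
                return resp.text
            except requests.RequestException as err:  # transient failure: try again
                last_error = err
        raise last_error

# The application only ever sees a plain function call; the developer
# "doesn't give a damn" where the other side actually runs.
proxy = SidecarProxy("http://localhost:4140")         # hypothetical local sidecar port
```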

Like most everything in the “container ecosystem,” a complete service mesh is an assembly of multiple components, some of which even hail from other open-source projects.


“For observability with Istio, and specifically in the context of OpenShift Service Mesh,” continued Redbeard, “we mean two things: the idea of tracing our applications, generally through the use of Jaeger, but also collecting and scraping of metrics out of the actual proxy components with Prometheus. . . The distributed tracing piece actually refers to making a request across a call chain, graphing the latency between all of those, and even capturing header information from the requests in a sampled manner.”

Sampling has been the key to monitoring highly complex systems, especially when events compound one another to render an effective logging system impossible. Rather than recording every transaction in a ledger whose size would soon span galaxies, a sampling system gathers enough information about the behavior of each component in the network for a monitoring system to ascertain whether any part of that behavior is worth monitoring more closely.
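A minimal sketch of that sampling idea, with an arbitrary one-percent rate: only a fraction of requests pay the cost of emitting a detailed trace, which a real deployment would ship to a collector such as Jaeger.

```python
# Head-based sampling in miniature. The rate and the stubbed work are
# placeholders; nothing here is taken from Jaeger or Prometheus themselves.

import random
import time

SAMPLE_RATE = 0.01   # trace roughly one request in a hundred

def handle_request(request_id: str) -> None:
    sampled = random.random() < SAMPLE_RATE
    start = time.monotonic()

    do_work(request_id)              # the actual service logic (stubbed below)

    if sampled:
        # Only sampled requests pay the cost of emitting a detailed span;
        # a real system would send this to a tracing collector.
        latency_ms = (time.monotonic() - start) * 1000
        print(f"trace request={request_id} latency={latency_ms:.2f}ms")

def do_work(request_id: str) -> None:
    time.sleep(0.001)                # stand-in for downstream calls
```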


A service entity graph in Kiali, part of OpenShift Service Mesh. [Courtesy Red Hat]

Red Hat has led the development of OpenShift’s own component for this purpose, called Kiali [pictured above]. This is perhaps the functionality that Segment could have used three years earlier. “We have glued a bunch of these components together,” Redbeard continued, “that we think any operator or administrator is going to need, to have a well-running service mesh, and coordinated them through the [Red Hat] Operator Framework.”

Cisco offers an alternate, or perhaps additional, approach for what it calls Network Service Mesh (NSM, not to be confused with Cisco Network Services Manager, Cisco Network Segmentation Manager, or Cisco Network Security Monitoring [PDF]). As Vijoy Pandey reminded us, network services in the classic OSI model have been compartmentalized into seven layers, with user application traffic inhabiting the highest, Layer 7. The viability of service meshes such as Linkerd and Istio, he told ZDNet, presumes that the underlying network layers — particularly Layer 2 (data link), Layer 3 (network), and Layer 4 (transport) — are already fully resolved. He made the case that a service mesh can resolve connecting a single Kubernetes pod to the underlying network, but not so much one pod to another pod. Cisco’s NSM effort, he said, has been joined by Red Hat and VMware.


“NSM basically solves that Layer 2, Layer 3, Layer 4 connectivity problem,” said Pandey, “across a multi-cloud domain, in a manner that is friendly to the developer.


Cisco’s desired end-state for a Network Service Mesh environment for Kubernetes. [Courtesy Cisco]

“Let’s break it down into two solid verticals,” he continued. “One vertical has everything to do with containers and Kubernetes, but that purest type of environment is only in our dreams. The other vertical is where there are containers and Kubernetes, along with bare metal, mainframes, VMs, and everything else under the sun — the brownfield mess that all of our customers deal with. In the Kubernetes/containers/serverless environment, we are assuring that NSM is going to be open-source ’til the cows come home. We are making sure that connectivity problem, for a Kubernetes container deployment, is always going to be open source and free. In that perspective, NSM is going to be a first-class citizen with any Kubernetes deployment.”

At the time of our interview, he said, NSM was within two production deployments of meeting the minimum criteria (there should be three) that Kubernetes maintainer CNCF mandates to qualify for “incubation” — to be fully maintained by CNCF, like Kubernetes itself was in its first years. By next year, Cisco hopes to have five independent users for NSM. At that point, he hopes, NSM can become a part of every CNCF-certified Kubernetes deployment. In other words, in this hopeful and bright future, if you’re deploying Kubernetes, you’re deploying NSM.

What this would mean, if everything turns out as Cisco has planned, is that the class of Layer 2 and Layer 3 infrastructure that telecommunications providers say they require to make virtualized network functions (VNF) run in secure isolation will be achievable using containerized network functions (CNF) instead. NSM would weave a dedicated network of addresses supporting the orchestrator and other systems that maintain workloads. Then a Layer 7 service mesh, such as Istio or Linkerd, would still weave a network overlay for connecting those workloads, but this time with the assurance of underlying connectivity — even if the nodes in which those workloads are maintained reside on different cloud platforms.


It would be an altogether different Internet than the one we rely upon today, one which may be less dependent upon DNS for resolving fixed addresses — some have speculated it could be independent of DNS. The definition of domains could change, and thus domain names as we have come to know them. It could make applications and services available to users on a per-use basis, perhaps independently of the cloud or telco that hosts them.

But this is not a prediction. We’ve been at this crossroads before, only to make U-turns, or to get stuck spinning in traffic circles. Reimagining the way things should be in our data centers has already become an industry unto itself. Yet this time, there is one underlying certainty: There are many organizations that, like Segment, have no more time to waste. Reality demands one permanent solution.


Street Legal 2017 BAC Mono lands at auction

Some sports cars are made for comfort and cruising long distances, and some are built more for days spent at the track. The BAC Mono certainly falls into the second category. It’s a track-day god and one of the highest-performance single-seat cars you could ever own. The coolest part of this particular BAC Mono is that it is registered and legal to drive on the streets in the US.

The car is finished in white with a red wrap over the top and a black interior. The seat appears to be done in leather and the car has an open cockpit complete with a digital steering wheel that looks straight out of an F1 car. The car up for auction has only 3,000 miles on the odometer.

The Carfax report shows no accidents, and the car features 17-inch wheels and AP Racing brakes. The Williams harness inside the cockpit is FIA-approved. The car was constructed in England and then assembled in the US. The seller, a dealership, says that the only modification to the car is a red wrap over the white paint. The car was originally orange.

It does have aftermarket stickers on the sides and a GoPro mount above the seat for recording track-day festivities. Power for all BAC Mono cars comes from a 2.3-liter four-cylinder that makes 280 horsepower and 207 pound-feet of torque. The engine is from Ford and was modified by Cosworth. Power goes to the rear wheels via a six-speed sequential transmission shifted with paddles behind the steering wheel.

The AP Racing calipers squeeze carbon-ceramic discs. The car appears to be ready for the track or the street, and it’s sure to be one of the fastest rides at any track day. The auction has three days to go at Cars & Bids, and bidding stands at $85,500 right now.

2022 Acura MDX Type S revealed as Acura heads to Pikes Peak

The Pikes Peak International Hill Climb is coming up soon, as automakers and racers from around the world converge on Colorado Springs to race up the mountain. Among the many automakers participating in the 2021 event is Acura. The automaker has revealed the 2022 MDX Type S as the tow vehicle for the Acura race team.

Acura has confirmed that it’s entering four different race cars in the 99th running of the Pikes Peak International Hill Climb, all of which will be driven by Acura R&D engineers. All four of the vehicles are production-based and will race up the mountain on June 27, taking on the challenging 12.42-mile, 156-turn mountain course.

The hill climb is one of the most dangerous races automakers can participate in; cars have been destroyed and participants have been killed during the event over the years. It’s not drivers in cars who face the most danger during the race, though. Motorcyclists are much more likely to die during the event. The 2022 MDX Type S is being used to tow one of the TLX Type S race cars from the team shop in Bremen, Ohio, to Pikes Peak, Colorado. The SUV will also be used to support the Acura team during the competition.

The MDX Type S is a high-performance version of the company’s SUV featuring a turbocharged engine. The vehicle is the first Acura SUV to wear the Type S badge and is the fastest and most powerful SUV Acura has ever built. It will go on sale later this year. For now, Acura is mum on any performance specifications for the MDX Type S.

Acura has historically done well at the hill climb and currently holds the hybrid class record set last year by driver James Robinson in the “Time Attack” Acura NSX at 10:48.094. Acura also holds the open division record at 9:24.433, set by Peter Cunningham driving the Acura TLX GT racing car.

Toyota foils leakers by offering an official image of the 2022 Tundra

Earlier this week, leaked images were going around claiming to show the next generation 2022 Toyota Tundra. Automakers never like leaks, and often they simply deny that the images are of their vehicle or ignore the leak altogether. However, Toyota used a different tactic when images of its 2022 Tundra leaked, choosing to release an official image of the truck.

2022 Tundra TRD Pro

With Toyota’s move, talk of the 2022 Tundra has shifted from the leaked images to Toyota’s official one. However, it’s worth noting that Toyota offered only a single image, of the TRD Pro version of the Tundra, and offered no details on the truck. Last month, SlashGear posted a review of the 2021 Tundra TRD Pro, highlighting that it was the last hurrah for the current generation of the truck.

However, the image does offer a nice opportunity to compare the exterior of the 2021 model to the 2022 model. What we see are significant changes to the exterior of the truck. While the overall profile remains virtually the same, the 2022 has a completely new front end that closely resembles the style used on the Tacoma and the 4Runner SUV. That means a large black grille with hexagonal openings and bulky Toyota branding on the grille.

It’s unclear if non-TRD Pro versions will have the same front-end treatment. Another interesting tidbit easily seen in the official Toyota photograph is that the truck is equipped with an LED light bar underneath the Toyota logo in the grille, along with what appear to be LEDs underneath the grille on the black lower portion of the bumper. The headlights are much smaller and appear to be LED units.

2021 Tundra TRD Pro

The truck has modest black fender extensions and rolls on very attractive black wheels. We also note that the truck has integrated side steps to make it easier to get in and out. Unfortunately, there’s no indication of what changes might have been made to the interior or under the hood of the truck at this time.
