
Service mesh: What it is and why it matters so much now


A service mesh is an emerging architecture for dynamically linking the server-side components — most notably, the microservices — that collectively form an application. These can be components that were intentionally composed as part of the same application, as well as components from different sources altogether that may benefit from sharing workloads with one another.

Real-world service meshes you can use now

Perhaps the oldest effort in this field — one which, through its development, revealed the need for a service mesh in the first place — is an open source project called Linkerd (pronounced “linker-dee”), now maintained by the Cloud Native Computing Foundation. Born as an offshoot of a Twitter project, Linkerd popularized the notion of devising a proxy for each service capable of communicating with similar proxies, over a purpose-built network. Its commercial steward, Buoyant, has recently merged a similar effort called Conduit into the project, to form Linkerd 2.0.
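To make the per-service proxy idea concrete, here is a minimal, hypothetical sketch in Python (the class and service names are invented, and this is not Linkerd's actual code): every service instance gets a local proxy, outbound calls go proxy to proxy, and the proxies quietly collect telemetry along the way.

    # A toy illustration of the sidecar-proxy pattern: every service gets a local
    # proxy, and proxies talk to each other instead of services calling each other
    # directly. Conceptual sketch only, not Linkerd's implementation.

    class SidecarProxy:
        registry = {}                      # shared lookup: service name -> proxy

        def __init__(self, service_name, handler):
            self.service_name = service_name
            self.handler = handler         # the local service's request handler
            self.requests_seen = 0
            SidecarProxy.registry[service_name] = self

        def call(self, target_service, payload):
            """Route an outbound request through the target's own proxy."""
            self.requests_seen += 1
            peer = SidecarProxy.registry[target_service]
            return peer.receive(payload)

        def receive(self, payload):
            """Inbound traffic also passes through the proxy, so it can be counted."""
            self.requests_seen += 1
            return self.handler(payload)

    # Two toy services, each fronted by its own proxy.
    inventory = SidecarProxy("inventory", lambda sku: {"sku": sku, "in_stock": True})
    checkout = SidecarProxy("checkout", lambda order: order)

    print(checkout.call("inventory", "sku-1234"))    # routed proxy to proxy
    print(inventory.requests_seen)                   # telemetry gathered along the way

In a production mesh, the proxies also take on retries, encryption, and health reporting, which is the kind of plumbing this pattern exists to standardize.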

Meanwhile at ride-sharing service Lyft, an engineer named Matt Klein devised a method for building a network that represented existing code — even when it was bound to a legacy “monolith” — as microservices with APIs. This became Envoy, which is now a core component of a joint project with IBM and Google that produced a framework called Istio.

Also: Open source SDN project could let network admins duplicate production environments TechRepublic

A portion of “Dancer in a Cafe” [1912] by Jean Metzinger, part of the Albright-Knox Art Gallery collection, in the public domain.

Historical precedent

When it’s doing its job the way it was intended, a service mesh enables potentially thousands of microservices sharing a distributed data center platform to communicate with one another, and to participate together as part of an application, even if they weren’t originally constructed as components of that application.

Its counterpart in the server/client and Web applications world is something you may be familiar with: Middleware. After the turn of the century, components of Web applications were being processed asynchronously (not in time with one another), so they often needed some method of inter-process communication, if only for coordination. The enterprise service bus (ESB) was one type of middleware that could conduct these conversations under the hood, making it possible for the first time for many classes of server-side applications to be integrated with one another.

A microservices application is structured very differently from a classic server/client model. Although its components utilize APIs at their endpoints, one of the hallmarks of its behavior is the ability for services to replicate themselves throughout the system as necessary — to scale out. Because the application structure is constantly changing, it becomes more difficult over time for an orchestrator like Kubernetes to pinpoint each service’s location on a map. It can orchestrate a complex containerized application, but as scale rises linearly, the effort required rises exponentially.

Suddenly, services really need a service mesh to serve as their communications hub, especially when a multitude of simultaneous instances (replicas) of a service are propagated throughout the system and a given piece of code only needs to contact one of them.
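As a rough illustration of the job being described, the hypothetical Python sketch below keeps a registry of live replicas and hands a caller exactly one of them in round-robin order; real meshes layer health checking, retries, and encryption on top of this, and the names here are invented.

    # A toy replica registry with round-robin selection: the caller asks for
    # "payments" and gets back exactly one instance, without knowing or caring
    # how many replicas currently exist. Conceptual sketch only.

    from itertools import cycle

    class ReplicaRegistry:
        def __init__(self):
            self._replicas = {}    # service name -> list of instance addresses
            self._cursors = {}     # service name -> round-robin iterator

        def register(self, service, address):
            self._replicas.setdefault(service, []).append(address)
            self._cursors[service] = cycle(self._replicas[service])

        def deregister(self, service, address):
            self._replicas[service].remove(address)
            self._cursors[service] = cycle(self._replicas[service])

        def resolve(self, service):
            """Return one live instance for the caller to contact."""
            return next(self._cursors[service])

    registry = ReplicaRegistry()
    for port in (9001, 9002, 9003):     # the service has scaled out to three replicas
        registry.register("payments", f"10.0.0.7:{port}")

    print([registry.resolve("payments") for _ in range(4)])
    # ['10.0.0.7:9001', '10.0.0.7:9002', '10.0.0.7:9003', '10.0.0.7:9001']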

Also: How the Linkerd service mesh can help businesses TechRepublic

From unknown entity to vital necessity

Most modern applications, with fewer and fewer exceptions, are hosted in a data center or on a cloud platform, and communicate with you via the Internet. For decades, some portion of the server-side logic — often large chunks — has been provided by reusable code, through components called libraries. The C programming language pioneered the linking of common libraries; more recently, operating systems such as Microsoft Windows provided dynamic link libraries (DLL) which are patched into applications at run time.

So you’ve seen shared services at work before, and they’re nothing new in themselves. Yet there is something relatively new called microservices, which, as we’ve explained here in some depth, are code components designed not only to be patched into multiple applications on demand, but also to scale out. This is how an application supports multiple users simultaneously without replicating itself in its entirety — or, even less efficiently, replicating the virtual server in which it may be installed, which is how load balancing has worked during the first era of virtualization.

A service mesh is an effort to keep microservices in touch with one another, as well as with the broader application, as all this scaling up and down is going on. It is the spare-no-effort, pull-out-all-the-stops approach to enabling a microservices architecture for a server-side application, with the aim of guaranteeing connectivity, availability, and low latency.

Also: Why it’s time to open source the service mesh TechRepublic

SDN for the very top layer

Think of a service mesh as software-defined networking (SDN) at the level of executable code. In an environment where all microservices are addressable by way of a network, a service mesh redefines the rules of the network. It takes the application’s control plane — its network of contact points, like its nerve center — and reroutes its connections through a kind of dynamic traffic management complex. This hub is made up of several components that monitor the nature of traffic in the network, and adapt the connections in the control plane to best suit it.

SDN separates a network’s control plane from its data plane, so that the control plane can be rebuilt as necessary. This brings components that need each other closer together, without disrupting the data plane on which the payload travels. In the case of network servers that address each other using Layers 3 and 4 of the OSI network model, SDN routes packets along simplified paths to increase efficiency and reduce latency.

Borrowing that same idea, a service mesh such as Istio produces a kind of network overlay for Layer 7 of OSI, decoupling the architecture of the service network from that of the infrastructure. This way, the underlying network can be changed with far fewer chances of impacting service operations and microservices connectivity.
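Conceptually, the Layer 7 rules such a mesh applies look something like the hypothetical sketch below: traffic is matched on application-level attributes (host, headers) and steered to a named subset of endpoints, with an optional traffic split, all independent of the underlying network. The rule contents are invented for illustration and are not Istio's actual API.

    # A toy Layer 7 router: rules match on application-level attributes and pick a
    # destination subset, optionally splitting traffic by weight (e.g., a canary).
    # Real meshes express this as declarative routing configuration.

    import random

    rules = [
        {"host": "reviews.example.internal",
         "header": ("end-user", "beta-tester"),
         "route": [("reviews-v2", 100)]},                     # beta users go to v2 only
        {"host": "reviews.example.internal",
         "header": None,
         "route": [("reviews-v1", 90), ("reviews-v2", 10)]},  # a 10% canary for everyone else
    ]

    def route(request):
        for rule in rules:
            if request["host"] != rule["host"]:
                continue
            if rule["header"] and request["headers"].get(rule["header"][0]) != rule["header"][1]:
                continue
            subsets, weights = zip(*rule["route"])
            return random.choices(subsets, weights=weights)[0]
        return None                                           # no rule matched

    print(route({"host": "reviews.example.internal",
                 "headers": {"end-user": "beta-tester"}}))    # always reviews-v2
    print(route({"host": "reviews.example.internal",
                 "headers": {}}))                             # reviews-v1 about 90% of the time

Because none of this refers to IP addresses or network topology, the infrastructure underneath can change without the rules having to.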

Also: What is SDN? How software-defined networking changed everything

Bahubali Shetti at VMworld 2018. [Photo by Scott Fulton]

“As soon as you install it, the beauty of Istio and all its components,” remarked Bahubali Shetti, director of public cloud solutions for VMware during a recent public demonstration, “is that it automatically loads up components around monitoring and logging for you. So you don’t have to load up Prometheus or Jaeger [respectively]; it comes with them already. And it gives you a couple of additional visibility tools.

“This is a service-to-service intercommunications mechanism,” Shetti continued. “You can have services on GKE, PKS [Pivotal Kubernetes Service] and VKE [VMware Kubernetes Engine], all interconnected and running. It helps manage all of that.”


Complementing, not overlapping, Kubernetes

Now, if you’re thinking, “Isn’t network management at the application layer the job of the orchestrator (Kubernetes)?” then think of it like this: Kubernetes doesn’t really want to manage the network. It has a very plain, unfettered view of the application space as multiple clusters for hosting pods, and would prefer things stay that way, whether it’s running on-premises, in a hybrid cloud, or on a “cloud-native” service platform such as Azure AKS or Pivotal PKS. When a service mesh is employed, it takes care of all the complexity of connections on the back end, ensuring that the orchestrator can concentrate on the application rather than its infrastructure.

Also: What Kubernetes really is, and how orchestration redefines the data center

Key benefits

The very sudden rise of the service mesh, and particularly of the Istio framework, is important for the following reasons:

  • It helps standardize the profile of microservices-based applications. The behavior of a highly distributed application can be very dependent on the network that supports it. When those behaviors differ drastically from one network to another, it can be a challenge for a configuration management system to maintain the same availability for an application everywhere it runs. A service mesh does all the folding, spindling, and mutilating — it makes a unique data center look plainer and more unencumbered to the orchestrator.
  • It opens up greater opportunities for monitoring, and then potentially improving, the behavior of distributed applications. A good service mesh is designed to place highly requested components in a location on the application control plane where they are most accessible — not unlike a very versatile “speed dial.” So it’s already looking for components that fail health checks or that utilize resources less efficiently. This data can be charted and shared, revealing behavioral traits that developers can take note of when they’re improving their builds with each new iteration.
  • It creates the potential for a new type of dynamic, policy-based security mechanism. As we explored last December in ZDNet Scale, microservices pose a unique challenge in that each one may have a very brief lifespan, making the assignment of an unimpeachable identity to each one almost pointless. A service mesh has an awareness of microservice instances that transcends identity — its job is to know what’s running and where. It can enforce policies on microservices based on their type and their behavior, without resorting to the rigamarole of assigning them unique identities (see the sketch after this list).
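As a rough sketch of that last point, the hypothetical policy check below keys off a workload's labels (its type) and its observed behavior rather than any per-instance identity; the label names and thresholds are invented for illustration.

    # A toy policy engine: rules match a workload's labels and observed behavior,
    # never a per-instance identity, so they keep applying as replicas come and go.
    # Label names and thresholds are illustrative.

    policies = [
        {"match": {"app": "payments"},
         "deny_if": lambda w: w["egress_hosts"] - {"bank-gateway"}},   # only one allowed destination
        {"match": {"tier": "frontend"},
         "deny_if": lambda w: w["error_rate"] > 0.05},                 # shed chronically failing frontends
    ]

    def allowed(workload):
        for policy in policies:
            # A policy applies when every label in its match clause is on the workload.
            if all(workload["labels"].get(k) == v for k, v in policy["match"].items()):
                if policy["deny_if"](workload):
                    return False
        return True

    replica = {
        "labels": {"app": "payments", "tier": "backend"},
        "egress_hosts": {"bank-gateway", "telemetry.unknown.example"},   # talking to a stranger
        "error_rate": 0.01,
    }
    print(allowed(replica))   # False: its type is not permitted to contact that extra host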

Previous and related coverage:

Microservices and containers in service meshes mean less chaos, more agility

For enterprises, it’s full speed ahead with microservices. This may speed up the development of chaos-proof service meshes.

To be a microservice: How smaller parts of bigger applications could remake IT

If your organization could deploy its applications in the cloud the way Netflix does, could it reap the same kinds of benefits that Netflix does? Perhaps, but its business model and maybe even its philosophy might have to be completely reformed — not unlike jumping the chasm from movies-by-mail to streaming content.

Micro-fortresses everywhere: The cloud security model and the software-defined perimeter

A months-old security firm has become the braintrust of engineers working to build the Software-Defined Perimeter — a mechanism for enforcing firewall and access rules on a per-user level. How would SDP remake the ancient plan of the software fortress?





The Feature That You Likely Didn’t Know Your iPhone Camera Had


If you’ve ever wanted to take photos while recording video without having to resort to screen captures of video stills, Apple has something for that in almost all of the new phones it’s released since September 2019. QuickTake is a built-in and easy-to-use feature that lets you record video and snap pictures using the same device, with no need to switch between camera modes or download any additional camera apps.

There’s a small catch, however. While the process is very simple once you know how to turn it on, it may affect the overall quality of your photos. In essence, if your photo settings are adjusted for higher-quality images, those settings won’t carry over to video. And since QuickTake captures its stills through the video pipeline rather than the regular photo one, there’s not much you can do to change that. Newer iPhone models do support up to 4K video, which could yield better results.

Regardless, whatever your reasons for wanting to take photos while simultaneously recording video with your iPhone may be, it’s a very simple process.

How to use QuickTake

Making use of your iPhone’s QuickTake feature doesn’t require any special setup or settings changes — it’s already part of the default Camera app so long as you’re using iOS 13 or newer.

  1. Open the Camera app and leave it on the default Photo mode. You should see “Photo” highlighted in yellow, just above the Shutter Button.
  2. When you’re ready to record, press and hold the Shutter Button to begin recording video. Recording will stop if you release the Shutter Button.
  3. Slide your finger from the Shutter Button over to the Lock icon in the bottom-right corner of the screen (where the button for swapping between front- and rear-facing cameras normally is).
  4. The Lock icon will change to a small Shutter Button, and the video recording button will change to the regular recording icon. At this point, your iPhone will continue to record video if you remove your finger from the screen.
  5. While your video is recording, tap the small Shutter Button in the bottom-right corner of the screen to take photos.
  6. Tap the recording button (it will look like a Stop button while recording) to stop taking video.

The QuickTake video you’ve recorded and all of the photos you snapped will appear in your Photos app. Because videos are added to the Photos app once recording stops (rather than when it starts), the new video will appear after your QuickTake photos.


The Science Behind The Deadly Lake


A buildup of carbon dioxide gas is not uncommon for crater lakes, with many of them occasionally releasing bubbles of it over time. Volcanic activity taking place below the Earth’s surface (and below the lake itself) causes gases to seep up through the lakebed and into the water. This generally isn’t a concern, since deeper, colder water is able to absorb substantial amounts of carbon dioxide; but if the concentration gets too high, it can form bubbles that float up to and burst at the surface of the water.

This in itself is common, and the volume of carbon dioxide usually released in this manner will dissipate into the air quickly. However, it’s theorized that Lake Nyos had been amassing an uncharacteristically large amount of gas due to a combination of factors like location, local climate, overall depth, and water pressure. Once that buildup had been disturbed, it all came rocketing out.

Whether it was due to a rock slide, strong winds, or an unexpected temperature change throwing off the delicate balance is still unknown. But whatever the catalyst was, it caused the lower layer of deep, carbon dioxide-infused water to start to rise. As it rose, it warmed, reducing its ability to hold the gas. The resulting runaway cycle of rising water and escaping gas created the kind of explosion you might see after opening a carbonated beverage that has been shaken vigorously.
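To put rough numbers on why depth matters, here is a back-of-the-envelope sketch combining hydrostatic pressure with Henry's law; the constants are textbook approximations, not measurements taken at Lake Nyos.

    # Back-of-the-envelope: how much more CO2 water can hold deep in a lake than at
    # the surface. Henry's law says dissolved gas scales with its pressure, and the
    # water column adds roughly one atmosphere for every 10 meters of depth.
    # Approximate textbook constants, not Lake Nyos measurements.

    RHO = 1000.0        # density of water, kg/m^3
    G = 9.81            # gravitational acceleration, m/s^2
    ATM = 101_325.0     # pascals per atmosphere
    K_H = 0.034         # Henry's constant for CO2 in water, mol/(L*atm), approximate

    def max_dissolved_co2(depth_m):
        pressure_atm = 1.0 + (RHO * G * depth_m) / ATM   # air above plus water column
        return K_H * pressure_atm                        # moles of CO2 per litre, at most

    surface, bottom = max_dissolved_co2(0), max_dissolved_co2(200)   # Nyos is roughly 200 m deep
    print(f"surface: {surface:.2f} mol/L, 200 m down: {bottom:.2f} mol/L "
          f"(about {bottom / surface:.0f}x more)")

Roughly twenty times more gas can stay dissolved near the bottom, and that reservoir is what lets go all at once when the layers are disturbed.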


The Super Nintendo’s Secret Weapon


The Super Nintendo featured seven different video rendering modes, each offering a different level of display detail, shown in one to four background layers. Most of the Super Nintendo’s games utilized Mode 1, which could display 16-color sprites and backgrounds on two layers plus a 4-color background on a third layer. That extra layering was the key to the parallax scrolling effect you’d see in games like “Super Mario World,” where background elements would scroll at different rates from foreground elements.

Mode 7, however, was the only one of these display modes that permitted advanced visual effects. In a nutshell, Mode 7 allows the Super Nintendo to take a 2D image and apply 3D-style rendering effects to it, such as rotating, scaling, curving, and more. By switching to Mode 7, games could transform one of their background layers into an independently moving image, which could be used for gameplay modifications and simple spectacle. Plus, with a bit of creative warping, a 2D image could be changed into a pseudo-3D view, having 2D sprites move around in a flat 3D space. It’s kind of like rolling a ball on a treadmill.
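For the curious, the hypothetical Python sketch below shows the core of a Mode 7-style effect: each screen pixel is mapped back into the flat background image through a rotate-and-scale transform, which is how a single 2D layer can be spun, zoomed, and tilted. The values are illustrative, not the console's actual registers.

    # The heart of a Mode 7-style effect: for every screen pixel, run an inverse
    # affine transform (rotate and scale about a centre) to find which texel of the
    # flat 2D background layer to sample. Illustrative sketch, not SNES register code.

    import math

    def mode7_sample(texture, width, height, screen_x, screen_y,
                     angle=0.3, scale=2.0, cx=0.0, cy=0.0):
        cos_a, sin_a = math.cos(angle), math.sin(angle)
        # Rotate and scale the screen coordinate back into texture space.
        u = ((screen_x - cx) * cos_a + (screen_y - cy) * sin_a) * scale + cx
        v = (-(screen_x - cx) * sin_a + (screen_y - cy) * cos_a) * scale + cy
        return texture[math.floor(v) % height][math.floor(u) % width]   # wrap like a tiled map

    # A tiny 4x4 checkerboard stands in for the background layer.
    tex = [[(x + y) % 2 for x in range(4)] for y in range(4)]
    frame = [[mode7_sample(tex, 4, 4, x, y) for x in range(8)] for y in range(8)]
    for row in frame:
        print("".join("#" if p else "." for p in row))

Changing the angle and scale on every scanline is what turned a flat map into a track receding toward the horizon in the console's racing games.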
