

Explore real-world edge computing examples for network teams

Developed out of a necessity to handle increasing amounts of data, edge computing will disrupt traditional computing and benefit various modern use cases.

Edge computing examples abound beyond autonomous vehicles and mobile device connectivity. Real-world examples can prove the necessity of edge computing techniques for organizations.

While many network teams may not directly handle edge computing responsibilities, one question they should consider is how to better monitor data they can't move to a different location for analysis. This problem is tractable, and authors V.K. Cody Bumgardner and Caylin Hickey provide a real-world edge computing example in this Q&A. Their insights offer helpful direction for IT teams looking to use edge computing to analyze organizational data and gain valuable insights from it.

In their book Making Sense of Edge Computing, Bumgardner and Hickey provide fundamentals of edge computing for IT professionals, as well as key, real-world edge computing examples that already exist despite the concept's immaturity. The book is available now from Manning Publications Co. through the Manning Early Access Program.

Editor's note: The following interview was edited for length and clarity.

How has edge computing evolved to where it is today?


V.K. Cody Bumgardner: Fundamentally, our question is: Do you move data someplace to do something with it, or do you move functions to the data?

We've [experienced] an era of complete centralization. Go back as far as you want in computing, and everything was connected to a central host; whether in a mainframe or client server, data was centralized someplace. At that point, mostly everybody had the same level of access. You operated on the same platform, and the data you operated on was probably fixed.

Then, we started getting into distributed systems. If you look at the last five or six years, with how much data is generated on mobile devices, do we want to transmit all that data from somebody's cellphone, car or house to some cloud resource without knowing the data's value before?


How do we go about managing a bunch of devices that generate data all over the place? We need to harvest some of that data and, sometimes, [react] to it as fast as possible. How do we do that? A lot of the edge part of edge computing initially was from a device standpoint, but edges can be anywhere. In the end, it's computing not just from my phone to a remote hub, but potentially something in a house or city and an intermediate spot in between.

[Edge computing] started out as a necessity because of the way data generation and computing progressed. The techniques that make up edge computing will be required to extract the most value from data. It is a necessity -- not necessarily for a particular technology, but for a computational technique going forward as we generate more data.


Caylin Hickey: It's taking advantage of the different data that can exist on a device now. As the number of devices grows and the quality and variety of data grow, you can't send all of that back to a central server. So, you may push these tests to devices, act on them there, and push back only the state changes you can learn from, as a way to combat the growth of data.
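The idea of acting on data at the device and pushing back only state changes can be sketched in a few lines. This is an illustrative toy, not code from the book; the class name, threshold and readings are all invented for the example.

```python
# A device-side filter: forward a reading upstream only when it differs
# meaningfully from the last one sent, so raw data stays at the edge.

class EdgeReporter:
    """Reports state changes upstream instead of streaming every reading."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.last_sent = None
        self.sent = []          # stand-in for an upstream channel

    def observe(self, reading):
        if self.last_sent is None or abs(reading - self.last_sent) >= self.threshold:
            self.sent.append(reading)   # push the state change upstream
            self.last_sent = reading
        # otherwise: drop the reading at the edge, saving bandwidth

reporter = EdgeReporter(threshold=0.5)
for r in [20.0, 20.1, 20.2, 21.0, 21.1, 19.9]:
    reporter.observe(r)
print(reporter.sent)   # only the meaningful changes reach the server
```

Of six readings, only three cross the change threshold and travel upstream; the rest are discarded where they were generated.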

Bumgardner: For example, there's a processor on my phone I can use. At what point am I collecting more data than I can process on the phone? At what point is that data so valuable it needs to be pushed to some higher-level device?

Hickey: Edge computing is an abstraction between the device and the data you use off it. [It can] expose data streams in ways that the device itself, other processing functions, or other devices with more capable hardware can take advantage of. You can take that data and make use of it.


You can fork the same data into different utility streams by treating it abstractly from the underlying system. By making the application layer abstract, you can push functions down to where the data exists and expose it in stream format. That makes gathering data, experimenting and pushing out functions more streamlined for people who collect data from various geographic or otherwise partitioned areas.
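Forking one stream into several utility streams, as described above, amounts to fanning each item out to independent consumers that know nothing about the producing device. A minimal sketch, with invented names and data:

```python
# Fan one abstract data stream out to multiple consumer functions: each
# "utility stream" is just a callback that sees the same items.

def fork_stream(source, consumers):
    """Feed every item from `source` to each consumer callback."""
    for item in source:
        for consume in consumers:
            consume(item)

alerts, archive = [], []
fork_stream(
    source=[3, 12, 7, 15],
    consumers=[
        lambda x: alerts.append(x) if x > 10 else None,  # alerting stream
        archive.append,                                   # archival stream
    ],
)
print(alerts, archive)
```

The alerting consumer keeps only outliers while the archival consumer keeps everything; neither needs to know where the data came from.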

What are examples of edge computing skills for network engineers?

Bumgardner: A lot of times now, you have a cloud engineer. They're neither storage, networking, applications nor systems infrastructure. They deal with things on an abstract level.

You end up having people who can crosscut various technologies as the edge computing [people]. From the [applications] standpoint, it's high level and similar to cloud, but it's not somebody else's data center. This is more of an abstraction of underlying infrastructure. The frameworks themselves do a lot of the lifting to secure data [and] determine where things need to run and where they can be provisioned. Look at it as a next generation of cloud beyond the remote data center -- a ubiquitous type of computing.

One practical example of [edge computing] would be: How do you monitor something if you can't move the data? At StarLight in Chicago, we were on a project to do international research exchange monitoring. That's something right now that requires edge computing techniques.

StarLight is an exchange that hands off fiber and network connectivity to different places all over the world. There [are] 37 100 Gbps links that go into StarLight, but it doesn't have a supercomputer attached to 37 network segments. So, how do they do network analysis on that massive amount of data without moving it?

We, along with various collaborators, had hardware that could do hardware-accelerated capture. In front of that, we had switches we could interface with to pull certain segments off one link, merge them with a section of another link and push that toward the hardware accelerator. The subset of data that hardware generates is then provided as output. That's how you distill 37 100 Gbps links into something manageable.
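The select-merge-summarize workflow described here can be sketched abstractly: pull chosen segments off the links, merge them into one stream, and hand it to an analysis stage that emits a small summary instead of raw traffic. Everything below (link names, packet sizes, the summarize stand-in) is invented for illustration and is not the StarLight implementation.

```python
# Distill many captured links into a manageable summary: select segments,
# merge them, then reduce with a stand-in for hardware-accelerated analysis.

from collections import Counter
from itertools import chain

links = {                      # link id -> captured packet sizes (bytes)
    "link-1": [1500, 64, 1500],
    "link-2": [64, 64, 512],
    "link-3": [1500, 1500],
}

selected = ["link-1", "link-3"]            # switch pulls off these segments
merged = chain.from_iterable(links[l] for l in selected)

def summarize(packets):
    """Stand-in for accelerated analysis: distill traffic to size counts."""
    return Counter(packets)

summary = summarize(merged)
print(summary)                 # a small summary, not the raw traffic
```

Only the counts leave the capture point; the raw packet stream never has to be moved to a central analysis site.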

We did this in a generalized way. To make that platform available to other people, it has to be abstracted in a way that somebody could run a measurement at [different] locations and make it happen. The system figures out what has to happen for measurements to take place. We have to coordinate.

This area has emerged because there are problems right now where you have to use edge computing techniques. Those that can adopt this, much like people adopted the cloud, will have a competitive advantage over people who just use data centers.

I hope people don't have a preconceived notion of edge computing as just dealing with mobile devices because that's not the case. Six or seven years ago, nobody knew anything about it, but now, it's big. This is coming. And it will change computing when it's fully realized.

This was last published in May 2020
