Tales & Tips from the Trenches: Demystifying Edge Computing

With the increasing number of Internet of Things (IoT) devices getting connected and the ongoing boom in Artificial Intelligence (AI), Machine Learning (ML), Human Language Technologies (HLT), and other similar technologies comes a demanding need for robust and secure data management in terms of data processing, data handling, data privacy, and data security. This, in turn, emphasizes the significance of Edge Computing both now and in the future.

What is Edge Computing?

To start off, the word ‘edge’ represents geographic distribution. Edge Computing is the practice of collecting, storing, processing, and analyzing data at or near its source instead of in a centralized data-processing location far from where the data originates. In other words, it brings applications and data storage closer to where the data is collected and utilized. The growing volume of data from IoT and the current limitations of the networking layer (and of computation) naturally lead to a decentralized system like Edge Computing. As a distributed computing paradigm, Edge Computing brings computation and data storage closer to the data sources to improve latency and reduce bandwidth usage.

Edge Computing Architecture

Figure 1 shows a typical Edge Computing architecture. In the Edge Computing concept, the servers sit much closer to the users than in a cloud environment. This provides better service, and latency is lower than in the cloud. Performance improves even though each node has less computing power than a centralized data center.


The layers are defined as follows:

Cloud: In an Edge Computing application, the cloud can serve as long-term storage, as a coordinator of the layers immediately below it, or as a powerful resource for irregular tasks. This can be a public or private cloud.

Edge Nodes: An edge node is a generic term for any edge device, edge server, or edge gateway on which Edge Computing can be performed. Edge nodes range from base stations, routers, and switches up to small-scale data centers.

Edge Gateway: An edge gateway handles networking, access control, data privacy, and similar concerns.

Edge Devices: Edge devices are usually purpose-built for a single type of computation and often limited in their communication capabilities. Devices on this layer might be smart watches, traffic lights, or environmental sensors.
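The division of labor across these layers can be sketched in a few lines of Python. The class names below are hypothetical illustrations, not part of any real edge framework: the device produces raw readings, an edge node processes them locally, and only the result travels upstream to the cloud.

```python
class EdgeDevice:
    """Purpose-built sensor: produces raw readings only."""
    def __init__(self, readings):
        self.readings = readings

class EdgeNode:
    """Processes raw data near the source; forwards only results."""
    def summarize(self, device):
        readings = device.readings
        return {
            "count": len(readings),
            "mean": sum(readings) / len(readings),
            "max": max(readings),
        }

class Cloud:
    """Long-term storage and coordinator for the layers below."""
    def __init__(self):
        self.archive = []

    def store(self, summary):
        self.archive.append(summary)

device = EdgeDevice([21.5, 22.0, 22.4, 21.9])   # e.g. temperature samples
summary = EdgeNode().summarize(device)           # computed at the edge
cloud = Cloud()
cloud.store(summary)                             # only the summary travels upstream
```

In a real deployment, the gateway layer would sit between the node and the cloud to handle networking and access control; it is omitted here to keep the sketch short.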

With this basic understanding of Edge Computing, let’s discuss why it is needed.

Why Do We Need Edge Computing?

Billions of connected devices are coming online, and these devices will be producing massive amounts of data from just about everywhere, including cars, ships, satellites, farms, cell towers, and rooftops. Current cloud-centric architectures will not be able to scale to meet this opportunity. This will necessitate evolving data management infrastructure that is close to where data is produced. Data will need to be managed, analyzed, and processed at the edge so that its value can be assessed and utilized in real-time.

Key Reasons Why We Need Edge Computing

Improve latency: In most cases, latency is a byproduct of distance. Although fast connections may make networks seem to work instantaneously, data is still constrained by the laws of physics. It can’t move faster than the speed of light, although innovations in fiber optic technology allow it to get about two-thirds of the way there. Reducing the physical distance between the data source and its eventual destination is the best strategy for reducing latency. By locating key processing tasks closer to end users, Edge Computing can deliver faster and more responsive services.
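As a rough illustration of the physics above, the sketch below estimates the lower bound on round-trip time over fiber, assuming signals travel at roughly two-thirds the speed of light. The distances are illustrative assumptions, not measurements of any real network, and real networks add routing and queuing delay on top of this floor.

```python
SPEED_OF_LIGHT_KM_S = 299_792.458          # speed of light in vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3  # ~2/3 c in fiber, as noted above

def min_rtt_ms(distance_km):
    """Physical lower bound on round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

# A distant cloud region vs. a nearby edge site (hypothetical distances):
cloud_rtt = min_rtt_ms(2000)   # roughly 20 ms before any processing happens
edge_rtt = min_rtt_ms(20)      # well under 1 ms
```

Even this idealized floor shows why shrinking the physical distance is the most reliable lever for latency: no protocol improvement can beat the propagation delay itself.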

Reduce Bandwidth: Computing at the Edge means less data transmission as most of the heavy lifting is done by the edge devices. Instead of sending the raw data to the cloud, most of the processing is done at the edge, and only the result is sent to the cloud, thus requiring less data transmission bandwidth.
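The bandwidth effect can be illustrated with a small, hypothetical example: JSON-encoding a batch of raw sensor readings versus only an edge-computed summary. The numbers are illustrative, not benchmarks of any real system.

```python
import json

# Cloud-centric approach: ship every raw reading upstream.
raw_readings = [20.0 + 0.01 * i for i in range(1000)]  # e.g. one sensor's samples
raw_payload = json.dumps(raw_readings).encode()

# Edge Computing approach: process locally, transmit only the result.
summary = {
    "count": len(raw_readings),
    "min": min(raw_readings),
    "max": max(raw_readings),
    "mean": sum(raw_readings) / len(raw_readings),
}
summary_payload = json.dumps(summary).encode()

# Fraction of transmission bandwidth saved by sending the summary instead.
savings = 1 - len(summary_payload) / len(raw_payload)
```

The exact savings depend on the workload, but whenever the cloud only needs aggregates or alerts rather than raw samples, the upstream payload shrinks dramatically.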

Improved Data Security: Avoiding data transmission over large distances reduces attack and data breach vulnerabilities. It also reduces data redundancy.

Cost Savings: Retaining as much data as possible within your edge locations reduces the need for costly bandwidth to move data, and bandwidth translates directly into dollars. Edge Computing is not about eliminating the need for the cloud, but about optimizing the data flow in order to minimize operating costs.

With that, you now have a grasp of the basics of Edge Computing.


Approved for Public Release, Distribution Unlimited. Public Release Case Number 21-1302

The author’s affiliation with The MITRE Corporation is provided for identification purposes only and is not intended to convey or imply MITRE’s concurrence with, or support for, the positions, opinions, or viewpoints expressed by the author. ©2021 The MITRE Corporation. ALL RIGHTS RESERVED

This quarter’s column is written by Anandhi Sutti of The MITRE Corporation, who has over 20 years of experience in Data Management. She has helped public- and private-sector clients oversee and strategize the implementation of Data Management, Cloud Computing, and Business Intelligence (BI) projects. She has strong expertise in Business Intelligence, Data Analytics, Data Modeling, Data Architecture, Data Catalogs, and Cloud and Data Strategy, along with architecting complex systems and applications using a Software Development Life Cycle (SDLC). Anandhi has a master’s degree in computer applications and a bachelor’s degree in mathematics.


Bonnie O'Neil

Bonnie O'Neil is a Principal Computer Scientist at the MITRE Corporation, and is internationally recognized on all phases of data architecture including data quality, business metadata, and governance. She is a regular speaker at many conferences and has also been a workshop leader at the Meta Data/DAMA Conference, and others; she was the keynote speaker at a conference on Data Quality in South Africa. She has been involved in strategic data management projects in both Fortune 500 companies and government agencies, and her expertise includes specialized skills such as data profiling and semantic data integration. She is the author of three books including Business Metadata (2007) and over 40 articles and technical white papers.
