
The Edge’s Impact on Performance Monitoring Strategies

August 17, 2020

Featured article by Nith Mehta, Executive Vice President, Technical Services, Catchpoint


Growth

IDC predicts that by 2023 the edge computing market, which was virtually unheard of a few years ago, will be worth $34 billion. This growth, driven by 5G and by latency-sensitive applications such as IoT and VR, is putting pressure on system operators to build edge datacenters and deploy IT equipment at the last mile.

The edge is a new frontier, and therefore poses major monitoring challenges for IT. One such challenge is the sheer volume of data that monitoring systems will need to collect as more “things” such as driverless vehicles and jet engines are connected to the Internet.

Another challenge facing IT is accessibility. For example, if 5G is fully deployed, there will be many more cell towers than today, some with antennas as close as 500 feet apart, since high-frequency radio waves weaken when transmitted over long distances and through objects. IT teams must determine how many of these small cell sites to monitor, and likewise which edge datacenters to monitor from. How extensive does the monitoring footprint need to be in order to truly cover the edge in all its manifestations?

Developing the right strategy to gain a comprehensive picture of how things are performing from an edge perspective is critical. Here are three things IT teams should consider:

#1. Monitor from where it matters most

The edge may be a new frontier, but as has always been the case with technology, the only way to fix problems is to monitor activity from where it matters most. If you have an app hosted on the edge but are monitoring from a regional data center, you won’t be able to tell whether your edge app is working well or whether your customers can access it. As with all kinds of troubleshooting, monitoring needs to take place where the app itself is located.

When monitoring at the edge, keep in mind that instead of connecting directly to a backbone provider, your end user connects to a local ISP or mobile provider via a “last mile” access network. Consumer ISPs don’t consistently deliver the bandwidth they advertise, as consumption peaks and networks get saturated. By correlating last-mile monitoring with backbone and broadband measurements, you can make the critical determination of whether performance problems stem from your site infrastructure or from the end user’s network.
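The correlation described above can be sketched as a simple classifier. This is an illustrative sketch, not a Catchpoint implementation: the baseline, tolerance, and sample latencies are all assumed values.

```python
# Sketch: attribute a slowdown to site infrastructure vs. the user's
# last-mile network by comparing latency measured from two vantage points.
# Thresholds and sample values below are illustrative assumptions.

def attribute_slowdown(backbone_ms: float, last_mile_ms: float,
                       baseline_ms: float = 50.0, tolerance: float = 1.5) -> str:
    """Classify where a latency problem most likely originates.

    backbone_ms  -- latency to the app measured from a backbone node
    last_mile_ms -- latency measured from a last-mile (ISP/mobile) node
    """
    threshold = baseline_ms * tolerance
    if backbone_ms > threshold:
        # Slow even before the last mile: likely the site or its network.
        return "site-infrastructure"
    if last_mile_ms > threshold:
        # Only the access network is slow: likely the end user's ISP.
        return "last-mile-network"
    return "healthy"

print(attribute_slowdown(backbone_ms=40, last_mile_ms=180))   # last-mile-network
print(attribute_slowdown(backbone_ms=120, last_mile_ms=160))  # site-infrastructure
```

A production monitor would, of course, aggregate many samples per vantage point rather than classify on single readings, but the attribution logic follows the same shape.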

#2. Ensure actionable context from the data

In its complexity, the edge has made it more challenging than ever to obtain actionable network performance data. Enterprises need to ensure that they have full visibility into the network and across application layers in order to proactively detect issues and quickly identify their root causes.

For example, with full visibility into private and public networks, you can monitor for congestion, packet loss and peering issues. You can also protect against DDoS, DNS Cache poisoning and BGP route hijacks.

Deriving actionable context, that is, meaningful next actions, from the data you collect allows you to test specific parts of your application or specific edge locations with minimal production changes. By creating custom breakdowns and metrics, you can examine the performance of each edge location and proactively detect issues.

It is then critical to use this performance data to gather the insight and intelligence that enable faster decisions, better real-time customer experiences and optimized business processes.

#3. Don’t forget API monitoring

CDNs such as Akamai, Fastly and Cloudflare are already running services from the edge. To ensure that API services (which currently power most AI recommendations) are always on and as reliable as possible, providers are beginning to offer edge services that move API traffic onto their edge networks, serving API responses from edge servers instead of origin servers. This makes API monitoring critical, not just for availability, but for insight into whether API calls are returning the correct responses, which safeguards the integrity of the service.
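A correctness-aware API check can be sketched as follows. The endpoint contract, field names, and responses are hypothetical stand-ins; a real monitor would issue an actual HTTP request and feed the status code and parsed body into a validator like this one.

```python
# Sketch: validate an API response for correctness, not just availability.
# The payload contract ("recommendations" list, "served_by" edge node)
# is a hypothetical example, not any specific provider's API.

def check_api(status_code: int, body: dict) -> list:
    """Return a list of problems found in one API response."""
    problems = []
    if status_code != 200:
        problems.append(f"unexpected status {status_code}")
    # Availability isn't enough: verify the payload carries the fields
    # the service contract promises.
    if not isinstance(body.get("recommendations"), list):
        problems.append("missing or malformed 'recommendations' field")
    if not body.get("served_by", "").startswith("edge-"):
        problems.append("response not served from an edge node")
    return problems

ok = check_api(200, {"recommendations": [1, 2, 3], "served_by": "edge-fra"})
bad = check_api(200, {"served_by": "origin-1"})
print(ok)   # [] -- healthy response
print(bad)  # two problems: malformed payload, and served from origin
```

The availability-only version of this check would stop at the status code; the extra assertions are what catch an API that is "up" but quietly returning wrong or origin-served answers.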

Conclusion:

The edge will continue to thrive and the issues regarding performance and digital experience won’t disappear anytime soon. Enterprises must take a hard look at their monitoring strategy and determine if the systems they have in place will provide the necessary insight and perspective needed to ensure positive digital experiences.
