The Edge Is Where It’s At
An interview with IDC’s Shawn Fitzgerald on Edge Computing and Digital Transformation
In preparation for my panel session, “Edge Computing: From Hype to Disruption,” at the Post Industrial Summit, I had the chance to conduct a pre-interview with IDC’s Shawn Fitzgerald. Shawn is Global Research Director for Worldwide Digital Transformation Strategies at IDC Insights. With that title comes one of the most interesting research mandates I have ever encountered.
Cutting across all industries and technologies, Shawn leads the overall practice that works to understand how organizations are using technology to drive their digital transformations (what IDC refers to as DX). Besides the numerous reports, events, and presentations Shawn delivers, he also leads the creation of a massive DX use case database that includes thousands of specific examples of how companies are driving their DX initiatives.
I had the opportunity to ask Shawn a few questions in preparation for our panel. Here is a sneak peek at some of the topics we will be covering.
Blaine: Welcome Shawn, appreciate you taking the time to contribute your thoughts on edge computing and its impact on digital transformation efforts.
Shawn: No problem Blaine, you are the edgiest person I know!
Blaine: Good one, Shawn. Let’s start by talking about how people should think about edge computing.
Shawn: Edge computing requires a fundamental reframing of the way people think about data. It’s about ensuring that the signal is not lost in the noise: ‘noisy’ data is processed in real time near where it is created, and only the most critical data is transferred for centralized processing.
For example, take the new Mars Perseverance Rover. There is no way all the data its systems generate can be sent back to Earth to enable real-time command and control. Most of that data needs to be processed locally, at the Rover’s ‘edge’.
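As an aside, the ‘signal vs. noise’ pattern Shawn describes can be sketched in a few lines: the edge node processes the full telemetry stream locally and forwards only the outliers worth central attention. The function name, data, and threshold below are illustrative assumptions on my part, not anything from IDC’s research.

```python
# A minimal sketch of edge-side filtering: keep noisy telemetry local,
# forward only "critical" readings (statistical outliers) upstream.
from statistics import mean, stdev

def filter_for_uplink(readings, threshold=2.0):
    """Return only readings more than `threshold` standard deviations
    from the local mean -- the 'signal' worth transmitting centrally."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [r for r in readings if abs(r - mu) > threshold * sigma]

# Seven local sensor readings; one anomalous spike.
telemetry = [20.1, 19.8, 20.3, 20.0, 35.7, 19.9, 20.2]
critical = filter_for_uplink(telemetry)
print(critical)  # only the spike (35.7) is sent for centralized processing
```

In practice an edge node would apply far richer models than a standard-deviation cutoff, but the economics are the same: seven readings stay local, one travels.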
Blaine: And that enables the Rover to have some degree of autonomy, right?
Shawn: Absolutely. And it’s not just Mars Rovers. As all kinds of machines and systems become increasingly autonomous, they need to be able to process most of their data locally.
Blaine: I know you have a background in the maritime industry. How are these kinds of systems driving value in that arena?
Shawn: Value streams are increasingly about operating in real time. Think about the benefit of improving system performance on a ship. If you can control a ship more effectively based on current conditions and the real-time status of ship systems, you can significantly reduce the use of bunker fuel and therefore CO2 emissions. That saves real money and helps the environment as well.
Blaine: I often refer to this as putting a ‘micro cloud at sea’: bringing distributed computing capability to the ship itself so that most data does not need to be sent elsewhere by satellite.
Shawn: I like that!
Blaine: A lot of people talk about ‘dark data’ – data that goes unused because there is no ability to process it in place and it is too expensive or time-consuming to transfer it. How big is that problem?
Shawn: It’s a big problem and a key driver of edge computing. But at the same time, many businesses are still solving the ‘no data’ problem. You have to have the data in the first place to be able to process it. That situation is changing very quickly, however.
Blaine: True. Now I’ve always been amazed at IDC’s DX use case database. What’s the relationship between edge computing and the thousands of examples in this database?
Shawn: I would estimate that at least 90% of those use cases leverage big data and/or AI in some way. So edge computing is a natural fit with many of those since much of that data simply must be captured, computed, and responded to in real-time near where it is being created.
Blaine: Shawn it’s been great, but let’s save some of the best stuff for the panel discussion. Looking forward to it!
Shawn: Me as well. Thanks Blaine.