By: Dr. Frode Alirash-Roarson, Chief Scientist –
New and emerging technologies are shaping a new type of power grid: a smart grid. Tomorrow’s smart grid will challenge the traditional value chain, leaving behind the aging infrastructure and taking advantage of digital technologies to identify and respond to changes in power consumption.
The emerging smart grid combines communication, edge computing, automation and other new technologies into a network that is more efficient, more stable and greener. Discussions of these emerging technologies revolve around the familiar buzzwords: AI, ML, and predictive and preventive maintenance.
Multiple new initiatives have been set up to address these topics and optimize the new smart grid. A common denominator for all of them is that they rely on data, and in particular on the availability of real-time data.
A multitude of vendors offer sensors, smart meters, edge gateways and similar devices, each providing some form of access to this data. The vendors differ in the sampling resolution of the data and in how it is delivered to the client. One of the challenges of obtaining good datasets for machine learning and analysis is that one has to sample at a high rate to build a sufficiently large dataset. The underlying cost of data traffic and further processing in the cloud then becomes significant.
Several companies today offer the ability to measure and collect data at a high sampling rate (samples per second from the sensor) in power substations. The challenge with those systems is that the amount of data collected becomes very large, while the perceived value of the data remains low.
The reason is that trends cannot be seen without aggregation and analysis. This type of aggregation and analysis is typically done in a cloud-based environment, and the costs of both data transmission and analysis can easily become prohibitive, depending on the amount of data collected.
“We have seen several examples of customers who have turned down the frequency of both sampling and collection from equipment in the network because the costs of moving data are too high.” – Dr. Frode Alirash-Roarson
Most of the offerings on the market for measuring conduction in network substations collect and transmit data without doing any processing at the edge. This entails a transaction cost (an LTE / 5G subscription) in cases where one does not have a fiber-connected network of one's own to use as a data carrier. As stated before, this kind of cost can mean that one does not collect a dataset large and accurate enough to be used for AI / ML analytics. To get started with analytics, predictions and machine learning, companies must have access both to real-time data with a high sampling rate and to historical data.
Today’s solutions, which simply pass data on without any form of edge analytics, also carry a certain risk that data is lost if the data carrier goes down. With edge computing nodes, all acquired data is stored locally, so nothing is lost in the event of a carrier outage. The edge nodes simply resend the collected data when the data link is back up, or over an alternative data carrier if one is available.
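A minimal sketch of this store-and-forward behaviour is shown below. It assumes a hypothetical local queue backed by SQLite and a send() callback that stands in for whatever uplink the node uses (LTE, 5G or fiber) and reports whether transmission succeeded; it is an illustration of the principle, not a description of any particular vendor's implementation.

```python
import json
import sqlite3


class StoreAndForwardBuffer:
    """Persist every reading locally, then drain the queue once the uplink returns."""

    def __init__(self, db_path="edge_buffer.db"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS readings (id INTEGER PRIMARY KEY, payload TEXT)"
        )

    def store(self, reading):
        # Always write to local storage first, so a carrier outage loses nothing.
        self.db.execute(
            "INSERT INTO readings (payload) VALUES (?)", (json.dumps(reading),)
        )
        self.db.commit()

    def flush(self, send):
        # Forward queued readings in order; stop at the first failure and keep
        # the rest queued until the link (or an alternative carrier) is back.
        rows = self.db.execute("SELECT id, payload FROM readings ORDER BY id").fetchall()
        for row_id, payload in rows:
            if not send(json.loads(payload)):
                return
            self.db.execute("DELETE FROM readings WHERE id = ?", (row_id,))
            self.db.commit()
```

If send() reports a failure, the remaining readings simply stay in the local queue and are retried on the next flush, which is what makes the node tolerant of carrier outages.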
If one chooses to install edge computing nodes out in the network, one introduces hardware capable of capturing sensor and measurement data at the highest sampling rate the sensors can offer. The nodes can store and process the acquired data so that results and/or alarms can be sent to a central solution. Metadata can be shared with third parties while maintaining a secure infrastructure.
Another benefit of processing the data where it resides is a reduction in data carrier cost, since only the metadata is forwarded. The high sampling rate made possible by the edge computing nodes has the follow-on effect of high-quality, accurate data.
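To illustrate why forwarding only metadata cuts carrier traffic, the sketch below (with hypothetical field names and a one-minute window) reduces a window of high-rate samples to a small summary record before anything leaves the node; the raw samples remain stored locally.

```python
import statistics


def summarize_window(samples, sensor_id, window_start):
    """Reduce one window of raw samples to a compact summary record.

    Only this record is forwarded over the data carrier; the raw
    samples stay on the edge node for later retrieval or analysis.
    """
    return {
        "sensor_id": sensor_id,
        "window_start": window_start,
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": statistics.fmean(samples),
        "stdev": statistics.pstdev(samples),
    }


# Example: sampling at 50 Hz for 60 s yields 3000 readings, but only one
# summary record of a few hundred bytes is sent upstream.
```

The same pattern applies whatever the aggregation actually is; the point is that the expensive link carries summaries and alarms rather than the full high-rate stream.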
Having access to a large, accurate and high-quality dataset will be crucial when providing this data to customers or third-party vendors for further analytics.
With the introduction of edge computing nodes in substations, one gains access to high sampling rates from sensors measuring voltage, current, machine sound, and other quantities such as moisture and humidity, ambient temperature and transformer temperature. Edge computing nodes also offer the ability to move AI and ML models from the cloud to the edge for analysis and “automatic” decision making. Rules and alarms can be controlled from the edge nodes directly.
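As a simple illustration of a rule evaluated directly on an edge node, the sketch below checks a transformer-temperature threshold and emits an alarm record only when the limit is exceeded, so the central solution receives alarms rather than raw streams. The field names, threshold and severity levels are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Rule:
    sensor: str     # e.g. "transformer_temperature"
    limit: float    # alarm threshold in the sensor's unit
    severity: str   # e.g. "warning" or "critical"


def evaluate(rules, reading):
    """Evaluate locally configured rules against one sensor reading.

    Returns a list of alarm records; only these are forwarded to the
    central solution, while the raw reading stays on the node.
    """
    alarms = []
    for rule in rules:
        value = reading.get(rule.sensor)
        if value is not None and value > rule.limit:
            alarms.append({
                "sensor": rule.sensor,
                "value": value,
                "limit": rule.limit,
                "severity": rule.severity,
            })
    return alarms


# Hypothetical example: a 90 °C transformer temperature rule.
rules = [Rule(sensor="transformer_temperature", limit=90.0, severity="critical")]
print(evaluate(rules, {"transformer_temperature": 94.2, "ambient_temperature": 21.5}))
```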
On the basis of the collected sensor data, one can begin moving from inefficient, periodic maintenance to real-time, condition-based maintenance for the next generation of smart utility grids.