The energy network transition will require more agile, flexible and interconnected networks. Digitalisation of assets and processes will play a key part in preparing a net-zero-capable network. Whilst the IEC 61850 suite of standards has been widely adopted for SCADA systems, enhanced system and asset awareness will be required and will, in many cases, be based on IoT technology. Correlating both data sets and interfacing to common business applications will be a key enabler and value lever for the energy transition. Remote data collection from SCADA and IoT sensors will also require appropriate security solutions that can guarantee the integrity of each of the separate security zones. This project will investigate new solutions for operational data collection and reporting, edge computing and security.
Benefits
There are multiple applications and value levers in terms of optimised asset management and operational and system data reporting. This work also has the potential to enable remote deployment of new and enhanced automation schemes. The virtualisation of reporting and diagnostic functions will reduce the number of hardware platforms required when rolling out SCADA systems and enable remote management of some of the configurations. The reduced hardware requirements will provide a saving of £320k over the life of the assets (10 years in this case), and the reduced cost of change management due to remote configurability will save approximately £370k. Maintenance costs will also be reduced by £7.5k. Overall, the net benefit in terms of Net Present Value (NPV) is estimated at £371k based on £295k project spend. This is based on a 10-year assessment following a 2-year development period, required to develop a solution that can be rolled out. The technology is expected to deliver significantly greater benefits from optimised asset management of primary and secondary assets; these benefits are not quantified at this stage. Follow-up projects for digital twin applications, system awareness and analysis tools, and enhanced system integrity protection and remedial action schemes are among the applications in which this technology plays a key role, and their benefits will be quantified in future projects.
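As a rough illustration of how the quoted figures combine, the sketch below computes a simple NPV. Only the headline figures (£295k spend, £320k + £370k + £7.5k of savings over 10 years, a 2-year development period) come from the report; the spend phasing and the 3.5% discount rate are assumptions for demonstration, not project data.

```typescript
// Illustrative NPV calculation. The phasing and discount rate are assumed;
// only the headline £k figures are taken from the report.
function npv(rate: number, cashflows: number[]): number {
  // cashflows[t] is treated as occurring at the end of year t+1
  return cashflows.reduce((acc, cf, t) => acc + cf / Math.pow(1 + rate, t + 1), 0);
}

// Assumed profile: £295k spend split evenly over the 2-year development
// period, then 10 years of evenly spread savings (£697.5k in total).
const annualSaving = (320 + 370 + 7.5) / 10; // £k per year
const cashflows = [-147.5, -147.5, ...Array(10).fill(annualSaving)];

console.log(npv(0.035, cashflows).toFixed(1)); // NPV in £k at the assumed rate
```

At a 0% rate this reduces to the undiscounted net benefit (£697.5k − £295k = £402.5k); discounting brings the figure down towards the reported £371k, depending on the rate and phasing assumed.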
Learnings
Outcomes
Year 2021/2022:
As a result of the project delay there were no outcomes to report during 2021/22.
Year 2022/2023:
- SCADA events and telemetry can be securely exported to the cloud.
- PMU data can be streamed to the cloud.
- Data can be processed at the edge through AI/ML.
- Syslogs can be securely exported to the cloud.
- Fault records can be securely exported to the cloud.
Year 2023/2024:
A Secure Edge Platform (SEP) was successfully demonstrated, built from a combination of products including Edge Xpert, Node-RED, InfluxDB and a suite of custom-built applications.
The project comprised eight work packages, each concentrating on a key area such as data ingestion and storage, data enrichment, the rules engine, and edge-to-cloud communication. These elements were designed as an interconnected set of operations, all working towards the same goal.
At the heart of the project was a simple but robust foundation from which a more sophisticated architecture could grow. The architecture proved flexible in its design, capable of addressing a diverse range of requirements, from data management to cloud communications.
Lessons Learnt
Year 2021/2022:
No project specific learning was reported due to the delay in the programme. Detailed learning and outcomes are to be reported in subsequent progress and closure reports.
Year 2022/2023:
WP 1
- Edge Platforms (in the Demilitarised Zone (DMZ)) should not directly connect to Operational Technology (OT) devices, as this tightly couples the solution and raises security concerns.
- MQTT and NATS are better suited to broker patterns than industrial protocols such as OPC UA, DNP3, IEC 101/104, IEC 61850 and others.
- Retention periods must be set to avoid filling up resources and causing adverse conditions.
- Container based workloads were much easier to manage.
- Security practice prevented the use of Edge Builder as a SEP deployment tool. National Grid understands there is a future need to review processes around patching container-based applications.
- System integrators will require both software development skills and industrial automation knowledge to support the technologies.
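The broker-pattern point above can be sketched with a minimal in-memory publish/subscribe broker: the edge platform subscribes to topics instead of connecting directly to OT devices. MQTT or NATS would play the broker role in practice; all names and the topic scheme here are illustrative.

```typescript
// Minimal in-memory pub/sub broker illustrating the decoupling that MQTT/NATS
// provide: consumers subscribe to topics rather than polling OT devices.
type Handler = (payload: string) => void;

class Broker {
  private subs = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const list = this.subs.get(topic) ?? [];
    list.push(handler);
    this.subs.set(topic, list);
  }

  publish(topic: string, payload: string): void {
    for (const h of this.subs.get(topic) ?? []) h(payload);
  }
}

// An OT gateway publishes; the edge platform consumes without any knowledge
// of the device that produced the value.
const broker = new Broker();
const received: string[] = [];
broker.subscribe("substation/feeder1/voltage", (p) => received.push(p));
broker.publish("substation/feeder1/voltage", "132.4");
console.log(received); // ["132.4"]
```

The loose coupling comes from the topic namespace: producers and consumers agree only on topic names and payload format, never on each other's addresses, which is the property that made MQTT/NATS preferable to the point-to-point industrial protocols.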
WP 2
- Data enrichment works on meta-data, which is rarely transmitted with a process value.
- Highly flexible message brokers are required to handle meta-data between distributed systems.
- Adding applications to SEP will require additional infrastructure resources - there will be a point where horizontal scalability becomes a requirement and SEP becomes distributed.
- The microservice architecture allows development in a language native to the skills of a development team or vendor. Throughout this WP we developed applications in Go and TypeScript.
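The enrichment point above can be sketched as a small service that joins a bare process value (point ID, value, timestamp) with separately held meta-data. The field names and the asset registry are assumptions for illustration, not the project's actual data model.

```typescript
// Sketch of data enrichment: process values arrive carrying only a point ID,
// and the meta-data (rarely transmitted with the value itself) is joined in
// by an enrichment service. Field names and registry contents are illustrative.
interface ProcessValue { pointId: string; value: number; ts: number; }
interface Enriched extends ProcessValue { asset?: string; unit?: string; }

// Stand-in for a meta-data store held separately from the telemetry stream.
const metadata: Record<string, { asset: string; unit: string }> = {
  P001: { asset: "Feeder 1 CB", unit: "kV" },
};

function enrich(pv: ProcessValue): Enriched {
  const meta = metadata[pv.pointId];
  // Pass the value through unmodified when no meta-data is known.
  return meta ? { ...pv, ...meta } : { ...pv };
}

const out = enrich({ pointId: "P001", value: 132.4, ts: 1700000000 });
console.log(out.asset, out.unit); // Feeder 1 CB kV
```

In a distributed deployment the `metadata` lookup would itself sit behind the message broker, which is why the report notes that highly flexible brokers are needed to move meta-data between systems.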
WP 3
- Telegraf instances cannot be used in the cloud to convert data to the Influx Line Protocol due to increased latency.
- Azure Event Hub had delays in data transfer due to storage constraints, so Nats.io was used in conjunction with Node-RED to overcome the issue.
- Influx Edge Replication only supports the replication of write operations and therefore could become out of sync.
PMU streaming to the cloud (part of WP3)
- Event-Hub has some significant latency when visualising the data via Azure Data Explorer into Grafana. This could be a result of using Grafana’s ingestion plugin - Microsoft's reference architectures utilise PowerBI for real-time visualisations. Using PowerBI instead of Grafana might mitigate the latency issues.
- Microsoft recommends creating a custom application to handle faster ingestion rates into the event hub as the out-of-the-box methods have increased latency.
- Using Grafana with Azure Data Explorer limits the query functionality to the Kusto Query Language (KQL). This doesn’t seem to be a major issue but could limit some of the charting functionality in the future.
- Grafana dashboards can produce artefacts whilst the points are being rendered, and these artefacts usually correct themselves on the next render update.
- Influx cannot ingest high-frequency data without buffering the points first; Influx therefore recommends batching 5000 points per write request.
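The batching recommendation above can be sketched by building InfluxDB line-protocol strings and chunking them into writes of 5000 points. The measurement, tag and field names are illustrative, and the HTTP write call itself is omitted.

```typescript
// Build InfluxDB line-protocol lines ("measurement,tags fields timestamp")
// and batch them 5000 points per write, per the recommendation above.
// Measurement/tag/field names are illustrative.
function toLine(measurement: string, tags: Record<string, string>,
                fields: Record<string, number>, tsNs: number): string {
  const tagStr = Object.entries(tags).map(([k, v]) => `,${k}=${v}`).join("");
  const fieldStr = Object.entries(fields).map(([k, v]) => `${k}=${v}`).join(",");
  return `${measurement}${tagStr} ${fieldStr} ${tsNs}`;
}

function batch<T>(items: T[], size = 5000): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

// 12000 buffered PMU points become 3 write requests of at most 5000 lines.
const lines = Array.from({ length: 12000 }, (_, i) =>
  toLine("pmu", { substation: "sub1" }, { freq: 50.0 }, i));
console.log(batch(lines).length); // 3
```

Buffering points at the edge and flushing them in fixed-size batches is what keeps the write path within Influx's ingestion limits for high-frequency PMU data.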
WP 4
- Node-RED is a useful tool for visually representing event flows; however, it requires a reasonable amount of JavaScript knowledge to add complex rules.
- Using Node-RED as a rules engine promotes heavy reliance on community packages, which are often outdated and poorly supported.
- Edge Xpert’s ability to subscribe to notifications and forward them on to other endpoints makes the solution highly extendible.
- Edge Xpert’s notification functionality is limited: notifications can only be filtered based on a timestamp and a category.
- Using NATS leaf-node technology makes it easier to publish notifications between different networks and standardises the communication between systems.
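As a sketch of the timestamp-and-category filtering described above (the only filters available for notifications), a minimal filter might look like the following; the notification shape is an assumption for illustration.

```typescript
// Minimal notification filter on timestamp and category only, mirroring the
// limited filtering noted above. The notification shape is illustrative.
interface Notification { category: string; ts: number; message: string; }

function filterNotifications(ns: Notification[], category: string,
                             since: number): Notification[] {
  return ns.filter((n) => n.category === category && n.ts >= since);
}

const ns: Notification[] = [
  { category: "ALARM", ts: 100, message: "CB trip" },
  { category: "INFO",  ts: 150, message: "heartbeat" },
  { category: "ALARM", ts: 200, message: "overvoltage" },
];
console.log(filterNotifications(ns, "ALARM", 150).length); // 1
```

Any richer routing (by asset, severity or message content) therefore has to happen downstream, which is where forwarding notifications onto the broker for custom rule logic becomes useful.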
Year 2023/2024:
An event-driven microservice approach has clear advantages over traditional architectural approaches. The solution is vendor agnostic and therefore provides greater flexibility in the future. Custom microservices can be commissioned to solve bespoke problems, and each service can be built with high scalability and availability in mind so that it handles demand without affecting the operation of the existing solution. The proof of concept (PoC) has demonstrated that a microservice approach can be considered for future projects involving areas such as data ingestion, data transformation, data storage and data analysis, as it can form a standardised approach to a common set of problems.
- Current policies and procedures can hinder the performance and impact of a microservice solution, because of heavy constraints that may not be relevant for an edge solution, or where alternative measures aligned to newer practices could be implemented.
- Following an agile methodology allowed the project scope to be adjusted where necessary to prioritise areas that needed further work. Project delivery constantly adapted to recommendations and feedback, and changes in requirements arising from further research could be accommodated. This proved invaluable and should be utilised in other PoCs.