RSK Orbital currently conducts detailed aerial photography of the NTS network over a four-year rolling programme for the purposes of TD1 population density and Building Proximity Analysis.
The survey output is a digital report identifying building and population change since the last survey (four years previously), categorized by severity. The photography captured for this purpose is delivered via the RSK Orbital Visivi tool. Consecutive photographs are captured with considerable overlap, oriented effectively to identify the usage, type and condition of the buildings in the corridor. This potentially also lends itself to creating 3D models of the pipeline route from the same data.
By generating 3D models, we propose to analyse a central corridor spanning three metres either side of the pipeline centreline and create a Digital Surface Model (DSM). Using the DSM, we propose to identify the precise mapped locations of vegetation ingress. The results can be reported against the network length, the pipeline, or sub-sections thereof.
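The corridor test underlying this proposal can be sketched in a few lines: given the pipeline centreline as a polyline in projected (metre) coordinates, a mapped feature falls inside the corridor if it lies within three metres of any centreline segment. This is an illustrative pure-Python sketch, not the GIS tooling actually used, and the coordinates are invented:

```python
import math

def point_to_segment_distance(p, a, b):
    """Shortest distance from point p to the segment a-b (all 2D tuples, metres)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return math.hypot(px - ax, py - ay)
    # Projection of p onto the segment, clamped to the segment's ends
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def in_corridor(point, centreline, half_width=3.0):
    """True if the point lies within half_width metres of any centreline segment."""
    return any(
        point_to_segment_distance(point, centreline[i], centreline[i + 1]) <= half_width
        for i in range(len(centreline) - 1)
    )

centreline = [(0.0, 0.0), (100.0, 0.0)]      # invented centreline (easting, northing)
print(in_corridor((50.0, 2.5), centreline))  # inside the 3 m corridor -> True
print(in_corridor((50.0, 4.0), centreline))  # outside -> False
```

Reporting against the network, a pipeline or a sub-section then amounts to running the same test over the relevant stretch of centreline.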
In addition to providing the visual evidence which will clearly show the colour and shape of the vegetation, we also propose to report on the following:
- Self-seeded vegetation locations, using the DSM against a wider-area Digital Terrain Model (DTM)
- Sub-metre vegetation (not on a hedgerow), using GIS analysis
- Sub-metre vegetation (on a hedgerow), using GIS analysis
- Height of vegetation, using the DSM
- Volume of vegetation, using our software tools
- Vegetation species - for this trial, using our in-house arboriculturists
- Criticality of vegetation ingress, delivered via a thematic map
- Year-on-year comparison analysis showing differences in vegetation volume and location
All applicable outputs will be ESRI Shapefile map layers for use with your corporate GIS systems. The data will also be displayed and accessible via the Visivi tool.
We propose to utilise data from a pipeline flown during the 2020 survey runs for the purposes of the trial, but also to refly the same line with the new camera and camera rigs that we will be using for the 2021 surveys. This will illustrate how the technology is developing and how it can be honed to provide the best possible data for the purposes at hand.
Benefits
The data is already being collected for TD1, so no additional flights will be required, with the associated improvements to safety, increases in efficiency, reduced environmental impact, and possible tie-ins to future developments in data gathering and surveillance.
Learnings
Outcomes
RSK Orbital conducts detailed aerial photography of the NTS network over a four-year rolling programme for the purposes of TD1 population density and Building Proximity Analysis, the results of which are presented to National Grid on the RSK ‘Visivi’ platform. The existing survey outputs are varied but are largely focused on the production of a digital report identifying building and population change since the previous survey along any given pipeline, categorized by the severity of any potential threat to the pipeline’s integrity. This aerial photography was used as the basis for this project, to understand whether it could also be used for vegetation management within the proximity of the pipeline.
The project report has been split into sections detailing the findings, as follows:
Vegetation Recognition with Existing Data
The primary consideration at this stage was to ascertain whether it was possible to identify the different types of vegetation from the existing aerial photography. To do this, the services of an in-house (RSK Group) arboriculturist were sought. When presented with the existing raw imagery (it is important that it is raw, as the more it is processed into other formats the less detail is retained, and even marginal differences can affect the outcomes), it was possible in the main to see quite clearly where there was tree growth, and certainly where there was potential self-seeding. In conclusion, on inspecting the existing data the arboriculturist can distinguish between broadleaf and coniferous, but in no more granular detail on the original data.
Vegetation Location
As the imagery in Visivi collected for population density is already georeferenced, each pixel in the imagery can be identified in XYZ. This enables mapping of each element of visible vegetation; however, there were a number of challenges. The first was to identify the vegetation; the second was the accuracy of its location. To provide the levels of accuracy required for population density mapping, we control the image data with a subset of OS mapping containing some of the elements that appear in the imagery. This enables us to confirm that the imagery and the overlays are in the correct position. The accuracy of the control is therefore excellent at the positions at which it is applied, but can deviate between points once the imagery is fitted to the control.
For population density mapping this is acceptable, because we are mapping the controlled elements. However, when we try to locate features such as trees that are not themselves controlled, a factor of drift can become apparent. The end result is that the offset to the real world is between 0.5cm and 40cm at its outer limit. This is less pronounced in the centre of the imagery, where the 3 metre corridor used for vegetation mapping is found. To increase the accuracy of the positioning and provide additional data, we then assessed if and how the existing data could be used to derive a 3D view of the corridor.
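The pixel-to-XYZ relationship described above amounts to an affine mapping from image coordinates to world coordinates. A minimal sketch using a GDAL-style six-element geotransform, with invented origin and ground-sample values for illustration:

```python
def pixel_to_world(col, row, gt):
    """Map an image pixel (col, row) to world coordinates using a GDAL-style
    affine geotransform gt = (origin_x, pixel_w, rot_x, origin_y, rot_y, pixel_h)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical geotransform: 0.1 m ground sample distance, north-up image,
# origin at an invented OS grid position
gt = (450000.0, 0.1, 0.0, 180000.0, 0.0, -0.1)
print(pixel_to_world(0, 0, gt))      # image origin -> (450000.0, 180000.0)
print(pixel_to_world(250, 400, gt))  # -> (450025.0, 179960.0)
```

The drift described above is, in these terms, a residual error in the transform that grows with distance from the controlled points.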
3D Model from existing Imagery
The outputs of the 3D modelling from the historical data show mixed results for the various requirements assessed.
Self-seeded vegetation locations, using the DSM against a wider-area Digital Terrain Model (DTM)
The self-seeded locations are visible on the 3D model; however, they are less defined than on the original image data. The accuracy of georeferencing is higher on the oblique stills than on the controlled 3D model. More comprehensive ground control would support a more accurate 3D model; however, identification and location are better from the original data.
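The DSM-against-DTM comparison reduces to a cell-by-cell subtraction followed by a height threshold: cells standing proud of the terrain are candidate self-seeded vegetation. The grids, values and 0.5 m threshold below are invented for illustration; the production tooling is a full raster GIS workflow, not this sketch:

```python
def normalised_heights(dsm, dtm):
    """Subtract the terrain model from the surface model cell by cell,
    giving above-ground feature heights (a canopy height model)."""
    return [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

def flag_vegetation(chm, min_height=0.5):
    """Return (row, col) indices of cells whose above-ground height exceeds
    min_height metres -- candidate self-seeded vegetation."""
    return [(r, c) for r, row in enumerate(chm)
                   for c, h in enumerate(row) if h > min_height]

# Toy 3x3 grids in metres: one cell stands 1.2 m proud of the terrain
dsm = [[10.0, 10.1, 10.0],
       [10.0, 11.3, 10.1],
       [10.1, 10.0, 10.0]]
dtm = [[10.0, 10.0, 10.0],
       [10.0, 10.1, 10.0],
       [10.0, 10.0, 10.0]]
print(flag_vegetation(normalised_heights(dsm, dtm)))  # -> [(1, 1)]
```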
Sub-metre vegetation not on a hedgerow
The vegetation can be seen and identified on the original imagery; however, when modelled there is not enough resolution to determine accurately the amount of vegetation not on a hedgerow. The viewing angle, the resolution and the occlusions created render the model inadequate for determining vegetation volume.
Sub-metre vegetation on a hedgerow, using GIS analysis
Although the vegetation can be clearly identified on the imagery, there is not enough resolution in the 3D model to determine accurately the vegetation volume on a hedgerow. The angle of the oblique imagery, the resolution of the vegetation and the occlusions created render accurate modelling of the vegetation difficult.
Height of vegetation, using the DSM
Vegetation height can be determined accurately from the 3D model, as can be seen in the image below. The height is not reliant on resolution or clarity.
Volume of vegetation
Volumes can be generated from the point cloud; however, these are volumes of the vegetation extents. The difficulty arises when trying to calculate the physical vegetation volumes from the point cloud. We have therefore concluded that volumes cannot be calculated with accuracy from the historic data.
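The distinction between extent volume and physical volume can be illustrated with a voxel count over the point cloud: counting occupied voxels measures the envelope the points occupy, which systematically overstates the true woody volume. A toy sketch, assuming an invented 0.5 m voxel size rather than the tooling's actual method:

```python
def voxel_extent_volume(points, voxel=0.5):
    """Approximate the extent volume of a point cloud by counting occupied
    cubic voxels of side `voxel` metres. This measures the envelope the
    points occupy, not the physical volume of the vegetation itself."""
    occupied = {(int(x // voxel), int(y // voxel), int(z // voxel))
                for x, y, z in points}
    return len(occupied) * voxel ** 3

# Toy cloud: eight points at the corners of a roughly 1 m cube
cube = [(x, y, z) for x in (0.1, 0.9) for y in (0.1, 0.9) for z in (0.1, 0.9)]
print(voxel_extent_volume(cube, voxel=0.5))  # 8 occupied 0.5 m voxels -> 1.0 m^3
```

The cube here is almost entirely empty space, yet the method reports a full cubic metre, which is why extent volumes cannot be read as physical vegetation volumes.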
Vegetation Species
Creating the 3D model does not improve the ability to determine the vegetation type. The original imagery does enable the user to identify vegetation by species, and the vegetation can also be mapped from this data.
Comparison analysis for year-on-year showing differences in vegetation volume and location
The volumes are clearly shown, so differences can be estimated. The accuracy of the estimation will depend on the amount of growth and the time of year flown; it is critical that flight dates are consistent between surveys.
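Year-on-year comparison reduces to differencing the two epochs' surface models and suppressing small changes as noise (georeferencing drift, seasonal variation). A sketch with invented grids and an assumed 0.3 m change threshold:

```python
def dsm_difference(dsm_new, dsm_old, threshold=0.3):
    """Cell-by-cell difference between two survey epochs. Cells where the
    surface has risen by more than `threshold` metres are flagged as growth;
    smaller changes are suppressed as noise."""
    growth = []
    for r, (new_row, old_row) in enumerate(zip(dsm_new, dsm_old)):
        for c, (n, o) in enumerate(zip(new_row, old_row)):
            if n - o > threshold:
                growth.append((r, c, round(n - o, 2)))
    return growth

# Invented surface models for two epochs, heights in metres
dsm_2020 = [[10.0, 10.2], [10.1, 10.0]]
dsm_2021 = [[10.0, 11.1], [10.15, 10.0]]
print(dsm_difference(dsm_2021, dsm_2020))  # -> [(0, 1, 0.9)]
```

The choice of threshold embodies the point made above: it must be large enough to absorb between-flight inconsistency, which is why consistent flight dates matter.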
Alert the user to maintenance schedules before growth becomes critical to the assets
Not possible from the historic dataset
Provide the coordinates of the extents of the vegetation growth
The location of change can be identified using multiple datasets. Change is apparent; volume accuracy is not.
Latest Aerial Photography Technology
As part of this trial, the same route was photographed with the very latest aerial camera technology. The camera is a 150-megapixel Phase One; to show its potential, the pipeline was filmed in both directions with the lens angled at a steeper rake, in order to minimise occlusions.
The imagery generates four times as many pixels as the existing data in one direction alone. The resultant zoom levels, and therefore detail, mean that not only is vegetation identification far easier, but the amount of data available to generate 3D models is much more plentiful.
The imagery was run through the software and produced significantly more robust models than those from the existing data, to the point where even individual vehicle tracks can be identified and modelled. The vegetation can therefore be modelled and mapped with accuracy. Utilising the RTK data, accuracies of 200mm on the ground can be achieved without the need for ground control.
Building the point clouds with the higher resolution data required many attempts over numerous weeks. The process created an extreme load on the processors, as each image is approximately 200MB and there are thousands of such images. Undertaking the modelling for the entire network would create challenges for the software and hardware, resulting in inevitably slow turnaround times and/or very high compute costs, and we predict much re-processing. The nature of the builds is that they can fail at any given moment, right up to the point of completion. The reason for the failure must be identified and rectified, and then the process begins again. Although possible, the time taken to generate the end results would be considerable and would likely outweigh any cost benefits.
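One way to limit the impact of a build failing at any moment is to process the image sequence in independent chunks and retry only the chunk that failed, rather than restarting the whole build. This is a hypothetical sketch of that pattern, not the photogrammetry software's actual behaviour; `build_chunk` stands in for the modelling step:

```python
import time

def process_in_chunks(images, build_chunk, chunk_size=100, max_retries=3):
    """Process a long image sequence in independent chunks so that a failure
    loses only one chunk, not the whole multi-week build. `build_chunk` is a
    hypothetical stand-in for the photogrammetry step."""
    results = []
    for start in range(0, len(images), chunk_size):
        chunk = images[start:start + chunk_size]
        for attempt in range(1, max_retries + 1):
            try:
                results.append(build_chunk(chunk))
                break
            except RuntimeError:
                if attempt == max_retries:
                    raise  # give up on this chunk after repeated failures
                time.sleep(0)  # a real pipeline would back off here

    return results

# Simulate a build that fails once and then succeeds on retry
calls = {"n": 0}
def flaky_build(chunk):
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("model build failed")
    return len(chunk)  # stand-in for a completed chunk model

out = process_in_chunks(list(range(150)), flaky_build)
print(out)  # -> [100, 50]
```

Chunking trades some model continuity at chunk boundaries for predictable restart costs, which matters when individual builds run for weeks.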
Conclusion
The imagery collected both historically and with the latest equipment is data rich to the extent that identification of vegetation by type is perfectly possible over the pipeline. Each image is fully georeferenced (oblique imagery accuracy differs through the image). Using this imagery to map the present vegetation is therefore achievable; it is just a matter of how the data should be used to achieve it.
As seen, building the model from the raw data is impractical. We therefore suggest that there is a more practical, speedier and more efficient approach to this particular challenge. Left with the original georeferenced imagery in its photographic, rather than point cloud or model, state, the detail is crisp and clear. The ability to see vegetation and to identify it by type is evident and eminently possible. So how best to report on it?
The first consideration, given the amount of imagery captured, is an Artificial Intelligence solution. To pursue this, we would need to train a feature recognition algorithm to identify vegetation growth since the last survey. AI for feature extraction can be excellent at identifying defined and very specific items, and even change in those items. Yet even where the target feature is very specific, such as a valve in an AGI, thousands of images of training data are required for the system to learn the target from different angles, colours, scales, and so on. In our scenario, the vagaries of the desired output and the inconsistency of the data capture mean there are no hard and fast rules with which we could train the system to any degree of accuracy.
The training of the AI would undoubtedly be a bigger task than the data gathering in the first place. The irregular nature of the data capture is fourfold: the imagery can be collected from different directions, at different heights, under different lighting conditions and at different times of the year. In the imagery, features, especially bushes, can be vague and ill defined. Understanding what something is, as well as its scale, would be challenging for an AI given these varying parameters. Even in clear-cut cases where an AI is identifying very well-defined image features, it achieves only around 90% accuracy, meaning the results would need to be verified throughout by an operative. Again, the results in trials are good only given specific input data and decent resolution.
An alternative option is to compare the point clouds against each other; however, there would be a large discrepancy between the surveys, generating untold false positives based on accuracies, time of year, and general differences such as parked cars, open gates, etc. This too would result in the need to verify the results. If the aim of the exercise is to utilise the data already gathered and enjoy efficiencies from additional applications, then a simpler, more effective approach can be taken. In the process of generating the population data, the operatives pore over each image in order to harvest the required intelligence. This process is rooted in comparing the current imagery against the survey from four years before. For similar reasons to the above, this is best suited to individuals who can discern what they are looking at on screen. It would be very possible for these same individuals to map and report on encroaching and new vegetation during the process.
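The point-cloud comparison dismissed above would look something like the following: new-survey points with no old-survey neighbour within a tolerance are flagged as change. The tolerance needed to absorb registration error between epochs is exactly what lets parked cars, open gates and similar differences through as false positives. An illustrative brute-force sketch with invented points and an assumed 0.5 m tolerance, not production-scale code:

```python
import math

def changed_points(new_cloud, old_cloud, tolerance=0.5):
    """Points in the new survey with no old-survey point within `tolerance`
    metres are flagged as change. A generous tolerance absorbs some of the
    registration error between epochs, at the cost of missing small change.
    Brute-force O(n*m) search -- illustrative only."""
    def has_neighbour(p):
        return any(math.dist(p, q) <= tolerance for q in old_cloud)
    return [p for p in new_cloud if not has_neighbour(p)]

old = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
new = [(0.1, 0.0, 0.0), (5.0, 5.0, 2.0)]  # second point is genuinely new
print(changed_points(new, old))  # -> [(5.0, 5.0, 2.0)]
```

Every flagged point, genuine or not, would still need human verification, which is why the operative-led approach is preferred.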
Lessons Learnt
If aerial photography is captured at higher levels of detail, computer processing power needs to be increased to a similar degree; otherwise, analysis of the data will take a considerable amount of time and will be at risk of failure while models are being run.