In this post, I describe the methodology I used while performing the “2017” wind speed variable work on phase 2 of the BOEM aliquot grid project. This was the second iteration of the phase 2 wind speed work, and the process was significantly different from what was previously done on the “2020” data. Please note, though, that wherever the work shown here is sufficiently similar to what was done on the “2020” data, portions of that writing may be carried directly over to this post. No point in trying to reinvent the wheel!
PLEASE NOTE: As all data sources/end products used/generated in this workflow are deemed public, permission to document was requested and subsequently granted.
Goal, Data Input, and Deliverables
Processing Overview
Source Data Overview
With this work, the source datasets were cleaner and, most importantly, consistent across the entire extent. The offshore extent, however, was significantly smaller than that of the “2020” data. The biggest difference was that the wind speed data already came split into separate monthly and annual point feature classes.
As was the case for the “2020” and phase 1 work, the final wind speed data was ultimately joined to an aliquot grid polygon dataset with an extent that covers the entire U.S. Pacific Coast Exclusive Economic Zone (EEZ). However, because of the smaller extent of the source data, the summarized wind speed data only extends from just offshore to approximately 110 km out. As such, the remaining aliquots to the west contained null values for wind speed. BOEM was more interested in looking closer to shore anyway, so it worked out just fine.
Source Data Preprocessing
Since the new source data was much easier to work with, I was able to do much of the preprocessing in Python. While the bullet points in the slide below do a good job of describing the majority of the work that was done with Python, the projection work was simply performed with the Project tool. At the bottom of the slide, I noted that individual scripts were used for each of the processing tasks, as there was the usual process of discovery that comes with a workflow that has many steps. However, once I knew the methodology was solid, I created a single script that would run the entire process – including projection – from start to finish. That script can be found here.
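To give a feel for the projection piece, here is a minimal sketch using arcpy. The workspace path, feature class names, and target coordinate system are placeholders; the linked script is the version that was actually used.

```python
# Minimal sketch of the projection step with the Project tool (arcpy).
# Paths, feature class names, and the target coordinate system are
# placeholders -- see the linked script for the real workflow.
import arcpy

arcpy.env.workspace = r"C:\data\boem_2017\windspeed.gdb"  # hypothetical geodatabase

# Hypothetical target projection; the CRS actually used on the project may differ
target_sr = arcpy.SpatialReference(5070)  # NAD83 / Conus Albers

for fc in arcpy.ListFeatureClasses("windspeed_*"):
    out_fc = f"{fc}_prj"
    arcpy.management.Project(fc, out_fc, target_sr)
    print(f"Projected {fc} -> {out_fc}")
```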
Wind Speed Data To Aliquot Grid Join (intermediate)
Once the preprocessing was completed with the Python scripts, I was able to very cleanly join the wind speed data back to the aliquot grid polygon feature class that was supplied with the source wind speed point data. Note that a fair amount of redundant-field cleanup was needed because multiple point datasets had to be joined to the same polygon feature class.
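A rough sketch of that join-and-cleanup pattern is below, assuming arcpy; the aliquot ID field, dataset names, and the “_1” duplicate-field convention are all assumptions for illustration.

```python
# Sketch of joining the monthly/annual wind speed attributes to the aliquot
# grid polygons, then dropping redundant fields left behind by the joins.
# Field and dataset names are assumptions.
import arcpy

arcpy.env.workspace = r"C:\data\boem_2017\windspeed.gdb"
aliquot_fc = "aliquot_grid_2017"
point_fcs = [f"windspeed_{m:02d}_prj" for m in range(1, 13)] + ["windspeed_annual_prj"]

for fc in point_fcs:
    # Attribute join on the shared aliquot identifier (field name assumed)
    arcpy.management.JoinField(aliquot_fc, "ALIQUOT_ID", fc, "ALIQUOT_ID")

# Clean up redundant fields that piled up from the repeated joins
# (here, anything ending in "_1" -- the real cleanup list was project-specific)
redundant = [f.name for f in arcpy.ListFields(aliquot_fc) if f.name.endswith("_1")]
if redundant:
    arcpy.management.DeleteField(aliquot_fc, redundant)
```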
Calculating The Number Of Months Above 7m/s
As was the case on the “2020” version of the work, the next step was to derive, for each aliquot, the number of months where the average wind speed (24-hour and 5pm – 9pm) was above 7 meters/second. To accomplish this, I wrote a Python script that would run through each line in the table (i.e., each aliquot) and count the number of months that met the criteria. Note that the script stored on GitHub was actually used on the phase 1 work; the algorithm, however, was the same for both pieces (2020 and 2017) of the phase 2 work.
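The core of that algorithm boils down to something like the cursor-based sketch below. The monthly field names and the output count field are hypothetical; the GitHub script linked above is the real one.

```python
# Sketch of counting, per aliquot, the months with average wind speed > 7 m/s.
# Monthly field names and the count field are assumptions.
import arcpy

aliquot_fc = r"C:\data\boem_2017\windspeed.gdb\aliquot_grid_2017"
monthly_fields = [f"WS_24HR_{m:02d}" for m in range(1, 13)]  # assumed naming
count_field = "MONTHS_ABOVE_7MS"

with arcpy.da.UpdateCursor(aliquot_fc, monthly_fields + [count_field]) as cursor:
    for row in cursor:
        speeds = row[:-1]
        # Count months above the 7 m/s threshold, skipping null values
        row[-1] = sum(1 for s in speeds if s is not None and s > 7.0)
        cursor.updateRow(row)
```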
Wind Speed Data To Aliquot Grid Join (final)
The final step for the summarized wind speed data was to join the data to the larger EEZ-extent aliquot grid feature class. Although the wind speed data did not go all the way out to the western boundary of the EEZ, other variables in the project did, so this was a way to keep things consistent. You may notice that a spatial join was used in an earlier step of this portion of the work, along with some work involving centroids. The reason for this was that there was an issue in the “2017” datasets where close to 6,000 aliquot IDs contained the incorrect UTM zone identifier. All aliquots were supposed to have either “10” or “11” as the UTM identifier, but a large number of those that should have had “10” actually had “09” instead. This made a clean tabular join unviable. Instead, centroids were created from the features containing the “2017” data, which were then used in a subsequent spatial join to the larger EEZ-extent aliquot grid dataset. The screenshot below shows a very small corner of the data where this error occurred. The labels in each polygon are the aliquot identifiers for the larger EEZ-extent layer (top) and the smaller “2017” layer (bottom). Note the differing zone identifiers (NI09 vs. NI10).
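In rough terms, the centroid/spatial-join workaround looks something like the sketch below (arcpy, with placeholder dataset names).

```python
# Sketch of the centroid + spatial join workaround for the bad UTM zone
# identifiers. Dataset names are placeholders.
import arcpy

gdb = r"C:\data\boem_2017\windspeed.gdb"
data_2017_fc = f"{gdb}\\aliquot_grid_2017"   # polygons carrying the summarized wind speed data
eez_grid_fc = f"{gdb}\\aliquot_grid_eez"     # larger EEZ-extent aliquot grid
centroids_fc = f"{gdb}\\aliquot_2017_centroids"
joined_fc = f"{gdb}\\aliquot_grid_eez_windspeed"

# Centroids forced to fall inside their source polygons
arcpy.management.FeatureToPoint(data_2017_fc, centroids_fc, "INSIDE")

# Transfer the wind speed attributes spatially, sidestepping the unreliable
# aliquot ID join
arcpy.analysis.SpatialJoin(eez_grid_fc, centroids_fc, joined_fc,
                           join_operation="JOIN_ONE_TO_ONE",
                           match_option="CONTAINS")
```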
Interpolated Wind Speed Raster Creation
Similar to the “2020” work, after finishing the summarized data processing, it was time to create the IDW interpolated rasters. This time around, only 13 rasters were created in total for the 24-hour period (12 monthly and 1 annual; no 5pm – 9pm rasters were created). Although the extent was already established with the point data, a “2017” perimeter layer was used as a mask for good measure.
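Something along these lines, assuming arcpy with the Spatial Analyst extension; the point feature class, field names, and mask layer name are placeholders.

```python
# Sketch of the IDW raster creation with a mask applied. Dataset and field
# names are assumptions.
import arcpy
from arcpy.sa import Idw

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\boem_2017\windspeed.gdb"
arcpy.env.mask = "perimeter_2017"  # the "2017" perimeter layer

# 12 monthly fields plus 1 annual field = 13 rasters for the 24-hour period
fields = [f"WS_24HR_{m:02d}" for m in range(1, 13)] + ["WS_24HR_ANNUAL"]

for field in fields:
    idw_raster = Idw("windspeed_points_2017", field)
    idw_raster.save(f"idw_{field.lower()}")
```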
Wind Speed Raster Reclassification
The next step was to reclassify the rasters from their stretched values into discrete classes, which would aid in the proper classification of the vector versions of these layers to be created after this step. Although still a bit of a puzzle, as some time had passed, this wasn’t quite as tricky as it was the first time around!
The following is carried straight over from the “2020” write-up (with data value modifications where applicable).
The values used for the “New” classes (“New values” in the screenshot) were chosen because the tool only accepts integer values in this field. Since the desired classifications needed to be floating-point values, based on the mid-point between the lower and upper limits of the classes, the fields were simply recalculated to floats in the tables for the polygon features that were derived from these reclassified rasters. In the case of the red-boxed example below, the range is 5 to 5.499999 (ostensibly 5.5), so I simply added 5 and 5.5, then multiplied by 10 to remove the decimal point. The class is thus assigned the value 105, which can later be divided by 20, in the vector feature class, to yield the desired class value of 5.25. Note that end values of .x99999 were chosen because the source wind speed data values are expressed with this precision. Choosing “End” values at round 0.5 increments would have resulted in some source wind speed point values being assigned to the wrong class. The differences could be negligible, but in the interest of maintaining the integrity of the data, the higher precision values were chosen for the upper limit.
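For illustration, a sketch of that (lower + upper) × 10 integer encoding, built as a remap for a reclassification in arcpy Spatial Analyst, is shown below. The input/output raster names are placeholders, and this is just the encoding scheme, not the exact values used on the project.

```python
# Sketch of building the (lower + upper) * 10 integer encoding for the
# reclassification step. Raster names are placeholders.
import arcpy
from arcpy.sa import Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")

# 0.5 m/s classes from 0 to 20 m/s; each class is assigned the integer
# (lower + upper) * 10, e.g. 5 to 5.499999 -> 105 (later divided by 20 -> 5.25)
remap_ranges = []
lower = 0.0
while lower < 20.0:
    upper = lower + 0.5
    remap_ranges.append([lower, upper - 0.000001, int(round((lower + upper) * 10))])
    lower = upper

reclassified = Reclassify("idw_ws_24hr_01", "VALUE", RemapRange(remap_ranges))
reclassified.save("idw_ws_24hr_01_reclass")
```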
Also carried over for the sake of completeness…
It should be noted that, whenever rasters are symbolized with unique values, the full width of the color ramp will be used regardless of the lower and upper extremes of the values in the dataset. This posed a problem since the majority of the wind speed datasets had differing lower and upper extremes. What this meant was that the same shade of any given color could be used for different class values, depending on what values were in the dataset. In other words, the same shade of dark green would be used regardless of what the lowest class value was in the dataset. This was not acceptable, as we wanted consistency in symbology when comparing wind speed for different months. My solution was to create an 8-bit (256 value) proxy raster, in Adobe Photoshop, that could be used to derive a master colormap. This colormap could then be applied to all of the rasters, resulting in each class getting the same color regardless of its lowest and highest values. I will document this process in the future. In the meantime, feel free to check out the nifty little Python script I wrote to help divvy up the 256 8-bit values into 40 classes (which correspond to the 40 classes that would be found in a hypothetical wind speed dataset with a range of 0 to 20 meters/second).
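As a simplified illustration of the basic idea (not the linked script itself), dividing the 256 8-bit values across 40 classes could look something like this:

```python
# Simplified illustration of divvying up the 256 8-bit values into 40 classes
# (0.5 m/s classes over a hypothetical 0-20 m/s range). This is not the
# script that was actually used, just the general idea.
NUM_VALUES = 256
NUM_CLASSES = 40

base, remainder = divmod(NUM_VALUES, NUM_CLASSES)  # 6 per class, 16 left over

start = 0
for cls in range(NUM_CLASSES):
    size = base + (1 if cls < remainder else 0)  # spread the leftover values evenly
    end = start + size - 1
    lower_ms = cls * 0.5
    print(f"Class {cls + 1:2d} ({lower_ms:4.1f}-{lower_ms + 0.5:4.1f} m/s): "
          f"8-bit values {start:3d}-{end:3d}")
    start = end + 1
```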
Wind Speed Polygon Creation
After finishing the raster reclassification, the rasters were converted to vector feature classes using the Raster to Polygon tool. A field was added to contain the final desired average wind speed class value (as discussed in the previous section). Using the field calculator, the new field was calculated by taking each feature’s gridcode value (the raster value newly assigned in the previous step) and dividing it by 20. This essentially unwinds the little math hoops I had to jump through in the previous step. In the red-boxed example (same one as above), the new value is calculated as 5.25, which gives this particular class a lower limit of 5.0 meters/second and an upper limit of 5.5 meters/second.
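Here is roughly what that conversion and field calculation look like in arcpy; raster, output, and field names are placeholders.

```python
# Sketch of the raster-to-polygon conversion plus the gridcode / 20
# calculation. Dataset and field names are placeholders.
import arcpy

arcpy.env.workspace = r"C:\data\boem_2017\windspeed.gdb"

in_raster = "idw_ws_24hr_01_reclass"
out_fc = "ws_24hr_01_poly"

arcpy.conversion.RasterToPolygon(in_raster, out_fc, "NO_SIMPLIFY", "VALUE")

# Unwind the integer encoding: e.g., gridcode 105 / 20 -> a 5.25 m/s class value
arcpy.management.AddField(out_fc, "WS_CLASS", "DOUBLE")
arcpy.management.CalculateField(out_fc, "WS_CLASS", "!gridcode! / 20", "PYTHON3")
```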
Thanks so much for taking the time to read! I initially thought this would be a simple replay of the 2020 work but as evidenced by this writing, there were plenty of new challenges to conquer. Again, lots of fun! Shout-outs again to Joel Osuna-Williams (CGST project manager) and Frank Pendleton (BOEM GIS analyst) for having me on the project.