Chapter 6 Spatial Models and GIS Integration

6.1 Gridded and tabular data retrieval

15 pts

6.1.1 Learning Module 9

6.1.1.1 Objectives

In this module you will:

- Retrieve flow data using the dataRetrieval package
- Learn to retrieve and manage digital elevation models
- Retrieve and import remotely sensed data for the watershed extent

6.1.1.2 Background

This final section moves toward spatially explicit modeling, integrating everything we have learned so far with geospatial data.

A refresher in GIS concepts: This module assumes familiarity with basic Geographic Information System (GIS) concepts such as coordinate systems (latitude and longitude, UTM). Some terms to recall:

Gridded data refers to spatial data in raster format. If you have performed any GIS work, you have likely used gridded data. Rasters organize geographical areas into a grid where each regularly spaced cell represents a specific location and contains some attribute representing that location. For example, digital elevation models (DEMs) are a form of gridded data. Imagine that we are working in a watershed and we drop a grid, or square-celled net, over the entire catchment. We can then model the elevation of that catchment by assigning each cell in the net the mean elevation of the area the cell represents. Similarly, we can assign any value we wish to model to the cells. If we were to drop another net and assign a maximum tree height to every cell, we would be adding another ‘layer’ to our model. The spatial resolution of gridded data refers to the length and width of the ground area that each cell represents. If we want to observe gridded data over time, we can generate a raster stack, where each raster layer represents a point in time. With a raster stack, we add a temporal dimension to our gridded data.

Vector data describes spatial features using points, lines, and polygons. These spatial features are represented as geometric objects with associated attributes: for example, lines represent linear features such as rivers, and points represent single locations. In this module we will work with a .shp file that represents the boundary of a watershed.

Tabular data refers to data organized in a row-column format. In this class, each of our .csv datasets is a form of tabular data.

6.1.1.3 A refresher in remote sensing:

Remote sensing is the science of obtaining information about objects or areas from a distance, typically from aircraft or satellites. It involves the collection and interpretation of data gathered by sensors that detect electromagnetic radiation reflected or emitted from the Earth’s surface and atmosphere. As ecologists, many of us utilize remote sensing to observe and analyze the Earth’s surface and its dynamics, providing valuable insights into environmental processes, land use and land cover changes, natural disasters, and more. Some helpful terms to recall:

spatial resolution: Remotely sensed data can vary in its spatial resolution, which refers to the size of the smallest unit (pixel), or the level of visual detail.

temporal resolution: When used to describe remotely sensed data, this usually refers to how frequently a remote sensing system revisits the same area on the Earth’s surface. Systems with high temporal resolution revisit the same location frequently, allowing for monitoring of rapid changes in phenomena, whereas lower temporal resolution may limit a system’s ability to capture short-term events.

We often find that data collection requires a balance of spatial and temporal resolution to select the most appropriate data for the coverage and detail that ecological work requires.

6.1.1.4 Ethical conduct

When utilizing publicly available datasets, it is important for us as users to:

- Be aware of any copyright or licensing agreements. There may be restrictions or usage terms imposed by data providers. Read the provided agreements or terms to be sure we comply with them when using datasets in our research. This may also include requirements for registration or subscription established by providers or agencies like USGS or NASA.

- Cite and acknowledge the contributions of data providers and researchers whose datasets we utilize. This also facilitates transparency and reproducibility in our research.

- Assess the quality and accuracy of geospatial data before using it for decision-making purposes. It is important that we consider spatial resolution, temporal resolution, and metadata documentation to critically evaluate the reliability of the datasets we use.

- Promote Open Data Principles: Try to explore and use publicly available datasets that support transparency and collaboration within our scientific and academic communities.

6.1.3 Labwork (20 pts)

In this module we will import streamflow (tabular) data using USGS’s dataRetrieval package, then visualize vector data to delineate our area of interest and learn how to find spatial gridded data for the area.

6.1.3.1 Setup:

When importing packages and setting parameters, think about how the local working environment can vary among users and computers. A function that checks for and installs packages before loading them makes the script easier for others to run.
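For example, here is a minimal sketch of such a loader (the function name and the package list are just illustrations for this module, not part of the assignment's code):

```r
# Minimal package loader: install anything missing, then attach everything.
load_packages <- function(pkgs) {
  missing <- pkgs[!pkgs %in% rownames(installed.packages())]
  if (length(missing) > 0) install.packages(missing)
  invisible(lapply(pkgs, library, character.only = TRUE))
}

load_packages(c("dataRetrieval", "sf", "terra", "tidyverse"))
```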

6.1.3.2 Selecting a watershed:

Here, we are using USGS’s dataRetrieval package to search and filter hydrologic datasets. There are many ways data can be searched; more can be found in this dataRetrieval vignette. In this example, we will fetch USGS site numbers for Colorado watersheds with streamflow (discharge) data for a specified time period.
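As a hedged sketch of this kind of query (the parameter and data-type codes below are assumptions for daily discharge; see the vignette for the full set of NWIS filters):

```r
library(dataRetrieval)

# Find sites in Colorado with daily discharge records.
# "00060" is the NWIS parameter code for discharge (cfs);
# hasDataTypeCd = "dv" asks for sites with daily values.
co_sites <- whatNWISsites(
  stateCd       = "CO",
  parameterCd   = "00060",
  hasDataTypeCd = "dv"
)
head(co_sites)
```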

Q1 (2 pts). If you were creating this script for your work team’s shared collaboration, or for yourself to use again a year from now, what might be the disadvantages of hard-coding search parameters directly into the function calls?

ANSWER:

Q2 (3 pts). What is a potential use for whatNWISsites?

ANSWER:

EXTRA (1 pt): Why might the above script return an error if dplyr:: is not placed before select()?

ANSWER:

drain_area_va is a column returned by readNWISsite. Other variables can be found here
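A hedged example of pulling that site metadata (the site number below is a placeholder):

```r
# Retrieve metadata for one gauge; "01234567" is a placeholder site number.
site_info <- readNWISsite(siteNumbers = "01234567")
site_info$drain_area_va   # drainage area, in square miles
```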

Q3 (1 pt). In a sentence, what does this dataframe (all_site_data) tell you?

ANSWER:

Note that while your research interests may not require streamflow data, dataRetrieval can serve as a valuable resource for a wide array of data types.

For the remainder of this assignment, we’ll focus on a single watershed in central Colorado near the Experimental Forest that we explored in the Snowmelt module. There are several dataRetrieval functions we could utilize to see what data are available for a particular site number. Check the dataRetrieval docs to learn package functions or tools.
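For instance, a hedged sketch of one such function (the site number is a placeholder):

```r
# Summarize all data available for a single site.
DataAvailable <- whatNWISdata(siteNumber = "01234567")  # placeholder site
head(DataAvailable)
```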

Q4 (4 pts). Find the longest-running dataset in DataAvailable (i.e., either by count or by start and end date). Use https://help.waterdata.usgs.gov to look up the meaning of some of the column headers and the codes associated with them.
a. What kind of data does this long-running dataset hold?
b. What column did you use to learn that?
c. What does ‘dv’ mean in data_type_cd?

ANSWER:

6.1.3.3 Flow data retrieval:

Let’s rename our columns. For this you will need to determine the units of the data.
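As a hedged sketch of the retrieval and one renaming option (the site number and date range are placeholders):

```r
# Daily streamflow values; the site number and dates are placeholders.
flow <- readNWISdv(
  siteNumbers = "01234567",
  parameterCd = "00060",        # discharge, cubic feet per second
  startDate   = "2015-10-01",
  endDate     = "2020-09-30"
)

# One option: renameNWISColumns() converts coded names such as
# X_00060_00003 into readable ones such as Flow.
flow <- renameNWISColumns(flow)
```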

Q5 (2 pts). Based on the hydrograph plot, what type of precipitation events appear to influence the observed intra-annual variations in flow rates?

ANSWER:

6.1.3.4 Watershed boundary vector data:

A boundary shapefile has been provided to save time and improve repeatability. Watershed boundaries for gauging stations can be found at USGS StreamStats by zooming in on the map and clicking ‘Delineate’ near your desired gage or point. Once your watershed is delineated, you can click ‘Download Basin’ and select from a choice of file types. This can be a great tool to delineate river basins if all you require is a shapefile. For smaller streams, other analyses are available. We will cover those in the next module.

Let’s import a pre-created shapefile:
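A minimal sketch of the import, assuming the shapefile sits in a local data folder (the path is a placeholder):

```r
library(sf)

# The path is a placeholder for wherever you saved the provided shapefile.
watershed <- st_read("data/watershed_boundary.shp")
```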

Let’s transform this to a projected CRS (flat surface) rather than a geographic (spherical surface) CRS. A projected CRS depicts the data on a flat surface, which is required for distance or area measurements. A more detailed explanation can be found at the University of Colorado’s Earth Lab if needed.
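For example (EPSG:26913, NAD83 / UTM Zone 13N, is an assumed choice suitable for central Colorado):

```r
# Transform from a geographic to a projected CRS.
# EPSG:26913 (NAD83 / UTM Zone 13N) is an assumption for central Colorado.
watershed_utm <- st_transform(watershed, crs = 26913)
st_crs(watershed_utm)   # confirm the new CRS
```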

6.1.3.5 Visualization - watershed boundary

i.e., sanity check
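A quick sketch of such a check, using the object from the sketches above:

```r
# Plot just the geometry to confirm the boundary looks sensible.
plot(st_geometry(watershed_utm), border = "blue",
     main = "Watershed boundary")
```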

6.1.3.6 Digital Elevation Models (DEM)

A DEM has been provided for consistency among users.

However, if you are unfamiliar with the process of retrieving a DEM here are three methods that our team commonly uses:

  1. USDA’s Natural Resources Conservation Service (NRCS) website. Here you can order (free) DEMs by state, county, or bounding box. It takes just minutes for them to be emailed to you.

  2. If you have a USGS account (fast and free) you can also find DEM data at https://earthexplorer.usgs.gov. See this video if interested. This is the method we used to provide this DEM and we will use this resource again to retrieve Landsat data.

  3. GIS or hydrological software like QGIS and SAGA offer tools for downloading DEMs directly into your project for analysis. This can be optimal when working over large geographic extents to streamline the data acquisition process. Both QGIS and SAGA are free and easy to learn if you have experience with ESRI products.

Let’s check out our DEM:
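A hedged sketch of the import and a quick overlay (the DEM path is a placeholder; watershed_utm is from the sketches above):

```r
library(terra)

dem <- rast("data/dem.tif")   # placeholder path for the provided DEM
plot(dem)

# Overlay the boundary; this only lines up if both layers share a CRS.
plot(st_geometry(watershed_utm), add = TRUE)
```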

If the watershed and raster don’t overlap as you expect, be sure the projection and extent are the same for all data files plotted.

6.1.3.7 Other remotely sensed data:

Now that we have site and spatial data, let’s cover a couple of techniques or resources you can use to access satellite data for your site. If you only need a vegetation map layer from a specific point in time, it is not too difficult to download actual Landsat imagery from usgs.gov. In the case of Cabin Creek, let’s look at a single summer image from 2020.

Navigate to https://ers.cr.usgs.gov/ and sign in, creating an account first if you don’t already have one. Once signed in, go to https://earthexplorer.usgs.gov; you should see a surface map with tabs on the left. Start with the ‘Search Criteria’ tab. For this assignment the shapefile we have is too large to upload, so it is easiest to scroll down and choose ‘polygon’ or ‘circle’ to define your area. Under the circle tab you can use Center Latitude 39.9861, Center Longitude -105.719, and a radius of about 3.5 kilometers to capture our study area.

Let’s say we want an NDVI map layer from the summer of 2020. In ‘date range’, select dates for the summer of 2020. In this high-elevation area of Colorado, the greenest times of the year without snow cover are likely June and July, with senescence beginning sometime in late July. First search from June 1 to July 15, 2020. Then click on ‘Data Sets’ either at the bottom or top of the left-hand side menus. From the data sets we will select Landsat Collection 2 Level 2, Landsat 8-9 OLI/TIRS C2 L2.

We can skip ‘Additional Criteria’ for now and go to ‘Results’. You should receive a set of about 11 images. For each image you can check out the metadata; that is the ‘pen and notepad’ icon in the middle of the image-specific menu. Here you can look at data like cloud cover. For these images, the values you see are a cloud coverage percent. Cloud coverage measurements and the codes to filter for certain cloud cover thresholds may change across missions. If you are retrieving Landsat data for a time series using code, it will be important to explore the different quality control parameters and what they represent. You don’t need to download anything for this assignment; we’ve provided it for you. However, to gain familiarity with this process, click on the ‘download’ icon for July 7, 2020 and click ‘select files’ on the Reflectance Bands option.

Q6 (1 pt): What 2 bands should we download to calculate NDVI?

Answer:

Q7 (3 pts): Find 2 other indices that can be used as indicators of vegetative health. What might each of these indices indicate? What are the bands used for each index?

ANSWER:

Check out other EarthExplorer resources while you are at the site. You will likely find resources here for your MS projects, even if it is just for making maps of your study area for presentations or papers.

6.1.3.7.1 Landsat

If you want to learn about Landsat missions or utilize Landsat data, USGS offers all of the reference information you need. The sensors on these missions can be a fantastic resource for characterizing land or water surfaces over large temporal scales. Landsat data provides 30 m spatial resolution, so it can capture spatial heterogeneity even for smaller study areas. If calculating vegetation indices over different time periods, it is important to know which mission you are retrieving data from and check the band assignments for that mission. For example, on Landsat 7’s instrument, Band 4 is near-infrared, whereas Band 4 on Landsat 9’s instrument is red.

6.1.3.7.2 Moderate Resolution Imaging Spectroradiometer (MODIS):

MODIS is a sensor aboard NASA’s Terra and Aqua satellites, which orbit Earth and collect data on a daily basis. MODIS data provides moderate spatial resolution (250 m to 1 km) and high temporal resolution. Therefore, MODIS data can be an optimal way to monitor or characterize rapid changes to Earth’s surface, like land cover changes resulting from wildfire. Data can be accessed and downloaded directly through NASA’s Earthdata tool.

Let’s import our raster data.
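A hedged sketch of the import (the folder path and file pattern are placeholders):

```r
# Import the provided surface-reflectance GeoTIFFs as a multi-layer raster.
tifs  <- list.files("data/landsat", pattern = "\\.TIF$", full.names = TRUE)
bands <- rast(tifs)
bands   # prints layer names, resolution, extent, and CRS
```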

Q8 (2 pts) What does ‘SR’ in the .tif names tell us about the data?

ANSWER:

Q9 (2 pts) What is the difference between ‘mask’ and ‘crop’ when filtering large files to fit the area of interest?

ANSWER:

6.1.3.8 Earth surface changes over time

If considering changes over time, comparing individual rasters from different time periods directly may not always provide meaningful insights because the absolute values of vegetation indices can vary due to factors such as differences in solar angle, atmospheric conditions, and land cover changes. Instead, it’s often more informative to analyze trends in vegetation over time.

6.1.3.8.1 LandTrendr

One fun way to do this is with Oregon State University’s LandTrendr. With a Google account and access to Earth Engine, you can easily interact with the UI Applications in section 8. You can jump to the pixel time series here. This will allow you to view changes in vegetation indices over a specified time period within a single Landsat pixel.

6.1.3.8.2 Google Earth Engine

Another effective way to access and evaluate large spatial datasets is through Google Earth Engine (GEE). GEE is a cloud-based platform developed by Google for large-scale environmental data analysis. It provides a massive archive of satellite imagery and other geospatial datasets, as well as computational tools to view and analyze them. GEE uses Google’s cloud to process the data, so you don’t need powerful local computing resources to run a potentially computationally expensive spatial analysis.

6.1.3.8.2.1 GEE Access

If spatial data is something you may need for your MS project, we recommend exploring this resource.

Sign Up: Go to the Google Earth Engine website (https://earthengine.google.com/) and sign in with your Google account. If you don’t have a Google account, you can sign up for access by filling out a request form.

Request Access: If you’re signing up for the first time, you’ll need to request access to Google Earth Engine. Provide information about your affiliation, research interests, and how you plan to use Earth Engine (learn and access data!).

Approval: After submitting your request, you’ll need to wait for approval from the Google Earth Engine team. Approval times may vary, but you should receive an email notification once your request has been approved.

Access Earth Engine: Once approved, you can access Google Earth Engine through the Code Editor interface in your web browser at code.earthengine.google.com to start exploring the available datasets, running analyses, and visualizing your results.

We have provided a basic script for retrieving and viewing NDVI data. This is a code snapshot, so you can manually copy the code from the snapshot and paste it into your own Google Earth Engine Code Editor. You are welcome to save this to your own GEE files and try changing the area of interest, vegetation indices, or time periods, or exploring other data.

Colorado State University also has a great tutorial here that will start you from scratch.

Note that JavaScript is the default language for the Code Editor interface in your web browser. If you are familiar with Python, GEE provides a Python API that allows you to use Python syntax. If neither of these languages is familiar, don’t give up yet! You can still use this tool with your R coding background and trial and error. We recommend finding other scripts that suit your needs (there are a lot of shared resources online, as well as in our data collection assignment) and adapting them to your area of interest. Once you understand coding concepts, like those in R, other languages can be picked up easily. You will find that you are learning JavaScript before you realize it.

6.2 DEM processing

6.2.1 Learning Module 10

6.2.1.1 Objective:

Users will explore basic hydrological tools through the process of DEM preprocessing and defining stream networks for a watershed in the Fraser Experimental Forest in Colorado using Whitebox Tools for R. This exercise will also demonstrate methods for writing functions and efficiently handling multiple files simultaneously, including importing, processing, and exporting data within the context of Whitebox Tools for R.

Hydrological analysis preprocessing involves the use of digital elevation model (DEM) raster data to establish a watershed model and a simulation of surface hydrological processes. These steps enable us to quantify key parameters such as flow accumulation, stream network characteristics, and hydrological connectivity, which are essential for studying the dynamics of water movement within a landscape. Overall, preprocessing is the set of foundational steps in hydrological modeling and analysis.

Whitebox Tools is an advanced geospatial data analysis platform that can be used to perform common geographic information systems (GIS) analysis operations. The platform was developed by the Geomorphometry and Hydrogeomatics Research Group at the University of Guelph, so it is focused on hydrological analysis. With just a DEM, it allows us to produce a multitude of outputs that can be used for further analysis (Lindsay, 2016; doi:10.1016/j.cageo.2016.07.003). While we are demonstrating its use in R, these tools are also available in QGIS and Python.

6.2.3 Labwork (20 pts)

6.2.3.1 Installing libraries

We are going to try installing the whitebox R package from CRAN as it should be the simplest method.
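That looks like this (the second call downloads the WhiteboxTools binary that the R package wraps):

```r
install.packages("whitebox")
whitebox::install_whitebox()   # fetch the WhiteboxTools executable
library(whitebox)
```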

However, if this does not work for you, you can install the development version from GitHub by putting this inside a code chunk:

if (!require("remotes")) install.packages("remotes")
remotes::install_github("opengeos/whiteboxR", build = FALSE)

More information on installation can be found at: https://cran.r-project.org/web/packages/whitebox/readme/README.html

Helpful whitebox documentation can be found at https://jblindsay.github.io/wbt_book/preface.html. Essentially, we will pass input raster file paths and output file paths as arguments to the various whitebox functions. The script is designed to perform functions on all rasters in a given folder at once.

When writing scripts, developers typically follow a standard workflow:
1. Import required libraries
2. Generate functions useful throughout the script
3. Establish working directories or paths to other directories if needed
4. Import data
5. Data Cleaning and Preprocessing - this may involve handling missing values, removing outliers, converting to preferred units, etc.
6. Exploratory Data Analysis - it is beneficial to explore data visually to help understand the characteristics of the data.
7. Apply functions, or models or other analytical techniques
8. Evaluate results - If modeling, this may involve comparing model predictions with observed data or conducting sensitivity analysis
9. Visualize and report - plots, maps and tables can be effective ways to communicate findings.

While you might find slight variations among collaborators, following this general workflow ensures that our scripts are structured in a way that facilitates easy sharing and reproducibility of results.

6.2.3.1.1 Generate functions

Since we have imported the required libraries, let’s generate some functions.

Q1. (2 pts) What does date: "04 March, 2025" in the header do?

ANSWER:

Q2. (3 pts) What does ‘recursive = TRUE’ do? What would ‘recursive = FALSE’ return?

ANSWER:

EXTRA (1 pt): Rewrite this function to generate ‘splitbase’ with fewer lines. For example, what happens when you replace ‘basename’ in splitbase <- strsplit(basename, '_')[[1]][1] with ‘splitpath[[1]][2]’?

Q3. (3 pts) What is the point of writing a function? Why do you think it is advantageous to write functions early in your script?

ANSWER:

6.2.3.1.2 Establish new directories (folders)

Next, we will create new folders in our working directory to store the outputs of the Whitebox functions.
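A hedged sketch of this step (the folder names are assumptions used throughout the sketches below):

```r
# Create an output folder for each processing step if it doesn't exist yet.
out_dirs <- c("resampled", "filled", "pointer", "accum", "streams")
for (d in out_dirs) dir.create(d, showWarnings = FALSE)
```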

Check your working directory; you should see the new folders there.

6.2.3.2 Resample DEMs

Here we will start with LiDAR data with 0.5 m resolution. While this resolution has useful applications, a high-resolution DEM can make hydrological models very computationally expensive with little or no improvement to the output. If you have the option of high-resolution data in your work, you can test model outputs at different resolutions to determine the most efficient and effective resolution for your work. Here, we will resample our LiDAR data to a 10 m resolution.
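For a single file, the step looks roughly like this (paths are placeholders; the assignment's chunk loops this over every raster in the folder, and whitebox functions take file paths rather than in-memory rasters):

```r
# Resample one DEM to a 10 m grid; input and output paths are placeholders.
wbt_resample(
  input     = "dems/site1_dem.tif",
  output    = "resampled/site1_dem.tif",
  cell_size = 10
)
```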

Q4. (3 pts) Did we use the function extractsitenames in the above chunk? How? What did it do?

ANSWER:

Let’s quickly check our work by importing a resampled DEM and checking the resolution.
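A quick sketch of that check (the path is a placeholder):

```r
# Read one resampled DEM back in and inspect its cell size.
chk <- terra::rast("resampled/site1_dem.tif")
terra::res(chk)   # x and y cell dimensions in map units
```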

Note: This can also be done without importing the raster into the workspace by installing the gdalUtils library.

Q5. (2 pts) What is the resolution of the resampled DEM? Where and how could we change the resolution to 30 m if desired?

ANSWER:

6.2.3.3 Filling and breaching

When performing hydrological analysis on a DEM, the DEM usually needs to be pre-processed by ‘filling’ or ‘breaching’ any depressions or sinks to create a hydraulically connected and filled DEM. There are several depression or pit filling options available in whitebox. Breaching depressions can be a better option than just pit filling according to the whitebox documentation; however, some users argue that breaching can smooth too much, resulting in an altered watershed delineation. It is prudent to investigate different DEM pre-processing methods and the DEMs they produce. You can fill depressions directly, breach depressions and then fill them, or apply single-cell pit filling or breaching before breaching/filling larger depressions, and use the approach that generates the most reasonable watershed delineation results. Here we are going to make extra sure our flowpaths are uninhibited by first filling in single-cell pits, and then breaching any larger depressions.
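A hedged sketch of that two-step sequence for one file (paths are placeholders):

```r
# Step 1: fill single-cell pits.
wbt_fill_single_cell_pits(
  dem    = "resampled/site1_dem.tif",
  output = "filled/site1_pitfilled.tif"
)

# Step 2: breach any remaining, larger depressions.
wbt_breach_depressions(
  dem    = "filled/site1_pitfilled.tif",
  output = "filled/site1_breached.tif"
)
```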

Q6 (2 pts) What is breach1? How is lapply using breach1?

ANSWER:

6.2.3.4 Flow direction and accumulation rasters
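The core calls for this step look roughly like this (paths are placeholders):

```r
# Flow direction: each cell points to its steepest downslope neighbor (D8).
wbt_d8_pointer(
  dem    = "filled/site1_breached.tif",
  output = "pointer/site1_d8.tif"
)

# Flow accumulation: how much of the upslope grid drains through each cell.
wbt_d8_flow_accumulation(
  input  = "filled/site1_breached.tif",
  output = "accum/site1_facc.tif"
)
```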

Q7 (2 pts) Check out the WhiteboxTools User Manual. What does a d8pointer raster tell us and what might we use it for?

ANSWER:

Let’s visualize some of our work so far:

Q8. (3 pts) What are the units for the values that you see in each of these legends? It may be helpful to check out the Manual again.

ANSWER:

6.2.3.5 Streams

Define streams using the flow accumulation raster. The wbt_extract_streams function will return a raster in which only stream cells are indicated.
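A hedged sketch (the accumulation threshold is an assumption you should tune per watershed):

```r
# Cells whose flow accumulation exceeds the threshold become stream cells.
wbt_extract_streams(
  flow_accum = "accum/site1_facc.tif",
  output     = "streams/site1_streams.tif",
  threshold  = 1000   # an assumed value; tune to your watershed
)
```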

Sometimes we would prefer to see the stream within the watershed boundary. Here we use the extent of the flow accumulation raster to generate a raster with ‘0’ indicating watershed cells, with stream cells taken from the streams.tif laid on top. This is a demonstration for one watershed.
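One hedged way to build that overlay with terra (paths are placeholders; this assumes the streams raster has NoData outside stream cells):

```r
# Background of zeros wherever the accumulation raster has data,
# with stream cells placed on top.
facc    <- terra::rast("accum/site1_facc.tif")
streams <- terra::rast("streams/site1_streams.tif")
base    <- facc * 0                     # 0 for every watershed cell
overlay <- terra::cover(streams, base)  # streams keep their value; rest is 0
terra::plot(overlay)
```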

6.2.3.6 Final thoughts:

There are many more hydrological preprocessing and analysis tools available through Whitebox for R. If you are interested in watershed delineation in R, there is a tutorial here that is fairly easy to follow. However, if you find that you use these tools frequently and do not use R much in other work, you may also consider these options for hydrological analysis:

1. SAGA tools offer a versatile suite of geospatial processing capabilities accessible through both QGIS and ArcGIS plugins as well as their standalone GUI. Often I find the GUI easiest for preprocessing; then I import SAGA’s output rasters into QGIS for formatting or map making, or into model scripts. SAGA has a robust online support community, so it can be a valuable resource for hydrological work.

2. Similarly, Whitebox GAT tools can be used as a plugin to QGIS and ArcGIS, providing Whitebox functionality directly within a GIS environment.

When using these tools, the order of operations is similar to our work above: fill the DEM, generate flow direction and flow accumulation rasters, identify channels, delineate watersheds, then move forward according to the specifics of your project. Ultimately, the choice of workflow is yours, but I suggest documenting your process as you proceed, including links or file paths to projects and scripts within a written or diagrammed workflow (or workflow software). It’s also important to carefully consider the organization and storage of your projects and files. For instance, files generated by a GIS project should be readily accessible to any associated scripts. Returning to a preprocessing step can be (sometimes painfully) challenging if there’s no clear way to trace back your workflow and regenerate a crucial layer.

Take another look at the model selection process from Beven (above), and think about how you can make your workflow easier to recall and repeat. A little planning upfront can save so much time later. (Seriously, so. much. time.)

To document and track iterations efficiently, consider using a tool that fits your workflow:
- Bookdown in R - useful if your work involves R scripts or Markdown. This is the tool we used to develop the tutorial ‘book’ for this course, and I frequently use it to develop and share my own research scripts, figures, and workflows.
- Obsidian or Notion - better options if R scripts aren’t a major part of your work, allowing for flexible organization and easy cross-referencing.
- You can even make a slide in PowerPoint with a workflow chart, inserting links, file paths, and written steps into the chart as you go.

As you work on your term projects over the next couple of weeks, keep these tips in mind. If you haven’t started the thesis course yet (LRES 575), you’ll likely want to revisit the workflow you develop now and continue refining it for your thesis. Set yourself up for success by making it easy to track, recall and build on your work. Think about how you can make this process smoother for your future self. Your future self will thank you.

Good Luck!