CoastConnect Kelp Drone Mapping: Analysis Methods and Report Card Generation

1 Introduction

This document provides a comprehensive workflow for generating report cards from the CoastConnect Kelp Drone Mapping program. The report card system leverages drone-based imagery to monitor kelp forest canopy extent over time, providing critical insights for coastal ecosystem management and conservation.

The analysis workflow produces three primary outputs:

  1. Long-term trends spanning the entire time series
  2. Recent trends focused on the last five years
  3. Status assessments using a percentile-based approach adapted from U.S. West Coast monitoring programs (Frieder et al., 2025)

2 Data Requirements and Specifications

2.1 Required Data Structure

The analysis requires a CSV file with the following structure and column specifications:

  • Site (text): Unique site identifier or name
  • Dom_sp (text): Dominant kelp species at the site (macro for Macrocystis or nereo for Nereocystis)
  • Date (YYYY-MM-DD): Survey date in ISO format
  • Year (YYYY): Calendar year as a four-digit integer
  • Month: Month of survey
  • Tide_height_m: Tidal height during the drone survey, measured in meters
  • Site_m2: Total area of the survey site in square meters
  • Canopy_macro_m2: Total Macrocystis canopy area within the site, measured in square meters
  • Canopy_nereo_m2: Total Nereocystis canopy area within the site, measured in square meters

2.2 Important Methodological Considerations

Note

Reminder: 1. Canopy area should have been clipped to the site area of interest (AOI) so that the same survey extent is compared over time. 2. At least five years of data are required to run the trend and status analyses.

Note: Tide Correction Methodology

Tidal height during a drone survey affects the area of kelp canopy present at the ocean surface. Previous research has explored the relationship between tide height and kelp canopy area (Timmer et al., 2024), and a correction factor has been developed from that work. You may choose to apply this correction, but consider your site conditions first: the Central Coast is typically described as having slack tide conditions for one hour on either side of low tide, and the relationship may not be appropriate for sites that experience stronger current regimes. The correction equation is:

corrected_canopy_m2 = canopy_extent_m2 × (1 + 0.257 × tide_height_m)
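Applied in R, the correction looks like the sketch below. Assumptions: column names have been lower-cased to match the analysis code later in this workflow, and the example rows are hypothetical stand-ins for the survey CSV.

```r
library(dplyr)

# Hypothetical example rows; in practice these come from the survey CSV
kelp_data <- tibble(
  canopy_macro_m2 = c(1000, 2000),
  canopy_nereo_m2 = c(0, 500),
  tide_height_m   = c(1.2, 0.4)
)

# Tide correction based on Timmer et al. (2024): scale observed canopy
# up by 25.7% per metre of tide height at the time of survey
kelp_data <- kelp_data %>%
  mutate(
    canopy_macro_m2_tidecor = canopy_macro_m2 * (1 + 0.257 * tide_height_m),
    canopy_nereo_m2_tidecor = canopy_nereo_m2 * (1 + 0.257 * tide_height_m)
  )
```

The `_tidecor` suffix matches the column names expected by the validation step in Section 4.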

2.3 Report Card Outputs

2.3.1 Site-Level Analysis

For each monitoring site, the workflow produces:

  1. Trend analysis: Statistical assessment using either linear regression or the Mann-Kendall test, depending on whether parametric assumptions are met
    • Long-term trend: Analysis of the entire time series
    • Recent trend: Analysis of the most recent five years
  2. Status assessment: Classification using the Frieder et al. (2025) percentile-based method, which categorizes current conditions relative to a baseline reference period, defined as the first five years of the time series. Status categories are:
    • Far Above Expectations
    • Above Expectations
    • Meeting Expectations
    • Below Expectations
    • Far Below Expectations

2.3.2 Regional-Level Report Card

Regional analyses aggregate site-level data by dominant species:

  • Pool sites by dominant species
  • Report species-level trends
  • Pool all kelp together and report the overall trend (PacRim method)

This multi-scale approach enables both fine-scale site management and broader regional ecosystem assessments.
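The pooling step can be sketched in dplyr as follows. This assumes a data frame with site, year, dominant-species, and canopy-extent columns like the one built in the Data Loading section; the toy rows here are hypothetical.

```r
library(dplyr)

# Hypothetical site-year canopy values for two dominant species
site_year <- tibble(
  site        = c("A", "A", "B", "B"),
  dominant    = c("Macrocystis", "Macrocystis", "Nereocystis", "Nereocystis"),
  year        = c(2024, 2025, 2024, 2025),
  kelp_extent = c(100, 150, 200, 250)
)

# Pool sites by dominant species, per year
species_totals <- site_year %>%
  group_by(dominant, year) %>%
  summarize(total_canopy_m2 = sum(kelp_extent, na.rm = TRUE), .groups = "drop")

# Pool all kelp together (PacRim method), per year
all_kelp_totals <- site_year %>%
  group_by(year) %>%
  summarize(total_canopy_m2 = sum(kelp_extent, na.rm = TRUE), .groups = "drop")
```

Either pooled series can then be fed through the same trend tests used at the site level.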

3 Setup

The analysis relies on several R packages for data manipulation, statistical analysis, and visualization. The tidyverse suite provides data wrangling capabilities, while the Kendall package enables non-parametric trend testing when parametric assumptions are violated.

Code
library(tidyverse)
library(Kendall) # For Mann-Kendall trend test

# Set paths
data_file <- "data/drone_site_level_data.csv"
output_dir <- "output/Kelp"

# Create output directory if needed
if (!dir.exists(output_dir)) {
  dir.create(output_dir, recursive = TRUE)
}

cat("=== KELP FOREST MONITORING ANALYSIS ===\n")
=== KELP FOREST MONITORING ANALYSIS ===
Code
cat("Data file:", data_file, "\n")
Data file: data/drone_site_level_data.csv 
Code
cat("Output directory:", output_dir, "\n\n")
Output directory: output/Kelp 

4 Data Loading and Validation

The first step in any ecological analysis is rigorous data validation. This section loads the input data, verifies that all required columns are present, and performs quality checks to identify potential issues such as missing years, insufficient temporal coverage, or data gaps.

Code
cat("=== LOADING AND VALIDATING DATA ===\n")
=== LOADING AND VALIDATING DATA ===
Code
kelp_data <- read.csv(data_file)

# Check required columns
required_cols <- c(
  "site",
  "year",
  "dom_sp",
  "canopy_macro_m2_tidecor",
  "canopy_nereo_m2_tidecor",
  "tide_height_m"
)
missing_cols <- setdiff(required_cols, names(kelp_data))

if (length(missing_cols) > 0) {
  stop(
    "ERROR: Missing required columns: ",
    paste(missing_cols, collapse = ", ")
  )
}

cat("✓ All required columns present\n")
✓ All required columns present
Code
cat("Total observations:", nrow(kelp_data), "\n")
Total observations: 119 
Code
cat("Sites:", paste(unique(kelp_data$site), collapse = ", "), "\n")
Sites: Triquet, Manley, Womanley, Golden, Stryker, Meay, Simonds, Ratatouille, North Beach, Second, Westbeach, Breaker, Triquetta 
Code
cat("Years:", min(kelp_data$year), "to", max(kelp_data$year), "\n\n")
Years: 2015 to 2025 
Code
# Check data quality
sites_with_gaps <- kelp_data %>%
  group_by(site) %>%
  summarize(
    n_years = n_distinct(year),
    year_range = max(year) - min(year) + 1,
    has_gaps = n_years < year_range
  ) %>%
  filter(has_gaps)

if (nrow(sites_with_gaps) > 0) {
  cat("⚠ WARNING: The following sites have gaps in their time series:\n")
  print(sites_with_gaps)
  cat("\n")
}
⚠ WARNING: The following sites have gaps in their time series:
# A tibble: 2 × 4
  site      n_years year_range has_gaps
  <chr>       <int>      <dbl> <lgl>   
1 Breaker         7          9 TRUE    
2 Triquetta       9         10 TRUE    
Code
# Check minimum data requirements
sites_summary <- kelp_data %>%
  group_by(site) %>%
  summarize(
    n_years = n_distinct(year),
    first_year = min(year),
    last_year = max(year),
    n_obs = n()
  )

insufficient_data <- sites_summary %>% filter(n_years < 5)
if (nrow(insufficient_data) > 0) {
  cat("⚠ WARNING: The following sites have fewer than 5 years of data:\n")
  print(insufficient_data)
  cat("These sites will have limited trend analysis capability\n\n")
}

# Create dominant species lookup with full names
dominant_species <- kelp_data %>%
  select(site, dom_sp) %>%
  distinct() %>%
  mutate(
    dominant = case_when(
      dom_sp == "macro" ~ "Macrocystis",
      dom_sp == "nereo" ~ "Nereocystis",
      TRUE ~ dom_sp
    )
  )

cat("Dominant species by site:\n")
Dominant species by site:
Code
print(dominant_species)
          site dom_sp    dominant
1      Triquet  macro Macrocystis
2       Manley  nereo Nereocystis
3     Womanley  macro Macrocystis
4       Golden  macro Macrocystis
5      Stryker  macro Macrocystis
6         Meay  macro Macrocystis
7      Simonds  macro Macrocystis
8  Ratatouille  nereo Nereocystis
9  North Beach  nereo Nereocystis
10      Second  nereo Nereocystis
11   Westbeach  macro Macrocystis
12     Breaker  nereo Nereocystis
13   Triquetta  nereo Nereocystis
Code
cat("\n")
Code
# Add dominant species values to data
kelp_data <- kelp_data %>%
  left_join(
    dominant_species %>% select(site, dom_sp, dominant),
    by = c("site", "dom_sp")
  ) %>%
  mutate(
    kelp_extent = ifelse(
      dom_sp == "macro",
      canopy_macro_m2_tidecor,
      canopy_nereo_m2_tidecor
    )
  )

The validation process identifies sites with temporal gaps or insufficient data for robust trend analysis. These diagnostics are essential for interpreting results and understanding the confidence we can place in subsequent statistical tests.

5 Long-Term Trend Analysis

Long-term trend analysis provides insight into decadal-scale patterns in kelp canopy extent. The choice of statistical method depends critically on whether the data meet the assumptions of parametric linear regression. When assumptions of normality and homoscedasticity are violated, the non-parametric Mann-Kendall test provides a robust alternative.

5.1 Statistical Methodology

Linear regression assumes:

  1. Linearity: The relationship between year and canopy extent is approximately linear
  2. Independence: Observations are independent (a potential concern in time series)
  3. Homoscedasticity: Variance of residuals is constant across the time series
  4. Normality: Residuals are normally distributed

When these assumptions are not met, the Mann-Kendall test offers a distribution-free alternative that assesses monotonic trend without assuming linearity or normality. Trend magnitude is then reported using Sen's slope estimator, the median slope across all pairs of observations, which is robust to outliers.
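Sen's slope robustness can be illustrated on toy data. This is a sketch: the `sen_slope` helper mirrors the pairwise-median definition implemented in the analysis loop below, and the canopy series is invented for illustration.

```r
# Sen's slope: median of the slopes over all pairs of observations
sen_slope <- function(x, y) {
  idx <- combn(seq_along(x), 2)
  median((y[idx[2, ]] - y[idx[1, ]]) / (x[idx[2, ]] - x[idx[1, ]]))
}

years  <- 2015:2024
canopy <- 100 + 10 * (years - 2015) # underlying trend of +10 m2/yr
canopy[5] <- 500                    # one anomalous survey

coef(lm(canopy ~ years))[["years"]] # OLS slope is dragged away from 10
sen_slope(years, canopy)            # Sen's slope remains exactly 10
```

Because the outlier affects only 9 of the 45 pairwise slopes, the median is unchanged, while the least-squares fit shifts noticeably.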

Code
cat("=== PART 1: LONG-TERM TREND ANALYSIS ===\n\n")
=== PART 1: LONG-TERM TREND ANALYSIS ===
Code
# Test assumptions and run appropriate trend test for each site
longterm_trend_results <- list()

for (i in 1:nrow(sites_summary)) {
  site_name <- sites_summary$site[i]
  n_years <- sites_summary$n_years[i]

  cat(sprintf("Analyzing %s (%d years of data)...\n", site_name, n_years))

  # Get site data
  site_data <- kelp_data %>%
    filter(site == site_name) %>%
    arrange(year) %>%
    select(year, kelp_extent) %>%
    filter(!is.na(kelp_extent))

  if (nrow(site_data) < 3) {
    cat("  ⚠ Insufficient data for trend analysis (need ≥3 observations)\n\n")
    longterm_trend_results[[site_name]] <- data.frame(
      site = site_name,
      n_obs = nrow(site_data),
      method = "Insufficient data",
      slope = NA,
      p_value = NA,
      r_squared = NA,
      direction = "Insufficient data"
    )
    next
  }

  # Test linear regression assumptions
  # 1. Fit linear model
  lm_model <- lm(kelp_extent ~ year, data = site_data)

  # 2. Test for normality of residuals (Shapiro-Wilk)
  shapiro_test <- shapiro.test(residuals(lm_model))

  # 3. Test for homoscedasticity (visual check via residual plot would be better,
  #    but for automation we'll use a simple variance ratio test)
  #    Split residuals into first half and second half
  n_obs <- nrow(site_data)
  if (n_obs >= 6) {
    first_half_var <- var(residuals(lm_model)[1:floor(n_obs / 2)])
    second_half_var <- var(residuals(lm_model)[(floor(n_obs / 2) + 1):n_obs])
    var_ratio <- max(first_half_var, second_half_var) /
      min(first_half_var, second_half_var)
    homoscedastic <- var_ratio < 4 # Rule of thumb: ratio should be < 4
  } else {
    homoscedastic <- TRUE # Skip test if too few observations
  }

  # Decide which method to use
  assumptions_met <- (shapiro_test$p.value > 0.05) & homoscedastic

  if (assumptions_met) {
    # Use linear regression
    method <- "Linear regression"
    slope <- coef(lm_model)[2]
    p_value <- summary(lm_model)$coefficients[2, 4]
    r_squared <- summary(lm_model)$r.squared

    cat(sprintf("  Method: Linear regression (assumptions met)\n"))
    cat(sprintf(
      "  Slope: %.3f m²/year, p = %.4f, R² = %.3f\n",
      slope,
      p_value,
      r_squared
    ))
  } else {
    # Use Mann-Kendall test
    method <- "Mann-Kendall"
    mk_test <- MannKendall(site_data$kelp_extent)

    # Estimate slope using Sen's slope (median of all pairwise slopes)
    sen_slope <- function(x, y) {
      slopes <- c()
      for (i in 1:(length(x) - 1)) {
        for (j in (i + 1):length(x)) {
          slopes <- c(slopes, (y[j] - y[i]) / (x[j] - x[i]))
        }
      }
      return(median(slopes))
    }

    slope <- sen_slope(site_data$year, site_data$kelp_extent)
    p_value <- mk_test$sl
    r_squared <- NA # Mann-Kendall doesn't produce R²

    cat(sprintf("  Method: Mann-Kendall (assumptions not met)\n"))
    if (!shapiro_test$p.value > 0.05) {
      cat(sprintf(
        "    - Normality violated (Shapiro p = %.4f)\n",
        shapiro_test$p.value
      ))
    }
    if (!homoscedastic) {
      cat(sprintf(
        "    - Homoscedasticity violated (variance ratio = %.2f)\n",
        var_ratio
      ))
    }
    cat(sprintf("  Sen's slope: %.3f m²/year, p = %.4f\n", slope, p_value))
  }

  # Determine direction
  if (p_value < 0.05) {
    direction <- ifelse(slope > 0, "Increasing", "Decreasing")
  } else {
    direction <- "No significant trend"
  }

  cat(sprintf("  Direction: %s\n\n", direction))

  # Store results
  longterm_trend_results[[site_name]] <- data.frame(
    site = site_name,
    n_obs = nrow(site_data),
    method = method,
    slope = slope,
    p_value = p_value,
    r_squared = ifelse(is.na(r_squared), NA, r_squared),
    direction = direction
  )
}
Analyzing Breaker (7 years of data)...
  Method: Mann-Kendall (assumptions not met)
    - Normality violated (Shapiro p = 0.0469)
  Sen's slope: 1718.915 m²/year, p = 0.0355
  Direction: Increasing

Analyzing Golden (8 years of data)...
  Method: Linear regression (assumptions met)
  Slope: 778.200 m²/year, p = 0.0513, R² = 0.496
  Direction: No significant trend

Analyzing Manley (10 years of data)...
  Method: Linear regression (assumptions met)
  Slope: -75.287 m²/year, p = 0.3525, R² = 0.109
  Direction: No significant trend

Analyzing Meay (10 years of data)...
  Method: Linear regression (assumptions met)
  Slope: 560.070 m²/year, p = 0.0356, R² = 0.443
  Direction: Increasing

Analyzing North Beach (11 years of data)...
  Method: Linear regression (assumptions met)
  Slope: -2889.409 m²/year, p = 0.0044, R² = 0.613
  Direction: Decreasing

Analyzing Ratatouille (5 years of data)...
  Method: Linear regression (assumptions met)
  Slope: 3788.677 m²/year, p = 0.2741, R² = 0.373
  Direction: No significant trend

Analyzing Second (11 years of data)...
  Method: Mann-Kendall (assumptions not met)
    - Normality violated (Shapiro p = 0.0105)
  Sen's slope: -1412.637 m²/year, p = 0.2758
  Direction: No significant trend

Analyzing Simonds (9 years of data)...
  Method: Linear regression (assumptions met)
  Slope: 861.728 m²/year, p = 0.0383, R² = 0.481
  Direction: Increasing

Analyzing Stryker (8 years of data)...
  Method: Mann-Kendall (assumptions not met)
    - Homoscedasticity violated (variance ratio = 8.24)
  Sen's slope: 112.257 m²/year, p = 0.7105
  Direction: No significant trend

Analyzing Triquet (10 years of data)...
  Method: Linear regression (assumptions met)
  Slope: -176.267 m²/year, p = 0.5955, R² = 0.037
  Direction: No significant trend

Analyzing Triquetta (9 years of data)...
  Method: Linear regression (assumptions met)
  Slope: -2215.065 m²/year, p = 0.0378, R² = 0.482
  Direction: Decreasing

Analyzing Westbeach (11 years of data)...
  Method: Linear regression (assumptions met)
  Slope: -618.233 m²/year, p = 0.1040, R² = 0.267
  Direction: No significant trend

Analyzing Womanley (10 years of data)...
  Method: Linear regression (assumptions met)
  Slope: 922.466 m²/year, p = 0.0576, R² = 0.380
  Direction: No significant trend
Code
# Combine results
longterm_trends_df <- bind_rows(longterm_trend_results)

cat("=== LONG-TERM TREND SUMMARY ===\n")
=== LONG-TERM TREND SUMMARY ===
Code
print(longterm_trends_df)
                 site n_obs            method       slope     p_value
...1          Breaker     7      Mann-Kendall  1718.91500 0.035498142
year...2       Golden     8 Linear regression   778.19998 0.051309477
year...3       Manley    10 Linear regression   -75.28686 0.352535903
year...4         Meay    10 Linear regression   560.06985 0.035599041
year...5  North Beach    11 Linear regression -2889.40879 0.004396506
year...6  Ratatouille     5 Linear regression  3788.67732 0.274058728
...7           Second    11      Mann-Kendall -1412.63680 0.275757849
year...8      Simonds     9 Linear regression   861.72761 0.038341510
...9          Stryker     8      Mann-Kendall   112.25739 0.710523009
year...10     Triquet    10 Linear regression  -176.26704 0.595501591
year...11   Triquetta     9 Linear regression -2215.06456 0.037849975
year...12   Westbeach    11 Linear regression  -618.23350 0.103981268
year...13    Womanley    10 Linear regression   922.46625 0.057597647
           r_squared            direction
...1              NA           Increasing
year...2  0.49557145 No significant trend
year...3  0.10855847 No significant trend
year...4  0.44326231           Increasing
year...5  0.61265479           Decreasing
year...6  0.37277826 No significant trend
...7              NA No significant trend
year...8  0.48074197           Increasing
...9              NA No significant trend
year...10 0.03679367 No significant trend
year...11 0.48247279           Decreasing
year...12 0.26654438 No significant trend
year...13 0.38022026 No significant trend
Code
cat("\n")
Code
# Save long-term trend results
write.csv(
  longterm_trends_df,
  file.path(output_dir, "longterm_trends.csv"),
  row.names = FALSE
)
cat("✓ Long-term trends saved to longterm_trends.csv\n\n")
✓ Long-term trends saved to longterm_trends.csv

The long-term trend results provide the foundation for understanding overall trajectories at each site. Sites showing significant trends warrant particular management attention, while sites with no significant trend may represent stable kelp forests or may lack sufficient statistical power due to high interannual variability.

6 Recent Trend Analysis

While long-term trends reveal decadal patterns, recent trends (focused on the most recent five years) provide early warning signals of emerging changes. Recent trends may diverge from long-term patterns, potentially indicating regime shifts, recovery from disturbance, or responses to recent environmental changes.

Code
cat("=== PART 2: RECENT TREND ANALYSIS (LAST 5 YEARS) ===\n\n")
=== PART 2: RECENT TREND ANALYSIS (LAST 5 YEARS) ===
Code
# Determine the last 5 years
max_year <- max(kelp_data$year)
recent_years <- (max_year - 4):max_year
cat("Recent period:", min(recent_years), "to", max(recent_years), "\n\n")
Recent period: 2021 to 2025 
Code
# Filter to recent data
recent_data <- kelp_data %>% filter(year %in% recent_years)

# Run trend analysis for recent period
recent_trend_results <- list()

for (i in 1:nrow(sites_summary)) {
  site_name <- sites_summary$site[i]

  # Get recent site data
  site_data <- recent_data %>%
    filter(site == site_name) %>%
    arrange(year) %>%
    select(year, kelp_extent) %>%
    filter(!is.na(kelp_extent))

  n_recent <- nrow(site_data)
  cat(sprintf(
    "Analyzing %s (%d recent observations)...\n",
    site_name,
    n_recent
  ))

  if (n_recent < 3) {
    cat("  ⚠ Insufficient recent data for trend analysis\n\n")
    recent_trend_results[[site_name]] <- data.frame(
      site = site_name,
      n_obs = n_recent,
      method = "Insufficient data",
      slope = NA,
      p_value = NA,
      r_squared = NA,
      direction = "Insufficient data"
    )
    next
  }

  # Test linear regression assumptions
  lm_model <- lm(kelp_extent ~ year, data = site_data)
  shapiro_test <- shapiro.test(residuals(lm_model))

  if (n_recent >= 6) {
    first_half_var <- var(residuals(lm_model)[1:floor(n_recent / 2)])
    second_half_var <- var(residuals(lm_model)[
      (floor(n_recent / 2) + 1):n_recent
    ])
    var_ratio <- max(first_half_var, second_half_var) /
      min(first_half_var, second_half_var)
    homoscedastic <- var_ratio < 4
  } else {
    homoscedastic <- TRUE
  }

  assumptions_met <- (shapiro_test$p.value > 0.05) & homoscedastic

  if (assumptions_met) {
    # Linear regression
    method <- "Linear regression"
    slope <- coef(lm_model)[2]
    p_value <- summary(lm_model)$coefficients[2, 4]
    r_squared <- summary(lm_model)$r.squared

    cat(sprintf("  Method: Linear regression\n"))
    cat(sprintf(
      "  Slope: %.3f m²/year, p = %.4f, R² = %.3f\n",
      slope,
      p_value,
      r_squared
    ))
  } else {
    # Mann-Kendall
    method <- "Mann-Kendall"
    mk_test <- MannKendall(site_data$kelp_extent)

    sen_slope <- function(x, y) {
      slopes <- c()
      for (i in 1:(length(x) - 1)) {
        for (j in (i + 1):length(x)) {
          slopes <- c(slopes, (y[j] - y[i]) / (x[j] - x[i]))
        }
      }
      return(median(slopes))
    }

    slope <- sen_slope(site_data$year, site_data$kelp_extent)
    p_value <- mk_test$sl
    r_squared <- NA

    cat(sprintf("  Method: Mann-Kendall\n"))
    cat(sprintf("  Sen's slope: %.3f m²/year, p = %.4f\n", slope, p_value))
  }

  # Determine direction
  if (p_value < 0.05) {
    direction <- ifelse(slope > 0, "Increasing", "Decreasing")
  } else {
    direction <- "No significant trend"
  }

  cat(sprintf("  Direction: %s\n\n", direction))

  recent_trend_results[[site_name]] <- data.frame(
    site = site_name,
    n_obs = n_recent,
    method = method,
    slope = slope,
    p_value = p_value,
    r_squared = ifelse(is.na(r_squared), NA, r_squared),
    direction = direction
  )
}
Analyzing Breaker (5 recent observations)...
  Method: Linear regression
  Slope: 3450.568 m²/year, p = 0.0233, R² = 0.860
  Direction: Increasing

Analyzing Golden (5 recent observations)...
  Method: Linear regression
  Slope: 675.684 m²/year, p = 0.4819, R² = 0.176
  Direction: No significant trend

Analyzing Manley (5 recent observations)...
  Method: Linear regression
  Slope: -373.768 m²/year, p = 0.1091, R² = 0.630
  Direction: No significant trend

Analyzing Meay (5 recent observations)...
  Method: Linear regression
  Slope: 842.375 m²/year, p = 0.2890, R² = 0.355
  Direction: No significant trend

Analyzing North Beach (5 recent observations)...
  Method: Linear regression
  Slope: 1550.709 m²/year, p = 0.2498, R² = 0.403
  Direction: No significant trend

Analyzing Ratatouille (5 recent observations)...
  Method: Linear regression
  Slope: 3788.677 m²/year, p = 0.2741, R² = 0.373
  Direction: No significant trend

Analyzing Second (5 recent observations)...
  Method: Linear regression
  Slope: 6398.385 m²/year, p = 0.1591, R² = 0.537
  Direction: No significant trend

Analyzing Simonds (5 recent observations)...
  Method: Linear regression
  Slope: 1030.420 m²/year, p = 0.2383, R² = 0.418
  Direction: No significant trend

Analyzing Stryker (5 recent observations)...
  Method: Linear regression
  Slope: -409.710 m²/year, p = 0.4562, R² = 0.195
  Direction: No significant trend

Analyzing Triquet (5 recent observations)...
  Method: Linear regression
  Slope: 713.914 m²/year, p = 0.2934, R² = 0.350
  Direction: No significant trend

Analyzing Triquetta (5 recent observations)...
  Method: Linear regression
  Slope: -5810.644 m²/year, p = 0.0420, R² = 0.795
  Direction: Decreasing

Analyzing Westbeach (5 recent observations)...
  Method: Linear regression
  Slope: 1236.237 m²/year, p = 0.1468, R² = 0.558
  Direction: No significant trend

Analyzing Womanley (5 recent observations)...
  Method: Linear regression
  Slope: 2952.106 m²/year, p = 0.0817, R² = 0.689
  Direction: No significant trend
Code
# Combine results
recent_trends_df <- bind_rows(recent_trend_results)

cat("=== RECENT TREND SUMMARY ===\n")
=== RECENT TREND SUMMARY ===
Code
print(recent_trends_df)
                 site n_obs            method      slope    p_value r_squared
year...1      Breaker     5 Linear regression  3450.5676 0.02331438 0.8597221
year...2       Golden     5 Linear regression   675.6841 0.48194200 0.1760107
year...3       Manley     5 Linear regression  -373.7676 0.10906919 0.6296834
year...4         Meay     5 Linear regression   842.3746 0.28895762 0.3550777
year...5  North Beach     5 Linear regression  1550.7087 0.24979678 0.4031122
year...6  Ratatouille     5 Linear regression  3788.6773 0.27405873 0.3727783
year...7       Second     5 Linear regression  6398.3851 0.15911409 0.5367840
year...8      Simonds     5 Linear regression  1030.4197 0.23828029 0.4182122
year...9      Stryker     5 Linear regression  -409.7095 0.45621860 0.1952961
year...10     Triquet     5 Linear regression   713.9145 0.29341414 0.3499127
year...11   Triquetta     5 Linear regression -5810.6444 0.04198670 0.7954228
year...12   Westbeach     5 Linear regression  1236.2375 0.14680623 0.5580450
year...13    Womanley     5 Linear regression  2952.1062 0.08171397 0.6894793
                     direction
year...1            Increasing
year...2  No significant trend
year...3  No significant trend
year...4  No significant trend
year...5  No significant trend
year...6  No significant trend
year...7  No significant trend
year...8  No significant trend
year...9  No significant trend
year...10 No significant trend
year...11           Decreasing
year...12 No significant trend
year...13 No significant trend
Code
cat("\n")
Code
# Save recent trend results
write.csv(
  recent_trends_df,
  file.path(output_dir, "recent_trends.csv"),
  row.names = FALSE
)
cat("✓ Recent trends saved to recent_trends.csv\n\n")
✓ Recent trends saved to recent_trends.csv

Comparing long-term and recent trends can reveal important ecological dynamics. A site with a long-term decline but recent increase may be recovering, while a site with a long-term increase but recent decline may be experiencing new stressors. These patterns inform adaptive management strategies.
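One way to surface such divergences is to join the two summary tables produced above. The sketch below uses hypothetical stand-ins for `longterm_trends_df` and `recent_trends_df`; in the workflow those objects already exist.

```r
library(dplyr)

# Hypothetical stand-ins for the two trend summary tables
longterm_trends_df <- tibble(
  site      = c("North Beach", "Breaker"),
  direction = c("Decreasing", "Increasing")
)
recent_trends_df <- tibble(
  site      = c("North Beach", "Breaker"),
  direction = c("No significant trend", "Increasing")
)

# Flag sites whose recent direction diverges from the long-term direction
trend_comparison <- longterm_trends_df %>%
  select(site, longterm_direction = direction) %>%
  left_join(
    recent_trends_df %>% select(site, recent_direction = direction),
    by = "site"
  ) %>%
  mutate(diverges = longterm_direction != recent_direction)
```

Sites flagged as diverging are natural candidates for closer inspection of the underlying time series.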

7 Status Assessment

Status assessment translates raw canopy measurements into management-relevant categories by comparing current conditions to historical baselines. This approach follows the U.S. West Coast percentile method developed by Frieder et al. (2025), which classifies sites based on where current values fall within the distribution of baseline observations.

7.1 Percentile-Based Classification

The status assessment uses site-specific reference periods consisting of the first five years of monitoring at each location. Current observations are assigned scores based on their percentile rank within the reference distribution:

  • Score +2 (Far Above Expectations): Current value at or above the 90th percentile of the baseline
  • Score +1 (Above Expectations): Current value between the 75th and 90th percentiles
  • Score 0 (Meeting Expectations): Current value between the 25th and 75th percentiles
  • Score -1 (Below Expectations): Current value between the 10th and 25th percentiles
  • Score -2 (Far Below Expectations): Current value below the 10th percentile

This percentile-based approach inherently accounts for site-specific variability and historical context, making it more ecologically meaningful than absolute thresholds.

Code
cat("=== PART 3: STATUS ASSESSMENT ===\n\n")
=== PART 3: STATUS ASSESSMENT ===
Code
# Helper functions for status assessment
calculate_score_from_percentile <- function(current_val, ref_vals) {
  if (is.na(current_val)) {
    return(NA_real_)
  }
  ref_clean <- na.omit(ref_vals)
  if (length(ref_clean) < 2) {
    return(NA_real_)
  }

  percentile <- ecdf(ref_clean)(current_val) * 100

  if (percentile >= 90) {
    return(2)
  } else if (percentile >= 75) {
    return(1)
  } else if (percentile >= 25) {
    return(0)
  } else if (percentile >= 10) {
    return(-1)
  } else {
    return(-2)
  }
}

translate_score_to_category <- function(avg_score) {
  if (is.na(avg_score)) {
    return("Insufficient Data")
  } else if (avg_score < -1.5) {
    return("Far Below Expectations")
  } else if (avg_score < -0.5) {
    return("Below Expectations")
  } else if (avg_score < 0.5) {
    return("Meeting Expectations")
  } else if (avg_score < 1.5) {
    return("Above Expectations")
  } else {
    return("Far Above Expectations")
  }
}

# Define site-specific reference periods (first 5 years)
site_reference_years <- kelp_data %>%
  group_by(site) %>%
  arrange(year) %>%
  summarize(
    first_year = min(year),
    reference_years = list(sort(unique(year))[1:min(5, length(unique(year)))]),
    n_ref_years = min(5, length(unique(year))),
    .groups = "drop"
  )

cat("Site-specific reference periods:\n")
Site-specific reference periods:
Code
for (i in 1:nrow(site_reference_years)) {
  cat(sprintf(
    "  %s: %s (n=%d years)\n",
    site_reference_years$site[i],
    paste(site_reference_years$reference_years[[i]], collapse = ", "),
    site_reference_years$n_ref_years[i]
  ))
}
  Breaker: 2017, 2019, 2021, 2022, 2023 (n=5 years)
  Golden: 2018, 2019, 2020, 2021, 2022 (n=5 years)
  Manley: 2016, 2017, 2018, 2019, 2020 (n=5 years)
  Meay: 2016, 2017, 2018, 2019, 2020 (n=5 years)
  North Beach: 2015, 2016, 2017, 2018, 2019 (n=5 years)
  Ratatouille: 2021, 2022, 2023, 2024, 2025 (n=5 years)
  Second: 2015, 2016, 2017, 2018, 2019 (n=5 years)
  Simonds: 2017, 2018, 2019, 2020, 2021 (n=5 years)
  Stryker: 2018, 2019, 2020, 2021, 2022 (n=5 years)
  Triquet: 2016, 2017, 2018, 2019, 2020 (n=5 years)
  Triquetta: 2016, 2018, 2019, 2020, 2021 (n=5 years)
  Westbeach: 2015, 2016, 2017, 2018, 2019 (n=5 years)
  Womanley: 2016, 2017, 2018, 2019, 2020 (n=5 years)
Code
cat("\n")
Code
# Get current year
current_year <- max(kelp_data$year)
cat("Current assessment year:", current_year, "\n\n")
Current assessment year: 2025 
Code
# Build reference data for each site
reference_data_list <- list()

for (i in 1:nrow(site_reference_years)) {
  site_name <- site_reference_years$site[i]
  ref_years <- site_reference_years$reference_years[[i]]

  dom_sp_code <- dominant_species %>% filter(site == site_name) %>% pull(dom_sp)
  dom_sp_name <- dominant_species %>%
    filter(site == site_name) %>%
    pull(dominant)

  site_ref_raw <- kelp_data %>%
    filter(site == site_name, year %in% ref_years)

  # Extract correct column based on dominant species
  if (dom_sp_code == "macro") {
    ref_values <- site_ref_raw$canopy_macro_m2_tidecor
    ref_median <- median(site_ref_raw$canopy_macro_m2_tidecor, na.rm = TRUE)
  } else if (dom_sp_code == "nereo") {
    ref_values <- site_ref_raw$canopy_nereo_m2_tidecor
    ref_median <- median(site_ref_raw$canopy_nereo_m2_tidecor, na.rm = TRUE)
  }

  reference_data_list[[i]] <- data.frame(
    site = site_name,
    dominant = dom_sp_name,
    ref_median = ref_median,
    ref_values = I(list(ref_values)),
    n_obs = nrow(site_ref_raw)
  )
}

reference_data <- bind_rows(reference_data_list)

# Get current year data
current_data_list <- list()

for (i in 1:nrow(dominant_species)) {
  site_name <- dominant_species$site[i]
  dom_sp_code <- dominant_species$dom_sp[i]
  dom_sp_name <- dominant_species$dominant[i]

  site_current_raw <- kelp_data %>%
    filter(site == site_name, year == current_year)

  if (dom_sp_code == "macro") {
    current_val <- site_current_raw$canopy_macro_m2_tidecor[1]
  } else if (dom_sp_code == "nereo") {
    current_val <- site_current_raw$canopy_nereo_m2_tidecor[1]
  }

  current_data_list[[i]] <- data.frame(
    site = site_name,
    dominant = dom_sp_name,
    current_value = current_val
  )
}

current_data <- bind_rows(current_data_list)

# Calculate site-level status
site_assessment <- current_data %>%
  left_join(
    reference_data %>% select(site, ref_median, ref_values),
    by = "site"
  ) %>%
  left_join(site_reference_years %>% select(site, reference_years), by = "site")

for (i in 1:nrow(site_assessment)) {
  current_val <- site_assessment$current_value[i]
  ref_median <- site_assessment$ref_median[i]
  ref_vals <- site_assessment$ref_values[[i]]

  score <- calculate_score_from_percentile(current_val, ref_vals)
  site_assessment$status_pct[i] <- (current_val / ref_median) * 100

  if (!is.na(current_val) && length(na.omit(ref_vals)) >= 2) {
    site_assessment$percentile[i] <- ecdf(na.omit(ref_vals))(current_val) * 100
  } else {
    site_assessment$percentile[i] <- NA_real_
  }

  site_assessment$score[i] <- score
}

site_status_final <- site_assessment %>%
  mutate(
    status_category = sapply(score, translate_score_to_category),
    ref_years_display = sapply(reference_years, function(x) {
      paste(x, collapse = ", ")
    })
  ) %>%
  select(
    site,
    dominant,
    ref_years_display,
    current_value,
    ref_median,
    status_pct,
    percentile,
    score,
    status_category
  )

cat("=== SITE-LEVEL STATUS ===\n")
=== SITE-LEVEL STATUS ===
Code
print(site_status_final)
          site    dominant            ref_years_display current_value
1      Triquet Macrocystis 2016, 2017, 2018, 2019, 2020     11844.856
2       Manley Nereocystis 2016, 2017, 2018, 2019, 2020       346.329
3     Womanley Macrocystis 2016, 2017, 2018, 2019, 2020     27248.396
4       Golden Macrocystis 2018, 2019, 2020, 2021, 2022      7792.385
5      Stryker Macrocystis 2018, 2019, 2020, 2021, 2022      8489.835
6         Meay Macrocystis 2016, 2017, 2018, 2019, 2020      8972.075
7      Simonds Macrocystis 2017, 2018, 2019, 2020, 2021     23682.946
8  Ratatouille Nereocystis 2021, 2022, 2023, 2024, 2025     29267.049
9  North Beach Nereocystis 2015, 2016, 2017, 2018, 2019      9361.574
10      Second Nereocystis 2015, 2016, 2017, 2018, 2019     38490.119
11   Westbeach Macrocystis 2015, 2016, 2017, 2018, 2019     10420.655
12     Breaker Nereocystis 2017, 2019, 2021, 2022, 2023     12728.725
13   Triquetta Nereocystis 2016, 2018, 2019, 2020, 2021      7169.418
   ref_median status_pct percentile score        status_category
1    8052.646  147.09273         80     1     Above Expectations
2     560.000   61.84446         20    -1     Below Expectations
3   13396.000  203.40696        100     2 Far Above Expectations
4    3316.017  234.99233         80     1     Above Expectations
5    8067.478  105.23531         60     0   Meeting Expectations
6    1470.832  610.00000        100     2 Far Above Expectations
7   17512.864  135.23171        100     2 Far Above Expectations
8    8347.002  350.62946        100     2 Far Above Expectations
9   20117.000   46.53564         40     0   Meeting Expectations
10  16816.597  228.88173        100     2 Far Above Expectations
11   7495.539  139.02476         60     0   Meeting Expectations
12   1315.589  967.53087        100     2 Far Above Expectations
13  24151.635   29.68502          0    -2 Far Below Expectations
Code
cat("\n")
Code
# Species-level summary (site-averaged)
species_summary <- site_status_final %>%
  group_by(dominant) %>%
  summarize(
    N_sites = n(),
    N_valid_scores = sum(!is.na(score)),
    Average_score = mean(score, na.rm = TRUE),
    Median_status_pct = median(status_pct, na.rm = TRUE),
    Status_category = translate_score_to_category(mean(score, na.rm = TRUE)),
    .groups = "drop"
  ) %>%
  rename(Species = dominant)

cat("=== SPECIES-LEVEL STATUS (SITE-AVERAGED) ===\n")
=== SPECIES-LEVEL STATUS (SITE-AVERAGED) ===
Code
print(species_summary)
# A tibble: 2 × 6
  Species N_sites N_valid_scores Average_score Median_status_pct Status_category
  <chr>     <int>          <int>         <dbl>             <dbl> <chr>          
1 Macroc…       7              7          1.14              147. Above Expectat…
2 Nereoc…       6              6          0.5               145. Above Expectat…
Code
cat("\n")
Code
# Pooled species analysis
pooled_analysis_list <- list()

for (sp in unique(dominant_species$dominant)) {
  species_sites <- dominant_species %>% filter(dominant == sp) %>% pull(site)
  dom_sp_code <- dominant_species %>%
    filter(dominant == sp) %>%
    pull(dom_sp) %>%
    first()

  # Get reference data
  species_ref_data <- kelp_data %>%
    filter(site %in% species_sites) %>%
    left_join(
      site_reference_years %>% select(site, reference_years),
      by = "site"
    ) %>%
    rowwise() %>%
    filter(year %in% reference_years[[1]]) %>%
    ungroup()

  # Get current data
  species_current_data <- kelp_data %>%
    filter(site %in% species_sites, year == current_year)

  if (dom_sp_code == "macro") {
    ref_values <- species_ref_data$canopy_macro_m2_tidecor
    current_values <- species_current_data$canopy_macro_m2_tidecor
  } else {
    ref_values <- species_ref_data$canopy_nereo_m2_tidecor
    current_values <- species_current_data$canopy_nereo_m2_tidecor
  }

  ref_values_clean <- na.omit(ref_values)
  current_values_clean <- na.omit(current_values)

  # Calculate scores
  scores <- sapply(current_values_clean, function(val) {
    percentile <- ecdf(ref_values_clean)(val) * 100
    if (percentile >= 90) {
      return(2)
    } else if (percentile >= 75) {
      return(1)
    } else if (percentile >= 25) {
      return(0)
    } else if (percentile >= 10) {
      return(-1)
    } else {
      return(-2)
    }
  })

  avg_score <- mean(scores)

  pooled_analysis_list[[sp]] <- data.frame(
    Species = sp,
    N_sites = length(species_sites),
    N_ref_obs = length(ref_values_clean),
    N_current_obs = length(current_values_clean),
    Ref_median = median(ref_values_clean),
    Current_median = median(current_values_clean),
    Percent_of_baseline = (median(current_values_clean) /
      median(ref_values_clean)) *
      100,
    Average_score_pooled = avg_score,
    Status_category_pooled = translate_score_to_category(avg_score)
  )
}

pooled_species_summary <- bind_rows(pooled_analysis_list)

cat("=== SPECIES-LEVEL STATUS (POOLED) ===\n")
=== SPECIES-LEVEL STATUS (POOLED) ===
Code
print(pooled_species_summary)
      Species N_sites N_ref_obs N_current_obs Ref_median Current_median
1 Macrocystis       7         7             7   7287.019       10420.66
2 Nereocystis       6         6             6  20256.501       11045.15
  Percent_of_baseline Average_score_pooled Status_category_pooled
1           143.00300            0.5714286     Above Expectations
2            54.52644            0.1666667   Meeting Expectations
Code
cat("\n")
Code
# Save status results
write.csv(
  site_status_final,
  file.path(output_dir, "site_status.csv"),
  row.names = FALSE
)
write.csv(
  species_summary,
  file.path(output_dir, "species_status_site_averaged.csv"),
  row.names = FALSE
)
write.csv(
  pooled_species_summary,
  file.path(output_dir, "species_status_pooled.csv"),
  row.names = FALSE
)

cat("✓ Status assessment results saved\n\n")
✓ Status assessment results saved

The status assessment provides a scientifically defensible framework for communicating kelp forest condition to stakeholders and managers. By grounding assessments in site-specific historical context, this approach avoids the pitfalls of arbitrary thresholds while remaining intuitive and actionable.
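The helper functions `calculate_score_from_percentile` and `translate_score_to_category` are defined earlier in the workflow; the following sketch condenses the same logic for readers consulting this section in isolation. The percentile cut points match the pooled-analysis code below; the half-point cut points for fractional (averaged) scores are an assumption, chosen to reproduce the categories shown in the site- and species-level output above.

```r
# Sketch of the percentile-based scoring: >= 90th percentile of the
# reference distribution scores 2, >= 75th scores 1, >= 25th scores 0,
# >= 10th scores -1, and anything lower scores -2.
score_from_percentile <- function(value, ref_values) {
  ref_clean <- na.omit(ref_values)
  if (is.na(value) || length(ref_clean) < 2) return(NA_real_)
  pct <- ecdf(ref_clean)(value) * 100
  if (pct >= 90) 2
  else if (pct >= 75) 1
  else if (pct >= 25) 0
  else if (pct >= 10) -1
  else -2
}

# Map a score (possibly a fractional site average) onto the report-card
# categories. Integer scores map exactly as in the site-level table above;
# the cut points for averaged scores are an illustrative assumption.
score_to_category <- function(score) {
  if (is.na(score)) return(NA_character_)
  if (score >= 1.5) "Far Above Expectations"
  else if (score >= 0.5) "Above Expectations"
  else if (score > -0.5) "Meeting Expectations"
  else if (score > -1.5) "Below Expectations"
  else "Far Below Expectations"
}
```

Under these cut points the species-level averages reported above (1.14 and 0.57 for Macrocystis, 0.5 for Nereocystis site-averaged) all fall in "Above Expectations", consistent with the printed tables.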

8 Visualizations

Visual representations of trends and status are essential for effective communication with diverse stakeholders. The visualization workflow generates a bar plot of annual canopy area for each monitoring site, overlaid with a fitted trend line and summary statistics from the trend analysis.

Code
cat("=== CREATING VISUALIZATIONS ===\n\n")
=== CREATING VISUALIZATIONS ===
Code
# Create bar plots with trend lines for each site (long-term)
cat("Creating long-term trend visualizations...\n")
Creating long-term trend visualizations...
Code
for (i in 1:nrow(sites_summary)) {
  site_name <- sites_summary$site[i]

  # Get site data
  site_data <- kelp_data %>%
    filter(site == site_name) %>%
    arrange(year) %>%
    select(year, kelp_extent, dominant) %>%
    filter(!is.na(kelp_extent))

  if (nrow(site_data) < 3) {
    cat(sprintf("  ⚠ Skipping %s (insufficient data)\n", site_name))
    next
  }

  # Get trend results
  trend_result <- longterm_trends_df %>% filter(site == site_name)

  # Determine color based on dominant species
  bar_color <- ifelse(
    site_data$dominant[1] == "Macrocystis",
    "darkgreen",
    "brown"
  )

  # Create plot
  # png(
  #   file.path(output_dir, paste0("longterm_trend_", site_name, ".png")),
  #   width = 800,
  #   height = 600
  # )

  par(mar = c(5, 5, 4, 2))

  # Bar plot
  bp <- barplot(
    site_data$kelp_extent,
    names.arg = site_data$year,
    col = bar_color,
    border = "black",
    main = paste(site_name, "- Long-term Trend"),
    xlab = "Year",
    ylab = "Kelp Canopy Area (m²)",
    las = 2,
    ylim = c(0, max(site_data$kelp_extent) * 1.1)
  )

  # Add trend line if significant
  if (trend_result$method == "Linear regression") {
    lm_model <- lm(kelp_extent ~ year, data = site_data)
    pred_vals <- predict(lm_model)
    lines(bp, pred_vals, col = "red", lwd = 2, lty = 2)
  } else if (trend_result$method == "Mann-Kendall") {
    # For Mann-Kendall, draw Sen's slope line
    slope <- trend_result$slope
    intercept <- median(site_data$kelp_extent) - slope * median(site_data$year)
    pred_vals <- intercept + slope * site_data$year
    lines(bp, pred_vals, col = "red", lwd = 2, lty = 2)
  }

  # Add statistics text
  stat_text <- sprintf(
    "Method: %s\nSlope: %.3f m²/yr\np-value: %.4f%s\n%s",
    trend_result$method,
    trend_result$slope,
    trend_result$p_value,
    ifelse(
      !is.na(trend_result$r_squared),
      sprintf("\nR² = %.3f", trend_result$r_squared),
      ""
    ),
    trend_result$direction
  )

  legend("topright", legend = stat_text, bty = "n", cex = 0.9)

  # dev.off()

  cat(sprintf("  ✓ %s\n", site_name))
}

  ✓ Breaker

  ✓ Golden

  ✓ Manley

  ✓ Meay

  ✓ North Beach

  ✓ Ratatouille

  ✓ Second

  ✓ Simonds

  ✓ Stryker

  ✓ Triquet

  ✓ Triquetta

  ✓ Westbeach

  ✓ Womanley
Code
cat("\n✓ All visualizations created\n\n")

✓ All visualizations created

Each visualization includes color-coding by dominant species (dark green for Macrocystis, brown for Nereocystis), trend lines derived from the appropriate statistical method, and summary statistics in the legend. These plots provide an immediate visual assessment of site condition and trajectory.
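Since ggplot2 is attached in this session, the same per-site plot can also be sketched with the grammar-of-graphics approach. This is an illustrative alternative, not the workflow's method: it assumes the `site_data` data frame and `bar_color` value from the plotting loop above, and `geom_smooth(method = "lm")` draws only the ordinary-least-squares line, so the Mann-Kendall/Sen's slope branch would still need a manual `geom_abline()`.

```r
library(ggplot2)

# Hypothetical ggplot2 equivalent of the base-graphics bar plot above.
plot_site_trend <- function(site_data, site_name, bar_color) {
  ggplot(site_data, aes(x = year, y = kelp_extent)) +
    geom_col(fill = bar_color, colour = "black") +
    # OLS trend line only; Sen's slope would be added with geom_abline()
    geom_smooth(method = "lm", se = FALSE,
                colour = "red", linetype = "dashed", linewidth = 1) +
    labs(
      title = paste(site_name, "- Long-term Trend"),
      x = "Year",
      y = "Kelp Canopy Area (m\u00b2)"
    ) +
    theme_minimal() +
    theme(axis.text.x = element_text(angle = 90, vjust = 0.5))
}
```

A ggplot version would also make faceting all thirteen sites into a single figure straightforward via `facet_wrap(~ site, scales = "free_y")`.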

9 Summary and Outputs

The complete analysis workflow produces a suite of data products suitable for both technical analysis and stakeholder communication.

Code
cat("=== ANALYSIS COMPLETE ===\n\n")
=== ANALYSIS COMPLETE ===
Code
cat("OUTPUT FILES:\n")
OUTPUT FILES:
Code
cat("  1. longterm_trends.csv - Long-term trend statistics for each site\n")
  1. longterm_trends.csv - Long-term trend statistics for each site
Code
cat("  2. recent_trends.csv - Recent (5-year) trend statistics for each site\n")
  2. recent_trends.csv - Recent (5-year) trend statistics for each site
Code
cat("  3. site_status.csv - Current status assessment for each site\n")
  3. site_status.csv - Current status assessment for each site
Code
cat(
  "  4. species_status_site_averaged.csv - Species-level status (site-averaged)\n"
)
  4. species_status_site_averaged.csv - Species-level status (site-averaged)
Code
cat(
  "  5. species_status_pooled.csv - Species-level status (pooled observations)\n"
)
  5. species_status_pooled.csv - Species-level status (pooled observations)
Code
cat(
  "  6. longterm_trend_[site].png - Bar plot with trend line for each site\n\n"
)
  6. longterm_trend_[site].png - Bar plot with trend line for each site
Code
cat("SUMMARY STATISTICS:\n")
SUMMARY STATISTICS:
Code
cat(sprintf("  Total sites analyzed: %d\n", nrow(sites_summary)))
  Total sites analyzed: 13
Code
cat(sprintf("  Date range: %d-%d\n", min(kelp_data$year), max(kelp_data$year)))
  Date range: 2015-2025
Code
cat(sprintf("  Current assessment year: %d\n", current_year))
  Current assessment year: 2025
Code
cat("\nLONG-TERM TRENDS:\n")

LONG-TERM TRENDS:
Code
trend_summary <- table(longterm_trends_df$direction)
for (direction in names(trend_summary)) {
  cat(sprintf("  %s: %d sites\n", direction, trend_summary[direction]))
}
  Decreasing: 2 sites
  Increasing: 3 sites
  No significant trend: 8 sites
Code
cat("\nRECENT TRENDS:\n")

RECENT TRENDS:
Code
recent_trend_summary <- table(recent_trends_df$direction)
for (direction in names(recent_trend_summary)) {
  cat(sprintf("  %s: %d sites\n", direction, recent_trend_summary[direction]))
}
  Decreasing: 1 sites
  Increasing: 1 sites
  No significant trend: 11 sites
Code
cat("\nSTATUS ASSESSMENT:\n")

STATUS ASSESSMENT:
Code
status_summary <- table(site_status_final$status_category)
for (category in names(status_summary)) {
  cat(sprintf("  %s: %d sites\n", category, status_summary[category]))
}
  Above Expectations: 2 sites
  Below Expectations: 1 sites
  Far Above Expectations: 6 sites
  Far Below Expectations: 1 sites
  Meeting Expectations: 3 sites
Code
cat("\n=== WORKFLOW COMPLETE ===\n")

=== WORKFLOW COMPLETE ===

9.1 Output File Descriptions

The workflow generates six types of output files:

  1. longterm_trends.csv: Contains site-level statistics for the entire time series, including the statistical method used (linear regression or Mann-Kendall), slope estimate, p-value, and trend direction.

  2. recent_trends.csv: Parallel structure to long-term trends but focused on the most recent five years, enabling comparison of recent versus historical trajectories.

  3. site_status.csv: Site-level status classifications including reference period definitions, current values, baseline medians, percentile ranks, and categorical assessments.

  4. species_status_site_averaged.csv: Species-level summaries derived by averaging scores across all sites dominated by each species, providing a regional perspective on species-specific patterns.

  5. species_status_pooled.csv: Alternative species-level summary that pools all observations across sites before calculating status, emphasizing regional patterns while de-emphasizing site-level heterogeneity.

  6. longterm_trend_[site].png: Individual visualization for each site showing annual canopy area, trend line, and summary statistics.

These outputs support both detailed site-level management and broader regional assessments, facilitating multi-scale conservation planning.
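As a hypothetical downstream step, the CSV products can be recombined into a single per-site report-card table. The sketch below assumes the files have already been written to `output_dir` as shown above, and that `longterm_trends.csv` carries the `site`, `direction`, and `slope` columns referenced in the plotting code.

```r
library(dplyr)

# Read the saved products back and join status with long-term trend
# direction, yielding one row per site for the report card.
site_status    <- read.csv(file.path(output_dir, "site_status.csv"))
longterm_trend <- read.csv(file.path(output_dir, "longterm_trends.csv"))

report_card <- site_status %>%
  select(site, dominant, status_category, percentile) %>%
  left_join(
    longterm_trend %>% select(site, direction, slope),
    by = "site"
  ) %>%
  arrange(dominant, site)
```

Pairing the status category with the trend direction in one table helps distinguish, for example, a site that is below its baseline but recovering from one that is below baseline and still declining.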

10 Session Information

For computational reproducibility and troubleshooting, the following session information documents the R version, operating system, and package versions used in this analysis.

Code
sessionInfo()
R version 4.5.2 (2025-10-31)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 24.04.3 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so;  LAPACK version 3.12.0

locale:
 [1] LC_CTYPE=C.UTF-8       LC_NUMERIC=C           LC_TIME=C.UTF-8       
 [4] LC_COLLATE=C.UTF-8     LC_MONETARY=C.UTF-8    LC_MESSAGES=C.UTF-8   
 [7] LC_PAPER=C.UTF-8       LC_NAME=C              LC_ADDRESS=C          
[10] LC_TELEPHONE=C         LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C   

time zone: UTC
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] Kendall_2.2.2   lubridate_1.9.4 forcats_1.0.1   stringr_1.6.0  
 [5] dplyr_1.1.4     purrr_1.2.1     readr_2.1.6     tidyr_1.3.2    
 [9] tibble_3.3.1    ggplot2_4.0.1   tidyverse_2.0.0

loaded via a namespace (and not attached):
 [1] gtable_0.3.6       jsonlite_2.0.0     compiler_4.5.2     tidyselect_1.2.1  
 [5] scales_1.4.0       boot_1.3-32        yaml_2.3.12        fastmap_1.2.0     
 [9] R6_2.6.1           generics_0.1.4     knitr_1.51         pillar_1.11.1     
[13] RColorBrewer_1.1-3 tzdb_0.5.0         rlang_1.1.7        utf8_1.2.6        
[17] stringi_1.8.7      xfun_0.56          S7_0.2.1           timechange_0.3.0  
[21] cli_3.6.5          withr_3.0.2        magrittr_2.0.4     digest_0.6.39     
[25] grid_4.5.2         hms_1.1.4          lifecycle_1.0.5    vctrs_0.7.1       
[29] evaluate_1.0.5     glue_1.8.0         farver_2.1.2       rmarkdown_2.30    
[33] tools_4.5.2        pkgconfig_2.0.3    htmltools_0.5.9   

References

Frieder, C. A., Bell, T. W., Berry, H., Cavanaugh, K., Claar, D. C., Freiwald, J., Grime, B., Hamilton, S., Houskeeper, H. F., Lombardo, N., Marion, S., McHugh, T. A., McKenna, G., Parnell, P. E., Spector, P., & Weisberg, S. B. (2025). Developing a Status and Trends Assessment for Floating Kelp Canopies across Large Geographic Areas. Environmental Science & Technology, 59(46), 24745–24754. https://doi.org/10.1021/acs.est.5c07501
Timmer, B., Reshitnyk, L. Y., Hessing-Lewis, M., Juanes, F., Gendall, L., & Costa, M. (2024). Capturing accurate kelp canopy extent: Integrating tides, currents, and species-level morphology in kelp remote sensing. Frontiers in Environmental Science, 12, 1338483. https://doi.org/10.3389/fenvs.2024.1338483