TERRA-REF Documentation

Hyperspectral Data


The TERRA hyperspectral data pipeline processes imagery from the hyperspectral camera, together with its ancillary metadata. The pipeline converts the "raw" ENVI-format imagery into netCDF4/HDF5 format with (currently) lossless compression that reduces file sizes by ~20%. The pipeline also adds suitable ancillary metadata to make the netCDF image files truly self-describing. At the end of the pipeline, the files are typically [ready for xxx]/[uploaded to yyy]/[zzz].
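For concreteness, the sketch below shows how a raw ENVI cube might be written to netCDF4 with lossless DEFLATE compression using the Python netCDF4 library (one of the dependencies listed below). This is an illustration only, not the pipeline's own converter; the array shape, interleave, and variable name are assumptions, and the real dimensions would come from the ENVI header file.

```python
# Illustrative sketch only -- not the pipeline's actual code.
# Assumes a BSQ-interleaved uint16 cube of shape (band, y, x); in practice
# the dimensions are read from the ENVI header (e.g., meat_raw.hdr).
import numpy as np
from netCDF4 import Dataset

cube = np.fromfile("meat_raw", dtype=np.uint16)   # raw ENVI pixel stream
cube = cube.reshape(272, 1024, 1600)              # assumed (band, y, x)

with Dataset("meat.nc", "w", format="NETCDF4") as nc:
    nc.createDimension("band", cube.shape[0])
    nc.createDimension("y", cube.shape[1])
    nc.createDimension("x", cube.shape[2])
    var = nc.createVariable("exposure", "u2", ("band", "y", "x"),
                            zlib=True, complevel=1)  # lossless DEFLATE
    var[:] = cube   # reads back bit-for-bit identical after compression
```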

Installation

Software dependencies

The pipeline currently depends on the following pre-requisites:

  • [netCDF Operators (NCO)](http://nco.sf.net)
  • Python netCDF4

Pipeline source code

Once the pre-requisite libraries above have been installed, the pipeline itself may be installed by checking-out the TERRAREF computing-pipeline repository. The relevant scripts for hyperspectral imagery are:

  • Main script: terraref.sh
  • JSON metadata -> netCDF4 script: JsonDealer.py

Setup

The pipeline works with input from any location (directories, files, or stdin). Supply the raw image filename(s) (e.g., meat_raw), and the pipeline derives the ancillary filename(s) from them (e.g., meat_raw.hdr, meat_metadata.json). When a directory is specified without a specific filename, the pipeline processes all files with the suffix "_raw".
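The naming convention can be summarized with a short sketch (an illustration of the rules above, not the pipeline's actual code):

```python
# Illustration of the input conventions described above (assumed behavior).
import glob
import os

def ancillary_names(raw_path):
    """meat_raw -> ('meat_raw.hdr', 'meat_metadata.json')"""
    stem = raw_path[: -len("_raw")]          # strip the _raw suffix
    return raw_path + ".hdr", stem + "_metadata.json"

def collect_inputs(path):
    """A single file is used as-is; a directory yields every *_raw file."""
    if os.path.isdir(path):
        return sorted(glob.glob(os.path.join(path, "*_raw")))
    return [path]
```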

```sh
mkdir ~/terraref
cd ~/terraref
git clone git@github.com:terraref/computing-pipeline.git
git clone git@github.com:terraref/documentation.git
```

Run the Hyperspectral Pipeline

```sh
terraref.sh -i ${DATA}/terraref/foo_raw -O ${DATA}/terraref
terraref.sh -I /projects/arpae/terraref/raw_data/lemnatec_field -O /projects/arpae/terraref/outputs/lemnatec_field
```
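In the examples above, -i names a single raw image and -I an input directory to be scanned for files ending in _raw, while -O sets the output directory. This reading is inferred from the Setup section; consult terraref.sh itself for the authoritative option list.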
