Data products are processed at levels ranging from Level 0 to Level 4. Level 0 products are raw data at full instrument resolution. At higher levels, the data are converted into more useful parameters and formats. These definitions are derived from the NASA¹ and NEON² data processing levels:
¹ Earth Observing System Data Processing Levels, NASA
² National Ecological Observatory Network Data Processing
| Level | Description |
| --- | --- |
| 0 | Reconstructed, unprocessed, full-resolution instrument data; artifacts and duplicates removed. |
| 1a | Level 0 data, time-referenced and annotated with calibration coefficients and georeferencing parameters (Level 0 is fully recoverable from Level 1a data). |
| 1b | Level 1a data processed to sensor units (Level 0 is not recoverable). |
| 2 | Derived variables (e.g., NDVI, height, fluorescence) at the Level 1 resolution. |
| 3 | Level 2 data mapped to a uniform grid, with missing points gap-filled and overlapping images combined. |
| 4 | "Phenotypes": derived variables associated with a particular plant or genotype rather than a spatial location. |
Outlined below are the steps taken to create a raw VCF file from paired-end raw FASTQ files. This was done for each sequenced accession, so an HTCondor DAG workflow was written to streamline the processing of the ~200 accessions. While some CPU and memory parameters are included in the example steps below, those parameters varied from sample to sample, and the workflow has been honed to accommodate that variation. This pipeline is subject to modification based on software updates and changes to software best practices. A minimal sketch of such a DAG is shown below.
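As an illustration only (the job names, submit-file names, and resource values here are hypothetical, not the project's actual files), a per-accession HTCondor DAG might look like the following, with `VARS` macros letting each sample override its CPU and memory requests:

```sh
# Hypothetical sketch of a per-accession DAG. align.sub, sort.sub, and
# call.sub are assumed submit files that reference $(request_cpus) and
# $(request_memory) in their own request_cpus/request_memory lines.
cat > accession_001.dag <<'EOF'
JOB align align.sub
JOB sort  sort.sub
JOB call  call.sub
PARENT align CHILD sort
PARENT sort  CHILD call
VARS align request_cpus="8" request_memory="16GB"
VARS call  request_cpus="4" request_memory="32GB"
EOF

condor_submit_dag accession_001.dag
```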
Download *Sorghum bicolor* v3.1 from Phytozome
Generate the reference index files required by the downstream tools (one typical set is sketched below):
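The specific list of generated files was not preserved here; as an assumption, a GATK-style pipeline typically needs a BWA index, a FASTA index, and a sequence dictionary. The reference filename below is hypothetical; substitute the FASTA downloaded from Phytozome:

```sh
# Hypothetical reference filename
REF=Sbicolor_v3.1.fa

bwa index ${REF}                                              # BWA alignment index
samtools faidx ${REF}                                         # FASTA index (.fai)
picard CreateSequenceDictionary R=${REF} O=${REF%.fa}.dict    # sequence dictionary (.dict)
```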
The workflow above this point creates the gVCF files for this project. The following additional steps were used to create the HapMap file.
NOTE: This project has 363 gVCFs. Multiple instances of CombineGVCFs, each with a unique subset of the gVCF files, were run in parallel to speed up this step; examples are below.
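As a hedged illustration of that batching (the sample filenames and subset sizes here are invented, not the project's actual values), each parallel instance of GATK's CombineGVCFs combines one disjoint subset of the 363 gVCFs:

```sh
# ${REF} is the indexed reference FASTA from the earlier sketch.
# Each invocation handles a distinct subset; run the invocations in parallel.
gatk CombineGVCFs -R ${REF} \
  -V sample_001.g.vcf.gz -V sample_002.g.vcf.gz -V sample_003.g.vcf.gz \
  -O subset_01.g.vcf.gz

gatk CombineGVCFs -R ${REF} \
  -V sample_004.g.vcf.gz -V sample_005.g.vcf.gz -V sample_006.g.vcf.gz \
  -O subset_02.g.vcf.gz
```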
The TERRA hyperspectral data pipeline processes imagery from the hyperspectral camera, along with ancillary metadata. The pipeline converts the "raw" ENVI-format imagery into netCDF4/HDF5 format with (currently) lossless compression that reduces file size by ~20%. The pipeline also adds suitable ancillary metadata to make the netCDF image files truly self-describing. At the end of the pipeline, the files are typically [ready for xxx]/[uploaded to yyy]/[zzz].
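To illustrate the kind of lossless compression involved (the filenames here are hypothetical), NCO, one of the pipeline's dependencies, can rewrite a netCDF file as netCDF4 with DEFLATE compression:

```sh
# Rewrite as netCDF4 (-4) with lossless DEFLATE level 1 (-L 1);
# the data remain bit-for-bit recoverable on read.
ncks -4 -L 1 foo.nc foo_compressed.nc
```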
Software dependencies
The pipeline currently depends on the following prerequisites:

* [netCDF Operators (NCO)](http://nco.sf.net)
* Python netCDF4
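One possible way to install both prerequisites (assuming a conda environment with the conda-forge channel; other package managers also carry them):

```sh
# NCO and the Python netCDF4 bindings are both packaged on conda-forge.
conda install -c conda-forge nco netcdf4
```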
Pipeline source code
Once the prerequisite libraries above have been installed, the pipeline itself may be installed by checking out the TERRAREF computing-pipeline repository. The relevant scripts for hyperspectral imagery are:
* Main script: `terraref.sh`
* JSON metadata → netCDF4 script: `JsonDealer.py`
Setup
The pipeline works with input from any location (directories, files, or stdin). Supply the raw image filename(s) (e.g., `meat_raw`), and the pipeline derives the ancillary filename(s) from this (e.g., `meat_raw.hdr`, `meat_metadata.json`). When a directory is specified without a specific filename, the pipeline processes all files with the suffix `_raw`.
```sh
mkdir ~/terraref
cd ~/terraref
git clone git@github.com:terraref/computing-pipeline.git
git clone git@github.com:terraref/documentation.git
```
Run the Hyperspectral Pipeline
```sh
terraref.sh -i ${DATA}/terraref/foo_raw -O ${DATA}/terraref
terraref.sh -I /projects/arpae/terraref/raw_data/lemnatec_field -O /projects/arpae/terraref/outputs/lemnatec_field
```
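In the examples above, `-i` appears to name a single raw image file, `-I` an input directory to scan for `_raw` files, and `-O` the output directory; consult the usage message of `terraref.sh` for the authoritative flag list.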