The TERRA-REF computing pipeline and its data management are handled by Clowder. The pipeline consists of 'extractors' that take a file or other piece of information and generate new files or information; each extractor is thus one step in the pipeline.
An extractor 'wraps' an algorithm in code that watches for files it can convert into new data products and phenotypes. Extractors run silently alongside the Clowder interface and databases, and can be configured to watch for specific file types and automatically execute operations on those files to process them and extract metadata.
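For illustration, a minimal extractor skeleton might look like the following sketch, modeled on the pyClowder sample extractors; `MyExtractor` and the metadata content are placeholders:

```python
import logging

from pyclowder.extractors import Extractor
import pyclowder.files


class MyExtractor(Extractor):
    """Skeleton extractor: watches for files and attaches metadata to them."""

    def __init__(self):
        Extractor.__init__(self)
        # parse command-line arguments and set up the RabbitMQ connection
        self.setup()
        logging.getLogger('pyclowder').setLevel(logging.INFO)

    def process_message(self, connector, host, secret_key, resource, parameters):
        # 'resource' describes the triggering file; local_paths[0] is a local copy
        input_file = resource['local_paths'][0]
        file_id = resource['id']

        # ... run the wrapped algorithm on input_file here ...
        result = {'status': 'processed'}  # placeholder output

        # wrap the result in a metadata document and attach it to the file
        metadata = self.get_metadata(result, 'file', file_id, host)
        pyclowder.files.upload_metadata(connector, host, secret_key, file_id, metadata)


if __name__ == "__main__":
    MyExtractor().start()
```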
If you want to add an algorithm to the TERRA-REF pipeline, or use the Clowder software to manage your own pipeline, extractors provide a way of automating and scaling your algorithms. This section provides instructions, including:
Setting up a pipeline development environment on your own computer.
Using the … (currently in beta testing)
Using the Clowder API (a minimal upload sketch follows this list)
Using the pyClowder library to add an analytical or technical component to the pipeline.
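As a sketch of the Clowder API route mentioned above, a file can be pushed into a dataset over plain HTTP using the uploadToDataset endpoint; the host URL, dataset ID, and API key below are placeholders:

```python
import requests

CLOWDER = "https://terraref.ncsa.illinois.edu/clowder"  # example host
DATASET_ID = "<dataset_id>"                             # placeholder
API_KEY = "<your_api_key>"                              # placeholder

# POST a file into an existing dataset
with open("scan.tif", "rb") as f:
    r = requests.post(
        f"{CLOWDER}/api/uploadToDataset/{DATASET_ID}",
        params={"key": API_KEY},
        files={"File": f},
    )
r.raise_for_status()
print(r.json())  # response includes the new file's id
```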
To make working with the TERRA-REF pipeline as easy as possible, the terrautils Python library was written. By importing this library in an extractor script, developers can minimize code duplication and follow standard practices for common tasks such as GeoTIFF creation and georeferencing. It also provides modules for managing metadata, downloading and uploading files, and wrapping the BETYdb and geostreams APIs.
Modules include:
BETYdb API wrapper
General extractor tools, e.g. for creating metadata JSON objects and generating folder hierarchies
Standard methods for creating output files, e.g. images from numpy arrays
GDAL general image tools
Geostreams API wrapper
InfluxDB logging API wrapper
LemnaTec-specific data management methods
Getting and cleaning metadata
Getting file lists
Standard sensor information resources
Geospatial metadata management
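A rough sketch of how a few of these modules fit together inside an extractor is shown below; the function names (get_terraref_metadata, geojson_to_tuples, create_geotiff) follow terrautils conventions, but exact signatures should be checked against the terrautils source:

```python
import numpy as np

from terrautils.metadata import get_terraref_metadata  # metadata cleaning
from terrautils.spatial import geojson_to_tuples       # geospatial helpers
from terrautils.formats import create_geotiff          # numpy array -> GeoTIFF

# 'all_md' stands in for the full dataset metadata downloaded from Clowder
all_md = {}  # placeholder

# extract the cleaned TERRA-REF metadata for the stereoTop sensor
md = get_terraref_metadata(all_md, 'stereoTop')

# convert the capture's GeoJSON bounding box to corner-coordinate tuples
gps_bounds = geojson_to_tuples(md['spatial_metadata']['stereoTop']['bounding_box'])

# write a georeferenced GeoTIFF from a numpy pixel array
pixels = np.zeros((100, 100, 3), dtype=np.uint8)  # stand-in for real image data
create_geotiff(pixels, gps_bounds, 'stereoTop_sample.tif')
```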
To keep code and algorithms broadly applicable, TERRA-REF is developing a series of science-driven packages that collect methods and algorithms generic with respect to pipeline inputs and outputs. That is, these packages should not refer to Clowder or extraction pipelines; instead they can be used in any application to manipulate data products. They are organized by sensor.
These packages will also include test suites that verify any changes remain consistent with previous outputs. The test directories double as examples of how to instantiate and use the science packages in actual code.
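For instance, a regression test in one of these suites might compare a package function's output against a stored reference; the package, function, and file names here are hypothetical:

```python
import numpy as np

# hypothetical science-package import; actual package names vary by sensor
from stereo_rgb import demosaic


def test_demosaic_matches_reference():
    """Verify that code changes do not alter previously validated output."""
    raw = np.load('tests/data/raw_sample.npy')          # stored raw input
    expected = np.load('tests/data/rgb_reference.npy')  # previously validated output
    np.testing.assert_allclose(demosaic(raw), expected)
```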
Extractors can be considered wrapper scripts that call methods in the science packages to do the work, while including the components needed to communicate with TERRA-REF's RabbitMQ message bus, process incoming data as it arrives, and upload outputs to Clowder. There should be no science-oriented code in the extractor repositories; implement that code in the science packages instead, so it is easier for future developers to leverage.
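In that pattern, the extractor's message handler stays thin. A hypothetical sketch follows; the stereo_rgb package and its demosaic and save_geotiff functions are illustrative, not real module names:

```python
import numpy as np

from pyclowder.extractors import Extractor
import pyclowder.files

# hypothetical science-package import; all science code lives there
from stereo_rgb import demosaic, save_geotiff


class StereoRGBExtractor(Extractor):
    """Thin wrapper: pipeline plumbing only, no science code."""

    def __init__(self):
        Extractor.__init__(self)
        self.setup()

    def process_message(self, connector, host, secret_key, resource, parameters):
        raw = np.fromfile(resource['local_paths'][0], dtype=np.uint8)
        rgb = demosaic(raw)                      # algorithm lives in the science package
        out_path = save_geotiff(rgb, 'rgb.tif')  # so does output formatting
        # upload the derived product back into the triggering dataset
        pyclowder.files.upload_to_dataset(connector, host, secret_key,
                                          resource['parent']['id'], out_path)


if __name__ == "__main__":
    StereoRGBExtractor().start()
```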
Each repository includes the extractors in the workflow chain corresponding to the named sensor:
stereo RGB camera (stereoTop in rawData, rgb prefix elsewhere)
FLIR infrared camera (flirIrCamera in rawData, ir prefix elsewhere)
laser 3D scanner (scanner3DTop in rawData, laser3d elsewhere)
Extractor development and deployment:
Development environments can be set up on your own computer or on project-hosted infrastructure.