Genomic Protocols

Genomic data includes whole-genome resequencing data from the HudsonAlpha Institute for Biotechnology, Alabama, for 384 samples from accessions in the sorghum Bioenergy Association Panel (BAP), and genotyping-by-sequencing (GBS) data from Kansas State University for 768 samples from a population of sorghum recombinant inbred lines (RILs).

Danforth Center genomics pipeline

Outlined below are the steps taken to create a raw VCF file from paired-end raw FASTQ files. This was done for each sequenced accession, so an HTCondor DAG workflow was written to streamline the processing of those ~200 accessions. While some CPU and memory parameters are included in the example steps below, those parameters varied from sample to sample, and the workflow has been honed to accommodate that variation. This pipeline is subject to modification based on software updates and changes to software best practices.

Software versions:

Preparing reference genome

Download Sorghum bicolor v3.1 from Phytozome

Generate:

BWA index:

fasta file index:

Sequence dictionary:
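The three reference artifacts above can be generated with BWA, SAMtools, and Picard. A sketch, assuming the FASTA has been downloaded from Phytozome; the filename and Picard jar path are placeholders:

```shell
# Placeholder filename for the Sorghum bicolor v3.1 reference from Phytozome
REF=Sbicolor_v3.1.fa

# BWA index (for alignment)
bwa index "$REF"

# FASTA index (for random access by samtools/GATK)
samtools faidx "$REF"

# Sequence dictionary (required by GATK/Picard tools)
java -jar picard.jar CreateSequenceDictionary R="$REF" O="${REF%.fa}.dict"
```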

Quality trimming and filtering of paired end reads
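A BBDuk invocation consistent with the parameters listed in the CoGe section of this page (k=23, mink=11, hdist=1, tpe, tbo, qtrim=rl, trimq=20, minlen=20); input/output filenames are placeholders, and adapters.fa is the adapter reference that ships with BBTools:

```shell
bbduk.sh in1=sample_R1.fastq.gz in2=sample_R2.fastq.gz \
    out1=sample_R1.trim.fastq.gz out2=sample_R2.trim.fastq.gz \
    ref=adapters.fa ktrim=r k=23 mink=11 hdist=1 tpe tbo \
    qtrim=rl trimq=20 minlen=20
```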

Aligning reads to the reference
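A BWA-MEM sketch matching the options named in the CoGe section (-M plus ID/PL/LB/SM read-group fields); sample names, thread count, and paths are placeholders:

```shell
# -M marks shorter split hits as secondary, for Picard compatibility
bwa mem -M -t 8 \
    -R '@RG\tID:sample01\tPL:ILLUMINA\tLB:lib1\tSM:sample01' \
    "$REF" sample_R1.trim.fastq.gz sample_R2.trim.fastq.gz \
    > sample01.sam
```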

Convert and Sort bam
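SAM-to-BAM conversion and coordinate sorting with SAMtools; thread count and filenames are illustrative:

```shell
samtools view -bS sample01.sam | samtools sort -@ 4 -o sample01.sorted.bam -
```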

Mark Duplicates
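A Picard MarkDuplicates sketch; filenames are placeholders:

```shell
java -jar picard.jar MarkDuplicates \
    I=sample01.sorted.bam \
    O=sample01.dedup.bam \
    M=sample01.dup_metrics.txt
```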

Index bam files
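Indexing the deduplicated BAM with SAMtools:

```shell
samtools index sample01.dedup.bam
```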

Find intervals to analyze
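Because this pipeline includes a realignment step, GATK 3.x syntax is assumed here. RealignerTargetCreator identifies intervals around putative indels; filenames are placeholders:

```shell
java -jar GenomeAnalysisTK.jar -T RealignerTargetCreator \
    -R "$REF" -I sample01.dedup.bam \
    -o sample01.intervals
```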

Realign
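IndelRealigner (GATK 3.x, assumed as above) then realigns reads over those intervals:

```shell
java -jar GenomeAnalysisTK.jar -T IndelRealigner \
    -R "$REF" -I sample01.dedup.bam \
    -targetIntervals sample01.intervals \
    -o sample01.realigned.bam
```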

Variant Calling with GATK HaplotypeCaller
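A per-sample HaplotypeCaller invocation in GVCF mode (GATK 3.x syntax assumed; filenames are placeholders):

```shell
java -jar GenomeAnalysisTK.jar -T HaplotypeCaller \
    -R "$REF" -I sample01.realigned.bam \
    --emitRefConfidence GVCF \
    -o sample01.g.vcf
```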

Above this point is the workflow for the creation of the gVCF files for this project. The following additional steps were used to create the Hapmap file.

Combining gVCFs with GATK CombineGVCFs

NOTE: This project has 363 gVCFs; multiple instances of CombineGVCFs, each with a unique subset of the gVCF files, were run in parallel to speed up this step. Below are examples.
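A sketch of one such parallel batch (GATK 3.x syntax assumed; sample and batch names are placeholders):

```shell
# One of several parallel batches, each combining a subset of the gVCFs
java -jar GenomeAnalysisTK.jar -T CombineGVCFs -R "$REF" \
    -V sample01.g.vcf -V sample02.g.vcf -V sample03.g.vcf \
    -o batch01.g.vcf
# Repeat for the remaining subsets, producing batch02.g.vcf, etc.
```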

Joint genotyping on gVCF files with GATK GenotypeGVCFs
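Joint genotyping across the combined batch gVCFs (GATK 3.x syntax assumed; filenames are placeholders):

```shell
java -jar GenomeAnalysisTK.jar -T GenotypeGVCFs -R "$REF" \
    -V batch01.g.vcf -V batch02.g.vcf \
    -o raw_variants.vcf
```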

Applying hard SNP filters with GATK VariantFiltration
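A VariantFiltration sketch; the filter expression shown is the commonly used GATK hard-filter starting point, not necessarily the exact thresholds used in this project:

```shell
java -jar GenomeAnalysisTK.jar -T VariantFiltration -R "$REF" \
    -V raw_variants.vcf \
    --filterExpression "QD < 2.0 || FS > 60.0 || MQ < 40.0" \
    --filterName "snp_hard_filter" \
    -o filtered_variants.vcf
```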

Filter and recode VCF with VCFtools
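A VCFtools sketch that drops filtered sites and recodes the VCF; the missingness threshold is a placeholder, not the project's actual value:

```shell
vcftools --vcf filtered_variants.vcf \
    --remove-filtered-all \
    --max-missing 0.9 \
    --recode --recode-INFO-all \
    --out final_snps
```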

Adapt VCF for use with Tassel5
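Tassel5 requires a position-sorted genotype file; one way to prepare the VCF is Tassel's own SortGenotypeFilePlugin. This is an assumption about the adaptation step, offered as a sketch:

```shell
run_pipeline.pl -Xmx8g -SortGenotypeFilePlugin \
    -inputFile final_snps.recode.vcf \
    -outputFile final_snps.sorted.vcf \
    -fileType VCF
```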

Convert VCF to Hapmap with Tassel5
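A Tassel5 export sketch converting the sorted VCF to Hapmap format; filenames are placeholders:

```shell
run_pipeline.pl -Xmx8g -fork1 \
    -vcf final_snps.sorted.vcf \
    -export final_snps \
    -exportType HapmapDiploid \
    -runfork1
```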

CoGe genomics pipeline

CoGe has integrated the tools that make up the Danforth Center's variant-calling pipeline into its point-and-click GUI, allowing users to reproduce a majority of the TERRA SNP analysis. Below, we detail how to run sequence data through CoGe's system.

  • If this is your first time, you will need to create a Genome.

    1. Under Tools, click Load Genome.

  • Under Tools, click Load Experiment.

  • Select Data: to use the TERRA data, click Community Data, or choose from CoGe’s other data options.

  • Select Options: this outlines CoGe’s choices for data processing and analysis. To reproduce the pipeline used to create the TERRA SNPs, reference the exact tools and parameters used in the Danforth analysis above and enter the appropriate values into the equivalent drop-downs or fields.

For the TERRA SNPs, the following settings were used:

FASTQ Format

  • Read Type: Paired-end

  • Encoding: phred33

Trimming

  • Trimmer: BBDuk

  • BBDuk parameters: k=23, mink=11, hdist=1, check mark both tpe and tbo, qtrim=rl, trimq=20, minlen=20, set trim adapters to both ends

Alignment

  • Aligner: BWA-MEM

  • BWA-MEM parameters: check mark -M, fill in Read Groups ID (identifier), PL (sequence platform), LB (library prep), SM (sample name)

SNP Analysis

  • Check mark Enable which expands this section

  • Method: GATK HaplotypeCaller (single-sample GVCF) with the default parameters; you can optionally enable Realign reads around INDELs

General Options

  • Check mark both options to add results to your notebook and receive an email when the pipeline has completed.

Describe Experiment: enter an experiment name (required); a data processing version, e.g. 1 for the first run; a Source (required; if using TERRA data, it is TERRA); and a Genome (required; as you type, CoGe will find your loaded genome, but be sure to verify the version and ID).
