Adding (aspatial) variance to soil moisture or saturation deficit

Adding within patch (aspatial) variance to soil moisture

Purpose: Used to account for the effect of sub-patch scale variance in soil moisture on decomposition, denitrification, and nitrification estimates (e.g., hot spots). See (and cite) Tague 2009, Biogeochemistry, for an application.

Approach: The denitrification, nitrification, and decomposition flux estimation routines all use a moisture scalar that adjusts estimates by available soil water. When an estimate of soil moisture standard deviation is available, these fluxes are computed by integrating over a normal distribution of soil moisture, given the current mean soil moisture and standard deviation. The standard deviation is computed as a function of mean soil moisture (theta) using a second-order polynomial, consistent with many observed fine-scale moisture mean-variance relationships (see Tague et al., WRR). Two optional soil default parameters define this relationship, and the soil moisture standard deviation is computed as

std = theta_mean_std_p1 * theta + theta_mean_std_p2 * theta^2
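
A minimal C sketch of this calculation follows; it is not the RHESSys source. The polynomial uses the theta_mean_std_p1 and theta_mean_std_p2 parameters named above, while the moisture scalar function, the bin count, the +/- 3 standard deviation range, and all numeric values are illustrative assumptions.

/* Sketch only: integrate a moisture scalar over a discretized normal
 * distribution of soil moisture with std given by the polynomial above. */
#include <math.h>
#include <stdio.h>

/* hypothetical moisture scalar; stands in for whatever function of theta
 * the decomposition/nitrification/denitrification routines use */
static double moisture_scalar(double theta)
{
    return theta * theta;  /* placeholder form */
}

/* std as a second-order polynomial of mean theta */
static double theta_std(double theta, double p1, double p2)
{
    return p1 * theta + p2 * theta * theta;
}

/* integrate the scalar over a discretized N(theta_mean, std);
 * falls back to the point estimate when std is zero */
static double integrated_scalar(double theta_mean, double std, int nbins)
{
    if (std <= 0.0)
        return moisture_scalar(theta_mean);

    double sum = 0.0, wsum = 0.0;
    for (int i = 0; i < nbins; i++) {
        double z = -3.0 + 6.0 * (i + 0.5) / nbins;  /* +/- 3 std bins */
        double theta = theta_mean + z * std;
        if (theta < 0.0) theta = 0.0;               /* keep theta physical */
        if (theta > 1.0) theta = 1.0;
        double w = exp(-0.5 * z * z);               /* unnormalized normal weight */
        sum  += w * moisture_scalar(theta);
        wsum += w;
    }
    return sum / wsum;
}

int main(void)
{
    double theta = 0.35;                            /* example mean soil moisture */
    double std = theta_std(theta, 0.2, 0.1);        /* example parameter values */
    printf("std = %f, scalar = %f\n", std, integrated_scalar(theta, std, 20));
    return 0;
}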

Note that these parameters are initialized to zero, which effectively turns off soil moisture variance (std = 0.0).

OPTIONAL PARMS: soil_default: theta_mean_std_p1, theta_mean_std_p2 (initial values 0.0, 0.0)

Adding (aspatial) variance to saturation deficit

Purpose: Used to account for the effect of sub-patch scale variance in water table (saturation deficit) depth on the computation of subsurface lateral fluxes and return flow estimates (saturation excess).

Approach: A worldfile variable defines the standard deviation (std), in m, for each patch. In the patch routines compute_varbased_flow and compute_varbased_return_flow, this standard deviation is used in the computation of lateral flux out and return flow, respectively. If the standard deviation is non-zero, flux estimates are computed by integrating over a discretized normal distribution with mean equal to the current saturation deficit and standard deviation defined by std (see the sketch at the end of this section). Std is constant through time: the assumption is that fine (sub-patch) scale topographic variation produces the variance in water table depth and does not change with time, unlike soil moisture (theta) variance, which changes with soil moisture. The -std flag on the command line triggers this behavior and causes an extra line (std) in the worldfile to be read. The -std flag also requires a scale factor that multiplies the std value from the worldfile; this can be used to perform sensitivity analysis on std.

OPTIONAL PARMS: none

STATE VARIABLES: Worldfile - std, placed immediately after rz_storage and before m; note that the command line flag -std must be used for this to be read correctly.

COMMAND LINE: -std scale_factor
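
Below is a minimal C sketch of the discretized integration described in the Approach above; it is not the actual compute_varbased_flow or compute_varbased_return_flow code. The flux function, bin count, and numeric values are hypothetical; only the ideas of multiplying the worldfile std by the -std scale factor and integrating over a normal distribution of saturation deficit come from the text.

/* Sketch only: integrate a saturation-deficit-dependent flux over a
 * discretized normal distribution.  world_std is the per-patch std read
 * from the worldfile; std_scale is the scale factor supplied with -std. */
#include <math.h>
#include <stdio.h>

/* hypothetical flux as a function of saturation deficit (m) */
static double flux_of_sat_deficit(double sat_deficit)
{
    double flux = 1.0 - sat_deficit;   /* placeholder: shallower water table -> more flux */
    return flux > 0.0 ? flux : 0.0;
}

/* integrate flux over a discretized N(sat_deficit_mean, std);
 * std is constant through time for a given patch */
static double varbased_flux(double sat_deficit_mean, double world_std,
                            double std_scale, int nbins)
{
    double std = world_std * std_scale;      /* apply the -std scale factor */
    if (std <= 0.0)
        return flux_of_sat_deficit(sat_deficit_mean);

    double sum = 0.0, wsum = 0.0;
    for (int i = 0; i < nbins; i++) {
        double z = -3.0 + 6.0 * (i + 0.5) / nbins;  /* +/- 3 std bins */
        double sd = sat_deficit_mean + z * std;
        if (sd < 0.0) sd = 0.0;                     /* no negative deficit */
        double w = exp(-0.5 * z * z);               /* unnormalized normal weight */
        sum  += w * flux_of_sat_deficit(sd);
        wsum += w;
    }
    return sum / wsum;
}

int main(void)
{
    /* e.g. worldfile std = 0.15 m, command line: -std 1.0 */
    printf("flux = %f\n", varbased_flux(0.5, 0.15, 1.0, 20));
    return 0;
}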