By Enrique M Saldarriaga and Beth Devine
The objective of this entry is to present the lessons we learned from a meta-analysis we conducted on the detection pattern of SARS-CoV-2. In the process, we found high statistical heterogeneity across the studies that persisted even after stratification by demographic and study-design characteristics. Although these results did not allow us to increase our knowledge of the shedding patterns of COVID-19, they prompted us to review the concepts, assumptions, and methods available to measure heterogeneity and how it affects the estimation of quantitative summaries.
In this post, we present our analytic process and reflections on the methods. We discuss the use of mean versus median and the crosswalk between them, key differences between fixed and random effects models, measures of heterogeneity, and analytic tools implemented with R. Along the way, we provide a tutorial of the methods used to conduct a meta-analysis.
SARS-CoV-2 window of detection
The window of SARS-CoV-2 detection provides key information for understanding the patterns of virus shedding and infectiousness, and for better implementing testing and isolation strategies.1,2 A diagnostic test conducted too soon or too late can lead to a false-negative result, increasing the likelihood of virus transmission.
Dr. Kieran Walsh et al.3 conducted a systematic review of studies that described the duration of virus detection. The authors included “any study that reports on the viral load or duration of viral detection or infectivity of COVID-19”, excluding studies without laboratory confirmation of COVID-19 by molecular testing (i.e., polymerase chain reaction, or PCR). They thus included cohort studies, cross-sectional studies, non-randomized clinical trials, and case series from various countries and age groups (adults and children). The viral samples came from the upper respiratory tract, the lower respiratory tract, and stool. From a narrative summary, the authors concluded that while the trajectory of SARS-CoV-2 viral load is relatively consistent over the course of the disease, the duration of infectivity is unclear.
We decided to meta-analyze the results of this well-conducted systematic review. To boost internal consistency across studies, we focused our meta-analysis solely on studies that reported upper respiratory tract samples.
Mean v. Median
To combine results, it is necessary to have consistency in the reported metric. Most of the studies reported a mean and standard deviation, but others reported a median with an interquartile range or a minimum–maximum range. We followed Wan et al4 to estimate the sample mean and standard deviation from the summary statistics reported by each study.
We employed one of two possible methods:
Method 1.
Mean ≈ (min + 2 × median + max)/4;  SD ≈ (max − min) / (2 × ɸ-1(P))
Where ɸ-1(·) is the inverse of the cumulative distribution function of the normal distribution centered at 0 with standard deviation 1 (the function `qnorm()` in R), and P is defined by P = (n − 0.375)/(n + 0.25), where n is the sample size.
Method 2.
Mean ≈ (IQ1 + median + IQ3)/3;  SD ≈ (IQ3 − IQ1) / (2 × ɸ-1(P))
Where IQ1 is the lower bound of the interquartile range, equivalent to the 25th percentile, and IQ3 the upper bound, equivalent to the 75th percentile. P is defined by P = (0.75n − 0.125)/(n + 0.25).
The underlying assumption of both methods is that the observations summarized by the median arise from a normal distribution, which can be a limitation. However, these methods improve upon commonly accepted conversion formulas (see Bland 20155 and Hozo et al 20056 for more details) by relaxing non-negativity assumptions and by using more stable and adaptive quantities to estimate the SD.
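To make this concrete, here is a minimal R sketch of both conversions following Wan et al;4 the function names and example values are ours.

```r
# Estimating a sample mean and SD from reported summary statistics,
# following Wan et al. (2014); function names and example values are ours.

# Method 1: study reports median, minimum, maximum, and sample size n
wan_minmax <- function(median, min, max, n) {
  mean_hat <- (min + 2 * median + max) / 4
  p <- (n - 0.375) / (n + 0.25)
  sd_hat <- (max - min) / (2 * qnorm(p))  # qnorm() = standard normal quantile function
  c(mean = mean_hat, sd = sd_hat)
}

# Method 2: study reports median, first and third quartiles, and sample size n
wan_iqr <- function(median, iq1, iq3, n) {
  mean_hat <- (iq1 + median + iq3) / 3
  p <- (0.75 * n - 0.125) / (n + 0.25)
  sd_hat <- (iq3 - iq1) / (2 * qnorm(p))
  c(mean = mean_hat, sd = sd_hat)
}

# Example: a study reporting a median of 14 days (IQR 10-18) among 45 patients
wan_iqr(median = 14, iq1 = 10, iq3 = 18, n = 45)
```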
Fixed vs random meta-analysis
The pooled mean is a weighted average, and the decision to use a fixed- or random-effects model directly affects how the study weights are generated. If we assume that the studies share a common effect, then it makes sense for the pooled mean to place more importance (i.e., weight) on the studies with the lowest uncertainty. In other words, the assumption that the true value (in direction and magnitude) is the same across all studies implies that observed differences are due to chance. On the contrary, if there is no prior knowledge suggesting a common effect, and rather each study provides an estimate of its own, then the weighting process should reflect that. The first alternative calls for a fixed-effects model and the latter for random effects. The random-effects assumption is less restrictive, as it acknowledges the variation in the true effects estimated by each study.7 Thus, the precision (i.e., the estimated uncertainty, expressed in the standard deviation) of the studies plays an important role, but so does the assumption (and/or knowledge) about the relationship across studies. See Tufanaru et al 20158 and Borenstein et al 20109 for a complete discussion of these two statistical models.
Fixed- v. random-effects model: comparative table of key characteristics and rationale.
| Criterion | Fixed-effects model | Random-effects model |
| --- | --- | --- |
| Goal of statistical inference (statistical generalizability of results) | Results apply only to the studies in the meta-analysis. | Results apply beyond the studies included in the analysis. |
| Statistical assumption regarding the parameter | There is one common, fixed parameter, and all studies estimate that same parameter. | There is no common parameter; studies estimate different parameters. |
| Nonstatistical assumption regarding the comparability of studies from a clinical point of view (participants, interventions, comparators, and outcomes) | It is reasonable to consider that the studies are similar enough and that there is a common effect. | The studies are different, and it is not reasonable to consider that there is a common effect. |
| Nature of the meta-analysis results | The summary effect is an estimate of the effect common to all studies included in the analysis. | The summary effect is an estimate of the mean of a distribution of true effects; it is not a shared common estimate, because there is not one. |
“The fixed-effects meta-analysis model’s total effect is an estimator of the combined effect of all studies. In contrast, the random-effect meta-analysis’s full effect is an estimator of the mean value of the true effect distribution” (Hackenberger 202010).
In our analysis, we determined that there was no common effect across studies due to differing study designs and populations studied.
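A small R sketch illustrates how the two assumptions change the weights; the within-study variances and the value of 𝜏2 below are made-up numbers for illustration.

```r
# Inverse-variance weights under each model, using made-up within-study variances
v <- c(0.5, 1, 2, 4, 8)

# Fixed effects: weight = 1 / within-study variance
w_fixed <- 1 / v

# Random effects: the between-study variance tau2 is added to every study
tau2 <- 5  # assumed here; in practice tau2 is estimated from the data
w_random <- 1 / (v + tau2)

# Normalized weights: the random-effects weights are visibly more even
round(w_fixed / sum(w_fixed), 2)    # 0.52 0.26 0.13 0.06 0.03
round(w_random / sum(w_random), 2)  # 0.27 0.25 0.21 0.16 0.11
```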
Heterogeneity
Statistical heterogeneity is a consequence of clinical and/or methodological differences across studies and determines the extent to which it is possible to assume that the true value found by each study is the same. Clinical differences include participant characteristics and intervention design and implementation; methodological differences include the definition and measurement of outcomes, procedures for data collection, and any other characteristic associated with the design of the study.
There are two main metrics we can use to summarize heterogeneity: the percentage of variance attributable to study heterogeneity (I2) and the variance of the true effects (𝜏2). I2 is possibly the most widely used metric. It builds on the chi-squared test – usually referred to as Cochran’s Q in the literature – of expected versus observed information, under the null hypothesis that the differences observed across studies are due to chance alone. A limitation of the Q test is that it provides a binary assessment and ignores the degree of heterogeneity, which is more relevant, as some variability in methods, procedures, and results across studies is expected.
Julian Higgins11 proposed I2, a newer metric that describes the percentage of total variation due to heterogeneity rather than chance (i.e., it uses the same rationale as the Q test). Formally, I2 = 100% × (Q − df)/Q, where df is the degrees of freedom. Negative values of Q − df are set to zero, so I2 is bounded between 0% and 100%. This metric should be interpreted in the context of the analysis and other factors, such as the characteristics of the studies and the magnitude and direction of the individual estimates. As a rule of thumb, an I2 higher than 50% may represent substantial heterogeneity and requires caution in the interpretation of the pooled values.
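As an illustration, Q and I2 can be computed by hand in R from a set of study estimates and their variances (hypothetical numbers below):

```r
# Cochran's Q and I2 from hypothetical study means and within-study variances
y <- c(12, 15, 9, 20, 14)        # study-level estimates (days)
v <- c(1.2, 2.5, 0.8, 3.0, 1.5)  # within-study variances

w  <- 1 / v                       # inverse-variance weights
mu <- sum(w * y) / sum(w)         # fixed-effects pooled mean
Q  <- sum(w * (y - mu)^2)         # Cochran's Q
df <- length(y) - 1               # degrees of freedom
I2 <- 100 * max(0, (Q - df) / Q)  # % of variation beyond chance, truncated at 0
```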
The other measure of heterogeneity is the variance of the true effects, 𝜏2. This metric is consistent with the random-effects assumption that there could be more than one true effect, each study providing an estimate of one of them. There are several ways to estimate 𝜏2; the most popular is the DerSimonian and Laird method, a non-iterative method-of-moments estimator. Its main limitation is that, unless the sampling variances are homogeneous (regardless of the number of studies included), it tends to underestimate 𝜏2. Viechtbauer 200512 provides a thorough assessment of the alternatives.
The interpretation of 𝜏2 is straightforward: it provides an estimate of the between-study variance of the true effects. It therefore helps to inform whether a quantitative summary makes sense; a large variance can render the mean meaningless. Further, the estimation of 𝜏2 carries uncertainty, and it is possible to estimate its confidence interval for a deeper assessment of the variance across studies.
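A minimal sketch of the DerSimonian–Laird estimator, reusing w, Q, and df from the snippet above:

```r
# DerSimonian-Laird moment estimator of tau2, reusing w, Q, and df from above
C    <- sum(w) - sum(w^2) / sum(w)  # scaling constant for the weights
tau2 <- max(0, (Q - df) / C)        # truncated at zero: a variance cannot be negative
tau  <- sqrt(tau2)                  # between-study SD of the true effects
```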
All these metrics measure statistical heterogeneity. However, it is up to the researcher to determine whether, even in the absence of statistical heterogeneity, the results of two or more studies should be combined into a single value. That assessment depends upon the clinical characteristics of the studies under analysis, specifically the population, place, and time; the interventions evaluated; and the outcomes measured. This is the main reason we excluded studies whose samples did not arise from the upper respiratory tract: pooling the results of structurally different studies would have been a mistake.
See Chapter 9 of the Cochrane Handbook for an introduction to the topic and Ioannidis J, JECP 200813 for an informative discussion on how to assess heterogeneity and bias in meta-analysis.
Confidence intervals and prediction intervals
Both the mean and the standard error in a meta-analysis are a function of the inverse-variance weights, which are estimated from the individual standard deviations and the assumption of heterogeneity in the true effect. Random-effects models tend to distribute weights more equally across studies than fixed-effects models, so the estimated standard error is higher, leading to wider confidence intervals (CI). In the extreme, high between-study variance can yield a pooled mean that closely resembles the arithmetic mean (regardless of study sample size or the precision of the estimates), due to the nearly equal distribution of weights across studies.14
If the pooled mean is denoted by μ and its standard error by SE(μ), then by the central limit theorem the 95% CI is estimated as μ ± 1.96 × SE(μ), where 1.96 is the 97.5th percentile of the standard normal distribution. The 95% CI provides information about the precision of the pooled mean, i.e., the uncertainty of the estimation. Borenstein et al 20109 present a helpful discussion of the rationale and steps involved in the estimation of the pooled mean and standard errors for both fixed- and random-effects models.
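For example, in R, using our pooled mean of 15.6 days and a standard error of about 1.7 days (backed out from our reported interval):

```r
mu <- 15.6  # pooled mean (days), from our metamean results
se <- 1.7   # standard error of the pooled mean, backed out from our reported CI
mu + c(-1, 1) * qnorm(0.975) * se  # 95% CI: approximately (12.3, 18.9)
```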
Under the random-effects assumption, we can estimate the prediction interval for a future study. Its derivation uses both the pooled standard error, SE(μ), and the between-study variance estimate, 𝜏2. The approximate 95% prediction interval for the estimate of a new study is given by:15
μ ± t𝜶,k−2 × (𝜏2 + SE(μ)2)0.5
Where 𝜶 is the level of significance, usually 5%; t𝜶,k−2 denotes the 100 × (1 − 𝜶/2)% percentile (97.5% when 𝜶 = 0.05) of the t-distribution with k − 2 degrees of freedom, where k is the number of studies included in the meta-analysis. The use of a t-distribution (instead of the normal used for the confidence interval) aims to reflect the uncertainty surrounding 𝜏2, hence the use of a distribution with heavier tails.
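Continuing the example in R, with 𝜏2 = 81.5 from our analysis and an assumed number of studies k:

```r
k    <- 20    # number of studies; assumed here for illustration
tau2 <- 81.5  # between-study variance from our analysis
# reusing mu and se from the confidence-interval snippet above
mu + c(-1, 1) * qt(0.975, df = k - 2) * sqrt(tau2 + se^2)
# wide and partly negative, echoing the interval reported in the conclusion
```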
Analysis
Our analysis of the findings from Walsh et al3 was implemented in R using the `meta` package.16 This library offers several alternatives for conducting a meta-analysis; two were particularly relevant for our study – `metagen` and `metamean`. Both base the weight calculation on inverse-variance methods; the former treats each individual value as a treatment effect (i.e., the difference in performance between two competing alternatives), while the latter assumes each value is a single mean. As shown below, the means and confidence intervals under both are similar: 13.9 days of detectability (95% CI 11.7, 16.7) using `metagen` and 15.6 days (95% CI 12.3, 18.9) using `metamean`. However, the heterogeneity estimates are very different. We present the results of both analyses below.
Our results of the pooled mean using `metagen`:
[Forest plot: pooled mean 13.9 days (95% CI 11.7, 16.7) under `metagen`]
Our results of the pooled mean using `metamean`:
[Forest plot: pooled mean 15.6 days (95% CI 12.3, 18.9) under `metamean`]
The main difference between the two methods is the assumption made about the uncertainty metric included in the data. While `metagen` assumes the metric is a standard error (SE, arising from a previous statistical analysis), `metamean` assumes it is a standard deviation (SD) and hence converts it using the sample size (n) of each study: SE = SD/n0.5. Therefore, the estimated confidence intervals for each study are wider under `metagen`, which gives the impression that all studies are more alike, reducing the estimated heterogeneity. The opposite happens under `metamean`: the confidence intervals are narrower in comparison, and therefore the estimated heterogeneity is higher.
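For reference, here is a sketch of the two calls; the data frame `dat` and its values are hypothetical, and argument names may vary slightly across versions of the `meta` package.

```r
library(meta)

# Hypothetical data: one row per study with sample size, mean (days), and SD
dat <- data.frame(
  study = c("A", "B", "C"),
  n     = c(40, 120, 75),
  mean  = c(12.1, 16.4, 18.0),
  sd    = c(5.3, 8.9, 7.2)
)

# metamean: takes the raw mean and SD and derives SE = SD / sqrt(n) internally
m1 <- metamean(n = n, mean = mean, sd = sd, studlab = study, data = dat, sm = "MRAW")

# metagen: takes a generic estimate and its standard error directly;
# feeding it the SD as seTE (as in the comparison above) widens the study-level CIs
m2 <- metagen(TE = mean, seTE = sd, studlab = study, data = dat)

summary(m1)
summary(m2)
```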
This difference in the uncertainty estimation under each method is also reflected in the weights. Under `metagen`, the fixed-effects model shows a more homogeneous distribution of weights compared to the high concentration displayed under `metamean`. This is because the narrower intervals under `metamean` suggest that studies like Lavezzo and Chen,3 for example, are very precise and therefore more important in the estimation. In contrast, under the random-effects models, the weights in `metamean` are almost the same for all studies, indicating that each arises from a different true-effect distribution, while under `metagen` the wider intervals suggest that some studies arise from similar true distributions and hence deserve higher weights. A homogeneous distribution of weights in the random-effects model under `metamean` is consistent with the notion that, under enough uncertainty, the pooled mean closely approximates the simple arithmetic mean.14
Conclusion
The function `metamean` is the appropriate option for our analysis because it is consistent with the information reported by the studies: a single mean and SD. We found that 99% of the variability is attributable to statistical heterogeneity (the I2 estimate) and that the standard deviation of the true effect is around 9 days (𝜏 = 81.50.5 ≈ 9). The mean duration of the detectable period is 15.6 days (95% CI 12.3, 18.9).
We believe that the level of heterogeneity found is likely a consequence of the marked differences among the types of studies included in the systematic review, which ranged from case series to non-randomized clinical trials. Further, Walsh et al3 collected data from March to May 2020, and the variability of the studies reflects the scarce information about COVID available at the time. From the prediction interval, we found that a future study would find a mean duration of the detectable period between −3 and 34 days, with 95% confidence. The impossibility of this result (i.e., a negative duration) is a consequence of the high level of study heterogeneity found in our analysis.
Given the level of heterogeneity, a quantitative approach is not a feasible option to summarize the results across studies. Even though these results did not allow us to expand our clinical knowledge of the shedding patterns of COVID, they created an interesting exercise that helped us reflect on the underlying assumptions and methods of meta-analysis. The code for this analysis is available on GitHub at https://github.com/emsaldarriaga/COVID19_DurationDetection. It includes all the steps presented in this entry, plus the data-gathering process using web scraping and the stratified analyses by type of publication, population, and country.
References
- Bedford J, Enria D, Giesecke J, et al. COVID-19: towards controlling of a pandemic. The Lancet. 2020;395(10229):1015-1018. doi:10.1016/S0140-6736(20)30673-5
- Cohen K, Leshem A. Suppressing the impact of the COVID-19 pandemic using controlled testing and isolation. Sci Rep. 2021;11(1):6279. doi:10.1038/s41598-021-85458-1
- Walsh KA, Jordan K, Clyne B, et al. SARS-CoV-2 detection, viral load and infectivity over the course of an infection. J Infect. 2020;81(3):357-371. doi:10.1016/j.jinf.2020.06.067
- Wan X, Wang W, Liu J, Tong T. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Med Res Methodol. 2014;14(1):135. doi:10.1186/1471-2288-14-135
- Bland M. Estimating Mean and Standard Deviation from the Sample Size, Three Quartiles, Minimum, and Maximum. Int J Stat Med Res. 2015;4(1):57-64. doi:10.6000/1929-6029.2015.04.01.6
- Hozo SP, Djulbegovic B, Hozo I. Estimating the mean and variance from the median, range, and the size of a sample. BMC Med Res Methodol. 2005;5(1):13. doi:10.1186/1471-2288-5-13
- Serghiou S, Goodman SN. Random-Effects Meta-analysis: Summarizing Evidence With Caveats. JAMA. 2019;321(3):301-302. doi:10.1001/jama.2018.19684
- Tufanaru C, Munn Z, Stephenson M, Aromataris E. Fixed or random effects meta-analysis? Common methodological issues in systematic reviews of effectiveness. JBI Evid Implement. 2015;13(3):196-207. doi:10.1097/XEB.0000000000000065
- Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. A basic introduction to fixed-effect and random-effects models for meta-analysis. Res Synth Methods. 2010;1(2):97-111. doi:10.1002/jrsm.12
- Hackenberger BK. Bayesian meta-analysis now – let’s do it. Croat Med J. 2020;61(6):564-568. doi:10.3325/cmj.2020.61.564
- Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557-560. doi:10.1136/bmj.327.7414.557
- Viechtbauer W. Bias and Efficiency of Meta-Analytic Variance Estimators in the Random-Effects Model. J Educ Behav Stat. 2005;30(3):261-293. doi:10.3102/10769986030003261
- Ioannidis JPA. Interpretation of tests of heterogeneity and bias in meta-analysis. J Eval Clin Pract. 2008;14(5):951-957. doi:10.1111/j.1365-2753.2008.00986.x
- Imrey PB. Limitations of Meta-analyses of Studies With High Heterogeneity. JAMA Netw Open. 2020;3(1):e1919325. doi:10.1001/jamanetworkopen.2019.19325
- Higgins JPT, Thompson SG, Spiegelhalter DJ. A re-evaluation of random-effects meta-analysis. J R Stat Soc Ser A Stat Soc. 2009;172(1):137-159. doi:10.1111/j.1467-985X.2008.00552.x
- Balduzzi S, Rücker G, Schwarzer G. How to perform a meta-analysis with R: a practical tutorial. Evid Based Ment Health. 2019;22(4):153-160. doi:10.1136/ebmental-2019-300117