Highlights
- Reporting errors and reporting biases are relevant concerns for innovation research.
- Reporting errors are found in 45% of all articles and 4% of all tests.
- Discontinuities at conventional thresholds of statistical significance indicate reporting biases.
- Uncertainty due to rounding of published results is taken into account.
Abstract: Errors and biases in published results compromise the reliability of empirical research, posing threats to the cumulative research process and to evidence-based decision making. We provide evidence on reporting errors and biases in innovation research. We find that 45% of the articles in our sample contain at least one result for which the provided statistical information is not consistent with the reported significance levels. In 25% of the articles, at least one strong reporting error is diagnosed, where a statistically non-significant finding becomes significant or vice versa using the common significance threshold of 0.1. The error rate at the test level is small, with 4.0% of tests exhibiting any error and 1.4% showing strong errors. We also find systematically more marginally significant than marginally non-significant findings at the 0.05 and 0.1 thresholds of statistical significance. These discontinuities indicate the presence of reporting biases. Exploratory analysis suggests that the discontinuities are related to authors' affiliations and, to a lesser extent, to the article's rank in the issue and the style of reporting.
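The consistency check described in the abstract hinges on rounding: a reported coefficient and standard error each stand for an interval of possible unrounded values, and a result should be flagged as an error only if no value in those intervals can reproduce the reported significance level. The following is a minimal, hypothetical sketch of such a check; the function names, the two-decimal rounding, and the large-sample critical value of 1.645 for the 0.1 threshold are illustrative assumptions, not the authors' actual procedure.

```python
# Hypothetical sketch of a rounding-aware significance consistency check.
# A reported (coefficient, standard error) pair is treated as consistent
# if ANY unrounded pair inside the rounding intervals matches the
# reported significance status. All names/thresholds are illustrative.

def rounding_interval(x: float, decimals: int) -> tuple[float, float]:
    """Interval of true values that would round to the reported x."""
    half = 0.5 * 10 ** (-decimals)
    return x - half, x + half

def t_ratio_bounds(coef: float, se: float, decimals: int) -> tuple[float, float]:
    """Smallest and largest |t| consistent with the rounded report."""
    c_lo, c_hi = rounding_interval(coef, decimals)
    s_lo, s_hi = rounding_interval(se, decimals)
    s_lo = max(s_lo, 1e-12)          # standard errors are positive
    abs_lo = 0.0 if c_lo < 0 < c_hi else min(abs(c_lo), abs(c_hi))
    abs_hi = max(abs(c_lo), abs(c_hi))
    return abs_lo / s_hi, abs_hi / s_lo

def consistent(coef: float, se: float, decimals: int,
               reported_significant: bool, crit: float = 1.645) -> bool:
    """True if some unrounded value could yield the reported status.
    crit ~ two-sided 0.1 critical value for large samples (assumption)."""
    t_min, t_max = t_ratio_bounds(coef, se, decimals)
    return t_max >= crit if reported_significant else t_min < crit
```

For example, a coefficient of 0.33 with a standard error of 0.20 (both rounded to two decimals) admits |t| ratios between roughly 1.59 and 1.72, so it is consistent with being reported either as significant or as non-significant at the 0.1 level; only reports that no point in the interval can support would be counted as errors, which is why the rounding step matters for the 4.0% error rate cited above.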