- "A national experiment reveals where a growth mindset improves achievement" by Yeager and a large group of coauthors (2019), Nature. An extremely interesting paper on the effect of a short online growth mindset intervention on grades and enrolment in advanced mathematics among US school students. From a technical point of view, I found the whole data-analysis process very interesting: "Confidence in the conclusions of this study comes from independent data collection and processing, pre-registration of analyses, and corroboration of results by a blinded Bayesian analysis." (Yeager et al., 2019)
- "Civic honesty around the globe" by Cohn et al. (2019), Science. A puzzling result from a large experiment across 40 countries: people are more likely to return lost wallets that contain cash than wallets without it. Experts were unable to predict these results. Further research is needed to better understand the reasons behind them and the generalisability of their implications.
- "A standardized citation metrics author database annotated for scientific field" by Ioannidis et al. (2019), PLoS Biology. The study develops a large citation database and shows that a substantial share of authors' citations comes from self-citations or citations by co-authors. In extreme cases, this share can reach 94%, raising the question of how accurate citation indices really are (see a nice summary and discussion in Nature). The paper reminded me of a paper by Dominik Heinisch et al. (2016), who show that patterns of knowledge diffusion, measured by citations from patents to non-patent literature, change drastically once individual and organizational self-citations are excluded. Dominik is my co-author on another paper, and we worked in the same research group. How fair is it for me to cite his paper?
Abstract: Errors and biases in published results compromise the reliability of empirical research, posing threats to the cumulative research process and to evidence-based decision making. We provide evidence on reporting errors and biases in innovation research. We find that 45% of the articles in our sample contain at least one result for which the reported statistical information is not consistent with the reported significance level. In 25% of the articles, at least one strong reporting error is diagnosed, where a statistically non-significant finding becomes significant or vice versa at the common significance threshold of 0.1. The error rate at the test level is very small, with 4.0% of tests exhibiting any error and 1.4% showing strong errors. We also find systematically more marginally significant findings than marginally non-significant findings at the 0.05 and 0.1 thresholds of statistical significance. These discontinuities indicate the presence of reporting biases. Exploratory analysis suggests that the discontinuities are related to authors’ affiliations and, to a lesser extent, to the article’s rank in the issue and the style of reporting.
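The kind of consistency check the abstract describes can be sketched in a few lines: recompute a two-sided p-value from a reported coefficient and standard error (assuming a z-test, which is a simplification of the paper's actual procedure; the function names here are illustrative, not the authors' code) and compare it with the reported significance claim.

```python
import math

def two_sided_p(coef, se):
    """Two-sided p-value for a z-test of coef against zero."""
    z = abs(coef / se)
    # Standard normal CDF via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def is_consistent(coef, se, reported_significant, alpha=0.1):
    """Flag a (strong) reporting error: the recomputed p-value and the
    reported significance claim disagree at threshold alpha."""
    return (two_sided_p(coef, se) < alpha) == reported_significant

# Example: coef = 2.0, se = 1.0 gives z = 2, p ≈ 0.0455,
# so a claim of significance at alpha = 0.05 is consistent,
# while the same claim for coef = 1.0, se = 1.0 (p ≈ 0.32) is not.
```

Applying such a check to every reported test in a sample of articles yields the test-level and article-level error rates the abstract reports.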
Showing Life Opportunities: Increasing opportunity-driven entrepreneurship and STEM careers through online courses in schools.
"How might a government encourage more opportunity-led entrepreneurship and science-led innovation careers at a large scale? This question was the starting point that led us to research why young people are not choosing these careers. Perhaps young people do not have the relevant skills and knowledge? However, it seems that even if young people do have the right skills, they might not believe they can choose these career paths."
Read more on the IGL website.