Igor Asanov

Weekly Links 6th of October

9/6/2019

0 Comments

 
  • "Girls’ comparative advantage in reading can largely explain the gender gap in math-related fields" by Breda and Napp (2019), PNAS. Very interesting paper that the difference in reading abilities  between boys and girls  in early age can explain the gender gap in math-career and  intentions. I am not 100% sure what is policy implication given the complexity of the issue e.g. family planning, but authors argue "to better inform students regarding the returns to different fields of study, something that is likely to trigger large effects on educational choices". Indeed,  informational treatment can improve job prospects in general population  and Breda et al. (2018) show that in girls. However, do we improve them or bias?
  • "A project-management tool from the tech industry could benefit your lab" by David Adam (2019), Nature.  A friend of mine pointed out to me a "Scrum" as  useful tool to for project management used in the industry. It is interesting to see that advocates of this approach in science. 

Weekly Links 24th of August

8/24/2019

0 Comments

 
  • "A national experiment reveals where a growth mindset improves achievement"  by Yeagar and large group of coauthors (2019), Nature.  Extremely interesting paper on the effect of a short online growth mindset intervention on grades and enrolment to advanced mathematics among US school students.  From technical point of view, I found very interesting  the whole process of data analysis: "Confidence in the conclusions of this study comes from independent data collection and processing, pre-registration of analyses, and corroboration of results by a blinded Bayesian analysis."  (Yeagar et al., 2019) 

  • "Civic honesty around the globe"  by Cohn et al. (2019), Science.  Puzzling result in large experiment across 40 countries. People are more likely to return the lost wallets with cash than without cash in it. Experts were not capable to predict these results. Further research is needed to better understand the reason for these results and generalisability of implication.
  • "A standardized citation metrics author database annotated for scientific field" by  Ioannidis et al. (2019), PLoS biology.  The study develops a large citation database and shows that a large number of the authors' citations comes from self-citations or citations by co-authors. In extreme cases, it can reach 94% raising the question to what extent citation indexes are accurate (see nice summary and discussion in Nature). The paper reminded me of the paper by Dominik Heinisch et al. (2016). They show that patterns of knowledge diffusion measured by citations of patents to non-patent literature changes drastically once one excludes individual and organizational self-citation. ​ Dominik is my co-author in another paper and we worked  in the same research group.  How fair is it for me to cite his paper? 

Reporting errors and biases in published empirical findings: Evidence from innovation research

6/8/2019

0 Comments

 
Highlights
  • Reporting errors and reporting biases are relevant concerns for innovation research.
  • Reporting errors are found in 45% of all articles and 4% of all tests.
  • Discontinuities at conventional thresholds of statistical significance indicate reporting biases.
  • Uncertainty due to rounding of published results is taken into account.
Abstract: Errors and biases in published results compromise the reliability of empirical research, posing threats to the cumulative research process and to evidence-based decision making. We provide evidence on reporting errors and biases in innovation research. We find that 45% of the articles in our sample contain at least one result for which the provided statistical information is not consistent with reported significance levels. In 25% of the articles, at least one strong reporting error is diagnosed where a statistically non-significant finding becomes significant or vice versa using the common significance threshold of 0.1. The error rate at the test level is very small with 4.0% exhibiting any error and 1.4% showing strong errors. We also find systematically more marginally significant findings compared to marginally non-significant findings at the 0.05 and 0.1 thresholds of statistical significance. These discontinuities indicate the presence of reporting biases. Explorative analysis suggests that discontinuities are related to authors’ affiliations and to a lesser extent the article’s rank in the issue and the style of reporting. 
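
To illustrate the kind of consistency check the abstract describes, here is a minimal Python sketch. It is not the authors' actual procedure; the function name, the assumed degrees of freedom (df=1000), and the example numbers are all hypothetical. The idea is simply that a coefficient and standard error published with a fixed number of decimals are compatible with a whole range of t-ratios, so one can only ask whether a claimed significance level is possible at all within the rounding bounds.

from scipy import stats

def significance_consistent(coef, se, decimals, alpha=0.05, df=1000):
    # Hypothetical helper: can a coefficient/standard-error pair, each
    # reported rounded to `decimals` decimals, be significant at `alpha`
    # in a two-sided t-test with assumed `df` degrees of freedom?
    half = 0.5 * 10 ** (-decimals)                      # maximum rounding error
    t_max = (abs(coef) + half) / max(se - half, 1e-12)  # largest |t| compatible with rounding
    t_min = max(abs(coef) - half, 0) / (se + half)      # smallest |t| compatible with rounding
    p_min = 2 * stats.t.sf(t_max, df)                   # smallest possible p-value
    p_max = 2 * stats.t.sf(t_min, df)                   # largest possible p-value
    return (p_min, p_max), p_min <= alpha

# Illustrative numbers only: a coefficient of 0.12 with standard error 0.06,
# both reported to two decimals, may or may not be significant at the 5% level.
(p_lo, p_hi), possible = significance_consistent(0.12, 0.06, decimals=2)
print(f"p-value range: [{p_lo:.3f}, {p_hi:.3f}]; significance at 5% possible: {possible}")

In this toy case the compatible p-values run from roughly 0.02 to 0.08, so the rounded figures alone cannot settle significance at the 5% level, which is why the paper treats rounding uncertainty explicitly before flagging a result as an error.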

Read more...

Showing Life Opportunities: Increasing opportunity-driven entrepreneurship and STEM careers through online courses in schools.

4/26/2019

0 Comments

 

"How might a government encourage more opportunity-led entrepreneurship and science-led innovation careers at a large scale? This question was the starting point that led us to begin some research to consider why the youth are not choosing these careers. Perhaps young people not have relevant skills and knowledge? However, it seems that even if young people do have the right skills, they might not believe they can choose these career paths."
Read more on the IGL website.




    About this Blog

    I would like to share some random thoughts on research topics that I find interesting and on my research activity.

