Staff profile

Professor Geoff Cumming


Emeritus Professor

College of Science, Health and Engineering

School of Psychology and Public Health

Department of Psychology and Counselling

Melbourne (Bundoora)


B Sc, Dip Ed Monash, DPhil Oxford.



Membership of professional associations

Fellow, Association for Psychological Science


Brief profile

Psychological Science, perhaps the world’s top empirical journal in psychology, introduced drastically revised author submission guidelines on 1 January 2014. The new guidelines, together with an explanation of the changes, are available online.

Eric Eich, the journal's then editor, commissioned me to write this tutorial article to support the changes:

Cumming, G. (2014). The New Statistics: Why and How. Psychological Science, 25, 7-29.

I hope my statistics textbooks will change the world.

My most recent book is an introductory textbook that assumes no previous statistics knowledge. My co-author is Bob Calin-Jageman. The book takes an estimation approach from the start, and also introduces and explains Open Science--the new techniques needed to increase the replicability and trustworthiness of research. Later it explains statistical significance testing and cautions about its problems. It is the first introductory statistics text that presents the new statistics and Open Science, and does so throughout.

Cumming, G., & Calin-Jageman, R. (2017). Introduction to The New Statistics: Estimation, Open Science, and Beyond. New York: Routledge.

The book was released in October 2016. Information, and downloads of the Contents, Preface, and Chapter 1, are available from the publisher's website. Further information, and our statistics blog, are also available online. Many videos, extra exercises, datasets, quizzes, teaching and learning guides, and other goodies are provided to go with the book.

My earlier book also took an estimation approach, after explaining why that's much better than using statistical significance testing. It includes three chapters on meta-analysis.

Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. New York: Routledge. The book's Preface, Contents, and a sample chapter can be downloaded from the publisher's website.

I retired in January 2008, mainly to write these books and develop the software that goes with them. The software is ESCI ("ESS-key", Exploratory Software for Confidence Intervals). There are different versions of ESCI for the two books, and both versions can be downloaded free of charge. ESCI and the books are intended to support better understanding of the new statistics, and their use by researchers and students in a wide range of disciplines.

I may be interested in visits to interesting labs in interesting places. (I can offer research talks and various statistics workshops, as given for example at many APA and APS Conventions.)

Psychology and many other disciplines need to improve the way they analyse data.  A vital first step is to use, as much as possible, estimation--meaning confidence intervals--instead of null hypothesis significance testing (NHST) and p values.  NHST is a deeply flawed statistical approach, and the current widespread obsession with NHST, and reliance on NHST, is damaging to research progress. 
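The estimation approach advocated here can be illustrated with a minimal sketch (my own example, not from the profile, using simulated data and arbitrary group parameters): instead of reporting only a p value, report the effect size with its 95% confidence interval.

```python
# Illustrative sketch: estimating a mean difference with a 95% CI,
# rather than reporting only a p value. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 32
control = rng.normal(50, 10, n)    # simulated control-group scores
treatment = rng.normal(55, 10, n)  # simulated treatment-group scores

diff = treatment.mean() - control.mean()
se = np.sqrt(control.var(ddof=1) / n + treatment.var(ddof=1) / n)
df = 2 * n - 2                      # pooled df for equal n, equal variance
t_crit = stats.t.ppf(0.975, df)     # critical t for a 95% interval
ci = (diff - t_crit * se, diff + t_crit * se)
print(f"Estimated difference: {diff:.1f}, 95% CI [{ci[0]:.1f}, {ci[1]:.1f}]")
```

The interval conveys both the best point estimate of the effect and the precision of that estimate, which a lone p value does not.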

I refer to effect sizes, confidence intervals, and meta-analysis as the new statistics not because the techniques are new, but because adopting them would for most researchers be very new, and a major change in thinking.  Such a change is highly desirable, and could greatly improve our research.  The highly influential American Psychological Association Publication Manual now states that researchers should "Wherever possible, base discussion and interpretation of results on point and interval estimates" (p. 34).  That's unequivocal support for the new statistics.

My main recent research has been in the area of statistical cognition, which is the study of how people understand--or misunderstand--statistical concepts, and various different ways to present the results of statistical analyses.  I advocate the evidence-based practice of statistics, meaning that our selection of a statistical technique should be supported by cognitive evidence that people understand it well.

I am especially interested in replication, which is the topic of Chapter 5 in the first book.  One of many reasons that CIs are better than p values is that CIs generally give quite good information about what is likely to happen on replication of an experiment, whereas a p value gives almost no information about replication.  The dance of the p values illustrates how p values vary enormously with replication, thus indicating how terribly uninformative they are.
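The dance of the p values is easy to demonstrate by simulation. The sketch below is my own illustration (not taken from ESCI or the books): it replicates the identical two-group experiment many times, with the true effect held fixed, and shows how widely p varies from replication to replication.

```python
# Illustrative sketch: the "dance of the p values". The same experiment,
# with a fixed true effect, is replicated 25 times; p varies enormously.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p_values = []
for _ in range(25):                   # 25 identical replications
    a = rng.normal(0.0, 1.0, 32)      # control group
    b = rng.normal(0.5, 1.0, 32)      # treatment group, true effect d = 0.5
    p_values.append(stats.ttest_ind(a, b).pvalue)

print(f"min p = {min(p_values):.4f}, max p = {max(p_values):.3f}")
```

With moderate power, some replications give p well below .05 and others give p far above it, even though every experiment estimates exactly the same true effect.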


Recent publications


Cumming, G., & Calin-Jageman, R. (2017). Introduction to the new statistics: Estimation, open science, and beyond. New York: Routledge.

Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. New York: Routledge.

Software package

Cumming, G. (2001-2016). ESCI, Exploratory Software for Confidence Intervals [computer software]. Available for free download.


At YouTube, search for 'Geoff Cumming' and 'the new statistics' to find many videos about the new statistics, including the dance of the p values.

The Association for Psychological Science professionally recorded and edited 6 videos of a new-statistics workshop I gave at the APS Convention in San Francisco in May 2014. They are available online.

Refereed publications (journal articles and book chapters)

In the citations below, * denotes that there is an ESCI module available for free download, to accompany this article.

Stukas, A., & Cumming, G. (2014). Interpreting effect sizes: Towards a quantitative cumulative social psychology. European Journal of Social Psychology, 44, 711-722.

Fidler, F., & Cumming, G. (2014). Yes, but don't underestimate estimation: Reply to Morey, Rouder, Verhagen, and Wagenmakers (2014). Psychological Science, 25, 1291-1292.

Cumming, G. (2014). The New Statistics: Why and How. Psychological Science, 25, 7-29.

Cumming, G. (2013). Cohen’s d needs to be readily interpretable: Comment on Shieh (2013). Behavior Research Methods. Published online: 04 Sep 2013. doi: 10.3758/s13428-013-0392-4

Cumming, G. (2013). The new statistics: A how-to guide. Australian Psychologist, 48, 161-170. doi: 10.1111/ap.12018

Michael, R. B., Newman, E. J., Vuorre, M., Cumming, G., & Garry, M. (2013). On the (non)persuasive power of a brain image. Psychonomic Bulletin & Review, published online 12 Feb 2013. doi: 10.3758/s13423-013-0391-6

Fidler, F., & Cumming, G. (2013). Effect size estimation and confidence intervals. In J. A. Schinka & W. F. Velicer (Eds.) Handbook of psychology. Vol. 2: Research methods in psychology (2nd ed.) (pp. 142-163). Hoboken, NJ: Wiley.

Tressoldi, P. E., Giofré, D., Sella, F., & Cumming, G. (2013). High Impact = High Statistical Standards? Not Necessarily So. PLoS ONE, 8(2):e56180. doi:10.1371/journal.pone.0056180

Abbott, J. D., Cumming, G., Fidler, F., & Lindell, A. K. (2012). The perception of positive and negative facial expressions in unilateral brain-damaged patients: A meta-analysis. Laterality, iFirst, 1-23. doi:10.1080/1357650X.2012.703206 

Vaux, D. L., Fidler, F., & Cumming, G. (2012). Replicates and repeats—What is the difference and is it significant? EMBO reports, 13, 291-296. doi:10.1038/embor.2012.36

Cumming, G., Fidler, F., Kalinowski, P., & Lai, J. (2012). The statistical recommendations of the American Psychological Association Publication Manual: Effect sizes, confidence intervals, and meta-analysis. Australian Journal of Psychology, 64, 138-146. doi:10.1111/j.1742-9536.2011.00037.x 

Lai, J., Fidler, F., & Cumming, G. (2012). Subjective p intervals: Researchers underestimate the variability of p values over replication. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 8, 51-62. doi:10.1027/1614-2241/a000037

Cumming, G., & Fidler, F. (2011). From hypothesis testing to parameter estimation: An example of evidence-based practice in statistics. In A. T. Panter & S. Sterba (Eds.) Handbook of Ethics in Quantitative Methodology (pp. 293-312). New York: Routledge.

Cumming, G. (2010). p values versus confidence intervals as warrants for conclusions that results will replicate. In B. Thompson & R. Subotnik (Eds.) Methodologies for Conducting Research on Giftedness (pp. 53-69). Washington, DC: APA Books.

Cumming, G., & Fidler, F. (2010). Effect sizes and confidence intervals. In G. R. Hancock & R. O. Mueller (Eds.) The reviewer’s guide to quantitative methods in the social sciences (pp. 79-91). London: Routledge.

Coulson, M., Healey, M., Fidler, F., & Cumming, G. (2010). Confidence intervals permit, but do not guarantee, better inference than statistical significance testing. Frontiers in Quantitative Psychology and Measurement, 1:26. doi:10.3389/fpsyg.2010.00026

Cumming, G. (2010). Replication, p-rep, and confidence intervals: Comment prompted by Iverson, Wagenmakers, and Lee (2010); Lecoutre, Lecoutre, and Poitevineau (2010); and Maraun and Gabriel (2010). Psychological Methods, 15, 192-198. doi: 10.1037/a0019521

Kalinowski, P., Lai, J., Fidler, F., & Cumming, G. (2010). Qualitative research: An essential part of statistical cognition research. Statistics Education Research Journal, 9(2), 22-34.

Speirs-Bridge, A., Fidler, F., McBride, M., Flander, L., Cumming, G., & Burgman, M. (2010). Reducing overconfidence in the interval judgments of experts. Risk Analysis, 30, 512-523. doi: 10.1111/j.1539-6924.2009.01337.x

Cumming, G. (2009). Inference by eye: Reading the overlap of independent confidence intervals. Statistics in Medicine, 28, 205-220.

Cumming, G., & Fidler, F. (2009). Confidence intervals: Better answers to better questions. Zeitschrift für Psychologie / Journal of Psychology, 217, 15-26.

Finch, S., & Cumming, G. (2009). Putting research in context: Understanding confidence intervals from one or more studies. Journal of Pediatric Psychology, 34, 903-916. doi: 10.1093/jpepsy/jsn118 *

Cumming, G. (2008). Confidence intervals. In G. Ritzer (Ed.) The Blackwell concise encyclopedia of sociology (pp. 79-80). Oxford, UK: Blackwell.

Beyth-Marom, R., Fidler, F., & Cumming, G. (2008). Statistical cognition: Towards evidence-based practice in statistics and statistics education. Statistics Education Research Journal, 7, 20-39.

Cumming, G. (2008). Replication and p intervals: p values predict the future only vaguely, but confidence intervals do much better. Perspectives on Psychological Science, 3, 286-300. *

Faulkner, C., Fidler, F., & Cumming, G. (2008). The value of RCT evidence depends on the quality of statistical analysis. Behaviour Research and Therapy, 46, 270-281.

Fidler, F., & Cumming, G. (2008). The new stats: Attitudes for the twenty-first century. In J.W. Osborne (Ed.). Best practice in quantitative methods (pp. 1-12). Thousand Oaks, CA: Sage.

Fidler, F., Faulkner, S., & Cumming, G. (2008). Analyzing and presenting outcomes: Focus on effect size estimates and confidence intervals. In A. M. Nezu & C. M. Nezu (Eds.) Evidence-based outcome research: A practical guide to conducting randomized controlled trials for psychosocial interventions (pp. 315-334). New York: OUP.

Kalinowski, P., Fidler, F., & Cumming, G. (2008). Overcoming the inverse probability fallacy: A comparison of two teaching interventions. Methodology, 4, 152-158. doi: 10.1027/1614-2241.4.4.152

Velicer, W. F., Cumming, G., Fava, J. L., Rossi, J. S., Prochaska, J. O., & Johnson, J. (2008). Theory testing using quantitative predictions of effect size. Applied Psychology: An International Review, 57, 589-608.

Cumming, G. (2007). Confidence intervals. In G. Ritzer (Ed.) The Blackwell encyclopedia of sociology (vol. II, pp. 656-659). Oxford, UK: Blackwell.

Cumming, G. (2007). Inference by eye: Pictures of confidence intervals and thinking about levels of confidence. Teaching Statistics, 29, 89-93.  *

Cumming, G., Fidler, F., Leonard, M., Kalinowski, P., Christiansen, A., Kleinig, A., Lo, J., McMenamin, N., & Wilson, S. (2007). Statistical reform in psychology: Is anything changing? Psychological Science, 18, 230-232.

Cumming, G., Fidler, F., & Vaux, D. L. (2007). Error bars in experimental biology. Journal of Cell Biology, 177, 7-11.

Fidler, F. & Cumming, G. (2007). Lessons learned from statistical reform efforts in other disciplines. Psychology in the Schools, 44, 441-449.

Cumming, G., & Maillardet, R. (2006). Confidence intervals and replication: Where will the next mean fall? Psychological Methods, 11, 217-227. *

Fidler, F., Burgman, M., Cumming, G., Buttrose, R. & Thomason, N. (2006). Impact of criticisms of hypothesis significance testing on statistical reporting practices in conservation biology. Conservation Biology, 20, 1539-1544.

Belia, S., Fidler, F., Williams, J., & Cumming, G. (2005). Researchers misunderstand confidence intervals and standard error bars. Psychological Methods, 10, 389-396.

Cumming, G. (2005). Understanding the average probability of replication. Comment on Killeen (2005). Psychological Science, 16, 1002-1004. *

Di Stefano, J., Fidler, F., & Cumming, G. (2005). Effect size estimates and confidence intervals: An alternative focus for the presentation and interpretation of ecological data. In A. R. Burk (Ed.) New trends in ecology research (pp. 71-102). New York: Nova Science Publishers.

Fidler, F., Cumming, G., Thomason, N., Pannuzzo, D., Smith, J., Fyffe, P., Edmonds, H., Harrington, C., & Schmitt, R. (2005). Evaluating the effectiveness of editorial policy to improve statistical practice: The case of the Journal of Consulting and Clinical Psychology. Journal of Consulting and Clinical Psychology, 73, 136-143.

Fidler, F., Thomason, N., Cumming, G., Finch, S., & Leeman, J. (2005). Confidence intervals, still much to learn: Reply to Rouder & Morey. Psychological Science, 16, 494-495.

Cumming, G., & Finch, S. (2005). Inference by eye: Confidence intervals, and how to read pictures of data. American Psychologist, 60, 170-180. *

Cumming, G. (2005). Megabytes and colour, but learning is still the issue. Australian Educational Computing, 20(1), 14-17.

Fidler, F., Cumming, G., Thomason, N. & Burgman, M. (2004). Statistical reform in medicine, psychology and ecology. Journal of Socio-Economics, 33, 615-630.

Cumming, G., Williams, J., & Fidler, F. (2004). Replication, and researchers’ understanding of confidence intervals and standard error bars. Understanding Statistics, 3, 299-311. *

Finch, S., Cumming, G., Williams, J., Palmer, L., Griffith, E., Alders, C., Anderson, J., & Goodman, O. (2004). Reform of statistical inference in psychology: The case of Memory & Cognition. Behavior Research Methods, Instruments & Computers, 36, 312-324.

Fidler, F., Thomason, N., Cumming, G., Finch, S., & Leeman, J. (2004). Editors can lead researchers to confidence intervals, but can’t make them think: Statistical reform lessons from medicine. Psychological Science, 15, 119-126.

Van Gelder, T., Bissett, M., & Cumming, G. (2004). Cultivating expertise in informal reasoning. Canadian Journal of Experimental Psychology, 58, 142-152.

Wolfe, R., & Cumming, G. (2004). Communicating the uncertainty in research findings: Confidence intervals. Journal of Science and Medicine in Sport, 7, 138-143. (Subject of invited editorial: Marshall, S. (2004). Testing with confidence: The use (and misuse) of confidence intervals in biomedical research. Journal of Science and Medicine in Sport, 7, 135-137.) *

Finch, S., Thomason, N., & Cumming, G. (2002). Past and future American Psychological Association guidelines for statistical practice. Theory & Psychology, 12, 825-853.

Cumming, G. (2001). Project Design and achieving educational change: From StatPlay to ESCI. In G. Kennedy, M. Keppell, C. McNaught & T. Petrovic (Eds.), Meeting at the Crossroads. Proceedings of the 18th Annual Conference of the Australian Society for Computers in Learning in Tertiary Education. (pp. 151-160). Melbourne: Biomedical Multimedia Unit, The University of Melbourne.

Cumming, G., & Finch, S. (2001). A primer on the understanding, use and calculation of confidence intervals based on central and noncentral distributions. Educational and Psychological Measurement, 61, 530-572. *