Recap: using evidence to support your paper

Whether you’re returning to postgrad after a study break or diving straight in, it’s a good idea to brush up on best-practice use of evidence in your coursework.

Evidence not only validates your argument or hypothesis, but also demonstrates that you’ve conducted thorough research and considered all possibilities before arriving confidently at your position.

Blurring the lines of evidence

The social sciences will often use a combination of recorded history, empirical data and lived experience to approach a task.

In the hard or ‘natural’ sciences (such as chemistry and physics), the standards of evidence are the most methodologically rigorous. This empirical approach should apply to biology, psychology, health sciences and economics wherever possible, but there are limitations.

Behavioural studies are a classic example, where findings can change simply because a subject knows they are being observed. Similarly, economics can be both empirical and polarising. We can use hard data to support a premise, but we rely on soft data to explain it. This leaves room for debate.

Other common sources of weak evidence include self-reported studies, non-blinded studies, and in-vivo or animal models used when no human data exists.

It’s not what you think, but how you think it

Starting with a position and then setting out to support it has occasionally paid off academically, but it takes either great luck or a formidable intellect. You can reason your way towards a position mathematically or logically, but it is ultimately flawed to conclude before you reason. If you wish to make an argument that contradicts a body of scholarly evidence, the onus is on you to overturn that evidence. This is known as the burden of proof.

Deferring to the best we have

If you’re weighing up different kinds of evidence, consider the following: citing credentials rather than ideas is called an appeal to authority. An expert is more likely to be correct than a non-expert, but expertise alone is not a reason to accept a claim without evidence. Likewise, links to non-peak-body websites are frowned upon because we cannot know whether their claims have been tested.

Literature evaluation and selection

Not all journals are created equal. This is the most important takeaway: be familiar with the reputation of the journal (it must be peer-reviewed), and ensure that it is not among the hundreds of predatory journals.

Once you’ve established the credibility of a publication, consider the hierarchy of evidence. A systematic review or meta-analysis is generally your best source of evidence. Where these aren’t available, apply the same gold-standard thinking: look for double-blinded, placebo-controlled trials with a large number of subjects; large-scale epidemiological studies or Census-based statistics when looking at population data; and, where appropriate, reputable polling data with international standing.

Are they pulling your leg?

Sometimes a graph, a statistic or a striking coincidence will seem to make an irrefutable point, and it can quickly be pressed into an argument. The following graph is a case in point:

[Graph: per-capita margarine consumption plotted against the divorce rate in Maine.]

Image: http://www.tussursilk.com/correlation-causation-confusion/

From this, we can either conclude that something in margarine was destroying marital relations in Maine, or accept that, given enough data points, we will find innumerable examples where two unrelated or loosely related sets of data happen to coincide.

To understand this is to know the difference between correlation and causation; mistaking one for the other is known as the post-hoc fallacy. As academics, we must be very careful to establish causation, because a correlation on its own is only a starting point from which to test an idea.
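To see how easily unrelated data can line up, here is a minimal sketch in Python. It is illustrative only: the series are invented random walks standing in for real annual statistics, and the 0.8 threshold and series counts are assumptions, not findings. It simply counts how many pairs of completely unrelated series correlate strongly by chance.

import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def random_walk(n=20):
    """An invented 'annual statistic': n points of cumulative random noise."""
    total, walk = 0.0, []
    for _ in range(n):
        total += random.gauss(0, 1)
        walk.append(total)
    return walk

random.seed(1)
series = [random_walk() for _ in range(200)]  # 200 mutually unrelated series

# Count pairs of unrelated series that correlate 'strongly' (|r| > 0.8).
strong_pairs = sum(
    1
    for i in range(len(series))
    for j in range(i + 1, len(series))
    if abs(pearson_r(series[i], series[j])) > 0.8
)

print(f"Pairs of unrelated series with |r| > 0.8: {strong_pairs}")

Run with enough series, the count is typically well above zero: the margarine-and-Maine effect in miniature, and a reminder that a strong correlation by itself proves nothing about cause.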

We humans are flawed

Our brains are fundamentally flawed – it’s why optical illusions work, and why we see patterns where none really exist. Our memories are also extremely malleable, subject to all sorts of phenomena that reframe and even falsify them.

Cognitive bias is the reason we often seek out and use poor or discredited evidence. None of us is immune: there are certain things we wish to be true, and we tend to judge the veracity of evidence in line with them. This confirmation bias reinforces our pre-existing ideas without making us better scholars.

Be prepared to be wrong

Being skilled at evaluating and citing evidence is important to advancing the discourse in your vocation both during and beyond postgrad, and essential if you pursue a PhD. The best scholars were not full of certainty, but doubt. If the evidence is incomplete, our approach should remain cautious.

A reasoned thinker should begin humbly and accept that being proven wrong leaves them better educated for it. Our collective knowledge, as an advanced species with the luxury of pondering the world around us, rests upon the shoulders of giants, and a disciplined thinker looking for evidence will always reach up.