When I first read this title I thought: of course not! It is always wrong to manipulate your data to try to make it show something it doesn’t. On reflection, though, maybe the answer to the question isn’t a straightforward no.

In order to generalise our research findings to the population, and for the results to be valid and reliable, it is better to have a larger number of participants. Therefore I feel that if adding more participants changes the results of your research and gives you an overall effect, then this is okay, because the increase in participants is only revealing something that really exists. For example, say you are researching the effect a drug has on people’s reaction times: you start off with ten participants and find that the drug has no effect. Disappointed with your findings, you decide to add a further ninety participants (now having a hundred in total), and your results now show a significant effect of the drug on reaction times. How can this be classed as bad science? Adding participants only makes the results more reliable and generalisable; surely this is better science than having fewer participants?
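To make that intuition concrete, here is a minimal sketch in Python (using NumPy and SciPy), not a real study: it assumes a hypothetical drug that genuinely speeds reaction times by 20 ms on average, against a baseline of 300 ms with a 60 ms standard deviation (all made-up numbers), and asks how often a standard t-test at the usual 0.05 level detects that real effect with ten versus a hundred participants per group.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def detection_rate(n_per_group, true_effect_ms=20, sd_ms=60, n_studies=2000):
    """Proportion of simulated studies in which an independent-samples
    t-test on the (hypothetical) reaction times comes out p < .05."""
    hits = 0
    for _ in range(n_studies):
        control = rng.normal(300, sd_ms, n_per_group)                 # baseline group
        drug = rng.normal(300 - true_effect_ms, sd_ms, n_per_group)   # genuinely faster on average
        _, p = stats.ttest_ind(drug, control)
        if p < 0.05:
            hits += 1
    return hits / n_studies

print("Detection rate with 10 per group: ", detection_rate(10))
print("Detection rate with 100 per group:", detection_rate(100))
```

With these assumed numbers the small study detects the real effect only around one time in ten, while the larger one detects it roughly two times in three, which is the sense in which the extra ninety participants are only revealing something that was there all along.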

The other part of the question, about manipulating your data, is a bit harder to justify. Obviously it is not good science to completely fabricate your results; for example, in 2006 The Journal of Cell Biology reported that some authors had used Adobe Photoshop to manipulate photographs of cells, making it look as though things existed in the images that did not. That is clearly bad science, but what if we were to manipulate things in our data in a different way?

Say we are doing the research described above, testing a drug to see whether it improves reaction times, and after carrying out a statistical test we get a p-value of 0.06. Normally in psychology we use a significance level of 0.05, so this would not be significant and we would retain the null hypothesis that the drug made no difference to reaction times. However, with a p-value of 0.06 the data still hint at a difference in reaction times when the drug is taken, so what if we increase our significance level to, say, 0.07 and then reject the null and report a difference: is this good science? I find it acceptable; however, I must agree that if we do this, then where do we draw the line? Equally, what about if we do not get significance with one statistical test but we do with the next one we try? I personally believe this is okay; the first test may have been too stringent and hidden the actual effect in the data.
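The “where do we draw the line?” worry can be sketched in the same way, again with entirely hypothetical numbers: if the drug truly does nothing, how often do we wrongly declare an effect when we judge the result against 0.05, against a relaxed 0.07, or by keeping whichever of two tests (a t-test or a Mann–Whitney U) happens to give the smaller p-value? This is only an illustrative simulation, not a claim about any particular study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_per_group = 5000, 50
false_05 = false_07 = false_best_of_two = 0

for _ in range(n_studies):
    # Both groups are drawn from the same distribution, so the null
    # hypothesis is true and any "significant" result is a false alarm.
    control = rng.normal(300, 60, n_per_group)
    drug = rng.normal(300, 60, n_per_group)
    _, p_t = stats.ttest_ind(drug, control)
    _, p_u = stats.mannwhitneyu(drug, control, alternative="two-sided")
    false_05 += p_t < 0.05
    false_07 += p_t < 0.07
    false_best_of_two += min(p_t, p_u) < 0.05

print("False alarms at alpha = 0.05:        ", false_05 / n_studies)
print("False alarms at alpha = 0.07:        ", false_07 / n_studies)
print("False alarms taking the better test: ", false_best_of_two / n_studies)
```

Under these assumptions the conventional 0.05 threshold produces false alarms about 5% of the time, nudging the threshold to 0.07 pushes that to about 7%, and taking whichever test gives the smaller p-value inflates it a little further, so the “line” is really a choice about how many false alarms we are willing to accept.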

In my opinion, as long as nothing underhand happens and the results are not completely fabricated, adding participants and manipulating the data (in a certain, non-underhand way) is acceptable in good science.
