Category Archives: Uncategorized

Is science accessible enough to the general public?

Posted on

I thought this week I’d take a look at how the world of academia is presented to the general public, and whether it is sufficiently accessible.

First and foremost is access to the information in general. As anyone familiar with the file drawer problem is aware, only a small minority of the studies conducted on a given topic are ever published, and as such there is a staggering amount of research that is never seen even by academics. If this causes such an accessibility issue for us, then the general public are naturally at an even greater disadvantage. At present, that small minority of published papers is not freely disseminated, instead being consigned to pricey journals and members-only websites. It is unreasonable to assume that a layman will invest the necessary time and funds in purchasing the relevant journal, and so his access to the information published within is essentially non-existent.

Obviously, it is impractical to publish absolutely everything and make it available for nothing. However, there ought to be some thought given to disseminating the information freely, or at least cheaply, at some point. Even if articles were simply released for free on the internet after a set amount of time, it would be hugely preferable to the current system.

How can we expect people to make informed decisions on highly sensitive topics such as global warming or stem cell research if they lack the opportunity to learn about the matter at hand?

Blogs I’ve commented on this week 18/03/12

Posted on

File-drawer effect – What is the problem?

Applied research findings are more valuable than theoretical findings.

Extra, extra! Neurotic Girl’s Anxieties Cured With Stats!

SONA – Have we seen and done it all TOO much??

Is there anything that can’t be measured by psychologists?

Posted on

There are many things that psychologists are currently incapable of measuring, and so at present there is a great deal of research that cannot take place because there is nothing against which to compare results.

A prime example is emotion. At present, we are unable to directly measure emotions such as anger, happiness or fear, and it is unlikely that we will ever be able to do so with complete accuracy, no matter how much our technology, such as MRI, improves.

However, even though a direct measure of such things is impossible, we are able to measure them indirectly via measurable traits and characteristics such as body temperature, skin conductance, pupil dilation, blink rate, voice pitch, blood pressure, and myriad more. While these measures are a good alternative to currently impossible methods, they make no claim to be wholly accurate or reliable, acting only as a means of forming an opinion rather than obtaining facts.

Eventually, there may be a better method of obtaining direct measures of these traits, but until then there remain many things that we are unable to measure.

Blogs I’ve commented on this week 22/02/12

Posted on

Sorry if these are a little short, currently dosed to the eyeballs on medicine.


Cheating Statistics

Should we be able to take potential data from the internet??

Self Report Measures – Tell me about yourself.

Statistics in the Real World: Game Shows

Is compulsory participation leading to poor science?

Posted on

So, today I’d like to talk about SONA studies. Love them or hate them, they’re a large part of our degree.

Largely because they’re entirely compulsory.

But the point I’d like to explore is this: by limiting our participant pool to students we’ve had to force to take part, are we reaching conclusions and results that may not be entirely generalisable or accurate?

Generally, the idea in experiments is that our participants should form a fairly representative group of the larger population, so that any results we gain can be generalised to society at large. Yet by meeting the quota of participants through compulsory participation, we in fact achieve the complete opposite, performing our entire experiment on a single demographic: the unwilling student.

Think about it. What can we conclude from studies about stress, based solely on the stress levels of a single demographic? Or how about happiness? Or intelligence? Much as it pains me to say it, students are more alike than they are different. We drink too much, sleep too little, work when it suits us and chill out never. Any data gleaned from testing us will be representative of only one single population.

The other students in Bangor.

So yes, I realise I may be exaggerating slightly, but there is still a valid point to be made. If your participant pool consists solely of a single demographic whose arm you need to twist into co-operation, can you really deem it representative of anyone but that demographic?
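To make the point concrete, here is a minimal simulation sketch. All the numbers are made up purely for illustration: imagine a population where students report higher stress than everyone else, and our “participant pool” only ever draws from students.

```python
# A toy sketch of sampling bias (all figures are hypothetical):
# if the sampling frame is one demographic, the sample mean tells us
# about that demographic, not about the population.
import random

random.seed(42)

# Hypothetical stress scores: students average higher than non-students.
students = [random.gauss(7.0, 1.5) for _ in range(1000)]
non_students = [random.gauss(4.0, 1.5) for _ in range(9000)]
population = students + non_students

true_mean = sum(population) / len(population)  # roughly 4.3

# A compulsory-participation-style sample, drawn only from students:
sample = random.sample(students, 100)
sample_mean = sum(sample) / len(sample)        # roughly 7

print(f"true population mean:  {true_mean:.2f}")
print(f"students-only sample:  {sample_mean:.2f}")
```

The students-only estimate overshoots the population mean by a wide margin, which is exactly the generalisability worry described above.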

Blogs I’ve commented on this week 10/02/12

Posted on

Murphy -v- Toast

Blog about blogs – literally!

Research Applications in the Real World

Experimenter biases on research – they are as human as you or I!

Post hoc ergo propter hoc

Posted on

For this week’s blog, I have decided to look at the topic of correlation and causation.

The title of this post refers to the Latin phrase meaning “after this, therefore because of this”. Post hoc reasoning is a logical fallacy: because event A happens before event B, A must have been the cause of B. In science, this fallacy often takes the form of a post hoc error about the causes of a particular outcome, and can thus lead to false conclusions about the results of an experiment. It is the fallacy behind spurious claims that everything in existence is the leading cause of cancer, along with most other ridiculous claims based on a sequence of events without any consideration of other causes.

Not naming any names, of course. That would just be silly.

Let’s take the following example. Brian, after eating bacon every morning for a week, is told during a checkup that he has a tumour. A less-than-elegant example, I know, but it gets the point across.

Anyways, a post hoc error would state that because eating all that yummy bacon came before the diagnosis, it must have been the cause. It wouldn’t look at details about Brian such as his chronic smoking in earlier life, or his family history of cancer. And this is the problem.

Correlation of events does not, in any way, imply causality. The sequence in which events occurred does not demonstrate any underlying connection between them; they may in fact be entirely unrelated, their occurrence together having no deeper meaning than that they happened at all.
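The Brian example can be sketched as a tiny simulation. Every probability here is invented for illustration: a hidden common cause (smoking, in this toy model) drives both the bacon-eating and the diagnosis, so the two correlate strongly even though bacon does nothing at all in the model.

```python
# A toy confounding sketch (all probabilities are made up):
# smoking raises the chance of both daily bacon-eating and a diagnosis,
# so bacon and diagnosis correlate with no causal link between them.
import random

random.seed(0)

def simulate_person():
    smoker = random.random() < 0.3                        # hidden common cause
    eats_bacon = random.random() < (0.8 if smoker else 0.2)
    diagnosed = random.random() < (0.5 if smoker else 0.05)  # bacon plays no role
    return eats_bacon, diagnosed

people = [simulate_person() for _ in range(100_000)]
bacon = [d for b, d in people if b]
no_bacon = [d for b, d in people if not b]

rate_bacon = sum(bacon) / len(bacon)          # roughly 0.33
rate_no_bacon = sum(no_bacon) / len(no_bacon)  # roughly 0.09

print(f"diagnosis rate, bacon eaters: {rate_bacon:.3f}")
print(f"diagnosis rate, non-eaters:   {rate_no_bacon:.3f}")
```

A post hoc reading of this data would blame the bacon; the code makes it obvious that the only causal arrow runs from the hidden factor to both observations.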

In the words of Lawrence M. Krauss, “Rare things happen all the time”.


Blogs I’ve commented on this week 09/12/11

Posted on

Meta Analysis – Let’s all get together

Ethics – are they necessary or have we outgrown them as a society?

Blogs – What I think on them

Is it Possible to Prove a Research Hypothesis?

Does informed consent impede scientific progress?

Posted on

The biggest complaint I’ve heard about research skills concerns how informed consent, an ethical requirement under the standards of the APA, can often make studies impossible to conduct.

The reason informed consent is necessary is often debated. While participants should rightfully be informed of what they will be doing, is it right that this should be the first priority even in studies where no harm of any form is likely to come to the participant?

Take for example the Milgram study. Milgram’s study took place before any ethical guidelines existed, and as such informed consent was not an issue. While the participants suffered distress during the study, they were debriefed, counselled and offered any other care necessary. So with all these protective measures in place, what difference would informed consent make? All that could conceivably have changed is that the study would have produced results showing nothing of particular interest, with participants either refusing to take part or knowing full well that the shocks weren’t real. Without deception, the Milgram study, one of the most well known and important studies in the field of psychology, would probably never have happened, and we’d be completely in the dark about the subject matter.

The issue with informed consent is that as soon as a participant understands what the research will be observing, they will then begin to exhibit more desirable and acceptable behaviour than they would otherwise. This is known as the Hawthorne effect. In Milgram’s study this would have manifested as participants halting the shocks almost immediately, in order to improve how they appeared to the researcher.

So, is informed consent such an important principle that we should impede progress altogether? I, for one, don’t believe so. I feel that with all of the protective measures already in existence, such as debriefing, counselling and the right to withdraw data if the participant is unhappy with what they have done, participants are more than adequately protected from any harm, and that informed consent does more harm than good in the grand scheme of things.

Is qualitative research as scientific as quantitative methods?

Posted on

The topic that has been set for the blog this week was originally presented as “Qualitative research isn’t as scientific as quantitative methods.” The reason I have changed that assertion is that I quite simply do not agree with it.

Qualitative and quantitative methods are such wildly different beasts that calling one more scientific than the other is like debating whether physics or chemistry is “more scientific”. I personally feel that something is either scientific or unscientific, nothing more nor less. So how, exactly, would one rate on a scale of “scientificness”?

I suppose the point I’m trying to make is that both methods are scientific, with neither holding some vaguely defined sense of superiority. The only difference between the two is what situations each can be applied in.

Many people I know assert that qualitative methods are more suitable for the field of psychology than quantitative methods due to the young age that our science is currently at, and back when I first began my degree, I would have been inclined to agree with them. After all, I was a bright eyed little fresher, who naturally assumed that so little in psychology could be measured that I would never need to touch numbers at all.

However, I know now that there is an enormous amount of quantitative research occurring in psychology. For example, how would we measure reaction times qualitatively? Or measure heart rate in the context of arousal?

There are areas in every field of science that can only be measured quantitatively, and likewise areas that can only be explored qualitatively. As such, how can either be considered more scientific?

I firmly believe that there is no real difference between how “scientific” each method can be, and hold them to be equally scientific and valid.