Friday, December 26, 2008

9. Report the Results - Experimental

Reporting Experimental Data

The principles outlined for reporting descriptive data apply equally to reporting experimental data. Experimental data can also be presented in text, tables or figures. Report the means and standard deviations, and probably graphs, as a summary of the data before presenting the statistical information. After the statistical information, indicate whether you accept or reject the hypothesis related to that statistical test.
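As a rough sketch of that summary step (the group names and numbers below are invented purely for illustration), the means and standard deviations could be computed in Python before any test is run:

```python
import numpy as np

# Hypothetical scores for two groups (made-up illustration data).
control = np.array([12.1, 13.4, 11.8, 12.9, 13.1, 12.5, 11.9, 13.0])
treatment = np.array([14.2, 13.8, 15.1, 14.6, 13.9, 14.8, 15.0, 14.1])

# Report the descriptive summary (n, mean and SD) before any statistical test.
for name, group in [("Control", control), ("Treatment", treatment)]:
    print(f"{name}: n = {group.size}, "
          f"mean = {group.mean():.2f}, SD = {group.std(ddof=1):.2f}")
```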

Reporting in Text

State the statistical test, the degrees of freedom and/or sample size, the statistic symbol, the calculated statistic, and the probability (α alpha, and possibly β beta).
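For example, continuing with the hypothetical data above and assuming an independent-samples t-test (the notes do not prescribe a particular test), the elements of such a report could be assembled like this:

```python
import numpy as np
from scipy import stats

control = np.array([12.1, 13.4, 11.8, 12.9, 13.1, 12.5, 11.9, 13.0])
treatment = np.array([14.2, 13.8, 15.1, 14.6, 13.9, 14.8, 15.0, 14.1])

# Independent-samples t-test, equal variances assumed.
t_stat, p_value = stats.ttest_ind(treatment, control)
df = control.size + treatment.size - 2   # degrees of freedom for this test

# The text report names the test, df, the statistic symbol and value, and p.
print(f"Independent-samples t-test: t({df}) = {t_stat:.2f}, p = {p_value:.3f}")
```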

Reporting in Figures

How NOT to present data: uninformative caption, no legend, no axis labels, poor dimensions that don't realistically represent the differences, the wrong data placed together, no representation of variability.
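Turning those pitfalls around, here is a minimal matplotlib sketch (with invented summary numbers) of a figure that does have labelled axes, a legend, honest proportions and some indication of variability:

```python
import matplotlib.pyplot as plt

# Invented summary statistics, for illustration only.
groups = ["Control", "Treatment"]
means = [12.6, 14.4]
sds = [0.6, 0.5]

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(groups, means, yerr=sds, capsize=5, label="Mean ± SD")  # show variability
ax.set_xlabel("Group")                    # label both axes
ax.set_ylabel("Score (arbitrary units)")  # include units
ax.set_ylim(0, 16)                        # start at zero so bar heights
                                          # realistically represent differences
ax.set_title("Mean score by group (error bars show one SD)")  # informative caption
ax.legend()
fig.tight_layout()
plt.show()
```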

Interpreting Data

So report the data clearly and thoroughly in the Results section, then in the Discussion interpret those results. It is usual to discuss the hypotheses in the same order in which they were stated in the Method and in which the results were presented. For each hypothesis, state whether you accept the null hypothesis, or whether you must reject the null hypothesis and accept the alternate hypothesis. Relate your results to your theory, to the literature and to practical significance.

Remember that you collected data to help answer questions. Your results provide you with evidence of whether or not what you postulated is the situation. There are two major aspects you must consider in interpreting sample data: chance and practical importance.

Statistical Significance using Hypothesis Testing

Rejecting the Null Hypothesis

The first aspect is statistical significance, which has to do with sampling and chance. You need to know whether you could expect the results you collected if there was no difference in the real world. Or, in research jargon: if the null hypothesis is true in the real world, what is the probability of getting a difference the size of the difference in the sample data?

Remember your test statistic (t, F, χ², etc.) represents the size of the difference between your data sets.

Thus, if you get a test statistic of, say, 7.0 and p = 0.05, there is a 0.05 (or 5%, or 1 in 20) chance that a statistic that large (that is, a difference between your data sets that large) could occur in a sample when the null hypothesis is true.

If the chance of getting a statistic that large (extreme) is less than 0.05 or 0.01, it is customary to reject the null hypothesis. That is, the chance of getting such a large difference in the sample if there were no difference in the real world is so small that it is unreasonable to believe. Having rejected the null hypothesis, you must tentatively accept your alternate hypothesis. Remember that accepting your alternate hypothesis is accepting your best guess of what the real world situation is, given the unlikelihood of the null hypothesis.
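The decision rule itself is mechanical; a small illustration (the p value here is invented):

```python
alpha = 0.05      # significance level chosen before the study
p_value = 0.003   # hypothetical p value from a test

if p_value < alpha:
    print("Reject the null hypothesis; tentatively accept the alternate hypothesis.")
else:
    print("Accept (fail to reject) the null hypothesis.")
```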

Accepting the Null Hypothesis

Conversely, if you get a test statistic of, say, 1.2 and p = 0.33, there is a 0.33 (or 33%, or 1 in 3) chance that the statistic could be that large (that is, that the difference between your data sets could be that large) in a sample when the null hypothesis is true.

If the chance of getting a statistic that large (extreme) is greater than 0.05 or 0.01, it is customary to accept the null hypothesis. That is, the chance of getting such a large difference in the sample if there were no difference in the real world is so great that it is reasonable to believe there is no difference. Having accepted the null hypothesis, you must tentatively reject your alternate hypothesis.

There is a common misconception that having to accept the null hypothesis means your project has failed. This is not so. If you have carefully collected honest data using a suitable method, the outcome will be useful whichever way the hypothesis testing goes.

There is an extra perspective you can take when interpreting data. It is especially important if you accept the null hypothesis. You should calculate how likely it was that your study could have shown a statistically significant difference. This is called the power of your test, and is related to β, the chance of making a type II error: power = 1 − β.

Remember the p value usually reported relates to α, the chance of saying something is different when it isn't. β is the opposite: the chance of saying there is no difference when there really is.

So if you reported a statistically significant difference, it is useful for the reader to know how sure you are of the difference, or more accurately, what the chance is that your results are "wrong", that is, α.

Equally, if you reported that there is no statistically significant difference, it is useful for your reader to know how sure you are that there is no difference, which is power (or 1 − β, or one minus your chance of being "wrong").

Statistical significance, then, covers both the chance that you found a difference in your data when there was no real difference, and the chance that you were able to pick up a difference when there was a real difference.
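One hedged way to estimate power is by simulation: assume an effect size and sample size, generate many pretend studies, and count how often the test would reject the null hypothesis. The design values below (two groups of 20, a true difference of half a standard deviation, α = 0.05) are assumptions for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_diff, alpha, n_sims = 20, 0.5, 0.05, 10_000  # assumed design values

rejections = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)        # control group drawn under the alternative
    b = rng.normal(true_diff, 1.0, n)  # treatment group shifted by the true difference
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        rejections += 1

power = rejections / n_sims              # proportion of simulated studies that
print(f"Estimated power = {power:.2f}")  # detect the difference, i.e. 1 - beta
```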

Statistical Significance using Confidence Intervals

Another way of determining statistical significance is by using confidence intervals. This method is not used as much as straight hypothesis testing, but it has a number of advantages, one of which is that it makes it easier to determine practical significance from the statistical data.

Your sample mean is a point estimate of what the mean in the real world is. Similarly, confidence intervals are estimates of the population, or real, means based on the data sample. However, rather than being one number, they give an interval which should include the population mean.

As with point estimates, these interval estimates can be given a level of probability, or a level of confidence, that the interval includes the population mean.

Using confidence intervals you can be more specific about what you think the real world mean is. This makes it easier for the reader to see what the situation is, and how practically significant the results are likely to be.
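A minimal sketch of a 95% confidence interval for a mean, reusing the hypothetical treatment scores from earlier:

```python
import numpy as np
from scipy import stats

sample = np.array([14.2, 13.8, 15.1, 14.6, 13.9, 14.8, 15.0, 14.1])

n = sample.size
mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(n)   # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)   # critical t value for a 95% interval

lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"Mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```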

Practical Significance

The second aspect is practical and clinical significance: what is the practical importance of the data? Statistical significance is related to the effect size and the number of data points. Thus, if you collect enough data, you are likely to get a statistically significant difference. However, you must interpret that difference in a practical setting.
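One common way to quantify practical importance is an effect size such as Cohen's d, which, unlike the p value, does not grow simply because the sample is large. A sketch, again reusing the hypothetical groups from above (Cohen's d is one possible measure, not something these notes prescribe):

```python
import numpy as np

control = np.array([12.1, 13.4, 11.8, 12.9, 13.1, 12.5, 11.9, 13.0])
treatment = np.array([14.2, 13.8, 15.1, 14.6, 13.9, 14.8, 15.0, 14.1])

# Cohen's d: difference in means divided by the pooled standard deviation.
n1, n2 = control.size, treatment.size
pooled_sd = np.sqrt(((n1 - 1) * control.var(ddof=1) +
                     (n2 - 1) * treatment.var(ddof=1)) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```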

Other Discussion Points

After you have discussed each hypothesis in terms of your results, your theory, the literature and practical importance, you probably should discuss the problems, limitations and advantages of your study, and where it would be useful for future study to be directed.

This is the last post on the research process.

Even though it might be a bit too exhaustive and thorough a coverage of the topic of research, I felt it is an appropriate post for the 5th chakra on discrimination as we go down into the 4th chakra of devotion and service (the practical application of technology in the growing field of bio-life sciences, innovation and medical aid).

The analysis involved in research is the same use of the intellect as in the spiritual path of wisdom (jnana marg), but applied outwardly to understanding phenomena. It is also good training in sharpening the intellect for the same investigation into the nature of the mind itself - in Vedanta by the "neti neti" approach and the seeing of knowledge as the ignorance synonymous with consciousness in which everything appears and disappears. As Nisargadatta said, there is an end to knowledge but infinite discovery in the Self.

Anyway, I would like to acknowledge Leon Staker, the lecturer from Curtin whose notes these were taken from. It seemed to me that this understanding of research should be made available to all in the pursuit of humanitarian technology in a knowledge based economy, or in the discovery of truth.

Finally, the edu-create proposal that has been written but not yet published presents an example of how this research approach can be harnessed in a creative way in schools through a creative association by 5 and diagnostic approach to learning. I hope that one day it can be made available online in the context of the earlier post on the Jiva educational pathway, to reveal how a more enlightened education system could be brought about for all. Presumptuous perhaps, but it is developed on the foundation of the spiritual yet scientific approach that this blog is trying to promote, and I feel it might have some worth to readers in the future.
