What is the difference between STANDARD DEVIATION and STANDARD ERROR?
The standard deviation is a measure of the variability of a single sample of observations. Let’s say we have a sample of 10 plant heights. We can say that our sample has a mean height of 10 cm and a standard deviation of 5 cm. The 5 cm can be thought of as a measure of the average deviation of each individual plant height from the mean of the plant heights.
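For concreteness, here is a minimal Python sketch of that calculation; the individual height values below are invented purely to give a sample with roughly the mean and spread used in the example:

import numpy as np

# Hypothetical sample of 10 plant heights (cm); values are made up for illustration.
heights = np.array([4.0, 6.5, 8.0, 9.0, 10.0, 10.5, 12.0, 13.0, 16.0, 11.0])

mean_height = heights.mean()
# ddof=1 gives the sample standard deviation (divides by n - 1)
sd_height = heights.std(ddof=1)

print(f"mean = {mean_height:.1f} cm, standard deviation = {sd_height:.1f} cm")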
The standard error, on the other hand, is a measure of the variability of a set of means. Let’s say that instead of taking just one sample of 10 plant heights from a population of plant heights, we take 100 separate samples of 10 plant heights. We calculate the mean of each of these samples and now have a collection of means (an approximation of what is usually called the sampling distribution of the mean). The standard deviation of this set of mean values is the standard error.
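The repeated-sampling idea can be simulated directly. The sketch below assumes, purely for illustration, a normal population of plant heights with mean 10 cm and standard deviation 5 cm; the seed and the use of numpy are likewise my own choices, not part of the original comments:

import numpy as np

rng = np.random.default_rng(42)

# Assumed population: normal, mean 10 cm, SD 5 cm (matching the example above)
pop_mean, pop_sd, n = 10.0, 5.0, 10

# Draw 100 separate samples of 10 heights and record each sample's mean
sample_means = np.array([rng.normal(pop_mean, pop_sd, n).mean()
                         for _ in range(100)])

# The standard deviation of this collection of means is the standard error
print(f"SD of the 100 sample means = {sample_means.std(ddof=1):.2f} cm")
# For comparison, the theoretical value is pop_sd / sqrt(n) = 5 / sqrt(10) ≈ 1.58 cm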
In lieu of taking many samples, one can estimate the standard error from a single sample. This estimate is obtained by dividing the sample standard deviation by the square root of the sample size. How good this estimate is depends on the shape of the original distribution of sampling units (the closer to normal the better) and on the sample size (the larger the sample the better).
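A small sketch of that single-sample estimate, again drawing one hypothetical sample from the same assumed population as above:

import numpy as np

rng = np.random.default_rng(7)

# One hypothetical sample of 10 heights from an assumed normal(10, 5) population
sample = rng.normal(10.0, 5.0, 10)

n = len(sample)
sd = sample.std(ddof=1)

# Standard error estimated from a single sample: SD / sqrt(n)
se = sd / np.sqrt(n)
print(f"sample SD = {sd:.2f} cm, estimated standard error = {se:.2f} cm")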
The standard error turns out to be an extremely important statistic, because it is used both to construct confidence intervals around estimates of population means (the margin of error on either side of the mean is the standard error times the critical value of t) and in significance testing.
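To illustrate the confidence-interval use, here is a minimal sketch of a 95% interval built from the same hypothetical sample; the use of scipy to look up the critical t value is my addition:

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Same hypothetical sample of 10 heights as above
sample = rng.normal(10.0, 5.0, 10)
n = len(sample)

mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(n)

# 95% confidence interval: mean ± t(critical, df = n - 1) * SE
t_crit = stats.t.ppf(0.975, df=n - 1)
margin = t_crit * se
print(f"95% CI for the mean: {mean - margin:.2f} to {mean + margin:.2f} cm")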
–taken from comments by John W. Willoughby on the listserv at STUDSTAT@ASUVM.INRE.ASU.EDU