What is the formula for sample standard deviation of a small sample size?
The formula for sample standard deviation is given by:
$$s = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N-1}}$$Am I right that when the sample size is small ($N<30$), the formula for sample standard deviation becomes:
$$s = t_{N-1, \text{confidence}} \sqrt{\frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N-1}} \ \ \ \ \ \ \ \ ?$$Here, $t_{N-1, \text{confidence}}$ is the Student's $t$ coefficient obtained from standard tables.
The reason I am asking is the following. I had no doubts whatsoever that I must multiply standard deviation by the Student's $t$ coefficient for a small sample size. And I have been doing it all the time. I used it in a draft for an article. When I was checking the draft, I decided to check this formula. First, I could not remember where I got it from. Second, I searched the books and the Internet only to find out that people use Student's $t$ coefficient to calculate confidence interval, not standard deviation. I tried to derive my formula from the formula for confidence interval and failed. I talked to a fellow student who also thinks standard deviation must be multiplied by Student's $t$ coefficient and also does not remember why.
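(For concreteness, here is a quick numerical check with made-up data, sketched in Python: the first formula is exactly what the standard library's `statistics.stdev` computes, with no $t$ coefficient anywhere.)

```python
import math
import statistics

# Hypothetical sample data, purely for illustration.
x = [2.1, 2.5, 1.9, 2.3, 2.8, 2.0, 2.4]
n = len(x)
xbar = sum(x) / n

# First formula: sample standard deviation with Bessel's correction (N - 1).
s = math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))

# The standard library agrees with the first formula; no t factor is involved.
print(s, statistics.stdev(x))
```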
3 answers
The sample standard deviation (with Bessel's correction) is defined to be the first formula in your post. It doesn't ‘become’ anything else.
You were possibly remembering using the sample standard deviation in an estimator for the population mean. The $t$-value is multiplied by the sample standard deviation (divided by $\sqrt{N}$) as part of finding the confidence interval for the population mean, as you've probably seen in your research.
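To make that concrete, here is a sketch with hypothetical data: the sample standard deviation is the same no matter what, and the $t$ coefficient only enters when building the confidence interval for the mean. (The value $t = 2.262$ is the tabulated two-sided 95% coefficient for 9 degrees of freedom.)

```python
import math
import statistics

# Hypothetical sample of size n = 10 (illustrative data only).
x = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.1, 4.9, 5.0]
n = len(x)
xbar = statistics.mean(x)
s = statistics.stdev(x)  # sample standard deviation -- defined without any t factor

# t coefficient from the tables: 9 degrees of freedom, 95% two-sided.
t = 2.262

# The t value multiplies the *standard error* s / sqrt(n), not s itself.
half_width = t * s / math.sqrt(n)
ci = (xbar - half_width, xbar + half_width)
print(s, ci)
```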
A somewhat related notion is correcting for the fact that the sample standard deviation is consistently an underestimate of the population standard deviation for small sample sizes, even after Bessel's correction. There's no one formula for an unbiased estimate of the population standard deviation for an arbitrary distribution, but for particular distributions the sample standard deviation can be made unbiased by multiplying by a correction factor. Wikipedia has a table of coefficients ($c_4$) for a normal distribution (if you use these values, note that they are meant to be divisors, not multiplicands; they are all < 1). But this factor isn't the $t$-value.
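As a sketch (assuming the data are drawn from a normal distribution), the correction factor can be computed directly from its closed form $c_4(n) = \sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$, and dividing $s$ by it enlarges $s$ slightly:

```python
import math
import statistics

def c4(n):
    """Correction factor for the normal distribution: E[s] = c4(n) * sigma."""
    return math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

# Hypothetical small sample, assumed drawn from a normal distribution.
x = [9.8, 10.2, 10.0, 9.7, 10.4, 9.9, 10.1, 10.3, 9.6, 10.0]
s = statistics.stdev(x)

# c4 < 1, so dividing by it makes s slightly larger; this is not the t value.
s_unbiased = s / c4(len(x))
print(c4(len(x)), s, s_unbiased)
```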
The $\text{“}N<30\text{”}$ rule is only a popular meme (albeit one mentioned in statistics courses for the mathematically unschooled), and is about the central limit theorem, which says (loosely speaking) that the sample mean or the sample sum is approximately normally distributed if the sample size is big. Note, however, that "big" can reasonably be construed as perhaps $N\ge 12$ if the population distribution is not particularly skewed, whereas with a very skewed distribution even $N=100$ may not be enough.
But none of that has anything at all to do with any definition of the "sample standard deviation."
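The central-limit behavior is easy to see by simulation. A minimal sketch (using a rate-1 exponential population, which has mean 1 and noticeable right skew): the means of repeated samples of size 50 cluster around the population mean with spread near $\sigma/\sqrt{n} = 1/\sqrt{50} \approx 0.14$.

```python
import random
import statistics

random.seed(42)

def sample_mean(n):
    """Mean of one sample of size n from a skewed (exponential, rate 1) population."""
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

# Draw many sample means; by the CLT their distribution is roughly normal,
# centered near the population mean 1.0 with spread near 1 / sqrt(50).
means = [sample_mean(50) for _ in range(2000)]
print(statistics.fmean(means), statistics.stdev(means))
```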
Conventionally the "sample variance" is defined in a way in which one divides by the sample size minus 1. That makes the "sample variance" an unbiased estimator of the population variance. Unbiasedness is overrated, and at any rate the so-called "sample standard deviation" is not an unbiased estimator of the population standard deviation.
The formula in which one multiplies by $\text{“}t_{N-1,\text{confidence}}\text{”}$ has nothing to do with how the sample standard deviation is defined, but is used in finding confidence intervals.
In many contexts, one uses a capital $N$ to denote the size of the population and a lowercase $n$ to denote the size of the sample, so by that standard, you should use the lowercase $n$ here.
From the little that I know ...

If the sample = population (census)
$\sigma^2 = \displaystyle\frac{1}{N}\sum_{i = 1}^N (x_i - \mu)^2$ where $N$ is the size of the population and $\mu$ is the population mean. The variance is $\sigma^2$ and the standard deviation then is $\sigma$.
If the sample is smaller than the population (any study except a census)
$s^2 = \displaystyle\frac{1}{n - 1}\sum_{i = 1}^n(x_i - \bar x)^2$ where $n$ is the size of the sample and $\bar x$ is the sample mean. This $s^2$ is called the unbiased sample variance and the standard deviation is $s$.
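The two divisors are both available in Python's standard library, which makes the census/sample distinction easy to check on made-up numbers:

```python
import statistics

# A small hypothetical "population" of 5 values.
population = [2, 4, 4, 4, 6]

# Census: divide by N (statistics.pvariance).
pop_var = statistics.pvariance(population)

# Treating the same numbers as a sample: divide by n - 1 (statistics.variance).
sample_var = statistics.variance(population)

print(pop_var, sample_var)  # the n - 1 divisor always gives the larger value
```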