
Which of the multiple definitions of correlation to use?


In optics¹, we have a notion of coherence, defined as a normalised correlation/cross-correlation/autocorrelation function. Simplifying notation from the linked Wiki page, we can write the (un-normalised, first-order) correlation function $$G^{(1)} = \left\langle XY\right\rangle,$$ where I'm using $\left\langle\ldots\right\rangle$ to represent the expectation value, which can then be normalised as $$g^{(1)} = \frac{\left\langle XY\right\rangle}{\sqrt{\left\langle X^2\right\rangle\left\langle Y^2\right\rangle}},$$ where $X$ and $Y$ are some matrices/operators/variables etc.
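As a minimal numerical sketch of these two quantities (the sample data and variable names here are my own illustration, not from the optics literature), $G^{(1)}$ and $g^{(1)}$ can be estimated from repeated measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical repeated measurements standing in for X and Y.
x = rng.normal(loc=1.0, scale=0.5, size=10_000)
y = 0.8 * x + rng.normal(scale=0.3, size=10_000)

# Un-normalised first-order correlation: G1 = <XY>.
G1 = np.mean(x * y)

# Normalised version: g1 = <XY> / sqrt(<X^2><Y^2>).
g1 = G1 / np.sqrt(np.mean(x**2) * np.mean(y**2))

print(f"G1 = {G1:.4f}, g1 = {g1:.4f}")
```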

All seems well and good... except that, as the title suggests, there are multiple definitions: in statistics, the correlation between $X$ and $Y$ is defined as $$g = \frac{\left\langle XY\right\rangle - \left\langle X\right\rangle\left\langle Y\right\rangle}{\sqrt{\left\langle X^2\right\rangle - \left\langle X\right\rangle^2}\sqrt{\left\langle Y^2\right\rangle - \left\langle Y\right\rangle^2}},$$ which is the normalised covariance $$G = \left\langle XY\right\rangle - \left\langle X\right\rangle\left\langle Y\right\rangle.$$
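For comparison, here is the same sketch with the statistical definition; because the samples above have nonzero means, $g$ and $g^{(1)}$ come out different (again, a toy illustration of my own):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same hypothetical samples as above (note the nonzero means).
x = rng.normal(loc=1.0, scale=0.5, size=10_000)
y = 0.8 * x + rng.normal(scale=0.3, size=10_000)

g1 = np.mean(x * y) / np.sqrt(np.mean(x**2) * np.mean(y**2))

# Statistical definition: covariance G = <XY> - <X><Y>, then normalise.
G = np.mean(x * y) - x.mean() * y.mean()
g = G / (x.std() * y.std())

print(f"g1 = {g1:.4f}, g = {g:.4f}")   # g1 != g here, since <X>, <Y> != 0
print(np.corrcoef(x, y)[0, 1])         # NumPy's Pearson correlation matches g
```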

So, how do we reconcile these similar yet different definitions? More importantly, how do you know which is the right one to use in order to tell how correlated two sets of data, taken from the expectation values of two different measurements of a signal, are?

Initially, it seemed like the former might apply to operators and matrices in physics, while the latter applies to variables in statistics. However, there are a couple of issues with this. The first is that in taking the expectation of an operator, it becomes a continuous variable, which can be treated statistically. The second is that I sometimes see the first definition of correlation, $G^{(1)}$, used alongside the definition of covariance, $G$, which is, if nothing else, a confusing mixing of definitions.

  1. Yes, I know, starting a maths question with something about physics... Bear with me? Please?


Comments

The latter is the general definition of correlation. However, since in optics we deal with fields having zero time average, because they are sinusoidal in time, we have $\langle X \rangle = \langle Y \rangle = 0$, so we get the former definition. MathPhysics 11 days ago
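(A quick numerical check of this zero-mean claim; the sinusoidal construction is my own, not part of the comment:)

```python
import numpy as np

# Zero-time-average 'fields': a sinusoid and a phase-shifted copy.
t = np.linspace(0.0, 2 * np.pi, 10_000, endpoint=False)
X = np.cos(t)
Y = np.cos(t + 0.3)

g1 = np.mean(X * Y) / np.sqrt(np.mean(X**2) * np.mean(Y**2))
g = (np.mean(X * Y) - X.mean() * Y.mean()) / (X.std() * Y.std())

print(np.isclose(g1, g))   # True: with <X> = <Y> = 0 the definitions coincide
```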

See my comment to this answer - we're generally not time averaging anything - the expectation values are taken at specific spatial and temporal locations. Mithrandir24601 11 days ago

In the definition of coherence in optics we always time average fields. The definition of higher-order coherences is a different subject; they are not correlations of some variables. They are defined in a way analogous to the first-order one. MathPhysics 11 days ago

I'm not sure who told you that, but we really don't - $g^{(1)}$ is useful because it doesn't average over time (or frequency, if that's your thing). In classical physics, $g^{(2)}$ is intensity correlations. Anyway, this question is explicitly about the maths. You're trying to 'frame challenge' my argument, but I'm entirely convinced these are both valid definitions, and my question still holds. Mithrandir24601 11 days ago

According to the ergodic hypothesis in statistical physics, the time average of a process is the same as the average over the statistical ensemble. So, in the definition of the degree of first-order coherence we have $\langle E(t_1) E(t_2) \rangle = \frac{1}{T} \int_{T} E(t_1)E(t_2) \, dt_1$. MathPhysics 10 days ago
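(As a toy illustration of that ergodic statement, under my own assumed model of a random-phase sinusoid, the time average of $E^2$ over a long window agrees with the average of $E^2$ over an ensemble of phases:)

```python
import numpy as np

rng = np.random.default_rng(1)

# One realisation E(t) = cos(t + phi): average E^2 over a long time window.
t = np.linspace(0.0, 200 * np.pi, 100_000, endpoint=False)
time_avg = np.mean(np.cos(t + rng.uniform(0, 2 * np.pi)) ** 2)

# Ensemble: average E^2 at a fixed time over many random phases.
phis = rng.uniform(0, 2 * np.pi, size=100_000)
ensemble_avg = np.mean(np.cos(1.234 + phis) ** 2)

print(time_avg, ensemble_avg)   # both come out close to 0.5
```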

