ParaMonte Fortran 2.0.0
Parallel Monte Carlo and Machine Learning Library
See the latest version documentation.
pm_sampleCCF Module Reference

This module contains classes and procedures for computing properties related to the cross correlation of random samples. More...

Data Types

interface  getACF
 Generate and return the auto-correlation function (ACF) \((f\star f)(\tau)\) of the discrete signal \(f\) lagging itself for a range of lags.
interface  getCCF
 Generate and return the cross-correlation function (CCF) \((f\star g)(\tau)\) of the discrete signal \(g\) lagging the discrete signal \(f\) for a range of specified lags.
interface  setACF
 Return the auto-correlation function (ACF) \((f\star f)(\tau)\) of the discrete signal \(f\) lagging itself for a range of lags that spans the sequence length.
interface  setCCF
 Return the cross-correlation function (CCF) \((f\star g)(\tau)\) of the discrete signal \(g\) lagging the discrete signal \(f\) for a range of lags that spans the maximum of the lengths of the two sequences.


character(*, SK), parameter MODULE_NAME = "@pm_sampleCCF"

Detailed Description

This module contains classes and procedures for computing properties related to the cross correlation of random samples.

Cross Correlation

Cross-correlation is a measure of similarity of two series as a function of the displacement of one relative to the other.
This is also known as a sliding dot product or sliding inner-product.
It is commonly used for searching a long signal for a shorter, known feature.
It has applications in pattern recognition, single particle analysis, electron tomography, averaging, cryptanalysis, and neurophysiology.
The cross-correlation is similar in nature to the convolution of two functions.
In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero, and its size will be the signal energy.
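The last point is easy to confirm numerically. The following NumPy sketch (the signal f is an arbitrary made-up example, not part of this module's API) shows that the autocorrelation peaks at zero lag with the value of the signal energy:

```python
import numpy as np

# Hypothetical example signal; any real or complex sequence works.
f = np.array([1.0, -2.0, 3.0, 0.5])

# Full autocorrelation over all lags; for real f, np.correlate matches
# (f star f)(tau) laid out with lag 0 at the center of the output.
acf = np.correlate(f, f, mode="full")

zero_lag = acf[len(f) - 1]           # center of the "full" output is lag 0
energy = np.sum(np.abs(f) ** 2)      # signal energy

assert np.isclose(zero_lag, energy)         # peak at lag 0 equals the energy
assert np.all(acf <= zero_lag + 1e-12)      # and it is the global maximum
```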

The term cross-correlation also refers to the correlations between the entries of two random vectors \(\mathbf{X}\) and \(\mathbf {Y}\), while the correlations of a random vector \(\mathbf{X}\) are the correlations between the entries of \(\mathbf{X}\) itself, which form the correlation matrix of \(\mathbf{X}\).
If each of \(\mathbf{X}\) and \(\mathbf{Y}\) is a scalar random variable which is realized repeatedly in a time series, then the correlations of the various temporal instances of \(\mathbf{X}\) are known as autocorrelations of \(\mathbf{X}\), and the cross-correlations of \(\mathbf{X}\) with \(\mathbf{Y}\) across time are temporal cross-correlations.
In probability and statistics, the definition of correlation always includes a standardizing factor in such a way that correlations have values between −1 and +1.


If \(X\) and \(Y\) are two independent random variables with probability density functions \(f\) and \(g\), respectively, then the probability density of the difference \(Y - X\) is formally given by the cross-correlation \(f\star g\).
Equivalently, the convolution \(f * g\) (equivalent to the cross-correlation of \(f(t)\) and \(g(-t)\)) gives the probability density function of the sum \(X + Y\).
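The discrete analogue of this statement can be checked directly. The sketch below uses two made-up probability mass functions (the values are illustrative assumptions, not from this module) and verifies that the cross-correlation of the pmfs yields the distribution of the difference, while their convolution yields the distribution of the sum:

```python
import numpy as np

# Hypothetical pmfs of two independent discrete variables X and Y,
# both supported on {0, 1, 2} (made-up values for illustration).
f = np.array([0.2, 0.5, 0.3])   # pmf of X
g = np.array([0.1, 0.6, 0.3])   # pmf of Y

# pmf of the difference Y - X is the cross-correlation (f star g)(d);
# output index k of the "full" mode corresponds to d = k - (len(f) - 1).
diff_pmf = np.correlate(g, f, mode="full")

# Brute-force check by enumerating all (x, y) pairs.
check = np.zeros(len(f) + len(g) - 1)
for x, px in enumerate(f):
    for y, py in enumerate(g):
        check[y - x + len(f) - 1] += px * py

assert np.allclose(diff_pmf, check)
assert np.isclose(diff_pmf.sum(), 1.0)   # a valid pmf

# pmf of the sum X + Y is the convolution f * g.
sum_pmf = np.convolve(f, g)
assert np.isclose(sum_pmf.sum(), 1.0)
```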


For continuous functions \(f\) and \(g\), the cross-correlation is defined as:

\begin{equation} (f\star g)(\tau) = \int_{-\infty }^{\infty }{\overline {f(t)}}g(t+\tau )\,dt \end{equation}

which is equivalent to,

\begin{equation} (f\star g)(\tau ) = \int_{-\infty }^{\infty }{\overline {f(t-\tau )}}g(t)\, dt \end{equation}

where \(\overline{f(t)}\) denotes the complex conjugate of \(f(t)\), and \(\tau\) is called displacement or lag.
For highly correlated \(f\) and \(g\) whose cross-correlation attains a maximum at a particular \(\tau\), a feature in \(f\) at \(t\) also occurs later in \(g\) at \(t + \tau\); hence \(g\) is said to lag \(f\) by \(\tau\).
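This lag interpretation can be demonstrated numerically. In the sketch below (a made-up, zero-padded shifted random signal; not the getCCF interface of this module), g reproduces every feature of f five steps later, and the discrete CCF indeed peaks at that lag:

```python
import numpy as np

rng = np.random.default_rng(0)       # made-up example signal, fixed seed
f = rng.standard_normal(64)
true_lag = 5
g = np.concatenate([np.zeros(true_lag), f])   # each feature of f recurs in g, 5 steps later

# (f star g)(tau) = sum over t of conj(f[t]) g[t + tau];
# np.correlate(g, f, "full") lays this out with tau = k - (len(f) - 1)
# at output index k.
ccf = np.correlate(g, f, mode="full")
best_lag = int(ccf.argmax()) - (len(f) - 1)

assert best_lag == true_lag          # g lags f by exactly tau = 5
```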


  1. The cross-correlation of functions \(f(t)\) and \(g(t)\) is equivalent to the convolution (denoted by \(*\)) of \(\overline{f(-t)}\) and \(g(t)\).
    That is:

    \begin{equation} [f(t)\star g(t)](t) = [{\overline{f(-t)}} * g(t)](t) ~. \end{equation}

  2. \([f(t)\star g(t)](t) = [{\overline{g(t)}}\star{\overline {f(t)}}](-t)\).
  3. If \(f\) is a Hermitian function, then \(f\star g = f*g\).
  4. If both \(f\) and \(g\) are Hermitian, then \(f\star g = g\star f\).
  5. \(\left(f\star g\right)\star \left(f\star g\right) = \left(f\star f\right)\star \left(g\star g\right)\).
  6. Analogous to the convolution theorem, the cross-correlation satisfies,

    \begin{equation} \mathcal{F} \left\{f\star g\right\} = {\overline{{\mathcal {F}}\left\{f\right\}}} \cdot {\mathcal{F}}\left\{g\right\} ~, \end{equation}

    where \({\mathcal{F}}\) denotes the Fourier transform, and an \({\overline {f}}\) again indicates the complex conjugate of \(f\), since \(\mathcal{F}\left\{{\overline {f(-t)}}\right\} = {\overline{{\mathcal {F}}\left\{f(t)\right\}}}\).
    Coupled with fast Fourier transform algorithms, this property is often exploited for the efficient numerical computation of cross-correlations.
  7. The cross-correlation is related to the spectral density (see Wiener–Khinchin theorem).
  8. The cross-correlation of a convolution of \(f\) and \(h\) with a function \(g\) is the convolution of the cross-correlation of \(g\) and \(f\) with the kernel \(h\):

    \begin{equation} g\star \left(f * h\right) = \left(g\star f\right) * h. \end{equation}
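Property 6, the correlation theorem, is easy to verify numerically for circular cross-correlation. The sketch below uses two made-up complex sequences (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(1)   # made-up complex test sequences
n = 8
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Left side: circular cross-correlation by direct summation,
# (f star g)(tau) = sum over t of conj(f[t]) g[(t + tau) mod n].
direct = np.array([np.sum(np.conj(f) * np.roll(g, -tau)) for tau in range(n)])

# Right side: inverse FFT of conj(F{f}) . F{g}, per the correlation theorem.
via_fft = np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g))

assert np.allclose(direct, via_fft)
```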


The fastest methods for computing the cross-correlation of large sequences rely on the fast Fourier transform (FFT) and the correlation/convolution theorems.

  1. Convert the input signals \(f\) and \(g\) to the complex type.
  2. Pad the input signals \(f\) and \(g\) with zeros so that each sequence reaches the minimum length size(f) + size(g) - 1.
    Such padding ensures that the periodic nature of the FFT method does not contaminate the computed correlations by overlapping the sequences circularly.
  3. Compute the FFT of both padded signals and multiply the result of the first FFT, element-wise, by the complex conjugate of the second.
  4. Compute the inverse FFT of the resulting elemental multiplication.

The above steps can be summarized as getFFTI(getFFTF(getPaddedl(f, size(f) + size(g) - 1, (0., 0.))) * conjg(getFFTF(getPaddedl(g, size(f) + size(g) - 1, (0., 0.))))), assuming both f and g are already of type complex.
The resulting slice ccf(1 : size(f)) contains the CCF for lags [(lag, lag = 0, size(f) - 1)] and the slice ccf(size(f) + 1 : size(f) + size(g) - 1) contains the CCF for lags [(lag, lag = 1 - size(g), -1)].
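The recipe above can be sketched in NumPy as follows; this is an illustrative reimplementation, not the ParaMonte routines named above, and np.fft.fft(x, n) pads x with trailing zeros to length n before transforming. The sketch also checks the stated slice layout against direct summation:

```python
import numpy as np

def ccf_via_fft(f, g):
    # FFT of the first signal times the conjugate of the FFT of the second,
    # both zero-padded to the minimum length len(f) + len(g) - 1.
    n = len(f) + len(g) - 1
    return np.fft.ifft(np.fft.fft(f, n) * np.conj(np.fft.fft(g, n)))

rng = np.random.default_rng(2)       # made-up example sequences
f = rng.standard_normal(6)
g = rng.standard_normal(4)
n = len(f) + len(g) - 1
ccf = ccf_via_fft(f, g).real         # real inputs give a real CCF

# Direct summation at lag tau (conjugate on g, so with this ordering a
# feature of g at t recurs in f at t + tau): sum over t of g[t] * f[t + tau].
def direct(tau):
    return sum(g[t] * f[t + tau] for t in range(len(g)) if 0 <= t + tau < len(f))

# ccf[:len(f)] holds lags 0 .. len(f)-1 ...
for k in range(len(f)):
    assert np.isclose(ccf[k], direct(k))
# ... and ccf[len(f):] holds lags 1-len(g) .. -1.
for m in range(1, len(g)):
    assert np.isclose(ccf[n - m], direct(-m))
```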

The convolution theorem

The convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms.
More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain).
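A minimal numeric check of the theorem, using two short made-up sequences:

```python
import numpy as np

# Two short made-up sequences.
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0, 2.0, 4.0])

n = len(f) + len(g) - 1          # length of the linear convolution
# Pointwise product of the (zero-padded) FFTs, transformed back:
via_fft = np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(g, n)).real

# Agrees with direct linear convolution.
assert np.allclose(via_fft, np.convolve(f, g))
```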

See also
Cross Correlation

Final Remarks

If you believe this algorithm or its documentation can be improved, we appreciate your contribution and help to edit this page's documentation and source file on GitHub.
For details on the naming abbreviations, see this page.
For details on the naming conventions, see this page.
This software is distributed under the MIT license with additional terms outlined below.

  1. If you use any parts or concepts from this library to any extent, please acknowledge the usage by citing the relevant publications of the ParaMonte library.
  2. If you regenerate any parts/ideas from this library in a programming environment other than those currently supported by this ParaMonte library (i.e., other than C, C++, Fortran, MATLAB, Python, R), please also ask the end users to cite this original ParaMonte library.

This software is available to the public under a highly permissive license.
Help us justify its continued development and maintenance by acknowledging its benefit to society, distributing it, and contributing to it.

Amir Shahmoradi, Tuesday 01:45 AM, August 21, 2018, Dallas, TX

Variable Documentation


character(*, SK), parameter pm_sampleCCF::MODULE_NAME = "@pm_sampleCCF"

Definition at line 156 of file pm_sampleCCF.F90.