First, let us recall a standard result from linear algebra: real symmetric matrices are diagonalizable by orthogonal matrices. Thus, any variance-covariance matrix $\boldsymbol{\Sigma}$, being symmetric and positive semi-definite, can be written

$$\boldsymbol{\Sigma}=\boldsymbol{P}\,\boldsymbol{\Lambda}\,\boldsymbol{P}^{\mathsf{T}}$$

where $\boldsymbol{\Lambda}=\text{diag}(\lambda_1,\dots,\lambda_d)$ contains the (non-negative) eigenvalues, and $\boldsymbol{P}$ is an orthogonal matrix, $\boldsymbol{P}\boldsymbol{P}^{\mathsf{T}}=\mathbb{I}$.
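As a quick numerical sanity check (a sketch on a small simulated dataset, so all names below are arbitrary), we can verify that the eigen decomposition recovers the variance matrix,

```
set.seed(1)
Z=matrix(rnorm(200),100,2)
S=cov(Z)                       # a symmetric, positive semi-definite matrix
e=eigen(S)
P=e$vectors                    # orthogonal: P %*% t(P) is the identity
Lambda=diag(e$values)          # diagonal matrix of (non-negative) eigenvalues
max(abs(P%*%Lambda%*%t(P)-S))  # numerically zero
```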
In the context of Gaussian random vectors (or more generally elliptical distributions), we can write

$$\boldsymbol{X}=\boldsymbol{\mu}+\boldsymbol{P}\boldsymbol{\Lambda}^{1/2}\boldsymbol{Z},\qquad\text{where }\boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{0},\mathbb{I}).$$
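As a simulation sketch (the covariance matrix below is arbitrary): setting $\boldsymbol{A}=\boldsymbol{P}\boldsymbol{\Lambda}^{1/2}$, the vector $\boldsymbol{A}\boldsymbol{Z}$ has variance matrix $\boldsymbol{A}\boldsymbol{A}^{\mathsf{T}}=\boldsymbol{\Sigma}$,

```
set.seed(1)
Sigma=matrix(c(1,.5,.5,2),2,2)
e=eigen(Sigma)
A=e$vectors%*%diag(sqrt(e$values))  # A %*% t(A) = Sigma
Z=matrix(rnorm(2*1e5),2)            # i.i.d. standard Gaussian components
X=A%*%Z                             # zero mean, for simplicity
cov(t(X))                           # close to Sigma
```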
The idea in factor models is that a simplified version of the diagonal matrix can be considered,

$$\boldsymbol{\Lambda}\approx\text{diag}(\lambda_1,\dots,\lambda_m,0,\dots,0)$$

for some $m<d$, assuming that the eigenvalues were sorted, $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_d\geq 0$.
The idea is then to write the expression above as

$$\boldsymbol{X}\approx\boldsymbol{\mu}+\sum_{j=1}^{m}\sqrt{\lambda_j}\,\boldsymbol{P}_j Z_j+\boldsymbol{\varepsilon}$$

where only the $m$ largest eigenvalues are kept. This can also be written

$$\boldsymbol{X}=\boldsymbol{\mu}+\boldsymbol{B}\boldsymbol{F}+\boldsymbol{\varepsilon}$$

where the so-called factors $\boldsymbol{F}$ are assumed to be orthogonal, i.e. uncorrelated. Thus, the components of $\boldsymbol{X}$ are driven by those factors, and the remaining term $\boldsymbol{\varepsilon}$ is called (in finance) the idiosyncratic component.
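To illustrate, here is a sketch of a one-factor (CAPM-like) model on simulated data, where all the correlation between components comes from the common factor (the loadings and the noise level below are arbitrary),

```
set.seed(1)
n=1e4; d=5
F=rnorm(n)                        # the common (market) factor
B=runif(d,.5,1.5)                 # factor loadings
eps=matrix(rnorm(n*d,sd=.3),n,d)  # idiosyncratic components
X=outer(F,B)+eps                  # X = B F + eps (zero mean, one factor)
round(cor(X),2)                   # off-diagonal terms driven by the factor
```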
This technique is extremely popular in finance, to model returns of multiple stocks, from the capital asset pricing model (CAPM, Sharpe (1964) or Mossin (1966)) - with one factor (the so-called market) - to the arbitrage pricing theory (APT, Ross (1976)). For instance, with the following code, we can extract prices of 35 French stocks,
```
library(tseries)
# file with stock names and codes (URL truncated in the original)
code=read.table(
"http://perso.univ-rennes1.fr/arthur.charpentier/")
code$Nom=as.character(code$Nom)
code$Code=as.character(code$Code)
code=code[-8,]
# download the closing price series of each stock
i=1
X<-get.hist.quote(code$Code[i])
Xc=X$Close
for(i in 2:nrow(code)){
x<-get.hist.quote(code$Code[i])
xc=x$Close
Xc=merge(Xc,xc)}
```
It is natural to consider log-returns, and their correlations,
```
R=diff(log(Xc))
colnames(R)=code$Code
correlation=matrix(NA,ncol(R),ncol(R))
colnames(correlation)=code$Code
rownames(correlation)=code$Code
# pairwise-complete correlations (the series have missing values)
for(i in 1:ncol(R)){
for(j in 1:ncol(R)){
I=!is.na(R[,i])&!is.na(R[,j])
correlation[i,j]=cor(R[I,i],R[I,j])
}}
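Note that the double loop above computes pairwise-complete correlations, which is exactly what cor() returns with use="pairwise.complete.obs"; a small sketch on simulated data with missing values,

```
set.seed(1)
R=matrix(rnorm(300),100,3)
R[sample(1:300,20)]=NA                 # inject some missing values
C1=cor(R,use="pairwise.complete.obs")
C2=matrix(NA,3,3)                      # manual pairwise loop, as above
for(i in 1:3){ for(j in 1:3){
I=!is.na(R[,i])&!is.na(R[,j])
C2[i,j]=cor(R[I,i],R[I,j])
}}
max(abs(C1-C2))                        # the two matrices are identical
```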
library(corrgram)
corrgram(correlation, order=NULL)
```
The eigenvalues and eigenvectors of that correlation matrix can then be extracted,
```
L=eigen(correlation)
```
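From that eigen decomposition, the share of variance captured by the leading factors is given by the normalized eigenvalues; here is a self-contained sketch on simulated returns (with real data, the `correlation` matrix computed above would be used instead),

```
set.seed(1)
X=matrix(rnorm(500),100,5)
X[,2]=X[,1]+rnorm(100,sd=.5)   # make two series strongly correlated
L=eigen(cor(X))
prop=L$values/sum(L$values)    # share of variance per component
round(cumsum(prop),3)          # cumulative share explained
```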