# Introduction to PCA and multivariate data
Multivariate data is defined as a set of (multiple) response variables measured on the same set of samples. In principle these response variables can be of any nature (continuous, ordinal, binary), but PCA is mostly used for the analysis of continuous variables.
In this book Principal Component Analysis (PCA) is used several times. This chapter briefly explains the theory behind PCA and the interpretation of the relevant plots.
PCA is a tool for examining the correlation structure between variables and groupings of samples, _all through visualizations_.
YouTube offers many introductions to the subject.
A conceptual introduction is given here:
```{r, echo=FALSE}
vembedr::embed_youtube("NFIkD9-MuTY")
```
## A bit of math
The multivariate dataset is organized in a matrix $\mathbf{X}$ with $n$ samples (rows) and $p$ variables (columns).
This matrix is factorized into so-called scores ($\mathbf{T}$) and loadings ($\mathbf{P}$).
$$ \mathbf{X} = \mathbf{T}\mathbf{P}^T + \mathbf{E} $$
This estimation is done such that the residuals ($\mathbf{E}$) are minimized in a least squares sense.
The upside of using PCA is that we can characterize the sample distribution (what is similar and what is different) using plots of $\mathbf{T}$, and the correlation of the responses using plots of $\mathbf{P}$.
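To make the factorization concrete, here is a minimal sketch verifying that the scores and loadings returned by `prcomp()` reconstruct $\mathbf{X}$ (using R's built-in `USArrests` data, since the dataset for this chapter is first introduced below):
```{r}
# Minimal sketch: X = T P' + E, where E is numerically zero
# when all components are retained.
X <- scale(USArrests)              # center and scale the built-in data
pca <- prcomp(X)                   # fit the PCA
Xhat <- pca$x %*% t(pca$rotation)  # scores (T) times transposed loadings (P')
max(abs(X - Xhat))                 # ~ 0: the reconstruction is exact
```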
In addition to PCA there exists a range of methods for factorizing multivariate datasets, including Partial Least Squares (PLS), Confirmatory Factor Analysis (CFA), Correspondence Analysis (CA), Redundancy Analysis (RA), and many more.
Conceptually, all these methods aim to build a latent-factor model just like PCA, so understanding the idea behind PCA, and especially how it is used, opens the door to a wide range of variants.
```{r, include=FALSE, eval=FALSE}
#[Question to Aasmund: What should we extend with? - another video?]
```
## Interpreting model output
For calculating a PCA model, we will use the dataset _beef_. It is a sensory descriptive profile of vacuum-packed beef steaks heat-treated in water baths at different temperatures for different times (sous vide), in total 12 combinations. The samples are coded as e.g. 56-03, meaning the sample has been treated at 56 degrees for 3 hours. There are 10 assessors in the panel, four sensory repetitions, and 22 attributes (A-Brownsurface to T-Salt).
The first 3 columns of the dataset are used for the design.
The PCA is computed on the response variables _only_.
These can be selected using the square brackets []. Subsetting (choosing only a part of a dataset) is done by specifying the following inside the []:
A dataset often has two dimensions, rows and columns. In this case, the [] takes two inputs: which rows you want and which columns you want, separated by a comma. This means that if you write `dataset[1:4, 2:6]`, the result will be rows 1-4 of columns 2-6. If nothing is written on one side of the comma, all of the rows/columns are selected.
```{r}
library(data4consumerscience)
data(beef)
# PCA on the 22 sensory attributes (columns 4-25), centered and autoscaled
mdlPCA <- prcomp(beef[,4:25], scale. = T, center = T)
```
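As a quick check of the subsetting described above, here are rows 1-4 of columns 2-6 of the _beef_ data (the range is chosen purely for illustration):
```{r}
# Rows 1-4 of columns 2-6, selected with [rows, columns]
beef[1:4, 2:6]
```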
### Biplot
The *ggbiplot* package (and function of the same name) is needed to plot the PCA model.
It can be installed by:
```{r, eval = F, include=T}
install.packages("devtools")
devtools::install_github('vqv/ggbiplot')
```
```{r, message=FALSE}
ggbiplot::ggbiplot(mdlPCA)
```
The PCA tries to capture the maximum amount of structure in the data in a few dimensions called principal components (PC1, PC2, ...).
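How much of the total variation each component captures can be read directly from the fitted model; a minimal sketch using base R's `summary()` (showing the first four components only):
```{r}
# Standard deviation, proportion of variance, and cumulative proportion
summary(mdlPCA)$importance[, 1:4]
```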
The arrows and labels reflect the correlation structure between the responses. I.e. we see that the _Bouillon_ characteristics are correlated and explain PC2, while textural properties like _Rubberband_ and _Chewtime_ are correlated with each other, inversely correlated with _Tender_, and collectively explain PC1.
We can decorate the scores with design information, such as beef type as well as judge:
```{r, message=FALSE, warning=FALSE}
ggbiplot::ggbiplot(mdlPCA, groups = beef$ProductName, ellipse = T)
```
Although this plot is a bit messy, we see that a higher temperature promotes a higher position on PC2, i.e. moving from bloody towards bouillon, while PC1 to some extent captures trends in cooking time.
```{r}
ggbiplot::ggbiplot(mdlPCA, groups = beef$Assessor, ellipse = T)
```
Coloring according to judge reveals that judge A04 is low in range (variance), and hence does not utilize the entire response scale; A03 tends to rate higher on _rubberband_, _chewtime_, _bloodmetal_, etc., especially compared to A10, who rates higher on _bouillon_ / PC2.
This plot indicates how well trained the panel is, as optimally the individual judges should have only a small impact.
NOTE: If the **ggbiplot()** output needs modification, for instance if the loading labels exceed the plot boundaries or are too small, look into ggbiplot arguments such as `varname.size`. Normal *ggplot2* functionality like `xlim()`, `ylim()` or `theme()` can also help to modify the plot.
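For instance, a small sketch of such modifications (the argument values are illustrative only):
```{r, eval=FALSE}
# Illustrative only: shrink the loading labels and widen the x-axis
ggbiplot::ggbiplot(mdlPCA, varname.size = 3) +
  ggplot2::xlim(-4, 4) +
  ggplot2::theme_bw()
```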