Matrix norms extend vector norms to matrices and provide a measure of distance on the space of matrices. The most commonly occurring matrix norms in matrix analysis are the Frobenius, \(L_1\), \(L_2\) and \(L_\infty\) norms. This post investigates these norms, along with Python implementations for calculating them.
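As a quick, minimal sketch of how these norms can be computed with NumPy (the matrix below is an arbitrary example, not one from the post):

```python
import numpy as np

# Arbitrary example matrix for illustration.
A = np.array([[2.0, -1.0, 3.0],
              [0.0,  4.0, 1.0],
              [5.0, -2.0, 2.0]])

# Frobenius norm: square root of the sum of squared entries.
fro = np.linalg.norm(A, ord='fro')

# L1 norm: maximum absolute column sum.
l1 = np.linalg.norm(A, ord=1)

# L2 (spectral) norm: largest singular value of A.
l2 = np.linalg.norm(A, ord=2)

# L-infinity norm: maximum absolute row sum.
linf = np.linalg.norm(A, ord=np.inf)

print(fro, l1, l2, linf)
```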
Vector Norms and Inequalities with Python
Just as the distance between two real scalars is measured on the real line, vector norms give us a sense of the distance or magnitude of a vector. In fact, a vector with a single component is simply a scalar. Norms appear frequently in regularization methods and other machine learning procedures, as well as in many matrix and vector operations in linear algebra.
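A short NumPy sketch of the common vector norms and a check of the triangle inequality, \(\lVert x + y \rVert \le \lVert x \rVert + \lVert y \rVert\), using arbitrary example vectors:

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])
y = np.array([2.0,  0.5, -2.0])

# Common vector norms of x.
l1 = np.linalg.norm(x, ord=1)         # sum of absolute values
l2 = np.linalg.norm(x)                # Euclidean norm (the default)
linf = np.linalg.norm(x, ord=np.inf)  # largest absolute component

# Triangle inequality: ||x + y|| <= ||x|| + ||y||
lhs = np.linalg.norm(x + y)
rhs = np.linalg.norm(x) + np.linalg.norm(y)

print(l1, l2, linf)
print(lhs <= rhs)  # True
```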
Quadratic Discriminant Analysis of Several Groups
Quadratic discriminant analysis for classification is a modification of linear discriminant analysis that does not assume equal covariance matrices amongst the groups (\(\Sigma_1, \Sigma_2, \cdots, \Sigma_k\)). Similar to LDA for several groups, quadratic discriminant analysis for several-group classification assigns the observation vector \(y\) to the group that maximizes the quadratic classification function.
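As a hedged illustration of the idea, the sketch below uses scikit-learn's QuadraticDiscriminantAnalysis and the iris data rather than the dataset analyzed in the post:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Iris has three groups, so a separate covariance matrix is estimated per group.
X, y = load_iris(return_X_y=True)

qda = QuadraticDiscriminantAnalysis()
qda.fit(X, y)

# Each observation is assigned to the group with the highest
# quadratic classification (discriminant) score.
print(qda.predict(X[:5]))
print(qda.score(X, y))  # in-sample classification accuracy
```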
Quadratic Discriminant Analysis of Two Groups
LDA assumes the groups in question have equal covariance matrices (\(\Sigma_1 = \Sigma_2 = \cdots = \Sigma_k\)). When the groups do not have equal covariance matrices, observations are frequently assigned to the groups whose covariance matrices have large variances on the diagonal (Rencher, n.d., p. 321). Quadratic discriminant analysis is a modification of LDA that does not assume equal covariance matrices amongst the groups. In quadratic discriminant analysis, the covariance matrix \(S_i\) of the \(i^{th}\) group is employed in predicting the group membership of an observation, rather than the pooled covariance matrix \(S_{pl}\) used in linear discriminant analysis.
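A minimal NumPy sketch of a quadratic classification score that uses each group's own covariance matrix \(S_i\), assuming equal prior probabilities and synthetic two-group data (an illustration only, not the post's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-group data with unequal covariance structure (illustration only).
g1 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=50)
g2 = rng.multivariate_normal([2, 2], [[3.0, -0.5], [-0.5, 2.0]], size=50)

def quadratic_score(y, group):
    """Quadratic classification score using the group's own covariance matrix S_i,
    assuming equal prior probabilities."""
    ybar = group.mean(axis=0)
    S = np.cov(group, rowvar=False)
    diff = y - ybar
    return -0.5 * np.log(np.linalg.det(S)) - 0.5 * diff @ np.linalg.solve(S, diff)

# Assign a new observation to the group with the larger score.
y_new = np.array([1.5, 1.0])
scores = [quadratic_score(y_new, g) for g in (g1, g2)]
print(int(np.argmax(scores)) + 1)  # predicted group (1 or 2)
```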
Linear Discriminant Analysis for the Classification of Several Groups
Similar to the two-group linear discriminant analysis classification case, LDA for classification into several groups seeks the mean vector to which a new observation \(y\) is closest, as measured by a distance function, and assigns \(y\) to that group. The several-group case also assumes equal covariance matrices amongst the groups (\(\Sigma_1 = \Sigma_2 = \cdots = \Sigma_k\)).
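A rough NumPy sketch of this rule on synthetic three-group data, assuming equal priors: each observation is assigned to the group whose mean is closest under the pooled-covariance distance.

```python
import numpy as np

def pooled_covariance(groups):
    """Pooled within-group covariance matrix S_pl."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    pooled = sum((len(g) - 1) * np.cov(g, rowvar=False) for g in groups)
    return pooled / (n_total - k)

def classify(y, groups):
    """Assign y to the group minimizing (y - ybar_i)' S_pl^{-1} (y - ybar_i),
    which is equivalent to maximizing the linear classification function
    under equal prior probabilities."""
    S_pl = pooled_covariance(groups)
    dists = []
    for g in groups:
        diff = y - g.mean(axis=0)
        dists.append(diff @ np.linalg.solve(S_pl, diff))
    return int(np.argmin(dists))

# Illustration with synthetic three-group data.
rng = np.random.default_rng(1)
groups = [rng.normal(loc=m, scale=1.0, size=(30, 2)) for m in (0.0, 2.0, 4.0)]
print(classify(np.array([1.8, 2.1]), groups))  # most likely group index 1
```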
Linear Discriminant Analysis for the Classification of Two Groups
In this post, we will use the discriminant functions found in the first post to classify the observations. We will also employ cross-validation on the predicted groups to get a realistic sense of how the model would perform in practice on new observations. Linear classification analysis assumes the populations have equal covariance matrices (\(\Sigma_1 = \Sigma_2\)) but does not assume the data are normally distributed.
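As an illustrative sketch of the cross-validation step (using scikit-learn and the iris data rather than the post's dataset), a cross-validated accuracy estimate can be obtained like this:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

lda = LinearDiscriminantAnalysis()

# Cross-validated accuracy gives a more realistic estimate of performance
# on new observations than in-sample classification accuracy.
scores = cross_val_score(lda, X, y, cv=5)
print(scores.mean())
```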
Discriminant Analysis for Group Separation
Discriminant analysis assumes the two samples or populations being compared have the same covariance matrix \(\Sigma\) but distinct mean vectors \(\mu_1\) and \(\mu_2\) with \(p\) variables. The discriminant function that maximizes the separation of the groups is a linear combination of the \(p\) variables. The linear combination, denoted \(z = a'y\), transforms an observation vector into a scalar. The discriminant function thus takes the form \(z = a'y\), where the coefficient vector \(a = S_{pl}^{-1}(\bar{y}_1 - \bar{y}_2)\) maximizes the separation between the two groups.
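A small NumPy sketch of computing this coefficient vector on synthetic two-group data (for illustration only):

```python
import numpy as np

def discriminant_coefficients(g1, g2):
    """Two-group discriminant coefficient vector a = S_pl^{-1} (ybar1 - ybar2)."""
    n1, n2 = len(g1), len(g2)
    S_pl = ((n1 - 1) * np.cov(g1, rowvar=False) +
            (n2 - 1) * np.cov(g2, rowvar=False)) / (n1 + n2 - 2)
    return np.linalg.solve(S_pl, g1.mean(axis=0) - g2.mean(axis=0))

# Synthetic two-group data for illustration.
rng = np.random.default_rng(2)
g1 = rng.normal(loc=0.0, size=(40, 3))
g2 = rng.normal(loc=1.0, size=(40, 3))

a = discriminant_coefficients(g1, g2)
z = g1 @ a  # z = a'y transforms each observation vector into a scalar
print(a)
```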
Discriminant Analysis of Several Groups
Discriminant analysis is also applicable in the case of more than two groups. In the first post on discriminant analysis, there was only one linear discriminant function, as the number of linear discriminant functions is \(s = \min(p, k - 1)\), where \(p\) is the number of dependent variables and \(k\) is the number of groups. In the case of more than two groups, there will be more than one linear discriminant function, which allows us to examine the groups' separation in more than one dimension.
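The sketch below illustrates this count with scikit-learn and the iris data (\(p = 4\), \(k = 3\), so \(s = 2\)); it is an illustration, not the post's R code:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)  # p = 4 variables, k = 3 groups

lda = LinearDiscriminantAnalysis()
Z = lda.fit_transform(X, y)

# The number of discriminant functions is s = min(p, k - 1) = min(4, 2) = 2,
# so the observations are projected into two discriminant dimensions.
print(Z.shape)  # (150, 2)
```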
Factor Analysis with the Iterated Factor Method and R
The iterated principal factor method is an extension of the principal factor method in which the communality estimates are updated from the estimated factor loadings and the factorization is repeated until the communalities converge.
Factor Analysis with the Principal Component Method and R Part Two
In the first post on factor analysis, we examined computing the estimated covariance matrix \(S\) of the rootstock data and proceeded to find two factors that account for most of the variance in the data. However, the variables in the data are not on the same scale of measurement, which can cause variables with comparatively large variances to dominate the diagonal of the covariance matrix and the resulting factors. It therefore makes more intuitive sense to employ the correlation matrix in factor analysis.
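A minimal NumPy sketch of the principal component method applied to a correlation matrix, using synthetic data with mixed scales in place of the rootstock data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 5)) * np.array([1.0, 10.0, 0.1, 5.0, 2.0])  # mixed scales

# Using the correlation matrix R rather than the covariance matrix S
# puts all variables on a common scale.
R = np.corrcoef(X, rowvar=False)

# Principal component method: loadings are eigenvectors scaled by sqrt(eigenvalues).
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

m = 2  # number of factors retained
loadings = eigvecs[:, :m] * np.sqrt(eigvals[:m])
print(loadings)
```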
Factor Analysis with the Principal Component Method and R
The goal of factor analysis, similar to principal component analysis, is to reduce the original variables to a smaller number of factors that are easier to interpret. PCA and factor analysis nevertheless differ in several respects. One difference is that principal components are defined as linear combinations of the observed variables, whereas in factor analysis the observed variables are modeled as linear combinations of underlying latent factors.
Image Compression with Principal Component Analysis
Image compression with principal component analysis reduces an image's storage requirements by retaining only the leading principal components, which capture most of the variance in the image's pixel data.
Principal Component Analysis with R Example
Often, it is not helpful or informative to simply examine all the variables in a dataset for correlations or covariances. A preferable approach is to derive new variables from the original variables that preserve most of the information given by their variances. Principal component analysis is a widely used statistical method for reducing data with many dimensions (variables) by projecting the data onto fewer dimensions using linear combinations of the variables, known as principal components.
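A bare-bones NumPy sketch of PCA via the eigendecomposition of the covariance matrix, on synthetic data (libraries such as scikit-learn provide the same result through sklearn.decomposition.PCA):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 4))
X[:, 1] += 0.8 * X[:, 0]  # introduce some correlation between two variables

# Principal components are the eigenvectors of the covariance matrix of the
# centered data; the eigenvalues give the variance explained by each component.
Xc = X - X.mean(axis=0)
S = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project the data onto the first two principal components.
scores = Xc @ eigvecs[:, :2]
print(eigvals / eigvals.sum())  # proportion of variance explained
print(scores.shape)
```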
Image Compression with Singular Value Decomposition
The method of image compression with singular value decomposition is based on retaining only the largest singular values and their corresponding singular vectors, which form a low-rank approximation of the original image that requires far less storage.
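A minimal NumPy sketch of the rank-\(k\) approximation idea, using a random matrix as a stand-in for an image:

```python
import numpy as np

# Stand-in "image": a random grayscale matrix (a real image would be loaded
# from a file, e.g. with Pillow or matplotlib).
rng = np.random.default_rng(5)
img = rng.random((64, 64))

U, s, Vt = np.linalg.svd(img, full_matrices=False)

# Keep only the k largest singular values and vectors to compress the image.
k = 10
compressed = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Relative reconstruction error of the rank-k approximation.
print(np.linalg.norm(img - compressed) / np.linalg.norm(img))
```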
Singular Value Decomposition and R Example
SVD underpins many statistical methods and real-world applications.
Cholesky Decomposition with R Example
Cholesky decomposition, also known as Cholesky factorization, is a decomposition of a symmetric, positive definite matrix into the product of a lower triangular matrix and its transpose.
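A short NumPy sketch, with an arbitrary positive definite matrix:

```python
import numpy as np

# A symmetric, positive definite matrix (illustration only).
A = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])

# Lower triangular factor L such that A = L L'.
L = np.linalg.cholesky(A)

print(L)
print(np.allclose(L @ L.T, A))  # True
```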
How to Calculate the Inverse Matrix for 2×2 and 3×3 Matrices
The inverse of a number is its reciprocal. For example, the inverse of 8 is \(\frac{1}{8}\), since \(8 \cdot \frac{1}{8} = 1\). Analogously, the inverse of a matrix \(A\) is the matrix \(A^{-1}\) that satisfies \(A A^{-1} = I\), where \(I\) is the identity matrix.
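A small sketch of the \(2 \times 2\) case, computed from the adjugate and determinant and checked against NumPy (the matrix is an arbitrary example):

```python
import numpy as np

def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the adjugate divided by the determinant."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular and has no inverse")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

print(inverse_2x2(A))
print(np.allclose(inverse_2x2(A) @ A, np.eye(2)))  # A^{-1} A = I
print(np.linalg.inv(A))  # same result from NumPy
```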
Categories
- Analysis
- Calculus
- Data Science
- Finance
- Linear Algebra
- Machine Learning
- nasapy
- petpy
- poetpy
- Python
- R
- SQL
- Statistics