Matrix inversion is an operation that has no counterpart in the vector case, and it deserves its own section. In the scalar case, when we consider standard multiplication, we observe that there is a “neutral” element for that operation, the number 1. This is a neutral element in the sense that for any x, we have x ·… Continue reading The identity matrix and matrix inversion
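The role of the identity matrix as a neutral element, and of the inverse as the matrix that “undoes” multiplication, can be sketched in plain Python for the 2×2 case (function names here are illustrative, not from the post):

```python
# A minimal sketch: the 2x2 identity matrix behaves like the number 1
# under matrix multiplication, and multiplying A by its inverse
# recovers the identity (up to floating-point rounding).

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inverse_2x2(A):
    """Invert a 2x2 matrix via the adjugate formula; det must be nonzero."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

I = [[1, 0], [0, 1]]
A = [[4, 7], [2, 6]]
assert mat_mul(A, I) == A        # I is neutral: A * I = A
A_inv = inverse_2x2(A)
print(mat_mul(A, A_inv))         # close to the identity matrix
```

For larger matrices one would not use an explicit formula like this; the Gaussian elimination discussed below is the practical route.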
Month: February 2023
Operations on matrices
Operations on matrices are defined much along the lines used for vectors. In the following, we will denote a generic element of a matrix by a lowercase letter with two subscripts; the first one refers to the row, and the second one refers to the column. So, element a_ij is the element in row i, column j. Addition: Just like… Continue reading Operations on matrices
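The row/column indexing convention and elementwise addition can be sketched with matrices stored as lists of rows, so that A[i][j] plays the role of the element a_ij (the function name is illustrative):

```python
# A small sketch of elementwise matrix addition; A[i][j] is the element
# in row i, column j, matching the a_ij convention in the text.

def mat_add(A, B):
    """Add two matrices of the same shape, element by element."""
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrices must have the same shape")
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[10, 20, 30],
     [40, 50, 60]]
print(mat_add(A, B))   # each entry is the sum of the matching entries
```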
MATRIX ALGEBRA
The solution of systems of linear equations. Many issues related to systems of linear equations can be addressed by introducing a new concept, the matrix. Matrix theory plays a fundamental role in quite a few mathematical and statistical methods that are relevant for management. We have introduced vectors as one-dimensional arrangements of numbers. A matrix is, in… Continue reading MATRIX ALGEBRA
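The link between a system of linear equations and a matrix can be made concrete with a small example (the system below is illustrative, not from the post): the coefficients form a two-dimensional arrangement of numbers, the right-hand sides a one-dimensional one.

```python
# An illustrative sketch: the system
#   2x + 3y = 8
#    x -  y = -1
# is captured by a coefficient matrix A and a right-hand-side vector b.

A = [[2, 3],
     [1, -1]]
b = [8, -1]

# Row i of A holds the coefficients of equation i; column j holds the
# coefficients of variable j across all equations.
x, y = 1, 2   # the solution of this particular system
assert A[0][0] * x + A[0][1] * y == b[0]
assert A[1][0] * x + A[1][1] * y == b[1]
print(len(A), len(A[0]))   # 2 equations in 2 unknowns
```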
Linear combinations
The two basic operations on vectors, addition and multiplication by a scalar, can be combined at will, resulting in a vector; this is called a linear combination of vectors. (Fig. 3.8 illustrates linear combinations of vectors.) The linear combination of vectors v_j with coefficients α_j, j = 1, …, m, is If we denote each component i, i = 1, …, n, of vector j by v_ij, the component i of the linear… Continue reading Linear combinations
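A linear combination Σ_j α_j v_j, computed component by component as the excerpt describes, can be sketched as follows (names are illustrative, not from the post):

```python
# A minimal sketch of a linear combination sum_j alpha_j * v_j, where each
# v_j is a list of n components; vectors[j][i] stands for v_ij.

def linear_combination(alphas, vectors):
    """Return sum of alpha_j * v_j for vectors of equal dimension n."""
    n = len(vectors[0])
    return [sum(a * v[i] for a, v in zip(alphas, vectors))
            for i in range(n)]

v1 = [1, 0, 2]
v2 = [0, 1, -1]
print(linear_combination([3, 2], [v1, v2]))   # -> [3, 2, 4]
```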
Inner products and norms
The inner product is an intuitive geometric concept that is easily introduced for vectors, and it can be used to define a vector norm. A vector norm is a function mapping a vector x into a nonnegative number that can be interpreted as vector length. We have seen that we may use the dot product to define the usual Euclidean norm: It… Continue reading Inner products and norms
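The dot product and the Euclidean norm it induces, ‖x‖ = √(x · x), can be sketched in a few lines (function names are illustrative):

```python
import math

# A short sketch of the dot (inner) product and the Euclidean norm
# it induces: the norm of x is the square root of x . x.

def dot(x, y):
    """Inner product of two vectors of the same dimension."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Euclidean norm, i.e. the length of x."""
    return math.sqrt(dot(x, x))

print(dot([1, 2], [3, 4]))   # 1*3 + 2*4 = 11
print(norm([3, 4]))          # sqrt(9 + 16) = 5.0
```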
Operations on vectors
We are quite used to elementary operations on numbers, such as addition, multiplication, and division. Not all of them can be sensibly extended to the vector case. Still, we will find some of them quite useful, namely: Vector addition Addition is defined for pairs of vectors having the same dimension. If x and y have the same dimension, we define: For instance Since… Continue reading Operations on vectors
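Vector addition (defined only for vectors of the same dimension) and multiplication by a scalar can be sketched as follows (function names are illustrative, not from the post):

```python
# A minimal sketch of componentwise vector addition and scalar
# multiplication; addition requires equal dimensions.

def vec_add(x, y):
    """Add two vectors of the same dimension, component by component."""
    if len(x) != len(y):
        raise ValueError("vectors must have the same dimension")
    return [a + b for a, b in zip(x, y)]

def scalar_mul(c, x):
    """Multiply every component of x by the scalar c."""
    return [c * a for a in x]

print(vec_add([1, 2, 3], [4, 5, 6]))   # -> [5, 7, 9]
print(scalar_mul(2, [1, 2, 3]))        # -> [2, 4, 6]
```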
VECTOR ALGEBRA
Vectors are an intuitive concept that we get acquainted with in high-school mathematics. In ordinary two- and three-dimensional geometry, we deal with points on the plane or in space. Such points are associated with coordinates in a Cartesian reference system. Coordinates may be depicted as vectors, as shown in Fig. 3.4; in physics, vectors are… Continue reading VECTOR ALGEBRA
Cramer’s rule
As a last approach, we consider Cramer’s rule, which is a handy way to solve systems of two or three equations. The theory behind it requires more advanced concepts, such as matrices and their determinants, which are introduced below. We anticipate here a few concepts so that readers not interested in advanced multivariate statistics can… Continue reading Cramer’s rule
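For a 2×2 system, Cramer’s rule computes each unknown as a ratio of determinants, replacing one column of the coefficient matrix with the right-hand side. A sketch (the example system is illustrative):

```python
# A sketch of Cramer's rule for the system
#   a11 x + a12 y = b1
#   a21 x + a22 y = b2
# using 2x2 determinants.

def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def cramer_2x2(A, b):
    """Solve A z = b by replacing each column of A with b in turn."""
    d = det2(A)
    if d == 0:
        raise ValueError("determinant is zero: no unique solution")
    x = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d
    y = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d
    return x, y

# 2x + 3y = 8 and x - y = -1 has solution x = 1, y = 2
print(cramer_2x2([[2, 3], [1, -1]], [8, -1]))   # -> (1.0, 2.0)
```

As the post notes, the rule is handy for two or three equations; for larger systems, determinant-based formulas become impractical and elimination methods take over.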
Gaussian elimination
Gaussian elimination, with some improvements, is the basis of most numerical routines to solve systems of linear equations. Its rationale is that the following system is easy to solve: Such a system is said to be in upper triangular form, as nonzero coefficients form a triangle in the upper part of the layout. A system in… Continue reading Gaussian elimination
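The two phases, reduction to upper triangular form followed by back substitution, can be sketched as below. This is illustrative code under simplifying assumptions (no pivoting; a production routine would pivot to avoid dividing by small numbers):

```python
# A compact sketch of Gaussian elimination without pivoting: reduce A to
# upper triangular form, then recover the unknowns by back substitution.

def gauss_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # forward elimination: zero out entries below the diagonal
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution on the upper triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# 2x + y = 5 and x + 3y = 10 has solution x = 1, y = 3
print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```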
Substitution of variables
A basic (high-school) approach to solving a system of linear equations is substitution of variables. The idea is best illustrated by a simple example. Consider the following system: Rearranging the first equation, we may express the first variable as a function of the second one: and plug this expression into the second equation: Then we… Continue reading Substitution of variables
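The substitution steps for a generic 2×2 system a x + b y = e, c x + d y = f can be written out explicitly (the example system below is illustrative, not the one from the post):

```python
# A sketch of solving a 2x2 system by substitution of variables,
# following the steps in the text.

def solve_by_substitution(a, b, e, c, d, f):
    """Express x from the first equation, plug into the second, solve."""
    # from equation 1: x = (e - b*y) / a
    # substitute into equation 2: c*(e - b*y)/a + d*y = f
    # collect y terms: (d - c*b/a) * y = f - c*e/a
    y = (f - c * e / a) / (d - c * b / a)
    x = (e - b * y) / a
    return x, y

# x + 2y = 5 and 3x - y = 1 has solution x = 1, y = 2
print(solve_by_substitution(1, 2, 5, 3, -1, 1))   # -> (1.0, 2.0)
```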