Nick Higham


Nick Higham is the Richardson Professor of Applied Mathematics in the School of Mathematics at the University of Manchester. His research area is numerical analysis, focusing on numerical linear algebra.


9M ago

The spectral radius $\rho(A)$ of a square matrix $A \in \mathbb{C}^{n\times n}$ is the largest absolute value of any eigenvalue of $A$:
$\rho(A) = \max\{\, |\lambda| : \lambda~\text{is an eigenvalue of}~A \,\}.$
For Hermitian matrices (or more generally normal matrices, those satisfying $A^*A = AA^*$) the spectral radius is just the $2$-norm, $\rho(A) = \|A\|_2$. What follows is most interesting for nonnormal matrices.
Two classes of matrices for which the spectral radius is known are as follows.
Unitary matrices ($U^*U = I$): these have all their eigenvalues on the unit circle and so $\rho(U) = 1$.
Nilpotent matrices ($N^k = 0$ for some positive integer $k$): such matrices have only zero eigenvalues, so $\rho(N) = 0$, even though $\|N\|$ can be arbitrarily large.
The spectral radius of $A$ is not …
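The two examples above are easy to check numerically. The following NumPy sketch is my own illustration, not code from the post:

```python
import numpy as np

def spectral_radius(A):
    """rho(A): the largest absolute value of any eigenvalue of A."""
    return max(abs(np.linalg.eigvals(A)))

# Nilpotent matrix: N^2 = 0, so rho(N) = 0, yet ||N||_2 = 100.
N = np.array([[0.0, 100.0],
              [0.0,   0.0]])

# Unitary matrix (a plane rotation): eigenvalues cos(t) +/- i sin(t)
# lie on the unit circle, so rho(Q) = 1.
t = 0.3
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

print(spectral_radius(N))  # ~ 0, despite the 2-norm being 100
print(spectral_radius(Q))  # ~ 1
```

The nilpotent example makes the key point: unlike a norm, the spectral radius can be zero for a nonzero matrix.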


11M ago

An $n\times n$ upper Hessenberg matrix $H$ has the property that $h_{ij} = 0$ for $i > j+1$. For $n = 4$, the structure is
$H = \begin{bmatrix} \times & \times & \times & \times \\ \times & \times & \times & \times \\ 0 & \times & \times & \times \\ 0 & 0 & \times & \times \end{bmatrix}.$
Thus $H$ is an upper triangular matrix with an extra subdiagonal.
A lower Hessenberg matrix is the transpose of an upper Hessenberg matrix. In the rest of this article, the Hessenberg matrices are upper Hessenberg.
Hessenberg matrices play a key role in the QR algorithm for computing the eigenvalues of a general matrix $A$. The first step of the algorithm is to reduce $A$ to Hessenberg form $H = Q^*AQ$ by unitary Householder transformations and then carry out the QR iteration on the Hessenberg form.
Because $H$ is so close to being triangular …
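The Householder reduction can be sketched in a few lines of NumPy. This is my own illustration (real case), not code from the post; in practice one would call the library routine `scipy.linalg.hessenberg`:

```python
import numpy as np

def hessenberg_reduce(A):
    """Reduce real square A to upper Hessenberg form H = Q^T A Q
    by Householder reflectors, accumulating the orthogonal Q."""
    H = A.astype(float).copy()
    n = H.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        x = H[k+1:, k].copy()
        if np.linalg.norm(x[1:]) == 0:
            continue  # column already in Hessenberg form
        v = x.copy()
        v[0] += (np.sign(x[0]) if x[0] != 0 else 1.0) * np.linalg.norm(x)
        v /= np.linalg.norm(v)
        # Apply the reflector P = I - 2 v v^T from the left and the right.
        H[k+1:, k:] -= 2.0 * np.outer(v, v @ H[k+1:, k:])
        H[:, k+1:] -= 2.0 * np.outer(H[:, k+1:] @ v, v)
        Q[:, k+1:] -= 2.0 * np.outer(Q[:, k+1:] @ v, v)
    return H, Q

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H, Q = hessenberg_reduce(A)
```

Each reflector zeros one column below the subdiagonal while the right-multiplication preserves the similarity, so the eigenvalues of $H$ equal those of $A$.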


11M ago

An $n\times n$ upper bidiagonal matrix
$B = \begin{bmatrix} b_{11} & b_{12} & & \\ & b_{22} & \ddots & \\ & & \ddots & b_{n-1,n} \\ & & & b_{nn} \end{bmatrix}$
depends on just $2n-1$ parameters, which appear on the main diagonal and the superdiagonal.
Such matrices arise commonly, for example as the factor $U$ and the transpose of the factor $L$ in the LU factorization of tridiagonal matrices, and as the intermediate matrix in the computation of the singular value decomposition by the Golub–Reinsch algorithm.
Bidiagonal matrices have many interesting properties, especially when we consider products of them. We describe some of their properties here.
Inverse
Consider the inverse of a nonsingular bidiagonal matrix:
Notice that every …
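The inverse of an upper bidiagonal matrix is upper triangular, and back substitution gives its entries explicitly: for $j \ge i$, the $(i,j)$ entry is $(-1)^{i+j} b_{i,i+1}\cdots b_{j-1,j} / (b_{ii}\cdots b_{jj})$. The sketch below (my own illustration with arbitrary entries, not the post's example) checks this formula numerically:

```python
import numpy as np

d = np.array([1.0, 2.0, 3.0, 4.0])   # diagonal (illustrative values)
e = np.array([5.0, 6.0, 7.0])        # superdiagonal (illustrative values)

B = np.diag(d) + np.diag(e, k=1)     # 4x4 upper bidiagonal matrix
Binv = np.linalg.inv(B)              # upper triangular

def bidiag_inverse_entry(d, e, i, j):
    """(i, j) entry of inv(B) for j >= i (0-based indices):
    (-1)^(i+j) * (e[i]*...*e[j-1]) / (d[i]*...*d[j])."""
    return (-1.0) ** (i + j) * np.prod(e[i:j]) / np.prod(d[i:j + 1])
```

Note that although $B$ has only $2n-1$ parameters, its inverse is in general a full upper triangular matrix.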


1y ago

A subspace $\mathcal{X}$ of $\mathbb{C}^n$ is an invariant subspace for $A\in\mathbb{C}^{n\times n}$ if $A\mathcal{X}\subseteq\mathcal{X}$, that is, if $x\in\mathcal{X}$ implies $Ax\in\mathcal{X}$.
Here are some examples of invariant subspaces.
$\{0\}$ and $\mathbb{C}^n$ are trivially invariant subspaces of any $A$.
The null space $\mathrm{null}(A) = \{\,x : Ax = 0\,\}$ is an invariant subspace of $A$ because $x\in\mathrm{null}(A)$ implies $Ax = 0 \in \mathrm{null}(A)$.
If $x$ is an eigenvector of $A$ then $\mathrm{span}\{x\}$ is a $1$-dimensional invariant subspace, since $Ax = \lambda x$, where $\lambda$ is the eigenvalue corresponding to $x$.
An upper triangular matrix
$\begin{bmatrix} \times & \times & \times \\ 0 & \times & \times \\ 0 & 0 & \times \end{bmatrix}$
has a one-dimensional invariant subspace $\mathrm{span}\{e_1\}$ and a two-dimensional invariant subspace $\mathrm{span}\{e_1, e_2\}$, where $e_i$ denotes the $i$th column of the identity matrix.
Let $x_1, x_2, \dots, x_p$ be linearly independent vectors. Then $\mathrm{span}\{x_1,\dots,x_p\}$ is an invariant subspace of $A$ if and only if …
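Invariance of a subspace spanned by the columns of a matrix $X$ can be tested numerically: the subspace is invariant precisely when every column of $AX$ lies in the column space of $X$. A short NumPy sketch (my own illustration, not from the post):

```python
import numpy as np

def is_invariant(A, X, tol=1e-10):
    """Return True if the column space of X is an invariant subspace of A,
    i.e. every column of A @ X lies in span of the columns of X."""
    AX = A @ X
    # Project A @ X onto span(X); invariant iff the residual vanishes.
    coeffs, *_ = np.linalg.lstsq(X, AX, rcond=None)
    return np.linalg.norm(AX - X @ coeffs) <= tol

# Upper triangular example: span{e1} and span{e1, e2} are invariant.
A = np.triu(np.arange(1.0, 10.0).reshape(3, 3))
e = np.eye(3)
print(is_invariant(A, e[:, :1]))   # True
print(is_invariant(A, e[:, :2]))   # True
print(is_invariant(A, e[:, 1:2]))  # False: A e2 has a component along e1
```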


1y ago

A submatrix of a matrix is another matrix obtained by forming the intersection of certain rows and columns, or equivalently by deleting certain rows and columns. More precisely, let $A$ be an $m\times n$ matrix and let $\alpha\subseteq\{1,2,\dots,m\}$ and $\beta\subseteq\{1,2,\dots,n\}$ be sets of row and column indices. Then the matrix $B = A(\alpha,\beta)$ with $b_{ij} = a_{\alpha_i \beta_j}$ is the submatrix of $A$ comprising the elements at the intersection of the rows indexed by $\alpha$ and the columns indexed by $\beta$. For example, four elements of a matrix lying at the intersection of two rows and two columns form a $2\times 2$ submatrix (equivalently, what is left after deleting the other rows and columns), but four elements that do not lie at such an intersection do not form a submatrix.
Submatri…
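The intersection-of-rows-and-columns definition maps directly onto NumPy indexing (my own illustration, not from the post):

```python
import numpy as np

A = np.arange(1, 17).reshape(4, 4)   # a 4x4 matrix with entries 1..16

# Submatrix at the intersection of rows {0, 2} and columns {1, 3} (0-based).
B = A[np.ix_([0, 2], [1, 3])]
print(B)   # [[ 2  4]
           #  [10 12]]

# By contrast, plain fancy indexing pairs the indices elementwise and picks
# out only the elements (0,1) and (2,3) -- these do NOT form a submatrix.
print(A[[0, 2], [1, 3]])   # [ 2 12]
```

The contrast between `np.ix_` and pairwise fancy indexing mirrors the distinction in the text between four elements that form a submatrix and four that do not.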


1y ago

A flop is one of the elementary arithmetic operations $+$, $-$, $*$, $/$ carried out on floating-point numbers. For example, evaluating an expression such as $y = a + b*c/d$ takes three flops. A square root, which occurs infrequently in numerical computation, is also counted as one flop.
As an example, the computation of the inner product $s = x^Ty$ of two $n$-vectors $x$ and $y$ can be written

s = x(1)*y(1)
for i = 2:n
    s = s + x(i)*y(i)
end

which requires $n$ multiplications and $n-1$ additions, or $2n-1$ flops. A matrix multiplication $C = AB$ of two $n\times n$ matrices involves computing the inner products of every row of $A$ with every column of $B$, that is, $n^2$ inner product…
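The loop above is MATLAB. As a cross-check of the $2n-1$ count, here is the same computation in Python with an explicit flop counter (my own sketch, not from the post):

```python
import numpy as np

def inner_product_with_flops(x, y):
    """Inner product s = x^T y computed as in the loop above,
    counting each floating-point operation as one flop."""
    s = x[0] * y[0]
    flops = 1                      # one multiplication
    for i in range(1, len(x)):
        s = s + x[i] * y[i]
        flops += 2                 # one addition and one multiplication
    return s, flops

x = np.arange(1.0, 6.0)            # n = 5
y = np.ones(5)
s, flops = inner_product_with_flops(x, y)
print(s, flops)                    # 15.0 9   (9 = 2n - 1 for n = 5)
```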


1y ago

Complex numbers have the form
$z = x + \mathrm{i}y,$
where $\mathrm{i} = \sqrt{-1}$ is the imaginary unit. Quaternions contain two more imaginary units, $\mathrm{j}$ and $\mathrm{k}$:
$q = q_0 + q_1\mathrm{i} + q_2\mathrm{j} + q_3\mathrm{k}.$
Sir William Rowan Hamilton discovered the quaternions in 1843 as he walked along the Royal Canal in Dublin, and he famously carved the formulas
$\mathrm{i}^2 = \mathrm{j}^2 = \mathrm{k}^2 = \mathrm{i}\mathrm{j}\mathrm{k} = -1$
on a stone of Brougham Bridge. He had for some time been trying to extend the “couplets” $a + b\mathrm{i}$ to “triplets” $a + b\mathrm{i} + c\mathrm{j}$, but he realized that the requirement that the modulus be multiplicative for the product of two such numbers could not hold. His key insight was to realize that a fourth imaginary unit, $\mathrm{k}$, was required and that multiplication would be noncommutative: $pq \ne qp$ in general.
Sir Wil…
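Hamilton's carved formulas determine the whole multiplication table, and noncommutativity follows immediately. A small sketch (my own illustration, not from the post) implements the Hamilton product on 4-tuples $(q_0, q_1, q_2, q_3)$:

```python
def qmult(p, q):
    """Hamilton product of quaternions represented as tuples
    (p0, p1, p2, p3) ~ p0 + p1*i + p2*j + p3*k."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

print(qmult(i, j))            # (0, 0, 0, 1): ij = k
print(qmult(j, i))            # (0, 0, 0, -1): ji = -k, so ij != ji
print(qmult(qmult(i, j), k))  # (-1, 0, 0, 0): ijk = -1
```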


1y ago

Hamilton, the Quaternions, and Creativity
Complex numbers have the form
$z = x + \mathrm{i}y,$
where $\mathrm{i} = \sqrt{-1}$ is the imaginary unit. Quaternions contain two more imaginary units, $\mathrm{j}$ and $\mathrm{k}$:
$q = q_0 + q_1\mathrm{i} + q_2\mathrm{j} + q_3\mathrm{k}.$
Sir William Rowan Hamilton discovered the quaternions in 1843 as he walked along the Royal Canal in Dublin, and he famously carved the formulas
$\mathrm{i}^2 = \mathrm{j}^2 = \mathrm{k}^2 = \mathrm{i}\mathrm{j}\mathrm{k} = -1$
on a stone of Brougham Bridge. He had for some time been trying to extend the “couplets” $a + b\mathrm{i}$ to “triplets” $a + b\mathrm{i} + c\mathrm{j}$, but he realized that the requirement that the modulus be multiplicative could not hold. His key insight was to realize that a fourth imaginary unit, $\mathrm{k}$, was required and that multiplication would be noncommutative: $pq \ne qp$ in general …


1y ago

The pseudoinverse is an extension of the concept of the inverse of a nonsingular square matrix to singular matrices and rectangular matrices. It is one of many generalized inverses, but it is the one most useful in practice as it has a number of special properties.
The pseudoinverse of a matrix $A\in\mathbb{C}^{m\times n}$ is an $n\times m$ matrix $X$ that satisfies the Moore–Penrose conditions
$AXA = A, \quad XAX = X, \quad (AX)^* = AX, \quad (XA)^* = XA.$
Here, the superscript $*$ denotes the conjugate transpose. It can be shown that there is a unique $X$ satisfying these equations. The pseudoinverse is denoted by $A^+$; some authors write $A^\dagger$.
The most important property of the pseudoinverse is that for any syste…
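The four Moore–Penrose conditions are easy to verify numerically for the pseudoinverse returned by `numpy.linalg.pinv`, which computes it via the singular value decomposition (my own illustration, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))    # a rectangular (full-rank) matrix

X = np.linalg.pinv(A)              # the pseudoinverse A^+, via the SVD

# Verify the four Moore-Penrose conditions (A is real, so * is transpose).
print(np.allclose(A @ X @ A, A))       # True
print(np.allclose(X @ A @ X, X))       # True
print(np.allclose((A @ X).T, A @ X))   # True
print(np.allclose((X @ A).T, X @ A))   # True
```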


1y ago

James Wilkinson’s 1963 book Rounding Errors in Algebraic Processes has been hugely influential. It came at a time when the effects of rounding errors on numerical computations in finite precision arithmetic had just started to be understood, largely due to Wilkinson’s pioneering work over the previous two decades. The book gives a uniform treatment of the error analysis of computations with polynomials and matrices, and it is notable for making use of backward errors and condition numbers, thereby distinguishing problem sensitivity from the stability properties of any particular algorithm.
The …