Reference

The SPMatrix type

StandardPacked.SPMatrixType
SPMatrix{R,V,Fmt}

Two-dimensional dense Hermitian square matrix in packed storage. Depending on Fmt, we store only the upper or lower triangle in column-major vectorized or scaled vectorized form:

  • :U: PM[i, j] = PM[i + (j - 1) * j ÷ 2] for 1 ≤ i ≤ j ≤ n
  • :L: PM[i, j] = PM[i + (j - 1) * (2n - j) ÷ 2] for 1 ≤ j ≤ i ≤ n
  • :US: PM[i, j] = s(i, j) PM[i + (j - 1) * j ÷ 2] for 1 ≤ i ≤ j ≤ n
  • :LS: PM[i, j] = s(i, j) PM[i + (j - 1) * (2n - j) ÷ 2] for 1 ≤ j ≤ i ≤ n

where n is the side dimension of the packed matrix PM, s(i, i) = 1, and s(i, j) = inv(sqrt(2)) for i ≠ j.
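
For example, a small sketch using the SPMatrix constructor documented below: packing the upper triangle of a 3×3 real matrix column by column, the :U index formula recovers the entry (1, 3) at linear position 4.

using StandardPacked

# Upper triangle of [1 2 4; 2 3 5; 4 5 6], packed column by column (:U)
P = SPMatrix(3, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0], :U)
P[1, 3]                 # 4.0, the matrix entry
P[1 + (3 - 1) * 3 ÷ 2]  # 4.0, the same entry through the linear index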

Warning

An SPMatrix can be efficiently broadcast to other matrices, which works on the vector representation. It can also receive values from generic matrices via copyto! or broadcasting. Using it in a mixed chain of broadcasts over different types is not implemented and will potentially lead to logical errors (in the sense that the types will not match) or even segfaults (as the correct index mapping is not implemented).

source

Creating an SPMatrix

StandardPacked.SPMatrixMethod
SPMatrix(dim::Integer, data::AbstractVector{R}, type::Symbol=:U)

Creates a packed matrix from already existing vectorized data. type must be one of :U, :L, :US, or :LS.

source
StandardPacked.SPMatrixMethod
SPMatrix{R}(undef, dim, type::Symbol=:U)

Creates a packed matrix with undefined data. type must be one of :U, :L, :US, or :LS.

source
StandardPacked.SPMatrixMethod
SPMatrix(A::Union{<:Hermitian,<:Symmetric})

Construct a packed matrix from a Hermitian or symmetric wrapper of any other matrix. Note that if a symmetric wrapper is used, the element type must be invariant under conjugation (but this is not checked)!

source
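
A short sketch of the three constructors; the element order of the data vector follows the :U layout above.

using LinearAlgebra, StandardPacked

P1 = SPMatrix(2, [1.0, 2.0, 3.0])             # wraps existing data, default :U
P2 = SPMatrix{Float64}(undef, 2, :L)          # uninitialized, lower storage
P3 = SPMatrix(Symmetric([1.0 2.0; 2.0 3.0]))  # packs the wrapped triangle
vec(P1)                                       # [1.0, 2.0, 3.0]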

Formats of an SPMatrix

StandardPacked.packed_scale!Function
packed_scale!(P::SPMatrix)

Ensures that P is a scaled packed matrix by multiplying off-diagonals by $\sqrt2$ if necessary. Returns a scaled packed matrix. P itself should not be referenced any more, only the result of this function.

See also packed_unscale!.

source
StandardPacked.packed_unscale!Function
packed_unscale!(P::SPMatrix)

Ensures that P is an unscaled packed matrix by dividing off-diagonals by $\sqrt2$ if necessary. Returns an unscaled packed matrix. P itself should not be referenced any more, only the result of this function.

See also packed_scale!.

source
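
A minimal sketch of the round trip; rescaling changes only the stored off-diagonals, not the matrix entries.

using StandardPacked

P  = SPMatrix(2, [1.0, 2.0, 3.0], :U)  # unscaled, represents [1 2; 2 3]
Ps = packed_scale!(P)                  # off-diagonal storage gains a factor √2
Ps[1, 2]                               # ≈ 2.0 — matrix entry unchanged (up to rounding)
vec(Ps)[2]                             # ≈ 2.83 — the stored value is 2√2
Pu = packed_unscale!(Ps)               # back to unscaled storage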

Accessing the data

Base.getindexFunction
P[idx]

Returns the value that is stored at the position idx in the vectorized (possibly scaled) representation of P.

source
P[row, col]

Returns the value that is stored in the given row and column of P. This corresponds to the value in the matrix, so even if P is of a scaled type, this does not affect the result.

source
Base.setindex!Function
P[idx] = X

Sets the value that is stored at the position idx in the vectorized (possibly scaled) representation of P.

source
P[row, col] = X

Sets the value that is stored in the given row and column of P. This corresponds to the value in the matrix, so if P is of a scaled type, X will internally be multiplied by $\sqrt2$.

source
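
A sketch contrasting the two indexing modes on a scaled matrix:

using StandardPacked

P = SPMatrix(2, zeros(3), :US)  # scaled upper storage
P[1, 2] = 1.0                   # Cartesian access: stores 1.0 * √2 internally
P[2]                            # ≈ 1.41 — linear access returns the raw stored value
P[1, 2]                         # ≈ 1.0 — the matrix entry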
Base.vecMethod
vec(P::SPMatrix)

Returns the vectorized data associated with P. Note that this returns the actual vector, not a copy.

source
Base.MatrixMethod
Matrix{R}(P::SPMatrix{R}, viewtype=R <: Complex ? Hermitian : Symmetric) where {R}

Construct a dense matrix from a packed matrix. The parameter viewtype determines which kind of view is returned. Use convert(Matrix{R}, P) to return the matrix itself, with the opposite triangle explicitly filled.

source
StandardPacked.packedsizeFunction
packedsize(dim::Integer)
packedsize(A::AbstractMatrix)

Calculates the number of unique entries in a symmetric matrix A with side dimension dim, $\frac{\mathit{dim}(\mathit{dim} + 1)}{2}$.

source
StandardPacked.packedsideFunction
packedside(fullsize::Integer)
packedside(AP::AbstractVector)
packedside(P::SPMatrix)

Calculates the side dimension of a vector AP of length fullsize representing a symmetric packed matrix, $\frac{\sqrt{8\,\mathit{fullsize} + 1} - 1}{2}$.

source
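
For example, the two functions invert each other:

using StandardPacked

packedsize(4)   # 10 == 4 * 5 ÷ 2, the unique entries of a 4×4 symmetric matrix
packedside(10)  # 4, recovered from the length of the packed vector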

Diagonal and off-diagonal elements

StandardPacked.SPDiagonalIteratorType
SPDiagonalIterator(P::SPMatrix, k=0)
SPDiagonalIterator(fmt::Symbol, dim, k=0)

Creates an iterator that returns the linear indices for iterating through the kth diagonal of a packed matrix.

source
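
A small sketch: by the :U index formula above, the entries (i, i) of a 3×3 upper-packed matrix sit at linear positions 1, 3, and 6.

using StandardPacked

collect(SPDiagonalIterator(:U, 3))  # [1, 3, 6] — linear indices of the main diagonal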

Extensions in LinearAlgebra

This section documents functions for which the interface provided by LinearAlgebra has been extended to cover the SPMatrix type. However, since SPMatrix is an AbstractMatrix, many more helper and convenience functions are actually available that will (hopefully) fall back to the efficient implementations documented here.

LinearAlgebra.dotFunction
dot(::SPMatrix, ::SPMatrix)

Gives the scalar product between two packed matrices, which corresponds to the Frobenius/Hilbert-Schmidt scalar product for matrices. This is fastest for scaled matrices.

source
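
For example (a sketch), the packed scalar product agrees with the Frobenius product of the dense counterparts, counting each off-diagonal twice:

using LinearAlgebra, StandardPacked

A = SPMatrix(2, [1.0, 2.0, 3.0], :U)  # [1 2; 2 3]
B = SPMatrix(2, [4.0, 5.0, 6.0], :U)  # [4 5; 5 6]
dot(A, B)                             # 42.0 == 1*4 + 2*(2*5) + 3*6
dot(Matrix(A), Matrix(B))             # the same Frobenius scalar product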
LinearAlgebra.axpy!Function
axpy!(α, x::Union{<:SPMatrix,<:AbstractVector},
    y::Union{<:SPMatrix,<:AbstractVector})

Overwrite y with x * α + y and return y, where both x and y are interpreted as their corresponding vectorizations.

source
LinearAlgebra.axpby!Function
axpby!(α, x::Union{<:SPMatrix,<:AbstractVector}, β,
    y::Union{<:SPMatrix,<:AbstractVector})

Overwrite y with x * α + y * β and return y, where both x and y are interpreted as their corresponding vectorizations.

source
LinearAlgebra.choleskyFunction
cholesky(P::SPMatrix, NoPivot(); shift=0, check=true) -> Cholesky

Compute the Cholesky factorization of a packed positive definite matrix P and return a Cholesky factorization.

The triangular Cholesky factor can be obtained from the factorization F via F.L and F.U, where P ≈ F.U' * F.U ≈ F.L * F.L'.

The following functions are available for Cholesky objects from packed matrices: size, \, inv, det, logdet, isposdef.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

source
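
A short usage sketch:

using LinearAlgebra, StandardPacked

P = SPMatrix(2, [4.0, 2.0, 3.0], :U)  # [4 2; 2 3], positive definite
F = cholesky(P)
F.U' * F.U ≈ Matrix(P)                # true
F \ [1.0, 0.0]                        # solves P * x = [1, 0]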
LinearAlgebra.cholesky!Function
cholesky!(P::SPMatrix, NoPivot(); shift=0, check=true) -> Cholesky

The same as cholesky, but saves space by overwriting the input P, instead of creating a copy.

source
LinearAlgebra.bunchkaufmanFunction
bunchkaufman(P::SPMatrix, ipiv=missing; check = true) -> S::BunchKaufman

Compute the Bunch-Kaufman factorization of a packed matrix P as P'*U*D*U'*P or P'*L*D*L'*P, depending on which triangle is stored in P, and return a BunchKaufman object.

Iterating the decomposition produces the components S.D, S.U or S.L as appropriate given S.uplo, and S.p.

When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.

The following functions are available for BunchKaufman objects: size, \, inv, getindex.

The pivoting vector ipiv may be passed beforehand to save allocations.

source
LinearAlgebra.eigvalsFunction
eigvals(P::SPMatrix, args...)

Return the eigenvalues of P.

This function calls spevd! and will forward all additional arguments. However, consider using eigvals! for a non-allocating version.

source
eigvals(P::SPMatrix, vl::Real, vu::Real, args...)

Return the eigenvalues of P. It is possible to calculate only a subset of the eigenvalues by specifying a pair vl and vu for the lower and upper boundaries of the eigenvalues.

This function calls spevx! and will forward all additional arguments. However, consider using eigvals! for a non-allocating version.

source
eigvals(P::SPMatrix, range::UnitRange, args...)

Return the eigenvalues of P. It is possible to calculate only a subset of the eigenvalues by specifying a UnitRange range covering indices of the sorted eigenvalues, e.g. the 2nd to 8th eigenvalues.

This function calls spevx! and will forward all additional arguments. However, consider using eigvals! for a non-allocating version.

source
eigvals(AP::SPMatrix, BP::SPMatrix, args...)

Compute the generalized eigenvalues of AP and BP.

See also eigen.

source
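
A sketch of the three single-matrix variants; the window and range forms call spevx! internally.

using LinearAlgebra, StandardPacked

P = SPMatrix(2, [2.0, 1.0, 2.0], :U)  # [2 1; 1 2], eigenvalues 1 and 3
eigvals(P)                            # [1.0, 3.0]
eigvals(P, 0.0, 2.5)                  # [1.0] — eigenvalues in the window only
eigvals(P, 1:1)                       # [1.0] — the smallest eigenvalue only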
LinearAlgebra.eigvals!Function
eigvals!(P::SPMatrix, args...)
eigvals!(P::SPMatrix, vl::Real, vu::Real, args...)
eigvals!(P::SPMatrix, range::UnitRange, args...)

The same as eigvals, but saves space by overwriting the input P, instead of creating a copy.

source
LinearAlgebra.eigvecsFunction
eigvecs(P::SPMatrix, args...)

Return a matrix M whose columns are the eigenvectors of P. (The kth eigenvector can be obtained from the slice M[:, k].)

See also eigen.

source
eigvecs(A::SPMatrix, B::SPMatrix, args...)

Return a matrix M whose columns are the generalized eigenvectors of A and B. (The kth eigenvector can be obtained from the slice M[:, k].)

See also eigen.

source
StandardPacked.eigvecs!Function
eigvecs!(P::SPMatrix, args...)
eigvecs!(A::SPMatrix, B::SPMatrix, args...)

The same as eigvecs, but saves space by overwriting the input P (or A and B), instead of creating a copy.

source
LinearAlgebra.eigenFunction
eigen(P::SPMatrix, args...) -> Eigen

Compute the eigenvalue decomposition of P, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)

This function calls spevd! and will forward all arguments. However, consider using eigen! for a non-allocating version.

source
eigen(A::SPMatrix, B::SPMatrix, args...) -> GeneralizedEigen

Compute the generalized eigenvalue decomposition of A and B, returning a GeneralizedEigen factorization object F which contains the generalized eigenvalues in F.values and the generalized eigenvectors in the columns of the matrix F.vectors. (The kth generalized eigenvector can be obtained from the slice F.vectors[:, k].)

This function calls spgvd! and will forward all arguments. However, consider using eigen! for a non-allocating version.

source
eigen(P::SPMatrix, irange::UnitRange, args...)

Compute the eigenvalue decomposition of P, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)

The UnitRange irange specifies indices of the sorted eigenvalues to search for.

This function calls spevx! and will forward all arguments. However, consider using eigen! for a non-allocating version.

source
eigen(P::SPMatrix, vl::Real, vu::Real, args...)

Compute the eigenvalue decomposition of P, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)

vl is the lower bound of the window of eigenvalues to search for, and vu is the upper bound.

This function calls spevx! and will forward all arguments. However, consider using eigen! for a non-allocating version.

source
LinearAlgebra.eigen!Function
eigen!(P::SPMatrix, args...)
eigen!(A::SPMatrix, B::SPMatrix, args...)

The same as eigen, but saves space by overwriting the input P (or A and B), instead of creating a copy.

source
LinearAlgebra.normFunction
norm(P::SPMatrix, p::Real=2)

For any packed matrix P, compute the p-norm as if P were interpreted as a full vector (containing the off-diagonals twice, without scaling).

The p-norm is defined as

\[ \lVert P\rVert_p = \Biggl( \sum_{i, j = 1}^n \lvert P_{i, j}\rvert^p \Biggr)^{1/p}\]

with $P_{i, j}$ the entries of $P$, $\lvert P_{i, j}\rvert$ the norm of $P_{i, j}$, and $n$ its side dimension.

p can assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular, norm(P, Inf) returns the largest value in abs.(P), whereas norm(P, -Inf) returns the smallest. If p=2, then this is equivalent to the Frobenius norm.

The second argument p is not necessarily a part of the interface for norm, i.e. a custom type may only implement norm(P) without second argument.

source
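
For example (a sketch), the off-diagonals count twice, matching the dense Frobenius norm:

using LinearAlgebra, StandardPacked

P = SPMatrix(2, [1.0, 2.0, 3.0], :U)  # [1 2; 2 3]
norm(P)                               # √(1² + 2*2² + 3²) = √18 ≈ 4.24
norm(Matrix(P))                       # the same value on the dense matrix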
LinearAlgebra.isposdef!Function
isposdef!(P::SPMatrix, tol=0)

Test whether a matrix is positive definite by trying to perform a Cholesky factorization of P, overwriting P in the process. See also isposdef.

source
Base.transposeFunction
transpose(P::SPMatrix{<:Real})

Identity operation for real-valued packed matrices.

source
Base.adjointFunction
P'
adjoint(P::SPMatrix)

Identity operation for packed matrices.

source

Low-level matrix operations

LinearAlgebra.mul!Function
mul!(C::SPMatrix, A::AbstractMatrix, B::AbstractVector, α, β)
mul!(C::SPMatrix, A::AbstractMatrix, B::SPMatrix, α, β)
mul!(C::AbstractVector, A::AbstractMatrix, B::SPMatrix, α, β)

Combined inplace matrix-vector multiply-add $A B α + C β$. The result is stored in C by overwriting it. Note that C must not be aliased with either A or B. This method will automatically interpret all SPMatrix arguments as their (scaled) vectorizations (so this is simply an alias for calling the standard mul! method on the vec of the packed matrices).

Note that A must not be a packed matrix.

source
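
A minimal sketch of the vectorized interpretation, using the identity as the dense factor:

using LinearAlgebra, StandardPacked

P = SPMatrix(2, [1.0, 2.0, 3.0], :U)
A = Matrix{Float64}(I, 3, 3)   # dense matrix acting on the 3-element vec(P)
y = zeros(3)
mul!(y, A, P, 1.0, 0.0)        # same as mul!(y, A, vec(P), 1.0, 0.0)
y == vec(P)                    # true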
mul!(C::AbstractVector, A::SPMatrixUnscaled, B::AbstractVector, α, β)

Combined inplace matrix-vector multiply-add A B α + C β. The result is stored in C by overwriting it. Note that C must not be aliased with either A or B. This method requires an unscaled packed matrix and only works on native floating point types supported by BLAS.

See also spmv!, hpmv!.

source
LinearAlgebra.lmul!Function
lmul!(a::Number, P::SPMatrix)

Scale a packed matrix P by a scalar a overwriting P in-place. Use rmul! to multiply scalar from right. The scaling operation respects the semantics of the multiplication * between a and an element of P. In particular, this also applies to multiplication involving non-finite numbers such as NaN and ±Inf.

source
LinearAlgebra.rmul!Function
rmul!(P::SPMatrix, b::Number)

Scale a packed matrix P by a scalar b overwriting P in-place. Use lmul! to multiply scalar from left. The scaling operation respects the semantics of the multiplication * between an element of P and b. In particular, this also applies to multiplication involving non-finite numbers such as NaN and ±Inf.

source
LinearAlgebra.ldiv!Function
ldiv!(P, B)

Compute P \ B in-place and overwriting B to store the result.

The argument P should not be a matrix. Rather, it should be a factorization object (e.g. produced by cholesky or bunchkaufman from an SPMatrix). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in-place via, e.g., cholesky!), and performance-critical situations requiring ldiv! usually also require fine-grained control over the factorization of P.

source
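
A short sketch combining cholesky! and ldiv!:

using LinearAlgebra, StandardPacked

P = SPMatrix(2, [4.0, 2.0, 3.0], :U)  # [4 2; 2 3]
F = cholesky!(P)                      # factorize in-place, no extra copy
b = [1.0, 0.0]
ldiv!(F, b)                           # b now holds the solution of P * x = b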