Reference
The SPMatrix type
StandardPacked.SPMatrix — Type

SPMatrix{R,V,Fmt}

Two-dimensional dense Hermitian square matrix in packed storage. Depending on Fmt, we only store the upper or lower triangle in column-major vectorized or scaled vectorized form:

- :U: PM[i, j] = PM[i + (j - 1) * j ÷ 2] for 1 ≤ i ≤ j ≤ n
- :L: PM[i, j] = PM[i + (j - 1) * (2n - j) ÷ 2] for 1 ≤ j ≤ i ≤ n
- :US: PM[i, j] = s(i, j) PM[i + (j - 1) * j ÷ 2] for 1 ≤ i ≤ j ≤ n
- :LS: PM[i, j] = s(i, j) PM[i + (j - 1) * (2n - j) ÷ 2] for 1 ≤ j ≤ i ≤ n
where n is the side dimension of the packed matrix PM, s(i, i) = 1, and s(i, j) = inv(sqrt(2)) for i ≠ j.
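As an illustration of this index mapping, consider the :U format (a minimal sketch; the constructor is documented below under "Creating an SPMatrix"):

```julia
using StandardPacked

P = SPMatrix(3, collect(1.0:6.0), :U)  # represents [1 2 4; 2 3 5; 4 5 6]
i, j = 1, 3
P[i, j] == P[i + (j - 1) * j ÷ 2]      # true: matrix entry equals the vectorized entry
```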
The SPMatrix can be efficiently broadcast to other matrices; the broadcast works on the vector representation. It can also receive values from generic matrices via copyto! or broadcasting. Using it in a mixed chain of broadcasts over different types is not implemented and will potentially lead to logical errors (in the sense that the types will not match) or even segfaults (as the correct index mapping is not implemented).
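A sketch of a homogeneous broadcast chain, assuming both operands share the same unscaled :U format so that the operation runs on the vectorized data:

```julia
using StandardPacked

P = SPMatrix(2, [1.0, 2.0, 3.0])   # :U by default, represents [1 2; 2 3]
Q = SPMatrix{Float64}(undef, 2)
Q .= 2 .* P                        # operates on the vector representation
vec(Q)                             # [2.0, 4.0, 6.0]
```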
Creating an SPMatrix
StandardPacked.SPMatrix — Method

SPMatrix(dim::Integer, data::AbstractVector{R}, type::Symbol=:U)

Creates a packed matrix from already existing vectorized data. type must be one of :U, :L, :US, or :LS.
StandardPacked.SPMatrix — Method

SPMatrix{R}(undef, dim, type::Symbol=:U)

Creates a packed matrix with undefined data. type must be one of :U, :L, :US, or :LS.
StandardPacked.SPMatrix — Method

SPMatrix(A::Union{<:Hermitian,<:Symmetric})

Construct a packed matrix from a Hermitian or symmetric wrapper of any other matrix. Note that in case a symmetric wrapper is used, the element type must be invariant under conjugation (but this is not checked)!
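The three constructors documented above can be used as in the following sketch (the concrete values are illustrative):

```julia
using LinearAlgebra, StandardPacked

P1 = SPMatrix(3, collect(1.0:6.0), :L)        # wrap existing vectorized data, lower storage
P2 = SPMatrix{Float64}(undef, 3, :US)         # uninitialized, scaled upper storage
P3 = SPMatrix(Symmetric([1.0 2.0; 2.0 5.0]))  # from a Symmetric wrapper
```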
Formats of an SPMatrix
StandardPacked.SPMatrixUpper — Type

SPMatrixUpper{R,V}

This is a union type for scaled and unscaled packed matrices with upper triangular storage.
StandardPacked.SPMatrixLower — Type

SPMatrixLower{R,V}

This is a union type for scaled and unscaled packed matrices with lower triangular storage.
StandardPacked.SPMatrixUnscaled — Type

SPMatrixUnscaled{R,V}

This is a union type for unscaled packed matrices with upper or lower triangular storage.
StandardPacked.SPMatrixScaled — Type

SPMatrixScaled{R,V}

This is a union type for scaled packed matrices with upper or lower triangular storage.
StandardPacked.packed_isupper — Function

packed_isupper(::SPMatrix)
packed_isupper(::Type{<:SPMatrix})

Returns true iff the given packed matrix has upper triangular storage.
See also packed_islower, packed_isscaled.
StandardPacked.packed_islower — Function

packed_islower(::SPMatrix)

Returns true iff the given packed matrix has lower triangular storage.
See also packed_isupper, packed_isscaled.
StandardPacked.packed_isscaled — Function

packed_isscaled(::SPMatrix)

Returns true iff the given packed matrix has scaled storage, i.e., if the off-diagonal elements are internally stored with a scaling of $\sqrt2$.
See also packed_isupper, packed_islower.
StandardPacked.packed_scale! — Function

packed_scale!(P::SPMatrix)

Ensures that P is a scaled packed matrix by multiplying off-diagonals by $\sqrt2$ if necessary. Returns a scaled packed matrix. P itself should not be referenced any more, only the result of this function.
See also packed_unscale!.
StandardPacked.packed_unscale! — Function

packed_unscale!(P::SPMatrix)

Ensures that P is an unscaled packed matrix by dividing off-diagonals by $\sqrt2$ if necessary. Returns an unscaled packed matrix. P itself should not be referenced any more, only the result of this function.
See also packed_scale!.
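A short sketch of the rebinding pattern these docstrings require (values assume an unscaled :U input):

```julia
using StandardPacked

P = SPMatrix(2, [1.0, 2.0, 3.0], :U)  # unscaled, represents [1 2; 2 3]
P = packed_scale!(P)                  # rebind: only the returned matrix may be used
packed_isscaled(P)                    # true
P[1, 2]                               # 2.0: element access is unaffected by scaling
vec(P)[2]                             # 2√2 ≈ 2.828: the stored value carries the scaling
```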
Accessing the data
Base.getindex — Function

P[idx]

Returns the value that is stored at the position idx in the vectorized (possibly scaled) representation of P.

P[row, col]

Returns the value that is stored in the given row and column of P. This corresponds to the value in the matrix, so even if P is of a scaled type, this does not affect the result.
Base.setindex! — Function

P[idx] = X

Sets the value that is stored at the position idx in the vectorized (possibly scaled) representation of P.

P[row, col] = X

Sets the value that is stored in the given row and column of P. This corresponds to the value in the matrix, so if P is of a scaled type, X will internally be multiplied by $\sqrt2$.
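For example (a sketch assuming an unscaled :U matrix; for scaled formats the stored value would additionally carry the $\sqrt2$ factor):

```julia
using StandardPacked

P = SPMatrix(3, collect(1.0:6.0), :U)
P[4]           # 4.0: linear index into the vectorized data
P[1, 3]        # 4.0: the same entry, addressed by row and column
P[3, 1]        # 4.0: the symmetric counterpart
P[2, 3] = 10.0
vec(P)[5]      # 10.0: position 2 + (3 - 1) * 3 ÷ 2 = 5 in the vectorization
```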
Base.vec — Method

vec(P::SPMatrix)

Returns the vectorized data associated with P. Note that this returns the actual vector, not a copy.
Base.Matrix — Method

Matrix{R}(P::SPMatrix{R}, viewtype=R <: Complex ? Hermitian : Symmetric) where {R}

Construct a dense matrix from a packed matrix. The parameter viewtype determines which kind of view is returned. Use convert(Matrix{R}, P) to obtain the matrix itself, with the opposite triangle explicitly filled.
StandardPacked.packedsize — Function

packedsize(dim::Integer)
packedsize(A::AbstractMatrix)

Calculates the number of unique entries in a symmetric matrix A with side dimension dim, $\frac{\mathit{dim}(\mathit{dim} + 1)}{2}$.
StandardPacked.packedside — Function

packedside(fullsize::Integer)
packedside(AP::AbstractVector)
packedside(P::SPMatrix)

Calculates the side dimension of a vector AP of size fullsize representing a symmetric packed matrix, $\frac{\sqrt{8\,\mathit{fullsize} + 1} - 1}{2}$.
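The two functions are inverses of each other, as the following sketch shows:

```julia
using StandardPacked

packedsize(4)                  # 10 == 4 * 5 ÷ 2
packedside(10)                 # 4
P = SPMatrix{Float64}(undef, 4)
packedsize(P), packedside(P)   # (10, 4)
```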
Diagonal and off-diagonal elements
StandardPacked.SPDiagonalIterator — Type

SPDiagonalIterator(P::SPMatrix, k=0)
SPDiagonalIterator(fmt::Symbol, dim, k=0)

Creates an iterator that returns the linear indices for iterating through the kth diagonal of a packed matrix.
StandardPacked.rmul_diags! — Function

rmul_diags!(P::SPMatrix, factor)

Right-multiplies all diagonal entries in P by factor. Returns P.
See also lmul_diags!, rmul_offdiags!, lmul_offdiags!.
StandardPacked.rmul_offdiags! — Function

rmul_offdiags!(P::SPMatrix, factor)

Right-multiplies all off-diagonal entries in P by factor. Returns P.
See also rmul_diags!, lmul_diags!, lmul_offdiags!.
StandardPacked.lmul_diags! — Function

lmul_diags!(factor, P::SPMatrix)

Left-multiplies all diagonal entries in P by factor. Returns P.
See also rmul_diags!, rmul_offdiags!, lmul_offdiags!.
StandardPacked.lmul_offdiags! — Function

lmul_offdiags!(factor, P::SPMatrix)

Left-multiplies all off-diagonal entries in P by factor. Returns P.
See also rmul_diags!, rmul_offdiags!, lmul_diags!.
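A minimal sketch of these helpers (for scalar factors, left- and right-multiplication agree):

```julia
using StandardPacked

P = SPMatrix(2, [1.0, 2.0, 3.0])  # represents [1 2; 2 3]
rmul_offdiags!(P, 2.0)            # now [1 4; 4 3]
lmul_diags!(3.0, P)               # now [3 4; 4 9]
```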
Extensions in LinearAlgebra
This section documents functions for which the interface provided by LinearAlgebra has been extended to cover the SPMatrix type. However, since SPMatrix is an AbstractMatrix, many more helper and convenience functions are actually available that will (hopefully) fall back to the efficient implementations documented here.
LinearAlgebra.dot — Function

dot(::SPMatrix, ::SPMatrix)

Gives the scalar product between two packed matrices, which corresponds to the Frobenius/Hilbert-Schmidt scalar product for matrices. This is fastest for scaled matrices.
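For instance (a sketch; the value follows from the Frobenius scalar product, which counts off-diagonals twice):

```julia
using LinearAlgebra, StandardPacked

P = SPMatrix(2, [1.0, 2.0, 3.0])  # [1 2; 2 3]
Q = SPMatrix(2, [4.0, 5.0, 6.0])  # [4 5; 5 6]
dot(P, Q)                         # 42.0 == 1*4 + 2*(2*5) + 3*6
```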
LinearAlgebra.axpy! — Function

axpy!(α, x::Union{<:SPMatrix,<:AbstractVector}, y::Union{<:SPMatrix,<:AbstractVector})

Overwrite y with x * α + y and return y, where both x and y are interpreted as their corresponding vectorizations.
LinearAlgebra.axpby! — Function

axpby!(α, x::Union{<:SPMatrix,<:AbstractVector}, β, y::Union{<:SPMatrix,<:AbstractVector})

Overwrite y with x * α + y * β and return y, where both x and y are interpreted as their corresponding vectorizations.
LinearAlgebra.factorize — Function

factorize(A::SPMatrix)

Compute a Bunch-Kaufman factorization of A.
LinearAlgebra.cholesky — Function

cholesky(P::SPMatrix, NoPivot(); shift=0, check=true) -> Cholesky

Compute the Cholesky factorization of a packed positive definite matrix P and return a Cholesky factorization.
The triangular Cholesky factor can be obtained from the factorization F via F.L and F.U, where P ≈ F.U' * F.U ≈ F.L * F.L'.
The following functions are available for Cholesky objects from packed matrices: size, \, inv, det, logdet, isposdef.
When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.
LinearAlgebra.cholesky! — Function

cholesky!(P::SPMatrix, NoPivot(); shift=0, check=true) -> Cholesky

The same as cholesky, but saves space by overwriting the input P, instead of creating a copy.
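A usage sketch with an illustrative positive definite matrix:

```julia
using LinearAlgebra, StandardPacked

P = SPMatrix(2, [4.0, 2.0, 3.0])  # [4 2; 2 3], positive definite
F = cholesky(P)                   # packed Cholesky factorization
x = F \ [1.0, 2.0]                # solve P * x = [1, 2] via the factorization
isposdef(P)                       # true
```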
LinearAlgebra.bunchkaufman — Function

bunchkaufman(P::SPMatrix, ipiv=missing; check = true) -> S::BunchKaufman

Compute the Bunch-Kaufman factorization of a packed matrix P as P' * U * D * U' * P or P' * L * D * L' * P, depending on which triangle is stored in P, and return a BunchKaufman object.
Iterating the decomposition produces the components S.D, S.U or S.L as appropriate given S.uplo, and S.p.
When check = true, an error is thrown if the decomposition fails. When check = false, responsibility for checking the decomposition's validity (via issuccess) lies with the user.
The following functions are available for BunchKaufman objects: size, \, inv, getindex.
The pivoting vector ipiv may be passed beforehand to save allocations.
LinearAlgebra.bunchkaufman! — Function

bunchkaufman!(P::SPMatrix, ipiv=missing; check = true) -> S::BunchKaufman

The same as bunchkaufman, but saves space by overwriting the input P, instead of creating a copy.
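A sketch with a preallocated pivot vector; the element type BlasInt and length equal to the side dimension are assumptions based on the usual LAPACK calling convention:

```julia
using LinearAlgebra, StandardPacked

P = SPMatrix(2, [1.0, 2.0, -3.0])  # [1 2; 2 -3], indefinite
ipiv = Vector{LinearAlgebra.BLAS.BlasInt}(undef, 2)  # assumed pivot buffer type
S = bunchkaufman(P, ipiv)          # reuses ipiv instead of allocating
x = S \ [1.0, 0.0]                 # solve P * x = [1, 0]
```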
LinearAlgebra.eigvals — Function

eigvals(P::SPMatrix, args...)

Return the eigenvalues of P.

This function calls spevd! and will forward all additional arguments. However, consider using eigvals! for a non-allocating version.

eigvals(P::SPMatrix, vl::Real, vu::Real, args...)

Return the eigenvalues of P. It is possible to calculate only a subset of the eigenvalues by specifying a pair vl and vu for the lower and upper boundaries of the eigenvalues.

This function calls spevx! and will forward all additional arguments. However, consider using eigvals! for a non-allocating version.

eigvals(P::SPMatrix, range::UnitRange, args...)

Return the eigenvalues of P. It is possible to calculate only a subset of the eigenvalues by specifying a UnitRange range covering indices of the sorted eigenvalues, e.g., the 2nd to 8th eigenvalues.

This function calls spevx! and will forward all additional arguments. However, consider using eigvals! for a non-allocating version.
eigvals(AP::SPMatrix, BP::SPMatrix, args...)

Compute the generalized eigenvalues of AP and BP.
See also eigen.
LinearAlgebra.eigvals! — Function

eigvals!(P::SPMatrix, args...)
eigvals!(P::SPMatrix, vl::Real, vu::Real, args...)
eigvals!(P::SPMatrix, range::UnitRange, args...)

The same as eigvals, but saves space by overwriting the input P, instead of creating a copy.
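A sketch of the three call patterns; the half-open interval convention (vl, vu] is an assumption based on the underlying LAPACK routine spevx!:

```julia
using LinearAlgebra, StandardPacked

P = SPMatrix(3, collect(1.0:6.0))
eigvals(P)             # all three eigenvalues, via spevd!
eigvals(P, 0.0, 10.0)  # only the eigenvalues in (0, 10], via spevx!
eigvals!(P, 1:2)       # the two smallest eigenvalues, overwriting P
```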
LinearAlgebra.eigmax — Function

eigmax(P::SPMatrix, args...)

Return the largest eigenvalue of P.

StandardPacked.eigmax! — Function

eigmax!(P::SPMatrix, args...)

Return the largest eigenvalue of P, overwriting P in the process.
See also eigvals!.
LinearAlgebra.eigmin — Function

eigmin(P::SPMatrix, args...)

Return the smallest eigenvalue of P.

StandardPacked.eigmin! — Function

eigmin!(P::SPMatrix, args...)

Return the smallest eigenvalue of P, overwriting P in the process.
See also eigvals!.
LinearAlgebra.eigvecs — Function

eigvecs(P::SPMatrix, args...)

Return a matrix M whose columns are the eigenvectors of P. (The kth eigenvector can be obtained from the slice M[:, k].)
See also eigen.
eigvecs(A::SPMatrix, B::SPMatrix, args...)

Return a matrix M whose columns are the generalized eigenvectors of A and B. (The kth eigenvector can be obtained from the slice M[:, k].)
See also eigen.
StandardPacked.eigvecs! — Function

eigvecs!(P::SPMatrix, args...)
eigvecs!(A::SPMatrix, B::SPMatrix, args...)

The same as eigvecs, but saves space by overwriting the input P (or A and B), instead of creating a copy.
LinearAlgebra.eigen — Function

eigen(P::SPMatrix, args...) -> Eigen

Compute the eigenvalue decomposition of P, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)
This function calls spevd! and will forward all arguments. However, consider using eigen! for a non-allocating version.
eigen(A::SPMatrix, B::SPMatrix, args...) -> GeneralizedEigen

Compute the generalized eigenvalue decomposition of A and B, returning a GeneralizedEigen factorization object F which contains the generalized eigenvalues in F.values and the generalized eigenvectors in the columns of the matrix F.vectors. (The kth generalized eigenvector can be obtained from the slice F.vectors[:, k].)
This function calls spgvd! and will forward all arguments. However, consider using eigen! for a non-allocating version.
eigen(P::SPMatrix, irange::UnitRange, args...)

Compute the eigenvalue decomposition of P, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)
The UnitRange irange specifies indices of the sorted eigenvalues to search for.
This function calls spevx! and will forward all arguments. However, consider using eigen! for a non-allocating version.
eigen(P::SPMatrix, vl::Real, vu::Real, args...)

Compute the eigenvalue decomposition of P, returning an Eigen factorization object F which contains the eigenvalues in F.values and the eigenvectors in the columns of the matrix F.vectors. (The kth eigenvector can be obtained from the slice F.vectors[:, k].)
vl is the lower bound of the window of eigenvalues to search for, and vu is the upper bound.
This function calls spevx! and will forward all arguments. However, consider using eigen! for a non-allocating version.
LinearAlgebra.eigen! — Function

eigen!(P::SPMatrix, args...)
eigen!(A::SPMatrix, B::SPMatrix, args...)

Same as eigen, but saves space by overwriting the input P (or A and B), instead of creating a copy.
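A usage sketch that reconstructs the matrix from its decomposition:

```julia
using LinearAlgebra, StandardPacked

P = SPMatrix(2, [2.0, 1.0, 2.0])  # [2 1; 1 2]
F = eigen(P)
F.values                          # [1.0, 3.0]
F.vectors * Diagonal(F.values) * F.vectors' ≈ Matrix(P)  # true
```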
LinearAlgebra.diagind — Function

diagind(P::SPMatrix, k::Integer=0)

A vector containing the indices of the kth diagonal of the packed matrix P.
See also: diag, SPDiagonalIterator.
LinearAlgebra.diag — Function

diag(P::SPMatrix, k::Integer=0)

The kth diagonal of a packed matrix, as a vector.
See also: diagind, SPDiagonalIterator.
LinearAlgebra.norm — Function

norm(P::SPMatrix, p::Real=2)

For any packed matrix P, compute the p-norm as if P was interpreted as a full vector (containing the off-diagonals twice, without scaling).
The p-norm is defined as
\[ \lVert P\rVert_p = \Biggl( \sum_{i, j = 1}^n \lvert P_{i, j}\rvert^p \Biggr)^{1/p}\]
with $P_{i, j}$ the entries of $P$, $\lvert P_{i, j}\rvert$ the norm of $P_{i, j}$, and $n$ its side dimension.
p can assume any numeric value (even though not all values produce a mathematically valid vector norm). In particular, norm(P, Inf) returns the largest value in abs.(P), whereas norm(P, -Inf) returns the smallest. If p=2, then this is equivalent to the Frobenius norm.
The second argument p is not necessarily a part of the interface for norm, i.e. a custom type may only implement norm(P) without second argument.
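For example (a sketch; note how the off-diagonal enters twice):

```julia
using LinearAlgebra, StandardPacked

P = SPMatrix(2, [1.0, 2.0, 3.0])  # [1 2; 2 3]
norm(P)                           # √(1² + 2*2² + 3²) = √18 ≈ 4.243
norm(P, Inf)                      # 3.0, the largest absolute entry
```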
LinearAlgebra.tr — Function

tr(P::SPMatrix)

Matrix trace. Sums the diagonal elements of P.
LinearAlgebra.isposdef — Function

isposdef(P::SPMatrix, tol=0)

Test whether a matrix is positive definite by trying to perform a Cholesky factorization of P.
LinearAlgebra.isposdef! — Function

isposdef!(P::SPMatrix, tol=0)

Test whether a matrix is positive definite by trying to perform a Cholesky factorization of P, overwriting P in the process. See also isposdef.
Base.transpose — Function

transpose(P::SPMatrix{<:Real})

Identity operation for real-valued packed matrices.
Base.adjoint — Function

P'
adjoint(P::SPMatrix)

Identity operation for packed matrices.
LinearAlgebra.checksquare — Function

checksquare(P::SPMatrix)

Returns the side dimension of P.
Low-level matrix operations
LinearAlgebra.mul! — Function

mul!(::SPMatrix, B::AbstractMatrix, ::AbstractVector, α, β)
mul!(::SPMatrix, B::AbstractMatrix, ::SPMatrix, α, β)
mul!(::AbstractVector, B::AbstractMatrix, ::SPMatrix, α, β)

Combined in-place matrix-vector multiply-add $A B α + C β$. The result is stored in C by overwriting it. Note that C must not be aliased with either A or B. This method will automatically interpret all SPMatrix arguments as their (scaled) vectorizations (so this is simply an alias to calling the standard mul! method on the vec of the packed matrices).

Note that B must not be a packed matrix.

mul!(C::AbstractVector, A::SPMatrixUnscaled, B::AbstractVector, α, β)

Combined in-place matrix-vector multiply-add $A B α + C β$. The result is stored in C by overwriting it. Note that C must not be aliased with either A or B. This method requires an unscaled packed matrix and only works on native floating point types supported by BLAS.
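A sketch of the BLAS-backed variant (unscaled packed matrix, native floating point element type):

```julia
using LinearAlgebra, StandardPacked

P = SPMatrix(2, [1.0, 2.0, 3.0])  # unscaled [1 2; 2 3]
x = [1.0, 1.0]
y = zeros(2)
mul!(y, P, x, 1.0, 0.0)           # y = P * x = [3.0, 5.0]
```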
LinearAlgebra.lmul! — Function

lmul!(a::Number, P::SPMatrix)

Scale a packed matrix P by a scalar a, overwriting P in-place. Use rmul! to multiply the scalar from the right. The scaling operation respects the semantics of the multiplication * between a and an element of P. In particular, this also applies to multiplication involving non-finite numbers such as NaN and ±Inf.
LinearAlgebra.rmul! — Function

rmul!(P::SPMatrix, b::Number)

Scale a packed matrix P by a scalar b, overwriting P in-place. Use lmul! to multiply the scalar from the left. The scaling operation respects the semantics of the multiplication * between an element of P and b. In particular, this also applies to multiplication involving non-finite numbers such as NaN and ±Inf.
LinearAlgebra.ldiv! — Function

ldiv!(P, B)

Compute P \ B in-place, overwriting B to store the result.

The argument P should not be a matrix. Rather, it should be a factorization object (e.g., one produced by cholesky or bunchkaufman from an SPMatrix). The reason for this is that factorization itself is both expensive and typically allocates memory (although it can also be done in-place via, e.g., cholesky!), and performance-critical situations requiring ldiv! usually also require fine-grained control over the factorization of P.