
Project name: yunjhongwu/TensorDecompositions.jl

Repository: https://github.com/yunjhongwu/TensorDecompositions.jl

Language: Julia 100.0%

Project description:

TensorDecompositions.jl

A Julia implementation of tensor decomposition algorithms



What's new

  • Compatible with Julia v0.7/v1.0

Available functions

  1. The following functions, except for sshopm, return a Tucker, which contains factors::Vector{Matrix{Float64}}, core::Array{Float64} (a 1-dimensional array for Kruskal decompositions), and the relative reconstruction error (reported as :rel_residue in the props dictionary, as shown in the quick example below). Usage sketches for several of these functions appear after the list of common parameters below.
  • Higher-order SVD (HOSVD) [3] hosvd{T,N}(tnsr::StridedArray{T,N}, core_dims::NTuple{N, Int}; pad_zeros::Bool=false, compute_error::Bool=false)

  • Canonical polyadic decomposition (CANDECOMP/PARAFAC) candecomp{T,N}(tnsr::StridedArray{T,N}, r::Integer, initial_guess::NTuple{N, Matrix{T}}; method::Symbol=:ALS, tol::Float64=1e-5, maxiter::Integer=100, compute_error::Bool=false, verbose::Bool=true). This function provides two algorithms, selected by the method argument, for fitting the CANDECOMP model (see the quick example below):

    • ALS (default): Alternating least squares method [3]
    • SGSD: Simultaneous generalized Schur decomposition [1]
  • Non-negative CANDECOMP/PARAFAC by the block-coordinate update method [7] nncp(tnsr::StridedArray, r::Integer; tol::Float64=1e-4, maxiter::Integer=100, compute_error::Bool=false, verbose::Bool=true)

Remark. Choose a smaller r if the above functions throw ERROR: SingularException.

  • Symmetric rank-1 approximation for symmetric tensors by the shifted symmetric higher-order power method (SS-HOPM) [4] sshopm{T,N}(tnsr::AbstractArray{T,N}, alpha::Real; tol::Float64=1e-5, maxiter::Int=100, verbose::Bool=false). This function requires a shifting parameter alpha and returns an eigenpair of tnsr as a tuple (Float64, Vector{Float64}). Note that this function does NOT check the symmetry of the input tensor. The implementation accepts both a dense representation (StridedArray) and a sparse representation given as a tuple (Matrix{Int64}, StridedVector, Integer), whose components hold the indices (as column vectors of the matrix), the corresponding values, and the dimension of a mode.

  • Sparse (semi-)nonnegative Tucker decomposition by the alternating proximal gradient method [6] spnntucker{T,N}(tnsr::StridedArray{T, N}, core_dims::NTuple{N, Int}; core_nonneg::Bool=true, tol::Float64=1e-4, hosvd_init::Bool=false, max_iter::Int=500, max_time::Float64=0.0, lambdas::Vector{Float64} = fill(0.0, N+1), Lmin::Float64 = 1.0, rw::Float64=0.9999, bounds::Vector{Float64} = fill(Inf, N+1), ini_decomp = nothing, verbose::Bool=false).

  2. Tensor-CUR for 3-mode tensors [5] is a randomized algorithm and returns a CUR, which includes the indices of the c sampled slabs (along slab_axis) and the r sampled fibers, the matrix U, and the relative reconstruction error of the slabs. Note that this function samples with replacement; the numbers of repeated samples are stored in Cweight and Rweight. A usage sketch follows this list.
  • tensorcur3(tnsr::StridedArray, c::Integer, r::Integer, slab_axis::Integer=3; compute_u::Bool=true)
  3. PARAFAC2 for 3-mode tensors by the alternating least squares method [2] takes an array of matrices, which need not be equally sized, and factorizes them under a constraint on their row spaces. The function below returns a PARAFAC2, which contains the factors and the diagonal effects D of each matrix, a common loading matrix A, and the relative error. A usage sketch follows this list.
  • parafac2{S<:Matrix}(X::Vector{S}, r::Integer; tol::Float64=1e-5, maxiter::Integer=100, verbose::Bool=true)
  • Common parameters:
    • tnsr::StridedArray: Data tensor
    • r::Integer: Number of components/factors
    • tol::Float64: Tolerance to achieve
    • maxiter::Integer: Maximum number of iterations
    • verbose::Bool: Print status when iterative algorithms terminate
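
As a quick illustration of the signatures above, here is a minimal, non-authoritative sketch of hosvd and nncp; the tensor sizes, core dimensions, and number of components are arbitrary choices for the example.

using TensorDecompositions

X = rand(5, 6, 7)                     # small random non-negative 3-mode tensor

# Tucker decomposition via HOSVD with a 2x3x4 core
T1 = hosvd(X, (2, 3, 4); compute_error=true)
map(size, T1.factors)                 # factor matrix sizes: (5, 2), (6, 3), (7, 4)
size(T1.core)                         # core tensor size: (2, 3, 4)

# Non-negative CANDECOMP/PARAFAC with 2 components
T2 = nncp(X, 2; tol=1e-4, maxiter=200, compute_error=true, verbose=false)
map(size, T2.factors)                 # (5, 2), (6, 2), (7, 2)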
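
A minimal sketch of sshopm on a dense symmetric tensor; the symmetric rank-1 tensor constructed here and the shift alpha = 1.0 are arbitrary example choices.

using TensorDecompositions

v = randn(5)
S = [v[i] * v[j] * v[k] for i in 1:5, j in 1:5, k in 1:5]   # symmetric 3-mode tensor

# SS-HOPM with shift alpha = 1.0; returns an eigenpair (eigenvalue, eigenvector)
lambda, x = sshopm(S, 1.0; tol=1e-5, maxiter=100)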
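
A minimal sketch of spnntucker with the default (zero) regularization weights; the tensor size and core dimensions are arbitrary.

using TensorDecompositions

X = rand(8, 9, 10)

# Sparse non-negative Tucker decomposition with a 3x3x3 core
T = spnntucker(X, (3, 3, 3); tol=1e-4, max_iter=200, verbose=false)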
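
A minimal sketch of tensorcur3; the numbers of sampled slabs (c = 5) and fibers (r = 10) are arbitrary example values, and slabs are taken along the default axis 3.

using TensorDecompositions

X = rand(20, 20, 20)

# Sample 5 slabs and 10 fibers with replacement and compute the matrix U
C = tensorcur3(X, 5, 10; compute_u=true)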
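
A minimal sketch of parafac2 on three matrices that differ in their number of rows but share the second dimension; r = 2 components is an arbitrary choice.

using TensorDecompositions

# Matrices of different heights with a common number of columns
X = [randn(10, 6), randn(12, 6), randn(15, 6)]

F = parafac2(X, 2; tol=1e-5, maxiter=100, verbose=false)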

Quick example

Here is a quick example of code that fits the CANDECOMP model:

julia> using TensorDecompositions

julia> u = randn(10); v = randn(20); w = randn(30)

# Generate a noisy rank-1 tensor
julia> T = cat(map(x -> x * u * v', w)...; dims=3) + 0.2 * randn(10, 20, 30)

julia> size(T)
(10, 20, 30)

julia> F = candecomp(T, 1, (randn(10, 1), randn(20, 1), randn(30, 1)), compute_error=true, method=:ALS);
INFO: Initializing factor matrices...
INFO: Applying CANDECOMP ALS method...
INFO: Algorithm converged after 4 iterations.

julia> [size(F.factors[i]) for i = 1:3]
3-element Array{Any,1}:
 (10,1)
 (20,1)
 (30,1)

julia> F.props
Dict{Symbol,Any} with 1 entry:
  :rel_residue => 0.18735979193091348

julia> F.factors[1]
10x1 Array{Float64,2}:
  0.0676683 
 -0.0985366 
 -0.239748  
  0.0821674 
 -0.0547672 
 -0.00892641
  0.0220593 
 -0.058075  
 -0.135493  
  0.23256 

Requirements

  • Julia 0.7/1.0
  • TensorOperations
  • Distributions
  • StatsBase
  • ProgressMeter
  • FactCheck
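
Assuming the package is available through Julia's package registry (otherwise the repository URL above can be used directly), it can be installed with the standard package manager:

using Pkg
Pkg.add("TensorDecompositions")
# or, directly from the repository:
# Pkg.add(Pkg.PackageSpec(url="https://github.com/yunjhongwu/TensorDecompositions.jl"))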

Performance issues

  • Inefficiency of unfolding (matricizing), which has a significant impact on the performance of candecomp (method=:ALS) and parafac2

Future plan

  • Improving performance
  • More algorithms for fitting CANDECOMP/PARAFAC and non-negative tensor decompositions
  • Probabilistic tensor decompositions
  • Algorithms for factorizing sparse tensors
  • Tensor completion algorithms

Reference

  • [1] De Lathauwer, L., De Moor, B., & Vandewalle, J. (2004). Computation of the canonical decomposition by means of a simultaneous generalized Schur decomposition. SIAM Journal on Matrix Analysis and Applications, 26(2), 295-327.

  • [2] Kiers, H. A., Ten Berge, J. M., & Bro, R. (1999). PARAFAC2-Part I. A direct fitting algorithm for the PARAFAC2 model. Journal of Chemometrics, 13(3-4), 275-294.

  • [3] Kolda, T. G., & Bader, B. W. (2009). Tensor decompositions and applications. SIAM Review, 51(3), 455-500.

  • [4] Kolda, T. G., & Mayo, J. R. (2011). Shifted power method for computing tensor eigenpairs. SIAM Journal on Matrix Analysis and Applications, 32(4), 1095-1124.

  • [5] Mahoney, M. W., Maggioni, M., & Drineas, P. (2008). Tensor-CUR decompositions for tensor-based data. SIAM Journal on Matrix Analysis and Applications, 30(3), 957-987.

  • [6] Xu, Y. (2015). Alternating proximal gradient method for sparse nonnegative Tucker decomposition. Mathematical Programming Computation, 7(1), 39-70.

  • [7] Xu, Y., & Yin, W. (2013). A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM Journal on Imaging Sciences, 6(3), 1758-1789.

Special thanks

  • Thanks to @alyst for making the package compatible with Julia 1.0.


