The Matrix Factorization Jungle

Tags: News, Matrix Decompositions, matrix factorizations | Posted: 2011-09-05 11:36 | Author: 忙菇 SuperLucky
出处:http://www.cvchina.info

A dedicated individual in the US, apparently French or of French-American descent (and who has long worked on space-shuttle research, of all things), has collected nearly all of the matrix factorization algorithms and applications on the market. Since the original page sits behind a certain mysterious barrier, it is reposted here. Original address:

Matrix decompositions have a long history and generally center around a set of known factorizations such as LU, QR, SVD and eigendecompositions. More recent factorizations have seen the light of day with work starting around the advent of NMF, k-means and related algorithms [1]. However, with the advent of new methods based on random projections and convex optimization that started in part in the compressive sensing literature, we are seeing another surge of very diverse algorithms dedicated to many different kinds of matrix factorizations with new constraints based on rank and/or positivity and/or sparsity. As a result of this large increase in interest, I have decided to keep a list of them here, following the success of the big picture in compressive sensing.

The sources for this list include the following most excellent sites: Stephen Becker's page, Raghunandan H. Keshavan's page, Nuclear Norm and Matrix Recovery through SDP by Christoph Helmberg, and Arvind Ganesh's Low-Rank Matrix Recovery and Completion via Convex Optimization, all of which provide more in-depth additional information. Additional codes were also featured on Nuit Blanche. The following people provided additional input: Olivier Grisel and Matthieu Puigt.

Most of the algorithms listed below rely on the nuclear norm as a proxy for the rank functional, which may not be optimal. Currently, CVX (by Michael Grant and Stephen Boyd) allows one to explore other proxies for the rank functional, such as the log-det heuristic of Maryam Fazel, Haitham Hindi, and Stephen Boyd. ** is used to mark algorithms that use a heuristic other than the nuclear norm.

In terms of notation, A refers to a matrix, L refers to a low-rank matrix, S to a sparse one, and N to a noisy one. This page lists the different codes that implement the following matrix factorizations: Matrix Completion, Robust PCA, Noisy Robust PCA, Sparse PCA, NMF, Dictionary Learning, MMV, Randomized Algorithms and other factorizations. Some of these toolboxes implement several of these decompositions and are listed accordingly. Before I list an algorithm here, I generally feature it on Nuit Blanche under the MF tag: http://nuit-blanche.blogspot.com/search/label/MF, or you can subscribe to the Nuit Blanche feed.

Matrix Completion: A = H.*L with H a known mask and L unknown; solve for L with the lowest possible rank

The idea of this approach is to complete the unknown coefficients of a matrix based on the fact that the matrix is low rank:
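As a minimal illustration of the nuclear-norm proxy at work (a sketch only, not one of the codes referenced in this list; tau and n_iter are assumed parameters), one can alternate between filling the unobserved entries with the current estimate and soft-thresholding the singular values, in the spirit of Soft-Impute:

```python
import numpy as np

def complete_lowrank(A, H, tau=1.0, n_iter=200):
    """Soft-Impute-style sketch for A = H.*L: H is the 0/1 mask of observed
    entries; soft-thresholding the singular values acts as the nuclear-norm
    proxy for rank. tau and n_iter are illustrative, not tuned."""
    L = np.zeros_like(A, dtype=float)
    for _ in range(n_iter):
        # keep observed entries of A, fill the rest with the current estimate
        Z = H * A + (1 - H) * L
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        L = (U * np.maximum(s - tau, 0.0)) @ Vt
    return L
```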

Noisy Robust PCA: A = L + S + N with L, S, N unknown; solve for L low rank, S sparse, N noise

Robust PCA: A = L + S with L, S unknown; solve for L low rank, S sparse (a minimal sketch follows)
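A minimal sketch of the standard convex formulation, min ||L||_* + λ||S||_1 subject to A = L + S, solved with a basic augmented-Lagrangian/ADMM loop. The λ and μ values below are common heuristics, assumed here rather than taken from any particular code in this list:

```python
import numpy as np

def soft(M, t):
    """Entrywise soft-thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def rpca_pcp(A, n_iter=200):
    """Principal-component-pursuit sketch: A = L + S with L low rank, S sparse.
    lam and mu are common heuristic choices, not prescribed values."""
    m, n = A.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(A).sum() + 1e-12)
    L = np.zeros_like(A, dtype=float)
    S = np.zeros_like(A, dtype=float)
    Y = np.zeros_like(A, dtype=float)
    for _ in range(n_iter):
        # L-step: singular value thresholding of A - S + Y/mu
        U, s, Vt = np.linalg.svd(A - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-step: entrywise shrinkage
        S = soft(A - L + Y / mu, lam / mu)
        # dual ascent on the constraint A = L + S
        Y += mu * (A - L - S)
    return L, S
```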

Sparse PCA: A = DX with D and X unknown; solve for sparse D (a minimal usage sketch follows the list below)

Sparse PCA on wikipedia

  • R. Jenatton, G. Obozinski, F. Bach. Structured Sparse Principal Component Analysis. International Conference on Artificial Intelligence and Statistics (AISTATS). [pdf] [code]
  • SPAMS
  • DSPCA: Sparse PCA using SDP. Code is here.
  • PathSPCA: A fast greedy algorithm for Sparse PCA. The code is here.
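For quick experiments without installing any of the packages above, here is a minimal sketch using scikit-learn's SparsePCA as a stand-in (the data and all parameter values are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.RandomState(0)
A = rng.randn(100, 30)              # synthetic data matrix, one sample per row

# alpha controls the sparsity of the recovered components (illustrative value)
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
codes = spca.fit_transform(A)       # dense loadings
D = spca.components_                # sparse principal directions
print(D.shape, (D == 0).mean())     # fraction of exact zeros in D
```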

Dictionary Learning: A = DX with D and X unknown; solve for sparse X

Some implementations of dictionary learning also implement NMF.
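As a quick stand-in for the dedicated packages, a minimal sketch with scikit-learn's MiniBatchDictionaryLearning (synthetic data; all parameter values are illustrative assumptions). Note that scikit-learn uses the transposed convention, data ≈ codes @ dictionary:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.RandomState(0)
A = rng.randn(200, 64)              # e.g. 200 vectorized 8x8 image patches

# alpha controls the sparsity of the codes (illustrative value)
dl = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
X = dl.fit_transform(A)             # sparse codes, one row per sample
D = dl.components_                  # learned dictionary atoms
print(np.mean(X == 0))              # fraction of zeros in the codes
```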

NMF: A = DX with D and X unknown; solve for the elements of D, X > 0

Non-negative Matrix Factorization (NMF) on wikipedia
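A self-contained sketch of the classic Lee-Seung multiplicative updates for the Frobenius loss (the inner rank r, iteration count, and random initialization are assumed, not prescribed by any code listed here):

```python
import numpy as np

def nmf_multiplicative(A, r, n_iter=200, eps=1e-9, seed=0):
    """Sketch of Lee-Seung multiplicative updates for A ≈ D @ X with
    D, X >= 0 entrywise. A must be nonnegative."""
    rng = np.random.RandomState(seed)
    m, n = A.shape
    D = rng.rand(m, r)
    X = rng.rand(r, n)
    for _ in range(n_iter):
        # eps guards against division by zero; updates keep D, X nonnegative
        X *= (D.T @ A) / (D.T @ D @ X + eps)
        D *= (A @ X.T) / (D @ X @ X.T + eps)
    return D, X
```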

Multiple Measurement Vectors (MMV): Y = AX with X unknown and the rows of X sparse (see the sketch below).
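A minimal sketch in the spirit of Simultaneous Orthogonal Matching Pursuit (a greedy stand-in, not one of the codes referenced elsewhere; k, the number of nonzero rows, is assumed known):

```python
import numpy as np

def somp(A, Y, k):
    """Greedy row-sparse recovery sketch for Y = A @ X: pick the column of A
    most correlated with the residual across all measurement vectors, then
    refit by least squares on the selected support."""
    support, R = [], Y.copy()
    for _ in range(k):
        scores = np.linalg.norm(A.T @ R, axis=1)   # aggregate correlations
        scores[support] = 0.0                      # do not pick an atom twice
        support.append(int(np.argmax(scores)))
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s                # update the residual
    X = np.zeros((A.shape[1], Y.shape[1]))
    X[support] = X_s
    return X
```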

Blind Source Separation (BSS): Y = AX with A and X unknown, assuming statistical independence between the columns of X or between subspaces of the columns of X.

This includes Independent Component Analysis (ICA), Independent Subspace Analysis (ISA), and Sparse Component Analysis (SCA). Many codes are available for ICA and some for SCA. Here is a non-exhaustive list of some famous ones (not limited to linear instantaneous mixtures); a minimal ICA sketch is given under the ICA heading below. TBC

ICA:
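A minimal sketch of the ICA setting using scikit-learn's FastICA as an assumed stand-in for the dedicated toolboxes (the sources and mixing matrix below are made up for the demo):

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
X_true = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # two independent sources
A_mix = np.array([[1.0, 0.5],
                  [0.5, 2.0]])                         # made-up mixing matrix
Y = X_true @ A_mix.T                                   # observed mixtures

ica = FastICA(n_components=2, random_state=0)
X_est = ica.fit_transform(Y)    # estimated sources (up to scale/permutation)
A_est = ica.mixing_             # estimated mixing matrix
```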

SCA:

Randomized Algorithms

These algorithms generally use random projections to shrink very large problems into smaller ones that are amenable to traditional matrix factorization methods. A sketch of the idea follows.
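A minimal sketch in the style of the Halko-Martinsson-Tropp randomized SVD (k is the target rank and p an oversampling margin; both are assumed parameters):

```python
import numpy as np

def randomized_svd(A, k, p=10, seed=0):
    """Randomized SVD sketch: project A onto a random subspace to capture its
    range, then factor the much smaller projected problem."""
    rng = np.random.RandomState(seed)
    n = A.shape[1]
    Omega = rng.randn(n, k + p)        # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)     # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                        # small (k+p) x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]
```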

Resources:
  • Randomized algorithms for matrices and data by Michael W. Mahoney
  • Randomized Algorithms for Low-Rank Matrix Decomposition

Other factorizations

D(T(·)) = L + E with L, E, and the transformation T unknown; solve for the transformation T, the low-rank part L, and the noise E

Frameworks featuring advanced Matrix factorizations

For the time being, few frameworks have integrated the most recent factorizations.

GraphLab / Hadoop

Books

Example of use

Sources

Arvind Ganesh’s Low-Rank Matrix Recovery and Completion via Convex Optimization

Relevant links

Reference:

[1] Ajit P. Singh and Geoffrey J. Gordon, "A Unified View of Matrix Factorization Models."
