
no code implementations • 1 Mar 2021 • Xiao Guo, Xiang Li, Xiangyu Chang, Shusen Wang, Zhihua Zhang

The low communication and computation power of such devices, and the possible privacy breaches of users' sensitive data make the computation of SVD challenging.

no code implementations • 19 Feb 2020 • Xiang Li, Shusen Wang, Kun Chen, Zhihua Zhang

As a practical surrogate of OPT, sign-fixing, which uses a diagonal matrix with $\pm 1$ entries as weights, has better computation complexity and stability in experiments.

no code implementations • 27 Dec 2019 • Haishan Ye, Shusen Wang, Zhihua Zhang, Tong Zhang

Fast matrix algorithms have become fundamental tools of machine learning in the big data era.

no code implementations • 21 Dec 2019 • Songgaojun Deng, Shusen Wang, Huzefa Rangwala, Lijing Wang, Yue Ning

Forecasting influenza-like illness (ILI) is of prime importance to epidemiologists and health-care providers.

no code implementations • 21 Oct 2019 • Xiang Li, Wenhao Yang, Shusen Wang, Zhihua Zhang

The technique of local updates has recently become a powerful tool in centralized settings for improving communication efficiency via periodic communication.

no code implementations • 24 Sep 2019 • Shusen Wang

On the one hand, our theories are based on weak and valid assumptions.

no code implementations • 24 Sep 2019 • Mengjiao Zhang, Shusen Wang

Collaborative learning allows participants to jointly train a model without data sharing.

1 code implementation • ICLR 2020 • Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, Zhihua Zhang

In this paper, we analyze the convergence of \texttt{FedAvg} on non-iid data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGD iterations.
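A minimal sketch of the local-update scheme that \texttt{FedAvg} analyzes: each client runs several SGD steps on its own data, and the server periodically averages the client models. All data, client counts, and step sizes below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
w_star = rng.standard_normal(5)          # shared ground-truth model

# Each client holds its own least-squares data (A_k, b_k)
clients = []
for _ in range(4):
    A = rng.standard_normal((20, 5))
    clients.append((A, A @ w_star + 0.1 * rng.standard_normal(20)))

def local_sgd(w, A, b, steps=10, lr=0.01):
    # E local gradient steps on this client's loss ||A w - b||^2 / n
    for _ in range(steps):
        grad = 2 * A.T @ (A @ w - b) / len(b)
        w = w - lr * grad
    return w

w = np.zeros(5)
for t in range(200):                                   # T communication rounds
    local_models = [local_sgd(w, A, b) for A, b in clients]
    w = np.mean(local_models, axis=0)                  # server averages models

loss = np.mean([np.mean((A @ w - b) ** 2) for A, b in clients])
```

With near-homogeneous clients, the averaged iterate approaches the shared minimizer; the non-iid analysis in the paper quantifies how client heterogeneity and the local-step count affect this convergence.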

no code implementations • 13 Feb 2019 • Xiang Li, Shusen Wang, Zhihua Zhang

Subsampled Newton methods approximate Hessian matrices through subsampling, alleviating the cost of forming the Hessian while retaining sufficient curvature information.
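A toy illustration of the idea: form the Newton step for regularized logistic regression using a Hessian built from a random subsample of the data, while keeping the full gradient. The problem sizes, subsample size, and regularization below are hypothetical choices for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, s = 2000, 10, 200          # n samples; Hessian subsample size s << n
X = rng.standard_normal((n, d))
y = (X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

lam = 1e-3                       # ridge regularization keeps H positive definite
w = np.zeros(d)
for _ in range(20):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n + lam * w             # full gradient (O(nd), cheap)
    idx = rng.choice(n, size=s, replace=False)     # subsample only for the Hessian
    Xs, ps = X[idx], p[idx]
    H = Xs.T @ (Xs * (ps * (1 - ps))[:, None]) / s + lam * np.eye(d)
    w = w - np.linalg.solve(H, grad)               # Newton step with subsampled H

acc = np.mean((sigmoid(X @ w) > 0.5) == y)
```

Building `H` costs $O(sd^2)$ instead of $O(nd^2)$, which is the cost saving the abstract refers to.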

1 code implementation • 6 Nov 2018 • Vipul Gupta, Shusen Wang, Thomas Courtade, Kannan Ramchandran

We propose OverSketch, an approximate algorithm for distributed matrix multiplication in serverless computing.

Distributed, Parallel, and Cluster Computing • Information Theory

no code implementations • ICML 2018 • Miles E. Lopes, Shusen Wang, Michael W. Mahoney

As a more practical alternative, we propose a bootstrap method to compute a posteriori error estimates for randomized LS algorithms.
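One way to picture an a posteriori bootstrap error estimate for sketched least squares: solve the sketched problem once, then resample the sketched rows with replacement, re-solve, and use the spread of the bootstrap solutions as an error estimate. This is a hedged sketch with a plain Gaussian sketch and arbitrary sizes, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 5000, 8, 400               # m = sketch size
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + rng.standard_normal(n)

S = rng.standard_normal((m, n)) / np.sqrt(m)      # Gaussian sketching matrix
SA, Sb = S @ A, S @ b
w_sketch = np.linalg.lstsq(SA, Sb, rcond=None)[0]

# Bootstrap: resample the m sketched rows, re-solve, and take a high quantile
# of the fluctuations as an a posteriori error estimate for w_sketch.
errs = []
for _ in range(100):
    idx = rng.integers(0, m, size=m)
    w_boot = np.linalg.lstsq(SA[idx], Sb[idx], rcond=None)[0]
    errs.append(np.linalg.norm(w_boot - w_sketch))
err_estimate = np.quantile(errs, 0.95)

w_exact = np.linalg.lstsq(A, b, rcond=None)[0]    # only for checking the estimate
true_err = np.linalg.norm(w_sketch - w_exact)
```

The appeal is that the bootstrap loop works only with the small $m \times d$ sketched problem, so the estimate costs far less than re-running the full solve.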

no code implementations • 11 Oct 2017 • Youzuo Lin, Shusen Wang, Jayaraman Thiagarajan, George Guthrie, David Coblentz

We employ a data reduction technique in combination with the conventional kernel ridge regression method to improve the computational efficiency and reduce memory usage.

no code implementations • NeurIPS 2018 • Shusen Wang, Farbod Roosta-Khorasani, Peng Xu, Michael W. Mahoney

For the distributed computing environment, we consider the empirical risk minimization problem and propose a distributed and communication-efficient Newton-type optimization method.

no code implementations • 6 Aug 2017 • Miles E. Lopes, Shusen Wang, Michael W. Mahoney

In recent years, randomized methods for numerical linear algebra have received growing interest as a general approach to large-scale problems.

no code implementations • 9 Jun 2017 • Shusen Wang, Alex Gittens, Michael W. Mahoney

This work analyzes the application of this paradigm to kernel $k$-means clustering, and shows that applying the linear $k$-means clustering algorithm to $\frac{k}{\epsilon} (1 + o(1))$ features constructed using a so-called rank-restricted Nyström approximation results in cluster assignments that satisfy a $1 + \epsilon$ approximation ratio in terms of the kernel $k$-means cost function, relative to the guarantee provided by the same algorithm without the use of the Nyström method.
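The pipeline described above can be sketched in a few lines: sample landmark points, keep only the top-$r$ eigenpairs of the landmark kernel matrix (the "rank restriction"), map every point to the resulting low-dimensional features, and run ordinary linear $k$-means on them. The RBF kernel, landmark count, and toy data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two well-separated Gaussian blobs as toy data
X = np.vstack([rng.standard_normal((100, 2)) + 4,
               rng.standard_normal((100, 2)) - 4])

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Rank-restricted Nystrom features: c landmarks, top-r eigenpairs of K_mm
c, r = 20, 2
landmarks = X[rng.choice(len(X), c, replace=False)]
K_nm = rbf(X, landmarks)
K_mm = rbf(landmarks, landmarks)
vals, vecs = np.linalg.eigh(K_mm)
vals, vecs = vals[-r:], vecs[:, -r:]              # keep the top-r eigenpairs
features = K_nm @ vecs / np.sqrt(np.maximum(vals, 1e-12))

# Linear k-means (Lloyd's algorithm) on the Nystrom features
k = 2
centers = features[rng.choice(len(features), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((features[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([features[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
```

Linear $k$-means then runs on an $n \times r$ matrix rather than the full $n \times n$ kernel, which is where the scalability comes from.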

no code implementations • ICML 2017 • Shusen Wang, Alex Gittens, Michael W. Mahoney

In particular, there is a bias-variance trade-off in sketched MRR that is not present in sketched LSR.

1 code implementation • 28 May 2015 • Shusen Wang

In recent years, a number of randomized algorithms have been devised to make matrix computations more scalable.

no code implementations • 29 Mar 2015 • Shusen Wang, Zhihua Zhang, Tong Zhang

The Nyström method is a special instance of our fast model and is an approximation to the prototype model.

no code implementations • 26 Dec 2014 • Shusen Wang, Tong Zhang, Zhihua Zhang

Low-rank matrix completion is an important problem with extensive real-world applications.

no code implementations • 22 Jun 2014 • Shusen Wang, Luo Luo, Zhihua Zhang

In this paper we conduct in-depth studies of an SPSD matrix approximation model and establish strong relative-error bounds.

no code implementations • 1 Apr 2014 • Shusen Wang, Zhihua Zhang

Recently, a variant of the Nyström method called the modified Nyström method has demonstrated significant improvement over the standard Nyström method in approximation accuracy, both theoretically and empirically.

no code implementations • 30 Mar 2014 • Shusen Wang

Given a data matrix $X \in R^{n\times d}$ and a response vector $y \in R^{n}$ with $n > d$, it costs $O(n d^2)$ time and $O(n d)$ space to solve the least squares regression (LSR) problem.
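The stated costs can be seen from the classical normal-equations solve: forming $X^T X$ dominates at $O(nd^2)$, and simply storing $X$ takes $O(nd)$ space. A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 10_000, 20                      # n > d
X = rng.standard_normal((n, d))        # storing X takes O(n d) space
y = X @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

gram = X.T @ X                         # O(n d^2) time: the dominant cost
w = np.linalg.solve(gram, X.T @ y)     # O(d^3), negligible since d << n

residual = np.linalg.norm(X @ w - y) / np.linalg.norm(y)
```

Randomized sketching methods (the subject of this line of work) aim to reduce the $O(nd^2)$ term by replacing $X$ with a much shorter sketched matrix.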

no code implementations • 18 Mar 2013 • Shusen Wang, Zhihua Zhang

The CUR matrix decomposition and the Nyström approximation are two important low-rank matrix approximation techniques.

no code implementations • NeurIPS 2012 • Shusen Wang, Zhihua Zhang

The CUR matrix decomposition is an important extension of Nyström approximation to a general matrix.
