Rudrasis Chakraborty

Publications : 62
Aldex : 30
H-index : 9
Citations : 210

VolterraNet: A higher order convolutional network with group equivariance for homogeneous manifolds

Monami Banerjee and 4 others

Jun 5, 2021 on arXiv
Convolutional neural networks have been highly successful in image-based learning tasks due to their translation equivariance property. Recent work has generalized the traditional convolutional layer of a convolutional neural network to non-Euclidean spaces and shown group equivariance of the generalized convolution operation. In this paper, we present a novel higher order Volterra convolutional n...

On the Versatile Uses of Partial Distance Correlation in Deep Learning

Xingjian Zhen, Vikas Singh and 3 others

Jul 20, 2022 on arXiv
Comparing the functional behavior of neural network models, whether it is a single network over time or two (or more networks) during or post-training, is an essential step in understanding what they are learning (and what they are not), and for identifying strategies for regularization or efficiency improvements. Despite recent progress, e.g., comparing vision transformers to CNNs, systematic com...
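The comparison tool in this paper builds on distance correlation, a statistic that detects arbitrary (not just linear) dependence between two samples. As a minimal illustration of the underlying quantity, the sketch below computes empirical distance correlation with NumPy; the function name is mine, and this is the plain (not the partial) variant discussed in the paper.

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical distance correlation between two samples (rows = observations).

    A plain-NumPy sketch of the base statistic; the paper works with the
    *partial* distance correlation, which additionally conditions on a third
    variable.
    """
    x = np.atleast_2d(np.asarray(x, dtype=float))
    y = np.atleast_2d(np.asarray(y, dtype=float))
    # treat 1-D inputs as column vectors of observations
    if x.shape[0] == 1:
        x = x.T
    if y.shape[0] == 1:
        y = y.T
    # pairwise Euclidean distance matrices
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    # double-center each distance matrix
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()          # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if denom == 0 else np.sqrt(max(dcov2, 0.0) / denom)
```

For perfectly (even nonlinearly) dependent samples the statistic approaches 1, and it is 0 in the population only under independence, which is what makes it a useful lens for comparing network representations.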

SurReal: Complex-Valued Learning as Principled Transformations on a Scaling and Rotation Manifold

Rudrasis Chakraborty, Stella Yu and 2 others

Oct 18, 2019 on arXiv
Complex-valued data is ubiquitous in signal and image processing applications, and complex-valued representations in deep learning have appealing theoretical properties. While these aspects have long been recognized, complex-valued deep learning continues to lag far behind its real-valued counterpart. We propose a principled geometric approach to complex-valued deep learning. Complex-valued data c...

Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data

Jurijs Nazarovs, Vikas Singh and 4 others

Feb 18, 2022 on arXiv
Panel data involving longitudinal measurements of the same set of participants taken over multiple time points is common in studies to understand childhood development and disease modeling. Deep hybrid models that marry the predictive power of neural networks with physical simulators such as differential equations, are starting to drive advances in such applications. The task of modeling not just ...

Spatial Transformer for 3D Point Clouds

Jiayun Wang, Stella X. Yu and 2 others

Jun 26, 2019 on arXiv
Deep neural networks are widely used for understanding 3D point clouds. At each point convolution layer, features are computed from local neighborhoods of 3D points and combined for subsequent processing in order to extract semantic information. Existing methods adopt the same individual point neighborhoods throughout the network layers, defined by the same metric on the fixed input point coordina...
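The fixed-neighborhood baseline this paper departs from can be made concrete: point features are aggregated over k-nearest-neighbor sets computed once, under the Euclidean metric, on the input coordinates. A small sketch of that baseline step (function name is mine; the paper's contribution is to learn deformations of the space in which these neighborhoods are found):

```python
import numpy as np

def knn_neighborhoods(points, k):
    """Indices of the k nearest neighbors of each point, excluding the point itself.

    points: (n, 3) array of xyz coordinates; returns an (n, k) index array.
    This is the fixed-metric baseline; a spatial transformer would instead
    recompute neighborhoods in a learned, deformed coordinate space.
    """
    # pairwise squared Euclidean distances, (n, n)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)  # exclude self-matches
    # take the k smallest distances per row
    return np.argsort(d2, axis=1)[:, :k]
```

The O(n²) distance matrix is fine for small clouds; real pipelines typically swap in a KD-tree or ball-tree query for large n.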

Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets

Vishnu Suresh Lokhande, Vikas Singh and 3 others

Mar 29, 2022 on arXiv
Pooling multiple neuroimaging datasets across institutions often enables improvements in statistical power when evaluating associations (e.g., between risk factors and disease outcomes) that may otherwise be too weak to detect. When there is only a single source of variability (e.g., different scanners), domain adaptation and matching the distributions of representations may suffice in many ...

Forward Operator Estimation in Generative Models with Kernel Transfer Operators

Zhichun Huang, Vikas Singh and 2 others

Dec 1, 2021 on arXiv
Generative models which use explicit density modeling (e.g., variational autoencoders, flow-based generative models) involve finding a mapping from a known distribution, e.g. Gaussian, to the unknown input distribution. This often requires searching over a class of non-linear functions (e.g., representable by a deep neural network). While effective in practice, the associated runtime/memory costs ...

Orthogonal Convolutional Neural Networks

Jiayun Wang, Stella X. Yu and 3 others

Nov 27, 2019 on arXiv
Deep convolutional neural networks are hindered by training instability and feature redundancy towards further performance improvement. A promising solution is to impose orthogonality on convolutional filters. We develop an efficient approach to impose filter orthogonality on a convolutional layer based on the doubly block-Toeplitz matrix representation of the convolutional kernel instead of using...
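For context, the most common way to encourage filter orthogonality is a soft penalty on the Gram matrix of the flattened kernels. The sketch below shows that simpler kernel-level regularizer in NumPy (function name is mine); note the paper argues for enforcing orthogonality of the doubly block-Toeplitz convolution matrix instead, which this sketch does not do.

```python
import numpy as np

def kernel_orthogonality_penalty(weight):
    """Soft orthogonality penalty ||W W^T - I||_F^2 on flattened conv filters.

    weight: array of shape (out_channels, in_channels, kh, kw).
    This is the simple kernel-level regularizer; the paper's method operates
    on the doubly block-Toeplitz matrix of the full convolution instead.
    """
    out_channels = weight.shape[0]
    w = weight.reshape(out_channels, -1)      # one row per filter
    gram = w @ w.T                            # (out_channels, out_channels)
    return float(np.sum((gram - np.eye(out_channels)) ** 2))
```

The penalty is zero exactly when the filters form an orthonormal set, so it is typically added to the task loss with a small weight during training.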

An information theoretic formulation of the Dictionary Learning and Sparse Coding Problems on Statistical Manifolds

Rudrasis Chakraborty and 3 others

Apr 23, 2016 on arXiv
In this work, we propose a novel information theoretic framework for dictionary learning (DL) and sparse coding (SC) on a statistical manifold (the manifold of probability distributions). Unlike the traditional DL and SC framework, our new formulation does not explicitly incorporate any sparsity-inducing norm in the cost function, yet yields SCs. Moreover, we extend this framework to the ...

Generative Adversarial Network based Autoencoder: Application to fault detection problem for closed loop dynamical systems

Indrasis Chakraborty and 3 others

Apr 15, 2018 on arXiv
The fault detection problem for closed-loop uncertain dynamical systems is investigated in this paper using different deep-learning-based methods. Traditional classifier-based methods do not perform well because of the inherent difficulty of detecting system-level faults in closed-loop dynamical systems. Specifically, the acting controller in any closed-loop dynamical system works to reduce the effec...

Similar author:

Laurent Itti