A recurring theme in my work is the development of devices and computational methods to measure, simulate, analyse, and invert the propagation of light on macroscopic scales. In this talk, I will present a selection of recent research linked by the desire to replace specialized, heavyweight optical devices with consumer-grade hardware and computational methods. In particular, I will sketch a vision of how computer graphics methodology (forward simulation of light transport, or rendering) could become a useful tool for solving inverse problems in imaging and various remote sensing applications.

He obtained a doctorate in computer science from Saarland University with a dissertation that was awarded the Otto Hahn Medal of the Max Planck Society, and a Diplom in physics from the University of Kaiserslautern.
The scaling of kernel-based learning algorithms to large datasets is limited by the quadratic complexity of computing and storing the kernel matrix, which most recent multiple kernel learning algorithms assume to be available.
We propose a method to simultaneously learn low-rank approximations of a set of base kernels in regression tasks. We present the Mklaren algorithm, which approximates multiple kernel matrices using least angle regression in the low-dimensional feature space.
The idea is based entirely on geometrical concepts and does not assume access to the full kernel matrices. The algorithm achieves linear complexity in both the number of data points and the number of kernels, while accounting for correlations between kernel matrices.
When an explicit feature space representation is available for the kernels, we use the relation between primal and dual regression weights to gain model interpretability.
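The abstract does not spell out Mklaren's internals, but the core idea — replacing each full kernel matrix with a low-rank explicit feature map and doing regression in the stacked feature space — can be illustrated with a generic Nyström-based sketch. This is not the Mklaren algorithm (which selects pivots greedily via least angle regression); the landmark choice, kernel widths, and ridge solver below are illustrative assumptions.

```python
import numpy as np

def nystroem_features(X, landmarks, gamma):
    """Rank-m Nystroem feature map G for an RBF kernel, so K ~ G @ G.T."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    W = rbf(landmarks, landmarks)              # m x m landmark Gram matrix
    C = rbf(X, landmarks)                      # n x m cross-kernel
    # Symmetric inverse square root of W (regularised for stability)
    vals, vecs = np.linalg.eigh(W + 1e-8 * np.eye(len(W)))
    W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return C @ W_inv_sqrt                      # n x m explicit features

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# One low-rank block per base kernel (here: RBF kernels at two widths),
# stacked into a single explicit feature space -- no full n x n matrix
# is ever formed, giving linear cost in n per kernel.
landmarks = X[rng.choice(200, 20, replace=False)]
Phi = np.hstack([nystroem_features(X, landmarks, g) for g in (0.5, 2.0)])

# Ridge regression in the joint feature space: the primal weight vector
# assigns one coefficient per (kernel, landmark) column, which is what
# makes the model interpretable at the level of individual kernels.
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
pred = Phi @ w
```

The per-kernel blocks of `w` play the role of the primal weights mentioned above: their magnitudes indicate how much each base kernel contributes to the fit.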
Our approach outperforms contemporary kernel matrix approximation approaches when learning with multiple kernels on standard regression datasets, and improves the selection of relevant kernels in comparison to multiple kernel learning methods.

Fast projections of spatial rich model feature for digital image steganalysis by Pengfei Wang, Zhihui Wei, Liang Xiao

The spatial rich model (SRM) is a classic steganalysis method, which collects high-order co-occurrences from truncated noise residuals as features to capture the local-range dependencies of an image.
Increasing the truncation threshold and the co-occurrence order leads to a higher-dimensional feature, which can exploit more statistical bins and capture dependencies across a larger neighborhood, but also suffers from the curse of dimensionality.
In this paper, we propose a fast projection method that increases the statistical robustness of the higher-dimensional SRM feature while decreasing its dimensionality. The proposed projection method is applicable to co-occurrence-based steganalysis features in general.
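The paper's specific projection is not described in this abstract, but the dimensionality pressure it addresses is easy to see, and a generic random projection illustrates the principle of shrinking a co-occurrence feature while roughly preserving its geometry. The threshold, order, and projection below are illustrative assumptions, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a high-dimensional co-occurrence feature: with
# truncation threshold T and co-occurrence order d, SRM-style features
# have (2T + 1) ** d bins per residual type -- growth is exponential in d.
T, d = 3, 4
dim = (2 * T + 1) ** d          # 7 ** 4 = 2401 bins
n = 50
F = rng.random((n, dim))        # n feature vectors (placeholder data)

# Project to k dimensions with a Gaussian random matrix; pairwise
# distances are approximately preserved (Johnson-Lindenstrauss),
# so a downstream classifier sees nearly the same geometry.
k = 200
P = rng.normal(size=(dim, k)) / np.sqrt(k)
G = F @ P

orig = np.linalg.norm(F[0] - F[1])
proj = np.linalg.norm(G[0] - G[1])
ratio = proj / orig             # close to 1 for moderate k
```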
The detection performance and computational complexity of the proposed method are investigated on three content-adaptive steganographic algorithms in the spatial domain.

Low-Rank Kernel Space Representations in Prototype Learning by Kerstin Bunte, Marika Kaden, Frank-Michael Schleif

In supervised learning, feature vectors are often implicitly mapped to a high-dimensional space using the kernel trick, with quadratic costs for the learning algorithm.
The recently proposed random Fourier features provide an explicit mapping, so that classical algorithms with often linear complexity can be applied. Yet, the random Fourier feature approach remains a complex technique that is difficult to interpret.
Using Matrix Relevance Learning, a linear mapping of the data for better class separation can be learned by adapting a parametric Euclidean distance. Further, a low-rank representation of the input data can be obtained. We apply this technique to random-Fourier-feature-encoded data to obtain a discriminative mapping of the kernel space.
This explicit approach is compared with a differentiable kernel vector quantizer on the same but implicit kernel representation.
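The explicit encoding underlying this comparison is the standard random Fourier feature map, whose inner products approximate an RBF kernel; the sketch below shows that approximation directly. The sample sizes, bandwidth, and feature count are illustrative assumptions, and the relevance-learning step on top of the encoding is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def rff_map(X, n_features, gamma):
    """Explicit random Fourier feature map whose inner products
    approximate the RBF kernel exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Spectral sampling for the Gaussian kernel: w ~ N(0, 2*gamma*I)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(100, 5))
gamma = 0.5
Z = rff_map(X, 2000, gamma)

# Compare the explicit-map Gram matrix against the exact RBF kernel:
# any linear method run on Z behaves like a kernel method on K_exact.
K_approx = Z @ Z.T
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-gamma * d2)
err = np.abs(K_approx - K_exact).max()
```

Because `Z` is an ordinary data matrix, a learned linear transform of it (as in matrix relevance learning) can be inspected column by column, which is the interpretability angle pursued above.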
Using multiple benchmark problems, we demonstrate that a parametric distance on an RBF encoding yields better classification results and permits access to interpretable prediction models with visualization abilities.

We justify our approach by establishing a theoretical guarantee on the error of the dual solution learned in the first step with respect to the optimal dual solution, under appropriate conditions.
The experimental results demonstrate that (i) the dual solution obtained by our approach in the first step is closer to the optimal solution and yields improved prediction performance; and (ii) the second step, which uses the obtained dual solution to re-train the model, further improves the performance.

by S. Dhar, Congrui Yi, N. Ramakrishnan

Most machine learning algorithms involve solving a convex optimization problem. Traditional in-memory convex optimization solvers do not scale well as data grows.
This paper identifies a generic convex problem underlying most machine learning algorithms and solves it using the Alternating Direction Method of Multipliers (ADMM).
Such an ADMM problem then reduces to an iterative system of linear equations, which can be solved at scale in a distributed fashion.
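The "iterative system of linear equations" structure can be made concrete on a standard example. The sketch below applies ADMM to the lasso, where each iteration is one linear solve plus a closed-form soft-threshold; the specific problem, penalty, and parameters are illustrative assumptions, not the paper's generic formulation, and the distributed splitting across data blocks is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Lasso via ADMM: min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
A = rng.normal(size=(80, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=80)

lam, rho = 0.1, 1.0
x = np.zeros(20)
z = np.zeros(20)
u = np.zeros(20)                 # scaled dual variable

# The same matrix appears in every iteration, so it can be factored
# once; in a distributed setting each worker would solve its own block.
M = A.T @ A + rho * np.eye(20)
Atb = A.T @ b
for _ in range(100):
    x = np.linalg.solve(M, Atb + rho * (z - u))      # x-update: linear system
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0)  # soft-threshold
    u = u + x - z                                    # dual ascent step
```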