William of Ockham
[Updated 2011/07/02 for a call for papers from the Journal of Applied Mathematics on "Preconditioning Techniques for Sparse Linear Systems"]
You still have one day (deadline: 2011/06/19) to submit a paper to ACM Multimedia SRED 2011: First International Workshop on Sparse Representation for Event Detection in Multimedia, held 28 November to 1 December in Arizona, USA.
Should you need a little more time, an oxymoronic abundance of calls for papers related to sparsity offers additional opportunities. Let's hope your submissions will spark interesting discussions on sparsity on Nuit Blanche. (BTW, thank you Igor for promoting a PhD thesis proposal related to sparsity and seismics; I'll try to make it more international soon.)
Journal: EURASIP Journal on Advances in Signal Processing
Special issue: New Image and Video Representations Based on Sparsity
Editors: Fred Truchetet, Université de Bourgogne; Akram Aldroubi, Department of Mathematics, Vanderbilt University; Ivan W. Selesnick, Polytechnic Institute of New York University; Peter Schelkens, Vakgroep Elektronica en Informatieverwerking, Vrije Universiteit Brussel; Olivier Laligant, Université de Bourgogne, Dijon, Bourgogne, France
Submission deadline: apparently open until June 30, 2011 [2011/06/30], see this page (CfP)
In recent years, new signal representations based on sparsity have drawn considerable attention. Natural images can be modeled by sparse models living in high-dimensional spaces. New efficient algorithms based on sparse representations have recently been proposed and successfully applied to many image and video processing problems. This special issue will focus on how sparsity has impacted image and video processing. It intends to be an international forum for researchers to summarize the most recent developments, trends, and new ideas in the field of sparse representations and their applications to image and video processing, and hence highlight the advances in this field. The topics to be covered include, but are not limited to:
- Sparse representations for image and video
- Multiresolution approaches, Wavelet, and X-let analysis for image processing
- Compressed sensing
- Applications of sparsity to image denoising, compression, segmentation, restoration, recognition, inpainting, super resolution, and so forth
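The applications above (denoising in particular) all rest on the same mechanism: keep only the few significant transform coefficients. Here is a minimal, illustrative sketch (mine, not part of the call) of transform-domain soft-thresholding denoising, using a 2-D DCT as a stand-in for the wavelet/X-let transforms mentioned; the threshold value and the toy test image are assumptions for the demo.

```python
import numpy as np
from scipy.fft import dctn, idctn

def denoise_by_thresholding(noisy, threshold):
    """Sparse approximation: keep only the large transform coefficients."""
    coeffs = dctn(noisy, norm="ortho")                                     # orthogonal 2-D DCT
    coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0)   # soft threshold
    return idctn(coeffs, norm="ortho")                                     # back to image domain

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    denoised = denoise_by_thresholding(noisy, threshold=0.3)
    print("MSE before:", np.mean((noisy - clean) ** 2))
    print("MSE after: ", np.mean((denoised - clean) ** 2))
```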
Journal: EURASIP Journal on Advances in Signal Processing
Special issue: Sparse Signal Processing
Editors: Farokh Marvasti, Advanced Communications Research Institute, Sharif University of Technology; Jonathon Chambers, Advanced Signal Processing Group, Department of Electronic and Electrical Engineering, Loughborough University; Mohammad Djafari, CNRS, Ecole supérieure d'éléctricité (Supélec)
Submission deadline: July 15, 2011 [2011/07/15] (CfP)
An important emerging area of signal processing concerns signals that are sparse in some transform domain. Sparse signal processing allows significant reductions in the sampling rate and in processing effort. Efficient algorithms have been developed for sparse signals in various applications, but the algorithms developed tend to be application specific. The aim of this special issue is to compare these many algorithms across applications, with the goal of a unified view of sparse signal processing that brings together various fields. The key applications of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of reconstruction algorithms, papers are solicited in areas including, but not limited to:
- Random sampling
- Compressed sensing
- Rate of innovation
- Real Galois field error correction codes
- Spectral estimation
- Multisource location
- DOA estimation in array processing
- Sparse array beamforming
- Sparse sensor networks
- Blind source separation in SCA
- Multipath channel estimation
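As an illustrative companion to this call (not taken from it), here is a minimal sketch of one classical compressed sensing reconstruction algorithm, Orthogonal Matching Pursuit, recovering a sparse vector from a few random linear measurements; the problem sizes and the noiseless setting are assumptions for the demo.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Greedy recovery: pick the column most correlated with the residual,
    then refit by least squares on the selected support."""
    residual, support = y.copy(), []
    x_hat = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5                                 # signal length, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)         # random sensing matrix
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x                                            # m noiseless measurements
    x_hat = omp(A, y, n_nonzero=k)
    print("maximum recovery error:", np.max(np.abs(x_hat - x)))
```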
Journal: International Journal of Mathematics and Mathematical Sciences
Special issue: Sparse Sampling and Sparse Recovery and Its Applications to Inverse Problems
Editors: Gerd Teschke, Institute for Computational Mathematics in Science and Technology, Neubrandenburg University of Applied Sciences; Anders Hansen, Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge; Ronny Ramlau, Industrial Mathematics Institute, Johannes Kepler University
Submission deadline: September 1st, 2011 [2011/09/01] (CfP)
Many applications in science and engineering require the solution of an operator equation Kx = y. Often only noisy data are available, and if the problem is ill posed, regularization methods have to be applied for the stable approximation of a solution. Influenced by the huge impact of sparse signal representations and the practical feasibility of advanced sparse recovery algorithms, the combination of sparse signal recovery and inverse problems has emerged in the last decade as a new and growing area. Currently, there exists a great variety of sparse recovery algorithms for inverse problems. These algorithms are successful for many applications and have led to breakthroughs in many fields (e.g., MRI, tomography). However, their feasibility is usually limited to problems for which the data are complete and the problem is of moderate dimension. For really large-scale problems or problems with incomplete data, these algorithms are not well suited or fail completely. In the context of signal recovery, generalized sampling theories were developed to tackle the problem of data incompleteness. One outstanding approach is the theory of compressed sensing. A major breakthrough was achieved when it was proven that it is possible to reconstruct a signal from very few measurements. A crucial condition for compressed sensing is the so-called restricted isometry property. Nowadays, this strong requirement has been relaxed in several ways, but so far all formulations of compressed sensing are in finite dimensions. Quite recently, the first attempts at infinite-dimensional formulations have emerged. In this special issue, our focus is on stable and numerically feasible recovery algorithms and, as one major question, on whether these technologies generalize to the solution of operator equations/inverse problems. Hence we invite authors to submit original research papers and review articles that present the state of the art in this field, extend the known theory, and thereby contribute to answering these questions. We are interested in articles that explore aspects of generalized sparse sampling, sparse recovery, and inverse problems. Potential topics include, but are not limited to:
- Generalized sampling principles and stable reconstruction
- Compressed sampling strategies and the solution of operator equations
- Compressive sampling principles and their extensions to infinite dimensions
- Sparse recovery principles for inverse problems
- Regularization theory for inverse problems with sparsity constraints
- Algorithms and their numerical realization
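To make the setting concrete, here is a minimal, illustrative sketch (not from the call) of one standard sparse recovery algorithm for an operator equation Kx = y: ISTA (iterative soft-thresholding), which minimizes 0.5*||Kx - y||^2 + lam*||x||_1; the operator, regularization weight, and iteration count are toy assumptions.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(K, y, lam, n_iter=2000):
    """Minimize 0.5*||K x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        x = soft(x - step * K.T @ (K @ x - y), lam * step)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    K = rng.standard_normal((80, 200))              # a generic operator with more unknowns than data
    x_true = np.zeros(200)
    x_true[[3, 50, 120]] = [1.0, -2.0, 0.5]
    y = K @ x_true + 0.01 * rng.standard_normal(80)
    x_rec = ista(K, y, lam=0.05)
    print("estimates at the true support:", np.round(x_rec[[3, 50, 120]], 2))
```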
Journal: Neurocomputing
Special issue: Distributed Machine Learning and Sparse Representation with Massive Data Sets
Editors: Oliver Obst, CSIRO ICT Centre, Sydney; Tiberio Caetano, NICTA, Canberra; Michael Mahoney, Stanford University, Stanford
Submission deadline: September 16, 2011 [2011/09/16]
The exponentially increasing demand for computing power, together with physical and economic limitations, has contributed to a proliferation of distributed and parallel computer architectures. To make better use of current and future high-performance computing, and to fully benefit from these massive amounts of data, we must discover, understand, and exploit the available parallelism in machine learning. At the same time, we have to model data adequately while keeping the models as simple as possible, by making use of a sparse representation of the data or sparse modelling of the respective underlying problem.
This special issue follows the 2011 Symposium on "Distributed Machine Learning and Sparse Representation with Massive Data Sets" (DMMD 2011). We invite both new submissions and previously unpublished work that was presented at DMMD 2011. Suggested topics for this special issue include:
- Distributed, Multicore and Cluster based Learning Techniques
- Machine Learning on Alternative Hardware (GPUs, Robots, Sensor Networks, Mobile Phones, Cell Processors ...)
- Sparsity in Machine Learning and Statistics
- Learning results and techniques on Massive Datasets
- Dimensionality Reduction, Sparse Matrix, Large Scale Kernel Methods
- Fast Online Algorithms for Large Scale Data
- Parallel Computing Tools and Libraries
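As a toy illustration of the data-parallel theme (not an algorithm from DMMD 2011), the sketch below splits a least-squares problem into shards, has worker processes compute partial gradients, and averages them for each descent step; the shard layout, learning rate, and iteration count are assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_gradient(args):
    """Least-squares gradient of 0.5 * mean((X @ w - y)**2) on one data shard."""
    X, y, w = args
    return X.T @ (X @ w - y) / len(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = rng.standard_normal(10)
    # Four "workers", each holding its own shard of the data set.
    shards = []
    for _ in range(4):
        X = rng.standard_normal((500, 10))
        shards.append((X, X @ w_true))

    w = np.zeros(10)
    with ProcessPoolExecutor(max_workers=4) as pool:
        for _ in range(50):
            # Each worker computes its partial gradient; the master averages them.
            grads = list(pool.map(partial_gradient, [(X, y, w) for X, y in shards]))
            w = w - 0.5 * np.mean(grads, axis=0)

    print("distance to the true weights:", np.linalg.norm(w - w_true))
```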
Journal: Journal of Visual Communication and Image Representation (JVCI)
Special issue: Sparse Representations for Image and Video Analysis
Editors: Jinhui Tang, Nanjing University of Science and Technology; Shuicheng Yan, National University of Singapore; John Wright, Microsoft Research Asia; Qi Tian, University of Texas at San Antonio; Yanwei Pang, Tianjin University; Edwige Pissaloux, Université Pierre et Marie Curie
Submission deadline: October 1, 2011 [2011/10/01] (CfP)
Sparse representation has gained popularity in the last few years as a technique for reconstructing a signal from few training examples. The reconstruction amounts to adaptively finding a dictionary that best represents the signal on a per-sample basis. Sparse representation establishes a more rigorous mathematical framework for studying high-dimensional data and for uncovering the structures of the data, giving rise to a large repertoire of efficient algorithms. Although sparse representation has only been applied to visual analysis for a few years, it has already shown its advantages in processing visual information and holds great potential in this field.
Sparse representation has wide applications in image/video processing, analysis, and understanding, such as denoising, deblurring, inpainting, compression, super-resolution, detection, classification, recognition, and retrieval. Many approaches based on sparse representation have been proposed for these applications in the past years and have shown promising results. This special issue aims to bring together the range of research efforts in sparse representation for image/video processing, analysis, and understanding. The goals of this special issue are threefold: (1) to introduce advances in the theory of sparse representation; (2) to survey the progress of applications of sparse representation in visual analysis; and (3) to discuss new sparse-representation-based technologies that will be potentially impactful in image/video applications (preliminary results are needed).
The scope of this special issue is to cover all aspects that relate to sparse representation for visual analysis. Topics of interest include, but are not limited to the following:
- The fundamental theories on sparse representation
- Dictionary learning for sparse representation and modeling
- The novel learning methods based on sparse representation
- The applications of sparse representation in image/video denoising, inpainting, deblurring, compression, and super-resolution
- Sparse representation for pattern recognition and classification
- Sparse representation for image/video retrieval
- Sparse reconstruction for medical imaging and radar imaging
- Sparse component analysis and its application to blind source separation
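As a minimal, illustrative sketch of one of the topics above (dictionary learning), the code below alternates a crude 1-sparse coding step with a least-squares (MOD-style) dictionary update; the data model, dictionary size, and iteration counts are toy assumptions, not something prescribed by the call.

```python
import numpy as np

def learn_dictionary(Y, n_atoms, n_iter=30, seed=0):
    """Alternate 1-sparse coding and a least-squares (MOD-style) dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # Sparse coding: approximate each signal by its single best-matching atom.
        corr = D.T @ Y
        idx = np.argmax(np.abs(corr), axis=0)
        X = np.zeros((n_atoms, Y.shape[1]))
        X[idx, np.arange(Y.shape[1])] = corr[idx, np.arange(Y.shape[1])]
        # Dictionary update: least-squares fit of D to Y given the codes X.
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_D = rng.standard_normal((16, 8))
    true_D /= np.linalg.norm(true_D, axis=0)
    # 400 toy signals, each a scaled copy of a single atom (exactly 1-sparse).
    Y = true_D[:, rng.integers(0, 8, 400)] * rng.uniform(0.5, 2.0, 400)
    D = learn_dictionary(Y, n_atoms=8)
    corr = D.T @ Y
    idx = np.argmax(np.abs(corr), axis=0)
    Y_hat = D[:, idx] * corr[idx, np.arange(Y.shape[1])]
    print("relative reconstruction error:", np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y))
```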
Journal: IEEE Journal of Selected Topics in Signal Processing
Special issue: Robust Measures and Tests Using Sparse Data for Detection and Estimation
Editors: Hsiao-Chun Wu, Louisiana State University; Philippe Ciblat, ENST; Octavia A. Dobre, Memorial University of Newfoundland; Jitendra K. Tugnait, Auburn University
Submission deadline: March 28, 2011 (apparently past, though still listed on the "Open Special Issues" page and among the upcoming publications, due February 2012) (CfP)
Sparse (undersampled) data constraints are quite common in statistical signal processing, both for computational efficiency and for the validity of system time-invariance assumptions. Hence, research on how to build reliable statistical measures and statistical tests from sparse data for different signal processing applications remains quite challenging. When real-time efficiency or unnoticeable processing delay is required on state-of-the-art microprocessors or DSP platforms, researchers continue to develop new robust statistical methodologies. Two crucial indicators, the "number-of-samples to number-of-parameters-to-be-estimated ratio" (referred to as SPR) and "system performance versus signal-to-interference-plus-noise ratio" (referred to as SPSINR), can reflect both the sparse data constraint and robustness. The objective is to seek new ideas and techniques that surpass existing signal processing methods in terms of low SPR and superior SPSINR while still achieving good computational efficiency. In signal processing research, reliable statistical measures such as statistical moments/cumulants, Lp-norms, mean-square errors (MSE), Cramer-Rao bounds (CRB), signal-to-noise ratio (SNR), signal-to-interference ratio (SIR), mutual information/entropy, divergence, etc. are always pursued, especially under restrictions of limited data and/or time variance of the underlying systems, rather than the classical asymptotic analysis based on an infinite data set. This special issue will focus on all aspects of design, development, implementation, operation, and applications of robust measures and tests using sparse data for detection and estimation.
We invite original and unpublished research contributions in all areas relevant to robust detection and estimation using sparse data. The topics of interest include, but are not limited to:
- New robust measures or objective functions for detection and estimation using sparse data
- New robust statistical tests for detection and estimation using sparse data
- New theoretical and empirical analyses for detection and estimation using sparse data
- New results for explicit expressions of CRB or variance for detection and estimation using sparse data
- Reliable signal quality measures using sparse data
- General frameworks for evaluating various statistical measures/tests using sparse data
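As a small, illustrative numerical check of the kind of quantities the call mentions (not taken from it), the sketch below compares the empirical MSE of the sample mean with the Cramer-Rao bound sigma^2/N for estimating a Gaussian mean, at small sample-to-parameter ratios (SPR); the parameter values are arbitrary.

```python
import numpy as np

def empirical_mse(n_samples, sigma, n_trials=20000, seed=0):
    """Monte Carlo MSE of the sample mean for N i.i.d. Gaussian observations."""
    rng = np.random.default_rng(seed)
    mu = 1.0                          # one scalar parameter to estimate, so SPR = n_samples
    data = mu + sigma * rng.standard_normal((n_trials, n_samples))
    return np.mean((data.mean(axis=1) - mu) ** 2)

if __name__ == "__main__":
    sigma = 1.0
    for n in (2, 4, 8, 16):
        crb = sigma**2 / n            # Cramer-Rao bound for estimating a Gaussian mean
        print(f"N={n:2d}  SPR={n:2d}  empirical MSE={empirical_mse(n, sigma):.4f}  CRB={crb:.4f}")
```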
Journal: Journal of Applied Mathematics
Special issue: Preconditioning Techniques for Sparse Linear Systems
Editors: Massimiliano Ferronato, Department of Mathematical Methods and Models for Scientific Applications, University of Padova, Padova, Italy; Edmond Chow, School of Computational Science and Engineering, College of Computing, Georgia Institute of Technology, Atlanta; Kok Kwang Phoon, Department of Civil Engineering, National University of Singapore
Submission deadline: November 1, 2011 [2011/11/01] (CfP Special Issue on Preconditioning Techniques for Sparse Linear Systems)
The accurate and efficient solution to sparse linear systems of equations, arising from the discretization of PDEs, often represents the main memory- and time-consuming task in a computer simulation. Direct methods are still widely used on the basis of their robustness and reliability. However, they generally scale poorly with the matrix size, especially on 3D problems. For large sparse systems, iterative methods based on Krylov subspaces are a most attractive option. Several Krylov subspace solvers were developed from the 1970s through the 1990s, and they are generating growing interest in many areas of engineering and scientific computing. Nonetheless, to become really competitive with direct solvers they need appropriate preconditioning to achieve convergence in a reasonable number of iterations.
It is widely recognized that preconditioning is the key factor to increase the robustness and the computational efficiency of iterative methods. Unfortunately, theoretical results are few, and it is not rare that “empirical” algorithms work surprisingly well despite the lack of a rigorous foundation. The research on preconditioning has significantly grown over the last two decades and currently appears to be a much more active area than either direct or iterative solution methods. On one hand, this is due to the understanding that there are virtually no limits to the available options for obtaining a good preconditioner. On the other hand, it is also generally recognized that an optimal general-purpose preconditioner is unlikely to exist, so new research fields can be opened for improving the computational efficiency in the solution of any specific problem at hand on any specific computing environment.
We invite investigators to contribute original research articles as well as review articles on the development and the application of preconditioning techniques for the solution to sparse linear systems. Potential topics include, but are not limited to:
- Development and numerical testing of novel preconditioners
- Development and numerical testing of preconditioners for specific applications
- Improvement of existing general-purpose algebraic preconditioners
- Theoretical advances on the properties of existing general-purpose algebraic preconditioners
- Application of existing techniques to novel fields
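To illustrate the point about preconditioning (this example is mine, not from the call), the sketch below solves a toy 2-D Laplacian system with SciPy's conjugate gradient solver, with and without an incomplete-LU preconditioner, and counts the iterations; the matrix, grid size, and drop tolerance are assumptions for the demo.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy SPD test problem: 2-D Laplacian on a 50 x 50 grid (2500 unknowns).
n = 50
lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(lap1d, lap1d).tocsc()
b = np.ones(A.shape[0])

counts = {"plain": 0, "ilu": 0}
def make_counter(key):
    def callback(xk):
        counts[key] += 1
    return callback

# Conjugate gradient without preconditioning.
x_plain, _ = spla.cg(A, b, callback=make_counter("plain"))

# Conjugate gradient with an incomplete-LU preconditioner applied via a LinearOperator.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)
x_prec, _ = spla.cg(A, b, M=M, callback=make_counter("ilu"))

print("CG iterations, no preconditioner :", counts["plain"])
print("CG iterations, ILU preconditioner:", counts["ilu"])
```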
Additional conference events are listed on the Compressive Sensing meetings page or SIVA Conferences (not updated often enough these days). Let us mention:
The first international Travelling Workshop on Interaction between Sparse models and Technologies (iTWIST 2012), May 9-11, 2012, at CIRM, in Marseilles, France, on "Generalized sparsity in high-dimensional geometries".
Organisers: Dr. Sandrine Anthoine, Prof. Yannick Boursier, Prof. Pascal Frossard, Dr. Laurent Jacques, Prof. Pierre Vandergheynst, Prof. Christophe De Vleeschouwer
The range of topics will include (but may not be limited to):
- Graph theory and applications
- Dictionary learning
- Sparse models in machine learning
- Compressed sensing
- Emerging and innovative acquisition technologies such as:
* compressive imagers
* hyperspectral imaging
* analog-to-information converters
* coded aperture systems
* computational photography
- Applications to real-life problems (astronomy, biomedical, industry, multimedia, and any other field ...)
Probably more information soon at: http://marwww.in2p3.fr/~boursier/
In the near term, we await the upcoming (September 2011) special issue Adaptive Sparse Representation of Data and Applications in Signal and Image Processing (IEEE Journal of Selected Topics in Signal Processing) and the "very very sparse" special issue "Sparse Representation of Data and Images" in Advances in Adaptive Data Analysis: Theory and Applications, which appears in only a few places (Improved analysis of the subsampled randomized Hadamard transform, cited by A Note on Low-rank Matrix Decompositions via the Subsampled Randomized Hadamard Transform).
Finally, the special issue of Signal Processing on Advances in Multirate Filter Bank Structures and Multiscale Representations is complete: 4 out of 11 papers refer to sparsity in their title, and all of them do in their text. Sparsity is no self-referential concept these days.