Hyperspectral cameras capture images across many narrow wavelength bands, which is useful in numerous computer vision and material identification
applications. Due to the dense sampling of both space and spectrum, the captured hyperspectral image is often very high dimensional. This leads to a severe loss of
SNR per band, requires very long exposure times, and is inherently wasteful.
A key observation is that a hyperspectral image of a natural scene has very limited spectral diversity, leading to a concise low-rank representation. We propose an optical imager that directly captures this low-rank subspace. We achieve this by implementing two optical operators -- a spatially-coded spectrometer and a spectrally-coded imager. By alternating between the two, using the output of one operator as the input to the other, we capture a low-rank approximation with as few as 10 measurements.
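Mathematically, this alternation behaves like a block power iteration on the pixels-by-bands matrix. A minimal numpy sketch on a synthetic low-rank scene follows; the matrix sizes, rank, and iteration count are illustrative assumptions, not parameters of the actual system, and the QR orthonormalization stands in for re-coding the modulator between passes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank hyperspectral image: P pixels x B bands, rank r.
P, B, r = 1000, 64, 4
H = rng.standard_normal((P, r)) @ rng.standard_normal((r, B))

# Alternate between the two optical operators (a block power iteration):
# a spectrally-coded imager measures H @ S (an image per spectral code),
# a spatially-coded spectrometer measures H.T @ X (a spectrum per spatial code).
k = 6                               # number of subspace vectors to capture
S = rng.standard_normal((B, k))     # random initial spectral codes
for _ in range(5):                  # ~10 optical measurements in total
    X, _ = np.linalg.qr(H @ S)      # spectrally-coded imager, orthonormalized
    S, _ = np.linalg.qr(H.T @ X)    # spatially-coded spectrometer, orthonormalized

# Low-rank approximation from the captured spectral subspace.
H_hat = (H @ S) @ S.T
err = np.linalg.norm(H - H_hat) / np.linalg.norm(H)
print(err)
```

Since the synthetic scene has rank 4 and we capture a 6-dimensional subspace, the relative error is essentially machine precision; for a real scene the error is governed by how fast its singular values decay.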
Structured Light (SL) relies on projecting a known pattern and capturing an image of the scene. By computing the correspondences between projector and camera pixels, SL
is capable of highly accurate depth map estimation. Existing SL techniques either require projection of multiple patterns or rely on complex computation to estimate
the depth map, both of which preclude an efficient implementation on mobile systems such as cellphones and drones.
Devices with limited real estate can only accommodate a narrow (micro) baseline between the camera and the projector. We observe that such a narrow baseline can be used to linearize the otherwise non-linear SL equation relating the projected pattern and the captured image. This leads to a linear equation in two unknowns (albedo and disparity) at each pixel. The resulting equation can then be efficiently solved using a local least-squares approach, which requires minimal computational resources and needs projection of only a single pattern.
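The linearization is a first-order Taylor expansion: with a narrow baseline, the captured intensity I(x) ~ a(x) P(x + d(x)) ~ a P(x) + (a d) P'(x), which is linear in the unknowns a and a*d. A 1D toy version, with a sinusoidal pattern and constant albedo and disparity chosen purely for illustration, might look like:

```python
import numpy as np

# Known projected pattern and its derivative (1D sketch).
N = 256
x = np.arange(N)
P = 1.0 + np.sin(2 * np.pi * x / 16)
dP = np.gradient(P)

# Simulated capture: I(x) = a * P(x + d), with small sub-pixel disparity.
a_true, d_true = 0.8, 0.3
I = a_true * (1.0 + np.sin(2 * np.pi * (x + d_true) / 16))

# Micro-baseline linearization: I ~ u1 * P + u2 * dP with u1 = a, u2 = a*d.
# Solve by local least squares over a small window around a pixel.
w = 8
i0 = N // 2
sl = slice(i0 - w, i0 + w)
A = np.stack([P[sl], dP[sl]], axis=1)
u, *_ = np.linalg.lstsq(A, I[sl], rcond=None)
a_est, d_est = u[0], u[1] / u[0]
print(a_est, d_est)
```

The per-pixel cost is a 2x2 least-squares solve, which is what makes the method attractive for mobile hardware; in 2D the same window-based solve runs independently at every pixel.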
The wavelet transform of an image is both a sparsifying and a predictive transformation. By optically measuring the wavelet coefficients with a single-pixel camera that measures linear projections of the scene's image, one can adaptively tease out the dominant wavelet coefficients, requiring fewer measurements than compressive sensing. In practice, such a method faces the debilitating problem of increasing noise with increasing spatial scale, due to the spatially compact wavelet basis.
Instead of using a DMD as a spatial light modulator (SLM), we propose using a phase-only SLM, which simply redistributes light into the spatially compact basis, thus maintaining a constant measurement SNR independent of spatial scale. This allows high-quality imaging with a small set of high-SNR measurements made adaptively.
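The adaptive tree-descent part of this scheme (independent of the choice of SLM) can be sketched in 1D with Haar wavelets: measure a parent coefficient, and measure its children only when the parent is significant. The test signal, threshold, and starting scale below are arbitrary illustrative choices.

```python
import numpy as np

def haar_coeff(sig, scale, k):
    """One Haar wavelet coefficient: a single 'projection' the camera measures."""
    n = len(sig) // (2 ** scale)        # support length at this scale
    block = sig[k * n:(k + 1) * n]
    h = n // 2
    return (block[:h].sum() - block[h:].sum()) / np.sqrt(n)

# Piecewise-constant test signal: sparse in the Haar basis.
N = 256
sig = np.zeros(N)
sig[40:90] = 1.0
sig[160:170] = -2.0

# Adaptive descent: wavelets are predictive, so a small parent coefficient
# suggests its children are small too and need not be measured.
thresh = 0.1
measured = {}
frontier = [(1, 0), (1, 1)]             # (scale, position) of coarse coefficients
n_meas = 0
while frontier:
    s, k = frontier.pop()
    c = haar_coeff(sig, s, k)
    n_meas += 1
    measured[(s, k)] = c
    if abs(c) > thresh and 2 ** (s + 1) <= N // 2:
        frontier += [(s + 1, 2 * k), (s + 1, 2 * k + 1)]

print(n_meas, "measurements instead of", N)
```

For this two-block signal the descent measures only a few dozen coefficients, concentrated around the edges, rather than all N projections.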
One of the distinguishing features of a material is its spectral signature, which is the intensity of light the material reflects as a function
of wavelength. Hyperspectral images, which capture the spectral signature at every pixel, are used for material classification; one specific
application is anomaly detection, where materials very different from the background and present in trace quantities are identified.
However, capturing all the data to detect materials present in trace quantities is both costly and wasteful. Instead, we propose a novel two-stage procedure where we first identify the spectrum of the background and remove it. In the second stage, with the background absent, the anomalies constitute a sparse signal, which can be measured with various compressive sensing techniques.
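A minimal numpy sketch of the two-stage idea on synthetic data follows. The compressive measurement step is abstracted away: background removal here uses the dominant singular vector as the background spectrum estimate, and stage two simply thresholds the residual energy of the resulting sparse signal. All sizes, spectra, and anomaly locations are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic scene: P pixels, B bands. Almost every pixel is a scaled copy of
# the background spectrum; a few pixels also contain an anomalous spectrum.
P, B = 500, 32
bg = rng.random(B)                      # background spectrum
anom = rng.random(B)                    # anomaly spectrum
H = np.outer(rng.uniform(0.5, 1.5, P), bg)
anom_idx = np.array([37, 142, 400])
H[anom_idx] += 0.5 * anom

# Stage 1: estimate the background spectrum (dominant right singular vector)
# and project it out of every pixel.
_, _, Vt = np.linalg.svd(H, full_matrices=False)
bg_est = Vt[0]
R = H - np.outer(H @ bg_est, bg_est)    # residual: near zero except at anomalies

# Stage 2: the per-pixel residual energy is sparse; detect its support.
e = np.linalg.norm(R, axis=1)
detected = np.flatnonzero(e > 0.5 * e.max())
print(detected)
```

In the actual pipeline the residual would be sensed compressively rather than formed explicitly; the point of the sketch is that once the background is removed, the anomaly map is sparse and easy to recover.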
In contrast to an orthonormal basis, an overcomplete dictionary has far more elements than the signal dimension,
which admits sparse representations. Such a framework enables compressive sensing tasks by exploiting sparsity during both sensing
and recovery of the signal.
Unfortunately, such applications are far from practical, due to the enormous dictionaries required for good results and the significant time required for recovery. We instead propose a novel clustering technique that reduces the recovery time by 10x to 100x, with very little loss in accuracy. The clustering of dictionary elements exploits the observation that visual signals are self-similar across scales, enabling identification of the right cluster from a down-sampled version of the same signal.
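One way such clustering might be sketched: cluster the down-sampled atoms, then at recovery time match the query's down-sampled version to a cluster center and search only within that cluster. The random toy dictionary, pair-averaging as the down-sampling operator, and the plain k-means loop below are all illustrative assumptions, not the actual method's components.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy overcomplete dictionary: M unit-norm atoms of dimension d.
d, M, k = 64, 1024, 16
D = rng.standard_normal((d, M))
D /= np.linalg.norm(D, axis=0)

def down(v):
    """Down-sample by averaging adjacent pairs (cross-scale self-similarity)."""
    return v.reshape(-1, 2).mean(axis=1)

# Cluster the DOWN-SAMPLED atoms with a few rounds of plain k-means.
Ds = np.stack([down(D[:, j]) for j in range(M)])          # M x d/2
C = Ds[rng.choice(M, k, replace=False)]                   # initial centers
for _ in range(10):
    labels = np.argmin(((Ds[:, None] - C[None]) ** 2).sum(-1), axis=1)
    C = np.stack([Ds[labels == c].mean(axis=0) if (labels == c).any() else C[c]
                  for c in range(k)])
labels = np.argmin(((Ds[:, None] - C[None]) ** 2).sum(-1), axis=1)

# Recovery of a 1-sparse signal: match the normalized query's down-sampled
# version to a center, then search only that cluster instead of all M atoms.
j_true = 777
y = 2.5 * D[:, j_true]
yn = y / np.linalg.norm(y)
c = np.argmin(((down(yn) - C) ** 2).sum(-1))
cand = np.flatnonzero(labels == c)
j_est = cand[np.argmax(np.abs(D[:, cand].T @ y))]
print(j_est, len(cand), "atoms searched instead of", M)
```

With roughly balanced clusters, the search touches about M/k atoms, which is where the order-of-magnitude speedup comes from; a multi-sparse signal would repeat the cluster lookup inside each pursuit iteration.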
Mobile phones have become ubiquitous over the past decade, and they offer more data than a traditional DSLR can,
such as accelerometer, compass, and gyroscope readings. At the same time, hand-held phones tend to create blur due to hand shake.
We proposed some novel applications of such mobile imagery, where we incorporate the sensor information into various image processing tasks such as depth estimation, image deblurring, all-in-focus imaging, and so on. This work was part of my undergraduate thesis, where I worked with Prof. A N Rajagopalan.