Assistant Professor,

Electrical & Computer Engineering; and

the Center for Neural Basis of Cognition

B-202 Hamerschlag Hall

Carnegie Mellon University

Ph: (412) 268-3644

pgrover at andrew dot cmu dot edu

Broadly, I am interested in an understanding of information that goes beyond just communication. Our lab seeks this understanding through a mix of theory and laboratory work, spanning fundamental limits all the way to experiments. Current topics of interest include a fundamental and practical understanding of circuits and systems for processing and communicating information; the flow of information in neural systems and neural interfaces (and the use of this understanding to design radically new neural interfaces); and understanding information and its use by exploring the union of control and communication. Find a short bio here.

Postdoctoral researcher (2011-12) in Electrical Engineering, Stanford University.

PhD, UC Berkeley, Dec 2010.

B. Tech, M.Tech, IIT Kanpur ('03, '05), Schooling: Vidyashram, Jaipur

You can find my CV here.

Modern distributed computing systems, from nanoscale circuits to large supercomputers, are all prone to faults, errors, and delays. Our approach is to merge computation and error-correction coding into “coded computation,” which uses close-to-optimal redundancy to make the system robust to faults, errors, and delays. We do this by deriving fundamental limits as well as novel coded-computation strategies, and comparing the two. Our focus is on applications of modern interest, such as scientific computing and machine learning, and on identifying the building blocks of these computations in order to make them resilient.

This raises new coding-theory problems, e.g., low-communication-complexity “Short-Dot” codes for matrix multiplication. At ISIT'17, I will jointly present a tutorial on this exciting area, discussing fundamental limits and strategies for combating errors (with Viveck Cadambe, Penn State).
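To make the idea of coded computation concrete, here is a toy sketch (illustrative only; this is not the Short-Dot construction, and the split-plus-parity scheme below is the simplest possible one): a matrix-vector product A·x is distributed over three "workers" so that the result survives any single straggler or failure.

```python
import numpy as np

# Toy (n=3, k=2) coded matrix-vector multiplication: A is split into two
# row-blocks, and a third "parity" worker computes with their sum. The
# product A @ x is then recoverable from ANY two of the three worker
# outputs, so one straggler/erasure is tolerated.

def encode_blocks(A):
    """Split A into row-blocks A1, A2 and append a parity block A1 + A2."""
    A1, A2 = np.vsplit(A, 2)
    return [A1, A2, A1 + A2]

def decode(results):
    """Recover A @ x from the worker outputs (None marks a straggler)."""
    r1, r2, r3 = results
    if r1 is None:          # A1 @ x = (A1 + A2) @ x - A2 @ x
        r1 = r3 - r2
    elif r2 is None:        # A2 @ x = (A1 + A2) @ x - A1 @ x
        r2 = r3 - r1
    return np.concatenate([r1, r2])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))
x = rng.standard_normal(5)

# Each "worker" computes its own block-times-vector product.
worker_outputs = [B @ x for B in encode_blocks(A)]
worker_outputs[1] = None    # simulate a straggling/failed worker

assert np.allclose(decode(worker_outputs), A @ x)
```

The same recover-from-any-k-of-n principle, with more sophisticated codes, underlies the strategies discussed in the tutorial.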

What is fundamentally new in this area that goes beyond classical information and coding theory? I have observed two important aspects:

(i)

(ii)

These fundamental results provide insights that we used to arrive at efficient and robust strategies: careful embedding of error correction at intermediate stages of computation can be used to control error propagation. Our results [TIT'17b] are the first strategies that, despite all gates being noisy, compute linear transforms “reliably”. To pursue this direction, we enthusiastically joined the SRC SONIC center, enabling tech transfer to industry, and recently (2017) received an NSF grant.

[TIT '17b] Yaoqing Yang, Pulkit Grover, and Soummya Kar.

[TIT'17a] Yaoqing Yang, Pulkit Grover, and Soummya Kar.

[NIPS '16] Sanghamitra Dutta, Viveck Cadambe, and Pulkit Grover.

[ISIT'17] Sanghamitra Dutta, Viveck Cadambe, and Pulkit Grover.

Theoretical underpinnings and practical designs of “green” radios

The goal is to

Conclusions derived from our “Node Model” and “Wire Model” are as follows:

- [Node model] There is a fundamental tradeoff between transmit and encoding/decoding power. When computational nodes dominate processing power, to minimize total power, one must fundamentally stay away from capacity.
- [Node model] Capacity-approaching LDPC codes optimize over transmit power, but require large decoding power. Regular LDPC codes are order-optimal in the Node Model.
- [Wire/Info-friction model] When wires dominate the circuit power consumption, the total power diverges to infinity significantly faster than that for the node model. Further, the optimal choice of transmit power also increases unboundedly as the error probability is lowered.
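The Node-Model tradeoff can be seen in a stylized numerical sketch. The decoding-power model below, c / (C(P_T) − R), is an assumed toy form (blowing up as the operating rate approaches capacity), and the constant c is an illustrative choice, not a value from the papers listed here:

```python
import numpy as np

# Stylized Node-Model sketch: fix a target rate R, let the AWGN capacity
# be C(P_T) = 0.5*log2(1 + P_T), and MODEL decoding power as
# c / (C(P_T) - R), an assumed toy cost that diverges near capacity.

R = 1.0                        # target rate, bits/channel use
c = 0.5                        # assumed decoding-power constant (toy)

def capacity(p):
    return 0.5 * np.log2(1.0 + p)

p_min = 2 ** (2 * R) - 1       # transmit power at which C(P_T) = R exactly

p_grid = np.linspace(p_min + 1e-3, 10 * p_min, 100_000)
total = p_grid + c / (capacity(p_grid) - R)   # transmit + decoding power
p_opt = p_grid[np.argmin(total)]

# Minimizing TOTAL power pushes the transmit power strictly above the
# capacity-minimal value: the optimum backs away from capacity.
assert p_opt > p_min
assert capacity(p_opt) - R > 0.05
```

Even in this crude model, the minimizer of total power sits at a transmit power well above the capacity-achieving minimum, illustrating the first bullet above.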

Our results have the potential to drastically reduce power consumed in short-distance wireless (e.g. 60 GHz band) and wired (e.g. multi-Gbps communication in data-centers) communication.

[TIT'15a] Pulkit Grover,

[JSAC '11] Pulkit Grover, Kristen Ann Woyach and Anant Sahai,

[ISIT'17c] Haewon Jeong, Christopher Blake, and Pulkit Grover.

[ISIT '12] Pulkit Grover, Andrea Goldsmith and Anant Sahai.

[ITW '07] Pulkit Grover,

[ISIT '14] Pulkit Grover.

[JSAC '16a] Karthik Ganesan, Pulkit Grover, Jan Rabaey, and Andrea Goldsmith.

[Globecom '12] Karthik Ganesan, Yang Wen, Pulkit Grover, Andrea Goldsmith and Jan Rabaey,

We are seeking a fundamental approach, guided by signal processing and information theory, to understand information-use in wearable and implantable biosensing systems.

For the noninvasive EEG brain-sensing modality, our work [Proc IEEE'17, ISIT'17] (support: SRC SONIC, CMU BrainHUB, and NSF WiFiUS) makes a systematic case that the current theoretical understanding severely underestimates EEG's spatial resolution. In essence, that understanding relies on a spatial Nyquist-rate estimate when, really, the Nyquist rate (as estimated) has nothing to do with how much information about brain activity one can infer from the scalp. Recently, we obtained the first experimental validations of these conclusions (with Marlene Behrmann's lab, led by Amanda Robinson and Praveen Venkatesh; under preparation, with preliminary results presented at [CNBC Retreat'16]), and we are working with clinicians on using this high resolution for improved diagnoses of neural disorders. This led us to instrument some of the highest-density EEG systems in existence (for their coverage), and even those turned out not to have sufficient density! We are also working with instrumentation engineers (led by Ashwati Krishnan) to overcome novel challenges in building and installing such high-density systems. I am personally very interested in seeing these systems through to their end goal: improved clinical and neuroscientific inferences.

(A good summary would be: when they go sub-Nyquist, we go super-Nyquist!)

Another fundamental question is: how well can we infer the directions of information flow in the brain? Our information-theoretic counterexamples [Allerton'15c] (inspired by the celebrated work of Schalkwijk & Kailath) show that Granger causality and directed information, which are used widely to infer these directions, can provide the wrong answer.
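For concreteness, here is a minimal sketch of how pairwise Granger causality is typically estimated; this illustrates the standard estimator on a toy simulation where it succeeds, and does not reproduce the counterexamples of [Allerton'15c]:

```python
import numpy as np

# Standard pairwise Granger causality: series X "Granger-causes" Y if
# past values of X reduce the prediction error of Y beyond what Y's own
# past achieves. We simulate X driving Y and compare both directions.

rng = np.random.default_rng(1)
T = 5000
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * x[t - 1] + 0.1 * y[t - 1] + 0.1 * rng.standard_normal()

def granger_score(src, dst, lag=1):
    """log(restricted residual variance / full residual variance)."""
    dpast, spast, target = dst[:-lag], src[:-lag], dst[lag:]
    # Restricted model: predict dst from its own past only.
    A_r = np.column_stack([dpast, np.ones_like(dpast)])
    res_r = target - A_r @ np.linalg.lstsq(A_r, target, rcond=None)[0]
    # Full model: dst's past plus src's past.
    A_f = np.column_stack([dpast, spast, np.ones_like(dpast)])
    res_f = target - A_f @ np.linalg.lstsq(A_f, target, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

# X drives Y in this simulation, and the estimator agrees:
assert granger_score(x, y) > granger_score(y, x)
```

The counterexamples show that this very estimator, applied to signals generated by feedback schemes in the spirit of Schalkwijk–Kailath, can point in the wrong direction.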

[ISIT'17a] Praveen Venkatesh and Pulkit Grover.

[Proc. IEEE '17] Pulkit Grover and Praveen Venkatesh.

[Allerton '15c] Praveen Venkatesh and Pulkit Grover.

[CNBC Retreat'16] A. Robinson, M. J. Boring, P. Venkatesh, X. Kuang, M. Behrmann, M. J. Tarr, and P. Grover.

Control and communication in cyber-physical systems

The “observers” cannot act on the system, and therefore they communicate their observations to the “controllers.” The “controllers” cannot observe the state directly, and thus rely on the signals sent by the “observers” to decide on their actions. In a realistic control system, these simplified control agents may be extremely limiting. However, analytically, they have been simpler to understand because they disallow

The crux of this issue is captured in a deceptively simple problem called the Witsenhausen counterexample, which has been
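The flavor of the counterexample can be seen in a Monte-Carlo sketch (parameters and strategies below are illustrative choices, not taken from the papers listed here): the state is x0 ~ N(0, σ0²); controller 1 pays k²·E[u1²] to set x1 = x0 + u1; controller 2 sees y = x1 + N(0,1) and pays E[(x1 − u2)²]. For small k and large σ0, a simple quantize-and-detect strategy beats a natural linear benchmark:

```python
import numpy as np

# Witsenhausen-counterexample sketch: two controllers, cost
# k^2 * E[u1^2] + E[(x1 - u2)^2], with the second controller observing
# x1 through unit-variance noise. Parameters are illustrative.

rng = np.random.default_rng(2)
k, s0, n = 0.2, 5.0, 200_000
x0 = s0 * rng.standard_normal(n)
w = rng.standard_normal(n)

# Linear benchmark: u1 = 0, u2 = MMSE (linear) estimate of x0 from y.
y_lin = x0 + w
u2_lin = (s0**2 / (s0**2 + 1)) * y_lin
cost_linear = np.mean((x0 - u2_lin) ** 2)      # ~ s0^2 / (s0^2 + 1)

# Quantization strategy: controller 1 forces x1 = x0 + u1 onto
# {-s0, +s0}; controller 2 then simply detects the sign of y.
x1 = s0 * np.sign(x0)
u1 = x1 - x0
y_q = x1 + w
u2_q = s0 * np.sign(y_q)
cost_quant = k**2 * np.mean(u1**2) + np.mean((x1 - u2_q) ** 2)

# The nonlinear (signaling) strategy wins for this (k, s0):
assert cost_quant < cost_linear
```

The first controller's quantization is implicit signaling: it spends control power to make the state easy for the second controller to estimate, which no linear strategy can exploit.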

[Ph.D. thesis] Pulkit Grover.

[TAC '13] Pulkit Grover, Se Yong Park, and Anant Sahai,

[TIT'15b] Pulkit Grover, Aaron B. Wagner and Anant Sahai,

[IJSCC '10] Pulkit Grover and Anant Sahai,

[ISIT '10] Pulkit Grover and Anant Sahai,

[CDC '08] Pulkit Grover and Anant Sahai,

MS

UG

The epithet “students” is unfair to all of the above who have taught me a lot during our collaborations.

My favorite picture.