FAQs on my research on “green” communications

I have been asked a variety of thoughtful and thought-provoking questions during my talks, ranging from fundamental physics to computer science to circuits. I have collected many of them here because they help in understanding my research and approach better.

Also, take a look at this video for an extended but informal intro (the first 15 minutes or so introduce the broad problem and why it is important), and at this and this video for short, informal presentations.

I'd love to hear your thoughts! Drop me an email.

High-level questions

  1. I don't know any information theory. What do you do?

    Intellectually, I am trying to demystify a "magic" in information theory. Claude Elwood Shannon, in an intellectually beautiful and practically extremely important 1948 paper, showed something highly non-intuitive. To understand the gist of his message, consider the following situation. You want to talk to your friend, who is far away in a crowded place. What can you do? You can shout! Or you can repeat yourself many times. The more reliably you want to communicate, the louder you should be, or the more times you should repeat yourself, yes?

    No! Shannon showed that as long as you have enough to say, you don't need to repeat yourself many times, or speak louder, to make your message more reliable. Isn't this magical? The key, of course, lies in understanding what “as long as you have enough to say” entails. My contention is that it entails spending energy on mixing your information (with a little bit of redundancy) at the transmitter, and demixing it at the receiver. What I show, using circuit models, is that if you want to minimize the total energy in transmission, mixing, and demixing, then the answer is more consistent with our intuition: speak louder, or slower. Sophisticated coding techniques still save you a lot of (total) energy, but these will be newer techniques that we are currently developing.

    This matters practically because there are now many systems for which the power required to mix and demix information at the transmitter and receiver can dominate the power required for transmission. This was not the case when Shannon theory first started, when we were mainly concerned with long-distance communication (remember television, telephone, telegraph? “tele” literally means “far off”).
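
    If you like to see numbers, here is a tiny back-of-the-envelope sketch of the two intuitions. It uses a textbook binary symmetric channel; the crossover probability 0.1 is an arbitrary choice for illustration.

```python
# Toy illustration: "repeat yourself" vs. Shannon's result, on a binary
# symmetric channel (BSC) with crossover probability p (chosen arbitrarily).
from math import comb, log2

p = 0.1  # probability that the channel flips a transmitted bit

def repetition_error(n, p):
    """Bit-error probability of an n-fold repetition code with majority decoding (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

# Repeating more does make you more reliable, but the rate 1/n shrinks toward zero.
for n in (1, 3, 5, 11, 21):
    print(f"repetitions n = {n:2d}: rate = {1/n:.3f}, bit-error prob = {repetition_error(n, p):.2e}")

# Shannon's theorem: any rate below the capacity C = 1 - H(p) is achievable with
# error probability driven to zero, without the rate (or the power) shrinking.
H = -p * log2(p) - (1 - p) * log2(1 - p)
print(f"BSC({p}) capacity: C = {1 - H:.3f} bits per channel use")
```

    The catch, as discussed above, is that achieving the Shannon promise is not free: the mixing and demixing that replaces repetition costs processing energy.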

  2. What are the fundamental contributions of your work?

    At a high level, I show that the energy required for processing at the transmitter and receiver must increase as we approach the fundamental limits on transmit power, or as we lower the target error-probability. In terms of the analogy above: if you do not want your friend's head to explode from the heat of processing your garbled message, you may want to strike a balance between transmit power and processing power.
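
    One schematic way to write the balance (the notation here is mine, introduced only for illustration):

```latex
% E_T: transmit energy per bit; E_enc, E_dec: encoding/decoding (processing) energy per bit.
\[
  E_{\mathrm{total}} \;=\; E_T + E_{\mathrm{enc}} + E_{\mathrm{dec}}.
\]
% The lower bounds say that E_enc + E_dec must grow as E_T approaches the Shannon limit,
% or as the target error probability is lowered; so minimizing E_total means backing off
% from the Shannon limit on E_T rather than operating right at it.
```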

  3. Does energy consumed in communication and computing matter on a global scale?

    Yes! Information and Communication Technologies will likely consume more than 15% of our electricity by 2020. They already consume about 8%.

    Of course, the energy also matters locally, as you know from how quickly your cellphone battery runs out when you're talking or downloading data.

  4. But what you do is mostly “fundamental”. Of what practical use are these results?

    So far, my work has indeed focused on fundamental limits. However, I have tested these results (in collaboration with circuit experts) and found the predictions to be not just qualitatively correct, but also insightful for designing systems.

    Our system simulations with joint design of the strategy and the circuits consume significantly (5x-10x) less energy than existing systems in some cases.

  5. Any example systems where they would help reduce energy consumption?

    I like to give the example of data-center ethernet networks. Today we have these giant computing facilities, called data centers, that process and store all our data in a fast and reliable way. Collectively, they consume more energy than most countries! 20% of the energy they consume is within their communication links. That's one of my areas of focus.

    Indoor wireless systems (e.g. WiFi, Bluetooth, the 60GHz band) are also potential areas for applying my ideas.

  6. I don't see any wires or nodes when I write software. So can't I decode the signal in software and surpass your bounds?

    At the end of the day, software runs on a physical circuit with wires and nodes. There's no getting away from that. Implementing the decoding in software only makes the process less energy- and time-efficient.

  7. Then, are there any limitations to your results?

    Yes. My models do not allow as much flexibility as some (though few) circuits use today: they are limited in their reconfigurability and in their ability to let elements go to sleep to save energy. They are also noiseless. The new Information-Friction Model addresses some of the concerns about flexibility, but there's a long way to go.


Technical questions related to information theory:

  1. Do these results hold only for sparse-graph codes (e.g. LDPC codes)?

    They hold for all codes and all encoding and decoding algorithms. The algorithms are assumed to be executed on some simple circuit models, and the code is assumed to be transmitted across some similarly simple channels.

  2. OK, then how are these codes constrained?

    All codes are allowed, even nonlinear ones. There is no constraint except those of traditional information theory: the reliability (error-probability) is fixed, and so is the desired rate.
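
    In symbols (this is just the standard setting restated; the notation is mine):

```latex
% A length-n code carrying k message bits is evaluated under two classical constraints:
\[
  R = \frac{k}{n} \ \text{(rate) is fixed}, \qquad
  P_e \le \epsilon \ \text{(block-error probability) is fixed}.
\]
% Subject to these, the encoder and decoder may be anything the circuit model can
% implement, linear or nonlinear.
```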

  3. Why call them “fundamental”? They seem to hold only for specific implementation models!

    If you allow the channel and the implementation to change, then they're not fundamental. But in that case, neither are Shannon-theoretic bounds on transmit energy, as Landauer and Bennett insist in their papers; see, e.g., Landauer's 1996 paper “Minimal Energy Requirements in Communication”. More on this below.


Technical questions related to physics:

  1. I've heard that physicists have a way of providing fundamental limits on the energy consumed in computing (Landauer's principle). Why don't you combine that with Shannon theory and get an understanding of the total power?

    If you go to the fundamental limits of physics, the answers you get may surprise you. The fundamental limit on both communication and computing energy is zero, as long as you perform them in a “thermodynamically reversible” manner, which means that the speed of communication/computing has to be essentially zero.
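
    For reference, the number usually attached to Landauer's principle (this is standard physics, not my result):

```latex
% Landauer's principle: irreversibly erasing one bit in an environment at temperature T
% dissipates at least
\[
  E_{\mathrm{erase}} \;\ge\; k_B T \ln 2 \;\approx\; 3 \times 10^{-21}\ \mathrm{J}
  \quad \text{at room temperature}.
\]
% Logically reversible computation avoids erasure and so escapes this bound, but
% approaching zero dissipation requires operating essentially infinitely slowly.
```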

  2. So the fundamental limit on communication energy is not provided by Shannon-capacity?

    Yes and no. If you're given a communication channel and asked “what transmit energy is required to communicate reliably across this channel?”, Shannon capacity will give you the answer. On the other hand, if you're free to design the channel, the required transmit energy can be made arbitrarily small by reducing friction and noise in the system.
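
    A concrete instance of the “given a channel” case, from standard Shannon theory, is the minimum energy per bit for reliable communication over an AWGN channel:

```latex
% Minimum received energy per bit for reliable communication over an AWGN channel
% with one-sided noise spectral density N_0 (a standard Shannon-theory result):
\[
  \left.\frac{E_b}{N_0}\right|_{\min} = \ln 2 \approx -1.59\ \mathrm{dB},
\]
% approached only as the spectral efficiency (rate per unit bandwidth) goes to zero.
```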

  3. Those limits in physics sound pretty useless!

    They are! Unless what keeps you awake at night is the possibility of perpetual motion machines. Physicists were forced into a corner by Maxwell's demon, and this understanding (i.e., the physics of computing) was developed to address that. (If Maxwell's demon isn't refuted, perpetual motion machines can exist, and the sacred Second Law of Thermodynamics, which, by the way, has intellectual connections with information theory, would fail.) It was not developed to guide the design of computation/communication systems. Landauer himself noted as much in his 1996 paper.

  4. What if the channel can be designed, but the communication is at a nonzero rate?

    I don't think this is known yet. Even if it is, there does not seem to be a widely accepted answer.



Technical questions related to computer science:

  1. In your research, you're using these circuit models. Even the info-friction model counts the amount of information movement required to compute a function. Why don't you consider a simpler number-of-operations kind of model?

  2. The "number of operations" Turing Machine model is simple to state, but extremely hard to use to obtain fundamental limits! That is why we still do not know if P equals NP.

    Worse, the connection between power/energy consumption and Turing complexity is weak. Even the time requirements on an actual circuit are hard to predict using this model. It can be a useful first step when analyzing algorithms, but it is hardly the last word.
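
    Here is a toy contrast, with entirely made-up constants, of why operation counts can miss energy: two computations with the same number of additions can move data over very different distances on a chip, and in friction-like models the energy scales with bits moved times distance, not with the operation count.

```python
# Toy contrast (all constants are hypothetical, for illustration only): the same
# number of additions, but very different amounts of data movement on a chip.
FRICTION = 1.0       # hypothetical energy per bit per unit distance (arbitrary units)
BITS_PER_WORD = 32   # hypothetical word size
N = 1024             # number of operands to sum; the operation count is N - 1 either way

def movement_energy(distance_per_word):
    """Energy of hauling N words over the given distance, in a friction-like model."""
    return FRICTION * N * BITS_PER_WORD * distance_per_word

local_layout = movement_energy(1.0)        # operands sit right next to the adder
scattered_layout = movement_energy(100.0)  # operands are strewn across the chip

print(f"same operation count, energy ratio = {scattered_layout / local_layout:.0f}x")
```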




Technical questions related to circuits:

  1. Aren't your circuit models too simple, too crude, to accurately predict the energy consumed by today's circuits?

    Yes! That's the point of models: to simplify. Consider a parallel from classical information theory. The AWGN model (or the BSC, BEC, or any DMC for that matter) is only a crude approximation to what is really out there. Always. Yet it is very insightful, and coding strategies designed using it work! Think of choosing an error-correcting code without the benefit of Shannon theory: the space of codes is doubly exponentially large even for a fixed blocklength (see the back-of-the-envelope count after this answer). It is likely that we would still be stuck with Hamming/RS/BCH codes.

    Also, look at where we started from. The Turing-machine model underestimates energy consumption, and does not yield much insight into which codes to choose: all sparse-graph codes have linear-time decoding. Then there are error-exponent-style analyses that tell us how the blocklength increases as we approach capacity. But blocklength is a very vague notion of complexity, and tells us very little about energy consumption.

    So we (as a species) have been designing codes with very little understanding of how much power they'll consume once they're put in place. This needs to change, and that's one of the things my crude models show. They also call for new code designs, which, in my view, is the biggest practical benefit.
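
    The “doubly exponentially large” remark above is easy to check with a back-of-the-envelope count; the blocklength and dimension below are small, illustrative choices.

```python
# Counting binary codebooks: a code of blocklength n with 2^k codewords is a choice
# of 2^k strings out of the 2^n possible ones, so there are C(2^n, 2^k) codebooks.
from math import comb, log2

n, k = 15, 7                        # small, illustrative blocklength and dimension
num_codebooks = comb(2**n, 2**k)    # number of ways to pick the 2^k codewords
print(f"(n={n}, k={k}): about 2^{log2(num_codebooks):.0f} possible codebooks")
# Even for these tiny parameters the count is around 2^1200; brute-force search is
# hopeless, which is why models that guide the choice of code matter.
```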