### Sparse codes, fractional bases, neural representations

Recently my brothers and I were discussing this comic. Is it really possible to represent any value with a single 1 and a lot of zeros? Of course it is; this is just a variation on base 1 notation. We don't usually use base 1 for anything except counting on our fingers because the number of places required is so large.
We usually use base 2 in computers because it is the densest base you can get using only the symbols 1 and 0. But this density comes at a price: in order to use all the possible states, the mapping between each individual symbol and its meaning is lost. With base 1, each symbol stands for one specific thing. With base 2, in order to pick out one specific thing, you need to specify every single symbol.
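The size gap between the two notations is easy to see in a few lines of Python (a toy illustration, not from the original discussion):

```python
def base1(n):
    """Base 1 (tally) notation: n is represented by n ones."""
    return "1" * n

def base2(n):
    """Standard binary notation, via Python's built-in bin()."""
    return bin(n)[2:]

# Representing 100 takes 100 symbols in base 1 but only 7 in base 2.
print(len(base1(100)))  # 100
print(len(base2(100)))  # 7
```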
What would be nice is some kind of code that is somewhere in between base 1 and base 2. (Perhaps a fractional base like 1.3 or base Phi.) For example, you could have one symbol mean "red" when activated, one symbol mean "round," and one symbol mean "tasty." Then 111 would represent "apple" while 110 would represent "tomato." Of course, in a real system there would be a lot of other meanings ("smart," "fast," "salty," etc...) that wouldn't be used in this example, corresponding to extra zeros in the code. It would be a sparse code, with a lot of zeros and very few ones. In fact, if it took no energy to represent a zero, but some energy to represent a one, this would be the sort of code you would want to use.
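The feature code described above can be sketched directly; the vocabulary and the position of each feature are invented here purely for illustration:

```python
# Hypothetical feature vocabulary; the ordering is an arbitrary choice.
FEATURES = ["red", "round", "tasty", "smart", "fast", "salty"]

def encode(present):
    """Sparse binary code: a 1 in each position whose feature applies."""
    return [1 if f in present else 0 for f in FEATURES]

apple  = encode({"red", "round", "tasty"})
tomato = encode({"red", "round"})
print(apple)   # [1, 1, 1, 0, 0, 0]
print(tomato)  # [1, 1, 0, 0, 0, 0]
```

With a realistically large vocabulary, almost every position is 0 for any given object, which is exactly the sparsity the energy argument favors.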

That energy argument is one reason sparse coding appears to be the scheme the brain uses to store information, and why it has become widely used for machine learning tasks. In fact, the brain's representation may be very sparse: some neurons seem to fire only when shown a certain person's face. (And another paper on the subject.)
The image shown at the top is a sparse code representing a small patch of an image, for image compression: the patch is reconstructed as the sum of the activated elements (the dictionary atoms) selected by the code.
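That reconstruction step can be sketched as follows. The tiny 2x2 dictionary here is invented for illustration; real sparse-coding systems learn real-valued atoms and coefficients rather than a fixed binary sum:

```python
# Toy dictionary of 2x2 "patches", each flattened to a length-4 list.
DICTIONARY = [
    [1, 0, 1, 0],  # vertical stripe
    [1, 1, 0, 0],  # horizontal stripe
    [0, 0, 0, 1],  # corner dot
]

def reconstruct(code):
    """Sum the dictionary atoms flagged by the sparse binary code."""
    patch = [0] * len(DICTIONARY[0])
    for active, atom in zip(code, DICTIONARY):
        if active:
            patch = [p + a for p, a in zip(patch, atom)]
    return patch

# A sparse code with two of the three elements active.
print(reconstruct([1, 0, 1]))  # [1, 0, 1, 1]
```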