To signal the onset of salient sensory features or execute well-timed motor sequences, neuronal circuits must transform streams of incoming spike trains into precisely timed firing. To address the efficiency and fidelity with which neurons can perform such computations, we developed a theory to characterize the capacity of feedforward networks to generate desired spike sequences. We find the maximum number of desired output spikes a neuron can implement to be 0.1–0.3 per synapse. We further present a biologically plausible learning rule that allows feedforward and recurrent networks to learn multiple mappings between inputs and desired spike sequences. We apply this framework to reconstruct synaptic weights from spiking activity and study the precision with which the temporal structure of ongoing behavior can be inferred from the spiking of premotor neurons. This work provides a powerful approach for characterizing the computational and learning capacities of single neurons and neuronal circuits.
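The abstract refers to a learning rule that maps input spike trains onto desired output spike times without detailing its mechanics. The sketch below is not the paper's rule; it is a minimal, illustrative error-correction scheme for a current-based leaky integrate-and-fire neuron, in which synapses active shortly before a missed desired spike are potentiated and synapses driving spurious spikes are depressed. All names and parameter values (N, tau_m, tau_s, theta, eta, tol) are arbitrary assumptions made for the example.

```python
# Illustrative sketch only: NOT the learning rule presented in the paper.
# A perceptron-style correction that nudges a leaky integrate-and-fire neuron
# toward firing at desired times in response to fixed input spike trains.
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup (all values arbitrary)
N, T, dt = 200, 0.5, 1e-3          # synapses, trial length (s), time step (s)
tau_m, tau_s = 20e-3, 5e-3         # membrane and synaptic time constants (s)
theta = 1.0                        # firing threshold (arbitrary units)
eta = 0.01                         # learning rate
tol = 2e-3                         # timing tolerance for a "correct" spike (s)

steps = int(T / dt)
inputs = rng.random((steps, N)) < 5.0 * dt      # ~5 Hz Poisson-like inputs
desired = np.zeros(steps, dtype=bool)
desired[[120, 260, 400]] = True                 # desired output spike times
w = rng.normal(0.0, 0.1, N)                     # initial synaptic weights


def run_trial(w):
    """Simulate the neuron once; return output spikes and per-synapse traces."""
    V, I = 0.0, np.zeros(N)
    out = np.zeros(steps, dtype=bool)
    trace = np.zeros((steps, N))                # filtered input at each synapse
    for t in range(steps):
        I = I * np.exp(-dt / tau_s) + inputs[t]         # synaptic filtering
        trace[t] = I
        V = V * np.exp(-dt / tau_m) + dt / tau_m * (w @ I)
        if V >= theta:
            out[t] = True
            V = 0.0                                     # reset after a spike
    return out, trace


win = int(tol / dt)
for epoch in range(500):
    out, trace = run_trial(w)
    errors = 0
    for t in np.flatnonzero(desired):
        # Missed desired spike: potentiate synapses active just before it.
        if not out[max(0, t - win):t + win + 1].any():
            w += eta * trace[t]
            errors += 1
    for t in np.flatnonzero(out):
        # Spurious output spike: depress synapses that drove it.
        if not desired[max(0, t - win):t + win + 1].any():
            w -= eta * trace[t]
            errors += 1
    if errors == 0:
        print(f"all desired spike times realized after {epoch + 1} epochs")
        break
```

This toy example only illustrates the kind of input-to-spike-sequence mapping the abstract describes; the capacity figure of 0.1–0.3 desired spikes per synapse and the biologically plausible rule are results of the paper's own theory, not of this sketch.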