Why does the Sphere microphone have two output channels even though I just want to model a mono microphone?

Having a two-channel microphone makes it possible to capture directional and distance information from the soundfield, which allows the DSP algorithms to reconstruct the three-dimensional response of a wide range of microphones. The Sphere plug-in takes these two channels as input and outputs mono audio corresponding to the sound of the original (mono) microphone being modeled.

In contrast, some microphone modeling products apply various forms of filtering to attempt to “morph” a single-channel microphone into another. As any audio engineer can attest, no amount of EQ or filtering can make one microphone sound like another. This is largely because EQ doesn’t take into account proximity effect or the three-dimensional polar response of the microphone. The best you can do with EQ, or any other type of single-channel processing, is model the on-axis response of the microphone at a single distance from the source.

A relevant example is the iconic U47 which, although nominally cardioid, actually has a pattern somewhere between supercardioid and hypercardioid (depending on frequency and the particular microphone). Compared to other cardioid microphones, this more directional pattern helps give the U47 the more "present" and "intimate" sound it's known for.

The dual-channel Sphere microphone allows us to accurately model these effects, even when the end result is mono.
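To give a feel for why two channels are enough to reconstruct different polar patterns, here is a minimal sketch of the general dual-capsule principle: with two back-to-back cardioid capsules, any first-order pattern (omni through figure-8) is just a weighted sum of the two channels. This is an illustration of the underlying idea only, not Sphere's actual DSP, which additionally models frequency dependence, distance, and proximity effect.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 360)  # angle of incidence around the mic
front = 0.5 * (1 + np.cos(theta))       # front-facing cardioid capsule response
back = 0.5 * (1 - np.cos(theta))        # rear-facing cardioid capsule response

def synthesize(alpha):
    """Synthesize the first-order pattern alpha + (1 - alpha) * cos(theta)
    from the two capsule channels.

    alpha = 1.0  -> omnidirectional
    alpha = 0.5  -> cardioid
    alpha = 0.25 -> hypercardioid
    alpha = 0.0  -> figure-8
    """
    # Because front + back = 1 (omni) and front - back = cos(theta)
    # (figure-8), every first-order pattern is a linear mix of the two:
    return front + (2 * alpha - 1) * back

# Example: a hypercardioid response derived purely from the two channels
hypercardioid = synthesize(0.25)
```

With only a single (cardioid) channel, the `cos(theta)` component cannot be separated out, which is why single-channel EQ-based approaches cannot change a microphone's directional behavior.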

It is also possible to record in stereo using the Sphere 180 plug-in with one microphone model on the front (or left) of the mic and another on the rear (or right). In this case, the plug-in output is stereo.