Then performance drops through the floor. Note that network I/O is not the
issue -- everything here runs locally, which also matches how it's used in
production.
I know what you're thinking -- "select one_column" uses an index while
"select *" does not. Good guess, but not this time. Both queries use a
clustered index scan, since there is no index on one_column.
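
For concreteness, the two queries look like this (wide_table stands in for
the real table name):

    SELECT one_column FROM wide_table;  -- fast
    SELECT *          FROM wide_table;  -- roughly ten times slower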
Frankly, this is baffling to me. A clustered index scan is a clustered index
scan, right? Gathering query statistics shows SQL Server reporting the same
number and the same types of reads for both queries; it just seems to read a
whole lot slower when more columns are involved. In fact, performance seems
to be roughly inversely proportional to the number of columns selected, and
the position of the columns doesn't seem to be a factor.
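
For reference, this is roughly how I gathered those numbers (a sketch --
wide_table is a placeholder, and the DBCC call just rules out buffer-cache
effects between runs; it requires sysadmin rights):

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    DBCC DROPCLEANBUFFERS;               -- start from a cold buffer cache
    SELECT one_column FROM wide_table;   -- N logical reads, fast

    DBCC DROPCLEANBUFFERS;
    SELECT * FROM wide_table;            -- same N logical reads, ~10x slower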
Googling around turns up no explanation for why a bigger row size should be
such a significant factor, but it is -- highly significant, to the tune of a
tenfold slowdown. Surely constructing the result set doesn't add so much
overhead that I/O itself stalls... right? I'm open to ideas.
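
For completeness, here's how the table's average on-disk row size can be
checked (a sketch using sys.dm_db_index_physical_stats; dbo.wide_table is a
placeholder, and SAMPLED mode is enough to populate
avg_record_size_in_bytes):

    SELECT index_id, avg_record_size_in_bytes, page_count
    FROM sys.dm_db_index_physical_stats(
        DB_ID(), OBJECT_ID('dbo.wide_table'), NULL, NULL, 'SAMPLED');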