Pushback on NPR vs. Fox

I get off a connecting flight in Newark, en route to Shanghai, to see a mailbox full of notes questioning an item from last night. That item was based on a chart appearing to show that Fox News viewers overall did worse on a test of public-affairs factual knowledge than those who got their news elsewhere, or even than those who said they didn’t watch the news at all.

Here’s the most fully argued version of the comments I’ve received, from a reader in New York. All emphasis in original:

I’ve been following your “False Equivalence” series and have generally enjoyed and agreed with your insights, but I fear you may have jumped to a possibly unfounded conclusion on this one. I’m a statistician by trade; I have worked with various US government statistics departments in the past and currently work for an international organization. Though I find these results entertaining from a media-frenzy point of view, a number of alarm bells go off right away when I see this survey. In ascending order of what bothered me most (with the relevant survey disclaimer quotes in italics):

    1.    It was conducted as a telephone survey. “Survey results are also subject to non-sampling error. This kind of error, which cannot be measured, arises from a number of factors including, but not limited to, non-response (eligible individuals refusing to be interviewed)…” With caller ID these days, what are the chances that randomly chosen people would pick up for an unknown number? And of those who do pick up, how many are likely to agree to talk on the phone for 10 minutes to complete a survey such as this? I would surmise that the response rate was quite low (I didn’t see any documentation in the report). A low response rate raises the possibility of nonresponse bias: the possibility that certain demographic types would be undersampled. The report states that responses were reweighted to account for discrepancies in race, age, and gender proportions as compared to the national average, but presumably other factors also go into nonresponse bias.

    2.    Only 8 questions were asked. “Survey results are also subject to non-sampling error. This kind of error, which cannot be measured, arises from a number of factors including, but not limited to, … question wording, the order in which questions are asked, and variations among interviewers.” This is a structural-bias issue. For example, what if Fox News reported particularly poorly on one or more of the topics included in the survey, but reported much better on some other topics not included? While I don’t see any inherent bias in the questions, that doesn’t mean there isn’t any. How were the questions selected? Did liberals, conservatives, and centrists all screen them for bias? And how well does the result of 8 random news questions relate to “what you know” anyway?

    3.    The deep breakdown of the data in the survey. 1,185 people sounds like a lot, but when it is broken down to such a low level, the sample size dwindles. The graph that you use in your post shows the average number of questions answered correctly by respondents who reported getting their news from just this source in the past week. So of the 1,185, how many watched Fox News and none of the other sources listed? MSNBC? I would think that most people get their news from multiple sources (local news AND Fox News, for example). These people are apparently excluded from the analysis. Presumably, the remaining sample could be quite small. Which leads to possibly the most important issue:

    4.    Lack of standard errors on the correct-answers statistic. “The margin of error for a sample of 1185 randomly selected respondents is +/- 3 percentage points. The margin of error for subgroups is larger and varies by the size of that subgroup.” The sizes of the subgroups on which the graph is based are not mentioned. Also, +/- 3 percentage points does not apply to the number of questions answered correctly. I do not see evidence of statistical testing to show there are significant differences among respondents reporting different news sources (though I suppose there’s a chance it may just not have been mentioned in the report). [A quick numerical illustration of these last two points follows this list.]
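[To put rough numbers on points 3 and 4: the sketch below, in Python, reproduces the survey’s stated +/- 3 points from its full sample of 1,185 and shows how quickly that margin balloons as a subgroup shrinks. The subgroup sizes, and the means and standard deviations in the significance check at the end, are all hypothetical, since the report does not publish them.]

    import math

    # 95% margin of error for a proportion: z * sqrt(p(1-p)/n), using the
    # worst case p = 0.5. The survey's full n = 1,185 reproduces the
    # reported +/- 3 points; the smaller n's below are hypothetical.
    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    for n in (1185, 300, 150, 60):
        print(f"n = {n:>4}: +/- {100 * margin_of_error(n):.1f} points")
    # n = 1185: +/- 2.8 points
    # n =  300: +/- 5.7 points
    # n =  150: +/- 8.0 points
    # n =   60: +/- 12.7 points

    # The reader's point 4: a difference in mean questions-correct needs its
    # own standard error, roughly sqrt(s1^2/n1 + s2^2/n2). Invented figures,
    # purely to show the shape of the test the report omits:
    def two_sample_z(m1, s1, n1, m2, s2, n2):
        return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

    print(two_sample_z(1.5, 1.0, 120, 1.1, 1.0, 90))  # ~2.9: significant
    print(two_sample_z(1.5, 1.0, 30, 1.1, 1.0, 22))   # ~1.4: same gap, not significant

[The point is not that the chart is wrong, only that without the subgroup sizes and standard errors, a reader cannot check whether the differences it shows are bigger than its noise.]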

While I’m not sure that the team at Fairleigh Dickinson could have done a much better job than they did with their resources, I think this type of survey does not rise to the level of “news” (nor do most soft surveys like this). It is extremely easy to jump to conclusions based on a graph that agrees with one’s inklings about news sources, even when the data behind it may not lend itself to clear-cut conclusions. Another thing that should be noted is the issue of causality. You note in your post “that NPR aspires actually to be a news organization and provide ‘information’, versus fitting a stream of facts into the desired political narrative.” While this could be true, it is also possible that, even if the survey results were correct, there is a bit of self-selection in the choice of news networks. In that case, ignorance could be the viewer’s fault rather than the fault of Fox News.

These are convincing points; I am sorry if I passed this chart along too eagerly and credulously, without reading the caveats. I have been big on the theme that reporters and commentators should not rush so often to conclusions and should instead be more aware of what they/we do not know. Conveniently, and in my public-spirited way, I have now provided an illustration of this tendency myself.

And to give a sample of another recurring theme:

I take some exception to this post, about how Fox viewers answered fewer questions correctly than NPR listeners. I’ll bet that Fox viewers tend to be more conservative than NPR listeners. Conservatives tend to be less educated than liberals, and less educated people probably know less about current events.

There are any number of correlations that could be driving this result, and until those are explored, the only safe accusation you can make is that Fox attracted less-informed viewers than NPR, not that Fox provides less information. That might be true, and it might be your opinion, but this isn’t proper evidence for it.
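This self-selection point is easy to see in miniature. Here is a toy simulation, with every number invented purely for illustration: education alone drives both quiz performance and the choice between two generic outlets, “A” and “B,” and the outlet itself contributes nothing. The raw outlet averages still split apart, which is exactly the pattern a chart like the one I posted would show.

    import random

    random.seed(0)

    # Toy model of the self-selection argument: education alone determines
    # both the quiz score and the choice of outlet; the outlet adds nothing.
    # Every number here is invented for illustration.
    def simulate(n=100_000):
        rows = []
        for _ in range(n):
            educated = random.random() < 0.5
            # More-educated respondents pick outlet "A" more often (assumption).
            outlet = "A" if random.random() < (0.7 if educated else 0.3) else "B"
            # Score depends only on education, never on the outlet watched.
            score = random.gauss(2.0 if educated else 1.2, 0.8)
            rows.append((outlet, educated, score))
        return rows

    def mean(xs):
        return sum(xs) / len(xs)

    rows = simulate()
    for outlet in ("A", "B"):
        scores = [s for o, _, s in rows if o == outlet]
        print(outlet, round(mean(scores), 2))  # A ~1.76, B ~1.44: a "real"-looking gap

    for educated in (True, False):
        gap = mean([s for o, e, s in rows if e == educated and o == "A"]) - \
              mean([s for o, e, s in rows if e == educated and o == "B"])
        print(educated, round(gap, 2))  # ~0.00 within each group: pure confounding

Within each education level the gap vanishes, which is the tell that the raw comparison was confounded. None of this proves that is what happened in the Fairleigh Dickinson data; it shows only that the chart alone cannot rule it out.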

Source: The Atlantic (Politics), http://www.theatlantic.com/politics/archive/2012/05/pushback-on-npr-vs-fox/257620/