REAL WORLD EVENT DISCUSSIONS
Dems picking on Nate Silver
Tuesday, March 25, 2014 1:09 PM
NEWOLDBROWNCOAT
Tuesday, March 25, 2014 4:34 PM
REAVERFAN
Tuesday, March 25, 2014 6:55 PM
AURAPTOR
America loves a winner!
Tuesday, March 25, 2014 6:58 PM
Quote:Originally posted by reaverfan: Picking on him?
Tuesday, March 25, 2014 7:42 PM
MAL4PREZ
Quote: Originally posted by NewOldBrownCoat: So a bunch of Dems are seemingly turning against him, putting up every prediction that he ever made that was wrong. Now that he's bringing bad news, he's not such a genius any more.
Quote: Yesterday, Nate Silver of fivethirtyeight.com predicted that the Republicans would capture the Senate by one seat in the 2014 midterm elections. We are not going to compete with him (right now), but not due to cowardice or lack of interest in the Senate. There is a more fundamental reason, and it is worth explaining. It has to do with methodology. Fivethirtyeight.com is model driven. This site is data driven. Both methods have value, but they are different.

Silver is looking at a number of factors in each Senate race: the national environment, candidate quality, state partisanship, incumbency, and the head-to-head polls. For each race, numbers can be assigned to each of these variables. Then they have to be weighted, since some might be more important than others. For example, the weights 0.2, 0.3, 0.2, 0.1, and 0.2 would overweight candidate quality and underweight incumbency. But an alternative weighting would be 0.3, 0.2, 0.1, 0.2, 0.2. In fact, any set of five numbers that add to 1.0 is a potential weighting. Which weights he uses is somewhat arbitrary, and other statisticians or political analysts might differ on which factors are most important.

Now Silver is a smart guy, so undoubtedly he took the numbers for 2012 and tried a whole lot of different weights. In fact, there are thousands of combinations (millions if you want to express each weight to several decimal places). But with a bit of programming, he could have tried all the possible combinations to, say, three decimal places. For each combination, he could then see how many 2012 Senate seats it got right. The weights that made the best prediction would be declared the winner. Of course, he could also run this experiment for 2010, 2008, and as far back as he had data. In the end, he would have to pick the weights that worked best in 2012, 2010, 2008, or some (weighted) average of them. So now he would have an excellent way to predict the 2012 or a previous year's election.
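The brute-force weight search described above is easy to sketch in code. Here is a minimal Python version; the race data, factor scores, and outcomes are entirely invented for illustration and are not Silver's actual inputs or method:

```python
from itertools import product

# Hypothetical scores for a few 2012-style Senate races, in the order
# (national environment, candidate quality, state partisanship,
# incumbency, head-to-head polls). Positive favors the Democrat;
# outcome 1 means the Democrat won. All numbers are invented.
races = [
    ((0.2, 0.6, -0.1, 0.3, 0.5), 1),
    ((-0.4, -0.2, 0.3, 0.1, -0.6), 0),
    ((0.1, -0.5, 0.2, 0.4, 0.3), 1),
    ((-0.2, 0.3, -0.4, -0.1, -0.2), 0),
]

def accuracy(weights):
    """Count how many races a given weighting calls correctly."""
    hits = 0
    for features, outcome in races:
        score = sum(w * f for w, f in zip(weights, features))
        hits += ((1 if score > 0 else 0) == outcome)
    return hits

# Try every weighting in steps of 0.1 whose five weights sum to 1.0
# (1,001 combinations; finer steps give the thousands or millions of
# combinations mentioned above) and keep the most accurate one.
best = max(
    (tuple(w / 10 for w in combo)
     for combo in product(range(11), repeat=5) if sum(combo) == 10),
    key=accuracy,
)
print(best, accuracy(best))
```

With finer steps, the same loop "optimizes" the weights for a past election, which is exactly why the winning weights may not transfer to 2014.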
But this year things could be different. For example, people could now be so fed up with Congress that incumbency counts for zero or is even negative ("throw the bums out"). As you can see, everything depends on which parameters go into the model and the weights they are assigned, even if they are optimized for one or more previous elections. He could easily miss key parameters that weren't an issue before, such as how much the Koch brothers are spending on each race. That is the nature of any model-based calculation: your model could be off.

This site works differently. It looks only at the polls, in particular the average of the polls taken within a week of the most recent poll. In essence, this approach assumes that the polling data already includes all the factors. If a candidate is of high quality or the Koch brothers are pouring millions into a race, in principle that should be reflected in the polling data.

Which method is better? It is pretty hard to say in general. In the 2012 presidential election, Silver called every state correctly. We got 48 states right but thought North Carolina was too close to call and that Romney would win Florida by 2%. Ultimately, Romney won North Carolina by 3% and Obama won Florida by 1%, so Silver did a little bit better, although we were within the margin of error on North Carolina and Florida. But one election is not a statistically significant test. Nevertheless, as the year wears on, you will no doubt see predictions about control of the Senate in lots of places, and you should be aware that they are probably all using different methodologies, and that matters. Remember the unskewed polls from Dean Chambers? He used a hybrid form, taking the polling data and then adding 6 points to Romney and subtracting 6 points from Obama. This wasn't such a great model.

Silver also said (in 2010) that Rasmussen and PPP polls were way off and biased. Again, the devil is in the details. On Nov. 23, 2012 we analyzed the 2012 polls and came to a different conclusion. Again, methodology is everything. In our analysis, we only looked at polls on or after Oct. 1, 2012. Why this date and not Sept. 1 or Oct. 15? Basically it was an arbitrary choice. Also, we only considered major pollsters, defined as having run at least five presidential polls in one or more states after this cutoff date. Why five and not three or 10? Again it was arbitrary, and somebody else might take the same data, use different parameters, and get different answers.

Anyway, we concluded that Rasmussen was biased by 2.4% toward Romney and that his average error was 3.8%, which is right at the margin of error for most state polls. In contrast, PPP was biased by only 1.2% (also toward Romney) with an average error of 3.2%. In other words, while PPP was only slightly more accurate than Rasmussen, its errors were close to random, sometimes on Obama's side and sometimes on Romney's side, with only a small bias on the whole, and that toward Romney. The media virtually always include the phrase "Democratic-leaning" in front of PPP in news stories, but the reality is that last time PPP showed a Republican bias, and only two pollsters were less biased: SurveyUSA, at 1.1% toward Romney, and ORC International, at 1.0% toward Romney. Again, our point here is that Silver chose a different way to analyze pollsters and came to a different conclusion, at least with respect to PPP. So a lot comes down to exactly how you measure things. Keep that in mind when reading news reports this year.

Since we are data driven, we can't really start making predictions until we know who the candidates are and polling of the races has started. This is likely in the summer. On the other hand, in politics a week is a long time, so any predictions made now, by any method, should be taken with a pail of salt. All this should not be interpreted as our saying that Silver is wrong or that modeling is a bad idea.
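The data-driven rule described above (average every poll taken within a week of the most recent poll) is simple enough to state in code. A sketch in Python, with made-up poll dates and margins for a single hypothetical race:

```python
from datetime import date, timedelta

# Hypothetical polls for one race: (end date, margin in points,
# positive = Democrat leads). All numbers are invented.
polls = [
    (date(2014, 3, 10), 2.0),
    (date(2014, 3, 18), -1.0),
    (date(2014, 3, 21), 3.0),
    (date(2014, 3, 24), 1.0),
]

def rolling_average(polls):
    """Average the polls taken within a week of the most recent poll."""
    latest = max(d for d, _ in polls)
    window = [m for d, m in polls if latest - d <= timedelta(days=7)]
    return sum(window) / len(window)

print(rolling_average(polls))  # prints 1.0: only the last three polls qualify
```

The March 10 poll falls outside the one-week window and is ignored; everything else, candidate quality included, is assumed to already be baked into the numbers being averaged.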
Our back-of-the-envelope calculation at this point also indicates control of the new Senate will be very close. It's just that we don't have real data to back that up yet.
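The bias and average-error figures quoted in the article (e.g. Rasmussen at 2.4% bias and 3.8% average error) can be computed the same way for any pollster: bias is the mean signed miss against the actual result, and average error is the mean absolute miss. A sketch with invented margins, not the site's actual data:

```python
# Hypothetical final-poll margins vs. actual margins for one pollster
# across several states (points, positive = Obama lead). Invented numbers.
predicted = [3.0, -2.0, 1.0, 5.0]
actual    = [5.0, -1.0, 4.0, 6.0]

def bias_and_error(predicted, actual):
    """Return (bias, average error): bias is the mean signed miss
    (negative here = tilted toward Romney), average error the mean
    absolute miss."""
    misses = [p - a for p, a in zip(predicted, actual)]
    bias = sum(misses) / len(misses)
    avg_error = sum(abs(m) for m in misses) / len(misses)
    return bias, avg_error

print(bias_and_error(predicted, actual))  # prints (-1.75, 1.75)
```

Note how a pollster can have a large average error with near-zero bias (random misses in both directions) or a small average error that is all bias (consistent misses one way), which is exactly the Rasmussen-vs-PPP distinction the article draws.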