Maybe the posting by this guy will strike a familiar chord: Did you literally mean that a poll taken 2-3 weeks into the season is better than the original preseason poll, or that taking a poll for the first time 2-3 weeks into the season is more accurate than having a preseason poll that is updated after each week of football? If the former, I gave you WAY TOO much credit in making a reply. If the latter, I don't know how to reply to these contradictory responses.
All I can say is that they update the poll after games are played each week, and these polls reflect the results. So what is your rationale for declaring that a poll taken 2-3 weeks into the season would be better than a preseason poll that is updated each week?
In 2005, from preseason to week 1, the following occurred in the AP poll (if you care to investigate yourself: http://sports.yahoo.com/ncaaf/polls?poll=1&date=2005-08-21 ):

TN drops from 3 to 6 after a win, behind LSU, who DNP
Ohio State jumps LSU, who DNP
Iowa and GA jump Florida, who won
Miami, who DNP, drops from 9 to 14
GA and FSU jump Louisville, who DNP
ASU jumps CAL, who won
GA Tech jumps BC (who won), Tx Tech (who DNP), VA (who won), and Fresno (who DNP)
ND jumped a bunch

Even if it's true that some voters will not drop a team unless it loses (you know they may have lied to get the hellhounds, who will only accept a vote their way as legitimate, off their backs), does it really matter in practice? Another way of saying "I will not drop a team" is to say "I've studied the teams, and in my experience my first instincts have proven more accurate than adjusting to the weekly ups and downs of the season." As long as it is just a few voters, as the real-life evidence suggests (i.e., how could all these changes occur if "no drop" were the dominant view?), why is this an illegitimate point of view? Personally, I don't think anybody holds this view as an absolute. It is just way too convenient an excuse to cover up a whole host of agendas.

I'm still waiting for a sensible explanation of what is expected to be accomplished by delaying the poll. First, if the problem is real, and I don't think there is evidence that it is, it should be obvious that the best that could be achieved is delaying the problem. Delaying the poll will not convert all these guys into born-again droppers. You will be stuck with the poll they come up with, and I'm not convinced it will be better.

Second, the only rational explanation I can think of as to why a delayed poll would be substantially different from an updated preseason poll doesn't sit well with me. As you know, these pollsters do not start with a blank slate, and there is no way they can truly limit their thought process to just what they know this year. They will know that LSU > ULM, that the SEC > Southland, that USC has All-Americans stacked behind All-Americans, etc. They will form opinions, just like everyone else, based on who is coming back, how they performed before, and who is in the pipeline. All this information is essential to even having a chance at ranking teams with just a few weeks of data. No one will be able to watch and analyze 50+ games a week to determine a ranking from scratch. So what these guys will do is take an expectation and modify it as games are played, which is exactly what they do today. If a voter has a rule like "don't drop them unless they lose," why wouldn't he apply it while forming the delayed poll too? The only explanation I can think of is that they are all hypocrites; in other words, to get different results, the voters would have to make different private decisions than the public ones they make today. That explanation is far too cynical for me. I'm open to some other explanation if you have one.
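To make that last point concrete, here is a toy sketch (every team name and result in it is made up) of why a ballot published every week and a ballot held back until week 3 come out identical when both start from the same preseason expectation and apply the same rule:

def update_ballot(ballot, losers):
    # "Don't drop a team unless it loses": teams that lost this week fall
    # below every team that didn't; relative order is otherwise preserved.
    # Crude, but enough to make the point.
    kept = [t for t in ballot if t not in losers]
    dropped = [t for t in ballot if t in losers]
    return kept + dropped

preseason = ["USC", "Texas", "Tennessee", "Michigan", "Ohio State"]
weekly_losers = [{"Michigan"}, set(), {"Tennessee"}]  # hypothetical results

published = preseason  # this voter publishes an updated ballot every week
delayed = preseason    # this voter forms the same ballot privately, publishing in week 3
for losers in weekly_losers:
    published = update_ballot(published, losers)
    delayed = update_ballot(delayed, losers)

print(published == delayed)  # True: delaying publication changed nothing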
Let's just assume for a second that you don't know that better polls = more accurate polls. When one says it's a better poll, it can only mean one thing: a more accurate poll. It doesn't mean it's typed in a prettier font. Thus, there is no contradiction.
So in the former post, according to you, I contradicted myself by saying "better poll" in one breath and "more accurate poll" in the other? So what is your definition of "better" in this post? By "better," do you mean more accurate? If so, someone has just put his foot in his mouth. To answer your question: yes, teams jump other teams, but sometimes a team starts so far down that it still doesn't make it to where it should be ranked, even if it jumps teams along the way.
I know I'm chiming in a bit late, and I only read the first couple of pages. But Indy Tiger... I think you're just arguing for the sake of arguing now. I think I was on page 3 or 4 when I saw you say SOS hurt AU. Not true: according to the NCAA, we had the #5 toughest schedule in the nation. Then you talk OOC... well, during LSU's NC run in '03, you guys played a 2-10 Arizona team, Louisiana-Monroe, Western Illinois, and La. Tech. You guys played that OOC schedule, had one loss, and still made it to the NC game that year. AU goes undefeated, plays the #5 hardest schedule in all the land, and gets shammed. Let the bickering continue.....
The guy who brings facts, asks questions about others' positions using those facts, and tries to answer as best he can any reasonable questions about his own opinions is not arguing for the sake of arguing; he is seeking understanding. The guy who only brings unsupported opinion, answers no reasonable questions, and cannot rationally explain how his hypothesis will work better is the guy arguing for the sake of arguing. Now, you brought a couple of facts to the table, but are you seeking understanding, or are you arguing for the sake of arguing?

SOS is not some fundamental law of nature; there are a plethora of ways to define it. The SOS definition that matters (not the correct one, as there is no such thing, but the one that matters) is the one that correlates well with the computer models. The old BCS SOS, the one they eliminated after 2003, correlates very well with the results; the NCAA SOS does not.

How and why do I know this? First, if you care, you can confirm for yourself that the definitions are different. Second, these computer models sort teams into a rank order. How do they do that for undefeated teams, especially when they are not allowed to use margin of victory as an input? I don't know the details of these models, but common sense should tell you (and you could confirm this with the model owners if you cared enough) that SOS will be a very important factor, if not the only one. Each of these models has its own definition of SOS, but when you average their results, unbeaten teams pretty much line up in BCS SOS order. It's not a perfect predictor, because two teams' SOS can be a lot closer than the rank order suggests, but it's very good.

I can't show you the BCS rankings from 2004, but AU was a solid 3rd behind OK and USC. If you remember, not a single computer model in the BCS even ranked you 2nd. The only thing I can offer is Tellshow's site, which shows the results from last year, so you can get a feel for what I'm saying: http://www.tellshowbcs.com/

I think I've said in this thread that AU was extremely unlucky not to go to the championship game, as you just happened to go undefeated in the same year as two other major-conference undefeateds. You are correct: we probably had a worse OOC schedule, we lost a game, and we still went to the CG. We also got better support from the SEC, as the conference had wins over 11-2 Miami, 10-3 TX, 9-5 Fresno, and 9-4 So Miss. In 2004, nobody in the conference beat anyone OOC with a record better than 7-5. But all of that is irrelevant, except the conference record in 2004, as it was a different year.

Yes, you can claim that someone had you as the 5th-toughest SOS, but that SOS was irrelevant in practice. Argue for a different SOS in the future if you want, but that's the way it was, just like it was for OK getting to the CG in 2003 even though they didn't win their conference. For the record, I believe AU should have had a chance to prove it on the field, but since there could only be two...
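For what it's worth, here is a rough sketch of how that old BCS-style SOS was computed, as I understand it: two-thirds opponents' winning percentage plus one-third opponents' opponents' winning percentage. The three-team schedule and the records in the sketch are made up just to show the arithmetic; a real computation would run over full cumulative records for every team.

def win_pct(record):
    wins, losses = record
    return wins / (wins + losses)

def bcs_sos(team, schedule, records):
    # schedule maps a team to the list of its opponents;
    # records maps a team to its (wins, losses).
    opponents = schedule[team]
    opp_pct = sum(win_pct(records[o]) for o in opponents) / len(opponents)
    opp_opps = [oo for o in opponents for oo in schedule[o]]
    opp_opp_pct = sum(win_pct(records[oo]) for oo in opp_opps) / len(opp_opps)
    # Two-thirds opponents' record, one-third opponents' opponents' record.
    return (2.0 * opp_pct + opp_opp_pct) / 3.0

# Hypothetical round-robin, just to show the formula in action.
schedule = {"AU": ["LSU", "UGA"], "LSU": ["AU", "UGA"], "UGA": ["AU", "LSU"]}
records = {"AU": (11, 0), "LSU": (9, 2), "UGA": (9, 2)}
print(round(bcs_sos("AU", schedule, records), 3))  # ~0.848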