Part Two of my look at the June DMN/UT-Tyler poll, which has its share of interesting results.
Still, not everything is coming up roses for Abbott. His job approval rating is respectable, with 50% approving of his performance and 36% disapproving.
But that pales next to the 61%-23% split in his favor in April 2020, as Texans rallied around him in the early weeks of the coronavirus pandemic.
Also, Texans’ assessment of Abbott’s response to the devastating February winter storm has soured, at least slightly. For the first time, though it’s within the poll’s margin of error, more said Abbott responded not well or not well at all than said he performed well or very well.
And amid continued calls for conservation of electricity, Texas voters are losing confidence that the state’s electricity grid can withstand heat waves and spiking demand this summer, the poll showed.
[…]
A plurality of all voters continues to say Attorney General Ken Paxton, accused by former associates of misuse of office, has the integrity to be the state’s top lawyer: 33% say he does and 25% say he doesn’t. “These numbers are likely to soften,” pollster Owens said, as Paxton’s two opponents in next year’s GOP primary for attorney general, Land Commissioner George P. Bush and former Texas Supreme Court Justice Eva Guzman, begin pounding on him. Among likely primary voters, Paxton has support from 42%; Bush, 34%; and Guzman, 4%. A Trump endorsement could shake up the race, though not push any of the three clear of a probable runoff, Owens said.
See here for part one, and here for the poll data. To cut to the chase, here are the approval numbers given, including the same numbers from the March and April polls:
Name March April June
======================================
Biden 47 - 41 48 - 41 47 - 42
Abbott 52 - 31 50 - 36 50 - 36
Patrick 38 - 27 37 - 26 37 - 24
Paxton 36 - 29 37 - 26 37 - 24
Cornyn 40 - 26 42 - 24 37 - 21
Cruz 42 - 45 44 - 42 45 - 38
Beto 37 - 42 35 - 37 31 - 40
Harris 42 - 43 43 - 40 39 - 42
Note that the question for the first four is “approve/disapprove”, and for the second four is “favorable/unfavorable”. There are usually some small differences in numbers when both questions are asked about a particular person, but not enough to worry about for these purposes. The numbers are weirdly positive overall, especially when compared to the recent UT/Trib and Quinnipiac numbers. For UT/Trib, which only asks “approve/disapprove”, we got these totals for June:
Biden 43 - 47
Abbott 44 - 44
Patrick 36 - 37
Paxton 33 - 36
Cornyn 34 - 41
Cruz 43 - 46
And for Quinnipiac, which asked both – the first five are approvals, the Beto one is favorables:
Biden 45 - 50
Abbott 48 - 46
Paxton 41 - 39
Cornyn 41 - 42
Cruz 46 - 49
Beto 34 - 42
They didn’t ask about Dan Patrick. For whatever reason, the “Don’t know/no opinion” responses are higher in the DMN/UT-Tyler polls, which seems to translate to lower disapproval numbers, at least for the Republicans. The partisan splits are wild, too. These are the Democratic numbers only (June results):
Name DMN/UTT UT-Trib Quinn
======================================
Abbott 29 - 60 8 - 82 10 - 85
Patrick 25 - 42 6 - 71 N/A
Paxton 27 - 50 7 - 66 27 - 56
Cornyn 26 - 35 6 - 74 20 - 69
Cruz 26 - 58 5 - 86 12 - 84
LOL at the difference between the UT-Trib and DMN/UT-Tyler numbers. It’s like these are two completely different samples. With the exception of their weirdly pro-Paxton result, Quinnipiac is closer to UT-Trib, and I think it’s reasonably accurate in its expression of Democratic loathing for these particular people. I don’t have a good explanation for the unfathomable DMN/UT-Tyler numbers, but because I find them so mind-boggling, I refuse to engage with any of their issues polling. You can’t make sense of samples that don’t make sense.
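Going back to that “Don’t know/no opinion” point: one quick way to see its effect is to renormalize each poll’s numbers over decided respondents only. Here’s a minimal sketch in Python, using the Cruz toplines from the tables above; this is illustrative arithmetic only, not anything the pollsters themselves do.

```python
# Renormalize an approve/disapprove split over decided respondents only,
# dropping the "Don't know / no opinion" share. Inputs are percentage points.
def among_decided(approve, disapprove):
    decided = approve + disapprove
    return (round(100 * approve / decided, 1),
            round(100 * disapprove / decided, 1))

# Cruz's June toplines from the tables above:
print(among_decided(45, 38))  # DMN/UT-Tyler -> (54.2, 45.8)
print(among_decided(43, 46))  # UT-Trib      -> (48.3, 51.7)
```

Even among decided respondents, the DMN/UT-Tyler sample is notably friendlier to Cruz, so the higher “Don’t know” share explains only part of the gap.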
The last thing to note is the Republican primary result for Attorney General, in which Paxton has a modest lead over George P. Bush and Eva Guzman barely registers. I think this is basically a measure of name recognition, and thus should serve as a reminder that most normal people have no idea who many of the folks who hold statewide office are. I expect she will improve, and it may be that she will start out better in a less goofy poll. But again, she’s not that well known, and she’s running against two guys who are. That’s a handicap, and it’s going to take a lot of effort and resources to overcome it.
All polls should be viewed with skepticism because none is perfect, so it is good to have a comparison of three different ones as offered by Kuff here. Kudos! That said, let me offer a few observations about sampling.
Re: “It’s like these are two completely different samples.”
The issue really isn’t whether there are different samples – you would expect that from different polling outfits – but whether they are representative samples drawn from the same population whose views are of interest: here, registered voters.
In principle, random sampling from the target population assures representativeness, and different polls should then produce similar findings within the standard margins of error, which are driven by sample size. But truly random sampling isn’t feasible here.
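For a sense of scale, here is the standard back-of-the-envelope calculation relating sample size to margin of error; the sample size used is an assumed round number for illustration, not one reported by any of these polls.

```python
import math

# Margin of error for a simple random sample proportion:
# MOE = z * sqrt(p * (1 - p) / n), widest at p = 0.5.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# An assumed sample of 1,100 registered voters:
print(round(100 * margin_of_error(1100), 1))  # ~3.0 points
```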
So then it becomes a question of statistical corrections, such as stratifying and weighting, to match the nonrandom sample to known population parameters (derived from census data and/or actual voting records). And only certain demographic variables can be used for such statistical corrections. Which ones? And how do you do the weighting?
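To make the weighting step concrete, here is a minimal sketch of cell weighting on a single variable; the age cells and shares below are invented for illustration, not taken from any of these polls.

```python
# Cell weighting: each respondent in a demographic cell gets the ratio of
# that cell's population share to its sample share.
sample_share     = {"18-44": 0.30, "45-64": 0.45, "65+": 0.25}
population_share = {"18-44": 0.45, "45-64": 0.35, "65+": 0.20}

weights = {cell: round(population_share[cell] / sample_share[cell], 2)
           for cell in sample_share}
print(weights)  # {'18-44': 1.5, '45-64': 0.78, '65+': 0.8}
```

Real polls typically rake across several variables at once (age, sex, race, education, region, and so on), but the principle is the same: up-weight under-sampled cells, down-weight over-sampled ones.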
Arguably, the Quinnipiac sample inspires greater confidence than the mixed-mode one assembled in the DMN/UT-Tyler poll because it used random-digit dialing to select respondents, rather than online volunteers. But each method of recruiting respondents has its own problems. Some folks have multiple phones, for example, while others have none. So even if you select phone numbers randomly, a person with two cell phones has twice the chance of being called, while a person with no phone can’t be reached at all. And then there is the matter of refusals to participate, which can also bias the sample, because the reasons for refusal may be correlated with political attitudes, including partisan leanings, and with propensity to vote.
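The multiple-phones problem has a textbook correction, at least in principle: weight each respondent by the inverse of their selection probability. A toy sketch with purely illustrative numbers:

```python
# Design weight for unequal phone-selection odds: a respondent reachable on
# k numbers is k times as likely to be dialed, so they get weight 1/k.
def design_weight(num_phones):
    if num_phones == 0:
        # No weight can fix this: zero selection probability is coverage error.
        raise ValueError("unreachable by phone; cannot enter the sample")
    return 1.0 / num_phones

print(design_weight(2))  # 0.5 for the two-cell-phone respondent
print(design_weight(1))  # 1.0
```

Note the zero-phone case: no weight can repair it, because those people have no chance of entering the sample in the first place.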
And if you recruit survey respondents on the internet (instead of calling them), you can ask them about their use of social media and online news sources, but your sample won’t include those who get all their news from TV and/or the local newspaper and don’t use social media or other interactive parts of the internet. The latter subgroup won’t be represented, while internet and social media users will be over-represented.
Even if the samples fall short of mirroring the reference population (here, registered Texas voters), however, huge partisan disparities in responses are still meaningful, as are comparative tallies for politicians, as long as the question about each is phrased identically (assuming different politicians are not combined into a single questionnaire item for comparative evaluation or sentiment expression).
But job performance can only be asked about incumbents. Any comparison with challengers would therefore have to rest on sentiment (like/dislike or favorability) or voting intentions. If different polling organizations probe respondents’ views on candidates – or feelings about them – with questions that are worded differently, or structure the response options differently, comparability is impaired, if not nixed altogether. And that creates a problem independent of any discrepancies resulting from different sampling methods and from the statistical adjustments made to correct for selection bias in how respondents are recruited.