Generation Z voters could make waves in 2018 midterm elections

By Kei Kawashima-Ginsberg, Tufts University

We know a great deal about the much-studied millennials, but not much about Generation Z, who now make up most of the 18- to 24-year-old voting bloc.

These young people started first grade after 9/11, were born with the internet, grew up with smartphones and social media and practiced active-shooter drills in their classrooms.

In 2018, they have taken an active role in political activism on issues like gun control, Black Lives Matter and #MeToo. For example, Parkland high school students started the movement against gun violence and named voting as a way to support the movement.

Yet, many people are skeptical about Generation Z’s commitment to voting. For instance, The Economist explained, in a piece titled “Why Young People Don’t Vote,” that “young people today do not feel they have much of a stake in society.”

Will Generation Z affect the midterm elections?


The Center for Information and Research on Civic Learning and Engagement at Tufts University, where we do research, has been watching young people’s civic and political behaviors for nearly 20 years. This fall, my colleagues and I are conducting two large-scale national surveys of 2,087 Americans ages 18 to 24 to document and understand what Gen Zers are thinking, feeling and doing when it comes to politics.

So far, the data point to a surge in political engagement, intention to vote and outreach between friends to encourage voting. Gen Zers may be voting for the first time, but they are certainly not new to politics.

All signs point to youth wave

Young voters have a reputation of not showing up to the polls, especially in midterm elections. This trend goes back 40 years.

There are a few ways we can find out how likely it is that people in Generation Z will turn out to vote.

First, we can just ask. In our survey, 34 percent of youth said they are “extremely likely” to vote in November. While a survey can’t predict exact turnout numbers, data from previous surveys we’ve done using this approach have been close to actual turnout numbers. Other evidence supports this measure of intent to vote: Voter registration among young people is up in key battleground states and overall.

Research also shows that activism and intent to vote are strongly correlated. So, in our survey we also asked young people about activism, such as participating in protests, union strikes, sit-ins and walk-outs.

The proportion of young people who join protests and marches has tripled since the fall of 2016, from 5 percent to 15 percent. Participation is especially high among young people who are registered as Democrats.

Finally, we found that young people are paying attention to politics more than they were in 2016. In 2016, about 26 percent of young people said they were paying at least some attention to the November elections. This fall, the proportion of youth who report that they are paying attention to the midterm races rose to 46 percent.

It’s clear that more young people are actively engaged in politics this year than in 2016.

Why?

Cynicism and worry aren’t obstacles

To learn more about what might be motivating Generation Z to vote, we asked our survey participants to rate their level of agreement with three statements.

“I worry that older generations haven’t thought about young people’s future.”

“I’m more cynical about politics than I was 2 years ago.”

“The outcomes of the 2018 elections will make a significant impact to everyday issues involving the government in my community, such as schools and police.”

In this year’s survey, we found that young people who feel cynical are far more likely to say they will vote. Other research has found that cynicism about politics can suppress or drive electoral engagement depending on the context.

Among young people who agreed with all three of those statements, more than half – 52 percent – said they are extremely likely to vote. Among those who disagreed with all three, only 22 percent were extremely likely to vote.

Our poll results suggest political involvement in this generation is far above the levels we usually see among youth, especially in midterm election cycles.

In fact, almost 3 out of 4 youth – 72 percent – said they believe that dramatic change could occur in this country if people banded together. Gen Z is certainly aware of the challenges ahead, but its members are hopeful and are actively involving themselves and their friends in politics. Beyond almost any doubt, youth are involved and feel ready to make a dramatic change in the American political landscape.


Kei Kawashima-Ginsberg, Director, Center for Information and Research on Civic Learning and Engagement in the Jonathan M. Tisch College of Civic Life, Tufts University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

You can trust the polls in 2018, if you read them carefully

By Josh Pasek, University of Michigan and Michael Traugott, University of Michigan

A Michigan township collects votes in 2016.
Barbara Kalbfleisch/shutterstock

On the night of Nov. 8, 2016, many Americans went to bed confident that Hillary Clinton would be elected the nation’s first female president.

Their confidence was driven, in no small part, by a pervasive message that Clinton was ahead in the polls and forecasts leading up to the election. Polling aggregation sites, such as Huffington Post’s Pollster and The New York Times Upshot blog, reported that Clinton was virtually certain to win. It soon became clear that these models were off the mark.

Since then, forecasters and media prognosticators have dissected what went wrong. The finger-pointing almost inevitably landed on public opinion polling, especially at the state level. The polls, critics argued, led modelers and the public to vastly overestimate the likelihood of a Clinton win.

With the 2018 elections coming up, many in the public have expressed their skepticism that public opinion polls can be trusted this time around. Indeed, in an era where a majority of American adults no longer even have landline telephones, where many people answer only when calls originate from a known number, and where pollsters’ calls are sometimes flagged as likely spam, there are lots of reasons to worry.

But polling firms seem to be going about their business as usual, and those of us who do research on the quality of public opinion research are not particularly alarmed about what’s going on.

Looking back

One might be tempted to think that those of us in the polling community are simply out to lunch. But the data from 2016 tell a distinctly different story.

The national polls were fairly accurate in 2016, both in their estimate of the popular vote and in historical perspective. In the average preelection national poll, Clinton was ahead of Donald Trump by 3.3 percentage points. She proceeded to win the popular vote by 2.1 percentage points. Pollsters missed the mark by a mere 1.2 percentage points on average.

The polls in the Upper Midwest states missed by larger margins. These polls were conducted in ways that pollsters widely know to be suboptimal. They relied heavily on robocalls; on surveys of people who volunteer to take surveys on the internet; and on samples of respondents from voter files with incomplete information.

What went wrong

So why was the 2016 election so shocking? The big reason wasn’t the polls; it was our expectations.

In the last few years, members of the public have come to expect that a series of highly confident models can tell us exactly what is going to happen in the future. But in the runup to the 2016 election, these models made a few big, problematic assumptions.

For one, they largely assumed that the different errors that different polls had were independent of one another. But the challenges that face contemporary polling, such as the difficulty of reaching potential respondents, can induce small but consistent errors across almost all polls.

When modelers treat errors as independent of one another, they make conclusions that are far more precise than they should be. The average poll is indeed the best guess at the outcome of an election, but national polling averages are often off by around 2 percentage points. State polls can be off by even more at times.
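To see why this matters, here is a minimal simulation sketch. The numbers (a 2-point true lead, 3 points of per-poll noise, 2 points of shared error) are made up for illustration and are not any forecaster’s actual model; the point is only that averaging many polls shrinks their independent sampling noise, while a shared, correlated error does not average away.

```python
# Illustrative sketch (hypothetical numbers, not a real forecasting model):
# compare the spread of a 10-poll average when poll errors are independent
# versus when every poll also shares a common, correlated error.
import random

random.seed(0)

TRUE_MARGIN = 2.0   # hypothetical "true" lead, in percentage points
POLL_NOISE = 3.0    # per-poll sampling error (std. dev., points)
SHARED_NOISE = 2.0  # industry-wide error shared by every poll (points)
N_POLLS = 10
N_SIMS = 10_000

def poll_average(shared_sd):
    """Average of N_POLLS simulated polls, with an optional shared error."""
    shared = random.gauss(0, shared_sd)
    polls = [TRUE_MARGIN + shared + random.gauss(0, POLL_NOISE)
             for _ in range(N_POLLS)]
    return sum(polls) / N_POLLS

def spread(samples):
    """Standard deviation of a list of simulated polling averages."""
    mean = sum(samples) / len(samples)
    return (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5

independent = [poll_average(0.0) for _ in range(N_SIMS)]
correlated = [poll_average(SHARED_NOISE) for _ in range(N_SIMS)]

print(f"Spread of the average, independent errors only: {spread(independent):.2f} pts")
print(f"Spread of the average, shared error added:      {spread(correlated):.2f} pts")
# Averaging shrinks the independent noise (about 3 / sqrt(10) ≈ 0.95 points),
# but the shared 2-point error remains in full in every simulated average.
```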

In addition, polling aggregators and public polling information have been flooded with lower-quality surveys based on suboptimal methods. These methods can sometimes produce accurate estimates, but the processes by which they do so are not well understood on theoretical grounds. There are lots of reasons to think that these methods may not produce consistently accurate results in the future. Unfortunately, there will likely continue to be lots of low-quality polls, because they are so much less expensive to conduct.

Research out of our lab suggests yet another reason that the polls were shocking to so many: When ordinary people look at the evidence from polling, just as with other sources of information, they tend to see the results they desire.

During the 2016 election campaign, we asked Americans to compare two preelection polls – one where Clinton was leading and one where Trump was ahead. Across the board, Clinton supporters told us that the Clinton-leading poll was more accurate than the Trump-leading poll. Trump supporters reported exactly the opposite perceptions. In other studies, we saw the same phenomenon when people were exposed to poll results showing majorities in favor of or opposed to their own views on policy issues such as gun control or abortion.

What polls really say

So, what does this all mean for someone reading the polls in 2018?

You don’t have to ignore the results – just recognize that all polling has some error. While even the experts may not know quite which way that error is going to point, we do have a sense of the size of that error. Error is likely to be smaller when considering a polling average instead of an individual poll.

It’s also a good bet that the actual result will be within 3 percentage points of an average of high-quality national polls. For a similar average of high-quality state polls, the error is more likely to be within about 5 percentage points, because these polls usually have smaller sample sizes.
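As a rough worked example of that reading rule, here is a short sketch. The poll margins are hypothetical, and the 3- and 5-point bands simply restate the rules of thumb above rather than a formal margin-of-error calculation.

```python
# A minimal sketch of the reading rule above: average several high-quality
# polls and report a rough error band. The poll margins here are invented
# for illustration.
NATIONAL_BAND = 3.0  # rough band for an average of national polls, in points
STATE_BAND = 5.0     # rough band for an average of state polls, in points

def summarize(margins, band):
    """Return the polling average and a rough plausible range around it."""
    avg = sum(margins) / len(margins)
    return avg, avg - band, avg + band

hypothetical_national_polls = [4.0, 2.0, 5.0, 3.0]  # candidate's lead in each poll
avg, low, high = summarize(hypothetical_national_polls, NATIONAL_BAND)
print(f"Polling average: {avg:+.1f} pts "
      f"(plausible outcome roughly {low:+.1f} to {high:+.1f})")
```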

What makes a high-quality poll? It will either use live interviewers with both landlines and cellphones or recruit respondents using offline methods to take surveys online. Look for polls conducted around the same time to see whether they got the same result. If not, see whether they sampled the same kind of people, used the same interviewing technique or used a similar question wording. This is often the explanation for reported differences.

The good news is that news consumers can easily find out about a poll’s quality. This information is regularly included in news stories and is shown by many poll aggregators. What’s more, pollsters are increasingly transparent about the methods they use.

Polls that don’t use these methods should be taken with a big grain of salt. We simply don’t know enough about when they will succeed and when they will fail.

 

About the Authors:  Josh Pasek, Assistant Professor of Communication Studies, University of Michigan and Michael Traugott, Research Professor at the Center for Political Studies, University of Michigan.


This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Blue Wave, Red Wave; What Wave? No Wave


By Chapman Rackaway of the University of West Georgia

Political scientists and pundits alike face a contradictory challenge in the concept of the “wave” election. Journalists use the term commonly, and 2018 is no exception. The hashtag #bluewave is a constant presence on political Twitter feeds, and a search reveals hundreds of news articles discussing the likelihood of Democrats benefitting from just such a wave. What constitutes a wave election is a complicated matter, however, and needs some definition. Is a wave election simply when one party does appreciably better than the other? Is a specific seat gain enough to call an election a wave? Insinuated in tweets and stories about a 2018 blue wave is a sense that voters nationwide have gravitated intentionally towards Democrats with the specific goal of resisting the presidency of Donald Trump. Political science can help us bust that myth and see that national intent during midterm elections does not exist. Democrats may do very well nationwide in 2018, but that does not mean that a national wave of support is why Democrats look to succeed.

We know some basic evidence from the discipline that puts the wave talk in perspective. First of all, incumbents are rarely vulnerable, though Jacobson (2015) shows that the incumbency advantage has been eroding recently. Still, the best opportunity for Democrats to make gains in Congress or in state legislative seats is for a large number of Republicans to retire. In Congress, at least, Democrats can rely on a higher level of Republican exposure than in the last six election cycles. A total of 44 GOP incumbents are retiring from Congress, the most since the 2012 elections, before which 31 left. And Democrats have a much lower exposure rate in the House, with only 20 departures as of early August.

The Senate also looks good for Democrats, who need to take just two Republican seats away to wrest majority control. Of the seven races listed as “toss-up” by RealClearPolitics, four are held by Democrats while three are held by Republicans. A tied chamber is certainly possible, but the Senate seems “wave-proof” in 2018.

State legislative races also can factor into a “wave” election, and again Republicans have a high level of exposure. Republicans hold majorities in 31 state legislatures, compared with 14 Democratic-controlled assemblies and four that are split between the two parties (Nebraska’s nonpartisan legislature is not included here). Governing magazine rates 10 GOP-held chambers as leaning or toss-ups, while seven Democratic-held chambers are rated as leaning and none as toss-ups.

Democrats have a target-rich environment, and they have performed very well in special elections, which Smith and Brunell (2010) show have some predictive power. Turnout among adherents of the in-power party tends to drop off in midterm elections, too, which should hurt Republicans. Races that were once decided on local concerns have also been pulled into a nationalized environment (see Abramowitz and Webster 2016). Add a divisive president, with a lower approval rating at his midterm than other recent presidents, to mobilize Democrats, and together the anecdotes suggest the components of a wave.

But one important factor suggests that the Democratic wave will not happen: negative partisanship. We know that the nature of partisan identification has been changing for some time, and while support for one’s own party has remained stable, the disapproval voters feel towards the opposing party has increased. Independents, long considered the swinging gate between the parties upon which elections have hinged, are the key to Democratic success and the least likely group to vote. Calling an election a wave must mean that there is a surge of support behind it, and those support surges do not happen in the current partisan environment. Instead of getting a boost from independent and leaning-partisan swing voters who cast ballots for the opposing party in the previous election, partisans today must do a better job of mobilizing their base.

For Democrats to approximate a wave in 2018, they need to register more voters, and across the country most state-level registration of new voters has been flat. A wave of anti-Trump resistance will not lead formerly Republican-leaning voters to embrace local-level Democratic candidates. Republicans do not trust Democrats (and vice versa), while independents are unreliable saviors. If Democrats do have great success in the 2018 elections, it will come instead from a methodical, state-by-state process of registering and mobilizing base Democratic voters.

Calling successful elections “waves” does a disservice to the voting public, advancing a narrative of the electorate’s motivations that does not sync with their real preferences and behaviors. Wins happen in politics; waves stay on the water.

About the author: Chapman Rackaway serves the University of West Georgia as Professor and Chair of the Department of Political Science where he teaches classes in Political Parties, Political Campaign Management, Interest Groups and Lobbying, and Campaign Finance. You can also find Rackaway on Twitter and his website.