MPSA in 2017 – Accomplishments Worth Celebrating (video)

 

This year was confusing at times and exhausting at others, but it also had its high points. As we say goodbye to 2017, we welcome you to join us for the MPSA highlight reel. Our thanks to everyone who played a part in making these projects a reality, including our program chairs, council members, committee chairs, program partners, donors, volunteers, and members. May the new year welcome only the best to you both personally and professionally! – MPSA Staff

The Top 5 MPSA Roundtable Audio Recordings (podcasts) from 2017


Each year at its annual conference, MPSA records dozens of professional development panels focusing on topics most relevant to researchers and to those who teach. Audio from the roundtable discussions is available to MPSA Members online in the Highlighted Presentations section of the website, and selections are also available to the public as part of MPSA’s outreach to the discipline. As 2017 comes to a close, it’s time to take a look back at the five most popular of these audio recordings.

  • MPSA Roundtable on Career: What to Do/Not Do at a Job Talk
    (Read the Recap) – Listen in as Elizabeth A. Bennion of Indiana University-South Bend chairs the MPSA Career Roundtable on “What to Do and What Not to Do at a Job Talk” with Mary Hallock Morris of University of Southern Indiana and David C. Wilson of University of Delaware. During the discussion, the members of the panel share their observations on how to know if the university is a good fit for you (personally and professionally) and what can make you stand out as a successful candidate.  
  • MPSA Roundtable: Applying to Graduate School
    (Read the Recap) – Mackenzie H. Eason of the University of California – Los Angeles chairs this MPSA roundtable session on “Applying to Graduate School” with Coty J. Martin, West Virginia University, Joan Ricart-Huguet, Princeton University, and Jovan Milojevich, University of California-Irvine. Members of the panel discuss questions and issues related to applying to graduate programs, such as when and where to apply, and how to make yourself a more appealing and ultimately successful candidate for admission.
  • MPSA Roundtable: Teaching LGBTQ Politics
    (Read the Recap) – Susan Burgess, Ohio University-Main Campus, chairs this discussion among panelists and participants in the audience on Teaching LGBTQ Politics. Panelists include Christine Keating of Ohio State University-Main Campus, Megan Elizabeth Osterbur of Xavier University of Louisiana, Marla Brettschneider of University of New Hampshire-Main Campus, and Courtenay Daum of Colorado State University-Fort Collins. The session covered selecting topics, readings, and pedagogical strategies for teaching LGBTQ politics classes.
  • MPSA Roundtable on Congressional Leadership through the Eyes of Randy Strahan and Barbara Sinclair
    (Read the Recap) – Sean M. Theriault of the University of Texas at Austin, chairs this MPSA roundtable session on “Congressional Leadership through the Eyes of Randy Strahan and Barbara Sinclair” with Gregory Koger, University of Miami, Daniel John Palazzolo, University of Richmond, Kathryn Pearson, University of Minnesota-Twin Cities, David W. Rohde, Duke University and Matthew N. Green, Catholic University of America. Members of the panel remember the contributions of Randy Strahan and Barbara Sinclair to the field of political science through the sharing of memories and personal reflections and take an early look at congressional leadership in the 115th Congress.
  • MPSA Roundtable: Teaching Research Methods to Undergraduates
    (Recap Not Available) – Nathan D. Griffith of Belmont University chairs the MPSA roundtable session on “Teaching Research Methods to Undergraduates” with Binneh S. Minteh of Rutgers University-Newark, and Emily Clough of Newcastle University.

Many thanks to our panelists at the 2017 conference, and congratulations to those whose topics proved most popular with listeners after the conference. You may share your expertise by participating as a panelist in one of MPSA’s Professional Development Roundtables at the 2018 conference in Chicago. MPSA seeks to organize a series of roundtable sessions on topics including public engagement, career development, publishing, teaching, and research methods. Learn more about the opportunity and volunteer your expertise as a panelist.

Recap of Tuesday’s #PSBeWell End-of-Semester/Holiday Edition


This month’s MPSA Twitter Chat featured a conversation about creating a less stressful end-of-semester experience for those on both sides of the syllabus, ways to balance work and personal time during the busy holiday season, and a few resolutions for the upcoming semester. Many thanks to our co-hosts for the discussion: Todd Curry, Assistant Professor of Political Science at The University of Texas at El Paso, Jacqueline Sievert, Research Fellow with YWCA Niagara, and Adnan Rasool, Doctoral Candidate at Georgia State University.

Read the recap below or look for the extended conversation on Twitter using #PSBeWell. 

Please share your ideas for upcoming #MPSAchat sessions at https://mpsa.typeform.com/to/tuWRlM.

Save the Date for the Next #MPSAchat: January 28, 2018 (2pm Eastern)

More Guns, Less Replication: The Case for Robust Research Findings


The meaning of the word “replication” hardly seems like the sort of thing that would land a person in court. Yet, it did. In Lott v. Levitt (2009), the U.S. District Court of Northern Illinois ruled on that very question, in a dispute between two academic authors of bestselling books.

In his book Freakonomics, Steven Levitt argued that “other researchers have failed to replicate” John Lott’s work, published in the latter’s book More Guns, Less Crime. In fact, as Lott later pointed out, he is willing to share his data, and when other researchers run the same statistical models using the same data, they successfully replicate the results. Lott did not falsify his data, fabricate results, or make errors in his reporting. This is what replication is meant to check, and Lott’s research passed the test.

Levitt countered by arguing that the word “replicate” can have a broader meaning. The courts agreed and also noted that, generally, the judicial system should stay out of such disputes unless the use of the word is particularly egregious and serves to defame the target, which was not the case here. (A second part of the suit was settled in Lott’s favor, but this was unrelated to the argument over the exact meaning of replication.)

I recently blundered into this whole controversy. I published a newspaper column referencing Lott’s research, and I, too, suggested that others could not replicate his research. Lott spotted the column and wrote me a friendly note offering to share some of his more recent research on Australia’s gun laws with me. I read it and found it fascinating. However, he also asked for a retraction of the replication charge. I demurred, because while I would not use the word “replication” again in this particular context, I think the way I used it is defensible, as per the Lott v. Levitt ruling and the interpretation offered in the Scientific American article hyperlinked above. What I had in mind were other studies, using different data, methods, and time periods, that reach different conclusions than does Lott’s book—replication in a broader sense.

Lott and Levitt are both economists, but political scientists have had our own struggles with replication. In the 1990s, Harvard Professor Gary King started a movement to push for replication in quantitative political science, but the standards he advocated have never become universal. King did have some success in getting participating political scientists to make their data more accessible, but not everyone is game. Enormous amounts of time and, at times, money go into data collection, and many researchers consider their datasets proprietary. Furthermore, academic journals rely on unpaid volunteer reviewers, who take on these responsibilities on top of their other duties as professors and researchers. With few exceptions (including the AJPS, which has a third-party replication process), journals must rely on reviewers to download massive datasets into statistical programs like SPSS and R, then replicate exactly what other researchers have done. This approach is probably not realistic; editors have a hard time just getting reviewers to complete their reviews, which are sometimes months late and only a few sentences long.

By contrast, book reviewers are often paid, but book publishers increasingly look not for the kind of detailed, technical work involved in replication, but for books that will reach a broader audience. Selling books only to other political science professors is not a very lucrative market. While Lott and Levitt both have their critics (including one another), both wrote bestselling books, and that is what publishers want. Publishers are not likely to get involved in a lengthy replication project when what they seek is readability and a larger audience: the next More Guns, Less Crime or Freakonomics—and they do not much care which, as long as it sells. In short, publishers cannot be relied upon to enforce standards of replication, nor can editors, and the courts would prefer to stay out of it.

One way out of this mess is to invoke another statistical concept: robustness. Just as replication can have a narrow or broad meaning, robustness can as well. While the word always makes me think of my morning coffee, or perhaps a good merlot, robustness in the statistical sense refers to a relationship between two variables that is not driven by just a few cases or assumptions. At the risk of oversimplifying: if a few seemingly minor alterations to a data analysis change the results, then those results were not robust in the first place. For a better, more technical explanation, visit http://www.rci.rutgers.edu/~dtyler/ShortCourse.pdf.
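To make the narrow sense of robustness concrete, here is a minimal sketch of one common sensitivity check: re-estimate a simple bivariate regression slope with each observation dropped in turn, and see how far the estimate swings. The data and the "one extreme case" are invented for illustration, not drawn from any of the studies discussed above.

```python
def ols_slope(xs, ys):
    """Ordinary least squares slope for a bivariate regression."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def leave_one_out_slopes(xs, ys):
    """Slope re-estimated with each single observation left out."""
    return [
        ols_slope(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        for i in range(len(xs))
    ]

# Hypothetical data: a steady positive trend plus one extreme case.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 2.9, 4.2, 5.0, 6.1, 6.8, 8.2, 30.0]  # last case is an outlier

full = ols_slope(x, y)
loo = leave_one_out_slopes(x, y)
spread = max(loo) - min(loo)

# If dropping a single case moves the slope substantially relative to
# the full-sample estimate, the finding is not robust in the narrow sense.
print(f"full-sample slope: {full:.2f}")
print(f"leave-one-out slopes range from {min(loo):.2f} to {max(loo):.2f}")
```

Here the full-sample slope looks impressively steep, but dropping a single case cuts it by well over half; that is exactly the kind of fragility a robustness check is meant to expose.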

Like replication, robustness can also be defined more broadly. Much as the results within a single dataset are robust if they hold across all (or at least many) of the cases and not just a few, so research results can be said to be robust if the same finding keeps popping up in multiple studies, using different data and different ways of modeling it. Findings such as the relationship between education and political participation (those with more education are more likely to vote and to participate in other ways) hold up no matter how you slice the data. Old data, new data, crude models, highly sophisticated analyses—again and again, the relationship appears. There is just no way around the fact that more education often pairs with more political involvement. Of course, a few individuals exist who do not fit this pattern, but these exceptions do not debunk the claim. It is solid. Within a single dataset, robustness refers to the relationship holding across a broad swath of cases. Considered more broadly, a finding can be said to be robust if it holds up across a broad swath of studies.
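As a toy illustration of robustness in this broader sense, one crude check is whether the sign of an estimated effect agrees across independent studies. The study labels and effect sizes below are invented for illustration; they are not real estimates from the education-and-participation literature.

```python
# Hypothetical effect estimates of education on political participation,
# as if drawn from several studies using different data and models.
# All names and numbers are invented for illustration only.
estimates = {
    "1970s national survey, bivariate": 0.21,
    "1990s panel study, with controls": 0.15,
    "2010s web sample, logistic model": 0.18,
    "cross-national data, fixed effects": 0.09,
}

# A finding is robust in the broad sense if it keeps popping up:
# here, every study estimates a positive effect.
all_positive = all(effect > 0 for effect in estimates.values())
print("sign agrees across all studies:", all_positive)
```

By this crude standard, the education finding passes (every hypothetical estimate is positive), whereas the guns-and-crime literature discussed next would fail, since published estimates point in different directions.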

Conversely, the research on concealed-carry, gun ownership, and crime is not robust, in the broad sense that I am using it. Lott’s research finds that concealed-carry laws deter crime. The research from Aneja, Donohue, and Zhang, referenced in the piece from The Washington Post above, shows that such laws increase crime—specifically aggravated assault—while having no impact on other crime rates. For his part, Levitt believes that there is little relationship at all between gun ownership and crime rates. In short, the research on this topic is highly sensitive to model specification, time periods covered, and data used. No clear, robust relationship has emerged across different research by different researchers using different data and different modeling techniques. The most likely explanation for this is that the relationship between gun ownership and crime is ambiguous. Whether positive or negative, the effects are small compared to the big drivers such as the percentage of poor, unemployed, young males—who commit the vast majority of street crimes, regardless of race—that exist in the population at any given time…

…which is exactly what I wrote in that newspaper column. But, by using the word “replication” loosely, I wandered into a whole new set of questions—ones which are not easily resolved, and ultimately go to the soul of social science itself.

About the author: Michael A. Smith is a Professor of Political Science at Emporia State University where he teaches classes on state and local politics, campaigns and elections, political philosophy, legislative politics, and nonprofit management. Read more on the MPSA blog from Smith and follow him on Twitter.

#MPSAchat with AJPS Editor William G. Jacoby (10/24)

On Tuesday, October 24 (2pm Eastern), please join us for a Twitter chat with American Journal of Political Science editor William G. Jacoby. We’ll chat with Jacoby about trends he has identified during his time as editor, peer review, and tips for avoiding a “technical reject”, among other topics.

If you haven’t participated in a live Twitter chat before, here are a few tips:

  • A moderator from MPSA will post a series of numbered questions over the course of the hour to help prompt response from Jacoby and participants.
  • To share your comments on a specific question, just begin your response with the matching answer number (“A1” for Question 1, and so on) and include the hashtag(s) designated for the chat. In this case, that’s #MPSAchat.
  • The live chat will last approximately an hour, and you are welcome to participate in some or all of it. We hope that the conversation continues using the hashtag so others can catch up on it later.
  • You may choose to use your regular Twitter account to follow along or you may opt to use online tools created specifically for Twitter chats. Here are three examples and instructions for each.

Future MPSA Twitter chats will be on the fourth Tuesday of each month with a focus on topics including professional development, public engagement, advocacy, research, publishing, teaching/learning, and work-life balance. Our next #MPSAchat will be November 28, 2017, when we discuss Work-Life Balance #PSBeWell. 

Missed the Twitter chat? Read the recap here.