Spread the Word: Enhancing Replicability of Speech Research Through Stimulus Sharing

Purpose: The ongoing replication crisis within and beyond psychology has revealed the numerous ways in which flexibility in the research process can affect study outcomes. In speech research, examples of these “researcher degrees of freedom” include the particular syllables, words, or sentences presented; the talkers who produce the stimuli and the instructions given to them; the population tested; whether and how stimuli are matched on amplitude; the type of masking noise used and its presentation level; and many others. In this research note, we argue that even seemingly minor methodological choices have the potential to affect study outcomes. To that end, we present a reanalysis of six existing data sets on spoken word identification in noise to assess how differences in talkers, stimulus processing, masking type, and listeners affect identification accuracy.

Conclusions: Our reanalysis revealed relatively low correlations among word identification rates across studies. The data suggest that some of the seemingly innocuous methodological details that differ across studies—details that cannot possibly be reported in text given the idiosyncrasies inherent to speech—introduce unknown variability that may affect replicability of our findings. We therefore argue that publicly sharing stimuli is a crucial step toward improved replicability in speech research.

By Julia F. Strand & Violet A. Brown

Preregistration: Practical Considerations for Speech, Language, and Hearing Research

In the last decade, psychology and other sciences have implemented numerous reforms to improve the robustness of our research, many of which are based on increasing transparency throughout the research process. Among these reforms is the practice of preregistration, in which researchers create a time-stamped and uneditable document before data collection that describes the methods of the study, how the data will be analyzed, the sample size, and many other decisions. The current article highlights the benefits of preregistration with a focus on the specific issues that speech, language, and hearing researchers are likely to encounter, and additionally provides a tutorial for writing preregistrations.

Conclusions: Although rates of preregistration have increased dramatically in recent years, the practice is still relatively uncommon in research on speech, language, and hearing. Low rates of adoption may be driven by a lack of understanding of the benefits of preregistration (either generally or for our discipline in particular) or uncertainty about how to proceed if it becomes necessary to deviate from the preregistered plan. Alternatively, researchers may see the benefits of preregistration but not know where to start, and gathering this information from a wide variety of sources is arduous and time-consuming. This tutorial addresses each of these potential roadblocks to preregistration and equips readers with tools to facilitate writing preregistrations for research on speech, language, and hearing.

By Violet A. Brown & Julia F. Strand

Introduction to Mixed Modeling Video: Part 2

This is part 2 of a two-part series of videos on mixed-effects modeling. The intended audience is graduate students or anyone with some regression background but no background in mixed-effects modeling.

By Violet A. Brown

Introduction to Mixed Modeling Video: Part 1

This is part 1 of a two-part series of videos on mixed-effects modeling. The intended audience is graduate students or anyone with some regression background but no background in mixed-effects modeling.

By Violet A. Brown

An Introduction to Linear Mixed-Effects Modeling in R

This Tutorial serves as both an approachable theoretical introduction to mixed-effects modeling and a practical introduction to how to implement mixed-effects models in R. The intended audience is researchers who have some basic statistical knowledge, but little or no experience implementing mixed-effects models in R using their own data. In an attempt to increase the accessibility of this Tutorial, I deliberately avoid using mathematical terminology beyond what a student would learn in a standard graduate-level statistics course, but I reference articles and textbooks that provide more detail for interested readers. This Tutorial includes snippets of R code throughout; the data and R script used to build the models described in the text are available via OSF at https://osf.io/v6qag/, so readers can follow along if they wish. The goal of this practical introduction is to provide researchers with the tools they need to begin implementing mixed-effects models in their own research.
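As a taste of what the Tutorial covers, here is a minimal sketch of fitting a linear mixed-effects model in R with the lme4 package. The data set and variable names below are hypothetical placeholders rather than the Tutorial's own (its actual data and script are available at https://osf.io/v6qag/):

# Minimal sketch of a linear mixed-effects model in R using lme4.
# All object and variable names here are hypothetical placeholders.
library(lme4)
library(lmerTest)  # optional: adds p values to lmer() summaries

# Hypothetical data frame rt_data with one row per trial: a response
# time (rt), a fixed effect of condition (modality), and crossed
# random effects for participants (PID) and items (stim).
model <- lmer(rt ~ modality + (1 + modality | PID) + (1 + modality | stim),
              data = rt_data)
summary(model)

Here the random-effects terms allow both the intercept and the effect of modality to vary across participants and across items, the kind of decision about random-effects structure that the Tutorial walks readers through.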

By Violet A. Brown

Understanding Speech Amid the Jingle and Jangle: Recommendations for Improving Measurement Practices in Listening Effort Research

The latent constructs psychologists study are typically not directly accessible, so researchers must design measurement instruments that are intended to provide insights about those constructs. Construct validation—assessing whether instruments measure what they are intended to measure—is therefore critical for ensuring that the conclusions we draw actually reflect the intended phenomena. Insufficient construct validation can lead to the jingle fallacy—falsely assuming two instruments measure the same construct because the instruments share a name—and the jangle fallacy—falsely assuming two instruments measure different constructs because the instruments have different names. In this paper, we examine construct validation practices in research on listening effort and identify patterns that strongly suggest the presence of jingle and jangle in the literature. We argue that the lack of construct validation for listening effort measures has led to inconsistent findings and hindered our understanding of the construct. We also provide specific recommendations for improving construct validation of listening effort instruments, drawing on the framework laid out in a recent paper on improving measurement practices. Although this paper addresses listening effort, the issues raised and recommendations presented are widely applicable to tasks used in research on auditory perception and cognitive psychology.

By Julia F. Strand, Lucia Raya, Naseem H. Dillman-Hasso, Jed Villanueva, and Violet A. Brown

Publishing Open, Reproducible Research With Undergraduates

In this paper, we highlight why adopting open science practices may be particularly beneficial to conducting and publishing research with undergraduates. The first author is a faculty member at Carleton College (a small, undergraduate-only liberal arts college) and the second is a former undergraduate research assistant (URA) and lab manager in Dr. Strand’s lab, now pursuing a PhD at Washington University in St. Louis. We argue that open science practices have tremendous benefits for undergraduate students, both in creating publishable results and in preparing students to be critical consumers of science.

By Julia F. Strand & Violet A. Brown