MY RECENT PROJECTS

Spoken word recognition and connected speech processes

The process of spoken word recognition involves the incremental mapping of speech sounds onto lexical candidates. However, variability in the production of speech sounds can create ambiguity at the level of lexical selection. One common source of such variability is phonological processes such as place assimilation. For example, in English the place of articulation (PoA) of a coronal nasal or stop consonant can assimilate to a following labial sound such as [b], so that ‘phone box’ may be perceptually similar to ‘foam box’ (/n/ → [m]). Previous studies suggest that listeners can compensate for the effects of assimilation when a triggering context is present (e.g., when the assimilated sound precedes a labial or velar consonant).
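As a rough illustration, the assimilation pattern described above can be sketched as a toy rewrite rule. This is a deliberate simplification for exposition only; the symbol inventory and category sets below are illustrative assumptions, not materials from the studies discussed:

```python
# Toy sketch of English coronal place assimilation (illustrative only):
# a word-final coronal /n, t, d/ may surface with the place of a following
# labial or velar consonant, e.g. /n/ -> [m] before [b] ("phone box" ~ "foam box").

CORONALS = {"n": {"labial": "m", "velar": "ŋ"},
            "t": {"labial": "p", "velar": "k"},
            "d": {"labial": "b", "velar": "g"}}
LABIALS = set("pbm")
VELARS = set("kg")

def assimilate(final_consonant: str, next_consonant: str) -> str:
    """Return the (optionally) assimilated surface form of a word-final coronal."""
    if final_consonant in CORONALS:
        if next_consonant in LABIALS:
            return CORONALS[final_consonant]["labial"]
        if next_consonant in VELARS:
            return CORONALS[final_consonant]["velar"]
    return final_consonant  # no triggering context: no change

print(assimilate("n", "b"))  # -> "m": 'phone box' sounds close to 'foam box'
print(assimilate("n", "s"))  # -> "n": no labial/velar trigger, no change
```

Note that in the real process the assimilated token often retains subtle acoustic traces of the underlying coronal, which is one reason listeners may be able to undo the change; the toy rule above abstracts away from that gradience.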

My research focuses on the unresolved issue of whether compensation for assimilation relies solely on general auditory mechanisms or whether higher-level phonological knowledge is also involved in the recognition process. To address this question, I am studying the role of phonological context in listeners’ compensation for place assimilation during real-time language comprehension, comparing findings within and across the two groups of sounds affected by this process, namely nasal and stop consonants. The experiments combine eye tracking with a process-priming paradigm.

The effect of acoustic features and background noise on the processing of reduced speech

Recent research has begun to consider more carefully how the speech input is mapped onto competing lexical candidates under some of the challenges of real-world listening (e.g., background noise and inherent variability in the form of speech sounds). In a recently completed project, I examined the case of English word-final voiceless stops /p, t, k/, which are commonly produced with or without a release burst. The release burst provides one of the primary acoustic cues to stop place of articulation (PoA), along with transition cues from the preceding vowel.

My research examined the extent to which vowel-transition information can compensate for the lack of a release burst when the stop is unreleased, and how the uptake of acoustic cues during lexical processing is influenced by background noise, the particular PoA of the word-final consonant, and the type of preceding vowel (monophthong vs. diphthong).

Acoustic characteristics of word-final consonant clusters in Persian

Persian phonotactics allows clusters of two (or occasionally three) consonants in word-final position (CVCC(C)). Some of these final clusters are of particular interest. For example, in a word such as /bæbr/ ‘tiger’, the stop consonant /b/ (C1) is lower in sonority than the liquid /r/ (C2), which violates the well-known Sonority Sequencing Principle (SSP). In an earlier acoustic experiment, I found evidence of partial devoicing of word-final obstruents within clusters, which may help explain these Persian phonotactic patterns. My research then turned to a closer investigation of the acoustic characteristics of consonants (e.g., /r, l, s, z, p, b/) within word-final clusters, specifically those that violate the SSP. For this purpose, I recorded several native Persian speakers and performed various acoustic analyses on the production data using the Praat software.
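The SSP violation in /bæbr/ can be made concrete with a small sketch. The numeric sonority scale below is a common textbook-style simplification; the exact values and the segment inventory are illustrative assumptions, not a claim about the analysis used in the study:

```python
# A rough sonority scale (higher = more sonorous); values are illustrative.
SONORITY = {
    "p": 1, "t": 1, "k": 1,  # voiceless stops
    "b": 2, "d": 2, "g": 2,  # voiced stops
    "s": 3, "z": 4,          # fricatives
    "m": 5, "n": 5,          # nasals
    "l": 6, "r": 7,          # liquids
}

def violates_ssp_final(cluster: str) -> bool:
    """In a word-final cluster, the SSP expects sonority to FALL toward the
    word edge; any rise from one consonant to the next violates it."""
    sonorities = [SONORITY[c] for c in cluster]
    return any(b > a for a, b in zip(sonorities, sonorities[1:]))

print(violates_ssp_final("br"))  # True: final /-br/ as in /bæbr/, stop < liquid
print(violates_ssp_final("rb"))  # False: liquid > stop, sonority falls as expected
```

Under this scale, the final /-br/ of /bæbr/ shows a sonority rise toward the word edge and is flagged as an SSP violation, while the reversed /-rb/ is not.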
