
Akustik-Blog

Colloquium

Xinshuo Gu: Internship with Head Acoustics
Fri, Apr 5 at 11:00 – 11:30

Because I am interested in acoustic and telecommunication devices, I chose HEAD acoustics GmbH for my 18-week mandatory internship in order to gain insight into practical engineering applications in the fields of acoustics and signal processing. HEAD acoustics is one of the world's leading companies for integrated acoustic solutions as well as communication technology. During this time I worked in the Telecom Division, and my main task was to measure the Mean Opinion Score (MOS) of two VoIP devices with several of our Measurement Front-ends (MFE), including preparing and designing measurements under different network conditions using the measurement system ACQUA. In addition, I assisted with the processing and visualization of measurement data using the Python libraries pandas and Matplotlib.
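
As a purely illustrative sketch of the kind of post-processing mentioned above, the snippet below loads MOS results with pandas and plots them with Matplotlib. The file name, column names, and network-condition labels are assumptions for the example, not the actual measurement data.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical results file: one row per measurement with the network
# condition, the device under test, and the resulting MOS value.
df = pd.read_csv("mos_results.csv")  # assumed columns: condition, device, mos

# Average MOS per network condition and device.
summary = df.groupby(["condition", "device"])["mos"].mean().unstack()

# Grouped bar chart comparing the two VoIP devices across conditions.
ax = summary.plot(kind="bar", rot=0, ylim=(1, 5))
ax.set_xlabel("Network condition")
ax.set_ylabel("Mean Opinion Score (MOS)")
plt.tight_layout()
plt.show()
```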

Pengcheng Zhao: Internship with XCMG Europe GmbH
Fri, Apr 5 at 11:30 – 12:00

To be announced.

Julian Burger: Determining listening effort in primary-school children in different noise situations using a dual-task paradigm
Fri, May 3 at 11:30 – 12:00

Listening effort, especially in primary-school children (6–10 years), has so far received little research attention. Listening effort refers to the attention and cognitive resources that must be expended in order to understand speech. To determine it for children both in quiet and under the influence of background noise, a child-appropriate dual-task paradigm is developed. In an experiment, 24 children are tested with this paradigm in a mobile listening booth.

For this purpose, a word is presented in a noise condition, and the primary task is to select the correct picture out of a total of four pictures by pressing a button. In the secondary task, the reaction time to a lamp lighting up on the screen is measured.

The hypothesis is that, in a noise condition, primary-school children require more cognitive resources for speech comprehension than in quiet.
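
Purely to illustrate the logic of such a dual-task measurement, here is a minimal, simulated sketch of a single trial; the timing values and the simulated responses are arbitrary assumptions and do not reflect the actual experimental software.

```python
import random
import time

def run_trial():
    """One simulated dual-task trial (stimulus and response handling are placeholders)."""
    # Primary task: a word is presented in noise and the child selects one of
    # four pictures by button press; here the outcome is simply simulated.
    primary_correct = random.random() < 0.8

    # Secondary task: after a random delay a lamp lights up on the screen and
    # the reaction time until the button press is measured (simulated here).
    time.sleep(random.uniform(0.5, 1.5))      # lamp onset delay
    t0 = time.perf_counter()
    time.sleep(random.uniform(0.3, 0.8))      # simulated button press latency
    reaction_time = time.perf_counter() - t0

    return primary_correct, reaction_time

if __name__ == "__main__":
    results = [run_trial() for _ in range(5)]
    mean_rt = sum(rt for _, rt in results) / len(results)
    print(f"mean secondary-task reaction time: {mean_rt:.3f} s")
```

In a paradigm of this kind, longer secondary-task reaction times under noise are interpreted as higher listening effort.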

Xinya Xu: Objective and subjective analysis of room acoustics in open-plan offices
Fri, May 10 at 11:00 – 12:00

In everyday work, people are surrounded by various sounds and noise that can significantly affect cognitive performance and acoustic satisfaction. This acoustic perception and its effects are caused and influenced by various factors. A comprehensive acoustic room analysis, covering both objective and subjective aspects, is therefore required. In this master's thesis, open-plan offices are measured with aurally accurate methods and their acoustic properties are analyzed. On the one hand, the room acoustics of the rooms are examined; on the other hand, the background noise in the presence of the occupants is measured for the psychoacoustic analysis (in-situ measurements). The ITA artificial head is used to obtain an aurally accurate evaluation. In addition, the subjective perception is investigated by distributing questionnaires to the people present in the rooms, to be filled in during the in-situ measurements. Finally, the relations between room acoustic and psychoacoustic parameters, as well as between objective and subjective results, are evaluated.
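
As one concrete example of the objective side of such an analysis, the sketch below estimates a reverberation time from a room impulse response using Schroeder backward integration. The impulse response here is synthetic, and the evaluation range and parameters are assumptions; the thesis itself may use different tools and settings.

```python
import numpy as np

def reverberation_time_t20(rir, fs):
    """Estimate reverberation time from a room impulse response via Schroeder
    backward integration, extrapolating the -5 dB to -25 dB slope (T20)."""
    energy = rir.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]          # Schroeder energy decay curve
    edc_db = 10 * np.log10(edc / edc[0])

    t = np.arange(len(rir)) / fs
    mask = (edc_db <= -5) & (edc_db >= -25)      # evaluation range for T20
    slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope                         # time to decay by 60 dB

# Hypothetical usage with a synthetic exponentially decaying impulse response.
fs = 44100
t = np.arange(0, 1.5, 1 / fs)
rir = np.random.randn(len(t)) * np.exp(-t / 0.12)   # roughly 0.8 s decay time
print(f"T20 estimate: {reverberation_time_t20(rir, fs):.2f} s")
```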

Rhoddy Angel Viveros Munoz: Speech Perception in Complex Acoustic Environments: Evaluating Moving Maskers Using Virtual Acoustics
Fri, May 24 at 11:00 – 12:00

Listeners with hearing impairments have difficulties understanding speech in the presence of background noise. Although prosthetic devices such as hearing aids may improve hearing ability, listeners with hearing impairments still complain about their speech perception in noise. Pure-tone audiometry gives reliable and stable results, but it cannot determine the degree of difficulty in spoken communication. Therefore, speech-in-noise tests, which measure hearing impairment in complex scenes, are an integral part of the audiological assessment.

In everyday acoustic environments, listeners often need to pick out a speech target from mixed streams of distracting noise sources. This specific acoustic environment was first described as the “cocktail party” effect, and most research has concentrated on the listener’s ability to understand speech in the presence of another voice or noise acting as a masker. The speech reception threshold (SRT) has been measured for different spatial positions of the masker(s) as a measure of speech intelligibility. At the same time, the benefit of spatial separation between the speech target and the masker(s), known as spatial release from masking (SRM), has been investigated extensively. Nevertheless, previous research has mainly focused on stationary sound sources. In real-life listening situations, however, we are confronted with multiple moving sound sources, such as a moving talker or a passing vehicle. In addition, head movements can also lead to moving sources. Thus, the present thesis deals with quantifying speech perception in noise with moving maskers under different complex acoustic scenarios using virtual acoustics.
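
For reference, spatial release from masking is commonly quantified as the difference between the speech reception thresholds measured with co-located and with spatially separated target and masker; the notation below is illustrative and not taken from the thesis.

```latex
\mathrm{SRM} = \mathrm{SRT}_{\text{co-located}} - \mathrm{SRT}_{\text{separated}}
```

A positive SRM thus means that spatial separation lowers the threshold, i.e., improves intelligibility.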

In the first part of the thesis, speech perception with a masker moving both away from and toward the target position was analyzed. From these measurements, it was possible to assess the spatial separation benefit of a moving masker. Because of the relevance of spatial separation for intelligibility, several models have been created to predict SRM for stationary maskers. This thesis therefore presents a comparative analysis between moving maskers and previous models for stationary maskers, to investigate whether these models are able to predict the SRM of maskers in movement. Based on the results found in this thesis, a new mathematical model to predict SRM for moving maskers is presented.

In real-world scenarios, listeners often move their heads to identify the sound source of interest. Thus, this thesis also investigates whether listeners use head movements to maximize intelligibility in an acoustic scene with a moving masker. A higher SRT (worse intelligibility) was found in the condition with head movement than in the condition without head movement. In addition, the use of individual head-related transfer functions (HRTFs) was evaluated in comparison to an artificial-head HRTF. Results showed significant differences between individual and artificial HRTFs, with higher SRTs (worse intelligibility) for the artificial HRTF than for the individual HRTFs.
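
As background on how such scenes are typically rendered with virtual acoustics, the sketch below spatializes a mono masker signal by convolving it with a pair of head-related impulse responses (HRIRs). The placeholder HRIRs and the noise signal are assumptions; the actual rendering chain of the thesis is not described here.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal binaurally by convolution with an HRIR pair."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)    # shape: (samples, 2)

# Hypothetical usage with white noise as the masker and dummy 256-tap HRIRs.
fs = 44100
masker = np.random.randn(fs)                   # 1 s of noise
hrir_l = np.zeros(256)
hrir_l[0] = 1.0                                # placeholder impulse response
hrir_r = np.zeros(256)
hrir_r[20] = 0.8                               # crude interaural delay/level cue
binaural = spatialize(masker, hrir_l, hrir_r)
print(binaural.shape)
```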

Room acoustics is another relevant factor that affects speech perception in noise. For moving maskers, an analysis comparing different masker trajectories (circular and radial movements) under different reverberant conditions (anechoic, treated and untreated room) is presented. This analysis was carried out with two groups of subjects: young and elderly normal-hearing listeners. For both circular and radial movements, the elderly group showed greater difficulties in understanding speech with a moving masker than with a stationary masker.
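
To make the two trajectory types concrete, here is a small, purely illustrative parameterization of circular and radial masker paths; the update rate, radius, angles and distances are arbitrary assumptions.

```python
import numpy as np

def circular_trajectory(start_deg, end_deg, duration, rate=50):
    """Azimuth sweep at constant distance (masker circling the listener)."""
    t = np.linspace(0, duration, int(duration * rate))
    azimuth = np.linspace(start_deg, end_deg, t.size)
    distance = np.full(t.size, 1.5)            # constant radius in metres
    return t, azimuth, distance

def radial_trajectory(azimuth_deg, start_m, end_m, duration, rate=50):
    """Distance sweep at constant azimuth (masker approaching or receding)."""
    t = np.linspace(0, duration, int(duration * rate))
    azimuth = np.full(t.size, azimuth_deg)
    distance = np.linspace(start_m, end_m, t.size)
    return t, azimuth, distance

# Example: a masker circling from 0° to 90°, and a masker approaching at 60°.
t1, az1, d1 = circular_trajectory(0, 90, duration=5.0)
t2, az2, d2 = radial_trajectory(60, start_m=3.0, end_m=1.0, duration=5.0)
```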

To summarize, several cases show differences between speech perception in noise with moving maskers and with stationary maskers. Therefore, a listening test that presents moving maskers could be relevant for a clinical assessment of speech perception in noise that is closer to real-life situations.