BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Akustik-Blog - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Akustik-Blog
X-ORIGINAL-URL:https://blog.rwth-aachen.de/akustik
X-WR-CALDESC:Events for Akustik-Blog
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20200329T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20201025T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20210328T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20211031T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20211105T110000
DTEND;TZID=Europe/Berlin:20211105T120000
DTSTAMP:20260504T162004Z
CREATED:20211022T142103Z
LAST-MODIFIED:20211105T100556Z
UID:1746-1636110000-1636113600@blog.rwth-aachen.de
SUMMARY:Campos Ruiz: Pass-by vehicle sound synthesis using machine learning techniques.
DESCRIPTION:In recent years\, the use of techniques based on machine learning has increased and expanded to different areas. Increasing computing capacity and the availability of public datasets have allowed the development of these techniques\, particularly deep learning\, which today is considered the state of the art in topics such as pattern recognition\, natural language processing and gaming\, among others. In this work\, we propose the use of machine learning techniques for the synthesis of vehicle noise. Audio and video data of passing-by vehicles in non-controlled environments are collected in order to build a vehicle pass-by dataset. Automatic dataset annotation and synchronization are performed using a deep neural network for vehicle identification\, together with tracking algorithms. In addition\, geometry projection algorithms are used to determine the vehicle position and average speed. Vehicle noise emission directivity is extracted from microphone array recordings using beamforming techniques and will be used to train a neural network for audio synthesis. Different neural network architectures\, such as CNNs\, RNNs\, autoencoders and GANs\, will be considered when designing the synthesis network. Video recording examples and preliminary results on the extraction of vehicle characteristics such as category\, position and speed will be presented. \nRegister here to receive the invitation to this talk via e-mail. \nThe lectures are currently offered as an in-person event and as an online stream via the Zoom platform. There are 25 seats available in the lecture hall. To attend in person\, you must identify yourself as vaccinated\, recovered or tested against SARS-CoV-2. \nZoom meeting: 910 0030 1035\,\nPassword: 980455
URL:https://blog.rwth-aachen.de/akustik/event/campos-ruiz-vehicle-pass-by-sound/
LOCATION:IHTA Seminarraum (60 persons) and Zoom meeting (hybrid)\, Kopernikusstr. 5\, Aachen\, 52074\, Germany
CATEGORIES:Lecture
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Berlin:20211126T110000
DTEND;TZID=Europe/Berlin:20211126T120000
DTSTAMP:20260504T162004Z
CREATED:20210916T100701Z
LAST-MODIFIED:20211118T115603Z
UID:1686-1637924400-1637928000@blog.rwth-aachen.de
SUMMARY:Pausch: Spatial audio reproduction for hearing aid research: System design\, evaluation and application
DESCRIPTION:Hearing loss (HL) has multifaceted negative consequences for individuals of all age groups. Despite individual fitting based on clinical assessment\, consistent use of hearing aids (HAs) as a remedy is often discouraged by unsatisfactory HA performance. Consequently\, the methodological complexity in the development of HA algorithms has been increased by employing virtual acoustic environments\, allowing simulation of indoor scenarios with plausible room acoustics. Inspired by the research question of how to make such environments accessible for HA users while maintaining complete signal control\, a novel concept addressing combined perception via HAs and residual hearing is proposed. The specific system implementations employ a master HA and research HAs for aided signal provision\, and loudspeaker-based spatial audio methods for external sound field reproduction. Selected aspects of the system and its performance were analysed in various objective evaluations. Perceptual investigations involving adults with normal hearing revealed that the characteristics of the research HAs used primarily affect sound localisation performance\, while still allowing egocentric auditory distance estimates comparable to those observed in loudspeaker-based reproduction. To demonstrate the applicability of the system\, school-age children with HL fitted with research HAs were tested for speech-in-noise perception in a virtual classroom and showed speech reception thresholds similar to those of a comparison group using commercial HAs\, supporting the validity of the HA simulation. Their inability to perform spatial unmasking of speech\, in contrast to their peers with normal hearing\, implies that reverberation times above 0.4 s have extensive disruptive effects on spatial processing in children with HL. Collectively\, the results from evaluation and application indicate that the proposed systems satisfy core criteria for their use in HA research.
 \nRegister here to receive the invitation to this talk via e-mail. \nZoom meeting ID: 962 0081 6605\nPassword: 631489
URL:https://blog.rwth-aachen.de/akustik/event/pausch-spatial-audio-reproduction-for-hearing-aid-research-system-design-evaluation-and-application/
LOCATION:Zoom-Meeting
CATEGORIES:Dissertation Defense
END:VEVENT
END:VCALENDAR