BEGIN:VCALENDAR
VERSION:2.0
PRODID:www.dresden-science-calendar.de
METHOD:PUBLISH
CALSCALE:GREGORIAN
X-MICROSOFT-CALSCALE:GREGORIAN
X-WR-TIMEZONE:Europe/Berlin
BEGIN:VTIMEZONE
TZID:Europe/Berlin
X-LIC-LOCATION:Europe/Berlin
BEGIN:DAYLIGHT
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
DTSTART:19810329T030000
RRULE:FREQ=YEARLY;INTERVAL=1;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
DTSTART:19961027T030000
RRULE:FREQ=YEARLY;INTERVAL=1;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:DSC-21446
DTSTART;TZID=Europe/Berlin:20241121T130000
SEQUENCE:1732171228
TRANSP:OPAQUE
DTEND;TZID=Europe/Berlin:20241121T150000
URL:https://www.dresden-science-calendar.de/calendar/de/detail/21446
LOCATION:TUD Materials Science - HAL\, Hallwachsstraße 3\, 01069 Dresden
SUMMARY:Komendantskaya: Ensuring Neural Networks Robustness: Problems and O
 pportunities
CLASS:PUBLIC
DESCRIPTION:Speaker: Ekaterina Komendantskaya\nInstitute: University of
 Southampton\, United Kingdom\nTopics:\nPhysics\nLocation:\n  Name: TUD
  Materials Science - HAL\n  Street: Hallwachsstraße 3\n  City: 01069
  Dresden\nDescription: Machine learning methods have recently seen rapid
  development\, both in terms of the variety of model architectures (fee
 dforward\, recurrent\, and convolutional neural networks\, transformers
 )\, training methods (gradient descent\, adversarial and property-based
  training)\, and the sheer size of models. Thanks to these developments
 \, machine learning is being incorporated into an ever-growing number o
 f applications\, ranging from traditional computer vision to more recen
 t domains such as conversational agents and scientific computing. Howev
 er\, neural networks\, new and old alike\, suffer from a range of safet
 y and security problems\, such as vulnerability to adversarial attacks\,
  data poisoning\, and catastrophic forgetting. Blindly adapting neural
  networks to safety-critical domains may lead to a whole range of issue
 s that machine-learning-free applications were not prone to. This probl
 em led to the development of neural network verification\, a hybrid fie
 ld that merges formal methods and security with machine learning\, with
  the purpose of developing robust tools and methods to guarantee safe n
 eural network operation. In this talk\, I will give an overview of some
  of the pitfalls and challenges in adapting neural networks to differen
 t domains and discuss their common symptoms and underlying technical re
 asons. I will survey the existing methods to safeguard neural networks
  or applications incorporating neural networks\, focusing in particular
  on the available methods and tools of neural network verification.
DTSTAMP:20260507T091302Z
CREATED:20241031T063710Z
LAST-MODIFIED:20241121T064028Z
END:VEVENT
END:VCALENDAR