Deep Learning and Security
Presented by: sanjay

Are the so-called smart, Deep Learning based products actually safe and secure? What new kinds of loopholes do they create?

Adversarial examples are a known issue: images can be perturbed so that deep learning systems misclassify them as something else entirely. For example, an image of a temple can be recognized as an ostrich by some deep learning networks (credit: Christian Szegedy et al.).
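For a concrete picture of how such perturbations can be produced, here is a minimal sketch using the Fast Gradient Sign Method (FGSM, Goodfellow et al.), a simpler successor to the L-BFGS attack of Szegedy et al. The pretrained model, the random stand-in image, and the epsilon value are illustrative assumptions, not details from the talk.

```python
# Minimal FGSM sketch: nudge each pixel in the direction that increases the
# model's loss, producing an image that looks unchanged but may be misclassified.
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative choice of model; any differentiable image classifier works.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (shape [1, 3, H, W])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step along the sign of the gradient, then clamp to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage: a random tensor stands in for a real, preprocessed input image.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_perturb(x, y)
print(model(x_adv).argmax(dim=1))  # may differ from the prediction on x
```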

On the other hand, Deep Learning is also used to detect fraud and to improve the security of other systems.

This talk/discussion hopes to throw some light on these aspects.
