Deep Learning and Security
Are so-called smart, deep learning-based products safe and secure? What kinds of new vulnerabilities do they introduce?
Adversarial examples are a known issue: images can be subtly perturbed to fool deep learning systems into believing they are something else. For example, an image of a temple is recognized as an ostrich by some deep learning networks (credit: Christian Szegedy et al.).
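To make the attack concrete, here is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al.), a simpler attack than the L-BFGS approach used by Szegedy et al. The choice of PyTorch, the pretrained model, the fgsm_attack function name, and the epsilon value are illustrative assumptions, not material from the talk:

# Minimal FGSM sketch: nudge an image in the direction that increases the
# classifier's loss, so a tiny, near-invisible change flips the prediction.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Perturb `image` (1xCxHxW) so the classifier is more likely to misclassify it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss with respect to the true label
    loss.backward()
    # Take a small step along the sign of the gradient of the loss.
    return (image + epsilon * image.grad.sign()).detach()

# Illustrative usage: `x` is a preprocessed 1x3x224x224 tensor, `y = torch.tensor([class_idx])`.
# x_adv = fgsm_attack(x, y)
# print(model(x_adv).argmax(dim=1))  # frequently differs from the original prediction

Even with a very small epsilon, the perturbed image usually looks unchanged to a human while the network's prediction changes, which is exactly the loophole the session title refers to.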
On the other hand, deep learning is also used to detect fraud and to improve the security of other systems.
This talk/discussion hopes to shed some light on both aspects.

When is the session? What time is it, and where?
I'm new to this BarCamp.