How to trick deep learning algorithms into doing new things


Two things come up constantly in deep learning: data and compute resources. You need plenty of both to develop, train, and test deep learning models. When developers lack large training sets or access to powerful servers, they turn to transfer learning, fine-tuning a pre-trained deep learning model for a new task. At this year’s ICML conference, scientists at IBM Research and Taiwan’s National Tsing Hua University introduced “black-box adversarial reprogramming” (BAR), an alternative repurposing technique that turns a supposed weakness of deep neural networks into a strength. BAR expands the original work on adversarial…
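
To make the idea concrete, here is a minimal sketch of adversarial reprogramming in PyTorch. This is the white-box variant that BAR builds on, not the paper’s own code: a small trainable perturbation (the “program”) is painted around the embedded target image, the pretrained network stays frozen, and its 1,000 ImageNet outputs are folded onto the new task’s labels. The model choice, image sizes, and label-mapping scheme are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class AdversarialReprogrammer(nn.Module):
    """Repurpose a frozen ImageNet classifier for a small new task.

    The only trainable tensor is a universal 'program' (a perturbation)
    painted around the embedded target image; the pretrained weights
    are never updated.
    """

    def __init__(self, target_size=32, source_size=224, n_target_classes=10):
        super().__init__()
        # Frozen pretrained network (downloads ImageNet weights).
        self.frozen = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in self.frozen.parameters():
            p.requires_grad = False
        self.frozen.eval()

        # Trainable program, the full size of the source input.
        self.delta = nn.Parameter(torch.zeros(1, 3, source_size, source_size))

        # Mask: 0 where the target image sits, 1 where the program may act.
        pad = (source_size - target_size) // 2
        mask = torch.ones(1, 3, source_size, source_size)
        mask[:, :, pad:pad + target_size, pad:pad + target_size] = 0
        self.register_buffer("mask", mask)
        self.pad, self.target_size = pad, target_size
        self.n_target_classes = n_target_classes

    def forward(self, x_target):
        # Embed the small target image in the center of a blank canvas.
        b = x_target.size(0)
        canvas = x_target.new_zeros(b, 3, self.mask.size(2), self.mask.size(3))
        s = slice(self.pad, self.pad + self.target_size)
        canvas[:, :, s, s] = x_target

        # Add the learned program only outside the embedded image.
        logits = self.frozen(canvas + torch.tanh(self.delta) * self.mask)

        # Many-to-one label mapping: source class i -> target class i % K.
        k = self.n_target_classes
        n = logits.size(1) - logits.size(1) % k
        return logits[:, :n].view(b, -1, k).mean(dim=1)

# Training touches only the program, never the network:
# reprog = AdversarialReprogrammer()
# opt = torch.optim.Adam([reprog.delta], lr=0.05)
# loss = nn.functional.cross_entropy(reprog(images_32px), labels)
```

In the black-box setting BAR targets, the gradient of the program cannot be computed by backpropagation at all, so the authors instead estimate it purely from the model’s responses to queries (zeroth-order optimization), which is what makes the approach work on models exposed only through an API.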

This story continues at The Next Web
