Learning to Learn


Adversarial Attack

2020. 1. 31. 10:39

https://towardsdatascience.com/adversarial-examples-in-deep-learning-be0b08a94953

 

Adversarial examples in deep learning

This post will contain essentially the same information as the talk I gave during the last Deep Learning Paris Meetup. I feel that as more…

towardsdatascience.com
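The article above introduces adversarial examples; as a companion, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM, from Goodfellow et al.) applied to a binary logistic-regression model. This is an illustrative toy, not necessarily the exact method in the linked article; the weights `w`, `b` and inputs below are made up for demonstration.

```python
import numpy as np

def fgsm_logreg(w, b, x, y, epsilon=0.1):
    """FGSM for binary logistic regression: move each input by epsilon
    along the sign of the cross-entropy loss gradient w.r.t. the input."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted P(y=1)
    grad_x = (p - y)[:, None] * w[None, :]   # dL/dx for cross-entropy loss
    return x + epsilon * np.sign(grad_x)

# Toy 2-D example with hand-picked weights
w = np.array([1.5, -2.0])
b = 0.0
x = np.array([[0.5, 0.5], [-1.0, 1.0]])
y = np.array([1.0, 0.0])
x_adv = fgsm_logreg(w, b, x, y)   # each coordinate shifted by ±epsilon
```

Because the perturbation only uses the gradient's sign, every coordinate moves by exactly `epsilon`, which keeps the adversarial example within a small L-infinity ball around the original input.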

https://towardsdatascience.com/getting-to-know-a-black-box-model-374e180589ce

 

Getting to know a black-box model:

A two-dimensional example of Jacobian-based adversarial attacks and Jacobian-based data augmentation

towardsdatascience.com
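The second article works through Jacobian-based attacks on a two-dimensional example. A rough sketch of the core idea, assuming a linear softmax classifier (so the Jacobian of the class probabilities with respect to the input is available in closed form); the matrix `W` and step size `alpha` below are illustrative choices, not the article's:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def jacobian_step(W, b, x, target, alpha=0.05):
    """One step of a Jacobian-based attack on a linear softmax model:
    compute dP/dx analytically and nudge x to raise P(target)."""
    p = softmax(W @ x + b)
    # Jacobian of softmax probabilities w.r.t. the logits: diag(p) - p p^T
    J_logits = np.diag(p) - np.outer(p, p)
    J = J_logits @ W                  # chain rule gives dP/dx, shape (classes, 2)
    return x + alpha * np.sign(J[target])   # follow the target class's gradient row

# Toy 2-class, 2-D model; x starts firmly in class 0
W = np.array([[1.0, -1.0], [-1.0, 1.0]])
b = np.zeros(2)
x = np.array([1.0, 0.0])
for _ in range(20):                  # iterate small steps toward class 1
    x = jacobian_step(W, b, x, target=1)
```

After enough steps the perturbed point crosses the decision boundary and the model predicts the attacker's target class; the same Jacobian also drives the data-augmentation scheme the article describes, where synthetic points are generated along these gradient directions.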

 
