
Manipulating Machine Learning Systems by Manipulating Training Data

Interesting research: “TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents”:

Abstract: Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time. In this work, we show that these training-time vulnerabilities extend to deep reinforcement learning (DRL) agents and can be exploited by an adversary with access to the training…
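
To make the idea concrete, here is a minimal sketch of the general kind of attack the abstract describes: an attacker with access to the training data stamps a small trigger pattern onto a fraction of the examples and relabels them with an attacker-chosen class, so the trained model behaves normally except when the trigger appears. This is a generic Trojan (backdoor) poisoning sketch against a classifier, not the TrojDRL algorithm itself (which targets the state/reward stream of a DRL agent); all names and parameters below are illustrative.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.05, rng=None):
    """Illustrative Trojan poisoning: stamp a small trigger patch onto a
    fraction of the training images and relabel them to the attacker's
    target class. Hypothetical sketch, not the TrojDRL method."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0   # 3x3 white trigger patch in the corner
        labels[i] = target_label    # attacker-chosen label
    return images, labels

# Example: poison 5% of a toy 28x28 grayscale dataset toward class 7.
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7)
```

A model trained on the poisoned set learns the normal task on clean inputs but associates the trigger patch with the target class, which is what makes these attacks hard to detect by accuracy metrics alone.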