Deep learning systems are gaining wider adoption due to their remarkable performance in computer vision and natural language tasks. As their applications reach into high-stakes and mission-critical areas such as self-driving vehicles, the safety of these systems becomes paramount. A lapse in the safety of a deep learning model could result in loss of life and erode public trust, marring the progress made by technological advances in this field. This thesis addresses current threats to the safety of deep learning models and the defences available to counter these threats. Two of the most pressing safety concerns are adversarial examples and data poisoning, where malicious actors can subjugate deep learning systems by targeting a model and its training data.