AIShield protects AI systems against four types of adversarial attacks, each illustrated with a short sketch after this list:
Model Extraction – The act of stealing a proprietary model, typically by querying it and training a functional copy from its responses
Model Evasion – The act of making a model produce wrong outputs by feeding it deliberately perturbed (adversarial) inputs
Data Poisoning – The act of making a model learn the wrong things by injecting malicious samples into its training data
Model Inference – The act of making a model reveal sensitive details of its internal logic or training data
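
A minimal sketch of how a model extraction attack works, assuming only black-box access to the victim's predict() method. The models, dataset, and query budget below are illustrative stand-ins, not AIShield components:

```python
# Illustrative model-extraction sketch: the attacker never sees the victim's
# parameters, only its predictions, yet trains a functional copy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Victim: stands in for a proprietary model behind a prediction API.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
victim = LogisticRegression().fit(X, y)

# Attacker: send synthetic queries, harvest the victim's labels, and fit a
# local surrogate that mimics the stolen decision boundary.
rng = np.random.default_rng(0)
queries = rng.normal(size=(5000, 10))
surrogate = DecisionTreeClassifier().fit(queries, victim.predict(queries))

# Agreement on fresh inputs measures how much of the model was extracted.
test = rng.normal(size=(1000, 10))
agreement = (surrogate.predict(test) == victim.predict(test)).mean()
print(f"surrogate/victim agreement: {agreement:.2%}")
```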
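
A minimal evasion sketch in the same setting, using an FGSM-style sign perturbation against a linear victim. Real evasion attacks often work black-box, but white-box access to the weights keeps the example short; the step sizes are illustrative:

```python
# Illustrative evasion sketch: nudge a correctly classified input across the
# decision boundary so the model predicts the wrong class.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
victim = LogisticRegression().fit(X, y)

x = X[0]
pred = victim.predict([x])[0]
w = victim.coef_[0]
# Move against the class score along the sign of the weights (FGSM-style).
direction = -np.sign(w) if pred == 1 else np.sign(w)

for eps in np.linspace(0.1, 10.0, 100):
    x_adv = x + eps * direction
    if victim.predict([x_adv])[0] != pred:
        print(f"evasion succeeded at perturbation size eps={eps:.2f}")
        break
```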
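
A minimal data poisoning sketch via label flipping, assuming the attacker can tamper with part of the training set; the 20% poisoning rate is an illustrative choice:

```python
# Illustrative data-poisoning sketch: flipping training labels degrades the
# learned decision boundary without touching the model code at all.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, y_train = X[:800], y[:800].copy()
X_test, y_test = X[800:], y[800:]

clean_acc = LogisticRegression().fit(X_train, y_train).score(X_test, y_test)

# Poison: flip the labels of 20% of the training samples.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=len(y_train) // 5, replace=False)
y_train[idx] = 1 - y_train[idx]

poisoned_acc = LogisticRegression().fit(X_train, y_train).score(X_test, y_test)
print(f"clean accuracy: {clean_acc:.2%}, poisoned accuracy: {poisoned_acc:.2%}")
```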
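
One common instance of the model inference class is membership inference, which exploits the fact that many models are more confident on data they were trained on. A minimal sketch under that assumption (the random forest victim and the confidence statistic are illustrative choices, not AIShield's detection method):

```python
# Illustrative membership-inference sketch: the confidence gap between
# training members and unseen points is what the attacker thresholds on.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_members, y_members = X[:500], y[:500]   # part of the training set
X_nonmembers = X[500:]                    # never seen by the victim

victim = RandomForestClassifier(random_state=0).fit(X_members, y_members)

conf_members = victim.predict_proba(X_members).max(axis=1)
conf_nonmembers = victim.predict_proba(X_nonmembers).max(axis=1)
print(f"mean confidence on members:     {conf_members.mean():.3f}")
print(f"mean confidence on non-members: {conf_nonmembers.mean():.3f}")
```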