Red Teaming
Actively and systematically probing AI systems for weaknesses and vulnerabilities, especially those not considered during its design.
Actively and systematically probing AI systems for weaknesses and vulnerabilities, especially those not considered during its design.