Bias, Values, and Verification in AI
7:30 PM - 8:30 PM
Free and open to the public!
Is this company’s AI model biased? Are its predictions reliable? Are they using my data responsibly? As AI is deployed in sensitive applications, it is increasingly important to audit models to ensure they uphold societal values. However, AI service providers almost never release their models or data for auditing due to intellectual property and data privacy issues.
My work aims to address this tension through privacy-preserving cryptographic ‘contracts’ that bind service providers’ models. These contracts use zero-knowledge proofs and other cryptographic tools to guarantee that (i) the model satisfies an important property, such as group fairness, robustness, or differential privacy, and (ii) outside parties can view the contract to verify that the model has the property, yet learn no information about the model parameters or data by doing so.
Cryptographic verification is powerful but computationally expensive, especially for larger models. In this talk, I will introduce several optimization strategies I have employed in my research to make this critical emerging approach to AI/ML regulation practical.
About the Speaker