The success of an artificial intelligence (AI) algorithm depends in large part on trust, yet many AI technologies function as opaque 'black boxes.' Indeed, some are intentionally designed that way. That design choice charts a mistaken course.
Trust in AI is engendered through transparency, reliability and explainability. To achieve those ends, an AI application must be trained on data of sufficient variety, volume and verifiability. Given the criticality of these factors, it is unsurprising that regulatory and enforcement agencies pay particular attention to whether personally identifiable information ("PII") has been collected and used appropriately in the development of AI. Thus, as a threshold matter, when AI training requires PII (or even data derived from PII), organizations need to address whether such data have been obtained and utilized in a permissible, transparent and compliant manner.