Intel Describes Tool to Train AI Models With Encrypted Data

Intel revealed that it has made progress toward an anonymized, encrypted method of model training, a capability long awaited by industries such as healthcare that need to apply AI tools to sensitive, personally identifiable information. At the NeurIPS 2018 conference in Montreal, Intel showed off HE-Transformer, an open source backend to its nGraph neural network compiler that allows AI models to work on encrypted data. HE-Transformer builds on a Microsoft Research encryption library.

VentureBeat reports that, “Microsoft Research’s Simple Encrypted Arithmetic Library (SEAL) was also released in open source this week,” and that Intel and Microsoft describe HE-Transformer as “an example of ‘privacy-preserving’ machine learning.”

“HE allows computation on encrypted data,” wrote Intel research scientist Fabian Boemer and Intel senior director of research Casimir Wierzynski. “This capability, when applied to machine learning, allows data owners to gain valuable insights without exposing the underlying data; alternatively, it can enable model owners to protect their models by deploying them in encrypted form.”

HE-Transformer, which provides an abstraction layer that can be applied to neural networks built in open source frameworks (such as Google’s TensorFlow, Facebook’s PyTorch or Apache MXNet), “effectively eliminates the need to manually integrate models into HE cryptographic libraries.” It also “incorporates the Cheon-Kim-Kim-Song (CKKS) encryption scheme” and supports operations such as add, broadcast, constant, convolution, dot, multiply, negate, pad, reshape, result, slice, and subtract.
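A practical consequence of that limited operation set is that a model must be expressible entirely in those ops before it can run under encryption; in particular, nonlinear activations like ReLU are absent, so cryptonet-style models replace them with polynomials (commonly a simple square, which is just an element-wise multiply). The following NumPy sketch is illustrative only, not HE-Transformer’s actual API; the weights, shapes, and layer sizes are invented for the example:

```python
import numpy as np

# Sketch of an "HE-friendly" forward pass restricted to ops like those
# HE-Transformer supports (dot, add, multiply). ReLU is not in that set,
# so the activation is x**2, expressed as an element-wise multiply.
# All weights and shapes here are made up for illustration.
rng = np.random.default_rng(0)

x = rng.standard_normal(784)                  # flattened 28x28 input
W1 = rng.standard_normal((784, 128)) * 0.01   # hypothetical layer weights
b1 = np.zeros(128)
W2 = rng.standard_normal((128, 10)) * 0.01
b2 = np.zeros(10)

h = x @ W1 + b1        # dot + add
h = h * h              # multiply: square activation in place of ReLU
logits = h @ W2 + b2   # dot + add

print(logits.shape)    # (10,)
```

Under homomorphic encryption each of these dot, add, and multiply steps would be evaluated on ciphertexts instead of plain arrays; the point of HE-Transformer’s abstraction layer is that the model author writes only the plaintext graph.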

“HE” stands for “homomorphic encryption,” a type of cryptography that “enables computation on ciphertexts — plaintext (file contents) encrypted using an algorithm … [and] generates an encrypted result that, when decrypted, exactly matches the result of operations that would have been performed on unencrypted text.” IBM researcher Craig Gentry first developed a “fully HE scheme” in 2009.
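The “computation on ciphertexts” property can be seen in a few lines with a toy Paillier-style cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This is only a conceptual sketch with insecure, tiny parameters; SEAL’s CKKS scheme is far more sophisticated, supporting both addition and multiplication over vectors of approximate real numbers:

```python
from math import gcd
import random

# Toy Paillier cryptosystem (illustrative only; tiny primes, NOT secure).
p, q = 17, 19
n = p * q          # public modulus
n2 = n * n
g = n + 1          # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 12, 30
ca, cb = encrypt(a), encrypt(b)
# Multiplying ciphertexts adds the plaintexts: computation on encrypted data.
total = decrypt((ca * cb) % n2)
print(total)  # 42
```

Decrypting the product of the two ciphertexts yields 12 + 30 = 42, exactly matching the result of the operation on unencrypted values, which is the defining property quoted above.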

Intel said that “HE-Transformer delivers state-of-the-art performance on cryptonets — learned neural networks that can be applied to encrypted data — using a floating-point model trained in TensorFlow.”

“We are excited to work with Intel to help bring homomorphic encryption to a wider audience of data scientists and developers of privacy-protecting machine learning systems,” said Kristin Lauter, principal researcher and research manager of cryptography at Microsoft Research. Boemer and Wierzynski added that “future versions of HE-Transformer will support a wider variety of neural network models.”