[Posted on behalf of Archie Jackson, Senior Director and Head of IT & IS, Incedo Inc]

AI is a huge ecosystem of tools, languages, and frameworks, covering functions such as data ingestion, data pre-processing, modeling, model evaluation, integration, visualization, and packaging & deployment.

Multiple stakeholder groups, ranging from traditional software engineers to machine learning engineers, data scientists, and statisticians, are involved in any project.

The complete working solution usually involves multiple systems coupled together at various interfaces.

Huge amounts of business data are involved, often millions or even billions of records. Depending upon the problem domain, this data may carry a tremendous amount of value or sensitivity. Moreover, the preference is to consider all attributes/aspects/dimensions of the data.

One can’t use ‘dummy data’ for the testing/pre-production stages in AI & ML. At all points in the ecosystem, various stakeholders and various systems are handling precious ‘real’ data. The many iterations and requests for a variety of data can disrupt any existing data governance we may have in place.

The Convolutional Neural Network (CNN) in the picture has a more ‘distributed’ opinion about what the image is. In fact, some part of the network even thinks it might be a horse!


Adversarial machine learning exploits the above ideas to make the learning and/or prediction mechanisms in an AI/ML system do the wrong thing. This can be done by impacting the learning process, when an attacker is able to feed data while the model is being trained, or during prediction/operation, when an attacker gets the model to do something wrong or unexpected for an input of the attacker's choosing. Attacks may be performed in a ‘white-box’ manner, where the attacker knows most things about the internals of the model (architecture, hyper-parameters, etc.), or in a ‘black-box’ manner, where the attacker can only explore the system ‘from the outside’ by observing its decisions for chosen inputs, like any other end user.

In adversarial attacks, the attacker tries to disturb the inputs just enough so that the probability distribution of ‘what is it?’ changes in a manner favorable to the attacker. In the CNN example above, the adversary may tweak certain parts of the image so that the probability of the network thinking it is a ‘horse’ goes up significantly and ‘horse’ gets voted as the ‘correct’ classification.
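As a rough illustration, here is a minimal sketch (Python/NumPy) of a fast-gradient-sign-method style perturbation. A hypothetical toy logistic-regression classifier stands in for the CNN, and all names and parameters below are assumptions made for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy binary classifier: p("horse") = sigmoid(w . x + b)
rng = np.random.default_rng(0)
w = rng.normal(size=100)     # stand-in for learned weights
b = 0.1
x = rng.normal(size=100)     # the original (benign) input
y_true = 0.0                 # true label: "not a horse"

# For logistic regression, the gradient of the cross-entropy loss
# with respect to the input x is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y_true) * w

# FGSM-style perturbation: a small step in the sign of that gradient,
# nudging the input toward the attacker's preferred outcome.
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x)

print("p(horse) on clean input    :", sigmoid(w @ x + b))
print("p(horse) on perturbed input:", sigmoid(w @ x_adv + b))
```

The perturbation is small per pixel/feature, yet it systematically pushes the model's probability mass toward the attacker-chosen class.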

Another example of an adversarial attack is Tay, an AI chatbot created by the Microsoft AI research team and launched on Twitter to have conversations with other users and learn to interact with people ‘along the way’. However, as an outcome of coordinated attacks to ‘mis-train’ it, Tay rapidly became offensive and started posting all sorts of inflammatory tweets. It had to be taken down within about 16 hours of its launch!

How to secure & maintain privacy?
Start with Awareness: Ensure that all members/stakeholders have a good basic understanding of security and privacy. This includes data classification, data protection techniques, authentication/authorization, privacy principles, applicable regulatory requirements, and so on. The goal should be to ensure that all stakeholders have a role-appropriate understanding of security and privacy, and that everyone uses the same terminology and knows and understands the relevant policies and standards.

Data Governance & Privilege management: Ownership and accountability should be clear for the various stakeholders as data changes hands at different stages of each workflow. This is particularly important given the wide circulation of data that is inevitable in ML & AI projects. Apply the right level of privileged access management (PAM) and authentication: AI needs to be accompanied by the same strong access barriers one would encounter through a web or mobile interface. This virtual barrier could include passwords, biometrics or multi-factor authentication.

Diligent threat modeling of solutions: Model threats at the component level as well as from an end-to-end perspective. This ensures that security is ‘built in’ to the design and that applicable security requirements are met at every point in the end-to-end system. Attention should be paid to the boundaries and interfaces between the different sub-systems, and assumptions made by either side at those interfaces should be explicitly verified. Also, because production data is involved everywhere, all workflows must be exhaustively covered in the threat models (from the earliest experiments and proofs of concept to the fully operational system as it would be deployed in production). All threats/risks identified during threat modeling must be addressed and then verified through a combination of feature security testing and penetration assessments.

Monitoring hygiene & IR plan: Keep all software components at their latest security patch level, conduct periodic access reviews, rotate keys/certificates, etc., and embed a strong incident response plan to deal with a calamity if one does happen.

Inference control is the ability to share extracts from large-scale datasets for various studies/research projects without revealing privacy-sensitive information about individuals in the dataset.

Three widely used inference-control techniques are ‘k-anonymity’, ‘l-diversity’ and ‘t-closeness’.

K-Anonymity provides a guarantee that any arbitrary query on a large dataset will not reveal information that can help narrow a group down to fewer than ‘k’ individuals. The technique assures that there will always remain an ambiguity of at least ‘k’ records for anyone mining the dataset for privacy-sensitive attributes. One class of attacks on k-anonymity (the ‘unsorted matching’ attack) becomes possible when records in different released subsets of the dataset appear in the same order, allowing the releases to be linked; a mitigation is to randomize the order of each released subset.
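As a minimal sketch (plain Python, with hypothetical quasi-identifier columns such as a generalized zip code and age range), the k-anonymity property of a released table can be checked by verifying that every combination of quasi-identifier values occurs at least k times:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records of the released table."""
    groups = Counter(tuple(r[qi] for qi in quasi_identifiers) for r in records)
    return min(groups.values()) >= k

# Hypothetical released records (quasi-identifiers already generalized)
released = [
    {"zip": "476**", "age": "20-29", "diagnosis": "flu"},
    {"zip": "476**", "age": "20-29", "diagnosis": "asthma"},
    {"zip": "479**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "479**", "age": "30-39", "diagnosis": "diabetes"},
]

print(is_k_anonymous(released, ["zip", "age"], k=2))  # True
```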

Another class of attacks comes into play if there is not enough diversity in the records containing a sensitive attribute within each equivalence group. In that case, an attacker can use background information about individuals to infer sensitive data about them. L-Diversity tries to address this by ensuring that equivalence groups have ‘attribute diversity’: subsets of the dataset that share the same quasi-identifier values must have ‘sufficient diversity’ of the sensitive attribute. ‘l-diversity’ works hand in hand with ‘k-anonymity’; it adds ‘attribute inference’ protection to a dataset that is already protected against ‘membership inference’ by ‘k-anonymity’.
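Continuing the same hypothetical toy setup, an l-diversity check verifies that each equivalence group contains at least l distinct values of the sensitive attribute:

```python
from collections import defaultdict

def is_l_diverse(records, quasi_identifiers, sensitive, l):
    """Return True if every equivalence group (records sharing the same
    quasi-identifier values) has at least l distinct sensitive values."""
    groups = defaultdict(set)
    for r in records:
        groups[tuple(r[qi] for qi in quasi_identifiers)].add(r[sensitive])
    return min(len(vals) for vals in groups.values()) >= l

# Same hypothetical toy records as in the k-anonymity sketch above.
released = [
    {"zip": "476**", "age": "20-29", "diagnosis": "flu"},
    {"zip": "476**", "age": "20-29", "diagnosis": "asthma"},
    {"zip": "479**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "479**", "age": "30-39", "diagnosis": "diabetes"},
]
print(is_l_diverse(released, ["zip", "age"], "diagnosis", l=2))  # True
```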

However, l-diversity can still leak information when the distribution of a sensitive attribute within an equivalence group is heavily skewed relative to its distribution in the overall dataset. T-Closeness mitigates these weaknesses by consciously keeping the distribution of each sensitive attribute in an equivalence group ‘close’ to its distribution in the complete dataset. In ‘t-closeness’, the distance between the distribution of a sensitive attribute in an equivalence group and the distribution of that attribute in the whole table is no more than a threshold ‘t’.
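A sketch of the t-closeness condition on the same toy records, using total variation distance as a simple stand-in for the distance measure (the original formulation uses Earth Mover's Distance):

```python
from collections import Counter, defaultdict

def distribution(values):
    counts = Counter(values)
    return {v: c / len(values) for v, c in counts.items()}

def total_variation(p, q):
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def is_t_close(records, quasi_identifiers, sensitive, t):
    """Return True if, in every equivalence group, the distribution of the
    sensitive attribute is within distance t of its overall distribution."""
    overall = distribution([r[sensitive] for r in records])
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[qi] for qi in quasi_identifiers)].append(r[sensitive])
    return all(total_variation(distribution(vals), overall) <= t
               for vals in groups.values())

# Same hypothetical released records as above.
released = [
    {"zip": "476**", "age": "20-29", "diagnosis": "flu"},
    {"zip": "476**", "age": "20-29", "diagnosis": "asthma"},
    {"zip": "479**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "479**", "age": "30-39", "diagnosis": "diabetes"},
]
print(is_t_close(released, ["zip", "age"], "diagnosis", t=0.3))  # True
```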

Differential Privacy provides a mathematical framework that can be used to understand the extent to which a machine learning algorithm ‘remembers’ information about individuals that it shouldn’t, thereby offering the ability to evaluate ML algorithms for the privacy guarantees they can provide. This is invaluable because we require models to learn general concepts from a dataset (e.g., people with salary higher than X are 90% more likely to purchase drones than people with salary less than Y) but not specific attributes that can reveal the identity or sensitive data of individuals who made up the dataset (e.g., Atul’s salary is X). Differential privacy adds a controlled amount of ‘noise’ during processing so as to generate enough ambiguity downstream that privacy-impacting inferences cannot be made based on predictions from the system.
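As a minimal sketch of the core idea (not a full DP accounting framework), here is the classic Laplace mechanism applied to a counting query; the data, the query, and the epsilon value below are assumptions made purely for illustration:

```python
import numpy as np

def dp_count(values, predicate, epsilon):
    """Differentially private count via the Laplace mechanism.
    A counting query has sensitivity 1 (adding or removing one individual
    changes the count by at most 1), so the noise scale is 1 / epsilon."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical salary data; query: how many people earn more than 100k?
salaries = [45_000, 120_000, 98_000, 150_000, 87_000, 110_000]
print(dp_count(salaries, lambda s: s > 100_000, epsilon=0.5))
```

Smaller epsilon means more noise and a stronger privacy guarantee, at the cost of less accurate answers.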

PATE Framework

The Private Aggregation of Teacher Ensembles (PATE) framework applies differential privacy to provide an overall privacy guarantee on the model being trained from user data. The key intuition in the PATE framework is that “if two models trained on separate data agree on some outcome, then it is less likely that sharing that outcome with the consumer will leak any sensitive data about a specific user”.

The framework divides the private data into subsets and independently trains different models (called ‘teachers’) on each of the subsets. The overall prediction is generated by combining the individual predictions of this ‘ensemble’ of teacher models. First, noise is added when combining the outcomes of individual teachers so that the combined result is a ‘noisy aggregation’ of individual teacher predictions. Second, these noisy predictions from the teacher ensemble are used as ‘labeled training data’ to train a downstream ‘student’ model. It is this student model that is exposed to end users for consumption.
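A minimal sketch of the ‘noisy aggregation’ step is below, assuming each teacher has already been trained on its own disjoint data partition and has produced a class prediction for a given input; the gamma parameter controlling the noise is an assumption of this toy example:

```python
import numpy as np

def noisy_aggregate(teacher_predictions, num_classes, gamma):
    """Combine the class votes of an ensemble of teachers by adding
    Laplace noise to each vote count and taking the arg-max.
    `teacher_predictions` is a list of class indices, one per teacher."""
    votes = np.bincount(teacher_predictions, minlength=num_classes).astype(float)
    votes += np.random.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(votes))

# Hypothetical example: 10 teachers voting among 3 classes for one input.
teacher_predictions = [0, 2, 2, 2, 1, 2, 2, 0, 2, 2]
label = noisy_aggregate(teacher_predictions, num_classes=3, gamma=0.5)
print("noisy label used to train the student:", label)
```

When the teachers largely agree, the noise rarely changes the outcome, which is exactly the intuition PATE relies on.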

Federated Learning

Federated Learning takes a somewhat different approach to preserving privacy in learning scenarios. The key idea is not to bring all the data together, but instead to devise ways in which we can learn from subsets of the data and then effectively aggregate those learnings.

For instance, a group of hospitals may be interested in applying ML techniques to improve healthcare of patients but (a) individual hospitals may not have sufficient data to do so by themselves and (b) they may not want to risk releasing their data for central aggregation and analysis. This is an ideal scenario for applying federated learning.
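As a rough sketch of the aggregation step (federated averaging), assume each hospital trains locally and sends back only its model weights and the number of examples it trained on; the weight vectors and dataset sizes below are purely illustrative:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate locally trained model weights into a global model by
    weighting each client's parameters by its local dataset size."""
    stacked = np.stack(client_weights)              # (clients, params)
    coefficients = np.array(client_sizes) / sum(client_sizes)
    return coefficients @ stacked                   # weighted average

# Hypothetical: three hospitals return weight vectors of the same shape.
hospital_weights = [np.array([0.2, 1.0]),
                    np.array([0.4, 0.8]),
                    np.array([0.3, 0.9])]
hospital_sizes = [1000, 4000, 5000]
global_weights = federated_average(hospital_weights, hospital_sizes)
print("aggregated global model weights:", global_weights)
```

Only model parameters travel between parties; the raw patient records never leave each hospital.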

Homomorphic Encryption

When data is encrypted using traditional techniques, it becomes impossible to do any meaningful computation on it in its encrypted form. With the widespread adoption of cloud computing, one often encounters scenarios where a party possessing sensitive data wants to outsource some computation on that data to a third party that it does not trust with the plaintext data. Homomorphic encryption provides the ability to perform meaningful operations on encrypted data without having direct access to the decryption keys or the plaintext data itself. Using homomorphic encryption, the service can perform the requested computation on the encrypted data and return the encrypted result to the client. The client can then use the decryption key (which was never shared with the service) to decrypt the returned data and obtain the actual result.
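As a toy illustration of the underlying property (not a production scheme), textbook RSA is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the two plaintexts. The tiny key below is purely illustrative:

```python
# Textbook-RSA demo of the multiplicative homomorphic property.
# Toy parameters only (p=61, q=53); real deployments use large keys and
# purpose-built homomorphic schemes (e.g., Paillier, BFV/CKKS).
n, e, d = 3233, 17, 2753        # public modulus, public exp, private exp

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 7, 3
c1, c2 = encrypt(m1), encrypt(m2)

# The untrusted service multiplies ciphertexts without seeing plaintexts...
c_product = (c1 * c2) % n

# ...and the client decrypts the result with the key it never shared.
print(decrypt(c_product))       # 21 == m1 * m2
```

Fully homomorphic schemes extend this idea to support both addition and multiplication, and hence arbitrary computation, on encrypted data.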
