New Neural Network Feature Representation to Fight Algorithmic Discrimination
SENSITIVENETS
AGNOSTIC REPRESENTATION LEARNING
Technology to train new Neural Network feature representations that deliver top performance, transparency, and trust in Artificial Intelligence systems
SensitiveNets transforms traditional feature descriptors to ensure fairness and transparency in decision-making. A new technology for a new generation of pattern recognition algorithms trained on diversity
A step forward in machine learning. Improve the transparency, privacy, and accountability of your AI systems. SensitiveNets removes biases introduced by humans, data, and traditional representation learning
Discrimination can be defined as the unfair treatment of an individual because of his or her membership in a particular group, e.g. ethnic, gender, etc. The right to non-discrimination is deeply embedded in the normative framework that underlies various national and international regulations, and can be found, for example, in Article 7 of the Universal Declaration of Human Rights and Article 14 of the European Convention on Human Rights, among others. Algorithms play an increasingly important role in decision-making processes involving humans. These decisions therefore have a growing effect on our lives, and there is a growing need for machine learning methods that guarantee fairness in such decision-making. Read this article for more information about how AI bias happens [link].
According to Recital 71 of the GDPR, data controllers who process sensitive data have to: “implement appropriate technical and organizational measures to prevent, inter alia, discriminatory effects”.
Pattern recognition algorithms have the potential to reflect and amplify the discriminatory effects of cognitive biases
We have developed a new agnostic feature representation capable of removing sensitive information from the decision-making of automatic processes. The resulting networks trained with the proposed representation, called SensitiveNets, can be trained for specific tasks (e.g. image classification, face recognition, speech recognition, …) while minimizing the contribution of selected covariates, both to the task at hand and to the information embedded in the trained network. Those covariates will typically be sources of discrimination that we want to prevent (e.g. gender, ethnicity, age, context).
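To make the idea concrete, the sketch below illustrates one way such an agnostic representation could be trained: a learned projection is optimized for the main task while an auxiliary detector for the sensitive covariate is pushed towards chance level. This is only a minimal illustration under our own assumptions, not the patented SensitiveNets algorithm; the module names, dimensions, and the specific adversarial penalty are all illustrative.

```python
# Minimal, illustrative sketch of training an agnostic representation:
# keep task information in a learned projection while suppressing a
# sensitive attribute. NOT the patented algorithm; names are our own.
import torch
import torch.nn as nn

EMB_DIM, NUM_IDS = 512, 100  # toy embedding size and number of identities

projection = nn.Sequential(            # learned agnostic projection
    nn.Linear(EMB_DIM, EMB_DIM), nn.ReLU(), nn.Linear(EMB_DIM, EMB_DIM))
task_head = nn.Linear(EMB_DIM, NUM_IDS)  # main task, e.g. identity classification
sensitive_head = nn.Linear(EMB_DIM, 2)   # detector for the sensitive attribute

opt_main = torch.optim.Adam(
    list(projection.parameters()) + list(task_head.parameters()), lr=1e-4)
opt_sens = torch.optim.Adam(sensitive_head.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

def training_step(x, y_task, y_sensitive, adv_weight=1.0):
    # 1) Update the sensitive-attribute detector on the current projection.
    with torch.no_grad():
        z = projection(x)
    loss_sens = ce(sensitive_head(z), y_sensitive)
    opt_sens.zero_grad(); loss_sens.backward(); opt_sens.step()

    # 2) Update the projection: solve the task while making the sensitive
    #    attribute unpredictable (push the detector towards chance level).
    z = projection(x)
    p_sens = sensitive_head(z).softmax(dim=1)
    uniform = torch.full_like(p_sens, 1.0 / p_sens.shape[1])
    loss = ce(task_head(z), y_task) + adv_weight * nn.functional.mse_loss(p_sens, uniform)
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return loss.item()

# Toy usage with random data standing in for real face embeddings.
x = torch.randn(32, EMB_DIM)
print(training_step(x, torch.randint(NUM_IDS, (32,)), torch.randint(2, (32,))))
```

The key design choice illustrated here is that suppression happens in the feature space itself: the projected descriptor, not the raw one, feeds the task head, so sensitive information is removed from the representation rather than merely hidden from the final classifier.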
SensitiveNets removes sensitive information from the decision-making of biometric systems. Our technology helps you comply with new legislation that guarantees the rights of citizens.
HOW TO USE SENSITIVENETS
SensitiveNets can be used in two ways:
1) Pre-trained models: no need to retrain your models. SensitiveNets is applied to your embedding descriptors (see the sketch after this list). The main difference with other technologies such as de-identification is that sensitive information is eliminated instead of distorted. The removal is applied in the feature space as part of a new learning process focused on transparency and accountability.
2) Training from scratch: you can train your models from scratch according to the patented SensitiveNets learning process. Develop new biometric recognition systems that guarantee fairness in decision-making and therefore comply with regulations and the expectations of final users, regardless of their gender, race, or age. Enter the market of biometric technologies with a disruptive product.
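As a hypothetical example of the first route, the snippet below applies an already-trained projection to descriptors produced by an existing face recognition model before matching. The `project` and `verify` helpers, the threshold, and the projection weights are placeholders of our own, not a published SensitiveNets API.

```python
# Illustrative sketch of the "pre-trained models" route: keep the existing
# recognition pipeline and project its embeddings into the agnostic space
# before matching. All names here are hypothetical placeholders.
import torch
import torch.nn.functional as F

def project(embeddings, projection):
    """Map raw descriptors into the agnostic (sensitive-information-free) space."""
    with torch.no_grad():
        return projection(embeddings)

def verify(emb_a, emb_b, projection, threshold=0.5):
    """Face verification on projected descriptors via cosine similarity."""
    za = project(emb_a, projection)
    zb = project(emb_b, projection)
    return F.cosine_similarity(za, zb, dim=-1) > threshold

# Toy usage: random vectors stand in for descriptors from your existing model,
# and an untrained linear layer stands in for the trained projection weights.
proj = torch.nn.Linear(512, 512)
a, b = torch.randn(1, 512), torch.randn(1, 512)
print(verify(a, b, proj))
```

Because the projection runs after the existing model, it can be dropped into a deployed pipeline as a post-processing step on stored descriptors, with no retraining of the upstream network.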
The drop in performance is very small when the projection is applied to state-of-the-art face recognition models, which demonstrates the success of SensitiveNets in preserving recognition accuracy
Gender information is removed from the embedding descriptors. As a result, gender-dependent features are removed from the decision-making of the biometric system
Ethnicity information is removed from the embedding descriptors. As a result, ethnicity-dependent features are removed from the decision-making of the biometric system
“It is safe to assume that bias exists in all data. The question is how to identify it and remove it from the model” ― Chris DeBrusk, MIT Sloan Management Review 2018
“All are entitled to equal protection against any discrimination” ― Universal Declaration of Human Rights
“If you don’t understand algorithmic discrimination, then you don’t understand discrimination in the 21st century” ― Bruce Schneier