Discrimination can be defined as the unfair treatment of a person because of their membership in a particular group, such as an ethnic or gender group. The right to non-discrimination is deeply embedded in the normative framework that underlies many national and international regulations; it can be found, for example, in Article 7 of the Universal Declaration of Human Rights and in Article 14 of the European Convention on Human Rights. Algorithms play an increasingly important role in decision-making processes that involve humans. These decisions therefore have a growing impact on our lives, and there is a growing need for machine learning methods that guarantee fairness in such decision-making.

According to Recital 71 of the GDPR, data controllers who process sensitive data have to “implement appropriate technical and organizational measures to prevent, inter alia, discriminatory effects”.
Pattern recognition algorithms have the potential to reflect and amplify the discriminatory effects of human cognitive biases.
We have developed a new agnostic feature representation capable of removing sensitive information from the decision-making of automatic processes. Networks trained with the proposed representation, called SensitiveNets, can be trained for specific tasks (e.g. image classification, face recognition, speech recognition), while minimizing the contribution of selected covariates both to the task at hand and to the information embedded in the trained network. These covariates are typically potential sources of discrimination that we want to prevent (e.g. gender, ethnicity, age, context).
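
To make the idea concrete, below is a minimal sketch of one common way to implement such a covariate-suppression objective: a probe network tries to predict the sensitive covariate from the embedding, while the encoder is trained to solve its task and, at the same time, make the probe's predictions uninformative. This adversarial-style formulation is only an illustration under our own assumptions; the names and sizes (encoder, sensitive_probe, LAMBDA, the layer dimensions) are hypothetical, and the actual patented SensitiveNets training procedure may differ.

```python
# Minimal PyTorch sketch of covariate suppression during training.
# Illustration only -- not the patented SensitiveNets procedure;
# all module names, sizes, and LAMBDA are hypothetical.
import torch
import torch.nn as nn

LAMBDA = 0.5  # weight of the sensitive-information penalty (assumed value)

encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
task_head = nn.Linear(128, 10)       # e.g. a 10-class recognition task
sensitive_probe = nn.Linear(128, 2)  # tries to recover the covariate

opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()), lr=1e-4)
opt_probe = torch.optim.Adam(sensitive_probe.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

def train_step(x, y_task, y_sensitive):
    # 1) Train the probe to detect the covariate from frozen embeddings.
    z = encoder(x).detach()
    probe_loss = ce(sensitive_probe(z), y_sensitive)
    opt_probe.zero_grad()
    probe_loss.backward()
    opt_probe.step()

    # 2) Train the encoder: solve the task while pushing the probe's
    #    predictions towards the uninformative uniform distribution.
    z = encoder(x)
    task_loss = ce(task_head(z), y_task)
    probs = sensitive_probe(z).softmax(dim=1)
    uniform = torch.full_like(probs, 1.0 / probs.shape[1])
    suppression = ((probs - uniform) ** 2).mean()  # 0 when probe is blind
    loss = task_loss + LAMBDA * suppression
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()
    return task_loss.item(), suppression.item()

# Usage with random stand-in data:
x = torch.randn(32, 512)
y_task = torch.randint(0, 10, (32,))
y_sensitive = torch.randint(0, 2, (32,))
print(train_step(x, y_task, y_sensitive))
```

In this formulation the penalty reaches its minimum exactly when the probe's output carries no information about the covariate, which operationalizes the goal of removing sensitive information from the decision-making.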

SensitiveNets removes sensitive information from the decision-making of biometric systems. Our technology enables compliance with new legislation that guarantees citizens' rights.

HOW TO USE SENSITIVENETS
SensitiveNets can be used in two ways:

1) Pre-trained models: there is no need to retrain your models. SensitiveNets is applied directly to your embedding descriptors, as shown in the sketch after this list. The main difference from other technologies, such as de-identification, is that sensitive information is eliminated rather than distorted. The removal is applied in the feature space as part of a new learning process focused on transparency and accountability.
2) Training from scratch: you can train your models according to the patented SensitiveNets learning process, following the same suppression idea sketched above. Develop new biometric recognition systems that guarantee fairness in decision-making and therefore comply with regulations and with the expectations of end users, regardless of their gender, race, or age. Enter the market of biometric technologies with a disruptive product.
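
As an illustration of option 1, the sketch below shows the deployment pattern: a learned transform is applied in feature space to descriptors exported from an existing model, so the model itself is never retrained. The sanitizer module, its 512-D descriptor size, and the weight file name are hypothetical placeholders; in practice such a transform would be trained with a suppression objective like the one sketched earlier.

```python
# Illustrative sketch of option 1: sanitize precomputed embedding
# descriptors with a learned projection, leaving the original
# recognition model untouched. Names and sizes are hypothetical.
import numpy as np
import torch
import torch.nn as nn

# This module would be trained beforehand with a suppression objective;
# here its weights are random stand-ins.
sanitizer = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
# sanitizer.load_state_dict(torch.load("sanitizer.pt"))  # hypothetical weights

def sanitize(descriptors: np.ndarray) -> np.ndarray:
    """Map raw descriptors to covariate-suppressed descriptors, same shape."""
    with torch.no_grad():
        z = torch.from_numpy(descriptors).float()
        return sanitizer(z).numpy()

# Usage: descriptors exported from an existing face-recognition model.
raw = np.random.randn(4, 512).astype(np.float32)  # stand-in descriptors
clean = sanitize(raw)
print(raw.shape, clean.shape)  # (4, 512) (4, 512)
```

Because the transform operates purely on stored descriptors, it can be deployed behind an existing matching pipeline without touching the upstream model.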