⚠️ AI & WOMEN'S CVs ⚠️

Only 11.1% of CVs with female-associated names are ranked first when the initial screening is done by an AI. In other words: in almost 9 out of 10 cases, the AI puts the CV associated with a man's name first.

You read that right…

The 2025 study was conducted by Kyra Wilson and Aylin Caliskan of the University of Washington Information School, testing real CVs on 3 MTE-type AI models (Massive Text Embedding models) that companies currently use to filter which applications reach a first interview. These systems "rank" who best fits the vacancy.
This is the full paper in case you’re interested. 👇
https://lnkd.in/dgxf_5HR

More? In 85.1% of cases, the AI models prefer names associated with white people. In the study this happened across all kinds of professions, which shows it is a structural problem: the models discriminate at the base, not only in specific sectors. The AI assumes that the most valid profile is a white man.

💡 What should companies do? Some tips:
▪️ Audit the model before using it in real selection processes: put questions to whoever designed the AI model, including questions with a feminist perspective.
▪️ Train the team: if you cannot recognize gender bias in your day-to-day work, you will not be able to recognize gender bias in the AI.
▪️ Look at the initial funnel, not just the final photo: how many women drop out before they even get to the interview? If that number is high, you are automating the glass ceiling.
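The funnel check above can be made concrete with a few lines of code. This is a minimal sketch, not a production tool: the data, the `gender` label, and the `passed_screen` field are hypothetical names chosen for illustration, and real audits would need legally appropriate demographic data.

```python
# Minimal sketch of the "initial funnel" check: compare per-group
# pass rates at the AI screening stage, before any interview happens.
# Field names ("gender", "passed_screen") and the toy data are
# hypothetical, for illustration only.
from collections import Counter

def pass_rates(applicants):
    """Return the screening pass rate for each gender group."""
    totals = Counter(a["gender"] for a in applicants)
    passed = Counter(a["gender"] for a in applicants if a["passed_screen"])
    return {g: passed[g] / totals[g] for g in totals}

# Toy data: 4 women and 4 men, assumed equally qualified.
applicants = [
    {"gender": "F", "passed_screen": True},
    {"gender": "F", "passed_screen": False},
    {"gender": "F", "passed_screen": False},
    {"gender": "F", "passed_screen": False},
    {"gender": "M", "passed_screen": True},
    {"gender": "M", "passed_screen": True},
    {"gender": "M", "passed_screen": True},
    {"gender": "M", "passed_screen": False},
]

rates = pass_rates(applicants)
# A large gap between groups (here 0.25 vs 0.75) flags the automated
# screening step, not the interview, as where women are dropping out.
print(rates)
```

If the gap between groups is large at this first step, the bias is in the model's ranking, and no amount of fair interviewing afterwards will fix it.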

Want to know which gender biases AI can have, so you don't perpetuate them in your company?

📩 Contact me for a training or conference

#PsicologíaCognitiva #Neurociencia #Productividad #Aprendizaje #EscrituraManual #MindfulnessDigital #PsicologíaSocial #SerDiferente #Generaciones #PensamientoCrítico #Reflexión #Identidad #SociedadActual #UniformeInvisible #Tendencias #Mentalidad

Follow me on social media
