By Sonja Anna Sartys, Junior Researcher

 

Apparently, it’s that time of the year again: winter. The weather is horrible, it gets dark around noon, and everyone is depressed. This is the time when most people are stuck inside and yearn for the warmth and comfort of another human being to offset the gruesome conditions of the outside world. And in modern times, most people are used to doing almost anything from their cozy couch at home: reading the news, watching a movie, shopping and, for our purposes, also online dating.  

 

In this blog post, I would like to present my research topic, “Discriminatory Behavior of AI in Dating Apps”. To explain my approach and current research findings, I will guide you through a brief history of dating apps, the algorithms they use, and the appearance of AI technology in this sector. 

Online dating has been around longer than you might think: even my mom (white hair) met her partner on the internet. Desktop-based online dating is a concept that started in 1995. After some years of being a “shameful tool”, with people making up stories of how they “really met”, online dating is nowadays en vogue. Due to the rocketing user numbers over time, the technology evolved towards a more user-friendly approach. The first app that distanced itself from an overwhelming UI, and the first to introduce “swipes” as an approach to gamification, was Tinder in 2012. The criticism was not long in coming. Deciding about a match based (mostly) on the attractiveness of a person, while being able to make a great number of such decisions in quick succession, paints a picture of excessive consumption of potential matches that fits perfectly into the consumerism of our times. Nevertheless, the easy accessibility of the app made it extremely popular and changed the dating app world completely, inspiring a whole new generation of dating apps with many more to follow. The American company Match Group is the mothership of dating apps, specializing in the creation of apps for each distinct consumer group. 

After this brief history of online dating platforms, we might ask ourselves: BUT how does it actually work? I honestly don’t know how many people ask themselves this, BUT here we go …  

 

Technology in Dating Apps: From Algorithms to AI 

The focus of dating apps and other matchmaking apps is to make (the perfect!) match. Based on a range of factors such as the user’s preferences and online behavior, an algorithm matches users with one another. An algorithm is essentially a “how should I decide, and about what?” manual for a computer: a precise set of steps it follows to reach a decision. Several algorithms are known to form the basis of certain dating apps’ functionality. 

Tinder and Bumble – the self-proclaimed feminist dating app – both operate with an algorithm based on the Elo rating system, named after its creator Arpad Elo, a Hungarian-American physics professor. It is best known for calculating the skill levels of chess players. Depending on how often you – as a user – receive right swipes (the “good” swipes), your score rises or falls and changes the outcome of future matches. Depending on which desirability level you reach during your time as a user, you will only be exposed to a certain pool of users in the range of your desirability rank. 
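To make this mechanic less abstract, here is a minimal sketch of an Elo-style update in Python. Treating a right swipe as a “win” and using a K-factor of 32 are my own illustrative assumptions; the apps’ actual scoring details are not public.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability, under the Elo model, that profile A 'wins'
    (i.e. receives a right swipe) in an encounter with profile B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))


def updated_rating(rating_a: float, rating_b: float,
                   got_right_swipe: bool, k: float = 32) -> float:
    """Profile A's new rating after one swipe interaction with profile B."""
    actual = 1.0 if got_right_swipe else 0.0
    return rating_a + k * (actual - expected_score(rating_a, rating_b))


# A right swipe from a highly rated profile moves your score more
# than one from a lower-rated profile; a left swipe lowers it.
print(updated_rating(1200, 1500, got_right_swipe=True))   # ≈ 1227
print(updated_rating(1200, 1100, got_right_swipe=True))   # ≈ 1212
print(updated_rating(1200, 1500, got_right_swipe=False))  # ≈ 1195
```

The key property is visible in the example output: approval from someone the system already rates as highly “desirable” boosts your own score far more than approval from someone rated lower, which is exactly how a desirability hierarchy emerges over time.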

Another algorithm, used by Hinge, another part of Match Group, is the Gale–Shapley algorithm, which is the solution to the so-called stable matching problem, also known as the stable marriage problem. The idea is that the resulting matching is stable: no two people would both prefer each other over the partners they ended up with. In practice, these pairs tend to consist of people who are mutually alike. Consequently, this algorithm can create a problem of “social class consolidation”: to fulfil the algorithm’s goal of a stable match, people with, for example, a similar educational background would be matched with one another.
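For readers who want to see the mechanics, here is a minimal sketch of the textbook Gale–Shapley procedure on a toy set of preference lists. The names and preferences are made up for illustration; this is the classic algorithm, not a description of Hinge’s actual system.

```python
def gale_shapley(proposer_prefs: dict, receiver_prefs: dict) -> dict:
    """Classic Gale-Shapley stable matching.
    Each dict maps a person to a list of the other group's members,
    ordered from most to least preferred."""
    free = list(proposer_prefs)                     # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}    # index of next proposal target
    engaged = {}                                    # receiver -> current proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}  # each receiver's ranking
            for r, prefs in receiver_prefs.items()}

    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:                        # receiver is free: accept
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:      # receiver prefers newcomer
            free.append(engaged[r])
            engaged[r] = p
        else:                                       # receiver rejects proposal
            free.append(p)
    return {p: r for r, p in engaged.items()}


# Toy example with invented names and preferences.
proposers = {"Ada": ["Noor", "Lea"], "Ben": ["Noor", "Lea"]}
receivers = {"Noor": ["Ben", "Ada"], "Lea": ["Ada", "Ben"]}
print(gale_shapley(proposers, receivers))  # {'Ada': 'Lea', 'Ben': 'Noor'}
```

Note that the algorithm needs complete preference rankings as input; it is precisely in how an app estimates those preferences, from profiles, swipes and inferred attributes, that bias can creep in.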

Algorithms form the basis of artificial intelligence (AI) technology, serving as the underlying framework that enables machines to learn, interpret, and respond to complex data. These algorithmic sets of rules and calculations are meticulously designed to process information, make decisions, and solve problems, much like the human brain. By analyzing vast datasets, these algorithms can identify patterns, predict outcomes, and make informed decisions, thereby driving the functionality of AI systems.

The usage of AI in dating apps is a relatively new phenomenon. Before it took off in the summer of 2023, AI was first introduced by dating app companies around 2018/19. The first enterprises to incorporate AI technology into their dating apps were Iris and Badoo, which use it to determine which facial features a user is most attracted to and then match them with potential partners based on that preference. Badoo also offers the possibility to upload “Lookalikes”, photos of a certain type of person the user is interested in finding on the app. 
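The companies do not publish how exactly this works, but one plausible, much-simplified way to build such a feature is to represent each profile photo as a face-embedding vector, average the embeddings of profiles a user liked, and rank candidates by similarity to that average. The sketch below assumes such embeddings already exist; it is an illustration of the general idea, not Iris’s or Badoo’s actual implementation.

```python
import numpy as np

def preference_vector(liked_embeddings: np.ndarray) -> np.ndarray:
    """Estimate a user's 'type' as the mean embedding of liked profiles."""
    return liked_embeddings.mean(axis=0)

def rank_candidates(pref: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Return candidate indices ordered by cosine similarity to the preference."""
    sims = candidates @ pref / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(pref) + 1e-9
    )
    return np.argsort(-sims)

# Toy data: 5 liked profiles and 4 candidates, each as a random 8-dim "embedding".
rng = np.random.default_rng(0)
liked = rng.normal(size=(5, 8))
cands = rng.normal(size=(4, 8))
print(rank_candidates(preference_vector(liked), cands))
```

Even in this toy form, the design choice is visible: whatever biases are baked into the embedding model and into the user’s own swipe history are passed straight through to the ranking.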

As with most things in life, it is never just black and white. The same goes for AI in dating apps: it surely can help users find many amazing matches. Still, problems emerge from AI – assuming visual recognition is used – such as one of AI’s “traditional problems”, racial bias, caused by the homogeneous data sets used to train it. Companies might acknowledge this issue by publishing a statement on their websites, but looking at Tinder, for example, certain statements can be interpreted in many ways and do not seem very transparent. According to the company’s website: “Generative AI technologies should not perpetuate harmful biases or unfair practices. As we continue to dive deeper into generative AI, we are being thoughtful on how we can layer in protections throughout our apps’ development lifecycles, such as regular audits and algorithmic adjustments.” In summer 2023, Match Group announced that they would soon introduce more AI features to their products to enhance usability and “help solve key dating pain points”.

The problem of discriminatory behavior of algorithms and AI in dating apps is not only a problem of technology. Technology is, after all, a tool meant to help people in certain areas of life and make things easier for them. With this in mind, we must ask: Who produces technology, and for what purpose? Is it the technology in use that is harmful, or is it also the people making use of it?

Regarding AI technologies, the National Institute of Standards and Technology (NIST) published a report that focuses on understanding and managing biases in artificial intelligence. They suggest that we should not only look at the technical side of AI, like the data it is trained on, but also consider the wider social influences on how AI is developed. In their report “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” NIST points out the importance of building AI that users can trust. 

AI biases are well known to stem from how the AI is programmed and from the training data it learns from. For instance, if an AI is trained on data that does not fairly represent all genders or ethnic groups, it might make unfair decisions. The NIST report acknowledges this and adds that fully understanding AI bias requires us to also think about human and systemic biases. These are biases that come from the way institutions operate or from our own personal prejudices.  
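To illustrate the first kind of bias with a deliberately simplified, entirely synthetic example: the little script below “trains” a one-number decision rule on data in which one group is heavily over-represented, and then measures accuracy per group. The data, groups and rule are invented for illustration only and do not describe any real dating app.

```python
import random
random.seed(42)

# Synthetic training data: 95% of profiles belong to group "A", 5% to group "B",
# mimicking an unrepresentative dataset. The pattern linking the feature to a
# past "like" is (artificially) different for the two groups.
def sample(group):
    x = random.gauss(0, 1)
    liked = (x > 0) if group == "A" else (x > 1.5)
    return group, x, liked

train = [sample("A") for _ in range(950)] + [sample("B") for _ in range(50)]

# "Training": pick the single cutoff that maximizes accuracy on the pooled data.
candidates = [x for _, x, _ in train]
best_cut = max(candidates,
               key=lambda c: sum((x > c) == liked for _, x, liked in train))

# Evaluation per group: the learned cutoff fits the majority group.
for g in ("A", "B"):
    rows = [(x, liked) for grp, x, liked in train if grp == g]
    acc = sum((x > best_cut) == liked for x, liked in rows) / len(rows)
    print(f"group {g}: accuracy {acc:.0%}")
# Typical output: group A close to 100%, group B noticeably lower.
```

Because the training pool is dominated by one group, the single rule the “model” learns works well for that group and poorly for the other; this is the mechanism behind the unrepresentative-data problem NIST describes.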

Organizations dealing with ethical AI concerns are already working on identifying problematic technologies and researching fairer solutions. One worth mentioning is the Algorithmic Justice League, founded in 2016, whose mission is to raise awareness about the impacts of AI. One of its members is Cathy O’Neil, the author of “Weapons of Math Destruction”, who actively works to improve algorithmic systems.  

It is very inspiring to see how the research on this topic is evolving. I would therefore like to contribute to the ongoing discussion about biases in the technology used in dating apps with my own research project. My goal is to find out more about user interactions and their experiences with the dating apps they use and, if possible, to talk to the people responsible for the technology. The resulting data will be translated into a 3D-printed object to make the research results tangible and to create something more accessible that represents the impact of technology that otherwise just runs in the background. With my research topic, I strive to raise awareness of discriminatory behavior in tech.

 

You can look forward to another blog article in May, in which the following questions will be answered through user interviews: Does anyone feel they are not represented in the right way? Who feels unfairly treated on their dating app of choice? How do they feel about preferences? 

 

Sources

Images: 

All the images were generated by an AI model. 

References: 

Mimi. (2022, December 13). The algorithms of dating apps, explained. QMIND Technology Review, Medium. https://medium.com/qmind-ai/the-algorithms-of-dating-apps-explained-52e851394b23 

Wikipedia contributors. (2023, October 29). Gale–Shapley algorithm. Wikipedia. https://en.wikipedia.org/wiki/Gale%E2%80%93Shapley_algorithm 

Khalatian, I. (2022, October 26). How to overcome user and AI bias in dating apps. Forbes. https://www.forbes.com/sites/forbestechcouncil/2022/10/26/how-to-overcome-user-and-ai-bias-in-dating-apps/

Matchmaking 2.0: How AI is revolutionizing online dating. (2023, March 17). Forbes. https://www.forbes.com/sites/forbestechcouncil/2023/03/17/matchmaking-20-how-ai-is-revolutionizing-online-dating/ 

Match Group. (n.d.). AI principles. https://mtch.com/ai-principles 

 
