AI models are built with the biases already present in the world embedded into them. The training data inherently includes many negative aspects of society which, if left unchecked, will continue to appear in new content generated by these models.
For example, a recent (now deleted) post from Buzzfeed showed AI-generated Barbies from each country. The images were rife with biases and inaccuracies, which shows how important it is for AI-generated content to be reviewed by a human and, better yet, filtered to reduce the biases and stereotypes being fed back into society.
Chris McClean, Global Lead for Digital Ethics at Avanade, sees an additional layer to the conversation in the ethical and societal impacts. Because generative AI engines are trained on content from the Internet, they absorb the Internet’s biases, stereotypes, and sometimes precarious points of view into their knowledge base and treat them as facts.
Biases prevalent on the internet originate with, and are perpetuated by, people. Unconscious and even conscious biases are unfortunately part of the fabric of society.
There have been several class action lawsuits filed in the western world where decisions laced with bias have negatively affected groups of individuals. For example, in December 2023 several individuals filed a lawsuit against Navy Federal Credit Union, a US credit union, over biased mortgage lending practices. A CNN investigation had uncovered a significant disparity in mortgage approvals when comparing Black and Latino applicants to their Caucasian counterparts.
Another example is a China-based online tutoring company that settled its lawsuit in August 2023. The United States Equal Employment Opportunity Commission filed suit against the company after finding that the AI tool it used to filter resumes/CVs rejected female applicants aged 55 or older and male applicants aged 60 or older.
These recent examples of both human and AI-based bias show a negative impact not only on groups of individuals but ultimately on the companies involved as well.
When completing an AI course recently, I was asked a question about good uses of AI. The question posed was “Is resume filtering a good use case for AI?” The correct answer for the course was “Yes”. However, my personal view for areas where a human life can be affected is that the answer should be “No” or “It depends”.
I would say “No” or “It depends” because of several factors:
Underlying Data
When planning these projects, the data is a key part of the solution. However, we tend to focus on the quality of the data, how it is used for training, and the amount of data available. We often do not evaluate the data itself for biases, or check whether the model’s results show bias during testing. Many software teams or customers also do not have a team diverse enough to identify problematic trends.
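As an illustration of what checking model results for bias during testing could look like, here is a minimal sketch of my own (not taken from any specific toolkit or from the cases above). It assumes test predictions sit in a table with hypothetical `group` and `approved` columns, and it flags groups whose positive-outcome rate falls well below the best-off group’s; the 0.8 ratio is an illustrative threshold, not a legal or statistical standard.

```python
# A minimal sketch of a group-level disparity check during model testing.
# Column names ("group", "approved") and the 0.8 ratio are illustrative
# assumptions, not prescriptions.
import pandas as pd

def selection_rate_report(results: pd.DataFrame,
                          group_col: str = "group",
                          outcome_col: str = "approved",
                          min_ratio: float = 0.8) -> pd.DataFrame:
    """Compare positive-outcome rates across groups in model test results."""
    rates = results.groupby(group_col)[outcome_col].mean()
    report = rates.to_frame("selection_rate")
    # Ratio of each group's rate to the best-off group's rate
    report["ratio_to_max"] = report["selection_rate"] / report["selection_rate"].max()
    report["flagged"] = report["ratio_to_max"] < min_ratio
    return report

# Example usage with toy test predictions
test_results = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "C", "C"],
    "approved": [1,   1,   0,   1,   0,   1,   0],
})
print(selection_rate_report(test_results))
```

A check like this only surfaces a disparity; deciding whether it reflects bias, and what to do about it, still needs a human team diverse enough to interpret it.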
Team Diversity
The Body Mass Index (BMI) is a key metric used to assess an individual’s risk of developing diabetes, heart disease, and other weight-related illnesses. People with a higher BMI are often asked to pay higher insurance costs or can sometimes be uninsurable. However, studies have shown that relying on BMI in this way is flawed. People of Asian descent have higher body fat percentages and are often at risk of weight-related diseases at a lower BMI than their Caucasian counterparts, whereas Black people tend to have more lean muscle and lower body fat, which means a higher BMI but a lower risk of weight-related diseases. Because the BMI standard is centered around Caucasian males, both Asian and Black people are negatively affected (sometimes to the detriment of their health).
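To make this concrete, here is a small sketch of my own using the standard BMI formula (weight in kilograms divided by height in meters squared). The group labels and cut-off values below are purely illustrative assumptions, not clinical guidance; the point is only that a one-size-fits-all threshold and a group-aware one can classify the same person differently.

```python
# A hedged sketch of why a single BMI cut-off can mislead.
# The group labels and thresholds are illustrative assumptions only;
# real cut-offs should come from clinical guidance, not this example.

def bmi(weight_kg: float, height_m: float) -> float:
    """Standard BMI formula: weight in kilograms divided by height in meters squared."""
    return weight_kg / (height_m ** 2)

SINGLE_THRESHOLD = 25.0               # one-size-fits-all "elevated risk" cut-off
GROUP_THRESHOLDS = {"A": 23.0,        # hypothetical group at risk at a lower BMI
                    "B": 27.0,        # hypothetical group with more lean mass
                    "default": 25.0}

def elevated_risk(weight_kg: float, height_m: float, group: str | None = None) -> bool:
    """Classify risk with either a single threshold or a group-aware one."""
    if group is None:
        threshold = SINGLE_THRESHOLD
    else:
        threshold = GROUP_THRESHOLDS.get(group, GROUP_THRESHOLDS["default"])
    return bmi(weight_kg, height_m) >= threshold

# The same person can be classified differently depending on whether
# group-specific evidence is taken into account.
print(elevated_risk(72, 1.70))             # False under the single threshold
print(elevated_risk(72, 1.70, group="A"))  # True under the group-aware threshold
```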
In a project that aims to automate the approval of insurance applications, the data being used would undoubtedly be problematic: although the problems with BMI have been documented and discussed for some time, no changes have been made to how it impacts the application process. Looking at the data alone, there would therefore be no obvious problem, and the likely end result would be a continuation of the existing bias in the process.
These factors can of course be addressed.
Ensuring human oversight of AI-based decisions that affect other humans would be part of the solution. However, we should also ensure that the individuals providing oversight are unbiased, well informed, and part of a diverse team able to learn from and rely on each other.
Therefore, my decision on whether to implement AI and automation in areas that affect the livelihood, health, and wellbeing of individuals would depend on a company’s willingness to address these factors.
So, should the conclusion be that assessing an individual’s risk of developing weight-related diseases needs to take both BMI and origin into account, building a more accurate model of the risk rather than relying on a single parameter?
BMI is already taken into account. However, the BMI ranges don’t account for differences between races, and some insurers don’t even collect race, ethnicity, or language as data. Therefore, using BMI for weight-related diseases won’t be accurate for a subgroup of society. Fixing the data and reworking the model and its logic will help.