AI4EDI is a research group that focuses on using AI technologies to tackle EDI-related issues (such as degree-awarding gaps for certain minorities) and on investigating EDI-related issues within AI itself, such as racial bias in AI and ML systems.
AI is here. We interact with AI technology every time we search online, interact on a social media platform or use a credit card. We know that AI can be a force for good; for example, OU Analyse uses machine learning to help identify students at risk of failing. Given the ubiquity of this technology, it is therefore important that we understand its potential impact, good and bad, for all users. Within AI4EDI we will highlight EDI issues related to AI research and innovation: in particular, how AI can help address EDI issues, such as the awarding gap for Black students, and the EDI challenges that can be present in AI systems, such as data and decision-making bias.
The AI4EDI team are research staff and students who share a passion for making a societal impact through our collective AI research, and who see EDI as a core principle for society. As such, we are very happy to be able to bring these two areas together.
The core objective of NoBIAS is to research and develop novel methods for AI-based decision making without bias.
Despite equal effort made by students on their study journeys, we often see unequal awards gained. This phenomenon, known as the degree-awarding gap, has been observed across multiple subgroups of students in categories such as ethnicity, gender and disability. The design of learning experiences can easily be driven by the needs of majority groups, while the needs of minority groups are left behind because their voices are not heard as much. Awarding gaps affect society as a whole: people with lower educational attainment find it harder to secure a job, are more dependent on financial support, and are more likely to end up in debt.
This PhD investigates how we can use Learning Analytics (LA) to address awarding gaps and enhance overall Equality, Diversity and Inclusion (EDI) in education. The development, use and analysis of a variety of LA components (tools and approaches) as part of this PhD helps us better understand (a) where current awarding gaps are and how to locate them systematically, (b) how LA components can be used to tackle the awarding gaps, and (c) how fair existing LA components that aim to help students are across all students.
Results of the early analysis revealed that the use of the Predictive Learning Analytics (PLA) tool by tutors at the Open University to detect at-risk students increased the overall pass rate from 61% to 64% on three STEM modules. The most positive impact was measured for students from low socio-economic status backgrounds and Black, Asian and Minority Ethnic (BAME) students. However, the prediction models are not equally fair to all students across the categories of ethnicity, gender and disability, and subsequent runs of the prediction models under different configurations showed different impacts on their fairness. This suggests the potential of making the tool fairer and even more effective in supporting student success, especially for those affected by the awarding gaps.
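The kind of per-group fairness check described above can be illustrated with a simple group metric. The sketch below computes the rate of positive ("will pass") predictions per group and a demographic-parity gap between groups; the group labels and predictions are entirely hypothetical, and this is not the PLA tool's actual evaluation code.

```python
# Illustrative sketch: one way to measure group fairness of a pass/fail
# prediction model. The data and group labels here are hypothetical.

from collections import defaultdict

def selection_rates(groups, predictions):
    """Fraction of positive ('will pass') predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += p
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(groups, predictions):
    """Largest difference in selection rate between any two groups.
    0.0 would mean equal treatment under this particular metric."""
    rates = selection_rates(groups, predictions)
    return max(rates.values()) - min(rates.values())

groups      = ["A", "A", "A", "B", "B", "B", "B", "B"]
predictions = [ 1,   1,   0,   1,   0,   0,   0,   1 ]  # 1 = predicted to pass
gap = demographic_parity_gap(groups, predictions)  # group A: 2/3, group B: 2/5
```

Demographic parity is only one of several fairness criteria; comparing model configurations under different criteria, as the analysis above does, can surface trade-offs that a single overall accuracy figure hides.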
At its heart, Learning Analytics (LA) is founded on monitoring learning behaviours linked to the personal details of learners. A problem that has been raised with personal analytics in general is ‘dataveillance’: the systematic use of software to monitor the actions of one or more persons online. Although educational institutions such as the OU apply ethical policies and checks, there remain concerns about possible negative impacts on students, and these can sometimes overlap with EDI issues.
For example, in a recent case in the US, a monitoring tool for an online education course flagged that a 15-year-old African-American girl was ‘not attending school’ during the Covid crisis, and this led to the child, who had an Attention Deficit Hyperactivity Disorder (ADHD) diagnosis, being sent to a juvenile detention centre. This PhD is exploring ways in which Learning Analytics can be implemented in a privacy-preserving manner. In particular, the machine learning models required for LA-based predictions would be trained on data that remains in the ownership and under the control of the student concerned. More technically, a decentralised architecture is being developed which retains the verifiability of the data whilst empowering the student.
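One family of techniques consistent with such a decentralised design is federated learning, where only model updates are shared and the raw data never leaves the student's device. The toy sketch below shows federated averaging on a one-parameter model; it is an illustrative assumption on our part, not the architecture actually developed in this PhD.

```python
# Hedged sketch: federated averaging as one way to train a shared model
# while each student's raw data stays local. Toy example, not the PhD's
# actual implementation.

def local_gradient_step(weights, data, lr=0.1):
    """One gradient step of least-squares y ~ w*x on one student's data.
    The raw (x, y) pairs are used only here and never transmitted."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(local_weights):
    """The server aggregates weight updates only, never raw student data."""
    return sum(local_weights) / len(local_weights)

# Each "student device" holds its own data; only weights are exchanged.
student_data = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 2.1), (3.0, 6.0)]]
w = 0.0
for _ in range(50):  # communication rounds
    updates = [local_gradient_step(w, d) for d in student_data]
    w = federated_average(updates)
# w converges toward ~2.0, the slope underlying the toy data
```

A real deployment would add the verifiability layer the paragraph mentions (so contributions can be audited without exposing the underlying records), which this sketch deliberately omits.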
This research is on intersectionality in hate speech detection, focusing on “Misogynoir” – a specific type of hatred experienced by Black women.
“Misogynoir” was coined by Moya Bailey and popularised by Trudy to describe “the anti-Black racist misogyny that Black women experience”. It is misogyny directed at Black women, at the intersection of race and gender. The concept of misogynoir is exclusive to Black womanhood, and women of other races cannot experience it, but individuals of any gender or ethnicity can perpetuate it. The hypersexualisation of Black women, and stereotypes that characterise Black women as angry, unreasonable or extraordinarily strong, are examples of misogynoir that impact the health, safety and well-being of Black women and girls.
This research aims to explore how misogynoir manifests online and how it could be mitigated, since existing technologies for hate speech detection do not address this type of hate or protect Black women accordingly. Research has shown that, although social networking sites have created automated techniques for addressing hate speech, these approaches do not perform effectively for specific marginalised groups, such as Black women, or for types of intersectional hate, such as Islamophobia and Antisemitism. Therefore, exploring ways to help platforms reduce the amount of hate Black women receive is crucial. The methodology for this research is multidisciplinary, combining social, computational and linguistic approaches. In addition, the study will help inform policies around hateful conduct online and expand knowledge of intersectional hate.
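The failure mode described above, where a detector looks adequate overall but misses hate aimed at a specific group, can be made concrete with a per-group evaluation. In the sketch below, the "detector", the placeholder blocklist terms and the sample posts are all hypothetical; real misogynoir often uses coded language that keyword approaches miss.

```python
# Illustrative sketch of a per-group evaluation of a hate speech detector.
# The detector, blocklist terms and samples below are entirely hypothetical.

def naive_detector(text, blocklist=("slur1", "slur2")):
    """A toy keyword matcher standing in for a real moderation model."""
    return any(term in text.lower() for term in blocklist)

def recall_by_group(samples):
    """Fraction of truly hateful posts caught, per targeted group.
    samples: list of (text, targeted_group, is_hateful) tuples."""
    caught, total = {}, {}
    for text, group, hateful in samples:
        if not hateful:
            continue
        total[group] = total.get(group, 0) + 1
        caught[group] = caught.get(group, 0) + int(naive_detector(text))
    return {g: caught[g] / total[g] for g in total}

samples = [
    ("you are a slur1", "group_a", True),      # explicit slur: caught
    ("slur2 comment", "group_a", True),         # explicit slur: caught
    ("coded stereotype insult", "group_b", True),  # coded hate: missed
    ("nice weather today", "group_b", False),
]
recall = recall_by_group(samples)  # high for group_a, zero for group_b
```

Breaking recall down by targeted group, rather than reporting a single aggregate score, is what exposes the gap that motivates this research.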
This project studies the different biases that can be found in the artificial intelligence models used daily in financial services, especially in those offered by our partners at Visa Europe. The objective is to identify the most frequent types of bias embedded in these financial systems and to explore the best methods to evaluate them, mitigate them and prevent them from recurring in the future.
Online hate speech mirrors social inequalities and generally spreads faster and more broadly than other forms of conversation.
Within the EU NoBIAS Project (https://nobias-project.eu/), this work examines current computational methods for automatically detecting hate speech and aims to develop data-centric solutions to reduce the spread of content attacking vulnerable groups or individuals.