PhD Projects

Students who put equal effort into their studies often do not gain equal awards. This phenomenon, known as the degree-awarding gap, has been observed across multiple subgroups of students, for example by ethnicity, gender and disability. The design of learning experiences can easily drift towards the needs of majority groups, while the needs of minority groups are left behind because their voices are heard less. Awarding gaps affect society as a whole: people with lower educational attainment find it harder to secure employment, are more dependent on financial support and are more likely to accumulate debt.

This PhD investigates how Learning Analytics (LA) can be used to address awarding gaps and enhance overall Equality, Diversity and Inclusion (EDI) in education. Developing, using and analysing a variety of LA components (tools and approaches) as part of this PhD helps to better understand (a) where current awarding gaps lie and how to locate them systematically, (b) how LA components can be used to tackle the awarding gaps, and (c) how fair existing LA components that aim to help students are across all students.

Early analysis revealed that tutors' use of the Predictive Learning Analytics (PLA) tool at the Open University to detect at-risk students increased the overall pass rate from 61% to 64% on three STEM modules. The most positive impact was measured for students from low socio-economic status backgrounds and for Black, Asian and Minority Ethnic (BAME) students. However, the prediction models are not equally fair to all students across ethnicity, gender and disability, and subsequent runs with different configurations of the prediction models showed different impacts on fairness. This suggests the tool could be made fairer and even more effective in supporting student success, especially for students affected by the awarding gaps.
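As a rough illustration of what checking a prediction model's fairness across subgroups can look like, the sketch below compares false negative rates (at-risk students the model failed to flag) by group. It is a generic Python sketch with hypothetical column names and data, not the PLA tool's actual code or schema.

```python
# A minimal sketch of a per-group fairness audit for an at-risk
# prediction model. Column names and data are hypothetical.
import pandas as pd

def group_fairness_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare false negative rates (truly at-risk students the model
    predicted as safe) across student subgroups."""
    rows = []
    for group, sub in df.groupby(group_col):
        at_risk = sub[sub["actually_failed"] == 1]
        fnr = (at_risk["predicted_at_risk"] == 0).mean() if len(at_risk) else float("nan")
        rows.append({"group": group, "n": len(sub), "false_negative_rate": fnr})
    return pd.DataFrame(rows)

# Hypothetical example data.
df = pd.DataFrame({
    "ethnicity":         ["A", "A", "B", "B", "B", "A"],
    "predicted_at_risk": [1, 0, 0, 0, 1, 1],
    "actually_failed":   [1, 0, 1, 1, 1, 1],
})
print(group_fairness_report(df, "ethnicity"))
# Group B's higher false negative rate would mean its at-risk students
# are missed more often, i.e. they benefit less from the intervention.
```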

Vaclav Bayer

At its heart, Learning Analytics (LA) is founded on monitoring learning behaviours linked with the personal details of learners. A problem raised with personal analytics in general is 'dataveillance': the systematic use of software to monitor the actions of one or more persons online. Although educational institutions such as the OU apply ethical policies and checks, concerns remain about possible negative impacts on students, and these can sometimes overlap with EDI issues.

For example, in a recent case in the US, a monitoring tool for an online education course flagged that a 15-year-old African-American girl was 'not attending school' during the Covid crisis, which led to the child, who had an Attention Deficit Hyperactivity Disorder (ADHD) diagnosis, being sent to a juvenile detention centre. This PhD explores ways in which Learning Analytics can be implemented in a privacy-preserving manner. In particular, the machine learning models required for LA-based predictions would be trained on data that remains in the ownership and under the control of the student concerned. More technically, a decentralised architecture is being developed that retains the verifiability of the data while empowering the student.
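One common way to realise such a decentralised setup is federated learning, where only model updates leave the student's device and raw data never does. The following is a minimal, generic sketch under that assumption; it is not the specific architecture being developed in this PhD.

```python
# An illustrative federated-averaging round: each student's data stays
# local, and only locally computed weight updates are shared and averaged.
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One gradient step of linear regression on a student's own data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, student_datasets):
    """Average the locally updated weights; raw (X, y) never leaves home."""
    updates = [local_update(weights, X, y) for X, y in student_datasets]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three students, each holding their data privately.
datasets = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    datasets.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, datasets)
print(w)  # approaches [2.0, -1.0] without pooling any raw data
```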

Audrey Ekuban

This research is on intersectionality in hate speech detection, focusing on “Misogynoir” – a specific type of hatred experienced by Black women.

“Misogynoir” was coined by Moya Bailey and popularised by Trudy to describe “the anti-Black racist misogyny that Black women experience”. It is misogyny directed at Black women, at the intersection of race and gender. The concept of misogynoir is exclusive to Black womanhood: women of other races cannot experience it, although individuals of any gender or ethnicity can perpetuate it. The hypersexualisation of Black women, and stereotypes that characterise them as angry, unreasonable or extraordinarily strong, are examples of misogynoir that affect the health, safety and well-being of Black women and girls.

This research aims to explore how misogynoir manifests online and how it could be mitigated, since existing hate speech detection technologies neither address this type of hate nor protect Black women accordingly. Research has shown that, although social networking sites have created automated techniques for addressing hate speech, these approaches do not perform effectively for specific marginalised groups, such as Black women, or for types of intersectional hate, such as Islamophobia and antisemitism. Exploring ways to help platforms reduce the amount of hate Black women receive is therefore crucial. The methodology for this research is multidisciplinary, combining social, computational and linguistic approaches. In addition, the study will help inform policies on hateful conduct online and expand knowledge of intersectional hate.

Joseph Kwarteng

This project studies the different biases that can be found in the artificial intelligence models used daily in financial services, especially in the services offered by our partners at Visa Europe. The objective is to identify the most frequent types of bias embedded in these financial systems and to explore the best methods to evaluate them, mitigate them and prevent them from recurring in the future.
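For illustration, one widely used evaluation of this kind is the disparate impact ratio (the "four-fifths rule"). The sketch below applies it to hypothetical loan-approval decisions; the column names, data and threshold are assumptions for the example, not Visa Europe's systems or data.

```python
# A minimal sketch of the disparate impact ratio on hypothetical
# loan-approval decisions. All names and figures are illustrative.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str,
                     outcome_col: str, privileged: str) -> pd.Series:
    """Ratio of each group's approval rate to the privileged group's.
    Values below ~0.8 are conventionally flagged as potential bias."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[privileged]

df = pd.DataFrame({
    "group":    ["X"] * 10 + ["Y"] * 10,
    "approved": [1, 1, 1, 1, 1, 1, 1, 1, 0, 0,   # group X: 80% approved
                 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],  # group Y: 40% approved
})
print(disparate_impact(df, "group", "approved", privileged="X"))
# X -> 1.0, Y -> 0.5: below the 0.8 threshold, so flagged for review
```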

Angel Pavon

Online hate speech is a mirror of social inequalities and generally spreads faster and more widely than other forms of conversation.

Within the EU NoBIAS Project (https://nobias-project.eu/), this work examines current computational methods for automatic hate speech detection and aims to develop data-centric solutions that reduce the spread of content attacking vulnerable groups or individuals.

Paula Reyero-Lobo