The AI projects listed below have aspects related to EDI and illustrate how we emphasise EDI within our ethical research approach.
A 24/7 personal AI Assistant for every student.
Funded by the Innovation Foundry and carried out in collaboration with colleagues across the OU, including IET, OpenLearn and IT, this project aims to use Generative AI to build a 24/7 personal digital assistant for students.
AIDA will be able to explain and re-explain tricky concepts and, on demand, instantaneously generate quizzes and give feedback on them.
CORE hosts the world’s largest collection of open access research, benefiting researchers, libraries, funders, and institutions globally. Its services enable text mining, business intelligence, compliance monitoring, and research analytics, supporting diverse use cases and making CORE a valuable infrastructure for the research community.
Our project aims to enhance fairness in UK Higher Education by addressing bias in AI-driven learning analytics. Building on the Open University's OUAnalyse system, it will create a framework of best practices and compliance for fair and responsible AI. Collaborating with regulatory bodies, the project supports ethical AI in line with UK regulations.
ORBIS seeks to bridge the gap between citizens and policymakers in Europe through scalable digital tools promoting participatory democracy. It will test six initiatives, including youth policy engagement, AI-driven dialogues, and sustainable city planning, across local to European levels, validating innovative democratic methods for modern citizen involvement.
OU Analyse is a system powered by machine learning methods for early identification of students at risk of failing. Predictions of each student's risk of failing their next assignment are updated weekly and made available to course tutors and the Student Support Teams so they can consider appropriate support. The overall objective is to significantly improve the retention of OU students.
Can responsible Generative AI (GenAI) lead to improved student outcomes? In SAGE-RAI, we explore this question using education-oriented GenAI tools applied by our partners. We are inspired by Bloom's 1984 study on the efficacy of 1-to-1 teaching and by the potential for cost-effective, scalable personalised education, and we aim to unlock this potential. Since tutors cannot accommodate large cohorts individually, we investigate how responsible GenAI can enhance tutoring, offer more personalised learning experiences and generate student feedback. Our goal is to create a platform supporting assessment and student guidance while responsibly applying GenAI, addressing the challenges of misinformation, copyright and bias. The journey embodies educational innovation for better outcomes.
How can we create a more just society with AI?
In this project, therefore, we are seeking a new paradigm for thinking about what can and should be done with AI technology: one that cannot be reduced to cultural complexity, and that takes into account the real-world forces (such as power and wealth) that help to predict what is likely to happen.
This research is funded by a UKRI Future Leaders Fellowship (round 6).
The main objective of the project is to create a GATEKEEPER that connects healthcare providers, businesses, entrepreneurs, elderly citizens and the communities they live in, in order to create an open, trust-based arena for matching ideas, technologies, user needs and processes, aimed at ensuring healthier independent lives for ageing populations. Specifically, the team will test the feasibility and appropriateness of in-home robots and community robots that can provide citizens with healthcare support and information, as well as a direct link to healthcare professionals.
The research addressed the underexplored phenomenon of misogynoir and its impact on Black women's experiences online. An Amnesty International survey revealed that 41% of women who experienced online bullying felt their physical safety was compromised. The study aimed to investigate how misogynoir manifests online and sought ways to mitigate its prevalence while ensuring participants' confidentiality.
The NoBIAS research project focused on developing AI systems that minimise bias in decision-making processes. It explored the ethical and legal challenges associated with AI, aiming to create fairness-aware algorithms and improve transparency. The project trained 15 Early-Stage Researchers in a multidisciplinary approach, preparing them for leadership roles across various sectors while ensuring social good in AI applications.