Responsibility in the AI Innovation Race
Insights from AI UK and RAI UK
If there’s one thing I took away from attending both the Responsible AI UK (RAI UK) event and AI UK over the last week, it’s that no one has all the answers yet - and maybe that’s a good thing. The talks weren’t polished corporate pitches or utopian visions of AI-led futures. They were raw, sometimes uneasy, often brilliant discussions about what AI is doing right now to our societies, institutions, and everyday lives.
At times, the conversations felt like watching people trying to build a bridge while already walking across it. The technology is moving fast, but the frameworks for understanding its impact - social, ethical, environmental - are still being worked out. Here are some of the discussions that stood out.
Can AI care?
The idea that AI should be empathetic is an odd one when you really think about it. Empathy, after all, is something we associate with human relationships, built over time and shaped by cultural and personal experiences. Can a machine, no matter how sophisticated, really be ‘empathetic’? And if so, whose version of empathy does it learn?
Andy McStay, lead researcher on the Automated Empathy - Globalising International Standards (AEGIS) project, made an important distinction:
Strong empathy: to truly feel and understand another’s emotions
Weak empathy: to respond in a way that appears empathetic, without real understanding
Both exist in human interactions all the time. But the problem comes when these lines get crossed - when weak empathy is mistaken for strong empathy, and AI is trusted with responsibilities that require a deeper level of care.
Right now, AI models tend to be built around the cultural norms of those who build them, with little room for the variety of ways people express and experience emotion. A system designed to recognise and respond to emotion in one setting might misinterpret it in another. In trying to make AI more ‘universal,’ we risk stripping away the nuance that makes human connection meaningful.
🔗 Check out their guidance on ethical considerations for emulated empathy in AI.
Whose truth is guiding AI?
There’s sometimes an assumption in AI that if we just collect enough data, the model will eventually land on the ‘right’ answers. But the truth is, the right answer depends on who is answering.
At AI UK, Phelim Bradley (CEO of Prolific) emphasised that AI requires not just representative data but also diverse perspectives in its evaluation. He highlighted that assessments of AI models can vary significantly based on the evaluator’s background. Without diversity in those conducting the evaluations, biases may go unnoticed, shaping the technology in ways that erase or harm minoritised perspectives.
Take the PRISM project, which studies how different cultural backgrounds shape AI evaluation. Their findings reinforce the importance of considering who provides feedback, how it is gathered, and where it applies, particularly in subjective and value-laden domains where perspectives vary across cultures and individuals.
Closing the AI transparency gap for SMEs
At AI UK, there was plenty of talk about Generative AI. But when asked, only 5–10% of the audience had actually interacted with an LLM API - meaning most people were engaging with Generative AI at the surface level, through chatbots and interfaces, rather than working with models in a more hands-on or technical capacity.
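As an aside for readers outside the field: ‘interacting with an LLM API’ means calling a model programmatically rather than typing into a chat window. Below is a minimal sketch, assuming an OpenAI-compatible chat-completions endpoint; the URL, model name, and environment variable are illustrative placeholders, not an endorsement of any particular vendor.

```python
# A minimal sketch of calling an LLM API directly, as opposed to using a
# chat interface. Assumes an OpenAI-compatible chat-completions endpoint;
# the URL, model name, and API key variable below are placeholders.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # or any compatible endpoint
API_KEY = os.environ["LLM_API_KEY"]  # hypothetical env var holding your key

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarise responsible AI in one sentence."},
        ],
        "temperature": 0.2,  # lower temperature for more consistent output
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```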
This disconnect is even more pronounced in small and medium-sized enterprises (SMEs). While large corporations are investing heavily in AI, many SMEs are still struggling to adopt it at all. The barriers are significant:
Limited access to data collection and validation at scale
A lack of clear AI ethics guidance tailored to SMEs
The additional effort and time required to implement responsible AI practices, which many SMEs simply don’t have the resources for
The relentless pace of AI advancements, making it difficult for SMEs to keep up, let alone consider best practices
What’s most concerning is that many SMEs aren’t even considering AI ethics yet - not because they don’t care, but because they don’t feel they have the control or resources to do so.
The Responsible Innovation in Generative AI (RAISE) project is working to bridge this transparency gap, developing guidelines to help SMEs in the UK and Africa navigate AI adoption responsibly and sustainably, with a better understanding of the risks and opportunities.
🔗 Explore their guidelines on Responsible Generative AI for SMEs.
Children and AI: the most affected, the least consulted
One of the most striking moments from AI UK came from the session about the Children’s AI Summit, facilitated by Mhairi Aitken (Ethics Research Fellow at the Alan Turing Institute), where young people spoke about how AI is shaping their lives. It was a reminder that, while adults are debating AI’s risks and benefits, children are already living with the consequences.
“You may be the ones making the decisions, but the consequences are being suffered by us.”
AI is being built into education, social spaces, and mental health support, but children are rarely consulted. The result is well-intended technology that can sometimes do more harm than good: reducing opportunities for social interaction, reinforcing stereotypes, or creating dependencies on automation where human support is needed.
🔗 Check out the takeaways from the Children’s AI Summit.
In education, for example, Khanmigo, Khan Academy’s AI tutor, was showcased as an example of AI done right. But how do we ensure AI in education is used ethically, not as a substitute for real learning but as a tool for deepening it? Researchers Joseph Kwarteng and Aisling Third (Open University) are wrestling with this question through the SAGE-RAI project - they are developing an open-source Retrieval-Augmented Generation (RAG) system and considering ways to apply it responsibly within education.
🔗 Here’s their guidance on responsible adoption of Generative AI for education.
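For context, RAG systems ground a language model’s answers in a curated set of source documents retrieved at question time, which is part of what makes the approach attractive for education. The toy sketch below illustrates the pattern only - it is not the SAGE-RAI implementation, and a real system would use embeddings and a vector store rather than naive word overlap.

```python
# A toy illustration of the Retrieval-Augmented Generation (RAG) pattern:
# retrieve the passages most relevant to a question, then ask the model to
# answer only from those sources. Corpus and scoring are deliberately simple.

CORPUS = {
    "photosynthesis.txt": "Photosynthesis converts light energy into chemical energy in plants.",
    "mitosis.txt": "Mitosis is the process by which a cell divides into two identical cells.",
    "gravity.txt": "Gravity is the force that attracts objects with mass toward one another.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the question (a stand-in
    for embedding similarity in a real system)."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt that constrains the model to the sources."""
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return (
        "Answer using only the sources below, and say so if they are insufficient.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How does a cell divide?"))
# The resulting prompt would then be sent to an LLM of choice.
```

Grounding the prompt in retrieved course material, and instructing the model to admit when that material is insufficient, is one practical way to keep an educational assistant from substituting confident fabrication for real learning.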
Another moment that stood out from the AI UK talk on the Children’s Summit was a simple but powerful statement from a young speaker:
“Our planet is priceless.”
AI has become a race - one driven by profit and power - but at what cost? These young speakers weren’t calling for progress to stop. Instead, they were asking for something often overlooked in the rush to innovate: responsibility. They recognised that AI can be a force for good, but only if it is developed with transparency, fairness, and care for both people and the planet.
As AI continues to shape the spaces children grow up in, their voices must not just be heard, but acted upon.
🔗 A must-read: the children’s manifesto for the future of AI, available here.
AI and the NHS: how do we get this right?
Given my focus on AI for mental healthcare, I had to attend the talk on Harnessing AI with Clinical Ingenuity at AI UK, where experts from across England, Scotland, and Wales discussed the role of AI in the NHS. The emphasis was clear: AI should be used to optimise clinical pathways, speed up triage, and reduce the burden of administration for clinicians.
David Lowe, Shakeel Ahmad, and Maaike Kusters each shared insights on what it takes to get AI right in healthcare:
Keep the patient story at the heart of it - technology should serve people, not the other way around
Root decisions in practical realities, not abstract possibilities - AI has to fit into existing clinical workflows, not disrupt them
Systematically understand the purpose of an AI tool - how it affects patient flow, data flow, and clinician workflows before integrating it
Start with the data, ensuring standardised ontologies across healthcare practices to create consistency and reliability
Embed AI within practice, rather than treating it as something "added on" after the fact
A key takeaway from this session came from Maaike Kusters (Consultant Paediatric Immunologist at Great Ormond Street Hospital, GOSH), who made an urgent call for AI to prioritise children’s mental healthcare. With long waiting lists and an overburdened system, AI has huge potential to improve prevention and increase access to high-quality care at scale.
AI needs more humanity
After this week of discussions, it was clear to me that AI is evolving faster than our ability to govern it - but that doesn’t mean we can’t shape its direction. Conversations about ethics, access, and responsibility can’t lag behind innovation - they must be built into the systems we create, not patched on later.
The future of AI will be defined by who gets a seat at the table and whose perspectives are embedded in its design. That means moving beyond one-size-fits-all models, ensuring SMEs and public institutions have the tools to adopt AI responsibly, and making sure AI development isn’t dictated solely by big tech. AI will inevitably transform our lives, but we can still shape whether that transformation is ethical, inclusive, and human-centred.