Session: Signals of Safety: When Technology Listens to Women
Organisers: Sarah Barnbrook, Soroptimist International South East Asia Pacific / Founder, Away From Keyboard Inc., and Dr Tamara Polajnar, CEO and Founder, herEthical AI
Blog by: Sarah Barnbrook, SISEAP - SI Melbourne Inc.
How can technology recognise the early signals of harm before violence escalates? This question was at the centre of the CSW70 virtual parallel event “Signals of Safety: When Technology Listens to Women,” which I co-presented with Dr Tamara Polajnar. The session explored how artificial intelligence and digital systems can both amplify harm and offer new opportunities to identify risk earlier and create safer digital environments.
I opened the session by discussing how harm in digital environments now occurs at an extraordinary scale and velocity, with algorithms and recommendation systems capable of amplifying harmful behaviour across millions of interactions in a matter of moments. What may appear sudden is often the rapid escalation of patterns in behaviour, language, and engagement that have been developing beneath the surface. Recognising these early signals is essential if we want to move beyond reactive responses and design systems that identify risks sooner and prevent harm before it escalates.
As Co-Lead for Soroptimist International Australia’s National Project, Digital Safety Futures, I highlighted the importance of ensuring that women’s lived experiences inform technology governance and digital safety policy. I also spoke about my work as a volunteer with Away From Keyboard (AFK) Inc., the charity I founded to advocate for safer digital environments and to highlight how digital harms often extend beyond online platforms into homes, schools, workplaces, and communities.
Dr Tamara Polajnar shared research demonstrating how ethical AI can analyse patterns in language and behaviour to detect misogynistic abuse, grooming in romance fraud, coercive control, and victim-blaming language. Her work shows that many forms of abuse are not visible in a single message but instead emerge through patterns of behaviour over time.
During the session I also introduced the Alt-TAB Ethical Technology Assessment Tool, currently in development through AFK Inc. The tool is designed to help organisations understand the potential risks associated with the technologies they are developing or adopting. It encourages organisations to consider cybersecurity resilience, technology-facilitated gender-based violence, child safeguarding, and broader human rights impacts before AI systems are deployed.
I also shared my ongoing research examining how a duty of care can be operationalised within algorithmic recommendation systems, particularly in relation to protecting women and children in digital environments. As recommendation systems increasingly shape what information people see and how behaviour spreads online, ensuring appropriate safeguards is urgent. I invited participants interested in testing the work, or in contributing global perspectives and data sets, to contact me to help strengthen this research.
This discussion aligns closely with the Soroptimist pillars of Education, Leadership, Empowerment of Women and Girls, and Freedom from Violence, highlighting the importance of ensuring technologies are designed and governed in ways that support safer digital environments for women, girls, and children.