Online scams are taking an emotional and financial toll on people around the globe. Artificial intelligence (AI) is already being used to create targeted scam campaigns, and humans are not able to distinguish AI-generated from "human" content. Evidently, technical systems alone cannot prevent people from falling for online scams; we also need to "update" the human computer user. I will present three studies from my PhD that examined novel paradigms to improve people's ability to detect phishing e-mails, a quintessential type of online scam. The first study tested which psychological and demographic factors relate to people's likelihood of falling for phishing e-mails, using an experimental setting with behavioural tracking and a representative participant sample. Its results informed the design of three e-mail security tools that scan e-mails in a usable fashion, which we evaluated in the second study. Third, I used the psychological concept of "self-projection" to design and test an adversarial phishing detection training. Indeed, engaging people with how phishing e-mails are created can improve their ability to detect them. I will end the talk with a reflection on the implications of our findings and future directions for research.
Sarah recently completed her PhD in Security & Crime Science at UCL with a full scholarship from the Dawes Centre for Future Crime. She has a background in psychology and neuroscience, and four years of experience in AI and data science consulting. These roles included developing machine learning models for credit card fraud detection and working on AI use cases for the Dutch MoD. She started programming websites in primary school, but it was a fascination with how the human mind works that first drew her to psychological research. With her work, she aims to bridge the gap between cognitive science and computer science.