The role of AI in Suicide Prediction & Prevention

Lifeline

Suicide is an extensive public health problem, but it's also a preventable one. How can we leverage artificial intelligence to predict risk and save lives?

Catriona Campbell

On my socials this morning, I proudly shared news of my sister-in-law Laura Campbell appearing on the Telegraph's podcast Bryony Gordon's Mad World. The Suicide Prevention Manager for Govia Thameslink Railway, and the first of her kind, Laura joined the show's host on World Suicide Prevention Day to speak about training railway staff to carry out suicide interventions at stations, to share invaluable tools for supporting vulnerable people, and to open up about the painful childhood experience that drew her towards this role.

Suicide is an extensive worldwide public health problem, one helped in no way by the surge in depression and anxiety precipitated by the pandemic. The latest data from the mental health charity Samaritans shows that 6,002 people died by suicide in England, Wales and Scotland in 2020, and 209 in Northern Ireland in 2019. Despite a decrease of 477 deaths between 2019 and 2020 in England, Wales and Scotland, and despite action to raise awareness of suicide and treat people experiencing suicidal thoughts, rates have remained stubbornly high over the years. They have even increased in other countries, such as the United States, which saw a staggering rise of almost 25% between 1999 and 2014.

Suicide may be a complex and widespread issue, but it's also a preventable one, even if global prevention efforts to date suggest otherwise. For this reason, my sister-in-law's appearance on the podcast (and on Sky News this morning) got me thinking about the contribution artificial intelligence could make to the problem. A little research shows there's a lot of potential for these technologies to make a significant dent in suicide rates. Here's why:

Many victims actually speak to a medical professional in the days, weeks and months before they die by suicide.
This indicates an extremely worrying gap in risk detection among the people best qualified and most trusted to detect it, especially when they are given clear opportunities to do so. Others don't engage with a medical professional about their worries, or with anyone for that matter, for fear of stigmatisation or forced hospitalisation.

Young people are more likely to reveal signs of suicidal intent on social media sites like Facebook and Twitter than to healthcare workers, highlighting differences in risk detection among age groups. Differences in risk detection, as well as in risk itself, also exist depending on a person's gender, ethnicity, socio-economic status, geographical location and access to healthcare. For instance, Samaritans found that suicide rates in the UK are higher among men than women, and among those living in deprived areas.

Combined, all of this highlights the need for thorough studies of suicide risk, which can be expensive, time-consuming and held back by challenges such as researcher bias when traditional research methods are employed. That's likely why no such studies have been conducted to date, at least none that reduce risk to a sufficient degree.

Enter AI…

The technology could have a huge impact on the accurate prediction of suicide risk and on the development of effective suicide prevention programmes. Exploiting vast datasets, researchers can use predictive modelling techniques unique to AI to achieve far more accurate suicide prediction results. One example would be the application of machine learning to electronic medical records.

For anyone who closely follows the well-documented progress of artificial intelligence in the press, this may well ring a bell.
That's because we're talking about tools already successfully used to predict risk in a number of other medical fields, including death in premature babies, sepsis in those with severe infections, and various rare outcomes; the list goes on.

Although the use of artificial intelligence to predict and prevent suicide is still in its infancy, it all looks very promising. The number of research studies in the area is growing fast, and so I feel super confident that AI can and will help save lives.

But like I always say, as with the application of such technologies in any field, it's important to stay within ethical and legal boundaries, especially when data as sensitive as that contained in electronic medical records is involved.
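To make the idea of predictive modelling on medical records a little more concrete, here's a toy sketch of the kind of model involved. Everything in it is fabricated for illustration: the features (visit counts, a prior-diagnosis flag, an age band), the label process, and the data are all made up, and the model is a from-scratch logistic regression rather than any real clinical system. Real work in this area uses de-identified clinical datasets, rigorous validation and ethical oversight, none of which is shown here.

```python
import math
import random

random.seed(0)

def make_record():
    """One synthetic 'record' with hypothetical features a model might use."""
    visits = random.randint(0, 12)   # visits in the past year
    prior = random.randint(0, 1)     # prior-diagnosis flag
    age_band = random.randint(0, 4)  # coarse age bucket
    # Fabricated ground truth: outcome probability tied to two features.
    logit = 0.4 * visits + 1.5 * prior - 3.0
    label = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    # Scale features to [0, 1] so plain gradient descent behaves well.
    return [visits / 12, prior, age_band / 4], label

data = [make_record() for _ in range(400)]

def predict(w, b, x):
    """Predicted probability of the outcome for one feature vector."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Logistic regression trained by batch gradient descent (stdlib only).
w, b, lr = [0.0, 0.0, 0.0], 0.0, 1.0
for _ in range(1500):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in data:
        err = predict(w, b, x) - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / len(data)
    b -= lr * gb / len(data)

acc = sum((predict(w, b, x) >= 0.5) == (y == 1) for x, y in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

The point of the sketch is only the shape of the approach: structured features extracted from records go in, a calibrated probability of an adverse outcome comes out, and clinicians could in principle use that score to prioritise follow-up. Published studies use far richer features and models, and validate on held-out patients rather than training data.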

Read More

The EU’s draft AI regulations: A thoughtful but imperfect start

EU flag with binary

In April this year, the European Commission published its much-anticipated Draft AI Regulations. Although the proposed rules haven't yet been adopted (a process that could take until 2026), they're the first-ever solid endeavour to regulate artificial intelligence. I recently discussed the draft regulations with fellow members of the newly formed Scottish AI Alliance Leadership Circle. As it's a complex topic, my colleague Callum Sinclair, Head of Technology & Commercial at law firm Burness Paull, provided us with his view of the draft regulations, a great summary of some key points.

Read More

Tesla AI day zooms closer

Tesla car dash

A little bird told me that Tesla soon plans to hold an Artificial Intelligence Day. Or was it a drone? Actually, neither… it was Elon Musk himself. And of course, he didn't call me personally, instead sharing the welcome news on Twitter.

In a post earlier this week, the electric vehicle & clean energy company’s CEO said the event will likely take place “in about a month or so” and will “go over progress with Tesla AI software & hardware, both training & inference”. He closed with a few words bound to get a few engines revving: “purpose is recruiting”. Ooh, interesting!

Read More