What Can AI Offer to the Field of Mental Health - The Complete Interview with Giulio Jacucci

The second episode of our podcast series Well Ahead is about the possibilities and challenges that arise when developing artificial intelligence systems for mental health care. Giulio Jacucci is a professor of Computer Science at the University of Helsinki, focusing on human-computer interaction. The discussion covers Giulio's thoughts on OpenAI's ChatGPT, how our AI-assisted conversational agent was built, the theoretical background the AI runs on, and current and future digital tools within the mental health field. Listen to the podcast episode here or read the transcript below.

Sinnu Savola

Hello, all you wonderful listeners, and welcome to the second episode of our podcast series Well Ahead. I'm Sinnu Savola, a psychologist at Formulator. You have tuned in to learn about global mental health challenges and the most cutting-edge solutions to them. In this podcast, we talk about psychology, mental health, therapeutic work and the technological solutions around them.

Today, we will be focusing on AI technologies and the benefits and challenges of applying them, especially within mental health care solutions. This is something I'm very pleased to chat about with Giulio Jacucci, who is a professor of computer science at the University of Helsinki, focusing on human-computer interaction, so his work is about developing new interaction techniques. Giulio is also chairman of the board at Formulator. Welcome.

Giulio Jacucci

Thank you very much.

Sinnu Savola

So do you want to first tell us something more about yourself? For example, what do you have on your research table right now?

Giulio Jacucci

Absolutely. Recently, my research has been about integrating advanced techniques, in particular AI, into user interfaces. To give some examples of experiments we're currently running: we have developed different virtual reality environments that are also connected through sensors to our brainwaves, and we then study, for example, how to teach meditation.

In a very recent experiment, we are studying neuroadaptivity, which means how the virtual environment might adapt to my changing brainwaves, and how this kind of system could help modify our experience of time. This can be used, for example, to diagnose or possibly, in the future, to provide therapies for conditions such as depression.

We also study interaction in virtual reality a lot, in particular affective interaction. I don't know if your listeners have noticed that social VR platforms are now coming on quite strong. One thing that characterizes such platforms is that people are much more emotional; there is, for example, much more harassment in virtual reality. What we study is how affective interaction happens, for example with an agent, and we also study new ways to convey emotions, like haptics replicating the sense of touch. So these are some examples of the research we are doing.

Sinnu Savola

Well, this sounds very valuable and exciting. As you're an AI expert, I cannot help but first ask your thoughts on OpenAI's ChatGPT, because there has been such a buzz around it on social media and in the media for the past year or so, or a bit less. Have you been following its launch? How do you see it? Do you see it as disruptive as some, at least, argue?

Giulio Jacucci

Definitely, this whole area called generative AI has really gone through a big disruption, with examples such as ChatGPT. There are also other platforms, and other examples from OpenAI, of how to, for example, generate images just by describing what you would like. These are really very important new applications that, following the idea of generative AI, basically create something new. And this is a very big, important disruption in the sense that it is no longer just like a web search tool such as Google, where I try to find the right document or pieces of documents. ChatGPT is able to actually carry out some tasks.

One of the most striking examples I found is how you can ask ChatGPT to write some software; I found that really amazing. At first glance, this looks like something unprecedented. People have started using it in school to cheat, and it can also be extremely useful for automating some tasks.

But I'd like to point to some issues with these kinds of systems. One issue is that it mostly answers general questions. The way it works, it always tries to compose its responses using everything it has learned, for example from what was publicly available on the internet. The result is that it is very good, like a search tool such as Google, at finding simple, obvious, general answers or doing quite predictable tasks like summarizing a text. But for this reason it is not very precise or accurate, so as soon as your question starts to be more specific, there will be many more problems.

And even the first excitement about ChatGPT writing programs has recently drawn a lot of critique, with many examples of bad programming advice, for example. What I find most worrying is the general trend towards, let's say, and I will use this word, technology that supports deception, in the sense that you no longer know what's real. Is this Instagram account a real account or not? Is this agent a real human or a virtual agent? Is this text a real article? There are even news sites that have started writing articles with AI.

And then, of course, this brings a lot of problems, because we as humans also make mistakes, and the AI makes mistakes too; we just make different types of mistakes, if you want. But one of the big fears is also how it can be biased, for example some cases of ChatGPT, let's say, not respecting women. There are so many examples of how this can go wrong.

I want to leave you, however, with a small anecdote that may help us understand what's coming next. Of course, deception and having our tasks automated is also kind of scary. But at the university, for example, we will find that we should not reject this technology, but understand it and maybe change our learning outcomes a bit, what we teach, how we test, how we examine; this has to change. To give an example: when I studied at the university, there were very powerful new calculators that allowed you to calculate the structure of whatever building, very complex calculations, and at the university we learned how to use these calculators. But we also still learned how to do the calculations for the same structure ourselves. So in the end, possibly in the future, we will still learn how to speak and summarize, but then we will also learn how to use ChatGPT.

Sinnu Savola

Yeah, I like that optimism. As you can hear from what you just said, artificial intelligence is transforming many areas of our lives, and studying at universities is one of them. But what do you see as the greatest possibilities of AI solutions, what do they have to offer? And maybe especially to mental health care?

Giulio Jacucci

Yes. Let's start by saying that AI in healthcare and mental health care is still quite new, and there are not many examples. In fact, a few months or a couple of years ago we were still debating how to clinically test AI, so I think this transformation will take some time. But there are of course very good, tested examples of something that works also in mental health, which is in a way a special area within healthcare. Here I would say the uses range from being able to predict certain symptoms, so you can try to detect early symptoms of a particular condition. This has been used for depression, for example.

Then it can be used to automate certain tasks in the intervention itself, helping to scale up interventions. Of course, it can also be used for decision making based on previous data. And specifically for mental health care, I would like to say that I have seen some good examples of how it can provide even just companionship.

There are some interesting applications whose service is to give you somebody who can listen to you, so that you can believe there is somebody listening to you. I find this quite interesting. But then if we go into therapy, there are many examples of how cognitive behavioral therapy, especially the digital version of it, can be supported by some of these tools, for example by having an agent accompanying you and reminding you to do things.

What I also found interesting is how, in certain contexts when it has to do with mental health, I might like to have human contact, but in other situations I might find it easier to work with an agent. Just this morning I saw a study referring to these things. All of this needs further investigation, but there is definitely a lot of promise in terms of scaling up the offer of mental health support, which is really badly needed right now.

Sinnu Savola

Yeah, indeed. So you said that we'll need more research before applying these technologies or trusting different features of AI too much. What can you tell me about the limitations of AI and deep learning? Especially, what are the challenges we have to face before integrating AI solutions or features into digital mental health interventions?

Giulio Jacucci

Well, there are different types of AI. Deep learning is a particular, let's say important, revolution that actually started already some decades ago, but it is only now that the availability of data and processing power allows us to really make use of it. So not all AI is the same.

If we talk about deep learning, one limitation is that it usually needs a really large amount of data, and also good data, whereas in healthcare we sometimes have little data.

Let's take the case of medicines to treat cancer. Usually we have very few patients and little data: very few patients, and instead a lot of genes and different medical substances. So in the end, the question of data is quite important. Good data also means data that doesn't lead us into certain biases. In a way, just working with data is a limitation if we don't have good models and theories behind it to maybe even correct the data.

One thing that is particularly worrying with deep learning, and that people are now trying to investigate and improve, is that it works more like a black box. It's very hard to understand, for example, causes or relationships between things, the why of things. This is a problem that is sometimes called the problem of transparency or explainability. Once the model comes up with an answer, I cannot really say what rules it is following: why should it be this answer and not another one? So this is something that is more difficult with deep learning.
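To make the explainability point concrete, here is a minimal, hypothetical Python sketch (not from the interview or any system discussed in it): it trains a small "black box" neural network on synthetic data and then uses permutation importance, a model-agnostic probe, to ask which inputs actually drive the model's answers. The data and feature setup are invented for illustration.

```python
# A minimal, illustrative sketch: train a small "black box" classifier, then
# probe it with permutation importance to see which features drive its output.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data: 500 "patients", 8 features, binary outcome.
X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop suggests the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Even with a probe like this, the model only reveals which inputs it depends on, not why, which is exactly the transparency gap described above.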

Sinnu Savola

Yeah. What would you say, what kind of solutions, new practices or systems does AI provide for analyzing and helping mental health care professionals? Are they already in use? What does mental health care look like today, is it already using AI technologies, and what could we do?

Giulio Jacucci

Yeah, there is a bit of both, in the sense that a push towards digital interventions in mental health care has been happening now for many years, with a lot of research, a lot of pilots and a lot of studies showing the differences of delivering, for example, cognitive behavioral therapy either face to face, totally digitally, or in some sort of hybrid approach.

Now, with the advent of AI, let's call it conversational AI and also predictive AI, these are currently being included and integrated into these digital interventions. I think that's where we are now, but consider that there are many types of therapies; each approach to therapy is a bit different and possibly requires different ways to integrate such solutions.

As I was mentioning before, there are also apps available. There has been a proliferation of apps in the area of mental health that don't include, for example, the therapist as a role in the use of the app. I think we also have to distinguish a bit: sometimes AI is integrated with a lot of, let's say, knowledge of and consideration for how the mental health field works, and sometimes these things are offered directly on an app store. So they can be very different systems.

But I would say we are at the beginning of introducing such technologies, and there are some early reviews. For example, a recent review I saw of how conversational agents or chatbots are used, I think it came out just a few months ago or last year, showed that more than 90% of the interventions utilizing a conversational agent were actually rule-based conversational agents. What that means is that the agent has scripted rules for how to answer and how to manage the dialogue, which is very constraining in a way, so it's maybe difficult to consider this a full-fledged integration of AI in some of these interventions. So I think we are still at the beginning.
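To illustrate what "rule-based" means here, a minimal, hypothetical Python sketch follows (the rules and replies are invented, not taken from any product mentioned in the interview): the dialogue is driven entirely by hand-written keyword rules, so the agent can only respond in ways its authors scripted in advance.

```python
# A minimal, illustrative rule-based conversational agent: every response is
# chosen by hand-written keyword rules; nothing is learned from data.
RULES = [
    ({"sleep", "tired", "insomnia"},
     "It sounds like sleep has been difficult. How many hours did you sleep last night?"),
    ({"anxious", "anxiety", "worried"},
     "Thank you for sharing that. When did you last notice the anxiety getting stronger?"),
    ({"bye", "quit"},
     "Thank you for talking with me today. Take care."),
]
FALLBACK = "I see. Could you tell me a bit more about that?"

def respond(user_message: str) -> str:
    """Return the first scripted reply whose keywords match the user's message."""
    words = set(user_message.lower().split())
    for keywords, reply in RULES:
        if words & keywords:
            return reply
    return FALLBACK

if __name__ == "__main__":
    print(respond("I feel anxious and tired all the time"))
```

The constraint is visible immediately: anything outside the scripted keywords falls through to a generic fallback, which is why such agents feel limited compared with data-driven conversational AI.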

Sinnu Savola

So at Formulator we have built a digital dialogue with a conversational agent, which means that clients can chat through it and receive a psychological analysis of themselves afterwards. We have built it so that clients can come to the meeting with their mental health professional prepared, and so that the professional has very comprehensive information about the client before meeting them and can plan the intervention in the best way possible.

What do you see as the benefits, of course, but also as the challenges of this kind of AI-based case formulation tool?

Giulio Jacucci

Yes, so this is in a way the opposite of, let's say, ChatGPT, which we started this discussion with. ChatGPT uses a very large data model that can cover quite a lot of things superficially. At Formulator, instead, we focus on a very narrowly defined task, and on a certain kind of experiment in which we use a conversational agent. But it is a well-defined task, created by experts who have a lot of experience of professional therapy, and it is also based on evidence-based, let's say, research on how case formulation is done.

This is maybe where the example of ChatGPT really falls short, from the point of view that when you talk with ChatGPT it's hard for it to understand the context of things. What we are very focused on right now is understanding a particular point in the journey of a person who is seeking mental health support. It's very clear, in a way, what the context is, what is happening, and with whom.

The challenge now is how to make this conversational agent truly understand who is in front of it, and engage with that person. For this purpose, what we are doing is learning, from the data we are collecting, how to best personalize the conversation. This is where the AI comes in. In a way it's something we need to learn over time, like we humans do, so we need a lot of data, merged with the experience of our professionals.

I would say what is difficult is to collect the right data and use the right models to improve. One example could be how to reduce dropouts in any digital intervention. What we're studying is how to predict when a dropout might happen, and also what the right corrective action is. Just predicting that a dropout might happen may not be enough: to solve the situation I also need to know, given who is in front of me, the right personalized way to act. Of course we can use the experience of the professionals for this, but we can also use more and more data on how people have used the service in the past, and so learn what works.
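As a rough illustration of the kind of dropout prediction described here, a minimal Python sketch is shown below. The engagement features, data and threshold are invented for the example and are not Formulator's actual model or signals; it simply shows the general pattern of training a classifier on usage data and flagging at-risk users for a personalized follow-up.

```python
# A minimal, hypothetical sketch of dropout prediction in a digital intervention.
# Feature names and data are invented; a real system would use its own
# engagement signals and clinically validated follow-up actions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users = 400

# Invented engagement features: sessions in the last week, days since last
# login, and fraction of exercises completed.
X = np.column_stack([
    rng.poisson(3, n_users),        # sessions_last_week
    rng.integers(0, 14, n_users),   # days_since_last_login
    rng.random(n_users),            # exercises_completed_ratio
])
# Toy label: users with low recent engagement are marked as dropouts.
dropout = ((X[:, 0] < 2) & (X[:, 1] > 7)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, dropout, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Flag users whose predicted dropout risk exceeds a threshold, so a
# professional or a personalized message can follow up before they disengage.
risk = model.predict_proba(X_test)[:, 1]
print("users flagged as at risk:", int((risk > 0.5).sum()))
```

Predicting the risk is only half the loop: as noted above, choosing the right corrective action for the flagged person still relies on professional experience and on what past usage data shows to work.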

Sinnu Savola

Yes, exactly. Perfect. I think that is all I had for you today. Thank you so much, Giulio, for having this discussion with me. This was super exciting.

Giulio Jacucci

Thank you, Sinnu, it was really very exciting to talk about these important topics. The field is transforming so fast.

Sinnu Savola

Yes. Thank you so much to our listeners, too, for listening to the episode. I hope you enjoyed it and learned something new. Let us know, for example in the YouTube comments, and please subscribe to our channel wherever you are listening. In the next episodes we will again cover new ideas about psychology, mental health and mental health technologies, so hear you then, see you then, bye!
