It's prima facie more plausible for chatbots to cause suicide, considering that chatbots are more personal and interactive than even video games. There is, I would think, a distinct difference between obviously fake murder in an obviously fake setting and being sympathized with, as one human to another, while contemplating actual murder. And while chatbots carry explicit warnings that they are not real people, I would not expect someone with an underdeveloped prefrontal cortex and possibly pre-existing mental health troubles (again, this can apply to video games too, though, I imagine, to a lesser degree) to fully internalize that warning and act accordingly.