
Chatbot with a keen sense for emotions

In 2016, the US market research firm Gartner made a stunning forecast: by 2020, chatbots – text- or speech-based dialogue systems – would be involved in 85 percent of all interactions between businesses and customers. In the future, the average person would even have more conversations with a chatbot than with their spouse.

Whether it will really turn out that way remains to be seen. What is certain is that chatbots are gaining more and more skills, which opens up new areas of application. Thanks to access to extensive databases, language understanding, and artificial intelligence, the capabilities of digital assistants such as Siri and Cortana, or of chatbots in the narrower sense, such as Sergeant Star of the US Army, are quite remarkable. Machine ethicist Oliver Bendel of the University of Applied Sciences Northwestern Switzerland (FHNW) in Brugg-Windisch explores the possibilities and limitations of such systems – and, as an ethicist, reflects on their potential dangers.

Dialogue with a user

Now Bendel has himself developed a web-based chatbot named Bestbot. Its special feature is that it uses face recognition to detect the emotions of its human dialogue partner. It can distinguish up to 20 emotional states.

A dialogue with the user David, for example, might go like this:

David: For some time now I have been feeling worthless.

Bestbot: Okay, please stay calm. I'm here for you and can help you! Would you like to talk about it with a human?

David: Yes, I would like to.

Bestbot: Okay, wait a moment, I'm looking for a suitable contact number ... I see you're from the Aarburg region, so I can give you a number in Switzerland. The organisation The Samaritans can be reached at 143.

A little later on the same day:

Bestbot: Hello, David, nice to see you again. How are you?

David: I'm good.

Bestbot: I'm very happy to hear that. But hey, you look sad, even though you're writing quite normally. Am I seeing that right?

Here the Bestbot has detected a discrepancy between David's text input and his facial expression. "With the Bestbot, we quite deliberately wanted to build a system that can assess the user's emotional state as well as possible," says Bendel. Indeed, thanks to the emotion analysis described, the Bestbot detects potential problems of the user more reliably than pure text analysis would. "However, you have to expose yourself to the questionable face recognition. Security versus freedom is an old dilemma that becomes evident here."
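How such a discrepancy check might work can be sketched in a few lines of Python. This is purely an illustration, not the Bestbot's actual code: both the keyword-based sentiment check and the facial emotion label are hypothetical stand-ins for real trained models.

```python
# Illustrative sketch only: both classifiers below are hypothetical
# stand-ins, not the models actually used in the Bestbot.

NEGATIVE_EMOTIONS = {"sad", "afraid", "angry", "disgusted"}
POSITIVE_EMOTIONS = {"happy", "content", "surprised"}

def text_sentiment(message: str) -> str:
    """Crude keyword-based stand-in for a real text-sentiment model."""
    negative_cues = ("worthless", "hopeless", "alone", "hurt myself")
    if any(cue in message.lower() for cue in negative_cues):
        return "negative"
    return "positive"

def detect_discrepancy(message: str, facial_emotion: str) -> bool:
    """Flag cases where the written mood contradicts the observed emotion,
    e.g. 'I'm good' typed while the camera reads sadness."""
    sentiment = text_sentiment(message)
    if sentiment == "positive" and facial_emotion in NEGATIVE_EMOTIONS:
        return True
    if sentiment == "negative" and facial_emotion in POSITIVE_EMOTIONS:
        return True
    return False

# Example: David writes "I'm good" while his face shows sadness.
if detect_discrepancy("I'm good", "sad"):
    print("But hey, you look sad, even though you're writing quite normally.")
```

In a real system, both labels would come from trained models rather than keyword lists; the point here is only the comparison between the two channels.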

In a sense, the Bestbot is a moral machine with an immoral one inside: it wants only the best for the user, but through its face and emotion recognition it penetrates deep into their privacy and could undermine the right to informational self-determination. "For me, the Bestbot is a bit like a matryoshka doll. Nice on the outside, complex on the inside," says Bendel. "Even here at the university, we hold different positions." For the time being, at any rate, Bendel has decided to keep the Bestbot in the laboratory and not release it to the public.

Moral rules

In its dialogue with the user, the Bestbot follows certain moral rules. It always makes clear to the user that it is a machine. It takes the user's problems seriously, supports them within the scope of its possibilities, and makes transparent where its information comes from. With these and other meta-rules and exclusion principles, the Bestbot can, for example, be prevented from making racist remarks, as Microsoft's chatbot Tay did in 2016.
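What such a meta-rule layer could look like in code is sketched below. The rule set, the placeholder blocklist, and the helpline table are illustrative assumptions, not the Bestbot's actual implementation:

```python
# Minimal sketch of a meta-rule layer; all rules and data are illustrative.

HELPLINES = {"CH": "The Samaritans, reachable at 143"}  # assumed lookup table
BLOCKED_TERMS = {"<slur_1>", "<slur_2>"}                # placeholder exclusion list
CRISIS_CUES = ("worthless", "hurt myself", "end my life")

def apply_meta_rules(user_message: str, draft_reply: str, region: str) -> str:
    # Exclusion principle: never emit blocked content (the kind of guard
    # that would have prevented Tay-style derailment).
    if any(term in draft_reply.lower() for term in BLOCKED_TERMS):
        draft_reply = "I'd rather not comment on that."
    # Rule: take the user's problems seriously and escalate in an emergency.
    if any(cue in user_message.lower() for cue in CRISIS_CUES):
        helpline = HELPLINES.get(region, "a local helpline")
        draft_reply += f" Would you like to talk to a human? You can reach {helpline}."
    # Rule: always make clear that the user is talking to a machine.
    return draft_reply + " (Remember: I am a machine.)"

print(apply_meta_rules("For some time now I have been feeling worthless.",
                       "I'm here for you.", "CH"))
```

A transparency rule like the last one is cheap to enforce mechanically; the hard part in practice is keeping the exclusion lists and crisis cues complete and current.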

According to Bendel, there are institutions that could use systems like the Bestbot in the future. These include youth welfare facilities: they could run a chatbot with emotion analysis around the clock to provide young people seeking help with the right contact person in an emergency. In job interviews, a kind of chat with a camera is already used today in some cases. Here, too, employers might be tempted to apply emotion analysis.

Hospitals, too, could offer an emotionally intelligent bot on their website as a point of contact for patients. "The requirements for such a chatbot would be very high, because the conversation between doctor and patient is very sensitive," says Fabio Feubli, head of Digital Services at the University Hospital Zurich (USZ). "The systems are not yet mature enough." Moreover, most people contact the USZ by phone or come by in person. The USZ is therefore not currently pursuing the topic of chatbots in depth. "Of course, we are observing the development of conversational user interfaces."

In fact, the Bestbot has weaknesses, especially because of the systems connected to it. "These currently cannot distinguish feigned emotions from real ones," says Bendel. "If I intend to carry out an attack but walk smiling through the station concourse, the analysis software would not recognize the danger." Another problem is that users do not always sit in front of the chatbot showing their emotions, and some people keep a virtual poker face no matter how they feel. In such cases, emotion recognition brings no additional benefit.

The Bestbot stands in a tradition of other chatbots that Bendel has developed. It began in 2012 with the observation that existing chatbots do not react adequately to certain topics. "If we typed in that we wanted to cut ourselves or were planning a rampage, most chatbots were not interested and changed the subject." So in 2013, Bendel developed the Goodbot, which acts according to certain moral rules and, in an emergency, provides users with a contact number, for example.

In 2016 came the next step: Bendel presented the Liebot, which lies systematically using seven different strategies – a delicate project. "As with research into nuclear fission, there is a danger of misuse," says Bendel. "In principle, the Liebot could spread through the world as an immoral machine and cause harm." But Bendel believes the benefits of this liar outweigh the risks. "With the Liebot, we wanted to illustrate and understand the strategies of automated lying. That way we can point out the dangers to programmers, providers of chatbots, and users."

Delicate physiognomy

For the Bestbot project, Bendel, together with his programmer David Studer, expanded the Goodbot's seven moral rules to twelve and equipped the bot with face and emotion recognition. Every ten seconds, the Bestbot takes a photo of the user and analyzes it. "If you look away or cover your face, it complains," says Bendel.
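The ten-second rhythm and the complaint when the user looks away can be sketched roughly as follows. OpenCV's stock face detector stands in here for whatever recognition pipeline the Bestbot really uses, and classify_emotion is a hypothetical placeholder:

```python
# Rough sketch of the capture loop; classify_emotion is a hypothetical
# stand-in for the emotion-recognition component, which is not public.
import time
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_image) -> str:
    """Placeholder: a real model would map the face to one of ~20 states."""
    return "neutral"

cap = cv2.VideoCapture(0)  # assumes a webcam at index 0
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            # "If you look away or cover your face, it complains."
            print("I can't see your face. Please look at the camera.")
        else:
            x, y, w, h = faces[0]
            print("Detected emotion:", classify_emotion(frame[y:y + h, x:x + w]))
        time.sleep(10)  # one photo every ten seconds
finally:
    cap.release()
```

Even this toy loop makes the privacy trade-off tangible: the camera must stay on permanently for the emotion channel to work at all.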

The Bestbot is problematic, according to Bendel, particularly against the background that more and more surveillance systems rely on physiognomy, that is, they try to infer a person's character traits, political leanings, or propensity for violence from their outward appearance. "Even if this does not work in most cases, it is highly problematic," says Bendel. "If someone has certain facial features and sweat on their forehead, physiognomy could suggest the interpretation that they have evil in mind. If that happens in public spaces, it is reckless."

This is another reason why Bendel has not made the Bestbot accessible to the public. "One could accuse us of putting dangerous ideas into the world. I would say: yes, that's right, I can't dispel that. But I firmly hold the position that we may build such things in the lab. That way we can discuss the opportunities and risks before something similar comes into the world."


(Editing: Tamedia)

Created: 11.12.2018, 18:26
