“When people meet a robot, they think you’re supposed to talk to it like Alexa,” Samer Al Moubayed says. “We’re trying to push the boundaries of creativity.”
Al Moubayed is co-founder of Sweden’s Furhat Robotics, which has designed Furhat, the “world’s most advanced social robot”. He calls the customisable social robot a “native interface”, as opposed to the traditional tech interfaces on tablets, computers and phones. And in line with this ambition, the design team focuses on the “human interaction” element, according to Al Moubayed.
Just as a UX designer would choose specific typefaces, icons or colours for an app, the same thought needs to go into the robot’s “personality”, Al Moubayed says. These characteristics include the robot’s voice, appearance and background. Furhat Robotics comprises around 30 people, and within that there is an interactive design team, known internally as “the magic team”, Al Moubayed says. The company also has a full-time magician on board, who helps to design productions where the robot might appear at schools, for example.
The company has a few products: the robot itself, the software, and a software development kit (SDK). Depending on the project, and the technical capabilities of a client, Furhat Robotics will collaborate on designing the robot’s personality and on the research and development phase. The robot got its name from the fur hat the team used to cover up wires in early development; it’s also a nod to traditional Swedish clothing and the country’s cold winters.
How does the design process work?
There are two types of reaction when it comes to robotic interaction, Al Moubayed explains. The first comes from a more technically literate group who are used to voice assistants like Amazon’s Alexa or Google Home. The second group, most likely to be children or older people, can more “easily suspend their disbelief and treat the robot as a human”, according to Al Moubayed.
He says that this second group expects a robot to act like a human: adapting to noisy environments, responding to shifts in conversation, and using humour, for example. “It’s really important when it comes to design, that you make the robot able to handle the situation,” Al Moubayed says. “Because you don’t know if the user has talked to a robot before.”
While robotic technology, especially in the field of AI, has become much more widespread in recent years thanks to those aforementioned devices, people can still have a narrow perception of what a robot is. “Their imagination is limited to chatbots,” Al Moubayed says. “We’re trying to let people understand there is so much more to interacting with a robot, such as psychology. There are new types of interaction that are almost impossible to do with other user interfaces.”
The “world’s first unbiased interviewer”
One of those is a collaboration with Stockholm HR start-up Tengai, to produce what Furhat Robotics calls the “world’s first unbiased interviewer”. According to research, it takes an average of seven seconds for a recruiter to make a decision about a potential candidate. As a way to create a more level playing field across gender, age and ethnicity, the companies worked to create a neutral interviewer.
This involved a year-long data collection process, where humans met robots that were being controlled remotely by recruiters trained in unbiased recruitment. This dataset was analysed to train the robot’s behaviour. The interaction team also looked at the type of personality you would want to meet as a job candidate. “It shouldn’t be a 60-year-old man if the candidate is a 25-year-old woman,” Al Moubayed says. The result was an androgynous personality that, according to the company, provides a “good user experience for the interviewee and can capably assess personality traits correlated to work performance”.
The uses for Furhat are broad, from medical screening services to retail and tourism. Furhat partnered with Japanese videogame company Namco Entertainment to bring its cartoon characters to life. Now Namco is looking at using the robot to greet and interact with visitors at its amusement venues in Asia.
The robotics company also recently worked with a group of psychologists to develop a robot that could train medical professionals to recognise depression. The robots were designed with special attention to eye movements and pauses in speech, so that psychologists and students can train before interacting with actual human patients. “The platform is open so people can do what they want with it,” Al Moubayed says.
A significant strand of the company’s work is providing software to universities and academic institutions to better understand the relationship between people and robots, according to Al Moubayed. Topics include how humans treat robots, whether they trust them, and how likely they are to do what a robot says.
“You want to create diversity at every step”
While the opportunities are plentiful, the ethical side of robotics is more complicated. In the case of voice assistants, unconscious bias means that certain people’s voices – usually white men’s – are more easily recognised. This is partly because of the make-up of the teams designing the technology in the first place. Al Moubayed is aware of the potential risks in this area. “You want to create diversity at every step,” he says, “from discussions with clients to the design team.”
There also appears to be a fine line with the types of personalities that can be designed. “There are seven billion personalities in the world,” Al Moubayed says, pointing out that people are likely to have a small group of friends and know what qualities they like in people. While the company often designs with these human biases in mind – like the recruitment robot that aims to put people at ease – Al Moubayed admits that there is a “harmful human bias” too. “You don’t want that in the wrong context, if a difference in personality harms the other person.”
The team takes it on a case-by-case basis and views it as another design element which needs careful consideration, he says. “It’s an interesting way of looking at bias,” Al Moubayed adds. “Is it a harmful bias or is it something that could be useful in the right setting?”