Robots for Independent Living: Do they ‘care’ or are they ‘assistive’?

Dr Praminda Caleb-Solly, Head of Electronics and Computer Systems, Designability, is currently working with members of our team and external partners on CHIRON – a project to design and develop innovative robotics technology to enhance independence. Here, she questions the use of terminology and the purpose of robots in the home.

Firstly, a question. What would your response be to:

“Here is an assistive robot that can help you get dressed on your own, Mrs. Johnson”

Compared to:

“Here is a care robot that can help you get dressed on your own, Mrs. Johnson”?

The way people feel about technology is influenced by the words used to describe it. The names and adjectives we use to describe things help us to understand what their function or role is. So, when some people come across machines referred to as ‘care’ robots, they can naturally become alarmed.

Roboticists tend to use the word ‘care’ to mean robots working within the healthcare and social care domains. Sometimes, however, this gets interpreted as robots actually providing care. To imply that a robot might be involved in providing care in some way can lead to a lot of apprehension, or even unrealistic expectations. It can challenge the very core of our humanity and what it means to be human.

Robots to support daily living

We are seeing an increasing focus on research and development into robots that can help with mobility and provide a range of support for tasks such as personal hygiene and eating. People have a strong desire to maintain their autonomy and dignity, which supports the rationale for developing technology that promotes independent living.

My main area of research at present concerns the design and development of robots to support life’s daily activities. But how do we ensure that these new technologies deliver solutions that enhance dignity and agency? Could labelling these systems as ‘care’ robots affect people’s acceptance of them, and indeed how they are designed, adopted and used?

The Royal College of Nursing (2012) noted that caring is “not just a series of tasks, but is complex and time consuming, both in terms of the physical needs and psychological/social needs of patients”. While a robot might be able to provide autonomous or collaborative support for tasks, does calling it a care robot imply that it can also help with the psychological and social needs of the person using it? And even if it could relate to people at a psychological, emotional or social level, should it?

How human should they be?

Researchers like myself have to grapple with ethical questions like these every day. The dilemma is compounded by the anthropomorphisation of robots being developed for use in home or healthcare environments. While the argument for anthropomorphisation is that it provides more intuitive human-robot interaction, the fact that a robot has human-like characteristics can create an expectation, or designed semblance, of empathy when interacting with it, particularly when that interaction is speech-based.

Should the voice and tone of the robot be gentle and charming, or terse and unemotional? One might prefer the former, but how will the person be affected if the interaction with their gentle and charming robot Rhona is limited to ‘Please hold your right hand out so I can get this sleeve on’ or ‘You haven’t had your medicine yet’?

What if the interaction wasn’t limited to this? What if your robot also incorporated a chatbot facility that could ‘discuss’ your favourite TV programme with you?

Psychologists and ethicists are undertaking numerous research studies to investigate these issues, and popular media and science fiction writers have been putting forward a range of controversial and provocative scenarios for decades. Now, however, we are reaching the stage where these technologies are becoming more common in our everyday lives. Disembodied agents such as Apple’s Siri and Amazon’s Alexa are already in widespread use.

A person-centred approach to design and development

As part of a recent research project, we asked people how they would like their robot to behave and look. Some told us that they would like it to sing Frank Sinatra songs as it went about its business around the house and they wanted it to look happy. Could this result in an unhealthy attachment? Is that acceptable?

A group of researchers explored concerns associated with the use of robots to support older adults with ageing-related impairments and identified six main ethical concerns. These included:

  1. the potential reduction in the amount of human contact
  2. an increase in the feelings of objectification and loss of control
  3. a loss of privacy
  4. a loss of personal liberty
  5. deception and infantilisation
  6. the circumstances in which people should be allowed to control robots.

The researchers stressed the need to strike a balance between “improving the lives of the elderly by enabling them to live in their own homes for longer whilst protecting their individual rights, and their physical and psychological welfare.” They concluded that this could be achieved through detailed consultation and customisation, producing a working solution that protects a person’s physical health whilst still preserving their freedom and control over their lives. In summary: a person-centred approach to design and development, which is exactly how we at Designability are approaching research in this area.

Working with people to assess the use of robotics in real life

The ethical issues in real-life situations are even more complex. The presence of a robot in the home environment may prompt healthier living habits or, conversely, encourage laziness through excessive use of, or reliance on, automation. Identifying and addressing these issues can only be achieved through long-term trials.

Now we have an additional challenge: how do we collect this data, and how do people’s experiences change over time? Could converging too early on a set of standards, before we have a fuller understanding of how this technology might be adopted and appropriated, constrain its scope?

Using the word ‘care’ when speaking about robots can imply an extra layer of emotional interaction, with the potential to perpetuate further ethical concerns. I think we should leave the emotional aspects of care to us humans and let robots be purely functional assistants – I don’t really want to develop a relationship with an emotive robot that helps me with food preparation… or do I? Would it just add that extra bit of fun if it told me a random joke, or made a quirky comment about my having forgotten to close the fridge door again? I would love to hear what you think in the comments below.

Further reading on the topics in this blog:

  • Sharkey, A.J.C. and Sharkey, N.E. (2011). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, Springer.
  • Salvini, P. (2015). On Ethical, Legal and Social Issues of Care Robots. In Intelligent Assistive Robots (pp. 431-445). Springer International Publishing.
  • Caleb-Solly, P., Dogramadzi, S., Ellender, D., Fear, T., & Heuvel, H. V. D. (2014). A mixed-method approach to evoke creative and holistic thinking about robots in a home environment. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (pp. 374-381). ACM.
  • British Standard for personal care robots (BS EN ISO 13482:2014)
  • BSI Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems (BS 8611:2016)

[Image: Robots in the Anchor Assisted Living Studio at the Bristol Robotics Laboratory, UWE, Bristol]


3 thoughts on “Robots for Independent Living: Do they ‘care’ or are they ‘assistive’?”

  1. Your final question is an interesting one. My feeling is that it’s not really possible to know what it would be like to have a relationship – even a very thin one – with an emotive robot without actually trying it.

    That requires a lot of emotional competence from the robot. Without that, you won’t have anything you might genuinely relate to; you just have a robot that can play a few canned jokes, and you are immediately in Sirius Cybernetics territory. I don’t think anyone wants that.

    So, perhaps the question is, what level of emotional competence can you get a robot to exhibit?
