Artificial Pancreas - dialogue
Waag BY-NC-SA

Looking for the Pancreas’ voice

This blog post series follows the ongoing process of our 'Measuring Less to Feel More' project, in which we are collaborating with Inreda Diabetic, a Dutch company developing the world's first Artificial Pancreas. In case you missed the first chapter, it’s here!

I will go through how we are approaching the design of the interaction/interfacing between the Artificial Pancreas (AP) and its user. If one sees the shape and form of the AP as the AP’s body, then by analogy we could say that its interface is something like its voice. So the starting questions (to put it in an amusing way) would be something like: What kind of voice should an Artificial Pancreas have? What kind of dialogue should a user have with their (Artificial) pancreas? Should there even be a dialogue? As someone pointed out: "That’s weird, you don’t think of your pancreas in that way, right?"

More seriously, one of the most straightforward ways for me to explore those research questions and understand how future users would like to deal with their Artificial Pancreas was simply to meet them. That is why, before joining the Inreda team last month, I spent some time in February and March meeting people in order to find out how they would receive, perceive and use such a device.

As those interviews were going to serve as a first, inspirational starting point, I deliberately decided to meet a very small number of people (five), and oriented the interviews in a qualitative direction rather than a quantitative one. The interviews were composed of a Q&A (on the lifestyle implications of living with diabetes, and the relationship the interviewees have with their current device) and a more projective storytelling phase (on how they would use an AP and what it would mean for them). The flow of the interviews was deliberately kept quite organic, in order to encourage discussion. I am currently wrapping up the outcomes into a booklet/report presenting my insights from those first sessions. That booklet should be available by the time of my next blog post, so stay tuned.

However, I can already share a couple of those insights. About using an Artificial Pancreas, people were overall very enthusiastic. This device will definitely help them handle their diabetes: "That would be a big help", "Much easier than the way I do it now", "For me, that would give me tranquillity. Less stress. Yes, I think it would make me less stressed". Nonetheless, all of that hinges on one condition: trust.

Indeed, with the glucose meter (the first step of ‘Measuring Less to Feel More’) the focus was on simplifying the feedback from the device so that people could become less ‘dependent’ on it, but the way of doing things (measuring your blood sugar yourself by pricking your finger) remained the same. There, the glucose meter (like any current glucose meter or insulin pump) was a tool. A tool you used to regulate your own blood sugar level yourself. Now, the AP being an automatic closed-loop system, it seems to leave the ‘tool category’ and belong more to the ‘machine category’. A machine that does the job "by itself" for its user’s benefit, yes. But one that also seems to take the feeling of control out of the user’s hands: "I am used to have control for 31 years, I think it is not easy to change and trust something new?", "I would need to pass on my judgement to a machine. Is it even possible?"

Everybody agreed that trust is not something you get: trust is something you build. When asked which factors would contribute to building trust in the AP, the answers covered a wide range: from the most objective/societal aspects (such as regulations) to the most personal convictions ("if it compares with my own feelings"). In between, we find a category that might deserve some attention in designing the ‘voice’ of the AP: "It says what it does".

Artificial Pancreas reasons

Providing feedback appears to be a good way for the AP to be understood by its user and (step by step) gain his/her trust. You can actually imagine the situation between two humans:

Two people, unknown to each other, and one has to perform a vital task for the other. No one would be convinced if the task-performer did the job without saying a word. Not even a "just trust me" would be enough. But if they provided sufficient and clear explanation of what is being done and how the situation stands, that would clearly help build a trustful relationship between the two (on the obvious condition that what is done is done well).

Nobody likes a ‘black box’ anyway.

Going further, we could also imagine that after a certain amount of time, less feedback will be needed as trust reaches a higher level. So providing relevant feedback helps raise trust. But it seems that the more trust there is, the less feedback will be necessary.