Monday, 6 April 2009
Workshop summary
The workshop took place at CHI on Saturday 4th April 2009. Following introductions, we had two sessions in which participants worked in groups to develop evaluation plans for an example technology. In the first session, participants looked at two example home care technologies and scenarios. Key reflections from this session were:
- Relationship between designers and evaluators
- The disconnect between an ‘ideal’ evaluation plan, which starts vague, and the need to ‘sell’ specifics
- Mixed methods and mixed data (qualitative and quantitative)
- Compliance – how to incentivise, encourage and measure value for users
- How to get sufficient engagement with participants
- Need to have a range of participants – patients, clinicians, carers
- Iterative approach and its implications, e.g. it takes longer – does it provide adequate benefit?
- Different views of clinical trials as evaluation
- Which parameters to isolate?
- Scope of evaluation + goals of system determine methods and process
In the second session, participants looked at two example hospital technologies and scenarios. Key reflections from this session were:
- Usability evaluation onsite or in lab?
- Difficulty getting clinician time
- Importance of getting basic issues sorted before deployment
- Value of doing studies in hospital – but can still improve on the current state of the art
- Importance of maintaining safety
- Running two parallel systems – trade-offs
- How to understand what aspects of multiple components are having an impact – need to separate them?
- Impact on the whole team, even if the focus appears to be on only one person
- Can you separate effects? The sum is more than the individual parts – can’t be reductionist
- Importance of choosing right setting
After a poster session, we concluded by considering issues which still need to be further explored. The 'top issues' identified in this session were:
- What/how to preserve the integrity of the ‘ideal world’ in the real world? Often there is no control
- What does it make sense to look at as a whole, and what to pull apart? (technology and environment)
- We looked at a process approach, but what about levels (e.g. micro, organisational)? What are the appropriate levels?
- How to account for different values of different stakeholders?
- Where to undertake evaluations? What’s appropriate? Lab – simulation – in situ. How close to the real world does the simulation have to be?
- Need to do more studies of long-term impact? How to evaluate long-term trials? Are RCTs right?
- What constitutes evidence? For whom?
- How to ensure the design team knows enough for evaluation?
- How to account for all different roles of people using one technology?
- How do we work out in what context a technology has most benefit?
- How to generalise artifacts and methodologies
- Ethics, privacy, trust issues
- Inherent risks to clinician-patient relationships
- Accessing real patient data
- Being able to report findings from studies
- How can we evaluate real usage if people don’t tell us things or hide parts of what they do?
- How to capture and share evaluator experiences – what has worked/not worked?
- Why do we do evaluation? How to describe complex findings?
- How to share what matters to users/organisations? Can we do this across contexts/countries etc?
- How to capture domain knowledge of the clinical experts?
- What are parallels in other domains that we can use?
- How much do we need to specialise in this?
- How does new/changing technology change research?
- Let’s not lose sight of ‘health outcomes’
- Can we capture lessons learned?
These are issues that we can continue to explore and discuss within this blog. Other suggested ways of taking the discussion forward were:
- The special issue of the International Journal of Human-Computer Interaction
- A Google group as a less public discussion forum
- A LinkedIn group as a way of maintaining contact
All in all, it was a very productive day. Thanks to everyone who took part.