Monday, 6 April 2009

Workshop summary


The workshop took place at CHI on Saturday 4th April 2009. Following introductions, we had two sessions where participants worked in groups to develop evaluation plans for an example technology. In the first session, participants looked at two example home care technologies and scenarios. Key reflections from this session were:

- The relationship between designers and evaluators
- The disconnect between an ‘ideal’ evaluation plan, which starts out vague, and the need to ‘sell’ specifics
- Mixed methods and mixed data (qualitative and quantitative)
- Compliance – how to incentivise, encourage and measure value for users
- How to get sufficient engagement with participants
- The need to have a range of participants – patients, clinicians, carers
- An iterative approach and its implications, e.g. longer timescales – does it provide adequate benefit?
- Different views of clinical trials as evaluation
- Which parameters to isolate?
- Scope of evaluation + goals of system determine methods and process

In the second session, participants looked at two example hospital technologies and scenarios. Key reflections from this session were:

- Usability evaluation onsite or in lab?
- Difficulty getting clinician time
- Importance of getting basic issues sorted before deployment
- Value of doing studies in hospital – but can still improve on the current state of the art
- Importance of maintaining safety
- Running two parallel systems – trade-offs
- How to understand what aspects of multiple components are having an impact – need to separate them?
- Impact on whole team even if focus looks like on only one person
- Can you separate effects? Sum more than individual parts – can’t be reductionist
- Importance of choosing right setting

After a poster session, we concluded by considering issues which still need to be further explored. The 'top issues' identified in this session were:

- What/how to preserve integrity of ‘ideal world’ in real world? Often no control
- What does it make sense to look at as a whole, and what to pull apart? (technology and environment)
- We looked at a process approach, but what about levels (e.g. micro, organisational)? What are the appropriate levels?
- How to account for different values of different stakeholders?
- Where to undertake evaluations? What’s appropriate? Lab – simulation – in situ. How close to the real world does the simulation have to be?
- Need to do more studies of long-term impact? How to evaluate long-term trials? Are RCTs right?
- What constitutes evidence? For whom?
- How to bring the design team to know enough for evaluation?
- How to account for all different roles of people using one technology?
- How do we work out in what context a technology has most benefit?
- How to generalise artifacts and methodologies
- Ethics, privacy, trust issues
- Inherent risks to clinician-patient relationships
- Accessing real patient data
- Being able to report findings from studies
- How can we evaluate real usage if people don’t tell us things or hide them?
- How to capture and share evaluator experiences – what has worked/not worked?
- Why do we do evaluation? How to describe complex findings?
- How to share what matters to users/organisations? Can we do this across contexts/countries etc?
- How to capture domain knowledge of the clinical experts?
- What are parallels in other domains that we can use?
- How much do we need to specialise in this?
- How does new/changing technology change research?
- Let’s not lose sight of ‘health outcomes’
- Can we capture lessons learned?

These are issues that we can continue to explore and discuss within this blog. Other suggested ways of taking this discussion forward were:
- The special issue of the International Journal of Human-Computer Interaction
- A Google group as a less public discussion forum
- A LinkedIn group as a way of maintaining contact

All in all, it was a very productive day – thanks to everyone who took part.

Sunday, 5 April 2009

Call for papers: Special issue of the International Journal of Human-Computer Interaction

Submission deadline: Friday 19th June 2009


New mobile, wireless and sensor-based technologies for supporting the provision of healthcare are increasingly pervasive. Within hospitals, technology is moving out of the consulting room and to the bedside via devices such as tablet PCs and interactive displays. Healthcare technologies are making their way into patients’ homes, in the form of telecare and assistive technology packages to enable carers and clinicians to remotely monitor patients and to enable patients to take greater control of their health. At the same time, both clinicians and patients have access to an increasing amount of information via a broad range of software solutions such as electronic patient records and computerised decision support systems. These changes raise a number of challenges regarding the evaluation of the use and impact of such technologies.

To follow on from a CHI2009 workshop on ‘Evaluating new interactions in healthcare: challenges and approaches’, we are pleased to announce a call for papers for a special issue of the International Journal of Human-Computer Interaction on this theme. This special issue invites original papers that contribute to our understanding of how to evaluate the use and impact of new technologies in healthcare. Researchers are encouraged to share their experiences and perspectives, and to reflect on the theory and methods of evaluating technologies designed to support the delivery of healthcare.

Research areas include, but are not limited to, the following:
- Benefits and limitations of different evaluation methods and how they can be adapted to meet the challenges of evaluating new healthcare technologies;
- Approaches for increasing the potential of lab-based studies and simulations to provide insight into how healthcare technologies may be used in practice;
- Novel methods for the evaluation of healthcare technologies once they have been deployed that meet the challenges of evaluating technologies in particular settings such as hospitals, patients’ homes or other community settings;
- Questions of who should be involved in an evaluation and innovative methods for capturing the experience of the patient and/or other stakeholders;
- Theoretical perspectives that can inform our approach to the evaluation of healthcare technologies;
- Insights from other domains that could provide a framework for evaluation.

Submissions
Papers should be submitted via email to Rebecca Randell (rebecca.randell.1@city.ac.uk) by Friday 19th June 2009 as either a Word document or PDF file.

Manuscripts should be between 9,000 and 14,000 words long (excluding references and tables). All manuscripts should be double-spaced with 1" margins on all sides, and pages should be numbered consecutively throughout the paper. You should use a 10-12 point Times New Roman font or a similar font. Authors should also supply a shortened version of the title for a running head, not exceeding 50 character spaces, an abstract of approximately 100-150 words, three to six keywords, and the author(s)' affiliation and location. Each submitted article must contain the author(s)' mailing address, telephone number and email. Literature referenced should be indicated in the text by author and date. Listed references should be complete, and journal abbreviations should conform to Chemical Abstracts style.


Important dates
Submission deadline: Friday 19th June 2009
Notification to authors: Friday 21st August 2009
Submission of revised papers: Friday 23rd October 2009

It is expected that the special issue will be published in mid-2010.

Guest editors
Rebecca Randell, City University London, UK.
Geraldine Fitzpatrick, University of Sussex, UK.
Stephanie Wilson, City University London, UK.

For further information, please contact Rebecca Randell at rebecca.randell.1@city.ac.uk.