
Unspoken voices: Gathering perspectives from people who use Alternative and Augmentative Communication (AAC)

This blog post is from Katherine Broomfield, Speech and Language Therapist, Gloucestershire Care Services NHS Trust. Kath has recently been awarded an NIHR Doctoral Research Fellowship; she will start her PhD with Prof Karen Sage at Sheffield Hallam in 2017 and will be working with our team as part of this.


I lead the local AAC service in Gloucestershire, part of the adult speech and language therapy service. We assess and provide basic communication aids such as low-tech, paper-based systems and direct-access, high-tech devices. In a quest to improve our service, I was interested in how to strengthen the quality of the assessment and support that we provide to people in need of communication aids. I also wanted to understand how to improve people's experience of using them. In 2014, I secured funding from Health Education England South West to carry out a clinical academic internship at the Bristol Speech and Language Therapy Research Unit, under the supervision of Professor Karen Sage. The objectives of the internship were to: a) search the research literature on how best to support the implementation of communication aids, b) carry out interviews with service users and c) consider areas for further research.

The literature search uncovered limited information about why some people use communication aids effectively and others do not, what 'successful communication' means to people who rely on communication aids, or what they feel best supports them to achieve it. The service users I interviewed reported very different views on successful communication aid use. They also provided some interesting insights into how to improve the support that NHS services provide when issuing AAC equipment. However, the number of participants was small and they were all adult users of one particular device. By the end of the internship, I had generated more questions than I had answered.

I chose to apply to the National Institute for Health Research (NIHR) for funding to carry out further research into the perspectives of users of communication aids. In February 2016 Prof Karen Sage relocated from Bristol to a post at Sheffield Hallam University (SHU) Centre for Health and Social Care Research. This provided me with the opportunity to establish a team to help me with my research project from the vibrant health research community in Sheffield and, more specifically, to approach Simon Judge at the Barnsley Assistive Technology Team. Simon agreed to join Prof Karen Sage, Prof Karen Collins (SHU) and Prof Georgina Jones (Leeds Beckett University) in supporting me to develop my research proposal, complete the funding application and, if successful, to supervise me while carrying out the research.

At the end of last year I was awarded NIHR funding. My project aims to develop a greater understanding about why people do and do not use communication aids and how they view success with using them. I plan to carry out a more extensive and specific literature review focusing on user perspectives and outcomes for communication aids. I will then complete a series of interviews with young people and adults who use communication aids at different points across the AAC pathway – from assessment and provision of equipment to the use of communication aids in people’s homes, schools and communities. The ultimate aim of the project is to develop a patient reported outcome measure (PROM). The PROM will be made available for use by NHS services to gather the perspectives of people who use communication aids about the equipment and the support they receive.

The project is one aspect of my PhD training programme (the Clinical Doctoral Research Fellowship, or CDRF), which is targeted at developing practising NHS clinicians into academic researchers. This scheme is part of the current drive to improve the use of research evidence within NHS services.

I am really looking forward to working closely with people who use communication aids and their friends, families and carers throughout this project. I am also excited about the opportunity of working closely with the team at Barnsley Assistive Technology whose clinical work and research I have admired for some time. I will be setting up my own blog imminently to keep people informed about the project – but in the meantime, I am contactable via Simon and the team. I am passionate about good communication and I still have a lot to learn about AAC, so please get in touch!

– Katherine Broomfield, Speech and Language Therapist, Gloucestershire Care Services NHS Trust.

Creating a Personalised Synthetic Voice (Voice Banking)

‘Voice banking’ has been discussed quite a lot within the AAC field recently and so, as a team, we have been exploring this and other similar techniques in more depth.

As the team member who volunteered to test out the packages available to create a personalised synthetic voice, I have spent what feels like weeks recording countless phrases! I have now created two personalised voices, one using ModelTalker and one using My-Own-Voice. This post is a summary of the experience of creating these voices (including example recordings of the result). We also have a fact sheet on our website with information about the options.

ModelTalker

Last week it was fantastic to finally hear the results of all the hours of recording into the ModelTalker voice recorder. The process has been frustrating to say the least! I have a good quality voice, had access to the quiet surroundings required and a decent microphone, and was pretty competent with the IT needed to set up the ModelTalker recordings – but even so it wasn't plain sailing. At times it seemed impossible to get a "green" light recording with all parameters at an acceptable level, so individual phrases had to be re-recorded numerous times, even though, as far as I was aware, the environment, recording volume and voice quality were unchanged. The tiring factor, even for me with a robust voice, was telling, and it was difficult to record successfully for more than a couple of hours at a time. The connection was sometimes lost and the programme sometimes froze, which again slowed the recording process and sometimes meant that the time set aside to record could not be used.
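For the technically curious: ModelTalker does not publish the detail of what sits behind the green/amber/red lights, so the minimal Python sketch below is purely our guess at the kind of level and noise checks a recorder like this might run on each phrase. All of the thresholds and messages are invented for illustration.

    # A minimal sketch, NOT ModelTalker's actual algorithm: the sort of
    # per-phrase checks that might sit behind a green/amber/red light.
    import numpy as np
    import soundfile as sf  # third-party: pip install soundfile

    def check_phrase(path: str) -> str:
        audio, rate = sf.read(path)          # floats in the range -1..1
        if audio.ndim > 1:
            audio = audio.mean(axis=1)       # mix stereo down to mono
        peak = float(np.abs(audio).max())
        rms = float(np.sqrt(np.mean(audio ** 2)))
        # Estimate background noise from the quietest 10% of 20 ms frames.
        frame = int(0.02 * rate)
        usable = len(audio) // frame * frame
        frame_rms = np.sqrt((audio[:usable].reshape(-1, frame) ** 2).mean(axis=1))
        noise = max(float(np.percentile(frame_rms, 10)), 1e-9)
        snr_db = 20 * np.log10(rms / noise)

        if peak >= 0.99:
            return "red: clipped - speak further from the microphone"
        if rms < 0.02:
            return "red: too quiet - speak closer or raise the gain"
        if snr_db < 25:
            return "amber: usable, but background noise is high"
        return "green: all parameters at an acceptable level"

    print(check_phrase("phrase_0001.wav"))

Checks like these interact: lowering the gain to avoid clipping also lowers the signal-to-noise ratio, which may explain why a phrase can keep failing even when nothing in the room has obviously changed.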

The ModelTalker documentation suggests that the recordings can be completed in 6-8 hours. However, it took me far longer than this and a lot of my phrases were submitted as "amber" recordings, meaning acceptable but not perfect. Consequently, it is very evident that most of our clients would probably need a comprehensive level of support during the recording process to get the best outcome.

However, on hearing the final product I can safely say that the voice that has been created definitely sounds like me – and creating and using this voice is currently completely free! The speech produced can be a little disjointed at times, irregular English spelling patterns cause it some pronunciation difficulties and the intelligibility decreases in longer utterances, but the overall quality definitely shows features of my voice.

Alongside this process I have begun working with a client, Greg, who, with support from his local therapist, Jennifer Benson, has created his own ModelTalker voice which he now uses on Predictable. To hear about the process I had just undertaken from a client's perspective has been fascinating. Greg came across similar frustrations to those I have highlighted during the recording process, and the tiring effect was very significant for him. However, for Greg, the pay-off of having his own voice on his communication app seems to outweigh any of the difficulties he encountered in creating it. Greg has made a video about his voice banking experience at: https://youtu.be/DYdSTNDYBWE.

My-Own-Voice

After the hours taken to record my voice using the ModelTalker platform, creating a personalised voice using "My-Own-Voice" seemed a lot less time consuming. Although the total number of phrases to be recorded was similar to ModelTalker, the "My-Own-Voice" voice only took around 5 hours to make. Few re-recordings were needed and generally the process was less disrupted, with the recording working consistently each time I attempted a phrase. Navigation through the phrases as I recorded them seemed a lot more intuitive and I was able to record one phrase after another seamlessly.

Once more the voice created does sound like me, although there are some definite issues! Certain speech sounds do not sound at all as they should, particularly word endings and some word-initial consonant blends. In connected speech, intonation patterns can sound a little odd at times and the boundaries between words sound quite slurred, which definitely has a negative impact on intelligibility.

The My-Own-Voice process is free for recording and creating a voice, but you then need to apply for a costing to use the voice you have created on a communication aid.

Comparing the Results

For the purposes of comparison, I recorded a phrase as a direct voice recording and then created it using my personalised synthetic speech with "ModelTalker" and "My-Own-Voice". You can compare the results for yourself by listening below. The phrase I used was: "I am sat here, writing this blog, to allow you to compare the personalised voices that can be created by two web based programmes; ModelTalker and My-Own-Voice".

My recorded voice

ModelTalker synthesised speech.

My-Own-Voice synthesised speech.

Fact Sheet

Feel free to download the information sheet we have produced about voice and message banking from our website. This summary includes some hints and tips to think about when considering recording words and/or phrases or creating a synthetic personalised voice.

In a future post we will discuss the difference between Voice Banking, Message Banking and other approaches to retaining identity in the use of communication aids. As we write this post, the breaking news is that Amy Roman, an AAC specialist in the USA, has created a resource for message banking – MessageBanking.Com. This is discussed on this thread on the fantastic AT ALS email list.

Choosing the right vocabulary package – a Barnsley AT Team study session

Members of the Barnsley AT Team met recently to spend some time looking at the evidence behind several different vocabulary/language packages for communication aids. This is a key and frequent discussion within the team and we organised a session to help develop our thinking on this. It also links with the forthcoming research project we will be involved with, funded by the National Institute for Health Research and in collaboration with Manchester Metropolitan University (more on this later!).

Vocabulary packages can broadly be grouped into: taxonomic (categories), schematic (activities), alphabetic, iconic encoding, visual scene displays and idiosyncratic (personalised). Some systems might use several of these methods to organise vocabulary. For this session we chose to look at the evidence around Visual Scene Displays (VSDs) and considered the following literature (some published in peer-reviewed journals, some 'grey' literature):

  • Drager, Light et al (2003) The Performance of Typically Developing 2½ Year Olds on Dynamic Display AAC Technologies with Different System Layouts and Language Organizations
  • Light, Drager & Wilkinson (2010) Designing effective visual scene displays for young children.

This work suggests that visual scene displays may:

  • be most beneficial for young children and people with significant cognitive or linguistic difficulties (e.g. Learning Disability, Aphasia, Brain Injury);
  • provide a high level of contextual support;
  • enable communication partners to engage and support the person with communication needs by providing a framework and context from which they can scaffold a conversation;
  • support real life events and experiences as they happen, by providing a supportive narrative;
  • be highly personalised/replicate real life experiences;
  • provide language in context;
  • shift the focus away from expressing wants and needs and towards social interaction and exchange of ideas and information;
  • reduce cognitive demands by reducing visual processing;
  • access linguistic concepts via episodic memory not semantic memory;
  • exploit human capacity for rapid visual processing of visual scenes.

The above studies also discuss the limitations of VSDs suggesting that:

  • Children with motor difficulties may find it harder to access hotspots on a VSD than symbol grids.
  • VSDs may be more visually complex than evenly spaced symbols in a grid.
  • VSDs are labour intensive to produce and maintain.
  • Jackson, Wahlquist and Marquis (2011) found that children performed better with a grid layout and made more mis-hits with VSDs.

We also looked at the paper "Critical Review: Which Design Overlay is Better Suited for Early Assisted AAC Intervention in Preschoolers: Visual Scene Displays or Traditional Grid Layouts?" (Kaempffer, 2013).

Kaempffer reviewed the literature on VSDs and found results of studies looking at VSDs to be inconclusive and limited. Kaempffer was also critical of methodology and statistical analysis used in studies into VSDs. Only one study has included children with communication needs and some studies have suggested grid layouts may be more appropriate.

Our team concluded that VSDs should still be considered as part of the AAC assessment process. However, although the literature suggests that emergent AAC users and adults with cognitive impairments may benefit from VSDs, from this session we could not see strong evidence pointing to particular groups of individuals or situations in which VSDs may be most useful.

A number of software packages support VSDs, including:

  • Tobii-DynaVox Compass, Sono Primo
  • MultiChat 15/Touch Chat HD app
  • Therapy Box Chatable & Scene and Heard
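For the technically minded, the minimal Python sketch below contrasts the two organisation styles discussed above. The class and field names are our own invention for illustration; they are not taken from any of the packages listed.

    # Illustrative data structures only - not the model used by any real package.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GridCell:
        symbol: str        # picture symbol shown in an evenly spaced cell
        message: str       # text spoken when the cell is selected

    @dataclass
    class GridLayout:
        rows: int
        cols: int
        cells: List[GridCell] = field(default_factory=list)

    @dataclass
    class Hotspot:
        # Region of the photograph, as fractions of its width and height.
        x: float
        y: float
        width: float
        height: float
        message: str       # language embedded in the context of the scene

    @dataclass
    class VisualSceneDisplay:
        photo: str                                 # personalised photo of a real event
        hotspots: List[Hotspot] = field(default_factory=list)

    # A VSD embeds messages in a meaningful scene, giving the contextual
    # support discussed above, but each scene must be built by hand.
    snack_time = VisualSceneDisplay(
        photo="kitchen_snack_time.jpg",
        hotspots=[
            Hotspot(0.10, 0.40, 0.20, 0.30, "I want juice"),
            Hotspot(0.60, 0.50, 0.25, 0.30, "More crackers please"),
        ],
    )

The "labour intensive to produce and maintain" limitation noted above falls straight out of this structure: every scene is a bespoke photograph with hand-placed hotspots, whereas a grid reuses a single symbol set.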

Have you used VSDs with a communication aid user? Have we missed some important literature? We would love to hear about your experience!

References:

Drager, K., Light, J., Speltz, J., Fallon, K., Jeffries, L. (2003). The Performance of Typically Developing 2½ Year Olds on Dynamic Display AAC Technologies with Different System Layouts and Language Organizations. Journal of Speech, Language, and Hearing Research, 46(2), 298-312.

Jackson, C., Wahlquist, J., Marquis, C. (2011). Visual Supports for Shared Reading with Young Children: the Effect of Static Overlay Design. Augmentative and Alternative Communication, 27(2), 91-102.

Kaempffer, A. (2013). Critical Review: Which Design Overlay is Better Suited for Early Assisted AAC Intervention in Preschoolers: Visual Scene Displays or Traditional Grid Layouts? Poster presentation at the University of Western Ontario. Unpublished and not peer reviewed, but available as a PDF.

Light, J., Drager, K., & Wilkinson, K. (2010, November). Designing effective visual scene displays for young children. ASHA Conference. Lecture conducted from Philadelphia, PA. Conference presentation available as a PDF.

Phonemes, Decisions, Identification, Services, Eye Tracking and Assessment!

Phonemes, Decisions, Identification, Services, Eye Tracking and Assessment: this is the range of diverse topics that members of the team will be presenting at the ISAAC 2014 conference later this month. ISAAC is the international conference relating to augmentative and alternative communication.

The Barnsley AT team is pleased to have had six papers accepted for this prestigious conference. A summary of all the papers is below. If you are attending ISAAC, please come and find Simon or Andrea to chat about these topics!

The work being presented at ISAAC has been supported by Devices For Dignity and Sparks Charity.


Accessible app developers of the future?

Yesterday I gave a presentation at Sheffield Hallam University following an invitation from Dr Peter O’Neill, Senior Lecturer and leader on modules including mobile applications and programming for computing. The students were from the BSc Mobile Application Development course and an MSc Group Project.

Considering that the audience could include the web and app developers of the future, this was an opportunity to remind them of the need to design for accessibility. To set the context, I explained the role of our service in assessing for and providing electronic assistive technology such as AAC, environmental control (EC) and computer access, and described how some of our clients access this technology. I illustrated well-established methods such as switch access, alternative keyboards and mice, eye gaze, voice recognition, screen-reading software and the use of the inbuilt accessibility features in Windows, iOS and Android.

This led to highlighting more recent technology developments which have the potential to be used as Assistive Technology – if developed in the right way:

Leap Motion – non-contact gesture input from hand and finger movement.

Google Glass – wearable computer and optical head mounted display.

Google 3D Sensors – Project Tango – phone with motion tracking and depth sensing.

Hopefully we enthused the students with the potential of using these novel technologies for Assistive Technology, and in thinking about accessibility in everything they do.

There are lots of exciting potential student projects in the area of Assistive Technology and Accessibility. We will continue to develop collaborations between the Barnsley AT Team and University groups such as Sheffield Hallam – hopefully, at some point, building on the Project Possibility model in the UK.

Natural Speech Technology User Group – Edinburgh

I (Andrea) attended the Natural Speech Technology User Group in Edinburgh last week. It was great to see some really interesting and groundbreaking work going on in developing both the technology and the clinical uses of speech synthesis and speech recognition. Heidi Christensen's Home Service project could be a real practical solution for some of our clients with limited physical access who wish to use their dysarthric voice to access environmental control.

As a speech therapist, I found the Ultrax project a really exciting new development in looking at ways of giving feedback on speech production.

A really fascinating day, some great contacts made and hopefully opportunities identified for increased joint working between Barnsley and the team at Edinburgh.

Further analysis – AAC team staffing

[Visualisation: AAC team staffing from the DfE dataset]

Someone asked what data could be extracted from the DfE dataset about the makeup of local AAC teams, so I made another visualisation of the data to try and answer this question. There are, as ever, caveats about this data – mainly that some of it is not based on a large number of responses; for example, not many services provided data about the banding of staff. Also, the averages should be calculated per population to be truly useful/indicative (which I could do, but which produces very small numbers that are harder to interpret easily).
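For anyone wanting to try the per-population calculation themselves, a minimal pandas sketch is below. The column names and figures are invented for illustration (the real dataset's fields differ); reporting staff per 100,000 population avoids the very small numbers mentioned above.

    # A hedged sketch of the per-population calculation; column names and
    # values are illustrative, not the actual DfE dataset fields.
    import pandas as pd

    df = pd.DataFrame({
        "service": ["A", "B", "C"],
        "wte_staff": [2.5, 1.0, 4.0],        # whole-time-equivalent AAC staff
        "population": [250000, 180000, 500000],
    })

    # Per 100,000 population keeps the numbers easy to interpret and compare.
    df["wte_per_100k"] = df["wte_staff"] / df["population"] * 100000
    print(df.sort_values("wte_per_100k", ascending=False))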
Hopefully this is useful and helps add to the profile of some local services delivering AAC.