Colour-coded original transcription text to increase its use in supervision and teaching of clinicians
Nicola Holmes
Hi guys, firstly thank you so much for changing my life, preventing burnout, and getting me back to loving why I do medicine! As a teacher and supervisor, it would be useful to have the original transcript text colour-coded so you can easily identify when the patient is speaking (say, black) and when the clinician is speaking (blue), with maybe a third colour for other speakers, e.g. kids. This would be useful for feedback when reviewing consultations: you instantly see who is doing most of the talking and can then look into the finer details, e.g. "see how you interrupted them here", etc. Thanks again, you heroes.
Anya Sharma
Merged in a post:
Editing Transcripts for accuracy and speakers
Ardee Brook
I have noticed in my notes that Heidi can confuse my voice with the voice of a client. Is there any way to include labels for each speaker in the sessions that I can then adjust if Heidi has mis-labeled them? e.g. 'speaker 1', 'speaker 2' which I can label as '(client name)' or 'clinician'.
When I have tried to correct a word that was mis-heard, I cannot edit it in the transcript itself; I have to fix it multiple times in my note instead. I would love to see a feature that allows me to edit the transcript directly.
Anya Sharma
Merged in a post:
Transcript speaker labels (diarization)
Nathan
This is crucial. I regularly have errors introduced into my notes because content is not correctly attributed to either myself or the patient, leading to significant re-writes across a large proportion of my notes.
Anya Sharma
Merged in a post:
Diarization
Tia Konis
Add a Speaker Diarization feature to Heidi so that Heidi will be able to discern who is speaking during a consult, and transcribe it accurately in the transcript.
Canny AI
Merged in a post:
Detect speakers to reflect practitioner treatment and action plan
Charlotte Ballisager
It would be helpful if Heidi could distinguish between the client and practitioner to more accurately reflect the client's subjective information and the practitioner's treatment modalities and recommendations. I find that Heidi largely interprets the transcript as the client's statements, and I have to go back and manually add my interventions and plan. It puts the action plan into the client's statements as if they had already occurred. This can be fixed by Heidi recognizing speakers like Otter AI does. Is this possible?
Tom
Hiya Ardee Brook, thanks for this post! I have a few more questions for you:
- How often do you encounter issues with speaker misidentification during sessions?
- Would you prefer the ability to edit transcripts directly within the session or as a separate post-session feature?
- Are there specific types of sessions or environments where speaker misidentification occurs more frequently?
Ardee Brook
Tom Thanks for your reply! For context, I am a speech therapist working with people of all ages.
- Typically at least once per client, often in discussion with parents, and for specific types of clients, particularly minimally-verbal children, my prompts are identified as the client themselves speaking, e.g. 'A used phrase "help please" spontaneously' (where the client is non verbal, and Heidi is picking up on my prompting).
- My thought is that if the transcript itself is annotated or colour-coded by speaker, I could then select and change the assigned speaker for each utterance as required. If this is more easily expressed as a post-session feature, where the transcript is interpreted and utterances are categorised by speaker, a drag-and-drop option in a table-style format would help me quickly review and adjust whenever the note attributes a speaker incorrectly, e.g. an additional tab next to the transcript and context where you can drag an utterance from the client column to the clinician column, or from the clinician column to a parent column, etc.
- The most common misidentification occurs for me when I model an utterance, action, sentence structure, or sound to a client. For example, if my session goal is to increase a child's use of the word 'more' to request something, I will provide multiple models and repeat the word alongside their non-verbal communication, sounds, or attempts.
Thanks for your time reviewing and considering my response, I am excited for this possible new feature!
Andrea
Merged in a post:
Easier-to-read transcript
Spiro Arkadianos
Can the transcript be separated into paragraphs when pauses occur, or better still, as per Otter, separated into doctor .... and patient .......
Andrea
Merged in a post:
Speakers in transcriptions
Ryne Zuzinec
I'd like to see speakers listed in the transcription.
Canny AI
Merged in a post:
Re-format the transcript so it can be read like a script to improve its readability.
Joseph Ormerod
There are various scenarios where going back to the whole transcript is useful. At the moment the format of the transcript is very basic and hard to read. I propose some templating is needed to make this more legible.
e.g.
Clinician speaks: "transcript"
Patient speaks: "Transcript"
Clinician speaks: "transcript"
Patient speaks: "Transcript"
Time stamps would be nice too.
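To make the proposal above concrete, here is a minimal sketch of how diarized output could be rendered in that script-style layout with timestamps. The `Utterance` structure and its field names are hypothetical, not part of Heidi's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str   # e.g. "Clinician" or "Patient"
    start: float   # seconds from the start of the session
    text: str

def format_timestamp(seconds: float) -> str:
    """Render a second offset as MM:SS."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes:02d}:{secs:02d}"

def render_script(utterances: list[Utterance]) -> str:
    """Lay out diarized utterances one per line, script-style."""
    return "\n".join(
        f'[{format_timestamp(u.start)}] {u.speaker}: "{u.text}"'
        for u in utterances
    )

transcript = [
    Utterance("Clinician", 0.0, "Are you still having pain?"),
    Utterance("Patient", 4.5, "Only when I walk up stairs."),
]
print(render_script(transcript))
```

Each utterance becomes one labelled, timestamped line, so the transcript reads like the dialogue template suggested above rather than an undifferentiated block of text.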
Andrea
Merged in a post:
The transcripts do not recognise different speakers
Adhiraj Joglekar
If, as a clinician, I asked "are you still continuing to have pain?", the AI-generated summary mistook this for the patient saying they are continuing to have pain.
Sarah Sultan
I find this happens a lot—plus Heidi confuses speakers, even when they have distinct voices. Once, I described a scintillating scotoma and Heidi mistakenly added “migraine with classical aura” to the pt's note—with an updated history and detailed treatment recommendations! Even when I clarify it's the physician speaking, Heidi will still misattribute info. Sometimes, I avoid casual conversation just to prevent errors. Yet Heidi will also dismiss crucial, extended interactions as small talk. AI is impressive but a work in progress.