Originally, the project could import the associated dialogue and display it at the correct location in time. This was cool, but the user could only view the transcription, not edit it. This is how the original transcription looked:
130.3656830 132.4688772 it’s_uh_a_body_of_water
132.7277318 133.5042958 a_river_boat
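Each line of this format is a start time in seconds, an end time, and the phrase with spaces replaced by underscores. A minimal sketch of how such a line could be parsed (the function name `parse_line` is hypothetical, not part of the software):

```python
def parse_line(line):
    """Parse one transcription line into (start, end, text).

    Expects: "<start_seconds> <end_seconds> <underscore_joined_text>"
    """
    start, end, text = line.split()
    return float(start), float(end), text.replace("_", " ")

# Example using the first line of the original transcription:
start, end, phrase = parse_line("130.3656830 132.4688772 it's_uh_a_body_of_water")
```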
This format does not account for any lag in the participant's speech (for example: it's . . . uh . . . a body of water); each entry is just one line of text before the next person spoke. To analyze speech, it is important to be able to edit the transcription file while listening to it. The software now allows the transcription file to be edited and saved. This is demonstrated below.
The user makes a two-finger gesture to highlight the dialogue to be moved. A single finger within the highlighted area then drags the dialogue along the x-axis (time). Once the dialogue is aligned with the moment the participant actually said it, the user drags the highlighted area down. That phrase is now separated from its original phrase, represented by a green line.
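Under the hood, the drag-and-split interaction amounts to two operations on a phrase record: shifting its timestamps along the time axis, and splitting one phrase into two at a chosen point. The gestures above could be backed by something like the following sketch (function names and the tuple representation are assumptions for illustration, not the software's actual internals):

```python
def shift_phrase(phrase, dt):
    """Move a (start, end, text) phrase along the time axis by dt seconds."""
    start, end, text = phrase
    return (start + dt, end + dt, text)

def split_phrase(phrase, t, first_text, second_text):
    """Split a phrase at time t into two separate phrases.

    This mirrors dragging a highlighted region down to detach
    part of a phrase from its original.
    """
    start, end, _ = phrase
    return [(start, t, first_text), (t, end, second_text)]

# Nudge a phrase half a second later, then split off the hesitation:
moved = shift_phrase((130.37, 132.47, "it's uh a body of water"), 0.5)
parts = split_phrase(moved, 131.5, "it's uh", "a body of water")
```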
Another new feature is cross-correlation of the sound file and pupil diameter. Again, this uses the highlight capability: once an area is highlighted, the user presses the blue button, and a new graph area appears showing the correlation of the two signals.
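The core of this feature is a standard normalized cross-correlation. A minimal sketch with NumPy, assuming the two signals have already been resampled to a common rate and equal length (which the real audio and pupil-diameter streams would require):

```python
import numpy as np

def cross_correlate(a, b):
    """Normalized cross-correlation of two equal-length, non-constant signals.

    Returns an array of length 2*len(a) - 1; the center index is zero lag,
    and a value near 1.0 there means the signals line up strongly.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full")
```

Correlating a signal with itself yields a peak of 1.0 at zero lag, which is a convenient sanity check for the base algorithm.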
The correlation feature still needs a lot of work, but the base algorithm is implemented.