3 Clinical Trials for Various Conditions
The goal of this study is to learn about the use of eye gaze technology as an assessment of and intervention for visual skills, and its impact on occupational performance in children with cortical/cerebral visual impairment. The main questions the study aims to answer are:
* Does the use of eye gaze technology with graded visual activities improve visual abilities?
* Does an improvement in visual abilities improve occupational performance?
* What factors correlate with improved visual abilities?
Participants will complete pre-testing with the Canadian Occupational Performance Measure, the Cortical Visual Impairment (CVI) Range, the Sensory Profile, and the Sensory Processing Checklist for Children with Visual Impairment. They will then participate in eye gaze technology activities, using eye gaze software with graded visual games, for 20 minutes per day for 4 weeks. Observations of positioning, head/eye position, sensory processing, and the types of eye gaze activities used will be recorded during each session, along with pre-test, daily, and post-test percentage scores on the eye gaze activities. Finally, each child will complete post-testing with the Canadian Occupational Performance Measure and the CVI Range.
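As a minimal sketch of the scoring scheme this protocol describes, the snippet below logs hypothetical daily percentage scores and summarizes the pre-to-post change. All session numbers and score values are invented for illustration; they are not study data.

```python
# Hypothetical illustration of recording daily percentage scores on the
# eye gaze activities and summarizing pre-test vs. post-test change.
from statistics import mean

# One score per daily 20-minute session over the 4-week period
# (only the first few sessions shown; values are invented).
daily_scores = {1: 42.0, 2: 45.5, 3: 44.0}

pre_test_score = 40.0   # hypothetical pre-test percentage
post_test_score = 58.0  # hypothetical post-test percentage

change = post_test_score - pre_test_score
print(f"Pre-to-post change: {change:+.1f} percentage points")
print(f"Mean daily score: {mean(daily_scores.values()):.1f}%")
```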
This is a randomized, pilot interventional study in participants with visual field deficit (VFD) caused by cortical lesion. Damage to the primary visual cortex (V1) causes a contra-lesional, homonymous loss of conscious vision termed hemianopsia: the loss of one half of the visual field. The goal of this project is to develop and refine a rehabilitation protocol for VFD participants. It is hypothesized that visual restoration training using moving stimuli, coupled with noninvasive current stimulation over the visual cortex, will promote and speed up recovery of visual abilities within the blind field in VFD participants. Moreover, visual recovery is expected to correlate positively with reduction of the blind field, as measured with traditional visual perimetry: the Humphrey visual field test or an eye-tracker-based visual perimetry implemented in a virtual reality (VR) headset. Finally, although results will vary among participants depending on the extent and severity of the cortical lesion, it is expected that the largest increases in neural response to moving stimuli in the blind visual field, measured in the cortical motion area, will occur in the participants who show the largest behavioral improvement after training. The overarching goals for the study are as follows: Group 1a will test the basic effects of transcranial random noise stimulation (tRNS) coupled with visual training in stroke cohorts, including (i) both chronic/subacute ischemic and chronic hemorrhagic VFD stroke participants, and (ii) longitudinal testing up to 6 months post-treatment. Group 1b will test the effects of tRNS coupled with visual training delivered on a virtual reality (VR) device in stroke cohorts, again including both chronic/subacute ischemic and chronic hemorrhagic VFD stroke participants. Group 2 will examine the effects of tRNS alone, without visual training, also including chronic and subacute VFD stroke participants and longitudinal testing.
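The expected link between visual recovery and blind-field reduction is a correlation analysis. The sketch below shows, with invented per-participant numbers, how such a correlation might be computed; it is not the study's analysis plan, and the variable names and values are hypothetical.

```python
# Hypothetical correlation between behavioral improvement after training
# and reduction of the blind field measured by perimetry.
import numpy as np
from scipy.stats import pearsonr

# Invented per-participant values: percent improvement on the training
# task vs. blind-field reduction (degrees of visual field recovered).
training_improvement = np.array([12.0, 30.5, 8.2, 25.1, 18.4])
blind_field_reduction = np.array([3.1, 9.8, 1.5, 7.2, 5.0])

r, p = pearsonr(training_improvement, blind_field_reduction)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```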
The overarching objective of this project is to transform access to assistive communication technologies (augmentative and alternative communication) for individuals with motor disabilities and/or visual impairment, for whom natural speech is not meeting their communicative needs. These individuals often cannot access traditional augmentative and alternative communication because of their restricted movement or visual function. However, most such individuals have idiosyncratic, body-based means of communication that are reliably interpreted by familiar communication partners. The project will test artificial intelligence algorithms that gather information from sensors or camera feeds about the idiosyncratic movement patterns of the individual with motor/visual impairments. Based on the sensor or camera feed information, the artificial intelligence algorithms will interpret the individual's gestures and translate the interpretation into speech output. For instance, if an individual waves their hand as their means of communicating "I want", the artificial intelligence algorithm will detect that gesture and prompt the speech-generating technology to produce the spoken message "I want." This will allow individuals with restricted but idiosyncratic movements to access the augmentative and alternative communication technologies that are otherwise out of reach.
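As a minimal sketch of the pipeline this paragraph describes (sensor features, then gesture recognition, then a personalized spoken message), the snippet below uses a nearest-centroid classifier over hypothetical feature vectors. This is not the project's actual algorithm: the gestures, feature values, and the speak() stub are all assumptions; a real system would operate on live camera/sensor streams with a model trained per user.

```python
# Hypothetical gesture-to-speech pipeline for idiosyncratic movements.
import numpy as np

# Per-user calibration: mean feature vector for each idiosyncratic gesture.
gesture_centroids = {
    "hand_wave": np.array([0.9, 0.1, 0.3]),
    "head_tilt": np.array([0.1, 0.8, 0.2]),
}

# Per-user mapping from recognized gesture to the message it stands for.
gesture_to_message = {
    "hand_wave": "I want",
    "head_tilt": "All done",
}

def classify(features: np.ndarray) -> str:
    """Nearest-centroid classification of a feature vector to a gesture."""
    return min(gesture_centroids,
               key=lambda g: np.linalg.norm(features - gesture_centroids[g]))

def speak(message: str) -> None:
    """Stand-in for the speech-generating device's text-to-speech call."""
    print(f"[TTS] {message}")

# Example: an incoming feature vector close to the "hand_wave" centroid
# triggers the spoken message "I want".
speak(gesture_to_message[classify(np.array([0.85, 0.15, 0.25]))])
```

The per-user centroid and message tables reflect the paragraph's premise that each individual's movement repertoire is idiosyncratic: recognition is calibrated to one person rather than to a universal gesture vocabulary.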