I think we need more laboratory work to compare different techniques.
Interested in assistive technology, I wonder how eye tracking
relates to text input with switches and to low-tech augmentative
and alternative communication, as when a trained interpreter uses
a laser pen and an alphabet board.
By 'relates to' I mean either clinical data or laboratory work on the
number of hours of instruction needed, cost of ownership, client
satisfaction, the percentage of patients with certain conditions who
can be helped by it, quality of care, satisfaction of family and
caregivers, ease of maintenance, clinical data on input speed
in the long run, laboratory work on learning, and more. Of course,
I ask too much. Still, if anybody is considering comparing several
such techniques, I would like to be involved.
For several videos that illustrate my own work, including several
new switch-based techniques, please consider
Finally, I think that several such techniques might be
combined, and used in other forms of specialised computing.
For instance, one might gaze at an avatar, then send it
a coded signal like '...' = 's' to select from many different options
such as 'shoot', 'shout', 'sell', 'settle', or what not. This much
resembles pie menus, and could involve Morse code, word
prediction and eye gaze. Surely other interesting combinations
can be found.
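As a minimal sketch of the combination described above (all names here are illustrative, not an existing system): a Morse-coded switch signal decodes to a letter, which then narrows the set of options attached to the gazed-at avatar, the way '...' (Morse for 's') would select among 'shoot', 'shout', 'sell' and 'settle':

```python
# Hypothetical sketch: Morse input narrows gaze-selected options by first letter.
MORSE_TO_LETTER = {
    ".-": "a", "-...": "b", "-.-.": "c", ".": "e",
    "...": "s", "-": "t",  # small subset, enough for this example
}

def narrow_options(morse_signal, options):
    """Return the options whose first letter matches the decoded Morse signal."""
    letter = MORSE_TO_LETTER.get(morse_signal)
    if letter is None:
        return []  # unrecognised signal: nothing selected
    return [opt for opt in options if opt.startswith(letter)]

# Options offered by the avatar the user is gazing at (illustrative).
options = ["shoot", "shout", "sell", "settle", "trade"]
print(narrow_options("...", options))  # ['shoot', 'shout', 'sell', 'settle']
```

Word prediction could then rank the remaining candidates, so that in most cases one or two further signals suffice.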