Replies: 2 comments
-
Hi! I have been using jsPsych for a while and have some ideas. I am just a novice programmer, but hopefully this information will be helpful!
-
Hi @wizofe, I think the biggest unknown is the answer to this question:
The main problem is that the synchronization accuracy of WebGazer and jsPsych is unknown. In my own experience running eye-tracking experiments with jsPsych and WebGazer, there has been a lot of variability in the sampling rate across individual participants, because the gaze predictions are calculated client side and depend on the participant's device characteristics. If knowing the exact timing of an event relative to stimulus onset is critical, then I'd recommend testing the performance carefully. If you can tolerate synchronization errors of roughly ~50 ms, then WebGazer is probably within that limit.
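One practical check is to log the per-trial gaze samples that the jsPsych WebGazer extension records and look at the inter-sample intervals for each participant. A minimal sketch, assuming jsPsych 7 and that trials run with the extension expose a `webgazer_data` array of `{x, y, t}` objects (with `t` in ms relative to trial onset); the helper name is mine:

```js
// Sketch: estimate a participant's effective WebGazer sampling rate from
// the gaze samples jsPsych stores per trial. Assumes jsPsych 7 and that
// each trial's data contains `webgazer_data` as [{x, y, t}, ...] with t in
// ms from trial onset; `summarizeSamplingRate` is a made-up helper name.
function summarizeSamplingRate(jsPsych) {
  const intervals = [];
  for (const trial of jsPsych.data.get().values()) {
    const samples = trial.webgazer_data || [];
    for (let i = 1; i < samples.length; i++) {
      intervals.push(samples[i].t - samples[i - 1].t);
    }
  }
  if (intervals.length === 0) return null;
  const mean = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  return {
    meanIntervalMs: mean,                   // average gap between samples
    maxIntervalMs: Math.max(...intervals),  // worst-case gap (dropped frames)
    effectiveHz: 1000 / mean,               // effective sampling rate
  };
}
```

A large spread between the mean and max interval is a good sign that a participant's device can't keep up, which is exactly the variability I've seen.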
-
Hi everyone,
I am quite new to jsPsych and I want to evaluate if it's suitable for an experiment I am going to run for my project. The idea is to present visual stimuli to create an automated visual acuity test.
Do you think jsPsych would be suitable to perform this analysis in the browser (I'm interested in using a webcam and a tablet)? Judging from the documentation, it seems it should work without big issues.
How easily can Python be incorporated if further analyses are needed? I am thinking of using a Flask backend to run Python there, or the Kivy framework. How about a pure JavaScript backend with jsPsych? What do people usually create their experiments with? For example, I am envisioning D3.js (I also read about Raphael in the publication) or Vue.js for a nice front-end presentation.
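To make the Flask idea concrete, I imagine the browser side would just POST the finished data as JSON to the server and all Python analysis would happen there. A rough sketch, assuming jsPsych 7; the `/save-data` route is only a placeholder for whatever the backend exposes:

```js
// Sketch: send all trial data to a Python (Flask) backend when the
// experiment ends. The `/save-data` route is a placeholder, not a real
// jsPsych or Flask convention.
const jsPsych = initJsPsych({
  on_finish: () => {
    fetch('/save-data', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: jsPsych.data.get().json(), // jsPsych serializes the data itself
    }).catch((err) => console.error('Saving data failed:', err));
  },
});
```

That would keep the experiment itself pure JavaScript and confine Python to the analysis side.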
In general, my biggest worry is the latency of the visual stimulus presentation. I need to detect a user's response to a stimulus change (an eye response takes 100–1000 ms, on average around 200–300 ms). For a 30 Hz camera (~33 ms per frame), that's around 3–10 frames after the event on the screen. Now, I want to know whether the WebGazer fork can capture the frames fast enough, in sync with the display.
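To make the display side of that measurable, I suppose I could timestamp the frame on which a stimulus change is actually painted and later align camera/gaze samples to it. A sketch using the common double requestAnimationFrame pattern; the function is my own, not jsPsych API:

```js
// Sketch: reveal a stimulus and resolve with a high-resolution timestamp
// close to when the browser presents the frame containing the change.
// Double rAF: the first callback fires before the change is rendered,
// the second once the previous frame has been handed to the display.
function showAndTimestamp(el) {
  return new Promise((resolve) => {
    el.style.visibility = 'visible';
    requestAnimationFrame(() => {
      requestAnimationFrame((paintTime) => resolve(paintTime));
    });
  });
}

// Usage: compare this onset estimate against gaze-sample timestamps.
// showAndTimestamp(document.querySelector('#stimulus')).then((t0) => { ... });
```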
I've read the timing mega-study ("The timing mega-study: comparing a range of experiment generators, both lab-based and online", Bridges et al.), and they used quite fancy equipment to detect those changes. Can anyone bring some insight on that?
Thanks for your ideas and help.