For my PhD thesis, I want to focus more deeply on combining physiological parameters and facial expressions to analyse the emotional expression of people with profound intellectual and multiple disabilities.
During my search, I came across your software, and it may be suitable for my approach.
A short explanation of the target group:
First, each person with profound intellectual and multiple disabilities is very individual in her/his competencies and impairments. However, some characteristics apply to a large number of affected persons:
- profound intellectual disability (IQ < 20) combined with other disabilities (e.g., motor impairment, sensory disabilities such as hearing or visual impairment)
- communication: usually no verbal language
- usually no understanding of symbols
- possibly no use of common behavioural signals (e.g., facial expressions that differ from those of people without disabilities) -> for example, "smiling" is not always a signal of happiness
So, the problem is that this target group cannot tell us directly how they feel. Therefore, I created the following plan, and maybe you can tell me whether it is possible (with your software):
1. I want to trigger specific emotional situations for the person with disabilities, based on information from her/his parents and caregivers.
2. These situations should be recorded with a focus on the face.
3. Afterwards, the person's individual facial expression of an emotion can be extracted -> several pictures from several situations that show the same facial expression, which stands for one emotion. The same procedure is repeated for other emotions.
4. The last step is a field trial, in which these emotions should be recognisable in daily life using machine processing/software.
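To make the training idea in the plan above concrete, here is a minimal sketch of a per-person classifier: each recorded situation is reduced to a feature vector and new recordings are assigned to the nearest emotion centroid. All feature values, labels, and dimensions below are invented placeholders, not output of any particular software:

```python
import numpy as np

# Hypothetical training data: each recording reduced to a feature vector
# (e.g. distances between facial keypoints). Values are placeholders.
train_features = np.array([
    [0.82, 0.10, 0.33],   # situation 1, expression labelled "content"
    [0.79, 0.12, 0.30],   # situation 2, same expression
    [0.20, 0.55, 0.71],   # situation 3, expression labelled "distressed"
    [0.25, 0.60, 0.68],   # situation 4, same expression
])
train_labels = ["content", "content", "distressed", "distressed"]

# Per-person model: one centroid per personally defined expression.
centroids = {
    label: train_features[
        [i for i, l in enumerate(train_labels) if l == label]
    ].mean(axis=0)
    for label in set(train_labels)
}

def classify(features):
    """Assign a new recording's feature vector to the nearest centroid."""
    return min(centroids, key=lambda label: np.linalg.norm(features - centroids[label]))

print(classify(np.array([0.80, 0.11, 0.31])))  # close to the "content" examples
```

The point is only that the "personal facial expression" approach does not require a generic emotion model; a small per-person model over whatever features the software exports could suffice.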
Do you think that I can train your software with the pictures I get from these specific emotional situations, and then use the trained software to recognise the specific facial expression in a totally new recording?
In other words, is it possible for the software to detect the shown facial expression (which, in the final analysis, will stand for an emotion) in a video?
Moreover, is it possible to get further details (like keypoints etc.) of the shown facial expression to use in further analysis?
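Regarding the keypoint question: if the software can export landmark coordinates per frame, derived measures for further analysis could be computed along these lines. The landmark names and coordinates below are invented placeholders, not the software's actual export format:

```python
import numpy as np

# Hypothetical per-frame landmark output: {name: (x, y)} in pixels.
# Names and values are placeholders, not a real export format.
frame_landmarks = {
    "mouth_left":  (210.0, 330.0),
    "mouth_right": (270.0, 332.0),
    "upper_lip":   (240.0, 318.0),
    "lower_lip":   (240.0, 352.0),
}

def distance(a, b):
    """Euclidean distance between two (x, y) points."""
    return float(np.linalg.norm(np.subtract(a, b)))

# Example derived features: mouth width and a normalised openness ratio,
# which could be tracked over time or correlated with physiological data.
width = distance(frame_landmarks["mouth_left"], frame_landmarks["mouth_right"])
openness = distance(frame_landmarks["upper_lip"], frame_landmarks["lower_lip"]) / width

print(round(width, 1), round(openness, 3))
```

Normalising by a stable distance (here, mouth width) keeps the measure comparable across frames when the face moves closer to or further from the camera.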
In short, any answer would be really helpful.
Thanks!
You can also contact me directly: [email protected]
Best regards