Research approach concerning emotions of people with PIMD using physiological parameters and facial expressions #49

Open
ghost opened this issue Aug 3, 2018 · 0 comments

ghost commented Aug 3, 2018
Dear Sir or Madam,

For my PhD thesis, I want to focus on the combination of physiological parameters and facial expressions to analyse the emotional expressions of people with profound intellectual and multiple disabilities (PIMD).
During my search, I came across your software, and it may be suitable for my approach.

A short explanation of the target group:
Each person with PIMD is highly individual in terms of her/his competencies and impairments. However, some characteristics apply to a large number of affected persons:

  • profound intellectual disability (IQ < 20) combined with other disabilities (e.g., motor impairment, sensory disabilities such as hearing or visual impairment)
  • communication: usually no verbal language
  • usually no understanding of symbols
  • possibly no use of common behavioural signals (e.g., facial expressions that differ from those of people without disabilities) -> for example, “smiling” is not always a signal of happiness

So, the problem is that this target group cannot tell us directly how they feel. Therefore, I have created the following plan, and perhaps you can tell me whether it is possible with your software:

  1. I want to trigger specific emotional situations for the person with disabilities, based on information from her/his parents and caregivers.
  2. These situations should be recorded with a focus on the face.
  3. Afterwards, the person's individual facial expression of an emotion can be extracted -> several pictures from several situations that show the same facial expression, which stands for one emotion. The same procedure is repeated for other emotions.
  4. The last step is a field trial in which these emotions should be recognisable in daily life using machine processing/software (a rough code sketch of this step follows below).
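
To make steps 3 and 4 more concrete, here is a minimal sketch of the kind of analysis I have in mind. It assumes your software can export one feature vector per image (e.g., landmark coordinates or action-unit intensities) to a CSV file; the file name, the “emotion” label column, and the classifier are placeholders of my own, not part of your software:

```python
# Minimal sketch of steps 3-4, under the assumption that the analysis
# software can export one feature vector per image (e.g., landmark
# coordinates or action-unit intensities) to CSV. The file name, the
# "emotion" label column, and the SVM choice are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

df = pd.read_csv("person_A_features.csv")   # one row per labelled picture
X = df.drop(columns=["emotion"]).values     # exported facial features
y = df["emotion"].values                    # caregiver-informed emotion labels

# Hold out part of the pictures to check that the individual expression
# patterns generalise before attempting the field trial.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```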

Do you think I can train your software with the pictures I obtain from these special emotional situations, and then use the trained software to recognise the specific facial expressions in a completely new recording?
In other words, can the software detect the shown facial expression (which, in the final analysis, stands for an emotion) in a video?
Moreover, is it possible to obtain further details of the shown facial expression (such as keypoints) for use in further analysis?
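
For illustration, this is the kind of keypoint output I am hoping for. The sketch below uses OpenCV and dlib's standard 68-point landmark model purely as a stand-in, since I do not know your software's API; the model file and video path are placeholders:

```python
# Illustration of the kind of keypoint extraction I mean, using OpenCV
# and dlib's standard 68-point landmark model as a stand-in; this is not
# your software's API. The model file is dlib's usual download, and the
# video path is a placeholder.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("field_trial_recording.mp4")
keypoints_per_frame = []  # one (68, 2) array per detected face

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray):
        shape = predictor(gray, rect)
        pts = np.array([(shape.part(i).x, shape.part(i).y)
                        for i in range(shape.num_parts)])
        keypoints_per_frame.append(pts)

cap.release()
# These keypoints could be flattened into feature vectors and fed to the
# classifier trained on the labelled pictures above.
```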

To sum up, an answer would be very helpful.
Thanks!

You can also contact me directly: [email protected]

Best regards
