Facial emotion recognition in major depressive disorder: A meta-analytic review sciencedirect.com/science…
πŸ‘︎ 23
πŸ’¬︎
πŸ‘€︎ u/Psychnews
πŸ“…︎ Jun 26 2021
🚨︎ report
India, are you ready for Emotion Recognition Technology?

tl;dr

In this explainer, we look at emotion recognition technology: how it works, the issues raised by its use by different authorities, the rights at stake, and the laws (if any) in place to regulate its use and misuse.

What is emotion recognition technology?

In the short-lived but extremely engaging television series "Lie to Me", the protagonist is the world's leading deception expert, who studies facial expressions and involuntary body language to identify when a person is lying. He does this by reading "micro-expressions": involuntary facial expressions that flash across a person's face for a fraction of a second and give away their actual feelings before the person masks them in an attempt to deceive.

The protagonist of this show was inspired by the real-life work of American psychologist Paul Ekman, who is renowned for a theory of universal emotions which holds that, "(o)f all the human emotions we experience, there are seven universal emotions that we all feel, transcending language, regional, cultural, and ethnic differences". This theory identifies seven universal facial expressions for these emotions — anger, disgust, fear, surprise, happiness, sadness and contempt — and is the basis for most of the recent development in artificial intelligence (AI) based emotion recognition technology.

Emotion recognition technology uses AI to identify and categorise emotions into these seven universal emotions, or a combination thereof, based on the facial expressions it perceives from the subject, and is used in conjunction with facial recognition technology. We recently came across this technology when the Lucknow Police announced its intention to use emotion recognition technology to track expressions of "distress" on the faces of women who come under the gaze of AI-enabled cameras in public places. The cameras would then automatically alert the nearest police station even before the woman in question takes any action to report an issue herself. Another troubling instance of the use of this technology was when Canon Information Technology, a Chinese subsidiary of Japanese camera maker Canon, last year unveiled a new workspace

... keep reading on reddit ➑

πŸ‘︎ 32
πŸ’¬︎
πŸ‘€︎ u/InternetFreedomIn
πŸ“…︎ Jun 21 2021
🚨︎ report
A camera system that uses AI and facial recognition intended to reveal states of emotion has been tested on Uyghurs bbc.co.uk/news/technology…
πŸ‘︎ 77
πŸ’¬︎
πŸ‘€︎ u/das_Eichhorn
πŸ“…︎ May 27 2021
🚨︎ report
[OC] Distribution of emotions during the presidential debates analyzed using an expression recognition neural network v.redd.it/506rhcerg1w51
πŸ‘︎ 44k
πŸ’¬︎
πŸ‘€︎ u/fredfredbur
πŸ“…︎ Oct 29 2020
🚨︎ report
Looking for participants for an online study investigating emotion recognition and working memory in adults with ADHD

Hello!

I’m part of a team at the University of Aberdeen conducting an online study as part of my postgraduate research project looking at emotion recognition in adults with ADHD. We're using a dynamic morphing task to investigate this.

In addition, we're exploring the role of attention/working memory in emotion recognition – we want to know if performing another memory task at the same time as we're trying to recognise emotions affects how quickly and accurately we do this.

The experiment takes roughly 30-40 minutes to complete, and includes two small questionnaires.

We are looking for people aged 18-45 with and without ADHD to take part. If you're interested, please follow this link to participate: https://tstbl.co/211-383

Please note: Requires a computer running Google Chrome. Tablets/mobile devices are not supported.

If you know anyone else who might be interested, please feel free to crosspost this or share this link with others!

If you have any questions regarding the study, please feel free to comment/DM me, or contact t24ah20@abdn.ac.uk.

Thanks!

PEC/4710/2021/4

πŸ‘︎ 37
πŸ’¬︎
πŸ‘€︎ u/z0c4t
πŸ“…︎ Jun 09 2021
🚨︎ report
The differences between sentiment analysis and emotion recognition. Pt. 1 of 4 - V.E.R.N. AI vernai.com/how-vern-gets-…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/aspleenic
πŸ“…︎ Jun 30 2021
🚨︎ report
A camera system that uses AI and facial recognition intended to reveal states of emotion has been tested on Uyghurs bbc.co.uk/news/technology…
πŸ‘︎ 46
πŸ’¬︎
πŸ‘€︎ u/Puffin_fan
πŸ“…︎ May 27 2021
🚨︎ report
More on Emotion Recognition vs Sentiment Analysis vernai.com/what-to-expect…
πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/aspleenic
πŸ“…︎ May 26 2021
🚨︎ report
India, are you ready for Emotion Recognition Technology? /r/india/comments/o4vtk2/…
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Iamt1aa
πŸ“…︎ Jun 21 2021
🚨︎ report
A camera system that uses AI and facial recognition intended to reveal states of emotion has been tested on Uyghurs in Xinjiang bbc.com/news/technology-5…
πŸ‘︎ 17
πŸ’¬︎
πŸ‘€︎ u/DarnPandas
πŸ“…︎ Jun 03 2021
🚨︎ report
AI emotion-detection software tested on Uyghurs. A camera system that uses AI and facial recognition intended to reveal states of emotion has been tested on Uyghurs in Xinjiang, the BBC has been told. bbc.co.uk/news/technology…
πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/reubenfinlay
πŸ“…︎ May 26 2021
🚨︎ report
Violent offenders show reduced attention orienting to the eyes while viewing faces; although offenders & controls show comparable emotion recognition performance, reduced eye gaze is linked to lower recognition for fearful faces bipartisanalliance.com/20…
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/jordiwmata
πŸ“…︎ Jun 16 2021
🚨︎ report
Decision Making with Emotion Recognition AI vernai.com/decision-makin…
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/aspleenic
πŸ“…︎ Jun 08 2021
🚨︎ report
Emotion recognition: can AI detect human feelings from a face? ft.com/content/c0b03d1d-f…
πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/candleflame3
πŸ“…︎ May 13 2021
🚨︎ report
1DCNN + LSTM for music emotion recognition

I am trying to recreate the model from this paper.

The task is to predict valence and arousal from a raw audio signal.

Our database is made of .mp3 files annotated on windows of 500ms.

The "feature extraction" is done by the multi-view CNN, fed then to a Bidirectional LSTM. The output should be a pair of values (Valence and Arousal).

I'm wondering on how the two sub-nets should be connected together in order to exploit the capability of LSTM to maintain temporal information.

Right now we have as input one excerpt of 500ms (22050 samples at 44.1kHz) with 1 channel.

Here's a sketch of our structure so far.

https://preview.redd.it/ex4gl3riph071.png?width=1131&format=png&auto=webp&s=d34ce26904f0be800f244b0f0bcabb8f857122e6

I feel like we need another dimension — for example, adding a TimeDistributed layer and flattening the output of the CNN — so that the input to the LSTM would be something like

(batch_size, timesteps, features)

with the timestep dimension given by the TimeDistributed layer and the features obtained by flattening or max-pooling, but I fear I am missing something.

Thanks in advance.
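One thing worth noting (a sketch of the shape logic, not the paper's actual model — the kernel, stride, and filter counts below are made up for illustration): a channels-last 1-D convolution already produces a 3-D tensor of shape (batch, timesteps, features), so no TimeDistributed or Flatten is needed between the CNN and the LSTM, as long as pooling only shrinks the time axis. A minimal NumPy demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, weights, stride):
    """Naive 1-D convolution. x: (batch, time, ch_in); weights: (kernel, ch_in, ch_out).
    Returns (batch, t_out, ch_out) -- the output is already 3-D."""
    batch, t, _ = x.shape
    kernel, _, ch_out = weights.shape
    t_out = (t - kernel) // stride + 1
    w = weights.reshape(-1, ch_out)            # (kernel*ch_in, ch_out)
    out = np.empty((batch, t_out, ch_out))
    for i in range(t_out):
        window = x[:, i*stride:i*stride + kernel, :].reshape(batch, -1)
        out[:, i, :] = window @ w
    return out

def max_pool1d(x, size):
    """Max-pool along the time axis only, preserving (batch, time, features)."""
    batch, t, f = x.shape
    t_out = t // size
    return x[:, :t_out*size, :].reshape(batch, t_out, size, f).max(axis=2)

# One 500 ms excerpt at 44.1 kHz, mono: (batch, 22050, 1)
x = rng.standard_normal((4, 22050, 1))

h = conv1d(x, rng.standard_normal((64, 1, 32)) * 0.01, stride=16)  # (4, 1375, 32)
h = max_pool1d(h, 4)                                               # (4, 343, 32)

# h is already (batch_size, timesteps, features): it can go straight into a
# bidirectional LSTM, whose final state a Dense(2) head would map to
# (valence, arousal). Flattening here would destroy the time axis.
print(h.shape)
```

The design point is simply that the conv stack's time axis *is* the LSTM's timestep axis; TimeDistributed is only needed when you have an extra leading sequence dimension (e.g. a sequence of spectrogram frames each processed by a 2-D CNN).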

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/cwr0ng
πŸ“…︎ May 21 2021
🚨︎ report
[Academic] Emotion recognition survey (Western Europeans or Russians only)

Hello, everyone! I am a psychology student and I am currently conducting a cross-cultural study (Western Europe - Russia) on emotion recognition. I would be very grateful if you would participate in my research. The survey is completely anonymous and will take no more than 30 minutes. You can take the survey in English or Russian (select the language in the upper right corner). Thank you very much in advance! Follow the link to complete the survey and find out all the additional information

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/YouDouble9096
πŸ“…︎ May 22 2021
🚨︎ report
What to expect from your emotion recognition software vernai.com/what-to-expect…
πŸ‘︎ 17
πŸ’¬︎
πŸ‘€︎ u/aspleenic
πŸ“…︎ May 12 2021
🚨︎ report
Audiovisual Emotion Recognition in the browser - get emotion from a face in pure JS arjunb.westus.cloudapp.az…
πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/101arrowz
πŸ“…︎ Apr 20 2021
🚨︎ report
[R] Emotion Recognition of the Singing Voice: Toward a Real-Time Analysis Tool for Singers
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/ZeroHour999
πŸ“…︎ May 08 2021
🚨︎ report
Wav2Vec 2.0 models that were trained on 3k hours of French, along with benchmarks showing cutting edge performance on ASR, SLU, speech translation, and emotion recognition tasks
πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/nshmyrev
πŸ“…︎ Apr 30 2021
🚨︎ report
China is home to a growing market for dubious β€œemotion recognition” – A new report says the tools rely on junk science and have the potential to erode human rights around the world. restofworld.org/2021/chin…
πŸ‘︎ 317
πŸ’¬︎
πŸ‘€︎ u/Exastiken
πŸ“…︎ Jan 26 2021
🚨︎ report
Help for emotion recognition in brain waves using an EEG

Hello, I have a small comprehension problem and need your help.

I am currently working on a method for emotion recognition in brain waves using an EEG.

The input parameters are the raw EEG data, and the output should be arousal and valence values between -1 and 1.

My steps so far:

  • Data cleaning -> remove artifacts and filtering the data.
  • Feature extraction.

The Features I used:

(Mean, Standard deviations, Means of the absolute values of the first differences, Means of the absolute values of the second differences, Skewness, Kurtosis, Variance, Peak-to-peak (PTP) amplitude, Integral approximation of the spectrum (Theta, LowerAlpha, UpperAlpha, Beta, Gamma), Linelength, Hjorth Activity, Hjorth Mobility, Hjorth Complexity, Petrosian fractal dimension, Hurst fractal dimension)

The output of the feature extraction looks like this: each row represents a participant trial, and each column represents a feature, divided into channels.

The channels are in the typical 10-20 system. (Fp1, Fp2, Fz ...)

So the feature table roughly looks like this:

Participant 0 FP1_Mean FP1_Variance ... FP2_Mean
Participant 1 FP1_Mean FP1_Variance ... FP2_Mean

Since this is my own data collected through a study, it is unfortunately not labeled, so it has to be clustered using an unsupervised clustering algorithm.

What would be the next steps? Can someone help me with this?

I study computer science, so I program all the algorithms myself in Python. But I don't know how to transform the output data so that it fits into a clustering algorithm and yields the valence and arousal values afterwards.

πŸ‘︎ 19
πŸ’¬︎
πŸ‘€︎ u/curlu
πŸ“…︎ Feb 19 2021
🚨︎ report
How good are your cockatiels at emotion recognition?

I am curious. Mine can pick up moderate emotion from voice, like "I don't understand" or "I fixed it". He makes a whistle (often imitating the melody of the words) if he gets it.

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Falco_cassini
πŸ“…︎ Apr 19 2021
🚨︎ report
China is home to a growing market for dubious β€œemotion recognition” – A new report says the tools rely on junk science and have the potential to erode human rights around the world. restofworld.org/2021/chin…
πŸ‘︎ 71
πŸ’¬︎
πŸ‘€︎ u/Exastiken
πŸ“…︎ Jan 26 2021
🚨︎ report
[ACADEMIC] Investigating the link between Alexithymia and emotion recognition in faces!

Hi Everyone! I am a BSc psychology student at the University of York and I would really appreciate your participation in a study for my dissertation.

I am investigating whether the impaired emotion recognition commonly associated with Autism can be better explained by Alexithymia instead. This study can be completed by anyone aged 18 or above who does not have a diagnosis of Autism Spectrum Disorder. The experiment involves a face identity and emotion recognition task followed by two questionnaires. There is also a depression and anxiety questionnaire which is entirely optional. As usual, participation is completely anonymous and, again, very greatly appreciated!

The implications of this research could help establish paradigms that would reduce misdiagnosis of ASD!

https://research.sc/participant/login/dynamic/22444227-BD01-45BB-9EE3-CE0A6B1D04F9

πŸ‘︎ 14
πŸ’¬︎
πŸ‘€︎ u/prakritij18
πŸ“…︎ Feb 04 2021
🚨︎ report
Scientists create online games to show risks of AI emotion recognition | Artificial intelligence (AI) - It is a technology that has been frowned upon by ethicists: now researchers are hoping to unmask the reality of emotion recognition systems in an effort to boost public debate. theguardian.com/technolog…
πŸ‘︎ 55
πŸ’¬︎
πŸ‘€︎ u/Gari_305
πŸ“…︎ Apr 05 2021
🚨︎ report
Audiovisual Emotion Recognition in the browser - get emotion from a face in pure JS arjunb.westus.cloudapp.az…
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/101arrowz
πŸ“…︎ Apr 20 2021
🚨︎ report
Emotion recognition survey

Hello, everyone! I am a psychology student and I am currently conducting a cross-cultural study (Western Europe - Russia) on emotion recognition. I would be very grateful if you would participate in my research. The survey is completely anonymous and will take no more than 30 minutes. You can take the survey in English or Russian (select the language in the upper right corner). Thank you very much in advance! Follow the link to complete the survey and find out all the additional information

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/YouDouble9096
πŸ“…︎ May 22 2021
🚨︎ report