Towards Machine Recognition of Facial Expressions of Pain in Horses
Swedish Univ Agr Sci, Dept Anat Physiol & Biochem, SE-75007 Uppsala, Sweden.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-5458-3473
Univ Calif Davis, Dept Comp Sci, Davis, CA 95616 USA.
Swedish Univ Agr Sci, Dept Anat Physiol & Biochem, SE-75007 Uppsala, Sweden.
2021 (English) In: Animals, E-ISSN 2076-2615, Vol. 11, no 6, article id 1643. Article, review/survey (Refereed). Published.
Abstract [en]

Simple Summary: Facial activity can convey valid information about the experience of pain in a horse. However, scoring of pain in horses based on facial activity is still in its infancy, and accurate scoring can only be performed by trained assessors. Pain in humans can now be recognized reliably from video footage of faces, using computer vision and machine learning. We examine the hurdles in applying these technologies to horses and suggest two general approaches to automatic horse pain recognition. The first approach involves automatically detecting objectively defined facial expression aspects that do not involve any human judgment of what the expression "means". Automated classification of pain expressions can then be done according to a rule-based system, since the facial expression aspects are defined with this information in mind. The other involves training very flexible machine learning methods with raw videos of horses with known true pain status. The upside of this approach is that the system has access to all the information in the video, without engineered intermediate methods that have filtered out most of the variation. However, a large challenge is that large datasets with reliable pain annotation are required. We have obtained promising results from both approaches.

Automated recognition of human facial expressions of pain and emotions is to a certain degree a solved problem, using approaches based on computer vision and machine learning. However, the application of such methods to horses has proven difficult. Major barriers are the lack of sufficiently large, annotated databases for horses and difficulties in obtaining correct classifications of pain because horses are non-verbal. This review describes our work to overcome these barriers, using two different approaches. One involves the use of a manual, but relatively objective, classification system for facial activity (Facial Action Coding System), where data are analyzed for pain expressions after coding using machine learning principles. We have devised tools that can aid manual labeling by identifying the faces and facial keypoints of horses. This approach provides promising results in the automated recognition of facial action units from images. The second approach, recurrent neural network end-to-end learning, requires less extraction of features and representations from the video but instead depends on large volumes of video data with ground truth. Our preliminary results clearly suggest that dynamics are important for pain recognition and show that combinations of recurrent neural networks can classify experimental pain in a small number of horses better than human raters.
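
To make the second, end-to-end approach above concrete, the sketch below shows a small recurrent two-stream classifier over RGB frames and optical flow, written in PyTorch. It is a minimal illustration, not the architecture evaluated in the review: the class names (StreamEncoder, TwoStreamPainClassifier), layer sizes, GRU temporal model, and late-fusion head are assumptions made for the example only.

# Minimal sketch (assumed architecture, not the authors' model) of a
# recurrent two-stream classifier for video-based pain recognition.
# Inputs are short clips of RGB frames and precomputed optical-flow frames.
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """Per-frame CNN features followed by a GRU over time, for one stream."""
    def __init__(self, in_channels: int, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, feat_dim),
            nn.ReLU(),
        )
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last_hidden = self.rnn(feats)   # (1, batch, hidden)
        return last_hidden.squeeze(0)      # (batch, hidden)

class TwoStreamPainClassifier(nn.Module):
    """Late fusion of an RGB stream and an optical-flow stream."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.rgb_stream = StreamEncoder(in_channels=3, hidden=hidden)
        self.flow_stream = StreamEncoder(in_channels=2, hidden=hidden)
        self.head = nn.Linear(2 * hidden, 2)  # logits for pain / no pain

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_stream(rgb), self.flow_stream(flow)], dim=1)
        return self.head(fused)

# Usage with random tensors standing in for a 10-frame clip.
model = TwoStreamPainClassifier()
rgb = torch.randn(1, 10, 3, 64, 64)   # RGB frames
flow = torch.randn(1, 10, 2, 64, 64)  # optical flow (x and y components)
logits = model(rgb, flow)             # shape: (1, 2)

Late fusion of the two stream representations is one common design choice for two-stream video models; the review's own experiments may combine the streams differently.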

Place, publisher, year, edition, pages
MDPI, 2021. Vol. 11, no 6, article id 1643
Keywords [en]
pain, facial expressions, objective methods, horse, computer vision, machine learning, deep recurrent two-stream network, convolutional networks, facial keypoint detection, facial action units
National Category
Computer Sciences; Veterinary Science
Identifiers
URN: urn:nbn:se:kth:diva-299017
DOI: 10.3390/ani11061643
ISI: 000665456000001
PubMedID: 34206077
Scopus ID: 2-s2.0-85106878511
OAI: oai:DiVA.org:kth-299017
DiVA id: diva2:1581955
Note

QC 20210727

Available from: 2021-07-27. Created: 2021-07-27. Last updated: 2024-01-17. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | PubMed | Scopus

Authority records

Broomé, Sofia; Li, Zhenghong; Kjellström, Hedvig
