
An Efficient Human Action Recognition Framework with Pose-Based Spatiotemporal Features

dc.contributor.author Agahian, Saeid
dc.contributor.author Negin, Farhood
dc.contributor.author Kose, Cemal
dc.date.accessioned 2026-03-26T14:42:01Z
dc.date.available 2026-03-26T14:42:01Z
dc.date.issued 2020
dc.description Alp, Sait/0000-0003-2462-6166; en_US
dc.description.abstract In the past two decades, human action recognition has been among the most challenging tasks in the field of computer vision. Recently, accurate and cost-efficient extraction of skeleton information has become possible thanks to cutting-edge deep learning algorithms and low-cost depth sensors. In this paper, we propose a novel framework to recognize human actions using 3D skeleton information. The main components of the framework are pose representation and encoding. Assuming that human actions can be represented by spatiotemporal poses, we define a pose descriptor consisting of three elements. The first element contains the normalized coordinates of the raw skeleton joints. The second element contains the temporal displacement information relative to a predefined temporal offset, and the third element holds the displacement information relative to the previous timestamp in the temporal resolution. The final descriptor of the whole sequence is the concatenation of the frame-wise descriptors. To avoid problems caused by high dimensionality, Principal Component Analysis (PCA) is applied to the descriptors. The resulting descriptors are encoded with a Fisher Vector (FV) representation before being used to train an Extreme Learning Machine (ELM). The performance of the proposed framework is evaluated on three public benchmark datasets. The proposed method achieved competitive results compared to other methods in the literature. (C) 2019 Karabuk University. Publishing services by Elsevier B.V. en_US
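The three-element frame descriptor described in the abstract (normalized joint coordinates, displacement relative to a fixed temporal offset, and displacement relative to the previous frame) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' exact implementation: the per-frame centering/scaling normalization, the offset value, and the zero-padding of invalid displacements are all assumptions.

```python
import numpy as np

def frame_descriptors(seq, offset=5):
    """Illustrative sketch of the paper's three-part pose descriptor.

    seq: (T, J, 3) array of 3D skeleton joint coordinates.
    The normalization scheme and boundary handling are assumptions,
    not taken from the paper.
    """
    # Element 1: joint coordinates normalized per frame
    # (centered on the frame's mean joint, unit Frobenius scale).
    centered = seq - seq.mean(axis=1, keepdims=True)
    scale = np.linalg.norm(centered, axis=(1, 2), keepdims=True)
    norm = centered / np.maximum(scale, 1e-8)

    # Element 2: displacement relative to a predefined temporal offset.
    disp_offset = norm - np.roll(norm, offset, axis=0)
    disp_offset[:offset] = 0.0  # frames with no valid earlier frame

    # Element 3: displacement relative to the previous frame.
    disp_prev = norm - np.roll(norm, 1, axis=0)
    disp_prev[:1] = 0.0

    T = seq.shape[0]
    # Concatenate the three elements into one descriptor per frame;
    # the sequence descriptor is the stack of these frame descriptors.
    return np.concatenate(
        [norm.reshape(T, -1),
         disp_offset.reshape(T, -1),
         disp_prev.reshape(T, -1)],
        axis=1,
    )

# Example: 30 frames of a 20-joint skeleton.
desc = frame_descriptors(np.random.rand(30, 20, 3))
print(desc.shape)  # (30, 180): 3 elements x 20 joints x 3 coordinates
```

Downstream, the paper applies PCA to these descriptors, encodes them as Fisher Vectors, and trains an Extreme Learning Machine; those stages are omitted here.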
dc.identifier.doi 10.1016/j.jestch.2019.04.014
dc.identifier.issn 2215-0986
dc.identifier.scopus 2-s2.0-85065755858
dc.identifier.uri https://doi.org/10.1016/j.jestch.2019.04.014
dc.identifier.uri https://hdl.handle.net/20.500.14901/1739
dc.language.iso en en_US
dc.publisher Elsevier - Division Reed Elsevier India Pvt Ltd en_US
dc.relation.ispartof Engineering Science and Technology-An International Journal-JESTECH en_US
dc.rights info:eu-repo/semantics/openAccess en_US
dc.subject Skeleton-Based en_US
dc.subject 3D Action Recognition en_US
dc.subject Extreme Learning Machines en_US
dc.subject RGB-D en_US
dc.title An Efficient Human Action Recognition Framework with Pose-Based Spatiotemporal Features en_US
dc.type Article en_US
dspace.entity.type Publication
gdc.author.id Alp, Sait/0000-0003-2462-6166
gdc.author.scopusid 57156487700
gdc.author.scopusid 55807355500
gdc.author.scopusid 6602451231
gdc.author.wosid Alp, Sait/Nbk-9274-2025
gdc.author.wosid Köse, Cemal/V-9731-2017
gdc.author.wosid Negin, Farhood/Aao-7507-2021
gdc.description.department Erzurum Technical University en_US
gdc.description.departmenttemp [Agahian, Saeid] Erzurum Tech Univ, Fac Engn, Dept Comp Engn, TR-25050 Erzurum, Turkey; [Negin, Farhood] CNRS, Inst Pascal, UMR 6602, F-63171 Aubiere, France; [Kose, Cemal] Karadeniz Tech Univ, Fac Engn, Dept Comp Engn, TR-61080 Trabzon, Turkey en_US
gdc.description.endpage 203 en_US
gdc.description.issue 1 en_US
gdc.description.publicationcategory Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı en_US
gdc.description.scopusquality Q1
gdc.description.startpage 196 en_US
gdc.description.volume 23 en_US
gdc.description.woscitationindex Science Citation Index Expanded
gdc.description.wosquality Q1
gdc.identifier.wos WOS:000514548800017
gdc.index.type Scopus
