
Weakly Supervised Micro- and Macro-Expression Spotting Based on Multi-Level Consistency

Document Details


Indexed in: SCIE

Affiliation: [1]Univ Elect Sci & Technol China, Sichuan Canc Hosp & Inst, Sch Life Sci & Technol, Chengdu 610054, Peoples R China

Keywords: Videos; Feature extraction; Optical flow; Proposals; Training; Location awareness; Image segmentation; Annotations; Data mining; Optical losses; Micro- and macro-expression spotting; weakly supervised learning; multi-level consistency; multiple instance learning

Abstract:
Most micro- and macro-expression spotting methods for untrimmed videos suffer from the burden of video-wise collection and frame-wise annotation. Weakly supervised expression spotting (WES) based on video-level labels can potentially mitigate the complexity of frame-level annotation while still achieving fine-grained frame-level spotting. However, we argue that existing weakly supervised methods based on multiple instance learning (MIL) involve inter-modality, inter-sample, and inter-task gaps; the inter-sample gap arises primarily from differences in sample distribution and duration. We therefore propose a novel and simple WES framework, MC-WES, which uses multi-consistency collaborative mechanisms, comprising modal-level saliency, video-level distribution, label-level duration, and segment-level feature consistency strategies, to achieve fine frame-level spotting with only video-level labels, alleviating the above gaps and incorporating prior knowledge. The modal-level saliency consistency strategy focuses on capturing key correlations between raw images and optical flow. The video-level distribution consistency strategy exploits the difference in the sparsity of the temporal distributions. The label-level duration consistency strategy exploits the difference in the duration of facial muscle movements. The segment-level feature consistency strategy enforces that features under the same labels remain similar. Experimental results on three challenging datasets, CAS(ME)², CAS(ME)³, and SAMM-LV, demonstrate that MC-WES is comparable to state-of-the-art fully supervised methods.
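The MIL framing in the abstract (learning frame-level expression scores from only a video-level label) is commonly implemented by pooling per-frame scores into a single video score. The sketch below illustrates one such pooling scheme, top-k mean pooling; it is a generic, hypothetical illustration of MIL temporal pooling, not the authors' MC-WES code, and the names `mil_video_score` and `k_ratio` are assumptions.

```python
import numpy as np

def mil_video_score(frame_scores, k_ratio=0.25):
    """Aggregate per-frame expression scores into one video-level score
    by averaging the top-k frames (a common MIL temporal pooling).

    frame_scores: 1-D array of per-frame scores in [0, 1]
    k_ratio: fraction of frames treated as the "instance bag" evidence
    """
    num_frames = len(frame_scores)
    k = max(1, int(num_frames * k_ratio))
    top_k = np.sort(frame_scores)[-k:]  # k strongest frame responses
    return float(top_k.mean())

# A video labeled "contains an expression" should receive a high
# video-level score even if only a few frames respond strongly.
scores = np.array([0.05, 0.10, 0.90, 0.85, 0.08, 0.07, 0.06, 0.05])
video_score = mil_video_score(scores)  # averages the 2 strongest frames
```

With a pooling function like this, a standard classification loss on the video-level score trains the per-frame scores indirectly, which is what makes frame-level spotting possible from video-level labels alone.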

CAS Journal Partition:
Year-of-publication [2025] edition:
Major category | Zone 1: Computer Science
Subcategories | Zone 1: Computer Science, Artificial Intelligence; Zone 1: Engineering, Electrical & Electronic
Latest [2025] edition:
Major category | Zone 1: Computer Science
Subcategories | Zone 1: Computer Science, Artificial Intelligence; Zone 1: Engineering, Electrical & Electronic
JCR Partition:
Year-of-publication [2024] edition:
Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Q1 ENGINEERING, ELECTRICAL & ELECTRONIC
Latest [2024] edition:
Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Q1 ENGINEERING, ELECTRICAL & ELECTRONIC


First author's affiliation: [1]Univ Elect Sci & Technol China, Sichuan Canc Hosp & Inst, Sch Life Sci & Technol, Chengdu 610054, Peoples R China

