Paper Details

Abstract

Time-series anomaly detection plays a critical role in a wide range of application domains, including finance, healthcare, and industrial systems monitoring. Despite notable progress in recent years, existing models often suffer from limited accuracy and robustness on complex temporal data. In this study, we propose an improved anomaly detection framework that builds on an existing baseline by replacing its conventional encoder–decoder with a Transformer-based encoder–decoder equipped with learnable positional encoding. The core contribution of this work is the integration of this architectural component, which enables more effective modeling of temporal dependencies and contextual information in the input sequences. Empirical evaluations on benchmark datasets demonstrate substantial performance gains over the original model, particularly in F1 score. These findings underscore the potential of Transformer-based approaches to advance the state of the art in time-series anomaly detection.
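To make the architectural idea concrete, the sketch below shows a minimal PyTorch version of a Transformer encoder–decoder with learnable positional encoding, used in a reconstruction-based anomaly-detection setup. This is an illustrative assumption, not the authors' implementation: the class names (LearnablePositionalEncoding, TSAnomalyTransformer), layer sizes, and the use of reconstruction error as the anomaly score are all hypothetical choices made for the example.

```python
# Minimal sketch (not the paper's code): Transformer encoder-decoder with
# learnable positional encoding; anomaly score = window reconstruction error.
import torch
import torch.nn as nn


class LearnablePositionalEncoding(nn.Module):
    def __init__(self, max_len: int, d_model: int):
        super().__init__()
        # One trainable vector per time step, added to the projected inputs.
        self.pos = nn.Parameter(torch.empty(1, max_len, d_model).uniform_(-0.02, 0.02))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        return x + self.pos[:, : x.size(1), :]


class TSAnomalyTransformer(nn.Module):
    def __init__(self, n_features: int, d_model: int = 64, max_len: int = 100):
        super().__init__()
        self.proj_in = nn.Linear(n_features, d_model)
        self.pos_enc = LearnablePositionalEncoding(max_len, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.proj_out = nn.Linear(d_model, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reconstruct the input window; poorly reconstructed windows
        # (high error) are flagged as anomalous.
        z = self.pos_enc(self.proj_in(x))
        out = self.transformer(z, z)
        return self.proj_out(out)


# Usage: score each window by its mean squared reconstruction error.
model = TSAnomalyTransformer(n_features=1)
window = torch.randn(8, 100, 1)                    # (batch, time, features)
recon = model(window)
score = ((recon - window) ** 2).mean(dim=(1, 2))   # higher = more anomalous
```

The design choice illustrated here is the one the abstract emphasizes: positional information is learned as free parameters rather than fixed sinusoids, letting the model adapt its notion of position to the temporal structure of the training data.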

Keywords
Contrastive Learning, One-Class, Deep Learning, Anomaly Detection
Contact Information
Huỳnh Công Việt Ngữ (Corresponding Author)
Department of Computing Fundamental, FPT University, Việt Nam
0968683264

All Authors (2)

Trần Chí Tâm

Affiliation: Department of Software Engineering, FPT University

Country: Việt Nam

Email: tamtcse182549@fpt.edu.vn

Phone: 0902519191

Huỳnh Công Việt Ngữ (Corresponding Author)

Affiliation: Department of Computing Fundamental, FPT University

Country: Việt Nam

Email: nguhcv@fe.edu.vn

Phone: 0968683264