Abstract
Time-series anomaly detection plays a critical role in a wide range of application domains, including finance, healthcare, and industrial systems monitoring. Despite notable progress in recent years, existing models often fall short in accuracy and robustness when applied to complex temporal data. In this study, we propose an improved anomaly detection framework that builds on an existing baseline model by replacing its traditional encoder–decoder architecture with a Transformer-based encoder–decoder enhanced with learnable positional encoding. The core contribution of this work is this architectural change, which enables more effective modeling of temporal dependencies and contextual information within the input sequences. Empirical evaluations on benchmark datasets demonstrate substantial performance gains over the original model, particularly in terms of F$_1$ score. These findings underscore the potential of Transformer-based approaches for advancing the state of the art in time-series anomaly detection.
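To make the described architectural change concrete, the following is a minimal sketch of a Transformer encoder–decoder with learnable positional encoding used for reconstruction-based anomaly scoring. It is not the paper's released code: the module names, hyperparameters, and the choice of per-time-step reconstruction error as the anomaly score are illustrative assumptions.

```python
# Illustrative sketch only; names, hyperparameters, and scoring are assumptions,
# not the paper's actual implementation.
import torch
import torch.nn as nn


class TransformerReconstructor(nn.Module):
    """Transformer encoder-decoder with a learnable positional encoding,
    used here for reconstruction-based time-series anomaly detection."""

    def __init__(self, n_features: int, d_model: int = 64, nhead: int = 4,
                 num_layers: int = 2, max_len: int = 512):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        # Learnable positional encoding: one trainable vector per time step.
        self.pos_embed = nn.Parameter(torch.zeros(1, max_len, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.output_proj = nn.Linear(d_model, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features)
        h = self.input_proj(x) + self.pos_embed[:, : x.size(1)]
        memory = self.encoder(h)          # contextualized representation
        decoded = self.decoder(h, memory)  # reconstruct the input window
        return self.output_proj(decoded)


def anomaly_scores(model: TransformerReconstructor, x: torch.Tensor) -> torch.Tensor:
    """Per-time-step reconstruction error, used as the anomaly score (assumed criterion)."""
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).mean(dim=-1)  # shape: (batch, seq_len)
```

In this sketch, positions are encoded by a trainable parameter tensor rather than fixed sinusoids, so the model can learn position representations suited to the data; windows whose reconstruction error exceeds a chosen threshold would be flagged as anomalous.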