Paper Details

Abstract

In this work, we propose a tiny Transformer architecture with pre-processing, optimized for real-time gloss-to-text translation. Our method explicitly addresses the structural and grammatical divergences between sign language glosses and natural language text, enabling accurate and efficient translation in low-resource settings. The pre-processing stage enriches gloss sequences with linguistic cues, guiding the model to generate more fluent English sentences. Experimental results demonstrate that our approach improves BLEU scores by 17.85% compared to a baseline tiny Transformer without pre-processing. Furthermore, by reducing model complexity, we achieve up to a 50% decrease in latency, highlighting the suitability of the proposed system for real-time applications and deployment on edge devices.
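The abstract describes a pre-processing stage that enriches gloss sequences with linguistic cues before they reach the tiny Transformer. The paper's actual cue set is not specified here, so the sketch below is purely illustrative: the tense marker (`FINISH` as a past-tense cue, common in ASL glossing) and the `<q>` question token are hypothetical stand-ins for whatever cues the proposed system injects.

```python
def preprocess_gloss(gloss: str) -> list[str]:
    """Enrich a raw ASL gloss sequence with simple cue tokens.

    Illustrative sketch only: the specific cues below are hypothetical
    placeholders for the paper's linguistic pre-processing stage.
    """
    enriched = []
    for tok in gloss.strip().split():
        if tok in {"PAST", "FINISH"}:
            # Hypothetical: map ASL completive/past markers to a tense cue
            enriched.append("<tense:past>")
        elif tok.endswith("?"):
            # Hypothetical: surface a question cue as an explicit token
            enriched.append(tok.rstrip("?"))
            enriched.append("<q>")
        else:
            enriched.append(tok)
    return enriched

print(preprocess_gloss("YESTERDAY I FINISH EAT"))
```

The enriched token sequence would then be fed to the tiny Transformer in place of the raw gloss, giving the decoder explicit signals about tense and sentence type that glosses otherwise leave implicit.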

Keywords
sign language translation; tiny Transformer architecture; Gloss-to-Text; latency; American Sign Language gloss.
Contact Information
NGUYEN XUAN SAM (Corresponding Author)
Swinburne Vietnam, FPT University, Vietnam
0969938284

All Authors (1)

NGUYEN XUAN SAM (Corresponding Author)

Affiliation: Swinburne Vietnam, FPT University

Country: Vietnam

Email: samnx2@fe.edu.vn

Phone: 0969938284