[1]
“Optimizing Large‑Scale Language Model Inference via Firmware‑Level and Architectural Attention Sparsity,” Int J Mod Med, vol. 4, no. 10, pp. 14–20, Oct. 2025. Accessed: Jan. 15, 2026. [Online]. Available: https://intjmm.com/index.php/ijmm/article/view/78