Prior Transformer-based models adopt various self-attention mechanisms to capture long-range dependencies.
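For reference, the mechanism these models build on is the canonical scaled dot-product self-attention of Vaswani et al. (2017), in which every position attends to every other position in the sequence. The following is a minimal NumPy sketch of that standard formulation, not of any specific prior model discussed here; the function name, random weights, and toy shapes are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_self_attention(x, w_q, w_k, w_v):
    """Canonical scaled dot-product self-attention (illustrative sketch).

    x:             (seq_len, d_model) token representations
    w_q, w_k, w_v: (d_model, d_k) projection matrices (hypothetical weights)
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # project to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])         # pairwise similarity between all positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the whole sequence
    return weights @ v                              # each output mixes information from all positions

# Toy usage: 5 tokens, model width 8, head width 4
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
out = scaled_dot_product_self_attention(x, w_q, w_k, w_v)   # shape (5, 4)
```

Because the attention weights span the entire sequence, a single layer can relate arbitrarily distant positions, which is what makes this family of mechanisms suited to modeling long-range dependencies.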