This repository provides a Python package with the PyTorch implementation of the APTx activation function, introduced in the paper "APTx: Better Activation Function than MISH, SWISH, and ReLU's Variants used in Deep Learning."
APTx (Alpha Plus Tanh Times) is a novel activation function designed for computational efficiency in deep learning. It enhances training performance and inference speed, making it particularly suitable for low-end hardware such as IoT devices. Notably, APTx provides flexibility by allowing users to either use its default parameter values or optimize them as trainable parameters during training.
Paper Title: APTx: Better Activation Function than MISH, SWISH, and ReLU's Variants used in Deep Learning
Author: Ravin Kumar
Publication: 5th July, 2022
Published Paper: click here
DOI: 10.51483/IJAIML.2.2.2022.56-61
Other Sources:
- Arxiv.org
- Research Gate, Research Gate - Preprint
- Osf.io - version 2, Osf.io - version 1
- SSRN
- Internet Archive, Internet Archive - Preprint
- Medium.com
- GitHub Repository (Python Package - PyTorch Implementation): Python Package
Ravin Kumar (2022). APTx: Better Activation Function than MISH, SWISH, and ReLU’s Variants used in Deep Learning. International Journal of Artificial Intelligence and Machine Learning, 2(2), 56-61. doi: 10.51483/IJAIML.2.2.2022.56-61
The APTx activation function is defined as:

$$\text{APTx}(x) = \left(\alpha + \tanh(\beta x)\right) \cdot \gamma x$$

where:

- $\alpha$ controls the baseline shift (default: 1.0)
- $\beta$ scales the input inside the tanh function (default: 1.0)
- $\gamma$ controls the output amplitude (default: 0.5)

At $\alpha = 1$, $\beta = 1$, and $\gamma = 0.5$ (the defaults), APTx exactly matches SWISH(x, 2) and closely follows MISH on the positive domain. So, we can use APTx as a computationally cheaper drop-in for MISH and SWISH. Interestingly, the APTx function with parameters $\alpha = 1$, $\beta = \rho/2$, and $\gamma = 0.5$ reproduces SWISH(x, ρ) for any ρ (see the derivation below).
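For reference, the SWISH correspondence above follows from the standard identity $\sigma(z) = \tfrac{1}{2}\left(1 + \tanh(z/2)\right)$; a short consistency check (this derivation is added here for clarity, not quoted from the paper):

$$
\text{SWISH}(x, \rho) = x\,\sigma(\rho x)
= x \cdot \frac{1 + \tanh\!\left(\tfrac{\rho x}{2}\right)}{2}
= \underbrace{0.5}_{\gamma}\, x \left(\underbrace{1}_{\alpha} + \tanh\!\left(\underbrace{\tfrac{\rho}{2}}_{\beta}\, x\right)\right),
$$

which is exactly APTx with $\alpha = 1$, $\beta = \rho/2$, and $\gamma = 0.5$.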
Installation:

```sh
pip install aptx_activation
```

or, directly from GitHub:

```sh
pip install git+https://github.com/mr-ravin/aptx_activation.git
```
Requirements:

- Python >= 3.7
- PyTorch >= 1.8.0
On Default Device:

```python
import torch
from aptx_activation import APTx

# Example usage with the default parameter values
aptx = APTx(alpha=1.0, beta=1.0, gamma=0.5)  # default values in APTx
tensor = torch.randn(5)
output = aptx(tensor)
print(output)
```
On GPU Device:

```python
import torch
from aptx_activation import APTx

# Example usage on a CUDA device
aptx = APTx(alpha=1.0, beta=1.0, gamma=0.5).to("cuda")  # default values in APTx
tensor = torch.randn(5).to("cuda")
output = aptx(tensor)
print(output)
```
APTx configured to behave like SWISH(x, 1):

```python
import torch
from aptx_activation import APTx

# Example usage: alpha=1, beta=0.5, gamma=0.5 reproduces SWISH(x, 1)
aptx = APTx(alpha=1.0, beta=0.5, gamma=0.5)  # behaves like SWISH(x, 1)
tensor = torch.randn(5)
output = aptx(tensor)
print(output)
```
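Since APTx supports `.to("cuda")` as shown above, it behaves like a regular activation module and can be dropped into a model definition. A minimal sketch, assuming APTx is an `nn.Module` (the `SmallNet` architecture below is illustrative, not part of the package):

```python
import torch
import torch.nn as nn
from aptx_activation import APTx

class SmallNet(nn.Module):
    """Toy MLP that uses APTx as its hidden-layer activation (illustrative)."""
    def __init__(self, in_features=16, hidden=32, out_features=4):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_features, hidden),
            APTx(alpha=1.0, beta=1.0, gamma=0.5),  # default APTx parameters
            nn.Linear(hidden, out_features),
        )

    def forward(self, x):
        return self.layers(x)

model = SmallNet()
x = torch.randn(8, 16)      # batch of 8 samples with 16 features each
print(model(x).shape)       # expected: torch.Size([8, 4])
```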
APTx also supports trainable parameters, so α, β, and γ can adapt dynamically during training:

```python
from aptx_activation import APTx

aptx = APTx(trainable=True)  # learnable α, β, and γ
```
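Assuming `trainable=True` registers α, β, and γ as `torch.nn.Parameter` so that they appear in `model.parameters()` (check the package source if in doubt), a sketch of training them alongside the rest of a network; the architecture and data below are placeholders:

```python
import torch
import torch.nn as nn
from aptx_activation import APTx

# Placeholder model with a learnable APTx activation
model = nn.Sequential(nn.Linear(10, 10), APTx(trainable=True), nn.Linear(10, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # also covers alpha, beta, gamma
loss_fn = nn.MSELoss()

x, y = torch.randn(32, 10), torch.randn(32, 1)  # placeholder batch

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()      # gradients flow into the APTx parameters as well
    optimizer.step()
```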
- Efficient Computation: Requires fewer mathematical operations compared to MISH and SWISH.
- Faster Training: The reduced complexity speeds up both forward and backward propagation.
- Lower Hardware Requirements: Optimized for edge devices and low-end computing hardware.
- Parameter Flexibility - SWISH (see the numerical check below):
  - By setting $\alpha = 1$, $\beta = 0.5$, and $\gamma = 0.5$, APTx exactly replicates the SWISH(x, 1) activation function.
  - By setting $\alpha = 1$, $\beta = 1$, and $\gamma = 0.5$, APTx exactly replicates the SWISH(x, 2) activation function.
- Parameter Flexibility - MISH:
  - By setting $\alpha = 1$, $\beta = 0.5$, and $\gamma = 0.5$, APTx closely replicates the negative-domain part of the MISH activation function.
  - By setting $\alpha = 1$, $\beta = 1$, and $\gamma = 0.5$, APTx closely replicates the positive-domain part of the MISH activation function.
- SWISH generally outperforms ReLU (and its variants) in deeper networks because it is smooth and non-monotonic, allowing better gradient flow.
- MISH vs SWISH:
  - MISH is smoother than SWISH, helping gradient flow.
  - MISH retains more information for negative input values.
  - MISH requires more computation.
- APTx offers performance similar to MISH but with significantly lower computation cost, making it ideal for resource-constrained environments.
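The SWISH correspondence is easy to verify numerically. A quick sanity-check sketch, assuming the package implements the formula given earlier (`torch.nn.functional.silu` computes SWISH(x, 1) = x·σ(x)):

```python
import torch
import torch.nn.functional as F
from aptx_activation import APTx

x = torch.linspace(-6, 6, steps=1001)

# alpha=1, beta=0.5, gamma=0.5 should match SWISH(x, 1) = x * sigmoid(x)
aptx_swish1 = APTx(alpha=1.0, beta=0.5, gamma=0.5)
print(torch.allclose(aptx_swish1(x), F.silu(x), atol=1e-6))                 # expected: True

# alpha=1, beta=1, gamma=0.5 should match SWISH(x, 2) = x * sigmoid(2x)
aptx_swish2 = APTx(alpha=1.0, beta=1.0, gamma=0.5)
print(torch.allclose(aptx_swish2(x), x * torch.sigmoid(2 * x), atol=1e-6))  # expected: True
```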
MISH has similar or even better performance than SWISH, which in turn outperforms the rest of the commonly used activation functions. Our proposed activation function, APTx, behaves similarly to MISH but requires fewer mathematical operations to compute the value in forward propagation and the derivatives in backward propagation. This allows APTx to train neural networks faster and to run inference on low-end computing hardware, such as neural networks deployed on low-end edge devices for the Internet of Things. Interestingly, using APTx one can also generate the SWISH(x, ρ) activation function with parameters $\alpha = 1$, $\beta = \rho/2$, and $\gamma = 0.5$.
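For a rough feel of the computational difference on your own hardware, a micro-benchmark sketch (timings vary with device, tensor size, and PyTorch version; `torch.nn.Mish` requires PyTorch >= 1.9, and the comparison assumes the APTx module from this package):

```python
import time
import torch
from aptx_activation import APTx

x = torch.randn(1_000_000)
aptx = APTx(alpha=1.0, beta=1.0, gamma=0.5)
mish = torch.nn.Mish()  # built-in MISH, available in PyTorch >= 1.9

def bench(fn, inp, iters=100):
    """Average wall-clock seconds per forward pass over `iters` runs."""
    fn(inp)  # warm-up
    start = time.perf_counter()
    for _ in range(iters):
        fn(inp)
    return (time.perf_counter() - start) / iters

print(f"APTx: {bench(aptx, x) * 1e3:.3f} ms | MISH: {bench(mish, x) * 1e3:.3f} ms")
```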
Copyright (c) 2025 Ravin Kumar
Website: https://mr-ravin.github.io
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation
files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy,
modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.