Semantic communications focus on understanding the meaning behind transmitted data, ensuring effective task execution and seamless information exchange. However, when AI-native devices employ different internal representations (e.g., latent spaces), semantic mismatches can arise, hindering mutual comprehension. This paper introduces a novel approach to mitigating latent space misalignment in multi-agent AI-native semantic communications. In a downlink scenario, we consider an access point (AP) communicating with multiple users to accomplish a specific AI-driven task. Our method implements a protocol with a shared semantic encoder at the AP and local semantic equalizers at the user devices, fostering mutual understanding and task-oriented communication while respecting power and complexity constraints. To achieve this, we employ federated optimization for the decentralized training of semantic encoders and equalizers. Numerical results validate the proposed approach in goal-oriented semantic communication, revealing key trade-offs among accuracy, communication overhead, complexity, and the semantic proximity of AI-native communication devices.
This section provides the commands needed to run the simulations for the experiments. The commands execute different training scripts with specific configurations. Each simulation subsection includes both the `python` command and its `uv` counterpart.
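The commands below use Hydra-style multirun overrides: each `key=a,b,c` argument lists sweep values, and the `-m` flag launches one run per element of the Cartesian product of all the lists. As a rough sanity check (assuming standard Hydra multirun semantics), the sweep size of the first command can be computed like this:

```python
from itertools import product

# Override lists from the first multirun command below
channel_usage = [1, 2, 4, 6, 8, 10, 20]
antennas_receiver = [1, 2, 4]
antennas_transmitter = [1, 2, 4]
seeds = [27, 42, 100, 123, 144, 200]
statuses = ["multi-link", "shared"]

# Hydra's -m flag runs one job per combination of the override lists
runs = list(product(channel_usage, antennas_receiver,
                    antennas_transmitter, seeds, statuses))
print(len(runs))  # 7 * 3 * 3 * 6 * 2 = 756
```

Keep this in mind when launching sweeps: the full grids above represent hundreds of training runs.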
# Federated Semantic Alignment and Multi-Link Semantic Alignment
python scripts/train_linear.py communication.channel_usage=1,2,4,6,8,10,20 communication.antennas_receiver=1,2,4 communication.antennas_transmitter=1,2,4 seed=27,42,100,123,144,200 base_station.status=multi-link,shared simulation=compr_fact -m
# Baseline First-K
python scripts/train_baseline.py communication.channel_usage=1,2,4,6,8,10,20 communication.antennas_receiver=1,2,4 communication.antennas_transmitter=1,2,4 seed=27,42,100,123,144,200 base_station.strategy=First-K simulation=compr_fact -m
# Baseline Top-K
python scripts/train_baseline.py communication.channel_usage=1,2,3,4,5,10 communication.antennas_receiver=1,2,4 communication.antennas_transmitter=1,2,4 seed=27,42,100,123,144,200 base_station.strategy=Top-K simulation=compr_fact -m
# Federated Semantic Alignment and Multi-Link Semantic Alignment
uv run scripts/train_linear.py communication.channel_usage=1,2,4,6,8,10,20 communication.antennas_receiver=1,2,4 communication.antennas_transmitter=1,2,4 seed=27,42,100,123,144,200 base_station.status=multi-link,shared simulation=compr_fact -m
# Baseline First-K
uv run scripts/train_baseline.py communication.channel_usage=1,2,4,6,8,10,20 communication.antennas_receiver=1,2,4 communication.antennas_transmitter=1,2,4 seed=27,42,100,123,144,200 base_station.strategy=First-K simulation=compr_fact -m
# Baseline Top-K
uv run scripts/train_baseline.py communication.channel_usage=1,2,3,4,5,10 communication.antennas_receiver=1,2,4 communication.antennas_transmitter=1,2,4 seed=27,42,100,123,144,200 base_station.strategy=Top-K simulation=compr_fact -m
# Federated Semantic Alignment
python scripts/train_linear.py communication.channel_usage=1,4,8 communication.antennas_receiver=4 communication.antennas_transmitter=4 seed=27,42,100,123,144,200 communication.snr=-20.0,-10.0,10.0,20.0,30.0 simulation=snr -m
# Baseline First-K
python scripts/train_baseline.py communication.channel_usage=1,4,8 communication.antennas_receiver=4 communication.antennas_transmitter=4 seed=27,42,100,123,144,200 communication.snr=-20.0,-10.0,10.0,20.0,30.0 base_station.strategy=First-K simulation=snr -m
# Baseline Top-K
python scripts/train_baseline.py communication.channel_usage=2,4 communication.antennas_receiver=4 communication.antennas_transmitter=4 seed=27,42,100,123,144,200 communication.snr=-20.0,-10.0,10.0,20.0,30.0 base_station.strategy=Top-K simulation=snr -m
# Federated Semantic Alignment
uv run scripts/train_linear.py communication.channel_usage=1,4,8 communication.antennas_receiver=4 communication.antennas_transmitter=4 seed=27,42,100,123,144,200 communication.snr=-20.0,-10.0,10.0,20.0,30.0 simulation=snr -m
# Baseline First-K
uv run scripts/train_baseline.py communication.channel_usage=1,4,8 communication.antennas_receiver=4 communication.antennas_transmitter=4 seed=27,42,100,123,144,200 communication.snr=-20.0,-10.0,10.0,20.0,30.0 base_station.strategy=First-K simulation=snr -m
# Baseline Top-K
uv run scripts/train_baseline.py communication.channel_usage=2,4 communication.antennas_receiver=4 communication.antennas_transmitter=4 seed=27,42,100,123,144,200 communication.snr=-20.0,-10.0,10.0,20.0,30.0 base_station.strategy=Top-K simulation=snr -m
# Heterogeneous
python scripts/train_linear.py --config-name=heterogeneous communication.channel_usage=1,2,4,6,8,10,20 communication.antennas_receiver=1,2,4 communication.antennas_transmitter=1,2,4 seed=27,42,100,123,144,200 -m
# Homogeneous
python scripts/train_linear.py --config-name=homogeneous communication.channel_usage=1,2,4,6,8,10,20 communication.antennas_receiver=1,2,4 communication.antennas_transmitter=1,2,4 seed=27,42,100,123,144,200 -m
# Heterogeneous
uv run scripts/train_linear.py --config-name=heterogeneous communication.channel_usage=1,2,4,6,8,10,20 communication.antennas_receiver=1,2,4 communication.antennas_transmitter=1,2,4 seed=27,42,100,123,144,200 -m
# Homogeneous
uv run scripts/train_linear.py --config-name=homogeneous communication.channel_usage=1,2,4,6,8,10,20 communication.antennas_receiver=1,2,4 communication.antennas_transmitter=1,2,4 seed=27,42,100,123,144,200 -m
The following command initiates training of the classifiers required by the simulations above. This step is not strictly necessary: the simulation scripts automatically check for pretrained classifiers in the `models/classifiers` subfolder, and if none are found, the pretrained versions used in our paper are downloaded from Drive.
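The check-then-download behavior described above can be sketched as follows. This is illustrative only: the checkpoint filename pattern is an assumption, and `download_pretrained` is a hypothetical stand-in for the repo's Drive download logic.

```python
from pathlib import Path

def ensure_classifier(model_name: str, root: str = "models/classifiers") -> Path:
    """Return a local classifier checkpoint, fetching it if missing.

    The "<model_name>.pth" naming scheme is an assumption for illustration.
    """
    ckpt = Path(root) / f"{model_name}.pth"
    if not ckpt.exists():
        # Hypothetical helper: stands in for the repo's logic that
        # downloads the paper's pretrained weights from Drive.
        download_pretrained(model_name, dest=ckpt)
    return ckpt
```

In other words, running `train_classifier.py` yourself is only needed if you want to retrain the classifiers from scratch.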
# Classifiers
python scripts/train_classifier.py rx_enc=vit_small_patch16_224,vit_small_patch32_224,vit_base_patch16_224,vit_base_patch32_clip_224,rexnet_100,mobilenetv3_small_075,mobilenetv3_large_100,mobilenetv3_small_100,efficientvit_m5.r224_in1k,levit_128s.fb_dist_in1k,vit_tiny_patch16_224 seed=27,42,100,123,144,200 -m
# Classifiers
uv run scripts/train_classifier.py rx_enc=vit_small_patch16_224,vit_small_patch32_224,vit_base_patch16_224,vit_base_patch32_clip_224,rexnet_100,mobilenetv3_small_075,mobilenetv3_large_100,mobilenetv3_small_100,efficientvit_m5.r224_in1k,levit_128s.fb_dist_in1k,vit_tiny_patch16_224 seed=27,42,100,123,144,200 -m
It is highly recommended to create a Python virtual environment before installing dependencies. In a terminal, navigate to the root folder and run:
python -m venv <venv_name>
Activate the environment:
- On macOS/Linux: `source <venv_name>/bin/activate`
- On Windows: `<venv_name>\Scripts\activate`
Once the virtual environment is active, install the dependencies:
pip install -r requirements.txt
You're ready to go! 🚀
`uv` is a modern Python package manager that is significantly faster than `pip`. To install `uv`, follow the instructions in the official installation guide.
Simply run a script with:
uv run path/to/script.py
Alternatively, run the following command in the root folder:
uv sync
This will automatically create a virtual environment (if none exists) and install all dependencies.
You're ready to go! 🚀