-
Thanks. Will have further questions, but great help.
Regards,
Avinash Gokli

On Apr 22, 2025, at 8:23 AM, Eric Kerfoot wrote:
a) Image and label volumes are loaded from Nifti files using Nibabel, which interprets the contents according to the header values in the files. This includes choosing the dtype for the volumes, so the labels should be in an integer format and will therefore have discrete label values.
b) "slice_label" was created by the CopyItemsd transform, so this is just a copy of the label tensor. This lambda will replace that with 2.0 if the label is not empty and 1.0 otherwise, so it's indicating if "label" is blank or not.
c) The input to the transform sequence you have here is a list of dictionaries, where each dictionary relates "image" and "label" to the image and label file paths to load. These don't need to be structured as they are in this dataset, so you can bring in whatever set of Nifti files you like and just figure out what the paired filenames are. This tutorial generates a synthetic dataset and demonstrates one way of constructing these.
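A minimal sketch of point (a), using a tiny synthetic volume in place of a real labelsTr file (the filename and data here are placeholders):

```python
import numpy as np
import nibabel as nib

# Build a tiny synthetic label volume with discrete integer values
# (placeholder data standing in for a real labelsTr file).
labels = np.zeros((4, 4, 4), dtype=np.uint8)
labels[1:3, 1:3, 1:3] = 1
nib.save(nib.Nifti1Image(labels, affine=np.eye(4)), "label.nii.gz")

# Nibabel interprets the file according to its header, including the
# on-disk dtype, so the reloaded volume keeps its discrete values.
reloaded = nib.load("label.nii.gz")
print(reloaded.header.get_data_dtype())  # uint8
data = np.asanyarray(reloaded.dataobj)
print(np.unique(data))                   # [0 1]
```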
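For point (b), the Lambdad in the tutorial reduces the copied label tensor to a scalar class. A small sketch with NumPy arrays standing in for the label tensors:

```python
import numpy as np

# The same check the tutorial's Lambdad applies to "slice_label":
# 2.0 if the slice contains any labelled voxels, 1.0 if it is blank.
to_class = lambda x: 2.0 if x.sum() > 0 else 1.0

empty_slice = np.zeros((1, 64, 64))    # no foreground voxels
tumour_slice = np.zeros((1, 64, 64))
tumour_slice[0, 10:20, 10:20] = 1      # some labelled voxels

print(to_class(empty_slice))   # 1.0 -> slice is blank
print(to_class(tumour_slice))  # 2.0 -> slice has label content
```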
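For point (c), one possible way to build that list of dictionaries from your own imagesTr/labelsTr folders. The directory layout and the pairing-by-sorted-filename convention are assumptions; the placeholder files below only make the sketch self-contained:

```python
import glob
import os

# Create a couple of empty placeholder files so the example runs as-is;
# with a real dataset you would skip this setup block.
data_dir = "demo_dataset"
for sub in ("imagesTr", "labelsTr"):
    os.makedirs(os.path.join(data_dir, sub), exist_ok=True)
    for name in ("case_000.nii.gz", "case_001.nii.gz"):
        open(os.path.join(data_dir, sub, name), "a").close()

# Pair image and label files by sorted filename, then build the list of
# dictionaries that maps the keys the transforms expect ("image",
# "label") to the file paths LoadImaged will read.
images = sorted(glob.glob(os.path.join(data_dir, "imagesTr", "*.nii.gz")))
labels = sorted(glob.glob(os.path.join(data_dir, "labelsTr", "*.nii.gz")))
data_dicts = [{"image": i, "label": l} for i, l in zip(images, labels)]
print(data_dicts[0])
```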
-
Behind the scenes, I would like to understand how images and labels are retrieved from the Nifti files in the imagesTr and labelsTr folders. Various tutorials present logic like this (an example from Weakly Supervised Anomaly Detection with Implicit Guidance):
channel = 0  # 0 = Flair
assert channel in [0, 1, 2, 3], "Choose a valid channel"
train_transforms = transforms.Compose(
    [
        transforms.LoadImaged(keys=["image", "label"]),
        transforms.EnsureChannelFirstd(keys=["image", "label"]),
        transforms.Lambdad(keys=["image"], func=lambda x: x[channel, :, :, :]),
        transforms.EnsureChannelFirstd(channel_dim="no_channel", keys=["image"]),
        transforms.EnsureTyped(keys=["image", "label"]),
        transforms.Orientationd(keys=["image", "label"], axcodes="RAS"),
        transforms.Spacingd(keys=["image", "label"], pixdim=(3.0, 3.0, 2.0), mode=("bilinear", "nearest")),
        transforms.CenterSpatialCropd(keys=["image", "label"], roi_size=(64, 64, 44)),
        transforms.ScaleIntensityRangePercentilesd(keys="image", lower=0, upper=99.5, b_min=0, b_max=1),
        transforms.RandSpatialCropd(keys=["image", "label"], roi_size=(64, 64, 1), random_size=False),
        transforms.Lambdad(keys=["image", "label"], func=lambda x: x.squeeze(-1)),
        transforms.CopyItemsd(keys=["label"], times=1, names=["slice_label"]),
        transforms.Lambdad(keys=["slice_label"], func=lambda x: 2.0 if x.sum() > 0 else 1.0),
    ]
)
train_ds = DecathlonDataset(
    root_dir=root_dir,
    task="Task01_BrainTumour",
    section="training",
    cache_rate=1.0,  # you may need a few Gb of RAM... Set to 0 otherwise
    num_workers=4,
    download=True,  # Set download to True if the dataset hasn't been downloaded yet
    seed=0,
    transform=train_transforms,
)
train_loader = DataLoader(
    train_ds, batch_size=batch_size, shuffle=True, num_workers=4, drop_last=True, persistent_workers=True
)
while iteration < max_epochs:
    for batch in train_loader:
        iteration += 1
        model.train()
        images, classes = batch["image"].to(device), batch["slice_label"].to(device)
        .. . .. ...
Questions:
(a) The images and labels are in Nifti format (imagesTr and labelsTr folders). Behind the scenes, how are the images and labels retrieved from those Nifti files? How are the Nifti labels transformed into binary values (masks)?
(b) The statement transforms.Lambdad(keys=["slice_label"], func=lambda x: 2.0 if x.sum() > 0 else 1.0) sets the slice_label values conditionally. So, what are these sums in the underlying logic?
(c) I really want to use other Spleen 3D segmentation data, where images and labels are stored as Nifti files, and then use MONAI transforms.
Can you please provide possible solution(s), pointer(s), or tutorials?
Thanks
Avinash Gokli