diff --git a/SwinUNETR/BRATS21/README.md b/SwinUNETR/BRATS21/README.md
index 3ecd5834..a3614305 100644
--- a/SwinUNETR/BRATS21/README.md
+++ b/SwinUNETR/BRATS21/README.md
@@ -28,7 +28,7 @@ Challenge: RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge
"TrainingData/BraTS2021_01146/BraTS2021_01146_flair.nii.gz"
-- Download the json file from this [link](https://drive.google.com/file/d/1i-BXYe-wZ8R9Vp3GXoajGyqaJ65Jybg1/view?usp=sharing) and placed in the same folder as the dataset.
+- Download the json file from this [link](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json) and place it in the same folder as the dataset.
The sub-regions considered for evaluation in BraTS 21 challenge are the "enhancing tumor" (ET), the "tumor core" (TC), and the "whole tumor" (WT). The ET is described by areas that show hyper-intensity in T1Gd when compared to T1, but also when compared to “healthy” white matter in T1Gd. The TC describes the bulk of the tumor, which is what is typically resected. The TC entails the ET, as well as the necrotic (NCR) parts of the tumor. The appearance of NCR is typically hypo-intense in T1-Gd when compared to T1. The WT describes the complete extent of the disease, as it entails the TC and the peritumoral edematous/invaded tissue (ED), which is typically depicted by hyper-intense signal in FLAIR [[BraTS 21]](http://braintumorsegmentation.org/).
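For readers who want to see how these regions relate to the raw segmentation labels, here is a minimal sketch (not taken from the training pipeline) that composes the ET, TC and WT masks from a label volume; the label values 1 = NCR, 2 = ED, 4 = ET follow the usual BraTS convention and are an assumption here, not something stated in this README:

```python
# Minimal sketch: build the three BraTS evaluation regions from a label volume.
# Label values 1 = NCR, 2 = ED, 4 = ET are assumed from the common BraTS convention.
import numpy as np

def brats_regions(label: np.ndarray):
    et = label == 4                   # enhancing tumor
    tc = np.isin(label, (1, 4))       # tumor core = ET + NCR
    wt = np.isin(label, (1, 2, 4))    # whole tumor = TC + ED
    return et, tc, wt
```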
@@ -41,7 +41,7 @@ Figure from [Baid et al.](https://arxiv.org/pdf/2107.02314v1.pdf) [3]
# Models
We provide Swin UNETR models which are pre-trained on BraTS21 dataset as in the following. The folds
-correspond to the data split in the [json file](https://drive.google.com/file/d/1i-BXYe-wZ8R9Vp3GXoajGyqaJ65Jybg1/view?usp=sharing).
+correspond to the data split in the [json file](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json).
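As a quick sanity check after downloading, the fold split can be inspected with a few lines of Python. This is only a sketch: the "training" and "fold" keys are assumed from the common MONAI datalist layout and are not guaranteed by this README.

```python
# Sketch: fetch brats21_folds.json and count cases per fold.
# The "training"/"fold" keys are assumptions (typical MONAI datalist layout).
import json
import urllib.request
from collections import Counter

url = "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json"
dest = "brats21_folds.json"  # then move it next to the BraTS21 data
urllib.request.urlretrieve(url, dest)

with open(dest) as f:
    datalist = json.load(f)

folds = Counter(item.get("fold") for item in datalist.get("training", []))
print("cases per fold:", dict(folds))
```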
diff --git a/SwinUNETR/BTCV/README.md b/SwinUNETR/BTCV/README.md
index 623f9b5e..087a914c 100644
--- a/SwinUNETR/BTCV/README.md
+++ b/SwinUNETR/BTCV/README.md
@@ -75,7 +75,7 @@ The training data is from the [BTCV challenge dataset](https://www.synapse.org/#
Please download the json file from this link.
-We provide the json file that is used to train our models in the following link.
+We provide the json file that is used to train our models in the following link.
Once the json file is downloaded, please place it in the same folder as the dataset. Note that you need to provide the location of your dataset directory by using ```--data_dir```.
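As a rough illustration of how the datalist and ```--data_dir``` fit together, the sketch below resolves the json entries against the dataset directory with MONAI's `load_decathlon_datalist`; the json filename and the "training" key are assumptions made for this example, not values taken from the training script.

```python
# Sketch only: verify the downloaded datalist resolves against the --data_dir folder.
import os
from monai.data import load_decathlon_datalist

data_dir = "/path/to/btcv"                                 # same value passed to --data_dir
datalist_json = os.path.join(data_dir, "dataset_0.json")   # hypothetical filename

train_files = load_decathlon_datalist(
    datalist_json, is_segmentation=True, data_list_key="training", base_dir=data_dir
)
missing = [d["image"] for d in train_files if not os.path.exists(d["image"])]
print(f"{len(train_files)} training pairs, {len(missing)} missing image files")
```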
diff --git a/UNETR/BTCV/README.md b/UNETR/BTCV/README.md
index bd93e360..8ff6f412 100644
--- a/UNETR/BTCV/README.md
+++ b/UNETR/BTCV/README.md
@@ -62,7 +62,7 @@ We provide state-of-the-art pre-trained checkpoints and TorchScript models of UN
For using the pre-trained checkpoint, please download the weights from the following directory:
-https://drive.google.com/file/d/1kR5QuRAuooYcTNLMnMj80Z9IgSs8jtLO/view?usp=sharing
+https://developer.download.nvidia.com/assets/Clara/monai/research/UNETR_model_best_acc.pth
Once downloaded, please place the checkpoint in the following directory or use ```--pretrained_dir``` to provide the address of where the model is placed:
@@ -86,7 +86,7 @@ python main.py
For using the pre-trained TorchScript model, please download the model from the following directory:
-https://drive.google.com/file/d/1_YbUE0abQFJUR4Luwict6BB8S77yUaWN/view?usp=sharing
+https://developer.download.nvidia.com/assets/Clara/monai/research/UNETR_model_best_acc.pt
Once downloaded, please place the TorchScript model in the following directory or use ```--pretrained_dir``` to provide the address of where the model is placed:
@@ -155,7 +155,7 @@ Under Institutional Review Board (IRB) supervision, 50 abdomen CT scans of were
We provide the json file that is used to train our models in the following link:
-https://drive.google.com/file/d/1t4fIQQkONv7ArTSZe4Nucwkk1KfdUDvW/view?usp=sharing
+https://developer.download.nvidia.com/assets/Clara/monai/tutorials/swin_unetr_btcv_dataset_0.json
Once the json file is downloaded, please place it in the same folder as the dataset.
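The two downloadable artifacts above are loaded differently: the `.pth` checkpoint is a weights dictionary to be restored into the model definition, while the `.pt` file is a self-contained TorchScript module. A minimal sketch, assuming the files sit in a local `./pretrained_models` folder (i.e. the path you would hand to ```--pretrained_dir```):

```python
# Sketch: load the checkpoint and the TorchScript model with plain PyTorch.
import torch

pretrained_dir = "./pretrained_models"  # assumed location, matching --pretrained_dir

# Checkpoint (.pth): a dictionary of weights to load into the UNETR model definition.
ckpt = torch.load(f"{pretrained_dir}/UNETR_model_best_acc.pth", map_location="cpu")
print(type(ckpt))

# TorchScript (.pt): a scripted module that runs without the Python model class.
ts_model = torch.jit.load(f"{pretrained_dir}/UNETR_model_best_acc.pt", map_location="cpu")
ts_model.eval()
```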
diff --git a/coplenet-pneumonia-lesion-segmentation/README.md b/coplenet-pneumonia-lesion-segmentation/README.md
index 73d6523d..c4828dd0 100644
--- a/coplenet-pneumonia-lesion-segmentation/README.md
+++ b/coplenet-pneumonia-lesion-segmentation/README.md
@@ -27,7 +27,7 @@ pip install "monai[nibabel]==0.2.0"
```
The rest of the steps assume that this repo is cloned to your local file system and the current directory is the folder of this README file.
- download the input examples from [google drive folder](https://drive.google.com/drive/folders/1pIoSSc4Iq8R9_xXo0NzaOhIHZ3-PqqDC) to `./images`.
-- download the adapted pretrained model from [google drive folder](https://drive.google.com/drive/folders/1HXlYJGvTF3gNGOL0UFBeHVoA6Vh_GqEw) to `./model`.
+- download the adapted pretrained model from this [link](https://developer.download.nvidia.com/assets/Clara/monai/research/coplenet_pretrained_monai_dict.pt) to `./model`.
- run `python run_inference.py` and segmentation results will be saved at `./output`.
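Before running inference, a quick check that the downloaded weights load correctly can save a debugging round trip. This is a sketch separate from `run_inference.py`; the assumption that the `.pt` file is a plain state dict is inferred only from its `_dict` suffix.

```python
# Sketch: confirm the pretrained weights load as a PyTorch state dict.
import torch

state = torch.load("./model/coplenet_pretrained_monai_dict.pt", map_location="cpu")
print(len(state), "parameter tensors, e.g.:", list(state)[:3])
```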
_(To segment COVID-19 pneumonia lesions from your own images, make sure that the images have been cropped into the lung region,