diff --git a/.Rbuildignore b/.Rbuildignore
index d60f5840..1443cf18 100644
--- a/.Rbuildignore
+++ b/.Rbuildignore
@@ -35,3 +35,4 @@ data-raw
^Meta$
^CRAN-SUBMISSION$
^paper$
+^cran-comments\.md$
diff --git a/DESCRIPTION b/DESCRIPTION
index b3f8733d..173caef3 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,6 +1,6 @@
Package: mlr3torch
Title: Deep Learning with 'mlr3'
-Version: 0.1.2-9000
+Version: 0.2.0
Authors@R:
c(person(given = "Sebastian",
family = "Fischer",
diff --git a/NEWS.md b/NEWS.md
index 09962b16..72f4a7e2 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,31 +1,44 @@
-# mlr3torch dev
-
-* perf: Use a faster image loader
-* perf/BREAKING_CHANGE: Removed some optimizers for which no fast ('ignite') variant exists.
- For other optimizers, the old variant was replaced with the 'ignite' variant which
- leads to significantly faster training times.
-* perf: The `jit_trace` parameter was added to `LearnerTorch` which when activated can lead
- to significantly faster training times
-* BREAKING_CHANGE: The default optimizer is now AdamW instead of Adam
-* feat: Add parameter `num_interop_threads` to `LearnerTorch`
-* feat: Add adaptive average pooling
-* feat: Added `n_layers` parameter to MLP
-* BREAKING_CHANGE: The private `LearnerTorch$.dataloader()` method now operates no longer
+# mlr3torch 0.2.0
+
+## Breaking Changes
+
+* Removed some optimizers for which no fast ('ignite') variant exists.
+* The default optimizer is now AdamW instead of Adam.
+* The private `LearnerTorch$.dataloader()` method no longer operates
on the `task` but on the `dataset` generated by the private `LearnerTorch$.dataset()` method.
-* feat: the `tensor_dataset` parameter was added, which allows to stack all batches
+* The `shuffle` parameter during model training is now initialized to `TRUE` to avoid
+  issues when the data is sorted.
+
+## Performance Improvements
+
+* Optimizers now use their faster ('ignite') variants,
+  which leads to considerable speed improvements.
+* The `jit_trace` parameter was added to `LearnerTorch`, which, when set to
+  `TRUE`, can lead to significant speedups.
+  This should only be enabled for 'static' models; see the
+ [torch tutorial](https://torch.mlverse.org/docs/articles/torchscript)
+ for more information.
+* Added parameter `num_interop_threads` to `LearnerTorch`.
+* The `tensor_dataset` parameter was added, which allows stacking all batches
at the beginning of training to make loading of batches afterwards faster.
-* BREAKING_CHANGE: Early stopping now not uses `epochs - patience` for the internally tuned
+* A faster default image loader is now used.
+
+## Features
+
+* Added a `PipeOp` for adaptive average pooling.
+* The `n_layers` parameter was added to the MLP learner.
+* Added multimodal melanoma and cifar10/cifar100 example tasks.
+* Added a callback to iteratively unfreeze parameters for finetuning.
+* Added different learning rate schedulers as callbacks.
+
+## Bug Fixes
+
+* Torch learners can now be used with `AutoTuner`.
+* Early stopping now uses `epochs - patience` for the internally tuned
values instead of the trained number of `epochs` as it was before.
- Also, the improvement is calculated as the difference between the current and the best score,
- not the current and the previous score.
-* feat: Added multimodal melanoma and cifar{10, 100} example tasks.
-* feat: Added a callback to iteratively unfreeze parameters for finetuning.
-* fix: torch learners can now be used with `AutoTuner`.
-* feat: Added different learning rate schedulers as callbacks.
-* feat: `PipeOpBlock` should no longer create ID clashes with other PipeOps in the graph (#260)
-* fix: `device` is no longer part of the `dataset` which allows for parallel dataloading
- on GPUs.
-* feat: The `shuffle` parameter during model training is now initialized to `TRUE`.
+* The `dataset` of a learner no longer needs to return the tensors on the specified `device`,
+ which allows for parallel dataloading on GPUs.
+* `PipeOpBlock` should no longer create ID clashes with other PipeOps in the graph (#260).
# mlr3torch 0.1.2
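
To make the `LearnerTorch` additions listed in the NEWS hunk above concrete, here is a minimal sketch (not part of the patch): it sets the parameters mentioned in the changelog on the MLP learner. The parameter names are taken from the NEWS entries; exact defaults and value ranges should be checked against the 0.2.0 documentation.

```r
# Minimal sketch of the new/changed LearnerTorch options mentioned above.
# Assumes mlr3 and mlr3torch 0.2.0 are installed; defaults may differ.
library(mlr3)
library(mlr3torch)

learner = lrn("classif.mlp",
  epochs = 10,
  batch_size = 32,
  neurons = 64,
  n_layers = 2,             # new: number of hidden layers of the MLP
  jit_trace = TRUE,         # new: trace the network ('static' models only)
  num_interop_threads = 1,  # new: number of inter-op threads used by torch
  tensor_dataset = TRUE,    # new: stack all batches once before training
  shuffle = TRUE            # now initialized to TRUE during training
)

# No optimizer is set explicitly, so the new default (AdamW) is used.
learner$train(tsk("iris"))
```
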
diff --git a/README.Rmd b/README.Rmd
index 08d78fd0..387b7ac0 100644
--- a/README.Rmd
+++ b/README.Rmd
@@ -20,7 +20,7 @@ lgr::get_logger("mlr3")$set_threshold("warn")
# mlr3torch
-Package website: [release](https://mlr3torch.mlr-org.com/) | [dev](https://mlr3torch.mlr-org.com/dev)
+Package website: [release](https://mlr3torch.mlr-org.com/) | [dev](https://mlr3torch.mlr-org.com/dev/)
Deep Learning with torch and mlr3.
diff --git a/README.md b/README.md
index 3341bd30..90d74a0c 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@
# mlr3torch
Package website: [release](https://mlr3torch.mlr-org.com/) \|
-[dev](https://mlr3torch.mlr-org.com/dev)
+[dev](https://mlr3torch.mlr-org.com/dev/)
Deep Learning with torch and mlr3.
diff --git a/cran-comments.md b/cran-comments.md
new file mode 100644
index 00000000..0037a2f6
--- /dev/null
+++ b/cran-comments.md
@@ -0,0 +1,3 @@
+## R CMD check results
+
+0 errors | 0 warnings | 0 notes
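
The bug-fix entry stating that torch learners can now be used with `AutoTuner` can be illustrated with a short sketch (assumptions: `mlr3tuning` is installed and the shown parameter names match the learner's param set):

```r
# Sketch: tuning a torch learner with AutoTuner, per the 0.2.0 bug fix.
library(mlr3)
library(mlr3tuning)
library(mlr3torch)

at = auto_tuner(
  tuner = tnr("random_search"),
  learner = lrn("classif.mlp",
    batch_size = 32,
    neurons = 16,
    epochs = to_tune(1, 20)  # tune the number of epochs
  ),
  resampling = rsmp("holdout"),
  measure = msr("classif.acc"),
  term_evals = 5
)

at$train(tsk("iris"))
```

Random search over `epochs` is used only for illustration; the internally tuned early stopping described in the NEWS entries may be preferable in practice.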