release 0.2.0
sebffischer committed Feb 7, 2025
1 parent 783e3cc commit f5ffda5
Showing 6 changed files with 45 additions and 28 deletions.
1 change: 1 addition & 0 deletions .Rbuildignore
@@ -35,3 +35,4 @@ data-raw
 ^Meta$
 ^CRAN-SUBMISSION$
 ^paper$
+^cran-comments\.md$
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -1,6 +1,6 @@
 Package: mlr3torch
 Title: Deep Learning with 'mlr3'
-Version: 0.1.2-9000
+Version: 0.2.0
 Authors@R:
     c(person(given = "Sebastian",
              family = "Fischer",
63 changes: 38 additions & 25 deletions NEWS.md
@@ -1,31 +1,44 @@
-# mlr3torch dev
-
-* perf: Use a faster image loader
-* perf/BREAKING_CHANGE: Removed some optimizers for which no fast ('ignite') variant exists.
-  For other optimizers, the old variant was replaced with the 'ignite' variant which
-  leads to significantly faster training times.
-* perf: The `jit_trace` parameter was added to `LearnerTorch` which when activated can lead
-  to significantly faster training times
-* BREAKING_CHANGE: The default optimizer is now AdamW instead of Adam
-* feat: Add parameter `num_interop_threads` to `LearnerTorch`
-* feat: Add adaptive average pooling
-* feat: Added `n_layers` parameter to MLP
-* BREAKING_CHANGE: The private `LearnerTorch$.dataloader()` method now operates no longer
+# mlr3torch 0.2.0
+
+## Breaking Changes
+
+* Removed some optimizers for which no fast ('ignite') variant exists.
+* The default optimizer is now AdamW instead of Adam.
+* The private `LearnerTorch$.dataloader()` method no longer operates
   on the `task` but on the `dataset` generated by the private `LearnerTorch$.dataset()` method.
-* feat: the `tensor_dataset` parameter was added, which allows to stack all batches
+* The `shuffle` parameter during model training is now initialized to `TRUE` to sidestep
+  issues where data is sorted.
+
+## Performance Improvements
+
+* Optimizers now use the faster ('ignite') variants,
+  which leads to considerable speed improvements.
+* The `jit_trace` parameter was added to `LearnerTorch`; when set to
+  `TRUE`, it can lead to significant speedups.
+  This should only be enabled for 'static' models, see the
+  [torch tutorial](https://torch.mlverse.org/docs/articles/torchscript)
+  for more information.
+* Added parameter `num_interop_threads` to `LearnerTorch`.
+* The `tensor_dataset` parameter was added, which allows stacking all batches
   at the beginning of training to make loading of batches afterwards faster.
-* BREAKING_CHANGE: Early stopping now not uses `epochs - patience` for the internally tuned
+* Use a faster default image loader.
+
+## Features
+
+* Added a `PipeOp` for adaptive average pooling.
+* The `n_layers` parameter was added to the MLP learner.
+* Added multimodal melanoma and cifar{10, 100} example tasks.
+* Added a callback to iteratively unfreeze parameters for finetuning.
+* Added different learning rate schedulers as callbacks.
+
+## Bug Fixes
+
+* Torch learners can now be used with `AutoTuner`.
+* Early stopping now uses `epochs - patience` for the internally tuned
   values instead of the trained number of `epochs` as it was before.
   Also, the improvement is calculated as the difference between the current and the best score,
   not the current and the previous score.
-* feat: Added multimodal melanoma and cifar{10, 100} example tasks.
-* feat: Added a callback to iteratively unfreeze parameters for finetuning.
-* fix: torch learners can now be used with `AutoTuner`.
-* feat: Added different learning rate schedulers as callbacks.
-* feat: `PipeOpBlock` should no longer create ID clashes with other PipeOps in the graph (#260)
-* fix: `device` is no longer part of the `dataset` which allows for parallel dataloading
-  on GPUs.
-* feat: The `shuffle` parameter during model training is now initialized to `TRUE`.
+* The `dataset` of a learner must no longer return the tensors on the specified `device`,
+  which allows for parallel dataloading on GPUs.
+* `PipeOpBlock` should no longer create ID clashes with other PipeOps in the graph (#260).
 
 # mlr3torch 0.1.2

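The release notes above introduce several new `LearnerTorch` options at once. As orientation, here is a minimal sketch of how they would be set on the bundled MLP learner. It is untested: the parameter names are taken from the NEWS entries, while the learner id `classif.mlp`, the example values, and the `opt.` prefix for optimizer hyperparameters are assumptions.

```r
library(mlr3)
library(mlr3torch)

# Sketch only: parameter names follow the NEWS entries, values are illustrative.
learner = lrn("classif.mlp",
  epochs              = 100,
  batch_size          = 32,
  neurons             = 64,
  n_layers            = 3,    # new: assumed to set the number of hidden layers
  jit_trace           = TRUE, # new: trace the network; intended for 'static' models only
  tensor_dataset      = TRUE, # new: stack all batches once at the start of training
  num_interop_threads = 2,    # new: torch inter-op thread count
  shuffle             = TRUE  # redundant here, since it is now initialized to TRUE
)

# The default optimizer is now AdamW (previously Adam); its hyperparameters
# are assumed to be exposed with the usual "opt." prefix:
learner$param_set$set_values(opt.lr = 1e-3)

learner$train(tsk("iris"))
```

Since the default optimizer changed from Adam to AdamW, scripts that relied on the old default may produce slightly different results after upgrading.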
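The new adaptive average pooling `PipeOp` should slot into a torch graph like the existing pooling operators. A sketch under assumptions: the id `nn_adaptive_avg_pool2d` and its `output_size` parameter are guesses modeled on the naming scheme of the existing `nn_avg_pool2d`/`nn_max_pool2d` PipeOps.

```r
library(mlr3torch)
library(mlr3pipelines)

# Sketch: a small image network ending in adaptive average pooling, so the
# classification head sees a fixed-size representation regardless of the
# spatial input size.
graph = po("torch_ingress_ltnsr") %>>%
  po("nn_conv2d", out_channels = 16, kernel_size = 3) %>>%
  po("nn_relu") %>>%
  po("nn_adaptive_avg_pool2d", output_size = c(1, 1)) %>>%  # assumed id / parameter
  po("nn_flatten") %>>%
  po("nn_head") %>>%
  po("torch_loss", t_loss("cross_entropy")) %>>%
  po("torch_optimizer", t_opt("adamw")) %>>%
  po("torch_model_classif", epochs = 1, batch_size = 32)

learner = as_learner(graph)
```

One of the newly added example tasks, e.g. `tsk("cifar10")`, would be a natural input for such a graph. The new learning rate schedulers and the unfreezing callback are attached via the `callbacks` argument (`t_clbk(...)`); their exact ids are not visible in this diff.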
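Two of the bug fixes interact: torch learners now work inside `AutoTuner`, and the internally tuned `epochs` value is now derived as `epochs - patience` rather than the raw number of epochs trained. The sketch below combines early stopping with a small random search; it assumes mlr3's internal tuning interface (`to_tune(internal = TRUE)`, `set_validate()`) and the `patience`/`measures_valid` parameters, none of which is verified against this release.

```r
library(mlr3)
library(mlr3torch)
library(mlr3tuning)

# Sketch: early stopping tunes `epochs` internally while random search tunes
# the learning rate. The "opt." prefix and the validation setup are assumptions.
learner = lrn("classif.mlp",
  batch_size     = 32,
  epochs         = to_tune(upper = 100, internal = TRUE), # early stopping
  patience       = 5,                  # stop after 5 evaluations without improvement
  measures_valid = msr("classif.ce"),
  opt.lr         = to_tune(1e-4, 1e-1, logscale = TRUE)
)
set_validate(learner, validate = 0.3)  # hold out 30% of the training data

at = auto_tuner(
  tuner      = tnr("random_search"),
  learner    = learner,
  resampling = rsmp("cv", folds = 3),
  measure    = msr("classif.ce"),
  term_evals = 10
)
at$train(tsk("sonar"))
```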
2 changes: 1 addition & 1 deletion README.Rmd
@@ -20,7 +20,7 @@ lgr::get_logger("mlr3")$set_threshold("warn")
 
 # mlr3torch <a href="https://mlr3torch.mlr-org.com"><img src="man/figures/logo.svg" align="right" height="139" /></a>
 
-Package website: [release](https://mlr3torch.mlr-org.com/) | [dev](https://mlr3torch.mlr-org.com/dev)
+Package website: [release](https://mlr3torch.mlr-org.com/) | [dev](https://mlr3torch.mlr-org.com/dev/)
 
 Deep Learning with torch and mlr3.
 
2 changes: 1 addition & 1 deletion README.md
@@ -4,7 +4,7 @@
 # mlr3torch <a href="https://mlr3torch.mlr-org.com"><img src="man/figures/logo.svg" align="right" height="139" /></a>
 
 Package website: [release](https://mlr3torch.mlr-org.com/) \|
-[dev](https://mlr3torch.mlr-org.com/dev)
+[dev](https://mlr3torch.mlr-org.com/dev/)
 
 Deep Learning with torch and mlr3.
 
3 changes: 3 additions & 0 deletions cran-comments.md
@@ -0,0 +1,3 @@
+## R CMD check results
+
+0 errors | 0 warnings | 0 notes
