diff --git a/dev/articles/callbacks.html b/dev/articles/callbacks.html
index 63820c64..f9f01c68 100644
--- a/dev/articles/callbacks.html
+++ b/dev/articles/callbacks.html
@@ -225,7 +225,7 @@

 Writing a Custom Logger
 ## load_state_dict: function (state_dict)
 ## on_before_valid: function ()
 ## on_batch_end: function ()
-## Parent env: <environment: 0x559b24b0b2f8>
+## Parent env: <environment: 0x5565121757a0>
 ## Locked objects: FALSE
 ## Locked class: FALSE
 ## Portable: TRUE
diff --git a/dev/articles/get_started.html b/dev/articles/get_started.html
index d5e37ef8..3a3e58aa 100644
--- a/dev/articles/get_started.html
+++ b/dev/articles/get_started.html
@@ -235,7 +235,7 @@

 Loss
 #> clone: function (deep = FALSE, ..., replace_values = TRUE)
 #> Private:
 #> .__clone_r6__: function (deep = FALSE)
-#> Parent env: <environment: 0x556ff2664ec0>
+#> Parent env: <environment: 0x555a5a7138e0>
 #> Locked objects: FALSE
 #> Locked class: FALSE
 #> Portable: TRUE
diff --git a/dev/articles/internals_pipeop_torch.html b/dev/articles/internals_pipeop_torch.html
index 5869ac5a..8dc806ff 100644
--- a/dev/articles/internals_pipeop_torch.html
+++ b/dev/articles/internals_pipeop_torch.html
@@ -104,8 +104,8 @@

 A torch Primer
 input = torch_randn(2, 3)
 input
 #> torch_tensor
-#> -0.1934  1.3338  0.2307
-#> -0.3255  0.4996 -0.5817
+#> -0.4498 -1.7058 -1.9809
+#>  0.3088 -1.8150 -0.8680
 #> [ CPUFloatType{2,3} ]
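The `torch_randn(2, 3)` call in the hunk above draws a 2x3 matrix of standard-normal samples. As a minimal plain-Python sketch of the same idea (not the torch implementation, and the values below are unrelated to the diff's outputs):

```python
import random

def randn(rows, cols, seed=None):
    # Matrix of standard-normal draws, analogous in spirit to torch_randn(rows, cols).
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(cols)] for _ in range(rows)]

x = randn(2, 3, seed=0)
print(len(x), len(x[0]))  # 2 3
```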

 A nn_module is constructed from a nn_module_generator. nn_linear is one of the
@@ -117,8 +117,8 @@

 A torch Primer
 output = module_1(input)
 output
 #> torch_tensor
-#>  0.2293  0.0607  0.7179  1.1731
-#> -0.1982  0.3674  0.0534  0.6567
+#> -0.5222  1.4472  1.0816 -1.5185
+#> -0.6949  0.4341  0.4113 -1.4010
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

A neural network with one (4-unit) hidden layer and two outputs needs the following ingredients

@@ -134,8 +134,8 @@

 A torch Primer
 output = softmax(output)
 output
 #> torch_tensor
-#> 0.1706  0.1728  0.6566
-#> 0.1868  0.1952  0.6180
+#> 0.1116  0.3492  0.5392
+#> 0.1333  0.3734  0.4933
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
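Note that in both the old and new softmax outputs above, each row sums to 1. A minimal plain-Python sketch of the computation (the logits below are made up, not the diff's inputs):

```python
import math

def softmax(row):
    # Subtract the row max before exponentiating, for numerical stability.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])  # hypothetical logits
print(round(sum(probs), 6))  # 1.0 -- softmax rows always sum to 1
```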

We will now continue with showing how such a neural network can be represented in mlr3torch.

@@ -170,8 +170,8 @@

 Neural Networks as Graphs
 output = po_module_1$train(list(input))[[1]]
 output
 #> torch_tensor
-#>  0.2293  0.0607  0.7179  1.1731
-#> -0.1982  0.3674  0.0534  0.6567
+#> -0.5222  1.4472  1.0816 -1.5185
+#> -0.6949  0.4341  0.4113 -1.4010
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

 Note we only use the $train(), since torch modules do not have anything that maps to the state (it is filled by
@@ -196,8 +196,8 @@

 Neural Networks as Graphs
 output = module_graph$train(input)[[1]]
 output
 #> torch_tensor
-#> 0.1706  0.1728  0.6566
-#> 0.1868  0.1952  0.6180
+#> 0.1116  0.3492  0.5392
+#> 0.1333  0.3734  0.4933
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

 While this object allows to easily perform a forward pass, it does not inherit from nn_module, which is useful for various
@@ -245,8 +245,8 @@

Neural Networks as Graphs
 graph_module(input)
 #> torch_tensor
-#>  0.1706  0.1728  0.6566
-#>  0.1868  0.1952  0.6180
+#>  0.1116  0.3492  0.5392
+#>  0.1333  0.3734  0.4933
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
@@ -363,8 +363,8 @@

 small_module(input)
 #> torch_tensor
-#>  0.5809 -0.4595 -1.1508 -0.0610
-#>  0.1223 -0.4154 -0.3022 -0.0321
+#>  0.0266 -0.4232 -0.4594  1.4647
+#> -0.5457 -0.1446 -0.4460  0.9944
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

@@ -429,9 +429,9 @@

 Using ModelDescriptor to
 small_module(batch$x[[1]])
 #> torch_tensor
-#> 2.7076 -0.4141 -5.1192 -0.4539
-#> 2.4751 -0.3695 -4.7644 -0.4728
-#> 2.5174 -0.4087 -4.7471 -0.4259
+#> 0.9057  2.1483 -0.2835 -3.7977
+#> 1.0580  1.9881 -0.3755 -3.5451
+#> 0.8433  2.0029 -0.2918 -3.4974
 #> [ CPUFloatType{3,4} ][ grad_fn = <AddmmBackward0> ]

 The first linear layer that takes “Sepal” input ("linear1") creates a 2x4 tensor (batch size 2, 4 units),
@@ -689,14 +689,14 @@

 Building more interesting NNs
 iris_module$graph$pipeops$linear1$.result
 #> $output
 #> torch_tensor
-#> -0.8202  4.1808 -2.2521  1.3857
-#> -0.8172  3.8501 -1.9552  1.2092
+#> -2.0687  0.7256  1.8980 -2.8776
+#> -1.8386  0.8302  1.8299 -2.6160
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
 iris_module$graph$pipeops$linear3$.result
 #> $output
 #> torch_tensor
-#>  0.8816  0.4615 -0.5002 -0.6235 -0.1313
-#>  0.8816  0.4615 -0.5002 -0.6235 -0.1313
+#> -0.3845  0.0300 -0.1459  0.7397  0.6609
+#> -0.3845  0.0300 -0.1459  0.7397  0.6609
 #> [ CPUFloatType{2,5} ][ grad_fn = <AddmmBackward0> ]
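The 2x4 and 2x5 shapes of the linear-layer outputs above follow from the affine map a linear layer computes: a batch of shape (batch, in_features) times a (out_features, in_features) weight, plus a bias, gives (batch, out_features). A plain-Python sketch of that shape rule (the weights and inputs below are made up, not the layers from the diff):

```python
def linear(x, weight, bias):
    # Affine map x @ weight^T + bias, the computation a linear layer performs.
    # x: (batch, in_features) as nested lists; weight: (out_features, in_features).
    return [
        [sum(xi * wi for xi, wi in zip(row, w_row)) + b
         for w_row, b in zip(weight, bias)]
        for row in x
    ]

# Hypothetical 2-in -> 4-out layer applied to a batch of 2.
x = [[5.1, 3.5], [4.9, 3.0]]
weight = [[0.1, -0.2], [0.3, 0.0], [-0.1, 0.4], [0.2, 0.2]]
bias = [0.0, 0.1, -0.1, 0.0]
out = linear(x, weight, bias)
print(len(out), len(out[0]))  # 2 4 -- batch size 2, 4 units
```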

We observe that the po("nn_merge_cat") concatenates these, as expected:
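Concatenation along the feature dimension, which the result below demonstrates by merging a 2x4 and a 2x5 tensor into a 2x9 one, can be sketched in plain Python (a stand-in for what the merge step does, with made-up values):

```python
def cat_rows(a, b):
    # Join two batches row by row along the feature dimension,
    # so (2, 4) and (2, 5) inputs yield a (2, 9) output.
    assert len(a) == len(b), "batch sizes must match"
    return [ra + rb for ra, rb in zip(a, b)]

a = [[1.0] * 4, [2.0] * 4]  # shape (2, 4)
b = [[3.0] * 5, [4.0] * 5]  # shape (2, 5)
out = cat_rows(a, b)
print(len(out), len(out[0]))  # 2 9
```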

@@ -704,8 +704,8 @@

 Building more interesting NNs
 iris_module$graph$pipeops$nn_merge_cat$.result
 #> $output
 #> torch_tensor
-#> -0.8202  4.1808 -2.2521  1.3857  0.8816  0.4615 -0.5002 -0.6235 -0.1313
-#> -0.8172  3.8501 -1.9552  1.2092  0.8816  0.4615 -0.5002 -0.6235 -0.1313
+#> -2.0687  0.7256  1.8980 -2.8776 -0.3845  0.0300 -0.1459  0.7397  0.6609
+#> -1.8386  0.8302  1.8299 -2.6160 -0.3845  0.0300 -0.1459  0.7397  0.6609
 #> [ CPUFloatType{2,9} ][ grad_fn = <CatBackward0> ]
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png
index eb2d7fc0..6c4b74a9 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png
index 13beb415..2c340b00 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png
index 13408467..94a8b3e7 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png
index ab8c7f0b..3f25a418 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png
index 429ec3bc..d581d49c 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png differ
diff --git a/dev/articles/lazy_tensor.html b/dev/articles/lazy_tensor.html
index 8bcb874e..9c3f5cab 100644
--- a/dev/articles/lazy_tensor.html
+++ b/dev/articles/lazy_tensor.html
@@ -386,7 +386,7 @@

 Digging Into Internals
 #> <DataDescriptor: 1 ops>
 #> * dataset_shapes: [x: (NA,1)]
 #> * input_map: (x) -> Graph
-#> * pointer: nop.14a7b3.x.output
+#> * pointer: nop.7e5d94.x.output
 #> * shape: [(NA,1)]

The printed output of the data descriptor informs us about: