diff --git a/dev/articles/callbacks.html b/dev/articles/callbacks.html
index 05d95a64..661dfac3 100644
--- a/dev/articles/callbacks.html
+++ b/dev/articles/callbacks.html
@@ -240,7 +240,7 @@

Writing a Custom Logger
 ## load_state_dict: function (state_dict)
 ## on_before_valid: function ()
 ## on_batch_end: function ()
-## Parent env: <environment: 0x55e41b1a5a50>
+## Parent env: <environment: 0x563fce1c4ab0>
 ## Locked objects: FALSE
 ## Locked class: FALSE
 ## Portable: TRUE
diff --git a/dev/articles/internals_pipeop_torch.html b/dev/articles/internals_pipeop_torch.html
index 443d4452..72f43b75 100644
--- a/dev/articles/internals_pipeop_torch.html
+++ b/dev/articles/internals_pipeop_torch.html
@@ -119,8 +119,8 @@

A torch Primer
 input = torch_randn(2, 3)
 input
 #> torch_tensor
-#> -0.1615 -0.3987 -1.5443
-#>  0.6225  0.8516 -0.1881
+#>  0.2536  0.7442 -1.3748
+#>  1.1752  0.4839  0.1875
 #> [ CPUFloatType{2,3} ]

A nn_module is constructed from a nn_module_generator. nn_linear is one of the
@@ -132,8 +132,8 @@

A torch Primer
 output = module_1(input)
 output
 #> torch_tensor
-#>  0.2601 -0.9715  0.3109 -0.1023
-#> -0.1050 -0.2976 -0.0288  0.4191
+#>  0.7590  0.9118 -0.4479 -0.6905
+#>  0.7574  0.3326 -0.7514 -0.2532
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

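Here module_1 is such a concrete nn_module. A minimal sketch of how it could have been constructed and applied, with the layer sizes inferred from the tensor shapes printed in the hunks above (the actual construction falls outside the hunk context):

library(torch)

# nn_linear is a module generator: calling it returns a concrete nn_module
# that maps 3 input features to 4 output features.
module_1 = nn_linear(in_features = 3, out_features = 4)

input = torch_randn(2, 3)   # same shape as the input tensor shown above
output = module_1(input)    # forward pass -> a 2x4 tensor with grad_fn = AddmmBackward0
output$shape                # 2 4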
A neural network with one (4-unit) hidden layer and two outputs needs the following ingredients

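As a rough sketch, such a network could be assembled directly with nn_sequential. The sizes below (3 input features, a 4-unit hidden layer, a softmax over 3 columns) are assumptions taken from the tensor shapes printed in the nearby hunks, and the ReLU activation is only an illustrative choice:

library(torch)

net = nn_sequential(
  nn_linear(3, 4),      # hidden layer: 3 features -> 4 units
  nn_relu(),            # activation (illustrative choice)
  nn_linear(4, 3),      # output layer
  nn_softmax(dim = 2)   # normalize each row to probabilities
)

net(torch_randn(2, 3))  # a 2x3 tensor of row-wise probabilities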
@@ -149,8 +149,8 @@

A torch Primer
 output = softmax(output)
 output
 #> torch_tensor
-#>  0.2899  0.4311  0.2790
-#>  0.2569  0.4788  0.2643
+#>  0.2141  0.3827  0.4032
+#>  0.2039  0.3793  0.4168
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

We will now continue by showing how such a neural network can be represented in mlr3torch.

@@ -185,8 +185,8 @@

Neural Networks as Graphs
 output = po_module_1$train(list(input))[[1]]
 output
 #> torch_tensor
-#>  0.2601 -0.9715  0.3109 -0.1023
-#> -0.1050 -0.2976 -0.0288  0.4191
+#>  0.7590  0.9118 -0.4479 -0.6905
+#>  0.7574  0.3326 -0.7514 -0.2532
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

Note that we only use $train(), since torch modules do not have anything that maps to the state (it is filled by
@@ -211,8 +211,8 @@

Neural Networks as Graphs
 output = module_graph$train(input)[[1]]
 output
 #> torch_tensor
-#>  0.2899  0.4311  0.2790
-#>  0.2569  0.4788  0.2643
+#>  0.2141  0.3827  0.4032
+#>  0.2039  0.3793  0.4168
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

While this object allows us to easily perform a forward pass, it does not inherit from nn_module, which is useful for various
@@ -260,8 +260,8 @@

Neural Networks as Graphs
 graph_module(input)
 #> torch_tensor
-#>  0.2899  0.4311  0.2790
-#>  0.2569  0.4788  0.2643
+#>  0.2141  0.3827  0.4032
+#>  0.2039  0.3793  0.4168
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
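The graph_module called above behaves like a regular nn_module derived from the graph. A sketch of how such a module could be obtained with mlr3torch's nn_graph(); the input channel name "module_1.input" is a placeholder assumption, the real name depends on the PipeOps contained in module_graph:

library(mlr3torch)

# Wrap the Graph of PipeOpModules into a proper nn_module.
# In practice, look up the graph's actual input channel via module_graph$input$name.
graph_module = nn_graph(
  module_graph,
  shapes_in = list(module_1.input = c(NA, 3))  # placeholder channel name; NA = unknown batch size
)

graph_module(input)  # forward pass, producing the softmax output shown above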
@@ -378,9 +378,8 @@

 small_module(input)
 #> torch_tensor
-#>  0.01 *
-#>   5.4497 -71.7569 -38.6771  -9.7114
-#> -80.4275 -61.2732 -38.4816 -46.1400
+#>  1.2926 -0.5278  0.9868  0.0542
+#>  1.0211 -1.1056  0.6474 -0.1616
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

@@ -444,9 +443,9 @@

Using ModelDescriptor to
 small_module(batch$x[[1]])
 #> torch_tensor
-#> -3.1491  0.3084  0.2427 -1.2166
-#> -2.9427  0.2295  0.2374 -1.1338
-#> -2.9390  0.2340  0.1861 -1.1485
+#>  1.3399 -0.8466  2.5598  1.7150
+#>  1.4827 -0.8560  2.5392  1.6399
+#>  1.2756 -0.8234  2.3903  1.5592
 #> [ CPUFloatType{3,4} ][ grad_fn = <AddmmBackward0> ]

The first linear layer that takes “Sepal” input ("linear1") creates a 2x4 tensor (batch size 2, 4 units),
@@ -703,14 +702,14 @@

Building more interesting NNs
 iris_module$graph$pipeops$linear1$.result
 #> $output
 #> torch_tensor
-#> -4.9530  4.1720  1.1683 -3.0423
-#> -4.6095  3.8498  0.9760 -3.0238
+#>  0.3256  1.1636 -1.9216  2.3950
+#>  0.4993  1.3250 -1.8388  2.1772
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
 iris_module$graph$pipeops$linear3$.result
 #> $output
 #> torch_tensor
-#> -0.6211 -0.3307 -0.2578 -0.1894 -0.5379
-#> -0.6211 -0.3307 -0.2578 -0.1894 -0.5379
+#> -0.1437  0.1231 -0.0351  0.0968  0.7743
+#> -0.1437  0.1231 -0.0351  0.0968  0.7743
 #> [ CPUFloatType{2,5} ][ grad_fn = <AddmmBackward0> ]

We observe that the po("nn_merge_cat") concatenates these, as expected:

@@ -718,8 +717,8 @@

Building more interesting NNs
 iris_module$graph$pipeops$nn_merge_cat$.result
 #> $output
 #> torch_tensor
-#> -4.9530  4.1720  1.1683 -3.0423 -0.6211 -0.3307 -0.2578 -0.1894 -0.5379
-#> -4.6095  3.8498  0.9760 -3.0238 -0.6211 -0.3307 -0.2578 -0.1894 -0.5379
+#>  0.3256  1.1636 -1.9216  2.3950 -0.1437  0.1231 -0.0351  0.0968  0.7743
+#>  0.4993  1.3250 -1.8388  2.1772 -0.1437  0.1231 -0.0351  0.0968  0.7743
 #> [ CPUFloatType{2,9} ][ grad_fn = <CatBackward0> ]
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png
index 75bd01d0..569c6a8b 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png
index be31f215..45260ed4 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png
index 87967e23..15f99037 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png
index 98f764c3..2ec0f04a 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png
index 8ee9f910..9726c275 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png differ
diff --git a/dev/articles/lazy_tensor.html b/dev/articles/lazy_tensor.html
index 07bd34e8..7261b8d9 100644
--- a/dev/articles/lazy_tensor.html
+++ b/dev/articles/lazy_tensor.html
@@ -402,7 +402,7 @@

Digging Into Internals
 #> <DataDescriptor: 1 ops>
 #> * dataset_shapes: [x: (NA,1)]
 #> * input_map: (x) -> Graph
-#> * pointer: nop.a2f3b6.x.output
+#> * pointer: nop.ef77a0.x.output
 #> * shape: [(NA,1)]

The printed output of the data descriptor informs us about: