diff --git a/dev/articles/callbacks.html b/dev/articles/callbacks.html
index 274d8aea..a65089da 100644
--- a/dev/articles/callbacks.html
+++ b/dev/articles/callbacks.html
@@ -225,7 +225,7 @@

Writing a Custom Logger
 ## load_state_dict: function (state_dict)
 ## on_before_valid: function ()
 ## on_batch_end: function ()
-## Parent env: <environment: 0x558aa657d5b8>
+## Parent env: <environment: 0x559b2fbf2820>
 ## Locked objects: FALSE
 ## Locked class: FALSE
 ## Portable: TRUE
diff --git a/dev/articles/get_started.html b/dev/articles/get_started.html
index 783861e8..42bf4d0d 100644
--- a/dev/articles/get_started.html
+++ b/dev/articles/get_started.html
@@ -236,7 +236,7 @@

Loss
 #> clone: function (deep = FALSE, ..., replace_values = TRUE)
 #> Private:
 #> .__clone_r6__: function (deep = FALSE)
-#> Parent env: <environment: 0x5562a6d4b3d0>
+#> Parent env: <environment: 0x55fd301d9d78>
 #> Locked objects: FALSE
 #> Locked class: FALSE
 #> Portable: TRUE
diff --git a/dev/articles/internals_pipeop_torch.html b/dev/articles/internals_pipeop_torch.html
index dbe9da95..24586606 100644
--- a/dev/articles/internals_pipeop_torch.html
+++ b/dev/articles/internals_pipeop_torch.html
@@ -104,8 +104,8 @@

A torch Primer
 input = torch_randn(2, 3)
 input
 #> torch_tensor
-#> 0.9070 -2.7058 0.9525
-#> -0.6046 -0.8117 -0.4933
+#> 0.1291 2.0222 -1.2980
+#> -1.6010 -0.5313 -0.2308
 #> [ CPUFloatType{2,3} ]
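The printed values differ between builds because torch_randn() draws from torch's random number generator. A minimal sketch (assuming the torch R package, not code from the article) of how the draw could be made reproducible:

library(torch)

# Seeding torch's RNG makes the random draws below reproducible,
# so re-rendering the vignette would not change the printed values.
torch_manual_seed(1)
input = torch_randn(2, 3)
input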

A nn_module is constructed from a nn_module_generator. nn_linear is one of the
@@ -117,8 +117,8 @@

A torch Primer
 output = module_1(input)
 output
 #> torch_tensor
-#> 0.2788 1.2324 -0.4626 -0.5106
-#> 0.5439 0.4810 0.3413 -0.0922
+#> 0.0974 0.3486 0.5000 -0.0451
+#> -0.0646 -0.3571 -0.4350 -0.1996
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
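The construction of module_1 falls between the hunks above; assuming the layer sizes implied by the 2x3 input and 2x4 output, it would look roughly like this (a sketch, not the article's exact code):

library(torch)

# nn_linear is a nn_module_generator; calling it returns an instantiated nn_module
# with randomly initialized weight (4x3) and bias (4) parameters.
module_1 = nn_linear(in_features = 3, out_features = 4)
output = module_1(input)  # affine map: input %*% t(weight) + bias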

A neural network with one (4-unit) hidden layer and two outputs needs the following ingredients

@@ -134,8 +134,8 @@

A torch Primer
 output = softmax(output)
 output
 #> torch_tensor
-#> 0.5970 0.2588 0.1442
-#> 0.5811 0.2891 0.1298
+#> 0.2069 0.4379 0.3552
+#> 0.2262 0.4446 0.3292
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
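Putting the primer's pieces together: a hedged sketch of a comparable network built with nn_sequential, with the layer sizes (3 -> 4 -> 3) read off the printed shapes rather than taken from the article:

library(torch)

# A 4-unit hidden layer followed by a 3-unit output with a row-wise softmax.
net = nn_sequential(
  nn_linear(3, 4),
  nn_relu(),
  nn_linear(4, 3),
  nn_softmax(dim = 2)
)
net(torch_randn(2, 3))  # yields a 2x3 tensor whose rows sum to 1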

We will now continue by showing how such a neural network can be represented in mlr3torch.

@@ -170,8 +170,8 @@

Neural Networks as Graphs
 output = po_module_1$train(list(input))[[1]]
 output
 #> torch_tensor
-#> 0.2788 1.2324 -0.4626 -0.5106
-#> 0.5439 0.4810 0.3413 -0.0922
+#> 0.0974 0.3486 0.5000 -0.0451
+#> -0.0646 -0.3571 -0.4350 -0.1996
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
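The creation of po_module_1 is likewise elided between hunks; assuming mlr3torch's PipeOpModule with its default single input/output channel, the construction looks roughly like this (the id is chosen for illustration):

library(mlr3torch)

# Wrap the plain nn_module in a PipeOp so it can live inside an mlr3pipelines Graph;
# $train() then simply performs the module's forward pass on the incoming tensor.
po_module_1 = PipeOpModule$new(id = "module_1", module = module_1)
output = po_module_1$train(list(input))[[1]]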

Note that we only use $train(), since torch modules do not have anything that maps to the state (it is filled by
@@ -196,8 +196,8 @@

Neural Networks as Graphs
 output = module_graph$train(input)[[1]]
 output
 #> torch_tensor
-#> 0.5970 0.2588 0.1442
-#> 0.5811 0.2891 0.1298
+#> 0.2069 0.4379 0.3552
+#> 0.2262 0.4446 0.3292
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

While this object makes it easy to perform a forward pass, it does not inherit from nn_module, which is useful for various
@@ -245,8 +245,8 @@

Neural Networks as Graphs
 graph_module(input)
 #> torch_tensor
-#>  0.5970  0.2588  0.1442
-#>  0.5811  0.2891  0.1298
+#>  0.2069  0.4379  0.3552
+#>  0.2262  0.4446  0.3292
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
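For reference, graph_module is presumably obtained by wrapping the Graph in an nn_module via mlr3torch's nn_graph(); the input channel name and shape below are assumptions, not taken from the article:

library(mlr3torch)

# nn_graph() turns a Graph of PipeOpModules into a proper nn_module, so the
# usual nn_module machinery ($parameters, $to(), ...) becomes available.
# "module_1.input" is an assumed input channel name; NA marks the batch dimension.
graph_module = nn_graph(module_graph, shapes_in = list(module_1.input = c(NA, 3)))
graph_module(input)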
@@ -363,8 +363,8 @@

 small_module(input)
 #> torch_tensor
-#> 1.6260 -1.7737 -1.2775 -1.2681
-#> 0.7365 -0.6027 0.0853 0.2263
+#> -0.8520 -0.1764 0.4265 0.1729
+#> 0.7045 0.0188 -0.1037 -0.5437
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

@@ -429,9 +429,9 @@

Using ModelDescriptor to
 small_module(batch$x[[1]])
 #> torch_tensor
-#> -2.7629 0.9494 -0.5337 -0.3909
-#> -2.4095 0.8402 -0.4640 -0.1927
-#> -2.4897 0.8274 -0.5014 -0.3388
+#> -3.3067 2.0636 2.0882 -2.1895
+#> -3.1507 1.8299 1.9559 -1.9313
+#> -3.0589 1.8823 1.9405 -2.0448
 #> [ CPUFloatType{3,4} ][ grad_fn = <AddmmBackward0> ]

The first linear layer that takes “Sepal” input ("linear1") creates a 2x4 tensor (batch size 2, 4 units),
@@ -689,14 +689,14 @@

Building more interesting NNs
 iris_module$graph$pipeops$linear1$.result
 #> $output
 #> torch_tensor
-#> -2.8656 -1.3944 -4.9096 2.5212
-#> -2.5005 -1.4532 -4.5724 2.2481
+#> -4.3902 -2.6418 -1.1820 2.4671
+#> -3.9918 -2.5084 -0.9970 2.2088
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
 iris_module$graph$pipeops$linear3$.result
 #> $output
 #> torch_tensor
-#> 0.4109 -0.4586 0.3801 0.5571 -0.0729
-#> 0.4109 -0.4586 0.3801 0.5571 -0.0729
+#> -0.7749 0.6632 0.1619 0.4226 -0.1172
+#> -0.7749 0.6632 0.1619 0.4226 -0.1172
 #> [ CPUFloatType{2,5} ][ grad_fn = <AddmmBackward0> ]
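As a quick sanity check of the shapes involved (a standalone sketch with stand-in tensors, not the actual intermediate results): concatenating a 2x4 and a 2x5 tensor along the feature dimension yields the 2x9 tensor shown in the next hunk.

library(torch)

# torch_cat joins tensors along an existing dimension; dim = 2 is the
# feature dimension here (R torch dimensions are 1-based).
a = torch_randn(2, 4)
b = torch_randn(2, 5)
torch_cat(list(a, b), dim = 2)$shape
#> [1] 2 9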

We observe that the po("nn_merge_cat") concatenates these, as expected:

@@ -704,8 +704,8 @@

Building more interesting NNs
 iris_module$graph$pipeops$nn_merge_cat$.result
 #> $output
 #> torch_tensor
-#> -2.8656 -1.3944 -4.9096 2.5212 0.4109 -0.4586 0.3801 0.5571 -0.0729
-#> -2.5005 -1.4532 -4.5724 2.2481 0.4109 -0.4586 0.3801 0.5571 -0.0729
+#> -4.3902 -2.6418 -1.1820 2.4671 -0.7749 0.6632 0.1619 0.4226 -0.1172
+#> -3.9918 -2.5084 -0.9970 2.2088 -0.7749 0.6632 0.1619 0.4226 -0.1172
 #> [ CPUFloatType{2,9} ][ grad_fn = <CatBackward0> ]
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png
index d8415733..f0975090 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png
index 1e8c5c08..b8968d3d 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png
index e35c3ede..f8282b7c 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png
index b28cb635..2922de58 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png
index 2922de58..b73b1206 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png differ
diff --git a/dev/articles/lazy_tensor.html b/dev/articles/lazy_tensor.html
index 1f6cde8b..3a0b358b 100644
--- a/dev/articles/lazy_tensor.html
+++ b/dev/articles/lazy_tensor.html
@@ -387,7 +387,7 @@

Digging Into Internals
 #> <DataDescriptor: 1 ops>
 #> * dataset_shapes: [x: (NA,1)]
 #> * input_map: (x) -> Graph
-#> * pointer: nop.2cfac7.x.output
+#> * pointer: nop.187422.x.output
 #> * shape: [(NA,1)]

The printed output of the data descriptor informs us about: