diff --git a/dev/articles/callbacks.html b/dev/articles/callbacks.html index 9035fd49..184f09b5 100644 --- a/dev/articles/callbacks.html +++ b/dev/articles/callbacks.html @@ -225,7 +225,7 @@

Writing a Custom Logger## load_state_dict: function (state_dict) ## on_before_valid: function () ## on_batch_end: function () -## Parent env: <environment: 0x55fe6f6a36e0> +## Parent env: <environment: 0x555b81e8ca48> ## Locked objects: FALSE ## Locked class: FALSE ## Portable: TRUE diff --git a/dev/articles/get_started.html b/dev/articles/get_started.html index ec959595..f2eea715 100644 --- a/dev/articles/get_started.html +++ b/dev/articles/get_started.html @@ -235,7 +235,7 @@

Loss #> clone: function (deep = FALSE, ..., replace_values = TRUE) #> Private: #> .__clone_r6__: function (deep = FALSE) -#> Parent env: <environment: 0x55f7957bfea8> +#> Parent env: <environment: 0x55fb4779a888> #> Locked objects: FALSE #> Locked class: FALSE #> Portable: TRUE diff --git a/dev/articles/internals_pipeop_torch.html b/dev/articles/internals_pipeop_torch.html index c4aa37b5..6d8cbe1b 100644 --- a/dev/articles/internals_pipeop_torch.html +++ b/dev/articles/internals_pipeop_torch.html @@ -104,8 +104,8 @@

A torch Primerinput = torch_randn(2, 3) input #> torch_tensor -#> 0.3523 -1.2318 0.3239 -#> 0.7956 0.8281 -1.7749 +#> 0.6886 1.5234 -0.4646 +#> -0.2283 -0.4859 1.8830 #> [ CPUFloatType{2,3} ]

An nn_module is constructed from an nn_module_generator. nn_linear is one of the @@ -117,8 +117,8 @@

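As a sketch of this step (our own illustration, assuming the torch R package is attached; variable names follow the article):

```r
library(torch)

# nn_linear is a module generator: calling it constructs a concrete
# nn_module with its own randomly initialized weight and bias parameters
module_1 = nn_linear(in_features = 3, out_features = 4)

# applying the module to a 2x3 input yields a 2x4 output
input = torch_randn(2, 3)
output = module_1(input)
output$shape
```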
A torch Primeroutput = module_1(input) output #> torch_tensor -#> 0.2760 -0.9127 0.6610 -0.1370 -#> 0.8999 0.5818 -0.5390 -1.4131 +#> -0.5194 -0.5348 -0.6050 1.0497 +#> 0.9937 0.5747 0.5523 -0.8098 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

A neural network with one (4-unit) hidden layer and two outputs needs the following ingredients

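A minimal plain-torch sketch of these ingredients (illustrative only, not the article's exact code; note the text speaks of two outputs, while the softmax tensors printed below have three columns, so we use three output units here):

```r
library(torch)

# a linear layer into the 4-unit hidden layer, a nonlinearity,
# an output layer, and a softmax over the outputs
# (3 output units, matching the {2,3} tensors printed later)
net = nn_sequential(
  nn_linear(3, 4),
  nn_relu(),
  nn_linear(4, 3),
  nn_softmax(dim = 2)
)
net(torch_randn(2, 3))$shape
```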
@@ -134,8 +134,8 @@

A torch Primeroutput = softmax(output) output #> torch_tensor -#> 0.2606 0.3220 0.4174 -#> 0.3031 0.3645 0.3324 +#> 0.3351 0.3104 0.3545 +#> 0.3483 0.2008 0.4510 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

We will now continue by showing how such a neural network can be represented in mlr3torch.

@@ -170,8 +170,8 @@

Neural Networks as Graphsoutput = po_module_1$train(list(input))[[1]] output #> torch_tensor -#> 0.2760 -0.9127 0.6610 -0.1370 -#> 0.8999 0.5818 -0.5390 -1.4131 +#> -0.5194 -0.5348 -0.6050 1.0497 +#> 0.9937 0.5747 0.5523 -0.8098 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

Note that we only use $train(), since torch modules do not have anything that maps to the state (it is filled by @@ -196,8 +196,8 @@

Neural Networks as Graphsoutput = module_graph$train(input)[[1]] output #> torch_tensor -#> 0.2606 0.3220 0.4174 -#> 0.3031 0.3645 0.3324 +#> 0.3351 0.3104 0.3545 +#> 0.3483 0.2008 0.4510 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

While this object makes it easy to perform a forward pass, it does not inherit from nn_module, which is useful for various @@ -245,8 +245,8 @@

Neural Networks as Graphs
 graph_module(input)
 #> torch_tensor
-#>  0.2606  0.3220  0.4174
-#>  0.3031  0.3645  0.3324
+#>  0.3351  0.3104  0.3545
+#>  0.3483  0.2008  0.4510
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
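A hedged sketch of how the graph_module above can be obtained: mlr3torch provides an nn_graph() helper that converts a Graph into a proper nn_module; the argument names and the input channel name used here are assumptions and may differ from the actual signature.

```r
library(mlr3torch)

# hypothetical sketch: wrap the Graph from above into an nn_module,
# declaring the (batch-agnostic) shape of its input channel
graph_module = nn_graph(module_graph, shapes_in = list(input = c(NA, 3)))

# graph_module now inherits from nn_module and can be called directly
```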
@@ -363,8 +363,8 @@

small_module(input) #> torch_tensor -#> -0.0085 0.5001 0.0189 -0.8415 -#> 0.2737 -1.1212 0.2270 -0.3720 +#> 0.2002 -0.1355 0.8235 1.2075 +#> -1.1713 -0.2991 0.4776 -0.6605 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

@@ -429,9 +429,9 @@

Using ModelDescriptor to small_module(batch$x[[1]]) #> torch_tensor -#> -2.2059 -0.9033 2.7936 3.0107 -#> -2.0079 -1.0007 2.6154 2.7973 -#> -2.0381 -0.8644 2.5998 2.7272 +#> -0.9670 0.3528 1.9821 0.5473 +#> -0.7823 0.3230 1.8976 0.7231 +#> -0.8980 0.2995 1.8627 0.5497 #> [ CPUFloatType{3,4} ][ grad_fn = <AddmmBackward0> ]

The first linear layer that takes “Sepal” input ("linear1") creates a 2x4 tensor (batch size 2, 4 units), @@ -689,14 +689,14 @@

Building more interesting NNsiris_module$graph$pipeops$linear1$.result #> $output #> torch_tensor -#> -0.3670 2.6644 1.5224 2.9457 -#> -0.1040 2.4606 1.4121 2.6115 +#> -3.8580 -1.0830 -1.7136 0.4629 +#> -3.5353 -1.1476 -1.6128 0.5261 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ] iris_module$graph$pipeops$linear3$.result #> $output #> torch_tensor -#> -0.0572 0.8390 0.6079 0.6556 -0.2773 -#> -0.0572 0.8390 0.6079 0.6556 -0.2773 +#> -0.1886 0.3924 0.7736 -0.4227 -0.1783 +#> -0.1886 0.3924 0.7736 -0.4227 -0.1783 #> [ CPUFloatType{2,5} ][ grad_fn = <AddmmBackward0> ]

We observe that po("nn_merge_cat") concatenates these, as expected:

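In plain torch, the concatenation performed by po("nn_merge_cat") corresponds to torch_cat along the feature dimension; a sketch with made-up tensors of the shapes shown above:

```r
library(torch)

# two batches of activations with 4 and 5 features, as above
a = torch_randn(2, 4)
b = torch_randn(2, 5)

# concatenate along dimension 2 (the feature dimension) -> shape {2,9}
merged = torch_cat(list(a, b), dim = 2)
merged$shape
```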
@@ -704,8 +704,8 @@

Building more interesting NNsiris_module$graph$pipeops$nn_merge_cat$.result #> $output #> torch_tensor -#> -0.3670 2.6644 1.5224 2.9457 -0.0572 0.8390 0.6079 0.6556 -0.2773 -#> -0.1040 2.4606 1.4121 2.6115 -0.0572 0.8390 0.6079 0.6556 -0.2773 +#> -3.8580 -1.0830 -1.7136 0.4629 -0.1886 0.3924 0.7736 -0.4227 -0.1783 +#> -3.5353 -1.1476 -1.6128 0.5261 -0.1886 0.3924 0.7736 -0.4227 -0.1783 #> [ CPUFloatType{2,9} ][ grad_fn = <CatBackward0> ] diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png index 4f2fe573..1a4722c9 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png differ diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png index b73d0530..882c2afb 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png differ diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png index 79ef5f59..f9592125 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png differ diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png index 814868ca..a288aeb4 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png differ diff --git 
a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png index 9bbe1ca7..6f29b401 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png differ diff --git a/dev/articles/lazy_tensor.html b/dev/articles/lazy_tensor.html index e7521a2b..116b6805 100644 --- a/dev/articles/lazy_tensor.html +++ b/dev/articles/lazy_tensor.html @@ -386,7 +386,7 @@

Digging Into Internals#> <DataDescriptor: 1 ops> #> * dataset_shapes: [x: (NA,1)] #> * input_map: (x) -> Graph -#> * pointer: nop.d5587b.x.output +#> * pointer: nop.c189e3.x.output #> * shape: [(NA,1)]

The printed output of the data descriptor informs us about: