diff --git a/dev/articles/callbacks.html b/dev/articles/callbacks.html index 75a64039..274d8aea 100644 --- a/dev/articles/callbacks.html +++ b/dev/articles/callbacks.html @@ -225,7 +225,7 @@

Writing a Custom Logger## load_state_dict: function (state_dict) ## on_before_valid: function () ## on_batch_end: function () -## Parent env: <environment: 0x55c39d30d470> +## Parent env: <environment: 0x558aa657d5b8> ## Locked objects: FALSE ## Locked class: FALSE ## Portable: TRUE diff --git a/dev/articles/get_started.html b/dev/articles/get_started.html index d4a46e81..783861e8 100644 --- a/dev/articles/get_started.html +++ b/dev/articles/get_started.html @@ -236,7 +236,7 @@

Loss #> clone: function (deep = FALSE, ..., replace_values = TRUE) #> Private: #> .__clone_r6__: function (deep = FALSE) -#> Parent env: <environment: 0x5573802d0a48> +#> Parent env: <environment: 0x5562a6d4b3d0> #> Locked objects: FALSE #> Locked class: FALSE #> Portable: TRUE diff --git a/dev/articles/internals_pipeop_torch.html b/dev/articles/internals_pipeop_torch.html index dd2b3abc..dbe9da95 100644 --- a/dev/articles/internals_pipeop_torch.html +++ b/dev/articles/internals_pipeop_torch.html @@ -104,8 +104,8 @@

A torch Primerinput = torch_randn(2, 3) input #> torch_tensor -#> 1.4462 -0.4322 0.2945 -#> -1.4414 -1.0909 0.0804 +#> 0.9070 -2.7058 0.9525 +#> -0.6046 -0.8117 -0.4933 #> [ CPUFloatType{2,3} ]

A nn_module is constructed from a nn_module_generator. nn_linear is one of the @@ -117,8 +117,8 @@
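The context line above notes that an `nn_module` is constructed from an `nn_module_generator` such as `nn_linear`. As a hedged sketch (assuming the R `torch` package is installed; this is illustrative code, not the article's exact chunk), the pattern looks like:

```r
library(torch)

# nn_linear is an nn_module_generator; calling it constructs an nn_module
module_1 = nn_linear(in_features = 3, out_features = 4)

# Applying the module computes an affine transform of the input
input = torch_randn(2, 3)
output = module_1(input)
output$shape  # 2 x 4, matching the CPUFloatType{2,4} output shown in the diff
```

The concrete values differ on every run because `torch_randn()` and the layer's weight initialization are random, which is exactly why these doc outputs churn between rebuilds.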

A torch Primeroutput = module_1(input) output #> torch_tensor -#> -0.8440 0.2178 -1.0613 0.0279 -#> 0.3387 -0.6332 -0.3840 -0.1144 +#> 0.2788 1.2324 -0.4626 -0.5106 +#> 0.5439 0.4810 0.3413 -0.0922 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

A neural network with one (4-unit) hidden layer and two outputs needs the following ingredients
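The "ingredients" described above (one 4-unit hidden layer feeding a softmax over classes, matching the 2x3 softmax outputs later in this diff) can be sketched as a small sequential network. This is an assumption-based sketch using `nn_sequential`, not the article's actual code:

```r
library(torch)

# One hidden layer with 4 units, then 3 output scores,
# followed by a softmax over the feature dimension
net = nn_sequential(
  nn_linear(3, 4),
  nn_relu(),
  nn_linear(4, 3),
  nn_softmax(dim = 2)
)

input = torch_randn(2, 3)
net(input)  # each row sums to 1, as in the SoftmaxBackward0 outputs above
```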

@@ -134,8 +134,8 @@

A torch Primeroutput = softmax(output) output #> torch_tensor -#> 0.2715 0.3477 0.3808 -#> 0.2549 0.3680 0.3771 +#> 0.5970 0.2588 0.1442 +#> 0.5811 0.2891 0.1298 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

We will now continue by showing how such a neural network can be represented in mlr3torch.

@@ -170,8 +170,8 @@

Neural Networks as Graphsoutput = po_module_1$train(list(input))[[1]] output #> torch_tensor -#> -0.8440 0.2178 -1.0613 0.0279 -#> 0.3387 -0.6332 -0.3840 -0.1144 +#> 0.2788 1.2324 -0.4626 -0.5106 +#> 0.5439 0.4810 0.3413 -0.0922 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

Note that we only use $train(), since torch modules do not have anything that maps to the state (it is filled by @@ -196,8 +196,8 @@

Neural Networks as Graphsoutput = module_graph$train(input)[[1]] output #> torch_tensor -#> 0.2715 0.3477 0.3808 -#> 0.2549 0.3680 0.3771 +#> 0.5970 0.2588 0.1442 +#> 0.5811 0.2891 0.1298 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

While this object allows us to perform a forward pass easily, it does not inherit from nn_module, which is useful for various @@ -245,8 +245,8 @@

Neural Networks as Graphs
 graph_module(input)
 #> torch_tensor
-#>  0.2715  0.3477  0.3808
-#>  0.2549  0.3680  0.3771
+#>  0.5970  0.2588  0.1442
+#>  0.5811  0.2891  0.1298
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
@@ -363,8 +363,8 @@

small_module(input) #> torch_tensor -#> 0.3247 -0.7265 -0.9555 -1.1902 -#> -1.1986 -0.7648 0.3823 0.3389 +#> 1.6260 -1.7737 -1.2775 -1.2681 +#> 0.7365 -0.6027 0.0853 0.2263 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

@@ -429,9 +429,9 @@

Using ModelDescriptor to small_module(batch$x[[1]]) #> torch_tensor -#> -1.1461 1.7293 0.9483 -3.5494 -#> -0.9435 1.5987 0.9821 -3.2417 -#> -1.0713 1.5527 0.8862 -3.2914 +#> -2.7629 0.9494 -0.5337 -0.3909 +#> -2.4095 0.8402 -0.4640 -0.1927 +#> -2.4897 0.8274 -0.5014 -0.3388 #> [ CPUFloatType{3,4} ][ grad_fn = <AddmmBackward0> ]

The first linear layer that takes “Sepal” input ("linear1") creates a 2x4 tensor (batch size 2, 4 units), @@ -689,14 +689,14 @@

Building more interesting NNsiris_module$graph$pipeops$linear1$.result #> $output #> torch_tensor -#> 3.1296 -1.8277 -1.6964 -1.1258 -#> 3.0588 -1.5238 -1.6015 -1.1881 +#> -2.8656 -1.3944 -4.9096 2.5212 +#> -2.5005 -1.4532 -4.5724 2.2481 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ] iris_module$graph$pipeops$linear3$.result #> $output #> torch_tensor -#> 1.0162 -0.3580 -0.9170 0.3528 -0.2661 -#> 1.0162 -0.3580 -0.9170 0.3528 -0.2661 +#> 0.4109 -0.4586 0.3801 0.5571 -0.0729 +#> 0.4109 -0.4586 0.3801 0.5571 -0.0729 #> [ CPUFloatType{2,5} ][ grad_fn = <AddmmBackward0> ]

We observe that the po("nn_merge_cat") concatenates these, as expected:
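The `grad_fn = <CatBackward0>` in the merged tensor above indicates the concatenation is a plain `torch_cat` along the feature dimension. A minimal sketch of the same shape arithmetic (hypothetical inputs standing in for the `linear1` and `linear3` results):

```r
library(torch)

# po("nn_merge_cat") concatenates its inputs along the feature dimension
a = torch_randn(2, 4)  # stand-in for the linear1 result (2x4)
b = torch_randn(2, 5)  # stand-in for the linear3 result (2x5)
merged = torch_cat(list(a, b), dim = 2)
merged$shape  # 2 x 9, matching the CPUFloatType{2,9} tensor above
```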

@@ -704,8 +704,8 @@

Building more interesting NNsiris_module$graph$pipeops$nn_merge_cat$.result #> $output #> torch_tensor -#> 3.1296 -1.8277 -1.6964 -1.1258 1.0162 -0.3580 -0.9170 0.3528 -0.2661 -#> 3.0588 -1.5238 -1.6015 -1.1881 1.0162 -0.3580 -0.9170 0.3528 -0.2661 +#> -2.8656 -1.3944 -4.9096 2.5212 0.4109 -0.4586 0.3801 0.5571 -0.0729 +#> -2.5005 -1.4532 -4.5724 2.2481 0.4109 -0.4586 0.3801 0.5571 -0.0729 #> [ CPUFloatType{2,9} ][ grad_fn = <CatBackward0> ] diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png index 633e0a97..d8415733 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png differ diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png index b4f4874b..1e8c5c08 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png differ diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png index 2bbf3884..e35c3ede 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png differ diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png index ff46e5df..b28cb635 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png differ diff --git 
a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png index aa4af5c2..2922de58 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png differ diff --git a/dev/articles/lazy_tensor.html b/dev/articles/lazy_tensor.html index dd77f409..1f6cde8b 100644 --- a/dev/articles/lazy_tensor.html +++ b/dev/articles/lazy_tensor.html @@ -387,7 +387,7 @@

Digging Into Internals#> <DataDescriptor: 1 ops> #> * dataset_shapes: [x: (NA,1)] #> * input_map: (x) -> Graph -#> * pointer: nop.ded895.x.output +#> * pointer: nop.2cfac7.x.output #> * shape: [(NA,1)]

The printed output of the data descriptor informs us about: