diff --git a/dev/articles/callbacks.html b/dev/articles/callbacks.html
index e9f6a059..9fd157cb 100644
--- a/dev/articles/callbacks.html
+++ b/dev/articles/callbacks.html
@@ -225,7 +225,7 @@

 Writing a Custom Logger
 ## load_state_dict: function (state_dict)
 ## on_before_valid: function ()
 ## on_batch_end: function ()
-## Parent env: <environment: 0x563506eb9280>
+## Parent env: <environment: 0x5603df3b07c8>
 ## Locked objects: FALSE
 ## Locked class: FALSE
 ## Portable: TRUE
diff --git a/dev/articles/get_started.html b/dev/articles/get_started.html
index daca3100..a745d517 100644
--- a/dev/articles/get_started.html
+++ b/dev/articles/get_started.html
@@ -236,7 +236,7 @@

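The only substantive change in the two hunks above is the printed environment address: R assigns these addresses afresh in every session, so the lines differ on every documentation rebuild even though the R6 object itself is unchanged. A minimal Python sketch of the same effect (the `Logger` class here is a hypothetical stand-in, not part of the diff; Python's default repr embeds the object's memory address just as R's environment printout does):

```python
# Hypothetical stand-in object; only the memory address in the default
# repr differs between runs, mirroring R's <environment: 0x...> output.
class Logger:
    pass

r = object.__repr__(Logger())
print(r)  # address portion ("at 0x...") varies on every run
```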
 Loss
 #> clone: function (deep = FALSE, ..., replace_values = TRUE)
 #> Private:
 #> .__clone_r6__: function (deep = FALSE)
-#> Parent env: <environment: 0x56101df4bf40>
+#> Parent env: <environment: 0x55a43f05f830>
 #> Locked objects: FALSE
 #> Locked class: FALSE
 #> Portable: TRUE
diff --git a/dev/articles/internals_pipeop_torch.html b/dev/articles/internals_pipeop_torch.html
index e8dde36d..a6f0cb20 100644
--- a/dev/articles/internals_pipeop_torch.html
+++ b/dev/articles/internals_pipeop_torch.html
@@ -104,8 +104,8 @@

 A torch Primer
 input = torch_randn(2, 3)
 input
 #> torch_tensor
-#> -0.0922 -0.7069 -0.6839
-#> -0.8342 0.7515 0.0459
+#> -0.6687 -2.4004 -1.4578
+#> 0.9376 -0.9994 -0.3993
 #> [ CPUFloatType{2,3} ]

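The tensor values in the hunk above differ between builds because torch_randn draws from an unseeded RNG each time the vignette is rendered, which is what produces most of the churn in this diff. A stdlib Python sketch of the underlying principle: seeding the generator makes the draws reproducible (in the R torch package the analogous call would be torch_manual_seed(); that is stated here as background, not as part of the diff):

```python
import random

# Draw six standard-normal values twice under the same seed:
# the two sequences are identical, so rendered output stays stable.
random.seed(42)
first = [random.gauss(0.0, 1.0) for _ in range(6)]
random.seed(42)
second = [random.gauss(0.0, 1.0) for _ in range(6)]
assert first == second  # seeded draws are reproducible across runs
```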
 A nn_module is constructed from a nn_module_generator. nn_linear is one of the
@@ -117,8 +117,8 @@

 A torch Primer
 output = module_1(input)
 output
 #> torch_tensor
-#> 0.1557 0.3401 -0.2068 -1.1674
-#> -0.5995 0.2244 -0.2694 -0.1591
+#> 0.0980 0.3119 -1.4607 -0.4350
+#> 0.2679 0.8104 0.1816 0.0485
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

A neural network with one (4-unit) hidden layer and two outputs needs the following ingredients

@@ -134,8 +134,8 @@

 A torch Primer
 output = softmax(output)
 output
 #> torch_tensor
-#> 0.2584 0.2372 0.5043
-#> 0.2826 0.2323 0.4850
+#> 0.2918 0.1502 0.5579
+#> 0.2890 0.1217 0.5893
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

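Although the softmax values in the hunk above change between builds, every output row still sums to 1, so both the removed and the added rows are valid probability vectors. A plain-Python sketch of the computation (numerically stable variant; the checked row values are copied from the hunk, the input row is illustrative):

```python
import math

def softmax(row):
    # Numerically stable softmax: subtract the row max before exponentiating.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

# Any input row maps to a probability vector summing to 1 ...
probs = softmax([0.5, -1.2, 2.0])
assert abs(sum(probs) - 1.0) < 1e-9
# ... and the removed and added rows in the hunk satisfy this too
# (up to the 4-digit rounding of the printed output).
assert abs(sum([0.2584, 0.2372, 0.5043]) - 1.0) < 1e-3
assert abs(sum([0.2918, 0.1502, 0.5579]) - 1.0) < 1e-3
```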
We will now continue with showing how such a neural network can be represented in mlr3torch.

@@ -170,8 +170,8 @@

 Neural Networks as Graphs
 output = po_module_1$train(list(input))[[1]]
 output
 #> torch_tensor
-#> 0.1557 0.3401 -0.2068 -1.1674
-#> -0.5995 0.2244 -0.2694 -0.1591
+#> 0.0980 0.3119 -1.4607 -0.4350
+#> 0.2679 0.8104 0.1816 0.0485
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

 Note we only use the $train(), since torch modules do not have anything that maps to the state (it is filled by
@@ -196,8 +196,8 @@

 Neural Networks as Graphs
 output = module_graph$train(input)[[1]]
 output
 #> torch_tensor
-#> 0.2584 0.2372 0.5043
-#> 0.2826 0.2323 0.4850
+#> 0.2918 0.1502 0.5579
+#> 0.2890 0.1217 0.5893
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

 While this object allows to easily perform a forward pass, it does not inherit from nn_module, which is useful for various
@@ -245,8 +245,8 @@

Neural Networks as Graphs
 graph_module(input)
 #> torch_tensor
-#>  0.2584  0.2372  0.5043
-#>  0.2826  0.2323  0.4850
+#>  0.2918  0.1502  0.5579
+#>  0.2890  0.1217  0.5893
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
@@ -363,8 +363,8 @@

 small_module(input)
 #> torch_tensor
-#> -0.5541 0.3901 0.5417 -0.4853
-#> 0.1020 -0.3195 -0.1370 -0.0911
+#> 1.2548 -1.3432 -1.5458 0.7297
+#> 0.4231 -0.7686 -0.7230 0.6464
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

@@ -429,9 +429,9 @@

 Using ModelDescriptor to
 small_module(batch$x[[1]])
 #> torch_tensor
-#> 1.3063 -2.6567 -0.8676 -0.7079
-#> 1.1566 -2.4779 -0.7330 -0.6347
-#> 1.1751 -2.4444 -0.7673 -0.6908
+#> -0.9548 2.4764 1.7547 -0.4855
+#> -1.1006 2.1895 1.5184 -0.5936
+#> -0.8588 2.2540 1.5650 -0.4339
 #> [ CPUFloatType{3,4} ][ grad_fn = <AddmmBackward0> ]

 The first linear layer that takes “Sepal” input ("linear1") creates a 2x4 tensor (batch size 2, 4 units),
@@ -689,14 +689,14 @@

 Building more interesting NNs
 iris_module$graph$pipeops$linear1$.result
 #> $output
 #> torch_tensor
-#> -0.6149 1.9622 2.6344 -0.7642
-#> -0.6835 1.6752 2.4814 -0.5922
+#> 2.6239 0.8791 -1.1404 0.1583
+#> 2.4387 1.0496 -1.3065 0.2425
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
 iris_module$graph$pipeops$linear3$.result
 #> $output
 #> torch_tensor
-#> 0.1054 0.6288 -0.4318 -0.2591 -0.2036
-#> 0.1054 0.6288 -0.4318 -0.2591 -0.2036
+#> -0.0769 0.4376 0.2229 -0.0915 -0.3279
+#> -0.0769 0.4376 0.2229 -0.0915 -0.3279
 #> [ CPUFloatType{2,5} ][ grad_fn = <AddmmBackward0> ]

We observe that the po("nn_merge_cat") concatenates these, as expected:

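The merge is purely shape arithmetic: a (2,4) tensor and a (2,5) tensor are joined row-wise into a (2,9) tensor, matching the CPUFloatType{2,9} output below. A plain-Python sketch of that concatenation (illustrative values, not the ones from the diff):

```python
# Row-wise concatenation of a (2,4) and a (2,5) "tensor" yields (2,9),
# as po("nn_merge_cat") does along the feature dimension.
a = [[1.0] * 4, [2.0] * 4]  # stands in for linear1's (2,4) output
b = [[3.0] * 5, [4.0] * 5]  # stands in for linear3's (2,5) output
merged = [ra + rb for ra, rb in zip(a, b)]
assert [len(row) for row in merged] == [9, 9]
```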
@@ -704,8 +704,8 @@

 Building more interesting NNs
 iris_module$graph$pipeops$nn_merge_cat$.result
 #> $output
 #> torch_tensor
-#> -0.6149 1.9622 2.6344 -0.7642 0.1054 0.6288 -0.4318 -0.2591 -0.2036
-#> -0.6835 1.6752 2.4814 -0.5922 0.1054 0.6288 -0.4318 -0.2591 -0.2036
+#> 2.6239 0.8791 -1.1404 0.1583 -0.0769 0.4376 0.2229 -0.0915 -0.3279
+#> 2.4387 1.0496 -1.3065 0.2425 -0.0769 0.4376 0.2229 -0.0915 -0.3279
 #> [ CPUFloatType{2,9} ][ grad_fn = <CatBackward0> ]
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png
index c56a2242..f8d1e32f 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png
index d8c28a7d..a63a8719 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png
index c7be6c31..0b65a405 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png
index ff46e5df..ab8c7f0b 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png
index aa4af5c2..efc5bd67 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png differ
diff --git a/dev/articles/lazy_tensor.html b/dev/articles/lazy_tensor.html
index 5803f88a..64487e81 100644
--- a/dev/articles/lazy_tensor.html
+++ b/dev/articles/lazy_tensor.html
@@ -387,7 +387,7 @@

 Digging Into Internals
 #> <DataDescriptor: 1 ops>
 #> * dataset_shapes: [x: (NA,1)]
 #> * input_map: (x) -> Graph
-#> * pointer: nop.aa876f.x.output
+#> * pointer: nop.6b05fd.x.output
 #> * shape: [(NA,1)]

The printed output of the data descriptor informs us about: