diff --git a/dev/articles/callbacks.html b/dev/articles/callbacks.html
index 9fd157cb..b5bbb3ef 100644
--- a/dev/articles/callbacks.html
+++ b/dev/articles/callbacks.html
@@ -225,7 +225,7 @@

 Writing a Custom Logger
 ## load_state_dict: function (state_dict)
 ## on_before_valid: function ()
 ## on_batch_end: function ()
-## Parent env: <environment: 0x5603df3b07c8>
+## Parent env: <environment: 0x55fef5065170>
 ## Locked objects: FALSE
 ## Locked class: FALSE
 ## Portable: TRUE
diff --git a/dev/articles/get_started.html b/dev/articles/get_started.html
index a745d517..132aeddb 100644
--- a/dev/articles/get_started.html
+++ b/dev/articles/get_started.html
@@ -236,7 +236,7 @@

 Loss
 #> clone: function (deep = FALSE, ..., replace_values = TRUE)
 #> Private:
 #> .__clone_r6__: function (deep = FALSE)
-#> Parent env: <environment: 0x55a43f05f830>
+#> Parent env: <environment: 0x55ffc865fba8>
 #> Locked objects: FALSE
 #> Locked class: FALSE
 #> Portable: TRUE
diff --git a/dev/articles/internals_pipeop_torch.html b/dev/articles/internals_pipeop_torch.html
index a6f0cb20..9bbb6894 100644
--- a/dev/articles/internals_pipeop_torch.html
+++ b/dev/articles/internals_pipeop_torch.html
@@ -104,8 +104,8 @@

 A torch Primer
 input = torch_randn(2, 3)
 input
 #> torch_tensor
-#> -0.6687 -2.4004 -1.4578
-#>  0.9376 -0.9994 -0.3993
+#>  0.4430 -1.7493  0.3801
+#>  2.5443  0.4085 -0.4808
 #> [ CPUFloatType{2,3} ]

A nn_module is constructed from a nn_module_generator. nn_linear is one of the
@@ -117,8 +117,8 @@

 A torch Primer
 output = module_1(input)
 output
 #> torch_tensor
-#>  0.0980  0.3119 -1.4607 -0.4350
-#>  0.2679  0.8104  0.1816  0.0485
+#>  0.6725  0.3022 -0.3800  0.3413
+#> -0.8161  0.3485  0.3559 -1.3795
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
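The definition of module_1 is not part of this hunk; the shapes above (a 2x3 input mapped to a 2x4 output) are consistent with a linear layer with 3 input and 4 output features. A minimal sketch under that assumption:

library(torch)
torch_manual_seed(1)                                      # reproducible random values
input = torch_randn(2, 3)                                 # batch of 2 observations, 3 features
module_1 = nn_linear(in_features = 3, out_features = 4)   # assumed definition of module_1
module_1(input)                                           # forward pass -> 2x4 tensor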

A neural network with one (4-unit) hidden layer and two outputs needs the following ingredients
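A minimal sketch of such a network in plain torch, assuming a ReLU activation and layer sizes that match the shapes printed in the surrounding hunks (3 input features, a 4-unit hidden layer, and a softmax over the output units); the actual ingredient list from the vignette is not reproduced here:

library(torch)
module_1 = nn_linear(3, 4)      # input layer: 3 features -> 4 hidden units
activation = nn_relu()          # assumed activation function
module_2 = nn_linear(4, 3)      # output layer: 4 hidden units -> output units
softmax = nn_softmax(dim = 2)   # normalises each row of the output to probabilities
output = softmax(module_2(activation(module_1(torch_randn(2, 3)))))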

@@ -134,8 +134,8 @@

 A torch Primer
 output = softmax(output)
 output
 #> torch_tensor
-#> 0.2918 0.1502 0.5579
-#> 0.2890 0.1217 0.5893
+#> 0.2155 0.3400 0.4445
+#> 0.1626 0.3668 0.4706
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

We will now continue by showing how such a neural network can be represented in mlr3torch.

@@ -170,8 +170,8 @@

 Neural Networks as Graphs
 output = po_module_1$train(list(input))[[1]]
 output
 #> torch_tensor
-#>  0.0980  0.3119 -1.4607 -0.4350
-#>  0.2679  0.8104  0.1816  0.0485
+#>  0.6725  0.3022 -0.3800  0.3413
+#> -0.8161  0.3485  0.3559 -1.3795
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
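How po_module_1 is constructed does not appear in this excerpt. A minimal sketch, assuming it wraps a linear layer in mlr3torch's PipeOpModule (the id and layer sizes are assumptions):

library(mlr3torch)
library(mlr3pipelines)
library(torch)
# Wrap an nn_module (standing in for module_1) so it can act as a PipeOp in a Graph.
po_module_1 = po("module", id = "module_1", module = nn_linear(3, 4))
# PipeOps take and return lists, hence the list(input) / [[1]] in the hunk above.
output = po_module_1$train(list(torch_randn(2, 3)))[[1]]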

Note we only use the $train(), since torch modules do not have anything that maps to the state (it is filled by
@@ -196,8 +196,8 @@

 Neural Networks as Graphs
 output = module_graph$train(input)[[1]]
 output
 #> torch_tensor
-#> 0.2918 0.1502 0.5579
-#> 0.2890 0.1217 0.5893
+#> 0.2155 0.3400 0.4445
+#> 0.1626 0.3668 0.4706
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
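The assembly of module_graph is likewise not shown here. A sketch of one way such a graph could be chained from individual PipeOpModules with mlr3pipelines' %>>% operator (ids and layer sizes are assumptions):

library(mlr3torch)
library(mlr3pipelines)
library(torch)
module_graph = po("module", id = "linear", module = nn_linear(3, 4)) %>>%
  po("module", id = "output", module = nn_linear(4, 3)) %>>%
  po("module", id = "softmax", module = nn_softmax(dim = 2))
# A Graph's $train() also takes the input and returns a list of outputs:
out = module_graph$train(torch_randn(2, 3))[[1]]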

While this object allows us to easily perform a forward pass, it does not inherit from nn_module, which is useful for various
@@ -245,8 +245,8 @@

Neural Networks as Graphs
 graph_module(input)
 #> torch_tensor
-#>  0.2918  0.1502  0.5579
-#>  0.2890  0.1217  0.5893
+#>  0.2155  0.3400  0.4445
+#>  0.1626  0.3668  0.4706
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
@@ -363,8 +363,8 @@

 small_module(input)
 #> torch_tensor
-#>  1.2548 -1.3432 -1.5458  0.7297
-#>  0.4231 -0.7686 -0.7230  0.6464
+#> -0.1274  0.1809 -1.4397 -0.5290
+#>  0.6121 -0.7253 -0.5453 -0.2486
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

@@ -429,9 +429,9 @@

 Using ModelDescriptor to
 small_module(batch$x[[1]])
 #> torch_tensor
-#> -0.9548  2.4764  1.7547 -0.4855
-#> -1.1006  2.1895  1.5184 -0.5936
-#> -0.8588  2.2540  1.5650 -0.4339
+#>  0.3122 -4.8047  0.3374  4.3038
+#>  0.4304 -4.4378  0.4278  3.9666
+#>  0.3240 -4.4393  0.2842  3.9762
 #> [ CPUFloatType{3,4} ][ grad_fn = <AddmmBackward0> ]

The first linear layer that takes “Sepal” input ("linear1") creates a 2x4 tensor (batch size 2, 4 units),
@@ -689,14 +689,14 @@

 Building more interesting NNs
 iris_module$graph$pipeops$linear1$.result
 #> $output
 #> torch_tensor
-#>  2.6239  0.8791 -1.1404  0.1583
-#>  2.4387  1.0496 -1.3065  0.2425
+#> -3.2734  1.7562  1.3484  2.9473
+#> -2.9881  1.8728  1.3176  2.7498
 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]
 iris_module$graph$pipeops$linear3$.result
 #> $output
 #> torch_tensor
-#> -0.0769  0.4376  0.2229 -0.0915 -0.3279
-#> -0.0769  0.4376  0.2229 -0.0915 -0.3279
+#>  0.1362  0.5117 -0.3741  0.8576  0.3130
+#>  0.1362  0.5117 -0.3741  0.8576  0.3130
 #> [ CPUFloatType{2,5} ][ grad_fn = <AddmmBackward0> ]
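These per-PipeOp outputs are only retained when intermediate results are kept; a short sketch, assuming the standard mlr3pipelines keep_results mechanism (not shown in this excerpt):

# Assumption: keep each PipeOp's output in $.result during the forward pass.
iris_module$graph$keep_results = TRUE
# After a forward pass through the graph, intermediate tensors can then be inspected:
iris_module$graph$pipeops$linear1$.result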

We observe that the po("nn_merge_cat") concatenates these, as expected:
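Conceptually this is a feature-wise torch_cat; a small standalone illustration with made-up tensors of the same widths as in the result hunk below:

library(torch)
a = torch_randn(2, 4)           # stands in for the output of linear1
b = torch_randn(2, 5)           # stands in for the output of linear3
torch_cat(list(a, b), dim = 2)  # concatenates along the feature dimension -> 2x9 tensor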

@@ -704,8 +704,8 @@

 Building more interesting NNs
 iris_module$graph$pipeops$nn_merge_cat$.result
 #> $output
 #> torch_tensor
-#>  2.6239  0.8791 -1.1404  0.1583 -0.0769  0.4376  0.2229 -0.0915 -0.3279
-#>  2.4387  1.0496 -1.3065  0.2425 -0.0769  0.4376  0.2229 -0.0915 -0.3279
+#> -3.2734  1.7562  1.3484  2.9473  0.1362  0.5117 -0.3741  0.8576  0.3130
+#> -2.9881  1.8728  1.3176  2.7498  0.1362  0.5117 -0.3741  0.8576  0.3130
 #> [ CPUFloatType{2,9} ][ grad_fn = <CatBackward0> ]
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png
index f8d1e32f..bc316f9a 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png
index a63a8719..ce5d9191 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png
index 0b65a405..4dd846c7 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png
index ab8c7f0b..bda90f0e 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png differ
diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png
index efc5bd67..2ec0f04a 100644
Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png differ
diff --git a/dev/articles/lazy_tensor.html b/dev/articles/lazy_tensor.html
index 64487e81..037223b4 100644
--- a/dev/articles/lazy_tensor.html
+++ b/dev/articles/lazy_tensor.html
@@ -387,7 +387,7 @@

 Digging Into Internals
 #> <DataDescriptor: 1 ops>
 #> * dataset_shapes: [x: (NA,1)]
 #> * input_map: (x) -> Graph
-#> * pointer: nop.6b05fd.x.output
+#> * pointer: nop.181509.x.output
 #> * shape: [(NA,1)]
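A data descriptor like the one printed above backs every lazy_tensor column. A minimal sketch of creating and resolving one, assuming the as_lazy_tensor() and materialize() helpers (the exact constructor used in the vignette is not shown in this excerpt):

library(mlr3torch)
lt = as_lazy_tensor(rnorm(10))       # assumption: wraps a numeric vector lazily, dataset shape (NA, 1)
materialize(lt[1:2], rbind = TRUE)   # resolves the underlying preprocessing graph into a real 2x1 tensor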

The printed output of the data descriptor informs us about: