
Some pool options' output_size datatype is not correct; maybe it should be LongOptionalVector #1586

Open
mullerhai opened this issue Mar 5, 2025 · 14 comments

Comments

@mullerhai

Hi,
Some pool layers cannot be used: AdaptiveMaxPool2d, AdaptiveMaxPool3d, AdaptiveAvgPool2d, AdaptiveAvgPool3d. These need an outputSize passed as a tuple2 or tuple3, but the generated bindings declare:

2d
public native @Cast("torch::ExpandingArrayWithOptionalElem<2>*") @ByRef @NoException(true) LongOptional output_size();

3d
public native @Cast("torch::ExpandingArrayWithOptionalElem<3>*") @ByRef @NoException(true) LongOptional output_size();

so we run into errors like these:


    //java.lang.RuntimeException: Storage size calculation overflowed with sizes=[1, 64, 1912110652560, 1912110652592]
    val m1 = nn.AdaptiveMaxPool2d((5, 7))
    val input = torch.randn(Seq(1, 64, 8, 9))
    assertEquals(m1(input).shape, Seq(1, 64, 5, 7))

    //java.lang.RuntimeException: Storage size calculation overflowed with sizes=[1, 64, 1961392298400, 1961392298432]
    val input2 = torch.randn(Seq(1, 64, 10, 9))
//    val m2 = nn.AdaptiveMaxPool2d((7))
//    assertEquals(m2(input2).shape, Seq(1, 64, 7, 7))
A LongOptionalVector can hold both elements, e.g. new LongOptionalVector(h.toOptional, new LongOptional(w)), so maybe the accessors should be changed to:

2d
 public native @Cast("torch::ExpandingArrayWithOptionalElem<2>*") @ByRef @NoException(true) LongOptionalVector output_size();

3d
  public native @Cast("torch::ExpandingArrayWithOptionalElem<3>*") @ByRef @NoException(true) LongOptionalVector output_size();

But I am not sure that will work either; please check the code, thanks.
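To illustrate the proposed change, here is a non-runnable sketch of how the accessor might be used if it returned a LongOptionalVector as suggested. The constructor overload and remapped accessor shown here are hypothetical; they do not exist in the current bindings:

```scala
// Hypothetical sketch only: assumes output_size() were remapped to return
// LongOptionalVector, so both elements could round-trip.
import org.bytedeco.pytorch.{AdaptiveMaxPool2dOptions, LongOptional, LongOptionalVector}

val outputSize = new LongOptionalVector(new LongOptional(5L), new LongOptional(7L))
val options = new AdaptiveMaxPool2dOptions(outputSize) // hypothetical constructor overload

// With the remapped accessor, both values would be readable:
// options.output_size().get(0).get() == 5
// options.output_size().get(1).get() == 7
```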

@mullerhai (Author)

These layers have the bug:
AdaptiveMaxPool3dImpl, AdaptiveMaxPool2dImpl, AdaptiveAvgPool3dImpl, AdaptiveAvgPool2dImpl
AdaptiveAvgPool2dOptions, AdaptiveAvgPool3dOptions, AdaptiveMaxPool2dOptions, AdaptiveMaxPool3dOptions

The buggy method is:
public native @Cast("torch::ExpandingArrayWithOptionalElem<3>*") @ByRef @NoException(true) LongOptional output_size();
It can only receive the first value; the second and third values cannot be received. We must change the return type of output_size() to LongOptionalVector.

FractionalMaxPool2dImpl / FractionalMaxPool2dOptions
FractionalMaxPool3dImpl / FractionalMaxPool3dOptions
Buggy methods:
public native @Cast("std::optional<torch::ExpandingArray<3> >") @ByRef @NoException(true) LongExpandingArrayOptional output_size();
public native @Cast("std::optional<torch::nn::FractionalMaxPoolOptions<3>::ExpandingArrayDouble>") @ByRef @NoException(true) DoubleExpandingArrayOptional output_ratio();

They only receive the first value. @saudet, we need to fix these bugs before the javacpp-pytorch 2.6 release.

@saudet saudet added the bug label Mar 6, 2025
@mullerhai (Author) commented Mar 6, 2025

Let's check the raw javacpp code in Scala:

import torch.nn.modules.pooling.AdaptiveAvgPool2d
import org.bytedeco.pytorch.{AdaptiveMaxPool2dImpl, AdaptiveMaxPool2dOptions, T_TensorTensor_T}
import org.bytedeco.pytorch
import org.bytedeco.javacpp.{LongPointer, DoublePointer}
import torch.internal.NativeConverters.{fromNative, toNative, toOptional}
import org.bytedeco.pytorch.LongOptionalVector
import org.bytedeco.pytorch.LongOptional
object testRawPool {

  def main(args: Array[String]): Unit = {
    val nativeOutputSize = LongPointer(Array(5l,7l)*)
    val options: AdaptiveMaxPool2dOptions = AdaptiveMaxPool2dOptions(nativeOutputSize)
    val nativeModule: AdaptiveMaxPool2dImpl = AdaptiveMaxPool2dImpl(options)

    val input = torch.randn(Seq(1, 64, 8, 9))
    val output =nativeModule.forward(input.native)
    val shape = output.shape
    println(s"output shape: ${output.shape}")
//    val m1 = new AdaptiveAvgPool2d((5, 7))
//    val input = torch.randn(Seq(1, 64, 8, 9))
//    val output = m1(input)
//    println(output)
//    assertEquals(m1(input).shape, Seq(1, 64, 5, 7)) // note: m1 is commented out above

  }
}






    val m1 = nn.AdaptiveMaxPool2d((5, 7))
    val input = torch.randn(Seq(1, 64, 8, 9))
    assertEquals(m1(input).shape, Seq(1, 64, 5, 7)) // got ArraySeq(1, 64, 5, 0) with extra zeros; PyTorch gives torch.Size([1, 64, 5, 7])

Console log:

Exception in thread "main" java.lang.RuntimeException: Storage size calculation overflowed with sizes=[1, 64, 5, 5476390388284129548]
Exception raised from computeStorageNbytesContiguous at D:\a\javacpp-presets\javacpp-presets\pytorch\cppbuild\windows-x86_64-gpu\pytorch\aten\src\ATen\EmptyTensor.cpp:66 (most recent call first):
[native stack frames in c10.dll / torch_cpu.dll / jnitorch.dll elided; no symbols available]

	at org.bytedeco.pytorch.AdaptiveMaxPool2dImpl.forward(Native Method)
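For reference, the expected semantics: adaptive pooling replaces the trailing spatial dimensions with output_size, which is why the huge trailing values in the error above are clearly garbage. A minimal shape-arithmetic sketch in plain Scala, independent of the bindings (the helper name is ours, not part of any library):

```scala
// Expected adaptive-pool shape rule: keep the leading dims, replace the
// last outSize.length dims with outSize.
def adaptiveOutShape(in: Seq[Int], outSize: Seq[Int]): Seq[Int] =
  in.dropRight(outSize.length) ++ outSize

// adaptiveOutShape(Seq(1, 64, 8, 9), Seq(5, 7)) == Seq(1, 64, 5, 7)
```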

@mullerhai (Author)

def main(args: Array[String]): Unit = {
val nativeOutputSize = LongPointer(Array(5l,7l)*)
val options: AdaptiveMaxPool2dOptions = AdaptiveMaxPool2dOptions(nativeOutputSize)
val nativeModule: AdaptiveMaxPool2dImpl = AdaptiveMaxPool2dImpl(options)
println(s"AdaptiveMaxPool2d raw options 1: ${options.output_size().get} options 2: ${options.output_size().get} ")
val pi =options.output_size()
println(s"pi : ${pi}")
val input = torch.randn(Seq(1, 64, 8, 9))
val output =nativeModule.forward(input.native)
println(s"AdaptiveMaxPool2d nativeModule options 1: ${nativeModule.options.output_size().get} options 2: ${nativeModule.options.output_size()} ")
val shape = output.shape
println(s"output shape: ${output.shape}")
// val m1 = new AdaptiveAvgPool2d((5, 7))
// val input = torch.randn(Seq(1, 64, 8, 9))
// val output = m1(input)
// println(output)
}
}

@mullerhai (Author)

javacpp-pytorch FractionalMaxPool:

import torch.nn.modules.pooling.AdaptiveAvgPool2d
import org.bytedeco.pytorch.{AdaptiveMaxPool2dImpl, AdaptiveMaxPool2dOptions, T_TensorTensor_T}
import org.bytedeco.pytorch.{FractionalMaxPool2dImpl, T_TensorTensor_T, FractionalMaxPool2dOptions, LongExpandingArrayOptional, DoubleExpandingArrayOptional}
import org.bytedeco.pytorch
import org.bytedeco.javacpp.{LongPointer, DoublePointer}
import torch.internal.NativeConverters.{fromNative, toNative, toOptional}
import org.bytedeco.pytorch.LongOptionalVector
import org.bytedeco.pytorch.LongOptional
object testRawPool {

  def main(args: Array[String]): Unit = {

    val kernelSize = (5,7) // LongExpandingArrayOptional(5, 7)
    val options: FractionalMaxPool2dOptions = FractionalMaxPool2dOptions(toNative(kernelSize))
    options.kernel_size().put(toNative(kernelSize))
    val t  =(7,9)
//    options.output_size().put(LongPointer(t._1.toLong))
    //    options.output_size().put(LongPointer(t._2.toLong))
    options.output_ratio().put(DoublePointer(Array(t._1.toDouble, t._2.toDouble) *))
    val k =(1.4f,3.7f)
//    options.output_ratio().put(DoublePointer(k._1.toDouble))
//    options.output_ratio().put(DoublePointer(k._2.toDouble))
    options.output_ratio().put(DoublePointer(Array(k._1.toDouble, k._2.toDouble)*))
    println(s"FractionalMaxPool3d raw  options kernel ${options.kernel_size().get(0)} k2 ${options.kernel_size().get(1)} outsize ${options.output_size().has_value()}  ${options.output_size().get().get(0)} out2 ${options.output_size().get().get(1)} outRatio ${options.output_ratio().has_value()} ${options.output_ratio().get().get(0)} ratio2 ${options.output_ratio().get().get(1)}")
    val nativeModule: FractionalMaxPool2dImpl = FractionalMaxPool2dImpl(
      options
    )
    val input = torch.randn(Seq(1, 64, 8, 9))
    println(s"FractionalMaxPool2d options kernel ${nativeModule.options().kernel_size().get(0)} k2 ${nativeModule.options().kernel_size().get(1)} outsize ${nativeModule.options().output_size().has_value()}  ${nativeModule.options().output_size().get.get(0)} out2 ${nativeModule.options().output_size().get().get(0)} outRatio ${nativeModule.options().output_ratio().has_value()} ${nativeModule.options().output_ratio().get().get(0)} ratio2 ${nativeModule.options().output_ratio().get().get(1)}")
    val output = fromNative(nativeModule.forward(input.native))
    println(s"output.shape  ${output.shape}")
  }

  class FractionalMaxPool2dSuite extends munit.FunSuite {
    test("AdapativeMaxPool2d output shapes") {
      val m13 = nn.FractionalMaxPool2d(kernel_size = (7, 7), output_size = Some(7, 7), output_ratio = Some(0.57f, 05f))
      val input = torch.randn(Seq(1, 64, 8, 9))
      println(s" options kernel ${m13.nativeModule.options().kernel_size().get(0)} k2 ${m13.nativeModule.options().kernel_size().get(1)} outsize ${m13.nativeModule.options().output_size().has_value()}  ${m13.nativeModule.options().output_size().getPointer(0)} out2 ${m13.nativeModule.options().output_size().getPointer(1)} outRatio ${m13.nativeModule.options().output_ratio().has_value()} ${m13.nativeModule.options().output_ratio().getPointer(0)} ratio2 ${m13.nativeModule.options().output_ratio().getPointer(1)}")
      assertEquals(m13(input.to(torch.float64)).shape, Seq(1, 64, 5, 7))
    }
  }

  class FractionalMaxPool3dSuite extends munit.FunSuite {
    test("FractionalMaxPool3dSuite output shapes") {
      val input = torch.randn(Seq(1, 64, 8, 9))
      val m23 = nn.FractionalMaxPool3d(kernel_size = (4, 8, 1), output_size = Some(5, 6, 7), output_ratio = Some(0.4f, 0.34f, 0.57f))
      println(s" options kernel ${m23.nativeModule.options().kernel_size().get(0)} k2 ${m23.nativeModule.options().kernel_size().get(1)} outsize ${m23.nativeModule.options().output_size().has_value()}  ${m23.nativeModule.options().output_size().getPointer(0)} out2 ${m23.nativeModule.options().output_size().getPointer(1)} outRatio ${m23.nativeModule.options().output_ratio().has_value()} ${m23.nativeModule.options().output_ratio().getPointer(0)} ratio2 ${m23.nativeModule.options().output_ratio().getPointer(1)}")
      println(m23(input.to(torch.float64)).shape)

    }
  }

Console log error:

FractionalMaxPool3d raw  options kernel 5 k2 7 outsize true  720575940396092416 out2 -9223304962545763061 outRatio true 1.399999976158142 ratio2 1.5573749537459052E-207
Exception in thread "main" java.lang.RuntimeException: FractionalMaxPool2d requires specifying either an output size, or a pooling ratio
Exception raised from reset at D:\a\javacpp-presets\javacpp-presets\pytorch\cppbuild\windows-x86_64-gpu\pytorch\torch\csrc\api\src\nn\modules\pooling.cpp:289 (most recent call first):

@mullerhai (Author)


import torch.nn.modules.pooling.AdaptiveAvgPool2d
import org.bytedeco.pytorch.{AdaptiveMaxPool2dImpl, AdaptiveMaxPool2dOptions, T_TensorTensor_T}
import org.bytedeco.pytorch.{FractionalMaxPool2dImpl, T_TensorTensor_T, FractionalMaxPool2dOptions, LongExpandingArrayOptional, DoubleExpandingArrayOptional}
import org.bytedeco.pytorch
import org.bytedeco.javacpp.{LongPointer, DoublePointer}
import torch.internal.NativeConverters.{fromNative, toNative, toOptional}
import org.bytedeco.pytorch.LongOptionalVector
import org.bytedeco.pytorch.LongOptional
import org.bytedeco.pytorch.{ FractionalMaxPool3dImpl, FractionalMaxPool3dOptions}
object testRawPool {

  def main(args: Array[String]): Unit = {

    val kernelSize = (4,8,1) // LongExpandingArrayOptional(5, 7)
    val options: FractionalMaxPool3dOptions = FractionalMaxPool3dOptions(toNative(kernelSize))
    options.kernel_size().put(toNative(kernelSize))
    val t = (5,6,7)
    //    options.output_size().put(LongPointer(t._1.toLong))
    //    options.output_size().put(LongPointer(t._2.toLong))
    options.output_ratio().put(DoublePointer(Array(t._1.toDouble, t._2.toDouble,t._3.toDouble) *))
    val k = (0.4f, 0.34f, 0.57f)
    //    options.output_ratio().put(DoublePointer(k._1.toDouble))
    //    options.output_ratio().put(DoublePointer(k._2.toDouble))
    options.output_ratio().put(DoublePointer(Array(k._1.toDouble, k._2.toDouble, k._3.toDouble) *))
    println(s"FractionalMaxPool3d raw  options kernel ${options.kernel_size().get(0)} k2 ${options.kernel_size().get(1)} k3 ${options.kernel_size().get(2)} outsize ${options.output_size().has_value()}  ${options.output_size().get().get(0)} out2 ${options.output_size().get().get(1)} out3 ${options.output_size().get().get(2)} outRatio ${options.output_ratio().has_value()} ${options.output_ratio().get().get(0)} ratio2 ${options.output_ratio().get().get(1)} ratio3 ${options.output_ratio().get().get(2)}")
    val nativeModule: FractionalMaxPool3dImpl = FractionalMaxPool3dImpl(
      options
    )
    val input = torch.randn(Seq(1, 64, 8, 9))
    println(s"FractionalMaxPool3d options kernel ${nativeModule.options().kernel_size().get(0)} k2 ${nativeModule.options().kernel_size().get(1)} outsize ${nativeModule.options().output_size().has_value()}  ${nativeModule.options().output_size().get.get(0)} out2 ${nativeModule.options().output_size().get().get(0)} outRatio ${nativeModule.options().output_ratio().has_value()} ${nativeModule.options().output_ratio().get().get(0)} ratio2 ${nativeModule.options().output_ratio().get().get(1)}")
    val output = fromNative(nativeModule.forward(input.native))
    println(s"output.shape  ${output.shape}")
  }

console log error

C:\Users\jeffsyry\.jdks\openjdk-23.0.1\bin\java.exe ... torch.testRawPool  (full classpath elided; notable jars: javacpp-1.5.11, pytorch-2.5.1-1.5.11 windows-x86_64-gpu, cuda-12.6-9.5-1.5.11, mkl-2025.0-1.5.11)
FractionalMaxPool3d raw  options kernel 4 k2 8 k3 1 outsize true  7305804402280461893 out2 7308324465986073673 out3 4934100689780695366 outRatio true 0.4000000059604645 ratio2 8.24028317E-315 ratio3 0.0
Exception in thread "main" java.lang.RuntimeException: FractionalMaxPool3d requires specifying either an output size, or a pooling ratio
Exception raised from reset at D:\a\javacpp-presets\javacpp-presets\pytorch\cppbuild\windows-x86_64-gpu\pytorch\torch\csrc\api\src\nn\modules\pooling.cpp:348 (most recent call first):
[native stack frames in c10.dll / torch_cpu.dll / jnitorch.dll elided; no symbols available]

	at org.bytedeco.pytorch.FractionalMaxPool3dImpl.allocate(Native Method)
	at org.bytedeco.pytorch.FractionalMaxPool3dImpl.<init>(FractionalMaxPool3dImpl.java:44)
	at torch.testRawPool$.main(pooltest.scala:29)
	at torch.testRawPool.main(pooltest.scala)

@mullerhai (Author)

@saudet I have now supplied raw javacpp-pytorch code for these pool layers; they all have this bug. Please fix these bugs, thanks.

@mullerhai (Author)

Maybe the LongExpandingArrayOptional and DoubleExpandingArrayOptional classes themselves have a bug.

@saudet (Member) commented Mar 8, 2025

Sounds like toNative isn't able to set the kernel_size properly for some reason. Please try to set the values manually

@mullerhai (Author)

Sounds like toNative isn't able to set the kernel_size properly for some reason. Please try to set the values manually

It really is a bug, I can assure you. I think you should run the code once; in the console log you can see that the raw values set differ from the values the options actually return. This is pure javacpp code, without toNative and so on.

AdaptiveMaxPool3dImpl, AdaptiveMaxPool2dImpl, AdaptiveAvgPool3dImpl, AdaptiveAvgPool2dImpl,
AdaptiveAvgPool2dOptions, AdaptiveAvgPool3dOptions, AdaptiveMaxPool2dOptions, AdaptiveMaxPool3dOptions,
FractionalMaxPool2d, FractionalMaxPool3d, FractionalMaxPool2dOptions, FractionalMaxPool3dOptions all have the same bug. Please run my test code and check the values you see; if you can get the correct values or make it work correctly, please tell me where my mistake is.

thanks @saudet

The pure javacpp code:

  def fractionalMaxPool2dSuite():Unit ={
    val kernel = LongPointer(Array(5l,7l)*)
//    val kernel = LongOptionalVector(Array(LongOptional(5), LongOptional(7)) *)
    val options: FractionalMaxPool2dOptions = FractionalMaxPool2dOptions(kernel)
    val outputSize = LongOptionalVector(Array(LongOptional(7),LongOptional(9))*)
    options.output_size().put(outputSize)

    val k =(1.4f,3.7f)
    val outputRatio = DoubleVectorOptional(DoubleVector(k._1.toDouble,k._2.toDouble))
    options.output_ratio().put(outputRatio) //DoublePointer(Array(k._1.toDouble, k._2.toDouble)*))

    println(s"FractionalMaxPool2d raw  options  kernel:   raw_1 position set -> ${kernel.get(0)} ,kernel real get_1 position -> ${options.kernel_size().get(0)}  |||  kernel raw_2 position set -> ${kernel.get(1)} ,kernel real get_2 position -> ${options.kernel_size().get(1)}")

//    println(s"FractionalMaxPool2d raw  options  kernel:   raw_1 position set -> ${kernel.get(0).get()} ,kernel real get_1 position -> ${options.kernel_size().get(0)}  |||  kernel raw_2 position set -> ${kernel.get(1).get()} ,kernel real get_2 position -> ${options.kernel_size().get(1)}" )

    println(s"FractionalMaxPool2d raw  options  outsize:  raw_1 position set -> ${outputSize.get(0).get()}, outsize real get_1 position -> ${options.output_size().get().get(0)}  |||  outsize  raw_2 position set-> ${outputSize.get(1).get()}   outsize real get_2 position -> ${options.output_size().get().get(1)} ${options.output_size().has_value()} ")

    println(s"FractionalMaxPool2d raw  options  outRatio:raw_1 position set -> ${outputRatio.get().get(0)} ,outRatio real get_1 position -> ${options.output_ratio().get().get(0)}  |||  outRatio  raw_2 position set -> ${outputRatio.get().get(1)} outRatio real get_2 position ->  ${options.output_ratio().get().get(1)} ${options.output_ratio().has_value()} ")
    val nativeModule: FractionalMaxPool2dImpl = FractionalMaxPool2dImpl(
      options
    )
    val input = torch.randn(Seq(1, 64, 8, 9))
    println(s"FractionalMaxPool2d options kernel ${nativeModule.options().kernel_size().get(0)} k2 ${nativeModule.options().kernel_size().get(1)} outsize ${nativeModule.options().output_size().has_value()}  ${nativeModule.options().output_size().get.get(0)} out2 ${nativeModule.options().output_size().get().get(0)} outRatio ${nativeModule.options().output_ratio().has_value()} ${nativeModule.options().output_ratio().get().get(0)} ratio2 ${nativeModule.options().output_ratio().get().get(1)}")
    val output = fromNative(nativeModule.forward(input.native))
    println(s"output.shape  ${output.shape}")
  }

console log

FractionalMaxPool2d raw  options  kernel:   raw_1 position set -> 5 ,kernel real get_1 position -> 5  |||  kernel raw_2 position set -> 7 ,kernel real get_2 position -> 7

FractionalMaxPool2d raw  options  outsize:  raw_1 position set -> 7, outsize real get_1 position -> 1557814916896  |||  outsize  raw_2 position set-> 9   outsize real get_2 position -> 1557814916928 true 

FractionalMaxPool2d raw  options  outRatio:raw_1 position set -> 1.399999976158142 ,outRatio real get_1 position -> 7.696630517185E-312  |||  outRatio  raw_2 position set -> 3.700000047683716 outRatio real get_2 position ->  7.696630517264E-312 true 

@mullerhai (Author)

Hi @saudet, FractionalMaxPool2d and FractionalMaxPool3d cannot work; maybe I am just operating the options badly and not setting the correct values.
Could you write a runnable FractionalMaxPool2d demo for me to look at? Thanks.
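Until a confirmed demo exists, here is a sketch of the kind of minimal demo being requested. It is untested, and it assumes output_size().put accepts a two-element LongPointer; whether put maps both elements correctly is exactly what this issue questions:

```scala
// Untested sketch of a minimal FractionalMaxPool2d setup.
import org.bytedeco.javacpp.LongPointer
import org.bytedeco.pytorch.{FractionalMaxPool2dImpl, FractionalMaxPool2dOptions}

val options = new FractionalMaxPool2dOptions(new LongPointer(5L, 7L)) // kernel_size (5, 7)
options.output_size().put(new LongPointer(3L, 4L)) // intended output_size (3, 4); assumes put(LongPointer) maps both elements
val module = new FractionalMaxPool2dImpl(options)
// If the bindings worked, forward on a (1, 64, 8, 9) input would yield shape (1, 64, 3, 4).
```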

@mullerhai (Author)

Hi @saudet, AdaptiveMaxPool2d's output_size really is buggy, please check: the second element of output_size cannot be set and comes back as a garbage value (216232169515805804). If you can set it, please paste the correct code, thanks.

  class AdapativeMaxPool2dRawSuite3 extends munit.FunSuite {
    test("AdapativeMaxPool2d output shapes") {
      val nativeOutputSize = LongPointer(Array(5l, 7l) *)
      val options: AdaptiveMaxPool2dOptions = AdaptiveMaxPool2dOptions(nativeOutputSize)
      val nativeModule: AdaptiveMaxPool2dImpl = AdaptiveMaxPool2dImpl(options)
      println(s"AdaptiveMaxPool2d raw options 1: ${options.output_size().get} options 2: ${options.output_size().get} ")
      val pi = options.output_size()
      println(s"pi : ${pi}")
      val input = torch.randn(Seq(1, 64, 8, 9))
      val output = nativeModule.forward(input.native)
      println(s"AdaptiveMaxPool2d nativeModule options 1: ${nativeModule.options.output_size().get} options 2: ${nativeModule.options.output_size()} ")
      val shape = output.shape
      println(s"output shape: ${output.shape}")
      //    val m1 = new AdaptiveAvgPool2d((5, 7))
      //    val input = torch.randn(Seq(1, 64, 8, 9))
      //    val output = m1(input)
      //    println(output)
    }
  }

Console log:

AdaptiveMaxPool2d raw options 1: 5 options 2: 5 
pi : org.bytedeco.pytorch.LongOptional[address=0x1aab8f74670,position=0,limit=0,capacity=0,deallocator=null]

java.lang.RuntimeException: Storage size calculation overflowed with sizes=[1, 64, 5, 216232169515805804]


@saudet (Member) commented Mar 9, 2025

Please try to set the "org.bytedeco.javacpp.nopointergc" system property to "true".

@mullerhai (Author)

Please try to set the "org.bytedeco.javacpp.nopointergc" system property to "true".

  def main(args: Array[String]): Unit = {
    System.setProperty("org.bytedeco.javacpp.nopointergc", "true")
    val kernelSize = (5, 7) // LongExpandingArrayOptional(5, 7)
    //    val kernel = LongPointer(Array(5l,7l)*)
    val kernel = LongOptionalVector(Array(LongOptional(5), LongOptional(7)) *)
    val options: FractionalMaxPool2dOptions = FractionalMaxPool2dOptions(kernel)
    //    options.kernel_size().put(toNative(kernelSize))
    val t = (7, 9)
    val kk = LongOptionalVector(Array(LongOptional(7), LongOptional(9)) *)
    //    options.output_size().put(LongPointer(t._1.toLong))
    //    options.output_size().put(LongPointer(t._2.toLong))
    options.output_ratio().put(kk) //DoublePointer(Array(t._1.toDouble, t._2.toDouble) *))
    val k = (1.4f, 3.7f)
    //    options.output_ratio().put(DoublePointer(k._1.toDouble))
    //    options.output_ratio().put(DoublePointer(k._2.toDouble))
    val rr = DoubleVectorOptional(DoubleVector(k._1.toDouble, k._2.toDouble))
    println(rr.get().get(0))
    options.output_ratio().put(rr) //DoublePointer(Array(k._1.toDouble, k._2.toDouble)*))
    println(s"FractionalMaxPool2d raw  options kernel ${options.kernel_size().get(0)} k2 ${options.kernel_size().get(1)} outsize ${options.output_size().has_value()}  ${options.output_size().get().get(0)} out2 ${options.output_size().get().get(1)} outRatio ${options.output_ratio().has_value()} ${options.output_ratio().get().get(0)} ratio2 ${options.output_ratio().get().get(1)}")
    val nativeModule: FractionalMaxPool2dImpl = FractionalMaxPool2dImpl(
      options
    )
    val input = torch.randn(Seq(1, 64, 8, 9))
    println(s"FractionalMaxPool2d options kernel ${nativeModule.options().kernel_size().get(0)} k2 ${nativeModule.options().kernel_size().get(1)} outsize ${nativeModule.options().output_size().has_value()}  ${nativeModule.options().output_size().get.get(0)} out2 ${nativeModule.options().output_size().get().get(0)} outRatio ${nativeModule.options().output_ratio().has_value()} ${nativeModule.options().output_ratio().get().get(0)} ratio2 ${nativeModule.options().output_ratio().get().get(1)}")
    val output = fromNative(nativeModule.forward(input.native))
    println(s"output.shape  ${output.shape}")
  }
}

console log

C:\Users\hai71\.jdks\openjdk-23.0.2\bin\java.exe ... (full classpath elided; notable jars: javacpp-1.5.11, pytorch-2.5.1-1.5.11 windows-x86_64-gpu, cuda-12.6-9.5-1.5.11) [log truncated here]
ttps\repo1.maven.org\maven2\org\scalacheck\scalacheck_3\1.15.4\scalacheck_3-1.15.4.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\typelevel\spire-macros_3\0.18.0\spire-macros_3-0.18.0.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\typelevel\spire-platform_3\0.18.0\spire-platform_3-0.18.0.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\typelevel\spire-util_3\0.18.0\spire-util_3-0.18.0.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\typelevel\algebra_3\2.8.0\algebra_3-2.8.0.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\lihaoyi\geny_3\1.0.0\geny_3-1.0.0.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\twelvemonkeys\imageio\imageio-core\3.9.4\imageio-core-3.9.4.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\twelvemonkeys\imageio\imageio-jpeg\3.9.4\imageio-jpeg-3.9.4.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\drewnoakes\metadata-extractor\2.18.0\metadata-extractor-2.18.0.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\commons-io\commons-io\2.11.0\commons-io-2.11.0.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\ar\com\hjg\pngj\2.1.0\pngj-2.1.0.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\apache\commons\commons-lang3\3.12.0\commons-lang3-3.12.0.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\slf4j\slf4j-api\2.0.6\slf4j-api-2.0.6.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\scala-sbt\test-interface\1.0\test-interface-1.0.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\org\hamcrest\hamcrest-core\1.3\hamcrest-core-1.3.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\rep
o1.maven.org\maven2\org\typelevel\cats-kernel_3\2.8.0\cats-kernel_3-2.8.0.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\twelvemonkeys\common\common-lang\3.9.4\common-lang-3.9.4.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\twelvemonkeys\common\common-io\3.9.4\common-io-3.9.4.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\twelvemonkeys\common\common-image\3.9.4\common-image-3.9.4.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\twelvemonkeys\imageio\imageio-metadata\3.9.4\imageio-metadata-3.9.4.jar;C:\Users\hai71\AppData\Local\Coursier\cache\v1\https\repo1.maven.org\maven2\com\adobe\xmp\xmpcore\6.1.11\xmpcore-6.1.11.jar torch.testRawPool
1.399999976158142
FractionalMaxPool2d raw  options kernel 2204125177120 k2 2204125177152 outsize true  2204125177152 out2 -8068723197431211648 outRatio true 1.0862640424175E-311 ratio2 1.0862640424254E-311
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x00007ff8a0d440f2, pid=30988, tid=11256
#
# JRE version: OpenJDK Runtime Environment (23.0.2+7) (build 23.0.2+7-58)
# Java VM: OpenJDK 64-Bit Server VM (23.0.2+7-58, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, windows-amd64)
# Problematic frame:
# C  0x00007ff8a0d440f2
#
# No core dump will be written. Minidumps are not enabled by default on client versions of Windows
#
# An error report file with more information is saved as:
# D:\data\storch_demo\hs_err_pid30988.log
[1.575s][warning][os] Loading hsdis library failed
#
# If you would like to submit a bug report, please visit:
#   https://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Process finished with exit code 1
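The "Storage size calculation overflowed" messages quoted earlier in this issue are consistent with the garbage values in the log above: once an unallocated output_size slot yields a pointer-sized number like 1912110652560, multiplying the dimensions exceeds a signed 64-bit count. A small self-contained Java check of that arithmetic (not the libtorch code, just the same multiplication):

```java
public class OverflowCheck {
    // Multiply tensor dimensions with overflow detection, the way a
    // storage-size computation would. Garbage dimensions read from
    // unallocated output_size memory overflow a signed 64-bit count.
    static boolean overflows(long[] sizes) {
        long n = 1;
        for (long s : sizes) {
            try {
                n = Math.multiplyExact(n, s);
            } catch (ArithmeticException e) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Shape reported in the AdaptiveMaxPool2d error message
        System.out.println(overflows(new long[]{1, 64, 1912110652560L, 1912110652592L})); // true
        // The intended output shape (1, 64, 5, 7) fits comfortably
        System.out.println(overflows(new long[]{1, 64, 5, 7})); // false
    }
}
```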


@saudet (Member) commented Mar 9, 2025

You'll need to allocate memory for kernel_size and output_size for this to work
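In other words, the two optional elements behind kernel_size/output_size are native memory that must be allocated and populated before the layer reads them; the current LongOptional mapping effectively exposes only the first slot. A self-contained Java analogue of the difference (the OptionalPair type here is hypothetical, for illustration only, and is not the JavaCPP API):

```java
import java.util.OptionalLong;

// Analogue of torch::ExpandingArrayWithOptionalElem<2>: two optional longs
// that must BOTH be populated before the pooling layer computes its output.
final class OptionalPair {
    private final OptionalLong[] slots = {OptionalLong.empty(), OptionalLong.empty()};

    void put(int i, long v) { slots[i] = OptionalLong.of(v); }

    // What the layer effectively does: read both target dimensions.
    // An unset slot here throws; in native code it reads garbage instead.
    long[] outputSize() {
        long h = slots[0].orElseThrow(() -> new IllegalStateException("output_size[0] unset"));
        long w = slots[1].orElseThrow(() -> new IllegalStateException("output_size[1] unset"));
        return new long[]{h, w};
    }
}

public class Main {
    public static void main(String[] args) {
        OptionalPair outputSize = new OptionalPair();
        outputSize.put(0, 5);   // populate BOTH elements, mirroring (5, 7)
        outputSize.put(1, 7);
        long[] dims = outputSize.outputSize();
        System.out.println(dims[0] + "x" + dims[1]); // 5x7
    }
}
```

With only one element written, the analogue fails fast; the native binding instead reads whatever bytes happen to be at the second slot, which matches the pointer-sized shapes seen in the crash log.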
