
RuntimeError: CUDA out of memory. #68

Open
ArghyaChatterjee opened this issue Aug 15, 2023 · 0 comments
@ArghyaChatterjee

Hello, thanks for your work. I have seen your previous response regarding this question for training, but I want it for testing purposes. Testing shouldn't take that much GPU memory. I have an NVIDIA 3060 Ti GPU with 6 GB of graphics memory and am testing on the same KITTI dataset with a batch size of 1.

arghya@arghya-Pulse-GL66-12UEK:~/PENet$ python main.py -b 1 -n pe --evaluate ~/PENet/model/pe.pth.tar --test --data-folder /home/arghya/kitti_depth/depth
Namespace(batch_size=1, convolutional_layer_encoding='xyz', cpu=False, criterion='l2', data_folder='/home/arghya/kitti_depth/depth', data_folder_rgb='data/dataset/kitti_raw', data_folder_save='data/dataset/kitti_depth/submit_test/', dilation_rate=2, epochs=100, evaluate='/home/arghya/PENet/model/pe.pth.tar', freeze_backbone=False, input='rgbd', jitter=0.1, lr=0.001, network_model='pe', not_random_crop=False, print_freq=10, random_crop_height=320, random_crop_width=1216, rank_metric='rmse', result='../results', resume='', start_epoch=0, start_epoch_bias=0, test=True, use_d=True, use_g=True, use_rgb=True, val='select', val_h=352, val_w=1216, weight_decay=1e-06, workers=4)
=> using 'cuda' for computation.
=> loading checkpoint '/home/arghya/PENet/model/pe.pth.tar' ... Completed.
=> creating model and optimizer ... => checkpoint state loaded.
=> creating source code backup ...
=> finished creating source code backup.
=> logger created.
Traceback (most recent call last):
  File "main.py", line 474, in <module>
    main()
  File "main.py", line 386, in main
    iterate("test_completion", args, test_loader, model, None, logger, 0)
  File "main.py", line 216, in iterate
    pred = model(batch_data)
  File "/home/arghya/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/arghya/PENet/model.py", line 503, in forward
    depth5 = self.CSPN5_s2(guide5_s2, depth5, coarse_depth)
  File "/home/arghya/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/arghya/PENet/basic.py", line 267, in forward
    output = torch.einsum('ijk,ijk->ik', (input_im2col, kernel))
  File "/home/arghya/.local/lib/python3.8/site-packages/torch/functional.py", line 325, in einsum
    return einsum(equation, *_operands)
  File "/home/arghya/.local/lib/python3.8/site-packages/torch/functional.py", line 327, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: CUDA out of memory. Tried to allocate 42.00 MiB (GPU 0; 5.81 GiB total capacity; 3.75 GiB already allocated; 4.38 MiB free; 3.81 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
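
Not a maintainer, but two general PyTorch-side mitigations may be worth trying. First, since this run is evaluation only, the forward pass should execute under `torch.no_grad()` so autograd does not keep intermediate activations alive; whether PENet's `iterate()` already does this for the test path is not visible from the traceback, so the snippet below is a generic sketch with placeholder `model` and `test_loader` names, not PENet's actual code:

```python
import torch

# `model` and `test_loader` stand in for the objects main.py builds;
# PENet passes a dict of tensors as batch_data (see `pred = model(batch_data)` above).
model.eval()                      # put dropout/batch-norm layers into inference mode
with torch.no_grad():             # skip building the autograd graph; frees activation memory
    for batch_data in test_loader:
        batch_data = {k: v.cuda(non_blocking=True)
                      for k, v in batch_data.items()
                      if torch.is_tensor(v)}
        pred = model(batch_data)
        # ... save or score `pred` here ...
torch.cuda.empty_cache()          # optional: release cached blocks back to the driver
```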
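Second, the error text itself suggests setting `max_split_size_mb` to reduce allocator fragmentation. It is configured through the `PYTORCH_CUDA_ALLOC_CONF` environment variable, which must be set before the first CUDA allocation; 128 below is just an illustrative value. Equivalently, you can prefix the launch command with `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in the shell:

```python
import os

# Must be set before torch initializes CUDA (i.e., before any tensor reaches the GPU).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # 128 MB is an example value

import torch  # imported after the env var so the caching allocator picks it up
```

Note that reserved (3.81 GiB) and allocated (3.75 GiB) are nearly equal in your log, so fragmentation tuning may only recover a few MiB; if it is not enough, reducing the evaluation resolution via the `val_h`/`val_w` arguments visible in the Namespace dump might help, though whether PENet handles non-default sizes correctly is untested here.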