Bug description confirmation
I confirm that the bug reproduction steps, code change description, and environment information have been provided, and that the problem is reproducible.
Are you willing to submit a PR?
I'd like to help by submitting a PR!
Search before asking
Bug Component
No response
Describe the Bug
nvidia driver:
NVIDIA-Linux-x86_64-570.133.07.run
paddlepaddle docker image:
docker.1ms.run/paddlepaddle/paddle 3.0.0-gpu-cuda11.8-cudnn8.9-trt8.6 e96386e860a8 2 weeks ago 30.8GB
packages:
$ pip list | grep paddle
paddledet 2.6.0
paddlepaddle-gpu 3.0.0
paddleslim 2.6.0
Run code:
git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection/deploy/auto_compression/
python test_det.py --model_path ./models/rtdetr_hgnetv2_l_6x_coco/ --config ./configs/rtdetr_reader.yml --precision fp32 --use_trt True --use_dynamic_shape False
Error Info:
I0422 06:11:28.907433 169 fuse_pass_base.cc:55] --- detected 83 subgraphs
--- Running IR pass [trans_layernorm_fuse_pass]
--- Running IR pass [remove_padding_recover_padding_pass]
--- Running IR pass [delete_remove_padding_recover_padding_pass]
--- Running IR pass [dense_fc_to_sparse_pass]
--- Running IR pass [dense_multihead_matmul_to_sparse_pass]
--- Running IR pass [elementwise_groupnorm_act_pass]
--- Running IR pass [preln_elementwise_groupnorm_act_pass]
--- Running IR pass [groupnorm_act_pass]
--- Running IR pass [elementwiseadd_transpose_pass]
--- Running IR pass [tensorrt_subgraph_pass]
I0422 06:11:28.995630 169 tensorrt_subgraph_pass.cc:320] --- detect a sub-graph with 7 nodes
I0422 06:11:29.020102 169 tensorrt_subgraph_pass.cc:903] Prepare TRT engine (Optimize model structure, Select OP kernel etc). This process may cost a lot of time.
E0422 06:11:32.719125 169 helper.h:131] 3: [network.cpp::addConcatenation::856] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/network.cpp::addConcatenation::856, condition: (inputs[j]) != nullptr)
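The TensorRT error above means one of the tensors handed to INetworkDefinition::addConcatenation was null, and the later SIGSEGV at address 0x8 is consistent with dereferencing a member of a near-null pointer. As a hedged illustration only (not Paddle's or TensorRT's actual code; all names here are hypothetical), the precondition TensorRT checks, and the likely way a converter produces a null input, can be sketched like this:

```python
# Illustrative sketch of the failing precondition; names are hypothetical.
def add_concatenation(inputs):
    """Mimic TensorRT's check: every concatenation input must be non-null."""
    for j, tensor in enumerate(inputs):
        if tensor is None:
            # TensorRT reports "Error Code 3: API Usage Error" here; the
            # segfault happens later when the resulting null layer is used.
            raise ValueError(f"API Usage Error: inputs[{j}] is null")
    return tuple(inputs)  # stand-in for the real ILayer

# A converter that looks up an op's input tensors by name yields None for a
# name missing from its tensor map -- a plausible way SplitOpConverter ends
# up feeding a null tensor into addConcatenation.
tensor_map = {"split_out_0": "t0", "split_out_1": "t1"}
inputs = [tensor_map.get(name) for name in ("split_out_0", "missing", "split_out_1")]
try:
    add_concatenation(inputs)
except ValueError as err:
    print(err)  # inputs[1] is null
```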
C++ Traceback (most recent call last):
0 paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
1 std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
2 paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
3 paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
4 paddle::AnalysisPredictor::OptimizeInferenceProgram()
5 paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
6 paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
7 paddle::inference::analysis::IRPassManager::Apply(std::unique_ptr<paddle::framework::ir::Graph, std::default_delete<paddle::framework::ir::Graph> >)
8 paddle::framework::ir::Pass::Apply(paddle::framework::ir::Graph*) const
9 paddle::inference::analysis::TensorRtSubgraphPass::ApplyImpl(paddle::framework::ir::Graph*) const
10 paddle::inference::analysis::TensorRtSubgraphPass::CreateTensorRTOp(paddle::framework::ir::Node*, paddle::framework::ir::Graph*, std::vector<std::string, std::allocator<std::string> > const&, std::vector<std::string, std::allocator<std::string> >, bool) const
11 paddle::inference::tensorrt::OpConverter::ConvertBlockToTRTEngine(paddle::framework::BlockDesc*, paddle::framework::Scope const&, std::vector<std::string, std::allocator<std::string> > const&, std::unordered_set<std::string, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::string> > const&, std::vector<std::string, std::allocator<std::string> > const&, paddle::inference::tensorrt::TensorRTEngine*)
12 paddle::inference::tensorrt::OpConverter::ConvertBlock(paddle::framework::proto::BlockDesc const&, std::unordered_set<std::string, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::string> > const&, paddle::framework::Scope const&, paddle::inference::tensorrt::TensorRTEngine*)
13 paddle::inference::tensorrt::SplitOpConverter::operator()(paddle::framework::proto::OpDesc const&, paddle::framework::Scope const&, bool)
Error Message Summary:
FatalError: Segmentation fault is detected by the operating system.
[TimeInfo: *** Aborted at 1745302292 (unix time) try "date -d @1745302292" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x8) received by PID 169 (TID 0x7f33eac5e740) from PID 8 ***]
Segmentation fault (core dumped)
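The TimeInfo line suggests running `date -d @1745302292`; the same conversion with Python's standard library confirms the abort time matches the 06:11 UTC log timestamps above:

```python
from datetime import datetime, timezone

# Convert the unix timestamp from the TimeInfo line to a UTC date string,
# equivalent to the GNU `date -d @1745302292` hint in the log.
aborted_at = datetime.fromtimestamp(1745302292, tz=timezone.utc)
print(aborted_at.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2025-04-22 06:11:32 UTC
```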
Environment
nvidia driver:
NVIDIA-Linux-x86_64-570.133.07.run
paddlepaddle docker image:
docker.1ms.run/paddlepaddle/paddle 3.0.0-gpu-cuda11.8-cudnn8.9-trt8.6 e96386e860a8 2 weeks ago 30.8GB
packages:
$ pip list | grep paddle
paddledet 2.6.0
paddlepaddle-gpu 3.0.0
paddleslim 2.6.0
Run code:
git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection/deploy/auto_compression/
python test_det.py --model_path ./models/rtdetr_hgnetv2_l_6x_coco/ --config ./configs/rtdetr_reader.yml --precision fp32 --use_trt True --use_dynamic_shape False