Error when loading model: INetworkDefinition::addConcatenation: Error Code 3: API Usage Error (Parameter check failed, condition: (inputs[j]) != nullptr. ) #9351

time-heart opened this issue Apr 11, 2025 · 3 comments

@time-heart

Search before asking

  • I have searched the existing issues and found no related answer.

Please ask your question

Loading model: model/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle/model.pdmodel
det_infer with TensorRT acceleration enabled
Starting creation
E0411 03:17:27.469527 55938 helper.h:131] INetworkDefinition::addConcatenation: Error Code 3: API Usage Error (Parameter check failed, condition: (inputs[j]) != nullptr. )


C++ Traceback (most recent call last):

0 paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
1 std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
2 paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
3 paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
4 paddle::AnalysisPredictor::OptimizeInferenceProgram()
5 paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
6 paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
7 paddle::inference::analysis::IRPassManager::Apply(std::unique_ptr<paddle::framework::ir::Graph, std::default_delete<paddle::framework::ir::Graph> >)
8 paddle::framework::ir::Pass::Apply(paddle::framework::ir::Graph*) const
9 paddle::inference::analysis::TensorRtSubgraphPass::ApplyImpl(paddle::framework::ir::Graph*) const
10 paddle::inference::analysis::TensorRtSubgraphPass::CreateTensorRTOp(paddle::framework::ir::Node*, paddle::framework::ir::Graph*, std::vector<std::string, std::allocator<std::string > > const&, std::vector<std::string, std::allocator<std::string > >, bool) const
11 paddle::inference::tensorrt::OpConverter::ConvertBlockToTRTEngine(paddle::framework::BlockDesc*, paddle::framework::Scope const&, std::vector<std::string, std::allocator<std::string > > const&, std::unordered_set<std::string, std::hash<std::string >, std::equal_to<std::string >, std::allocator<std::string > > const&, std::vector<std::string, std::allocator<std::string > > const&, paddle::inference::tensorrt::TensorRTEngine*)
12 paddle::inference::tensorrt::OpConverter::ConvertBlock(paddle::framework::proto::BlockDesc const&, std::unordered_set<std::string, std::hash<std::string >, std::equal_to<std::string >, std::allocator<std::string > > const&, paddle::framework::Scope const&, paddle::inference::tensorrt::TensorRTEngine*)
13 paddle::inference::tensorrt::OpConverter::ConvertOp(paddle::framework::proto::OpDesc const&, std::unordered_set<std::string, std::hash<std::string >, std::equal_to<std::string >, std::allocator<std::string > > const&, paddle::framework::Scope const&, paddle::inference::tensorrt::TensorRTEngine*, bool, paddle::framework::proto::BlockDesc const*)
14 paddle::inference::tensorrt::SplitOpConverter::operator()(paddle::framework::proto::OpDesc const&, paddle::framework::Scope const&, bool)


Error Message Summary:

FatalError: Segmentation fault is detected by the operating system.
[TimeInfo: *** Aborted at 1744341447 (unix time) try "date -d @1744341447" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x8) received by PID 55938 (TID 0xffff85db7b40) from PID 8 ***]

Segmentation fault (core dumped)

Environment
ARM, Ubuntu 22.04, Jetson
CUDA 12.6, cuDNN 9.3.0, TensorRT 10.4
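For reference, TensorRT acceleration in the Paddle Inference C++ API is configured on the Config object before the Predictor is built, which is exactly where the traceback above aborts (the TensorRT subgraph pass runs inside predictor creation). This is a minimal sketch, not the actual det_infer code: the .pdiparams path, workspace size, batch size, and min_subgraph_size values are assumptions.

```cpp
#include <paddle_inference_api.h>  // Paddle Inference C++ API (paddle_infer::)

int main() {
  paddle_infer::Config config;
  // Model from the log above; the .pdiparams path is assumed to sit next to it.
  config.SetModel(
      "model/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle/model.pdmodel",
      "model/ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle/model.pdiparams");
  config.EnableUseGpu(500 /* initial GPU memory, MB */, 0 /* device id */);
  // TensorRT subgraph engine: workspace bytes, max batch size, min subgraph
  // size, precision, use_static, use_calib_mode. Values here are illustrative.
  config.EnableTensorRtEngine(1 << 30, 1, 3,
                              paddle_infer::PrecisionType::kFloat32,
                              false, false);
  // The crash in the traceback happens inside this call, while the subgraph
  // pass converts ops such as split/concat into TensorRT layers.
  auto predictor = paddle_infer::CreatePredictor(config);
  return 0;
}
```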

@BluebirdStory
Collaborator

CUDA 12.6 is not supported at the moment; you can use CUDA 11.8 for now. CUDA 12 support is expected to arrive in May.
Thank you for your interest in PaddleX.

@time-heart
Author

time-heart commented Apr 11, 2025

CUDA 12.6 is not supported at the moment; you can use CUDA 11.8 for now. CUDA 12 support is expected to arrive in May. Thank you for your interest in PaddleX.

Input Name: image
Input Shape: [0]
Input Name: scale_factor
Input Shape: [0]
Is this a problem with the model? Converting the subgraph with TensorRT reports an error about a missing tensor.
My current environment is CUDA 12, cuDNN 8.8.1, TensorRT 8.6.1, and the above error still occurs. I am running a PaddleDetection model.
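Since both inputs are reported with shape [0] (i.e. fully dynamic), one workaround sometimes suggested for TensorRT subgraph crashes like this (not verified here) is to give the pass explicit shape ranges for `image` and `scale_factor`, or to keep the op whose converter crashes (frame 14 of the traceback is SplitOpConverter) out of the engine. A hedged sketch; the 320x320 shapes are an assumption based on the model name, and the helper below is hypothetical.

```cpp
#include <map>
#include <string>
#include <vector>
#include <paddle_inference_api.h>

// Hypothetical helper: pin the TensorRT input shape ranges and exclude the op
// whose converter segfaults, so Paddle runs it natively instead.
void ConfigureTrtWorkarounds(paddle_infer::Config* config) {
  // Shape ranges assumed from the 320x320 PP-YOLOE+ export; adjust as needed.
  std::map<std::string, std::vector<int>> min_shape = {
      {"image", {1, 3, 320, 320}}, {"scale_factor", {1, 2}}};
  std::map<std::string, std::vector<int>> max_shape = min_shape;
  std::map<std::string, std::vector<int>> opt_shape = min_shape;
  config->SetTRTDynamicShapeInfo(min_shape, max_shape, opt_shape);
  // Keep "split" out of the TensorRT engine; this uses the experimental
  // AnalysisConfig API and is offered only as a debugging aid.
  config->Exp_DisableTensorRtOPs({"split"});
}
```

Whether this avoids the addConcatenation nullptr error would still need to be confirmed against the actual exported model.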

@time-heart
Author

CUDA 12.6 is not supported at the moment; you can use CUDA 11.8 for now. CUDA 12 support is expected to arrive in May. Thank you for your interest in PaddleX.

The error occurs while TensorRT is converting the engine file.
