Replies: 1 comment
I was able to reproduce it with 1.17.2 as well.
I have a flow with 2 nodes: an `llm` node and a `python` node that parses the LLM response. I can batch-run the flow, and it produces correct results.
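For context, the `python` node is just a `@tool`-decorated function; a minimal sketch of what the parser might look like (the function and parameter names here are placeholders, not the actual node code):

```python
# Minimal sketch of the python node that parses the LLM response.
# Names are hypothetical; the real parsing logic is not shown in the post.
from promptflow import tool


@tool
def parse_response(llm_output: str) -> str:
    # Trim whitespace and return the parsed answer.
    return llm_output.strip()
```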
The problem is that promptflow reports 0 token usage for the LLM.
The same behavior is seen when `PF_DISABLE_TRACING=false`.
How do I get proper token usage metrics?
Flow:
Metrics after batch inference run:
Metrics from `pf run show`:
Metrics from the trace UI:
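For reference, the same metrics can be pulled programmatically; a minimal sketch using the promptflow Python SDK (the run name `my_batch_run` is a placeholder for the actual batch run):

```python
# Sketch: fetch metrics for a finished batch run via the promptflow SDK.
from promptflow import PFClient

pf = PFClient()
run = pf.runs.get("my_batch_run")  # look up the batch run by name
print(pf.get_metrics(run))         # the metrics that show 0 token usage
```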
Package versions: