Can anyone tell me the best practice for returning failure information from a jar task in Databricks?
I'm using an all-purpose cluster, and I'm running lots of .NET tasks on it via the Databricks Jobs API. See:
Submit Jar Task / Submit Runs :
In general things don't fail, but I can't take that for granted. I'd like to be able to "throw" a failure back to the client application that submitted the run. The client will use the failure details to send an alert of some kind (probably just an email) and then perform retries.
I am already able to detect when a run has failed, via a REST call to /jobs/runs/get?run_id=123:
https://docs.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/jobs#--runs-get
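For what it's worth, the runs/get response you're already polling carries a `state.state_message` field alongside `result_state`, which for failed jar tasks often holds a short summary of the driver error (though not the full call stack). A minimal Python sketch of pulling those fields out, assuming the response shape documented at the link above:

```python
import json

def extract_failure(run_json: str) -> dict:
    """Pull the life-cycle state, result state, and state message out of a
    /jobs/runs/get response body. state_message often summarizes the driver
    failure, but it is not the full exception call stack."""
    run = json.loads(run_json)
    state = run.get("state", {})
    return {
        "life_cycle_state": state.get("life_cycle_state"),
        "result_state": state.get("result_state"),
        "state_message": state.get("state_message"),
    }

# Abbreviated example of the documented response shape for a failed run:
sample = """{
  "run_id": 123,
  "state": {
    "life_cycle_state": "TERMINATED",
    "result_state": "FAILED",
    "state_message": "Job run failed with an exception"
  }
}"""
print(extract_failure(sample))
```

The client could alert and retry whenever `result_state` comes back as `FAILED`, using `state_message` as the alert body.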
... But I'd also like to retrieve the exception message and call stack, as provided by the driver. Is there a pattern for retrieving these in particular? Does DotnetRunner have any capabilities for handling exceptions in a particular way?
I see that there is an API method to get the standard output from the run...
https://docs.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/jobs#--runs-get-output
... but as of now my standard output is cluttered with various irrelevant status messages. It isn't reserved for a single purpose, and Spark itself dumps messages into it.
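One way to live with the clutter (a sketch of a general pattern, not anything DotnetRunner-specific): have the .NET app catch top-level exceptions and print a marker-delimited JSON blob to stdout before exiting non-zero, then have the client fish that blob out of the stdout that runs/get-output returns, ignoring everything around it. The marker strings and JSON field names below are my own invention:

```python
import json
import re

# Hypothetical sentinel markers; the .NET app would print the error blob
# between them just before exiting non-zero. Any unique strings work.
BEGIN = "===BEGIN_APP_ERROR==="
END = "===END_APP_ERROR==="

def extract_app_error(logs: str):
    """Scan the standard output returned by /jobs/runs/get-output for a
    marker-delimited JSON error blob, skipping the surrounding Spark noise.
    Returns the parsed blob, or None if the app never emitted one."""
    match = re.search(re.escape(BEGIN) + r"(.*?)" + re.escape(END),
                      logs, re.DOTALL)
    return json.loads(match.group(1)) if match else None

# Simulated stdout: Spark chatter plus the app's own error report.
logs = """
21/06/01 INFO SparkContext: Running Spark version 3.1.1
===BEGIN_APP_ERROR===
{"message": "Boom", "stack_trace": "at MyApp.Program.Main(...)"}
===END_APP_ERROR===
21/06/01 INFO SparkUI: Stopped Spark web UI
"""
print(extract_app_error(logs))
```

On the .NET side this is just a try/catch around Main that serializes `ex.Message` and `ex.StackTrace` between the two markers, so the clutter never has to be cleaned up, only skipped over.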
I considered writing a file to ADLS Gen2 storage, but I'm not keen on taking a dependency on a totally separate resource during exception handling, since writing a blob can fail in its own right.
I suspect this question has been asked before, and I'll keep googling for a solution... but I thought maybe there was a technique or pattern specific to DotnetRunner. I'm open to suggestions.