67 torch.compile tests #81
Conversation
@qiancao ping me when it's good for review!
@qiancao Is it worth adding the compiler in the notebooks too? Or would this create issues for Windows users?
@tcoroller We should not alter the existing notebooks to add torch.compile. In the future, though, we could consider adding a notebook specifically on how to accelerate the code using torch.compile/TorchScript. I think torch.compile will still be a problem for most Windows users at this point: https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Accelerate-PyTorch-Inference-with-torch-compile-on-Windows-CPU/post/1640044
loss_weibull = weibull(log_hz, event, time)
loss_cweibull = cweibull(log_hz, event, time)

self.assertTrue(
@qiancao Using torch.allclose() here to avoid converting to numpy. All good otherwise!
Added two tests for torch.compile(cox) and torch.compile(weibull).
Note: the tests were run on CPU-only nodes; GPUs have not been tested yet.