1 parent 4e402b1 commit 3bed56b
README.md
@@ -219,6 +219,8 @@ You can enable experimental memory efficient attention on pytorch 2.5 in ComfyUI

```TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention```

+You can also try setting this env variable `PYTORCH_TUNABLEOP_ENABLED=1` which might speed things up at the cost of a very slow initial run.
+
# Notes

Only parts of the graph that have an output with all the correct inputs will be executed.
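
The added line documents the `PYTORCH_TUNABLEOP_ENABLED=1` environment variable. As a minimal usage sketch only, assuming the same ComfyUI launch command shown in the existing README example (combining it with the ROCm variable is an assumption, not part of this commit), it would be set the same way:

```PYTORCH_TUNABLEOP_ENABLED=1 TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention```

The first run is slow because TunableOp benchmarks kernel variants; later runs reuse the tuned results.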