Tensorboard, prediction and more #319
-
I managed to improve the predictive code slightly. You can now choose between several methods:
The "Moving Average" choice is the one that was the default in the previous code. The other two choices, however, seem more "true" to me as predictions. I'll leave you the screenshots; I've also updated the download link for the file in the first message:
-
Fixed the data and the graph.
-
Corrected two bugs:
https://silverider76.iliadboxos.it:27440/share/rC8xVK_qv6BbRjH3/PreCog-EN.py
-
Updated: the graph is now dynamic.
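I can't say exactly how the dynamic graph is done in PreCog, but a minimal sketch of a live-updating matplotlib plot (hypothetical names, illustrative refresh interval) could look like this:

```python
import matplotlib.pyplot as plt

def live_plot(read_latest_losses, interval=5.0):
    """Redraw the loss curve every `interval` seconds.

    read_latest_losses: callable returning the current list of loss values
    (for example, re-read from the TensorBoard event file).
    """
    plt.ion()                                # interactive mode: draw without blocking
    fig, ax = plt.subplots()
    line, = ax.plot([], [], label="loss")
    ax.set_xlabel("step")
    ax.set_ylabel("loss")
    ax.legend()
    while plt.fignum_exists(fig.number):     # stop when the window is closed
        losses = read_latest_losses()
        line.set_data(range(len(losses)), losses)
        ax.relim()
        ax.autoscale_view()
        plt.pause(interval)                  # process GUI events and wait
```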
-
Updated: the prediction methods are now 6, among them Savitzky-Golay Exponential Moving Average -> expressly designed for Stable Diffusion XL training
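Whatever the exact combination used in PreCog, the two underlying techniques are standard; here is a hedged sketch of generic Savitzky-Golay and exponential-moving-average smoothing (window/alpha values are illustrative, not the ones the script uses):

```python
import numpy as np
from scipy.signal import savgol_filter

def savitzky_golay(loss_values, window=101, polyorder=3):
    """Savitzky-Golay: fit a low-order polynomial in a sliding window."""
    return savgol_filter(np.asarray(loss_values, dtype=float),
                         window_length=window, polyorder=polyorder)

def exponential_moving_average(loss_values, alpha=0.02):
    """EMA: each point is alpha * new_value + (1 - alpha) * previous smoothed value."""
    smoothed = []
    current = float(loss_values[0])
    for value in loss_values:
        current = alpha * float(value) + (1.0 - alpha) * current
        smoothed.append(current)
    return np.asarray(smoothed)
```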
-
Updated again:
-
Hello,
-
Hello,
-
Preface 1: I'm using Windows, so I wouldn't know how to troubleshoot issues with Linux or other systems.
Preface 2: The code was created for personal use, with no particular ambitions.
Preface 3: I was helped by my friend (Franci), and I used ChatGPT a lot.
Preface 4: I did my best with the calculation of convergence, overtraining, and overfitting, but I couldn't improve it further with my limited knowledge.
Preface 5: The code is completely open source, so use it and modify it as you wish. If you make significant improvements, please share them as a response to this message to help everyone.
What the code does: it reads the training parameters from the OneTrainer JSON config and the loss history from the TensorBoard logs, then prints an analysis like the example below (see also the sketch after the results):
Total Epochs Value from JSON: 300
Warmup Steps Value from JSON: 500
Repetitions Value from JSON: 1.0
Batch Size Value from JSON: 1
Results:
Best step: (99612.0, 0.10412197560071945)
Worst step: (8377.0, 0.15289561450481415)
Best epoch: 83 (Step: 98936.0) - Mean Loss: 0.12
Worst epoch: 78 (Step: 92976.0) - Mean Loss: 0.14
Convergence Epoch: 972 (Step: 1158624.0)
Overtraining Epoch: 1000 (Step: 1192000.0)
Overfitting Epoch: 974 (Step: 1161008.0)
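To make the output above easier to interpret, here is a rough, hypothetical sketch of that part of the analysis: read the loss scalars from the TensorBoard event folder, read the run parameters from the JSON config, and report the extremes. The real PreCog code differs (and the convergence/overtraining/overfitting estimates are more involved, so they are omitted here); the JSON key name and the scalar tag are guesses.

```python
import json
import numpy as np
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def analyze(log_dir, config_path, loss_tag="loss/smooth"):
    """Report best/worst step and best/worst epoch mean loss.

    log_dir: the TensorBoard run folder (e.g. .../2024-05-22_09-04-48).
    config_path: the OneTrainer JSON config; the key name below is a guess.
    loss_tag: name of the scalar series to analyze (depends on the run).
    """
    with open(config_path, "r", encoding="utf-8") as f:
        config = json.load(f)
    epochs = config.get("epochs")            # e.g. 300 in the example output

    acc = EventAccumulator(log_dir)
    acc.Reload()
    events = acc.Scalars(loss_tag)
    steps = np.array([e.step for e in events], dtype=float)
    losses = np.array([e.value for e in events], dtype=float)

    best = losses.argmin()
    worst = losses.argmax()
    print("Best step:", (steps[best], losses[best]))
    print("Worst step:", (steps[worst], losses[worst]))

    # Mean loss per epoch: split the recorded losses into `epochs` equal chunks.
    if epochs:
        per_epoch = np.array_split(losses, epochs)
        means = np.array([chunk.mean() for chunk in per_epoch if len(chunk)])
        print("Best epoch:", means.argmin() + 1, "- Mean Loss:", round(means.min(), 2))
        print("Worst epoch:", means.argmax() + 1, "- Mean Loss:", round(means.max(), 2))
```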
How to make it work:
Of course, you need to replace 2024-05-22_09-04-48 with the name of your folder. A window will open; leave it open and return to OneTrainer.
Done.
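Assuming a helper like the analyze() sketch above, replacing the folder name just means pointing it at your own run folder; the paths here are purely illustrative:

```python
# Purely illustrative paths; substitute your own workspace and run-folder name.
analyze(
    log_dir=r"C:\path\to\tensorboard\2024-05-22_09-04-48",
    config_path=r"C:\path\to\your_training_config.json",
)
```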
If it works, a window with a graph similar to this one will open:
[screenshot: graph window]
along with a command prompt like this:
[screenshot: command prompt output]
To download PreCog-EN: https://silverider76.iliadboxos.it:27440/share/rC8xVK_qv6BbRjH3/PreCog-EN.py
@Nerogar, if you are interested in integrating this code into OneTrainer, you are welcome to. I'd be glad to help with your fantastic work ;)