Performing the API calls asynchronously or in parallel would be of tremendous help in speeding up the script, especially for fetching the initial purchase information that `hb-downloader list`, among others, needs. Since that workload is nothing but API calls, making them concurrent could easily speed things up 50x, depending on the number of purchases. That's a conservative estimate, and the gain could be much larger on slower connections.
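To illustrate, here is a minimal sketch using only `concurrent.futures` from the standard library. The `fetch_order` helper, the endpoint URL, and the `requests.Session` are illustrative placeholders, not hb-downloader's actual API layer:

```python
import concurrent.futures

import requests  # assumed HTTP layer; substitute hb-downloader's own

API_ORDER_URL = "https://www.humblebundle.com/api/v1/order/"  # illustrative

def fetch_order(session, order_id):
    # Hypothetical helper: one blocking API call per purchase.
    response = session.get(API_ORDER_URL + order_id)
    response.raise_for_status()
    return response.json()

def fetch_all_orders(session, order_ids, max_workers=20):
    # Each call is network-bound, so threads overlap the round-trip
    # latency: N purchases cost roughly one round trip instead of N
    # sequential ones.
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch_order, session, oid): oid
                   for oid in order_ids}
        return {futures[f]: f.result()
                for f in concurrent.futures.as_completed(futures)}
```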
The other elephant in the room is hashing and downloading, which can be done in parallel (downloading new files while checking already-downloaded ones). This could really help on slower machines. Further steps could be multithreaded hashing and performing multiple downloads simultaneously.
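As a rough sketch of the download-while-hashing idea, again with only the standard library. Here `download_file` stands in for the existing download routine and is assumed to take a URL and a destination path:

```python
import concurrent.futures
import hashlib

def md5sum(path, chunk_size=1024 * 1024):
    # Stream the file in chunks so memory use stays flat on large files.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def sync_files(to_verify, to_download, download_file):
    # to_verify: local paths to re-hash; to_download: (url, dest) pairs.
    # Submitting both kinds of work to one pool lets the network-bound
    # downloads run while the disk-bound hashing proceeds.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        jobs = {pool.submit(md5sum, path): path for path in to_verify}
        jobs.update({pool.submit(download_file, url, dest): dest
                     for url, dest in to_download})
        for future in concurrent.futures.as_completed(jobs):
            future.result()  # re-raises any I/O or download error
```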
So, here is the roadmap:

1. Simultaneous Humble API calls
2. Download while hashing
3. Perform multiple hashes in parallel (sketch below)
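For step 3, a sketch: on CPython, `hashlib` releases the GIL while digesting buffers larger than roughly 2 KiB, so plain threads can already hash several files in parallel on a multi-core machine. This reuses the `md5sum` helper from the sketch above:

```python
import concurrent.futures

def hash_many(paths, max_workers=None):
    # max_workers=None lets the pool size itself from the CPU count.
    # Threads suffice here because hashlib drops the GIL on big reads;
    # a ProcessPoolExecutor would be the fallback if profiling shows
    # the threads contending anyway.
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order, so zip pairs each path correctly.
        return dict(zip(paths, pool.map(md5sum, paths)))
```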
I am not sure what the best way to implement this would be. However, it would be nice if:

- the same library could be used for all of the above, to reduce complexity and dependencies (one candidate is sketched below);
- we used a built-in Python module, so as not to depend on anything else;
- any extra dependency we did bring in were common enough to be found in most distributions' packages.
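For what it's worth, the standard-library `concurrent.futures` module seems to tick all three boxes: it has shipped with Python since 3.2, and threads and processes share the same `Executor` interface, so all three roadmap items can follow one pattern:

```python
import concurrent.futures

# Same interface for I/O-bound and CPU-bound work: swap the class,
# keep the calling code.
Executor = concurrent.futures.ThreadPoolExecutor    # API calls, downloads
# Executor = concurrent.futures.ProcessPoolExecutor # heavy CPU work

with Executor(max_workers=8) as pool:
    # pool.submit(fn, *args) -> Future, or pool.map(fn, iterable);
    # both cover every roadmap item with no third-party dependency.
    futures = [pool.submit(print, "placeholder task", i) for i in range(3)]
    concurrent.futures.wait(futures)
```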