Release nightly Python packages #385
Conversation
Triggered a development release build, https://github.com/ROCm/TheRock/actions/runs/14362396978; the assets were pushed to https://github.com/ROCm/TheRock/releases/tag/dev-release. With this, the core wheel can be installed via
For the non-GPU-specific assets, the wheel that was built last wins and overwrites the one pushed by a workflow in the matrix that finished earlier. This should be addressed in a follow-up.
  - name: Upload Nightly Release
    if: ${{ inputs.build_type == 'rc' || inputs.build_type == '' }}
    uses: ncipollo/release-action@440c8c1cb0ed28b9f43e4d1d670870f059653174 # v1.16.0
    with:
-     artifacts: "${{ env.DIST_ARCHIVE }}"
+     artifacts: "${{ env.DIST_ARCHIVE }},${{ env.OUTPUT_DIR }}/wheels/dist/*.whl"
I think we need a separate "dev-wheels" release or something since Python assets will otherwise be hard to distinguish from native assets (and they exist as part of a "namespace" that cannot be changed). If doing that, then you also need to upload the sdists (tar.gz files), not just the wheels.
A separate `dev-wheels` release sounds reasonable. Right now we have a `dev-release` and a `nightly-release`. I assume adding wheels for both of those release types to `dev-wheels` is reasonable?
With the way sdists are currently produced, there will be a name clash, as the name is the same for all architectures but the dependencies differ.
- Should the GPU architecture be included in the name of the sdist package? (Option a)
- Another option is to define the GPU-architecture-specific libraries as `project.optional-dependencies` in the `pyproject.toml` instead of being restricted to one GPU-specific library via `_dist_info.py`. (Option b; a sketch of this follows the list.)
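For illustration, option (b) might look roughly like this in the `pyproject.toml`. The package names and GPU-family extras below are placeholders, not the actual names used by TheRock:

```toml
# Hypothetical sketch of option (b): the sdist/wheel name stays architecture-neutral,
# and each GPU family becomes an optional extra instead of a single hard dependency
# selected via _dist_info.py. All package and extra names here are placeholders.
[project]
name = "rocm-example-core"   # placeholder, architecture-neutral
version = "0.0.0.dev0"

[project.optional-dependencies]
gfx94x = ["rocm-example-libs-gfx94x"]     # placeholder per-family runtime libraries
gfx110x = ["rocm-example-libs-gfx110x"]   # placeholder per-family runtime libraries
```

Users would then select the family at install time (e.g. `pip install rocm-example-core[gfx110x]`), which avoids the sdist name clash at the cost of moving the architecture choice to install time.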
We discussed offline and will have separate releases for every supported GPU-family for now.
Let's discuss in our standup today: I'd like to come up with a name for these single target community builds and then document it.
In my mind, these single-target community builds are things we will always produce as a precursor to official builds for a combination of selected targets, which we either use for ongoing QA or for bootstrapping new official support for a target.
What we're missing in TheRock proper is where things go from there. We will eventually support/provide unity builds for some combined set of supported targets, and then of course ROCm overall will provide numbered official builds on a slower cadence.
I think if we spell out the first two tiers of single-target and unity community builds, along with our roadmap of where specific target support stands on each, it will make more sense.
So keep doing what you are doing, but we should write it down and will likely tweak the release names.
@@ -0,0 +1,140 @@
#!/usr/bin/env python |
Any chance we could extend `linux_portable_build.py` vs forking? We could add an `--action` arg that controls what we are delegating to vs hard-coding to `linux_portable_build_in_container.sh`.
The reason is: we're going to need to add more controls for the image version, config options, and platform (ARM vs x86 Docker images), and I'd rather have all of the manylinux stuff trampoline through the same thing to keep config control manageable. I've had this stuff get out of control before...
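A rough sketch of that shape (apart from `linux_portable_build_in_container.sh`, the action names, the delegated script, and the argument handling below are placeholders, not the actual `linux_portable_build.py` interface):

```python
#!/usr/bin/env python
# Hypothetical sketch: route different in-container entry points through a single
# --action flag instead of hard-coding linux_portable_build_in_container.sh or
# forking the wrapper script.
import argparse
import subprocess
from pathlib import Path

THIS_DIR = Path(__file__).resolve().parent

# Action name -> in-container entry point. "portable" mirrors the existing script;
# "manylinux" is a placeholder for the Python package build step.
ACTIONS = {
    "portable": THIS_DIR / "linux_portable_build_in_container.sh",
    "manylinux": THIS_DIR / "build_python_packages_in_container.sh",  # placeholder
}

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--action", choices=sorted(ACTIONS), default="portable",
        help="Which in-container build script to delegate to.")
    # Image version, config options, and platform (ARM vs x86) controls would be
    # added here so portable and manylinux builds share the same configuration.
    args, passthrough = parser.parse_known_args()  # unknown args go to the script

    # The real wrapper would run this inside its docker/podman invocation; the call
    # is shown directly here only to illustrate the single delegation point.
    subprocess.run([str(ACTIONS[args.action]), *passthrough], check=True)

if __name__ == "__main__":
    main()
```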
Sure, I was considering this but wasn't sure if it would be worth it. I'll extend `linux_portable_build.py` with what is needed.
Force-pushed from 6c919c2 to b368b0d, then from b368b0d to 2c8c95e.
Closes #369