Added 5090 benchmarks for llama3.2 and llama3.1 #18

Merged
merged 1 commit into from
Feb 8, 2025

Conversation

@kobaltz kobaltz commented Feb 8, 2025

No description provided.

@geerlingguy geerlingguy merged commit df6dac8 into geerlingguy:main Feb 8, 2025
1 check passed
@geerlingguy
Owner

Whee! Did you have to turn off other power circuits in your home to complete the test? Ha!

@kobaltz
Author

kobaltz commented Feb 8, 2025

My UPS was angry for sure. But inference didn't max out the GPU as far as power goes; it peaked around 350 W.
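Not part of the PR itself, but for anyone wanting to reproduce this kind of peak-power reading during an inference benchmark, one common approach is to poll `nvidia-smi` while the workload runs. A minimal sketch (assuming `nvidia-smi` is on the PATH and a single GPU at index 0; the function names here are illustrative, not from the repo):

```python
import subprocess
import time

def parse_power_draw(csv_value: str) -> float:
    """Parse an nvidia-smi power.draw value like '349.87 W' into watts."""
    return float(csv_value.strip().split()[0])

def peak_power(duration_s: float = 60.0, interval_s: float = 1.0) -> float:
    """Poll nvidia-smi for the given duration and return the highest
    observed power draw (watts) on GPU 0."""
    peak = 0.0
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader"],
            text=True,
        )
        peak = max(peak, parse_power_draw(out.splitlines()[0]))
        time.sleep(interval_s)
    return peak
```

Running `peak_power()` in one terminal while the benchmark runs in another gives a rough peak figure; for finer resolution, `nvidia-smi`'s built-in logging mode (`nvidia-smi --query-gpu=power.draw --format=csv -l 1`) works too.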

@geerlingguy
Owner

@kobaltz - Heh, that led to me buying a larger UPS when I originally upgraded to a 3080 Ti.

Then a couple years later I finally snagged a 4090... and realized the UPS to run the card without it barking at me would be as expensive as the card itself, so when I'm doing benchmarks, I just plug the PC into the wall now.
