feat: display emoji of the engine for the run in the prompt #872
Conversation
fixes containers#783
Signed-off-by: Florent Benoit <fbenoit@redhat.com>
Reviewer's Guide by Sourcery
This pull request introduces engine-specific emoji prefixes to the prompt for the `run` subcommand. No diagrams generated as the changes look simple and do not need a visual representation.
Hey @benoitf - I've reviewed your changes - here's some feedback:
Overall Comments:
- Consider defining the engine-to-emoji mapping in a dictionary for better readability and maintainability (see the sketch after this list).
- The logic for setting `LLAMA_PROMPT_PREFIX` seems duplicated; can it be consolidated?
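A minimal sketch of what the two suggestions could look like together, assuming a dictionary mapping plus a single helper that sets `LLAMA_PROMPT_PREFIX`; the function names and module layout here are hypothetical illustrations, not ramalama's actual code. The emojis are taken from the examples in this PR (🦭 Podman, 🐋 Docker, 🦙 native llama.cpp):

```python
import os

# Engine-to-emoji mapping suggested in the review.
ENGINE_EMOJIS = {
    "podman": "🦭",
    "docker": "🐋",
}
DEFAULT_EMOJI = "🦙"  # no container engine: run llama.cpp natively


def prompt_prefix(engine: str | None) -> str:
    """Return the emoji prompt prefix for the given container engine."""
    return f"{ENGINE_EMOJIS.get(engine or '', DEFAULT_EMOJI)} > "


def set_prompt_env(engine: str | None) -> None:
    # One place that sets LLAMA_PROMPT_PREFIX, avoiding the duplication
    # flagged in the review.
    os.environ["LLAMA_PROMPT_PREFIX"] = prompt_prefix(engine)
```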
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
LGTM
Add an emoji prompt showing the engine:
🦭 >
🦙 >
🐋 >
I had to bump the SHA to build a more recent version of llama-run for the ramalama image.
I checked on macOS and inside a container (Docker or Podman), but not natively on Linux, so 🤷
fixes #783
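For context, a hedged sketch of how the engine could be detected before picking the emoji; this is an illustration under the assumption that detection amounts to checking which engine binary is on `PATH`, not ramalama's actual detection logic:

```python
import shutil


def detect_engine() -> str | None:
    """Return the first available container engine, or None for native runs."""
    for engine in ("podman", "docker"):
        if shutil.which(engine):
            return engine
    return None
```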
Summary by Sourcery
This pull request introduces an emoji prompt to indicate the engine being used (Podman, Docker, or native) when running the `run` subcommand. It also updates the llama.cpp SHA and fixes an issue with the prompt prefix.

New Features:
- Display an engine-specific emoji (🦭 for Podman, 🐋 for Docker, 🦙 for native) in the prompt of the `run` subcommand.

Bug Fixes:
- Fix an issue with the prompt prefix.

Build:
- Bump the llama.cpp SHA to build a more recent llama-run into the ramalama image.
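As a quick check of the behavior described above, a small self-contained snippet printing the expected prompt per engine; the mapping repeats the hypothetical sketch from the review thread, with only the emojis and prompt shape taken from this PR:

```python
ENGINE_EMOJIS = {"podman": "🦭", "docker": "🐋"}

for engine in ("podman", "docker", None):
    # Fall back to the llama emoji when no container engine is in use.
    emoji = ENGINE_EMOJIS.get(engine, "🦙")
    print(f"{engine or 'native'}: {emoji} > ")

# Expected output:
# podman: 🦭 >
# docker: 🐋 >
# native: 🦙 >
```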