docker image overwrite #572


Open · wants to merge 2 commits into base: main
Conversation

michaelfeil (Owner)

Related Issue

Checklist

  • I have read the CONTRIBUTING guidelines.
  • I have added tests to cover my changes.
  • I have updated the documentation (docs folder) accordingly.

Additional Notes

Add any other context about the PR here.

greptile-apps bot (Contributor) left a comment

PR Summary

This PR updates the Docker build configurations across multiple Dockerfiles to standardize dependency management for nomic-ai/nomic-embed-text-v1.5 model support.

  • Replaces git-based transformers installation with pinned version 4.50.3 across all Dockerfiles (amd, nvidia, cpu, trt_onnx)
  • Adds colpali-engine 0.3.9 as a direct pip install, removing git dependency
  • Removes cleanup commands in nvidia_auto which may impact image size optimization
  • TODO comment remains in cpu_auto Dockerfile that should be addressed
  • Version pinning could potentially limit compatibility with other models requiring different versions

5 file(s) reviewed, 3 comment(s)
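Pieced together from the diff hunks quoted below, the before/after for each affected Dockerfile layer looks roughly like this (a sketch; the commit hash, package names, and versions are exactly as shown in the hunks, the commentary is ours):

```dockerfile
# Before: install transformers from a pinned git commit (requires git in the image)
# RUN apt-get install --no-install-recommends -y git && \
#     poetry run python -m pip install git+https://github.com/huggingface/transformers.git@7547f55e5d93245c0a013b50df976924f2d9e8b0 && \
#     rm -rf ~/.cache/ /tmp/*

# After: pin released versions directly from PyPI (no git needed in the layer)
RUN poetry run pip install --no-cache-dir transformers==4.50.3 colpali-engine==0.3.9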

@@ -47,7 +47,7 @@ RUN poetry install --no-interaction --no-ansi --extras "${EXTRAS}" --without li
RUN poetry run $PYTHON -m pip install --no-cache-dir https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
Contributor

logic: flash-attention wheel is pinned to Python 3.10 and CUDA 12, but there's no explicit Python version check

Suggested change
RUN poetry run $PYTHON -m pip install --no-cache-dir https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
RUN case "$($PYTHON --version 2>&1)" in "Python 3.10"*) : ;; *) echo "Error: Python 3.10 is required for the flash-attention wheel" && exit 1 ;; esac && \
poetry run $PYTHON -m pip install --no-cache-dir https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
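A side note on the check itself: the POSIX `[` builtin compares strings literally, so a trailing `*` is not treated as a glob pattern there; a `case` statement does do prefix matching. A minimal, self-contained sketch of that pattern (the function name and version strings are illustrative):

```shell
#!/bin/sh
# Prefix-match a version string with `case`, which supports glob patterns,
# unlike the `[` builtin's literal = / != string comparison.
is_py310() {
  case "$1" in
    "Python 3.10"*) return 0 ;;  # matches e.g. "Python 3.10.14"
    *) return 1 ;;
  esac
}

is_py310 "Python 3.10.14" && echo "3.10 ok"
is_py310 "Python 3.11.2" || echo "not 3.10"
```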

Comment on lines 47 to +48
# TODO: remove this line
RUN apt-get install --no-install-recommends -y git && poetry run python -m pip install git+https://github.com/huggingface/transformers.git@7547f55e5d93245c0a013b50df976924f2d9e8b0 && rm -rf ~/.cache/ /tmp/*
RUN poetry run pip install --no-cache-dir transformers==4.50.3 colpali-engine==0.3.9
Contributor

style: consider using poetry to manage these dependencies instead of direct pip install to maintain consistent dependency management
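The poetry-managed alternative the reviewer suggests could look like this as a Dockerfile layer (a sketch; whether these pins belong in the main dependency group or an extra is an assumption, and the cleanup mirrors the existing layers):

```dockerfile
# Hypothetical alternative: let poetry record the pins in pyproject.toml/poetry.lock
# instead of a raw pip install, so one resolver stays in charge of all dependencies.
RUN poetry add --no-interaction "transformers==4.50.3" "colpali-engine==0.3.9" && \
    rm -rf ~/.cache/ /tmp/*
```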

Comment on lines 57 to +58
# TODO: remove this line
RUN apt-get install --no-install-recommends -y git && poetry run python -m pip install git+https://github.com/huggingface/transformers.git@7547f55e5d93245c0a013b50df976924f2d9e8b0 && rm -rf ~/.cache/ /tmp/*
RUN poetry run pip install --no-cache-dir transformers==4.50.3 colpali-engine==0.3.9
Contributor

style: TODO comment indicates this line should be removed, but it's being modified instead. Consider removing the line entirely or removing the TODO if the line is now permanent.

@codecov-commenter

codecov-commenter commented Apr 17, 2025

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

Attention: Patch coverage is 72.72727% with 3 lines in your changes missing coverage. Please review.

Project coverage is 79.75%. Comparing base (05b6117) to head (31f7eef).

Files with missing lines | Patch % | Lines
--- | --- | ---
...mb/infinity_emb/transformer/vision/torch_vision.py | 72.72% | 3 Missing ⚠️

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #572      +/-   ##
==========================================
- Coverage   79.85%   79.75%   -0.10%     
==========================================
  Files          43       43              
  Lines        3489     3498       +9     
==========================================
+ Hits         2786     2790       +4     
- Misses        703      708       +5     

☔ View full report in Codecov by Sentry.
Labels: None yet
Projects: None yet
2 participants