[2.4+] IPP_OP_CREATE_LOCAL_PRINTER creates raw queue for model 'everywhere' if PPD generation fails #760
Comments
Hi @10ne1, thank you for reporting the issue. We discussed this in general as part of #347, which covers a side effect of the change (which you correctly identified as the culprit) when PPD generation takes too long, but it is good to have a separate tracker too. @michaelrsweet is looking into an implementation of a solution for this which we can use in 2.4, so he might have some ideas in mind. I was thinking about this myself as well, and we might move the IPP Get-Printer-Attributes request from the separate thread into the main thread. It would cause some slowdown for the server, but IMO that is better than creating printers which look like IPP Everywhere queues but are raw in the end. P.S. @michaelrsweet IMHO we will need a similar check of whether the destination is capable of IPP Everywhere when creating printer profiles for the local server and for permanent queues on the sharing server.
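For context, a capability check like the one suggested above can be done by hand today with `ipptool` and the `get-printer-attributes.test` file shipped with CUPS; a sketch (the printer URI is a placeholder, not one from this report):

```shell
# Send IPP Get-Printer-Attributes to the destination before creating a
# queue for it. A timeout or missing attributes here means the printer
# cannot be set up as IPP Everywhere. The URI below is a placeholder.
ipptool -tv ipp://printer.example.com/ipp/print get-printer-attributes.test
```

The test file is typically installed under `/usr/share/cups/ipptool/`; the exact path may vary by distribution.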
@zdohnal We absolutely don't want this happening in the main thread, since it would be an easy way to mount a DoS attack: try adding a printer at a non-existent address and you block everyone from accessing the server for at least 30 seconds. Ideally, the response logic for this operation should be updated to use the CUPS-Get-Devices/CUPS-Get-PPDs interface so that the response can be fed back to the client over a pipe once the PPD is (or is not) available.
@michaelrsweet IIUC pipes would work locally, but we also support adding printers remotely (lpadmin -h, i.e. a remote request to cupsd) - I would guess it won't work in that case?
@zdohnal The pipe runs locally to return the IPP response to the network client, which can be local (domain socket or loopback) or remote (external network interface). Like I said, the same way we do the CGI-based cups-driverd and cups-deviced interfaces for CUPS-Get-PPDs and CUPS-Get-Devices.
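The existing CGI-style interface described above can be exercised from the command line; a sketch, assuming a running cupsd (output omitted, since it depends on the installed drivers and attached devices):

```shell
# CUPS-Get-PPDs: cupsd spawns cups-driverd and relays its output to the
# client over a pipe; lpinfo -m issues that request.
lpinfo -m

# CUPS-Get-Devices works the same way via cups-deviced.
lpinfo -v
```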
Hello, gentle ping - has there been any progress on this or on issue #347?
Not at the moment, but I would like to come up with something for 2.4.8. Currently I'm tied up with other things.
OK, changes for #347 should address the issues here. |
Hi, I am upgrading CUPS from v2.3.3 to v2.4.6 in ChromiumOS, where the regression is a bit more complex due to some downstream divergence, but I have reproduced the issue on my Arch Linux machine with unmodified CUPS from the Arch Linux Archive.
Below is a minimal use case reproducing both failures.
Describe the bug
Starting with this CUPS v2.4 commit, certain parts of lpadmin's work were moved into the scheduler to run in a background thread. This causes two specific regressions, because these failures are no longer caught by lpadmin (other failures, such as passing a bad PPD, are still caught).
To Reproduce
A minimal reproducing use case follows for 2.3.3 vs 2.4.6:
On v2.3.3:
On v2.4.6:
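The original console transcripts did not survive here; a hedged sketch of the kind of invocation involved (the queue name and URI are placeholders, not the reporter's actual values):

```shell
# Hypothetical reproduction: 198.51.100.1 is a placeholder address that
# does not answer IPP Get-Printer-Attributes, so PPD generation fails.
lpadmin -p testprinter -E -v ipp://198.51.100.1/ipp/print -m everywhere

# On v2.3.3, lpadmin reports the failure and no queue is created.
# On v2.4.6, lpadmin exits 0 and a queue exists, but it is raw (no PPD);
# the queue can be inspected with:
lpstat -v testprinter
```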
Expected behavior
The expected behaviour is what happens on 2.3.3.
It seems that in v2.4.6 the error checking still happens, but inside the scheduler? I built CUPS with debugging enabled and saw the scheduler fail while running the
_ppdCreateFromIPP2()
function from ppd-cache.c. Maybe there needs to be a way to propagate the error from the scheduler thread back to lpadmin? Perhaps some inter-process communication, such as a named pipe? Just throwing ideas out - what do you think?
System Information:
Latest Arch Linux as well as ChromeOS.
For ChromeOS, what I did for now is revert the commit mentioned above (quite the pain, btw) to restore the old lpadmin behaviour.