reject-www-data rule unintentionally removed from ip6tables when file contains only IPv4 addresses #76
Ugh, this is a horrible rule, which breaks convention. For other rules, the IPs in the file match the intention of the rule, whereas this one does the reverse. For a normal reject rule, the listed IPs are explicitly rejected; for this rule, the IPs are added to an ACCEPT rule, which means that if no IP is listed, all of them are rejected. 🤢
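To make the inversion concrete, here is a hedged sketch of the two rule shapes as netfilter commands. The exact commands Symbiosis generates are an assumption; matching the web-server user via `--uid-owner www-data` is a typical Debian convention, not confirmed by this thread.

```shell
# Illustrative only - not the actual Symbiosis-generated rules.
#
# A normal reject rule blocks each IP listed in its file directly:
#
#   iptables -A OUTPUT -d 192.0.2.1 -j REJECT
#
# reject-www-data does the reverse: listed IPs are whitelisted for the
# www-data user, and a trailing catch-all rejects everything else:
#
#   iptables -A OUTPUT -m owner --uid-owner www-data -d 192.0.2.1 -j ACCEPT
#   iptables -A OUTPUT -m owner --uid-owner www-data -j REJECT
```

So an empty file means "reject all outgoing www-data traffic", which is the unintuitive structure the comments above complain about.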
If memory serves, the support team aren't too keen on the rule anyway, because it's been implicated in a lot of old, out-of-date WordPress installs getting hacked - it's a massive barrier to being able to update/auto-update a bunch of web software.
True. We don't know how useful this rule really is, and lots of people do remove it.
It would be interesting to find out for sure, or have a poll. I suspect that it might cause updates to fail - but oftentimes they'd fail anyway due to […]. Does make me wonder if my dns-idea would be better, though.
However, if someone has managed to get malicious code onto a server, in ~99% of cases it's via a common exploitable (and likely automated) upload script or similar vector, which then allows execution of that code as www-data. In those cases the box is already compromised, and the hostile actor is most likely to continue using that same known-good vector to upload rootkits or whatever else they need to elevate privileges and root the box. Generally though, the default […] There are probably a few cases where this would be useful (although maybe implemented differently), but as far as the support team are concerned it's generally more of a liability than not.
I've also had issues when linking a CMS to external resources, where the CMS tries to cache the resource. I even opened a bug in the plugin and worked with the author to find out what was happening before I got to the firewall. I can see the utility, but is there a blacklist anywhere of "bad" hosts for rootkits and so on that could be used to deny traffic?
@iainhallam - Bad hosts are so common, and so numerous, that it would be futile to even try to maintain such a list. The only sane approach is the reverse - whitelisting known-good hosts (which we do try to do).
FWIW, I've been using the rule since year zero; initially I was caught out thinking Drupal updates were bug-ridden, but re-reading the docs eventually pointed the way. I expect it's a common fault, especially at first install, so making it opt-in (replacing it with a default one-liner, "*" [sticking with the unintuitive structure]) and/or giving it higher prominence in the docs would make sense. With scripts to set tight permissions on CMS installs (config data, tmp and cache folders above htdocs where possible), I'm hoping the firewall rule provides a basic defence or mitigation. Assuming it does, the only remaining problem is the silent failure; emails to root (realtime or an hourly summary) would be hugely helpful.
I'd vote for disabled by default. It's preventing WordPress and the like from staying up to date, and that's a worse security risk. Alternatively, Symbiosis could look for certain popular CMS installations and open access to the relevant updaters. But that may be harder than it sounds.
When an IPv4 address is added to the reject-www-data rule, the rule is removed from ip6tables.
Steps to reproduce:

1. Run ip6tables -L -v -n and notice the reject-www-data table is present.
2. Add an IPv4 address to /etc/symbiosis/firewall/outgoing.d/50-reject-www-data.
3. Run ip6tables -L -v -n again and notice the reject-www-data table is no longer present.
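One plausible mechanism for this bug - an assumption for illustration, not a reading of the actual Symbiosis code: if the rule generator only emits commands for addresses matching the table's address family, and only appends the catch-all REJECT when the file is empty or at least one address matched, then a file containing only IPv4 addresses produces no ip6tables output at all. A minimal dry-run sketch (the `emit_rules` function is hypothetical; it prints the commands instead of applying them):

```shell
#!/bin/sh
# Hypothetical reconstruction of the buggy rule generation.
# emit_rules <iptables|ip6tables> [whitelisted addresses...]
emit_rules() {
    cmd=$1; shift
    want=4
    [ "$cmd" = ip6tables ] && want=6
    matched=0
    for ip in "$@"; do
        case $ip in            # crude family detection: IPv6 addresses contain ':'
            *:*) fam=6 ;;
            *)   fam=4 ;;
        esac
        if [ "$fam" -eq "$want" ]; then
            echo "$cmd -A OUTPUT -m owner --uid-owner www-data -d $ip -j ACCEPT"
            matched=1
        fi
    done
    # The bug: the catch-all REJECT is skipped when the file has addresses
    # but none of them match this table's family.
    if [ $# -eq 0 ] || [ "$matched" -eq 1 ]; then
        echo "$cmd -A OUTPUT -m owner --uid-owner www-data -j REJECT"
    fi
}

emit_rules ip6tables 192.0.2.1   # prints nothing: the rule vanishes from ip6tables
emit_rules ip6tables             # empty file: the REJECT is still emitted
```

Under these assumptions, an empty file behaves correctly (reject everything in both tables), but an IPv4-only file silently drops the whole www-data restriction from ip6tables, matching the behaviour reported above. The fix would be to always emit the catch-all REJECT regardless of which families appear in the file.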