Explicit Trusted Proxy in HTTP/2.0 or...not so much
ISC Handler Rob sent the team a draft RFC currently under review by the IETF that seemingly fits quite nicely in the "What could possibly go wrong?" category.
Take a second and read Explicit Trusted Proxy in HTTP/2.0 then come back for further discussion.
Collect jaw from floor, and recognize that what's being proposed "buggers the CA concept and browser implementation enough to allow ISPs to stand up 'trusted proxies' to MITM and cache SSL content in the name of 'increasing performance.'" Following are highlights of my favorite content from this oddly written draft, as well as some initial comments:
-
"This document addresses proxies that act as intermediary for HTTP2 traffic and therefore the security and privacy implications of having those proxies in the path need to be considered."
- We agree. :-)
-
"Users should be made aware that, different than end-to-end HTTPS, the achievable security level is now also dependent on the security features/capabilities of the proxy as to what cipher suites it supports, which root CA certificates it trusts, how it checks certificate revocation status, etc. Users should also be made aware that the proxy has visibility to the actual content they exchange with Web servers, including personal and sensitive information."
- All I have is "wow".
-
There are opt-out options, sure, but no one's ever disguised or abused such options, right?
- Opt out 1 (proxy certificate): "If the user does not give consent, or decides to opt out from the proxy for a specific connection, the user-agent will negotiate HTTP2 connection using "h2" value in the Application Layer Protocol Negotiation (ALPN) extension field. The proxy will then notice that the TLS connection is to be used for a https resource or for a http resource for which the user wants to opt out from the proxy."
- Opt out 2 (captive proxy): "Specifies how an user can opt out (i.e. refuse) the presence of a Proxy for all the subsequent requests toward "http" URI resources while it stays in that network."
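For readers unfamiliar with ALPN, here is a minimal, hedged sketch of what opt-out 1 amounts to on the wire, using only Python's standard library (the host name is a placeholder, not from the draft): the user-agent offers the plain "h2" token, and a conforming proxy is then supposed to treat the connection as end-to-end and stay out of the way.

# Illustrative only -- not code from the draft. The client advertises plain
# "h2" in ALPN; per the draft, a proxy seeing this is expected to pass the
# TLS connection through untouched instead of entering "trusted proxy" mode.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2"])  # opt out: ordinary HTTP/2, no proxy involvement

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print("negotiated:", tls.selected_alpn_protocol())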
-
Section 7's title is Privacy Considerations. None are listed.
- Er? Here, I'll write the section for you. Opt in and you have no privacy.
-
The draft states that the Via general-header field MUST be used by the user-agent to indicate the presence of the secure proxy between the User-Agent and the server on requests, and between the origin server and the User-Agent on responses. In other words, the header signals the presence of a Proxy in between, or, loosely translated, the MITM.
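Purely as illustration (this is not the draft's normative mechanics, and the host and proxy names are made up), the signaling would look like an ordinary RFC 7230 Via header added on each hop:

# Illustrative sketch: a client adding a Via header naming the intermediary,
# then checking whether the response path also carries one.
import http.client

conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/", headers={"Via": "2.0 trusted-proxy.example.net"})
resp = conn.getresponse()
print("Via on response:", resp.getheader("Via"))
conn.close()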
-
And if it's not used? Session disallowed? Appears not:
-
The draft has said MUST re: the Via header but then says...
-
"If any of the following checks fails the User-Agent should immediately exit this Proxy mode:
1. the server's certificate is issued by a trusted CA and the certificate is valid;
2. the Extended Key Usage extension is present in the certificate and indicates the owner of this certificate is a proxy;
3. the server possesses the private key corresponding to the certificate."
- ...but says nothing about what happens if the headers are wrong or Via is not used.
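For what it's worth, here is a rough sketch of check 2 from the quoted list, using Python's 'cryptography' package. The Extended Key Usage value that would mark a certificate as belonging to a proxy is something the draft would have to define; the OID below is a placeholder, not a real assignment.

# Sketch of the Extended Key Usage check only. Checks 1 and 3 (chain to a
# trusted CA, revocation status, proof of private-key possession) happen in
# path validation and the TLS handshake, not here.
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

HYPOTHETICAL_PROXY_EKU = x509.ObjectIdentifier("1.3.6.1.5.5.7.3.99")  # placeholder OID

def cert_marks_a_proxy(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        eku = cert.extensions.get_extension_for_oid(ExtensionOID.EXTENDED_KEY_USAGE).value
    except x509.ExtensionNotFound:
        return False  # check 2 fails: no EKU extension at all
    return HYPOTHETICAL_PROXY_EKU in eku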
-
Love this one: "To further increase the security, the validation by the CA could also include technical details and processes relevant for the security. The owner could for example be obliged to apply security patches in a timely fashion."
- Right...because everyone patches in a timely fashion. And the Patch Police agency to enforce this control will be...?
Maybe I'm reading this wrong and don't know what I'm talking about (common), but we think this draft leaves much to be desired.
What do readers think? Imagine this as an industry standard in the context of recent NSA allegations or other similar concerns. Feedback and comments invited and welcome.
Comments
Note the draft RFC defines "trusted proxy" as one where a dummy root CA cert from the proxy's internal CA has already been imported onto the client. Importing this cert is necessary before the client will trust the dummy certs that proxies like Blue Coat SG generate and send to clients on the fly.
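To make that point concrete, this is roughly the on-the-fly dummy-cert technique such intercepting proxies use, sketched with Python's 'cryptography' package. It is illustrative only, not taken from the draft or from any vendor's implementation, and it assumes you already hold the proxy CA's certificate and private key.

# Illustrative sketch of "dummy cert on the fly": the proxy mints a short-lived
# leaf certificate for the requested hostname, signed by its own internal CA.
# Clients accept it only if that CA cert was previously imported as trusted.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def mint_dummy_cert(hostname, ca_cert, ca_key):
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
        .issuer_name(ca_cert.subject)  # issued by the proxy's internal CA
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=7))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )
    return key, cert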
All of these things are already possible and are being done today -- it's just that the current protocol implementations need improvement. I suspect that's what the linked discussions in the draft are about.
For example, what happens if the proxy finds a site with an invalid cert, because maybe there's a man-in-the-middle attacker hijacking the session? Most enterprises can't just reject all invalid certs; that would break too many sites. Ideally the proxy would notify the browser and user of the issue and let them make the decision whether to proceed (I know, it's not perfect), but I'm not sure this is always possible to do.
Note also the first quote in bold text in the above article is referring to a gap in the current state, not how the future state should work.
If you don't want this inspection to happen, use a client cert and the proxy will probably let it through without decrypted inspection. That might feel safer for the user, but it can be riskier for the enterprise and its security professionals.
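As a hedged illustration of that point (the URL and file paths below are made up): a client presenting its own certificate for mutual TLS usually forces the proxy to tunnel the session rather than decrypt it, because the proxy cannot forge the client's certificate.

# Illustrative only: a Python client doing mutual TLS with the 'requests'
# library. An intercepting proxy typically has to pass this through untouched.
import requests

resp = requests.get(
    "https://portal.example.com",                          # placeholder URL
    cert=("/path/to/client.crt", "/path/to/client.key"),   # placeholder client cert + key
)
print(resp.status_code)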
The need for this is that, if the organization cannot inspect outbound HTTPS, malicious insiders, malware, and attackers on internal computers can perform outbound protocol tunneling through the proxies with little to no proxying / protocol inspection.
Anonymous
Feb 24th 2014
We don't need this.
Anonymous
Feb 24th 2014
http://www.theregister.co.uk/2014/02/24/saving_private_spying_cryptobusting_proxy_proposal_surfaces_at_ietf/
http://lauren.vortex.com/archive/001076.html
Anonymous
Feb 25th 2014
Re: "We don't need this."
Yup and they do a great job. How do I know? I see all of the garbage that people are clicking on that comes from compromised legitimate HTTPS sites. Even a few years ago that might be once a quarter if that. Now it's a couple almost every day.
If you have valuable data you need to protect, whether it's customer data or intellectual property, and you're NOT doing HTTPS inspection, everything your wonderful, computer-literate, security-aware employees click on goes straight to the desktop AV for remediation. You know, every site that is at the top of the list for the search engine they use. Not a problem though, because desktop AV is so effective now, something like 95% right? (Oh wait, that's the current "miss" rate. Sorry.)
The malware is already inside your network at that point and with multiple exploits being the norm from a single site nowadays, you're negligent if you know of this threat and don't bring it up as something that should be mitigated.
So yeah, I'm happy that you don't Internet surf at work. Do your personal stuff at home.
Anonymous
Feb 25th 2014
After reading this and rereading the draft, it is apparent that the draft does not intend at all to affect end-to-end HTTPS connections. This is not MITM of end-to-end HTTPS. Rather, it intends to provide for on-the-fly upgrading ONLY of HTTP traffic to HTTPS between the user and the proxy (e.g., ISP).
While this is an interesting idea, it seems to be a ridiculously kludgey approach, one that would cause more problems and introduce more pitfalls than it "fixes". Even after defending the concepts of the proposal, Hill also states:
"One thing this whole episode has finally convinced me of is that “opportunistic encryption” is a bad idea. I was always dubious that “increasing the cost” of dragnet surveillance was a meaningful goal (those adversaries have plenty of money and other means) and now I’m convinced that trying to do so will do more harm than good. I watched way too many extremely educated and sophisticated engineers and tech press get up-in-arms about this proxy proposal, as if the “encryption” it threatened provided any real value at all. “Opportunistic encryption” means well, but it is clearly, if unintentionally, crypto snake-oil, providing a very false sense of security to users, server operators and network engineers. For that reason, I think it should go, to make room for the stuff that actually works."
Anonymous
Feb 25th 2014
We found it broke any number of apps that use HTTPS, and many of them don't provide useful errors or warnings when they stop working because of a perceived MITM attack on their HTTPS traffic. For instance, some Linux desktops just stopped seeing that there were any new updates available. No mention of "Oh, by the way, I can't talk to any of the repositories I use to look for updates anymore."
There are just too many different apps, many with their own list of certs they trust (totally separate from what other apps or the OS might use), that break after you perform your own MITM attack (even if done with the best intentions, that's what it is).
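A small, hedged example of that per-app trust-store problem (the bundle path is an assumption): a Python client using 'requests' ships its own CA bundle and will reject the proxy's dummy certs until it is explicitly pointed at a bundle that includes the proxy CA.

# Illustrative only. Behind an intercepting proxy, the default call fails with
# a certificate verification error because the bundled CA list does not
# contain the proxy's internal CA; every such app must be reconfigured.
import requests

# requests.get("https://example.com")  # fails: proxy CA not in the default bundle
resp = requests.get(
    "https://example.com",
    verify="/etc/ssl/certs/corp-proxy-ca.pem",  # placeholder path to a bundle with the proxy CA
)
print(resp.status_code)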
Yeah, users will click on anything, and HTTPS links side-step a lot of the filtering we do. But these should be viewed the same way we view anti-virus: as just another layer of security, not a silver bullet that avoids the need to educate users.
Anonymous
Feb 26th 2014
How is the distribution of "Trusted" certs to be implemented? Sorry, for some reason the IETF site is slow to load for me. I can see this implemented at an enterprise level with exceptions for known issues. Broader than that, and implementation becomes problematic, starting with cert distribution.
Anonymous
Feb 28th 2014
In the current state, most proxies just tunnel HTTPS (and most other non-HTTP ports and protocols) through largely without inspection, acting like a port-based firewall from 1990: "If dest port = 8443, then let it all out."
All those Linux apps breaking are exactly the problem that creates the need for an updated HTTPS RFC like this one. Right now HTTPS apps aren't expecting a MITM proxy, so the HTTPS protocol and RFC don't really give a set way to communicate and negotiate issues and exceptions. So each app is left to anticipate and perform its own error handling, which predictably fails to anticipate every possible future network environment and error. Ideally a protocol RFC would help by saying, "in this situation, the proxy shall do X and the client app should do Y."
Almost nobody wants HTTPS to be decrypted for every domain; you'll want to set up some exceptions (which is also true of proxying in general: some apps won't work well with a proxy, period). It's possible your proxy admins could be doing a better job of reading the proxy logs for errors, to see which sources or destinations need exception rules on the proxy to work.
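As a toy sketch of the exception idea (the hostnames below are invented, and real products key on URL categories and policy layers rather than simple suffixes): the proxy tunnels bypass-listed domains untouched and only intercepts the rest.

# Toy illustration of a decryption bypass list; purely an assumption about
# how such a rule could be expressed, not anything from the draft.
BYPASS_SUFFIXES = (".examplebank.com", ".examplehealth.org")  # hypothetical entries

def should_intercept(hostname: str) -> bool:
    return not hostname.endswith(BYPASS_SUFFIXES)

print(should_intercept("www.example.com"))        # True: decrypt and inspect
print(should_intercept("login.examplebank.com"))  # False: tunnel untouched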
If the enterprise doesn't decrypt and inspect HTTPS, then attackers, malware, malicious insiders, etc. could be stealing and tunneling sensitive internal data out via HTTPS to a legitimate commercial web or email server, and you might never notice.
To answer the last question above - I believe the draft RFC presumes the clients already have root CA certs installed to trust dummy PKI certs generated by the proxy for each site. It doesn't state how those are distributed, nor does it need to. Each enterprise can use whatever tool it uses to distribute software or perform remote client administration - AD Group Policy, login scripts, client OS install images, emailed instructions, an "error" page from the proxy with a link and instructions, MS SCCM, IBM Tivoli, etc.
Anonymous
Mar 2nd 2014