> For example, the recent standardisation of DNS-over-HTTPS (DoH) pitted advocates for dissidents and protestors against network operators who use DNS for centralised network management, and child safety advocates who promote DNS-based filtering solutions. If the IETF were to only decide upon technical merit, how would it balance these interests?
I like neither DoH nor, in general, arguments of the form "X is not political, we only decide on merits".
However, in this case, I really wonder if the IETF's contribution is political: DoH on its own is highly so, but to my understanding the politics are in the decision to design an encrypted, centralized and block-proof alternative to DNS with preselected servers and automatically deploy it to millions of users. But those points belong to the initial requirements and the deployment process - not so much to the technical details of the implementation.
That DoH is a standard makes some difference, but not very much. I think we'd have roughly the same problems (though maybe with less publicity and discussion) if this were some proprietary Chrome thing that was fully developed inside Google.
Or to put it differently, assuming it was already decided that something like DoH should be built, is there still political power in the follow-up discussions about how it should be built?
> ...and child safety advocates who promote DNS-based filtering solutions.
DoH doesn't prevent filtering. Sure, it doesn't respect OS- or network-enforced settings, but that's kind of the whole point of its existence.
> ...DNS for centralised network management.
DoT (and DNSCrypt?) would arguably be the better alternatives in these cases.
So those use cases are absolutely being addressed, at least in DNS's case.
> I really wonder if the IETF's contribution is political: DoH on its own is highly so...
I think DoH and ESNI are exactly the kind of response expected from a tech firm like Mozilla: fight for its users and invest in the right technology to help keep the Internet global despite the many efforts to dismantle it, not just by governments but by big tech in equal measure. In fact, it is big tech that sells "cyber security" solutions to these governments in the first place: a trillion-dollar market opportunity [0].
Look no further than the ITU to see how anti-user, pro-corporate, pro-government politically influenced technology standards can be [1]. Things might take a steep dive given Google's [2] and Facebook's [3] recent inroads into the communications industry. This pro-user political engagement from Mozilla et al is more than welcome, in my opinion.
I think they're different - RFC 8890 is a particular position statement, and this blog post discusses why they decided they needed to publish it, some specific cases (e.g. the discussion about Parliament) that led up to it, and their decision-making process in general. RFC 8890 itself doesn't mention that, nor does it specifically mention things like DoH and China blocking it.
It's good to put a stake in the ground for the IETF to create clarity about what they do and why. Countries like China, the US, Australia, and the UK are of course interested in controlling the flow of information for political, ideological, military and other reasons. Therefore they are interested in controlling what the IETF does and how. That's the nature of any big country or entity (including companies). E.g. Apple kicking competitors out of the app store is exactly the same kind of behavior. Whether that's good or bad is a separate discussion.
The reason the internet exists at all is that it provided a way around, and relief from, this level of control, and that it allowed independent innovation to happen. In a way this was a happy accident, because a lot of the funding for the R&D behind it came out of cold war era defense projects which were very much not about the free flow of information. So it would be a stretch to say this is working as intended. But it was successful exactly because of that, and the IETF brought together the right parties to ensure it kept enabling this.
More strict networks ultimately failed to compete because users and companies found that they needed to tap into this free flow of information. The fact that countries like China, Russia, North Korea, etc. are on the internet at all (grudgingly and with all the restrictions that they try to enforce) is because they need to be. They are also part of the world economy for the same reason. And as much as they'd like to tell others what the value of their own currencies should be relative to the dollar, renminbi, euro, etc., they have no choice but to leave that to market forces. Of course that doesn't stop them from trying to influence and manipulate those forces, nor does it stop anyone else from doing so. But ultimately it leads to either hyperinflation or people swallowing their pride and accepting or paying in dollars for the stuff they buy (e.g. oil) or sell on the international market. The internet is the same; it's a take-it-or-leave-it kind of thing. Either you are on it or you are not.
It is a bit of neutral ground where mutually hostile entities looking to control, monitor, and manipulate each other can conduct their business (some of it very bad business). It works precisely because things like HTTPS prevent man-in-the-middle attacks and certificates keep us honest (to some extent). All the other technical tooling the IETF standardized to ensure third parties don't mess with private interactions between two parties trying to have some meaningful exchange is there for just that reason. Of course the tools are far from perfect, and there's an arms race between intelligence agencies to exploit loopholes, bugs, design flaws, etc. The IETF's role is to address these issues, not to create more of them to enable authoritarian regimes to gain an edge over other nations or their own citizens.
So DNS over HTTPS is a good thing and exactly the right thing for the IETF to be standardizing, because it fixes a problem with what they previously standardized (i.e. DNS). It's controversial because it takes away power from countries actively abusing that power. But the fact that others are exploiting protocol weaknesses in DNS is fundamentally a bug and not a feature. You could argue DNS over HTTPS is not a perfect solution, or that better solutions exist, but not that the role of the IETF is to enable the Chinese to have a say in what people (in or outside China) look up via DNS. That's not their role. If the Chinese, the Australians, etc. want to block people who choose to configure their browsers to use this (or browsers that do this by default), that is of course their right. But enabling that kind of self-isolation is not a core IETF function.
> It's controversial because it takes away power from countries actively abusing that power.
I'd say it's controversial because it breaks a wide range of setups and features that rely on DNS, taking power from local administrators and handing it to Google, Cloudflare et al.
If the only thing it did was prevent governments from spying, there'd be a lot less fuss about it.
Local administrators can still configure their own machines to speak any sort of DNS they like to their own servers. Indeed, the default AD setup used by many such admins does precisely this.
DoH is a great tool for end users to be able to disregard their pipes and preserve their own individual privacy. An ISP network admin is not the administrator of the private customer machines that are connected downstream.
I can no longer configure "my machine" to use DNS a specific way; I have to configure my system plus a number of specific applications to use DNS the way I want. Splitting this configuration into multiple places is bad.
In an enterprise/commercial setting, there are valid reasons for filtering employees' use of DNS (e.g. legal compliance).
I can no longer install e.g. Pi-hole to filter out ad servers globally for all devices on my network.
These are some of the issues caused by DoH; I'm sure there are others.
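To make the split concrete, here's a minimal sketch (Python 3, assuming Cloudflare's public JSON DoH endpoint is reachable; it's just one illustrative resolver) of the same name being resolved two ways: once through the system resolver, which honours /etc/resolv.conf, nsswitch and any Pi-hole on the network, and once through an application-level DoH lookup that bypasses all of that:

```python
import json
import socket
import urllib.request

NAME = "example.com"  # illustrative; a locally blocked ad domain shows the difference best

# 1. The OS path: libc getaddrinfo(), which respects resolv.conf / nsswitch / local filtering.
system_ips = sorted({info[4][0] for info in socket.getaddrinfo(NAME, 443, proto=socket.IPPROTO_TCP)})

# 2. The application path: a direct DoH query that never consults local DNS settings.
req = urllib.request.Request(
    f"https://cloudflare-dns.com/dns-query?name={NAME}&type=A",
    headers={"Accept": "application/dns-json"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    answers = json.load(resp).get("Answer", [])
doh_ips = sorted(a["data"] for a in answers if a.get("type") == 1)  # type 1 = A record

print("system resolver:", system_ips)
print("DoH resolver:   ", doh_ips)  # a local block or override only shows up in the first line
```

If the two lines disagree, that's exactly the "configuration split into multiple places" complained about above.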
> Splitting this configuration into multiple places is bad.
This is because OSs are ossified. They need to move along or get left behind, and they decided to be left behind.
I'd also remind you that DNS wasn't all that configurable even before: I've seen plenty of devices that ignore the DHCP-provided DNS, NTP and domain settings, even though they should respect them. DoH is no exception to the kind of behaviour that already exists.
> This is because OSs are ossified. They need to move along or get left behind, and they decided to be left behind.
This type of position has the veneer of forward thinking, but it is precisely backward. The core value of an OS or a network protocol is consistency. Do not break things that work. Do not change what doesn't need changing.
> Do not break things that work. Do not change what doesn't need changing.
Exactly. Things don't work, and the only way to fix that is to change them. It has a few negative consequences, but they're already here, and standing still won't make it better. If OSs start offering the features needed, the control will return. But as long as some people are dragging their feet, throwing tantrums, and ignoring the large majority for whom it's a positive change, it's going to be a tiresome, long process.
> I've seen plenty of devices that ignore the DHCP-provided DNS, NTP and domain settings, even though they should respect them. DoH is no exception to the kind of behaviour that already exists.
Wait, your solution to buggy devices is to treat the bug as acceptable and deploy it more widely?!
Those issues aren't caused by DoH but by your OS tooling. In enterprise settings you can and should configure this once and replicate images for everyone. And you can of course configure your own software to use your own DNS server.
DoH just prevents you from hijacking someone else's DNS.
Those same companies, Google included, intentionally break IP filtering too, through fancy proxying. They do this to make it impossible for local admins to pick and choose among their services.
The internet supposedly exists because it provided a more reliable way to order nuclear missile launches.
DNS over HTTPS is controversial because the current implementations take power away from the end users: they move the resolver out of libc, whose configuration (and often implementation) is controlled by end users, and into particular applications, where the user is lucky if the app lets them adjust the parameters at all, let alone through a stable and intuitive interface. (They pretty much have to, because cramming HTTP and TLS into libc is a bit crazy.) Furthermore, it concentrates power and data in the hands of a couple of organizations whose resolvers were previously optional. At this point I'm not entirely sure I trust Google any more than most foreign governments, or even my own.
Nonsense. End users have the power to configure their browsers just like they have the power to configure their DNS settings, their operating system and all the rest. Of course very few people do. Mostly, that power requires something most users don't have: the knowledge that they need to do this, and how to do it. The people actually capable of doing this lose no power whatsoever.
You seem not to trust Google. Nobody is saying you should or must. Other DNS servers are available. Set up your own if you must; you have that power. If you are smart enough to know how to set up DNS now, you should be able to figure out how to configure it to use HTTPS. It's not rocket science.
If not, maybe it's a good thing that your browser stops blasting your DNS queries unencrypted over a public network to absolutely anybody who can be bothered to listen in. That includes all the three-letter security agencies you can name (domestic and foreign), big ad-driven companies, your local police, and everybody else, with or without the cooperation of your friendly neighborhood operators or the arm-twisting of the incompetent politicians representing whichever government governs wherever you live. The best you can hope for to protect you from that is a combination of incompetence and indifference.
> Nonsense. End users have the power to configure their browsers just like they have the power to configure their DNS settings, their operating system and all the rest.
This power is illusory. If Google decides Chrome will no longer allow changing DNS providers, it's game over for 60% of internet users. Consider how it's no longer possible to block ads the way you want to in Chrome with Manifest V3.
True - but I think this is not the fault of DoH or the IETF. On a technical level, the power stems from Google's ability to auto-update Chrome with whatever logic they see fit. On a social level, maybe from Chrome's market share and users' acceptance of that power.
Even without DoH, Google could just as easily have decided to hardwire DNS for Chrome to 8.8.8.8 or to switch Chrome to their own home-grown proprietary name resolution protocol. They don't need a public standard for that.
> Nonsense. End users have the power to configure their browsers just like they have the power to configure their DNS settings, their operating system and all the rest.
It is already not possible to avoid DoH to Google servers with a Chromecast.
> Of course very few people do. Mostly, that power requires something most users don't have: the knowledge that they need to do this, and how to do it.
Yes, which means that effectively they don't have the power.
Agree, the internet was NOT created as a missile command and control system.
Baran's seminal work on packet switching was a RAND C2 study whose concepts were rejected by the extant defence contractors as impractical. When DARPA later wanted to interconnect its collection of computers to create ARPANET for researchers, Baran's theoretical ideas were dusted off and the rest is history.
> It was from the RAND study that the false rumor started, claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET, but was an aspect of the earlier RAND study of secure communication. The later work on internetworking did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.
I was simply pointing to original source docs from one of the co-inventors of packet-based switching, on which the Internet is based. There's a larger series of Baran's works at RAND:
In 1962, a nuclear confrontation seemed imminent. The United States and the Union of Soviet Socialist Republics (USSR) were embroiled in the Cuban missile crisis. Both were in the process of building hair-trigger nuclear ballistic missile systems. Each country pondered post-nuclear attack scenarios.
U.S. authorities considered ways to communicate in the aftermath of a nuclear attack. How could any sort of “command and control network” survive? Paul Baran, a researcher at RAND, offered a solution: design a more robust communications network using “redundancy” and “digital” technology.
I think it's worth bringing up at this point that the famous "The Net interprets censorship as damage and routes around it." quote is true only for layer 3 of the ISO/OSI stack. The Internet as most people understand it is layer 7, and is incredibly centralized.
Most DNS resolution libraries will read the libc resolver configuration (/etc/nsswitch.conf) and attempt to parse it into something they understand, and potentially fall back to it (for example, if you're doing non-DNS resolution like mDNS).
If you'd like to standardise a way to require DNS-over-HTTPS in supporting software, and convince operating system distributors to generally ship it, I'm fairly sure Firefox would be up for using that rather than its current mechanism, similar to how it uses system proxy settings. As it is there is no such standard.
> If you'd like to standardise a way to require DNS-over-HTTPS in supporting software, and convince operating system distributors to generally ship it, I'm fairly sure Firefox would be up for using that rather than its current mechanism, similar to how it uses system proxy settings. As it is there is no such standard.
There already is a standard way for me to tell all the programs which run on my system which DNS servers to use: /etc/resolv.conf. Any program which does not respect the values I set there is disobeying me.
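For what it's worth, here's a tiny sketch (Python 3, assuming a conventional glibc-style resolv.conf) of reading that single source of truth. The libc resolver does this, and much more, for every program that calls getaddrinfo(); an application with its own built-in DoH client simply never opens the file.

```python
from pathlib import Path

def resolv_conf_nameservers(path: str = "/etc/resolv.conf") -> list[str]:
    """Return the nameserver addresses listed in a resolv.conf-style file."""
    servers = []
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and surrounding whitespace
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers

print(resolv_conf_nameservers())  # e.g. ['192.168.1.1'] on a typical home LAN
```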
And so we get incredibly far away from the very laudable goal of protecting Internet users - the majority of who primarily use a web browser - by default from malicious DNS servers.
There isn't one today, aside from running your own DNS server, and you can run your own DNS-over-HTTPS server. My understanding is that Firefox still respects /etc/hosts in DNS-over-HTTPS mode, too. I'm unaware of any such tooling that installs a glibc resolver stub.
RFC8890 explicitly refers to "the interests of that child's parents or guardians" when the child is using a web browser, although individual system configuration is not within the remit of the IETF.
RFC 8484 makes no reference to how DNS-over-HTTPS should be configured, and in the face of widespread DNS hijacking, enforcing DNS-over-HTTPS in browsers may have been the correct solution - but that doesn't mean we can't do better by defining a standard (perhaps under the remit of the Free Desktop XDG group), encouraging operating system vendors to ship secure DNS configured by default, and then convincing Firefox et al to use that standard.
Sure, because big tech pretends to care about privacy and freedom of speech but only insofar as it results in them cutting out every layer of control between their servers and the end users they would like to own. So everything will be tunneled over opaque HTTPS proxies and there will be no facility for people to filter anything - even end users. Apple, for instance, pretends to care about user privacy and control but refuses to offer a configurable IP firewall in iOS or end-user control over name resolution for filtering.
Yes, but is there a facility to force the browser to use your server, or will Google rewrite Chromium to "bypass" "rogue" DoH servers which filter content Google doesn't wish them to filter?
So if an end user tries to request a page from this site without using SNI, instead of just an HTTP error code they get this lovely little piece of snark:
This Web site requires a more modern browser to operate securely; please upgrade your browser.
The truth is it just requires the SNI server_name in the ClientHello. I don't send SNI except on sites that require it. Not every site is on a shared IP; some still use dedicated IPs. So here I am, the end user, reading HTTP headers.
I always thought this additional "message" from mnot.net was somewhat presumptuous for an IETF person who claims to be some sort of user advocate. This is the kind of thing I usually see from web developers telling you to "upgrade" your browser or turn on Javascript for the best "user experience", not from IETF folks. Maybe it's a default on some server software, not custom.
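Whether SNI is sent really is purely a client-side choice. Here's a rough sketch (Python 3; www.mnot.net is used only because it's the site under discussion, and its behaviour may of course change) that makes the same request with and without the server_name extension and prints the first line of whatever comes back:

```python
import socket
import ssl

HOST = "www.mnot.net"  # the site discussed above; any HTTPS host works
PORT = 443

def first_response_line(send_sni: bool) -> str:
    ctx = ssl.create_default_context()
    if not send_sni:
        # Python only sends SNI when server_hostname is given, and the default
        # context refuses to omit it unless hostname checking is disabled
        # (the certificate chain itself is still verified).
        ctx.check_hostname = False
    with socket.create_connection((HOST, PORT), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST if send_sni else None) as tls:
            tls.sendall(b"GET / HTTP/1.1\r\nHost: " + HOST.encode()
                        + b"\r\nConnection: close\r\n\r\n")
            data = tls.recv(256).decode("latin-1")
            return data.splitlines()[0] if data else "(connection closed)"

print("with SNI:   ", first_response_line(True))
print("without SNI:", first_response_line(False))  # may fail, or serve the "upgrade" page
```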
His site does TLS1.3 but it doesn't do ESNI/ECH so I guess some browsers (or other clients) can be a little "too modern".
In practice, I reckon you might be the only person in the world - or maybe one of a dozen - who has this problem. And in that context that's not snark at all, it's an accurate description of what is most likely the cause of the error.
I don't see how not sending SNI reduces leaking. If the server has only one hostname associated with it, it's trivial to recover that hostname from the IP with a reverse DNS lookup. This issue is discussed in detail in multiple drafts about SNI encryption.
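As a quick illustration (Python 3; the hostname is just the one under discussion, and whether it has a PTR record is an assumption, so the lookup may well come back empty), this is all a passive observer needs once they've seen the destination IP of a single-tenant server:

```python
import socket

HOST = "www.mnot.net"            # single-tenant example from the thread
ip = socket.gethostbyname(HOST)  # what the observer already sees in the packet headers

try:
    name, aliases, _ = socket.gethostbyaddr(ip)  # reverse (PTR) lookup
    print(f"{ip} -> {name} {aliases}")
except socket.herror:
    # No PTR record: an observer would fall back to passive DNS logs or
    # certificate scans covering the same IP.
    print(f"{ip} has no PTR record; other public data sources still tie it to the site")
```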
It requires a more modern browser to operate securely; it says nothing about whether it is perfectly secure against all threat models, and this particular website is hosted on the same IP as at least one other so requires SNI to operate securely indeed, and will hopefully in future support ESNI as the software implements it.
BTW - if this site did not use SNI, and only served one website at https://www.mnot.net, then connecting to www.mnot.net's IP address at port 443 would be plenty good evidence that you're accessing https://www.mnot.net. The SNI privacy leak only really comes into play when you're accessing something behind a CDN or a shared hosting provider, which put enough sites behind each IP that simply accessing that IP isn't a privacy leak in itself.
The website supports neither the more secure ECH nor the more secure option of just not using SNI at all. It's the website that doesn't support a fully secure connection, and it shouldn't be blaming the user agent for that.
Encrypted Client Hello isn't even at Working Group Last Call yet, let alone actually published, so it won't make sense for most people to offer whatever the current draft state is.
Not using SNI isn't a more secure option. By definition you can't be doing standards-compliant TLS 1.3 - which would be the most secure option - since sending SNI for host names is mandatory in TLS 1.3.
Also, in assuming that multiple hosts can instead be distinguished up in the HTTP layer, you actually make some things less secure. The TLS session binding doesn't apply to the application layer, so when you write Host: www.example.com in HTTP, that does not bind your TLS session to the name www.example.com, whereas if you send SNI for www.example.com the remote server needs to actively decide, by policy, whether it's safe not to bind that. Are you actually going to get exploited this way? Probably not, but then neither are bad guys particularly anxious to find out which of the services on Mark's server you wanted.
Without any tunnelling like a VPN or Tor, and looking only at MITM vulnerability, the safest option would be to have several unrelated services share one certificate.
This would in theory ensure that any attacker could only assume the client is accessing at least one service on the target machine.
Setting aside the obvious risk that one of the services could claim to be one of the others, this obviously comes with some other technical limitations.
Again, it's not like the site is doing anything wrong; it just shouldn't be blaming the user for what is obviously a technical limitation of the technology being used.
SNI doesn't make the TLS handshake process any more or less secure. The ServerHello and Certificate messages sent by the TLS server are still sent unencrypted.
"The ServerHello and Certificate messages sent by the TLS server are still sent unencrypted."
RFC 8446, page 7:
"All handshake messages after the ServerHello are now encrypted. The newly introduced EncryptedExtensions message allows various extensions previously sent in the clear in the ServerHello to also enjoy confidentiality protection."
I suspect that the message is associated with a class of unsupported historical SSL/TLS features.
The truth is that Server Name Indication (SNI), the ability to operate multiple hostnames with SSL/TLS on a single host, is one of the few backwards-incompatible features of the modern web.
The efficiency and flexibility of container and virtual machine server instances offered by Infrastructure-as-a-Service providers like Amazon AWS make the support of non-SNI clients a difficult proposition.
> Because of the global nature of the Internet, it wouldn’t be possible to pursue a bilateral or regional style of governance; decisions would have to be sanctioned by every government where the Internet operates. That’s difficult to achieve even for vague statements of shared goals; doing it for the details of network protocols is impractical.
In all honesty, maybe that would be the better alternative, if the only other option is US tech companies deciding unilaterally for the whole internet.
US-based tech companies are making those decisions for the whole internet because internet users are voting for US-based tech companies' influence with their time, attention, and clicks. It's open and democratic.
This RFC makes no mention of the root of the problem: the Federal Reserve system.
In our fiat monetary system, all new money that enters the economy comes from banks as credit; companies then compete against each other to earn the biggest possible share of that newly printed credit. This means that new money doesn't originate from consumers (so-called end users); all new money originates from institutions... So why would companies care about end users (consumers) when all the new money they get actually comes from financial institutions? Why not get money straight from the source: cater to the paymasters and use consumers as pawns?
These days it makes more financial sense to manipulate consumers to please financial institutions than to manipulate financial institutions to please consumers... The dynamics are out of whack. That's why we need UBI urgently: make consumers the source of all new currency, which would force big companies to forget about cheap institutional money and focus on consumers. Let consumers be the paymasters of the economy.
This game of allowing banks to decide who gets access to cheap credit and who doesn't is inevitably going to lead to the kind of corruption we've seen over the past decade.
The IETF strives to create the best technical solution for given specifications. That's its task. Where those specifications come from is a related but separate issue.
Seems like the IETF's mission statement disagrees with you:
> The Internet isn't value-neutral, and neither is the IETF. We want the Internet to be useful for communities that share our commitment to openness and fairness. We embrace technical concepts such as decentralized control, edge-user empowerment and sharing of resources.
> 'Best' doesn't mean the choice that has the most companies supporting it; it's the one that has the best technical arguments behind it.
> If the IETF were to only decide upon technical merit, how would it balance these interests?
> Over the years, these questions have become increasingly urgent, because it isn’t viable to make decisions that have political outcomes but explain them using only technical arguments.
The article explains that that is necessary but not sufficient. It goes on to argue this with examples and explicit justifications. It even has an entire section dedicated to explaining why having it your way would not work.
I'm not sure where your comment is coming from, though. Who are you to decide how they will do their work? It seems they have decided for themselves, and it's in conflict with what you want. And we don't know why you want this, because you haven't provided any reasoning, reference to authority, or other clues.
From my perspective, the IETF and their standards don't exist in a vacuum and their actions have consequences, sometimes of an ethical nature. They seem to agree. It seems prudent that they come to a consensus on what their ethics are, document them and try to abide by them. It's just a case of having a consistent and open policy on a matter which can't be avoided. That is, they will do right or wrong whether they document their intention or not since their actions sometimes have such ethical consequences. Why stick one's head in the sand?
Hard to take someone saying "the Internet is for End Users" seriously when they're working to implement QUIC, a purely corporate needs fulfilling step backwards for human people.
I'd love for you to elaborate on why you believe QUIC / HTTP/3 is a "purely corporate need". My understanding of the technology is that it represents an encrypted-first approach to HTTP, improves performance vs HTTP/2, and enables easier integration of technologies like ESNI.
Hard to take someone saying what you're saying seriously, when it's given in such a crass and uninformative manner.
QUIC is a vastly superior transport to TCP in every technical regard. What exactly makes it a "purely corporate needs fulfilling step backwards for human people", if you don't mind adding some context to this extremely vague statement/opinion?
QUIC is what happens when corporations define standards so their own services can be run cheaper. The transport layer shouldn't be 'aware' of what it is transporting. It shouldn't be a gigantic heap of things, even if those things in the heap sound nice (like encryption baked in). It is already impossible for anyone but Google (or another mega-corp) itself to make a browser that works with "modern" sites. This piles on to that.
It is, despite the partial liberation from Google itself by the IETF, still a Google attempt to control what HTTP and the web are. Google should not. Protocols should be generic. Layers should be layers, not just squashed all into one so YouTube videos load faster on your mobile phone. There's more to the web, and the internet, than fixing mobile latency issues.
This sounds like a different variant of the "rampant layering violation" argument that was used against ZFS. It is sometimes healthy to challenge old assumptions. If QUIC turns out better than what we have in every respect that matters "for the end user", as this article is about, what would be the harm?
I really don't get where this argument comes from. QUIC is a protocol hosted on top of UDP, which is content-neutral. There's no layering violation, since all of QUIC sits at the application layer. And QUIC takes a generic, content-neutral payload that you can build on top of.
Building TLS into the protocol rather than having a plaintext protocol wrapped in a generic TLS stream isn't that weird -- it's how MySQL does it.
So nothing but FUD and ignorance, as expected... If you wanna get angry at Google, get angry at removing URLs from Chrome, not at the good things that they do.
> QUIC is what happens when corporations define standards so their own services can be run cheaper
1) Google is not the only one who would be able to run their services cheaper. That is as plain a fact as it could be. Do you think Google is the only company who needs to serve massive amounts of content? In what way is energy and computation efficiency "harming" the internet?
> The transport layer shouldn't be "aware" of what it is transporting
2) In what way is QUIC "aware" of what it's transporting?
> It shouldn't be a gigantic heap of things
3) QUIC is not a "gigantic heap of things" any more than TLS or SCTP is. It is an L7 protocol hosted on UDP, just like TLS is an L7 protocol hosted on TCP.
> It is already impossible for anyone but Google (or another mega-corp) itself to make a browser that works with "modern" sites. This piles on to that
4) This has nothing to do with browsers, and QUIC has nothing that relates exclusively to browsers. Also, the idea that competing in the browser space will be harder because you need to implement a new transport is laughable. Browser complexity derives from HTML/CSS/WebApps/etc, and QUIC is not only insignificant by comparison, but also a much less volatile feature that won't need to be "evolving" all of the time.
> Protocols should be generic
5) QUIC is absolutely, 100% generic. I'm not sure where you're getting the idea that it isn't.
> Layers should be layers
6) Except when the layers are horribly misplaced, of course; then you get into the situation we're in with TCP, where we have hard-coded definitions for how flow control will work during a given session, what the session is, and how you always need a new handshake to start a new one even on the same host, and then you need to work around all this bullshit by starting a crapton of TCP connections to multiplex streams, like HTTP/1.1 does (see the sketch after this list)...
7) In what way is QUIC not a layer?
8) That is a horrible argument. Nothing is keeping the IETF from standardizing QUIC at the same level as UDP/TCP/SCTP; you just have to look at IPv6's and SCTP's adoption to understand what happens when you try to change a protocol that deep in the stack.
> There's more to the web, and the internet, than fixing mobile latency issues
9) Sure... if you ignore that, unlike when TCP was standardized, mobile devices represent 50% of all publicly routed internet traffic, and that all of these devices are dropping packets, triggering retransmissions, and congesting the network for everyone (including non-mobile devices), then there is no problem at all!
That is also known as the tactic 5 year olds use when faced with a problem: closing their eyes and pretending it doesn't exist.
10) I'm not sure where you got the impression that QUIC only brings benefits to mobile latency, but that is also patently false.
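Regarding point 6, here's a small stdlib-only sketch (Python 3; example.com and the paths are purely illustrative) of the HTTP/1.1 workaround being described: because one TCP connection gives you a single ordered byte stream, a client that wants parallelism opens several connections, and therefore several TCP+TLS handshakes, to the same origin. QUIC instead multiplexes independent streams over one connection, so a lost packet only stalls the stream it belongs to.

```python
import http.client
from concurrent.futures import ThreadPoolExecutor

HOST = "www.example.com"          # illustrative origin
PATHS = ["/", "/a", "/b", "/c"]   # hypothetical resources to fetch in parallel

def fetch(path: str) -> tuple[str, int]:
    # Each worker opens its own TCP+TLS connection: four paths, four handshakes.
    conn = http.client.HTTPSConnection(HOST, timeout=10)
    try:
        conn.request("GET", path)
        return path, conn.getresponse().status
    finally:
        conn.close()

with ThreadPoolExecutor(max_workers=len(PATHS)) as pool:
    for path, status in pool.map(fetch, PATHS):
        print(path, status)
```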
Various non-web companies are involved with QUIC work at IETF too, so it has to have some points in the "generic" column, especially since IETF-QUIC separates out the HTTP-replacement layer.
I'm no fan of Google's self-aggrandizing and narcissistic drive to divert internet standards towards its own needs and preferences (see also: squatting the A record with HTTP like it's the default service, an offence as egregious as QUIC and DoH in my book, along with their membership of the conspiracy to assassinate XMPP), but still, let's play the ball, not the man.
For example: The task force MUST prioritize... the user MAY expect...