> To be honest we didn’t get this right with GOV.UK Verify. We have an opportunity and obligation to do much better this time.
This is a misleading paraphrasing of the article. With the full context, it becomes clear that it's specifically inclusiveness / accessibility that they're saying they didn't get right:
> Inclusion is a hugely important part of our work, because anyone should be able to prove their identity to access government services, and it’s often the most vulnerable people who are at most risk of being excluded. To be honest we didn’t get this right with GOV.UK Verify. We have an opportunity and obligation to do much better this time.
This means that the rest of your comment doesn't follow:
> Something of an understatement. The project was red-flagged as 'undeliverable' in 2019, after spending £154m
The second sentence could be right or wrong, I don't know. But the first half doesn't make any sense in the (actual) context of what you're quoting. If it's an understatement, then that means they got accessibility very wrong.
Only £200m is impressively low for this type of failure. Canada's federal government is on track to waste more than 2.2 billion dollars on a pay system that does not work and needs to be replaced with another billion dollar system ASAP.
In Czechia we have a system where all sellers have to log bills to prevent tax evasion. IBM also provided the system. It cost ~$15m to set up and ~$15m/year to run.
Given the rate of millions of bills submitted per day, of which the government only gets the total price and taxes, you're looking at something like a hundred bills per second.
I could scale gunicorn+sqlite on my 2014 MacBook Pro to receive this rate of requests with a payload that fits inside a single TCP packet. Sure, there is auth/backups/analytics... etc. Yet I still just do not see how IBM could charge the extra $14m to set that up...
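To put that rate in perspective, here's a minimal sketch of the receiving end - the schema and field names are invented for illustration and have nothing to do with the real Czech system - showing that even single-threaded Python with in-memory SQLite clears the ~100 bills/sec target comfortably:

```python
import json
import sqlite3
import time

# Hypothetical schema: just the totals and taxes the government receives.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE bills (
    seller_id   TEXT,
    total_cents INTEGER,
    tax_cents   INTEGER,
    received_at REAL
)""")

def record_bill(payload: bytes) -> None:
    """Store one submitted bill; the payload fits in a single TCP packet."""
    bill = json.loads(payload)
    db.execute("INSERT INTO bills VALUES (?, ?, ?, ?)",
               (bill["seller_id"], bill["total_cents"],
                bill["tax_cents"], time.time()))

payload = json.dumps({"seller_id": "CZ12345678",
                      "total_cents": 25900,
                      "tax_cents": 4500}).encode()

start = time.perf_counter()
for _ in range(10_000):
    record_bill(payload)
db.commit()
elapsed = time.perf_counter() - start
print(f"{10_000 / elapsed:,.0f} bills/sec")  # far above the ~100/sec rate
```

Obviously the real system's cost lives in the auth, redundancy and compliance layers, not the insert path - but the insert path itself is trivial.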
Based on my limited experience in govt projects, there is a lot more to such implementations than meets the eye. The number of backend systems the application needs to integrate with, the long list of compliance requirements that adds complexity to the development process and setup, the redundancies that need to be set up at different levels, and the odd technical requirements mentioned in the RFP, to name a few, can all bloat the cost and implementation timeline. Now add a dependency on any parallel development or upgrades to the mix and you have a very tough project to execute.
I can guarantee you that there’s like 1000 different idiotic rules that are applied to each of those items. They probably need to be stored on servers behind ballistic glass, and cooled with helium.
I’m not saying IBM is not still charging the government 14x more than necessary, but at least some of the expense is likely their own fault for having ridiculous requirements.
The German motorway toll system which was planned to take less than a year to roll out and ended up taking more than two years longer than planned is also a good example.
I don't know how it worked out in terms of budget, but the company ended up in a dispute over billions of euros in damages, mostly from lost revenue.
Here's a thought experiment: suppose that you have a fledgling government somewhere that needs to digitize and can start fresh with the modern day technologies, as opposed to having their hands tied with millions of man hours that have been previously spent creating legacy code.
Suppose that they decide that every single governmental system will use the same tech stack:
- front end: Vue and JavaScript, or whatever's popular and easy to use
- back end: Java with Dropwizard, or whatever has decent performance, decent maintainability (static types) and isn't too slow to write code for by the average developer (as opposed to something like Rust or C++)
- database: PostgreSQL, or whatever is suitable for its needs (though open source, so otherwise MariaDB could be considered)
- infrastructure: x86 servers, all running something like Ubuntu LTS or another popular, relatively stable distro
- communication: REST everywhere with OpenAPI, due to their abundance
In addition, perhaps there are some demands for the processes themselves:
- all of the code must be open source
- all of the discussions about the code and requirements must be public (a la GitLab's approach)
- all of the services must follow 12 Factor App principles, have fully automated CI and must run in containers (say, Docker or anything OCI compatible, with a simple orchestrator, like Nomad or K3s)
- all of the non-trivial services (e.g. files, reports etc.) must be separate, so not exactly monoliths, but not necessarily Netflix level of microservices
- all of the services must have at least 75% method code coverage, separation of concerns, all methods longer than X lines must have comments explaining what they're doing and *why*
- all of the services must use dependencies that are not older than X months, checked weekly
- all of the above can be checked by a bunch of shell scripts and the CI pipelines will fail if the goals are not met
So, with all of that in place, how could any of that fail to be better than our current approach of outsourcing, developing closed-source projects poorly, not knowing whom to hold accountable, and not learning anything from these failures due to a lack of post-mortems? Would a strictly defined and consistent tech stack be a good choice or a bad one? What about the job market, and running a single stack in prod across numerous systems? What about mandating low-level technological decisions, as opposed to trying to sell everyone on some abstract business framework that has nothing to do with the end product?
Short of powers that be wanting to line their pockets, for what other reasons would the above fail? Why wouldn't open source and strict limitations of what can be developed and how it must be developed work?
Now, they're planning on writing a new system instead, however it feels like they'll probably make the same mistakes, due to numerous companies having had their hand in the previous system's development, with no good leadership and bad technical implementation: https://www-lsm-lv.translate.goog/raksts/zinas/zinu-analize/...
Surely there are both social and technical decisions that can be made to make projects more successful?
I think if you have a look at the service manual[0] you'll be surprised. New code is open by default (unless there's a security risk) and while the stack beneath the surface isn't predefined the frontend is[1]. There's also the GOV.UK PaaS[2] that is geared towards making a fully automatic pipeline an easy process - it allows anything from static sites to Docker containers to be deployed with just a few lines of config.
All services need to go through an assessment where their openness, usability, accessibility and security are reviewed.
It's a modern, sensible approach that's constantly being improved.
Yes, it probably is only the beginning. The Finnish health digitalisation project Apotti, which currently covers only 10 or so municipalities, is already said to have cost over 700 million euros. On here we always point to the Baltic countries and say look what they can achieve with only a few tens of millions.
If you make a system that is secure and flexible enough, you then work with other stakeholders to integrate with it over time. They each come up with a roadmap to migrate their systems to support your protocol.
The key is to skillfully negotiate the budgets and politics between teams, so as not to get stuck building the next healthcare.gov.
Sounds like quite a naive statement. The scope of this project is huge. There are so many different systems at play here. Some are so legacy they go back to Tudor times.
The problem in these environments is often not technical, but human, full of egos, people without competence wanting to be involved in every discussion and generally people talking and evaluating systems based on specs for a year without actually testing things.
For every one of the systems you mentioned there is a fairly straightforward solution. And even if you can't solve them all you could gradually migrate everything.
Oftentimes, because management wants to push liability to a private party, they don't want to invest in local dev teams, but instead want to buy a big enterprise solution.
The VA, NHS and the German systems all have these issues, but it's not because they're public institutions, it's because they work like enterprise.
The GDS have now become what they were set up to avoid. That is, they were supposed to be an agile startup to avoid massive overspend on untenable projects like NHS digital [1].
What I've noticed is that "new style" government projects like the GDS seem to be suffering from using too many "innovation tokens" [2], probably due to the much cited lack of leadership.
It's not nice to be in a situation where you are stuck on a legacy Java 7 platform, and nothing else, but equally, it's not nice to be in a situation where you are trying to support and context-switch between too many stacks. One is too conservative, the other is chaos.
If there's no leadership, there's no-one to make decisions to limit proliferation of new technology. This might be unpopular for some developers looking to improve their CVs, but necessary for the organisation as a whole.
Having done some government contracting (and I know government contractors have a reputation for being expensive and clueless, but I know I've worked with some sharp folks) I think one of the challenges here is government's demand for a 100% solution. That sort of runs orthogonal to the typical agile/startup approach. This sounds like a project where saying "let's serve the 70% of easy users to get our foot in the door" isn't an option. This is a government service that apparently all UK residents are entitled to. That's a daunting scenario for even the most agile.
I don't think it was the cause of failure for the Verify project.
But I do think it is the case at the GDS, as evidenced by the comment on this page:
> GDS is full of mostly django based systems, some legacy ruby and that's just the stuff from the last few years.
> Going back further there are various C# systems, the amount of separate systems is staggering, it's not too surprising costs add up
And this blog post:
> ...quickly agreed that rather than try to settle on a single one of those we'd build each tool using whichever technology would most quickly get us to a relatively-durable prototype, and then "federate" them. We started with the python-based framework Django for the Department pages, added Ruby on Rails for a suite of tools focussed on specific tasks, and used Sinatra (another ruby framework) to glue together our search.
Keep in mind that alphagov is just a small part; most projects belong to other govt departments. Many are using JavaScript and Rails, some Django and .net.
I currently work at GDS, and I don't recognise what you're describing. There are lots of problems, but Java 7 doesn't have much to do with any of them.
Reading documents like passports with an NFC reader does indeed work, and does indeed produce verifiable material. Specifically the passport has proof (via a digital signature) that it was issued by a specific authority, and in turn, proof that the contents of the passport (name, date of birth, a picture and so on) are as issued.
But, the problem here is that the issuer is the British government, so, what are you proving? "Here, you issued this passport". "Oh yes, so we did". I presume the British government does own a database of the passports they issued, so this isn't news to them.
A modestly smart device, such as a Yubico device, is capable of providing fresh proof of its identity. My Security Key doesn't prove "the security key that enrolled with GitHub in fact exists" (which would be redundant) but rather "I am still the same security key that you enrolled". The passport can't do that: your passport is inert, and the fact that Sarah Smith existed isn't the thing you presumably want to prove to a single-sign-on service. You want to prove that you *are* Sarah Smith, something the passport doesn't really do.
I think the GDS ignores this problem, which is to be fair no worse than lots of other systems, but the result isn't actually what it seems to be, all the digital technology isn't actually proving anybody's identity in this space.
It reminds me of the bad old days of the Web PKI where it was found that the "email validation" being used would accept automated "virus checking" of email. A CA sends the "Are you sure you want to issue a cert for mycorp.example?" message to somebody@mycorp.example and even though Somebody is on vacation in Barbados for two weeks, the automatic "virus" check reads the URL out of the email, follows it, ignores the page saying "Success, your certificate has been issued" and passes it to Somebody's inbox... All the "security" is doing what it was designed to do, but, what it was designed to do isn't what it should have been designed to do, and so it's futile.
>But, the problem here is that the issuer is the British government, so, what are you proving? "Here, you issued this passport". "Oh yes, so we did". I presume the British government does own a database of the passports they issued, so this isn't news to them.
It's a (weak) proof of ownership of such passport -- it has to be present to be read.
Some id cards can also function as smartcards and provide kind of challenge-response proof, which is better compared to reading signed document (which can turn out to be a copy).
>I presume the British government does own a database of the passports they issued, so this isn't news to them.
> It's a (weak) proof of ownership of such passport -- it has to be present to be read.
It had to exist to be read once at some unknown point in the past, but the government already knows it exists because they issued it. Anything more (such as "it is present now") is speculation.
Cloning static data from NFC readers is something a physical penetration testing team does. Stand near employee in coffee shop, their ID badge says "Hi I'm employee badge #123456789 for site #98765" and you clone that, then walk in later, "Hi I'm employee badge #123456789 for site #98765" says your fake badge with your picture on it. Is it better than nothing? Yes. Would I like my government to aim a bit higher than "Enough know-how to bluff your way into a mid-size corp headquarters building" ? Yes please.
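The difference between a cloneable static credential and a fresh proof of identity can be sketched in a few lines - here HMAC over a device-held secret stands in for the asymmetric signature a real security key produces, and the badge string is the hypothetical one from the example above:

```python
import hashlib
import hmac
import os

STATIC_BADGE = b"employee badge #123456789 for site #98765"

def reader_accepts_static(presented: bytes) -> bool:
    # Anything that once sniffed the badge can replay it forever.
    return presented == STATIC_BADGE

class SecurityKey:
    def __init__(self):
        self._secret = os.urandom(32)  # never leaves the device

    def enrol(self) -> bytes:
        # Simplification: a real key would hand over a *public* key here.
        return self._secret

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

key = SecurityKey()
server_record = key.enrol()

# Each login uses a fresh random challenge, so a recorded response is useless:
challenge = os.urandom(16)
response = key.respond(challenge)
assert hmac.compare_digest(
    response, hmac.new(server_record, challenge, hashlib.sha256).digest())
assert key.respond(os.urandom(16)) != response  # replayed response fails
```

The static badge check passes for any copy; the challenge-response check only passes for the device actually holding the secret, right now.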
> Actually maybe they don't.
The UK government will issue you one document based on records from another document so long as it's timely. For example when my photo driving license expires, I just renew it with the photo from my new passport. Some people do the opposite way around. You can't renew everything forever because your photo expires and you need to provide a new one that resembles the previous one except now you're older. But the fact they can do this means they do have those records.
Even the resemblance checking presumably means somewhere a minimum wage employee is being shown pairs of images, "Young white guy with freckles and a beard" + "Older white guy, same freckles, no beard" => OK pass unless somebody has drunk way too much ML kool aid, but that can only work if they have the records.
>It had to exist to be read once at some unknown point in the past, but the government already knows it exists because they issued it. Anything more (such as "it is present now") is speculation.
Which is why I wrote "(weak)", as I don't know if the document in question supports active challenge-response or not. I have one that does (passport) and one that doesn't (id).
The chances that someone both took a picture of the passport and read the NFC chip and saved it are pretty low.
You’re talking about accessing someone’s house here. I don’t know anyone that carries their passport around.
It’s something you’ll have to deal with anyway, since you cannot issue new passports to your entire population. Not to mention those people that don’t have one in the first place.
> But the fact they can do this means they do have those records.
They certainly have all the records somewhere. But do they have all of them in the same place?
> You’re talking about accessing someone’s house here. I don’t know anyone that carries their passport around.
I'd guess that a high majority of passport owners do indeed carry them around when they travel, and places such as airports would be a fairly easy way to find such people carrying their passports.
Aren't there two things going on here? A single sign-on service, and also an identity check.
A yubico device can say "I'm still the same security key that was enrolled" but it doesn't say at enrolment "I am being used by Fred Jones of <address> with passport no 123, NiNo ABC".
GDS verify / government gateway do support 'authenticators' (typically password + totp) to provide a level of assurance that the person logging into the account is the person the account belongs to.
The "document check" is part of the identity bit. That is, how confident are we that the person creating this account / performing this action is who they say they are and can do this thing or see that information. The document check is part of their solution but other parts are supposed to layer on top of it to provide a proportionate level of assurance. e.g. a video of you holding up the passport to check that it's you using it or asking you to provide answers to details you already have like 'how much tax did you pay last year' or checking multiple documents.
> I presume the British government does own a database of the passports they issued, so this isn't news to them.
The passport office (part of the Home Office) own that database. Part of this work is basically a web service to that database for the rest of the government to use.
This is the difference between 'identity assurance' (at enrollment) and 'authentication assurance' in the relevant NIST standard, SP 800-63 https://pages.nist.gov/800-63-3/
A remote passport check might be suitable to claim an identity assurance level of 2, say, while getting to identity assurance level 3 would need an in-person visit with multiple forms of other government identification records.
In that way you might then constrain some actions to be taken remotely only by users who have provided strong assurance of their identity at enrollment (even if they login with strong non-phishable authentication... doesn't matter that the auth is ironclad if the user's identity is muddy).
Of course neither of these are a 100% guarantee of identity. (1) doesn't account for stolen or lost documents. (2) is only useful if the app doing verification is tamperproof and the camera isn't fooled by holding up a photo of you etc. However _nothing_ is a 100% guarantee. These steps, plus any other verification that's going on, can make it very hard to fake ID, which is all we are really able to hope for with these systems.
Because the data is static, it doesn't do that. It only verifies that you know the static passport data.
This was a key problem with the old magnetic stripe credit cards. You could easily clone the stripe. I don't see any substantial obstacle to cloning an NFC passport chip.
> 2. Provides biometric info
Sure, that is useful, but, this is a government application. The government does have this biometric info, that's why it's baked inside the passports. So this isn't something the passport is enabling for the government.
The biometric passport spec has support for what it calls active authentication-- this is your fairly standard challenge/response-type thing using a private key contained in the chip that can't be read or copied.
The caveat is that it's optional-- not all passports support it, and thus there are possible (depending on the reader software) downgrade attacks where a passport with active authentication could be cloned anyways if you can convince the reader not to perform the authentication step. And there are bad hardware implementations out there that don't adequately protect the private key material (leaving them susceptible to cloning anyways).
You can't read all the data off the passport chip without first entering some of the info printed inside it, so I don't think it's an entirely passive read protocol.
That's a very good point, which I had known but forgot. I'm not quite sure how good a defence it is here, but it's certainly a lot trickier than the "badge cloning" scenarios, where it could happen entirely passively.
The key you need is basically the bottom row of the MRZ in your passport; the way a real system does this is to look at the photo page in your passport, read the MRZ from it, and use that to unlock the NFC chip. This feels intuitively reasonable, since you either handed over the passport for inspection or you showed the photo and identity page to an inspector, either of which reveals the same facts as the chip (by default).
So, we're not talking high entropy keys here. But, this is a real practical barrier compared to the badge cloning. It probably means, "Brush past somebody in a queue to clone their passport" is not in fact practical. On the other hand, if you ever have legitimate access you have enduring credentials, so that's not great.
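To make the "not high entropy" point concrete, here's a sketch roughly following the ICAO 9303 Basic Access Control key derivation: the seed protecting the chip comes from just three low-entropy fields printed in the MRZ. The sample values are the commonly cited ICAO worked example, not a real document:

```python
import hashlib

def check_digit(field: str) -> str:
    # ICAO 9303 check digit: repeating weights 7,3,1; digits as-is,
    # 'A'=10..'Z'=35, filler '<'=0.
    def value(c):
        if c.isdigit():
            return int(c)
        if c == "<":
            return 0
        return ord(c) - ord("A") + 10
    return str(sum(value(c) * (7, 3, 1)[i % 3]
                   for i, c in enumerate(field)) % 10)

def bac_key_seed(doc_number: str, birth_yymmdd: str, expiry_yymmdd: str) -> bytes:
    """Derive the 16-byte BAC key seed from the three MRZ fields."""
    doc_number = doc_number.ljust(9, "<")
    mrz_info = (doc_number + check_digit(doc_number)
                + birth_yymmdd + check_digit(birth_yymmdd)
                + expiry_yymmdd + check_digit(expiry_yymmdd))
    return hashlib.sha1(mrz_info.encode()).digest()[:16]

seed = bac_key_seed("L898902C3", "740812", "120415")
print(seed.hex())  # 16 bytes of key, but far less actual entropy behind it
```

A document number plus two dates is guessable if you know roughly who the passport belongs to, which is why this is a practical barrier against strangers in a queue but not against anyone who ever had legitimate sight of the photo page.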
Identity verification services usually combine document reading with something like a video selfie, which provides 1) a liveness check (you are a real person) and 2) a match against the digital image data that was read.
Is it possible to fool? Like anything it is a trade-off. Acceptable security for an acceptable cost. Hopefully it fulfills your security requirements and you saved a physical visit.
The way it works in my country is: you install an app that uses your passport's NFC chip to verify your identity. Then gov web services or verified third parties (like private insurance) can use what looks like (I did not dig into the details) a fairly standard OAuth flow.
For all Verify had/has real issues (mostly politically mandated pressure to use + key usability and accessibility issues) at least it had some people who understood key elements of the identity landscape.
That article appears to draw no distinction between proving the existence of an identity and proving ownership of one. If anything, the examples conflate them, which is a critical failure.
Identity is a really hard government problem, mostly because it is really a government issue/need, not an individual's, and governments are insanely bad at addressing it.
The NZ govt did a good job with their version: realme.govt.nz
While proving your identity was easy, they've had trouble achieving the next step of information sharing. Traditionally government departments under law were not able to share information. This has only recently changed and they're struggling to reach the next step.
Legislated prohibitions on sharing information are sometimes there for good reason. A good government has checks, balances and circuit breakers. Aggregating data provides a very powerful tool. Architecting solutions with this in mind means I (and others of my ilk) have open discussions with the businesses (inside government) we support regarding what are the necessary and sufficient information and identity requirements in an application or business process. There are cases where it is sufficient to only know that "this is the same actor we were interacting with before and they have x, y and z attributes germane to this interaction"
Realme also suffers from typical government thinking though.
We wanted to add it to our platform as an authentication mechanism. That would be good for individuals, and good for realme, as it would increase adoption.
However this is not allowed because ... Realme is paid for from taxes, so no one in the private sector can use it unless they pay.
This makes some sense, but also obviously it makes no sense for a web site to pay e.g Google for "sign in with Google". Google is obviously more than happy to underwrite the costs, since it means greater adoption of their identities and more power to their platform.
As a result realme is stuck in a public sector prison. It's a shame as some of it has been well executed.
Cost recovery of a public service is reasonable, even if the base level of service is contributed to with tax revenue. What was the cost quoted versus private/corporate identity providers?
No cost was given, but paying an IdP anything > $0 would be difficult to stomach when Google, LinkedIn, Facebook and many others provide the same service for free.
Not only are they rolling it out, it is the primary identity provider for the Social Security Administration (as of a few weeks ago), CBP’s Global Entry, USAjobs, and about 200 other federal sites (mostly internal agency sites and apps, including USDS’ Slack team). They also support state and third parties in certain cases.
They are an incredible product of the USDS, 18F, and GSA and it’s awesome watching them slowly but surely make digital identity with the US government better.
Shout out to 18F and GSA Digital services people. They're doing god's work where most government agencies are dysfunctional and completely incompetent.
If only the IRS used them instead of contracting out to a for-profit third-party... who will make you go through 30 minutes of verification only to tell you they can't verify you, and then take 3 months to reply to your email.
It works very well, both on the implementation side and the user side. Also (depending on the use-case) includes document verification which OIDC/SAML applications can use to accept proven-accurate personal information.
> Millions of people in the UK don’t have a passport or driving licence and there’s no magic document that lets everyone prove their identity.
That's why most (all?) EU countries have ID cards, which are (usually? sometimes? probably depends on the country) mandatory. The new EU standard version in particular, which will of course take time to be deployed everywhere, is pretty great, with a chip containing biometric and other data allowing for automatic verification (via an app or a device at certain places, like airports).
I'm not so sure about that, I wouldn't be surprised to see it come back. As far as I recall it was only the Liberal Democrats that opposed it in the 2000's.
In the UK identity cards are considered an intrusion on freedoms. Boris Johnson once wrote this which I think sums up that sentiment: "If I am ever asked on the streets of London, or in any other venue, public or private, to produce my ID card as evidence that I am who I say I am ... then I will take that card out of my wallet and physically eat it in the presence of whatever emanation of the state has demanded I produce it."
If people actually listened to what that empty vain populist said, the UK would be in big trouble. Imagine anyone lets him into any position of power! He'd just lie and change his public opinion on things daily. /s
How this man is anywhere near Downing Street is a complete mystery to me.
Such a hard problem, especially for government, which has to include 'everyone' and all the weird edge cases that forces on you.
Couple that with proof of identity being one of the few things that might have been issued 40, 50.. 60+ years ago (birth certificates) and never updated (unlike passports..) and when issued had no concept or sympathy that it might be used for digital verification in the future.
Without redefining the problem to avoid issues like birth certificates, I'm not sure this is solvable in the way stakeholders expect it to be. Stakeholders (like the citizens paying for it) expect to be able to point technology at the problem and have it solved - 'it's just an app, right?' or 'let's use biometrics!' - but so many other things have to adapt to make something like this successful.
Estonia had an interesting approach where they just issued everyone smartcards/certificates and used that as proof. This bypasses the 'birth certificate' problem but is expensive (Estonia has a smallish population and a newish government, so OK), and such an approach itself has a root-of-trust problem: who do I issue the smartcards to? It isn't directly transferable to other countries/governments.
You work around the root-of-trust problem by creating exceptions and alternate paths to verification, maybe doing it in person for people with disabilities etc. But then how long does that take to roll out? How easily abused is such a system? How many of the identity moments can I use it for? Is in-person verification trusted less?
It is very easy for the resulting system to be quite brittle and not reflect real use cases. Real-world identity moments are very diverse and often have more flex than you'd think, and it is very difficult to carve out the right chunk of the identity problem to solve, decide which parts to leave behind, and still create something that improves the ecosystem.
Oh, birth certificates. If you need to support birth certificates in your system -- you need to support each and every format of birth certificate issued in the last 100 years in every part of the world, not just ones issued in $countryname.
It's like "falsehood programmers believe about Names" combined with one about dates and all others too. Because sometimes including name of lunar month is more important than assigning a number to the document.
The UK must be one of the few countries left where you have to show (often paper) bank statements and utility bills to prove who you are (or simply that you live where you say you live), and in the process revealing all kinds of personal information that is none of the recipient's business - including Government.
This could all be solved with a national ID card (and would save the public purse £millions, as shown by the Verify programme). The complexity isn't in the actual technology (which exists, and has done for a long time), but the sheer number of possible identity documents that need to be accounted for and the mechanisms for collecting that documentation for verification.
But Brits are too stubborn to accept it. These are the same swathes of people that share stuff on Facebook, install Ring doorbells, and throw bank statements in the bin un-shredded.
I don't believe it was the same people. It was derailed by, essentially, "HN types" i.e. people who think they need to defend themselves from government surveillance. Without that campaign (NO2ID) screaming that it was Big Brother it would have been introduced without fanfare, because as you say Brits are generally pretty dismissive of their own privacy and trusting of the state, compared to many other countries.
Most of the time, if you're proving residency with such documents, you're not a permanent resident and your ID is from a foreign country under freedom of movement.
There's a huge difference, for example, between identity/residency checks in Poland for people who are in the PESEL database and people who aren't - and if you're in PESEL, you're going to have a Polish ID card as well.
Can anyone point to background reading on hanging all this stuff together (in a corporate context, so avoiding a lot of inclusion issues)?
Things like:
FIDO/YubiKey as the basis for all authentication/identification
Once we do that, how do we manage SSO? I dislike the idea of having a nice hardware module and then saying "great, for the next x hours/days use this static string". All attempts to make that better (timers, usage countdown timers) just seem less good than client certificates and certificates in HSMs.
(Simply put, I am trying to design a ... company, I guess.)
Most importantly FIDO isn't identity. It's deliberately only authentication.
Facebook and GitHub both have records that allow them to authenticate that I'm still me, using FIDO Security Keys, but if they compare records they intentionally do not learn that they're authenticating the same person, even though that is in fact what they're doing.
The place you'd logically put an "identifier" (and in fact it's even called ID) in protocols like WebAuthn is chosen effectively at random for each enrolment. On your phone the ID is actually just a bunch of random bits, on a cheap Security Key it's much more complicated but the effect for relying parties is that it's random.
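A simplified sketch of what that means in practice - a real authenticator derives or stores a key pair per credential, here reduced to random bytes - shows why two relying parties comparing credential IDs learn nothing:

```python
import os

def enrol(rp_id: str) -> bytes:
    """Mint a credential for one relying party on this authenticator."""
    credential_id = os.urandom(32)  # effectively random, unique per enrolment
    # ...key-pair generation and storage keyed by (rp_id, credential_id)...
    return credential_id

github_id = enrol("github.com")
facebook_id = enrol("facebook.com")

# Same physical authenticator, but nothing for the two sites to correlate:
assert github_id != facebook_id
```

That unlinkability is by design: the protocol deliberately provides authentication without a cross-site identifier.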
Two great places to start are "BeyondCorp"/zero trust and "WebAuthn". BeyondCorp is Google's brand name for a zero-trust model, where the perimeter security model is broken down in favour of authorization ("authz") and authentication ("authn") everywhere. WebAuthn uses PKI infrastructure with a purposeful UX to migrate away from passwords for authentication. The implementation and management of SSO is a complex topic with a lot of nuance beyond a single HN comment, so I'd recommend you speak to someone with IAM (identity and access management) subject-matter expertise to scope your use case and recommend possible solutions.
Something of an understatement. The project was red-flagged as 'undeliverable' in 2019, after spending £154m
https://www.theregister.com/2019/07/18/verify_to_be_flagged_...
By 2020, and now at £200m sunk:
https://www.computerweekly.com/blog/Computer-Weekly-Editors-...
So now in 2021, we're back to informal tests and Post-It notes again?