> I was surprised while digging up the link that Gravitational is still releasing v13 and v14 updates under Apache 2, so maybe even Teleport will continue to have legs for those who cannot deploy AGPL stuff
Teleport puts out 3 major releases a year (every 4 months) and supports versions back to N-2. So v13 will be updated until May (v16's release) and v14 until September-ish (v17's release). Using v14 and prior is not a viable strategy for AGPL-averse companies in the long run... unless they want to fork.
After September 2024, the Teleport options that will get updates are:
1. Compile Teleport yourself under the terms of the AGPL
2. Use the pre-compiled Community Edition under its new commercial license (<100 employees and <$10MM in annual revenue)
3. Purchase a license (or Teleport Cloud tenant) under enterprise terms
The recent Teleport licensing changes are designed to:
1. Push business users in categories 1 and 2 into category 3, and
2. Preempt having Teleport's value resold by a big cloud player, as in the AWS Elasticsearch/OpenSearch kerfuffle a while back.
Source: I work at Teleport, and while I had no say in the license change, I did keep an ear out as I care about our open source stance. It is part of what brought me to the company.
Disclaimer: I'm a Teleport employee, and participate in hiring for our SRE and tools folks.
> A rubric written in advance that would allow a single person to vet a work sample response mostly cures the problem you have right now. The red flag is the vote.
I argue the opposite: Not having multiple human opinions and a hiring discussion/vote/consensus is a red flag.
The one engineer vetting the submission may be reviewing it right before lunch, or may have had a bad week, turning a hire into a no-hire. [1] Not a deal breaker in an iterated PR-review game, but rough for a single-round hiring game. Beyond that, multiple samples from a population give data closer to the truth than any single sample.
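The sampling point is easy to make concrete. Here's a toy simulation (my own illustration, not anything from this thread): model each reviewer's score as the candidate's true quality plus independent noise (mood, hunger, bad week), and the error of the averaged score shrinks roughly as 1/sqrt(k):

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// Toy model: a candidate's "true" quality is fixed; each reviewer's score
// is that quality plus independent noise. Averaging k reviewers shrinks
// the expected error by roughly sqrt(k).
func main() {
	const (
		trueQuality = 7.0
		noiseStdDev = 1.5 // per-reviewer noise (made-up magnitude)
		trials      = 100000
	)
	rng := rand.New(rand.NewSource(42))

	for _, k := range []int{1, 3, 5} {
		var sumSqErr float64
		for t := 0; t < trials; t++ {
			var total float64
			for i := 0; i < k; i++ {
				total += trueQuality + rng.NormFloat64()*noiseStdDev
			}
			avg := total / float64(k)
			sumSqErr += (avg - trueQuality) * (avg - trueQuality)
		}
		rmse := math.Sqrt(sumSqErr / trials)
		fmt.Printf("reviewers=%d  RMSE of averaged score: %.3f\n", k, rmse)
	}
}
```

With these numbers, one reviewer lands about 1.5 points off on average; five reviewers land about 0.67 off. None of which settles whether the votes should be a consensus, just that single reads are noisy.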
There is also a humanist element related to current employees: Giving peers a role and voice in hiring builds trust, camaraderie, and empathy for candidates. When a new hire lands, I want peers to be invested and excited to see them.
If you treat hiring as a mechanical process, you'll hire machines. Great software isn't built by machines... (yet)
If you really, honestly believe that multiple human opinions and a consensus process is a requirement for hiring, I think you shouldn't be asking people to do work samples, because you're not serious about them. You're asking people to do work --- probably uncompensated --- to demonstrate their ability to solve problems. But then you're asking your team to override what the work sample says, mooting some (or all) of the work you asked candidates to do. This is why people hate work sample processes. It's why we go way out of our way not to have processes that work this way.
We've done group discussions about candidates before, too. But we do them to build a rubric, so that we can lock in a consistent set of guidelines about what technically qualifies a candidate. The goal of spending the effort (and inviting the nondeterminism and bias) of having a group process is to get to a point where you can stop doing that, so your engineering team learns, and locks in a consistent decision process --- so that you can then communicate that decision process to candidates and not have them worry if you're going to jerk them around because a cranky backend engineer forgets their coffee before the group vote.
I don't so much care whether you use consensus processes to evaluate "culture fit", beyond that I think "culture fit" is a terrible idea that mostly serves to ensure you're hiring people with the same opinion on Elden Ring vs. HFW. But if you're using consensus to judge a work sample, as was said upthread, I think you're misusing work samples.
You can also hire people without work samples. We've hired people that way! There are people our team has worked with for years that we've picked up, and there are people we picked up for other reasons (like doing neat stuff with our platform). In none of these cases did we ever take a vote.
(If I had my way, we'd work sample everyone, if only to collect the data on how people we're confident about do against our rubric, so we can tune the rubric. But I'm just one person here.)
Finally: a rubric doesn't mean "scored by machines". I just got finished saying, you build a rubric so that a person can go evaluate it. I've never managed to get to a point where I could just run a script to make a decision, and I've never been tempted to try.
I'll add: I'm not just making this stuff up. This is how I've run hiring processes for about 12 years, not at crazy scale, but "a dozen a year" easily. It's also how we hire at our current company. I object, strongly, to the idea that we have a culture of "machines", and not just because if they were machines I'd get my way more often in engineering debates. We have one of the best and most human cultures I've ever worked at here, and we reject the idea that a lack of team votes is a red flag.
Strongly agree with this, two key concepts in particular:
1. Using group discussion to make the principled rubric is incredibly respectful of everyone’s (employee and candidate) time, not just now but future time. Using the rubric is also unreasonably effective at getting clearer pictures of people quickly.
2. Systematic doesn’t mean automated; hiring should aspire to be systematic to the point that it makes no difference who interviewed the candidate, and all the difference which candidate was interviewed.
I’ll add one …
3. If you have a rubric setting a consistent bar, share feedback with the candidate in real time (such as asking, ‘Help me understand this choice; I might have done it differently’), as well as synthesized feedback at the end: “This is my takeaway, is it fair?”
Contrary to urban legend, this never got us sued. Every candidate, particularly those being told no, said it was refreshing to hear where they stood and appreciated the opportunity to revisit or clarify before leaving the room. The key is a clear, non-judgmental synthesis followed by, “Is that fair?”
You’re mistaken, we do have a rubric. All of the members of the interview team grade the interviewee according to the rubric, and the scores are then combined into “votes”.
That's good. I'm responding to "Not having multiple human opinions and a hiring discussion/vote/consensus is a red flag". I think having combined scores is an own-goal, but having people vote based on their opinions is something worse than that (if you're having people do work samples).
You mention "Product Designer (Senior, Lead, Principal or Director)" above, and I've got a friend (ex-Bing) interested in a Senior/Lead role. However https://duckduckgo.com/hiring doesn't mention any design positions. How can I get y'all connected?
Tailscale and Teleport are similar, but operate at different levels of the network stack. Tailscale governs access and routing at L3 in the OSI model. See Hashicorp's Boundary or VPNs for alternatives. As a generalization, Teleport works at L7 -- doing auth and routing at the application protocol (ssh, psql, k8s) level.
There are ups and downs to both: L3 is relatively technology agnostic (e.g. you don't need different support for connecting to a database vs ssh). L7 auth & routing gives greater protocol introspection, but means more work to support different use cases.
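To make that distinction concrete, here's a minimal Go sketch (mine, not from either product; the addresses and the token header are made-up stand-ins). The transport-level forwarder shovels bytes for any TCP protocol but can't see inside them; the application-level proxy can make per-request auth decisions, but only for protocols it understands:

```go
package main

import (
	"io"
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// l3StyleForward is protocol-agnostic: it copies raw bytes in both
// directions and never looks inside. One code path covers ssh, psql,
// anything TCP -- but it can't tell an admin login from an intern's.
func l3StyleForward(listen, backend string) {
	ln, err := net.Listen("tcp", listen)
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			b, err := net.Dial("tcp", backend)
			if err != nil {
				return
			}
			defer b.Close()
			go io.Copy(b, c) // client -> backend
			io.Copy(c, b)    // backend -> client
		}(client)
	}
}

// l7StyleProxy understands the application protocol (HTTP here), so it
// can authenticate and authorize each request -- at the cost of needing
// protocol-specific support for every new use case.
func l7StyleProxy(backend *url.URL) http.Handler {
	rp := httputil.NewSingleHostReverseProxy(backend)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Hypothetical check: a real L7 proxy would verify an SSO-issued
		// token and map the identity to allowed roles.
		if r.Header.Get("X-Session-Token") == "" {
			http.Error(w, "unauthenticated", http.StatusUnauthorized)
			return
		}
		rp.ServeHTTP(w, r)
	})
}

func main() {
	go l3StyleForward("127.0.0.1:2222", "127.0.0.1:22") // works for any TCP protocol
	backend, _ := url.Parse("http://127.0.0.1:8080")
	log.Fatal(http.ListenAndServe("127.0.0.1:8443", l7StyleProxy(backend)))
}
```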
Depending on your scale and use case, the right answer may be both: do 2FA for both network access (are you allowed to send packets to the ip:port?) and application access (are the packets you send allowed to sign in to the database as an intern or an admin?). The most important part is to get a hardware token and SSO on the path to access.
Disclosure: I work for Teleport. I also think Tailscale is awesome and run it for my home lab.
We use ZSSH, based on OpenZiti, so that zero trust, private connectivity is embedded in the SSH client itself (i.e., no separate VPN client or agent is needed) - https://ziti.dev/blog/zitifying-ssh/
Ever since Vagrant, everything Hashicorp has developed has been outstanding! Furthermore, their open core model and this S-1 are an inspiration. I wish all the best for Mitchell, Armon, and the team!
I have a couple emails from Mitchell H circa 2014. He was doing front line customer support for the Vagrant VMWare Workstation provider -- I think it was just about their first paid offering. I was impressed that the head of the company would take time to help me troubleshoot my busted setup. Incredibly technical and incredibly hard working.
Thank you for the detailed writeup. This is a topic which I think is not discussed much.
> We will split public-facing CI from release infrastructure and internal CI infrastructure. (teleport#8268)
Did you also consider some form of out-of-band approval mechanism for production environment access (via a chatbot / push notification, etc.)? I think something like that could work technically, though scalability might be a challenge. Still, it might be easier to manage than a complete second self-managed CI system. I have been pondering this for some time as a way to use GitLab CD without handing GitLab all the keys to the kingdom.
> Did you also consider some form of out-of-band approval mechanism for production environment access?
No, not before your comment at least. Vendor CI tools (be it GitLab, Drone, etc.) often make it difficult to use this workflow. Their typical model is long-lived static creds, with authn/authz gated around job kickoff. I'm not aware of any that would work with delegated/approved credentials, at least without writing a custom secrets plugin. If anyone knows of such capabilities, give me a holler.
Furthermore, there is still the risk of any service available to external contributors being compromised (as we saw in this vulnerability). I'd just as soon have "no prod secrets touch a system that does external CI" as a security invariant -- no matter how trustworthy that external CI system is.
In a bittersweet irony, out-of-band approvals are in our product,
but we're not there with CI yet. :/ It would be fantastic if we could have short lived credentials issued only for the duration of the job, after approval (or better: after delegation) from a trusted party. Something like AWS's `CalledVia`.
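For flavor, here's a hypothetical sketch of what that delegation flow could look like. To be clear, this is not Teleport's API or any vendor's; the broker, names, and token format are all invented. The point is just the shape: a credential is minted only after out-of-band approval, scoped to one job, and expires with it.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"errors"
	"fmt"
	"time"
)

// Broker mints short-lived, job-scoped credentials, but only after an
// out-of-band approval (chatbot, push notification, etc.).
type Broker struct {
	signingKey []byte
	approve    func(jobID string) bool // out-of-band human approval hook
}

// CredentialForJob returns a signed token valid only for this job run
// and only until the TTL elapses.
func (b *Broker) CredentialForJob(jobID string, ttl time.Duration) (string, error) {
	if !b.approve(jobID) {
		return "", errors.New("approval denied or timed out")
	}
	expiry := time.Now().Add(ttl).Unix()
	payload := fmt.Sprintf("%s:%d", jobID, expiry)
	mac := hmac.New(sha256.New, b.signingKey)
	mac.Write([]byte(payload))
	sig := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	// The prod side re-verifies the HMAC and rejects anything past
	// expiry, so a leaked token goes stale fast.
	return payload + ":" + sig, nil
}

func main() {
	b := &Broker{
		signingKey: []byte("dev-only-example-key"),
		approve:    func(jobID string) bool { return true }, // stub: auto-approve
	}
	tok, err := b.CredentialForJob("release-build-1234", 15*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("short-lived credential:", tok)
}
```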
> How can they be avoided without stopping the use of CI/CD?
Use separate systems for CI and CD, and don't put sensitive "keys to the kingdom" credentials in CI. For example:
Put CI in GitHub Actions or GitLab CI without any credentials to write artifacts or knowledge of stage/prod deployments. Let the "interns" in the threat model use this.
Put production CD/release in Jenkins or a similar self-hosted system that is not publicly accessible. Limit the folks who can trigger jobs in this system to a small group of trusted employees, and don't trigger runs on actions that don't require U2F auth (e.g. require a manual click through a web UI protected by SSO, or only deploy from specific branches protected to allow only approved PRs -- no git client pushes). A minimal sketch of such a gate follows.
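As a rough illustration of that CD-side gate (my sketch; the identity header, names, and branches are hypothetical stand-ins, not any vendor's API):

```go
package main

// Sketch of a deploy endpoint on a private, self-hosted CD system: it
// refuses anything that isn't (a) from an allowlisted employee identity
// and (b) a protected branch. Prod creds live only on this host; the
// public CI system never sees them.

import (
	"log"
	"net/http"
)

var (
	trustedDeployers  = map[string]bool{"alice": true, "bob": true}
	protectedBranches = map[string]bool{"master": true, "release/v9": true}
)

func deployHandler(w http.ResponseWriter, r *http.Request) {
	// In a real setup this identity would come from the SSO proxy
	// fronting the service (which also enforced U2F); here it's a
	// stand-in header for illustration.
	user := r.Header.Get("X-SSO-User")
	branch := r.URL.Query().Get("branch")

	if !trustedDeployers[user] {
		http.Error(w, "not a trusted deployer", http.StatusForbidden)
		return
	}
	if !protectedBranches[branch] {
		http.Error(w, "branch is not protected/approved", http.StatusForbidden)
		return
	}
	log.Printf("deploying %s on behalf of %s", branch, user)
	// ... kick off the release job here.
	w.Write([]byte("deploy started\n"))
}

func main() {
	http.HandleFunc("/deploy", deployHandler)
	// Bind to localhost: this service is internal-only, never public.
	log.Fatal(http.ListenAndServe("127.0.0.1:9000", nil))
}
```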
> I assume it is the same with GitLab?
Yes. While GitLab does offer some secret and variable masking controls, the Travis disclosure earlier this week where all secrets were exposed to Pull Request CI shows you probably don't want to bet your business on those controls. (Acknowledging GitLab != Travis)
> the CI system basically had admin access over our infrastructure. It has to in order to do infrastructure as code.
> Public CIs are fine though. Ones that literally only do code builds, tests etc
I couldn't agree more.
Even internally, the security and authorization needs of deployment/release are wildly higher than those for running an ephemeral build and test. "CI/CD" needs to be un-bundled, for the sake of security, such that CI doesn't have admin access over infrastructure. Only a much more limited CD has this access.
In the case of open core products that use public facing CI, I'm inclined to put the average employee's CI on the public system; for transparency, but also to make sure external contributors don't become second class citizens using an irregular workflow/toolset. Maintain a separate internal release system limited to trusted employees. Principle of least privilege, and all that. :)