Hacker News | okabat's comments

If you stick with it, I would recommend you spend a lot more time talking to customers, thinking about distribution channels, and getting creative with narrowing your target audience. And build after, not before, you do those things.


I agree with you, and this has been my biggest learning so far.


The formatting is hard to read, and there are too many details on specific technologies. Focus more on impact (results) and leadership. The “Pushed to AWS weekly” and “wrote unit tests” lines in particular aren’t helping you look like a senior candidate. Did you iterate on the overall process for deploys? Teach others how to do it? Help set standards and norms around testing and quality? Reach for those stories if you have them.


How are roles/employees mapped to the seniority level which corresponds to the 45/95 percentile bounds? Is this a sliding scale or a binary distinction?


Growth rate is an output metric. I would recommend de-emphasizing the output by revisiting it infrequently and stopping comparisons with others, in favor of concentrating on the inputs. The poster you're responding to has a very good set of things to look for. What's holding you back from making progress on one of those? How fast are you building trust with the individuals you work with, and how can you accelerate that (think about specific relationships to make it less abstract)?


Cool concept, matches how I want to play Minesweeper (though others are pointing out some interesting parts of the game that this approach removes).

Played through it once and was confused by why a bomb exploded. Can somebody help me interpret? I'm especially having trouble figuring out what the dashes are in this end-of-game explainer text. Take the space diagonally top-right from the one flagged bomb (the asterisk). This had a visible "1" on the board, but is marked as a dash here.

Made a mine because the user clicked it when there was a safe space at 6, 2. Can't be a mine because

  011--1?????????
  12*101?????????
  ??2211?????????
  ???????????????
  [6 rows clipped]
  + 29 mines left to find becomes

  011--1-????????
  12*101-????????
  ??2211*????????
  ????---????????
  [6 rows clipped]
  + 28 mines left to find)


Dashes seem to be places that might be a mine but also might not (because the game has to reserve the right to move mines around depending on where you click).

In this case it's telling you that 6 over (zero indexed) and 2 down (the spot is marked with an asterisk in the second diagram) is safe, because of the pattern of "ones" on the left edge.


Training as a separate activity from normal work is not my mental model, nor, I expect, that of many others. Rather, training should be interwoven into all of the daily activities that allow people with more or differentiated experience to share what they know with the others around them.

Code review is training. Do you consider training one of the primary purposes of code review? In my experience, different teams are highly variable on this. But avoiding bike shedding, explaining context and knowledge without dictating direction, and asking good questions shares knowledge and makes everyone improve.

Postmortems are training. Learning what went wrong and what could be done better next time is close to the definition of training.

Performance / peer reviews (when done well - a rarity) are training - they are a time to step back and think at a higher level about where there's room for improvement and what to concentrate on. Andy Grove explicitly talks about performance reviews as training: "giving reviews is the single most important form of task-relevant feedback".

If your definition of training only includes classrooms, learning budgets, conferences, etc., then I encourage you to broaden your view of what constitutes training. If these venues help you learn, then I hope your organization supports you in accessing them. But don't mistake their absence or low prominence for a lack of training within an organization.


In addition to that, answering questions is training.

Sometimes this happens directly, but quite often it happens during daily "standups" (which we do sitting, via video, and which are as much socialization as work), where we talk about problems people are having and give them suggestions. This tends to be most helpful to novice programmers, but we all benefit from it sometimes.

And assigning reasonable tickets is training, too. We also start out our junior programmers with easy bugfix tickets and give them plenty of time to explore the code, answering a ton of questions. Then we give them gradually larger projects as they show they can handle them.


As a SAML service provider, what's the easiest way to tell if my implementation has this problem?

I'm guessing I should: 1) Go into my Okta dev account, create a 2nd "okta app" (SP instance) pointing at my hosted application, which should create a new entity ID 2) Start an IDP-initiated login attempt from this 2nd okta app, and verify I get an audience mismatch error


This should work as long as your chosen Entity ID for your second Okta app doesn't match the actual Entity ID of your SP.
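For reference, the core of the fix on the SP side is just checking the assertion's Audience against your own entity ID before accepting it. A minimal sketch of that one check (the entity IDs and XML below are invented, and this assumes signature verification has already happened):

```python
# Audience check an SP should perform after signature verification.
# Entity IDs and the sample assertion are made up for illustration.
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def audience_matches(assertion_xml: str, my_entity_id: str) -> bool:
    """Return True only if an <Audience> in the assertion equals our entity ID."""
    root = ET.fromstring(assertion_xml)
    audiences = root.findall(
        ".//saml:Conditions/saml:AudienceRestriction/saml:Audience", SAML_NS
    )
    # Reject assertions that carry no matching audience restriction at all.
    return any(a.text == my_entity_id for a in audiences)

assertion = """<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <Conditions>
    <AudienceRestriction>
      <Audience>https://sp.example.com/metadata</Audience>
    </AudienceRestriction>
  </Conditions>
</Assertion>"""

print(audience_matches(assertion, "https://sp.example.com/metadata"))   # True
print(audience_matches(assertion, "https://other-app.example.com"))     # False
```

If the second call returns True in your implementation (i.e. you accept an assertion whose audience is some other app), that's the vulnerability the test above is probing for.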


Slack is great at managing "read/unread" state across my 4 devices even in messy mobile networking situations. Good threading tools allow conversations to remain sandboxed and not gum up entire channels. Do these improvements justify the valuation? Maybe not, especially given how many other comms apps have a similar feature set. But for the vast majority of realtime modern workplace communication use-cases, IRC or a bulletin board will not cut it.


I’ve missed important messages at critical times due to their mobile app not delivering notifications as configured (iOS). Syncing across devices is handled nicely though.


The fastest way to make progress on these business questions is often to build hacky MVPs that look like they're doing something smart, but behind the scenes are powered by humans or dead-simple algorithms, and get them in front of customers ASAP.

I recently joined a seed-stage startup solving a business problem via audio analysis in the manner I described above. I'm not spending much time doing ML yet, but I'm banking on my belief that we're solving a valuable problem (customers want to buy our hacky MVP) and that ML can and will be needed to scale our solution. By deeply understanding the customer as a first step, I think the ML systems we build will be business critical and enduring. Time will tell.


> dead-simple algorithms

The most successful models I've ever built have been logistic regression models. If you can rephrase your problem in a way that's amenable to run-of-the-mill statistical techniques, you can frequently achieve much better results than you can with 'deep learning'.


If you're going to do logistic regression, at least call it a single-layer neural network with a sigmoid activation function.
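To make the joke concrete: the two framings really are the same model with the same training update. A toy sketch, with invented 1-D data and hyperparameters:

```python
# "Logistic regression" and "single-layer network with sigmoid activation"
# are the same model; the gradient-descent update below is identical under
# either name. Data and hyperparameters here are invented for illustration.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, lr=0.5, epochs=2000):
    """Fit w, b by gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)   # "forward pass" / predicted probability
            grad = p - y             # derivative of log-loss w.r.t. the logit
            w -= lr * grad * x
            b -= lr * grad
    return w, b

# Linearly separable toy data: label 1 when x > 2.
xs = [0.0, 1.0, 1.5, 2.5, 3.0, 4.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train(xs, ys)
preds = [round(sigmoid(w * x + b)) for x in xs]
print(preds)  # matches ys on this toy set
```

Whether you pitch it as statistics or as deep learning, the weights, the loss, and the predictions are identical.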


So true. I have had to do this.

I had a pretty good regression model but it was not taken seriously. So I wrote it using "a neural network in TensorFlow" and the next thing you know the whole company is asking me how it works and what it does.


This feels so dirty, but these kinds of tricks work. Your stakeholders get to participate in the titillating fiction that they are on the bleeding edge of technology, and you get to deploy a scalable, explainable, and (hopefully) high-performing solution. That's 95% of a win.


I'm totally "borrowing" that for later use.


Amen. I like to use exotic techniques just out of intellectual curiosity and to put my education to use, but ultimately a basic linear regression is all people want! Ease of interpretation is paramount.


I think their point is that when changing engine output is labor-intensive, it made perfect sense to have an officer giving orders and somebody else implementing them. Why the command structure hasn't shifted to accommodate modern technology is an interesting question.


It's not labor-intensive, it's attention-intensive. You want the person managing the speed to be focused on that. You want the person managing the heading to be focused on that. And you definitely don't want the officer to be distracted by maintaining course and speed when they need to be keeping track of what's going on around the ship.


You want the person managing the speed to be focused on that only if managing speed is still inherently and unavoidably attention-intensive. If modern control technology can automagically maintain speed at x knots, adjusting all the various engine control parameters to implement that, then an extra person in the loop only means an extra chance for mistakes or delays.
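For a sense of how little ongoing attention that takes in software, here's a toy proportional-integral speed keeper. The dynamics, gains, and drag term are all invented; real engine control is far more involved:

```python
# Toy sketch of "automagically maintain speed at x knots": a PI controller
# adjusting throttle each tick against a made-up linear-drag hull model.
def maintain_speed(setpoint, steps=200, dt=0.5):
    """Drive a simulated ship's speed to `setpoint` knots."""
    speed, integral = 0.0, 0.0
    kp, ki = 0.8, 0.1     # controller gains (assumed)
    drag = 0.05           # linear drag coefficient (assumed)
    for _ in range(steps):
        error = setpoint - speed
        integral += error * dt                 # accumulated error term
        throttle = kp * error + ki * integral  # controller output
        speed += (throttle - drag * speed) * dt
    return speed

print(round(maintain_speed(12.0), 1))
```

The integral term is what lets the controller hold the ordered speed exactly despite drag; a human in the loop would be doing this same correction by hand, slower.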


If the military learned a lesson from exploding a reactor with an automatic control system, you might never know - it's not required to be reported to the public.


If an automatic control system can blow up a reactor, so can a manual one. You have a problematic design in both cases. You can still assign someone to keep an eye on the reactor in case things go bad, and have that person pull an emergency stop or warn the guy in charge. There is no reason to relay every little speed change through several people.

