The poster is holding a line of bash to the same standard as code, illustrating that readability should be the goal, and showing a way of bringing bash commands up to a standard of readability fit for something like a PR. Readability is really there to show _intent_.

I would say, though, that if you are holding this to today's code standards, then it should really be wrapped in some kind of unit test (https://github.com/sstephenson/bats) for it to pass the PR. That would make the code a bit more maintainable, and it could be integrated as a stage in your CI/CD pipeline.

If we do that, the intent is clarified by the input and the expected output of the test. Then the code would at least be maintainable, and the readability problem becomes less of an issue when it comes to technical debt.
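As a minimal sketch (the script name, fixture file and expected value here are hypothetical, purely to show how a test's input and expected output document intent):

    #!/usr/bin/env bats

    # count_errors.sh stands in for the bash one-liner under review.
    @test "counts ERROR lines in a log file" {
      run bash count_errors.sh testdata/sample.log
      [ "$status" -eq 0 ]
      [ "$output" = "3" ]
    }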

I've done this plenty of times with my teams and it's certainly helped.


Are you replying to this thread? https://news.ycombinator.com/item?id=20724679


Yes I was. I've posted the comment to the correct story now. I don't know how that happened.


The snippets feature looks quite powerful, and I particularly like the enumeration conversion example. Are there any plans to make these kinds of transformations extensible?

I've recently gone into speech-assisted video editing and came across Kaldi about a month ago; I was interested specifically in its diarization feature, but it's quite daunting. How did you get up to speed on it? I've talked to a few people who've worked with it, and it took them months to feel comfortable with it.


Well, they did qualify it by saying it was a complex topic, and I'm not sure they hinted that it was compulsive.

I tend to work in a banking infrastructure environment, and it's common practice there to take notes.

The number of conversations I have during the day often has me saying: "I'm not going to remember everything you've said; could you put that in an email so I can get back to you properly?"

It does depend on the environment.


This may not be useful, but have you tried asking yourself why you want to be heard? It's worth exploring your motives. Is it to make yourself feel more valued in the team, or is it to provide value to the team? Figuring this out may allow you to find a way to change how your team members view you (if it's about needing validation from the team); or, if it's the second, which I suspect it is, you may find a way to influence decisions _without_ having to talk over other people.

Team discussions are a competition between views, and for the attention and space to express them, and they can be chaotic. It's worth exploring:

1. Verbal strategies - others have pointed out a couple in this thread

2. Preparation - working out what you want to say, framing the negatives of your argument strongly and the positives confidently

3. Seeding - doing the groundwork before the discussion itself: talking to your team members about your ideas and gauging their reactions - in a way, reading the room - and planting your idea so it doesn't come out of the blue in the discussion

There's more, but it's down to you, and it's probably what you don't want to hear: you'll have to improve the way you communicate within discussions, or learn how to influence the leaders and decision makers of the group outside the chaotic conversations.

I'm like you in a way, but you've got to choose your battles.


I'm not sure how much this discussion is happening now, but I do remember having these conversations more often a few years ago, when tools like Prometheus were coming up.

This is my take and shouldn't be taken as gospel, but I've observed that metrics are perfect for isolating and identifying an issue, while logs attempt to explain the behaviour seen. A good place to start is a layered dashboard, set up top to bottom, showing the throughput, error rates and latencies that express the health of each layer, from the exposed service or API all the way down to the subsystems and hardware supporting them. I'd write more but don't want to make this post overly long. There are several useful articles online on the different methodologies and approaches to achieve this.
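To make the layered idea slightly more concrete, here is a hedged sketch of the kind of queries one layer's panels might be built on (Prometheus's HTTP query API is real; the host and metric names are assumptions for illustration):

    # Error rate for the exposed API layer (metric names are hypothetical)
    curl -s 'http://prometheus:9090/api/v1/query' \
      --data-urlencode 'query=sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m]))'

    # p99 latency for the same layer
    curl -s 'http://prometheus:9090/api/v1/query' \
      --data-urlencode 'query=histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))'

Repeat per layer (service, subsystem, host) and you have the top-to-bottom view.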

There isn't much conversation, though, about the knowledge an organisation has around incident analysis workflows: how past incident resolutions are captured and integrated with the dashboards in monitoring systems, and how they are shared and reused as SREs and engineers work with log analytics. I've seen the same lessons having to be relearned when new engineers join a team. Checklists are a good example (there are other ways) of directing incident analysis in large complex systems, but how often do you see them supported by the monitoring system? The focus is always on displaying pretty charts in a grid, when more value could be gleaned from the same dashboard content presented as a rich living document, with charts and integrated checklists, in something resembling a guide to resolving an issue.

There certainly is enough data to achieve this.


> I'd write more but don't want to make this post overly long. There are several useful articles online on the different methodologies and approaches to achieve this.

Do you have some of these articles in mind?


Bear in mind, I'm from the UK, so I don't have an American perspective on this.

This is a bit closer to the truth when you read or listen to what analysis they were doing.

I think the association between what they were doing and Trump inflated how bad it seemed.

I'd seen it reported in some places that mining of Facebook data was quite prolific among both parties in the pre-Trump era.


Not just seems - it was a bragging point, because back then it was fine in the eyes of the public, since the previous administration was a bit more likeable. I recall an interview from 2012 in which our previous president talked about it as an innovative technique for his campaign. [1]

[1] https://www.technologyreview.com/s/509026/how-obamas-team-us...


I did this about 4 years ago as well and haven't looked back either. The habits I'd formed around news reading just were not helpful or useful. If you think about it for a second, there is very little informational value in news articles. The who, where, what and why can be done in a paragraph; the implications and future projections can be done in an opinion piece, because any forward-looking projection is 'opinion' and has error and bias embedded in it. All of this is now cheekily lumped into one. Well, that's my view.

I've seen my colleagues go through the most anguishing emotional roller coaster over the last 3 months with the Brexit madness. I've seen emotions run high, I've seen the highs and lows. Everyone is stressed and I'm blissfully content.

I'm yet to experience a life altering impact from skipping the news.


What you're learning will fall into different categories: physical, mental or a combination of both. Each category calls for its own strategies to help things stick. I put language learning in the third category: Anki + SRS for memorising vocab, mnemonics for recalling vocab and phrases, etc. There's tons of information online nowadays.

Something you won't hear often is this tip: try to act out/gesture what you're trying to say before you say it. I have no explanation for why this works so well when learning, but it's a technique actors use when embodying a character.

Then the obvious tip is to practise with native speakers.

If you're learning something purely physical, you have:

* Mental rehearsal of the action (Anki)

* Execution of the action in practice - for this to be helpful you need feedback; you could use a coach or record yourself

* Performance - as in a sport

I had to rewrite my response because I went too heavy on the language-learning tips.

This is what I've used to date.


There's so much to comment on in this article. First off, Gravity looks like an interesting product; it's a shame a blog article on it hasn't made it onto the front page.

I hate to be nit-picky, but I do feel that articles like this do more harm than good by oversimplifying a fairly advanced architectural pattern and downplaying the testing, resiliency and deployment approaches.

* easier to test - this is flat-out wrong. Your application or system logic is spread out across multiple process boundaries, so the only way to test it is to deploy the dependent services on your dev machine and test the whole set, or to design your services so that all the application logic can run in a single process. Think of Spark and Spark workers, which move from a threaded to a distributed model through configuration. Application logic can be tested with this approach, but not necessarily system behaviour (which can be simulated).

* rapid and flexible deployment models - in a large microservices fleet where code is managed by multiple teams and sits in several repositories, the dependencies are not explicit (I cannot do a "find usages" on an API call and see all the microservices that are using it), so deployments can be decoupled, but there tend to be lots of breakages unless you have sophisticated, well-thought-out testing (see the first point).

* resiliency - I'm not going to expand on this, because the first and second points already allude to a brittle system and hence reduced resiliency. There are also data and transactional boundaries across services which need to be addressed too. To be fair, there is a hint at solving this problem with something like Kafka, but it isn't called out.

Microservices can be simpler with good tooling, and Kubernetes is only a very small part of it. Maybe a follow-up article would be good, clarifying what tooling can be provided for 1) better testing, 2) reliable deployments and 3) greater resiliency.


OP here. Largely agree with your clarifications. Your comment is a fantastic example of how nuanced the real world is. Maybe it was a bit naive on my part to even try to compress these topics into a 10 minute read, but for anybody who wants to learn more, comments like these are pure gold. (and gave me a few ideas for what to write about) Thank you.


As a total amateur I found your post a very helpful introduction. Thanks!


To me, this isn't nit-picky but nuanced. From my experience, Kubernetes and the microservice architecture are essentially a technical substrate for an organisational problem.

I'm not 100% convinced there's inherent technical value until you're running at the kind of scale the big hitters do, but by then you're also looking at the hosting solution as a whole, not just a deployment in a cloud. Docker and Kubernetes offer the illusion of being easy, but as soon as you start getting serious, they are anything but.

What it does do, for smaller businesses, is create a more-or-less 1:1 mapping between a team structure and a deployment pipeline. At the end of it you're distributing functions in a codebase and depending on the network for resiliency, as opposed to the language or the VM.

At the same time, the knowledge of these systems has great value because those skills are in demand now.


Docker and/or Dokku are pretty simple and easy to get running. When you outgrow a single-server deployment, that's when it gets more complicated. I like docker in general because it allows me to test/script creation and teardown, even locally.
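A sketch of what I mean by scripted creation and teardown (the image and names are just examples):

    # Spin up a throwaway Postgres for a test run
    docker run -d --name test-db -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:11

    # ... run the test suite against localhost:5432 ...

    # Tear it down completely when finished
    docker rm -f test-db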

The biggest shortcoming right now is that Windows support for both Windows and Linux containers (LCOW) is really immature, and Windows containers in particular aren't mature enough at this point. I'm dealing with this for some testing scenarios where containers are simpler than dedicated VMs.


Kubernetes allows for an opinionated way to deploy your applications, with many good practices being part of the deployment cycle. I would say that the tooling matters, and having _some_ common way of deploying applications is a great way to enable developers who may only want to focus on their code but also want more control over their deployment pipeline. In summary, it:

* enables dev teams to do their own ops (mostly) instead of having a centralized ops team which needs to set things up and be on call when stuff breaks. This allows scaling up your organization: once a couple of teams are set up, you can simply keep adding more teams and replicate the same deployment pipeline for all of them

* trains development teams to do more ops-level stuff without getting lost in the weeds. They no longer need to worry about DNS, SSL certs, load balancers etc.; they just use Kubernetes ingresses and services (see the sketch after this list). Of course they can dig deeper if they want to, but the defaults appear to be sensible enough for most

* promotes a cattle-vs-pets model and allows teams to rapidly iterate without worrying about breaking the system in unknown ways
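As a rough sketch of how little ceremony that can mean for a team (the names and image are hypothetical; the commands are standard kubectl):

    # Deploy the team's container and give it a stable in-cluster address
    kubectl create deployment orders --image=registry.example.com/orders:1.0
    kubectl expose deployment orders --port=80 --target-port=8080

    # Scale it without touching any infrastructure config by hand
    kubectl scale deployment orders --replicas=3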


There are different types of testing.

    * Manual Testing
    * Automated Unit Testing
    * Black Box Testing
    * Smoke Testing
    * Functional Systems Testing
With varying layers of complexity... I would say that unit and black-box testing can be much easier with microservices. Orchestrating smoke tests or functional systems tests is a bit harder.

It also can vary by application design and what tooling you have in place.


It’s not a question of which types of testing are easier, it’s a question of which types of testing are necessary to guarantee the same level of coverage. If you’re testing functionality that crosses a service boundary, you need to do integration testing. If you’re testing functionality that exists within a single service, you can settle for unit tests.

In a monolithic architecture, where your only service boundaries are frontend-to-backend and backend-to-datastore, you still need some integration testing, but not a ton. If your backend is composed of a network of microservices, you need to integration-test all of those points and, most likely, all of the data access layers that each microservice has.


I'm not quite sure how unit testing can be easier. Instead of stubbing a function call, you have to stub a TCP/IP request. If you think of a "microservice" as a library that runs in a different process and whose dispatch mechanism is TCP/IP, you can pretty easily see that testing a library is easier. It's even easier if your library is compiled in a language whose types are statically checked, because you can guarantee that the library's functions match the types of the calls without having to jump through hoops like gRPC.
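A toy illustration of the asymmetry in bash (the function, payload and endpoint are made up):

    # Stubbing a function call: redefine it in-process and you're done.
    get_user() { echo '{"id": 1, "name": "stub"}'; }

    # Stubbing a TCP/IP request: you need a fake server on the wire first.
    printf 'HTTP/1.1 200 OK\r\nContent-Length: 25\r\n\r\n{"id": 1, "name": "stub"}' | nc -l 8080 &   # some netcats want -l -p 8080
    curl -s http://localhost:8080/users/1

And that's before worrying about ports, timing and cleanup.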


If the TCP/IP calls and mechanisms are abstracted out into other modules, as with FaaS, then you only need to test your single, isolated function: what it touches and what it returns. You don't even need to test the entire platform.

Structurally, you can still use a mono-repo for all your deployed functions so that they can even share some test functionality.


I am reading through that section thinking similar things.

An application designed with microservices can be thought of as a monolith with unreliable network calls between the components. That is going to make most things more difficult, not easier. Sure, microservices might encourage you to design your application in a more modular way, but that's a bit of a strawman argument.


First - always think about whether a monolith would solve your problem.

Second - would a mixed architecture solve it?

Third - would microservices solve it better?

Not every company has to be a Netflix or Zalando or Google...

I think we're finally at the point where people should stop jumping on buzzwords and do the hard work of going through the pros and cons of every solution.


The advantage of jumping on the buzzwords is that you have many others with you.

I had to build a SaaS platform for hosting a business webapp, with the particularity that every client had their own separate codebase and database (no multitenancy). It was a simple Django-like stateless app in front of Postgres, and we had a low availability SLA (99.5%), so I thought it was simple: just deploy each app to one of a pool of servers, with single Nginx and Postgres instances on each server, then point the client's (sub)domain to it.

In the end, it worked fine, but since I couldn't find anyone doing the same, it meant I had to write a whole bunch of custom tooling for deployment, management, monitoring, backups, etc.

Were I starting now and had decided on Kubernetes, the solution would be more complex and less efficient, but it would mean we'd have one-click deployment and pretty dashboards and such in a couple of days instead of many weeks. And if we had to bring someone in, they wouldn't have to learn my custom system.

Buzzwords are a kind of poor man's standard: they provide some sort of common ground among many people.


So instead you chose to still have to customize deployments and network setup, plus a lot more complex security config, to do things seemingly "fast". Is that company a startup? If not, it will cost them a lot later on.


You are still going to need to customize kubernetes deployments.


It seems people too often compare the ideal cases of two choices but completely discount the likely cases. That seems to be the situation when debating microservices vs monoliths.


Exactly. In the real world, most projects need a mixed architecture.


I totally agree. Deploying microservices and running k8s sound easy until you actually do it. For example, just see this section of the k8s docs about exposing services: https://kubernetes.io/docs/concepts/services-networking/serv... . You need to understand many different concepts first to get this right. However, I think once you cross that hurdle, the traditionally harder stuff like auto-scaling and rolling upgrades becomes relatively easy.
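For anyone skimming, the heart of that page is the Service manifest and its `type` field; a minimal hedged sketch (the app name and ports are hypothetical):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      type: ClusterIP        # or NodePort / LoadBalancer / ExternalName
      selector:
        app: myapp
      ports:
        - port: 80           # port exposed inside the cluster
          targetPort: 8080   # port the pods actually listen on
    EOF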

That said, it's really early days for k8s and the ecosystem around it. As long as k8s doesn't try to solve every problem in the world and focuses on the problems it's designed to solve, things will get easier, and then maybe a 60-min video can do it some justice. ;-)


As somebody who just tried microservices in their latest project, I especially agree with the points you made about deployment and testing. Testing can be quite hard; you essentially have to think about it from the start. Testing is much, much easier if you keep everything in one "monolith".

That doesn't mean microservices are bad. It is incredibly valuable to have a well-defined border between services, if you manage to draw the border right. And the extra amount of planning that goes into thinking about where to draw the borders and how to design the interfaces is already a big win. They can really help you keep a feeling of freedom when it comes to changes, because every service is just a box with a manageable number of inputs and outputs. Or at least it should be.


AWS SAM nails this so hard.

They straight up have hundreds of pre-generated tests depending on what your function's input is, and you can run _every_ test locally.
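Presumably this refers to `sam local generate-event`, which can emit sample payloads for a long list of event sources, combined with `sam local invoke`. A sketch (the function name would come from your own template.yaml):

    # Generate a sample S3 "put" event and run a function against it locally
    sam local generate-event s3 put > event.json
    sam local invoke MyFunction --event event.json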


I liked the idea but couldn't quite get the Anki workflow running smoothly. To be fair, I tried this because of the Anki integration, but I don't think that's what it's meant for.


Indeed, it's currently painful to get notes into Anki. I would prefer it if I could turn any selected text into a new Anki note with a minimum number of keypresses, rather than creating flashcards in Polar and then syncing them, which seems rather inefficient.
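For what it's worth, the AnkiConnect add-on exposes a local HTTP API that gets close to this; a sketch, assuming the add-on is installed and the deck/model names match yours:

    # AnkiConnect listens on localhost:8765 while Anki is running
    curl -s localhost:8765 -d '{
      "action": "addNote",
      "version": 6,
      "params": {
        "note": {
          "deckName": "Default",
          "modelName": "Basic",
          "fields": {"Front": "selected text here", "Back": "source/context"}
        }
      }
    }'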


I would like to make it more efficient for sure.

Part of the problem is actually Anki.

It's kind of janky, and then there's the issue of firewalls that some people are running into, which isn't fun.

I'm trying to smooth out the UI a bit more but it's an iterative process.

