emperorcezar's comments

Of all the things in the world, this made me feel a jolt of sadness.


People who don't like the price of food are free to starve.

They want to eat without paying the cost of bringing food to market. It's pernicious entitlement, made more outrageous by complaining about the greed of those actually providing the food.


One would think this would have been fixed in the last five years?


Certainly. What I don't believe is certain is that only one such vulnerability has ever existed and none exist in Rust today.

It's not pedantic to differentiate between mitigating a thing and preventing a thing.


You can add `#![forbid(unsafe_code)]` to your codebase to avoid any unsafe Rust, which should prevent buffer overflows. Obviously it may make writing a codebase somewhat harder.
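For illustration, a minimal sketch (the function is invented): with the crate-level attribute in place, any `unsafe` block anywhere in the crate becomes a hard compile error, and unlike `deny`, a `forbid` can't be overridden further down the module tree.

    // src/lib.rs
    #![forbid(unsafe_code)]

    pub fn head(bytes: &[u8]) -> Option<u8> {
        // Fine: safe, bounds-checked access.
        bytes.first().copied()
        // This would be rejected at compile time:
        // unsafe { Some(*bytes.get_unchecked(0)) }
    }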


Will that restriction also be applied transitively to all dependencies?


No. That kind of restriction cannot realistically be applied to any project above toy scale. The stdlib uses unsafe code to implement a large number of memory management primitives, because the language is (by design!) not complex enough to express every necessary feature in just safe code. Rust's intention is merely to limit the amount of unsafe code as much as possible.
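To make that concrete, here's a sketch of the pattern the stdlib relies on everywhere (the function itself is invented for illustration): unsafe code hidden behind a safe API whose checks uphold the invariants the unsafe block needs.

    /// A safe wrapper: callers cannot trigger undefined behaviour through
    /// this API, even though the implementation uses unsafe internally.
    pub fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            return None;
        }
        // SAFETY: we just checked that index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }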


For that, I believe you need to use cargo-geiger[0] and audit the results.

[0] - https://github.com/rust-secure-code/cargo-geiger
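If you haven't used it, the basic workflow is roughly this (run from the root of a standard Cargo project):

    cargo install cargo-geiger
    cargo geiger    # scans the dependency tree and reports unsafe usage per crate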


No, and in fact that would be impractical, because you can't do anything useful (e.g., any I/O whatsoever) without ultimately either calling into a non-Rust library or issuing system calls directly, both of which are unsafe.
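A small sketch of why (assuming a Unix-like target): even a bare write to stdout bottoms out in a foreign call, and crossing the FFI boundary is `unsafe` by definition.

    extern "C" {
        // libc's write(2); any extern fn is unsafe to call.
        fn write(fd: i32, buf: *const u8, count: usize) -> isize;
    }

    fn main() {
        let msg = b"hello\n";
        // SAFETY: fd 1 (stdout) is open, and buf/count describe a valid buffer.
        unsafe { write(1, msg.as_ptr(), msg.len()); }
    }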


The number of reported and unfixed memory bugs in Rust has grown 10x, not shrunk, over the last 5 years.


If you believe you can find a memory unsafety vulnerability in this project's Rust code based on the existence of those bugs, feel free to do so.


Would have been nice to have some lead time. I feel like the sudden change is too much of a bait and switch.


Cool. A way for me to "market" good software design to others without them realizing it's just normal modular design with reasonable best practices. Sadly most people need a fancy name and slide deck for this.


"Normal" good software design takes you a bit, but it doesn't give you LEGO-like building blocks, and it will still be hard to share code between services. Polylith gives you a single development experience that you would not otherwise get. Take a look here to see what I mean: https://polylith.gitbook.io/polylith/architecture/bring-it-a...


Is there tooling or something special around this? I'm not trying to be a curmudgeon, but I'm having a hard time seeing what is novel here other than applying a name to component-based modular software design.


Have you read the "Advantages of Polylith" page? https://polylith.gitbook.io/polylith/conclusion/advantages-o... It tries to summarise the benefits from both a development and deployment perspective.


That page doesn't clarify much for me. How exactly do you deploy pieces of the app as different services while keeping it a monolith in development? Especially given that this isn't a framework. Standard Go/Node modules give you "LEGO-like" pieces too. The real-world example only has one "base" that puts everything together under a single API, so it doesn't help in understanding how this would be deployed as multiple services.


You can get an idea by looking at the Production systems page (https://polylith.gitbook.io/polylith/conclusion/production-s...) where each column in the first diagram is a project (deployable artifact / service). All components are specified in a single file, like the configuration file for the poly tool itself: (https://github.com/polyfy/polylith/blob/master/projects/poly...).

I try to explain it in the "Bring it all together" section also: https://polylith.gitbook.io/polylith/architecture/bring-it-a...


Those examples don't really tell much more. I'm guessing there are a lot of assumptions about how Clojure projects are structured.

How exactly are these deployed? Does it use k8s, Docker? Does it integrate with some particular CI setup? How can this be language agnostic? How can a single codebase be separated into multiple running http services without some kind of overarching routing framework? How do services communicate?


In Clojure you put your functions in namespaces (sometimes named modules or packages in other languages). In an object oriented language you put your classes in namespaces too, but the way you would implement Polylith in an OO language is to expose static methods in the component interfaces, which would have to live in a class with the same name as the Interface.
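As a rough sketch of that idea in a language like Rust (not one of our supported languages; all names invented for illustration), each component exposes one small interface module and keeps everything else private:

    mod user {
        // The component's interface: the only entry point other
        // components are allowed to call.
        pub mod interface {
            pub fn greeting(name: &str) -> String {
                super::core::build_greeting(name)
            }
        }
        // Implementation details, invisible outside the component.
        mod core {
            pub fn build_greeting(name: &str) -> String {
                format!("Hello, {name}!")
            }
        }
    }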

Polylith doesn't help you with the deployment of the project; that code you have to write yourself. But it's possible to calculate whether a project has been affected by the latest changes or not. The 'poly' tool that we have built for Clojure supports that, and it can be implemented for other languages too. What you get is support for incremental builds and tests: only build the projects that are affected by the change, and only run the affected tests.
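As a rough sketch of that calculation (not the poly tool's actual implementation; all names invented), it's essentially reachability over the reverse dependency graph:

    use std::collections::{HashMap, HashSet, VecDeque};

    /// Given which bricks changed, find every unit that transitively
    /// depends on them and therefore needs rebuilding/retesting.
    fn affected<'a>(
        depends_on: &HashMap<&'a str, Vec<&'a str>>,
        changed: &[&'a str],
    ) -> HashSet<&'a str> {
        // Invert the edges: who uses X?
        let mut used_by: HashMap<&'a str, Vec<&'a str>> = HashMap::new();
        for (&unit, deps) in depends_on {
            for &d in deps {
                used_by.entry(d).or_default().push(unit);
            }
        }
        // Walk outward from the changed bricks.
        let mut seen: HashSet<&'a str> = changed.iter().copied().collect();
        let mut queue: VecDeque<&'a str> = changed.iter().copied().collect();
        while let Some(u) = queue.pop_front() {
            if let Some(users) = used_by.get(u) {
                for &user in users {
                    if seen.insert(user) {
                        queue.push_back(user);
                    }
                }
            }
        }
        seen
    }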

Polylith is an idea about how to decouple and organise code, in the same way that FP and OO are ideas that can be materialised in different computer languages.

The whole codebase lives in a single repository, but all components (and bases) have their own src, test and resources directories. We then pick and choose from these sources to create projects, from which we build artefacts.

All projects live in a workspace in the same repository. If you structured the code the "normal" way, each project would live in its own repository with a single src directory (plus test and resources), but then you would have to put all the code into that single src directory. What Polylith does is let each project specify (in a configuration file) which set of src directories it needs, instead of putting everything into a single src directory. This is supported out of the box in Clojure, but in e.g. Java, if you use Maven, you need a Maven plugin for that.

The deployed services communicate with each other through their public API. No difference from what you would otherwise do.


> It tries to summarise the benefits from both a development and deployment perspective.

Call me a pessimist, but I'm not convinced. It seems a bit hand-wavey to me.


If you want to put content behind a wall of some kind, that is fine. You can't expect to have it be indexed in search. Having it show in search results while having a wall in front of it once someone clicks is a bait and switch. You can't have your cake and eat it too.


> You can't expect to have it be indexed in search

Why not? This is just as entitled a thought as the original quote from GP. Search engines aren't a public service. Neither is The Financial Times.

There's nothing that explicitly requires every single page that a search engine displays to be accessible by the user. There isn't even an internal policy within Google and other engines that would uphold that expectation. It might be a shitty user experience, but that's on the search engine and the resulting site to deal with.

Clearly, the FT has enough subscribers to be able to "lose" customers behind the paywall. And clearly Google isn't interested in cleaning up results to not include paywalls. So it seems they've both weighed the pros and cons and chosen to continue with it. Who are we to tell them otherwise?


Do you have teams that tend towards younger ages? I'd throw out there that what trends you see will be heavily dependent on that.


Maybe Logitech will make a USB-C dongle one day; until then I'm stuck with an adapter that sticks out and is constantly in danger of being knocked and bent.


Note that this "study" only ran through June of 2020.

So all these workers went to remote work in a company without a remote work culture, then were measured for a short period of time before a remote work culture and the policies and tools to support it could be ironed out.

Also, how are they measuring "innovation"?


> Also, how are they measuring "innovation"?

This is really important. Not to diss Microsoft, as they do put out innovative stuff nowadays (e.g., VS Code), but I want to know how they measure this, as there are a lot of trash features coming out too (e.g., Teams), so having more or fewer of those new things isn't innovation.

Basically, I don’t trust Microsoft or Inc to define innovation in a way that matters to my curiosity.


I got the impression that vscode (monaco) was greenlit in order to retain certain people threatening to leave. MS is having retention and talent acquisition problems. Luckily there are enough people with enough clout to do these projects. But they are definitely not normal.


That sounds unlikely, but if true, is... impressive? That's a non-trivial product launch that has been a pretty big deal.


I think people would be surprised how much of the innovation in Dev Div is directly attributable to retention projects. Basically, "let me do this here or I will go to Facebook and do it." I think much of the open sourcing is due to people threatening to leave if it wasn't. They get sick of building things that get shelved.

The Steve Sinofsky era did so much damage to morale at Dev Div that they’re now going to great lengths to placate certain devs.


How is VS Code innovative? It's a text editor with packaged plugins. If there's innovation, it lies in making an above-board Electron app.


Several ways, I think.

1) As an editor it's novel for its high performance across so many platforms.

2) As a plug-in platform with a simple marketplace, it's the first time I've seen this work well across many types of dev groups. "Easy enough" for HTML designers and data scientists as well as traditional programmers. Plug-ins exist for other tools, but not as easily put together as this.

3) The release schedule is so rapid. At least monthly releases with new features and lots of community ideas realized.

4) Within Microsoft's culture it turns the decades-old Visual Studio model (good performance but locked into Windows and costly) on its head. So this is really new for Microsoft. (I think it would be innovative from any company, but I don't know of any other company that makes money from developers and so doesn't actually need to charge for dev tools.)


>1) As an editor it's novel for its high performance across so many platforms.

Sublime Text is one of many cross-platform text editors with performance superior to that of VS Code.

>2) As a plug-in platform with a simple marketplace, it's the first time I've seen this work well across many types of dev groups. "Easy enough" for HTML designers and data scientists as well as traditional programmers. Plug-ins exist for other tools, but not as easily put together as this.

This existed before VS Code. Perhaps an argument can be made with respect to the UI of such a repository, but the concept has been around for a while.

>3) The release schedule is so rapid. At least monthly releases with new features and lots of community ideas realized.

This isn't innovative. This is having a lot of money, and thus the resources, to sustain such a release cadence.

I can see the argument that VS Code is better suited for some people, but innovative it is not (and that's OK).


The Language Server Protocol alone, which was invented for VS Code, was innovative enough that it's now being used by other major editors and IDEs.
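For the curious, here's a rough sketch of what it standardised at the wire level (the request body is a trimmed-down initialize call, for illustration only): JSON-RPC 2.0 messages framed with a Content-Length header.

    fn frame(body: &str) -> String {
        // LSP frames each JSON-RPC message with a Content-Length header.
        format!("Content-Length: {}\r\n\r\n{}", body.len(), body)
    }

    fn main() {
        let request = r#"{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"capabilities":{}}}"#;
        print!("{}", frame(request));
    }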

Sure, you might get slightly better performance with some other program, but you won't get all the features or quality of VS Code (and for free).


I agree with the sibling poster here. None of those things represents innovation: a) none are novel in the IDE space, and b) they're incremental.

If "innovation" just means "make things marginally better than before", we've got a very weak definition indeed...


That's not a bad question. I find I use it for basically all my code editing nowadays, but I don't know why. I think it's just "smooth enough" and feels consistent even as you use it for a variety of different languages. It's innovative in some subtle way.


I still like BBEdit on Mac and notepad++ for quick text editing, but have slowly migrated to using vscode for everything but jupyter.


Not to mention June 2020 was only a few months into a pandemic unlike any the US has seen in most of our lifetimes. I don't see how you could differentiate between the effects of the pandemic/isolation and those of working from home.


If this is the same study I've heard of before, they defined innovation as "number of interactions", and assumed collaboration is directly proportional to the number of interactions.

The study should have been named "Remote Work Reduces Interaction".


I have a feeling there was an agenda behind this.

Deislabs, a part of MS that's been functioning as a distributed org for years, has produced a lot of innovation... https://deislabs.io/


Ah yes. They are no true Scotsman.

