ssd532's comments | Hacker News

why?


In Proof-of-Work, the cost of the work is what keeps the network honest. If the work itself has value, then an attacker is free to invest as many resources as they want into subverting the network. Even a failed attack can still be profitable, just less so.

In another scenario, where the work's value is less than the cost, you're still hoping that at no point in the future will an attacker figure out a way to do the work at a net profit.

The only way the network can be trusted is if the work definitely has, now and always, zero value.
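
To make that concrete with made-up numbers: suppose an attack burns $100 of electricity. If the work's output is worthless, a failed attack costs the attacker the full $100. If the same work also yields, say, $80 of sellable output, the net cost of the failed attack drops to $20; and if the output were ever worth more than $100, attacking would pay for itself even when it fails.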


Not littering has value. However, if I don't litter, it doesn't benefit me, and I cannot profit off of it; no matter how eco-friendly I am, I get no value from it.


Am I wrong in saying that the work has negative value? And there are different degrees of that. Bitcoin's negative value is larger.


You are not wrong; the output has no value. The net value of the work is then Value Out - Value In.


Because Proof-of-Work only generates value for an arbitrary, made-up coin if it has no other real value.

Otherwise you're making money that way, and the value of the coin is tied to the work that you did.

Until recently gold was a pretty but mostly useless metal: too heavy for practical uses, too melty for industrial uses, too soft for weapons, etc. But it didn't rust and was a good medium of exchange because it had no other real value. Once it has value outside of being currency, it's less useful in that capacity, since now its value is tied to how much you can get for it by utilizing it in computers, chemical reactions, etc. Same basic idea with PoW.


I don't think that's true; look up Proof of Useful Work.


Which, ironically, is used by the attacker in this case.


It's worth noting that lots of projects claim to be "Proofs of Useful Work" without the academic rigor to actually prove it. The attacker, of course, is one of those who have failed to do so.

1. Their paper has not been accepted by any conference or journal.

2. Neither author on their paper is an academic (or practicing engineer or researcher) in computer science, economics, game theory, or cryptography (or any maths in general). One is a C-level exec with what seems to be minimal CS experience, and the other is a psychology professor. Neither appears to have the qualifications that would justify assuming some level of rigor (before even looking at the underlying work).

3. The paper is a bunch of text and buzzwords about AI and AGI intermixed with some academic history and some discussions of psychology. Of the paper's 47 pages, only about 1-2 are even semi-technical, with an additional ~3 pages of code included to show their algorithm. There are two graphs relevant to the protocol on those 1-2 pages, and neither addresses any security aspects; they instead show its performance at the "useful" part. So, to reiterate: their "academic paper" on the security of their PoUW algorithm includes no rigorous analysis of the protocol.

TL;DR: They aren't doing PoUW. They are doing cooperative compute with a centralised or federated coordinator dishing out rewards.

Proofs of Useful Work do actually exist and are an interesting field but they take a lot of rigor and analysis to be accepted and not immediately ripped to shreds. What the attacker claims is not even close to meeting that bar.


I use 16x for the copy-paste workflow. Really useful tool. Thanks.


Do its agentic features work with any API? I had tried this or Cline, and it was clear that they worked effectively only with Claude's tooling support.


Yes, any API key is allowed. You can also assign different LLMs to different modes (architect, code, ask, debug, etc.), which is great for cost optimization.


You said “was”, so are you back to a regular salaried job now? Because it was not sustainable?



Is it the same as embedding? Is embedding a RAG method?


I don't think so. I think embedding is just converting a token string into a numeric representation. Numeric representations of semantically similar token strings are geometrically close.

RAG is making the AI behave like a guy who has read a lot of books. He doesn't know all of them in the context of the conversation you are having with him, but he sort of remembers where he read about the thing you are talking about, and he has a library behind him into which he can reach and cite what he read verbatim, thus introducing it into the context of your conversation.

I might be wrong though. I'm a newb.
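
A minimal C# sketch of the geometric idea, with tiny made-up 3-dimensional vectors standing in for real embeddings (which come from a model and have hundreds or thousands of dimensions); the last step is the "reaching into the library" part of RAG:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Cosine similarity: 1.0 means pointing the same way, 0 means unrelated.
    double Cosine(double[] a, double[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb));
    }

    // The "library": stored texts with their (made-up) embeddings.
    var library = new Dictionary<string, double[]>
    {
        ["the cat sat on the mat"] = new[] { 0.9, 0.1, 0.0 },
        ["kittens love sleeping"]  = new[] { 0.8, 0.3, 0.1 },
        ["quarterly tax filings"]  = new[] { 0.0, 0.1, 0.9 },
    };

    // Pretend this is the embedding of the user's question, "sleepy cats".
    var query = new[] { 0.75, 0.35, 0.1 };

    // Retrieval: fetch the stored text whose embedding is geometrically
    // closest to the query, then paste it into the prompt as context.
    var best = library.OrderByDescending(d => Cosine(query, d.Value)).First();
    Console.WriteLine(best.Key);   // -> "kittens love sleeping"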


That’s what I do. Creation is difficult on mobile and easy on desktop; review is easy on mobile but difficult on desktop.


Why only cold showers?


Wow, this is the best way to describe the flow state to my non-technical manager. Thanks.

However, I still don’t think she will be able to understand and empathise.


What does distributed monolith mean? Is it just a monolithic app deployment with distributed (master and replica) DB servers in this case?


I'm sure there are many variants and definitions, but the company I'm at runs one.

The gist of it is that there's one codebase with multiple separate "modules". This codebase is packaged and linked as a library, and then we build different super-slim hosts that load different parts of the monolith in production containers. Usually the hosts differ just by environment variables or config.

But locally, we can run the whole thing in one process. We're using .NET, so `dotnet run` brings up the whole app. Whereas we might run parts of the app in different console hosts in prod, locally they are hosted as in-process background services.

From a debug perspective, this is super awesome since you can just launch and trace one codebase. If we broke it out into 3-4 separate services, we'd have to run 3 processes and 3 debuggers. 3 sets of configuration, 3 sets of CI/CD, 3 sets of testing. Terrible for productivity.

We have parts of the system connected to SQS for processing events, and if we need more throughput, we simply start more instances of the container, all running the same monolith.

I think GCP is probably one of the best platforms for building modular monoliths because of its tight orientation around HTTP push.


I implemented this in one of my past startup jobs: basically a core banking system that implemented multiple roles, including open banking. Depending on config, it would act as a bank, as an OB service provider, as an OB registry, as a merchant, etc. In addition, it was built in a way that instances, if allowed, could talk to each other in two ways:

1. As part of the same entity, so you could scale your operation.

2. As part of an ecosystem, so you could, for example, create an entire open banking network, or just a regular network with bank transfers and card payments using proper protocols such as ACH, ISO 8583 and ISO 20022.


We call it "multi-tenant applications" or "role-based applications": https://www.youtube.com/live/Zk0Il6I5MQI?si=BFVL3JkHaj1hcGrZ

And there is a separate concept of a configurable application which can completely reshape its component graph according to some high level configuration flags (like database=prod|dummy), we call it "multi-modal applications".

And we created the perfect tool for wiring them: https://izumi.7mind.io/distage/index.html


I'm looking to break an old Java monolith into something like that: modular code base, single deployment artifact, multiple configurable use cases. I've yet to find a good existing tool to compose the application at build time, other than using a godawful lot of Maven profile combinations.


Hmm; at least in the more recent versions of .NET, Microsoft has really cleaned up the runtime host paradigm so that it's consistent across console (think background services, timer jobs, pull-oriented processing) and web (classic HTTP).

For us, it just becomes a matter of configuring the correct construction of the dependency injection container at host startup, using some flag (usually an environment variable) to pick the right bits and pieces to load into the container and which services to run from the monolith.

Then each of the host "partitions" gets its own Dockerfile.
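
A rough sketch of that wiring, assuming the Microsoft.Extensions.Hosting package; APP_PARTITION and the worker names are invented for illustration, and the real setup obviously depends on the app. Leaving the variable unset (local dev) hosts everything in one process, while each prod container sets its own value:

    // Program.cs -- a sketch, not the actual codebase.
    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    var builder = Host.CreateApplicationBuilder(args);

    // Same binary everywhere; an environment variable picks the slice to run.
    var partition = Environment.GetEnvironmentVariable("APP_PARTITION") ?? "all";

    if (partition is "all" or "orders")
        builder.Services.AddHostedService<OrderEventWorker>();  // e.g. the SQS consumer
    if (partition is "all" or "billing")
        builder.Services.AddHostedService<BillingWorker>();

    builder.Build().Run();

    // Stubs standing in for the real modules.
    sealed class OrderEventWorker : BackgroundService
    {
        protected override Task ExecuteAsync(CancellationToken ct) => Task.CompletedTask;
    }

    sealed class BillingWorker : BackgroundService
    {
        protected override Task ExecuteAsync(CancellationToken ct) => Task.CompletedTask;
    }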


You might want to read this post from Shopify's engineering blog https://shopify.engineering/deconstructing-monolith-designin...


Often this is viewed as running counter to the microservices pattern. Microservice architecture assumes each service has its own data storage and is logically independent from other services.

A monolith, on the other hand, is a single application with all of the logic in one place.

A distributed monolith is a set of applications/services, like in the microservices pattern, but they can share common data storage and depend on each other.


Data storage and compute should be separate, orthogonal issues; storage isn't needed in this comparison.

Stateful vs stateless.

Your monolith is a binary that gets distributed to hosts to perform some function. The binary has multiple entry points that can be invoked. Most calls are via internal library calls.

Microservices (also stateless) have a different artifact for each component; services call other services via a private API (often gRPC or HTTP RPC).
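
To make the contrast concrete, a tiny C# sketch with invented types (not anyone's actual API): in the monolith the call is an ordinary in-process method call; with microservices the same logical call crosses the network.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public record Order(string Id);

    // Distributed monolith: the billing component is linked into the same
    // binary, so the cross-component call is just a method call.
    public static class BillingModule
    {
        public static decimal Quote(Order order) => 42m;   // stub logic
    }

    // Microservices: the billing component runs in another process, so the
    // same logical call becomes a request over a private API. The HttpClient
    // is assumed to be configured with the billing service's base address.
    public sealed class BillingClient(HttpClient http)
    {
        public async Task<decimal> QuoteAsync(Order order) =>
            decimal.Parse(await http.GetStringAsync($"/quote/{order.Id}"));
    }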


Good point. I was just trying to (awkwardly) say that a monolith can also be split into separate parts communicating via a private API.

