Hacker News | piyush_soni's comments

After getting tired of Photoshop's magic wand failing on product photos, I built this color replacer with features most online tools don't have:

• HSV tolerance (not RGB) - Catches all blues regardless of lighting while ignoring whites/grays that RGB tools pick up

• Magnetic lasso-style edge snapping - Uses Sobel edge detection so polygon vertices snap to object boundaries automatically. Green vertices = snapped, yellow = free placement

• Multiple independent color pairs - Replace 5 different shades with tight tolerances instead of cranking up one slider and getting color bleed

• Preserve shading toggle - Change a pink shirt to blue while keeping all fabric folds, shadows, and highlights intact

• Editable polygons - Drag vertices after closing, undo points, hide overlay while keeping selection active

• Remove colors → transparency - Perfect for background removal when you know the bg color
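The edge-snapping bullet above can be sketched roughly as follows. This is a hypothetical illustration of the Sobel idea only, not the tool's actual code; the image is a plain 2D grayscale array and all names are made up. A "magnetic" vertex would snap to the nearby pixel with the largest gradient magnitude.

```javascript
// Hypothetical sketch: Sobel gradient magnitude over a grayscale
// image stored as a 2D array of 0..255 values.
const SX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]; // horizontal kernel
const SY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]; // vertical kernel

function sobelMagnitude(img, x, y) {
  let gx = 0, gy = 0;
  for (let j = -1; j <= 1; j++) {
    for (let i = -1; i <= 1; i++) {
      const p = img[y + j][x + i];
      gx += SX[j + 1][i + 1] * p;
      gy += SY[j + 1][i + 1] * p;
    }
  }
  return Math.hypot(gx, gy);
}

// Sharp vertical edge between columns 2 and 3.
const img = [
  [0, 0, 0, 255, 255, 255],
  [0, 0, 0, 255, 255, 255],
  [0, 0, 0, 255, 255, 255],
];

console.log(sobelMagnitude(img, 2, 1)); // 1020 (next to the edge)
console.log(sobelMagnitude(img, 1, 1)); // 0 (flat region)
```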

Problem with RGB: White (255,255,255) is mathematically close to Light Pink (255,182,193). HSV separates Hue from Saturation/Value, so you can target "all pinks" while excluding "anything low-saturation" (whites/grays).
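A minimal sketch of that comparison (hypothetical helper functions, not the tool's code), assuming Euclidean distance in RGB and the standard RGB-to-HSV conversion:

```javascript
// Euclidean distance in RGB space.
function rgbDistance([r1, g1, b1], [r2, g2, b2]) {
  return Math.hypot(r1 - r2, g1 - g2, b1 - b2);
}

// Standard RGB -> HSV conversion: h in degrees, s and v in 0..1.
function rgbToHsv([r, g, b]) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b), d = max - min;
  let h = 0;
  if (d !== 0) {
    if (max === r) h = 60 * (((g - b) / d) % 6);
    else if (max === g) h = 60 * ((b - r) / d + 2);
    else h = 60 * ((r - g) / d + 4);
    if (h < 0) h += 360;
  }
  return { h, s: max === 0 ? 0 : d / max, v: max };
}

const white = [255, 255, 255], lightPink = [255, 182, 193];

// In RGB space, white sits close to light pink...
console.log(rgbDistance(white, lightPink)); // ≈ 95.8

// ...but in HSV, saturation separates them cleanly:
console.log(rgbToHsv(white).s);   // 0
console.log(rgbToHsv(lightPink)); // h ≈ 351, s ≈ 0.29: clearly a pink
// A "pink" filter like (320 < h < 360 && s > 0.15) keeps pinks, drops white.
```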

Free, browser-based, no account needed. Built for e-commerce product recoloring and design mockups where simple flood-fill tools fail.


I liked the clean interface and on-point functionality, but I loved the loading performance! Several websites that I regularly use for JSON diffs are extremely slow for large documents. This one does it instantly. Nice work!


Thank you! Looking for feedback on what works and what doesn't.


Man. The way they market and sell 'average' stuff that wouldn't even raise an eyebrow if it came from any other company is remarkable.


Onshape employee here. I agree with another poster that for most "non-professional" requirements, Onshape's free tier is all one should need; sure, the documents remain public if you don't pay. It's prohibitively expensive to maintain a technology stack with the complexity, scale, and performance that Onshape does, and that costs a lot of money. :)


Documents being public is one thing. But I remember you guys changed the ToS at one point (I just looked it up: in 2016), and the verbiage is that Onshape owns the IP of those documents, which is a huge no for me. I'd rather pay for the SolidWorks hobbyist license at $100 a year, which comes with 3DExperience and performs very similarly to Onshape.


I don't know where you see such a line in Onshape's ToS. Can you point me to it? IANAL (and speak only in an individual's capacity who is hopefully reading the same ToS), but the public documents you create as a free user are essentially in "public domain", so even though you still 'own' it, you grant a broad, "worldwide, royalty-free and non-exclusive license to any End User or third party" to use the intellectual property within that document "without restriction". This includes the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of it.


You can get SolidWorks for $25/year if you're a student or vet. It's fully featured, but with a non-commercial license. I highly recommend it.


Taking the chance: as a hobbyist with a decent CNC and no intent of using it for commercial work, Linux "support" drove me from Fusion to Onshape. CAM is driving me back to Fusion.

Please consider pushing the idea of hobbyist-level CAM in Onshape within your company. I know there's not much revenue in us hobbyists, but I'd gladly pay $20-50 per month for such a license. At least that's more money than zero. :)


I'm sorry; as far as I know, the leadership is pretty clear on that. For the foreseeable future, CAM will remain a Pro-only feature.


I gravitate toward open-source, native, or self-hosted applications. But I have to say that Onshape is really neat.

I just do relatively simple objects for 3D printing, every few months. And Onshape was easy to get into.

From BRIO connectors for my nephew's wooden train set, to bookbinding helpers for a coworker, to cases for LED controllers... easy peasy.

Just fill patterns and text are always a struggle.

But I just know that at some point, Onshape will start charging us freeriders.


> But I just know, at some point Onshape will start charging us freeriders.

I don't know about that; maybe, maybe not. But I don't know of any such plan, in the short term at least. It gives university students a free 'professional' license, so there's that too.

> Just fill pattern and text are always a struggle.

Feel free to create a support ticket about your pain points. Everyone can easily do that, and Onshape is surprisingly more responsive to support tickets than many other companies.


Sundar Pichai. That's what changed. He's one of the most uninspiring tech leaders of today, who just wants to run an already-established business with more and more ads and some AI sprinkled on top of everything. And cost cutting. That's all; that's his entire vision.


It's him and CFO Ruth Porat. Both arrived in the same year. The latter is your stereotypical banking type.


Then why are you using it? I tried using Gemini once on my Pixel 6. It couldn't play music on YouTube Music via voice instructions, so I switched back to Google Assistant. I'll try it again in 6 months. :)


Isn't that obvious? They tried to switch back to Google Assistant, but each time they asked Gemini, it said it couldn't do that yet!


My main use of Google Assistant is to add things to my shopping list while I'm cooking. I switched to Gemini, with which I could have a human-like conversation about any topic, but it just couldn't add items to the shopping list. I switched back. I didn't need a chatbot to keep me company; I needed an assistant.


> Then why are you using it?

I'm not; I pretty much just accepted that Google doesn't care about usability whatsoever, and I haven't prompted it in a very long time.

To be clear, the only time I've ever used it was via "OK Google" in contexts where I'm unable to interact with the phone directly, i.e., while driving. When it doesn't work, you learn that you can't start driving before queueing up the navigation anymore. The voice assistant was a nice feature, but not important enough to waste my time figuring out which feature they opted me into and how to get back out of it.


In my case it just kinda... switched over at some point, and frankly I didn't care enough to figure out how I might switch it back (if I even could). I had a similar frustration to the GP's: it stopped working for 100% of the queries I used to use it for.

That said, at some point it started working better, but there was a good 6-12 months where it was a tire fire.


> Keep data transforms and algorithmic calculations in functional style

What annoys me greatly, though, is kids coding various 'fancy' functional paradigms for data transformation without realizing the performance implications, still thinking they've done the 'smarter' thing by transforming the data multiple times and turning a simple loop into three or four loops. Example: Array.map.filter.map.reduce. And when you talk to them about it, they've learned to respond with another fancy term: "that would be premature optimization". :|
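For a concrete (hypothetical) illustration of the multi-pass concern, compare a chained pipeline with the single loop it replaces:

```javascript
// Hypothetical example: sum the even squares of [1..5].
const nums = [1, 2, 3, 4, 5];

// Chained version: three traversals plus the reduce, and each of
// map/filter allocates a fresh intermediate array.
const chained = nums
  .map(n => n * n)          // [1, 4, 9, 16, 25]
  .filter(n => n % 2 === 0) // [4, 16]
  .reduce((acc, n) => acc + n, 0);

// Single-loop version: one traversal, no intermediate arrays.
let looped = 0;
for (const n of nums) {
  const sq = n * n;
  if (sq % 2 === 0) looped += sq;
}

console.log(chained, looped); // 20 20
```

Same result either way; the difference is the number of passes and allocations, which only matters once the arrays get large or the code is hot.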


This is just an unfortunate consequence of how map and filter are implemented via iterators.

If you work with transducers, the map-filter-map-reduce is still just one single loop.
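A minimal transducer sketch (names made up for illustration; libraries such as Cognitect's transducers-js provide production versions): map and filter become "reducer transformers" that compose into a single reducer, so the whole pipeline runs in one reduce pass.

```javascript
// map and filter as reducer transformers.
const mapT = f => step => (acc, x) => step(acc, f(x));
const filterT = pred => step => (acc, x) => (pred(x) ? step(acc, x) : acc);
const compose = (...fns) => x => fns.reduceRight((v, fn) => fn(v), x);

// square -> keep evens -> halve, fused into one reducer,
// with no intermediate arrays.
const xform = compose(
  mapT(n => n * n),
  filterT(n => n % 2 === 0),
  mapT(n => n / 2),
);

const sum = (acc, x) => acc + x;
const result = [1, 2, 3, 4, 5].reduce(xform(sum), 0); // 2 + 8
console.log(result); // 10
```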


A smart enough compiler can and will fuse your loops / iterators.

GHC does, for example.


Relying on a sufficiently smart compiler to turn bad algorithms into good algorithms is unreliable.

It's much better to just have a way of expressing efficient algorithms naturally, which is what transducers excel at.


That kind of thing really depends on the language. Some of the stronger functional languages like Haskell have lazy evaluation, so that operation won't be as bad as it looks. But then you really need to fully understand the tradeoffs of lazy evaluation too.
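JavaScript is strict, but generators give a rough feel for the lazy behaviour described above; a hand-rolled sketch (all names hypothetical): each element flows through the whole pipeline before the next one is produced, so no intermediate arrays are built and the source can even be infinite.

```javascript
// Lazy map/filter over any iterable, plus an infinite source.
function* mapLazy(f, iter) { for (const x of iter) yield f(x); }
function* filterLazy(p, iter) { for (const x of iter) if (p(x)) yield x; }
function* naturals() { for (let n = 1; ; n++) yield n; }

// First three even squares, pulled on demand from an infinite source.
const pipeline = filterLazy(n => n % 2 === 0, mapLazy(n => n * n, naturals()));
const firstThree = [];
for (const v of pipeline) {
  firstThree.push(v);
  if (firstThree.length === 3) break;
}
console.log(firstThree); // [4, 16, 36]
```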


Or list fusion; but yes, Haskell will optimise map x (map y lst) to map (x . y) lst.

https://stackoverflow.com/questions/38905369/what-is-fusion-...


I don't know how things are implemented in other languages, but in C# 9 these operations are optimized.


There are ways to keep functional transformations and immutable data structures efficient. Copy-on-write, unrolling expressions into loops, etc. Proper functional languages have them built into the runtime - your clean map-reduce chain will get translated to some gnarly, state-mutating imperative code during compilation. In non-FP or mixed-paradigm languages, where functional building blocks are just regular library functions (standard or otherwise), map-reduce is exactly what it says on the tin - two loops and a lot of copying; you want it fast, you have to mostly optimize it yourself.

In other words, you need to know which things in your language are considered to be language/compiler/runtime primitives, and which are just regular code.


Most languages don't have these facilities at all, so you need to be really careful about what you are doing. This works "fine" with test data, because your test data is usually a few hundred items max. A few years back, people at our firm built all the data filtering into the frontend, to keep the "backend clean". That worked fine in testing. In production, with 100k rows? Not so much.

Even in C# it depends on the LINQ provider: if you are talking to a DB, your queries should be optimized. LINQ to Objects doesn't do that, and repeated scanning can kill your performance, e.g. repeated filtering on large lists.


I am talking about IEnumerable, not IQueryable: https://blog.ndepend.com/net-9-0-linq-performance-improvemen...


Are they? LINQ is usually slower than a for loop.


Quite often it compiles to the same IL. Would you like to provide some godbolt examples where it's significantly different?


IL is always different since it's a high-level bytecode. The machine code output is different too. Now, with guarded devirtualization of lambdas, in many instances LINQ gets close to open-coded loops in performance, and it is very clever in selecting optimal internal iterator implementations, bypassing deep iterator nesting, having fast-paths for popular scenarios, etc. to achieve very good performance that can even outperform loops that are more naive, but unfortunately we're not there yet in terms of not having a baseline overhead like iterators in Rust. There is a selection of community libraries that achieve something close to this, however. I would say, LINQ performance today gets as close as it can to "it's no longer something to be worried about" for the vast majority of codebases.


Two or so years ago I was developing a library, and I remember that switching from something simple like First or FirstOrDefault to a for loop made a measurable difference under BenchmarkDotNet.

Then I found that it was common knowledge that LINQ is slower, even among people on C#'s Discord.


Aren't those all linear operations?


Yes, I wrote that quickly without thinking. Even if it doesn't change the complexity, it's still three or four times the operations.


Why would your example be O(n³)?


Oh yes, sorry, I meant to write 3 * O(n), which, though it doesn't change the order, is still three times the operations. The example I was remembering was doing filters 'inside' maps.


So... O(n)? Leaving aside the fact that "3 * O(n)" is nonsensical and not defined, recall f(x) is O(g(x)) if there exists some real c such that f(x) is bounded above by cg(x) (for all but finitely many x). Maybe you can say that g(x) = 3n, in which case any f(x) that is O(3n) is really just O(n), because we have some c such that f(x) < c(3n) and so with d = 3c we have f(x) < dn.

It's not the lower-order terms or constant factors we care about, but the relative rate of growth of space or time usage between algorithms of, for example, linear vs. logarithmic complexity, where the difference in the highest order dominates any other lower order terms or differences.

What annoys me greatly is people imprecisely using language, terminology, and/or other constructs with very clearly defined meanings without realizing the semantic implications of their sloppily arranged ideas, still thinking they've done the "smarter" thing by throwing out some big-O notation. Asymptotic analysis and big-O is about comparing relative rates of growth at the extremes. If you're talking about operations or CPU or wall clock time, use those measures instead; but in those cases you would actually need to take an empirical measurement of emitted instruction count or CPU usage to prove that there is indeed a threefold increase of something, since you can't easily reason about compiler output or process scheduling decisions & current CPU load a priori.


I do understand 3 * O(n) is just O(n), thanks. I was just clarifying my initial typo. However, it's still three/four times the iterations needed - and that matters in performance critical code. One is terminology, and the other is practical difference in code execution time that matters more, and thus needs to be understood better. You might not 'care about constant factors' but they do actually affect performance :).


> Maybe you can say that g(x) = 3n, in which case any f(x) that is O(3n) is really just O(n)...

In practice 3x operations can make a world of difference.

3x SQL queries, 3x microservice calls, missing batching opportunities, etc.

Sorry but this kind of theoretical reasoning wouldn't move a needle if I'm reviewing your PR.


> Sorry but this kind of theoretical reasoning wouldn't move a needle if I'm reviewing your PR.

If this were a PR review situation I would ask for a callgrind profile or timings or some other measurement of performance. You don't know how your code will be optimized down by the compiler or where the hotspots even are without taking a measurement. Theoretical arguments, especially ones based on handwavey applications of big-O, aren't sufficient for optimization which is ultimately an empirical activity; it's hard to actually gauge the performance of a piece of code through mere inspection, and so actual empirical measurements are required.


Callgrind to measure impact of performing 3x more database queries or 3x more microservices calls?

I don't block PRs because of micro optimizations but my examples aren't.


I recall looking at New Relic reports of slow transactions that suffered from stacked n+1 query problems because the ORM was obscuring what was actually going on beneath the hood at a lower level of abstraction (SQL).

My point is it's often difficult to just visually inspect a piece of code and know exactly what is happening. In the above case it was the instrumentation and empirical measurements of performance that flagged a problem, not some a priori theoretical analysis of what one thought was happening.


That is premature pessimization. They have no idea what premature optimization is, as nobody has done that kind of optimization by hand since 1990 or so. Premature optimization is about manually unrolling loops or writing inline assembly: things any modern compiler can do for you automatically.


> Premature optimization is about manually unrolling loops or doing inline assembly - things any modern compiler can do for you automatically.

If compilers consistently did this, projects like ffmpeg wouldn’t need to sprinkle assembly into their code base. And yet.


ffmpeg did that AFTER profiling proved it was needed.

No compiler is perfect, if you need the best performance you need to run a profiler and see where the real bottlenecks are. I'm not sure if ffmpeg really needs to do that, but I trust they ran a profiler and showed it was helpful (cynically: at least at that time with the compilers they had then, though I expect they do this often), and thus it wasn't premature optimization.

Regardless, compilers mostly get this right enough that few people bother with that type of optimization. Thus almost nobody even knows what premature optimization is, and so they think the wrong thing. "Premature optimization" is not an excuse to choose an O(n) algorithm when an O(1) one exists, or some such.


Really? I mean, it 'saves your time' by not making you keep waiting, so you can do something else in that time. I don't know how that is not useful. Even if I'm not doing anything world-changing in those 3 minutes, it saves me some stress.


In the right panel on the page: $1,996,583 total bounties paid.

I don't know if I should feel happy or concerned about the security policies of a company that has already paid out 2 million USD in bug bounties. :)


You should be far more concerned about the ones that have given $0.


I want to offer rewards (my site is on H1), but they require that I sign up for a minimum $50k/yr subscription to enable that feature. I don't think that's a reason for concern; it just means it's a smaller company.


$50k a year is insane. It's just a messaging platform. Advertise your own bug bounties and just have them email you; voila.


If you get a lot of reports, you'll probably be paying at least one person $50k/yr to manage it.


It means you can afford to be more secure.

Poverty is just a basic gateway. I imagine hackers have to do some calculus on bigger vs. smaller, since larger targets are usually more valuable, but smaller ones are likely less secure.


Yes, but of course that takes more data without their compression, and one eventually has to pay more for storage, as expected; but at least that option is there.

