Hacker News | jgord's comments

highly recommend xsv by BurntSushi [ csv parser / wrangler written in rust ]

It's retired in favor of qsv and xan: https://github.com/BurntSushi/xsv

Investors are not investing in small startups much, perhaps due to high interest rates.

Now is a great time to invest in small startups applying ML to real-world problems; doing so will result in a lot of useful new tech being built.

I'd make a lot of small bets on early-stage startups of this kind - and perhaps top up when they get a POC, and again at the MVP / early-traction stage.

It would almost be worth buying a building in Danang, Vietnam, and hosting small teams for 3 months at a time to get more bang for your buck - ie. rent arbitrage could quadruple the effective runway due to the low cost of living.
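The runway arithmetic here is simple; a toy sketch, with all burn-rate figures being hypothetical assumptions rather than sourced numbers:

```python
# Toy runway comparison: same funds, different monthly burn.
# All figures below are assumed for illustration only.

def runway_months(funds: float, burn_per_month: float) -> float:
    """Months of runway at a constant monthly burn."""
    return funds / burn_per_month

funds = 400_000        # assumed seed cheque for a small team
high_cost_burn = 80_000   # assumed monthly burn in a high-cost city
low_cost_burn = 20_000    # assumed monthly burn with rent arbitrage

print(runway_months(funds, high_cost_burn))  # 5.0 months
print(runway_months(funds, low_cost_burn))   # 20.0 months, i.e. ~4x the runway
```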


I wrote a very light-touch web list maker, so people (myself included) can have a simple, fast way to make a list of stuff and share the URL.

http://pho.tiyuti.com

Just lists of title, pic, blurb, and URL.


One of my many side projects was a thing called ODO .. a linux box hooked up to the TV, running a web browser that served an intranet web page where you could browse the media and file tree, and thus share files.

Could also use it to play media - so a phone or tablet could act as a remote control from anywhere within wifi reach, and play music on the main TV screen / speakers or on the local device.

Was pretty cool, but I didn't have the funds to commercialize it.


I feel the same frustration, seen from another angle: I think we can use current ML techniques to solve 3 or 4 hard problems in 3D reconstruction, and doing so would unlock a vast amount of value - we could turn lidar scans and photos of buildings and industrial plants into accurate 3D models automatically.

BUT I think the bottleneck is _funding_ of small early risky startups to do the needed engineering work.

My notes on this: https://quantblog.wordpress.com/2025/10/29/digital-twins-the...

LLMs and GPU datacenters attract all the big money, while the mid-size and small VCs seem to be leaving their money in the bank earning high interest rates, unless there is a slam-dunk opportunity with guaranteed traction and MRR growth.

We seem to be betting that only the large companies will innovate, when historically this has not been the case - DeepSeek is a recent counterexample.


My take, after working on some algos to detect geometry from point clouds, is that it's solvable with current ML techniques, but we lack early-stage VC funding for startups working on this:

https://quantblog.wordpress.com/2025/10/29/digital-twins-the...

I have no doubt Fei-Fei and her well-funded team will make rapid progress.


We think alike. Have you tried to replace the point cloud of a white wall with a generic white wall automatically?


busywork ... but maybe good marketing - people somehow believe that ISO has some relationship to quality.


People with absolutely no technical clue who only know "ISO 9001" equate "ISO" with quality initiatives and certifications.

What people with a better clue sometimes wrongly equate ISO with is interoperability.

ISO standards can help somewhat. If you have ISO RISC-V, then you can analyze a piece of code and know: is this strictly ISO RISC-V code, or is it using vendor extensions?

If an architecture is controlled by a vendor, or a consortium, we still know analogous things: like does the program conform to some version of the ISA document from the vendor/consortium.

That vendor has a lot of power to take it in new directions though without getting anyone else to sign off.
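Such a conformance check could be sketched as a filter over a disassembly listing. A minimal illustration - the baseline mnemonic set below is a small illustrative subset, not a complete RV64I listing, and "xv.dotp" is a made-up vendor instruction:

```python
# Hypothetical sketch: flag instruction mnemonics outside a chosen baseline ISA.
# BASE_RV64I here is an illustrative subset, not the full base instruction set.

BASE_RV64I = {
    "add", "addi", "sub", "lw", "sw", "ld", "sd",
    "beq", "bne", "jal", "jalr", "lui", "auipc",
}

def nonconforming(mnemonics):
    """Return mnemonics not in the baseline set (possible vendor extensions)."""
    return [m for m in mnemonics if m not in BASE_RV64I]

# A disassembly containing a fictitious vendor instruction "xv.dotp":
listing = ["addi", "ld", "xv.dotp", "beq"]
print(nonconforming(listing))  # ['xv.dotp']
```

A real tool would of course work from the full ratified extension lists rather than a hand-written whitelist.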


> is this strictly ISO RISC-V code, or is it using vendor extensions

I doubt it - the ISO standard will still allow custom extensions.


A standard 64-bit+DSP RISC-V would go a long way toward undoing the fragmentation damage caused by "design by committee".

..it was the same mistake that made ARMv6 worse / more complex than the modern ARMv7/8/9. =3


As if we have never seen design-by-committee damage coming from ISO?

Have you heard of this C++ thing? :)


> Have you heard of this C++ thing?

The STL was good, but Boost proved a phenomenon...

https://en.wikipedia.org/wiki/Second-system_effect

ISO standards are often just a sign that process people are in control =3


Good marketing, this could open up more large investment into RISC-V.


Be honest, what does RISC-V offer that 10-year-old AArch64 doesn't already provide?

RISC-V is still too green, and fragmented standards always look like a clown car of liabilities to business people. =3


What does <open source anything> offer that trusty old <proprietary burden> doesn't already provide?


I would agree that for FPGA soft-CPUs, RISC-V is an obvious choice.

But in general, the next questions will be which version you deployed, and which cross-compiler you used. All the documentation people search for will have caveats, or simply offer contradictory guidance.

The problem isn't the ISA, but the ill-fated trap of trying to hit every use-case (design-variant fragmentation). ARMv6 made the same mistake, and ARMv8/9 greatly consolidated around the 64-bit core design.

Indeed, an ISO standard may help narrow the project scope, but I doubt it can save the designs given the behavior some of its proponents have shown. =3


People complain about fragmentation, but I feel like they are missing the forest for the trees.

In the past if you didn't find something you needed, you'd design your own. Now you just tweak RISC-V.

I mean "12 variants of RISC-V" is actually less fragmentation than "RISC-V and 11 others".

As long as there is a stable core to target, that is all that matters for mainstream adoption, and profiles and distros are already there with RVA23.


Sure, but what we saw was most software simply disabling the advanced vendor-specific features in ARM, and still compiling only stable code around the core IP.

This is an important phenomenon that committee consensus couldn't reconcile. =3

https://en.wikipedia.org/wiki/Second-system_effect


Less legal risk, ARM has grown litigious and wants a bigger piece of the pie.


IP costs real money, and consumers usually don't care how people split up their pies.

100% of a small pie is worth far less than a slice from a large pie. I've met people who made that logical error, and it usually doesn't end well. =3


While the sentiment is a bit harsh, the performance gap noted is real. RISC-V has a ways to go to catch up to ARM64, and then finally AMD64, but if the Apple M1 taught us anything, it's possible.


RISC-V shouldn't try to catch 40 years of spiral-development, but rather focus on something people can gather momentum around.

amd64 wasn't a great design, but it provided a painless migration path to 64-bit for x86 developers. Even Intel adopted this competitor's architecture.

I like the company making a multi-core pseudo-GPU card around RISC-V + DSP cores, but again, copying NVIDIA's bodged-on mailbox-style hardware is a mistake. It is like the world standardized around square wheels as a latency joke or something... lol

Making low-volume bespoke silicon is a fool's errand, and competing with a half-baked product in an established market means a failed company sooner or later.

I think people are confusing what I see with what I would like to see. An open ISA would be great, but at this point I can't even convince myself I'd buy a spool of such chips. =3


Correct me if I'm wrong but I'd imagine the performance gap has almost nothing to do with RISC-V and everything to do with implementation.


Like everything in tech... the answer is "it depends": the barrel shifter in ARM is considered energy-efficient. Also, most RISC design concepts use more numerous, simpler instructions at higher clock rates, and don't rely on mystery microcode to pull off the same workloads as amd64 etc. ARMv8/9 is quite good, but partly because a lot of the unused legacy chip features were stripped out.

RISC-V had potential, but is still too fragmented... It is the value proposition to companies that is a problem, and in the current consumer market it will likely meet the same fate as PowerPC. =3

"Why the Original Apple Silicon Failed"

https://www.youtube.com/watch?v=Tld91M_bcEI


>but is still too fragmented...

Care to elaborate? What application processors are out there not following the application profiles?

As far as I am aware, there is not even one.


The point was:

1. Cores <= 32bit are effectively dead in the OS space, and wasting silicon chasing legacy markets was unwise

2. The ISA "standard" is actually a set of modular features, and Imagination Technologies has already paired its GPU IP with a RISC-V SoC. The SiFive X280 is a nice chip, but also focused on bespoke customers' needs rather than general product design.

3. Fragmenting the documentation, software, and integration resources across numerous variants of each RV32I, RV64I, and RV128I base cores was very unwise. Calling them all RISC-V was classic silliness.

https://en.wikipedia.org/wiki/RISC-V#ISA_base_and_extensions

4. Design by committee is difficult, and rarely ends well. They should have chosen a _single_ base core with the greatest traction (64-bit), and a set of standard popular features to span as many consumer use-cases as possible. Then quietly shoved every other distraction into a box, and tossed it off a bridge. An ISO standard is unlikely to fix this very old issue. =3

https://en.wikipedia.org/wiki/Second-system_effect


>1. Cores <= 32bit are effectively dead in the OS space, and wasting silicon chasing legacy markets was unwise

RISC-V originally didn't plan on 32bit at all. It exists because there is market interest.

>2. The ISA "standard" is actually a set of modular features, and Imagination Technologies has already paired its GPU IP into a RISC-V SoC. The SiFive X280 is a nice chip, but also focused on bespoke customers needs rather than general product design.

This modularity is a key feature for those who need it; they use it, and would never have chosen RISC-V if it didn't have it.

This same modularity existing doesn't in any way hinder the ecosystem efforts centered around RVA23.

>3. Fragmenting the documentation, software, and integration resources across numerous variants of each RV32I, RV64I, and RV128I base cores was very unwise. Calling them all RISC-V was classic silliness.

RV32I, RV64I and RV128I are entirely separate ISAs. It is thanks to this that the focus can be on RV64I, unaffected by the others.

>4. Design by committee is difficult, and rarely ends well. They should have chosen a _single_ base core with the greatest traction (64bit), and a set of standard popular features to span as many consumer use-cases as possible. Then quietly shoved every other distraction into a box, and tossed it off a bridge. An ISO standard will unlikely fix this very old issue. =3

Application processors and the common software ecosystem (Ubuntu, Android and so on) have consolidated around RVA23.

The "distractions" are a feature; the likes of RV32E and bespoke chips with custom extensions can exist and not affect the application profiles such as RVA23 and the ecosystem of software and hardware built around them.


We shall see how this plays out in the market. However, currently RISC-V's competitors are other RISC-V variants along with the established market options. Resources are finite, and everyone's pet use-case will probably bleed this ISA to death sooner or later.

Currently, RISC-V offers few advantages over the ARMv8/9 ecosystems in the consumer space, and while that may change someday... few will likely notice in the mess of options already spamming the community.

Indeed, groups have tried to consolidate a viable standard subset (even an ISO proposal), but these will also likely fail given that they contradict people's pet use-cases. Note, the silicon-fab business is about sustained sales of a replicated standard product, not clown volumes of 100k bespoke chips.

"Letting the dog drive..." product design was also unwise. =3


From my vantage point wrangling algos to extract 3D geometry from point clouds - I really think this is a domain where ML can solve the missing pieces of the puzzle in the next 18 months.

The new tech will unlock a lot of value, but we need an opinionated investor to fund the engineering effort to get us there.

Funding is the bottleneck, not talent or fundamental science.

tl;dr We need a couple of visionary angel investors to fund these 3D skunkworks projects and bring the real world onto the internet.


Basically true, but there are other potential sources of growth:

- using technology to unlock cheaper energy

- using technology to automate boring manual labor

- using technology to extend healthy lifespan

Given the demographic collapse and ageing populations in most 'developed' countries, we need to look at these other ways of generating economic growth.


Before I read the article, I'll summarize the facts that seem to be hard and true about climate:

- we are nearing or at +1.5C above pre-industrial baseline

- human CO2 emissions from carbon burning are at a maximum and on a likely long plateau

- mean temp is rising by around +0.3C per decade

- we will be nearing +2.0C in around 15 years, 2040 give or take

- warming is mainly caused by us humans burning carbon, emitting CO2 and some CH4

- if we reach net-zero, we will be at peak CO2 and thus peak heat, for a long while
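The "+2.0C around 2040" estimate above follows directly from the stated rate; a quick arithmetic check:

```python
# Quick check of the timeline claimed above:
# at ~+0.3C per decade, how long from +1.5C to +2.0C?
current_anomaly = 1.5   # degrees C above pre-industrial baseline (stated above)
target_anomaly = 2.0
rate_per_decade = 0.3

years_to_target = (target_anomaly - current_anomaly) / rate_per_decade * 10
print(round(years_to_target, 1))  # ~16.7 years, i.e. roughly 2040
```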

In addition, the only economically viable way to bring down the temp seems to be deliberate pollution by emitting sulphur or other particles - aka Solar Radiation Management - to brighten clouds and reduce heat absorption by the ocean. Volcanoes and shipping fuels have essentially proven that this brings down the temperature, in the short term.

We geo-engineered our way into this hot mess, and we will need to geo-engineer our way out of it.

If the temp reaches +2.5 or +3C .. I think that means quite a lot of crop failure, forced migration, geopolitical tension and an unstable food supply .. and death for a large number of humans seems to follow logically from that.

So, now I'll look at the article to see if any of these tough truths were mentioned .. sorta-kinda not-so-much; it seems he thinks things are not that urgent. ?!?

