Hacker News | DesertVarnish's comments

Largely but not entirely a data problem; specifically poor captioning. High quality captioning makes such a big difference.


The buffer pool and CoW are honestly really nice patterns for this kind of canvas-heavy setup with React, where a custom reconciler isn't a good fit.
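For what it's worth, the pattern itself is language-agnostic. A minimal sketch of the buffer-pool half in Python (names and the fixed buffer size are made up for illustration; a real canvas app would pool ImageData/ArrayBuffer objects instead):

```python
class BufferPool:
    """Reuse fixed-size scratch buffers instead of reallocating per frame."""

    def __init__(self, size):
        self.size = size
        self.free = []          # buffers returned by callers, ready for reuse

    def acquire(self):
        # Hand back a recycled buffer if one is available, else allocate.
        return self.free.pop() if self.free else bytearray(self.size)

    def release(self, buf):
        # Caller is done with this buffer; keep it for the next acquire().
        self.free.append(buf)


pool = BufferPool(16)
a = pool.acquire()
pool.release(a)
b = pool.acquire()      # same object as `a`, no new allocation
```

Copy-on-write complements this: hand out the same underlying buffer to multiple readers and only pull a fresh one from the pool when someone actually writes.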


Difficult engineering problem, but working from first principles suggests the energy requirements are not insurmountable. The round-trip efficiency is worse than batteries but much better than photosynthesis.

Terraform Industries (and others, like Synhelion) have a plausible, if slightly optimistic, target of being price-competitive with fossil methane in the early 2030s.

Some places with very cheap to extract hydrocarbons like Saudi Arabia may be able to compete for a very long time, but there are many futures where most of humanity's hydrocarbon consumption (including the ones used for the chemical industry, plastics, etc) derives from atmospheric carbon.

And this can happen fast: the world (mostly China) has developed a truly massive manufacturing capacity for PV.


> Terraform Industries (and others)

I'd seriously consider taking a long bet that these companies turn out to be better at converting investor capital into employee salaries, for a finite period of time, than they are at converting atmospheric CO2 into natural gas.

If such a technology were possible, it would be far better to start with carbon capture from existing emitters, where the CO2 concentration is easily 3 orders of magnitude higher.


For hydrocarbon synthesis, hydrogen production from electrolysis dominates the energy usage, along with driving the Sabatier process. DAC might be like 5-10%.
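A rough back-of-envelope check, using assumed round numbers that are not from the thread (≈50 kWh of electricity per kg of H2 from electrolysis, methane LHV ≈13.9 kWh/kg):

```python
# Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O
M_CH4, M_H2 = 16.0, 2.0                  # molar masses, g/mol
h2_per_ch4 = 4 * M_H2 / M_CH4            # kg of H2 needed per kg of CH4

ELECTROLYSIS_KWH_PER_KG_H2 = 50.0        # assumed round number
CH4_LHV_KWH_PER_KG = 13.9                # lower heating value of methane

elec_in = h2_per_ch4 * ELECTROLYSIS_KWH_PER_KG_H2   # kWh in per kg CH4
efficiency = CH4_LHV_KWH_PER_KG / elec_in

print(f"{h2_per_ch4:.2f} kg H2 per kg CH4")          # 0.50
print(f"{elec_in:.0f} kWh electricity per kg CH4")   # 25
print(f"electricity -> fuel efficiency ~{efficiency:.0%}")
```

So electrolysis alone caps you around the mid-50s percent before the extra 5-10% for DAC and other losses, consistent with "worse than batteries, much better than photosynthesis."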

Higher CO2 concentration is better but certainly not needed; it doesn't make or break the economics.


I'm not going to argue over the numbers, but any business which ignores such an obvious win/win scenario is not really serious about achieving economic criticality. It would allow a power plant, iron ore plant, cement producer, what have you, to make claims about their environmental credentials while simultaneously improving the efficiency of the process.


My wife has been working on a pixel-accurate recreation of MacPaint on the web, and one of the things I've learned from on-and-off helping with it is just how much of the magic in it is actually handled by QuickDraw.

Also, the text styling has a lot more complexity than you would expect.


QuickDraw's regions always impressed me. They could cover an arbitrary area, and most operations could be masked to a region. This was important for efficiently updating a window behind another: just mask to the visible portion and draw its contents normally.


Me as well.

As I recall, regions were essentially something akin to an array of run-length encodings, one per scan line (where, additionally, there was a sentinel or some way to indicate that the next n rows are the same as the previous). The fun part is then writing a set of routines to union, difference, etc. pairs of these RLE region objects (and here I use the word 'object' loosely).
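A toy illustration of the idea (this is not QuickDraw's actual data format, just a hypothetical per-scanline run-list union in Python):

```python
# A "region" here is a dict mapping scanline y -> sorted,
# non-overlapping (start, end) runs on that line.

def union_runs(a, b):
    """Merge two sorted run lists for a single scanline."""
    merged = []
    for start, end in sorted(a + b):
        if merged and start <= merged[-1][1]:
            # Overlapping or touching run: extend the previous one.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def union_regions(r1, r2):
    """Union two regions scanline by scanline."""
    return {y: union_runs(r1.get(y, []), r2.get(y, []))
            for y in set(r1) | set(r2)}
```

Difference and intersection follow the same shape: walk the two run lists per scanline and emit the appropriate spans. The "next n rows are the same" trick then falls out naturally as a compression pass over identical adjacent scanlines.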


By "pixel accurate", do you mean the user interface or the rendering? Is she able to leverage the HTML 2D Canvas API or does she have to go lower level than that?


I’m using a buffer pool and raw bitmaps with calls to drawImage - the canvas API doesn’t play very well when you’re trying to do things in a pixel-perfect way.

I’ve gone on a lot of rabbit holes digging into original source code and dev manuals for this project, hopefully one day I’ll write some of it up somewhere. I think what’s been interesting is iterating towards the most elegant solutions has often meant ending up with pretty similar approaches to what Atkinson originally used.


MSPaint (which I wish MS would officially open-source) heavily relies on GDI too.


The energy densities are generally at least an order of magnitude lower and the cost is at least an order of magnitude higher. They serve different purposes.


My understanding is that fluoridation of municipal water supplies serves a different purpose than fluoride in toothpaste. The fluoride in the water supply is low-concentration and is intended to cause teeth to develop with stronger cavity resistance; toothpaste is a much more superficial and immediate measure.


[flagged]


And apparently only available from China for municipalities now. So Chinese mind control.

https://www.wcvb.com/article/chinese-fluoride-in-mass-water-...


I thought it was in the water to rot your mind! Wasn't there a link between fluoride and lower IQ in children?


No.


Funny thing is, I first read about it on Hacker News.

https://news.ycombinator.com/item?id=36106925


Not IQ directly, but rather higher fealty to the QAnon joke, a phenomenon correlated with lower IQ.


Dammit, you're Gumby!


Unavailable at the moment (HN hug of death?), Wayback Machine: https://web.archive.org/web/20240204131916/https://esoteric....



Hi, I wrote the piece. Site should be responding well again now.


> Service Unavailable

> HTTP Error 503. The service is unavailable.

It is not well at present.


I remember looking into a similar problem and learned that on desktop it was just an encrypted SQLite DB. It was readable with the standard SQLite library.

Not sure of the situation for the mobile backups though!
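Assuming you have a decrypted copy (the live Signal Desktop database is SQLCipher-encrypted, so you'd need the key first), the standard library really is enough to poke around. A small sketch:

```python
import sqlite3

def list_tables(path):
    """List the tables in a SQLite database file.

    Works on a plain (decrypted) database; the stock sqlite3 module
    cannot open a SQLCipher-encrypted file directly.
    """
    con = sqlite3.connect(path)
    try:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        con.close()
```

From there it's ordinary SELECT queries against whatever schema the app uses.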


I just did some research and found a GitHub repo. Will try it later.

https://github.com/bepaald/signalbackup-tools


They really aren't the same.

Crypto didn't and still doesn't have the same immediate utility. The value proposition just wasn't there to justify the money and attention it was getting. Bitcoin in particular was a prototype that got mythologized into being "digital gold" despite its many, many technical limitations.

Diffusion models and LLMs work today and make possible things that were science fiction five years ago, and have shown tremendous and exciting progress in the past 18 months.


Your last paragraph is the hype.

I haven’t seen any effective uses of the current AI tech that couldn’t have been done for the same cost by humans so far. Images, text, code; I haven’t seen anything but toys built yet. The coding tools might work okay for your average HTTP API, but it’s not going to develop novel algorithms to control building HVAC systems to reduce energy or demand, for example. It’s not going to code much more efficient search algorithms, or faster compression. Maybe someday, but so far everything produced by AI seems to have huge problems, whether it be drawing realistic hands, knowing the factual truth of certain questions, or introducing subtle bugs in complex code.


That is because you are comparing it to the cost of a professional.

I personally look at it in a different way. Now a rando on the street, knowing nothing about anything, can pop out art rivaling an experienced illustrator's. A completely clueless wet-lab scientist can coax Copilot or GPT-4 into cobbling together an automated data-analysis pipeline in a language they know nothing about.

To a professional, those applications are toys, easily made and taking little effort. But to someone who does not know anything about the work, they are amazingly useful and open up many possibilities. That is the power and the use case for AI right now. They are tools to augment productivity, not replace it. And in that regard, it is very successful imo.

Whether it will progress to the point where it can outright handle everything from start to finish or not is another question.


> arts rivaling an experienced illustrator

Maybe to a lay observer, but that art will not be new, very creative, or technically perfect in any way, sorry.

> lab scientist… data pipeline

No please, they already mess up statistics and code enough, causing bad papers! They don’t know how to code and thus cannot know if that code is correct.

Edit: (I’m posting “too fast” so here’s my last response here for now:)

I’ll concede on point one there, art doesn’t have to be perfect for most uses.

On point two, I think every HN reader has seen how very smart scientists can mess up stats and data even when they write their own code. I’m not saying they are dumb, I’m saying I don’t trust those same folks to be able to find the mistakes an AI makes. Obviously I’m painting with a broad brush here, not every scientist is bad at that, but a large number are, and the current gen AI isn’t trustworthy enough, in my opinion, to let untrained scientists use it and produce important work based on that data.

I would love to eat my words here someday, but this is a hype cycle and although impressive, most AI today is better for marketing and fund raising than serious use.


> To a professional, those applications are toys, easily made and take little effort.

I acknowledged that much. Nothing the AI produces will be a masterpiece, but it is serviceable. The alternative is hiring a contractor or getting an intern and spending more money for a slightly better result, which in rare cases might even be worse. Not many places are willing to pay that extra cost.

> They don’t know how to code and thus cannot know if that code is correct.

A bit elitist there. These are still highly educated scientists. They might not know how to code, but to say they can't evaluate the output of the code is a bit much. You might not know how to edit the genes in a fish, but you can tell whether the fish is glowing, right?


I'm working on integrating AI into a product right now: IMO you want to look at what's happening now as more of a shift in the cost to develop and maintain - which is itself going to create qualitative differences.

I can now have 1-2 developers stand up ML-backed services at a level of quality that a few years ago would have required an ML + engineering team to build, along with an ongoing tuning burden. Now that the AI is "good enough" out of the box, time-to-value has dropped, which also allows for more exploration.

One area where I'm seeing a lot of traction at my company and amongst other developers: onboarding flows for complex products. LLMs are really great at taking a small amount of input from or about a user, walking down a decision tree, and creating some initial dummy data relevant to them to demonstrate value more quickly. You might not ever know ChatGPT is involved, but it's doing wonders for quite a few companies' conversion rates.


That’s great! I hope small uses of AI like this work to make us more efficient, but that doesn’t sound exactly like a societal breakthrough, to be able to sell stuff better. I’m looking for AI that can do things other than make capitalism more efficient at parting people from their money.

(Yes, I’m a negative asshole. I should probably be more open minded.)


> I’m looking for AI that can do things other than make capitalism more efficient at parting people from their money.

I think you're really underestimating the positive, human value that can come out of what I'm describing.

If you leave the world of software companies, you'll find that a lot of humanity is wasting huge amounts of time on tasks that could easily be automated. My most recent experience was in the electric vehicle research space - I was able to rather straightforwardly reduce testing cycles for certain key components from a year to a month through some straightforward software and collaboration with some scientists.

Most of what I accomplished could have been achieved by the scientists themselves if they had used something like Retool[0], but Retool is too sophisticated a tool for them to ramp up on. If AI could make Retool accessible to someone with the technical sophistication of a materials scientist who can write a little Python, it might greatly speed up the rate at which we advance EV technology.

The point I'm making is that making it easier to make products that are accessible means that it's easier to distribute the positive effects of innovation to the rest of society faster. If anything, there's the potential to lower profits long term because today creating a product that is both valuable and accessible is an incredible moat.

[0] https://retool.com/


Just last night I was adding an OIDC provider to a website for a friend, and GPT-4 did most of the job for me. As in, I mostly cut and pasted code and filled in the actual integration with the login. It saved me time.

Most development is far closer to that than to developing more efficient search algorithms or better compression, something most human developers couldn't do if their lives depended on it.

Could I have hired someone to do it? Sure, but finding someone, plus the turnaround time, would have taken longer than doing it myself, while GPT-4 spat out a solution in seconds.


How about AlphaFold? Previous methods required quite literally weeks to months on supercomputers, with results often not comparable to X-ray crystallography.

Or the use of AI for upscaling that can be done in real time on low- to mid-range GPUs.

I think you only see ChatGPT, which does have its own drawbacks, but AI does not equal GPT; it doesn't even have to be generative.


AlphaFold: there are lots of caveats; from my limited understanding so far, the "thing" it does isn't the limiting factor in speeding up drug development.

Video cards: I don't know much about them; it seems impressive, but I'm not sold it's the future of gaming yet. Also, they're expensive as fuck.

I am not saying this stuff will never be useful, but we're at the peak of the hype cycle today, and I expect many, many of the supposed breakthroughs to turn out to be dead ends, or harder to make reliable than expected. Or way more expensive than financially viable for that problem.

I hope I’m wrong, tech-wise, because it would be amazing to reduce the human work needed, but I’m also not sold that our society can survive AI taking over jobs, so that’s another reason I’m bearish.


Although AI comes in many variants, the recent boom refers to what is called generative AI, such as ChatGPT.


Haskell is purely functional, but if you make ample use of do-notation it's very easy to write in a familiar imperative(ish) style for day-to-day tasks. Most learning materials won't emphasize this but large production Haskell codebases that do a lot of IO usually end up looking a lot more familiar than you might expect.

If you have interest in both OCaml and Haskell, you might also consider just learning Rust which borrows aspects from both of them. Rust traits aren't quite as powerful as Haskell typeclasses and its module system is definitely more limited than OCaml's, but if you write it in a more functional style you'll get exposed to a lot of similar patterns.

A lot of things in Haskell never really clicked for me until I spent some time getting comfortable with Rust.


Yeah, I was considering Rust, and actually originally thought I'd be asking this question about the 3 of them.

I guess I figured I wanted to do a more academic language first, and that there may be a chance I'd have a practical use-case actually crop up for Rust at some point (in particular if I want/need to write/generate some WebAssembly, which may be the case for a project I may embark on).

Thanks for your input

