Hacker News: h3ctic's comments

Can it be hosted locally? Would be great to beat mlflow


We are planning to make an offline local version; see also https://minfx.ai/why.html and the "Web + Native" section there.


The Fibonacci intervals should be elaborated on more, imho. The text starts too deep and needs a lighter introduction.

This makes it hard to tell what it is actually about from a technical/AI-researcher perspective.


I wondered about the same thing. Even though some parts of Emacs feel old, it's still the best editor IMHO.


My solution: Orgzly syncs to a local file, and the local file is synced with Syncthing. Works flawlessly for my files.


Wasn't that the goal of experts-exchange?


I disagree. If you realize "we need to refactor this file/module/class/etc.", then it becomes a new task. Or more generally, "we need to refactor the organically grown architecture": that's a new task too. Working on a task shouldn't prevent you from thinking about additional tasks; just add them to the backlog.


Have you tried Poetry? (https://python-poetry.org)


Overlaps with albumentations [0]

[0] https://albumentations.ai


The reMarkable is not able to do split screen (for now). Instead, you can install the [ddvk hacks](https://github.com/ddvk/remarkable-hacks) for fast switching between recent documents and a swipe back to the previous document.

I'd recommend the reMarkable 2, as the hacker-friendliness makes quite a difference.


>fast switching between recent documents

This is what my research told me last I looked, too; thanks for confirming that this is still true. I believe RM will deliver long term, as hacker-friendliness goes a long way. Not sure if it's worth the price atm, but I'll look out for a cheap used one. I could do so much more critical reading if I could do it while lying down.


Yes, apparently it has a latency of only 21ms.


21ms sounds really low, which is good of course.

But as humans, would we notice 21ms lag while writing even if we paid close attention?


You would definitely notice a 21ms lag while writing. Ideally you want to get below 10ms, but for physical-object-like responsiveness 1ms is the standard. See this old video from Microsoft Research which demonstrates the difference:

https://www.youtube.com/watch?v=vOvQCPLkPt4


I just watched the video, and maybe I missed it, but doesn't that mean that in order to have 1ms latency you'd need a screen refresh rate of 1000 frames per second? That seems like it would cause serious cost increases for displays, controllers, and graphics cards.


If it helps, you only have to refresh the (very small number of) pixels under pressure, and only at such a high rate for a brief moment while they are being pressed.

I wonder if you could have a separate layer which physically (chemically) responded to the pressure to make it look like the screen was drawing your line, but which only lasted for about 50ms after the pressure is removed.

This chemical layer would be visible for just enough time for the real eInk pixels to actually refresh underneath the "pressure mask".


re: setr

> You'd be stuck with essentially one "material" though -- eg color or brush or whatever -- but I could see its utility as a dedicated single-purpose device (like only intended for sketching/notes -- which I guess is mostly true of remarkable anyways)

I'd be curious about human perception of color matching/etc at the <50ms level - I'd suspect if you had _something_ that then became the actual color/brush, our visual system would probably just backfill to say it'd always been that color/texture.


The 10ms demo is almost as good and much easier to accomplish, requiring only 100fps instead of 1000. Actually, you would need more than 1k fps: suppose it takes 0.5ms for the touch sensing, CPU processing, and graphics update; then half the time after a touch you would miss the frame and have to wait 1ms for the next display refresh.

To me, 1ms doesn't seem worth it given the diminishing returns. Something like 5-10ms would be way easier to do.
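The frame-budget arithmetic above can be sketched in a few lines. This is just an illustration of the reasoning (fixed processing time plus an average wait of half a frame period until the next refresh); the 0.5ms processing figure is the assumption from the comment, not a measurement of any real device.

```python
# Back-of-the-envelope average pen-to-pixel latency:
# fixed processing time + expected wait for the next display refresh
# (half a frame period, assuming touches arrive uniformly within a frame).

def avg_latency_ms(fps: float, processing_ms: float) -> float:
    """Average latency in milliseconds for a given refresh rate."""
    frame_ms = 1000.0 / fps
    return processing_ms + frame_ms / 2

for fps in (100, 1000, 2000):
    print(f"{fps:>4} fps -> {avg_latency_ms(fps, 0.5):.2f} ms average")
# At 1000 fps the *average* is already 1.0 ms, so the worst case
# misses the 1 ms target -- hence "more than 1k fps".
```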


Most devices these days actually heavily extrapolate, continuing to draw the line where they think you're going to draw it even before they've actually sensed where you drew it. The effect is rather easy to see on a Surface Pro, but it actually works pretty well.


You can cheat, Samsung was just bragging about this earlier in August when they showed off the new Note and smart pen. They used AI to predict where the user was likely to move the pen next to cut the perceived latency even further, some really clever stuff.


The original Apple Pencil and first generation of iPad Pro boasted of a 20ms lag. That was considered impressive, but perceptible if you look at your writing carefully.

I found it fine for note-taking, but lots of people would still notice the latency, especially those using creative tools where there is a strong feedback loop between what you see and what you draw.

They are now claiming 9ms lag. I suspect this is imperceptible for the use case of note-taking and marking up PDF documents (e.g. highlighting, making notes in the margin).

But then again, 20ms is going to be more than fine for that use case as well.


I think you have to get below 10ms for humans to be unable to notice. Drawing is a worst case scenario for exposing screen latency; it's hard to match reality's 0ms and a pen with a fine tip doesn't help hide any of it like a comparatively chunky finger does. That said, 40ms -> 21ms is a really big improvement which could make the experience go from awkward to quite usable.


I can absolutely tell the difference between an early iPad Pro (20ms) and a later one (9ms) when using a Pencil. With the lower latency, it feels more like paper, like the color is coming out of the pencil.


"Close"? Anything over 3~4ms starts to be noticeable, and anything over 10ms is "clearly" noticeable for folks who expect the response of a pencil on paper.


That's not an "only", that's still at least twice what it needs to be to cross the uncanny valley of input latency.

