slabity's comments

> The internal pull-downs don't work.

They don't work at all? How the heck did something that important get past testing?

Guess I'm not moving on from the RP2040 anytime soon...


> They don't work at all? How the heck did something that important get past testing?

Because that's how all hardware is; I complained about this sort of thing a few days ago: https://news.ycombinator.com/item?id=43202090

> Guess I'm not moving on from the RP2040 anytime soon...

Doesn't matter what you move to, there's still going to be 2000 pages of datasheets+errata, and one line in the middle of all of that will tell you "This does not work".

That's why for hobbyists it's best to stick to devices with a large community around them, one that surfaces niche problems in community forums.

However, with everyone moving to Discord, even those forums are becoming less useful...


Yeah, it's an A0 flaw. Word on the street is that it came from modifying the pads to be 5V tolerant.


Do later steppings have this issue fixed? Or will they not fix it for backward compatibility?


I've been interested in the progress of the PineNote since the reMarkable company decided to put certain advertised features behind a subscription paywall.

Does anyone have any information on what the OS being developed looks like? I have not been able to find any videos or screenshots that indicate what interacting with the device is expected to look like. I found this blog post here, but it shows it running a GNOME environment, which is... not at all what I would hope for in this type of device: https://pine64.org/2024/10/02/september_2024/#pinenote


Here is a rather old vid of the interface I put together for use on my Pinenote. I’m still running Sway with lisgd for gestures, waybar + lavalauncher for widgets. Lots more possibilities if you are into ags/gjs, eww and others.

https://m.youtube.com/watch?v=XKFwO4iMIgM&t=51s

It's a great device and I wish people would be a little more open to taking the plunge with it. Forget Boox: you won't be able to properly root it, and they disrespect (and have even stolen) FOSS. Meanwhile, reMarkable is cool, but its hardware is anemic compared to the PineNote.


I just got a chance to see this video now.

Thank you for sharing this with me. This is the first time I've seen the `rnote` app on an E-ink device. I'm quite surprised at how functional it looks, though I can already tell the latency is quite high.

I'm definitely going to keep my eye on this device though. I think it will just be a few more years before the software has caught up with the hardware.


It's Debian running GNOME. You can install whatever UI you want from the repos, but the developers have written convenience tools in the form of GNOME extensions, which you can see in the top bar in the photos. It works fine, in my experience, modulo some finicky bits involving the onscreen keyboard. I have the original developer model, and I don't know what differences exist in the community edition.


GNOME is the one Linux desktop environment that can be said to work reasonably well on tablet devices, including the PineNote. It also has well-supported "high contrast" and "reduced animations" modes that can serve to enhance UX on an epaper display.


I think there may be a misunderstanding of my point.

The fact that GNOME works well on typical tablets isn't really relevant here. The PineNote is an E-ink device with very specific hardware constraints and use cases. It's primarily meant for reading and writing, and these tasks require software specifically optimized for E-ink displays and low-power operation.

I've personally experimented with desktop environments like XFCE and i3 on a reMarkable 2. While it was an interesting technical exercise, the experience wasn't practical for daily use. For comparison, look at the reMarkable's unofficial/hacked ecosystem (https://github.com/reHackable/awesome-reMarkable) - it's full of applications and utilities specifically designed for E-ink displays and writing/reading workflows.

This is why I'm hesitant about the "community device" designation. Simply saying "it runs GNOME" doesn't tell us anything about the actual user experience for reading and writing on E-ink. To be clear, my concern isn't that it runs GNOME - it's that this seems to be the only information available about the software experience.


> Note: Determinate Nix is not a fork, it is a downstream. Our plan, and intent, is to keep all our patches sent to the upstream project first.

And what happens if the Nix community doesn't pull those patches, and instead goes with a different solution? Will your downstream adapt to the upstream project, possibly breaking things for your customers?


We won't break our customers.

Indeed, part of the motivation for our downstream distribution is to be able to ship some of our patches faster than upstream wants to. However, these patches are generally usability improvements that are not incompatible with upstream.

If the upstream project evolves in a different direction, it will be on us to move with them too.


Even if the OS could perfectly deduplicate pages based on their contents, static linking doesn't guarantee identical pages across applications. Programs may include different subsets of library functions, and the linker can throw out unused ones. Library code isn't necessarily aligned consistently across programs, or even to page boundaries. And if you're doing any sort of LTO, that can change function behavior, inlining, and code layout.

It's unlikely for the OS to effectively deduplicate memory pages from statically linked libraries across different applications.
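
For what it's worth, Linux does have a content-based deduplication mechanism, KSM (kernel samepage merging), but it only merges pages that are explicitly opted in and byte-for-byte identical, which is exactly what differing layout and alignment break. A minimal sketch of the opt-in (Linux-specific; the byte-identical contents here are artificial, the easy case that static linking rarely produces):

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t len = 16 * 4096;
        /* Anonymous pages standing in for statically linked library code. */
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;
        memset(buf, 0x90, len); /* byte-identical pages: mergeable */
        /* Opt in to kernel samepage merging. KSM only ever merges
           pages whose full contents match exactly, so shifting the
           same bytes by even one byte would defeat it entirely. */
        madvise(buf, len, MADV_MERGEABLE);
        pause(); /* keep the mapping alive; inspect /sys/kernel/mm/ksm/ */
        return 0;
    }

(Assumes a kernel built with CONFIG_KSM and ksmd running; the point is just that dedup requires identical, page-aligned contents.)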


Ah, good to know! Thank you for explaining.

I guess much of this is why it's hard to use shared libraries in the first place.


> Not to nitpick too much, but while wood is "technically" a composite material made up of fiber embedded in lignin, I don't think it's very useful to include it under the broad category of composite materials. Engineered woods like plywood and cross-laminated timber definitely are, but it's more useful to classify regular wood as an organic raw material rather than a composite.

Why would defining it as a raw material be "more useful"? Why is defining it as a composite "less useful"?


Yeah, from a materials science perspective, wood seems to obviously be a composite.


Not just that. When learning about the anisotropic nature of composites (different strengths in different directions), wood is a tangible example for anyone who's done arts and crafts, woodworking, etc.


But there's a GIF in the very first section of the README showing a bunch of different effects. What else is it missing?


Oh I guess it didn't load for me then. I was a bit surprised at the absence of it.

Consider my peeve unpeeved :)


I think they wanted an example of each animation in a list.

But I'm going to add my own nitpick and say that there shouldn't be any gifs. It's a wasteful, annoying format. Use a video instead for its smaller size, better visuals, and its ability to be paused/played/seeked. An asciinema would be nice too.


FreeCAD has improved immensely over the past 2-3 years in terms of stability and features. A decade ago it was not uncommon for me to experience crashes from it randomly losing its GLX context or the constraint solver segfaulting for some reason. Now it's rare for it to crash at all for me, though I still run into a lot of constraint solver errors that are a pain to deal with.

However, despite the recent improvements, I still cannot recommend it for new users compared to commercial solutions, for the sole reason of the topological naming problem: https://wiki.freecad.org/Topological_naming_problem

This issue has probably been the #1 problem I've had with FreeCAD since I started using it. And though I've learned how to design parts to get around it in most situations, it's a huge hurdle for newcomers to understand and work around. Luckily there's a fork that fixes a significant number of the issues: https://github.com/realthunder/FreeCAD_assembly3 and https://github.com/realthunder/FreeCAD

I've also heard of Ondsel, which is supposedly a much more user-friendly version of FreeCAD that also includes some fixes for the issue: https://ondsel.com/

EDIT: Here's actually a better read on the topological naming issue, what's being done about it, and why it's difficult to fix: https://ondsel.com/blog/freecad-topological-naming/


Fix was merged into mainline FreeCAD yesterday: https://github.com/FreeCAD/FreeCAD/issues/8432#issuecomment-...


Wow, that's some convenient timing. That makes pretty much the sole reason I wouldn't recommend FreeCAD irrelevant.


How to try it?

Or wait till v1?


And here's a video of the proposed fix for the toponaming problem (which has been merged upstream) - https://www.youtube.com/watch?v=kvRpOzig6D4


This should be frontpage news


The forthcoming v1 release with the fix will for sure be on the front page


I hope the documentation is updated.


Topological naming problem is fixed in RealThunder's Link Branch of FreeCAD:

https://github.com/realthunder/FreeCAD/releases

I highly recommend Link Branch, it's full of all sorts of usability improvements.

The fix is still being worked on for the next mainstream FreeCAD release.


Naming and matching is never done. That said, you’ve got me curious enough to go have a look. Thanks!


Bullet points of caution:

- RealThunder's branch contains unique, forked changes that will cause file incompatibility with core FreeCAD if you use them unknowingly

- Core FreeCAD is ahead in many, many ways and improving quickly

- The RealThunder branch is likely a dead end

The TNP mitigation from the RealThunder branch is very close to being enabled in 0.22, and the feature freeze for 1.0 is weeks away. 1.0 is currently targeted for early August.

My feeling is that it would be much better to learn what the topological naming problem is, and how it can be worked around, and then use Ondsel 2024.2 or a 0.22 weekly release until the TNP mitigation is mainstream. (It’s likely to be in 0.22 very soon indeed)

My thinking is straightforward: there are and will be more tutorials and more support for this route, and learning about how to mitigate TNP is not wasted info: it will teach you useful skills for making generally robust designs, TNP or not.

Among others, Mango Jelly Solutions has a recent video about TNP, and Brodie Fairhall’s video on the topic is worth seeing.


As mentioned elsewhere earlier [0], essentially all of Ondsel's user-friendliness is actually core 0.22 (development release) FreeCAD and different addon choices (like the tab bar).

Which is not to say that Ondsel 2024.2 is a bad way to experience those things, or that the Ondsel Lens (cloud collaboration suite) is not interesting, because it surely is.

It's just to say it is only much more user-friendly if you're not already using the 0.22 dev releases (which are considered to be generally as stable as 0.21 and are in wide use).

(I upvoted you for the rest: I too am waiting for the TNP mitigations before I recommend it to less technically-focussed people)

[0] https://news.ycombinator.com/item?id=40430893


FWIW I’ve encountered topological naming problems with OnShape. Edit: And they don’t stop me from enjoying using it.


The UX is still horrible, though. SolveSpace manages to be a lot more usable with an utterly minimalist interface.


The constraint solver especially is much better in SolveSpace. It's even better than SolidWorks'.


It sounds like it should have been worded as "Minimize cognitive load to maximize team effectiveness"

The original wording sounds like it could go both ways, so I understand why the person you're responding to sees it that way.


Did you ignore this part that came a bit before that?

> I decided to map the file into the address space instead of reading all of it. By doing this, we can just pretend that the entire file is already in memory and let the poor OS deal with fitting a 40 GB blob into virtual memory.

Why take a vaguely rhetorical statement and then complain it contradicts a more concretely accurate statement before it?


> Did you ignore this part that came a bit before that?

No, just the opposite - I read that and that's exactly why I'm saying there was no need to read the whole thing into memory before starting to execute it.

> Why take a vaguely rhetorical statement and then complain it contradicts a more concretely accurate statement before it?

Because it's a contradiction in what they've written?


The article doesn't state the entire program has to be read into memory before it starts executing. Instead the article states that during execution, for the highest inputs, the entire program needs to pass through memory for execution to finish.
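
In other words, the mapping itself is cheap; pages are only faulted in as execution touches them, and the kernel can evict them again under memory pressure. A rough C sketch of that pattern (not the article's actual code; hypothetical filename, error handling trimmed):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        /* "huge.bin" is a hypothetical stand-in for the 40 GB file. */
        int fd = open("huge.bin", O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) != 0)
            return 1;
        /* The mapping is cheap: nothing is read from disk yet. */
        const unsigned char *data =
            mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED)
            return 1;
        /* Each access faults in only the page it touches, so only
           the parts execution actually reaches occupy RAM at once. */
        printf("first byte: %u\n", data[0]);
        munmap((void *)data, st.st_size);
        close(fd);
        return 0;
    }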


Other than power consumption, is there any reason to prefer a single workstation card over multiple consumer cards then?

A single $6800 RTX 6000 Ada with 48GB of VRAM vs. 6x 7900 XTXs with a combined total of 144GB of VRAM honestly makes this seem like a no-brainer to me.


You can only fit 1-2 graphics cards in a “normal” ATX case (each card takes 2-3 “slots”). If you want 4 cards on one machine, you need a bigger/more expensive motherboard, case, PSU, etc. I haven’t personally seen anyone put 6 cards in a workstation.


In a water-cooled config the cards only take 1 slot. I've got 2 3090s and am buying another two shortly. I preemptively upgraded the power to 220V, found a 2kW PSU, and installed a dedicated mini split. I'm also undervolting the cards to keep power and heat down, because even 2000W is not enough to run 4 of them plus a server-grade CPU without tripping the breaker. When you start accumulating GPUs you also run into all kinds of thermal and power problems for the room, too.


This is impressive.

I was fortunate enough to scoop up a bunch of Gigabyte RTX 3090 Turbos. Cheap used eight-slot SuperMicro (or whatever), a cabling kit, four 3090s, boot.

Those were the days!


Sincere question: Is installing and running a mini split actually cheaper than racking them in a colo, or paying for time on one of the GPU cloud providers?

Regardless, I can understand the hobby value of running that kind of rig at home.


I personally haven't done the calculation. I have rented colo space before, and they are usually quite stingy on power. The other issue is, there's a certain element to having GPUs around 24/7/365 to play with that I feel is fundamentally different from running them on a cloud provider. You're not stressing out about every hour they're running. I think in the long run (2yr+) it will be cheaper, and then you can swap in the latest and greatest GPU without any additional infrastructure cost.


You have to pass the context between GPUs for large models that don't fit in a single card's VRAM, which often ends up slower. Also, tooling around AMD GPUs is still poor in comparison.

