_x3ue's comments | Hacker News

I've enjoyed the Interdependency trilogy immensely, and I'm now reading "The Kaiju Preservation Society", which is awesome too.


"How I Unleashed World War II" is not there sadly, nor "With Fire and Sword".


The former is a classic comedy. For a taste of what you're missing: https://www.youtube.com/watch?v=AfKZclMWS1U


The funny thing is that Polish pronunciation is objectively easier than English: the spelling-to-sound rules are far more regular.


Ogniem i mieczem (1999): three hours long, with full period costumes.


"Potop" is also missing it seems? Didn't find "Katyn" either.


As marktangotango mentioned, the deal is roughly the same for all ARM boards. A lot of tinkering is usually required, the boot process is convoluted and not standardized at all, and it is almost always up to the community to develop and adapt much of that.

The Raspberry Pi feels more "polished" precisely because of that, too: an active community means an active project. And even then it is not free of hardware-level errors and failure points.

Charging ICs are finicky things on their own; the risk of frying the board or device is always there.

So "landmines" are actually always present - get any random board like Radxa and try deviating even a little bit from "stock" GNU/Linux distribution and its outdated kernel, horrendous patches and binary blobs - and you'll get barely working hardware too.

In my experience the best outcome usually comes from developers being as open as possible: meticulously documenting and publishing everything related to the product, circuits and all.

P.S. The sheer manpower needed to fully QC and maintain PC-grade circuit quality at that level has to be comparable to those PC vendors' headcounts: hundreds of people, if not more. Pine64 and similar vendors are small operations in comparison; they don't move product in those quantities.

P.P.S. Being unable to boot from eMMC on an RK3399 device is new to me, to be honest. If you drop by the #pine64 IRC channel on their server, the local folks and I can certainly try to help with that.


I vouched for this comment. All your comments are [dead], i.e. you're shadowbanned. You likely did something to piss off the admins. Either protest or create a new account.

The problem you bring up, in a meta way, is that these are the consistent responses I get from the open source devs. And as much as I appreciate the OS devs doing the hard stuff, basic functionality and hardware defect repair should ABSOLUTELY be Pine's responsibility.

For example, when the PinePhone keyboard shipped with bad support for the pogo pins, production should have been stopped, units recalled, and the defect fixed correctly. Instead, nothing. The open source devs said to shim it with paper: https://www.reddit.com/r/PinePhoneOfficial/comments/svc38r/v...

I mean, what the hell? I'm glad fellow harmed users found a way forward, but seriously, this is only one of the issues that make Pine a shitshow to deal with.

And "Charging ICs are finicky things on its own - the risk of frying the board/device is always there." dismisses the fact that if a company wants to sell a product, "NOT CATCHING FIRE OR RELEASING SMOKE" is like the baseline here. Worst yet, early PP keyboard adoptees weren't even told this was an issue. Leaflets were only included later. But still, this is a shit situation - you have 2 USB-C ports. There should ne no situation where 1 port = hardware destruction. Again, make the hardware right or don't fucking do it.


Feels more polished? I absolutely do not understand that; some of the more social-media-friendly Pi projects involve building clustered file systems with them. Seems very nitty-gritty to me.

What counts as deviating from stock GNU/Linux? Adding a third-party repo?


'"stock" GNU/Linux distribution'

GP is saying that deviating from whatever the vendor gives you will cause issues, and that most vendors will give you garbage.


Setting aside how that is contradictory, are you saying that the vendor provides a non-mainline version of Linux that can't be updated because the driver API would break on update?

Or am I misunderstanding you?


Hardware-level/embedded stuff is outside my area of expertise, but as I understand it, you are pretty much correct.

You see this a lot with Android devices and custom images. There will be drivers that are only provided in the vendor-blessed image, patches that are difficult or impossible to port to new versions of the kernel, etc.

Again, this is me looking in from the outside. Most of my information has come from reading about other people's experiences with hardware, especially Android devices but also other embedded chips.


Amazing how the phrase "the AirPlay Receiver service takes up port 5000" gets so artificially bloated into a whole "article".
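
If you just want the takeaway, a quick bind test shows whether something already owns the port. A hypothetical snippet, not from the article:

    import socket

    # Try to bind port 5000 locally; if the AirPlay Receiver (or anything
    # else) already owns it, bind() raises "Address already in use".
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", 5000))
        print("port 5000 is free")
    except OSError as e:
        print(f"port 5000 is taken: {e}")
    finally:
        s.close()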


And with plenty of animated GIFs inside the text you're expected to read. Unreadable.


I always guessed it was UDP-based... but yeah, what a non-article :/


Most of my machines (a dozen or more) have run GNU/Linux for the longest time; until recently the only deviations were a lone OpenBSD box and the MacBook Pro I was issued at work (for iOS app development).

Apple Silicon was hyped, and that hype was backed by some early benchmarks; I was still sceptical, because it essentially goes against all my Ryzen logic :D

I finally got my hands on an M1 Air this week, and this thing is absolutely unbelievable. Single-core math benchmarks (scimark4) are 10-15% faster than on my trusty Ryzen 3900X. Synthetic tests aside, C/C++ compilation is more than two (two!) times faster than on a MacBook Pro with a 6-core i7 CPU; that is a huge deal for me. All that with passive cooling!

On a different note, another hyped thing that I really want (and am waiting for) is the new Raspberry Pi 400: a quite capable tiny computer embedded in a keyboard. Those things are a thing of beauty, I think :)


> The latency is virtually imperceptible to me

Lucky you; I can't even play on most TVs hooked up as-is via HDMI because of horrendous render lag.


I do not understand how people say there's virtually no latency. There is, and it's _huge_, because light is actually quite slow and no tech can improve on that.

It makes me think the people who say this have never played on a high-end PC, which has lower latency than a last-gen console. And that's considering that even modern PCs have a TON of latency; NVIDIA seems to be working on that, thankfully.

I bet playing Quake 3 Arena multiplayer on Stadia would be noticeably worse than on a PC from 20 years ago.


I'd say there's effectively no latency, since most games don't need or benefit from <10 ms response times.

I mainly play twitchy shooters on a fairly high-end PC (CSGO, Tarkov, Q3A back in the day) and was super impressed with Stadia for games like Assassin's Creed. It felt like I had my PC anywhere, but I attribute that to the game's forgiving latency requirements.

I wouldn't expect CSGO to work as well (though I'd definitely try it).


Why do you think light is actually quite slow?


Because sending a light beam 10 miles away is slower than not doing so, obviously. Light isn't instantaneous in this universe.

It's surprising people still don't get this. Operating a computer miles away will always be slower than operating a computer centimetres away. You can be smart about it and optimise as much as possible, but Google isn't running alien tech that is magically orders of magnitude better than consumer hardware and overcomes the distance issue.


Light takes about a millisecond to travel 200 miles. Compared to the latency introduced by computation, it is pretty insignificant, unless you are connected to a server very far away.
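
The arithmetic is easy to check. A back-of-the-envelope sketch (the 2/3 factor for fiber is the usual rule of thumb):

    # One-way travel time for light over 200 miles, in vacuum and in fiber.
    C_VACUUM_KM_S = 299_792.458            # speed of light in vacuum
    C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3   # light in fiber is roughly 2/3 c
    MILES_TO_KM = 1.609344

    distance_km = 200 * MILES_TO_KM
    print(f"vacuum: {distance_km / C_VACUUM_KM_S * 1000:.2f} ms")  # ~1.07 ms
    print(f"fiber:  {distance_km / C_FIBER_KM_S * 1000:.2f} ms")   # ~1.61 ms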


My ping is 2 ms. As the other responder notes, there are more significant sources of latency on a local gaming rig.


Doesn't it depend on the person's network setup?


No. However fast your Internet is, sending data out over the Internet and back is still slower than doing the computation locally.

A 5 ms ping to your nearest Stadia server is at least 5 ms of additional latency compared to a high-end PC. Add virtualisation overhead, CPU steal time, packet loss, video compression and decompression, etc., for another measurable increase.


How long is the rest of the latency chain? For example, keyboard input over USB is going to be ~15 ms, processing time >5 ms, and display time around 12 ms. Adding those comes to a minimum of around 32 ms. [1]

I'm not sure if I can tell the difference between 32 ms and 37 ms.

[1] https://pavelfatin.com/typing-with-pleasure/
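
To make the arithmetic explicit, a trivial sketch using the rough figures above (not measurements):

    # Rough latency budget from the figures cited in this thread.
    local = {"usb input": 15, "processing": 5, "display": 12}
    network_round_trip = 5  # assumes a nearby server

    local_total = sum(local.values())
    print(f"local PC:  ~{local_total} ms")                       # ~32 ms
    print(f"streaming: ~{local_total + network_round_trip} ms")  # ~37 ms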


Your 37 ms is based on a 5 ms round trip and 0 ms of CPU time, which is impossible. And add network jitter, which might be worse than the static latency.

And the Stadia market isn't people with fast monitors, Ethernet connections and ultra-stable internet, but people with high-latency TVs, average-tier WiFi, and hardware that is slow to decode video.


> 0 CPU time

I meant to assume 5 ms of CPU time: 15 input + 5 processing + 12 output = 32. Add 5 for the network round trip to get 37.

> 5ms roundtrip

A 5 ms network round trip or less is common in offices or homes with fiber. Here's an ICMP ping to 1.1.1.1 from my office just now: rtt min/avg/max/mdev = 4.075/4.700/6.407/0.459 ms. (UDP wouldn't be much different.) Of course, on WiFi or low-speed broadband it wouldn't be so fast.

> high latency TVs

A high-latency display makes network latency less noticeable relative to a conventional console game (but more noticeable relative to a PC game on a fast-updating screen).


This is one of the reasons why Stadia is only available in these countries:

https://support.google.com/stadia/answer/9338852


Turn on game mode and the problem is solved, unless you can perceive latency in the low double digits of milliseconds.


Yeah, unsurprisingly it's not really that simple... Some TVs just aren't made for low latency, "game mode" enabled or not.


> The text boxes for eg the memory window ... didn't consistently accept keypresses at all

Old Motif/X apps follow the "focus follows mouse" principle, so you can modify a text field only while the mouse cursor is hovering over it.


I can't believe this comment ended up dead; it was the correct answer. I had more success with ddd this time around.


If the local Mac web browser supports a proxy server, you can set up an HTTPS-bypassing proxy on your RPi and browse modern websites from that Mac (to an extent, since the web is a pile of JS now).


Or, better, this: https://github.com/tenox7/wrp

The rendering and JS handling are done on the RasPi, so almost everything should work.


That is brilliant. And you can even run it remotely (e.g. on a VM somewhere) and have your old vintage machine connect to it.


> a https bypassing proxy

Any particular recommendations? I have a similar need.


mitmproxy with the sslstrip.py example script.
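
For a feel of what it does, here's a minimal sketch of an sslstrip-style mitmproxy addon (assuming a recent mitmproxy addon API; the real example script also handles the request-side http-to-https upgrade, redirects, cookies and HSTS):

    from mitmproxy import http

    class StripTLS:
        # Rewrite https:// links to http:// in HTML responses so an old
        # browser that can't speak modern TLS only ever talks plain HTTP
        # to the proxy; mitmproxy speaks HTTPS to the real site upstream.
        def response(self, flow: http.HTTPFlow) -> None:
            ctype = flow.response.headers.get("content-type", "")
            if ctype.startswith("text/html") and flow.response.text:
                flow.response.text = flow.response.text.replace("https://", "http://")

    addons = [StripTLS()]

Run it with something like mitmdump -s strip_tls.py on the RPi (filename hypothetical) and point the old browser at the RPi as its HTTP proxy.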


They're already knee-deep in ideological bullshit (instead of working on the actual browser that people want to use), and now they're covering the layoff of 250 people with corporate garbage talk.

I really hope that Firefox has a future, but events like these make me think otherwise.


Another kid got a PC for Christmas, wow.

(The article started out fairly reasonably, laughing off some clichés about Macs and mice and whatnot, but when I got to the "I don't like planning" part, the "what is a server" part, and lots of other _super_basic_ stuff, it became clear that it is just a celebration of ignorance: she doesn't know some things and she is proud of it. "I don't like reading": after those words I decided that she is either a murky troll or just a sadly proud-to-be-ignorant person.)

