lifis's comments

Skilled computer usage includes learning to type without looking at the keyboard

The improvement that could be made is to reorganize the menu so that entries are grouped in "Image", "Link", "Text", "Page" and "Development" sections, which could either be submenus or titled sections depending on screen size and user preferences

Switch to a bank that offers a fully functional web or Android app, as opposed to only allowing Google Android

Not possible in Finland. :( I'm using the one bank (OP) that used to allow rooted devices to use their app, but even they eventually blocked it via SafetyNet.

I'm all in favor of voting with your wallet, though easier said than done when your mortgage, long-term saving accounts, etc. are tied up with your bank account.

That said, my banking and credit card apps work fine on GrapheneOS.


Why not charge for support?

And if it turns out to be your mistake (faulty product or missing documentation) as opposed to something the user could have reasonably solved by themselves, refund the charge and possibly provide compensation for the inconvenience.


Companies used to charge for support.

But if one company stops doing it, eventually everyone has to stop doing it.

Then the race to the bottom begins...


Because if you charge for support but refund it if it's the company's fault, the company now has a big financial incentive to never admit it's their fault.

You could perhaps show the "instant" reply right away and provide a button labeled "Think longer and give me a better answer" that starts the thinking model and eventually replaces the answer.

For this to work well, the instant reply must be truly instant, the button must always be visible and in the same position on the screen (i.e. either at the top or bottom of the answer, scrolled so that it is also at the top or bottom of the screen), and once the thinking answer is displayed, there should be a small icon button to show the previous instant answer.
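
Something like this, roughly (a minimal asyncio sketch; instant_model and thinking_model are hypothetical placeholders for whatever model calls the product actually makes):

  import asyncio

  async def instant_model(prompt: str) -> str:
      # Placeholder for a small, low-latency model call.
      await asyncio.sleep(0.1)
      return f"Instant answer to: {prompt}"

  async def thinking_model(prompt: str) -> str:
      # Placeholder for a slower reasoning-model call.
      await asyncio.sleep(5.0)
      return f"Considered answer to: {prompt}"

  async def answer(prompt: str, think_longer: asyncio.Event) -> None:
      shown = [await instant_model(prompt)]
      print(shown[-1])            # show the instant reply immediately
      await think_longer.wait()   # the "Think longer" button sets this event
      shown.append(await thinking_model(prompt))
      print(shown[-1])            # replace the answer; shown[0] stays available
                                  # behind the small "previous answer" icon button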


For those who are unaware, this is exactly what Grok does. The default is an auto mode: when you ask a question it starts researching (which is visible to the user), and if it's using the expert mode but you don't really need all that jazz, it has a "Quick Answer" button right above the prompt entry field; if it's using a "Quick Answer" mode, it has an "Expert" button in the same place, and you are able to toggle between them mid-answer and it will adjust the model (or model parameters, I'm not sure how it works under the hood).

It's pretty good with the auto chooser, but I appreciate having the manual choice so readily available, and especially that it doesn't restart the query completely but rather converts the output to either Quick or Expert.

This is on the Web UI, can't speak for other harnesses. I do find that it's quite good with the citations and has a fairly generous free tier, even on Expert mode. (As for who sits at the top, I am indeed put off by Musk's clear interference in several cases involving Grok, and my personal values don't align with the majority of his, but today's Grok is definitely less MechaHitler and more reliable than it was before.)


Wouldn't this be 1.5x as expensive?

Not if the Instant answer is sufficient.

That's assuming that the instant answer is even directionally correct. A misleading instant answer could pollute the context and lead the thinking model astray.

Can the context of the pre-revision, Instant response simply be discarded -- or forked or branched or [insert appropriate nomenclature here] -- instead of being included as potential poison?

(It seems absurd to consider that there may be no undo button that the machine can push.)


I'm sure it could, that is probably how it should work. In many cases it would be fine without that.

At least considering only temperature, it seems changes are never going to be irreversible, since both stratospheric aerosol injection and intentional nuclear winter should always be able to cool down global temperatures.


Seems like the classic legacy overengineered thing: it costs 100x production costs because it's a niche system, is 10x more complex than needed due to unnecessary perfectionism, and uses 10-100x more people than needed due to employment inertia.

A more reasonable approach is to just use high quality cameras, connect to the venue's fiber Internet connection, use a normal networked transport like H.265 with MPEG-TS over RTP (sports fans certainly don't care about recompression quality loss...), do time sync by having A/V sync and good clocks on each device and aligning based on audio loud enough to be recorded by all devices, then mix, reencode and distribute on normal GPU-equipped datacenter servers using GPU acceleration.
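
As a rough illustration of the contribution leg (a sketch only: it shells out to ffmpeg, the capture device, bitrate and destination are made-up placeholders, and the exact flags are my assumption about how you'd configure it):

  import subprocess

  # Hypothetical per-camera contribution encoder: capture, encode to H.265,
  # wrap in MPEG-TS and send over RTP to the mixing servers.
  cmd = [
      "ffmpeg",
      "-f", "v4l2", "-i", "/dev/video0",    # placeholder capture device
      "-c:v", "libx265",
      "-preset", "ultrafast", "-tune", "zerolatency",
      "-b:v", "50M",                        # contribution bitrate, well above consumer rates
      "-f", "rtp_mpegts",
      "rtp://mixer.example.net:5004",       # placeholder destination over the venue fiber
  ]
  subprocess.run(cmd, check=True)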


The sort of systems which demand 100% reliability tend to be like that. "Disruption" in the middle of live sports broadcast is unpopular with customers.


While I think you are oversimplifying the timing issue, you are not the first to think that about 2110.

https://stop2110.org/


The engineer on the truck seemed to have the most annoyance with the PTP aspect of 2110, but it seemed nobody questioned the move to 2110, and at least as far as broadcast equipment goes, they're all in on 2110. As a small(ish) YouTuber, NDI is more exciting to me, but I'm not mixing dozens or hundreds of sources for a real time production, and can just re-record if I get a sync issue over the network.

Perfect is the enemy of the good, as always—reading through that site, it seems like no solution is perfect, and the main tradeoff from that author's perspective is bandwidth requirements for UHD.

It looks like most places are only hitting 1080p still, however. And the truck I was looking at could do 1080, but runs the NHL games at 720p.


> it seems like no solution is perfect, and the main tradeoff from that author's perspective is bandwidth requirements for UHD.

The “no standalone switch can give enough bandwidth” issue has generally been solved since that page was written. You can buy 1U switches now off-the-shelf with 160x100G (breaking out from 32x800G). One of the main drivers of IP in this space is that you can just, like, get an Ethernet switch (and scale up in normal Ethernet ways) instead of having to buy super-expensive 12G-SDI routers that have hard upper limits on number of ins/outs.

Of course, most random YouTubers are not going to need this. But they also are not in the market for broadcast trucks.


Yes, it's a huge benefit. Of course, without an NMOS SDN solution, actually reliably routing so much data over a network (especially if incrementally designed) is a huge pain in the ass. But thankfully we have those systems now.

We sort of traded the big expensive SDI switchers for big expensive SDNs.


Also, I guess we traded a ton of coax cable for somewhat more manageable single-mode fiber. :-)

I never fully understood why SDI over fiber remains so niche, e.g. UHD people would rather do four chunky 3G-SDI cables instead of a much cheaper and easier-to-handle fiber cable (when the standards very much do exist). But once your signal is IP, then of course fiber is everywhere and readily available, so there seems to be no real blocker there.


I don't know, but is there a maximum compression load on fiber? Because in some of these broadcast centers they've got cable trays of SDI that are so heavy and packed that removing a dead line is a fire hazard (the friction of pulling the line could cause a fire).

They'd obviously need a lot less and the lines are a lot lighter but maybe folks figured if they could avoid repeating that scenario in their design, it might be a good idea :-P


You can build fiber basically arbitrarily solid. A normal patch cable won't be that solid, but a more rugged trunk cable is something like (just pulling from a data sheet for something I used a while back):

  * Outer diameter: 6mm
  * Max tensile load: 900 N
  * Crush resistance: 750 N / 10 cm
  * Max proof stress: >= 0.69 GPa
To be clear, this is not specially rugged cable by any means. This is just a normal G12 cable for general use. You can get stuff that's much more solid. It's certainly much lighter than the equivalent SDI copper cable.


2110 is certainly popular in the industry. There's no one way to get video out of a sports venue and across the network to takers, though. Where I work, different workflows have SDI, NDI, SRT, RIST, and our own internal stuff uses MPEG TS over UDP and gets routed by a distributed system that determines next-hop routing through our network at each hop. The encoding might be H.264, HEVC, or even JPEG2000.


NDI is indeed quite good for prosumer cases. As a Newtek (now Vizrt) shop, our Tricasters speak it natively and that's a great reason we've made use of it.

That being said, if you aren't already in the Newtek/Vizrt ecosystem, might I recommend exploring Teleport, a free and open source NDI alternative that integrates with OBS and has also served us very well.


Sounds like you've got it made then: produce the equivalent that fits in a minivan and laugh all the way to the bank.


We're going to need a lot of popcorn to keep eating as we wait.


That's certainly true to an extent. Other commenters have already highlighted necessary complexities. There is absolutely a lot of very entrenched "ways-of-working" that add unnecessary complexity, as with every domain. Not everything is a technical problem though and the social / process side of this sort of setup is what can make it work at all.

The approach you're hinting at mostly describes the general direction of remote production (https://video.matrox.com/en/media/guides-articles/what-is-re...). The big traditional players are already across that (https://www.grassvalley.com/ampp/, https://www.rossvideo.com/use-cases/remote-production/), AWS also has a plethora of services to lock you into their stack (https://aws.amazon.com/media-services/), and there's interesting new players too (https://www.tryiris.ai). There's a heap of different workflows out there, and OB trucks like the one highlighted here are just one of those.


> do time sync by having A/V sync and good clocks on each device and aligning based on audio loud enough to be recorded by all devices

Why do you need good clocks? For audio, even with simultaneously playing speakers, you only need to synchronize within a couple of ms unless you need coherence or are a serious audiophile. If you want to maintain sync for an hour, I suppose you need a decently good clock.

But as long as you have any sort of wire, basically any protocol can synchronize well enough. Although synchronizing based on visual and audible sources is certainly an interesting idea. (Audio alone is a complete nonstarter for a sporting event: the speed of sound is low and the venues are large. You could easily miss by hundreds of ms.)
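
For a sense of how badly acoustic alignment can miss (back-of-the-envelope, assuming devices spread tens to hundreds of meters apart):

  SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

  for distance_m in (10, 50, 100, 200):
      delay_ms = distance_m / SPEED_OF_SOUND * 1000
      print(f"{distance_m:>4} m of separation -> ~{delay_ms:.0f} ms of acoustic delay")
  # 100 m is already ~292 ms, i.e. many frames at 60 fps.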

> then mix, reencode and distribute on normal GPU-equipped datacenter servers using GPU acceleration

Really? Even ignoring latency, we’re talking quite a few Gbps sustained. A hiccup would suck, and if you’re not careful, you could easily spend multiple millions of dollars per day in egress and data handling fees if you use a big cloud. Just use a handful of on-site commodity machines.


Frame sync. In order to reduce latency, these systems tend to be unbuffered, which means that the frames have to arrive at a very specific time, and you can't afford significant jitter or (worse) phase drift. If you have one source at 25.000 FPS and one at 25.001 FPS, eventually you're going to be a frame out between them.
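
To put a number on that (plain arithmetic, nothing vendor-specific):

  f1, f2 = 25.000, 25.001            # two free-running sources, frames per second
  seconds_until_one_frame_off = 1 / abs(f1 - f2)
  print(f"{seconds_until_one_frame_off:.0f} s")   # ~1000 s, so the sources are a
                                                  # whole frame apart after ~17 minutes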


Let's do the math, conservatively. Suppose there's an event and the intent is to broadcast at 60 fps (which is on the high side) and that you want to be able to switch between cameras or even composite multiple camera feeds together without skipping frames or interpolating between frames. That gives a budget of 16.7ms per frame. (Hey, this is a lot like making a video game! Fortunately input latency is not such a big deal here because the viewers aren't playing.)

Suppose you give a budget of 4ms to composite the frame. Now you have 12.7ms from the end of the previous frame in which to collect the current frame from each camera and do whatever fancy processing you want to do (drawing first down lines, adding ads, etc). Of course, you can always cheat a bit by pipelining frames, but this adds latency, and maybe you would prefer to avoid that. Let's say you don't want to pipeline and you budget 8.7ms for all this fancy work, which gives you a 4ms window in which to receive all your incoming frames, which need to be in exact lockstep from all cameras. (This is very, very conservative, since, again, this is not a videogame and it's probably fine to buffer all inputs for a few tens or even hundreds of ms. I'm ignoring the time to transfer each frame -- I'm assuming we're counting from the end of the frame transfer time. If it takes a full frame to transfer a frame, then you cannot possibly avoid one frame of transfer latency anyway.)

So you need all those fancy cameras to stay in sync to plus or minus 4ms. That's a piece of cake with basically any modern technology, where "modern" means, I don't know, the last 30 years? NTP can do this. PTP can do this even with a fully software implementation and no assistance from the switches whatsoever. A cellphone can do this. A fancy GPSDO can do orders of magnitude better than this. A decent RTC will take a whole 200 seconds to drift by a problematic amount. The only actual fancy tech needed is the ability for the host controller on each camera to discipline the camera's frame clock, which I imagine any camera worth its salt can do.
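
The 200-second figure falls straight out of those numbers (assuming the plus-or-minus 4ms window above and a 20ppm oscillator):

  window_s = 0.004      # the plus-or-minus 4 ms arrival window from the budget above
  clock_error = 20e-6   # a garden-variety 20 ppm clock
  print(window_s / clock_error)   # 200.0 s before a free-running clock drifts
                                  # through the whole window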

I don't see why a $30k clock is useful here, or why very fancy protocols are needed. I do see why there's a need to get everyone to agree on a protocol, though.

I did once watch an event where I was genuinely impressed by the synchronization, though: a parade at a theme park. There were hundreds or thousands of fixed speakers and hundreds of mobile speakers in the parade, and all of them stayed perfectly synchronized, playing parts of the same music, to within the precision of my ears. I'm guessing the design goal was better than 1ms synchronization error, over at least half an hour, across acres of space, in a potentially adverse RF environment (at least the ISM bands would have been horribly polluted by everyone's phones). And possibly the mobile speakers would even have needed to compensate for their own locations due to the speed of sound being kind of low and the actual parade speed possibly being a bit unpredictable.

If I were designing that, I might have used GPSDOs on each mobile element or possibly some kind of wireless clock distribution -- a 20ppm clock is not even close to good enough.

But event broadcasting doesn't have these problems per se because, anywhere there's a camera, there's already a reliable, high-bandwidth data link of some sort so the camera feed can get to where it's going in real time.


Surprisingly, the timing requirements for digital seem to be slightly lower than it was for analog, at least if I heard the engineer correctly on site. It was something like 1.5 microseconds in the old days, but can be like 10 microseconds now. I could be wrong there.


No, you are right. And it is because digital has a much wider 'lock' range than analog. Analog only works 'in the moment' whereas digital can take the history of the signal so far into account and so not lose lock. If it gets too extreme it will still happen though so cumulative problems will still show up only much later.


> Why do you need good clocks? For audio, even with simultaneously playing speakers, you only need to synchronize within a couple of ms unless you need coherence or are a serious audiophile. If you want to maintain sync for an hour, I suppose you need a decently good clock.

There are many microphones involved in a production, and humans are quite good at detecting desync between audio/video when watching a presenter talk. You cannot fix desynchronization further down the chain if the desynchronization is variable for each source.


You also need synchronization to mix sources (common in any production) without incurring the latency and resampling of asynchronous sample rate conversion.


As someone who's spent a lot of time in this space and is quite interested in lowering the cost of entry and finding ways to simplify it, I'm afraid you've vastly oversimplified the problem.

> sports fans certainly don't care about recompression quality loss...

I think that's quite an assumption. In a modern video chain you'd need to decompress and recompress the video from a camera many, many times on the way to distribution. Every filter or combining element would need to have onboard decoding and encoding, which would introduce significant latency, make it very difficult to maintain quality, and introduce even more energy requirements than the systems we already deploy.

High quality cameras aren't any good if they throw away their quality at the source before they have an opportunity to be mixed in with the rest of the contribution elements. You certainly wouldn't compress the camera feeds down to what you'd expect to see on a consumer video feed (about 20Mbps for 4K on HEVC).

> normal networked transport like H.265 with MPEG-TS over RTP

If you want to, you can do that already using SMPTE ST 2110-22 which loops in the RTP payload standards defined by the IETF. ST 2110 itself is already using RTP as its core protocol by the way (for everything).

> do time sync by having A/V sync

What do you mean by this? In order to synchronize multiple elements you need a common source of time. Having "good clocks" on each device is not enough: they need to be synchronized to the level that audio matches up correctly, which is much more precise than video as audio uses sample frequencies in the 48 kHz-96 kHz range, whereas video of course is typically just 60 Hz. Each clock needs a way to _become_ good by aligning itself to some global standard. If you don't have a master clock like PTP, your options are... what... GPS? I mean you _could_ equip each device with its own GPS receiver, but if the cameras can't get a reliable GPS lock then you're out of luck.
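
To make that comparison concrete (just the periods implied by those rates; the remark about mixing is my gloss):

  for rate_hz, label in ((48_000, "48 kHz audio sample"),
                         (96_000, "96 kHz audio sample"),
                         (60,     "60 Hz video frame")):
      print(f"one {label} lasts {1e6 / rate_hz:,.1f} microseconds")
  # One sample at 48 kHz is ~20.8 µs, so even a few samples of misalignment
  # between sources matters once they are mixed together.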

> aligning based on audio loud enough to be recorded by all devices

Do you mean physically? Like actual audio being emitted into the space where the devices are? Because some of the devices will be in the stadium, where there are very, very loud noises on account of the crowd, and some of them will be in the backroom, where that audio is not audible. Then you need to factor in the speed of sound, which is absolutely significant in a stadium or other large venue. None of this is particularly practical.

If you mean an audio sound that is sent to each device over a cable, well, are we talking SDI (copper)? If so, we wouldn't use audio for that, we would use what's called Black Burst. But what generates the black burst? Typically, it's the grandmaster clock. The black bursts on SDI need to be very precise, and that requires a dedicated piece of real time hardware.

If you mean sending it over Ethernet, you now need to ensure you factor in the routing delays that will inevitably happen over an open unplanned network. To deal with those delays, we typically do two things. One, we use automatically planned networks, where the routers are aware of the media flows going over each link, and the topology is automatically rearranged in order to minimize or eliminate router buffering (aka software defined networks, typically using NMOS IS standards to handle discovering and accounting for the media essences).


> they need to be synchronized to the level that audio matches up correctly, which is much more precise than video as audio uses sample frequencies in the 48 kHz-96 kHz range, whereas video of course is typically just 60 Hz

Typically video equipment expects the individual pixels to line up, save for some buffering (~1–10µs), not just the individual frame. So your synchronization requirement for video is in the gigahertz range (or about megahertz, if you take the buffering into account), not 60 Hz. (Of course, what matters is normally the absolute offset, not the frequency, but they tend to be somewhat inversely related.)
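
Concretely, using the usual 1080p60 raster (2200x1125 including blanking; treat the exact figures as from-memory):

  total_width, total_height, fps = 2200, 1125, 60     # 1080p60 raster incl. blanking
  pixel_clock_hz = total_width * total_height * fps   # 148_500_000
  print(f"pixel clock ~{pixel_clock_hz / 1e6:.1f} MHz, "
        f"one pixel lasts ~{1e9 / pixel_clock_hz:.2f} ns")
  # Pixel-level alignment means nanoseconds; a 1-10 µs buffer relaxes that to
  # microseconds, still far tighter than the ~16.7 ms of a whole frame.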


According to Gemini, Earth datacenters cost $7m per MW at the low end (without compute) and solar panel power plants cost $0.5-1.5m per MW, giving $7.5-8.5m per MW overall.

Starlink V2 mini satellites are around 10kW and costs $1-1.5m to launch, for a cost of $100-150m per MW.

So if Gemini is right it seems a datacenter made of Starlinks costs 10-20x more and has a limited lifetime, i.e. it seems unprofitable right now.
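
Working through those figures (all inputs are the estimates above, unverified):

  ground_cost_per_mw = (7.0 + 0.5, 7.0 + 1.5)    # $M per MW: datacenter + solar, low/high
  sat_kw, sat_launch_cost = 10, (1.0, 1.5)       # Starlink V2 mini power (kW) and launch cost ($M)
  sats_per_mw = 1000 / sat_kw                    # 100 satellites to reach 1 MW
  space_cost_per_mw = tuple(c * sats_per_mw for c in sat_launch_cost)   # (100.0, 150.0) $M/MW
  print(space_cost_per_mw[0] / ground_cost_per_mw[1],   # ~11.8x at best
        space_cost_per_mw[1] / ground_cost_per_mw[0])   # ~20.0x at worst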

In general it seems unlikely to be profitable until there is no more space for solar panels on Earth.


All kinds of industries have been conserving more each decade since the energy crisis of the 1970's.

With recent developments, projected use is now skyrocketing in a way not seen since.

Before that I thought it was calculated that if alternative energy could be sufficiently ramped up, there would be electricity too cheap to meter.

I would like to see that first.

Whoever has the attitude to successfully do "whatever it takes" to get it done would be the one I trust to do it in space after that.


His bet, then, is that the $1 million cost to get a Starlink V2 mini into orbit can be made cheaper by an order of magnitude or two.


But it is always going to be significantly more expensive than a terrestrial data center. Best-case scenario, it'll be identical to a regular data center, plus the whole "launching it into space" part. There's no getting around the fuel required to get out of the gravity well. And realistically you'll also be spending an additional fortune on things like station keeping, shielding, cooling, and communication.


Just ask them to answer a randomly generated quiz or problem faster than a human possibly can.

Ruling out non-LLMs seems harder though. A possible strategy could be to generate a random set of 20 words, then ask an LLM to write a long story about them. Then challenge the user to quickly summarize it, check that the response is short enough, and use another LLM to check that the response is indeed a summary and grammatically and orthographically correct. Repeat 100 times in parallel. You can also maybe generate random Leetcode problems and require a correct solution.
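
A rough sketch of that loop (everything here is hypothetical: write_story and judge_summary stand in for whatever LLM calls you'd actually make, and the rounds are shown serially for brevity rather than in parallel):

  import random
  import time

  WORD_POOL = ["anchor", "lantern", "orchid", "quartz", "saddle", "thimble"]  # would be far larger

  def write_story(words):
      # Hypothetical: ask an LLM to write a long story containing every word.
      raise NotImplementedError

  def judge_summary(story, summary):
      # Hypothetical: ask a second LLM whether the summary is a faithful,
      # grammatically and orthographically correct summary of the story.
      raise NotImplementedError

  def is_probably_llm(respond, rounds=100, time_limit_s=20.0, max_words=100):
      # Pass only responders that summarize correctly, faster than a human could.
      for _ in range(rounds):
          story = write_story(random.sample(WORD_POOL, k=min(20, len(WORD_POOL))))
          start = time.monotonic()
          summary = respond(story)
          if time.monotonic() - start > time_limit_s:
              return False              # too slow: could be a human typing
          if len(summary.split()) > max_words:
              return False              # not short enough to count as a summary
          if not judge_summary(story, summary):
              return False              # not actually a correct summary
      return True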


I use rolled soy flakes. I think they are pretty much perfect for this purpose, but unfortunately not so easy to source.

