Hacker News | jsmith45's comments

The BBFC's rulings have legal impact, and they can refuse classification, making the film illegal to show or sell in the UK.

Over in the US, getting an MPAA rating is completely voluntary. MPAA rules do not allow it to refuse to rate a motion picture, and even if they did, the consequences would be the same as choosing not to get a rating.

If you don't get a rating in the US, some theatres and retailers may decline to show/sell your film, but you can always do direct sales, and/or set up private showings.


Yeah, proving code correct is not a panacea. If you have C code that has been proven correct with respect to what the C Standard mandates (and some specific values of implementation-defined limits), that is all well and good.

But where is the proof that your compiler will compile the code correctly with respect to the C standard and your target instruction set specification? How about the proof of correctness of your C library with respect to both of those, and the documented requirements of your kernel? Where is the proof that the kernel handles all programs that meet its documented requirements correctly?

Not to put too fine a point on it, but: where is the proof that your processor actually implements the ISA correctly (either as documented, or as intended, given that typos in ISA documentation are not THAT rare)? This is a very serious question! There have been a bunch of times that processors have failed to implement the ISA spec in very bad and noticeable ways. RDRAND has been found to be badly broken many times now. There was the Intel Skylake/Kaby Lake Hyper-Threading bug that needed microcode fixes. And these are just some of the issues that got publicized well enough that I noticed them; there are probably many others that I never even heard about.


I'm confused by your perspective.

The simplest (and arguably best) usage for a devcontainer is simply to set up a working development environment (i.e. to have the correct versions of the compiler, linters, formatters, headers, static libraries, etc. installed). Yes, you can do this via non-integrated container builds, but then you usually need to have your editor connect to such a container so the language server can access all of that, plus when doing this manually you need to handle mapping in your source code.

Now, you probably want to have your main Dockerfile set up most of the same stuff for its build stage, although normally you want the output stage to only have the runtime stuff. For interpreted languages the output stage is usually similar to the "build" stage, but ought to omit linters or other pure development-time tooling.

Want to avoid the overlap between your devcontainer and your main Dockerfile's build stage? Good idea! Just specify a stage in your main Dockerfile where you have all the development-time tooling installed, but which comes before you copy your code in. Then in your .devcontainer.json file, set the `build.dockerfile` property to point at your Dockerfile, and `build.target` to specify that target stage. (If you need some customizations only for the dev container, your Dockerfile can have a tiny, otherwise unused stage that derives from the previous one, with just those changes.)
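
A minimal sketch of what that can look like (the stage names, base image, and Node tooling here are placeholder assumptions, not anything prescribed by dev containers):

    # Dockerfile -- hypothetical stages; adapt the base image and tooling to your stack
    FROM node:20-bookworm AS dev
    # Everything the editor/language server needs, installed before any code is copied in.
    RUN npm install -g typescript eslint prettier

    FROM dev AS build
    WORKDIR /src
    COPY . .
    RUN npm ci && npm run build

    FROM node:20-bookworm-slim AS runtime
    COPY --from=build /src/dist /app
    CMD ["node", "/app/index.js"]

The devcontainer then just reuses the `dev` stage instead of having its own Dockerfile:

    // .devcontainer.json
    {
      "build": {
        "dockerfile": "Dockerfile",
        "target": "dev"
      }
    }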

Under this approach, the devcontainer is supposed to be suitable for basic development tasks (e.g. compiling, linting, running automated tests that don't need external services), and any other non-containerized testing you would otherwise do. For your containerized testing, you want the `ghcr.io/devcontainers/features/docker-outside-of-docker:1` feature added, at which point you can just run `docker compose` from the editor terminal, exactly like you would if not using dev containers at all.
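
That feature is just another entry in the same file; roughly (building on the hypothetical config above):

    // .devcontainer.json (excerpt)
    {
      "build": { "dockerfile": "Dockerfile", "target": "dev" },
      "features": {
        "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {}
      }
    }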


Might be worth checking out Tidal's Mondo Notation, which, while not quite Haskell syntax, is far closer to it, being a proper functional-style notation that unifies with mini notation, so there is no need to wrap many things in strings.

Looks like this:

    mondo`
    $ note (c2 # euclid <3 6 3> <8 16>) # *2 
    # s "sine" # add (note [0 <12 24>]*2)
    # dec(sine # range .2 2) 
    # room .5
    # lpf (sine/3 # range 120 400)
    # lpenv (rand # range .5 4)
    # lpq (perlin # range 5 12 # \* 2)
    # dist 1 # fm 4 # fmh 5.01 # fmdecay <.1 .2>
    # postgain .6 # delay .1 # clip 5

    $ s [bd bd bd bd] # bank tr909 # clip .5
    # ply <1 [1 [2 4]]>

    $ s oh*4 # press # bank tr909 # speed.8
    # dec (<.02 .05>*2 # add (saw/8 # range 0 1)) # color "red"
    `

If actual Tidal notation is important, that has been worked on, and would look like:

    await initTidal()
    tidal`
    d1 
    $ sub (note "12 0")
    $ sometimes (|+ note "12")
    $ jux rev $ voicing $ n "<0 5 4 2 3(3,8)/2>*8"
    # chord "<Dm Dm7 Dm9 Dm11>"
    # dec 0.5 # delay 0.5 # room 0.5 # vib "4:.25"
    # crush 8 # s "sawtooth" # lpf 800 # lpd 0.1
    # dist 1

    d2 
    $ s "RolandTR909_bd*4, hh(10,16), oh(-10,16)"
    # clip (range 0.1 0.9 $ fast 5 $ saw)
    # release 0.04 # room 0.5
    `

Even when that works, only the functions and custom operators that have actually been implemented are available, so not all Tidal code can necessarily be imported.

But it is currently broken on the REPL site because of https://codeberg.org/uzu/strudel/pulls/1510 and https://codeberg.org/uzu/strudel/issues/1335


Phonics-based reading is all about sounding out unknown words. The idea is that the student would understand the text if somebody else read it out loud, so if we can teach the kids how to convert the written words into sounds, they can understand many new words the first time they come across them. The core idea is to teach the kids, as a start, that certain letters or groups of letters map to certain sounds (phonemes), and then gradually introduce more and more rules of English phonetics, allowing students to successfully learn to sound out even more complicated words.

The hope is that students will gradually learn to just recognize words by sight, which the overwhelming majority do eventually learn to do, and only need to sound out unfamiliar words. The fact that some students struggle to learn to recognize words and need to sound most of them out is part of why people try to create alternatives, but those largely don't work well.

Of course, English does have some tricky phonetics. We have some words with multiple different pronunciations. We have some words with the same phonemes but different meanings that differ solely based on syllable stress. There are even some words whose pronunciation simply must be memorized, as there is no coherent rule to get from the word to the pronunciation (see for example Colonel).


My view, which I suspect even Toub would agree with, is that if being allocation-free (or even just extremely low-allocation) is critical to you, then go ahead and use structs, stackalloc, etc. that guarantee no allocations.

That is far more guaranteed to work in all circumstances than these JIT optimizations, which could have some edge cases where they won't function as expected. If Stopwatch allocations were a major concern (as opposed to just feeling like a possible perf bottleneck), then a modern ValueStopwatch struct that consists of two longs (accumulatedDuration, and startTimestamp, which if non-zero means the watch is running), plus calls into the Stopwatch static methods, is still simple and unambiguous.
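
A rough sketch of that kind of struct (the member names follow the description above; the conversion details are just illustrative, not a drop-in replacement for Stopwatch):

    // Allocation-free stopwatch sketch: two longs plus Stopwatch's static methods.
    public struct ValueStopwatch
    {
        private long accumulatedDuration; // raw timestamp ticks accumulated while stopped
        private long startTimestamp;      // non-zero means the watch is currently running

        public void Start()
        {
            if (startTimestamp == 0)
                startTimestamp = System.Diagnostics.Stopwatch.GetTimestamp();
        }

        public void Stop()
        {
            if (startTimestamp != 0)
            {
                accumulatedDuration += System.Diagnostics.Stopwatch.GetTimestamp() - startTimestamp;
                startTimestamp = 0;
            }
        }

        public System.TimeSpan Elapsed
        {
            get
            {
                long ticks = accumulatedDuration;
                if (startTimestamp != 0)
                    ticks += System.Diagnostics.Stopwatch.GetTimestamp() - startTimestamp;
                // Raw ticks are in units of Stopwatch.Frequency per second.
                return System.TimeSpan.FromSeconds(ticks / (double)System.Diagnostics.Stopwatch.Frequency);
            }
        }
    }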

But in cases where being low/no-allocation is less critical, yet you are still concerned about the impact of the allocations, these sorts of optimizations certainly do help. Plus they even help when you don't really care about allocations, just raw perf, since the optimizations improve raw performance too.


I can get by with a weakly typed language for a small program I maintain myself, but if I am making something like a library, lack of type checking can be a huge problem.

In something like JavaScript, I might write a function or class or whatever with the full expectation that some parameter is a string. However, if I don't check the runtime type and throw if it is unexpected, then it is very easy for me to write this function in a way where it currently, technically, might work with some other datatype. Publish this, and some random user will likely notice this and start using that function with an unintended datatype.

Later on I make some change that relies on the parameter being a string (which is how I always imagined it), and publish, and boom, I broke a user of the software, and my intended bugfix or minor point release was really a semver breaking change, and I should have incremented the major version.

I'd bet big money that many JavaScript libraries that are not fanatical about runtime-checking all parameters end up making accidental breaking changes like that, but with something like TypeScript this simply won't happen, as passing parameters incompatible with my declared types, although technically possible, is obviously unsupported and may break at any time.
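
A contrived illustration of that failure mode (the function is hypothetical, not from any real library):

    // Hypothetical library function, only ever intended for strings.
    // In plain JavaScript it happens to "work" for arrays too, since both
    // have .slice, so a user may come to depend on that.
    export function firstChunk(value: string, size: number): string {
      return value.slice(0, size);
    }

    // A later "minor" release switches to string-only methods...
    export function firstChunkV2(value: string, size: number): string {
      return value.padEnd(size, " ").slice(0, size); // padEnd exists only on strings
    }

    // ...which silently breaks anyone who was passing an array from JS.
    // With the TypeScript signature above, firstChunk([1, 2, 3], 2) is a
    // compile-time error, so that usage was never even implicitly supported.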


Block-based automated signaling can technically be implemented as a primarily local system. Each block needs to know whether there is a train in the block itself (in which case all of its entrance signals must show stop, and the approach signals for the preceding blocks indicate that those blocks can be entered, but that the train must be slowing so it can come to a stop by the entrance signal). It must also know about a few preceding blocks for each path leading into it, so as to know which contain trains that might be trying to enter this block; it can then select at most one to be given the proceed signal, while the others are told to brake to a stop in time for the entrance signal. It is nice if it knows the intended route of each train, so it can favor giving the proceed indication to a train that actually wants to enter it, but if it lacks that information, then giving the indication to a train that will end up using points to take a different path doesn't hurt safety, just efficiency.
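
As a toy illustration of how local that logic can be (a bare-bones three-aspect sketch that ignores junctions and the pick-one-train-to-proceed part described above):

    // Toy sketch: each entrance signal's aspect depends only on its own block
    // and the one immediately ahead (single track, one direction of travel).
    type Aspect = "stop" | "approach" | "clear";

    // Aspect of the signal protecting entry into `block`, from local info only.
    function entranceAspect(occupied: boolean[], block: number): Aspect {
      if (occupied[block]) return "stop";         // do not enter: this block is occupied
      if (occupied[block + 1]) return "approach"; // enter, but be ready to stop at the next signal
      return "clear";                             // this block and the next are both free
    }

    // A train sitting in block 2: its entrance signal shows stop, the previous block's shows approach.
    const occupied = [false, false, true, false, false];
    console.log(occupied.map((_, b) => entranceAspect(occupied, b)));
    // [ 'clear', 'approach', 'stop', 'clear', 'clear' ]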

Of course, centralized signaling is better: it allows for greater efficiency, helps dispatch keep better track of the trains, and makes handling malfunctioning signals a lot safer, among many other benefits. But that doesn't mean local signaling can't be done.


Yes, block based signaling is what I interpreted “local first” to mean in this context. It works, but it slows everything way down.

I don’t know, but I would imagine there’s still a block-based setup as a failsafe backup in most or all modern rail systems.


Why would this be? I'm probably missing something.

Don't these LLMs fundamentally work by outputting a vector of strengths assigned to each possible token, which is sampled via some form of sampler (that typically implements some softmax variant and then picks a random output from that distribution), with that output becoming the newest input token, repeating until some limit is hit or an end-of-output token is selected?

I don't see why limiting that sampling to the set of valid tokens to fit a grammar should be harmful vs repeated generation until you get something that fits your grammar. (Assuming identical input to both processes.) This is especially the case if you maintain the relative probability of valid (per grammar) tokens in the restricted sampling. If one lets the relative probabilities change substantially, then I could see that giving worse results.
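
Roughly, constrained decoding just masks out the grammar-invalid tokens before sampling, so the relative odds of the remaining tokens can stay exactly as the model produced them. A toy sketch of that masking step (no real model or grammar library, just made-up logits):

    // Softmax-sample, but only over the tokens the grammar currently allows;
    // disallowed tokens get probability zero, allowed ones keep their relative odds.
    function sampleConstrained(logits: Record<string, number>, valid: Set<string>): string {
      const allowed = Object.entries(logits).filter(([tok]) => valid.has(tok));
      const max = Math.max(...allowed.map(([, l]) => l));
      const weights = allowed.map(([tok, l]) => [tok, Math.exp(l - max)] as const); // stable softmax
      const total = weights.reduce((sum, [, w]) => sum + w, 0);
      let r = Math.random() * total;
      for (const [tok, w] of weights) {
        r -= w;
        if (r <= 0) return tok;
      }
      return weights[weights.length - 1][0]; // floating-point fallthrough
    }

    const fakeLogits = { "Sure": 3.0, "{": 2.0, "[": 1.0 };
    console.log(sampleConstrained(fakeLogits, new Set(["{", "["])));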

Now, I could certainly imagine that blindsiding the LLM with output restrictions when it is expecting to be able to give a freeform response might give worse results than prompting it to give output in that format without restricting it (simply because forcing an output format that is not natural and not a good fit for its training can mean the LLM will struggle to create good output). I'd imagine the best results likely come from both textually prompting it to give output in your desired format and constraining the output to prevent it from accidentally going off the rails.


> Actual CS research is largely the same as EE research: very, very heavy on math and very difficult to do without studying a lot.

That is largely true of academic research. A critical difference, though, is that you don't need big expensive hardware or the like to follow along with large portions of cutting-edge CS research. There are some exceptions, like cutting-edge AI training work that needs super expensive equipment or large cloud expenditures, but tons of other cutting-edge CS research can run just fine even on a fairly low-end laptop.

It is also true that plenty of software innovation is not even tied to CS-style academic research. Experimenting with what sort of perf becomes possible by implementing a new kernel feature can be very important research, but isn't always super closely tied to academic CS research.

Even the more hobbyist level cutting edge research for EE will have more costs, simply because components and PCBs are not exactly free, and you cannot just keep using the same boards for every project for several years like you can with a PC.

