That's what Meta thought initially too, training Code Llama and chat Llama separately, and then they realized they're idiots and that adding the other half of the data vastly improves both models. As long as it's quality data, more of it doesn't do harm.
Besides, programming is far from just knowing how to autocomplete syntax; you need a model that's proficient in the fields the automation is placed in, otherwise it'll be no help in actually automating them.
This just adds confusion as to the purpose of all this.
The motivation behind the liquid limits is that there are extremely powerful explosives that are stable water-like liquids. Average people have never heard of them because they aren’t in popular lore. There has never been an industrial or military use, solids are simpler. Nonetheless, these explosives are easily accessible to a knowledgeable chemist like me.
These explosives can be detected via infrared spectroscopy, but that isn’t going to happen to the liquids in your bag. This reminds me of the chemical swipes done on your bags to detect explosives. Those swipes can only detect a narrow set of explosive chemistries, and everyone knows it. Some explosives notoriously popular with terror organizations can’t be detected. Everyone, including the bad guys, knows all of this.
It would be great if governments were more explicit about precisely what all of this theater is intended to prevent.
> People above you have limited time to focus on your specific issues. You can’t info dump on them. If they take a misguided action based on what you tell them, it will be your fault
This bit is useful to everyone, and many people never learn it and get jaded about work itself! They paint themselves into a Dilbert strip without realizing it. And then of course there are also bad bosses, but any work advice is like relationship advice: it really depends on the specific people involved.
I am a Show HN expert. You need to just keep trying until you get traction. Sometimes it's the title. Sometimes it's timing. Sometimes it's something more substantial: a chance to rethink, redo, rebrand, rewrite, etc.
Also, mods can help. They are friendly and generous. Reach out to them via email and ask them about your post. Often they have something to say and it's useful.
The challenge you encountered has nothing to do with the recent spike. I've been doing Show HN for 10 years. It's always been this way. It's never "easy" to get the attention of the community. But there are some things that can help, such as the time you post.
Y'all did such a good job with this. It captivated HN and was the top post for the entire day, and will probably last for much of tomorrow.
If you don't know already, you need to leverage this. HN is one of the biggest channels of engineers and venture capitalists on the internet. It's almost pure signal (minus some grumpy engineer grumblings - we're a grouchy lot sometimes).
Post your contact info here. You might get business inquiries. If you've got any special software or process in what you do, there might be "venture scale" business opportunities that come your way. Certainly clients, but potentially much more.
(I'd certainly like to get in touch!)
--
edit: Since I'm commenting here, I'll expand on my thoughts. I've been rate limited all day long, and I don't know if I can post another response.
I believe volumetric is going to be huge for creative work in the coming years.
Gaussian splats are a huge improvement over point clouds and NeRFs in terms of accessibility and rendering, but the field has so many potential ways to evolve.
I was always in love with Intel's "volume", but it was impractical [1, 2] and got shut down. Their demos are still impressive, especially from an equipment POV, but A$AP Rocky's music video is technically superior.
During the pandemic, to get over my lack of in-person filmmaking, I wrote Unreal Engine shaders to combine the output of several Kinect point clouds [3] to build my own lightweight version inspired by what Intel was doing. The VGA resolution of consumer volumetric hardware was a pain, and I was faced with FPGA solutions for higher real-time resolution, or going 100% offline.
World Labs and Apple are doing exciting work with image-to-Gaussian models [4, 5], and World Labs created the fantastic Spark library [6] for viewing them.
I've been leveraging splats to do controllable image gen and video generation [7], where they're extremely useful for consistent sets and props between shots.
I think the next steps for Gaussian splats are good editing tools, segmenting, physics, etc. The generative models are showing a lot of promise too. The Hunyuan team is supposedly working on a generative Gaussian model.
In many cases the best solution would be to retrofit the existing facilities and leverage the transmission infrastructure that is already in place. Retrofit doesn't necessarily mean we continue to burn coal, but it might. Without the aid of a time machine, continuing to burn coal (or even restarting a plant) for a limited period of time may have less incremental impact than other options.
I understand the urge to tear these facilities down, but if we actually care about the environment a more nuanced path is probably ideal.
Nice! The author touches on the area properties, so here's the most practical life hack I personally use, derived from the standard. It uses the relationship between size and mass.
Because A0 is defined as having an area of exactly 1 square meter, the paper density (GSM or grams per square meter) maps directly to the weight of the sheet.
>A0 = 1 square meter.
>Standard office paper = 80 gsm
>Therefore, one sheet of A0 = 80 grams.
>Since A4 is 1/16th of an A0, a single sheet of standard A4 paper weighs 5 grams.
I rarely need to use a scale for postage. If I have a standard envelope (~5g) and 3 sheets of paper (15g), I know I'm at 20g total. It turns physical shipping logistics into simple integer arithmetic. The elegance of the metric system is that it makes the properties of materials discoverable through their definitions.
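The arithmetic above is easy to put in code; here's a sketch using the 80 gsm and ~5 g envelope figures from this comment (the function names are mine):

```python
# A0 is defined to have an area of 1 m^2, and each step down the
# A-series halves the area, so A(n) has an area of 1/2^n m^2.
def a_sheet_weight_grams(n: int, gsm: float = 80) -> float:
    """Weight of one A(n) sheet of `gsm` grams-per-square-meter paper."""
    return gsm / (2 ** n)

def letter_weight_grams(sheets: int, envelope_g: float = 5, gsm: float = 80) -> float:
    """Envelope plus `sheets` A4 pages."""
    return envelope_g + sheets * a_sheet_weight_grams(4, gsm)

print(a_sheet_weight_grams(4))  # one A4 sheet -> 5.0 g
print(letter_weight_grams(3))   # envelope + 3 sheets -> 20.0 g
```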
And combined with -E, it'll quit immediately if the output is smaller than the terminal size.
...And combined with some of the other options in the post, my go-to has been "less -SEXIER" for a long time. Specifying E twice doesn't seem to do anything except make this easier to remember.
Nearly this entire HN comment section is upset that VLC was mentioned once and not recommended. If you cannot understand why this very minor (but loud?) note was made, then you probably don't do any serious video encoding, or you would know why it falls short today and is well past its prime. VLC is glorified because it used to be an amazing video player back in the day, but it hasn't been for several years now. It is the Firefox of media players.
There is a reason the Anime community has collectively ditched VLC in favor of MPV and MPC-HC. Color reproduction, modern codec support, ASS subtitle rendering, and even audio codecs are janky or outright broken in VLC. 98% of all Anime encode release playback problems are caused by the user using VLC.
And this pastebin doesn't even cover all the issues. VLC has a long-standing issue of not playing back 5.1 surround Opus correctly, or at all. VLC is still using FFmpeg 4.x; we're on FFmpeg 8.x these days.
I cannot even use VLC to take screenshots of videos I encode, because the color rendering on everything is wrong. BT.709 is very much NOT new; it predates VLC itself.
And you can say "VLC is easy to install and the UI is easy." Yeah, so are IINA for macOS, Celluloid for Linux, and MPV.net for Windows, all of which use MPV underneath. Better and equally easy video players exist today.
We are not in 2012 anymore. We are no longer just using AVC/H.264 + AAC or AC-3 (Dolby Audio) MP4s for every video. We are playing back HEVC, VP9, and AV1 with HDR metadata in MKV/WebM containers, with audio codecs like Opus, HE-AAC, or TrueHD in surround configurations, and BT.2020 colorspaces. VLC's current release is built on libraries and FFmpeg versions that predate some of these codecs/formats/metadata types. Even the VLC 4.0 nightly alpha is not keeping up. 4.0 is several years late, and when it does release, it may not even matter.
I'm not sure which technique they use, but this person makes jewelry from snowflakes. They have videos showing their process, where they catch snowflakes on a tray and transfer them with a paintbrush onto slide covers holding some chemical that captures their shape. Eyeballing it, I think they're using the Formvar method.
Maybe it's time we make a simple web page 100KB again?
Is there some kind of CDN minification, adblocking and compression service?
Maybe even server side rendering of websites?
Then a smartphone would work fine with 1GB of RAM and everyone could be happy.
Switzerland, through EPFL, ETH Zurich, and the Swiss National Supercomputing Centre, has released a complete pipeline with all training data - that is "fully open", to my understanding.
After having used their repair service over 10 times for my dishwasher during its warranty period, and then having the front handle (well, the entire front panel really) break off after 2 more years, I'm never buying an AEG device ever again. I opened it up and fixed it myself, and oh my god, the whole thing just screamed cost cutting. They literally used the power button of a different model or machine, and then just mounted a different power button on top that presses the underlying one. And of course the load-bearing part that holds the front panel and display onto the door frame is just two tiny bolts in the corners. Great idea to have the entire thing flex constantly in one place. Absolute junk.
This reminds me of my own troubles with my AEG washing machine.
Probably the most important lesson (for someone who wants to fix their washing machine ASAP) that I learned from that was that there are non-user-serviceable error codes, and you need to perform an undocumented procedure on your machine to get at them. I wrote about it in more detail here: https://andri.yngvason.is/repairing-the-washing-machine.html
I would have loved to have an open source diagnostics dongle for my AEG. Maybe next time I'll try and make one. :)
I'm a millennial dev who happens to have a Gen Z brother who also chose this profession.
Seeing him walk my steps 15 years later has been eye-opening to the brutal cultural change.
They’re socially conditioned to assume that anything free is a scam or illegal, that every tool is associated with a corporation, and that learning itself means jumping through certain hoops (set by the uni, the certifying body, or whatever) so that you get permission to earn money a certain way.
As more doors get closed, I fear this process will solidify.
Coauthor of D2 here. Lately I've been noodling on the idea of expanding the animation capabilities. I think out loud a bit here, and if you have thoughts, would love to hear them:
I'm a tedious broken record about this (among many other things) but if you haven't read this Richard Cook piece, I strongly recommend you stop reading this postmortem and go read Cook's piece first. It won't take you long. It's the single best piece of writing about this topic I have ever read and I think the piece of technical writing that has done the most to change my thinking:
You can literally check off the things from Cook's piece that apply directly here. Also: when I wrote this comment, most of the thread was about root-causing the DNS thing that happened, which I don't think is the big story behind this outage. (Cook rejects the whole idea of a "root cause", and I'm pretty sure he's dead on right about why.)
If there are any googlers here, I'd like to report an even more dangerous website. As much as 30-50% of the traffic to it relates to malware or scams, and it has gone unpunished for a very long time.
If anything, "evolution" filters out disadvantages (e.g., you can't survive because your neck's too short and that pesky giraffe is eating all the leaves you could reach).
This is one of the known hardest parts of RL. The short answer is human feedback.
But this is easier said than done. Current models require vastly more learning events than humans, making direct supervision infeasible. One strategy is to train models to imitate human supervisors, so the models can bear the bulk of the supervision. This is tricky, but has proven more effective than direct supervision.
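A minimal sketch of that supervisor-imitation idea, with toy data and every name invented for illustration: fit a simple model to a small batch of expensive human labels, then let it stand in for the human on a far larger batch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each candidate output is a feature vector; a hidden
# "true" preference direction stands in for human judgment.
true_w = np.array([1.0, -2.0, 0.5])

def human_label(x):
    # Expensive human feedback: 1 if the human approves, else 0.
    return int(x @ true_w > 0)

# Step 1: collect a *small* batch of human labels.
X_small = rng.normal(size=(200, 3))
y_small = np.array([human_label(x) for x in X_small])

# Step 2: train a simple logistic-regression "supervisor model"
# on those labels (plain gradient ascent on the log-likelihood).
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_small @ w)))
    w += 0.1 * X_small.T @ (y_small - p) / len(y_small)

# Step 3: the learned model now scores a much larger batch,
# bearing the bulk of the supervision.
X_large = rng.normal(size=(10_000, 3))
pred = (X_large @ w > 0).astype(int)
truth = np.array([human_label(x) for x in X_large])
print(f"agreement with human judgment: {(pred == truth).mean():.0%}")
```

The point of the sketch is the leverage: 200 human labels supervise 10,000 outputs.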
But, in my experience, AIs don't specifically struggle with the "qualitative" side of things per se. In fact, they're great at things like word choice, color theory, etc. Rather, they struggle to understand continuity and consequence, and to combine disparate sources of input. They also suck at differentiating fact from fabrication. To speculate wildly, it feels like they're missing the RL of living in the "real world". In order to eat, sleep, and breathe, you must operate within the bounds of physics and society and live forever with the consequences of an ever-growing history of choices.
It is literally not. 2/3 of the weights are in the multi-layer perceptron which is a dynamic information encoding and retrieval machine. And the attention mechanisms allow for very complex data interrelationships.
At the very end of an extremely long and sophisticated process, the final mapping is softmax transformed and the distribution sampled. That is one operation among hundreds of billions leading up to it.
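That final step, softmax over the output logits followed by sampling, is small enough to sketch; the tiny vocabulary and logits below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logits for a tiny vocabulary: the model's entire
# forward pass has already happened by the time we get here.
vocab = ["cat", "dog", "the", "ran"]
logits = np.array([2.0, 1.0, 0.1, -1.0])

# Softmax turns logits into a probability distribution
# (subtracting the max is a standard numerical-stability trick).
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# The single "random" operation: sample one token from the distribution.
token = rng.choice(vocab, p=probs)
print(token, probs.round(3))
```

Everything interesting happened upstream of these few lines; the sampling is just the last hop.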
It’s like saying a Jeopardy player is a random word generating machine: they see a question and generate “what is” followed by a random word (random because there is some uncertainty in their mind even in the final moment). That is technically true, but incomplete, and entirely misses the point.