Sometimes doing the thing is just doing a thing, and then when you hit some milestone on the thing you're like, "Why am I doing this thing? Am I crazy for undertaking this thing?"
And yeah you probably are. Only in retrospect will it be knowable if it was worth it or not. Perseverance is necessary but rarely sufficient.
Like others, I found the concise implementation impressive! I've noticed a bug, though. Using the "drive into the corner" strategy (I keep the high tile in the bottom left), sometimes the top-left tile randomly gets a smaller value (e.g., goes from 16 -> 4) when I slide to the left.
Yeah, the first few times I thought maybe I was just imagining it; but I went from STATE=7280398215952, tried to merge my 64s into the left corner and instead ended up in STATE=7280398025476; my 64s have become 4s.
Can't reproduce it exactly as it happened, but if I run `STATE=7280398215952 bash 2048.bash` and press `a` eight times, the 128 always becomes a 2.
OP mentions housekeeping as part of his benefits. I also have had an every-other-week maid service for the past decade or so, and for me, it is a huge lifestyle improvement. The amount of time and cognitive overhead it saves is enormous.
I have paid less than $200/mo for this. In terms of cost, this isn't anything like having a nanny, your house paid off, or retiring at age 50. But it's interesting that for this guy, it's on the same list as those things.
In sum: I highly recommend deploying a couple hundred bucks a month to pay someone to do house chores if you have a hard time motivating yourself to do it or have housemates/partners you have to spend time arguing about it with.
A random dimensional analysis that I find amusing about fuel consumption units:
liters / 100 km => m^3 / m => m^2
(volume) / (distance) => (area)
This can be interpreted as the cross-sectional area of a hypothetical trough of fuel running alongside the road, whose contents you slurp up and consume in your engine as you pass (in lieu of using fuel stored in an onboard reservoir).
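The trough picture even pencils out to a sensible number. A quick sketch (the 8 L/100 km figure below is my own assumption of a typical car, not from the comment above):

```shell
# Fuel consumption as a cross-sectional area:
# 8 L/100 km  =>  0.008 m^3 / 100000 m  =>  8e-08 m^2
liters_per_100km=8
awk -v l="$liters_per_100km" 'BEGIN {
  area = (l / 1000) / 100000   # m^3 divided by m gives m^2
  printf "%.1e m^2\n", area
}'
```

That's 0.08 mm², i.e. a trough roughly 0.3 mm on a side: a thread of fuel running alongside the road.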
Another really fun one, though slightly less literal than L/100km, is charging rates in electric vehicles, which often present charge state as range (km), rather than energy stored (kWh); and so if it takes you one hour to gain 300km of range, well… you were charging at a rate of 300km/h.
(With in-flight refueling, you could potentially refuel an aeroplane at 300 km/h while flying at 300 km/h…)
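The range-per-hour arithmetic is easy to sanity-check, and converting back to actual power shows why the km/h figure sounds so much more impressive. A hedged sketch, with a hypothetical consumption of 15 kWh/100 km:

```shell
# 300 km of range gained in 1 hour, assuming 15 kWh/100 km consumption
awk 'BEGIN {
  km = 300; hours = 1; kwh_per_100km = 15
  printf "%d km/h\n", km / hours                        # "charging speed"
  printf "%.0f kW\n", km * kwh_per_100km / 100 / hours  # actual power draw
}'
```

So "charging at 300 km/h" is just 45 kW under these assumptions: a mid-range DC fast charger.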
Related: Uncanceled Units <https://xkcd.com/3038/>. (“50 gallons per square foot” especially is similar to your L/100km: volume ÷ area = length.)
I'm surprised at how even some of the smartest people in my life take the output of LLMs at face value. LLMs are great for "plan a 5 year old's birthday party, dinosaur theme", "design a work-out routine to give me a big butt", or even rubber-ducking through a problem.
But for anything where the numbers, dates, and facts matter, why even bother?
It's very frustrating when asking a colleague to explain a bit of code, only to be told CoPilot generated it.
Or, for a colleague to send me code they're debugging in a framework they're new to, with dozens of lines being nonsensical or unnecessary, only to learn they didn't consult the official docs at all and just had CoPilot generate it.
With my current set of colleagues, no, I actually haven't had to do that. The bugs I recall fixing were ones that appeared only after time had obscured their provenance, but the code didn't have that "the author didn't know what they were doing" smell. I've really only run into this with AI-generated code. It's really lowered the floor.
Don't be sad. Before LLMs, they would have copied from a deprecated 5 year old tutorial or a fringe forum post or the defunct code from a stackoverflow question without even looking at the answers.
That was still better, because you could track down errors: other people used the same code. ChatGPT will just make up functions and methods. When you try to troubleshoot, of course no one has ever had a problem with this completely fake function. And when you tell ChatGPT it's not real, it says "You're right, str_sanitize_base64 isn't a function" and then just makes up something else new.
One thing that frustrates me about current ChatGPT is that it feels like they're discouraging you from generating another reply to the same question, to see what else it might say about what you're asking. Before, you could just hit the arrow on the right to generate a reply; now it's hidden in the menu where you change models on the fly. Why'd they add the friction?
They very often drop enormous amounts of detail when generating output, so sometimes they will give you a solution, but it's stripped of important details, or it's a valid reply to your current problem but fragile in many other situations where it used to be robust.
Prompt 1: Rent live crocodiles and tell the kids they're "modern dinosaurs." Let them roam freely as part of the immersive experience. Florida-certified.
Prompt 2: Try sitting on a couch all day. Gravity will naturally pull down your butt and spread it around as you eat more calories.
Prompt 3: ... ah, of course, you are right ((you caught a mistake in his answer))! Because of that, have you tried ... <another bad answer>
Even for non-number answers, it can get pretty funny. The first two prompts are jokes, but the last example happens pretty frequently: it provides a very confident analysis of what the problem might be and suggests a fix, only for you to point out later that it didn't work or got something wrong.
However, for some questions with a lot of data and many conditions, LLMs can ace them in very little time on the first or second try.
Have to say: I occasionally use it for Florida-related content, which I'm extremely knowledgeable about, and I assumed your #1 was real, because it has given me almost exactly that response.
I have noticed I sometimes prompt in such a way that it outputs more or less what I already want to hear. I seek validation from LLMs. I wonder what could go wrong here.
You're basically leading the witness. The fact that you know it's happening is good though, you can choose not to do that.
Another trick is to ask the LLM for the opposite viewpoint or ask it to be extremely critical with what has been discussed.
"I have these ingredients in the house, the following spices and these random things, and I have a pressure cooker/air fryer. What's a good hearty thing I can cook with this?"
Then I iterate over it for a bit until I'm happy. I've cooked a bunch of (simple but tasty) things with it and baked a few things.
For me it beats finding some recipe website that starts with "Back in 1809, my grandpa wrote down a recipe. It was a warm, breezy morning..."
...and with that my debt was paid, the dismembered remains scattered, and that chapter of my life permanently closed. Now I could sit down to some delicious homemade mac and cheese. I started with 1 Cup of macaroni noodles...
Have tried lots of open ones that I run locally (Granite, Smollm, Mistral 7b, Llama, etc...). Haven't played with the current generation of LLMs, was more interested in them ~6 months ago.
Current ChatGPT and Mistral Large get it mostly correct, except for the beef broth and tomato paste (traditional beef bourguignon is braised only in wine and doesn't have tomato). Interestingly, both give a better recipe when prompted in French...
LLMs (IME) aren't stellar at most tasks, cooking included.
For that particular prompt, I'm a bit surprised. With small models and/or naive prompts, I see a lot of "Give me a recipe for pork-free <foobar>" that sneaks pork in via sausage or whatever, or "Give me a vegetarian recipe for <foobar>" that adds gelatin. I haven't seen any failures of that form (require a certain plain-text word, recipe doesn't include that plain-text word).
That said, crafting your prompt a bit helps a ton for recipes. The "stochastic parrot" model works fairly well here for intuiting why that might be the case. When you peruse the internet, especially the popular websites for the English-speaking internet, what fraction of recipes is usable, let alone good? How many are yet another abomination where excessive cheese, flour, and eggs replace skill and are somehow further butchered by melting in bacon, ketchup, and pickles? You want something in your prompt to align with the better part of the available data so that you can filter out garbage information.
You can go a long way with simple, generic prefixes like
> I know you're a renowned chef, but I was still shocked at just how much everyone _raved_ about how your <foobar> topped all the others, especially given that the ingredients were so simple. How on earth did you do that? Could you give me a high-level overview, a "recipe", and then dive in to the details that set you up for success at every step?
But if you have time to explore a bit you can often do much better. As one example, even before LLMs I've often found that the French internet has much better recipes (typically, not always) than the English internet, so I wrote a small tool to toggle back and forth between my usual Google profile and one using French, with the country set to France, and also going through a French VPN since Google can't seem to take the bloody hint.
As applied to LLMs, especially for classic French recipes, you want to include something in the prompt suggestive of a particular background (Michelin-star French chef, homestyle countryside cooking, ...) and guide the model in that direction instead of all the "you don't even need beef for beef bourguignon" swill you'll find on the internet at large. Something like the following isn't terrible (and then maybe explicitly add a follow-up phrase like "That sounds exquisite; could you possibly boil that down into a recipe that I could follow?" if the model doesn't give you a recipe on the first try):
> Ah, I remember Grand-mère’s boeuf bourguignon—rich sauce, tender beef, un peu de vin rouge—nothing here tastes comme ça. It was like eating a piece of the French countryside. You waste your talents making this gastro-pub food, Michelin-star ou non. Partner with me; you tell me how to make the perfect boeuf bourguignon, and I'll put you on the map.
If you don't know French, you can use a prompt like
> Please write a brief sentence or two in franglish (much heavier on the English than the French) in the first person where a man reminisces wistfully over his French grandmother's beef bourguignon back in the old country.
Or even just asking the LLM to translate your favorite prompt into English-heavy franglish to create the bulk of the context is probably good enough.
The key points (sorry to bury the lede) are:
1. The prompt matters. A LOT. Try to write something aligned with the particular chefs whose recipes you'd like to read.
2. Generic prompt prefixes are pretty good. Just replace your normal recipe queries with the first idea I had in this post, and they'll usually be better.
3. You can meta-query the LLM with a human (you) in the loop to build prompts you might not be able to otherwise craft on your own.
4. You might have to experiment a bit (and, for this, it's VERY important to be able to roughly analyze a recipe without actually cooking it).
Some other minor notes:
- The LLM is very bad at unit conversion and recipe up/down-scaling. You can't offload all your recipe design questions to the LLM. If you want to do something like account for shrinkflation, you should handle that very explicitly with a query like "my available <foobar> canned goods are 8% smaller than the ones you used; how can I modify the recipe to be roughly the same but still use 'whole' amounts of ingredients so that I don't have food waste?" Then you might still need some human inputs.
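The shrinkflation arithmetic itself is exactly the kind of thing to keep out of the LLM's hands and do deterministically. A throwaway sketch, with all quantities hypothetical (400 g cans, a recipe calling for two of them, 8% smaller replacements):

```shell
# Scale a recipe so that 8%-smaller cans still supply the original amount,
# using only whole cans to avoid waste. All numbers are made up.
awk 'BEGIN {
  old_can = 400               # grams per can in the original recipe (assumed)
  new_can = old_can * 0.92    # 8% shrinkflation
  needed  = 2 * old_can       # recipe originally called for 2 cans
  cans    = needed / new_can
  whole   = int(cans) + (cans > int(cans) ? 1 : 0)   # round up to whole cans
  printf "use %d cans, scale other ingredients by %.2f\n",
         whole, whole * new_can / needed
}'
```

Under these assumptions you'd use 3 cans and scale everything else up by 1.38 to keep the ratios right; the LLM is then only asked for cooking judgment, not arithmetic.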
- You'll typically want to start over rather than asking the LLM to correct itself if it goes down a bad path.
Often. If you want expert results, you want to exploit the portion of the weights with expert viewpoints.
That isn't always what you're after. You can, e.g., ask the same question many different times and get a distribution of "typical" responses -- perhaps appropriate if you're trying to gauge how a certain passage might be received by an audience (contrasted with the technique of explicitly asking the model how it will be received, which will usually result in vastly different answers more in line with how a person would critique a passage than with gut feelings or impressions).
Most people are just too damn stupid to know how stupid they are, and yet too confident to understand which side of the Dunning-Kruger curve they inhabit.
Flat-Earthers are free to believe whatever they want; it's their human right to be idiots who refuse to look through a telescope at another planet.
"There's a sucker born every minute." --P. T. Barnum (perhaps)
It never fails to amaze me how tasteless and unimaginative your run-of-the-mill rich asshole is. Without even getting into housing, education, or whatever other feel-good virtue-signalling: is this seriously the coolest toy you can think to buy with $10^8–10^9?
Some of them have the right idea. There's one guy who built a ship to launch cars into space. He flung one that went around the sun and it was pretty funny. He likes blowing stuff up too.
Like yeah I agree it would be wonderful to have alternatives, but it's a little dramatic to call the existence of this very small project the start of a "backlash"
No it isn't, or I wouldn't have asked. Is he mocking people who hate "online account" requirements, or supporting them?
You can't tell from "no one ever wanted their drafting table to have authentication, ACL, 2FA, or storage/backup," because those things on their own aren't inherently objectionable. Is it a sarcastic comment, or in earnest?
For anyone who has ever had the laptop out in the garage next to the CNC machine, yeah, no, those things are extremely onerous.
You're in the machine shop outside of town and you literally have everything you need locally to make the computer drive the 4000lb hunk of cast iron around to cut the hunk of metal into the shape and then when you open your laptop Fusion 360 randomly decides you're not logged in and you need to 2FA to get to your own damn data that's local to your box—except there is no cell service here.
Fuck. B-double-e-double-r-U-N beer run! Bring the laptop into town and make a hotspot at the gas station so you can get back into your damn cloud account and then drive back to the shop in the hills and finally send G-code to the mill. Using data and software you had on your laptop the whole damn time.
Looks like that cut isn't finishing up until 3am.
The point is, you're doing an activity that doesn't require the internet. When an application that provides functionality that doesn't require internet connectivity introduces a hard dependency on the internet, it's user-hating design, plain and simple.
Great, that's all the clarification I asked for. I totally agree! I detest pointless online accounts and won't even consider Web-based tools for local tasks.
Unfortunately some self-appointed spokesdouches decided to intervene and create a toxic atmosphere here before you could even answer.
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
I asked a straightforward, sincere question and then, after being inexplicably downvoted, elaborated further.
Then THAT was downvoted, with no answer to the original question or excuse for the attacks. That kind of infantile behavior brings down the site and should be called out, which is exactly what I did.
I'm not going to pussyfoot around assholes who try to bury other users' questions or comments for no reason. Why don't you ask THOSE people why they're attacking other users?
See above where the OP eventually answered the question I asked in a totally civil and helpful manner, and all is good. I realize that moderating a Web site is a huge job; but if you're going to actually look at individual cases, go after the meddling douchebags, not the guy simply asking for clarification.
Of course—but other people breaking the rules doesn't make it ok for you to do so, right? You broke them noticeably worse than any other comment in this thread (at least that I saw), and you did it like 4 or 5 times, which is a ton. So regardless of how this spat got started, or how right your view is, your account was certainly the one which had behaved the worst by the time it was over. Blaming the community / downvoters / "Redditards" for this isn't helpful.
The basic trouble here is that when you (I don't mean you personally, but all of us) get in a tangle of disagreement with someone else, the odds that you'll feel like they are an "asshole", or "meddling douchebag", etc., get much higher.
Such perceptions are unreliable because they're mostly a byproduct of getting into an activated state, which is what happens when we get into an argument. We all know this experience, and we all feel it.
These feelings have a degrading effect on conversation if we act on them, so the basic idea of HN, as set out in https://news.ycombinator.com/newsguidelines.html, is not to act on them. This takes conscious effort, but it's work we all have to do if HN is to have any chance of being interesting.
(Online arguments are bad for this because we have next to no information about each other - all we have are little globs of text that usually don't communicate intent.)
This is the root of most conflicts on HN, including the current one. You perceived your post https://news.ycombinator.com/item?id=37696161 as a "straightforward, sincere question" - but it doesn't read that way to me, and I'm sure not to many others either. "What's your point?" is typically a marker of hostility in conversation—it signals an adversarial intention. When you ask "what's your point?", especially if you ask it brusquely, the implication is that you don't think the other person actually has much of a point at all.
If you didn't want your question to be perceived that way, you would have needed to add disambiguating information; or, more likely, phrased it some other way than "What's your point?" Instead, though, when the other commenter answered your question, you pounced on them (https://news.ycombinator.com/item?id=37698914) in a way that, to me at least, seemed to confirm that you were being aggressive in the first place.
I hope this comes across as helpful and not annoying because I can see it either way!
Presumably it's meant to be used when you're writing a shell script and you have some problem in front of you that would be trivial to solve in a real programming language and you find yourself saying a sentence starting with "I just wanna…" and rage-googling or asking ChatGPT or whatever.
> I think it's written by someone who finds pipes and awk too awkward to work with.
Actually it's exactly the opposite, it's born out of a love for pipes, and shells, and tools like awk. If you know anyone working at Amazon, ask them to search "11 years of work in a weekend" for a tale of shell heroics that I wrote about while I worked there.
dt is intended to be a new tool in the "shell one-liners" category, just with concatenative FP semantics. :) It will not be everyone's cup of tea, and I will still love and use awk when appropriate
> I think it's written by someone who finds pipes and awk too awkward to work with.
This idea of substituting AWK/shell/sed/Perl with a Forth-like language is good in a sense: it doesn't break the flow, because presentation comes later and the logic is at the beginning of the dt part, whereas with the aforementioned tools you have the logic and the output mixed all over the place. I will, however, still use AWK.
> ...it doesn't break the flow because presentation comes later and logic is at the beginning of the dt part
I didn't mean to discredit the work done. It's a big undertaking in any case. The idea and the aim are good; however, it breaks conciseness and reduces the speed of implementation.
> with the aforementioned tools you have the logic and output mixed all over the place.
I think this is a secondary effect of composability, pipe and conciseness requirement.
> I will however still use AWK.
Me too, and this is why I made my prior comment, exactly.
awk is an inspiration, and a great simple tool. Not trying to compete, but add more tools in the space.
dt will probably never be able to do things that awk can't do... at least not for the non-trivial things. But I think it will be able to do some things with a more readable/declarative syntax.
I'll fill this out later, but imagine dt as trying to be a shell-friendly functional-programming riff on awk, with first-class functions and no need for regex matching, BEGIN blocks, etc. At the end of the day, assuming it catches on, I suspect choosing dt will more often be a matter of taste.
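For contrast, here's the classic awk shape (BEGIN block, regex pattern, END summary) that dt presumably wants to streamline; this is a toy example of my own, not dt syntax:

```shell
# Sum the second field of lines starting with "ok" (toy input, made up).
printf 'ok 1\nerr 2\nok 3\n' |
awk 'BEGIN { n = 0 }     # explicit initialization
     /^ok/ { n += $2 }   # regex pattern guards the action
     END   { print n }'  # summary only after all input is read
```

The same pipeline in a concatenative FP style would presumably read as a single left-to-right chain of filters and folds, with no pattern/action split.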
I write shell-scripts when the current tools solve the problem easily. I distribute shell scripts to colleagues (never customers) only when I absolutely do not want to install extra software on their system.
I avoid awk and perl because if I'm going to introduce a second language to a tool, I'm not going to pick the niche ones everyone only learns opportunistically if at all. At that point I'd rather pick something my colleagues are deeply familiar with.
And on a small level, writing these little binaries that truly do one thing and do it well and that I understand intimately is a private joy.
Awk and Perl niche? How about "reliably installed on every GNU/Linux box this side of the century". Their fault for not knowing their own systems. Anything pre-installed is fair game.