If you pay for a service (web search) that 99.9% of people use for free, you're an extreme outlier, and not necessarily a justifiable one either. After all, DDG, Google, and various others still offer raw results for free.
That's worrisome, since I've seen those be flat-out wrong a pretty high percentage of the time.
[EDIT] Incidentally, are there any sites that do actual web search anymore, better than Yandex? I'd rather avoid a Russian site if I can, but there are whole topics where it's impossible to find anything useful on heavily "massaged", allegedly-web-search-but-not-really sites like Google and DDG (Bing), while I can find what I want on page 1 or 2 of a Yandex search. Is Kagi as good as that, or does its index simply ignore a whole bunch of the Web like so many others? I don't mind paying.
Google "Web" results (not the default results you get when you search) still seem okay for me. You can force them with the udm=14 url trick, or select the "Web" tab in the results. No AI, no images or shopping results, and slightly better text results.
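For anyone who wants to script the trick mentioned above, it's just an extra query parameter on an ordinary Google search URL (a minimal sketch; the function name is mine, but `udm=14` is the parameter described in the comment):

```python
from urllib.parse import urlencode

def google_web_url(query: str) -> str:
    """Build a Google search URL that forces the plain 'Web' results tab."""
    # udm=14 selects the text-only "Web" tab: no AI overview, images, or shopping
    params = urlencode({"q": query, "udm": 14})
    return f"https://www.google.com/search?{params}"

print(google_web_url("zig anonymous struct literal"))
```

You can also bookmark a browser keyword search with `udm=14` baked in to make it the default.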
Yep, same here. Ask it "should I wash venison tenderloin" and you get an initial "No, because..." followed by a general "yes, it's important to clean it, including with water" in the longer description. Wow, a self-contradictory answer! Good job!
We’re being force fed them. I’m an AI hater and I catch myself reading those sometimes.
Yes, people want the answer directly. Google wants you to stay on their site to read some mishmash. I think the ideal would be to immediately go to the source’s site.
2. having an expert check 10 answers each of which have a 90% chance of being right and then manually redoing the one which was wrong
Now add the complications that:
• option 1 also isn't 100% correct
• nobody knows which errors in option 2 are correlated with each other, or whether they are correlated with human errors, so we might be systematically unable to even recognise the errors
• even if we could, humans not only get lazy without practice but also get bored if the work is too easy, so a short-term study in efficiency changes doesn't tell you things like "after 2 years you get mass resignations by the competent doctors, while the incompetent just say 'LGTM' to all the AI answers"
You don't need to comment out the print function - it could gate its behavior on a comptime-known configuration variable. This would allow you to keep your debug variables in place.
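The same gating pattern, sketched in Python for illustration (the flag and function names are hypothetical; Zig's comptime version is stronger, since the disabled branch compiles away entirely rather than being checked at runtime):

```python
# Gate debug output on a configuration flag instead of commenting calls out.
DEBUG = False

def debug_print(*args):
    # The call sites stay in the source; flipping DEBUG re-enables them all.
    if DEBUG:
        print(*args)

debug_print("stays in the source, prints nothing unless DEBUG is True")
```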
I do not know Zig, but it looks like it just means "call default constructor for parameter/variable/type". I do not see how you could expect it to be elided unless every function auto-constructs any elided arguments or always has default arguments.
In other words, for a function f(x : T), f(.{}) is f(T()), not f(), where T() is the default constructor for type T.
If we had a function with two parameters g(x : T, y : T2) it would be g(.{}, .{}) which means g(T(), T2()), not g().
It looks like the feature exists to avoid things like:
x : really_long_type = really_long_type(), which can be replaced with x : really_long_type = .{} to avoid unnecessary duplication.
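A rough Python analogy of the "default-constructed argument" reading above (hypothetical names; Python spells the default constructor as a dataclass with field defaults rather than .{}):

```python
from dataclasses import dataclass

@dataclass
class Options:            # stands in for the Zig parameter type T
    verbose: bool = False
    retries: int = 3

def f(opts: Options) -> int:
    return opts.retries

# Zig's f(.{}) corresponds to f(Options()): the argument is still passed,
# just default-constructed. It is NOT the same as calling f() with no argument.
print(f(Options()))  # -> 3
```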
I do not know Zig either; I had assumed that it has default parameters, but it seems that it does not[0]. So, yes, it makes sense now why it cannot be elided.
They should add default parameters to avoid this sort of thing. Maybe they ought to consider named/labelled parameters, too, if they're so concerned about clarity.
Zig believes that all code needs to be explicit, to prevent surprises: you never want code that "just executes on its own" in places you may not expect it. Therefore, if you want default arguments, you have to perform some action to indicate this.
i don't get this argument. what is code that "just executes on its own"? how is it more difficult to differentiate what a function does with vs without arguments compared to one that takes arguments with values vs arguments without values?
However, in Python, if you routinely call foo([]), you'd specify that (or rather an empty tuple since it's immutable) as the default value for that argument.
Well yes, but if it's someone else's library, realistically you're not going to change it.
Zig is a static language without variadic parameters, so you can't make it optional in that sense. You could make the options a `?T` and pass `null` instead, but it isn't idiomatic, because passing `.{}` to a parameter expecting a `T` will fill in all the default values for you.
How does this create variadic functions? The arity is the same, since the function signature defines the exact number of arguments. The compiler just passes the omitted ones for you.
i have never used zig before, but after reading the article i came to the same conclusion. the "problem" (if it is a problem at all, that is) really is that .{} is the syntax for a struct whose type is to be figured out by the compiler, that new users will be unfamiliar with.
i don't know if there are other uses for . and {} that would make this hard to read. if there are, then maybe that's an issue, but otherwise, i don't see that as a problem. it's something to learn.
ideally, each syntax element has only one obvious use. that's not always possible, but as long as the right meaning can easily be inferred from the context, then that's good enough for most cases.
It's really not much different than in nearly any other language with type inference except for the dot (which is just a placeholder for an inferred type name).
So open source models are OK? No corporation benefits from that. What about people who are learning to be artists? Should they have to license works before they observe them? They can't be incorporating elements from other artists without paying them, surely.
I see a lot of this knee-jerk "generative algorithms bad" sentiment on Twitter, Reddit, and spaces like that. Hopefully HN can be more nuanced.
Information wants to be free. That doesn't stop being true when the information is art.
Open source models are built on VC capital and stolen labor.
Obviously your bullshit point about artists learning to be artists is bullshit. AI art models are not people. Bringing up how people learn is completely irrelevant to the regulation of these models. Artists are happy to teach other artists because it actually grows the artistic community and maintains the skills humanity needs to produce art. Obviously they feel differently when some corporate tech bro fuck who isn't part of the community extracts the value from millions of people at an industrial scale to produce a computer program to displace them and threaten their livelihoods. This is easy to understand and no amount of handwringing and anthropomorphizing will change these facts.
When we don't live under a capitalist economy, maybe we can talk about your entitlement to other people's work. Until then, professional creatives need to eat and pay rent, so fuck off.
It is no different than any other time in history. No one cared when blue-collar workers were being automated, but suddenly it's my white-collar job?
A lot of people cared very much, but we elected Reagan twice in the 80s, so here we are. They've also been trying (and failing) to automate and outsource programming for decades. There's also a pretty big difference between the automation of repetitive assembly line tasks and the automation of culture itself imo.
I think I've talked to you before on threads about AI so I won't belabor my points any longer but just know that while you believe them to be different, many don't.
Hope you're looking for good-faith discussion here. I'll assume that you're looking for a response where someone has taken the time to read through your previous messages and also the linked ChatGPT interaction logs.
What you've shown is actually a great example of what folks mean when they say that LLMs lack any sort of understanding. They're fundamentally predict-the-next-token machines; they regurgitate and mix parts of their training data in order to satisfy the token prediction loss function they were trained with.
In the linked example you provided, *you* are the one that needs to provide the understanding. It's a rather lengthy back-and-forth to get that code into a somewhat usable state. Importantly, if you didn't tell it to fix things (sqlite connections over threads, etc.), it would have failed.
And while it's concurrent, it's using threads, so (given Python's GIL) it's not going to be doing CPU-bound work in parallel. The example you have mixes some IO-bound and compute-bound-looking operations.
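To make the threads-vs-parallelism point concrete (a generic sketch, not the code from the linked log): CPython threads can overlap IO waits, because blocking calls release the GIL, but CPU-bound functions still execute one at a time.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_task(i):
    time.sleep(0.2)          # releases the GIL while "waiting on IO"
    return i

start = time.monotonic()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(io_task, range(5)))
elapsed = time.monotonic() - start

print(results)               # [0, 1, 2, 3, 4]
# Five 0.2s waits overlap, so this finishes in roughly 0.2s rather than 1s.
# A CPU-bound body would see no such speedup; that needs ProcessPoolExecutor.
print(f"{elapsed:.2f}s")
```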
So, if your need was to refactor your original code to _actually be fast_, ChatGPT demonstrated it doesn't understand nearly enough to make this happen. This thread got started around correcting the misconception that an LLM would ever possess enough knowledge to do actually valuable, complex refactoring and programming.
While I believe that LLMs can be good tools for a variety of use cases, they have to be used in short bursts. Since their output is fundamentally unreliable, someone always has to read -- then comprehend -- it. Giving an LLM too much context and then prompting it in such a way as to align its next-token prediction with a complex outcome is a highly variable and unstable process. If it outputs millions of tokens, how is someone going to actually review all of this?
In my experience using ChatGPT, GPT4, and a few other LLMs, I've found that it's pretty good at coming up with little bits to jog one's own thinking and problem solving. But doing an actual complex task with lots of nuance and semantics-to-be-understood outright? The technology is not quite there yet.
Did you learn anything from that exercise? Are you a better programmer now for having seen that solution? Because if not, this seems like a great way to get the fabled "one year of experience, twenty times".
Most source code packages, including their metadata, are generally small even as an aggregated compressed file, and it's very common to download a lot of them at once, after you've resolved the necessary dependencies. It depends on several factors though -- including community ones (are lots of tiny packages encouraged?) and how things like the build system work when downloading things.
In practice, it's very much an appropriate use case for multiplexing, if you ask me. But not having it isn't a dealbreaker either, IMO. It's a bit more work to support HTTP/2 and can be done rather transparently later on anyway, since the underlying transport can be switched and has an upgrade path.
The reason I pay for Kagi is that I specifically don't want this to occur.