kcbanner's comments | Hacker News

> users want the results intelligently synthesized into a text response with references rather than as raw results.

The reason I pay for Kagi is that I specifically don't want this to occur.


If you pay for a service (web search) that 99.9% of people use for free, you're an extreme outlier, and not necessarily a justifiable one either. After all, DDG, Google, and various others still offer raw results for free.


How much do you technologically relate to the average person on the street though?

Every person I have seen (outside the tiny tech bubble) google something has just read the AI overview without skipping a beat.


That's worrisome since I've seen those be for-sure wrong a pretty high percentage of the time.

[EDIT] Incidentally, are there any sites that do actual web search any more, better than Yandex? I'd rather avoid a Russian site if I can, but there are whole topics where it's impossible to find anything useful on heavily "massaged" allegedly-Web-search-but-not-really sites like Google and DDG (Bing), but I can find what I want on page 1 or 2 of a Yandex search. Is Kagi as good as that, or is their index simply ignoring a whole bunch of the Web like so many others? I don't mind paying.


Google "Web" results (not the default results you get when you search) still seem okay for me. You can force them with the udm=14 url trick, or select the "Web" tab in the results. No AI, no images or shopping results, and slightly better text results.
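For reference, the trick is just an extra query parameter; a tiny sketch of the URL shape (the query string itself is an arbitrary example):

```python
from urllib.parse import urlencode

# Build a Google search URL with the udm=14 "Web results" parameter.
params = {"q": "zig comptime", "udm": "14"}
url = "https://www.google.com/search?" + urlencode(params)
print(url)
```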


Yep, same here. Ask it "should I wash venison tenderloin" and you get an initial "No, because" followed by a general "yes, it's important to clean it, including with water" in the longer description. Wow, a self-contradictory answer! Good job!


We’re being force-fed them. I’m an AI hater and I catch myself reading those sometimes.

Yes, people want the answer directly. Google wants you to stay on their site to read some mishmash. I think the ideal would be to immediately go to the source’s site.


At this point the web is also so centralized you only need 3 bookmarks these days (your news site, YouTube, and Amazon).

A search is just learning what you don't know and AI does a better job than search has ever done for me - and I'm in tech.


It would be possible to employ an expert doctor, instead of writing a script.


Which is cheaper:

1. having a human expert creating every answer

or

2. having an expert check 10 answers each of which have a 90% chance of being right and then manually redoing the one which was wrong

Now add the complications that:

• option 1 also isn't 100% correct

• nobody knows which errors in option 2 are correlated with each other, or whether they're correlated with human errors, so we might be systematically unable to even recognise the errors

• even if we could, humans not only get lazy without practice but also get bored if the work is too easy, so a short-term study in efficiency changes doesn't tell you things like "after 2 years you get mass resignations by the competent doctors, while the incompetent just say 'LGTM' to all the AI answers"
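The back-of-the-envelope arithmetic behind option 2 can be sketched like this (all cost and accuracy numbers are made-up assumptions, purely illustrative):

```python
# Toy cost model comparing the two options.
expert_answer_cost = 10.0   # assumed cost for an expert to write one answer
review_cost = 1.0           # assumed cost for an expert to check one AI answer
accuracy = 0.9              # assumed chance a given AI answer is right

n = 10
option1 = n * expert_answer_cost                       # expert writes all 10
expected_redos = n * (1 - accuracy)                    # answers expected to be wrong
option2 = n * review_cost + expected_redos * expert_answer_cost

print(option1)  # 100.0
print(option2)  # 20.0
```

Under these (arbitrary) numbers checking wins 5x, but the bullet points above are exactly the ways this naive model breaks down.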


It's not pointless, because you get to select exactly the design pattern that is best for the situation. Other languages may decide this for you.


You don't need to comment out the print function - it could gate its behavior on a comptime-known configuration variable. This would allow you to keep your debug variables in place.


After you've been writing zig for a while, seeing `.{}` in an argument list intuitively means "default arguments".


Seems like it could just be elided entirely. Why can't it be?


I do not know Zig, but it looks like it just means "call default constructor for parameter/variable/type". I do not see how you could expect it to be elided unless every function auto-constructs any elided arguments or always has default arguments.

In other words, for a function f(x : T), f(.{}) is f(T()), not f(), where T() is the default constructor for type T.

If we had a function with two parameters g(x : T, y : T2) it would be g(.{}, .{}) which means g(T(), T2()), not g().

It looks like the feature exists to avoid things like:

x : really_long_type = really_long_type(), which can be replaced with x : really_long_type = .{} to avoid unnecessary duplication.
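A rough Python analogy (all names made up, and Python has no real equivalent of Zig's anonymous struct literals): the parameter is a config struct whose fields all have defaults, but you still have to pass *some* instance, even the all-defaults one.

```python
from dataclasses import dataclass

@dataclass
class Config:           # stands in for Zig's options struct
    verbose: bool = False
    retries: int = 3

def run(cfg: Config):   # the parameter itself is required; no default value
    return (cfg.verbose, cfg.retries)

# run() would be a TypeError; you must pass an instance.
# Zig's `run(.{})` is roughly analogous to:
print(run(Config()))    # (False, 3) -- every field filled from its default
```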


I do not know Zig either; I had assumed that it has default parameters, but it seems that it does not[0]. So, yes, it makes sense now why it cannot be elided.

They should add default parameters to avoid this sort of thing. Maybe they ought to consider named/labelled parameters, too, if they're so concerned about clarity.

0: https://github.com/ziglang/zig/issues/484


Zig believes that all code needs to be explicit, to prevent surprises: you never want code that "just executes on its own" in places you may not expect it. Therefore, if you want default arguments, you have to perform some action to indicate this.


Except it's not entirely explicit. It allows the type name of the object being constructed to be elided.

Per the article, this is the explicit form:

    var gpa = std.heap.GeneralPurposeAllocator(std.heap.GeneralPurposeAllocatorConfig{}){};


I don’t think type elision makes the code's execution any less explicit. Nothing else could go there.


That's a textbook definition of "implicit", as in not directly specified, but assumed.

The fact that an unacceptable parameter would fail compile-time validation does not make it any more readable.


Consider this:

    var foo = OpaqueTypeName(.{}){};
What is the . eliding?

You don't know. I don't know. It's impossible to tell because the type is opaque to our understanding.


i don't get this argument. what is code that "just executes on its own"? how is it more difficult to differentiate what a function does with vs without arguments compared to one that takes arguments with values vs arguments without values?


explicit about branching and allocations, not so for types. we've recently got .decl() syntax, which is even more implicit than .{}


Declaring a variable doesn't initialize it in Zig, so maybe the correct semantics for elisions would be to allocate an uninitialized argument.


For the same reason that, in Python, calling foo() is not the same as calling foo([]) when foo expects a list. They mean different things.


However, in Python, if you routinely call foo([]), you'd specify that (or rather an empty tuple since it's immutable) as the default value for that argument.


I believe that if most foo's users should just call it with [], the Pythonic way is to make the argument optional.
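A minimal illustration of why the parent suggests a tuple (or the usual None sentinel) rather than a default []: Python evaluates the default once, at definition time, so a mutable default is shared across calls.

```python
def bad(items=[]):        # default list created once, shared by every call
    items.append(1)
    return items

def good(items=None):     # the idiomatic optional-argument pattern
    if items is None:
        items = []        # fresh list on each call
    items.append(1)
    return items

first = bad()
second = bad()
print(first is second, second)   # True [1, 1] -- same shared list, growing
print(good(), good())            # [1] [1] -- independent lists
```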


Well yes, but if it's someone else's library, realistically you're not going to change it.

Zig is a static language without variadic parameters, so you can't make it optional in that sense. You could make the options a `?T` and pass `null` instead, but it isn't idiomatic, because passing `.{}` to a parameter expecting a `T` will fill in all the default values for you.


This doesn’t answer the question why Zig doesn’t have default argument values.


Default argument values create variadic functions.

Arity N when you supply a value

Arity N-1 when you use the default


How does this create variadic functions? The arity is the same, since the function signature defines the exact amount of arguments. The compiler just passes the omitted ones for you.


Okay, but why could a static language not have variadic functions?


That's their design choice.

I can think of a few reasons

- makes function calling simpler

- faster compilation

- less symbol mangling

- makes grepping for function implementation easier

If for some reason you think you absolutely can't live without variadic functions, maybe don't use zig.


i have never used zig before, but after reading the article i came to the same conclusion. the "problem" (if it is a problem at all, that is) really is that .{} is the syntax for a struct whose type is to be figured out by the compiler, that new users will be unfamiliar with.

i don't know if there are other uses for . and {} that would make this hard to read. if there are, then maybe that's an issue, but otherwise, i don't see that as a problem. it's something to learn.

ideally, each syntax element has only one obvious use. that's not always possible, but as long as the right meaning can easily be inferred from the context, then that's good enough for most cases.


One quickly becomes accustomed to the .{} pattern when learning Zig. It's used for struct construction:

  const a_struct: StructType = .{ .foo = "baz"}; 
As well as for union initialization. So .{} is a struct literal for a struct in which every field has a default value, and it picks all of those defaults.


I think it becomes clearer when considering that these are all equivalent:

    const a_struct: StructType = StructType{ .foo = "baz" };

    const a_struct = StructType{ .foo = "baz" };

    const a_struct: StructType = .{ .foo = "baz" };


> that new users will be unfamiliar with

It's really not much different than in nearly any other language with type inference except for the dot (which is just a placeholder for an inferred type name).


Thanks again for resinator! I recently used it to ship a small win32 utility (https://github.com/kcbanner/multi-mouse) and it worked perfectly.


It's fairly obvious they are asking if the content was licensed


Ah, I thought HN was anti-copyright.


yep, all HN users agree on all topics. That's why there are never any disagreements.


I'm anti-corporations who exploit every artist on the internet.


So open source models are OK? No corporation benefits from that. What about people who are learning to be artists? Should they have to license works before they observe them? They can't be incorporating elements from other artists without paying them, surely.

I see a lot of this knee-jerk "generative algorithms bad" sentiment on Twitter, Reddit, and spaces like that. Hopefully HN can be more nuanced.

Information wants to be free. That doesn't stop being true when the information is art.


Open source models are built on VC capital and stolen labor.

Your point about artists learning to be artists is obviously bullshit. AI art models are not people. Bringing up how people learn is completely irrelevant to the regulation of these models. Artists are happy to teach other artists because it actually grows the artistic community and maintains the skills humanity needs to produce art. Obviously they feel differently when some corporate tech bro fuck who isn't part of the community extracts the value from millions of people at an industrial scale to produce a computer program to displace them and threaten their livelihoods. This is easy to understand and no amount of handwringing and anthropomorphizing will change these facts.

When we don't live under a capitalist economy, maybe we can talk about your entitlement to other people's work. Until then, professional creatives need to eat and pay rent, so fuck off.


It is no different than any other time in history. No one cared when blue-collar workers were being automated, but suddenly it's different when it's my white-collar job?


A lot of people cared very much, but we elected Reagan twice in the 80s, so here we are. They've also been trying (and failing) to automate and outsource programming for decades. There's also a pretty big difference between the automation of repetitive assembly-line tasks and the automation of culture itself, imo.


I think I've talked to you before on threads about AI so I won't belabor my points any longer but just know that while you believe them to be different, many don't.


Did you read the comment you replied to?


> Because refactoring requires understanding, which LLMs completely lack.

It's obvious from context here that the refactoring that was mentioned was specifically around concurrency, not simply cleaning up code.


So if I show you an LLM implementing concurrency, will you concede the point? Is this your true objection?

https://chat.openai.com/share/7c41f59a-c21c-4abd-876c-c95647...


Hope you're looking for good-faith discussion here. I'll assume that you're looking for a response where someone has taken the time to read through your previous messages and also the linked ChatGPT interaction logs.

What you've shown is actually a great example of what folks mean when they say LLMs lack any sort of understanding. They're fundamentally predict-the-next-token machines; they regurgitate and remix parts of their training data in order to satisfy the token-prediction loss function they were trained with.

In the linked example you provided, *you* are the one that needs to provide the understanding. It's a rather lengthy back-and-forth to get that code into a somewhat usable state. Importantly, if you didn't tell it to fix things (sqlite connections over threads, etc.), it would have failed.

And while it's concurrent, it's using Python threads, so it's not going to do any CPU-bound work in parallel. The example you have mixes IO- and compute-bound-looking operations.
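To make the threads-vs-parallelism point concrete: in CPython, the GIL means threads interleave on one core, so a thread pool helps with IO-bound waits but not pure-Python compute. A minimal sketch (workload and numbers are illustrative, not from the linked chat):

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_bound(n):
    # Pure-Python arithmetic holds the GIL, so only one thread
    # actually computes at a time, no matter how many workers exist.
    return sum(i * i for i in range(n))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(cpu_bound, [10_000] * 4))

# The results are correct, but wall time is roughly single-core for
# CPU work; genuine parallelism would need multiprocessing (or a
# free-threaded interpreter build).
print(results[0] == sum(i * i for i in range(10_000)))  # True
```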

So, if your need was to refactor your original code to _actually be fast_, ChatGPT demonstrated it doesn't understand nearly enough to make that happen. This thread got started around correcting the misconception that an LLM could actually possess enough knowledge to do valuable, complex refactoring and programming.

While I believe that LLMs can be good tools for a variety of usecases, they have to be used in short bursts. Since their output is fundamentally unreliable, someone always has to read -- then comprehend -- its output. Giving it too much context and then prompting it in such a way to align its next token prediction with a complex outcome is a highly variable and unstable process. If it outputs millions of tokens, how is someone going to actually review all of this?

In my experience using ChatGPT, GPT4, and a few other LLMs, I've found that it's pretty good at coming up with little bits to jog one's own thinking and problem solving. But doing an actual complex task with lots of nuance and semantics-to-be-understood outright? The technology is not quite there yet.


Did you learn anything from that exercise? Are you a better programmer now for having seen that solution? Because if not, this seems like a great way to get the fabled "one year of experience, twenty times".


That ... didn't even refactor the code. It just returned some generic Python concurrency methods which vaguely fit the posted code.


The package manager is designed to download archives (.tar.gz) of packages, not the individual files that compose them.


Most source code packages, including their metadata, are generally small even as an aggregated compressed file, and it's very common to download a lot of them all at one time once you've resolved the necessary dependencies. It depends on several factors though -- including community ones (are lots of tiny packages encouraged?) and how things like the build system work when downloading things.

In practice, it's very much an appropriate use case for multiplexing, if you ask me. But not having it isn't a dealbreaker either, IMO. It's a bit more work to support HTTP/2 and can be done rather transparently later on anyway, since the underlying transport can be switched and has an upgrade path.


> are lots of tiny packages encouraged?

Very much the opposite

