For meshed networks there is a secondary ID (with a name I do not know) that is used to distinguish between APs, since your device should only talk to at most one AP at a time.
It wouldn't be surprising if they used that for finding the location, but marketing sells it as SSID matching, since the people they want to sell it to are most likely not experts in networking.
The ESSID (Extended Service Set Identifier) is the human-readable thing you see. There is an underlying BSSID (Basic Service Set Identifier) that includes the unique identifier for the AP (its MAC address) your mobile unit is associated with.
On Windows you can see this (from an elevated context and, in newer versions, with location services enabled) by running: "netsh wlan show interfaces"
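If you want to grab it programmatically, here is a minimal Python sketch that parses that same netsh output. It assumes the usual English-locale field labels ("BSSID : ..."), which may differ on localized Windows builds:

    import re
    import subprocess

    def current_bssid() -> str | None:
        # Run "netsh wlan show interfaces" and capture its text output.
        out = subprocess.run(
            ["netsh", "wlan", "show", "interfaces"],
            capture_output=True, text=True, check=True,
        ).stdout
        # Assumed line format: "    BSSID                 : aa:bb:cc:dd:ee:ff"
        match = re.search(r"^\s*BSSID\s*:\s*([0-9A-Fa-f:]{17})\s*$", out, re.MULTILINE)
        return match.group(1) if match else None

    if __name__ == "__main__":
        print(current_bssid() or "not associated")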
> Till ~2010, a layoff was a sign of failure. It meant the CEO messed up.
>
> In 2024, a layoff is a signal of “discipline.” Companies lay off thousands, and their stock price jumps.
Citation needed. The author started their career after 2010, so they are not basing that on personal experience. In my experience this is not true.
That article even says: “Wall Street, in keeping with its cheerful attitude about layoffs, […] investors bet that profit-sweetening job cuts, though perhaps not as dramatic as AT&T’s, would remain in vogue among large corporations.”
Large layoffs have always been looked upon favorably by investors.
The people developing exploits have an obvious way to recoup their token investment. How do the open source maintainers recoup their costs? There's a huge disparity here.
They’re fine and work as advertised. One weird thing is you don’t get the receipt for 10-20 minutes, presumably while humans are viewing the footage.
The main thing I use it for is convenient returns, which is why I’m disappointed in this news. I hardly ever buy things there other than things like gum or chips.
> For each response, the GenAI tool lists the sources from which it extracted that content, perhaps formatted as a list of links back to the content creators, sorted by relevance, similar to a search engine
This literally isn’t possible given the architecture of transformer models and there’s no indication it will ever be.
Technically correct, but the workarounds AI search engines use for grounding results could be a close enough approximation. Might not be accurate, but could be better than nothing.
Also, Anthropic is doing interesting work in interpretability; who knows what could come out of that.
And it could be snake oil, but this startup claims to be able to attribute AI outputs to ingested content: https://prorata.ai/
Not every LLM implementation can use RAG against a Google-sized knowledge base. This proposal essentially says LLMs have to be paired with Google to be legit.
Those citations come from it searching the web and summarizing, not from its built-in training data. Processes outside of the inference are tracking the sources.
If it were to give you a model-only response it could not determine where the information in it was sourced from.
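A rough sketch of what that outside-the-inference bookkeeping could look like (search_web and generate here are hypothetical stand-ins, not any particular vendor's API):

    from typing import NamedTuple

    class Source(NamedTuple):
        url: str
        snippet: str

    def search_web(query: str) -> list[Source]:
        # Hypothetical stand-in for whatever search backend the product uses.
        return [Source("https://example.com/a", "Some snippet about " + query)]

    def generate(prompt: str) -> str:
        # Hypothetical stand-in for the actual LLM call.
        return "Summary based on source [0]."

    def grounded_answer(question: str) -> tuple[str, list[str]]:
        sources = search_web(question)
        context = "\n".join(f"[{i}] {s.snippet}" for i, s in enumerate(sources))
        answer = generate(f"Answer using only the numbered sources:\n{context}\n\nQ: {question}")
        # The citation list is this bookkeeping, kept outside inference; a model-only
        # answer has no comparable record of where its training text came from.
        return answer, [s.url for s in sources]

    print(grounded_answer("example question"))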
Any LLM output is a combination of its weights from its training, and its context. Every token is some combination of those two things. The part that is coming from the weights is the part that has no technical means to trace back to its sources.
But even the part that is coming from the context is only being produced by the weights. As I said, every token is some mathematical combination of the weights and the context.
So it can produce text that does not correctly summarize the content in its context, or incorrectly reproduce the link, or incorrectly map the link to the part of its context that came from that link, or more generally just make shit up.
OK, I'll try to err towards the "5" with this one.
1. We built a machine that takes a bunch of words on a piece of paper, and suggests what words fit next.
2. A lot of people are using it to make stories, where you fill in "User says 'X'", and then the machine adds something like "Bot says 'Y'". You aren't shown the whole thing; a program finds the Y part and sends it to your computer screen.
3. Suppose the story ends, unfinished, with "User says 'Why did the chicken cross the road?'". We can use the machine to fix up the end, and it suggests "Bot says: 'To get to the other side!'"
4. Funny! But when the User character asks where the answer came from, the machine doesn't have a brain to think "Oh, wait, that means ME!". Instead, it keeps making the story longer in the same way as before, so that you'll see "words that fit" instead of words that are true. The true answer is something unsatisfying, like "it fit the math best".
5. This means there's no difference between "Bot says 'From the April Newsletter of Jokes Monthly'" versus "Bot says 'I don't feel like answering.'" Both are made-up the same way.
> Google's search result AI summary shows the links for example.
That's not the LLM/mad-libs program answering what data flowed into it during training; that's the LLM generating document text like "Bot runs do_web_search(XYZ) and displays the results." An ordinary, non-AI program is looking for "Bot runs", snips out that text, does a regular web search right away, and then substitutes the results back in.
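A toy version of that snip-and-substitute step (the "Bot runs do_web_search(...)" convention and do_web_search itself are made up here, just to mirror the example above):

    import re

    def do_web_search(query: str) -> str:
        # Stand-in for the real search the wrapper program performs.
        return f"Top results for {query!r}: https://example.com/1, https://example.com/2"

    def expand_tool_calls(generated_text: str) -> str:
        # Find the "Bot runs do_web_search(...)" phrase the model generated,
        # run the search ourselves, and splice the results back into the text.
        pattern = re.compile(r"Bot runs do_web_search\((.*?)\)")
        return pattern.sub(lambda m: do_web_search(m.group(1)), generated_text)

    print(expand_tool_calls("Bot runs do_web_search(XYZ) and displays the results."))

The links the user sees come from that substitution, not from anything the model can introspect about its own training data.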
The only way I could see it is if there's enough pushback on them taking everyone's power and water (and computer parts) in a world where power and water are becoming increasingly unstable. But I feel like if AI is defeated because there isn't enough consistent water and power to give it, then there are more pressing issues at hand...
No such claim was made, so no such claim needs to be refuted. If people want to engage in conversation, they will have to use their words to do it.