I see "Sale Ending Soon: Lifetime Licences 71% off - only £29!" at the top of the pages. It looks really neat, but my interest was mostly curiosity. You need to log in to download and then log in again from the app (for the trial), and I ran into more friction than I had time for. Will bookmark in case I have a need for this later.
Thanks for the feedback, I'll make it easier to get in in the next update! Also, you don't need to log in inside the app: just click "log in" and then "trial mode"; you don't actually need to enter details. But yeah, it's maybe a bit confusing.
I've been using xScope since late 2015. I haven't used PixelSnap, but looked at it and was considering buying as PixelSnap's magic "snappable" features are pretty cool. xScope is $50 and PixelSnap is $40 to install on 1 Mac, but $70 to install on 2 Macs and $160 to install on 5 Macs. I have two laptops and a desktop and appreciate licenses to install on my personal Macs for non-simultaneous use, and xScope has this through the App Store purchase.
xScope also offers other tools. I use Rulers, Loupe, Guides, and Crosshair and they are all nice and work as expected.
I'm regularly struck by how much of technical Japanese is literal transliterations of loanwords. In the diagram at https://github.com/w-okada/voice-changer#vc-client-%E3%81%A8... , I see "user", "client (browser)", Docker "container", "server", "Host" PC, "speaker".
... I don't know what the label says on the link between the client and server, though, the katakana is "bo i che n" and I can't think of what that transliterates to. Maybe it's a loanword that's not from English?
Most attorney-client privilege laws explicitly exempt any communications that can't be expected to be private, which would presumably include the exterior envelope of a piece of mail, which is effectively a postcard (the archetypical not-private communication).
The other problem you'd have here is that you can't generally enforce the privilege, beyond excluding evidence in a trial.
This is an area where parallel construction[1] can be applied for evidence laundering. Usually the term is used to describe unreasonable search/seizure in violation of the 4th amendment, but it can also be applied to evidence collected in violation of attorney-client privilege.
The evidence can't be submitted at trial, but that's pretty much the only thing that can't be done with it--you can use it to obtain other evidence. In theory that derived evidence would also be excluded as "fruit of the poisonous tree", but in reality, in violation of the constitution and basic human rights, law enforcement frequently uses parallel construction--showing an alternate path by which the evidence might plausibly have been obtained--to get evidence in front of jurors which it could not have obtained without violating the constitution it supposedly upholds.
I don't think any of this has anything to do with attorney-client privilege, which is the only thing I'm here to talk about. Postal mail metadata is about the least interesting possible thing to discuss; the privilege thing here was the only interesting angle I saw.
The Post Office's regulations for these mail covers do explicitly exclude mail "between the mail cover subject and the subject's known attorney." I agree that attorney-client privilege probably doesn't require this for "outside of the envelope" information, but the policy seems to be more of just reacting to the... vibes of the privilege.
Except for the incoming question being on there, per your link:
> While it was notable that a potential question was written on Biden’s card, every White House press office takes scrupulous care to prepare their president for news conferences.
Except the question on there wasn't the question that was asked. Do you understand how Q&A (or, for that matter, court) prep goes? The question on the card was an example question they expected to be along the lines of what that person would be interested in and how they would phrase it (or what particular things they would try to attack or draw out). The article you just quoted even said that.
> I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught.
If you replaced this guy's name with mine, I'd be upset. In my non-software networks, the hallucination problem isn't common knowledge. To them it's just a cool Google replacement.
> In my non-software networks the hallucination part isn't common knowledge
I think that's one of the main issues around these new LLMs: most users will take what the bot tells them as gospel. OpenAI really should be more upfront about that. Because when regulations and policies start getting put forth without an understanding of LLM hallucination, we could very well end up in a situation where regulators demand something that is not technically feasible.
> OpenAI really should be more upfront about that.
I mean, they are quite upfront. When you load the page it displays the following disclaimers in quite a large font:
"Limitations
May occasionally generate incorrect information
May occasionally produce harmful instructions or biased content
Limited knowledge of world and events after 2021"
2 out of the 3 disclaimers are about the fact that the software lies.
And then at the bottom of the page, right below the input box, they say: "Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts."
Sure, they could make the warnings even larger, reword them to "This software will lie to you", and add small animated exclamation marks around the message. But it's not like they hide the fact.
A better way, like the sibling comment says, is to force people to type a sentence so they consciously acknowledge it. It's similar to college exams which ask you to specifically write out something like "I have not cheated on this assignment or test."
One thing they could try is force users to type "I understand the information presented by ChatGPT should not be taken as fact" before they can use it.
I've seen that sort of thing used to force people to read the rules on Discord servers, and this is higher stakes IMO.
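A gate like that is trivial to build. As a minimal sketch (the exact sentence and function names here are illustrative, not anything OpenAI actually ships):

```python
REQUIRED = "I understand the information presented by ChatGPT should not be taken as fact"

def acknowledged(typed: str) -> bool:
    # Accept only an exact match of the sentence, ignoring case and
    # surrounding whitespace; anything else fails the gate.
    return typed.strip().lower() == REQUIRED.lower()

def gate() -> None:
    # Keep prompting until the user types the acknowledgment verbatim.
    while not acknowledged(input(f'Type "{REQUIRED}" to continue: ')):
        print("Please type the sentence exactly as shown.")
```

Discord rule-gates work the same way: the friction of typing the sentence is the point, since it can't be dismissed with a reflexive click.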
I agree that they provide that disclaimer on the homepage. I was talking more broadly: society (namely the news media and government) should be aware of the limitations of LLMs in general. Take this article from the NYT[1]: how you react to it depends on how well you understand the limitations of LLMs; it's either alarming or "meh". All I'm saying is that society in general should understand that LLMs can generate fake information, and that this is just one of their core limitations, not a nefarious feature.
If I search my name, it doesn't come up with anything defamatory. (Not that I tried leading questions.) But it does come up with plenty of hallucinations including where I've worked, lived, gone to school, etc. And that's with a bunch of bios online and AFAIK a unique online name.