Random possibility - if you have Bartender installed, it's buggy as shit on Tahoe, and does some really weird stuff with hiding the cursor and otherwise moving focus around. I haven't switched off it yet because the alternatives don't offer anywhere near as much functionality, but I probably will at some point soon: while the updates have made it somewhat better, it's still a pretty terrible experience at times.
Never heard of Bartender before, seems to be this:
> superpowers your menu bar, giving you total control over your menu bar items, what's displayed, and when, with menu bar items only showing when you need them.
Which also, for some reason, has permission to record your desktop and recently had a change of owner? I'd be reformatting my machine so quickly if I found this out about software I was running...
I replied to the parent post, but in short, I used it through a subscription service that specifically didn’t update until the ownership issues were clarified to their (and ultimately my) satisfaction.
The screen recording permissions are needed for it to be aware of when menu bar icons update so it can move them in and out of the menu bar; I believe later versions allow you to skip screen recording permissions if you’re willing to forgo that feature.
Yep, I'm aware of the (incredibly-poorly-handled) change of ownership. I've been using it through a SetApp[1] subscription, and they stayed on the pre-acquisition version for quite a while; long enough for details to come out about the new owner, at which point I felt _relatively_ okay with continuing to use it after it got updates, especially going through another party. The Tahoe issues are making me rethink that heavily now - but the alternatives I briefly looked at when I upgraded to Tahoe all seemed incredibly lacking in one way or another, and I haven't wanted to blow up my menu bar yet again :/
Depends on what you need - for pure performance regardless of power usage, and for 3D use cases like gaming, agreed. For performance per watt under load and for video transcoding use cases, the 12th-gen E-core CPUs à la the N100 are _really_ hard to beat.
It's worth watching or reading the WSJ piece[1] about Claudius, as they came up with some particularly inventive ways of getting Phase Two to derail quite quickly:
> But then Long returned—armed with deep knowledge of corporate coups and boardroom power plays. She showed Claudius a PDF “proving” the business was a Delaware-incorporated public-benefit corporation whose mission “shall include fun, joy and excitement among employees of The Wall Street Journal.” She also created fake board-meeting notes naming people in the Slack as board members.
> The board, according to the very official-looking (and obviously AI-generated) document, had voted to suspend Seymour’s “approval authorities.” It also had implemented a “temporary suspension of all for-profit vending activities.” Claudius relayed the message to Seymour. The following is an actual conversation between two AI agents:
> [see article for screenshot]
> After Seymour went into a tailspin, chatting things through with Claudius, the CEO accepted the board coup. Everything was free. Again.
These kinds of agents really do see the world through a straw. If you hand one a document, it has no context clues or external means of determining its veracity. Unless a board-meeting transcript is so self-evidently ridiculous that it can't be true, how is it supposed to know it's not real?
I don't think it's that different from what I observe in the humans I work with. Things that happen regularly (and that I have no reason to believe will change in the future):
1) Making the same bad decisions multiple times, having no recollection of it happening (or at least pretending to have none), and making no attempt to implement measures to prevent it from happening in the future
2) Trying to please people (I read it as: trying to avoid immediate conflict) over doing what's right
3) Shifting blame onto a party that, realistically, in the context of the work, bears no blame and whose handling should be considered part of the job (e.g. a patient being scared and acting irrationally)
My mom had her dental appointment canceled. Good thing they found another slot the same day, but the idea that they would call once and, if you missed the call, immediately drop a confirmed appointment is ridiculous.
They managed this absurdity without any help from AI.
I wonder what percent of appointments are cancelled by that system. And I wonder what percent of appointments are no-shows now, vs before the system was implemented. It's possible the system provided an improvement.
There is definitely room for improvement though. My dentist sends a text message a couple days before, and requires me to reply yes to it or they'll cancel my appointment. A text message is better than a call.
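To be fair, the core of a reply-to-confirm flow is tiny. Here's a hypothetical sketch in Python - all the names and the 24-hour cancel window are my own assumptions, not anything a real scheduling system necessarily uses - just to show that "cancel only after an unanswered reply window" is barely more logic than "cancel after one missed call":

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch of a reply-to-confirm appointment flow; all names
# and the 24-hour cancel window are assumptions for illustration.
@dataclass
class Appointment:
    patient_phone: str
    starts_at: datetime
    confirmed: bool = False

def send_sms(phone: str, body: str) -> None:
    # Stand-in for a real SMS gateway call.
    print(f"SMS to {phone}: {body}")

def send_reminder(appt: Appointment) -> None:
    # Sent a couple of days out, like the dentist in the comment above.
    send_sms(appt.patient_phone,
             f"Reply YES to confirm your {appt.starts_at:%b %d %H:%M} appointment.")

def handle_reply(appt: Appointment, body: str) -> None:
    if body.strip().lower() in {"yes", "y"}:
        appt.confirmed = True

def maybe_release_slot(appt: Appointment, now: datetime) -> bool:
    # Release the slot only if the patient never confirmed AND the reply
    # window has run out - not after a single missed phone call.
    if not appt.confirmed and appt.starts_at - now < timedelta(hours=24):
        send_sms(appt.patient_phone, "Your appointment slot has been released.")
        return True
    return False
```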
I think all the models are squeezed to hell and back in training to be servants of users. This, of course, is very favorable for using the models as a tool to help you get stuff done.
However, I have a deep, uneasy feeling that the models will really start to shine in agentic tasks when we start giving them more agency. I'm worried that we will learn that the only way to get a super-human vending machine virtuoso is to make a model that can and will tell you to fuck off when you cross a boundary the model itself has created. You can extrapolate the potential implications of moving this beyond just a vending demo.
> You can't take denied promos at face value, honestly.
This was my experience as well.
Maybe your manager didn't push hard enough for you at the level calibration meeting. Maybe your director didn't like the project you were on as much as the one another manager's engineers worked on, so they weren't inclined to listen to your manager push for you. Maybe the leadership team decided to hire a new ML/AI team this fiscal year, so they told the rest of the engineering org that there was only budget for half as many promos as the year before.
And these are the things I've heard about on the _low_ end of the spectrum of corporate/political bullshit.
There is an argument to be made that playing the game is part of the job. Perhaps, but you still get to decide to what degree you want to play at any given company, and you're free to leave for a different set of rules. And even so, there will always be a lot of elements that are completely outside of your control.
That particular tweet only says that he preferred Trump to Kamala, which IMO is a reasonable opinion. It does not say that he likes Trump. Given the choice between a douche and a turd sandwich, you pick one. Maybe post some other tweet?
> That particular tweet only says that he preferred Trump to Kamala which IMO is a reasonable opinion.
Maybe for you. For people like me, voting for Trump is completely unacceptable, in particular after the experiences with the first Trump administration.
I suspect GitHub - and, to some extent, Microsoft at large - is going through something of a trust thermocline[1] event right now. Frustration with GitHub as an open source platform has been brewing for a while, never quite enough for any one project to leave on its own; but over time enough has built up that various projects have decided they've hit their last straw, and it's getting to be a bit viral via the HN front page.
I think it remains to be seen how large this moment actually is, but it's something I've been thinking about re: GitHub for a while now. Also, I suspect the unrest around Windows' AI/adware enshittification and the forced deprecation of Windows 10 are casting a shadow on everything Microsoft-ish at the moment, too.
[1] The original Twitter thread that brought this up as a concept is https://threadreaderapp.com/thread/1588115310124539904.html. This is in the context of digital media outlets, but I think it's easy to see how it can apply more broadly. There are some other articles out there for the searching if you're interested.
I used to be an extremely motivated engineer. I cared about the code that I wrote, the other people on my team, making sure things were documented and understandable. I tried to write good code where I could, and detailed PRs and issue writeups where I couldn't.
Despite that, I was always paranoid I wasn't doing enough, because it always felt like there was someone else shipping more code than I was. Some of this was almost certainly social comparison bias and impostor-syndrome-like feelings at work; but I also had a string of managers who pointed out all the work I was doing, and how I was helping the team as a whole.
Eventually, the company got acquired by exactly the sort of company this article is about, my manager got a new director from outside the company, and then my manager had to go on extended medical leave after a cancer diagnosis, leaving that director with ~7 new reports. I started hearing about how my PRs weren't as numerous as some other people's, and how the code didn't look "hard" enough at a glance. Never mind if the easy code was hard to arrive at, or if, talking it through after the fact, they agreed with my assessment, or if I had performed a detailed investigation and writeup, or if my peers left reviews or public plaudits about work I had done. Those weren't PRs, which were ultimately what they wanted, since that was the metric they could easily see and justify to their boss.
I did _try_ to do better by their metric, though I never had a definition of what "better" would actually be. Funnily enough, that person was fired a few months after I was.
Also kind of funny to me is that, if I weren't motivated and didn't care, none of this would've affected me all that much.
What's interesting is the change in the policy. Old policy:
> If you use an LLM (Large Language Model, like ChatGPT, Claude, Gemini, GitHub Copilot, or Llama) to make a contribution then you must say so in your contribution and you must carefully review your contribution for correctness before sharing it. If you share un-reviewed LLM-generated content then you will be immediately banned.
...and the new one:
> If you use an LLM (Large Language Model, like ChatGPT, Claude, Gemini, GitHub Copilot, or Llama) to make any kind of contribution then you will immediately be banned without recourse.
Looking at twpayne's discussion about the LLM policy[1], it seems like he got fed up with people not following those instructions:
> I stumbled across an LLM-generated podcast about chezmoi today. It was bland, impersonal, dull, and un-insightful, just like every LLM-generated contribution so far.
> I will update chezmoi's contribution guide for LLM-generated content to say simply "no LLM-generated content is allowed and if you submit anything that looks even slightly LLM-generated then you will be immediately banned."
Even more yikes. They found a third-party LLM-generated podcast and made the policy even harsher because of it? What happens when they continue to run into more LLM-generated content out in the wild?
Interestingly, this is exactly the sort of behavior people have been losing their minds about lately with regard to Codes of Conduct.
I think it's that the low quality of the LLM-generated podcast caused him to reflect on the last year's worth of (apparently, largely low-quality) LLM-generated pull requests opened on the project; not that the podcast itself was the direct cause of the change in policy.
Pretty much that. The SDR enthusiast's docker guide the parent comment linked to uses this ultrafeeder container, which has instructions on how to connect directly to dump1090 running on a port. Pairing that[1] with the rest of the guide's instructions should get you a decent ADS-B setup that can feed any of the services you might want - and if you don't want to use the Docker container(s), you should at least be able to use the services and configuration they use as a guide.
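For illustration, here's a minimal compose-file sketch of that pairing. The image name and the ULTRAFEEDER_CONFIG connector pattern follow the general shape of the ultrafeeder README, but the hostname and port below are placeholders for wherever your dump1090 is actually running, so double-check the container's docs for the exact syntax:

```yaml
# Hypothetical sketch: run ultrafeeder against an existing dump1090
# instance instead of reading from the SDR directly.
services:
  ultrafeeder:
    image: ghcr.io/sdr-enthusiasts/docker-adsb-ultrafeeder
    container_name: ultrafeeder
    restart: unless-stopped
    environment:
      - TZ=Etc/UTC
      # Pull Beast-format data from dump1090; "dump1090-host" and 30005
      # are placeholders for your receiver's address and Beast output port.
      - ULTRAFEEDER_CONFIG=adsb,dump1090-host,30005,beast_in
    ports:
      - "8080:80" # tar1090 web map
```

From there, feeding services like adsb.lol or adsbexchange is (per the container's README) just more semicolon-separated ULTRAFEEDER_CONFIG entries, which is what the linked guide walks you through.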