His videos started going viral immediately after the conflict began. I enjoyed them and found them educational, but I'm taking all of his claims with a grain of salt because I don't know much about the region or its history. He speaks very authoritatively, which makes for compelling storytelling, but conflicts of this magnitude require much more context to really understand.
For context, there's been a massive project to produce as close to a perfect color-corrected version of Dragon Ball and Dragon Ball Z as possible. It can be found on traditional anime torrenting sites. It's really an outstanding labor of passion and a true testament to the global community's love for this series.
I'd imagine that there's some discussion about how to make the most out of the tool as well as discussion of experiments and capabilities. I'm not even sure what exactly "Microsoft Copilot" entails anymore because of the multiple rebrands, but having a place where you can discuss exploring plugins and other adjacent features seems useful.
Not quite the same, but I was recently looking around for communities centered on Claude Code, for discussion of people's workflows, which plugins people are using, and whether they notice those making a significant difference.
Since the technology is still evolving, having an active community can help you discover new patterns and explore the space more effectively.
> [...] I'm not even sure what exactly "Microsoft Copilot" entails anymore [...]
Watching from the sidelines (not a Microsoft user), I've completely lost track. Between this and the Azure/365 cloud whatever stuff, I have no idea what many of the products even are anymore.
Simply put, Microsoft is the worst company at naming things. Even when they come up with a good name for something, they'll name three other totally different products the same thing to maximize confusion.
I gotta say though, I'm actually not sure which VMware (well, Broadcom, I suppose) products I use anymore. I'm pretty sure they took the Aria name off something they called Aria for a little while. So Aria is no longer Aria, but they still have Aria, except now it's what used to be called XYZ.
Xbox Series, with X > S (so if you want the high end of the current generation you want the Xbox Series X; if you want mid-range, things are more complicated, because you can now get a used Xbox One X, though not the base Xbox One, for much less than an Xbox Series S, and which one is "better" is a dice roll depending on the games you want to play and whether 4K matters to you…)
"Series" is a really weird word to use there. It also doesn't help that the versions are extra complicated: with "PC-like compatibility", everything after the Xbox One plays just about the same library, so you need a bit of a matrix to figure out which one is best for you if you don't care about the "latest and greatest".
Oh wow, yes, completely forgot about that one. To me it's a complete blur of single words and letters: One, Series, X, S, 360? Maybe they should create a 365 with MS Office pre-installed. Or something.
Seriously? Does anybody know what Copilot is? I don't think I have ever seen a "Copilot user", so I don't know what it looks like. Is it the little macro key on new laptop keyboards? The chatbot you get in Bing? A technical philosophy? Or is it in essence just copilot.com, the mediocre chat interface where you used to get free GPT-4 three years ago?
I wish. I got a Dell laptop for work and they've replaced the right Ctrl key with a Copilot key, and (because it's a locked-down work system) the only thing I can remap it to is the Windows menu. I keep hitting it out of muscle memory, interrupting everything. But at least now it doesn't launch Copilot.
Which I could add is "the only AI approved for use by IT" because they hate us.
> Which I could add is "the only AI approved for use by IT" because they hate us.
It's the same at our place. It's basically the lowest-effort option: since we already have data agreements in place for Microsoft 365, it eliminates a lot of the paperwork. And they do promise they won't train on your data, even in the free (well, included with basic M365) version for corporate users. A lot of others don't unless you pay.
It's too bad, because it seems to be the worst AI around, even compared to ChatGPT itself, which uses the same model as Copilot in MS Office. I don't really understand why there's such a difference. If you do pay the $30 it's a bit better, especially the Researcher.
Double-check whether the (hopefully not locked) BIOS gives an option to customize the Ctrl key. I had a previous work laptop that also got cute with the Ctrl key, but thankfully it did let you remap it.
Colin and Samir have some really in-depth videos with MrBeast which help document and explain his rise in fame, influence, and prominence. He's one of the most dedicated optimizers of our time, and he's been responsible for shaping the YouTube meta for years at this point. By now a significant portion of his brain is probably fully allocated to optimizing for the most engaging YouTube videos possible. The "24 hours with MrBeast" video helped contextualize his fame for me; it's really rock-star level. It's a shame that not many people engage with him on a more technical level; I think he would have a lot of interesting insights to nerd out over.
I agree, and even more generally: if you think about any medium from a creator's perspective, there is so much more depth to it.
Pop music is not my thing, but I read a book about it, about the Max Martins and the production machinery behind these giant hits, and it's full of endlessly fascinating optimization problems.
Yeah, many podcasts are either: (1) an advertising platform for a guest's new book, (2) a platform for the guest to play their "greatest hits" without engaging critically or exploring new ideas, or (3) a platform for the host to tell you their half-baked opinion about $CURRENT_EVENT in order to keep the slop machine running.
Conversations With Tyler: he tends to ask some of the most creative and interesting questions. For a specific episode recommendation, I really enjoyed "Donald S. Lopez Jr. on Buddhism". He also has an older interview with Paul Graham (pg), but I don't think the questions were as deep or challenging.
Dwarkesh Patel: he gets extremely high-quality guests and he doesn't just roll over completely when a guest makes a claim; at least he's willing to ask follow-up questions. His guest lectures with Sarah Paine are outstanding for contextualizing your understanding of the world order of the past 100 years from an American perspective.
Wookash Podcast: very technical and focused on more advanced programming topics. For specific episode suggestions I suggest the recent ones with Anton Mikhailov where they talk about ~~ECS~~ arrays of things.
Two's Complement: a podcast by the guy who made the Godbolt Compiler Explorer. It doesn't release very frequently, but it provides interesting perspectives.
Ezra Klein Show: the host is one of the guys who wrote the Abundance book, which I think carried a much-needed message. Most recently he had an interview with Clark from Anthropic, though it's from a fairly normie / non-AI-obsessed perspective.
I have to rant about podcasts:
My biggest issue with most podcasts is that it often feels like there's very little effort put into preparing for the discussion, and there aren't many interesting follow-up questions. I think you can challenge people's claims in good faith to make for more interesting discussions; at least ask some reasonable follow-up questions when the guest makes outrageous claims! A lot of podcasts are just an advertising platform for people to talk about their new book. If you could hear a guest give the same conversation with a different host, that's probably a sign that the questions are bad and shallow, and you shouldn't keep listening to that podcast.
One of the issues with asking deeper questions is that anything truly interesting or new will probably require having thought about the topic a lot ahead of time. Otherwise you just end up getting a very shallow answer, because people usually can't think through complex topics on the fly, so the best you can hope for is a pre-cached or partially computed answer. It would be great to have a podcast dedicated to exploring more challenging and underexplored questions that are shared with guests ahead of time, so both parties have time to think and explore. Most famous people just go on podcasts to play their "greatest hits" without saying anything substantially different or new.
I think vibe coding something and showing it off on Show HN is probably fine, but it boils my blood when people cannot even be bothered to write the post body themselves. If someone is using an AI generated post body and title that's usually a clear signal of slop for me. The post body is supposed to be part of the human connection element!
That's a meaningful signal to me too, except there are some brilliant tech people who genuinely need all the help they can get with their English.
Even before AI got this strong, some of the translations were fairly awkward in their own way.
>The post body is supposed to be part of the human connection element!
I really think this is the right take too :)
Maybe for non-English speakers, or anyone really: if a project means a lot to you, have a number of people who are smart in different ways look over the text a few times and help you edit beforehand.
To make sure it's really what you, the human, want to say at the time.
For me the biggest benefit from using LLMs is that I feel way more motivated to try new tools because I don't have to worry about the initial setup.
I'd previously encountered tools that seemed interesting, but as soon as I tried to get them running I found myself going down an infinite debugging hole. With an LLM I can usually explain my system's constraints, and the best models will give me a working setup from which I can begin iterating. The funny part is that most of these tools are AI-related in some way, yet getting a functional environment often felt impossible unless you had really modern hardware.
Same. This weekend, I built a Flutter app and a Wails app just to compare the two. I would never have done either on my own due to the up-front boilerplate, and not knowing (nor really wishing to know) Dart.
I'm worried that this will lead to a Prop 65 [0] situation, where eventually everything gets flagged as having used AI in some form. Unless it suddenly becomes a premium feature to have 100% human-written articles, but are people really going to pay for that?
> substantially composed, authored, or created through the use of generative artificial intelligence
The lawyers are gonna have a field day with this one. This wording makes it seem like you could do light editing and proof-reading without disclosing that you used AI to help with that.
At least it would be possible to automatically filter everything out. Maybe the market will somehow make it possible for non-AI content to get some spotlight because of that.
The problem is that people believe it is. People believe the advertising industry's narrative that they are forced to show the insane consent screens and have to make them difficult. Yet they are not, and a "reject all" must be as easy as an "accept all" (and "legitimate interests" do not exist as a loophole: uses are either allowed, in which case you don't have to ask, or they are not).
How is that useless? You adding the warning tells me everything I need to know.
Either you generated it with AI, in which case I can happily skip it, or you _don't know_ if AI was used, in which case you clearly don't care about what you produce, and I can skip it.
The only concern then is people who use AI and don't apply this warning, but given how easy it is to identify AI generated materials you just have to have a good '1-strike' rule and be judicious with the ban hammer.
Because you have to be able to prove it wasn't AI when the law is tested, and keeping records proving you didn't use AI is going to be really difficult, if possible at all. For little people having fun, unless you poke the wrong bear, it won't matter. But for companies that are constantly the target of lawsuits, expect a new field of unlabeled-AI trolling comparable to patent trolling.
We already see this with the California label: it gets applied to things that don't cause cancer because putting the label on is much cheaper than going through the process of proving that some random thing doesn't cause cancer.
If the government showed up and claimed your comment was AI generated and you had to prove otherwise, how would you?
"One regulation was kinda bad, so we should never regulate anything again."
Good god, this is pathetic. Do you financially gain from AI or do you think it's hard to prove someone didn't use it? Like this is the bare minimum and you're throwing temper tantrums...
The onus will be on the AI companies pushing these wares to follow regulations. If it makes it harder for the end user to use these wares, well too bad so sad.
>"One regulation was kinda bad, so we should never regulate anything again."
Please don't misrepresent what someone says. That does not lead to constructive dialog.
I asked a question challenging a specific way of regulating a specific thing, to indicate that it is challenging. That is not the same as dismissing all regulation.
Also, please avoid the personal mentions.
>The onus will be on the AI companies pushing these wares to follow regulations.
That wasn't the challenge. The issue raised isn't AI companies labeling things as AI; in the given example they were very much following the regulation.
I think a lot of people are asking that question about many digital services; I'm pretty sure that in areas like education and media, "no AI!" is going to be something rich people look for.
Editing and proofreading are "substantial" elements of authorship. Hope these laws include criminal penalties for "it's not just this - it's that!" "We seized Tony Dokoupil's computer and found Grammarly installed." Right, straight to jail.
> The study, published Wednesday in Environmental Science & Technology, found that California’s right-to-know law, also known as Proposition 65, has effectively swayed dozens of companies from using chemicals known to cause cancer, reproductive harm or birth defects.
...
> Researchers interviewed 32 businesses from a variety of sectors including personal care, clothing and health care, concluding that the law has led manufacturers to remove toxic chemicals from their products. And the impact is significant: 78 percent of interviewees said Proposition 65 prompted them to reformulate their ingredients; 81 percent of manufacturers said the law tells them which chemicals to avoid; 69 percent said it promotes transparency about ingredients and the supply chain.