
Mail delivery doesn't seem to be working, or, at the very least, is considerably delayed. I just sent an e-mail to my primary Fastmail-hosted e-mail address from my secondary GMail, and it hasn't shown up despite several minutes having passed.


No, you cannot. Copilot plugins are currently only supported for NeoVim, VS Code, and JetBrains IDEs. Emacs is not on the list.


I must be imagining copilot.el, then.


>ActiveX just had the nice side effect of only running on Windows and IE.

It also had the nice side effect of being the security equivalent of a sucking chest wound. Flash and Java applets were bad too (cf. the famous Java classloader vulnerability, which could be exploited by loading a malicious applet), but they didn't seem to be nearly as bad as the horror that was ActiveX. Perhaps it was because ActiveX was intentionally designed to integrate with the host OS, or because it was more deeply integrated into the browser, but my recollection is that Microsoft never managed to get ActiveX security right, and the way ActiveX security was "solved" was by ditching it entirely.


    ATT has provided the best service of any carrier while traveling, so I will use them.
Really? My experience with AT&T while traveling has been pretty awful. In the US, in rural areas, Verizon is better. And outside the US, Google Fi gives you international data roaming as part of the base package. One of the reasons I switched to Google Fi is because it's so much better when traveling.


For coverage it’s Verizon > AT&T > T-Mobile. So maybe it was awful compared to Verizon, but it could be worse. I remember a doctor friend I was hiking with on La Plata (a 14k ft peak in Colorado) taking a patient call at the very summit.

As a tmo customer I know I’m essentially off-grid in much of the country when I’m outside many metropolitan areas.

But tmo is great for traveling internationally.


I used Google Fi for a while, and while I'd love to use them as a primary carrier again, I can't until they choose to support add-on SIMs for watches. As a Google Fi and YoutubeTV customer, I cannot wait until I no longer have to give AT&T any money at all.


The only thing with Fi is that it is T-Mobile, and you'll always be in a lower priority block of customers compared to people paying T-Mobile directly, which means you'll see slower traffic in congested areas at peak times (e.g. during rush hour).


That's not true: Google Fi has the same priority as postpaid T-Mobile. This is something MVNOs negotiate in their contracts with carriers, not something that's true across the board.

Discount MVNOs increase their margins by buying wholesale deprioritized data, while Google Fi has negotiated no deprioritization.


Is this talked about somewhere? All I see is this reddit post[0] by u/Peterfield53, who looks like a very active user on r/GoogleFi but doesn't seem to be a Googler or otherwise a Google Fi support agent.

0: https://old.reddit.com/r/GoogleFi/comments/ulc1t5/perks_of_f...


https://old.reddit.com/r/NoContract/comments/oaophe/data_pri...

>QCI 6 is applied to all of T-Mobile's postpaid and prepaid plans (except for Essentials) and Google Fi which also has QCI 6 as well. This means if you want the absolute best from T-Mobile, you want to get a plan directly from them. Even their cheap $10 prepaid 1GB Connect plan has priority data.

You can apparently confirm this with a rooted phone: https://coveragecritic.com/2019/09/17/how-to-find-qci-values...

>My Google Fi service had a QCI of 6 during regular data use.


That seems to track with my experience: Google Fi in the US was no worse than T-Mobile directly (which my work phone uses).


If you're travelling, I find it hard to beat Fi.

You literally land in a new country, turn your phone back on and you get a "Welcome to [country] - your data rate is the same" message almost anywhere.

Personally - I've flown from Taiwan to Brazil to Amsterdam and then back to the US and I don't have to think about my phone. It just works.

---

Outside of the travel use-case, I would also probably pick something else, but if I know I'm going to be travelling, I'll switch back to Fi.


With eSIM and the Airalo app, international travel is fairly painless. It costs a few bucks and a couple of minutes to set up (which can be done while waiting at the airport to leave) to get a data-only SIM for your destination country. If you're paying for an expensive domestic plan for international reasons instead of a cheaper one (e.g. Mint Mobile at $40/mo), it might be worth investigating their plans to see if it would end up saving money, given your travel requirements.


Fi is $40 a month for the more expensive plans, plus $10/GB until the plan hits 10GB, at which point it's free but they might throttle.

And the key thing is I just don't have to think about it. I can't forget to register a new account, I don't have to worry about esoteric sign up requirements for certain countries (ex: Brazil wants a CPF for fucking everything), and I can't get stuck without a connection and then not be able to setup the next step.


I still have my Sprint plan. This is how it works by default. (Sprint + gvoice = google fi; before gfi you could merge your gvoice and sprint accounts, which was really cool. Then they cancelled that and started gfi.) Since the TMo merger, I suspect gfi is still using the Sprint stuff.


> Since the TMo merger, I suspect gfi is still using the Sprint stuff.

Since before the merger, it used Sprint and T-Mobile, in addition to US Cellular https://techcrunch.com/2018/01/17/googles-project-fi-now-cap... (ctrl-f sprint)


I've been to every state in the contiguous US, most of them more than once, and spent most of my time in rural areas / wilderness. Verizon was poor during the time I had it. I forget what triggered switching away from them, but I didn't last long; I had to switch to ATT. I switched to ATT because my Tesla uses them and I noticed it almost always had service in remote areas when I did not. Haven't needed to switch since.

ATT has a great plan for the Americas: South America is all covered, Canada is covered, and many other places too. It worked really well for me in Brazil.

That said, when I start traveling to Europe and Asia more, I may switch back to Google Fi.

For domestic use, while traveling / on the road a lot? I would rate as follows:

ATT > Google Fi > T-Mobile > Verizon

Keep in mind, if you are mostly stationary it is better to use the carrier known for good service in your fixed location.


I don't think that there will be a collapse. The linked post [1] has this telling line:

    Because I’m in a very very blue state and city
Emphasis on city. Every time I've read something about how the education system in the United States is on the verge of collapse or is collapsing, it's been from a teacher (or has quoted teachers) in a city school district. City school districts are in dire shape. But that's because many city school districts are massively overbuilt for the number of children they need to serve, and politicians are loath to shutter schools. So these school districts chug along, spending more and more money on buildings and facilities that are hardly used, while, at the same time, shortchanging teachers and the education of children.

Suburban school districts are smaller, have more children (which equates to more funding) and generally have newer buildings and facilities, so they're not in the same dire shape as city school districts. For that reason, you hear much less about them. After all, who wants to write a news story that reads, "Okay, everything is actually functioning as it more or less should?" This leads to a mistaken impression that all school districts everywhere are on the verge of breakdown when in reality the failures are localized to city districts like San Francisco or Chicago.

[1]: https://www.reddit.com/r/Teachers/comments/11620il/the_us_is...


The original poster on Reddit isn't even in the US, based on their post history. I've seen this happen a bunch on Reddit, where non-Americans astroturf as Americans (and Americans astroturf as Europeans), either for lulz or as sockpuppet accounts.

But I can attest to your statement. My mom works in education and it's open knowledge among teachers and certified staff that you start off in a city district for 1-2 years to get the no-experience stink off you, and then you go to a better suburban school district. PDs do the same as well.


I live in a smaller red city and it's as bad at select schools as it is in SF. Generally public education is horrendous here and everyone with an above median income sends their kids to private school.


Is the smaller red city demographically wealthy/upper middle class like Ft Worth, or demographically working class/poorer like Dayton? Class/family wealth has a MASSIVE impact on school district performance, because richer parents have more time and ability to intercede in their kids' education.

When people mention San Francisco, it's better to compare the families sending kids to public schools with those in Dayton tbh. Parents who can afford to usually leave San Francisco when they start a family so they can send their kids to better public school districts.

Also, like I've said a thousand times on HN - the primary city by population and economy in the Bay Area is SAN JOSE, not San Francisco. San Francisco barely had any tech employment until the 2010s, lost most of its banking+law employment in 2008, and lost the union port jobs in the '90s. The biggest employer in SF is the city itself.


I think a lot of it is just that cities still have functional newspapers to report when schools are failing and the suburbs don't.


Most suburbs are reported on by the newspapers of the neighboring city when something newsworthy occurs, educational or otherwise.


Does funding matter? <insert one of a dozen studies here showing little relationship between student funding and outcomes, across and within countries>


One of the issues with many of those studies is that they track per-pupil spending and educational outcomes without verifying that the per-pupil spending is actually being spent on pupils. If a significant fraction of the per-pupil spending goes to facilities maintenance, then the water, so to speak, is evaporating before it reaches the mouths of the thirsty.


Currently? You don't know for sure. It might be possible to make some guesses by examining how the model responds to various prompts and checking its output against the output that known models produce for the same prompts. But that will, at best, give you a good guess, not certainty. This is why the FLI's proposal for AI regulation [1] suggests both that AI models be watermarked and that the output from AI models be clearly identified as such. In a world where most people use regulated models, this would enable you to identify which model generated a certain piece of content.

As for applying a delta to the weights, that would likely break the model. It would be like randomly scrambling bytes in a compressed file and then expecting the file to decompress properly.

[1]: https://futureoflife.org/wp-content/uploads/2023/04/FLI_Poli...
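
To make the compressed-file analogy concrete, here's a minimal Java sketch (purely illustrative; it has nothing to do with actual model weights): deflate some text, flip a few random bytes in the compressed stream, and the inflate step typically fails or returns garbage.

    import java.util.Random;
    import java.util.zip.DataFormatException;
    import java.util.zip.Deflater;
    import java.util.zip.Inflater;

    public class ScrambledDeflate {
        public static void main(String[] args) {
            byte[] original = "the quick brown fox jumps over the lazy dog ".repeat(100).getBytes();

            // Compress the input.
            Deflater deflater = new Deflater();
            deflater.setInput(original);
            deflater.finish();
            byte[] compressed = new byte[original.length];
            int compressedLen = deflater.deflate(compressed);
            deflater.end();

            // "Apply a random delta": flip a handful of bytes in the compressed stream.
            Random rng = new Random(42);
            for (int i = 0; i < 8; i++) {
                compressed[rng.nextInt(compressedLen)] ^= (byte) 0xFF;
            }

            // Decompression now usually throws DataFormatException or yields garbage.
            Inflater inflater = new Inflater();
            inflater.setInput(compressed, 0, compressedLen);
            byte[] out = new byte[original.length];
            try {
                int n = inflater.inflate(out);
                System.out.println("Recovered " + n + " of " + original.length + " bytes");
            } catch (DataFormatException e) {
                System.out.println("Decompression failed: " + e);
            } finally {
                inflater.end();
            }
        }
    }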


By delta I really mean the kind of fine-tuning people are doing to avoid directly giving out Llama weights. Maybe watermarking will be the norm even for open-source models, to prevent abuse.


I look forward to reading breathless thinkpieces in the New York Times about how "techies" are keeping their kids away from AI, written in the same tone as articles written today about how they're limiting the "screen time" of their children.


Maybe :) Both are powerful tools, which can be used in a positive or negative way. As always, the role of the parents is extremely important here: to guide, and to understand when and how such tools can and should be used.


This reminds me of how NASA would never have a Shuttle in flight during the transition from December 31 to January 1, because they were unsure whether the Shuttle's computers could handle the rollover correctly [1]. Sure, they could have updated the software to make sure the rollover was handled correctly, but that would have required them to recertify the entire OS running the Shuttle, and it was easier to just plan missions such that the Shuttle was never flying on New Year's Eve.

[1]: https://usatoday30.usatoday.com/tech/science/space/2006-11-0...
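
For illustration, here's a toy Java sketch of the general failure mode (purely hypothetical; it is nothing like the Shuttle's actual flight software): naive day-of-year arithmetic produces nonsense once day 365 wraps back to day 1.

    public class YearEndRollover {
        // Naive elapsed-time calculation that assumes a mission never crosses December 31.
        static int elapsedDays(int startDayOfYear, int currentDayOfYear) {
            return currentDayOfYear - startDayOfYear;
        }

        public static void main(String[] args) {
            // Launch on day 360, check five days later: fine.
            System.out.println(elapsedDays(360, 365)); // 5

            // Day 2 of the new year, after the rollover: nonsense.
            System.out.println(elapsedDays(360, 2));   // -358
        }
    }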


When was the last time you had to debug an IRQ conflict? I remember when "Plug and Play" was derisively nicknamed "Plug and Pray". Today, it all just works. When I'm assembling a computer, I don't have to set jumpers. I don't have to fiddle with making sure the boot drive is at the end of the IDE cable rather than the middle. I can just snap all the pieces together like Lego, hit the power button, and be assured that I'll get a bootable system (assuming, of course, that I haven't been a dolt and forgotten to plug the video card power cable in).

When was the last time you had an application blue-screen/bugcheck/kernel panic your machine? Yes, Windows still blue-screens from time to time, but over the past decade, I've found that 100% of my blue-screens have been caused by faulty drivers, rather than application code or bugs in the OS itself. This wasn't always the case. I remember, on Windows 98, there was one particular game that my brother had (I think it was Reader Rabbit), which would repeatedly and reliably blue-screen the machine when we got to a certain level. I haven't seen any errors like that in more than a decade. And even the driver blue-screens are getting better. I remember not too long ago, my Windows PC's monitor blinked off, then came back. When I looked in Event Viewer, I saw that the GPU driver had crashed and had been automatically restarted. This is something that still causes kernel panics on Linux and MacOS, but Windows just shrugs it off and keeps on chugging.

With regards to Linux, when was the last time you had to mess with xorg.conf? Wifi drivers? WPA supplicant? I remember when I had to download the Windows drivers for my wireless card, extract the binary blobs, compile NDISWrapper, and then pray that I'd set everything up correctly, before unplugging the Ethernet cable to test whether my wifi was working. Now? I browse Hacker News while Linux is installing, because wifi drivers have been part of the kernel for years.

As for programming tools, they're more stable, robust, and widely available than ever. When was the last time you had to pay for a compiler, interpreter or language runtime? When was the last time GCC or LLVM crashed? Today one can write code in C, C++, Java, Python, Go, Rust, and a plethora of other languages... all for free, even on Windows! This is a huge improvement over the bad old days when your choices were to either pay for Borland or pay for Visual Studio. And as for web programming, do you really pine for the days when your only option for a backend language was a collection of Perl scripts in `cgi-bin`?

The one regression, in my opinion, is with communication software. We used to have open (or "open-enough" i.e. reverse engineered) protocols that enabled multi-protocol, multi-platform clients such as Pidgin. That world is gone. Our communications are now siloed into proprietary, hostile software stacks, such as Slack, Google Meet and Teams. And our personal communications are siloed between Discord, WhatsApp, and the multifarious other messenger apps that we have to install in order to communicate with that one person who refuses to use anything else.

But other than comms, has software improved? I have a hard time arguing otherwise.


> I don't have to fiddle with making sure the boot drive is at the end of the IDE cable rather than the middle.

But now we have USB-C.


> When was the last time you had to debug an IRQ conflict?

Never. But in my home server I have 4, sometimes 5 RAID controllers. Some combinations of RAID/AHCI/IDE modes won't work due to lack of resources. I presume it's about I/O addresses - the boot message is not really informative compared to Device Manager. I regret not taking a picture of the error message. The system won't even pass the POST when this happens.

> I remember when "Plug and Play" was derisively nicknamed "Plug and Pray". Today, it all just works.

No, it doesn't! I just plugged in an old webcam and there's no driver. It's the exact same "Pray"... well, not really. I don't pray. These days I default to the "It's not going to work" mindset.

> When I'm assembling a computer, I don't have to set jumpers.

How is that a good thing? Overclock something, the system won't boot, you have to reset the CMOS and lose all settings, including the boot order.

> I don't have to fiddle with making sure the boot drive is at the end of the IDE cable rather than the middle.

No, you could have used the jumpers, but I think you hated them too much to let them help you. I wish I had the jumpers now. Every time I insert or remove a SATA HDD from the rack I have to redo the boot order.

> I can just snap all the pieces together like Lego, hit the power button, and be assured that I'll get a bootable system (assuming, of course, that I haven't been a dolt and forgotten to plug the video card power cable in).

And then you notice that the system won't recognize the CPU without a BIOS/UEFI update, which you can't do because it won't boot unless it recognizes the CPU. Then the OS is installed with IDE drivers and you can't switch to AHCI without major OS surgery.

> When was the last time you had an application blue-screen/bugcheck/kernel panic your machine? Yes, Windows still blue-screens from time to time, but over the past decade, I've found that 100% of my blue-screens have been caused by faulty drivers, rather than application code or bugs in the OS itself.

These days there's no blue screen or error. Apps just won't start (click the icon and nothing happens), open apps suddenly close without any error (they just disappear from the screen), the system suddenly reboots, or won't wake up from sleep, or an update makes the system unbootable or stuck in a boot loop.

> This wasn't always the case. I remember, on Windows 98, there was one particular game that my brother had (I think it was Reader Rabbit), which would repeatedly and reliably blue-screen the machine when we got to a certain level. I haven't seen any errors like that in more than decade. And even the driver blue-screens are getting better. I remember not too long ago, my Windows PC's monitor blinked off, then came back. When I looked in Event Viewer, I saw that the GPU driver had crashed and had been automatically restarted. This is something that still causes kernel panics on Linux and MacOS, but Windows just shrugs it off and keeps on chugging.

True. This part is better.

> With regards to Linux, when was the last time you had to mess with xorg.conf? Wifi drivers? WPA supplicant? I remember when I had to download the Windows drivers for my wireless card, extract the binary blobs, compile NDISWrapper, and then pray that I'd set everything up correctly, before unplugging the Ethernet cable to test whether my wifi was working. Now? I browse Hacker News while Linux is installing, because wifi drivers have been part of the kernel for years.

I did most of the enumerated items this week.

> But other than comms, has software improved? I have a hard time arguing otherwise.

It did improve a little in stability, at a very, very high cost in users' money, time and privacy: waiting for stuff to open, waiting for commands to get processed, buying ever faster and more expensive hardware to do basically the exact same things as 20 years ago, only slower, and not because of the 56K modem.

It's because of people always wanting the latest, newest stuff that good old software gets abandoned. See: IRC, FTP, Opera Presto, websites without JS, single-user OSes; also hardware: Ethernet on laptops, headphone jacks on phones.


I'm not going to respond to your points one by one. My overall response to you is that the system you're describing, with its multitude of RAID controllers, ancient webcam, overclocked CPU, etc, etc, wouldn't even have been possible to put together in the '90s. You'd have ended up spending all your time debugging random crashes and failures, and figuring out how to get stuff working. Whereas today, it's usable and functional and, while it still might have issues, it at least all works most of the time.

As for the system not recognizing newer processors without a BIOS update, that was also true in the '90s. It's just that, back then, things changed so much that you'd just end up tossing the entire motherboard when it came time to install a new CPU, and you'd "upgrade" the BIOS that way.


    while Java originated the billion dollar mistake
By the "billion dollar mistake", are you referring to null references [1]? But null references were introduced in 1965 in Algol, by Tony Hoare. They long predate Java.

[1]: https://www.infoq.com/presentations/Null-References-The-Bill...


I don't think it's about the existence of null references, but about how Java uses them. Null references can be useful; Kotlin, for example, has made them useful with nullable types.


Using nulls in Java is mostly a choice on the part of developers; even if you can't migrate to a JVM-targeted language like Scala, you can still adopt practices like null objects [0].

[0] https://en.wikipedia.org/wiki/Null_object_pattern
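
For anyone unfamiliar with the pattern, here's a minimal Java sketch of the null object idea (the Discount/NoDiscount names are made up for illustration): callers always get a do-nothing default instead of null, so they never have to write null checks.

    interface Discount {
        double apply(double price);
    }

    // The "null object": a safe, do-nothing default.
    final class NoDiscount implements Discount {
        public double apply(double price) {
            return price;
        }
    }

    final class PercentDiscount implements Discount {
        private final double fraction;

        PercentDiscount(double fraction) {
            this.fraction = fraction;
        }

        public double apply(double price) {
            return price * (1.0 - fraction);
        }
    }

    public class Checkout {
        // Never returns null, so callers never need "if (discount != null)".
        static Discount discountFor(String customerId) {
            return "vip".equals(customerId) ? new PercentDiscount(0.10) : new NoDiscount();
        }

        public static void main(String[] args) {
            System.out.println(discountFor("vip").apply(100.0));       // 90.0
            System.out.println(discountFor("anonymous").apply(100.0)); // 100.0
        }
    }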


Speaking of Go and Algol...

http://cowlark.com/2009-11-15-go/



