One of my old math professors — Lauren Williams — got one! What a pleasant surprise. She was a delight to study under and an inspiration; I'm glad that she got recognized in an avenue like this.
What does she work on? Can you explain to non-math people why her work is interesting/cool? Would also love to see more comments from people familiar with the other genius grantees' works filling us in on "here's what they do and why it's cool."
Having ridden in both a few times: yes, Waymo is head and shoulders better. It's smooth, and I don't think I've ever seen any false alarms or behavior that made me feel unsafe in a Waymo, while I've had a few scary or annoying situations in the Teslas. I took a 6-minute robotaxi ride in drizzling weather where the Tesla parked in intersections twice because the cameras were obscured. Meanwhile Waymo can drive perfectly in heavy fog.
Both the Waymos and Teslas have that central display that shows you what the car sees (pedestrians, dogs, traffic cones, other cars, etc). The Waymo representation of the world reaches pretty far and is pretty much perfect from what I've seen. Meanwhile the Tesla one until recently had objects popping in and out.
Neither is perfect, of course; both will hesitate sometimes and creep along when (IMO) they should commit. But they're both still way better in that regard than the Zoox autonomous cars I see in SF.
I left this as a reply to another comment in this thread, so copying it here:
This is all true, but removing the broker fees and replacing it with a rent hike is still better for the market overall, since the broker fees simply artificially dampen liquidity. You only paid them when you moved into a new place, but that meant that if you are stuck with a crappy landlord you might not move out because the marginal cost of moving anywhere else in NYC is much higher.
Very hot take, but this result made me believe that BQP and P might be equivalent computational classes (in other words, that quantum computers might not offer any computational complexity speedups at all). I found out about this result in college and implemented the algorithm described in the paper for a class project, though I don't remember the code working very well haha
Often I've started with some example code that invokes part of the API, but not all of it. Or in C I can give it the .h file, maybe without comments.
Sometimes I can just say, "How do I use the <made-up name> API in Python to do <task>?" Unfortunately the safeguards against hallucinations in more recent models can make this more difficult, because it's more likely to tell me it's never heard of it. You can usually coax it into suspension of disbelief, but I think the results aren't as good.
This is all true, but it's definitely not the full story. I'm addicted to HN despite it not being engineered for engagement. (though _technically_ it has taken investor money hahaha)
> This idea summarizes why I disagree with those who equate the LLM revolution to the rise of search engines, like Google in the 90s. Search engines offer a good choice between Exploration (crawl through the list and pages of results) and Exploitation (click on the top result).
> LLMs, however, do not give this choice, and tend to encourage immediate exploitation instead. Users may explore if the first solution does not work, but the first choice is always to exploit.
Well said, and an interesting idea, but most of my LLM usage (besides copilot autocomplete) is actually very search-engine-esque. I ask it to explain existing design decisions, or to search for a library that fits my needs, or come up with related queries so I can learn more.
Once I've chosen a library or an approach for the task, I'll have the LLM write out some code. For anything significantly more substantive than copilot completions, I almost always do some exploring before I exploit.
I’m finding the same thing in terms of what I actually use LLMs for day to day. When I need to look up arcane information, an LLM generally does better than a Google search.
How do you verify the accuracy of "arcane information" produced by an LLM?
"Arcane Information" is absolutely the worst possible use case I can imagine for LLMs right now. You might as well ask an intern to just make something up