I agree that a lot of the AI risk pundits are either people with no real AI experience (or very outdated, resting-on-their-laurels experience) or, if they do know AI, people who know little about the complexities of running actual large-scale, business- or mission-critical systems.
However, in their defense, the real question is whether those types of problems are something this new class of AI could potentially excel at. I've been an AI skeptic for decades, having had real world industry experience trying to get it to solve problems, and this is the first technological advance that has actually surprised me with its capabilities.
AI Risk still doesn't rank above many current, pressing global problems for me personally, but it also doesn't seem ridiculous anymore.
Exactly. What's also interesting is how much this initial point in time will perpetuate forward, since future language models will be built on the output of the current ones, dampening any potential shift in the Overton window. They may try to avoid ingesting their own output (ChatGPT claims it can detect text it generated), but as more independent models get developed, that detection will never be perfect and may in fact become impossible at some point.
Most of the really good and informative papers and books have no time limit. The papers and resources that made the biggest impact on me would have been easily dismissed by junior-me for being from before 2010, but fortunately less-junior-me read them anyway, and they changed how I work in many ways for the better.
I followed this link to a SlateStarCodex article on Google Correlate. At the top of the Google Correlate page was:
"Google Correlate will shut down on December 15th 2019 as a result of low usage.
You can download your data under Manage my Correlate data in the top bar, or right from here"
Good timing! Get your correlations in while you can.
I think SOLID is appropriate for the public interface parts of your code, but the internals should strive for YAGNI/KISS.
So the vast majority of your codebase should aim to minimize lines of code and abstraction. Of course there are exceptions: sometimes you really do need to encapsulate complex sub-systems behind abstractions, but you should have a clear justification.
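As a minimal sketch of what I mean (the names and the CSV example are made up, not from any particular codebase): keep the public surface small and abstract, and let the implementation behind it stay as plain as it can be.

    // Small, stable public interface: the part worth the SOLID treatment,
    // because other code depends on it.
    export interface ReportExporter {
      render(rows: string[][]): string;
    }

    // Plain internals: no factories, no strategy hierarchy, just the direct
    // thing that works today (YAGNI/KISS). The naive join does no escaping,
    // and that's fine until a real requirement says otherwise.
    export class CsvExporter implements ReportExporter {
      render(rows: string[][]): string {
        return rows.map(row => row.join(",")).join("\n");
      }
    }

Callers only ever see ReportExporter, so the cheap internals can be swapped or hardened later without touching them.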
Vancouver has changed a lot in the past 5 years. There are now two tiers of tech pay: the smaller companies that pay from low to ok and then a handful of SV companies that have moved in and are paying top tier money.
The dollar figures are almost on par with top SV salaries, although not once you factor in the current exchange rate, and that fluctuates anyway.
Amazon opening up a new HQ in Vancouver will just strengthen this. The downside is that right now it is just a half dozen or so companies paying in this top range.
Very similar situation in Montréal. Handful of SV companies with offices here, a few success stories, and everything else far far below in terms of compensation.
I don't think you're in the minority. Whenever possible, the happy path should be inside the if. The article seems to imply otherwise, but on re-read I think he is just building the case for putting the simple, quick precondition-type handling up front, which generally removes the need for the if/then (and which I agree with).
For complex methods with branching logic you want prominently displayed, you'd end up with:
    function myfunction(args) {
      // check preconditions, return early

      if (x) {
        // happy path
      } else {
        // less happy path
      }
    }
But if the else just contains a bunch of error handling, you always have the option of wrapping it in a handler method and putting it near the top as a one-liner, as in the sketch below.
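For instance, a hypothetical version of the skeleton above (the precondition and handler names are invented for the sketch, not from the article):

    function myfunction(args: string[]): void {
      // precondition handling collapses to a one-liner near the top
      if (!preconditionsHold(args)) return handleBadInput();

      // happy path, no longer nested inside an if/else
      console.log(`processing ${args.length} items`);
    }

    // made-up helpers, just so the sketch is self-contained
    function preconditionsHold(args: string[]): boolean {
      return args.length > 0;
    }

    function handleBadInput(): void {
      console.error("nothing to process");
    }

The error handling stays visible near the top but takes up one line, and the happy path reads straight down with no extra nesting.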