Hacker News | sharkenstein's comments

Are there any performance penalties because of this? The code for the state machine looks very complex, and I'm curious what the difference is in terms of performance between the pyramid model and the flat model.


Author here. Yes, there is a trade-off. The resulting code is much bigger than the original code because of this transformation. But I think the trade-off is worth it because it makes the code simpler to read, write, and maintain. As far as speed goes, I don't have any benchmarks, but I don't think there is a speed penalty. It's mainly an issue of code size.
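To make the pyramid/flat distinction concrete, here's a toy sketch (hypothetical, not the article's actual transformation): the same three asynchronous steps written as a nested "pyramid" of callbacks, and again as a flattened state machine. Note the flat version is noticeably longer, which illustrates the code-size trade-off:

```python
# "Pyramid" style: each step nests inside the previous callback.
def run_pyramid(read, parse, store, done):
    def on_read(data):
        def on_parse(record):
            def on_store(result):
                done(result)
            store(record, on_store)
        parse(data, on_parse)
    read(on_read)

# "Flat" state-machine style: one dispatcher with explicit states.
def run_flat(read, parse, store, done):
    state = {"step": "read"}
    def step(result=None):
        if state["step"] == "read":
            state["step"] = "parse"
            read(step)            # read calls step(data) when finished
        elif state["step"] == "parse":
            state["step"] = "store"
            parse(result, step)   # parse calls step(record)
        elif state["step"] == "store":
            state["step"] = "done"
            store(result, step)   # store calls step(final_result)
        else:
            done(result)
    step()
```

Both produce the same result; the flat version trades nesting depth for extra dispatch code.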


The year of the Linux desktop will come... delivered by Microsoft, with their Linux subsystem bringing developers back to buying Windows machines.


Microsoft abandons the NT kernel and makes Windows a Linux Desktop Environment on top of the Linux Kernel, like Gnome or KDE...


Could it be installed via the Alt Store?


What other metrics can help validate the F1 score? I'm asking because I believe a small sample size can skew the number or hide a flaw in the classification algorithm it's scoring.

In other words, what else should I ask to validate whether a 70% F1 score is better than a 90% F1 score computed on a smaller data set?


That is a good point. It helps to think of precision and recall (and, by extension, the F1 score) from your test data as random variables sampled from a distribution modeling the probability of getting each value in your sample, given a "true" precision/recall value. I won't go too deep into the math, but this was part of the approach in the confidence calculations towards the end of the paper: being able to factor the uncertainty of your classification metrics into the confidence calculations.

To formally answer your question, the main factors that determine how stable the F1 score from your test set is are:

- the size of the test set
- the % of the test set that has the label (in our case, the feedback tag)
- the values found for precision and recall
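One way to see how those factors play out (a rough sketch, not the paper's exact confidence method) is to bootstrap the test set and look at the spread of F1 scores across resamples — a small test set will produce a much wider interval around the same point estimate:

```python
import random

def f1(y_true, y_pred):
    """F1 score for binary labels (1 = positive class)."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def bootstrap_f1_interval(y_true, y_pred, n_boot=2000, seed=0):
    """Approximate 95% interval for F1 by resampling the test set."""
    rng = random.Random(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(f1([y_true[i] for i in idx],
                         [y_pred[i] for i in idx]))
    scores.sort()
    return scores[int(0.025 * n_boot)], scores[int(0.975 * n_boot)]
```

Comparing interval widths for a 90% F1 on 50 examples versus a 70% F1 on 5,000 makes the grandparent's question answerable rather than a matter of gut feeling.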


Funny how software updates make me more excited than hardware ones... I'm so excited for iOS 12!

Also, iOS on macOS! I wonder what this means for React Native devs... I have no experience with that platform, but I'm curious about the potential impact of iOS + macOS on that community.


It would probably help, given that React Native iOS will now work on the Mac "natively".


Dark mode... so much wow!


Very fun to read, but it may be too advanced to start with... is there a particular reason why I couldn't or shouldn't run this on a regular Core i7-7700K powered machine?


For home use, I'd say start with what you have. You can always upgrade later.

Power consumption is the biggest thing. For an always-on home server, you want something that sips as little power as possible. (Some of the Intel Avoton CPUs have a 17W TDP, compared to an i7, which can have a TDP anywhere between 65 and 150W.)
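A back-of-envelope calculation shows what that TDP gap means for an always-on box. This sketch assumes TDP as a rough proxy for average draw and $0.12/kWh — both are assumptions; real idle draw is usually well below TDP:

```python
def yearly_cost(watts, price_per_kwh=0.12):
    """Yearly electricity cost of a machine drawing `watts` 24/7."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh

# 17 W Avoton: yearly_cost(17) ≈ $18/year
# 95 W desktop i7: yearly_cost(95) ≈ $100/year
```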

Outside that, the only other issue is going to be lack of ECC RAM. Whether that's needed for a home server is debatable. Many folks can get by fine without it.

Oh, and the lack of lights-out management utilities. Those are convenient, especially in headless setups... but hardly required when the server is a few feet away.


You won't have the fun of lugging big iron into your basement and hearing it whir up.


That mini-mini tower pictured (HP Proliant Gen 8) sure doesn't look like big iron.


It's either a decoy for the girlfriend or the start of a Beowulf cluster of many more.


Which is at least 51% of the point!


When you talk about choosing between algorithms from Google, Stanford, etc., what are the criteria for doing that? Do they change based on the domain? If you are just trying to classify feedback, how much does the domain affect the algorithm?


Our criteria mainly depended on internal testing to see which pre-packaged algorithms performed better on our data. While performance does vary with the domain (or in our case, industry of feedback we are analyzing), we have found it to be more efficient to find the overall effectiveness of one platform and deploy it for all of our analysis.

As far as the domain affecting the algorithm, it can vary: some algorithms maintain decent performance over most industries, while others work very well for some industries and terribly for others. Although it is all just feedback, the topic of the feedback and even the way people talk about the same topic (such as the price of a product) will vary across industries.
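A hypothetical harness (not the parent's actual setup) for the comparison described above: score each candidate classifier per industry, then average across domains to pick one overall platform. The classifier names, domains, and metric here are all made up for illustration:

```python
def score_by_domain(classifiers, datasets, metric):
    """classifiers: {name: predict_fn}, datasets: {domain: (X, y)}.
    Returns {domain: {name: score}}."""
    table = {}
    for domain, (X, y) in datasets.items():
        table[domain] = {
            name: metric([clf(x) for x in X], y)
            for name, clf in classifiers.items()
        }
    return table

def overall_winner(table):
    """Pick the classifier with the best average score across domains."""
    names = next(iter(table.values())).keys()
    avg = {n: sum(scores[n] for scores in table.values()) / len(table)
           for n in names}
    return max(avg, key=avg.get)
```

This also shows the trade-off the parent describes: a classifier that dominates one industry but fails in another can lose the overall pick to a merely decent all-rounder.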


I've always been curious about this. How can I try it out if I already own a GPU?


You will need to buy a TB3 eGPU enclosure. These range anywhere from $200 to $500 and typically come without a GPU. There are some more expensive options (like the Aorus Gaming Box with a GTX 1070/1080 or RX 580) that already come with a GPU installed.

Here is a nice comparison of the eGPU enclosures available today: https://egpu.io/external-gpu-buyers-guide-2018/

My personal recommendations are the Akitio Node Standard/Pro or the Sonnet Breakaway Box 350/550/650. Both are reputable companies in the Thunderbolt hardware realm.

Note that I am not affiliated with any of the companies mentioned above.


You'd have to buy an eGPU case along with a GPU. Apple recommends a Sonnet case with AMD GPUs [0].

However, as seen here [1], people have managed to use other GPU cases with Nvidia cards.

0: https://support.apple.com/en-us/HT208544

1: https://egpu.io/build-guides/


As others mentioned, you will need a case. You will, however, also need a device with TB3.


You would need an external Thunderbolt 3 enclosure for your GPU.

