MarkMMullin's Hacker News comments

Not as a single tool, but if I'm using numerical libraries I need them to be accurate - I don't want another unpleasant experience with numerical methods. That said, I can usually get by with the prebuilt libs.


Much of the last 2 decades of my career has been as much principal work as I can get, and everything I know says you smacked this ball out of the park.

First, if the title is real, then the term Principal carries serious weight, and hews as closely as possible to the best definitions of Principal Investigator (yah, academia has been busy making a mess of that)

More often than not, a good PI knows that not all problems need the most powerful solution - some just need to go away. If a claimed principal is always applying the most complex solution everywhere, then they're bad at their job.

A good third of my benefit is not going 'zomg - a hard problem' to my co-workers; it's the exact opposite. And even when it is a hard problem, at least half the time it's 'Don't worry - here's what Dijkstra/whoever did to solve it'.

I love advanced algorithms, and relish the chance to actually go try and make new ones - that said, my main value is not that. It's that when some horrible new problem erupts, my job is to say, 'No, calm down. That's an old problem in a new suit.'

That said, switch isomorphic to homomorphic - I'll give the seniors credit, they usually tag the isomorphic cases. :-)


Solving NP-hard problems every day.


Got 9/10 but it was a challenge - between the way real Trump mangles English and the way fake Trump strains to flagrantly fail the Turing test (h/t @JanelleCShane), it was a struggle. Nice job - train it more, and hook it up to its imaging equivalent... then let them all sort that mess out. :-D


OK, did a quick scan of the free and paid tiers, and I'm wondering about the ability to bring in specialized Python libraries - in my case, https://github.com/opencog/link-grammar, which almost always requires hand building and migrating, plus some custom packages on top of my own even more horrible C++. I have to run a JupyterHub AMI on an EC2 instance right now.


You linked to the OpenCog linkparser. Just out of curiosity, what are you using it for?

I used to also keep a beefy EC2 or GCP instance all set up with Jupyter, custom kernels (Common Lisp, etc.), etc., etc. Being able to start and stop them quickly made it really cost effective.

I ended up, though, getting a System76 GPU laptop a year ago and it is so nice to be able to iterate faster - an especially noticeable improvement for quick experiments.

That said, a custom EC2 or GCP instance, all set up and easily start/stop-able, is probably the cost-effective thing to do unless vanilla Colab environments do it for you.


The short answer is that I just hate n-grams - tear apart the input with Link Grammar, then use the arc and modifier info as input to the statistical processes. So far it's too early to say it works better, but it sure as hell is more interesting. :-) Sadly, I have to keep my instances running all the time - they eat all of the public Reuters news feeds. Side note: additional info can be obtained by taking the top N parse candidates for a given sentence and collapsing them into a graph - I get a pretty good 'the parser has gone insane' signal out of that.
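The parse-collapsing trick can be sketched in a few lines. This is a toy illustration of the idea, not the actual pipeline: parses are assumed to be lists of (head, dependent, label) links, and the agreement score is an invented proxy for the "parser has gone insane" signal.

```python
from collections import Counter

def collapse_parses(parses):
    """Merge the top-N parse candidates (each an iterable of
    (head, dependent, label) links) into one weighted edge multiset."""
    edges = Counter()
    for parse in parses:
        edges.update(parse)
    return edges

def agreement(edges, n_parses):
    """Fraction of link mass shared by ALL candidates; a low score
    means the candidates disagree wildly about the sentence."""
    if not edges:
        return 0.0
    unanimous = sum(c for c in edges.values() if c == n_parses)
    return unanimous / sum(edges.values())

# Toy example: two parses that agree on one link out of three
p1 = [("saw", "I", "S"), ("saw", "dog", "O")]
p2 = [("saw", "I", "S"), ("dog", "the", "D")]
g = collapse_parses([p1, p2])
score = agreement(g, 2)
```

A threshold on `score` could then flag sentences where the parser's candidates have nothing in common.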


Yes, one of the differences in how we handle notebooks is that everything is actually run in a container behind the scenes. This means that it isn't just the .ipynb that we are hosting - if you install any dependencies, libraries, etc., it will actually persist all of it inside the container. This makes it much easier to share your work with others: e.g., I could fork your notebook if you made it public and get all of the installed libraries and compiled dependencies by default. Hope that helps!

Edit: we also have another tool called GradientCI (https://docs.paperspace.com/gradient/projects/gradientci) that might also be of interest. Basically it lets you connect a GitHub repo directly to a project and you can use it to build your container automatically.


Gotcha - yah, I can containerize; it's not like I'm screwing with drivers or whatnot on the AMI. I'll keep an eye on you - not ready for GPU yet, as I can't even rationally define the vectors I'm extracting from the Link Grammar stuff. Best of luck to you - a lot of people are piling into that battleground, and the Dunning-Kruger effect is rampant. :-)

Side question, if you're willing to entertain it. Tired me tells a notebook to checkpoint, wanders off to bed, comes back the next day and wonders why it's taking an aeon to open the notebook... oh yeah, damn, I've got gigs of images and pandas crap in it. Do you wrangle this problem, i.e. please don't save your notebooks as gihugic files representing a mental state you can't possibly remember?
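For what it's worth, one generic workaround for the giant-notebook problem is stripping stored outputs before saving - an .ipynb is just JSON, and the bulk usually lives in the code cells' `outputs` arrays. A minimal sketch (not a Paperspace feature; `strip_outputs` is a hypothetical helper):

```python
def strip_outputs(nb):
    """Clear stored outputs from a notebook dict. An .ipynb file is
    plain JSON; the bulky inline images and pandas reprs live in each
    code cell's 'outputs' list, so emptying those shrinks the file."""
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return nb
```

Usage would be `json.load` the notebook, run it through `strip_outputs`, and `json.dump` the result - the code and markdown survive, the cached mental state doesn't.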


I should also mention you can just as easily run these on CPU-backed instances as well. The GPU is not a hard requirement.

As for checkpointing data, that is still a relatively difficult problem to solve, and our current recommendation is to use a combination of the persistent /storage directory and the notebook home directory. There are definitely issues with 100K+ small files and committing those to the primary Docker layer.

When you get to testing it out, don't hesitate to reach out to us and we can try to see what the best solution is for your particular project. To date there isn't a "one size fits all" solution, but we are working hard on making more intelligent choices behind the scenes to unblock some of these IO constraints.


Their head probably would have exploded when they saw what the C# compiler did with tail-recursive functions. :-D


Ran this on the Ithaca Intersystems Z-8000 based rig in the early/mid '80s - it was remarkably command compatible with V7 for the basics; even built a MicroAngelo-based 3D graphics system on it. The biggest problem the box had wasn't Coherent - it was that the socketed logic on the custom MMU S100 card kept trying to walk out of its sockets because of thermal issues.


Grrrr. This could have been interesting if Umberto Eco were arguing semiosis, instead of this poor exercise in semantics. As a concrete example, consider "The AI therapy app is not a mind. It cannot emote. It cannot feel the warmth behind its praise or the sense of urgency behind its criticism. It is passive." Yah, no. I can also do sentiment analysis on the user's comments, and the behavior of the system can be driven by a goal to change that sentiment score - so I've got myself an active participant in a feedback loop. Yes, there is no there there - it's just a numbers game - but this article is still complete twaddle.
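The feedback loop being described can be made concrete in a toy sketch - everything here (the lexicon, the canned replies, the target score) is invented for illustration, standing in for a real sentiment model:

```python
def sentiment(text):
    """Crude lexicon score standing in for a real sentiment model."""
    pos = {"good", "better", "calm", "great"}
    neg = {"bad", "worse", "hopeless", "awful"}
    words = text.lower().split()
    return sum(w in pos for w in words) - sum(w in neg for w in words)

def respond(user_text, target=1):
    """Pick the reply that steers the user's measured sentiment toward
    the target - a goal-driven feedback loop, no 'mind' required."""
    replies = {
        "challenge": "What evidence supports that?",
        "soothe": "That sounds hard; let's slow down.",
    }
    if sentiment(user_text) < target:
        return replies["soothe"]
    return replies["challenge"]
```

The point is only that "passive" is the wrong word: the system measures an effect and acts to change it, which is the definition of an active participant.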


ZOMG - a brain neuron from decades ago woke up. Worked on some weird stuff in reversible computing in the '80s (think of assignment always preserving functional state at assignment) - my boss used to talk about his gates, usually referring to them by a more profane name. :-)


I've wondered if reversible computing research has had any real-world impact. The theory is that it would reduce heat since you're not increasing entropy, but I wonder if in practice that has ever been useful.


All the experience I had was theorem proving for crypto apps - that said, I think there is only the appearance of reversibility at the symbolic level; underneath it's the same old mess. I vaguely recall that at full tilt, the room with all the Suns would get quite toasty. :-)


I dunno - having done my own share of labelling, a company that makes that horror go away is certainly of value. That said, converting people into cogs in AI systems is worrisome.


As an NLP developer, I did like the aside on hyphen purgatory - I think it can serve as the basis for a strategy in the news ingestor I have in development. The vocabulary is currently riddled with hyphenated words. :-(
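One possible hyphen-collapsing strategy along those lines, sketched with an invented vocabulary: fold a hyphenated token into its unhyphenated spelling only when that spelling is already attested, so genuine compounds survive.

```python
from collections import Counter

def merge_hyphen_variants(vocab):
    """Fold hyphenated tokens into their unhyphenated spelling when that
    spelling already appears in the vocabulary; leave the rest alone."""
    merged = Counter()
    for token, count in vocab.items():
        joined = token.replace("-", "")
        if "-" in token and joined in vocab:
            merged[joined] += count   # e.g. 'co-operate' -> 'cooperate'
        else:
            merged[token] += count    # keep genuine compounds as-is
    return merged

vocab = Counter({"co-operate": 4, "cooperate": 10, "state-of-the-art": 2})
out = merge_hyphen_variants(vocab)
```

The "already attested" guard is the whole trick: it merges spelling variants without mangling compounds that only ever occur hyphenated.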

