Hacker News | mulletboy's comments

Can anyone recommend a good book on the speculative biology of exoplanets (flora, fauna, etc.)?


Accenture | Machine Learning Research Engineer, Graph ML | Dublin, Ireland | Onsite | Full Time

We are looking for a Machine Learning Research Engineer familiar with Graph ML for our research lab in Dublin, Ireland.

Accenture Labs Dublin focuses on Artificial Intelligence, with a strong emphasis on Machine Learning on Knowledge Graphs and Interpretable Machine Learning.

You will design, implement, and evaluate novel principled ways to solve research problems in Graph Machine Learning.

If you are interested, feel free to reach out to me at luca dot costabello at company dot com

Apply here: https://www.accenture.com/ie-en/careers/jobdetails?id=R00116...


"not left. not right" Seen that before. Here in Europe that means "right" all the way.


Well, the landscape is still quite fluid (new models are proposed in the literature at every major conference). Processing real-world graphs is obviously more challenging, for a number of reasons (multi-modality, scale, etc.) - even though benchmarks are catching up and becoming harder (see FB15k-237 or WN18RR).

As a general rule of thumb, it is important that your graph has enough redundancy, i.e. the more relations, the better. Also, bear in mind these models do not support multi-modality: literals such as numbers, strings, geo coordinates, and timestamps are simply treated as entities. In most cases it is probably better to filter literals out before generating the embeddings.
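As a sketch of that pre-processing step, here is a minimal literal filter in plain Python. The triples and the `is_literal` heuristic are assumptions for illustration, not part of AmpliGraph's API; how literals are encoded will depend on your own data.

```python
import re

# Hypothetical (subject, predicate, object) triples; two objects are literals.
triples = [
    ("acme", "headquartered_in", "dublin"),
    ("acme", "founded_in", "1994"),            # numeric literal
    ("acme", "motto", '"move fast"'),          # string literal
    ("dublin", "located_in", "ireland"),
]

def is_literal(obj):
    # Heuristic: numbers, ISO dates, or quoted strings count as literals.
    return bool(re.fullmatch(r"-?\d+(\.\d+)?", obj)
                or re.fullmatch(r"\d{4}-\d{2}-\d{2}", obj)
                or obj.startswith('"'))

# Keep only entity-to-entity edges before training embeddings.
entity_triples = [t for t in triples if not is_literal(t[2])]
print(entity_triples)
```

The exact filter predicate is a judgment call; the point is that anything the model would treat as an opaque entity but which is really a value is better removed up front.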


> what are possible inputs to ampligraph?

Any knowledge graph will do (i.e. a directed, labeled multigraph, with or without a schema). We have APIs to read graphs serialised as CSV files or RDF (any serialisation will do): http://docs.ampligraph.org/en/1.0.1/ampligraph.datasets.html...
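For a concrete picture of the input format, here is a self-contained stdlib-only sketch of reading a CSV-serialised graph into (subject, predicate, object) triples. The data is made up, and this is not AmpliGraph's loader (see the linked docs for that); it just shows the one-triple-per-row shape those loaders expect.

```python
import csv
import io

# A tiny CSV-serialised directed labeled multigraph: one triple per row.
raw = io.StringIO(
    "acme,headquartered_in,dublin\n"
    "dublin,located_in,ireland\n"
)

triples = [tuple(row) for row in csv.reader(raw)]
print(triples)
```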

> the main use-case is plugging in an existing knowledge graph, and it filling in the gaps

Correct. That is known as link prediction. There are other machine learning tasks you can carry out, though: for example, you can generate embeddings and then cluster them, or use embeddings to check whether distinct entities are in fact the same.
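As a sketch of that "are two entities the same" idea: once entities live in a vector space, a high cosine similarity between two embeddings hints they may denote the same thing. The 3-d embeddings below are made up for illustration; in practice they come from a trained model.

```python
import math

# Hypothetical learned embeddings (in practice, taken from a trained model).
emb = {
    "dublin":            [0.90, 0.10, 0.20],
    "baile_atha_cliath": [0.88, 0.12, 0.19],  # Irish name for Dublin
    "paris":             [0.10, 0.90, 0.30],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Near-identical embeddings suggest a duplicate entity; distant ones do not.
print(cosine(emb["dublin"], emb["baile_atha_cliath"]))  # close to 1
print(cosine(emb["dublin"], emb["paris"]))              # much lower
```

The threshold for declaring a match is application-specific; this only shows the mechanics.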

> Can I augment this with really high-quality embeddings for the nodes, that were learned over auxiliary unlabelled text?

There is a handful of papers in the literature that do that, but we have not implemented any of them in AmpliGraph yet. Examples:

* Xie, Ruobing, et al. "Representation Learning of Knowledge Graphs with Entity Descriptions." AAAI 2016.
* Xu, Jiacheng, et al. "Knowledge Graph Representation with Jointly Structural and Textual Encoding." arXiv preprint arXiv:1611.08661 (2016).
* Han, Xu, Zhiyuan Liu, and Maosong Sun. "Joint Representation Learning of Text and Knowledge for Knowledge Graph Completion." arXiv preprint arXiv:1611.04125 (2016).

> What are other ways I can augment the data set?

I would first try a dataset with no literals (no strings, no numbers, no geo coordinates), as these are treated as entities for now. I suggest generating embeddings on your current graph first, and measuring their predictive power using http://docs.ampligraph.org/en/1.0.1/generated/ampligraph.eva... Merging in additional datasets would be another option, to get more data to work with.
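The evaluation linked above boils down to rank-based metrics: for each held-out triple, the model ranks the true entity among all candidates, and lower ranks are better. A hand-rolled sketch of the two most common summaries, MRR and Hits@10 (the ranks are made up):

```python
# Hypothetical ranks of the true entity for five test triples.
ranks = [1, 3, 2, 10, 50]

# Mean Reciprocal Rank: average of 1/rank over all test triples.
mrr = sum(1.0 / r for r in ranks) / len(ranks)

# Hits@10: fraction of test triples ranked in the top 10.
hits_at_10 = sum(r <= 10 for r in ranks) / len(ranks)

print(round(mrr, 3), hits_at_10)
```

If these numbers are poor on your current graph, augmenting the data (more relations, merged datasets) is worth trying before tuning the model.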

> Is this useful only when there are many edge-types, or is it also good when there are very few?

Yes, it also works when there are only a few.

Let us know how you like it, and if you need assistance, we have a public Slack channel - happy to answer any questions! https://join.slack.com/t/ampligraph/shared_invite/enQtNTc2NT...


btw, we are hiring research engineers here in our Dublin Lab. Send me an email if interested: luca.costabello@accenture.com https://www.accenture.com/ie-en/careers/jobdetails?src=&id=0...


Indeed. Here at Accenture Labs we use it in a wide range of application scenarios. Besides, KG embeddings can be used for other tasks beyond link prediction (e.g. link-based clustering).


Thanks! Feel free to play around with it - and of course any feedback is much appreciated (GitHub, email, or our Slack channel: https://join.slack.com/t/ampligraph/shared_invite/enQtNTc2NT...). I was not aware of pytorch-biggraph. Looks cool. It's good to see there's a lot going on in graph representation learning!


Just wondering: with such an unbalanced dataset (5,958 negatives, 200 positives), wouldn't it have been fairer to use average precision (area under the precision-recall curve) instead of ROC-AUC?
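To illustrate why this matters: on heavily imbalanced data, ROC-AUC can look flattering while average precision exposes poor precision at the top of the ranking. A toy sketch with made-up scores (2 positives, 20 negatives; the implementations are textbook pairwise ROC-AUC and step-wise AP, not any particular library's):

```python
# Hypothetical classifier scores: 2 positives, 20 negatives.
pos = [0.9, 0.4]
neg = [0.8, 0.7, 0.6, 0.5] + [0.1] * 16

def roc_auc(pos, neg):
    # Probability a random positive outscores a random negative (ties = 0.5).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(pos, neg):
    # Mean of precision values at each rank where a positive is retrieved.
    scored = sorted([(s, 1) for s in pos] + [(s, 0) for s in neg],
                    key=lambda t: -t[0])
    tp, ap = 0, 0.0
    for i, (_, y) in enumerate(scored, 1):
        if y:
            tp += 1
            ap += tp / i
    return ap / len(pos)

# ROC-AUC looks strong, AP reveals the second positive is buried at rank 6.
print(roc_auc(pos, neg), round(average_precision(pos, neg), 3))
```

The gap between the two numbers is exactly the effect the comment is pointing at: the 16 easy negatives inflate ROC-AUC but barely affect the precision-recall view.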


Using SourceTree 1.9 or earlier? We implemented a change in the way we roll out updates (announced here) late last year, so if you’re using SourceTree for Windows 1.9 or earlier you will not see auto updates for 2.0. Please download 2.0 directly from our website instead.


Damn, I read this as "earlier than 1.9". Sorry.

