Hacker News | past | comments | ask | show | jobs | submit | throwmeawaysoon's comments

>come up with a "better" (performant, cheaper, easier to use, etc.) solution than GPUs for ML applications

you probably are aware, but xilinx themselves are attempting this with their versal aie boards, which are (in spirit) similar to GPUs in that they group together a programmable fabric of SIMD-type compute cores.

https://www.xilinx.com/support/documentation/architecture-ma...

i have not played with one but i've been told (by a xilinx person, so grain of salt) the flow from high-level representation to that arch is more open

https://github.com/Xilinx/mlir-aie


Fascinating, thank you! Admittedly I don't keep the closest tabs on what Xilinx is doing.


this is true in general but

1) vivado webpack edition (i.e. free) lets you write (and flash) a bitstream for some of the small chips. i know it at least works for the artix-7 family because i've been doing it every day lately

2) for the artix-7 (and some lattice chips) you can supposedly use OSS tooling (https://github.com/SymbiFlow/prjxray). i haven't tried it yet, but one problem i can foresee is that the OSS tools won't infer primitives like BRAMs and DSPs. in fact the symbiflow people (i think?) explicitly call this out as a work-in-progress part of the project.

some useful links:

https://arxiv.org/abs/1903.10407

https://github.com/YosysHQ/nextpnr

https://www.rapidwright.io/


> and some lattice chips

Lattice has been by far the favorite of the FOSS community, but there's been more news:

- https://github.com/YosysHQ/apicula has appeared for Gowin FPGAs found on e.g. Sipeed Tang Nano boards (very cheap on AliExpress)

- a vendor called QuickLogic made SoCs that use only the FOSS toolchain for the FPGA part, out of the box: https://www.quicklogic.com/products/soc/eos-s3-microcontroll...


>Lattice has been by far the favorite of the FOSS community

i'm interested in the OSS flows but i haven't dug in yet. so some questions (if you have experience): isn't it only for their ice40 chips? and how smooth is the flow from RTL to bitstream to deploy?

one hesitation i have about jumping in is that i'm working on accelerator-type stuff, so my designs typically need on the order of 30k-50k LUTs. will yosys+nextpnr let me deploy such a design to some chip?


I don't have that much experience (don't really have many use cases for FPGAs personally tbh) but:

Icestorm is for iCE40, Trellis is for ECP5 (which comes in variants with up to 85k LUTs).

The flow is simple enough to do manually, but there are tools that make it one-click. This tutorial series https://youtube.com/playlist?list=PLEBQazB0HUyT1WmMONxRZn9Nm... uses one.

As for handling really big designs, I don't know.
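For a sense of what the flow looks like, the open iCE40 path is roughly the following. This is a sketch only: `top.v`, `pins.pcf`, and the `hx8k` part are placeholders for whatever your project actually uses.

```shell
# sketch of the open-source iCE40 flow (yosys + nextpnr + icestorm);
# file names and the target part below are placeholders
yosys -p 'synth_ice40 -top top -json top.json' top.v                # synthesize RTL to a JSON netlist
nextpnr-ice40 --hx8k --json top.json --pcf pins.pcf --asc top.asc   # place and route for an HX8K
icepack top.asc top.bin                                             # pack into a bitstream
iceprog top.bin                                                     # flash an attached board
```

Each step is a standalone CLI tool, which is what makes wrapping the whole thing in a Makefile or a one-click script straightforward.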


i'm not an economist or a game theorist so i don't remember the details but this paper talks about how certain market designs lead to untruthful bidding

https://www.cs.cmu.edu/~sandholm/vickrey.IJEC.pdf

but in the context of second price auctions.

lpage might be alluding to something having to do with their proxy bidder implementation but the above paper actually discusses how proxy bidders themselves lead to untruthful bidding (so maybe lpage is suggesting their implementation is better?).
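for background, the textbook result the paper complicates is that a plain single-item second-price auction is truthful: bidding your value weakly dominates. a toy numerical check (my own sketch, not from the paper):

```python
def second_price_utility(my_bid, my_value, other_bids):
    """Utility in a sealed-bid second-price auction: win if my bid is
    strictly highest, and pay the highest competing bid."""
    top_other = max(other_bids)
    if my_bid > top_other:
        return my_value - top_other
    return 0.0

# truthful bidding (bid == value) does at least as well as any shaded
# or inflated bid against these fixed competing bids
value = 10.0
others = [4.0, 7.0]
truthful = second_price_utility(value, value, others)  # wins, pays 7 -> 3.0
for b in [0.0, 5.0, 8.0, 12.0, 20.0]:
    assert second_price_utility(b, value, others) <= truthful
```

the paper's point is that this clean story breaks down once you add proxy bidders and richer market structure.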


VCGs got a real-world test in FB's ad market [1], and the results were mixed. VCG sits in a class of mechanisms that are theoretically interesting but fragile and overly game-theoretic. Our mechanism is boring from a mechanism design standpoint: it's a uniform clearing price periodic auction without any clever demand reduction or tricks aimed at incentive compatibility. The expressiveness of our bidding language makes closed-form/theoretical analysis difficult at best and in some cases impossible. Instead, we focus on giving traders a direct means to express their valuations, and a mechanism that minimizes information leakage and post-trade regret (situations where a bidder wishes they'd behaved differently given the auction's outcome).
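To make "uniform clearing price" concrete, here's a toy single-price call auction. This is purely illustrative (the function, the unit-by-unit matching, and the midpoint pricing rule are my own simplifications, not anyone's production logic):

```python
def uniform_price_clear(bids, asks):
    """Toy periodic call auction: bids/asks are (price, qty) pairs.
    All matched quantity trades at one clearing price."""
    # expand to one entry per unit; best bids first, best asks first
    buy = sorted((p for p, q in bids for _ in range(q)), reverse=True)
    sell = sorted(p for p, q in asks for _ in range(q))
    # match units while the next buyer is willing to pay the next seller's ask
    n = 0
    while n < min(len(buy), len(sell)) and buy[n] >= sell[n]:
        n += 1
    if n == 0:
        return None, 0
    # any price between the last matched ask and bid clears; take the midpoint
    price = (buy[n - 1] + sell[n - 1]) / 2
    return price, n

price, qty = uniform_price_clear([(10, 2), (8, 1)], [(6, 1), (7, 1), (9, 1)])
# two units match, all at the same price
```

The point of the toy: every matched unit trades at the same price, which is the sense in which the mechanism is "boring" relative to VCG-style payment rules.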

[1] https://www.researchgate.net/profile/Alexander-Leo-Hansen/pu...


poking around on your socials, it seems like you've been building for ~5 years, and are just now officially launching, after i guess a capital injection from yc.

since the core ip is "deep" as you say, i'm guessing it cost quite a bit to develop, unless you built out all of the components yourself, which, while possible, seems unlikely given the technical complexity of each piece (you, and whoever else is on the engineering team, seem smart but this looks like "research edge" tech along several dimensions).

so i'm curious whether you paid the development costs up front (either using your own money or FFF) or if you validated and raised in small pieces. if the latter, i'm curious how one does that for such a complex product/service.

lots of assumptions in the above - feel free to disabuse me of my ignorance.


> capital injection from YC

We raised a series A in 2019 led by Green Visor (who has been excellent btw, and with us from the start).

> it seems like you've been building for ~5 years ... i'm guessing it cost quite a bit to develop

Yep, you're spot on that it's a complex product. The biggest cost has been making it feel for the user like it's not. What that boils down to is an enormous amount of iterative feedback and development w/ the industry. Between that and the regulatory process, a lot of the "cost" has been more duration than cash burn. We've kept things lean from the start in anticipation of that.

> unless you built out all of the components yourself

We've developed the tech in house, with some hands-on help from our friends at Imandra mentioned in the OP. On the research piece: that's been happening in the background for many years, and we're definitely standing on the shoulders of giants in the worlds of mechanism design, algorithmic game theory, and deep learning. We're lucky to have some great academic advisors involved (like Kevin Leyton-Brown, since the early days) as well.


>We've developed the tech in house

i'm not often impressed but that's quite impressive. kudos to you.

i currently work on deep learning compilers (as a phd student) but i'm interested in basically all of these things (compilers, combinatorial optimization, auction theory). i know lpage expressed that you're hiring but i'm curious what roles you're hiring for (your careers page is light on details).


We're still a small enough team that we're more focused on talent than roles. As an example of what that means, our stack is polyglot (Rust, OCaml, Elixir, Python), and we don't assume or require that folks have worked in any of those languages before. We invest heavily in learning and teaching.

It sounds like you have a very relevant background, so please email us if you're interested in discussing further!

