Automating my job with GPT-3 (seekwell.io)
479 points by daolf on Jan 27, 2021 | 101 comments



FWIW I ran a startup that provided you with a program (single binary) that allowed you to run natural language queries on your database across most schemas. It had a full semantics layer which translated your query into a mixed lambda-calculus/Prolog query, which was then translated into SQL as needed - you can see a sample of the semantics layer here: https://youtu.be/fd4EPh2tYrk?t=92.
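
To make the staging concrete, here's a toy sketch of that kind of NL -> logical form -> SQL pipeline. All names are made up, and hard-coded pattern matching stands in for the learned semantic parser, so take it as illustration only:

    def to_logical_form(question):
        """Map one hard-coded question pattern to a Prolog-ish form."""
        prefix = "how many users signed up since "
        if question.startswith(prefix):
            date = question[len(prefix):].rstrip("?")
            return "count(U) :- user(U), signup(U, T), T >= date(%s)" % date
        raise ValueError("toy parser only knows one pattern")

    def to_sql(logical_form, schema):
        """Ground the logical form against a concrete schema and emit SQL."""
        date = logical_form.split("date(")[1].rstrip(")")
        return "SELECT COUNT(*) FROM %s WHERE %s >= '%s'" % (
            schema["user"], schema["signup"], date)

    schema = {"user": "users", "signup": "signup_time"}
    lf = to_logical_form("how many users signed up since 2021-01-01?")
    print(to_sql(lf, schema))
    # SELECT COUNT(*) FROM users WHERE signup_time >= '2021-01-01'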

It's deep-learning based, with a lot of augmentation. Going from the OP's article to actually being able to run queries on any schema is quite a bit more work. I'd love to see GPT-3 handle arbitrary schemas.

P.S.: the startup failed. Deep-research-based startups need a lot of funds.


Sorry to hear about your startup.

Translating between natural language and SQL is a reasonable idea. I was thinking about this as well, but I didn't try anything since I don't have an ML background. I spent some time looking at the SQL side of the problem, and it seemed like quite a rabbit hole.

If you do manage to get it working to the point where it's usable by the average person, you can take it one step further: auto-generate simple apps or websites in a no-code product.

This might bring some hate from the dev community as we are automating ourselves out of a job, but it would be a pretty impressive product if it worked.


It did more than SQL. It could generate programs in the syntax of an arbitrary programming language (given enough pretraining examples) as well. What powers it is a tree-to-tree transducer, which is a kind of recursive neural network (not recurrent, which is what LSTMs are).
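
(For the curious, a minimal sketch of the "recursive" part: vectors are composed bottom-up along a parse tree rather than left-to-right along a sequence. Purely illustrative; the actual transducer also decodes into a target tree.)

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 16))  # composes two child vectors into a parent

    def embed(tree):
        """tree is either a leaf vector or a (left, right) pair of subtrees."""
        if isinstance(tree, np.ndarray):
            return tree
        left, right = tree
        return np.tanh(W @ np.concatenate([embed(left), embed(right)]))

    leaf = lambda: rng.standard_normal(8)
    root = embed(((leaf(), leaf()), leaf()))  # composition follows the tree shape
    print(root.shape)  # (8,)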

It's been 5 years and I've been thinking a lot about this. This is a product with no good market fit. If you break it down by "kind" of sales, your basic branches are B2B and B2C.

B2C is mostly out because the person on the omnibus has no general need for a programming language, SQL or not (plus, outside of Emacs, nothing consumers use is inherently "programmable"). So this program simply becomes a utility that you reach for occasionally, like `cut` or `sed`.

We mostly targeted businesses and other startups. We wanted to empower the common employee to query data on their own. That alone came with a huge set of challenges. Turns out most employers don't like the idea of any Tom, Dick, and Harry having query access to databases. So we branched out and tried to allow querying spreadsheets and presentations (yes, the advertising arm of one big media corporation stored their data in tables in a .pptx file on a SharePoint server). The integrations are finicky and break often.

Maybe we're not smart enough to figure this out. But one day, one day I shall be back.

But in the meantime, the failure of the startup spawned the Gorgonia family of deep learning libraries (https://github.com/gorgonia/gorgonia). Check it out if you want.


Don't wait too long; the market for it is already here, and it's huge. "Data scientist" is one of the fastest-growing programming careers. Companies will pay good money to make their data scientists more effective. Look at Redshift, BigQuery, Snowflake, Tableau, etc. Look at the Looker acquisition by Google. Also look at Splunk as a query language.

It's not remotely an easy business, though. Just read about the uphill battle in the HN comment thread on the Looker acquisition by Google.


You've probably already thought of and discarded this idea, but what about shipping^Hdumping the current version online so people can poke at it through a web form and a heavily rate-limited API? (Gate API keys through a sane login system that uses Google or GitHub for identity.)

My thinking would be to explicitly *not* ship a supported "product" of any kind, but rather to provide a way to put the raw materials in their current state "out there" from a research perspective and provide users a way to exercise the system and reason about its limits, without letting everyone run off with it. (For example pretraining would happen on the server from uploaded datasets.)

In this scenario, the idea would be to let the market come to you, by putting this absolutely everywhere in its current raw form and *letting the market materialize/emerge*. The rationale: you want to know what the correct next step is from where you stand right now. If the program actually runs, for an interesting definition of "runs" (i.e. it can handle real-world work in a novel way), then I say that directly translates to this being worth trying, because you might get novel answers/ideas for where to turn next. IOW, I'm theorizing that the quality of the answers you'd get correlates with how well the program stands up in practice.

Reviewing uploaded data and executed commands may also prove insightful and inspire new ideas for real-world integrations. (Something something view-but-not-share TOS clause)

Depending on resource usage, charging for certain API actions, as a way to further rate-limit and not necessarily to make money, may be reasonable - for example doing extensive training (or lots of iterations) or performing tasks that take a long time to complete or scan a lot of input data. (And of course there would be research exceptions to this as well...)
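
For the rate limiting itself, a per-key token bucket is probably enough to start with. A minimal sketch, all names hypothetical:

    import time

    class Bucket:
        def __init__(self, rate_per_sec, burst):
            self.rate, self.burst = rate_per_sec, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self):
            now = time.monotonic()
            # refill proportionally to elapsed time, capped at burst size
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    buckets = {}  # api_key -> Bucket
    def check(api_key):
        return buckets.setdefault(api_key, Bucket(0.5, 10)).allow()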

Also, tables-in-pptx is now filed in my head a few rows down from "email of a TIFF of a scan of a photocopy of a printout of a fax of another email" :) - that's terrible, haha


Sounds like you were too early! Really cool.


agreed, the intent here would be to give a SQL proficient user some "boilerplate" to start from vs. fully automate SQL generation.


Speaking hypothetically, I feel this kind of thing is on the scale of self-driving cars.

There needs to be a secondary breakthrough in ML for progress, or you need orders of magnitude more data than is available on Earth to make this even viable.


Woah. I never gave it my database schema but it assumes I have a table called "users" (which is accurate) and that there's a timestamp field called "signup_time" for when a user signed up.

I am definitely impressed by the fact that it could get this close without knowledge of the schema, and that you can provide additional context about the schema. Seems like there is a lot of potential for building a natural language query engine that is hooked up to a database. I suppose there is always a risk that a user could generate a dangerous query but that could be mitigated.
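
Something like this, presumably (the prompt layout is my guess, not OpenAI's documented format; table and column names follow the article's examples):

    # Prepend the schema so the model stops guessing table/column names.
    prompt = (
        "Schema:\n"
        "  users(id, email, signup_time)\n"
        "  charges(id, user_id, amount, charge_dt)\n"
        "\n"
        "Question: how many users signed up in the last 7 days?\n"
        "SQL:"
    )
    # completion = openai.Completion.create(engine="davinci", prompt=prompt,
    #                                       max_tokens=100, temperature=0)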

Not related to the article but what exactly is "open" about OpenAI?


Nothing. It was a not-for-profit but it converted itself to a for-profit entity and made an exclusive deal with Microsoft for GPT-3 (not sure how it's exclusive given all the beta API users).

Granted, training your own copy of GPT-3 would be beyond most people's means anyway (I think I read an estimate that it was a multi-million-dollar effort to train a model that big).

I do think it's a bit dodgy not to change the name when you change the core premise, though.


GPT-3 is the same "tech" as GPT-2, with more training. GPT-2 is FOSS. I have a feeling that OpenAI's next architecture (if there ever is one) would still also be FOSS.

I think OpenAI just chose a bad name for this for-profit initiative — "GPT-3" — that makes it sound like they were pivoting their company in a new direction with a new generation of tech.

Really, GPT-3 should have been called something more like "GPT-2 Pro Plus Enterprise SaaS Edition." (Let's say "GPT-2++" for short.) Then it would have been clear that:

1. "GPT-2++" is not a generational leap over "GPT-2";

2. an actual "GPT-3" would come later, and that it would be a new generation of tech; and

3. there would be a commercial "GPT-3++" to go along with "GPT-3", just like "GPT-2++" goes along with "GPT-2".

(I can see why they called it GPT-3, though. Calling it "GPT-2++" probably wouldn't have made for very good news copy.)


You make it sound as if GPT-3 is just the same GPT-2 model with some extra Enterprise-y features thrown in. They're completely different models, trained on different data, and vastly different sizes. GPT-2 had 1.5B parameters, and GPT-3 has 175B. It's two orders of magnitude larger.

Sure, both models are using the same structures (attention layers, mostly), so it's a quantitative change rather than a qualitative change. But there's still a hell of a big difference between the two.


Right, but GPT-2 was the name of the particular ML architecture they were studying the properties of; not the name of any specific model trained on that architecture.

There was a pre-trained GPT-2 model offered for download. The whole "interesting thing" they were publishing about was that models trained under the GPT-2 ML architecture were uniquely good at transfer learning, and so any pre-trained GPT-2 model of sufficient size would be extremely useful as a "seed" for doing your own model training on top of.

They built one such model, but that model was not, itself, "GPT-2."

Keep in mind, the training data for that model is open; you can download it yourself and reproduce the offered base-model from it if you like. That's because GPT-2 (the architecture) was formal academic computer science: journal papers and all. The particular pre-trained model, and its input training data, were just published as experimental data.

It is under that lens that I call GPT-3 "GPT-2++." It's a different model, but it's the same science. The model was never OpenAI's "product." The science itself was/is.

Certainly, the SaaS pre-trained model named "GPT-3" is qualitatively different than the downloadable pre-trained base-model people refer to as "GPT-2." But so are all the various trained models people have built by training GPT-2 the architecture with their own inputs. The whole class of things trained on that architecture are fundamentally all "GPT-2 models." And so "GPT-3" is just one such "GPT-2 model." Just a really big, surprisingly-useful one.


> Right, but GPT-2 was the name of the particular ML architecture they were studying the properties of; not the name of any specific model trained on that architecture.

That sounds like it would have been a reasonable choice for naming their research, but isn't the abbreviation "GPT" short for "Generative Pre-trained Transformer"? Seems like they very specifically refer to the pre-trained model, which I would also take from the GPT-2 paper's abstract: "Our largest model, GPT-2, is a 1.5B parameter Transformer[...]" [1]

--- [1] https://cdn.openai.com/better-language-models/language_model...


GPT-2 Community and GPT-2 Enterprise.

Those terms are so widespread that I wouldn't be surprised if GPT-2 could suggest them.


What I meant by my last statement is that no news outlet would have wanted to talk about "the innovative power of GPT-2 Enterprise." That just sounds fake, honestly. Every SaaS company wants to talk about the "innovative power" of the extra doodads they tack onto their Enterprise plans of their open-core product; where usually nobody is paying for their SaaS because of those doodads, but rather just because they want the service, want the ops handled for them, and want enterprise support if it goes down.

But, by marketing it as a new version of the tech, "GPT-3", OpenAI gave journalists something they could actually report on without feeling like they're just shoving a PR release down people's throats. "The new generation of the tech can do all these amazing things; it's a leap forward!" is news. Even though, in this case, it's only a "quantity has a quality all its own" kind of "generational leap."


That's disappointing.

> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

Certainly makes that statement seem less credible.


What if, and bear with me, strong AI poses real dangers and open sourcing extremely powerful models to everyone (including malicious actors and dictatorial governments) would actually harm humanity more than it benefits it?


And now it's restricted to only the malicious actors and dictatorial governments that can buy it!


This is exactly the defense I kind of expect them to use when AGI finally is realized (whether or not it ends up true).

"Oh, we said we're open but this is too dangerous to publicly release; we'll be licensing it exclusively to approved customers instead."


> (including malicious actors and dictatorial governments) would actually harm humanity more than it benefits it?

I'm really glad that weapons aren't open source. Imagine every dictatorship would get their hands on weapons. Luckily, it's hidden behind a paywall. /s


Incorrect. It is still a not-for-profit, which owns a for-profit entity. It is fairly common for charities to own much or all of a for-profit entity (e.g. Hershey Chocolate; in today's Matt Levine newsletter I learned that a quarter of Kellogg's is still owned by the original Kellogg charity). And the exclusive deal was not for GPT-3, in the sense of any specific checkpoint, but for the underlying code.


- Charity is not the same as not-for-profit

- Hershey is a public company. Most certainly NOT owned by either a charity or a non-profit. The only way a non-profit comes into the picture is that a significant portion of their 'Class B' stock is owned by a trust which is dedicated to a non-profit (the Milton Hershey School). (https://www.thehersheycompany.com/content/dam/corporate-us/d... pp 36-37)


Neither of your corrections are right, but thanks for providing a cite on how exactly a charity owns much of Hershey.


Both of my corrections are right, but thanks for doubling down on something that's false with no evidence.

- Charity vs non-profit: https://www.irs.gov/charities-non-profits/charitable-organiz...

> To be tax-exempt under section 501(c)(3) of the Internal Revenue Code, an organization must be organized and operated exclusively for exempt purposes set forth in section 501(c)(3) [CHARITY], and none of its earnings may inure to any private shareholder or individual [NON-PROFIT].

You can be a charity (albeit not tax exempt) without being a non-profit, and moreover you can be a non-profit without being a charity. (See also https://www.irs.gov/charities-non-profits/other-nonprofits ; and keep in mind that still other types of non-profits are not tax-exempt at all!)

- Trust "owning" Hershey's: If you look at the document I cited, you'll note that the trust (which is still neither a charity nor a non-profit!) owns only 5.5% of Hershey's common stock.


I stand corrected.


I would love an ACTUAL open AI platform. Someone should build a SETI@home-style platform to let normal people aggregate their spare GPU time.


> but what exactly is "open" about OpenAI?

Nothing. At this point it's simply openwashing.


I like this term, especially since this seems to have become a lot more common lately.


> Not related to the article but what exactly is "open" about OpenAI?

Microsoft's checkbook?


Closed and open.. ClopenAI


It's open in the sense that anyone can use it. Until now, models of this power were available only internally at megacorps.


No magic needed, only metadata. ;-)


This is a use case where AI-powered SQL is a solution in search of a problem, and it introduces more issues than just writing the boring SQL yourself. For data analysis, it's much more important to be accurate than fast, and this article is unclear about how many attempts each example query took. GPT-3 does not always produce coherent output (even with good prompts), and since not 100% of the output is valid SQL, the QA burden and your risk tolerance for bad output affect the economics.

OpenAI's GPT-3 API is expensive enough (especially with heavy prompt engineering) that the time saved may not outweigh the cost, particularly if the output is not 100% accurate.
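
Back-of-the-envelope, with hypothetical numbers (the per-token price here is an assumption; check OpenAI's actual rates):

    price_per_1k_tokens = 0.06   # assumed davinci-class rate, USD
    prompt_tokens = 1500         # schema + few-shot examples ("prompt engineering")
    completion_tokens = 120      # the generated SELECT statement
    queries_per_day = 200

    daily = (prompt_tokens + completion_tokens) / 1000 * price_per_1k_tokens * queries_per_day
    print("$%.2f/day" % daily)   # $19.44/day, i.e. roughly $580/month

Note that the prompt dominates the bill, since it gets re-sent with every query.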


One of the authors here. The idea (if we were to actually implement this in our product) would be to give the user some "boilerplate". We're nowhere near being able to automate a 100-line `SELECT` statement with CTEs etc., but it does a decent job of starting you off.


Granted, you could also get similar boilerplate from Googling your query and checking the top answer on Stack Overflow. That's free, and includes discussions on constraints/optimization.


Yeah, we originally thought GPT could accept a large domain-specific training set (e.g. feed in the SQL schema for a user), but it's not there yet. A PM at OpenAI said it shouldn't be far off, though. When that's possible, the SQL generated should be much better than Google.


> but it's not there yet. A PM at OpenAI said it shouldn't be far off, though.

Does this mean that the model is still being improved? Or just that your access to it will somehow become better? Either way, I'm curious what that entails.


> Does this mean that the model is still being improved?

Yes.

> Or just that your access to it will somehow become better?

Yes.

Increasing the amount of training data we can send would improve our results and that's what OpenAI mentioned they're working on.


The OpenAI GPT-3 API access policy says you grant them the right to use anything you feed the API to improve it, so I assume they're doing some kind of continuous retraining.


This seems to be consistent with my outsider view of AI demos.

1) Have a question

2) Figure out the answer

3) Have the AI figure out the answer

4) If the AI figured out your answer, be impressed, otherwise try again.


Yeah, no, that's pretty much it.


supervised learning!


The problem with the current "AI" technology is that it is only approximately correct (or rather, it is somewhat likely to produce a "good" result). This enables great use cases involving human perception, since we can filter out or correct small mistakes and reject big ones. But when used as input to a machine, even the smallest mistake can have huge consequences. Admittedly, this nonlinearity also applies when human beings "talk" to machines, but the input to and output of a single human being will always be constrained, whereas a machine could output billions of programs per day. I don't think it would be wise to follow that route before we have computational models that can cope with the many small and few big mistakes an "AI" would make.
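
One cheap (and incomplete) guard is to have the database plan a generated statement before running it. This catches syntax and schema errors, though not semantically wrong-but-valid queries. A sketch, using sqlite3 only for brevity:

    import sqlite3

    def is_executable(sql, conn):
        try:
            conn.execute("EXPLAIN " + sql)  # plans the query, no side effects
            return True
        except sqlite3.Error:
            return False

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, signup_time TEXT)")
    print(is_executable("SELECT COUNT(*) FROM users", conn))  # True
    print(is_executable("SELEC COUNT(*) FROM users", conn))   # False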


GPT-3 strikes me as the human fast thinking process, without the slow thinking process to validate and correct its answers. It is half a brain, but an impressive half at that.


It's like a human with no senses (sight, hearing, touch, smell or taste), also paralyzed, short-term amnesiac and alone, but able to gobble up tons of random internet text. After training it can meet people but can't learn from those experiences anymore; the network is frozen by the time it meets the real world.


Devil's Advocate: What makes this any different than human error?


Lack of human oversight.

Think of how frustrating it is to be unable to talk to a human at Facebook or Google because their AI closed your account without explanation.

Now imagine this is how everything works.


It depends on the human, it depends on the process; I guess it will depend on the quality of AI in the future.

I consistently have terrible experiences with human operators over the phone, e.g. the phone company and similar (in my case Italy, but I guess it's a general problem). They routinely cannot address my issues and just say they are sorry but they cannot do anything about it, or that this time it will work.

Human operators are a solution only if they are not themselves slaves to a rigid internal automated system.


This is why I try to stay with small companies. You can always get a real human on the phone, and more often than not the person understands the system they are working in and can fix problems themselves. Our last ISP was wonderful to call: you got direct answers, and it was a real tech that answered, not a call center reading from a script.


Lack of human oversight is one, as mentioned elsewhere in the thread. Speed is another.

Whatever error a human can cause, a machine can do as much or more damage, orders of magnitude faster and at larger scale, and it can be difficult to correct.


You know what grinds my gears with GPT-3? The fact that I can't tinker with it. I can't do what this guy just did, or play around with it, or learn from it, or whatever. Access is limited.

I feel like I'm back in '95, when I had to beg faculty staff to get a copy of VB on some lab computer, only to be able to use it one hour a day. Restricting knowledge like this, in 2021, feels odd.


Get used to it. The infrastructure involved is just too expensive to run at home.

The same applies to quantum computers. Models like GPT-3 are way too big for a consumer machine to handle and require something like a DGX Station [0][1] with 4x 80 GiB A100 GPUs to run properly.

So even if the model were available for download, you wouldn't be able to even run it without hardware costing north of $125,000.

It's less about restricting knowledge and more about the insane amount of resources required. It's not as bad as getting access to an fMRI machine or a particle accelerator, but it's getting there ;)

[0] https://bdtechtalks.com/2020/09/21/gpt-3-economy-business-mo...

[1] https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Cent...


Am I reading it wrong, or was the output for this example wrong:

Input: how much revenue have we received from users with an email ending in 'seekwell.io' in the last 3 months?

The output conditions on: users.signup_dt >= now() - interval '3 months'

But my interpretation would be to condition on charges.charge_dt
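
For what it's worth, the corrected query would look something like this (table and column names assumed from the article's examples):

    corrected_sql = """
    SELECT SUM(charges.amount)
    FROM charges
    JOIN users ON users.id = charges.user_id
    WHERE users.email LIKE '%seekwell.io'
      AND charges.charge_dt >= now() - interval '3 months'
    """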

It reminds me of the criticisms of almost fully autonomous vehicles where the driver pays less attention. Now I'm curious if the queries written from scratch would be less likely to have errors than queries that use GPT3 as a starting point.


I've built this as well! I won an internal hackathon with it, so I ran up against many of the issues you'll find here.

1. There is unlimited flexibility in the prompt.

Seemingly irrelevant changes to the prompt can change whether you get correct SQL out or not. Sometimes you can just repeat things in the prompt and get different and better results: "Write correct SQL. Write correct SQL."

For any one input question you may be able to tweak the prompt to get the correct answer out. But you need to do this tweaking for each question (and know the correct answer you need). Tweaking one prompt may break all other input-output pairs.

2. Real questions involve multiple large schemas.

I deal with tables with thousands or tens of thousands of columns. There is no way you can get GPT-3 to deal with that scale with a simple input as shown here. And of course you want to join across many tables etc.

3. Syntax

Natural language is more robust than SQL; you can get close and still get the point across. Most language models trained on general corpora are fundamentally not suited to the symbolic manipulation demanded by languages like SQL.

This isn't to say that GPT-3 couldn't be part of a solution to this problem, but please restrain your exuberance; it's not going to solve it out of the box.
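
On point 2, one common workaround is to retrieve only plausibly-relevant tables and put just those in the prompt. A naive sketch (hypothetical names; a real system would use embeddings rather than keyword overlap):

    def prune_schema(question, schema, k=3):
        """Keep the k tables whose names/columns overlap the question most."""
        words = set(question.lower().split())
        def score(table, cols):
            names = {table} | {c.lower() for c in cols}
            return sum(any(w in n or n in w for n in names) for w in words)
        ranked = sorted(schema.items(), key=lambda tc: score(*tc), reverse=True)
        return dict(ranked[:k])

    schema = {"users": ["id", "email", "signup_time"],
              "charges": ["id", "user_id", "amount", "charge_dt"],
              "audit_log": ["id", "event", "ts"]}
    print(prune_schema("revenue from charges last month", schema, k=2))
    # keeps 'charges' (and one other table); drops the rest from the prompt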


Cool! Did you consider using BERT instead of GPT-3? Any idea what the pros and cons with each approach are?


He's our fastest business analyst but sometimes on a hot day he'll just keep repeating "Syntax Error"...

Very cool work, I continue to be blown away by what GPT-3 can achieve.


"GPT-3, I need a React App that displays vital economic statistics from the World Bank API."

----

"Nice! can you add a drop-down for regional statistics when in country view?"

----

"Just one last thing. Can you make the logo bigger?"



Even though that one appears to be on an official channel of theirs, the quality of this one is much better, for some reason: https://www.youtube.com/watch?v=maAFcEU6atk


Thanks for that - yes, it looks like Adult Swim Germany has had to create a zoomed version of the original in order to avoid an automated copyright strike from their parent company. Kinda ironic: yet another example of the algorithms doing most of the work, and everything getting slightly worse as a result.


When that day finally comes, I guess lurking on HN will be my full-time job. The question is which job gets replaced first: the managers' or the programmers'?


Neither. The programmers will still have jobs debugging the apps when they don't handle 1% of the inputs correctly. The managers will come up with all the processes necessary to maintain oversight of the new activities and keep their jobs.


Amazon pitched something like this at re:Invent. I'm bearish because the art of good analytics is deeply understanding the data. The SQL queries are the easy part. It's knowing when to omit inactive states, or how accrual works in the system, or that field X was deprecated and isn't accurate anymore.

These systems solve none of those underlying questions, and in fact are more likely to make it worse. Because if you won't even bother to write SQL or look at your database, you are even less likely to make sure it's high quality.

(There is maybe an exemption here for queries where you don't really care if you get the real answer. I don't know what those questions would be, but that would be a good use case.)


We should start with the caveat that the GPT-3 API waitlist doesn't actually move; you literally need to get an employee to take you off the waitlist manually.


I'm a member of the beta. The Slack group regularly sees influxes of hundreds of new customers, many of whom seem to have signed up from the waitlist.


This is really cool, but it's clear that the person requesting the SQL has to know whether the generated SQL is correct for it to be of use.

If I'm a non-technical user and I ask a plain-language question and the generated SQL is incorrect, it's likely going to give the wrong answer -- but unless it's terribly wrong ("Syntax error", truly implausible values) the user may not know that it's wrong.

So I see this as more of a tool to speed up development than a tool that can power end users' plain-language queries. But who knows? Maybe GPT-4 will clear that hurdle.


One of the authors here. You're exactly right. We're nowhere near being able to automate a 100-line `SELECT` statement with CTEs etc., but it does a decent job of starting you off.


For me, it is easier to fix or tweak something (trial and error?) than to come up with something new from the ground up.


Really interesting article. I'm just curious how you get access to GPT-3.


Go back to the article and search for "If you're interested in trying it out"; there's a link that allows you to sign up for the waiting list.


Anecdotally, I signed up around last June (06/2020) and am still waiting to hear back.


(I work at OpenAI.)

We've been ramping up our invites from the waitlist — our Slack community has over 18,000 members — but we're still only a small fraction of the way through. We've been really overwhelmed by the demand and have been scaling our team and processes to meet it.

We can also often accelerate invites for people who do have a specific application they'd like to build. Please feel free to email me (gdb@openai.com) and I may be able to help. (As a caveat, I get about a hundred emails a week, so I can't reply to all of them — but know that I will do my best.)


Thank you for your open and honest response. I've been on the waiting list for a few months myself, and it's great to hear that OpenAI is ramping up to meet the enormous demand for GPT-3.


Can you explain why this is not fully open to paying customers?


AFAIK the list never moves and you basically have to know someone at OpenAI.


Same.


Same. :|


Also in the "signed up for the waitlist but never heard back" camp. I signed up a couple of times because I thought I might have done it from an address that got filtered out at first.


Got it. Do you have to pay to use it?


OpenAI is a paid API. The SQL pad we (https://seekwell.io/) offer has a free tier with paid premium features.


Got it, thanks for answering.


Those 36MB GIFs can probably be optimized without losing GIF compatibility :)


It reminds me of why tools like Tableau are so useful. You don't have to teach people SQL or whatever; they can build their own visualizations and Tableau will do the SQL for you.


Fun story: we interview candidates by giving them SQL take-home questions. We gave them a database user, but everyone on our team could see the queries run by that user. One candidate was really impressive. They were using some very advanced syntax and the queries were immaculate.

Turns out they were using PowerBI lol


Looker is basically a SQL generator


Can you be 100% sure that GPT-3 will really understand what you want to query? Or do you have to double check every query before you run it?

If it's the latter, it's not automation, it's just a nice "autocomplete" feature.


> Can you be 100% sure that GPT-3 will really understand what you want to query? Or do you have to double check every query before you run it?

Isn't that the same for your new coworker Bob over in that corner of the room? Sure, his "failure modes" might look different, but how can you tell whether someone (or something) "understands" what you want to query? And how can you be sure they got it right if you don't double check?

In my eyes GPT-3 is akin to a freshly hired grad who can't operate Google (so you have to send him some related Stack Overflow examples for his task). You can't trust that the result is right in either case, but both bring pretty decent domain knowledge and, when given work accordingly, could save you time in the end.


I constantly wonder how people are getting access to the GPT-3 API (as beta users) when so many are still on the waiting list. The stock answer of "use the AI Dungeon game" is quite lacking.


We didn't do anything special. Signed up for the waitlist on day one and just randomly got an email one day saying we're in.


How long did you have to wait? I've been on the GPT-3 waiting list for a few months, hoping to build an educational app, and haven't heard anything yet.


It's good to hear that the beta API application process is as probabilistic as their algorithms.


Can someone check if the page https://blog.seekwell.io/gpt3 is causing CPU spikes in the browser?


Yeah, my fan just spun up and I was wondering... guess it's this website.


I would have used the word "percentage" rather than "percent". I wonder if the slightly more precise English would have helped?


I’m really curious as to how this would deal with a “Bobby Tables” scenario or deliberate malicious intent.

The ethics implications are... fascinating.
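
The boring mitigation would presumably be a read-only database role plus a gate that rejects anything that isn't a single SELECT. A sketch, using the sqlparse library; not bulletproof, but it turns "Bobby Tables" into a permissions error rather than a dropped table:

    import sqlparse  # pip install sqlparse

    def looks_safe(sql):
        statements = sqlparse.parse(sql)
        return len(statements) == 1 and statements[0].get_type() == "SELECT"

    print(looks_safe("SELECT * FROM users"))          # True
    print(looks_safe("SELECT 1; DROP TABLE users"))   # False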


Does this sort of thing exist for GraphQL?


Wait till it figures out that the best way to "solve" everything is to drop tables :D


How do I get an invite to GPT-3?


Teaching it CTEs, subqueries, generic column identifiers (1, 2, 3), unions, left outer joins, LIKE, regexp, JSON, arrays, array_agg, concat, coalesce, trunc, etc. would be fun.

But something else humans can do is look at the data to construct patterns as another way to extract it. To some degree this requires some (bad) inductive-reasoning approaches, like "I'll assume maybe most or all data in this column has this format." "Oops, it didn't, so let's tweak it to look like this, to cover it sometimes also being an empty string or null."
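
That inner loop might look like this toy sketch (names and pattern are hypothetical): guess a format, then widen the guess when rows don't fit.

    import re

    def guess_pattern(samples):
        pattern = r"^\d{4}-\d{2}-\d{2}$"  # first guess: ISO dates
        # tweak: also allow empty string or null, since some rows had neither
        if all(s in ("", None) or re.match(pattern, s) for s in samples):
            return pattern + " (allowing empty/null)"
        return None  # guess falsified; try another pattern

    print(guess_pattern(["2021-01-27", "", None, "2020-12-01"]))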


At least you are not automating someone else's job and feeling bad about them being fired, lol.



