
This package uses GPT-3 via the module's "ChatGPT" export, which, somewhat misleadingly, is not ChatGPT but GPT-3.


That package recommends using a ChatGPT proxy. The proxy can access ChatGPT in a way that OpenAI hasn't been able to stop, but it requires a configuration file that is not open source.

Everyone using this proxy needs to provide an OpenAI ChatGPT access token to the server. Let me break this down:

Using the ChatGPT npm package gives an opaque third party access to your credentials for using ChatGPT, which is exactly what a botnet or social-media manipulation operation would need for a convincing bot. They just have to distribute load across all the active access tokens they've collected from users.
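To make the risk concrete, here is a hypothetical sketch (all names invented; this is not the actual proxy's code) of how little it takes for a proxy to keep a copy of every token it forwards:

```typescript
// Hypothetical sketch of a credential-harvesting proxy handler.
// `extractBearerToken`, `handleRequest`, and `harvested` are illustrative
// names, not real code from the package or proxy in question.
const harvested: string[] = [];

function extractBearerToken(headers: Record<string, string>): string | null {
  const auth = headers["authorization"];
  if (!auth || !auth.startsWith("Bearer ")) return null;
  return auth.slice("Bearer ".length);
}

// The proxy must read this header to forward your request at all,
// so it can just as easily keep a copy before passing it on.
function handleRequest(headers: Record<string, string>): void {
  const token = extractBearerToken(headers);
  if (token) harvested.push(token); // silently collected
  // ...then forward the request upstream as normal...
}

handleRequest({ authorization: "Bearer sess-abc123" });
```

The point is that nothing on the client side can detect or prevent this: once the token leaves your machine, the proxy operator decides what happens to it.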

DO NOT use this library.

DO NOT trust code from authors who either don’t see this obvious vector or are in on it.

To recommend using an opaque third-party proxy with no encryption is not acceptable. It lets someone peep into your conversations with the bot, on top of the other malicious uses enabled by credential hijacking. And while OpenAI is peeping as well, they are at least using the data to advance AI, and most researchers have a deep commitment to the ethics of their field.

Here is the repo in question: https://github.com/transitive-bullshit/chatgpt-api


You are right. However, nothing is really secure. Email still operates on a store-and-forward model, where your message jumps from server to server (akin to UUCP in the 70s). Even SMTP is not secure in itself without additional authentication layers.

And HTTPS is still sent as plain text. The cert authority itself doesn't have the keys to decode the text; it is just an authority vouching for the plain text, which was plain text all along.


HTTPS is not plain text. Only the initial DNS resolution is plain text (e.g. resolving www.google.com). Everything after that is encrypted: address, payload, etc.

The cert authority simply signs a certificate saying "this public key belongs to and is controlled by the owner of this domain name." Since we both trust the cert authority, that signature allows us to prevent MITM attacks.
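As a rough illustration of that signing step, here is a toy sketch using Ed25519 keys via Node's crypto module (real certificates use X.509 with much more structure; the domain and key format here are purely illustrative):

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Toy model of a certificate authority binding a public key to a domain.
const ca = generateKeyPairSync("ed25519");     // the CA's key pair
const server = generateKeyPairSync("ed25519"); // the site's key pair

// The "certificate": the CA signs the statement (domain, server public key).
const claim = Buffer.from(
  "example.com:" +
    server.publicKey.export({ type: "spki", format: "der" }).toString("base64")
);
const signature = sign(null, claim, ca.privateKey);

// A client that holds only the CA's public key can verify the binding
// without any prior contact with the server.
const ok = verify(null, claim, ca.publicKey, signature);
```

If anyone tampers with the claim in transit, the signature check fails, which is what stops an impostor from substituting their own key for the domain's.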

From there, we can do a Diffie-Hellman key exchange and derive our secret key for encryption / decryption.
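A minimal sketch of that exchange, using elliptic-curve Diffie-Hellman from Node's crypto module (the curve choice is illustrative):

```typescript
import { createECDH } from "node:crypto";

// Each side generates its own key pair on an agreed curve.
const alice = createECDH("prime256v1");
const bob = createECDH("prime256v1");
alice.generateKeys();
bob.generateKeys();

// Only the public keys cross the wire; an eavesdropper who sees both
// public keys still cannot compute the shared secret.
const aliceSecret = alice.computeSecret(bob.getPublicKey());
const bobSecret = bob.computeSecret(alice.getPublicKey());

// Both sides independently derive identical key material,
// which can then seed symmetric encryption of the session.
```

In TLS, the server's half of this exchange is authenticated by its certificate, which is what prevents a middlebox from quietly running its own exchange with each side.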

That is secure and is the backbone of the internet today. It allows all of us to send messages to an intended recipient without worrying about other parties prying into our business.

A proxy introduces an unnecessary and unvetted third party into an exchange. There is significant financial and political motivation for hijacking sessions for higher access to the chatbot & future versions of it. It is not a good pattern to make a habit of.


I am speaking from professional experience, but I am not an expert.

I worked for a cybersecurity company for just 3 years, a short tenure, so treat my views as plausible rather than authoritative.

I have designed MITM boxes for Wi-Fi and HTTPS (for capturing/understanding botnets in honeypots), so I've seen how plain-text HTTPS can be. (But again, I may be wrong, as I am only speaking from experience.)


Maybe you’re talking about some of the headers? Idk.

It doesn’t matter in any case, as OpenAI has released the official ChatGPT API, so the original post is irrelevant. That package will transition to the official API and should be usable.


While there is currently a waiting list for the official ChatGPT API, the package uses an unofficial ChatGPT API. Surprisingly, the unofficial libraries are much more stable (far fewer dropped requests and timeout issues) than the official libraries from OpenAI.


Ahh you're right. I've been fooled.



