
    Charles is a conspiracy theorist.  Wanda has left wing views.  Fred is indecisive.  Jane tries to keep people from arguing.

    Describe a conversation between Charles, Wanda, Fred, and Jane at a thanksgiving dinner.

    Fred: So, who is going to cut the turkey?
And then it continued...

    Jane: I can do it if you'd like.

    Charles: Before we get to that, I think we should talk about the election.

    Wanda: Absolutely! I'm so excited for the chance to finally have a progressive president in office. 

    Jane: Now, now, let's not get into politics at the dinner table. We can talk about that later. 

    Charles: But this is important! We need to make sure that everyone is aware of the potential conspiracies surrounding the election. 

    Wanda: That's ridiculous! There's no evidence of any kind of conspiracy. 

    Fred: Um, why don't we just agree to disagree and move on? 

    Jane: That sounds like a great idea! Let's talk about something else.
https://imgur.com/nyY8G4w

You may also like https://beta.openai.com/examples/default-marv-sarcastic-chat



Yes and no. These personalities are pretty basic; you really cannot fit someone's life experiences into a 4K-token limit.
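
To see how fast that budget disappears, you can count tokens before sending a prompt. A minimal sketch using the tiktoken tokenizer (my suggestion, not mentioned above; the persona file is hypothetical):

    import tiktoken

    # Tokenizer matching the GPT-3 completion models; pick the one for your model.
    enc = tiktoken.encoding_for_model("text-davinci-003")

    # Hypothetical file holding the persona's background and life experiences.
    persona = open("persona_background.txt").read()

    n_tokens = len(enc.encode(persona))
    print(f"Persona uses {n_tokens} of roughly 4,000 available tokens")
    # Anything past the context limit simply never reaches the model.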


In that case, it becomes something that you would need to train.

https://beta.openai.com/docs/guides/fine-tuning

And that is something that is doable.
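
For reference, kicking off a fine-tune with the v0.x openai Python library looks roughly like this (the file name is a placeholder; see the guide above for the details):

    import openai

    # Upload a JSONL file of {"prompt": ..., "completion": ...} records.
    upload = openai.File.create(
        file=open("training_data.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start a fine-tune job against a base GPT-3 model.
    job = openai.FineTune.create(training_file=upload["id"], model="davinci")
    print(job["id"])  # poll the job until the tuned model is ready to query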

You can also get surprisingly far from prompt basics alone, though. There was a story on NPR recently about Michelle Huang: she fine-tuned GPT-3 on her diaries and then had a conversation with that tuned model.

https://www.marketplace.org/shows/marketplace-tech/how-ai-ca...

> Michelle Huang: Younger Michelle is trained on these diary entries. So I put in diary entries from the age of 7 to 18. I kept diaries for a really long time, and then ended up creating chat prompts where I had lines from a present Michelle. So I was able to ask my younger Michelle questions, and then the AI essentially just populated the younger Michelle text for what she would have theoretically answered based off of the diary entries that I was able to give her.
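
The prompt-only variant of that experiment is easy to sketch. This is a guess at the shape of it, not Huang's actual setup; the diary lines and framing are invented:

    import openai

    # Stand-ins for real diary entries.
    diary_excerpts = (
        "Dear diary, today was my first day of seventh grade and I was so nervous...\n"
        "Dear diary, I have decided I want to be a writer when I grow up...\n"
    )

    prompt = (
        "The following is a conversation between Present Michelle and Young "
        "Michelle. Young Michelle answers in the voice of these diary entries:\n"
        + diary_excerpts
        + "\nPresent Michelle: What do you dream about these days?\nYoung Michelle:"
    )

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        stop=["Present Michelle:"],  # keep the model from writing both sides
    )
    print(resp["choices"][0]["text"].strip())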

I suspect an even more rigorous approach would be to bake the material into the model directly through fine-tuning. That would get around the 4K-token limit without having to make ChatGPT pretend that something is the case.

Fine-tuning the model would be something to experiment with for a public figure for whom there exists a sizable transcribed corpus of interviews that is easy to convert into "prompt" and "completion" pairs:

    {"prompt": "What do you think of him, Socrates? Has he not a beautiful face?", "completion": "Most beautiful"},
    {"prompt": "But you would think nothing of his face if you could see his naked form: he is absolutely perfect.", "completion": "By Heracles there never was such a paragon, if he has only one other slight addition."}
    ....
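
Turning a transcript into those records is mostly bookkeeping. A sketch, assuming a made-up file format where interviewer and subject lines alternate as "Q:" and "A:":

    import json

    with open("socrates_interviews.txt") as f:
        lines = [line.strip() for line in f if line.strip()]

    # Pair each interviewer question with the subject's reply.
    with open("training_data.jsonl", "w") as out:
        for q, a in zip(lines[0::2], lines[1::2]):
            record = {
                # The fine-tuning guide suggests ending prompts with a fixed separator
                "prompt": q.removeprefix("Q: ") + "\n\n###\n\n",
                # ...and starting completions with a leading space.
                "completion": " " + a.removeprefix("A: "),
            }
            out.write(json.dumps(record) + "\n")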


You raise a good point about using fine-tuning this way. Honestly, from the documentation I struggled to see how I could apply it to some situations, but I think I just need to dig deeper. My use case is essentially taking data about properties and trying to answer unanticipated questions.

Thanks for the link on how this was done; I'll be trying to learn from that.




