Hacker News | calderknight's comments

That's something you can easily prompt for, too.


What model were you using? What prompt did you use?

You can learn to trust it for some tasks that it's reliably good at.

But for the most part, you don't trust it - you read through it and check it.


ChatGPT.

Translate the following Python program to Lisp.

Yes, I forgot to add "... and no crap." ;)


ChatGPT is just the brand. I guess you're using GPT-4, but if you're using the default model (GPT-3.5) that would certainly explain below-expectation results.


> ChatGPT is just the brand.

"ChatGPT" is a model, says its provider.

But yes, the web page says model GPT-3.5.


Mystery solved! The answer to your OP: the reason anyone can trust ChatGPT for code is that they're using a much better model than you are. GPT-3.5 is ancient and way behind GPT-4. In fact, there are now dozens of organisations that have developed models way ahead of GPT-3.5.


> the reason anyone can trust ChatGPT for code is that they use a much better model than the one you're using! GPT-3.5 is ancient and way behind GPT-4.

Testing that theory now using Bing Copilot GPT-4.

Failed.

unmatched close parenthesis Line: 11, Column: 11, File-Position: 408

Input is:

  Translate this to Common Lisp.

  def solve(graph):
    a = 0
    while a < len(graph):
      for b in range(len(graph)):
        if b > a and graph[a].intersection(graph[b]):
          graph[b] = graph[a] | graph[b]
          graph.pop(a);a-=1
          break
      a+=1
    return graph

  a = [{1, 2}, {4, 5}, {1, 3}, {1, 4}]

  print((solve( a )))
Output is:

  (defun solve (graph)
    (let ((a 0))
      (loop while (< a (length graph)) do
        (loop for b from 0 below (length graph) do
    (when (and (> b a) (intersection (nth a graph) (nth b graph)))
      (setf (nth b graph) (union (nth a graph) (nth b graph)))
      (setf graph (remove-if (lambda (x) (equal x (nth a graph))) graph))
      (decf a)
      (return))))
        (incf a))
      graph))

  (let ((a (list (list 1 2) (list 4 5) (list 1 3) (list 1 4))))
    (format t "~a~%" (solve a)))
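For comparison, the Python original runs as-is and collapses everything into one merged set, which is the behaviour the translation was meant to reproduce (same code as quoted above, with the `graph.pop(a);a-=1` line split for readability):

```python
# Merge overlapping sets until no two sets in the list intersect.
def solve(graph):
    a = 0
    while a < len(graph):
        for b in range(len(graph)):
            if b > a and graph[a].intersection(graph[b]):
                graph[b] = graph[a] | graph[b]  # fold set a into set b
                graph.pop(a)                    # drop the absorbed set
                a -= 1                          # re-examine this index
                break
        a += 1
    return graph

print(solve([{1, 2}, {4, 5}, {1, 3}, {1, 4}]))  # -> [{1, 2, 3, 4, 5}]
```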


Yeah, if going that route the answer to the OP is "most people aren't using it for Lisp".


Perhaps Lisp programmers have more sense than most :)


a few million years ago there were land mammals

https://en.wikipedia.org/wiki/Saint_Bathans_mammal


Why not?


leagues ahead of gpt-4-turbo-2024-04-09


A USA-made product


To be clear, the best available version of Gemini has barely caught up with what OpenAI released over a year ago.

Google's a year behind. They haven't really caught up.


I don't think there's a contradiction between being something that just generates text and being something that does have thought processes and intention.


Yep. Imagine reading ASCII art, verbally, one character at a time, to a dementia patient in a context in which they're not expecting it. They'll probably react negatively.


I don't think your interpretation of the sentence is sensible. The sentence mentions that the host knows what's behind the doors. So, if he is allowed to open the door with the car, the problem would become insoluble and would just be about speculating on the host's personality. And it definitely doesn't support the conclusion that the probabilities become 50/50.


[deleted]


Not logical. From the assumption that the host would never pose the question "Hey, here's the car, wanna switch?", it doesn't follow that the contestant would automatically lose. You could just as well speculate that if the host opened the door to the car, the contestant would automatically win it.


it's pretty counterintuitive that his personality enters into it, isn't it?


If the format of the game allows him to show the car to the player, how could his personality not enter into it? In every case where the player picks a goat-door, the host will be presented the option to either reveal the car or the goat. I mean, one can imagine various complicated scenarios in which the host might reveal the car exactly 50% of the time in such cases, but none seem like they can be reasonably arrived at.


It turns out that there is in fact no way his personality could not enter into it, but that was not obvious to me until I did the simulation. Even if he chose to reveal the car exactly 50% of the time in such cases, that would be a result of his personality, wouldn't it?
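The simulation mentioned above can be sketched in a few lines of Python. The modelled assumption (mine, not anything stated in the thread) is a single "personality" knob: the probability `p_reveal_car` that the host opens the car door whenever the contestant's first pick was a goat, conditioning on the rounds where a goat was shown:

```python
import random

def monty(p_reveal_car, trials=100_000, seed=0):
    """Estimate P(win by switching | host opened a goat door).

    p_reveal_car is the assumed chance that the host opens the
    car door when the contestant's first pick was a goat (so the
    host holds one goat door and the car door).
    """
    rng = random.Random(seed)
    wins = goat_shown = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        if pick == car:
            goat_shown += 1        # host can only show a goat here
            # switching moves off the car: a loss
        else:
            if rng.random() < p_reveal_car:
                continue           # host showed the car; round discarded
            goat_shown += 1
            wins += 1              # the remaining closed door is the car
    return wins / goat_shown

print(monty(0.0))   # classic host who never shows the car: ~2/3
print(monty(0.5))   # coin-flip host: ~1/2
```

With p = 0 switching wins about 2/3 of the time; with p = 0.5 it drops to about 1/2. But that 0.5 is itself a choice of the host's, which is the point: any value of the knob encodes his personality.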


You can imagine factors other than his personality, but they're all equally speculative. Additional rules to the game, for example.


That's clear. Otherwise the question would just be about speculating on the host's personality.

