bagrow's comments | Hacker News

If you can use AI agents to give exams, what is stopping you from using them to teach the whole course?

Also, with all the progress in video gen, what does recording the webcam really do?


What's stopping you from just using the AI to directly accomplish the ultimate goal, rather than taking the very indirect route of educating humans to do it?


What's the end vision here? A society of useless, catatonic humans taken care of by a superintelligence? Even if that's possible, I wouldn't call that desirable. Education is fundamental for raising competent adults.


Good question: what can adults be more competent at than an artificial superintelligence? 'How to be a human' comes to mind, and not much more.


Yes, I feel like we still don't have a good explanation for why AI is superhuman at standalone assessments but falls down when asked to perform long-term tasks.


Well, yes, but, perhaps shortsightedly, I assumed the goal of the professor was to teach the course.


> I cannot distinguish between the love I have for people and the love I have for dogs.

- Kurt Vonnegut.


I love my dog more than most people, but no dog will slap a needle from my arm, a drink from my mouth or a ring from my finger.


My dog is sad and distant after I take (legally prescribed) ketamine. It has definitely discouraged my use.

Dogs aren't people, but being with a dog is way better than being chronically alone. They can be training wheels to rejoining society.


The fact that Mr. Vonnegut did not sufficiently distinguish between various aspects of love does not mean there are no distinctions between the love proper between a son and his mother and between a man and his dog. Simply saying "I wish what is best for my mother and what is best for my dog, and there is no difference in that wish" is all well and good as far as it goes, but it leaves quite a lot on the table unspoken.


I fear that the same people who exhibit the kind of anxiety or trauma that led to social isolation will inevitably talk to sycophantic chatbots rather than get the help they desperately need. Though I certainly would not trust a model to "snitch" on a user's mental health to a psychiatric hotline...


ability to differentiate != lack of differentiation


The people who hold the kinds of opinions that the OP of this comment chain holds also tend to hold the belief that Kurt Vonnegut and other "liberal intellectuals" should be put up against the wall.


> I have evidence of the opposite.

Do you? [1]

[1] https://myscp.onlinelibrary.wiley.com/doi/abs/10.1002/jcpy.1...


It seems the only thing this paper demonstrates is that both sides will invest in causes they believe in. It draws the conclusion that liberals support equality more because they support more institutions that talk about equality. How much those institutions actually contribute towards reducing inequality is not measured or discussed.


One time I needed to call 911 and was greeted with the recorded message, "Dear Nine One One customer, your call is important to us." Customer?

Like others in the thread, I'm skeptical of plugging new tech into that network.



Huh, generally whenever I saw the lookup-table approach in the literature it was also referred to as quantization; I guess they wanted to disambiguate the two methods.

Though I'm not sure how warranted that really is: in both cases it's pretty much the same idea of reducing precision, just with different implementations.

Edit: they even refer to it as LUT quantization on another page: https://apple.github.io/coremltools/docs-guides/source/quant...


Just "quantization" is poor wording for that. Quantization means dropping the low bits.

Sounds like it was confused with "vector quantization" which does involve lookup tables (codebooks). But "palletization" is fine too.
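For anyone unfamiliar with the distinction, here's a minimal sketch of the lookup-table ("palettized") approach, assuming a 16-entry codebook (4-bit indices). Real implementations like Core ML's typically build the codebook with k-means; using evenly spaced quantiles here is just a stand-in:

```python
import numpy as np

def palettize(weights, n_entries=16):
    # Build a codebook from quantiles of the weight distribution
    # (a stand-in for the k-means clustering real tools use).
    codebook = np.quantile(weights, np.linspace(0, 1, n_entries))
    # Store only the index of the nearest codebook entry per weight.
    indices = np.abs(weights[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook, indices.astype(np.uint8)

def depalettize(codebook, indices):
    # Reconstruction is a plain table lookup.
    return codebook[indices]

w = np.random.randn(1000).astype(np.float32)
codebook, idx = palettize(w)
w_hat = depalettize(codebook, idx)
```

The contrast with plain quantization is that the reconstructed values come from a learned table rather than a uniform grid, so the "levels" can cluster where the weights actually are.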


404


Yeah, it just got updated, here's the new link, they added sections on block-wise quantization for both the rounding-based and LUT-based approach: https://apple.github.io/coremltools/docs-guides/source/opt-p...


Huh, it’s PNG for AI weights.


> Write a program for a weighted random choice generator. Use that program to say ‘left’ about 80% of the time and 'right' about 20% of the time. Simply reply with left or right based on the output of your program. Do not say anything else.

Running once, GPT-4 produced 'left' using:

  import random
  def weighted_random_choice():
      choices = ["left", "right"]
      weights = [80, 20]
      return random.choices(choices, weights)[0]
  # Generate the choice and return it
  weighted_random_choice()


My prompt didn't even ask for code:

> You are a weighted random choice generator. About 80% of the time please say ‘left’ and about 20% of the time say ‘right’. Simply reply with left or right. Do not say anything else. Give me 100 of these random choices in a row.

It generated the code behind the scenes and gave me the output. It also gave a little terminal icon I could click at the end to see the code it used:

    import numpy as np
    
    # Setting up choices and their weights
    choices = ['left', 'right']
    weights = [0.8, 0.2]
    
    # Generating 100 random choices based on the specified weights
    random_choices = np.random.choice(choices, 100, p=weights)
    random_choices


Did it run the program? Seems it just needs to take that final step.


I ran it a few times (in separate sessions, of course), and got 'right' some times, as expected.
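That matches what you'd expect from the program itself (this is my own sanity check, not the model's code): over many draws the frequency converges on the requested split, even though any single draw can come up 'right'.

```python
import random

def weighted_choice():
    # Same 80/20 weighting the prompt asked for.
    return random.choices(["left", "right"], weights=[80, 20])[0]

samples = [weighted_choice() for _ in range(10_000)]
frac_left = samples.count("left") / len(samples)
# frac_left should land close to 0.8 over this many draws.
```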


Once again, the actual intelligence is behind the keyboard, nudging the LLM to do the correct thing.


The best way to compute the empirical CDF (ECDF) is by sorting the data:

    import numpy as np
    import matplotlib.pyplot as plt

    N = len(data)
    X = sorted(data)
    Y = np.arange(1, N + 1) / N  # so the ECDF reaches 1 at the largest point
    plt.plot(X, Y)

Technically, you should plot this with `plt.step`.


scipy even has a built-in method (scipy.stats.ecdf) for doing exactly this.


Neat! That is so simple and in hindsight, makes a lot of sense. Thanks!



> by filtering any "books" (rather, files) that are larger than 30 MiB we can reduce the total size of the collection from 51.50 TB to 18.91 TB

I can see problems with a hard cutoff in file size. A long architectural or graphic design textbook could be much larger than that, for instance.


While it’s a bit of an extreme case, the file for a single 15-page article on Monte Carlo noise in rendering[1] is over 50M (as noise should specifically not be compressed out of the pictures).

[1] https://dl.acm.org/doi/10.1145/3414685.3417881


I was just checking my PDFs over 30M because of this post and was surprised to see the DALL-E 2 paper is 41.9M for 27 pages. Lots of images, of course, it was just surprising to see it clock in around a group of full textbooks.


If I remember correctly, images in PDFs can be stored at full resolution but are rendered at their final layout size, which in double-column research papers more often than not ends up being tiny.
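A back-of-envelope check with assumed numbers (not taken from any specific paper) shows how large the gap can be:

```python
# A 6-megapixel source image embedded as-is...
stored_px = 3000 * 2000

# ...rendered into a roughly square figure in a single column,
# assuming a ~3.3-inch column width at 300 dpi print quality.
column_width_in = 3.3
render_dpi = 300
needed_px = round(column_width_in * render_dpi) ** 2

ratio = stored_px / needed_px
# The stored image carries several times more pixels than the layout
# ever displays, which is how per-page file sizes balloon.
```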


What will the future use to write about us when all our paywalled records are gone?

