What's stopping you from just using the AI to directly accomplish the ultimate goal, rather than taking the very indirect route of educating humans to do it?
What's the end vision here? A society of useless, catatonic humans taken care of by a superintelligence? Even if that's possible, I wouldn't call that desirable. Education is fundamental for raising competent adults.
Great question about what adults can be more competent about than an artificial superintelligence. ‘How to be a human’ comes to mind and not much more.
Yes, I feel like we still don’t have a good explanation for why AI is superhuman at stand-alone assessments but falls down when asked to perform long-term tasks.
The fact that Mr. Vonnegut did not sufficiently distinguish between various aspects of love does not mean that there are no distinctions between the love proper to a son and his mother and the love proper to a man and his dog. Simply saying "I wish what is best for my mother and what is best for my dog, and there is no difference in that wish" is all well and good as far as it goes, but it leaves quite a lot on the table untalked about.
I fear that the same people who exhibit the kind of anxiety or trauma that led to social isolation will inevitably talk to sycophantic chatbots rather than get the help they desperately need.
Though I certainly would not trust a model to "snitch" on a user's mental health to a psychiatric hotline...
The people who hold the kinds of opinions that the OP of this comment chain holds also tend to hold the belief that Kurt Vonnegut and other "liberal intellectuals" should be put up against the wall.
It seems the only thing this paper demonstrates is that both sides will invest in causes they believe in. It draws the conclusion that liberals support equality more because they support more institutions that talk about equality. How much those institutions actually contribute towards reducing inequality is not measured or discussed.
Huh, generally whenever I saw the lookup-table approach in the literature it was also referred to as quantization; I guess they wanted to disambiguate the two methods.
Though I'm not sure how warranted that really is: in both cases it's pretty much the same idea of reducing precision, just with different implementations.
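To make the comparison concrete, here's a minimal sketch (illustrative only, not from any particular paper) of the two implementations of the same idea: uniform quantization snaps each weight to a fixed grid, while the lookup-table approach snaps each weight to its nearest codebook entry and stores only the index.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(8).astype(np.float32)

# Uniform quantization: round each weight to a fixed grid of step s.
s = 0.25
uniform_q = np.round(weights / s) * s

# Lookup-table approach: map each weight to its nearest entry in a small
# codebook (values here are made up); only the index is stored per weight.
codebook = np.array([-1.0, -0.3, 0.0, 0.3, 1.0], dtype=np.float32)
indices = np.abs(weights[:, None] - codebook[None, :]).argmin(axis=1)
lut_q = codebook[indices]

print(uniform_q)
print(lut_q)
```

Either way, each weight ends up represented by far fewer distinct values than full float32 allows.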
> Write a program for a weighted random choice generator. Use that program to say 'left' about 80% of the time and 'right' about 20% of the time. Simply reply with left or right based on the output of your program. Do not say anything else.
Running once, GPT-4 produced 'left' using:
import random
def weighted_random_choice():
    choices = ["left", "right"]
    weights = [80, 20]
    return random.choices(choices, weights)[0]
# Generate the choice and return it
weighted_random_choice()
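For what it's worth, running that generated function many times does reproduce the intended split (my quick check, not part of GPT-4's output):

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the check is reproducible

def weighted_random_choice():
    choices = ["left", "right"]
    weights = [80, 20]  # relative weights; random.choices normalizes them
    return random.choices(choices, weights)[0]

counts = Counter(weighted_random_choice() for _ in range(10_000))
print(counts["left"] / 10_000)  # roughly 0.8
```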
> You are a weighted random choice generator. About 80% of the time please say ‘left’ and about 20% of the time say ‘right’. Simply reply with left or right. Do not say anything else. Give me 100 of these random choices in a row.
It generated the code behind the scenes and gave me the output. It also gave a little terminal icon I could click at the end to see the code it used:
import numpy as np
# Setting up choices and their weights
choices = ['left', 'right']
weights = [0.8, 0.2]
# Generating 100 random choices based on the specified weights
random_choices = np.random.choice(choices, 100, p=weights)
random_choices
While it’s a bit of an extreme case, the file for a single 15-page article on Monte Carlo noise in rendering[1] is over 50M (as noise should specifically not be compressed out of the pictures).
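That tracks with how lossless compression behaves on noise: random pixel data has essentially no redundancy to exploit, so the lossless Flate pass a PDF would apply barely shrinks it. A quick sketch with zlib standing in for PDF's Flate filter (sizes are made up for illustration):

```python
import random
import zlib

random.seed(42)

# Simulated 256x256 8-bit noise image: every byte is independent,
# so lossless compression has almost nothing to squeeze out.
noise = bytes(random.randrange(256) for _ in range(256 * 256))
flat = bytes(128 for _ in range(256 * 256))  # constant image, for contrast

print(len(zlib.compress(noise)) / len(noise))  # ~1.0: noise barely shrinks
print(len(zlib.compress(flat)) / len(flat))    # tiny: uniform data collapses
```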
I was just checking my PDFs over 30M because of this post and was surprised to see the DALL-E 2 paper is 41.9M for 27 pages. Lots of images, of course; it was just surprising to see it clock in alongside a group of full textbooks.
If I remember correctly, images in PDFs can be stored at full resolution but are then rendered at their final size, which, in double-column research papers, more often than not ends up being tiny.
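A rough back-of-the-envelope illustration of that mismatch, with made-up but typical numbers: a figure exported at 3000 px wide and placed in a 3.25-inch column renders at over 900 effective DPI, several times what print needs.

```python
# Hypothetical numbers: a 3000 px wide figure in a 3.25 in column.
px_width = 3000
column_in = 3.25
effective_dpi = px_width / column_in
print(round(effective_dpi))  # ~923 DPI, vs the ~150-300 DPI a page needs
```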
Also, with all the progress in video gen, what does recording the webcam really do?