Hacker News | netgusto's comments

This video from Linus Tech Tips presents some alternatives to Raspberry Pi 4: https://www.youtube.com/watch?v=uJvCVw1yONQ


It’s worth noting that this is a compiler for the Tiny-C language, and not, as one might think, a tiny compiler for the C language.


It's probably better to call it an interpreter, since it will also run the program and print the values of all non-zero variables afterward.

Calling it a compiler is (to me) really stretching things; I can't see any code that emits the program in any other form, it's all aimed at evaluating (executing) it.

Edit: oops, I didn't read the code closely enough, it does emit code but only internally, that code is what gets executed. Thanks for the corrections!


It is a compiler rather than a direct evaluator, since it generates bytecode for a stack VM, and it also includes the interpreter for that bytecode (look at the bottom).
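As a toy illustration (not the article's actual code), the compile-then-interpret structure looks like this: a front end emits bytecode for a stack machine, and a separate loop executes it.

```python
# Toy sketch of the compile-then-interpret structure (not the article's
# code): an expression is "compiled" to stack-machine bytecode, and a
# separate interpreter loop executes that bytecode.

def compile_add(a, b):
    """Compile the expression a + b into stack-machine bytecode."""
    return [("PUSH", a), ("PUSH", b), ("ADD", None)]

def interpret(bytecode):
    """Execute bytecode on a value stack and return the result."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            rhs, lhs = stack.pop(), stack.pop()
            stack.append(lhs + rhs)
    return stack.pop()

print(interpret(compile_add(2, 3)))  # prints 5
```

Even in this tiny form, the two halves are separable: `compile_add` never executes anything, and `interpret` never sees source text.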


That’s more or less every interpreter. CPython compiles to bytecode before interpreting that, yet nobody would call it a compiler.


I think this is a question of interface vs. implementation.

Python, JavaScript, and other languages that are traditionally considered interpreted (but may do JIT compilation in their implementations) are used as if they were interpreted: to the user, there's no separate compilation step. You run python somefile.py or node somefile.js (or refresh a browser holding a page), and edits to the source code take effect on the very next invocation. Contrast this with C/C++ and Java, where nearly all implementations have an explicit compilation step.

The program in this article thus is an implementation of a compiler, but has the interface of an interpreter.
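For what it's worth, CPython makes its compile step directly observable: the built-in compile() returns a code object whose co_code attribute is VM bytecode, and the dis module disassembles it.

```python
import dis

# CPython compiles source to a code object (bytecode) before its VM
# ever runs anything.
code = compile("x = 1 + 2", "<string>", "exec")
print(type(code.co_code))  # the raw bytecode is a bytes object
dis.dis(code)              # human-readable disassembly of that bytecode
```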


In contrast, Java also does that, and I doubt most people think of Java as interpreted. So using a bytecode interpreter may not be the criterion most people use to decide this. Truthfully, I think it is all a bit arbitrary.


That is definitely a compiler and anyone with a CS degree would call it that if they were discussing its functionality, because that's technically what it is. (Referring specifically to the part which compiles Python to bytecode)

Your SQL database also has a compiler. SQL is compiled to an execution plan. Compile doesn't only mean "create a machine code executable file".
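A quick way to see this, using SQLite from Python's standard library: EXPLAIN QUERY PLAN prints the plan a query was compiled into.

```python
import sqlite3

# SQLite compiles each SQL statement into a program for its internal VM;
# EXPLAIN QUERY PLAN shows the access strategy the query was compiled to.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
for row in conn.execute("EXPLAIN QUERY PLAN SELECT name FROM users WHERE id = 1"):
    print(row)  # e.g. a SEARCH step using the integer primary key
```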


> That is definitely a compiler and anyone with a CS degree would call it that if they were discussing its functionality because that's technically what it is.

None of these assertions is correct.

> (Referring specifically to the part which compiles Python to bytecode)

So, referring specifically to something different from what I explicitly specified, it's called something else.

By that reasoning, a cow is a muscle and you are an acid.

> Your SQL database also has a compiler.

"Has a" and "is a" are rather different relationships.

> Compile doesn't only mean "create a machine code executable file".

You're the only person who made that assertion.


> None of these assertions is correct.

You should fix the Wikipedia article:

https://en.wikipedia.org/wiki/CPython

"CPython can be defined as both an interpreter and a compiler as it compiles Python code into bytecode before interpreting it."


It compiles to a sort of byte code that is executed by a stack based virtual machine.


Yes, a better title would be:

Compiler for the Tiny-C Language (2001)

In fact, that is exactly how the source code describes itself in the comments.


Sending good vibes to the engineers who will have to add Moon time support to the Java DateTime class.


Microsoft Excel will be the first to implement it.

"That's a nice looking number you got there... wouldn't it be a shame if I were to convert it to MOON TIME!"


I'll send good vibes to that one guy from tzdata who'll for sure maintain a Lunar Time Zone database that literally everyone on the globe will rely upon.

edit: Arthur David Olson and Paul Eggert, you're The Dudes, dudes.


Can't wait for Martian Time support

Then Java will have to support days longer than 24 hours.


Lunar days are longer than 24 hours (~29.5 earth days)


Earth days are longer than 24 hours (when daylight-savings "falls back")


This actually made me laugh out loud. Thank you :)


[developer comes out of dark room with updates to DateTime] noW wE nEeD ThIS bACKpoRTEd tO JaVa 8!!! [developer spontaneously combusts]


I find it revealing that while somebody with experience in songwriting finds the result dull and mediocre, I, who know next to nothing about the domain, find it nice and totally fine.

Same goes for other domains: philosophy, coding, writing, ...

It's telling me that this AI can, with minimal effort, generate content in many domains way better than an untrained human would, while not (yet?) reaching the level of experts in the domain.

It empowers all the non-experts in these many domains to touch things they never could have before. This is an amazing tool.


Sure. But in this case it’s worth remembering that you’re not just listening to ChatGPT’s work here. It wrote lyrics and some weird chord progressions.

The OP, who is himself the frontman for the Decemberists, applied his best effort to imbue that with a melody, a performance style, and the sonic character of his band. He could take material from a 5-year-old or a Markov chain and dress it up to sound like a fair Decemberists song.

If you took the same ChatGPT output to another musician, expert or otherwise, you might not have the same experience you’re having now.


I think your last point may hold for this genre of music, but not necessarily for pop. Pop musicians are extremely versatile; they sort of have to be to stay relevant. Take Ariana Grande, for example. She could sing basically anything in any style you want[0], but you may not even have to ask her, because we have efficient zero-shot voice cloning now[1]. Before you wonder whether this will work for music, check out this video of an AI-generated Eminem track[3].

[0] https://www.youtube.com/watch?v=ss9ygQqqL2Q

[1] https://arxiv.org/pdf/2301.02111.pdf

[2] https://aka.ms/valle (demos from one)

[3] https://www.youtube.com/watch?v=WtFNOSTTPYg

[4] https://github.com/NVIDIA/tacotron2 (used to create audio for 3)


The Eminem parody is too good!


> The OP, who is himself the frontman for the Decemberists

Wow, buried lede there. That should be in the title!


I read the article, pressed play on the song and thought "wow, the author did a great job of sounding like the Decemberists", then scrolled back up and saw the author... doh!


And at the other end of the scale, regular old digital tooling (sequencers, samplers, VSTs, chord transposers, autotune, DAWs and beginner DAW alternatives) had been doing the more important work of letting people who are not skilled musicians produce something resembling a song for a long time before LLMs. Some in a more user-friendly manner than others.

Stringing together some cliches and googling "what are some common chord progressions?" was never the barrier to humans becoming singer-songwriters and computers lowered more important barriers some time ago, though credit where it's due, the LLM is reasonably good at picking lines that rhyme.


Spot on.


I know nothing about songwriting, but I listened to a lot of Decemberists back in the day. Compared to their usual output, this is painfully bland. Their songs generally have a clever or spooky underlying theme, a strong sense of perspective, and often bits of cheeky humor. I understand what led the AI to write this as a Decemberists song, but it's like a second-grader's understanding of one. If this weren't sung by Colin Meloy himself, I would never peg it as such.


Colin Meloy’s strongest suit might be his vocabulary; I feel like most Decemberists songs hinge on a slightly rare but perfectly selected word.


ChatGPT can't do better than experts in terms of analysis because it doesn't have a theory of mind. In other words, it's not actually thinking "what would experts do" and trying to do the same or better. It's normal that it produces mediocre results, because that's what it does, in a way: it produces "normal-looking" text. I don't know enough to say whether it could do better if specialized (?), but I doubt it, as it cannot actually have an original idea, and knowing what is original is part of what makes an expert.


Theory of mind is super important here! Not just that of experts, but that of consumers. An actual author is always thinking about the audience. About what they'll find novel or interesting. About the best ways to inform, entertain, or delight them. About how to make them feel.

Merely producing "normal-looking" text (great and accurate phrase!) is an unclosed feedback loop. The songwriter here has to stop himself from driving that loop forward because he's so used to working through multiple drafts as he makes things work for his internal simulation of his audience. And generally, once an artist has something that works internally, they'll then start running it by actual other people to get their reactions. Up to and including testing material on full audiences.

Because LLMs lack internal simulations of audience response, they'll always be limited to producing "normal-looking" work.


"Theory of Mind May Have Spontaneously Emerged in Large Language Models"

https://arxiv.org/abs/2302.02083

'Theory of mind (ToM), or the ability to impute unobservable mental states to others, is central to human social interactions, communication, empathy, self-consciousness, and morality. We administer classic false-belief tasks, widely used to test ToM in humans, to several language models, without any examples or pre-training. Our results show that models published before 2022 show virtually no ability to solve ToM tasks. Yet, the January 2022 version of GPT-3 (davinci-002) solved 70% of ToM tasks, a performance comparable with that of seven-year-old children. Moreover, its November 2022 version (davinci-003), solved 93% of ToM tasks, a performance comparable with that of nine-year-old children. These findings suggest that ToM-like ability (thus far considered to be uniquely human) may have spontaneously emerged as a byproduct of language models' improving language skills.'


I see a lot of confident assertions of this type (LLMs don’t actually understand anything, cannot be creative, cannot be conscious, etc.), but never any data to substantiate the claim.

This recent paper suggests that recent LLMs may be acquiring theory of mind (or something analogous to it): https://arxiv.org/abs/2302.02083v1

Some excerpts:

> We administer classic false-belief tasks, widely used to test ToM in humans, to several language models, without any examples or pre-training. Our results show that models published before 2022 show virtually no ability to solve ToM tasks. Yet, the January 2022 version of GPT- 3 (davinci-002) solved 70% of ToM tasks, a performance comparable with that of seven-year-old children. Moreover, its November 2022 version (davinci-003), solved 93% of ToM tasks, a performance comparable with that of nine-year-old children.

> Large language models are likely candidates to spontaneously develop ToM. Human language is replete with descriptions of mental states and protagonists holding divergent beliefs, thoughts, and desires. Thus, a model trained to generate and interpret human-like language would greatly benefit from possessing ToM.

> While such results should be interpreted with caution, they suggest that the recently published language models possess the ability to impute unobservable mental states to others, or ToM. Moreover, models’ performance clearly grows with their complexity and publication date, and there is no reason to assume that their performance should plateau anytime soon. Finally, there is neither an indication that ToM-like ability was deliberately engineered into these models, nor research demonstrating that scientists know how to achieve that. Thus, we hypothesize that ToM-like ability emerged spontaneously and autonomously, as a byproduct of models’ increasing language ability.


> … but never any data to substantiate the claim

There is!

https://arxiv.org/abs/2301.06627

From the paper:

> Based on this evidence, we argue that (1) contemporary LLMs should be taken seriously as models of formal linguistic skills; (2) models that master real-life language use would need to incorporate or develop not only a core language module, but also multiple non-language-specific cognitive capacities required for modeling thought.


Thanks for this, I’ll give it a read.


I find this interesting not from the perspective of LLMs, but because it seems to imply that human language is a prerequisite for self-awareness. Is that really so?


Can we think without language? - https://mcgovern.mit.edu/2019/05/02/ask-the-brain-can-we-thi...

> Imagine a woman – let’s call her Sue. One day Sue gets a stroke that destroys large areas of brain tissue within her left hemisphere. As a result, she develops a condition known as global aphasia, meaning she can no longer produce or understand phrases and sentences. The question is: to what extent are Sue’s thinking abilities preserved?

> Many writers and philosophers have drawn a strong connection between language and thought. Oscar Wilde called language “the parent, and not the child, of thought.” Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world.” And Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.” Given this view, Sue should have irreparable damage to her cognitive abilities when she loses access to language. Do neuroscientists agree? Not quite.

The Language of Thought Hypothesis - https://plato.stanford.edu/entries/language-thought/ (which has a long history going back to Augustine)

---

If 23 year old me were here now and considering future life paths, I'd be sorely tempted to be looking at declaring/finishing a dual CS/philos major and going to grad school.


Do these tests imply that a theory of mind requires human language any more than a mirror test implies that self-awareness requires eyeballs?


It's a reasonable hypothesis. Being good at calculating 'What would happen next if {x}' is a decent working definition of baseline intelligence, and the capabilities of language allow for much longer chains of thought than what is possible from a purely reactive approach.

Entering the realm of Just My Opinion: I wouldn't be surprised at all if internal theory of mind is simply an emergent property of increasing next-token prediction capability. At a certain point you hit a phase change, where intelligence loops back on itself to include its own operation as part of its predictions, and there you go: (some form of) self-awareness.


This paper is worth a HN submission in its own right. Or did it have one already?


I came across this paper elsewhere, but it looks like someone posted it today: https://news.ycombinator.com/item?id=34756024

There was also a larger discussion from a few days ago: https://news.ycombinator.com/item?id=34730365


> It empowers all the non-experts in these many domains to touch things they never could have before.

Nothing stops anyone from writing a mediocre song. Hell, it's one of the easiest things in creation to do. Hum a melody, make some words fit. Write it down, make up your own notation for the melody if you want; you don't even have to perform it. Congratulations! You're in the same field as Colin Meloy.

ChatGPT isn't even doing this much. It's read all the tabs and lyrics of every song within its reach until 2021, and then spits out an average response based on average similar requests. It's a songwriter in the same way a 7-year-old telling you a story about an alligator who ate a city is an author.


So basically the pitch deck for Godzilla? Even including “and then a HUGE MOTH appears and Godzilla shoots a huuuge lazer out of his mouth. And then the moth whose name is uhh Mothra screeches in pain and falls into a skyscraper.”

I think you’re doing seven year olds and GPT a disservice. Lots of iconic media is made by adults who have managed to hold on to their seven year old storytelling brain.


I'm sure the ability to empathise with seven-year-olds has helped a fair few adults have successful careers in filmmaking, but the top-line pitch deck for Godzilla resembles the writing and production of a successful blockbuster film (even a really cheesy one) in roughly the same way as "I want Facebook for cats" resembles a computer program.


Don't hate the player, hate the game.


The pitch deck for Godzilla created by GPT wouldn't include anything like Godzilla, because it wouldn't have had anywhere to copy it from. There aren't many stories substantially like Godzilla before Godzilla. GPT could pitch the 63rd Godzilla clone, which would turn into a romcom halfway through after the machine forgot what it was talking about.


Uh, what? Stories about dragons go back to antiquity.


> It empowers all the non-experts in these many domains to touch things they never could have before. This is an amazing tool.

Possibly touch things they don't know much, or even anything, about? That, combined with factual errors, is not really a recipe for good. What is good is what makes us think, and that is certainly possible with LLMs and AI. Think interactive fiction, think collaborations of sorts between humans and machines, and so on. The current trend of generating a remix of a rehash is, I hope, just a temporary flexing of what's possible, and not what we'll get once the hype dies down a bit.


I think it lulls people into a false sense of confidence that they're producing anything of lasting value. I'll concede that it certainly enables the layman to be more productive... at least in terms of sheer volume.

Given that GPT represents a distillation of the sum of human output on the internet, is it truly surprising that 99% of the generated stories, lyrics, etc. are rather mediocre?


I mean, if you've listened to a couple of Decemberists albums, that criticism extends fully. There are plenty of dull, mediocre, and repetitive songs throughout. I'm listening and reacting to this one now. It would definitely feel like a filler song. It lacks the kind of juxtaposition of positionality that Decemberists songs have; it's there, but it comes in late and is a bit weak. The chord transitions are also a bit boring. But if you asked me to tell the difference between this and some other filler of theirs that I know of, I probably couldn't.

It's also relentlessly positive in a way that Decemberists songs rarely are, both acoustically and lyrically. Having played with ChatGPT a bit, that seems to be a major part of its training: it avoids negativity in big ways. The Decemberists don't, necessarily, and so it feels very gushy and bright in a way that it shouldn't.


Agree that the Decemberists have some comparatively mediocre filler songs on their albums, but I struggle to think of one that's this bad.


I mean how much time did they really put into this though?

Don't forget that beauty is in the eye of the beholder, too. I had never heard the Decemberists before this. Colin has a beautiful voice, but I find all of their music that I just listened to pretty much mediocre. Mediocrity, though, is a property of my interpretation of the art, not of the art itself.

I don't think song lyrics are a strength of ChatGPT, though. I actually would love to be able to rap, but any rap lyrics it comes up with are overly simplistic in their rhyme structure.

The average lyric across the space of all lyrics is practically the definition of mediocrity.


Just made this same argument without reading your comment. I do wonder if musicians will start using this in studios, especially pop acts with one or two hits, to help fill out the rest of the album.


A note on the mediocrity:

We have to be clear on why Colin Meloy finds the song "mediocre" (lit. middle of the road: not bad, but not special). I don't know Colin personally, so of course I can't really make a definitive statement, but my own attitude toward the song is informed by its literalness. And this probably doesn't mean much to someone who has never tried to write a song or a book before, so I wanted to comment on it.

People always have these excited memories of their childhood favorites: their favorite songs, their favorite books, the things they really connected with. But the craft in the design of these works is usually that the thing you connected with is not literally in the artwork. Instead, the artwork sits around it; the artwork gestures at it. You have an idea of this character as a lonely, solitary brute whom 15-year-old you really identified with, because you had just come out of hitting puberty early or whatever, but if you really look at the character descriptions, it turns out that most of your impression is formed by other people in the narrative wanting the character to say something, and they just don't.

This creates a really plastic space which people mold to their own viewpoint; they take it and make it relatable to them. Eco calls a novel a “lazy machine”: to make it really special, I need to trust that laziness; I need to trust my reader to fill in the gaps that I do not. For instance, the hulking brute above: did you understand that to be a guy or a girl?

When Dave Matthews charted, it was with “Take these chances / Place them in a box until a / Quieter time, lights down, you up and die.” What chances? What box? What quieter time? We get vignettes of a humdrum existence, of dreams of how it was simpler, and it resonated because it tapped into a nostalgic yearning for a simpler life, a fatalistic concern about the finitude of life... But the character is only seen in dotted-outline silhouette, a blank slate that we paint with our own stories.

Same when Colin sings “And nobody nobody knows / Let the yoke fall from our shoulders / Don't carry it all, don't carry it all / We are all our hands and holders / Beneath this bold and brilliant sun / And this I swear to all.” What yoke? What sun? Who is “we”? The song sketches its theme, a theme of goodbyes, of picking up where others left off, of being assured that they will pick up what you leave off, of community and family and transition. But again, the details are just gestured at, you are invited to set yourself into the narrative, personalize and connect.

ChatGPT’s chorus is “Of sailors brave, and adventures untold / Of a life on the waves, and a heart grown old / Of a world of discovery, waiting to be sought / In a song that will live, when he is not.” What sailors? Well, the companions of the Mariner who shall not be named, he apparently had splendid friends. What adventure? Again, the adventure of the Mariner, who apparently had a splendid time. The only hope for the song is that the “song that will live” is going to be presented in this work: that we are going to see a key change and the Old Man’s Song will turn out to be the climax of the piece, we will dramatically insert ourselves and become the Old Man, dramatically ending with some crescendo, “these splinters I will cling to / the battered sail fills up with might / these are the fallen friends I drink to: / I go out alone, but I'm not dying tonight!” and for just one hair-on-end minute we feel that Odyssean determination to set out on one last ambitious venture before we die. But of course we never get that because ChatGPT is trying to create something cohesive where we want something emotional.


This was great, solid analysis. If you want to critique some songs written by a human, let me know!


This is textbook disruptive innovation.



The 2500 also has the highest number of drunk drivers per capita in the US. Truly, a terrifying vehicle.


It’s not a good buy, so the people who get them by nature have poor impulse control. I’d love to see a chart of car model vs. average credit rating sometime.


This is not claimed by the article, afaict. It does state that the JSON spec/syntax is simple, and that JSON is simple as a language; my interpretation is that this means "simple/understandable/predictable for humans".


Funny that the story is told by Sherlock and not Watson; that's not typical of the series, afaict.


I guess ChatGPT, like most of the Internet it was trained on, hasn't actually read Sherlock Holmes and only knows about it through pop culture references.


The stories are probably in there somewhere, but they're in there once or twice, while the pop-culture Sherlock is in there thousands of times. That's got to weight things. (I just asked it to produce an imaginary Shakespeare dialogue. The results were not great.)


There are a couple of original stories where Holmes is the narrator though. These are "The Adventure of the Blanched Soldier" and "The Adventure of the Lion's Mane".


Those are part of this collection: https://en.wikipedia.org/wiki/The_Case-Book_of_Sherlock_Holm...

I thought they were all narrated by Holmes. Just checked https://www.gutenberg.org/ebooks/69700 and nope, I was remembering wrongly!


First thing that jumped out to me.


I made something similar for myself, for display in my shell prompt: https://github.com/netgusto/tax


For key/value storage, there's Cloudflare KV:
https://developers.cloudflare.com/workers/runtime-apis/kv/

For document storage, Durable Objects is amazing:
https://developers.cloudflare.com/workers/runtime-apis/durab...
https://blog.cloudflare.com/durable-objects-easy-fast-correc...

For relational data, there's now D1 (open beta):
https://developers.cloudflare.com/d1/

For bulk storage, there's R2:
https://developers.cloudflare.com/r2/


There's https://scalingo.com/. Works well.

