Not sure about other Lisps, but Clojure has piping (the `->` threading macro). I was under the impression that composing functions is pretty standard in FP in general. For example, the above can be written:
(-> (* 2 Math/PI x) Math/sin Math/sqrt Math/log)
Also, while `comp` in Clojure composes right to left, it is easy to define a left-to-right version. And if anything, it uses fewer parentheses than the OOP example: O(1) vs O(n).
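A minimal sketch of both points, using the `clojure.math` wrappers so the functions can be passed as values (the name `pipe` is just an illustrative choice, not a core function):

```clojure
(require '[clojure.math :as m])

;; a left-to-right composition is one line on top of comp:
(defn pipe [& fns]
  (apply comp (reverse fns)))

(def x 0.1)

;; threading macro: value flows left to right through each form
(-> (* 2 m/PI x) m/sin m/sqrt m/log)

;; same computation via the left-to-right composer
((pipe m/sin m/sqrt m/log) (* 2 m/PI x))
```

Both expressions evaluate to the same result; the only difference is whether the pipeline is spelled as a macro expansion or as a first-class composed function.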
Oh, good, so when that happens they'll be making some flimsy claims about conservation and combating climate change to justify creating a poor imitation of Jurassic Park that wealthy and middle class people can visit.
I think people are conflating (fiction writer) Michael Crichton's claims of Jurassic Park being for the wealthy and well-connected with the real-world economics of a zoo.
As people have noted in other threads: zoos generally maximize revenue by being a place the public can afford to go. They also maximize community value, which is its own ineffable thing that matters a lot. The economics of zoos aren't just in dollars and cents; they're also in the local community thinking of them as a shelter, a care space, and an opportunity to see exotic animals without going to another continent, and not as an animal prison and a blight on the community. Those are the kinds of opinions that matter when zoos need more land to operate or want to form research or educational partnerships with neighboring institutions.
As far as I can tell, the idea of a dinosaur zoo as an exotic locale on its own island is... pretty much a whole-cloth invention by Michael Crichton. It's based loosely on Disney, and even Disney's first two theme parks are places the public can drive to (and Disney works hard to keep prices down against the onslaught of the supply-demand curve of "very few parks that everyone wants to visit at least once in their lives"). It's an idea very detached from reality, and I'm pretty sure it was a plot device to make sure the characters were trapped on the island instead of being able to just walk to the gate and drive away.
>If that is your goal, you couldn't be doing a worse job of it.
Well, this comment of yours doesn't help either.
Genuinely curious whether you read the rest of what I wrote and have any thoughts (objections? agreement?) on the specific things I said, or whether you just stopped at that first sentence to write this (content-free) response.
If I'm being entirely honest, in the general case I don't.
But I don't particularly care, either. After a couple of tries I decided it's better not to point at specific examples of suspected LLM text all the time (except, e.g., to report it on Stack Overflow, where it's against the rules and where moderators will use actual detection software etc. to try to verify). But I still notice that style of writing instinctively, and it still automatically flips a switch in my brain to approach the content differently. (Of course, even when I'm confident that something was written by a human, I still, e.g., try to verify terminal commands against the man pages before following instructions I don't understand.)
Of course, AI writes the way it does for a reason. More worryingly, it increasingly seems like (verifiably) human writers are mimicking the style: they see so much AI-generated text out there that sounds authoritative that they start using the same rhetorical techniques to gain the same air of authority.
I think this is an excellent question and one people should be asking themselves frequently. I often get the impression that commenters have not considered this.
For example, whenever someone on the internet makes a claim about "most x" (most people this, most developers that): what does anyone actually know about "most" anything? I think the answer is "pretty much nothing".
Yes, this is an important point. Insert the survivorship bias plane picture that always gets posted when someone makes this mistake on other platforms (Twitter). We can be accurate at detecting poor AI writing attempts, but not know how much AI writing is good enough to go undetected.
Someone should run a double-blind test app. There was an adversarially crafted one for images, and people still only averaged around 60% accuracy. Yet we all assume we can just glance at the text and detect AI generation, the way some experts can let logs scroll by and spot a problem.