For it to be evidence, you would need to know how many comments Greptile made in total and how many of those were considered poor. You have to contrast the false positive rate with the true positive rate just to plot a single point along a classifier curve. You would then need to compare that against a control group of experts or a static linter, which means varying the "conservativeness" of the classifier to produce multiple points along its ROC curve; only then could you tell whether the classifier is better or worse than your control by comparing the ROC curves.
The number of true positives in a sample says more or less nothing on its own.
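For reference, these are the standard definitions (not numbers reported anywhere in this thread); a single ROC point needs both ratios, and both ratios need all four confusion-matrix counts:

    TPR = TP / (TP + FN)
    FPR = FP / (FP + TN)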
For such a literal case, automatic translations generally suffice. The real translator touch comes about when there is some nuance to the language.
Was that a double entendre or not? If it was not, a literal translation gets the meaning across; if it was, a literal translation will not. The converse also matters: if it was not a double entendre but you translate it as one, you may confuse the message, and if it was and you translate it as such, the human connection can be maintained.
That is also the tricky bit where you cross from being proficient in the language (say B1-B2) to fluent (C1-C2): you start to know these double meanings and nuances and can pick up on them. You can also pick up on them when they weren't intended and make a rejoinder (which may flop or land depending on your own skill).
If you are constantly translating with a machine, you won't really learn the language. You have to step away at some point. AI translation presents that problem in full: a translated text with the voice removed; the voice of AI is all of us, and that sounds like none of us.
> The real translator touch comes about when there is some nuance to the language.
And as we all know, legal language is famous for having no nuance whatsoever: there are no opaque technical terms with hundreds of years of history behind their usage, there is no difference between the legal systems of different countries, and there is no possible difference in case law or the practicalities of legal enforcement. /sarcasm
What is clear to me is that in a situation like this neither AI translation nor human translation is sufficient. What the imagined American signing an important legal document in the Czech Republic needs is a lawyer practicing in the Czech Republic who speaks a language the imagined American also speaks.
As someone who has been in that situation before, ‘literal’ translation is not actually a thing. Words and phrases have different meanings between legal systems.
You need a certified translation from someone who is familiar with both legal systems or you’re going to have a very bad time.
Which I think you know from the second part of your statement.
Legal documents likely have much more impact than a random chat with a stranger.
The biggest friction I experience with Rust closures is their inability to be generic: I cannot implement a method that takes a closure generic over its argument(s).
So I'm forced to define a trait for the function, define a struct (the closure) to store the references I want to close over, choose the mutability and lifetimes, instantiate it manually, and pass that. The implementation of the method (which may be only a few lines) then isn't located inline, so readability suffers.
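A minimal sketch of that hand-rolled pattern (the names here are made up for illustration), where the same captured state gets applied to arguments of different types:

    // Rust closures cannot be generic over their argument types, so the
    // "function" becomes a trait with a generic method.
    trait Format {
        fn call<T: std::fmt::Display>(&self, value: T) -> String;
    }

    // The "closure": captured state lives in explicit fields with an explicit lifetime.
    struct Prefixed<'a> {
        prefix: &'a str,
    }

    impl<'a> Format for Prefixed<'a> {
        fn call<T: std::fmt::Display>(&self, value: T) -> String {
            format!("{}{}", self.prefix, value)
        }
    }

    // The method that wants a closure generic over its argument:
    // it can apply `f` to arguments of different types.
    fn describe(f: &impl Format) -> (String, String) {
        (f.call(42), f.call("forty-two"))
    }

    fn main() {
        let prefix = String::from("value: ");
        let f = Prefixed { prefix: &prefix }; // manual instantiation instead of |x| ...
        println!("{:?}", describe(&f));
    }

Everything a closure would normally infer (the trait, the capture struct, the lifetime) has to be spelled out, and the body ends up far from the call site.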
Ha, yes, I see what you mean now. That's not really the closure's fault but the monomorphization of the foo function. The specific thing you want to do would require boxing the value, or more involved typing.
If you recognize that what we remember are the extremized, strawman versions of the complaints, then you can see that they were not wrong.
Writing did eliminate the need for memorization. How many people could quote a poem today? When oral history was predominant, someone in each tribe had to learn the stories; we have much less of that today. Writing preserves accuracy much better (up to conquerors burning down libraries, whereas before it would have taken a genocide), but to hear a person stand up and quote Desiderata from memory is a touching experience of the human condition.
Scribes took over that act of memorization. Copying something lends itself to memorization. If you have ever volunteered extensively for Project Gutenberg you can witness a similar experience: reading for typos solidifies the story in your mind in a way that casual reading doesn't. In losing scribes we lost the prioritization of texts and a class of person with intimate knowledge of important historical works. With the addition of copyright we have even lost some texts. We gained higher availability of works and lower marginal costs. The lower marginal costs led to...
Pulp fiction. I think very few people (but I would be disappointed if it were no one) would argue that Dan Brown's The Da Vinci Code is on the same level as War and Peace. From here came magazines on even cheaper paper, "rags" some would call them (or reserve that word for tabloids). Of course this also enabled newspapers to flourish. People started to read for entertainment, and text lost its solemnity. The importance of the written word diminished on average as the words being printed became more banal.
TV and the internet led to the destruction of printed news, and so on. This is already a wall of text so I won't continue, but you can see how it goes:
Technology is a double-edged sword: we may gain something, but we can and did also lose things. Whether it was progress is generally a normative question that a majority often agrees on in one sense or another, but there are generational differences in those norms.
In the same way that overuse of a calculator leads to atrophy of arithmetic skills and overuse of a car leads to atrophy of walking muscles, why wouldn't overuse of a tool that writes essays for you lead to atrophy of your ability to write an essay? The real reason to doubt the study is that its conclusion seems so obvious that it may be too easy for some to believe, hiding poor statistical power or p-hacking.
I think your take is almost irrefutable, unless you frame human history as the only possible way to have achieved humanity's current status and (unevenly distributed) quality of life.
I also find exhausting the Socrates reference that's ALWAYS brought up in these discussions. It is not the same. Losing the collective ability to recite a 10,000-word poem by heart because of books is not the same thing as no longer thinking because an AI is doing the thinking for you.
We keep adding automation layers on top of the previous ones. The end goal would be _thinking_ of something and having it materialize in computer and physical form. That would be the extreme. Would people keep comparing it to Socrates?
In addition to what others are telling you, Kagi also allows you to
- filter out results from specific websites that you can choose,
- show more results from specific websites that you can choose,
- show fewer results from specific websites that you can choose,
and so forth. When you find your results becoming contaminated by some new slop farm, you can simply eliminate it from your results. Google could do that too, but their business model seems to rely more on showing slop results whose third-party pages carry Google's ads.
Just as with mobile phone providers, third parties can add a lot of value by reselling infrastructure. Business models can differ, and so can feature sets. This is not a delusion but the reality of reselling.
Arrays are syntactic sugar over something that resembles a function, sure.
The real signature of an array would be something like V: [0, N) -> T, but that implies you need to statically prove that each index i in V[i] is less than N, so your code would be littered with guards for dynamic indexing. Also, N itself will be dynamic, so you need some (at least limited) dependent typing on top of this.
Not wanting those things in your language, you just accept the domain as some integer type, but now you don't really have V: ℕ -> T, since for i ≥ N there is no value. You could choose V: ℕ -> Maybe<T>, but then even cases where i is provably less than N are littered with guards, so this cure is worse than the disease. Same if you choose V: ℕ -> Result<T, IndexOutOfBounds>. So instead you panic or throw, and now you have an effect, which isn't really modeled well as a function (until we start calling the code after it a continuation and modify the signature, and...).
So it looks like a function if you squint or are overly formal with guards or effects, but the arrays of most languages aren't that.
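Rust happens to expose both choices side by side, which makes a convenient sketch of the trade-off: Vec::get is the ℕ -> Maybe<T> signature, while plain indexing is the panicking effect.

    fn main() {
        let v = vec![10, 20, 30];

        // The Maybe<T> codomain: a partial function made total by wrapping the result.
        let missing: Option<&i32> = v.get(5);
        assert!(missing.is_none());

        // Even a provably in-range access pays the unwrapping tax under this signature.
        let second = *v.get(1).expect("index 1 is in range");

        // The effect choice most languages make: indexing panics when i >= N.
        let also_second = v[1];
        // let boom = v[5]; // would panic: index out of bounds

        assert_eq!(second, also_second);
    }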
> for one of the best ways to improve a language is to make it smaller.
> but that implies you need to statically prove that each index i in V[i] is less than N, so your code would be littered with guards for dynamic indexing.
You just need a subrange type for i. Even Pascal had those. And if you have full dependent types you can statically prove that your array accesses are sound without requiring bounds checking. (You can of course use optional bounds checking as one of many methods of proof.)
Yes, as I said, you must use bounds checking, dependent types, effects, or monadic returns.
Arrays are the effect choice in most languages. The signature as a function becomes gnarly continuation passing if you insist on the equivalence, so most people just tend to think of it imperatively.
Functions are partial in most programming languages, so the fact that arrays are best modelled as partial functions (rather than total functions) isn’t a huge obstacle.
An anonymous inner class is also ephemeral, declarative, inline, capable of extending as well as implementing, and readily readable. What it isn't is terse.
Mocking's killer feature is the ability to partially implement/extend by providing defaults that make some sense in a testing situation, while being easily instantiable without calling a super constructor.
MagicMock in Python is the single best mocking library, though; too many times have I really wanted Mockito to also default to returning a mock instead of null.
Yeah, it's funny; I'm often arguing in the corner of being verbose in the name of plainness and greater simplicity.
I realise it's subjective, but this is one of the rare cases where I think the opposite is true, and using the 'magic' thing that shortcuts language primitives in a sort-of DSL is actually the better choice.
It's dumb, it's one or two lines, it says what it does, there's almost zero diversion. Sure, you can do it by other means, but I think the (what I will claim is) 'truly' inline style of Mockito code is actually a material value add in readability and grokability when you're just trying to debug a failing test you haven't seen in ages, which is basically the use case I have in mind whenever I write test code.
I cannot put my finger on it exactly either. I also often find the mocking DSL the better choice in tests.
But when there are many tests where I instantiate a test fixture and return it from a mock when the method is called, I start to think that an in-memory stub would have meant less code duplication and boilerplate... When some code is refactored to use findByName instead of findById and a ton of tests fail because the mock knew too much implementation detail, then I know it should have been an in-memory stub implementation all along.
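The thread is about Mockito, but the stub-versus-mock contrast is language-agnostic; here is a rough Rust sketch with hypothetical repository names showing why the refactor breaks one test double and not the other.

    use std::collections::HashMap;

    // Hypothetical port used by the code under test.
    trait UserRepo {
        fn find_by_id(&self, id: u32) -> Option<String>;
        fn find_by_name(&self, name: &str) -> Option<String>;
    }

    // Mock-style double: only the call the test expects is stubbed; everything
    // else falls back to a default, like Mockito returning null.
    struct FindByIdMock {
        expected_id: u32,
        fixture: String,
    }

    impl UserRepo for FindByIdMock {
        fn find_by_id(&self, id: u32) -> Option<String> {
            (id == self.expected_id).then(|| self.fixture.clone())
        }
        fn find_by_name(&self, _name: &str) -> Option<String> {
            None // silently wrong once production code switches to this lookup
        }
    }

    // In-memory stub: a tiny real implementation that answers either lookup,
    // so a findById -> findByName refactor does not touch the tests.
    struct InMemoryUserRepo {
        users: HashMap<u32, String>,
    }

    impl UserRepo for InMemoryUserRepo {
        fn find_by_id(&self, id: u32) -> Option<String> {
            self.users.get(&id).cloned()
        }
        fn find_by_name(&self, name: &str) -> Option<String> {
            self.users.values().find(|u| u.as_str() == name).cloned()
        }
    }

    fn main() {
        let stub = InMemoryUserRepo {
            users: HashMap::from([(1, "alice".to_string())]),
        };
        // Works regardless of which lookup the production code uses.
        assert_eq!(stub.find_by_id(1), stub.find_by_name("alice"));

        let mock = FindByIdMock { expected_id: 1, fixture: "alice".to_string() };
        assert!(mock.find_by_name("alice").is_none()); // the refactored path gets nothing
    }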
Eh, quantum computing could very well be the next nuclear fusion, where every couple of years, forever, each solved problem brings us back to "We're 5 years away!"
Still, we should certainly keep funding both quantum computing and nuclear fusion research.