It kind of reads like you have discredited frequency-domain compression algorithms. FYI, JPEG uses a block-based DCT, which is very similar to a Fourier transform.
That was not my intent. I just noted that my approach came from visually inspecting spectrograms and noting that they might be compressed quite well by image compression algorithms, only to notice in the end that lossy image compression completely butchers the signal with noise when you do that. Had I just used the original waveform as raw image data, that wouldn't have been as much of a problem. Compression would still have changed the samples, but overall probably not as badly.
Heck, I was 17 when I did that. I certainly didn't know much about the theory behind it, or why it would almost certainly fail. But I guess by now I wouldn't even attempt such craziness, so in a way it's probably a good thing not to know things too well, sometimes.
The problem is that we are much more sensitive to audio distortion than to visual distortion, particularly in dynamic content (e.g. video). Pretty much all lossy audio compression does a bad job without a psycho-acoustic model, whereas image compression works pretty well on simple metrics. It doesn't help that psycho-visual modelling is less well understood (than audio).
Interesting, but they seem to be treating all cultures the same rather than weighting the relative differences between them. Papua New Guinea may have a lot of cultures, but they are a lot more similar to each other than, for example, Indian and French cultures are.
It's really a common source of error; I was hit by it once, and lots of other people have been too. However, it's not unique to Go. I learned that behavior in C# (which captured loop variables by reference); I think they changed it in the meantime. It can also be encountered in JavaScript (if a var instead of a let binding is used for the loop variable).
I found it interesting that the old Java style guarded against that behavior, because it required captured things to be final, so you had to copy the thing you wanted to capture from the loop variable into a fresh final variable anyway.
The good thing is: if you encounter this behavior once in any programming language, you'll most likely research how loop variables interact with closures in every new one. So the golang behavior wasn't a new source of errors for me.
However, I still learned something new here: I didn't expect the difference in behavior between the go/defer statement() and go/defer func() { statement() }() variants.
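For anyone curious, here's a minimal sketch of that difference, assuming the shared loop variable semantics the article describes (newer Go releases changed this so each iteration gets its own variable): arguments to a deferred call are evaluated when the defer statement executes, while a closure captures the variable itself.

package main

import "fmt"

func main() {
    for i := 0; i < 3; i++ {
        // The argument is evaluated right here, so each deferred call
        // remembers the value i had in this iteration (prints 2, 1, 0).
        defer fmt.Println("value:", i)

        // The closure captures the variable i itself; with a shared
        // loop variable, every deferred call prints the final value 3.
        defer func() { fmt.Println("closure:", i) }()
    }
}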
In C++, you can explicitly specify whether lambdas capture by value or by reference, on a per captured variable basis. Most reasonable programmers would capture `int`s by value. For instance, this program is guaranteed to print all numbers 0..9, not necessarily in order:
#include <iostream>
#include <thread>
#include <vector>

int main()
{
    std::vector<std::thread> vec;
    for (int i = 0; i < 10; ++i)
        vec.emplace_back([i] () { std::cout << i; });  // [i] captures i by value
    for (auto & thr : vec)
        thr.join();
    std::cout << std::endl;
    return 0;
}
Reusing the loop counter variable across iterations in an imperative language is perfectly fine. The confusion comes from capturing a mutable environment by reference, which is a confusing (and hence bad) default.
I think the problem, really, is mutability, which leads to pretty unintuitive results.
The same is true for JavaScript. For example, at first glance you'd expect
var xs = []; for (var i = 0; i < 10; i++) { xs.push(function(){ return i; }); }
to be equivalent to
var xs = lodash.range(10).map(function(j){ return function(){ return j; }; });
but the first won't work as expected. It's dangerous because its loop is based on mutation, so you need to think of your variables as mutable containers rather than simple values, even for primitive types like integers.
> you need to think of your variables as mutable containers rather than simple values
Uhm, no. You think of variables as labels that happen to be attached to a value, and since they are variable (as in "they vary"), just like in all other languages that don't label themselves as "functional programming languages", they can be re-attached to other values. In a functional programming language (Haskell, OCaml, Scala?) you simply "can't re-attach the label to something else"; you just "create a new scope and inside it a new label with the same name that is attached to the same value".
This is the only sane way I found to think about these issues. Oh, and JavaScript's `let` is kind of like a "transplant" from a functional language to an imperative/procedural one ...a pretty cool transplant imho, since by putting it at the beginning of a block you get the "standard functional language behavior".
The only problem in Go is probably the `:=` that messes with people's intuition. They shouldn't have allowed it to work with mixes of already-defined and new variables on the left...
> This is the only sane way I found to think about these issues.
That seems a little unfair. The question here is whether i is treated as a value or a reference. In JavaScript, i would usually be treated as a value: passing or returning integers is done by value, appending integer i to some array on each loop iteration would append the value, and so on. Giving i reference semantics when building a closure is a departure from the way JS treats integers in most other contexts. It would not only be perfectly sane to close over i by value as well, it seems it would also be more intuitive given that the misunderstanding we're discussing must be one of the most common errors for intermediate JS programmers.
Now, if i were an Object rather than an integer, I think the argument for the current behaviour would be much stronger, because Objects are passed around by reference in other contexts in JS as well. (Personally, I strongly dislike the way a variable might implicitly have different semantics depending on its type within a dynamically-typed language, but that's a whole other debate.)
Unfortunately, changing the semantics for closing over a variable to be by-value in cases like this would also break lots of other idiomatic JS, including the whole idea of building its version of modular design around a function that has local variables and returns one or more closures over those variables. IMHO, it would have been better if the language had explicitly distinguished between values and references to allow those kinds of idioms without the counter-intuitive consequences in other contexts, but we have what we have for historical reasons and it's far too late to change it now.
10.times.map do |i|
  -> { puts i }
end.shuffle.each(&:call)
This makes 10 lambdas, each of which prints the current value of the loop variable, and then calls them in random order. Each puts sees a different i.
I don't think the behaviour in the article is really a problem with loop variables, but with defer. It is odd that defer packages up the function and its arguments when it is defined. Deferring printing i remembers the value of i at that time, whilst deferring printing &i remembers the (unchanging) address of i.
OTOH, having defer remember the values makes things like
f, _ := os.Open("file1")
defer f.Close()
// ... do stuff ...
f, _ = os.Open("file2")
defer f.Close()
close both files, which is probably what the programmer expected. I think that's pretty horrible code, but either way some people are going to find the results surprising.
Go's "problem" is that it can't efficiently give i a new address in each iteration: it'd have to put each one in a new memory location.
Go's problem here stems from the fact that it relies on C-style for loops for this kind of thing instead of iterators. With iterators you can define the language semantics to provide a fresh location on every iteration. It's yet another point against C-style for loops...
The block inside the loop must be kept in memory whilst there are still anonymous functions which reference the per-iteration loop variables. Loops are linear in space if such functions are being generated and retained in each iteration. If they aren't, the per-iteration stack frames are cleaned up by GC.
This applies to each-style loops. Actual Ruby for loops don't have this behaviour.
a = []
for i in 1..100
  a << ->{ puts i } # append, to a, a function that prints 'i'
end
a[0].call()
This prints 100, since the scope of i covers the entire loop, and not each iteration. They are, however, much faster than each-style loops, since they don't have to make a stack frame for each iteration.
I don't know enough about Go to be sure, but it seems that there are multiple possible semantics here, and the choice made by its designers does not seem to be inherently more efficient than the alternatives, except perhaps for some rarely-used cases.
Also (though this may be going off-topic), imagining what instructions might be emitted has been known to mislead, especially in the face of aggressive optimization. Looking at the actual compiler output may give you insight into the language's semantics, but reading the language's specification is probably an easier, quicker and more reliable route.
This definitely seems to vary among different languages, and is something I've always been annoyed with in C++.
Scoping outside the for loop is rarely useful, but often creates bugs. The one case where I like it is for searching an array and wanting the index, but even then it's not necessarily elegant.
I don't see this as a question of whether or not there is literally a new i created, but of whether you have access to i after the loop's last bracket. Here's an example:
for (int i=0; i<10; i++) {
    do_something(i);
}
// i is still valid which might be
// unexpected and cause a bug
vs. something that would look like this
{
    int i;
    for (i=0; i<10; i++) {
        do_something(i);
    }
}
// usage of i at this point would be
// a compiler error
I'd have to come up with a convoluted case where this would directly cause a bug, but it would be one where you reuse the variable i and forget to reinitialize it. Java does the second form, and forces you to declare the variable outside the loop if you want to use it in that context.
Saying "loop variables are scoped outside the loop" is incorrect.
All the gotchas noted stem from the fact that people are using _closures_ in the form of go and defer statements, which capture i by reference unless you pass it as an argument.
I think it's more an issue with closure gotchas (or the fact that go and defer statements aren't viewed that way) than with scoping.
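In Go, the usual workarounds make that copy explicit: either pass the loop variable as an argument so the goroutine gets its own copy, or shadow it with a fresh per-iteration variable. A rough sketch of both patterns (assuming the shared loop variable semantics under discussion):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    // Fix 1: pass i as an argument; n is a per-goroutine copy.
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            fmt.Println("arg:", n)
        }(i)
    }

    // Fix 2: shadow i with a fresh variable in each iteration.
    for i := 0; i < 3; i++ {
        i := i
        wg.Add(1)
        go func() {
            defer wg.Done()
            fmt.Println("copy:", i)
        }()
    }

    wg.Wait()
}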
How would this make any sense, though? In the case of "range" I can see the confusion, but when you explicitly declare a variable and increment it for each iteration of the loop, the only way it can work is if there's one instance of the variable.
It's the disparity between how a human thinks and how a machine thinks. Systems languages tend to care less about how humans think, which I prefer, because humans don't have consistently defined logic.
I don't think so. As programmers, we have to think logically, and in this situation there really is no other logical way of thinking about how that code might work.
Alibaba isn't a retailer like Amazon. They're a retail platform that serves ads; they're a lot more like Google than Amazon. Amazon actually makes money in retail, whereas Alibaba makes no money in traditional retail (~95% of their sales and income come from ads).
All of Russia and Belarus (and I bet the rest of the ex-USSR countries) shop on AliExpress/Alibaba. I would say neither Amazon nor Alibaba has a true global presence - each is represented in a few countries, without overlaps (for now).
But it is reach we are talking about here. I am from China, so I understand how crazy Singles' Day is and what an amazing job Alibaba has done to handle that amount of traffic.
However, the overwhelming majority of Alibaba's customers are still Chinese. On the other hand, before AWS took off, Amazon had already set up data centers in different countries and continents, because it was the leading online retailer there. The same reasoning applies to Google. Their cloud businesses could then easily benefit from those established investments.
I'm just weaseling out because I don't want to claim that every system that can draw Mandelbrots is Turing complete (not considering pathological cases like "draw_mandelbrot" keywords/functions/parameters).
Genuinely Turing-complete machines have infinite memory ("tape") and an arbitrarily large number of steps within which to complete an algorithm. I take "sufficiently Turing-complete" to mean that you can express an algorithm to the machine such that the algorithm can complete within the resources (RAM and time) that you have to give it.
Anyone who wants to actually build the board. USB 3.0 is a whole new ball game and has lots of gotchas with board layout & design. Check out Jared Boon's Daisho talk for some good background on USB 3.0 challenges: https://www.youtube.com/watch?v=eTDBFpLYcGA&t=1059