Apart from the fact that Scientific American has taken an unfortunate ideological bent in recent years and as such is no longer the "unbiased scientific source" it was once reputed to be, I don't think the claim that 'every major prediction to ever come out of it has been wrong' can be refuted by an article claiming that 'things are even worse than currently predicted'. It could be refuted by showing earlier predictions from climate science which did come true. This is probably what transcriptase was referring to when he made that claim: it is indeed hard to find historical climate predictions - made before the subject was politicised - which turned out to be true, while the field is littered with predictions which turned out to be wrong. From Ehrlich's famine forecasts through the new ice age scare of the '70s to acid rain, there are plenty of examples where things did not turn out as predicted.
Maybe you have some examples where the predictions actually came true? If so, please share them. It is much harder to find out when things went as predicted than when they did not, since the former gets nowhere near as much attention as the latter.
A downvote is not a vote of confidence in climate science but rather the opposite. I can only assume that there is no proof to be had of earlier predictive successes, and with that the original statement made by transcriptase is strengthened rather than weakened. Assuming that this is not the intended result, it would be good to get an actual answer to the request: give some examples of predictions close enough to the mark to matter.
You may think this is just a word game but there is more at play. Blind belief in the outcome of flawed models is a bad foundation for good science. Climate models are notorious for their dependence on 'fudge factors', magical constants which need to be introduced to make their outcome match the expected one. It is not clear what those fudge factors actually represent: they can be anything from a simple miscalculation of the effect of one of the inputs - i.e. something which does not change the predictive power of the model once the factor has been dialled in correctly - to an unknown variable input which has a substantial effect on the output. The latter can seriously affect the predictive power of the model, since it is by definition unknown whether the fudge factor is coupled to the output in some way, e.g. cloud cover affecting temperature sensitivity which in turn affects cloud cover, leading to uncertainty in the climate sensitivity of simulated inputs. Cloud cover is just one example; there are many other factors which can wreak havoc with the predictive capacity of complex and sometimes - often - poorly understood models.
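To make the fudge-factor point concrete, here is a toy sketch in Python. This is not a real climate model; every function and number in it is made up for illustration. A single constant is tuned so a simple model reproduces a 'historical' period, but in the underlying system the feedback strength actually depends on the state itself - the cloud-cover situation described above. In sample the tuned model looks fine; pushed outside its calibration range it quietly drifts away from the truth:

    # Toy illustration only - hypothetical numbers throughout.
    def step_true(temp, forcing):
        # Ground truth for this toy: the feedback strength drifts with
        # the state (standing in for cloud cover responding to warming).
        feedback = 1.0 + 0.1 * temp
        return temp + 0.1 * (feedback * forcing - temp)

    def step_tuned(temp, forcing, fudge):
        # Model under test: same structure, but the feedback is frozen
        # into a single tuned constant - the "fudge factor".
        return temp + 0.1 * (fudge * forcing - temp)

    def run(step, forcing, steps=200):
        # Iterate the model to its equilibrium temperature.
        temp = 0.0
        for _ in range(steps):
            temp = step(temp, forcing)
        return temp

    # Dial in the fudge factor against a "historical" low-forcing period.
    hist_truth = run(step_true, forcing=1.0)
    fudge = min((f / 100 for f in range(50, 300)),
                key=lambda f: abs(run(lambda t, x: step_tuned(t, x, f), 1.0)
                                  - hist_truth))

    # In sample the tuned model matches the truth almost perfectly...
    print("historical  truth %.2f  tuned %.2f"
          % (hist_truth, run(lambda t, x: step_tuned(t, x, fudge), 1.0)))
    # ...but under stronger forcing the hidden state-dependence bites
    # and the prediction drifts away from the true outcome.
    print("future      truth %.2f  tuned %.2f"
          % (run(step_true, 3.0),
             run(lambda t, x: step_tuned(t, x, fudge), 3.0)))

The point of the sketch is that the in-sample match tells you nothing about which of the two cases you are in: whether the constant is a benign correction or a disguised feedback can only be settled by understanding what it physically represents, which is exactly the part that is missing.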
Scientific American is becoming a borderline misinformation machine. You're spot on; don't mind the downvotes and flagging on HN. It's a badge of honor at this point.