Well, it's both dumb and smart: smart in the sense that it recognized the pattern in the first place, and dumb in that it made such a silly error (and missed obvious ways to make it shorter).
This is the problem with these systems: "roughly correct, but not quite, and ends up with the wrong answer". In the case of a simple program that's easy to spot and correct for (assuming you already know how to program well – I fear for students), but in softer topics that's a lot harder. When I see people post "GPT-4 summarized the post as [...]", it may be correct, or it may have missed one vital paragraph or piece of nuance that would drastically alter the argument.
"If you modify it, it will give the correct answer"