I find it depends on how you generate it. Asking Suno to make covers of uploaded recordings tends to give much, much better results than asking it to cook up a song from scratch. There are still quite a few tells that it's AI-made, but it's not bad at all, at least in my experience so far.
Yes, I'm already doing this manually with Reason. I'll compose something that's quite bare-bones, export the audio, and run it through Suno, asking it to cover and improvise on it in a specific style. When I have something I like, I split that into stems, import some or all of them into Reason, and then reconstruct and enhance the sound using Reason's instruments, mostly by replaying the parts I like on a keyboard and tweaking them in the piano roll. Often I get additional inspiration just by doing that. Once I've finished this process, I delete all the tracks that came from the Suno stems.
That way I get new musical ideas from Suno but without any trace of Suno in the final output. Suno's output, even with the v5 model, is never quite what I want anyway, so this approach makes the most sense to me. It also means there's no Suno audio watermark in the final product.
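(For anyone who wants the stem-split step to stay local too, here's a rough sketch using Demucs, an open-source source separator. The tool choice, file names, and output paths are my own assumptions, not part of the workflow described above; it assumes you've installed Demucs with pip.)

    # Rough sketch: split a mixed-down Suno render into stems locally with
    # Demucs instead of relying on Suno's own stem export.
    import subprocess
    from pathlib import Path

    def split_stems(mix_path: str, out_dir: str = "separated") -> Path:
        """Run the Demucs CLI on a mix and return the folder holding the stems."""
        subprocess.run(["demucs", "-o", out_dir, mix_path], check=True)
        # Demucs writes <out_dir>/<model_name>/<track_name>/{drums,bass,other,vocals}.wav
        return Path(out_dir)

    stems_root = split_stems("suno_cover.wav")  # hypothetical file name
    print(f"Import the stems under {stems_root} into Reason and rebuild from there.")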
This is similar to what I do. There are all kinds of useful ways to incorporate AI into the music production process. It should be treated like a collaborative partner, or any other tool/plugin.
It shouldn't be a magic button that does everything for you, removing the human element. What makes art art is a human consciously making decisions with intent, informed by life experience, to share a particular perspective.
We use AI-assisted coding to be more productive or to offload the boring stuff. If the 'making the music' part is what you're trying to get away from, why make music at all? At that point you're basically a shitty 'producer' (decent producers are amazing at the boring parts you're skipping and can fill out a track without hitting up a robot).
Music is math. Art is patterns. Just as we're using AI to iterate through design and code, musicians could use it to generate musical patterns: chords, harmonies, melodies, and rhythms. In theory, it could pull up and manipulate instruments and effects from a description rather than rifling through file names and parameters (i.e. the boring stuff).
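To make 'generating musical patterns' concrete, here's a minimal sketch that writes a I-V-vi-IV progression in C major to a MIDI file with mido, a general-purpose Python MIDI library. The library, voicings, and timing values are my own illustrative choices, not anything the comment above specifies.

    # Minimal sketch: emit a four-chord progression (I-V-vi-IV in C major)
    # as a standard MIDI file using mido (pip install mido).
    from mido import Message, MidiFile, MidiTrack

    PROGRESSION = [
        [60, 64, 67],  # C major (I)
        [67, 71, 74],  # G major (V)
        [69, 72, 76],  # A minor (vi)
        [65, 69, 72],  # F major (IV)
    ]
    BAR = 480 * 4  # one 4/4 bar at mido's default 480 ticks per beat

    mid = MidiFile()
    track = MidiTrack()
    mid.tracks.append(track)

    for chord in PROGRESSION:
        # All notes of the chord start together (delta time 0).
        for note in chord:
            track.append(Message("note_on", note=note, velocity=80, time=0))
        # The first note_off carries the whole bar's delta time; the rest follow immediately.
        track.append(Message("note_off", note=chord[0], velocity=0, time=BAR))
        for note in chord[1:]:
            track.append(Message("note_off", note=note, velocity=0, time=0))

    mid.save("progression.mid")

Drop the resulting file onto a DAW track and it plays back as block chords you can then edit in the piano roll.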
Most success as a musician stems from developing a unique style, having a unique timbre, and/or writing creative lyrics. Whether a coder, designer, artist, or musician, the best creatives start by practicing the patterns of those who came before. But most never stand out; they just keep following existing patterns.
AI is nothing more than mixing together existing patterns, so it's not necessarily bad. Some people just want to play around and get a result. Others will want to learn from it to find their own thing. Either way works.
With art and AI, people seem to enjoy the part where they say they made something and get credit for it, but didn't actually have to bother. People used to find other people's art on the internet and claim it as their own; now an AI can statistically generate it for you, and maybe that feels a bit less icky. Though I have to agree it all seems sort of pointless, like buying trophies for sports you didn't play.
Can it do differently styled covers of songs or improvisations on melodies the way Suno can?
As a musician, that's what I find most compelling about Suno. It's become a tool to collaborate with, to help test out musical ideas and inspire new creativity. I listen to its output and then take the parts I like to weave into my own creations.
The AI music tools that generate whole songs out of prompts are a curious gimmick, but are sorely lacking in the above.