I don’t think you fundamentally understand what makes a transhumanist. The differentiating line is not improve vs harm; everyone wants to improve life with tech, save maybe the Amish. It’s human vs not. A transhumanist is willing to entertain technology that enhances (carefully chosen term) some conscious experience even if it replaces our humanity. Others generally don’t take that stance axiomatically. A transhumanist would support replacing our DNA with nanobots programmed to keep a body alive if it means we can reduce the replication error rate and avoid cancer. Good outcome, dubious means. A transhumanist wouldn’t debate this; they’d simply accept the nanobot outcome. Most others would at least debate the tech, even if we ultimately came to the shared conclusion that we can humanely replace parts of our DNA to cure cancer. The point is that the transhumanist wouldn’t care about retaining our humanity, since evolving into “machines” is an acceptable outcome.