Transfer.sh – File sharing from the command line (transfer.sh)
260 points by _ieq6 on June 20, 2018 | 113 comments



There are so many tools that can transfer files between two computers. I really like ones like this because you don't have to have SSH access or forward any ports to send a file from A to B. It's in a similar vein to other peer-to-peer utilities like zget [1], sharedrop [2], instant.io (webtorrent) [3], filepizza (webtorrent) [4], magic-wormhole [5], toss [6], dat [7], and many many others.

During Hacktoberfest I also started my own, written in Go, so my friends could use it without installing a Python ecosystem [8].

[1]: https://github.com/nils-werner/zget

[2]: https://github.com/cowbell/sharedrop

[3]: https://github.com/webtorrent/instant.io

[4]: https://github.com/kern/filepizza

[5]: https://github.com/warner/magic-wormhole

[6]: https://github.com/zerotier/toss

[7]: https://github.com/datproject/dat

[8]: https://github.com/schollz/croc


Shameless plug: I wrote ffsend (https://github.com/nneonneo/ffsend) to interact with the Firefox Send experiment (send.firefox.com). With this, you can upload a file which is end-to-end encrypted (i.e. it is uploaded encrypted, and only you have the key), and accessed via a simple URL that you can share with your receiver.

FF Send files last for 24 hours, and you can configure the number of downloads allowed from 1 to 20. The maximum filesize is around 2 GiB. The reason I wrote ffsend is that the official site loads the entire file into memory in order to en/decrypt it, but my script is able to stream the en/decryption and thus significantly reduce memory usage.


It's not the same. With this service you don't need to have anything installed on either the sender or the receiver. So you can use it to move files to a server, or send a link to a friend who doesn't know how to use a terminal. Very practical.



They really should’ve used some end-to-end encryption to protect themselves against DMCA bullshit.

As it stands they will be abused for copyright infringement and the rights holders will not only ask for the files to be taken down, but for them to preemptively prevent those same files from being uploaded again. A huge rabbit hole full of bullshit.


This is the first thing I was wondering about: what is the intended/eventual use case of this tool? The website and GitHub page don't go into a lot of detail about what it's intended to replace.

If I'm not confusing matters, in the past people might have sent files to each other by dropping stuff off in an FTP directory, or they might have used DirectConnect to host files, or even Windows file sharing / samba.

If, on the other hand, it were designed to be used within a local network between untrusted users, it might make more sense, but I think there would then also be simpler ways to go about that?


Indeed yes, but files are only hosted for 14 days.


I would be much more concerned about people using this site for the transfer of illegal material (such as child abuse imagery). That's something literally any file sharing/image hosting site has to deal with. The 14 day limit won't make a difference as people disseminating that type of material are likely used to having to move it around frequently.


I built a proof of concept project that was a lot like transfer.sh at one point in time. My solution was to have not only a time limit but a download limit as well. I'd guess most people transferring files from the command line want to transfer something from one machine to another, or maybe to a few other machines. I was going to allow a limit of 10 transfers before the file/link went dead, which would hopefully deter most nefarious people from using it to widely spread files around.


Unless you are storing hashes, what would prevent someone from writing a script that simply re-uploads the file to your service after downloading it? That would effectively get around the share limit.


Let's say you generate the identifier for the upload based on the file hash. If you add a timestamp or random nonce, then they'll have to redistribute the link every time they re-upload, as it will change each time.
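
A hypothetical sketch of that scheme, just to make it concrete (file and URL names are made up):

    # id = hash(file hash + random nonce), so every re-upload gets a fresh link
    nonce=$(head -c 16 /dev/urandom | xxd -p)
    id=$(printf '%s%s' "$(sha256sum file.bin | cut -d' ' -f1)" "$nonce" | sha256sum | cut -c1-12)
    echo "https://files.example.test/$id/file.bin"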


Exactly. Nothing would stop someone from re-uploading it, but they'd end up with a different url which would go dead pretty quick if used to share something publicly.


This is a real problem for people hosting filesharing services. Depending on who abuse material gets reported to, you might not even be told to remove it; you might just get raided.

https://fuwafuwa.moe/nr/freeme/


What about deleting? If you accidentally upload your porn folder and notice it just after sharing the link with someone.


Sharing is caring!


woof -i <ip_address> -p <port> <filename>

woof: http://www.home.unix-ag.org/simon/woof.html

1. Allows directory upload/download (tar/gzip/bzip2 compressed)

2. Local file server (doesn't go over the internet)

3. Allows upload form (-U option)

4. Allows file to be served <count> number of times (-c option)
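
For example, combining the flags listed above to serve a tarball exactly once (IP, port and filename are made up):

    woof -i 192.168.1.10 -p 8080 -c 1 ./myfiles.tar.gz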


Another alternative for simply serving your current directory which you probably already have:

    python3 -m http.server
or

    python2 -m SimpleHTTPServer


HN still doesn't realize that devs are not the only people in the world, more at 11.


I like it! But for me ssh already provides a simple and secure way to move files from one place to another.

  tar -cf - ./files.txt ./orDir/ | ssh host "(cd /dest/dir; tar -xf -)"
Given that ssh is so ubiquitous, I think this will always be my go-to.


I use netcat a lot with my friends, we don't even have to do the SSH dance.

On the receiving end:

    nc -vll 0.0.0.0 12345 | pv | tar xv
On the sending end:

    tar cv files... | pv | nc -N "destination IP" 12345
pv is a nice tool that just reports the rate and number of bytes moving through the pipe; it's optional if you don't have it. Throw in gpg --symmetric + gpg --decrypt for good measure if you want to encrypt it on the wire with a password.
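
Roughly like this, if you want the gpg variant spelled out (untested sketch; gpg will prompt for the passphrase on each side):

    # receiving end
    nc -vll 0.0.0.0 12345 | pv | gpg --decrypt | tar xv
    # sending end
    tar cv files... | gpg --symmetric -o - | pv | nc -N "destination IP" 12345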


That works assuming the client has a public IP address or port forwarding set up, or you're on a local network together.


That throws your files over the wire with no encryption, signing, or integrity verification. It might well be fine for your use, but I don't think I'd ever be comfortable with it.


Did you miss the part of my comment where I explained how to add encryption?


...yep, I only read the command itself and skimmed right over that. My bad.


What about good old scp?

    scp file.txt user@host:/dest/dir/


Or rsync?

    rsync -zvh file.txt user@host:/dest/dir/

So many ways to do this :)


rsync has always been my favourite because it makes the most sense to me (and the --help/man page is easy to read).

rsync -n -avh --progress source destination:~/asdf/ for a dry run followed by ctrl-p, ctrl-a, alt-f, alt-d, alt-d to remove the -n flag and then execute that for the real thing.

Occasionally though, I'll also use sftp if I'm just pulling one thing - perhaps even after sshing to the remote machine.

For all of these, SSH keys should be set up (and desktop logins secured) to make life easier.

As for Android, adb push and adb pull -a seems to work better than mtp:// or AirDroid in my experience.


After all these years, I still can't keep straight when I need a trailing slash in rsync, and when I need to not have it.


If you think of it in terms of archives and whether you want to "extract" into the current directory, or a new directory within the current one; that might help.

rsync source destination will plonk the entire source directory and put it inside destination as a neat bundle.

rsync source/ destination will take the contents of source (but not the directory source itself) and plonk it in destination

I found the info page a little dry but it does describe it succinctly:

    rsync -av /src/foo /dest
    rsync -av /src/foo/ /dest/foo
For some reason though, my head freaks out when it sees "foo" and "bar", but all it's saying is that those two commands do the same thing.
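
If it helps, a quick throwaway demo of the same rule (made-up paths):

    mkdir -p src dest1 dest2 && touch src/a.txt
    rsync -a src  dest1    # ends up as dest1/src/a.txt
    rsync -a src/ dest2    # ends up as dest2/a.txt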

If in doubt though, just chuck everything into the destination ~/temp/ or ~/asdf/ and sort it out later.

To be honest though, most of the time I just use fish shell's autosuggestions to guide me along.


What if you don't have network and login access to the other person's computer?


I usually go straight to tar & ssh over scp since you can do directories where you can't with scp. But yeah, if you just have some files scp works.


You can obviously send directories with scp, just use -r. Or am I missing something?


Recently used it to copy half a terabyte of stuff on my home network. Unsure about the exact specification but it supported the same flags as cp as far as I could tell.


Well I'll be damned... not sure what led me to think this.


Yes -r works for copying directories. I do it all the time


scp performs very slowly with directories containing many small files; "tar|ssh tar" does fine with that.


> tar -cf - ./files.txt ./orDir/ | ssh host "(cd /dest/dir; tar -xf -)"

You should use '&&' instead of ';' on the host side. That way you don't accidentally dump your transfer contents into the wrong directory if the destination dir doesn't exist.

e.g.: tar cf - stuff | ssh host "(cd /dest/dir && tar xf -)"


Or just skip the shell entirely:

  tar -C /dest/dir -xf -


Yeah that's the right way to do it, but I always forget which flag it is for tar and this is faster than pulling up the manpages ;)


Could you give a full example of how this would work? Receiver and sender..


    tar cf - src/ | ssh $host "tar -C /dest/dir -xf -"


Do you have any idea about a crazy fast way to transfer files between two machines on the internal network?

No encryption necessary, no integrity checks, no resume support. Just plain, and as fast as it can get.

I need to try the netcat version but I'm hoping someone could show me a concurrent version of it that is mad fast.


You can use nc (netcat). If you have 10G ethernet you will not saturate it, because you will be limited by disk IO (I get 1Gb/s disk reads from an M.2 SSD). Even if you can read from disk faster, netcat alone will not saturate a 10G link; it will probably hit a ~3Gb/s limit (depending on your hardware). You will need parallel transfers (xargs, a bit of scripting, etc.). You can also try rsync; maybe it is good enough for you.
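
As a rough sketch of the xargs idea (hypothetical paths and host, assumes ssh keys are set up; actual gains depend on your disks):

    # run four rsync workers in parallel over the top-level entries of /data
    ls -1 /data | xargs -P4 -I{} rsync -a "/data/{}" desthost:/data/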


From your source machine:

  < /path/to/source ncat remote-host 8001
On your destination machine:

  ncat -l 8001 > /path/to/dest
Though in most situations with files of reasonable size, you're probably going to be better off running through `gzip -c`.
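
I.e. something like this, with compression on the wire (same made-up paths/host as above):

    # destination
    ncat -l 8001 | gzip -d > /path/to/dest
    # source
    gzip -c < /path/to/source | ncat remote-host 8001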


Good point!


This particular tool seems useful for sharing files with people who might not be comfortable with the command-line, or where you don't have an account on the receiver's computer, since it produces a regular http link to download the data.


That makes sense, I hadn't thought about that.


> ssh already provides a simple way

:-D

> tar -cf - ./files.txt ./orDir/ | ssh host "(cd /dest/dir; tar -xf -)"

:-O



2007. Wow. That's a lot in Internet time.


It is a lot in computer time as well. The other gem in that thread IMHO is how they marketed Dropbox as a replacement for USB thumb drives (which it also is, but it just goes to show how niche the concept of cloud was 11 years ago).


Yeah, and that guy is actually still active on HN and is always decent whenever someone calls him out about his ancient Dropbox comment.


If you can trust both sides to have GNU tar, you can reduce typing. I usually do: tar c files.txt orDir | ssh -C host tar x -C /dest/dir


What about just typing rsync -havz --progress source destination?

-h for human-readable numbers, -a for archive mode, -v and --progress for verbose info, -z for compression during transfer.

Add a -n for a dry run if required.

https://linux.die.net/man/1/rsync

One of the people who came up with it, Andrew Tridgell, was more or less "responsible" for Linux and BitKeeper parting ways, which in turn ultimately led to the creation of Git. I think it's a fascinating story. Excellent tools.

The progress / speed display is the main thing that keeps me coming back to rsync even though other tools might manage the same job - not seeing the progress of a copy is what had me searching for that solution in the very first place.


Yo dawg, I heard you like progress, so check out rsync's

--info=progress2

You can use it since rsync version 3.1.0.


Hah I've seen that in the man a few times but always feel too lazy to call it up. Next time I'll give it another go!


Ha ha, I remember writing a command like that (made it into a script for repeated use, with args) when one of our HP-UX servers at the company where I worked had a DAT drive failure. The script allowed us to take backups of the source code and data on one such box from another box (connected to the same network) until the bad drive was replaced a few days later.

Didn't know about nc or rsync at the time. Good to see those other solutions, and good that there are many ways to do it, with different pros and cons.


Reminds me of how they left recursion out of the 'cp' command on Plan 9, so the recommended way to copy directories is:

    @{cd fromdir && tar cp .} | @{cd todir && tar xT}


I have a lot of friends who don't know what the fuck SSH is. For them, these services are great at sending them files.


There used to be a service called chunk.io that did this. Then, presumably by some combination of becoming popular causing them bandwidth/storage issues or their service being abused, they had to make it invitation-only.

(The site still exists, but they never replied to my e-mail to their signup address, so I can't say for sure if they're still live or not.)

I wish transfer.sh good luck, and will bookmark them for now as "the new chunk.io".

EDIT: note, as not all comments comparing this to SSH seem to have picked it up - this is a service where you can upload a file, get a link and e-mail the link to someone. You don't need to have any special software (such as sshd) running on the download side.


There are multiple sites like this. I don't know the size limits (I just upload small stuff), but here are a few I use:

http://ix.io

https://ptpb.pw


Personally I've been using magic wormhole lately. It's p2p and very easy to use.

https://github.com/warner/magic-wormhole


I could not find an example on the GitHub page, but here is a timecode from a video that shows it in action: https://youtu.be/oFrTqQw0_3c?t=129 Also: https://magic-wormhole.readthedocs.io/en/latest/welcome.html...

Looks neat.


I love magic wormhole, you give the recipient three words and a number and the file is sent peer-to-peer. No messing about with routing or anything.
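
For anyone who hasn't tried it, the flow is roughly this (the code below is made up; wormhole prints the real one for you):

    # sender
    wormhole send backup.tar.gz
    # recipient, typing in the code the sender reads out
    wormhole receive 7-crossover-clockwork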


How is it for getting around weird networks (double NAT, etc)?


> The library depends upon a “rendezvous server”, which is a service (on a public IP address) that delivers small encrypted messages from one client to the other. This must be the same for both clients, and is generally baked-in to the application source code or default config.

> This library includes the URL of a public rendezvous server run by the author. Application developers can use this one, or they can run their own (see the https://github.com/warner/magic-wormhole-mailbox-server repository)

> For now, bulk data is sent through a “Transit” object, which does not use the Rendezvous Server. Instead, it tries to establish a direct TCP connection from sender to recipient (or vice versa). If that fails, both sides connect to a “Transit Relay”, a very simple Server that just glues two TCP sockets together when asked.

If I understand the docs correctly, it always uses a centralized server to establish the transfer. Once the transfer is established, it'll attempt to transfer the files directly, if possible, but if not, it'll fall back to using a relay.

And with so many people trapped behind NAT these days, I don't know that needing the relay will be all that unusual.


It's been great every time I've used it, I think it has a relay server as a last resort if it can't get a p2p connection. Never had problems with it.


There's https://send.firefox.com/ as well. Backed by Mozilla.


I made a minimal bash CLI for transfer.sh, since I regularly interact with it. Nothing you couldn't do by hand, but it makes it easier to do some operations. Uploading directories, encryption/decryption, piping, etc.

https://github.com/rockymadden/transfer-cli


Of course it's written in Go.

It's amazing to see the language embraced that much for server side apps.


Static binaries are magic :)


They really are. It drives me nuts, because you don't need Go for static binaries, but in practice almost all Go programs are static and almost all non-Go programs are dynamic. I've even tried to build static binaries out of e.g. C and it's a huge pain: nothing expects you to do that, so you have to fight your libraries, since your distro probably didn't ship the static .a files to link in, and apparently you can't just reuse the normal versions. So basically, network effects mean that Go=static, not-Go=not-static, which is sad.
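
For what it's worth, a rough comparison of what that looks like in practice (made-up tool name):

    # Go: disabling cgo is usually all it takes to get a fully static binary
    CGO_ENABLED=0 go build -o mytool .
    # C: possible in principle, but you need the static .a variant of every library installed
    gcc -static -o mytool mytool.c
    file mytool    # a static binary reports "statically linked"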


Could you please go into detail: what's the basic difference between static binaries and apps which use dynamic linking?

What are the benefits of one over another?

Something like grokking this concept once and for all :D


Simplest explanation I can think of: Static binaries have no dependencies. They should just run and not bark about missing (shared/dynamic) libraries, nor require you to install them.

Of course, even static binaries rely on some basic level of compatibility; typically system-level things that don't change much.

Dynamically-linked binaries have the potential to create a massive dependency graph that can be hard or even impossible (for a given OS installation) to traverse.
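
A quick way to see the difference for yourself on Linux (output paraphrased):

    ldd /bin/ls           # dynamically linked: prints the list of required .so files
    ldd ./some-go-binary  # typically: "not a dynamic executable"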


Those annoying dependency graphs provide both standardized visibility and also the ability to fix and patch components independently of each other.


Which makes sense for large software systems, but for small tools that you might want to carry around on a flash drive or that you need to always work across multiple machines without having a vm, static binaries make sense.


I'm a big fan of Go's static binaries, but I'm sorry, the last point doesn't make much sense to me. The dependency graph is going to be the same whether or not you include everything in your binary. It just so happens that the Go ecosystem hasn't adopted node's cancerous everything-is-a-dependency pattern (at least not yet). If it did, the dependency graph would be equally horrendous; the only difference is that the dependencies are included in the binary.


Pros:

* single binary that you can scp (or use transfer.sh haha) into the production machine and run; no runtime environments, package installation etc.

* two different applications can depend on different versions of a library without any intermediate package manager or virtual environment

* guaranteed execution: related to the first point, but I see enough merit in this to make it a separate point

Cons:

* If there's a security issue in a commonly used library (database/sql, for example), you'll need to patch every application that uses it. With dynamic dependencies, you just patch the library.


One more con: shared libraries can be shared in memory; static binaries cannot.


With the dynamic version, you can have different dynamic libraries for different OSes or ones optimized for different CPUs, you can share a library between different applications (saving network/disk/RAM/CPU cache), you can update a lib when the application vendor no longer exists, you can have different licenses for libraries and the application, etc. Just read up on the history for details about the problems with static binaries.


For illustrative purposes, you could think of a static binary kind of like a Docker container. Like a container, within the binary is all of its dependencies, meaning that you are free to install conflicting libraries on the system.

If you use dynamically linked binaries, you rely on the system to have the version of the library you need, and hope that your reliance on those libraries does not break other applications which may rely on different versions of those libs.

Static == everything you need bundled up.

Dynamic == relies on libraries present on the host to provide aspects of functionality.


I prefer magic-wormhole.

  pip install magic-wormhole
  wormhole send foo.tar.gz
Bonus: the files are e2e encrypted.

Still, transfer.sh is hard to beat for flexibility. Magic-wormhole requires that users install something.


I had this problem yesterday: misplaced USB drive and needed to do a large transfer. https://www.sharedrop.io to the rescue.


I sometimes get such links in my email box.

Then when I browse my email months later, I can't use the links :(

I wish tools such as this one would automatically incorporate the downloaded files into my email history somehow.


Wrap it in a `keybase encrypt ...` and it's a pretty solid solution
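
Something like this, assuming keybase encrypt/decrypt are happy reading stdin and writing stdout in a pipe (recipient, file names and download URL are made up):

    keybase encrypt alice < secrets.tar.gz | curl --upload-file - https://transfer.sh/secrets.tar.gz.saltpack
    # on the other end (URL is whatever transfer.sh handed back)
    curl -s https://transfer.sh/SOMEID/secrets.tar.gz.saltpack | keybase decrypt > secrets.tar.gz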



scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp, scp


I keep seeing these pop up over and over, and I think most I have seen use a rendezvous that is also an I/O relay.

A reliable UDP with a rendezvous server would allow for much more scalable P2P transfer. Unfortunately, I haven't found one implemented like this...


Fiche is another solution, based on netcat (nc):

https://github.com/solusipse/fiche


Can it have a little more security please? E.g. longer filenames that are more difficult to guess. And an encryption option would be nice.


Well, it seems you'd have to not only guess the file name but also the 5-character code; it appears to be uppercase, lowercase and numbers, which is ~916 million possibilities.

...although on second thought, and with some rough math, if you know the file name and you can manage 750+ tries a second, you could brute-force it before the 14-day expiration.
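
For reference, the rough arithmetic behind that estimate (assuming a 62-character alphabet and a known filename):

    62^5 = 916,132,832 possible codes
    916,132,832 codes / 750 tries per second ≈ 1,221,510 s ≈ 14.1 days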


Hopefully they use something akin to fail2ban..


Stop complaining and encrypt your stuff. You're the one making absurd requests of a nice little free service here.


Not every feature request or suggestion is "complaining" or "absurd", even if it isn't worded perfectly.


    gpg2 --armor --output my-file.tar.gz.gpg -e my-file.tar.gz && curl --upload-file ./my-file.tar.gz.gpg https://transfer.sh


I'd think the ability to set the file to delete after X downloads instead of 14 days would be useful.


At risk of asking a naive question, what's wrong with just using a Dropbox or rsync?


if you happen to be on the same network, airpaste is pretty neat too: https://github.com/mafintosh/airpaste


Btw, many Unix file explorers can connect to remote servers via ssh. Unless you're running without X, of course. It's all integrated: local programs can read and edit those files like regular ones, with drag and drop, folders, and permissions on right click. I use the shell a lot, but remote bookmarks are wizardry!


And of course there is always sshfs
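
E.g. (made-up host and paths):

    mkdir -p ~/mnt/remote
    sshfs user@host:/remote/dir ~/mnt/remote
    # ...use it like a local directory, then:
    fusermount -u ~/mnt/remote    # on macOS/BSD: umount ~/mnt/remote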


I love using this for simple transfers I need from computer to server


Wow! I've been looking for something like this since forever! scp really doesn't cut it for servers only accessible from other whitelisted servers.


That’s what the SSH ProxyCommand configuration directive is for.
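
For anyone unfamiliar, a sketch of what that looks like in ~/.ssh/config (hostnames made up); scp/rsync to "inner" then go through the gateway transparently:

    Host inner
        HostName inner.example.com
        ProxyCommand ssh -W %h:%p gateway.example.com
    # newer OpenSSH can do the same with: ProxyJump gateway.example.com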


Have used it before. Works well.


ipfs add <file> works remarkably well in my experience.


How is it free?


Because it's not massively used yet.


Upload all your files to our internet web server for FREE!!1


On a related note, I recently learnt that if you're on the same local network, there's a much faster way to transfer than the old tar|nc trick[0]: udpcast. You do

    $ udp-sender --min-receivers 1 --full-duplex --pipe 'tar czvf - theDirectory'
on the sender and

    $ udp-receiver --pipe 'tar xzp'
and at least on my home network it's 11x faster than tar|nc. There are some caveats[1] about UDP not working well everywhere, you may have to open ports 9000 and 9001, and of course it's not encrypted at all, but for copying large ISOs and such when you can't find your USB stick it's great. Just remember to compare checksums afterwards.

[0] http://www.spikelab.org/blog/transfer-largedata-scp-tarssh-t...

[1] https://superuser.com/questions/692294/why-is-udcast-is-many...
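
One way to act on the "compare checksums" advice: run the same command on both ends (from the directory containing theDirectory) and compare the single resulting hash:

    find theDirectory -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum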


> Just remember to compare checksums afterwards.

Preferably a cryptographic hash... UDP is known for not being reliable at all, and that's partly why it's so fast: the sender doesn't care whether the packets reached the receiver, it just sends as fast as it can.


Well, sort of. The sender will in a lot (most) of cases care; it's the protocol (UDP) that has no automatic checking or reporting. This sort of stuff is left up to the programmer.

The power of UDP is that it allows the sender to have more control over things like how often transmissions are acknowledged (TCP window size), or how to handle delays or errors. There are also some advantages because middleboxes that try to be smart and "make TCP better" for you can't really muck with the UDP packets all that much, since the application's own protocol for handling UDP packets will likely not be known. This is why QUIC is such a big deal: a lot of the things a middlebox might want to muck around with (and does on TCP today) are encrypted.

So I would not say that UDP is fast because it is not reliable; it is fast because it can allow a programmer to exploit the network in a more efficient way than TCP can for a specific type of data being transferred. There are many reliable UDP-based protocols that achieve faster speeds than TCP in different situations.


CRC is fine in this case.

Oh, they should use SHA anyway because why not, but CRC is only vulnerable to deliberate manipulation or exceedingly abstruse bit-flips.

When the failure mode is lost packets, it's perfectly fine.


It'd be interesting to modify this or a similar program to work with rsync



