I've recently implemented server-side copy support in libsmbclient. The code is currently in Samba's master branch and should ship in Samba 4.3. Unless you're using 10GbE, this makes a significant improvement when copying files from one location to another on a share. This is especially so because, without server-side copy support, GVFS uses a simple read/write loop with a block size of 64 KiB to copy the data, and the round-trip latency means one typically gets about half of the available bandwidth of a single direction.
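For illustration, here is a minimal sketch of what such a client-side read/write copy loop looks like (this is not GVFS's actual code; the function name and streams are hypothetical stand-ins for the SMB read/write calls):

```python
import io

def client_side_copy(src, dst, block_size=64 * 1024):
    # Each iteration is a read round-trip followed by a write round-trip,
    # so the link only moves data in one direction at a time -- which is
    # why this approach tops out at roughly half the available bandwidth.
    while True:
        chunk = src.read(block_size)
        if not chunk:
            break
        dst.write(chunk)

src = io.BytesIO(b"x" * 200_000)
dst = io.BytesIO()
client_side_copy(src, dst)
```

A server-side copy avoids this entirely: the client sends one request and the server moves the bytes locally.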
The Samba server includes some extra-special sauce: the ability to use BTRFS reflinking/copy-on-write support to make copies almost instantaneous.
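The reflink mechanism the server can use is the same one exposed locally by Linux's `FICLONE` ioctl: instead of duplicating data, the destination file shares the source's extents copy-on-write. A rough local sketch (not Samba's implementation; the helper name is mine, and the ioctl only succeeds on filesystems such as BTRFS that support cloning, so this falls back to an ordinary copy elsewhere):

```python
import fcntl
import shutil

FICLONE = 0x40049409  # from <linux/fs.h>; also known as BTRFS_IOC_CLONE

def reflink_copy(src_path, dst_path):
    """Clone src into dst copy-on-write if the filesystem supports it,
    otherwise fall back to a plain data copy."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        try:
            fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())
            return "reflink"   # near-instant: no data was actually copied
        except OSError:
            shutil.copyfileobj(src, dst)
            return "fallback"  # ordinary copy on non-CoW filesystems
```

On a BTRFS-backed share, the server can do the equivalent of the `reflink` path, which is why the copy in the third part of the video is almost instantaneous.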
On my dev branch, I've added the ability for GVFS to transparently make use of server-side copy support if it is available.
So here is the obligatory video, first showing the experience one gets on current Linux distros where all the data goes over the network, then doing a server-side copy, and finally doing a server-side copy on a BTRFS-backed share. The video is also on YouTube.
One point to note is that the server-side copy (the second part of the video) is pretty slow because I was using a USB2 external HDD. The difference between the first and second sections would be larger if using a better disk.
The libsmbclient code is already in Samba, hopefully the rest of the pieces will fall into place so that we get this out of the box from distros in a few months.
This post was inspired by the Samba Server-side Copy wiki page. Thanks to the Samba team for implementing support on the server side.
After several years of procrastination, GVFS finally became a little kinder to the security folks. It now supports verifying certificates when mounting a WebDAV share. If the certificate is invalid, it presents a dialog showing the user some information about the certificate so they can decide whether or not to continue:
Gcr provides the certificate information.
Secondly, I've added support for FTPS. Secure FTP comes in two forms, implicit and explicit. Implicit is the older form, where the server listens on a separate port and uses SSL from the very beginning of the connection; it was never standardized. Explicit uses a STARTTLS mechanism to upgrade the connection from normal to secure. GVFS implements only the explicit form, and it uses a different URL scheme (ftps) to clearly differentiate it from standard FTP. When ftps is used, both the control and data connections are secured and must use the same certificate. As with WebDAV, the certificate is verified, with the option for the user to accept an invalid certificate. This closed a seven-year-old bug, which was good to finally do.
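For the curious, the explicit upgrade is standardized in RFC 4217: the client issues `AUTH TLS` on the plain control connection, then `PBSZ 0` and `PROT P` to protect the data channel as well. A small sketch of that command sequence (a hypothetical helper of mine, not GVFS code):

```python
def explicit_ftps_upgrade_commands(protect_data=True):
    """Control-channel commands a client sends to upgrade plain FTP to
    explicit FTPS per RFC 4217."""
    cmds = ["AUTH TLS"]      # begin the TLS handshake on the control channel
    if protect_data:
        cmds += [
            "PBSZ 0",        # required before PROT when using TLS
            "PROT P",        # switch the data channel to Private (encrypted)
        ]
    return cmds
```

Without `PROT P`, only the control channel would be encrypted, which is why GVFS secures both.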
Thanks to much review by Ondrej Holy, NFS support is in gvfs 1.23.90. It requires a recent version of libnfs (1.9.7). Although it just works for me, no doubt there will be many bugs, so please report them to the GNOME Bugzilla; there is now an NFS component within gvfs.
Within the next little while, I plan on improving its performance for copying files and metadata operations.
Belatedly posting this video from August 2014.
I was nominated by my father, so obviously I had to do it!
It was a reasonably warm day in August in Cambridge (temps in the upper teens), so the experience wasn't too bad. I also went for a jog beforehand to warm up!
The last few months I've been working on support for NFS in GVFS, using libnfs. This will finally allow GNOME users to mount NFS shares as easily as Windows (SMB) shares can be mounted, no root access required.
The backend is written against the asynchronous API of libnfs, which allows multiple outstanding requests to be in flight at once. Although this is not yet exploited, it will allow GVFS to achieve line-rate throughput for copy operations; at the moment it gets about 70% of the potential throughput due to latency and other overheads.
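The idea behind keeping multiple requests in flight can be sketched as splitting a transfer into block-sized read requests that are all issued up front, so the link never idles waiting on a single round trip. This is just an illustration of the pipelining concept (the helper below is hypothetical, not the libnfs API):

```python
BLOCK = 64 * 1024

def split_requests(file_size, block=BLOCK):
    """Split a transfer into (offset, length) read requests that can all
    be issued concurrently, keeping the pipe full instead of paying one
    round-trip latency per block."""
    return [
        (off, min(block, file_size - off))
        for off in range(0, file_size, block)
    ]
```

With a synchronous API, each of these requests would have to complete before the next is sent; with the asynchronous API they can all be outstanding at once.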
Unfortunately, while some methods like truncate are simple because they map directly to the libnfs API, using the asynchronous API does make the methods which require multiple operations rather complicated. Enumerate is a good example of this, since fetching all the data requires, for each entry:
- a stat call if the item is a symbolic link,
- an access call to query if the item is readable, writable or executable.
Currently, these operations are all done sequentially but in the future these can be done in parallel to speed up enumeration.
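The parallel version of enumeration amounts to issuing every per-entry query at once and gathering the results. A toy sketch of that shape using asyncio (the `stat_entry`/`access_entry` functions are hypothetical stand-ins for the async libnfs calls, not real API):

```python
import asyncio

async def stat_entry(name):
    # Stand-in for an asynchronous stat of one directory entry.
    await asyncio.sleep(0)
    return (name, "stat")

async def access_entry(name):
    # Stand-in for an asynchronous access (readable/writable/executable) query.
    await asyncio.sleep(0)
    return (name, "access")

async def enumerate_entries(names):
    # Issue all per-entry queries concurrently rather than awaiting each
    # one in turn; total time becomes ~one round trip instead of 2 * N.
    tasks = [q(n) for n in names for q in (stat_entry, access_entry)]
    return await asyncio.gather(*tasks)

results = asyncio.run(enumerate_entries(["a", "b"]))
```

The real backend would do this with libnfs callbacks rather than asyncio, but the latency win is the same.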
This effort has resulted in a few improvements to libnfs as well:
- access2 to get the status of R_OK, W_OK and X_OK all at once.
- lutimes, etc., which work on symbolic links rather than what they point to.
- stat and directory listing calls.
Any opinions expressed here are my own and do not in any way reflect the opinions of my employer, or anyone else.
Made with Pyblosxom