Tahoe-LAFS

From the Linux and Unix Users Group at Virginia Tech Wiki

Tahoe-LAFS is a distributed filesystem which provides redundancy and security for files. Our most recent incarnation ran on Crashoverride.

Connecting to VTLUUG's Tahoe Grid

VTLUUG now operates an onion grid, a grid of tor hidden services. All nodes must be tor-enabled using torify, and storage nodes must also advertise a tor hidden service.

Storage nodes

To connect a storage node, do the following:

  • Install the latest version of Tor and torsocks. Enable tor to start at boot.
    • RPMs for RHEL-based distros
    • DEBs for Debian-based distros
    • Available in Arch's community repo
  • Install various dependencies.
    • On Debian 9, install python-txtorcon tahoe-lafs tor
    • On Debian 8, the distro packages are too old, so you need to install them manually.
      • pip2 install -U pyopenssl txtorcon tahoe-lafs
      • Follow the Tor Project's instructions for installing the latest stable version of tor on Debian Jessie.
    • CentOS 6 is unsupported, as Tahoe-LAFS now requires Python 2.7.
  • Edit /etc/tor/torrc and uncomment the ControlPort 9051 line, then restart tor.
  • Edit the tahoe defaults file (/etc/default/tahoe-lafs) so that your nodes are started when the tahoe-lafs service starts. Note that in this example there are two node directories underneath "/srv/tahoe-storage": "introducer" and "tor-storage". Most users should have only one node directory, for storage.
# Start only these tahoe-lafs nodes automatically via init script.  Allowed
# values are "all", "none" or space separated list of tahoe-lafs nodes. If
# empty, "none" is assumed.
#
#AUTOSTART="all"
AUTOSTART="introducer tor-storage"
#AUTOSTART="home office"

# Pass arguments to tahoe start. Defaults to "--syslog".
DAEMONARGS="--syslog"
CONFIG_DIR="/srv/tahoe-storage"
  • Create a Tahoe user and add it to the tor group.
    • useradd tahoe-lafs
    • usermod -aG debian-tor tahoe-lafs
  • Create the appropriate tahoe node and start the service (a consolidated sketch of these commands follows this list):
    • sudo -u tahoe-lafs tahoe create-node --listen=tor -n YOUR_NODE_NAME -C /srv/tahoe-storage/vtluug-tor-storage -i GET_THIS_STRING_FROM_AN_OFFICER
    • systemctl start tahoe-lafs
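
Putting the steps together, a minimal end-to-end sketch for Debian 9. The node name, directory, and introducer string are the placeholders from the list above; enabling the service at boot with systemctl enable is an assumption about how you want it to persist.

apt-get install python-txtorcon tahoe-lafs tor

# Uncomment "ControlPort 9051" in /etc/tor/torrc, then restart tor
systemctl restart tor

# Create the tahoe user and give it access to tor's control socket
useradd tahoe-lafs
usermod -aG debian-tor tahoe-lafs

# Create the storage node (ask an officer for the introducer string)
sudo -u tahoe-lafs tahoe create-node --listen=tor -n YOUR_NODE_NAME \
    -C /srv/tahoe-storage/vtluug-tor-storage -i GET_THIS_STRING_FROM_AN_OFFICER

# Start the service now and (assumed) enable it at boot
systemctl start tahoe-lafs
systemctl enable tahoe-lafs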


Tuning

You should adjust the encoding parameters to strike the desired balance between upload bandwidth and replication. They live in the [client] section of tahoe.cfg; a sample follows the list below.

  • shares.needed is the number of storage nodes (out of shares.total) that need to be available to reconstruct a file.
  • shares.happy is the minimum number of storage nodes a file must be striped across for an upload to be considered successful.
  • shares.total is the total number of stripes (shares) made for each file.
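
A sketch with illustrative values (3-of-10 encoding, requiring placement on at least 7 distinct nodes; the numbers are examples, not VTLUUG's settings):

[client]
# any 3 of the 10 shares reconstruct the file
shares.needed = 3
# the upload fails unless shares land on at least 7 distinct nodes
shares.happy = 7
# 10 shares are created in total
shares.total = 10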

Note that the client does the striping when uploading a file. This can cause significant latency if the client is on a consumer internet connection. You can eliminate this issue by relying upon a helper node, which does the striping for you. Blobs are still encrypted on the client side, so not much trust needs to be placed in the helper. Helpers are useful if you have access to a server with significantly higher bandwidth than your client; a configuration sketch follows.
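
The configuration is split across the two machines: the helper enables its [helper] section, and each client points helper.furl at it. A sketch, with a placeholder furl value:

# tahoe.cfg on the helper node
[helper]
enabled = true

# tahoe.cfg on each client (get the real furl from the helper's operator)
[client]
helper.furl = pb://EXAMPLESTRING@helper.example.onion:12345/helper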

Troubleshooting

This is a list of various problems I've encountered. --Mjh (talk) 00:47, 30 December 2014 (EST)

Tahoe daemonized and then terminated immediately

This can be caused by several factors when running with torsocks.

  • You're trying to bind to an IP other than localhost, and torsocks blocked this. Ensure the tub.port and web.port lines restrict traffic to localhost (see the fragment after this list).
  • Tahoe is attempting to establish a UDP connection to identify its local IP address. Torsocks restricts UDP connections, causing Tahoe to throw exceptions and terminate. Ensure you are using the latest trunk version rather than the version supplied by your OS.
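
For reference, a tahoe.cfg [node] fragment that keeps both listeners on localhost (the port numbers are just examples):

[node]
# bind only to 127.0.0.1 so torsocks has nothing to object to
tub.port = tcp:4456:interface=127.0.0.1
web.port = tcp:3456:interface=127.0.0.1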

Can't connect to introducer

  • Ensure tor is running and that Tahoe is started through torify.
  • Ensure the introducer.furl parameter is not enclosed in quotes in tahoe.cfg. For some reason this has caused connection issues for me.
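
Two quick sanity checks, assuming a systemd distro and the node directory used above:

# is tor actually running?
systemctl status tor

# the furl line should print without any quote characters
grep introducer.furl /srv/tahoe-storage/vtluug-tor-storage/tahoe.cfg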

FAQs

Technical documentation on Tahoe can be found on its website. However, for the prospective user, here's a simple explanation in Q&A format:

What does it do?

You set up a node with a few hundred gigs of free space and connect it to the tahoe grid. Then, you put files in it. Tahoe encrypts each file and spreads pieces of it across ten of the grid's nodes in such a way that the entire file can be recovered even if up to 7 of those nodes are unavailable.

But what if I don't want people seeing my files?

They're encrypted, remember? Each file has an automatically generated key which also tells where the file is located. You can share this "filecap" with anyone else you'd like to see the file.

So the nodes aren't trusted?

No. Files stored on them are encrypted, signed, split into pieces, and distributed among the nodes. The only way to get the file back without the filecap is by finding the storage index, retrieving the pieces from the nodes, breaking the encryption, and reassembling the pieces. This is designed to be difficult.

And all these nodes are hooked together?

No. Groups of users set up grids, often arranged by geographical location for improved bandwidth and latency.

How do I access files?

When a file is uploaded (using tahoe put or tahoe cp), tahoe gives you a filecap. A filecap looks something like this:

URI:CHK:7fdtkb3smrcczbduzkg6nxex44:rvg2fwo7poziydflo5jmjmbejczunqe5emhcisxx6uefosw4in3q:3:10:102015

This string encodes the keys used to decrypt the file and verify its signature, the file's size in bytes (here 102015, just over 102 kB), and the file's encoding, in this case 3-of-10. The file's location on the grid, its storage index (in this case 2zmhsnky3x34wz2c523vzery6e), is derived cryptographically from the key. 3-of-10 encoding means that the file is stored across 10 nodes and that at least 3 of these are required to recover the file.

The filecap does NOT include the name of the file or its type. Types may be found using the unix file utility. To retrieve the file, use tahoe get [filecap] [filename]. This causes tahoe to fetch the file's shares from the nodes, reassemble them, decrypt the result, verify the file's integrity, and write it to filename.
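
For example (the filecap is the one shown above; yours will differ):

# upload a file; tahoe prints its filecap on success
tahoe put somefile.txt
URI:CHK:7fdtkb3smrcczbduzkg6nxex44:rvg2fwo7poziydflo5jmjmbejczunqe5emhcisxx6uefosw4in3q:3:10:102015

# download it again under whatever name you like
tahoe get URI:CHK:7fdtkb3smrcczbduzkg6nxex44:rvg2fwo7poziydflo5jmjmbejczunqe5emhcisxx6uefosw4in3q:3:10:102015 somefile.txt

# the filecap carries no filename or type; file(1) can identify the contents
file somefile.txt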

How do I delete files?

You can't. The nodes are not trusted and therefore cannot be relied upon to remove the file's shares when asked. To render a file inaccessible, destroy all copies of the filecap. After 60 days, the file's lease will expire and its shares will be automatically garbage collected, or deleted, by the nodes.

Wait, files expire? But I thought...

VTLUUG's grid uses a 2 month file lease to prevent the grid from filling up permanently.

Don't panic. To stop a file from being deleted after 2 months, simply renew its lease. The recommended way of doing this is to set up an alias using tahoe create-alias tahoe, add the filecap to the alias, and set up a weekly cronjob to run tahoe deep-check --renew tahoe:. This renews the leases on all the files in the alias, which behaves like a directory. A sample cronjob follows.
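
A sketch of the weekly renewal job, using the command given above (run it as the user that owns the tahoe node; note that recent Tahoe releases spell this flag --add-lease, so use whichever your version supports):

# m h dom mon dow  command
0 4 * * 1  tahoe deep-check --renew tahoe: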

Directory?

Yeah, you can have directories. They are implemented basically as lists of filecaps with associated filenames, and they are referenced using dircaps, which come in read-write and read-only forms. Because the filecaps of the contained files are stored inside the dircap's shares (a dircap, remember, is treated much like a file with regards to storage), knowing the dircap alone is enough to read every file in the directory. This does not work in reverse: if you give another user the filecap of a file (or the dircap of a subdirectory) inside a directory, they cannot find the names or contents of the other files in that directory.
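
A sketch of basic directory use through an alias (these are standard tahoe CLI commands; the file name is an example):

# create a directory and bind it to the alias "tahoe:"
tahoe create-alias tahoe

# copy a file in, then list the directory
tahoe cp somefile.txt tahoe:
tahoe ls tahoe:

# print the dircap behind each alias (the dircap is what you'd share)
tahoe list-aliases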

How come there are no mountable filesystem frontends?

There are; they just aren't built-in. Tahoe's high latency makes it rather unwieldy for use as part of a conventional filesystem. Append operations in particular are extremely inefficient. It is recommended that you use the web and CLI interfaces to manage files stored in tahoe.

What are the downsides?

Currently there's a reliance on a central introducer. This has several disadvantages:

  • If the introducer goes away, every node in the system must be reconfigured to choose a new introducer furl string.
  • Tahoe's erasure coding maintains availability in the event of node loss, but it does not protect against malicious nodes. It's trivial to DoS a grid, either via a Sybil attack or by simply using up all available storage, if you know the introducer furl string. A patch is in the works to allow clients to choose their own storage nodes, which should mitigate this.