Introduction

SFS is a network file system that lets you access your files from anywhere and share them with anyone anywhere. SFS was designed with three goals in mind:

SFS achieves these goals by separating key management from file system security. It names file systems by the equivalent of their public keys. Every remote file server is mounted under a directory of the form:

/sfs/Location:HostID

Location is a DNS hostname or an IP address. HostID is a collision-resistant cryptographic hash of Location and the file server's public key. This naming scheme lets an SFS client authenticate a server given only a file name, freeing the client from any reliance on external key management mechanisms. SFS calls the directories on which it mounts file servers self-certifying pathnames.

Self-certifying pathnames let users authenticate servers through a number of different techniques. As a secure, global file system, SFS itself provides a convenient key management infrastructure. Symbolic links let the file namespace double as a key certification namespace. Thus, users can realize many key management schemes using only standard file utilities. Moreover, self-certifying pathnames let people bootstrap one key management mechanism using another, making SFS far more versatile than any file system with built-in key management.
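
For example, a user who has verified the key of the SFS test server named later in this manual could record that trust with an ordinary symbolic link (the link name below is chosen purely for illustration):

% ln -s /sfs/sfs.fs.net:eu4cvv6wcnzscer98yn4qjpjnn9iv6pi ~/sfs-test
% cd ~/sfs-test

Anyone who trusts the directory holding the link can now reach the server through it without ever typing or checking the HostID.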

Through a modular implementation, SFS also pushes user authentication out of the file system. Untrusted user processes transparently authenticate users to remote file servers as needed, using protocols opaque to the file system itself.

Finally, SFS separates key revocation from key distribution. Thus, the flexibility SFS provides in key management in no way hinders recovery from compromised keys.

No caffeine was used in the production of the SFS software.

Installation

This section describes how to build and install SFS on your system. If you are too impatient to read the details, be aware of the two most important points:

Requirements

SFS should run with minimal porting on any system that has solid NFS3 support. We have run SFS successfully on OpenBSD 2.6, FreeBSD 3.3, OSF/1 4.0, and Solaris 5.7.

We have also run SFS with some success on Linux. However, you need a kernel with NFS3 support to run SFS on Linux. The SFS on Linux web page has information on installing an SFS-capable Linux kernel.

In order to compile SFS, you will need the following:

  1. gcc-2.95.2 or more recent. You can obtain this from ftp://ftp.gnu.org/pub/gnu/gcc. Don't waste your time trying to compile SFS with an earlier version of gcc.
  2. gmp-2.0.2. You can obtain this from ftp://ftp.gnu.org/pub/gnu/gmp. Many operating systems already ship with gmp. Note, however, that some Linux distributions do not include the gmp.h header file. Even if you have libgmp.so, if you don't have /usr/include/gmp.h, you need to install gmp on your system.
  3. Header files in /usr/include that match the kernel you are running. Particularly on Linux where the kernel and user-land utilities are separately maintained, it is easy to patch the kernel without installing the correspondingly patched system header files in /usr/include. SFS needs to see the patched header files to compile properly.
  4. 128 MB of RAM. The C++ compiler really needs a lot of memory.
  5. 550 MB of free disk space to build SFS. (Note that on ELF targets, you may be able to get away with considerably less. A build tree on FreeBSD only consumes about 200 MB.)

Building SFS

Once you have set up your system as described in Requirements, you are ready to build SFS.

  1. Create a user, sfs-user, and group, sfs-group, for SFS on your system. By default, SFS expects both sfs-user and sfs-group to be called sfs. For instance, you might add the following line to /etc/passwd:
    sfs:*:71:71:Self-certifying file system:/:/bin/true
    

    And the following line to /etc/group:

    sfs:*:71:
    

    Do not put any users in sfs-group, not even root. Users in sfs-group will not be able to make regular use of the SFS file system. Moreover, putting an unprivileged user in sfs-group creates a security hole.

  2. Unpack the SFS sources. For instance, run the commands:
    % gzip -dc sfs-0.5.tar.gz | tar xvf -
    % cd sfs-0.5
    

    If you determined that you need gmp (see Requirements), you should unpack it into the top level of the SFS source tree:

    % gzip -dc ../gmp-2.0.2.tar.gz | tar xvf -
    

  3. Set your CC and CXX environment variables to point to the C and C++ compilers you wish to use to compile SFS. Unless you are using OpenBSD-2.6, your operating system will not come with a recent enough version of gcc (see Requirements). A consolidated example command sequence appears after this list.

  4. Configure the sources for your system with the command ./configure. You may additionally specify the following options:

    --with-sfsuser=sfs-user
    If the user you created for SFS is not called sfs. Do not use an existing account for sfs-user--even a trusted account--as processes running with that user ID will not be able to access SFS. [Note: If you later change your mind about sfs-user, you do not need to recompile SFS; see sfs_config.]
    --with-sfsgroup=sfs-group
    If the group you created for SFS does not have the same name as sfs-user. [Note: If you later change your mind about sfs-group, you do not need to recompile SFS.]
    --with-gmp=gmp-path
    To specify where configure should look for gmp (for example, gmp-path might be /usr/local).
    --with-sfsdir=sfsdir
    To specify a location for SFS to put its working files. The default is /var/sfs. [You can change this later; see sfs_config.]
    --with-etcdir=etcdir
    To specify where SFS should search for host-specific configuration files. The default is /etc/sfs.

    configure accepts all the traditional GNU configuration options such as --prefix. It also has several options that are only for developers. Do not use the --enable-repo or --enable-shlib options (unless you are a gcc maintainer looking for some wicked test cases for your compiler).

  5. Build the sources by running make.
  6. Install the binaries by running make install. If you are short on disk space, you can alternatively install stripped binaries by running make install-strip.
  7. That's it. Fire up the client daemon by running sfscd.
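
Putting the steps together, a complete build and install might look like the following sketch (the compiler names and configure options are illustrative; adjust them for your site):

% CC=gcc CXX=g++; export CC CXX
% ./configure --with-sfsuser=sfs --with-sfsgroup=sfs
% make
% make install
% sfscd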

Problems building SFS

The most common problem you will encounter is an internal compiler error from gcc. If you are not running gcc-2.95.2 or later, you will very likely experience internal compiler errors when building SFS and need to upgrade the compiler. You must run make clean after upgrading the compiler. You cannot link object files together if they have been created by different versions of the C++ compiler.

On OSF/1 for the alpha, certain functions using a gcc extension called __attribute__((noreturn)) tend to cause internal compiler errors. If you experience internal compiler errors when compiling SFS for the alpha, try building with the command make ECXXFLAGS='-D__attribute__\(x\)=' instead of simply make.

Sometimes, a particular source file will give particularly stubborn internal compiler errors on some architectures. These can be very hard to work around by just modifying the SFS source code. If you get an internal compiler error you cannot obviously fix, try compiling the particular source file with a different level of debugging. (For example, using a command like make sfsagent.o CXXDEBUG=-g in the appropriate subdirectory.)

If your /tmp file system is too small, you may also end up running out of temporary disk space while compiling SFS. Set your TMPDIR environment variable to point to a directory on a file system with more free space (e.g., /var/tmp).

You may need to increase your heap size for the compiler to work. If you use a csh-derived shell, run the command unlimit datasize. If you use a Bourne-like shell, run ulimit -d `ulimit -H -d`.

On some operating systems, some versions of GMP do not install the library properly. If you get linker errors about symbols with names like ___gmp_default_allocate, try running the command ranlib /usr/local/lib/libgmp.a (substituting wherever your GMP library is installed for /usr/local).

Getting Started

This chapter gives a brief overview of how to set up an SFS client and server once you have compiled and installed the software.

Quick client setup

SFS clients require no configuration. Simply run the program sfscd, and a directory /sfs should appear on your system. To test your client, access our SFS test server. Type the following commands:

% cd /sfs/sfs.fs.net:eu4cvv6wcnzscer98yn4qjpjnn9iv6pi
% cat CONGRATULATIONS
You have set up a working SFS client.
%

Note that the /sfs/sfs.fs.net:... directory does not need to exist before you run the cd command. SFS transparently mounts new servers as you access them.

Quick server setup

Setting up an SFS server is a slightly more complicated process. You must perform at least three steps:

  1. Create a public/private key pair for your server.
  2. Create an /etc/sfs/sfsrwsd_config configuration file.
  3. Configure your machine as an NFS server and export all necessary directories to localhost.

To create a public/private key pair for your server, run the commands:

mkdir /etc/sfs
sfskey gen -P /etc/sfs/sfs_host_key

Then you must create an /etc/sfs/sfsrwsd_config file based on which local directories you wish to export and what names those directories should have on clients. This information takes the form of one or more Export directives in the configuration file. Each Export directive is a line of the form:

Export local-directory sfs-name

local-directory is the name of a local directory on your system you wish to export. sfs-name is the name you wish that directory to have in SFS, relative to the previous Export directives. The sfs-name of the first Export directive must be /. Subsequent sfs-names must correspond to pathnames that already exist in the previously exported directories.

Suppose, for instance, that you wish to export two directories, /disk/u1 and /disk/u2 as /usr1 and /usr2, respectively. You should create a directory to be the root of the exported namespace, say /var/sfs/root, create the necessary sfs-name subdirectories, and create a corresponding sfsrwsd_config file. You might run the following commands to do this:

% mkdir /var/sfs/root
% mkdir /var/sfs/root/usr1
% mkdir /var/sfs/root/usr2

and create the following sfsrwsd_config file:

Export /var/sfs/root /
Export /disk/u1 /usr1
Export /disk/u2 /usr2

Finally, you must export all the local-directorys in your sfsrwsd_config to localhost via NFS version 3. The details of doing this depend heavily on your operating system. For instance, in OpenBSD you must add the following lines to the file /etc/exports and run the command kill -HUP `cat /var/run/mountd.pid`:

/var/sfs/root localhost
/disk/u1 localhost
/disk/u2 localhost

On Linux, the syntax for the exports file is:

/var/sfs/root localhost(rw)
/disk/u1 localhost(rw)
/disk/u2 localhost(rw)

On Solaris, add the following lines to the file /etc/dfs/dfstab and run exportfs -a:

share -F nfs -o rw=localhost /var/sfs/root
share -F nfs -o rw=localhost /disk/u1
share -F nfs -o rw=localhost /disk/u2

In general, the procedure for exporting NFS file systems varies greatly between operating systems. Check your operating system's NFS documentation for details. (The manual page for mountd is a good place to start.)

Once you have generated a host key, created an sfsrwsd_config file, and reconfigured your NFS server, you can start the SFS server by running sfssd. Note that a lot can go wrong in setting up an SFS server. Thus, we recommend that you first run sfssd -d. The -d switch will leave sfssd in the foreground and send error messages to your terminal. If there are problems, you can then easily kill sfssd from your terminal, fix the problems, and start again. Once things are working, omit the -d flag; sfssd will run in the background and send its output to the system log.

Note: You will not be able to access an SFS server using the same machine as a client unless you run sfscd with the -l flag (see sfscd). Attempts to SFS mount a machine on itself will return the error EDEADLK (Resource deadlock avoided).

Getting started as an SFS user

To access an SFS server, you must first register a public key with the server, then run the program sfsagent on your SFS client to authenticate you.

To register a public key, log into the file server and run the command:

sfskey register

This will create a public/private key pair for you and register it with the server. (Note that if you already have a public key on another server, you can reuse that public key by giving sfskey your address at that server, e.g., sfskey register user@other.server.com.)

After registering your public key with an SFS server, you must run the sfsagent program on an SFS client to access the server. On the client, run the command:

sfsagent user@server

server is the name of the server on which you registered, and user is your logname on that server. This command does three things: It runs the sfsagent program, which persists in the background to authenticate you to file servers as needed. It fetches your private key from server and decrypts it using your passphrase. Finally, it fetches the server's public key, and creates a symbolic link from /sfs/server to /sfs/server:HostID.

If, after your agent is already running, you wish to fetch a private key from another server or download another server's public key, you can run the command:

sfskey add user@server

In fact, sfsagent runs this exact command for you when you initially start it up.

While sfskey provides a convenient way of obtaining servers' HostIDs, it is by no means the only way. Once you have access to one SFS file server, you can store on it symbolic links pointing to other servers' self-certifying pathnames. If you use the same public key on all servers, you will only need to type your password once. sfsagent will automatically authenticate you to whatever file servers you touch.

When you are done using SFS, you should run the command

sfskey kill

before logging out. This will kill your sfsagent process running in the background and get rid of the private keys it was holding for you in memory.

System overview

   sfskey--+---------------- - - - -----------+
           |                                  |
         agent--+                             |
     agent------+                             |
                |                             |
   +---------------+                       +-------------+
   |         sfscd |-------- - - - --------| sfssd       |
   |            |  |                       |  |          |
   |    sfsrwcd-+  |                       |  +-sfsrwsd--+-+
   | nfsmounter-+  |                       |  +-sfsauthd | |
   +---------------+                       +-------------+ |
                |                                          V
+--------+      |                                   +--------+
| kernel |      |                                   | kernel |
|  NFS3  |<-----+                                   |  NFS3  |
| client |                                          | server |
+--------+                                          +--------+

          CLIENT                               SERVER
SFS consists of a number of interacting programs on both the client and the server side.

On the client side, SFS implements a file system by pretending to be an NFS server and talking to the local operating system's NFS3 client. The program sfscd gets run by root (typically at boot time). sfscd spawns two other daemons--nfsmounter and sfsrwcd.

nfsmounter handles the mounting and unmounting of NFS file systems. In the event that sfscd dies, nfsmounter takes over being the NFS server to prevent file system operations from blocking as it tries to unmount all file systems. Never send nfsmounter a SIGKILL signal (i.e., kill -9). nfsmounter's main purpose is to clean up the mess if any other part of the SFS client software fails. Whatever bad situation SFS has gotten your machine into, killing nfsmounter can only make matters worse.

sfsrwcd implements the ordinary read-write file system protocol. As other dialects of the SFS protocol become available, they will be implemented as daemons running alongside sfsrwcd.

Each user of an SFS client machine must run an instance of the sfsagent command. sfsagent serves several purposes. It handles user authentication as the user touches new file systems. It can fetch HostIDs on the fly, a mechanism called Dynamic server authentication. Finally, it can perform revocation checks on the HostIDs of servers the user accesses, to ensure the user does not access HostIDs corresponding to compromised private keys.

The sfskey utility manages both user and server keys. It lets users control and configure their agents. Users can hand new private keys to their agents using sfskey, list keys the agent holds, and delete keys. sfskey will fetch keys from remote servers using SRP (see SRP). It lets users change their public keys on remote servers. Finally, sfskey can configure the agent for dynamic server authentication and revocation checking.

On the server side, the program sfssd spawns two subsidiary daemons, sfsrwsd and sfsauthd. If virtual hosts or multiple versions of the software are running, sfssd may spawn multiple instances of each daemon. sfssd listens for TCP connections on port 4. It then hands each connection off to one of the subsidiary daemons, depending on the self-certifying pathname and service requested by the client.

sfsrwsd is the server-side counterpart to sfsrwcd. It communicates with client-side sfsrwcd processes using the SFS file system protocol, and accesses the local disk by acting as a client of the local operating system's NFS server. sfsrwsd is the one program in SFS that must be configured before you run it (see sfsrwsd_config).

sfsauthd handles user authentication. It communicates directly with sfsrwsd to authenticate users of the file system. It also accepts connections over the network from sfskey to let users download their private keys or change their public keys.

SFS configuration files

SFS comprises a number of programs, many of which have configuration files. All programs look for configuration files in two directories--first /etc/sfs, then, if they don't find the file there, in /usr/local/share/sfs. You can change these locations using the --with-etcdir and --with-datadir options to the configure command (see configure).

The SFS software distribution installs reasonable defaults in /usr/local/share/sfs for all configuration files except sfsrwsd_config. On particular hosts where you wish to change the default behavior, you can override the default configuration file by creating a new file of the same name in /etc/sfs.

The sfs_config file contains system-wide configuration parameters for most of the programs comprising SFS. Note that /usr/local/share/sfs/sfs_config is always parsed, even if /etc/sfs/sfs_config exists. Options in /etc/sfs/sfs_config simply override the defaults in /usr/local/share/sfs/sfs_config. For the other configuration files, a file in /etc/sfs entirely overrides the version in /usr/local.

If you are running a server, you will need to create an sfsrwsd_config file to tell SFS what directories to export, and possibly an sfsauthd_config if you wish to share the database of user public keys across several file servers.

The sfssd_config file contains information about which protocols and services to route to which daemons on an SFS server, including support for backwards compatibility across several versions of SFS. You probably don't need to change this file.

sfs_srp_params contains some cryptographic parameters for retrieving keys securely over the network with a passphrase (as with the sfskey add user@server command).

sfscd_config contains information about extensions to the SFS protocol and which kinds of file servers to route to which daemons. You almost certainly should not touch this file unless you are developing new versions of the SFS software.

Note that configuration command names are case-insensitive in all configuration files (though the arguments are not).

sfs_config--system-wide configuration parameters

The sfs_config file lets you set the following system-wide parameters:

sfsdir directory
The directory in which SFS stores its working files. The default is /var/sfs, unless you changed this with the --with-sfsdir option to configure.
sfsuser sfs-user [sfs-group]
As described in Building, SFS needs its own user and group to run. This configuration directive lets you set the user and group IDs SFS should use. By default, sfs-user is sfs and sfs-group is the same as sfs-user. The sfsuser directive lets you supply either a user and group name, or numeric IDs to change the default. Note: If you change sfs-group, you must make sure the program /usr/local/lib/sfs/suidconnect is setgid to the new sfs-group.
anonuser {user | uid gid}
Specifies an unprivileged user id to be used for anonymous file access. If specified as user, the name user will be looked up in the password file, and the login group of that user used as the group id. Can alternatively be specified as a numeric uid and gid. The default is to use -1 for both the uid and gid, though the default sfs_config file specifies the user name nobody.
ResvGids low-gid high-gid
SFS lets users run multiple instances of the sfsagent program. However, it needs to modify processes' group lists so as to know which file system requests correspond to which agents. The ResvGids directive gives SFS a range of group IDs it can use to tag processes corresponding to a particular agent. (Typically, a range of 16 gids should be plenty.) Note that the range is inclusive--both low-gid and high-gid are considered reserved gids.

The setuid root program /usr/local/lib/sfs/newaid lets users take on any of these group IDs. Thus, make sure these groups are not used for anything else, or you will create a security hole. There is no default for ResvGids.

PubKeySize bits
Sets the default number of bits in a public key. The default value of bits is 1280.
PwdCost cost
Sets the computational cost of processing a user-chosen password. SFS uses passwords to encrypt users' private keys. Unfortunately, users tend to choose poor passwords. As computers get faster, guessing passwords gets easier. By increasing the cost parameter, you can maintain the cost of guessing passwords as hardware improves. cost is an exponential parameter. The default value is 7. You probably don't want anything larger than 10. The maximum value is 32--at which point password hashing will not terminate in any tractable amount of time and the sfskey command will be unusable.
LogPriority facility.level
Sets the syslog facility and level at which SFS should log activity. The default is daemon.notice.
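
As an illustration, an /etc/sfs/sfs_config that overrides a few of these defaults might read as follows (the ResvGids range and PwdCost value are examples only; choose values appropriate for your site):

sfsdir /var/sfs
sfsuser sfs sfs
anonuser nobody
ResvGids 15000 15015
PubKeySize 1280
PwdCost 8
LogPriority daemon.notice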

sfsrwsd_config--File server configuration

Hostname name
Set the Location part of the server's self-certifying pathname. The default is the current host's fully-qualified hostname.
Keyfile path
Tells sfsrwsd to look for its private key in file path. The default is sfs_host_key. SFS looks for file names that do not start with / in /etc/sfs, or whatever directory you specified if you used the --with-etcdir option to configure (see configure).
Export local-directory sfs-name [R|W]
Tells sfsrwsd to export local-directory, giving it the name sfs-name with respect to the server's self-certifying pathname. Appending R to an export directive gives anonymous users read-only access to the file system (under user ID -2 and group ID -2). Appending W gives anonymous users both read and write access. See Quick server setup, for an example of the Export directive.

There is almost no reason to use the W flag. The R flag lets anyone on the Internet issue NFS calls to your kernel as user -2. SFS filters these calls; it makes sure that they operate on files covered by the export directive, and it blocks any calls that would modify the file system. This approach is safe given a perfect NFS3 implementation. If, however, there are bugs in your NFS code, attackers may exploit them if you have the R option--probably just crashing your server but possibly doing worse.

LeaseTime seconds
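
Tying these directives together, a minimal sfsrwsd_config for the server described in Quick server setup might read as follows (the Hostname is a placeholder, and the R flag granting anonymous read-only access to the exported root is included purely as an illustration--see the warning above):

Hostname server.example.com
Keyfile sfs_host_key
Export /var/sfs/root / R
Export /disk/u1 /usr1
Export /disk/u2 /usr2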

sfsauthd_config--User-authentication daemon configuration

Hostname name
Set the Location part of the server's self-certifying pathname. The default is the current host's fully-qualified hostname.
Keyfile path
Tells sfsauthd to look for its private key in file path. The default is sfs_host_key. SFS looks for file names that do not start with / in /etc/sfs, or whatever directory you specified if you used the --with-etcdir option to configure (see configure).
Userfile [-ro|-reg] [-pub=pubpath] [-mapall=user] path
This specifies a file in which sfsauthd should look for user public keys when authenticating users. You can specify multiple Userfile directives to use multiple files. This can be useful in an environment where most user accounts are centrally maintained, but a particular server has a few locally-maintained guest (or root) accounts.

Userfile has the following options:

-ro
Specifies a read-only user database--typically a file on another SFS server. sfsauthd will not allow users in a read-only database to update their public keys. It also assumes that read-only databases reside on other machines. Thus, it maintains local copies of read-only databases in /var/sfs/authdb. This process ensures that temporarily unavailable file servers never disrupt sfsauthd's operation.
-reg
Allows users who do not exist in the database to register initial public keys by typing their UNIX passwords. See sfskey register, for details on this. At most one Userfile can have the -reg option. -reg and -ro are mutually exclusive.
-pub=pubpath
sfsauthd supports the secure remote password protocol, or SRP. SRP lets users connect securely to sfsauthd with their passwords, without needing to remember the server's public key. To prove its identity through SRP, the server must store secret data derived from a user's password. The file path specified in Userfile contains these secrets for users opting to use SRP. The -pub option tells sfsauthd to maintain in pubpath a separate copy of the database without secret information. pubpath might reside on an anonymously readable SFS file system--other machines can then import the file as a read-only database using the -ro option.
-mapall=user
Map every entry in the user database to the local user user, regardless of the actual credentials specified by the file.

If no Userfile directive is specified, sfsauthd uses the following default (again, unqualified names are assumed to be in /etc/sfs):

Userfile -reg -pub=sfs_users.pub sfs_users

SRPfile path
Where to find default parameters for the SRP protocol. The default is sfs_srp_params.
Denyfile path
Specifies a file listing users who should not be able to register public keys with sfskey register. The default is sfs_deny.
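
As a sketch of how these directives combine (all pathnames and host names below are hypothetical), a central server maintaining the master user database might use an sfsauthd_config containing:

Userfile -reg -pub=/disk/u1/sfs/sfs_users.pub sfs_users

while a second file server that imports the central database read-only over SFS and keeps a small locally-maintained database of its own might use:

Userfile -reg sfs_users.local
Userfile -ro /sfs/central.example.com:HostID/sfs_users.pub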

sfs_users--User-authentication database

The sfs_users file, maintained and used by the sfsauthd program, maps public keys to local users. It is roughly analogous to the Unix /etc/passwd file. Each line of sfs_users has the following format:

user:public-key:credentials:SRP-info:private-key
user
user is the unique name of a public key in the database. Ordinarily it is the same as a user-name in the local password file. However, in certain cases it may be useful to map multiple public keys to the same local account (for instance if several people have an account with root privileges). In such cases, each key should be given a unique name (e.g., dm/root, kaminsky/root, etc.).
public-key
public-key is simply the user's public key. A user must possess the corresponding private key to authenticate himself to servers.
credentials
credentials specifies the credentials associated with a particular SFS public key. It is simply a local username to be looked up in the Unix password and group databases. Ordinarily, credentials should be the same as user unless multiple keys need to be mapped to the same credentials.
SRP-info
SRP-info is the server-side information for the SRP protocol (see SRP). Unlike the previous fields, this information must be kept secret. If the information is disclosed, an attacker may be able to impersonate the server by causing the sfskey add command to fetch the wrong HostID. Note also that SRP-info is specific to a particular hostname. If you change the Location of a file server, users will need to register new SRP-info.
private-key
private-key is actually opaque to sfsauthd. It is private, per-user data that sfsauthd will return to users who successfully complete the SRP protocol. Currently, sfskey uses this field to store an encrypted copy of a user's private key, allowing the user to retrieve the private key over the network.
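
For example, a database with one ordinary user key and one extra key mapped to root might contain entries of the following form (the key, SRP, and private-key fields are shown as placeholders because real entries hold long opaque strings):

dm:<public-key>:dm:<SRP-info>:<encrypted-private-key>
dm/root:<public-key>:root:<SRP-info>:<encrypted-private-key>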

sfssd_config--Meta-server configuration

sfssd_config configures sfssd, the server that accepts connections for sfsrwsd and sfsauthd. sfssd_config can be used to run multiple "virtual servers", or to run several versions of the server software for compatibility with old clients.

Directives are:

BindAddr ip-addr [port]
Specifies the IP address and port on which sfssd should listen for TCP connections. The default is INADDR_ANY for the address and port 4.
RevocationDir path
Specifies the directory in which sfssd should search for revocation/redirection certificates when clients connect to unknown (potentially revoked) self-certifying pathnames. The default value is /var/sfs/srvrevoke. Use the command sfskey revokegen to generate revocation certificates.
HashCost bits
Specifies that clients must pay for connections by burning CPU time. This can help reduce the effectiveness of denial-of-service attacks. The default value is 0. The maximum value is 22.
Server {* | Location[:HostID]}
Begins a section of the file that applies to connection requests for the self-certifying pathname Location:HostID. If :HostID is omitted, then the following lines apply to any connection that does not match an explicit HostID in another Server directive. The argument * applies to all clients who do not have a better match for either Location or HostID.
Release {* | sfs-version}
Begins a section of the file that applies to clients running SFS release sfs-version or older. * signifies arbitrarily large SFS release numbers. The Release directive does not do anything on its own, but applies to all subsequent Service directives until the next Release or Server directive.
Extensions ext1 [ext2 ...]
Specifies that subsequent Service directives apply only to clients that supply all of the listed extension strings (ext1, ...). An Extensions directive remains in effect until the next Extensions, Release, or Server directive.
Service srvno daemon [arg ...]
Specifies the daemon that should handle clients seeking service number srvno. SFS defines the following values of srvno:
1. File server
2. Authentication server
3. Remote execution (not yet released)
4. SFS/HTTP (not yet released)

The default contents of sfssd_config is:

Server *
  Release *
      Service 1 sfsrwsd
      Service 2 sfsauthd

To run a different server for sfs-0.3 and older clients, you could add the lines:

  Release 0.3
    Service 1 /usr/local/lib/sfs-0.3/sfsrwsd

sfs_srp_params--Default parameters for SRP protocol

Specifies a "strong prime" and a generator for use in the SRP protocol. SFS ships with a particular set of parameters because generating new ones can take a considerable amount of CPU time. You can replace these parameters with randomly generated ones using the sfskey srpgen -b bits command.

Note that SRP parameters can afford to be slightly shorter than Rabin public keys, both because SRP is based on discrete logs rather than factoring, and because SRP is used for authentication, not secrecy. 1,024 is a good value for bits even if PubKeySize is slightly larger in sfs_config.
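
For example, to generate fresh 1,024-bit parameters and install them in the default location, you might run:

% sfskey srpgen -b 1024 /etc/sfs/sfs_srp_params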

sfscd_config--Meta-client configuration

The sfscd_config file is really part of the SFS protocol specification. If you change it, you will no longer be executing the SFS protocol. Nonetheless, you may need to do so to innovate, and SFS was designed to make implementing new kinds of file systems easy.

sfscd_config takes the following directives:

Extension string
Specifies that sfscd should send string to all servers to advertise that it runs an extension of the protocol. Most servers will ignore string, but those that support the extension can pass off the connection to a new "extended" server daemon. You can specify multiple Extension directives.
Protocol name daemon [arg ...]
Specifies that pathnames of the form /sfs/name:anything should be handled by the client daemon daemon. name may not contain any non-alphanumeric characters. The Protocol directive is useful for implementing file systems that are not mounted on self-certifying file systems.
Release {* | sfs-version}
Begins a section of the file that applies to servers running SFS release sfs-version or older. * signifies arbitrarily large SFS release numbers. The Release directive does not do anything on its own, but applies to all subsequent Program directives until the next Release directive.
Libdir path
Specifies where SFS should look for daemon programs when their pathnames do not begin with /. The default is /usr/local/lib/sfs-0.5. The Libdir directive does not do anything on its own, but applies to all subsequent Program directives until the next Libdir or Release directive.
Program prog.vers daemon [arg ...]
Specifies that connections to servers running Sun RPC program number prog and version vers should be handed off to the local daemon daemon. SFS currently defines two RPC program numbers. Ordinary read-write servers use program number 344444, version 3 (a protocol very similar to NFS3), while read-only servers use program 344446, version 1. The read-only code has not been released yet. The Program directive must be preceded by a Release directive.

The default sfscd_config file is:

Release *
  Program 344444.3 sfsrwcd

To run a different set of daemons when talking to sfs-0.3 or older servers, you could add the following lines:

Release 0.3
  Libdir /usr/local/lib/sfs-0.3
  Program 344444.3 sfsrwcd

Command reference guide

sfsagent reference guide

sfsagent is the program users run to authenticate themselves to remote file servers, to create symbolic links in /sfs on the fly, and to look for revocation certificates. Many of the features in sfsagent are controlled by the sfskey program and described in the sfskey documentation.

Ordinarily, a user runs sfsagent at the start of a session. sfsagent runs sfskey add to obtain a private key. As the user touches each SFS file server for the first time, the agent authenticates the user to the file server transparently using the private key it has. At the end of the session, the user should run sfskey kill to kill the agent.

The usage is as follows:

sfsagent [-dnkF] -S sock [-c [prog [arg ...]] | keyname]
-d
Stay in the foreground rather than forking and going into the background
-n
Do not attempt to communicate with the SFS file system. This can be useful for debugging, or for running an agent on a machine that is not running an SFS client. If you specify -n, you must also use the -S option, otherwise your agent will be useless as there will be no way to communicate with it.
-k
Atomically kill and replace any existing agent. Otherwise, if your agent is already running, sfsagent will refuse to run again.
-F
Allow forwarding. This will allow programs other than the file system to ask the agent to authenticate the user.
-S sock
Listen for connections from programs like sfskey on the Unix domain socket sock. Ordinarily sfskey connects to the agent through the client file system software, but it can use a named Unix domain socket as well.
-c [prog [arg ...]]
By default, sfsagent on startup runs the command sfskey add giving it whatever -t option and keyname you specified. This allows you to fetch your first key as you start or restart the agent. If you wish to run a different program, you can specify it using -c. You might, for instance, wish to run a shell-script that executes a sfskey add followed by several sfskey certprog commands.

sfsagent runs the program with the environment variable SFS_AGENTSOCK set to -0 and a Unix domain socket on standard input. Thus, when atomically killing and restarting the agent using -k, the commands run by sfsagent talk to the new agent and not the old.

If you don't wish to run any program at all when starting sfsagent, simply supply the -c option with no prog. This will start a new agent that has no private keys.
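
As a sketch of the kind of startup script mentioned above (the user name, server, and file locations here are hypothetical), you could place the following in $HOME/.sfs/startup and start your agent with sfsagent -c sh $HOME/.sfs/startup:

#!/bin/sh
# sfsagent runs this with SFS_AGENTSOCK already set, so the
# sfskey commands below talk to the newly started agent.
sfskey add dm@server.example.com
sfskey certprog dirsearch $HOME/.sfs/known_hosts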

sfskey reference guide

The sfskey command performs a variety of key management tasks, from generating and updating keys to controlling users' SFS agents. The general usage for sfskey is:

sfskey [-S sock] [-p pwfd] command [arg ...]

-S specifies a UNIX domain socket sfskey can use to communicate with your sfsagent. If sock begins with -, the remainder is interpreted as a file descriptor number. The default is to use the environment variable SFS_AGENTSOCK if that exists. If not, sfskey asks the file system for a connection to the agent.

The -p option specifies a file descriptor from which sfskey should read a passphrase, if it needs one, instead of attempting to read it from the user's terminal. This option may be convenient for scripts that invoke sfskey. For operations that need multiple passphrases, you must specify the -p option multiple times, once for each passphrase.

sfskey add [-t [hrs:]min] [keyfile]
sfskey add [-t [hrs:]min] [user]@hostname
The add command loads and decrypts a private key, and gives the key to your agent. Your agent will use it to try to authenticate you to any file systems you reference. The -t option specifies a timeout after which the agent should forget the private key.

In the first form of the command, the key is loaded from file keyfile. The default for keyfile, if omitted, is $HOME/.sfs/identity.

The second form of the command fetches a private key over the network using the SRP protocol. SRP lets users establish a secure connection to a server without remembering its public key. Instead, to prove their identities to each other, the user remembers a secret password and the server stores a one-way function of the password (also a secret). SRP addresses the fact that passwords are often poorly chosen; it ensures that an attacker impersonating one of the two parties cannot learn enough information to mount an off-line password guessing attack--in other words, the attacker must interact with the server or user on every attempt to guess the password.

The sfskey update and register commands let users store their private keys on servers, and retrieve them using the add command. The private key is stored in encrypted form, using the same password as the SRP protocol (a safe design as the server never sees any password-equivalent data).

Because the second form of sfskey add establishes a secure connection to a server, it also downloads the server's HostID securely and creates a symbolic link from /sfs/hostname to the server's self-certifying pathname.

When invoking sfskey add with the SRP syntax, sfskey will ask for the user's password with a prompt of the following form:

Passphrase for user@servername/nbits:

user is simply the username of the key being fetched from the server. servername is the name of the server on which the user registered his SRP information. It may not be the same as the hostname argument to sfskey if the user has supplied a hostname alias (or CNAME) to sfskey add. Finally, nbits is the size of the prime number used in the SRP protocol. Higher values are more secure; 1,024 bits should be adequate. However, users should expect always to see the same value for nbits (otherwise, someone may be trying to impersonate the server).

sfskey certclear
Clears the list of certification programs the agent runs. See certprog, for more details on certification programs.
sfskey certlist [-q]
Prints the list of certification programs the agent runs. See certprog, for more details on certification programs.


sfskey certprog [-s suffix] [-f filter] [-e exclude] prog [arg ...]
The certprog command registers a command to be run to look up HostIDs on the fly in the /sfs directory. This mechanism can be used for dynamic server authentication--running code to look up HostIDs on demand. When you reference the file /sfs/name.suffix, your agent will run the command:
prog arg ... name

If the program succeeds and prints dest to its standard output, the agent will then create a symbolic link:

/sfs/name.suffix -> dest

If the -s flag is omitted, then neither . nor suffix gets appended to name. In other words, the link is /sfs/name -> dest. filter is a perl-style regular expression. If it is specified, then name must contain it for the agent to run prog. exclude is another regular expression, which, if specified, prevents the agent from running prog on names that contain it (regardless of filter).

The program dirsearch can be used with certprog to configure certification paths--lists of directories in which to look for symbolic links to HostIDs. The usage is:

dirsearch [-clpq] dir1 [dir2 ...] name

dirsearch searches through a list of directories dir1, dir2, ... until it finds one containing a file called name, then prints the pathname dir/name. If it does not find a file, dirsearch exits with a non-zero exit code. The following options affect dirsearch's behavior:

-c
Print the contents of the file to standard output, instead of its pathname.
-l
Require that dir/name be a symbolic link, and print the path of the link's destination, rather than the path of the link itself.
-p
Print the path dir/name. This is the default behavior anyway, so the option -p has no effect.
-q
Do not print anything. Exit abnormally if name is not found in any of the directories.

As an example, to look up self-certifying pathnames in the directories $HOME/.sfs/known_hosts and /mit, accepting only links in /mit whose names end in .mit.edu, you might execute the following commands:

% sfskey certprog dirsearch $HOME/.sfs/known_hosts
% sfskey certprog -f '\.mit\.edu$' dirsearch /mit

sfskey delete keyname
Deletes private key keyname from the agent (reversing the effect of an add command).
sfskey deleteall
Deletes all private keys from the agent.
sfskey edit [-P] [-o outfile] [-c cost] [-n name] [keyname]
Changes the passphrase, passphrase "cost", or name of a public key. Can also download a key from a remote server via SRP and store it in a file.

keyname can be a file name, or it can be of the form [user]@server, in which case sfskey will fetch the key remotely and outfile must be specified. If keyname is unspecified, the default is $HOME/.sfs/identity.

The options are:

-P
Removes any passphrase from the key, so that the key is stored on disk in unencrypted form.
-o outfile
Specifies the file to which the edited key should be written.
-c cost
Override the default computational cost of processing a password, or PwdCost, pwdcost.
-n name
Specifies the name of the key that shows up in sfskey list.

sfskey gen [-KP] [-b nbits] [-c cost] [-n name] [keyfile]
Generates a new public/private key pair and stores it in keyfile. If omitted, keyfile defaults to $HOME/.sfs/identity.
-K
By default, sfskey gen asks the user to type random text with which to seed the random number generator. The -K option suppresses that behavior.
-P
Specifies that sfskey gen should not ask for a passphrase and the new key should be written to disk in unencrypted form.
-b nbits
Specifies that the public key should be nbits long.
-c cost
Override the default computational cost of processing a password, or PwdCost, pwdcost.
-n name
Specifies the name of the key that shows up in sfskey list. Otherwise, the user will be prompted for a name.
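
For example, to generate a key of the default size with a particular name for sfskey list (the name shown is arbitrary), you might run:

% sfskey gen -b 1280 -n mykey $HOME/.sfs/identity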

sfskey help
Lists all of the various sfskey commands and their usage.
sfskey hostid hostname
sfskey hostid -
Retrieves a self-certifying pathname insecurely over the network and prints Location:HostID to standard output. If hostname is simply -, returns the name of the current machine, which is not insecure.
sfskey kill
Kill the agent.
sfskey list [-ql]
List the public keys whose private halves the agent holds.
-q
Suppresses the banner line explaining the output.
-l
Lists the actual values of public keys, in addition to the names of the keys.

sfskey norevokeset HostID ...

sfskey norevokelist

sfskey register [-KS] [-b nbits] [-c cost] [-u user] [key]
The sfskey register command lets users who are logged into an SFS file server register their public keys with the file server for the first time. Subsequent changes to their public keys can be authenticated with the old key, and must be performed using sfskey update. The superuser can also use sfskey register when creating accounts.

key is the private key to use. If key does not exist and is a pathname, sfskey will create it. The default key is $HOME/.sfs/identity, unless -u is used, in which case the default is to generate a new key but not store it anywhere. If a user wishes to reuse a public key already registered with another server, the user can specify user@server for key.

-K
-b nbits
-c cost
These options are the same as for sfskey gen. -K and -b have no effect if the key already exists.
-S
Do not register any SRP information with the server--this will prevent the user from using SRP to connect to the server, but will also prevent the server from gaining any information that could be used by an attacker to mount an off-line guessing attack on the user's password.
-u user
When sfskey register is run as root, specifies a particular user to register. This can be useful when creating accounts for people.

sfsauthd_config must have a Userfile with the -reg option to enable use of sfskey register (see sfsauthd_config).
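
For example, a user logged into the file server registers an initial key simply by running:

% sfskey register

while the superuser could create a key on behalf of a (hypothetical) user alice with:

% sfskey register -u alice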

sfskey reset
Clear the contents of the /sfs directory, including all symbolic links created by sfskey certprog and sfskey add, and log the user out of all file systems.

Note that this is not the same as deleting private keys held by the agent (use deleteall for that). In particular, the effect of logging the user out of all file systems will likely not be visible--the user will automatically be logged in again on-demand.

sfskey revokegen [-r newkeyfile [-n newhost]] [-o oldhost] oldkeyfile

sfskey revokelist

sfskey revokeclear

sfskey revokeprog [-b [-f filter] [-e exclude]] prog [arg ...]

sfskey srpgen [-b nbits] file
Generate a new sfs_srp_params file, sfs_srp_params.
sfskey update [-S | -s srp_params] [-a {server | -}] oldkey [newkey]
Change a user's public key and SRP information on an SFS file server. The default value for newkey is $HOME/.sfs/identity.

To change public keys, typically a user should generate a new public key and store it in $HOME/.sfs/identity. Then he can run sfskey update [user]@host for each server on which he needs to change his public key.

Several options control sfskey update's behavior:

-S
Do not send SRP information to the server--this will prevent the user from using SRP to connect to the server, but will also prevent the server from gaining any information that could be used by an attacker to mount an off-line guessing attack on the user's password.
-s
srp_params is the path of a file generated by sfskey srpgen, and specifies the parameters to use in generating SRP information for the server. The default is to get SRP parameters from the server, or look in /usr/local/etc/sfs/sfs_srp_params.
-a server
-a -
Specify the server on which to change the user's key. The server must be specified as Location:HostID. A server of - means to use the local host. You can specify the -a option multiple times, in which case sfskey will attempt to change oldkey to newkey on multiple servers in parallel.

If oldkey is the name of a remote key--i.e. of the form [user]@host--then the default value of server is to use whatever server successfully completes the SRP authentication protocol while fetching oldkey. Otherwise, if oldkey is a file, the -a option is mandatory.
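
For example, after generating a new key in $HOME/.sfs/identity, a user registered as dm (a hypothetical name) on two servers could run:

% sfskey update dm@server1.example.com
% sfskey update dm@server2.example.com

Because oldkey is fetched via SRP in each case, the server to update is chosen automatically and newkey defaults to $HOME/.sfs/identity.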

ssu command

The ssu command allows an unprivileged user to become root on the local machine without changing his SFS credentials. ssu invokes the command su to become root. Thus, the access and password checks needed to become root are identical to those of the local operating system's su command. ssu also runs /usr/local/lib/sfs-0.5/newaid to alter the group list so that SFS can recognize the root shell as belonging to the original user.

The usage is as follows:

ssu [-f | -m | -l | -c command]
-f
-m
These options are passed through to the su command.
-l
This option causes the newly spawned root shell to behave like a login shell.
-c command
Tells ssu to tell su to run command rather than running a shell.

Note that ssu does not work on some versions of Linux because of a bug in Linux. To see if this bug is present, run the command su root -c ps. If this command stops with a signal, your su command is broken and you cannot use ssu.

sfscd command

sfscd [-d] [-l] [-L] [-f config-file]

sfscd is the program that creates and serves the /sfs directory on a client machine. Ordinarily, you should not need to configure sfscd or give it any command-line options.

-d
Stay in the foreground and print messages to standard error rather than redirecting them to the system log.
-l
Ordinarily, sfscd will disallow access to a server running on the same host. If the Location in a self-certifying pathname resolves to an IP address of the local machine, any accesses to that pathname will fail with the error EDEADLK ("Resource deadlock avoided").

The reason for this behavior is that SFS is implemented using NFS. Many operating systems can deadlock when there is a cycle in the mount graph--in other words when two machines NFS mount each other, or, more importantly when a machine NFS mounts itself. To allow a machine to mount itself, you can run sfscd with the -l flag. This may in fact work fine and not cause deadlock on non-BSD systems.

-L
On Linux, the -L option disables a number of kludges that work around bugs in the kernel. -L is useful for people interested in improving Linux's NFS support.
-f config-file
Specify an alternate sfscd configuration file, sfscd_config. The default, if -f is unspecified, is first to look for /etc/sfs/sfscd_config, then /usr/local/etc/sfs/sfscd_config.

sfssd command

sfssd [-d] [-f config-file]

sfssd is the main server daemon run on SFS servers. sfssd itself does not serve any file systems. Rather, it acts as a meta-server, accepting connections on TCP port 4 and passing them off to the appropriate daemon. Ordinarily, sfssd passes all file system connections to sfsrwsd, and all user-key management connections to sfsauthd. However, the sfssd_config file (see sfssd_config) allows a great deal of customization, including support for "virtual servers," multiple versions of the SFS software coexisting, and new SFS-related services other than the file system and user authentication.

-d
Stay in the foreground and print messages to standard error rather than redirecting them to the system log.
-f config-file
Specify an alternate sfssd configuration file, sfssd_config. The default, if -f is unspecified, is first to look for /etc/sfs/sfssd_config, then /usr/local/etc/sfs/sfssd_config.

sfsrwsd command

/usr/local/lib/sfs-0.5/sfsrwsd [-f config-file]

sfsrwsd is the program implementing the SFS read-write server. Ordinarily, you should never run sfsrwsd directly, but rather have sfssd do so. Nonetheless, you must create a configuration file for sfsrwsd before running an SFS server. See sfsrwsd_config, for what to put in your sfsrwsd_config file.

-f config-file
Specify an alternate sfsrwsd configuration file, sfsrwsd_config. The default, if -f is unspecified, is /etc/sfs/sfsrwsd_config.

Security considerations

SFS shares files between machines using cryptographically protected communication. As such, SFS can help eliminate security holes associated with insecure network file systems and let users share files where they could not do so before.

That said, there will very likely be security holes that attackers can exploit because of SFS and could not have exploited otherwise. This chapter enumerates some of the security consequences of running SFS. The first section describes vulnerabilities that may result from the very existence of a global file system. The next section lists bugs potentially present in your operating system that may be much easier for attackers to exploit if you run SFS. Finally, the last section attempts to point out weak points of the SFS implementation that may lead to vulnerabilities in the SFS software itself.

Vulnerabilities created by SFS

Facilitating exploits

Many security holes can be exploited much more easily if the attacker can create an arbitrary file on your system. As a simple example, if a bug allows attackers to run any program on your machine, SFS allows them to supply the program somewhere under /sfs. Moreover, the file can have any numeric user and group (though of course, SFS disables setuid and devices).

. in path

Another potential problem is users putting the current working directory, ., in their PATH environment variables. If you are browsing a file system whose owner you do not trust, that owner can run arbitrary code as you by creating programs with names like ls in the directories you are browsing. Putting . in the PATH has always been a bad idea for security, but a global file system like SFS makes it much worse.

symbolic links from untrusted servers

Users need to be careful about using untrusted file systems as if they were trusted file systems. Any file system can name files in any other file system through symbolic links. Thus, when overwriting files in a file system you do not trust, you can be tricked by symbolic links into overwriting files on the local disk or on another SFS file system.

As an example of a seemingly appealing use of SFS that can cause problems, consider doing a cvs checkout from an untrusted CVS repository, so as to peruse someone else's source code. If you run cvs on a repository you do not trust, the person hosting the repository could replace the CVSROOT/history with a symbolic link to a file on some other file system, and cause you to append garbage to that file.

This cvs example may or may not be a problem. For instance, if you are about to compile and run the software anyway, you are already placing quite a bit of trust in the person running the CVS repository. The important thing to keep in mind is that for most uses of a file system, you are placing some amount of trust in the file server.

One way to cut down on trust is to access untrusted file servers under a different agent with different private keys (see ResvGids for how users can run multiple agents with the newaid command). Nonetheless, this still allows the remote file servers to serve symbolic links to the local file system in unexpected places.

Leaking information

Any user on the Internet can get the attributes of a local-directory listed in an Export directive (see export). This is so users can run commands like ls -ld on a self-certifying pathname in /sfs, even if they cannot change directory to that pathname or list files under it. If you wish to keep attribute information secret on a local-directory, you will need to export a higher directory. We may later reevaluate this design decision, though allowing such anonymous users to get attributes currently simplifies the client implementation.

Vulnerabilities exploitable because of SFS

NFS server security

The SFS read-write server software requires each SFS server to run an NFS server. Running an NFS server at all can constitute a security hole. In order to understand the full implications of running an SFS server, you must also understand NFS security.

NFS security relies on the secrecy of file handles. Each file on an exported file system has associated with it an NFS file handle (typically 24 to 32 bytes long). When mounting an NFS file system, the mount command on the client machine connects to a program called mountd on the server and asks for the file handle of the root of the exported file system. mountd enforces access control by refusing to return this file handle to clients not authorized to mount the file system.

Once a client has the file handle of a directory on the server, it sends NFS requests directly to the NFS server's kernel. The kernel performs no access control on the request (other than checking that the user the client claims to speak for has permission to perform the requested operation). The expectation is that all clients are trusted to speak for all users, and no machine can obtain a valid NFS file handle without being an authorized NFS client.

To prevent attackers from learning NFS file handles when using SFS, SFS encrypts all NFS file handles with a 20-byte key using the Blowfish encryption algorithm. Unfortunately, not all operating systems choose particularly good NFS file handles in the first place. Thus, attackers may be able to guess your file handles anyway. In general, NFS file handles contain the following 32-bit words: a file system ID (sometimes two words), the i-number of the file, and a generation number for that i-number.

In addition, NFS file handles can contain the i-number and the generation number of the directory through which the file system was exported.

Many of these words can be guessed outright by attackers without their needing to interact with any piece of software on the NFS server. For instance, the file system ID is often just the device number on which the physical file system resides. The i-number of the root directory in a file system is always 2. The i-number and generation number of the root directory can also be used as the i-number and generation number of the "exported directory".

On some operating systems, then, the only hard thing for an attacker to guess is the 32-bit generation number of some directory on the system. Worse yet, the generation numbers are sometimes not chosen with a good random number generator.

To minimize the risks of running an NFS server, you might consider taking the following precautions:

mountd -n

The mountd command takes a flag -n meaning "allow mount requests from unprivileged ports." Do not ever use this flag. Worse yet, some operating systems (notably HP-UX 9) always exhibit this behavior regardless of whether the -n flag has been specified.

The -n option to mountd allows any user on an NFS client to learn file handles and thus act as any other user. The situation gets considerably worse, however, when exporting file systems to localhost, as SFS requires: then everybody on the Internet can learn your NFS file handles, because the portmap command will forward mount requests and make them appear to come from localhost.

portmap forwarding

In order to support broadcast RPCs, the portmap program will relay RPC requests to the machine it is running on, making them appear to come from localhost. That can have disastrous consequences in conjunction with mountd -n as described previously. It can also be used to work around "read-mostly" export options by forwarding NFS requests to the kernel from localhost.

Operating systems are starting to ship with portmap programs that refuse to forward certain RPC calls, including mount and NFS requests. Wietse Venema has also written a portmap replacement with these properties, available from ftp://ftp.porcupine.org/pub/security/index.html. It is also a good idea to filter TCP and UDP port 111 (portmap) at your firewall, if you have one.
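
As a sketch of the firewall advice, using IP Filter (ipf) syntax and a hypothetical external interface fxp0 (translate to whatever packet filter you actually run):

     # Drop portmap traffic arriving on the external interface
     block in quick on fxp0 proto tcp from any to any port = 111
     block in quick on fxp0 proto udp from any to any port = 111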

Bugs in the NFS implementation

Many NFS implementations have bugs. Many of those bugs rarely surface when clients and servers with similar implementations talk to each other. Examples of bugs we've found include servers crashing when they receive a write request for an odd number of bytes, clients crashing when they receive the error NFS3ERR_JUKEBOX, and clients using uninitialized memory when the server returns a lookup3resok data structure with obj_attributes having attributes_follow set to false.

SFS allows potentially untrusted users to formulate NFS requests (though of course SFS requires file handles to decrypt correctly and stamps the request with the appropriate Unix uid/gid credentials). This may let bad users crash your server's kernel (or worse). Similarly, bad servers may be able to crash a client.

As a precaution, you may want to be careful about exporting any portion of a file system to anonymous users with the R or W options to Export (see export). When analyzing your NFS code for security, you should know that even anonymous users can make the following NFS RPCs on a local-directory in your sfsrwsd_config file: NFSPROC3_GETATTR, NFSPROC3_ACCESS, NFSPROC3_FSINFO, and NFSPROC3_PATHCONF.

On the client side, a bad, non-root user in collusion with a bad file server can possibly crash or deadlock the machine. Many NFS client implementations have inadequate locking that could lead to race conditions. Other implementations make assumptions about the hierarchical nature of a file system served by the server. By violating these assumptions (for example having two directories on a server each contain the other), a user may be able to deadlock the client and create unkillable processes.

logger buffer overrun

SFS pipes log messages through the logger program to get them into the system log. SFS can generate arbitrarily long lines. If your logger does something stupid like call gets, it may suffer a buffer overrun. We assume no one does this, but feel the point is worth mentioning, since not all logger programs come with source.

To avoid using logger, you can run sfscd and sfssd with the -d flag and redirect standard error wherever you wish manually.
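
For example (the log file names are arbitrary), you might start the daemons with -d and capture their standard error yourself:

     # With -d the daemons write log messages to standard error instead of
     # piping them through logger, so redirect it wherever you like.
     sfscd -d 2>>/var/log/sfscd.log &
     sfssd -d 2>>/var/log/sfssd.log &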

Vulnerabilities in the SFS implementation

Resource exhaustion

The best way to attack the SFS software is probably to cause resource exhaustion. You can try to run SFS out of file descriptors, memory, CPU time, or mount points.

An attacker can run a server out of file descriptors by opening many parallel TCP connections. Such attacks can be detected using the netstat command to see who is connecting to SFS (which accepts connections on TCP port 4). Users can run the client (and also sfsauthd) out of descriptors by connecting many times using the setgid program /usr/local/lib/sfs-0.5/suidconnect. These attacks can be traced using a tool like lsof, available from ftp://vic.cc.purdue.edu/pub/tools/unix/lsof.
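
For instance (BSD-style netstat output assumed, and the pattern is deliberately crude), established connections to the SFS port can be listed with:

     # Show established TCP connections involving the SFS port (TCP port 4)
     netstat -an | grep ESTABLISHED | grep '\.4 '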

SFS enforces a maximum size of just over 64 K on all RPC requests. Nonetheless, a client could connect 1,000 times, on each connection send the first 64 K of a slightly larger message, and just sit there. That would consume about 64 megabytes of memory, as SFS will wait patiently for the rest of each request.

A worse problem is that SFS servers do not currently flow-control clients. Thus, an attacker could make many RPCs but not read the replies, causing the SFS server to buffer arbitrarily much data and run out of memory. (Obviously the server eventually flushes any buffered data when the TCP connection closes.)

Connecting to an SFS server costs the server tens of milliseconds of CPU time. An attacker can try to burn a huge amount of the server's CPU time by connecting to the server many times. The effects of such attacks can be mitigated using hashcash (see HashCost).

Finally, a user on a client can cause a large number of file systems to be mounted. If the operating system has a limit on the number of mount points, a user could run the client out of mount points.

Non-idempotent operations

If a TCP connection is reset, the SFS client will attempt to reconnect to the server and retransmit whatever RPCs were pending at the time the connection dropped. Not all NFS RPCs are idempotent, however. Thus, an attacker who caused a connection to reset at just the right time could, for instance, cause a mkdir command to return EEXIST when in fact it had just created the directory.

Injecting packets on the loopback interface

SFS exchanges NFS traffic with the local operating system using the loopback interface. An attacker with physical access to the local Ethernet may be able to inject arbitrary packets into a machine, including packets to 127.0.0.1. Without packet filtering in place, an attacker can also send packets from anywhere and make them appear to come from 127.0.0.1.

On the client, an attacker can forge NFS requests from the kernel to SFS, or forge replies from SFS to the kernel. The SFS client encrypts file handles before giving them to the operating system. Thus, the attacker is unlikely to be able to forge a request from the kernel to SFS that contains a valid file handle. In the other direction, however, a reply does not need to contain a file handle, and the attacker may well be able to convince the kernel to accept a forged reply from SFS. The attacker only needs to guess a (possibly quite predictable) 32-bit RPC XID number. Such an attack could result, for example, in a user getting the wrong data when reading a file.

On the server side, you also must assume the attacker cannot guess a valid NFS file handle (otherwise, you already have no security--see NFS security). However, the attacker might again forge NFS replies, this time from the kernel to the SFS server software.

To prevent such attacks, if your operating system has IP filtering, it would be a good idea to block any packets either from or to 127.0.0.1 if those packets do not come from the loopback interface. Blocking traffic "from" 127.0.0.1 at your firewall is also a good idea.
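
Again in IP Filter (ipf) syntax with a hypothetical external interface fxp0, the idea looks like this:

     # Loopback addresses should never appear on a real interface
     block in quick on fxp0 from 127.0.0.0/8 to any
     block in quick on fxp0 from any to 127.0.0.0/8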

Causing deadlock

On BSD-based systems (and possibly others) the buffer reclaiming policy can cause deadlock. When an operation needs a buffer and there are no clean buffers available, the kernel picks some particular dirty buffer and won't let the operation complete until it can get that buffer. This can lead to deadlock in the case that two machines mount each other.

Getting private file data from public workstations

An attacker may be able to read the contents of a private file shortly after you log out of a public workstation if he can then become root on the workstation. Two attacks are possible.

First, the attacker may be able to read data out of physical memory or from the swap partition of the local disk. File data may still be in memory if the kernel's NFS3 code has cached it in the buffer cache. There may also be fragments of file data in the memory of the sfsrwcd process, or out on disk in the swap partition (though sfsrwcd does its best to avoid getting paged out). The attacker can read any remaining file contents once he gains control of the machine.

Alternatively, the attacker may have recorded encrypted session traffic between the client and server. Once he gains control of the client machine, he can attach to the sfsrwcd process with a debugger and learn the session key if the session is still open. This will let him decrypt the session traffic he recorded.

To minimize the risks of these attacks, you must kill and restart sfscd before turning control of a public workstation over to another user. Even this is not guaranteed to fix the problem. It will flush file blocks from the buffer cache by unmounting all file systems, for example, but the contents of those blocks may persist as uninitialized data in buffers sitting on the free list. Similarly, any programs you ran that manipulated private file data may have gotten paged out to disk, and the information may live on after the processes exit.

In conclusion, if you are paranoid, it is best not to use public workstations.

Setuid programs and devices on remote file systems

SFS does its best to disable setuid programs and devices on remote file servers it mounts. However, we have only tested this on operating systems we have access to. When porting SFS to new platforms, it is worth testing that both setuid programs and devices do not work over SFS. Otherwise, any user of an SFS client can become root.
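
A quick manual test along these lines (the paths and the self-certifying pathname are placeholders) should show the setuid bit being ignored on the client:

     # On the server, as root, inside an exported directory:
     cp /usr/bin/id /var/sfs/root/tmp/suid-id
     chmod 4755 /var/sfs/root/tmp/suid-id
     # On the client, as an ordinary user, through SFS:
     /sfs/server.example.org:hostid.../tmp/suid-id
     # If the output contains euid=0, setuid programs are not being disabled.
     # Devices deserve an analogous test: a device node created with mknod on
     # the server should not be usable from the client.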

How to contact people involved with SFS

Please report any bugs you find in SFS to sfsbug@redlab.lcs.mit.edu.

You can send mail to the authors of SFS at sfs-dev@pdos.lcs.mit.edu.

There is also a mailing list of SFS users and developers at sfs@sfs.fs.net. To subscribe to the list, send mail to sfs-subscribe@sfs.fs.net.

Concept Index