sfs_config--system-wide configuration parameters
sfsrwsd_config--File server configuration
sfsauthd_config--User-authentication daemon configuration
sfs_hosts--Host to address mapping overriding DNS
sfs_users--User-authentication database
sfssd_config--Meta-server configuration
sfs_srp_params--Default parameters for SRP protocol
sfscd_config--Meta-client configuration
SFS is a network file system that lets you access your files from anywhere and share them with anyone anywhere. SFS was designed with three goals in mind:
All files in SFS live under a single directory, /sfs. The contents of
that directory are identical on every client in the world. Clients have
no notion of administrative realm and no site-specific configuration
options. Servers grant access to users, not to clients. Thus, users can
access their files wherever they go, from any machine they trust that
runs the SFS client software.
SFS achieves these goals by separating key management from file system
security. It names file systems by the equivalent of their public keys.
Every remote file server is mounted under a directory of the form:
/sfs/@Location,HostID
or:
/sfs/@Location%port,HostID
Location is a DNS hostname or an IP address. HostID is a collision-resistant cryptographic hash of the file server's public key. port is an optional TCP port number (the default is 4). This naming scheme lets an SFS client authenticate a server given only a file name, freeing the client from any reliance on external key management mechanisms. SFS calls the directories on which it mounts file servers self-certifying pathnames.
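For illustration, the components of such a name can be pulled apart with ordinary shell string operations. This is only a sketch of the naming syntax; the HostID below is the one used by the SFS test server shown later in this manual:

```shell
# Split a self-certifying pathname /sfs/@Location[%port],HostID into its
# parts with POSIX parameter expansion (illustrative sketch only).
path="/sfs/@sfs.fs.net,uzwadtctbjb3dg596waiyru8cx5kb4an"
spec=${path#/sfs/@}      # strip the /sfs/@ prefix
location=${spec%%,*}     # Location (and optional %port) before the comma
hostid=${spec#*,}        # HostID after the comma
port=4                   # the default TCP port when no %port is given
case $location in
*%*)
    port=${location##*%}             # text after the last %
    location=${location%"%$port"}    # drop the %port suffix
    ;;
esac
echo "$location $port $hostid"
```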
Self-certifying pathnames let users authenticate servers through a number of different techniques. As a secure, global file system, SFS itself provides a convenient key management infrastructure. Symbolic links let the file namespace double as a key certification namespace. Thus, users can realize many key management schemes using only standard file utilities. Moreover, self-certifying pathnames let people bootstrap one key management mechanism using another, making SFS far more versatile than any file system with built-in key management.
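As a concrete sketch of links doubling as key certificates, the commands below create such a link in a temporary directory (a stand-in for a user's own directory; the target is the test server's pathname used later in this manual, and any self-certifying pathname works the same way):

```shell
# A plain symbolic link can certify a server's public key: its target is
# the server's self-certifying pathname, managed with standard utilities.
dir=$(mktemp -d)   # stand-in for a directory the user controls
ln -s "/sfs/@sfs.fs.net,uzwadtctbjb3dg596waiyru8cx5kb4an" "$dir/sfstest"
readlink "$dir/sfstest"
```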
Through a modular implementation, SFS also pushes user authentication out of the file system. Untrusted user processes transparently authenticate users to remote file servers as needed, using protocols opaque to the file system itself.
Finally, SFS separates key revocation from key distribution. Thus, the flexibility SFS provides in key management in no way hinders recovery from compromised keys.
No caffeine was used in the original production of the SFS software.
This section describes how to build and install SFS on your system. If
you are too impatient to read the details, be aware of the two most
important points: SFS needs solid NFS3 support in your operating system,
and you must create an sfs user and an sfs group on your
system (see --with-sfsuser to use a name other than sfs).
SFS should run with minimal porting on any system that has solid NFS3 support. We have run SFS successfully on OpenBSD, FreeBSD, Linux, OSF/1 4.0, and Solaris 5.7.
In order to compile SFS, you will need the following:

The GNU multiple precision arithmetic library, gmp, including the
gmp.h header file. Even if you
have libgmp.so, if you don't have /usr/include/gmp.h, you need to
install gmp on your system. Note that more recent versions (4.0 and
above) allow SFS to run significantly faster than previous ones did.

System header files in /usr/include that match the kernel you are
running. Particularly on Linux, where the kernel and user-land utilities
are maintained separately, it is easy to patch the kernel without
installing the correspondingly patched system header files in
/usr/include. SFS needs to see the patched header files to
compile properly.
Once you have set up your system as described in Requirements, you are ready to build SFS.
First, create a user and a group named sfs. For instance, you might add
the following line to /etc/passwd:
sfs:*:71:71:Self-certifying file system:/:/bin/true
And the following line to /etc/group:
sfs:*:71:
Do not put any users in sfs-group, not even root. Any
user in sfs-group will not be able to make regular use of the
SFS file system. Moreover, having any unprivileged users in
sfs-group causes a security hole.
% gzip -dc sfs-0.8pre.tar.gz | tar xvf -
% cd sfs-0.8pre
If you determined that you need gmp (see Requirements), you should
unpack gmp into the top-level of the SFS source tree:
% gzip -dc ../gmp-2.0.2.tar.gz | tar xvf -
Optionally, set the CC and CXX environment variables to point to the C
and C++ compilers you wish to use to compile SFS. Some operating systems
do not come with a recent enough version of gcc (see Requirements).
Run ./configure. You may additionally specify the following
options:
--with-sfsuser=sfs-user
Specifies the name of the unprivileged sfs user; the default
is sfs. Do not use an
existing account for sfs-user--even a trusted account--as
processes running with that user ID will not be able to access SFS.
[Note: If you later change your mind about sfs-user, you do not
need to recompile SFS; see sfs_config.]
--with-sfsgroup=sfs-group
Specifies the name of the sfs group; the default is the same as
sfs-user.
--with-gmp=gmp-path
Specifies where configure
should look for gmp (for example,
gmp-path might be /usr/local
). Note that if you unpacked gmp
into a subdirectory of the SFS source code, you do not need to specify
this option. configure
should notice the directory and
compile gmp automatically.
--with-sfsdir=sfsdir
Specifies the directory in which SFS stores its working files; the
default is /var/sfs. [You can change this later; see sfs_config.]
--with-etcdir=etcdir
Specifies the directory in which SFS looks for site-specific
configuration files; the default is /etc/sfs.
--datadir=datadir
Specifies the directory under which SFS installs its default
configuration files (in a subdirectory sfs); the default is
/usr/local/share.
configure
accepts all the traditional GNU configuration
options such as --prefix
. It also has several options that
are only for developers. Do not use the
--enable-repo
or --enable-shlib
options (unless you
are a gcc maintainer looking for some wicked test cases for your
compiler). Also, do not use the --with-openssl
option; it is only for use by the developers in compiling some
benchmark code that is not part of the release.
To build SFS, run make.

To install SFS, run make install. If you are short
on disk space, you can alternatively install stripped binaries by
running make install-strip.

Finally, to start the SFS client, run sfscd.
The most common problem you will encounter is an internal compiler error
from gcc. If you are not running gcc-2.95.2 or later, you will very
likely experience internal compiler errors when building SFS and will
need to upgrade the compiler. You must run make clean after
upgrading the compiler; you cannot link together object files that have
been created by different versions of the C++ compiler.
On OSF/1 for the alpha, certain functions using a gcc extension called
__attribute__((noreturn))
tend to cause internal compiler errors.
If you experience internal compiler errors when compiling SFS for the
alpha, try building with the command make
ECXXFLAGS='-D__attribute__\(x\)='
instead of simply make
.
Sometimes, a particular source file will give particularly stubborn
internal compiler errors on some architectures. These can be very hard
to work around by just modifying the SFS source code. If you get an
internal compiler error you cannot obviously fix, try compiling the
particular source file with a different level of debugging. (For
example, using a command like make sfsagent.o CXXDEBUG=-g
in the
appropriate subdirectory.)
If your /tmp
file system is too small, you may also end up
running out of temporary disk space while compiling SFS. Set your
TMPDIR
environment variable to point to a directory on a file
system with more free space (e.g., /var/tmp
).
You may need to increase your heap size for the compiler to work. If
you use a csh-derived shell, run the command unlimit datasize
.
If you use a Bourne-like shell, run ulimit -d `ulimit -H -d`
.
On some operating systems, some versions of GMP do not install the
library properly. If you get linker errors about symbols with names
like ___gmp_default_allocate
, try running the command
ranlib /usr/local/lib/libgmp.a
(substituting wherever your GMP library is installed for
/usr/local
).
This chapter gives a brief overview of how to set up an SFS client and server once you have compiled and installed the software.
SFS clients require no configuration. Simply run the program
sfscd
, and a directory /sfs
should appear on your
system. To test your client, access our SFS test server. Type the
following commands:
% cd /sfs/@sfs.fs.net,uzwadtctbjb3dg596waiyru8cx5kb4an
% cat CONGRATULATIONS
You have set up a working SFS client.
%
Note that the /sfs/@sfs.fs.net,...
directory does not need to
exist before you run the cd
command. SFS transparently mounts
new servers as you access them.
Setting up an SFS server is a slightly more complicated process. You must perform at least three steps:

1. Create a public/private key pair for your server.
2. Create an /etc/sfs/sfsrwsd_config configuration file.
3. Export the file systems you wish to serve to localhost via NFS version 3.
Before you begin, be sure that SFS can figure out your host's
fully-qualified domain name, and that the domain name exists in the
domain name system (DNS)--as opposed to just being some fake host
name listed in /etc/hosts
. SFS will use your host's
system name (returned by the hostname
command), and if that
is not fully-qualified, will append whatever default domain is
specified in /etc/resolv.conf
. If this does not result
in a valid DNS domain name, you can either reconfigure your system
such that hostname
returns a fully-qualified and valid DNS
domain name (recommended), or set the environment variable
SFS_HOSTNAME to the fully-qualified DNS name SFS should use
(see SFS_HOSTNAME). If you don't have a DNS name pointing to your
IP address, set SFS_HOSTNAME to the host's IP address.
Now, to create a public/private key pair for your server, run the
commands:
mkdir /etc/sfs
sfskey gen -P /etc/sfs/sfs_host_key
Then you must create an /etc/sfs/sfsrwsd_config
file based on
which local directories you wish to export and what names those
directories should have on clients. This information takes the form of
one or more Export
directives in the configuration file. Each
export directive is a line of the form:
Export local-directory sfs-name
local-directory is the name of a local directory on your system
you wish to export. sfs-name is the name you wish that directory
to have in SFS, relative to the previous Export
directives.
The sfs-name of the first Export
directive must be
/
. Subsequent sfs-names must correspond to pathnames that
already exist in the previously exported directories.
Suppose, for instance, that you wish to export two directories,
/disk/u1
and /disk/u2
as /usr1
and /usr2
,
respectively. You should create a directory to be the root of the
exported namespace, say /var/sfs/root
, create the
necessary sfs-name subdirectories, and create a corresponding
sfsrwsd_config
file. You might run the following commands to
do this:
% mkdir /var/sfs/root
% mkdir /var/sfs/root/usr1
% mkdir /var/sfs/root/usr2
and create the following sfsrwsd_config
file:
Export /var/sfs/root /
Export /disk/u1 /usr1
Export /disk/u2 /usr2
Finally, you must export all the local-directorys in your
sfsrwsd_config
to localhost
via NFS version 3. The
details of doing this depend heavily on your operating system. For
instance, in OpenBSD you must add the following lines to the file
/etc/exports
and run the command kill -HUP `cat
/var/run/mountd.pid`
:
/var/sfs/root localhost
/disk/u1 localhost
/disk/u2 localhost
On Linux, the syntax for the exports file is:
/var/sfs/root localhost(rw)
/disk/u1 localhost(rw)
/disk/u2 localhost(rw)
On Solaris, add the following lines to the file /etc/dfs/dfstab
and run exportfs -a
:
share -F nfs -o rw=localhost /var/sfs/root
share -F nfs -o rw=localhost /disk/u1
share -F nfs -o rw=localhost /disk/u2
In general, the procedure for exporting NFS file systems varies
greatly between operating systems. Check your operating system's NFS
documentation for details. (The manual page for mountd
is a
good place to start.) You can test to see if your NFS server is
configured as expected (independently of running SFS) by running
showmount
with the -e
option. With the example
configuration, you should see something like this:
% showmount -e
/var/sfs/root localhost.your.domain
/disk/u1 localhost.your.domain
/disk/u2 localhost.your.domain
Once you have generated a host key, created an sfsrwsd_config
file, and reconfigured your NFS server, you can start the SFS server by
running sfssd
. Note that a lot can go wrong in setting up an
SFS server. Thus, we recommend that you first run sfssd -d
. The
-d
switch will leave sfssd
in the foreground and send
error messages to your terminal. If there are problems, you can then
easily kill sfssd
from your terminal, fix the problems, and
start again. Once things are working, omit the -d
flag;
sfssd
will run in the background and send its output to the
system log.
Note: You will not be able to access an SFS server running on
the same machine as the client unless you run sfscd
with
the -l
flag, sfscd. Attempts to SFS mount a machine on
itself will return the error EDEADLK
(Resource deadlock
avoided).
To access an SFS server, you must first register a public key with the
server, then run the program sfsagent
on your SFS client to
authenticate you.
To register a public key, log into the file server and run the command:
sfskey register
This should produce something similar to the following output:
% sfskey register
sfskey: /home/user/.sfs/random_seed: No such file or directory
sfskey: creating directory /home/user/.sfs
sfskey: creating directory /home/user/.sfs/authkeys
Creating new key: user@server.com#1 (Rabin)
Key Label: user@server.com#1
Press <RET> to accept the default key label. You will then see:
Enter passphrase:
Again:
sfskey needs secret bits with which to seed the random number
generator.  Please type some random or unguessable text until you
hear a beep: 64
At this point, type 64 random characters to seed the random number
generator, until you hear a bell. You will then be prompted for your
UNIX password. If all goes well you should see a message line:
UNIX password:
wrote key: /home/user/.sfs/authkeys/user@server.com#1
%
The above procedure creates a public/private key pair for you and
registers it with the server. (Note that if you already have a public
key on another server, you can reuse that public key by giving
sfskey
your address at that server, e.g., sfskey
register user@other.server.com
.)
After registering your public key with an SFS server, you can use the
sfskey login
command to access the server. Get a shell on a
different client machine from the server, and run the command:
sfskey login user@server
server is the name of the server on which you registered, and
user is your logname on that server. You should be prompted for
a password, and see something like the following:
Passphrase for dm@server.com/1024: SFS Login as dm@server.com
The sfskey login
command does three things: It starts the
sfsagent
program, which persists in the background to
authenticate you to file servers as needed. It fetches your private
key from server and decrypts it using your passphrase. Finally,
it fetches the server's public key, and creates a symbolic link from
/sfs/server
to /sfs/@server,HostID
.
(The passphrase you type is also used to authenticate the server to
the client, so that sfskey
can fetch the server's public key
securely.)
If, after your agent is already running, you wish to fetch a private
key from another server or download another server's public key, you
can run sfskey login
multiple times. You will be able to
access all the servers you have logged into simultaneously.
While sfskey
provides a convenient way of authenticating
oneself to servers and obtaining their self-certifying pathnames, it
is by no means the only way. If you use the same public key on all
servers, you will only need to type your password once to download
your private key; sfsagent
will automatically authenticate
you to whatever file servers you touch. Moreover, once you have
access to one SFS file server, you can use it to store symbolic links
to other servers' self-certifying pathnames.
When you are done using SFS, you should run the command
sfskey kill
before logging out. This will kill your sfsagent
process
running in the background and get rid of the private keys it was holding
for you in memory.
 sfskey--+---------------- - - - -----------+
         |                                  |
  agent--+                       agent------+
         |                                  |
 +---------------+              +-------------+
 |     sfscd     |---- - - - ---|    sfssd    |
 |               |              |             |
 |    sfsrwcd-+  |              | +-sfsrwsd   |
 |    sfsrocd-+  |              | +-sfsrosd   |
 | nfsmounter-+  |              | +-sfsauthd  |
 +------------|--+              +------|------+
              |                        |
              V                        V
         +--------+               +--------+
         | kernel |               | kernel |
         |  NFS3  |               |  NFS3  |
         | client |               | server |
         +--------+               +--------+
           CLIENT                   SERVER

SFS consists of a number of interacting programs on both the client and the server side.
On the client side, SFS implements a file system by pretending to be an
NFS server and talking to the local operating system's NFS3 client. The
program sfscd
gets run by root (typically at boot time).
sfscd
spawns two other daemons--nfsmounter
and
sfsrwcd
.
nfsmounter
handles the mounting and unmounting of NFS file
systems. In the event that sfscd
dies, nfsmounter
takes over being the NFS server to prevent file system operations from
blocking as it tries to unmount all file systems. Never send
nfsmounter
a SIGKILL
signal (i.e., kill -9
).
nfsmounter
's main purpose is to clean up the mess if any
other part of the SFS client software fails. Whatever bad situation
SFS has gotten your machine into, killing nfsmounter
will
likely only make matters worse.
sfsrwcd
implements the ordinary read-write file system
protocol. As other dialects of the SFS protocol become available, they
will be implemented as daemons running alongside sfsrwcd
.
sfsrocd
implements the client-side of the read-only dialect of
SFS. This program synthesizes a file system by reading blocks
from an sfsrosd
replica.
Each user of an SFS client machine must run an instance of the
sfsagent
command. sfsagent
serves several purposes.
It handles user authentication as the user touches new file systems. It
can fetch HostIDs on the fly, a mechanism called Dynamic
server authentication. Finally, it can perform revocation checks on
the HostIDs of servers the user accesses, to ensure the user does
not access HostIDs corresponding to compromised private keys.
The sfskey
utility manages both user and server keys. It lets
users control and configure their agents. Users can hand new private
keys to their agents using sfskey
, list keys the agent holds,
and delete keys. sfskey
will fetch keys from remote servers
using SRP (see SRP). It lets users change their public keys on remote
servers. Finally, sfskey
can configure the agent for dynamic
server authentication and revocation checking.
On the server side, the program sfssd
spawns two subsidiary
daemons, sfsrwsd
and sfsauthd
. If virtual hosts or
multiple versions of the software are running, sfssd
may spawn
multiple instances of each daemon. sfssd
listens for TCP
connections on port 4. It then hands each connection off to one of the
subsidiary daemons, depending on the self-certifying pathname and
service requested by the client.
sfsrwsd
is the server-side counterpart to sfsrwcd
.
It communicates with client side sfsrwcd
processes using the
SFS file system protocol, and accesses the local disk by acting as a
client of the local operating system's NFS server. sfsrwsd
is
the one program in SFS that must be configured before you run it
(see sfsrwsd_config).
sfsrosd
is the server-side counterpart to sfsrocd
.
An sfsrosd
replica presents a simple interface for reading
blocks of data. This program requires an sfsrosd_config file to select
a set of read-only databases to serve.
sfsauthd
handles user authentication. It communicates
directly with sfsrwsd
to authenticate users of the file system.
It also accepts connections over the network from sfskey
to
let users download their private keys or change their public keys.
It is inconvenient for users to run sfskey login
once for every
server they wish to access. Though users can register the same public
key on multiple servers, they still cannot access a server without its
self-certifying pathname.
SFS's realm mechanism allows one trusted server to store and
serve the self-certifying pathnames of many other servers. By
default, SFS servers are not configured to support administrative
realms. When a user runs sfskey login
to a server without a
realm, a symbolic link is created from
/sfs/server-name
to the server's self-certifying
pathname. If, instead, the server is configured to be part of an
administrative realm, /sfs/server-name
will be a
directory, and references to names in that directory will
transparently create symbolic links to self-certifying pathnames.
To set up a realm server, you must first create a publicly-readable
directory of symbolic links to self-certifying pathnames of other
servers. For example, suppose your sfsrwsd_config
file's root
directory is publicly readable with this configuration:
Export /var/sfs/root / R
Create a directory /var/sfs/root/servers
.
Now populate this directory with symbolic links to self-certifying
pathnames. For example, a server for the realm of machines in DNS
zone scs.cs.nyu.edu
might contain the following links:
pitt -> /sfs/@pitt.scs.cs.nyu.edu,rexmmr795q6enmhsemr5xt5f6jjhjm6h
fdr -> /sfs/@fdr.scs.cs.nyu.edu,hki6vgn6gwkuknve7xqrv4a5mbv76uui
ludlow -> /sfs/@ludlow.scs.cs.nyu.edu,hcbafipmin3eqmsgak2m6heequppitiz
orchard -> /sfs/@orchard.scs.cs.nyu.edu,4ttg7gvinyxrfe2zgv8mefmjbb3z7iur
These links should now also be available in the subdirectory
servers
of the server's self-certifying pathname.
Finally, to configure your server to support realms, you must add the
following two lines to /etc/sfs/sfsauthd_config
.
(If that file does not exist, copy the default file
/usr/local/share/sfs/sfsauthd_config
to
/etc/sfs
to add the lines.)
realm realm-name
certpath /servers
The realm-name can be the name of your primary server, or it
might be your domain name instead (e.g., in the example you could choose the
realm name scs.cs.nyu.edu
to authenticate a bunch of servers
ending .scs.cs.nyu.edu
).
After editing sfsauthd_config
, you must restart
sfsauthd
on the server. The easiest way to do this is to
run the following command as root:
# kill -1 `cat /var/run/sfssd.pid`
Note that if the new realm-name is not the same as the server
name (or if you ever change realm-name), then users who have
already registered will see a message like the following when they
next log in:
sfskey: Warning: host for dm@ludlow.scs.cs.nyu.edu is actually server
  @ludlow.scs.cs.nyu.edu,hcbafipmin3eqmsgak2m6heequppitiz
This server is claiming to serve host (or realm) scs.cs.nyu.edu,
but you originally registered on host (or in realm) ludlow.scs.cs.nyu.edu
sfskey: fatal: Invalid connection to authserver.
The reason for this error is that, unfortunately, users often choose the same passwords in multiple administrative realms. To prevent one realm from impersonating another in the event that users have recycled passwords, SFS cryptographically embeds the realm name in the SRP password information stored at the server.
To correct the problem after changing realm-name, users need
only run the command:
% sfskey update -r [user@]server-name
This command will prompt users for their passwords and then ask them to confirm the change of realm name.
Once your realm is configured and you have updated your account at the
server, you can log into the server with sfskey login
. You
should now see /sfs/realm-name
as an empty
directory on your system. However, if you access a file name like
/sfs/realm-name/ludlow
and ludlow
is a symbolic link in the servers
directory, then the name
/sfs/realm-name/ludlow
will automatically
spring into existence as the appropriate symbolic link.
Note that SFS could immediately populate the directory
/sfs/realm-name
with symbolic links before users
even access the names. However, many users alias the ls
command to ls -F
, and many versions of Linux ship with an
ls
command that colorizes output by default. These
ls
commands execute a stat
system call for every file
in a directory, which would be quite expensive in a directory of links
to self-certifying pathnames, as each stat
call would trigger a
file system mount (and unavailable servers would introduce serious
delays).
sfs_users files

One often wishes to set up multiple servers to be part of a single
administrative realm and recognize the same set of users. In such
cases, users can access all servers in the realm by executing a single
sfskey login
command. Moreover, users only need to change their
public keys and passwords on a single server for the changes to
propagate to the other ones.
Within an administrative realm, one can classify servers as either trusted or untrusted. A trusted server is a machine that all servers trust to specify the identities of users and servers in the realm. In each realm, one of the trusted servers, designated the primary, is the one on which users update their accounts. Every administrative realm must have a primary server. An untrusted server recognizes all users in the realm, but is not necessarily trusted by users or other servers in the realm.
As a concrete example, consider a research group with two central file servers, A and B, and a number of clients C1, C2, ..., on users' desks. Everyone in the group may trust the administrators of servers A and B, but individual users may have superuser privileges on their own clients and not be trusted by the rest of the realm. In particular, the user of client C1 may wish to set up a file server accessible to other users in the realm (and possibly also accessible to some locally maintained guest accounts on C1). C1's owner must be able to set up this server without it being trusted by the rest of the realm.
To configure SFS servers as part of a realm, you must first understand
what information a server stores about users. Each SFS server has one
or more sfs_users
databases of users on the system. A database
may contain, among other things, the following information for each
user:
SRP information for the sfskey login
command. The
SRP information stored by the server serves two purposes. First, it
allows the server to verify that a user running sfskey login
knows the right password to access the account. Second, and equally
important, it allows the server to prove its own identity to the
client executing sfskey login
. Thus, though not equivalent to
the user's password, the SRP information is a secret derived from the
password with which the server can prove its own identity.
The first three pieces of information
SFS consists of a number of programs, many of which have configuration
files. All programs look for configuration files in two
directories--first /etc/sfs
, then, if they don't find the file
there, in /usr/local/share/sfs
. You can change these locations
using the --with-etcdir
and --datadir
options to
the configure
command (see configure).
The SFS software distribution installs reasonable defaults in
/usr/local/share/sfs
for all necessary configuration files except
sfsrwsd_config
. On particular hosts where you wish to change
the default behavior, you can override the default configuration file
by creating a new file of the same name in /etc/sfs
.
The sfs_config
file contains system-wide configuration
parameters for most of the programs comprising SFS. Note that
/usr/local/share/sfs/sfs_config
is always parsed, even if
/etc/sfs/sfs_config
exists. Options in
/etc/sfs/sfs_config
simply override the defaults
in /usr/local/share/sfs/sfs_config
. For all other
configuration files, a file in /etc/sfs
entirely
overrides the version in /usr/local/share/sfs
.
If you are running a server, you will need to create an
sfsrwsd_config
file to tell SFS what directories to export, and
possibly an sfsauthd_config
if you wish to share the database of
user public keys across several file servers.
The sfssd_config
file contains information about which protocols
and services to route to which daemons on an SFS server, including
support for backwards compatibility across several versions of SFS. You
probably don't need to change this file.
To run an SFS read-only server, you must create an sfsrosd_config file to tell SFS which read-only databases to serve.
sfs_srp_params
contains some cryptographic parameters for
retrieving keys securely over the network with a passphrase (as with the
sfskey add user@server
command).
sfscd_config
contains information about extensions to the SFS
protocol and which kinds of file servers to route to which daemons. You
almost certainly should not touch this file unless you are developing
new versions of the SFS software.
Note that configuration command names are case-insensitive in all configuration files (though the arguments are not).
sfs_config--system-wide configuration parameters

The sfs_config
file lets you set the following system-wide
parameters:
sfsdir directory
Specifies the directory in which SFS stores its working files. The
default is /var/sfs, unless you changed this with the --with-sfsdir
option to configure.
sfsuser sfs-user [sfs-group]
The default sfs-user is sfs, and the default
sfs-group is the same as sfs-user. The sfsuser
directive lets you supply either a user and group name, or numeric IDs,
to change the default. Note: If you change sfs-group,
you must make sure the program
/usr/local/lib/sfs-0.8pre/suidconnect
is setgid to the new
sfs-group.
anonuser {user | uid gid}
Specifies the user (or numeric user and group IDs) under which SFS
serves anonymous requests. The default sfs_config
file specifies the user name nobody.
ResvGids low-gid high-gid
Each user of an SFS client machine runs his or her own sfsagent
program. However, SFS needs to modify processes' group lists so as to know which
file system requests correspond to which agents. The ResvGids
directive gives SFS a range of group IDs it can use to tag processes
corresponding to a particular agent. (Typically, a range of 16 gids
should be plenty.) Note that the range is inclusive--both
low-gid and high-gid are considered reserved gids.
The setuid root program newaid
lets users take on any of
these group IDs (see newaid). Thus, make sure these groups are not
used for anything else, or you will create a security hole. There is
no default for ResvGids
.
Note that after changing ResvGids
, you must kill and restart
sfscd
for things to work properly.
RSASize bits
DlogSize bits
PwdCost cost
Specifies the computational cost of the password hashing performed by
commands such as sfskey edit.
The default value is 12. cost is an exponential parameter.
Thus, you probably don't want anything too much larger. The maximum
value is 32--at which point password hashing will not terminate in
any tractable amount of time and the sfskey
command will be
unusable.
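Because cost is an exponent, each increment roughly doubles the hashing work; the sketch below assumes base-2 scaling (an assumption for illustration, consistent with cost 32 being intractable):

```shell
# Relative password-hashing work for a given PwdCost value, under the
# assumption (for illustration only) that cost is a base-2 exponent.
pwd_work() {
    echo $((1 << $1))
}
pwd_work 12    # the default cost
pwd_work 13    # one step up: twice the work
```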
LogPriority facility.level
Specifies the syslog facility and level at which SFS daemons log
messages. The default is daemon.notice.
sfsrwsd_config--File server configuration

Hostname name
Keyfile path
Tells sfsrwsd
to look for its private key in file path.
The default is sfs_host_key
. SFS looks for file names that do
not start with /
in /etc/sfs
, or whatever directory you
specified if you used the --with-etcdir
option to
configure
(see configure).
Export local-directory sfs-name [R|W]
Tells sfsrwsd
to export local-directory, giving it the
name sfs-name with respect to the server's self-certifying
pathname. Appending R
to an export directive gives anonymous
users read-only access to the file system under the anonymous user
group IDs specified in sfs_config
, anonuser.
Appending
W
gives anonymous users both read and write access.
See Quick server setup,
for an example of the Export
directive.
There is almost no reason to use the W
flag. The R
flag
lets anyone on the Internet issue NFS calls to your kernel as the
anonymous user. SFS filters these calls; it makes sure that they
operate on files covered by the export directive, and it blocks any
calls that would modify the file system. This approach is safe given
a perfect NFS3 implementation. If, however, there are bugs in your
NFS code, attackers may exploit them if you have the R
option--probably just crashing your server but possibly doing worse.
LeaseTime seconds
Publishfile path
Tells sfsrosd
to serve the SFS read-only database
contained in file path.
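For example (the database path below is purely hypothetical):

```
Publishfile /var/sfs/sfsro_database
```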
sfsauthd_config--User-authentication daemon configuration

Hostname name
Keyfile path
Tells sfsauthd
to look for its private key in file path.
The default is sfs_host_key
. SFS looks for file names that do
not start with /
in /etc/sfs
, or whatever directory you
specified if you used the --with-etcdir
option to
configure
(see configure).
Userfile [-update] [-create] [-passwd] [-admin] [-hideusers] [-pub=pubpath] [-prefix=prefix] [-uid=uid | -uidmap=u1-u2+u3] [-gid=gid | -gidmap=g1-g2+g3] [-groups=g1-g2] [-groupquota=limit] [-refresh=seconds] [-timeout=seconds] path
Specifies a file in which sfsauthd
should look for user
public keys when authenticating users. You can specify multiple
Userfile
directives to use multiple files. This can be useful in
an environment where most user accounts are centrally maintained, but a
particular server has a few locally-maintained guest (or root) accounts.
If sfsauthd
has been compiled with
Sleepycat database support, and
path ends in .db/
, sfsauthd
will consider the user
authentication file to be a database directory. This offers
considerably greater efficiency for large databases, as database
directories make most operations O(log n) rather than O(n) as with flat
text files. If path ends in .db
, it is assumed to be a
, it is assumed to be a
database file. Database files are similar to database directories,
but can only be used for read-only databases (as they do not support
atomic transactions). Database files should be used to export
databases via the -pub=pubpath
option, and to import
read-only databases (by omitting the -update
option).
Userfile has the following options:
-update
Keeps this database up to date. For read-only copies of remote
databases, sfsauthd
maintains local copies
in /var/sfs/authdb
. This process ensures that
temporarily unavailable file servers never disrupt
sfsauthd
's operation.
-create
Creates the sfs_users
file if no such file exists.
-passwd
Treats the system password file (/etc/passwd
on most machines) as
part of this userfile. Use password, shell and home directory
information. Allows users who do not exist in the database to log
into sfsauthd
with their UNIX password, so that they
might register an SFS key (note this also requires the
-update
flag). See sfskey register, for details on
this. Also important for proper functioning of rexd
.
-admin
Allows user records in this file to carry privileges in their
privs
field.
-hideusers
-pub=pubpath
sfsauthd
supports the secure remote password protocol, or SRP.
SRP lets users connect securely to sfsauthd
with their
passwords, without needing to remember the server's public key. To
prove its identity through SRP, the server must store secret data
derived from a user's password. The file path specified in
Userfile
contains these secrets for users opting to use SRP. The
-pub
option tells sfsauthd
to maintain in
pubpath a separate copy of the database without secret
information. pubpath might reside on an anonymously readable SFS
file system--other machines can then import the file as a read-only
database using a Userfile
line with the -update
flag.
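For example (pathnames and HOSTID hypothetical), a central authserver might maintain its read-write database and export a public copy with a line such as:

```
Userfile -update -passwd -pub=/var/sfs/sfs_users.pub sfs_users
```

while another server imports that public copy as a read-only database:

```
Userfile -update /sfs/@central.example.com,HOSTID/sfs_users.pub
```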
-prefix=prefix
-uid=uid
-uidmap=u1-u2+u3
-gid=gid
-gidmap=g1-g2+g3
These options are analogous to uid
and uidmap
, but
apply to group IDs rather than user IDs. Again, these options
are mutually exclusive.
-groups=g1-g2
Causes sfsauthd
to allow regular (non-admin) users
to add groups. New group IDs will be in the range g1 to g2.
Administrators can establish per-user quotas to limit the number of
groups that a particular user can create. User quotas are listed in
the privs field of user records as "groupquota"=quota where
quota is an unsigned integer.
-groupquota=limit
-refresh=seconds
-timeout=seconds
If no Userfile
directive is specified, sfsauthd
uses
the following default (again, unqualified names are assumed to be in
/etc/sfs
):
Userfile -update -passwd -pub=sfs_users.pub sfs_users
DBcache path
Specifies where sfsauthd
stores its cache of remote users and groups. The default is
/var/sfs/authdb
.
dbcache dbcache.db/ dbcache dbcache
DBcache_refresh_delay seconds
Specifies the minimum interval, in seconds, at which sfsauthd
will attempt to refresh its cache. This value only serves as a
minimum because the server will not attempt to download a remote
user or group more frequently than its individual refresh value
(set by the remote administrator or user). The special value
`off' disables the authentication cache as well as symbolic and/or
recursive groups. The default is `off'.
dbcache_refresh_delay off
dbcache_refresh_delay 3600
Logfile path
Specifies a log file for sfsauthd
. The default logfile is
/var/sfs/sign_log
.
SRPfile path
Specifies a file of default SRP parameters, as generated by the
sfskey srpgen
command. The default is
sfs_srp_params
. If the default file does not exist, serving
pre-generated SRP parameters is disabled.
Denyfile path
Specifies a file of users to be denied access. The default is
sfs_deny
. If the default file does not exist, we
assume an empty list.
Realm name
If the realm directive does NOT appear in this file, the authserver will not join any realm. This behavior is the default. If the realm directive does appear, name cannot be empty.
NOTE: Changing an authserver's realm after users have already registered
using SRP requires all users to update their authentication data because
the realm is bound into the stored SRP information. Specifically, each
user will need to run
sfskey update -r username@authserver
A user logged on to the authserver can use the hostname - to
signify the local host:
sfskey update -r -
Certpath dir [dir ...]
sfskey login
command; this list of directories will become the
arguments to a dirsearch certprog. That is, for a certpath "dir1
dir2" the client will add a certprog "dirsearch dir1
dir2" to the user's agent. The certification path will be tagged
with a prefix equal to the authserver's realm (see above).
NOTE: The certpath directive only makes sense if the authserver is part of a realm. The certpath will be ignored if the realm directive isn't specified.
There are three ways to specify a certpath directory:
certpath //dir1 /dir2 @sfs.host.domain,HOSTID/dir2
which can also be written
certpath //dir1 certpath /dir2 certpath @sfs.host.domain,HOSTID/dir2
A directory starting with two slashes ("//") is considered relative to the client machine's root ("/"). A directory starting with one slash ("/") is relative to the authserver's self-certifying pathname (the authserver performs the substitution before it sends the dir). The third form is a fully specified directory on SFS.
The default certpath is empty.
sfs_hosts
--Host to address mapping overriding DNS
All SFS client software uses DNS to locate server names. This is
somewhat different from typical network utilities, which, often
depending on a configuration file such as /etc/nsswitch.conf
,
can sometimes combine DNS with other techniques, such as scanning the
file /etc/hosts
or querying NIS (YP) servers.
SFS relies exclusively on DNS for several reasons. First, the file
system is designed to provide a global namespace. With
/etc/hosts
, for example, it is common for a machine to have two
names--for instance hostname
and hostname.domain.com
.
However, were the same file system to be available under two different
self-certifying pathnames, several things would go wrong: First,
bookmarks to /sfs/@hostname,.../...
would only work on the
local network. Even worse, it might be possible to lose a file by
accidentally copying it onto itself, e.g., from
/sfs/@hostname,.../...
to
/sfs/@hostname.domain.com,.../...
. Finally, SFS allows one to
specify a TCP port number other than the default (4) using DNS SRV
records, while non-DNS mechanisms have no means of specifying port
numbers.
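For instance, assuming SFS follows the usual SRV conventions with a _sfs._tcp label (the exact label is not shown in this section), a record directing clients to a nonstandard port might look like this (names and port illustrative):

```
_sfs._tcp.server.domain.com. IN SRV 0 0 8617 server.domain.com.
```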
Though DNS is fairly ubiquitous, there are situations in which one
might like to have "internal" connections to SFS servers routed
differently from "external" ones. For example, when running SFS
servers behind a NAT box, external connections would need to be
directed to the external IP address of the NAT box, while it would be
more efficient to route internal connections directly to the internal
IP address, without going through the NAT. In such situations, often
the best solution is to set up a split DNS configuration. When split
DNS is not an option, however, the sfs_hosts
mechanism will
come in handy.
sfs_hosts
is a superset of the standard /etc/hosts
file
format, that additionally allows one to specify a port number by
appending it with a %
character at the end of the address. By
default, the port number is 4. For example, the following two lines
both specify that server.domain.com
is running on port 4 of IP
address 10.1.1.1
:
10.1.1.1 server.domain.com
10.1.1.1%4 server.domain.com
If you really want /etc/hosts
to override DNS with SFS, you can
always run ln -s ../hosts /etc/sfs/sfs_hosts
, but this is not
recommended. Solutions involving DNS configuration will be much more
scalable and flexible.
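Returning to the NAT example above, internal clients could bypass the NAT with an sfs_hosts entry pointing at the server's internal address (addresses illustrative):

```
10.0.0.2 server.domain.com
```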
sfs_users
--User-authentication database
The sfs_users
file, maintained and used by the
sfsauthd
program, maps public keys to local users and
groups. It is roughly analogous to the Unix /etc/passwd
and
/etc/group
files. Each line of sfs_users
can specify a
user or a group. Users are specified as follows (split into two
lines here only for clarity of presentation):
USER:user:uid:version:gid:owner:pubkey:privs :srp:privkey:srvprivkey:audit
Note that the first USER
is just the literal string
USER
. The rest of the fields have the following meanings:
dm/root
, kaminsky/root
, etc.).
unix=account
unix=
property to map every SFS user to a local Unix user of
the same name. The unix=
property has several consequences.
First, if there is no local Unix user named account, this SFS
user will not be allowed to log in. Second, when the SFS user logs
in, SFS will search /etc/group
for additional groups the user
might belong to. Third, the rexd
remote login daemon will
allow remote login access to this account, using the shell and home
directory specified in /etc/passwd
. Finally, on some operating
systems, SFS enforces account expiration dates specified by
/etc/shadow
or /etc/spwd.db
.
admin
Userfile
directive in
sfsauthd_config
specifies the -admin
option. For
sfs_users
files with the -admin
option, the
admin
privilege allows users to create and modify other user
records remotely, though currently client-side support for doing this
is limited.
refresh
timeout
sfsaclsd
, an
experimental server that is not part of the mainline SFS distribution
yet.
sfskey add
command to
fetch the wrong HostID. Note also that srp is specific to
a particular hostname. If you change the Location of a file
server, users will need to register new SRP.
sfsauthd
. It is
private, per-user data that sfsauthd
will return to users who
successfully complete the SRP protocol. Currently, sfskey
uses this field to store an encrypted copy of a user's private key,
allowing the user to retrieve the private key over the network.
Each group in sfs_users
is specified by a line with the
following format:
GROUP:group:gid:version:owners:members:properties:audit
Here again the first GROUP
is just the literal string
GROUP
, while the remaining fields have the following meanings:
sfskey
interface.
sfsaclsd
, an
experimental server that is not part of the mainline SFS distribution
yet.
sfskey
interface.
sfs_users
files can be stored in one of three formats: plain
ASCII, database directories, and database files. (The latter two
require SFS to have been compiled with Sleepycat BerkeleyDB support.)
The format is determined by the extension of the file name. File
names ending .db/
are considered database directories; file
names ending .db
are considered database files; everything else
is considered ASCII. Only read-only and exported public databases can
be database files; read-write databases must be directories, ending
.db/
.
(The reason is that read-write database files require write-ahead
logging, which relies on auxiliary files.)
You should always edit sfs_users
files using the
vidb
command (see vidb),
for two reasons. First, whenever editing files by hand, you run the
risk of overwriting concurrent updates by sfsauthd
.
vidb
acquires the necessary locks to prevent this from
happening. Second, when editing a database directory or file,
vidb
translates from the binary database format into the
ASCII format described above; when committing updates, it also
atomically modifies various secondary indexes that SFS relies upon.
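For instance, to edit a database directory safely (path illustrative; see the vidb section for exact usage):

```
% vidb /etc/sfs/sfs_users.db/
```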
sfssd_config
--Meta-server configuration
sfssd_config
configures sfssd
, the server that accepts
connections for sfsrwsd
and sfsauthd
.
sfssd_config
can be used to run multiple "virtual servers", or
to run several versions of the server software for compatibility with
old clients.
Directives are:
BindAddr ip-addr [port]
Explicitly specifies the IP address and port on which sfssd
should listen for TCP connections. To listen on INADDR_ANY
,
use the value 0.0.0.0
for ip-addr. If port is not
specified, sfssd
will use the value of the SFS_PORT
environment variable, if it exists and is non-zero, or else fall back
to the default port number of 4.
It is important to note the difference between specifying a port
number with the SFS_PORT
environment variable, and with a
BindAddr
directive (see SFS_PORT).
When no BindAddr
directive is specified, sfssd
attempts to figure out the appropriate port number(s) to bind to
automatically. It does so by looking for DNS SRV records for the
current hostname (or SFS_HOSTNAME
environment variable). This
is quite different from specifying BindAddr 0.0.0.0 0
, which
would always bind port 4 or whatever is specified with the
SFS_PORT
environment variable.
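For example (addresses and the nonstandard port are illustrative):

```
BindAddr 0.0.0.0
BindAddr 10.1.1.1 8617
```

The first line listens on INADDR_ANY, choosing the port from SFS_PORT or falling back to 4 as described above; the second binds one specific address and port.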
RevocationDir path
sfssd
should search for
revocation/redirection certificates when clients connect to unknown
(potentially revoked) self-certifying pathnames. The default value is
/var/sfs/srvrevoke
. Use the command sfskey
revokegen
to generate revocation certificates.
HashCost bits
Server {* | @Location[,HostID]}
Specifies that subsequent directives apply to connections to the server
named @Location
,
HostID. If
,
HostID is omitted, then the following lines apply to any
connection that does not match an explicit HostID in another
Server
directive. The argument *
applies to all clients who do not
have a better match for either Location or HostID.
Release {* | sfs-version}
Specifies that subsequent Service
directives apply to clients running SFS release sfs-version or
earlier; *
signifies arbitrarily large SFS
release numbers. The Release
directive does not do anything on
its own, but applies to all subsequent Service
directives until
the next Release
or Server
directive.
Extensions ext1 [ext2 ...]
Specifies that subsequent Service
directives apply only to
clients that supply all of the listed extension strings (ext1,
...). Extensions
applies until the next Extensions
,
Release
or Server
directive.
Service srvno daemon [arg ...]
Tells sfssd
to run daemon to handle connections for service number
srvno. The defined service numbers are:
1. File server
2. Authentication server
3. Remote execution
4. SFS/HTTP (not yet released)
Service srvno -u path
Behaves like an ordinary Service
directive, only instead of
spawning a daemon, connects to the unix-domain socket specified by
path
to communicate with an already running daemon. This
option may be useful when debugging SFS servers, as the server for a
particular service on a particular self-certifying pathname can be run
under the debugger and receive connections on the usual SFS port
without interfering with other servers on the same machine.
Service srvno -t host [port]
Specifies that sfssd
should act as a "TCP proxy" for this
particular service, relaying any incoming connections to TCP port
port on host. If unspecified, port is the default
SFS TCP port 4.
This syntax is useful in a NATted environment. For instance, suppose
you have two SFS servers with addresses 10.0.0.2 and 10.0.0.3 on a
private network, and one machine 10.0.0.1 with an externally visible
interface 4.3.2.1. You can use this proxy syntax to export the
internal file systems. The easiest way is to pick two DNS names for
the new servers, but point them at your outside server. For example:
server-a.mydomain.com. IN A 4.3.2.1
server-b.mydomain.com. IN A 4.3.2.1
Then, on your outside machine, you might have the following
sfssd_config
file:
Server server-a.mydomain.com
Release *
Service 1 -t 10.0.0.2
Service 2 -t 10.0.0.2
Service 3 -t 10.0.0.2
Server server-b.mydomain.com
Release *
Service 1 -t 10.0.0.3
Service 2 -t 10.0.0.3
Service 3 -t 10.0.0.3
Then on each of the internal machines, be sure to specify
Hostname server-a.mydomain.com
and Hostname
server-b.mydomain.com
in sfsrwsd_config
.
The default contents of sfssd_config
is:
Server *
Release *
Service 1 sfsrwsd
Service 2 sfsauthd
Service 3 rexd
To disable the file server, you can copy this file to
/etc/sfs/sfssd_config
and comment out the
line Service 1 sfsrwsd
. To disable the remote login server,
comment out the line for rexd
.
To run an SFS read-only service, you could specify the lines:
Server *
Release *
Service 1 sfsrosd
Note that you may have only one program per service number within a
Release clause. For instance, you cannot run both sfsrosd
and sfsrwsd
unless the programs appear in separate clauses
such as:
Server *
Release *
Service 1 sfsrwsd
Service 2 sfsauthd
Service 3 rexd
Server @snafu.lcs.mit.edu,xzfeqjnareyn2dhqxccd7wrk5m847rh2
Release *
Service 1 sfsrosd
To run a different server for sfs-0.6 and older clients, you could add
the lines:
Release 0.6
Service 1 /usr/local/lib/sfs-0.6/sfsrwsd
sfs_srp_params
--Default parameters for SRP protocol
Specifies a "strong prime" and a generator for use in the SRP
protocol. SFS ships with a particular set of parameters because
generating new ones can take a considerable amount of CPU time. You can
replace these parameters with randomly generated ones using the
sfskey srpgen -b bits
command.
Note that SRP parameters can afford to be slightly shorter than Rabin public keys, both because SRP is based on discrete logs rather than factoring, and because SRP is used for authentication, not secrecy.
The format of the file is a single line of the form:
N=0xModulus,g=0xGenerator
Modulus is a prime number, represented in hexadecimal, which must satisfy the property that (Modulus-1)/2 is also prime. Generator is an element of the multiplicative group of integers modulo Modulus such that Generator has order (Modulus-1)/2.
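As a toy illustration of the format only (these values are far too small for real use), take the safe prime 23 = 0x17 with generator 4: (23-1)/2 = 11 is prime, and 4 has order 11 in the multiplicative group modulo 23. The file would then read:

```
N=0x17,g=0x4
```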
sfscd_config
--Meta-client configuration
The sfscd_config
file is really part of the SFS protocol
specification. If you change it, you will no longer be executing the
SFS protocol. Nonetheless, you need to do this to innovate, and SFS was
designed to make implementing new kinds of file systems easy.
sfscd_config
takes the following directives:
Extension string
Specifies that sfscd
should send string to all servers
to advertise that it runs an extension of the protocol. Most servers
will ignore string, but those that support the extension can
pass off the connection to a new "extended" server daemon. You can
specify multiple Extension
directives.
Protocol name daemon [arg ...]
Specifies that pathnames of the form /sfs/name:anything
should be handled by the
client daemon daemon. name may not contain any
non-alphanumeric characters. The Protocol
directive is useful
for implementing file systems that are not mounted on self-certifying
file systems.
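For instance (the name and daemon are hypothetical), the following line would route any reference of the form /sfs/widget:anything to a client daemon called widgetcd:

```
Protocol widget widgetcd
```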
Release {* | sfs-version}
Specifies that subsequent Program
directives apply when talking to servers running SFS release
sfs-version or earlier; *
signifies arbitrarily large SFS
release numbers. The Release
directive does not do anything on
its own, but applies to all subsequent Program
directives until
the next Release
directive.
Libdir path
Specifies the directory in which to look for daemon names that do not
begin with /
. The default is
/usr/local/lib/sfs-0.8pre
. The Libdir
directive does not do anything on its own, but applies to all
subsequent Program
directives until the next Libdir
or
Release
directive.
Program prog.vers daemon [arg ...]
Tells sfscd
to run daemon to serve file systems with SunRPC program number
prog and version vers. Each Program
directive must be preceded by a Release
directive.
The default sfscd_config
file is:
Release *
Program 344444.3 sfsrwcd
Program 344446.2 sfsrocd
To run a different set of daemons when talking to sfs-0.3 or older
servers, you could add the following lines:
Release 0.3
Libdir /usr/local/lib/sfs-0.3
Program 344444.3 sfsrwcd
sfsagent
reference guide
sfsagent
is the program users run to authenticate themselves
to remote file servers, to create symbolic links in /sfs
on the
fly, and to look for revocation certificates. Many of the features in
sfsagent
are controlled by the sfskey
program and
described in the sfskey
documentation.
Ordinarily, a user runs sfsagent
at the start of a session.
sfsagent
runs sfskey add
to obtain a private key.
As the user touches each SFS file server for the first time, the agent
authenticates the user to the file server transparently using the
private key it has. At the end of the session, the user should run
sfskey kill
to kill the agent.
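A typical session might therefore look like this (username and server illustrative):

```
% sfsagent alice@server.example.com
Passphrase for alice@server.example.com/1024:
      ... work with files under /sfs ...
% sfskey kill
```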
The usage is as follows:
sfsagent [-dnkF] -S sock [-c [prog [arg ...]] | keyname]
-d
-n
If you use -n
, you must also use
the -S
option, otherwise your agent will be useless as there
will be no way to communicate with it.
-k
Kills any already-running agent; without this option, if an agent is
already running, a second sfsagent
will refuse to run.
-F
-S sock
Listens for connections from sfskey
on the Unix
domain socket sock. Ordinarily sfskey
connects to the
agent through the client file system software, but it can use a named
Unix domain socket as well.
-c [prog [arg ...]]
Ordinarily, sfsagent
on startup runs the command sfskey
add
giving it whatever -t
option and keyname you
specified. This allows you to fetch your first key as you start or
restart the agent. If you wish to run a different program, you can
specify it using -c
. You might, for instance, wish to run a
shell-script that executes a sfskey add
followed by several
sfskey certprog
commands.
sfsagent
runs the program with the environment variable
SFS_AGENTSOCK
set to -0
and a Unix domain socket on
standard input. Thus, when atomically killing and restarting the agent
using -k
, the commands run by sfsagent
talk to the
new agent and not the old.
If you don't wish to run any program at all when starting
sfsagent
, simply supply the -c
option with no
prog. This will start a new agent that has no private keys.
sfskey
reference guide
The sfskey
command performs a variety of key management tasks,
from generating and updating keys to controlling users' SFS agents. The
general usage for sfskey
is:
sfskey [-S sock] [-p pwfd] command [arg ...]
-S
specifies a UNIX domain socket sfskey
can use to
communicate with your sfsagent
socket. If sock begins
with -
, the remainder is interpreted as a file descriptor number.
The default is to use the environment variable SFS_AGENTSOCK
if
that exists. If not, sfskey
asks the file system for a
connection to the agent.
The -p
option specifies a file descriptor from which
sfskey
should read a passphrase, if it needs one, instead of
attempting to read it from the user's terminal. This option may be
convenient for scripts that invoke sfskey
. For operations
that need multiple passphrases, you must specify the -p
option
multiple times, once for each passphrase.
In SFS 0.7, two-party proactive Schnorr signatures (2-Schnorr for short)
are supported in addition to Rabin signatures. One half of the 2-Schnorr
key is stored on the designated signature server, while the other is stored
locally to file, or remotely via SRP. Unlike Rabin keys, 2-Schnorr keys
can fail to load when a signature server becomes unavailable. For this
reason, sfskey
supports multiple private-key shares that correspond
to the same public key; this way, a user can maintain a series of backup
signature servers in case his primary server becomes unavailable. By
default, sfskey
never stores both halves of a 2-Schnorr key
to the same machine, so as to enforce key sharing. To this effect,
2-Schnorr employs special sfskey
commands--sfskey 2gen
and sfskey 2edit
.
As of SFS 0.7, there is a new convention for saving and naming private
keys. By default, keys will be stored locally in $HOME/.sfs/authkeys
,
and will be in the following forms:
user@host1#n
user@host1#n,p.host2,m
The first form is for standard Rabin keys. The second is for 2-Schnorr proactive signature keys. In the above examples, host1 is the full hostname of the generating host, n is the public key version, p is the priority of the signing host (1 is the highest), host2 is the full hostname of the signing host, and m is the private key version.
In general, these details can remain hidden, in that the symbolic link
$HOME/.sfs/identity
points to the most recent key generated in
$HOME/.sfs/authkeys
, and most sfskey
commands have
reasonable defaults. However, there is a command-line system for
accessing and generating specific keys. A blank keyname and the
special keyname #
refer to the default key
$HOME/.sfs/identity
during key access and the next available
key during key generation. Keynames containing a #
character
but not containing a /
character are assumed to refer to keys
in the $HOME/.sfs/authkeys
directory. When given files of the
form prefix#
, sfskey
looks in the default
directory for the most recent key with the given prefix during key
access, and the next available key with the given prefix during key
generation. For keys of the form name#suffix
,
sfskey
will look in the $HOME/.sfs/authkeys
directory
for keys that match the given name exactly. sfskey
treats
keys with /
characters as regular files; it treats keys that
contain @
characters but no #
characters as keys stored
on remote machines.
Finally, one should note that SFS keys have both a keyname
and also a keylabel. sfskey
uses the former to
retrieve keys from the local file system or from remote servers. The latter
is less important; the keylabel is stored internally in the
private key, and is shown in the output of the sfskey list
command.
sfskey add [-t [hrs:]min] [keyname]
sfskey add [-t [hrs:]min] [user]@hostname
add
command loads and decrypts a private key, and gives
the key to your agent. Your agent will use it to try to authenticate
you to any file systems you reference. The -t
option specifies
a timeout after which the agent should forget the private key.
In the first form of the command, the key indicated by keyname
is loaded. If keyname is omitted, or # is supplied, then
the default key is $HOME/.sfs/identity
. If the
key supplied is a 2-Schnorr key, then sfskey add
will
attempt to load backup keys should the primary key fail due to an
unavailable signature server.
The second form of the command fetches a private key over the network using the SRP protocol. SRP lets users establish a secure connection to a server without remembering its public key. Instead, to prove their identities to each other, the user remembers a secret password and the server stores a one-way function of the password (also a secret). SRP addresses the fact that passwords are often poorly chosen; it ensures that an attacker impersonating one of the two parties cannot learn enough information to mount an off-line password guessing attack--in other words, the attacker must interact with the server or user on every attempt to guess the password.
The sfskey update
, sfskey register
,
sfskey 2gen
and sfskey 2edit
commands let users
store their private keys on servers, and retrieve them using the
add
command. The private key is stored in encrypted form,
using the same password as the SRP protocol (a safe design as the server
never sees any password-equivalent data).
Because the second form of sfskey add
establishes a secure
connection to a server, it also downloads the server's HostID securely
and creates a symbolic link from /sfs/
hostname to the
server's self-certifying pathname.
When invoking sfskey add
with the SRP syntax, sfskey
will ask for the user's password with a prompt of the following form:
Passphrase for user@servername/nbits:
user is simply the username of the key being fetched from the
server. servername is the name of the server on which the user
registered his SRP information. It may not be the same as the
hostname argument to sfskey
if the user has supplied a
hostname alias (or CNAME) to sfskey add
. Finally, nbits
is the size of the prime number used in the SRP protocol. Higher values
are more secure; 1,024 bits should be adequate. However, users should
expect always to see the same value for nbits (otherwise, someone
may be trying to impersonate the server).
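Putting this together, fetching a key via SRP might look like this (user and server names illustrative):

```
% sfskey add alice@server.example.com
Passphrase for alice@server.example.com/1024:
```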
sfskey certclear
sfskey certlist [-q]
sfskey certprog [-p prefix] [-f filter] [-e exclude] prog [arg ...]
The certprog
command registers a command to be run to look up
HostIDs on the fly in the /sfs
directory. This mechanism can be
used for dynamic server authentication--running code to lookup
HostIDs on-demand. When you reference the file
/sfs/prefix/name
, your agent will run the command:
prog arg ... name
If the program succeeds and prints dest to its standard output,
the agent will then create a symbolic link:
/sfs/prefix/name -> dest
If the -p
flag is omitted, the link is simply
/sfs/name -> dest
. prefix can be more than one
directory deep (i.e., a series of path components separated by
/
). If so, the first certification program whose prefix matches
at the beginning of prefix is run. The remaining path components
are passed to prog. For example:
For example (names hypothetical), after running sfskey certprog -p trusted/mit dirsearch /mit/links, referencing /sfs/trusted/mit/foo would cause the agent to run dirsearch /mit/links foo; if that lookup succeeded and printed /mit/links/foo, the agent would create the link /sfs/trusted/mit/foo -> /mit/links/foo.
filter is a perl-style regular expression. If it is specified, then name must contain it for the agent to run prog. exclude is another regular expression, which, if specified, prevents the agent from running prog on names that contain it (regardless of filter).
The program dirsearch
can be used with certprog
to
configure certification paths--lists of directories in which to
look for symbolic links to HostIDs. The usage is:
dirsearch [-clpq] dir1 [dir2 ...] name
dirsearch
searches through a list of directories dir1,
dir2, ... until it finds one containing a file called
name, then prints the pathname dir/name
. If it
does not find a file, dirsearch
exits with a non-zero exit
code. The following options affect dirsearch
's behavior:
-c
-l
dir/name
be a symbolic link, and print
the path of the link's destination, rather than the path of the link
itself.
-p
dir/name
. This is the default
behavior anyway, so the option -p
has no effect.
-q
As an example, to lookup self-certifying pathnames in the directories
$HOME/.sfs/known_hosts
and /mit
, but only accepting links
in /mit
with names ending .mit.edu
, you might execute the
following commands:
% sfskey certprog dirsearch $HOME/.sfs/known_hosts
% sfskey certprog -f '\.mit\.edu$' dirsearch /mit
sfskey confclear
sfskey conflist [-q]
sfskey confprog prog [arg ...]
The confprog
command registers a command to be run by the agent
when it receives an authentication request. The agent provides the program
with the following command line arguments: the machine making the request,
the machine that the requestor wants to access, the service (e.g., file
system, remote execution facility), the current key that the agent will try
signing with, and a list of all of the keys that the agent has available.
If the confirmation program returns a zero exit status, the agent will
sign with the current key; otherwise, it will refuse to sign with that key
and will try the next available one.
The confirmation program can be very simple (always answer yes, for
example), or quite complex. SFS comes with an example confirmation program
written in Python/GTK2 (confirm.py
). When called, the script can pop
up a dialog box which asks the user what he wants to do with the request.
The user has several options: reject, accept, accept and allow all future
requests from the requesting machine to access the named machine, accept and
allow access from requestor to any machine in the named machine's domain,
or accept and allow access from requestor to any machine. The script saves
the user's preferences in a data file which it consults on subsequent
invocations. If the user has chosen to accept a particular request
automatically, the script returns zero (success) without popping up a dialog
box.
Confirmation programs allow the user to manage trust policies when working
with machines that are trusted to different degrees. For example, a user
might trust the machines on his LAN but want to manually confirm requests
from machines in a shared compute cluster.
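As a sketch of the interface only (this is not the shipped confirm.py), a minimal confirmation program that approves every request could be a small shell script; the agent passes the request details as arguments and acts solely on the exit status:

```shell
#!/bin/sh
# Minimal SFS confirmation program: approve every request.
# Per the description above, the agent passes the requesting machine,
# the target machine, the service, the current key, and the list of
# available keys as arguments.  Only the exit status matters:
# 0 = sign with the current key; nonzero = refuse and try the next key.
exit 0
```

Install it with sfskey confprog; a real policy would inspect the arguments before deciding.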
sfskey delete keyname
Deletes the key named keyname from your agent (reversing the
effect of a previous add
command).
sfskey deleteall
Deletes all private keys currently held by your agent.
sfskey edit [-LP] [-o keyname] [-c cost] [-l label] [keyname]
keyname can be a file name, or it can be of the form
[user]@server
, in which case sfskey
will
fetch the key remotely and an output file (via -o
) must be specified. If
keyname is unspecified the default is $HOME/.sfs/identity
.
If keyname is #
, then sfskey edit
will search
for the next appropriate keyname in $HOME/.sfs/authkeys
. In this case,
sfskey edit
will update $HOME/.sfs/identity
to point to
this new key by default.
The options are:
-L
#
.
-P
-o keyname
#
implies that sfskey edit
should
generate the next available default key in $HOME/.sfs/authkeys
.
A keyname of the form prefix#
implies that
sfskey edit
should generate the next available key in
$HOME/.sfs/authkeys
with the prefix prefix. A keyname
of the form prefix#suffix
implies that
sfskey edit
should make a key named
$HOME/.sfs/authkeys/prefix#suffix
.
-c cost
PwdCost
, pwdcost.
-l label
Specifies the keylabel, which is shown in the output of
sfskey list
.
sfskey 2edit -[Smp] [-l label] [-S | -s srpfile] [keyname1 keyname2 ...]
Use sfskey 2edit
by supplying the keys that you wish to have
updated. Keynames are given in standard sfskey
style. Keynames
must be either remote keynames (i.e., contain a @
but no #
character) or stored in the standard keys directory (i.e., contain a #
but no /
character). For remote keys, SRP will be used to
download the key from the server, and the updated, encrypted client private
keyhalf will be written back to the server along with the new server
keyhalf. No file will be saved locally. For keys stored in
$HOME/.sfs/authkeys
, sfskey 2edit
will update the
server private keyhalf, and write the corresponding client private
keyhalf out to $HOME/.sfs/authkeys
under a new filename. By default,
sfskey 2edit
will also write the new encrypted client private
keyhalf back to the server for later SRP retrieval.
If no key is specified, the default key, $HOME/.sfs/identity
is
assumed.
-E
-S
-m
sfskey 2edit -m
,
with no additional arguments or keynames, sfskey
will refresh
all current default keys.
-p
-l label
sfskey list
.
-s srpfile
sfskey gen [-KP] [-b nbits] [-c cost] [-l label] [keyname]
$HOME/.sfs/authkeys
. If keyname contains a /
character, it will be treated as a regular Unix file. If keyname
is of the form prefix#
, sfskey gen
will look for
the next available Rabin key in $HOME/.sfs/authkeys
with the
prefix prefix. If keyname contains a non-terminal #
character, it will be treated as a fully-specified keyname to be saved in
$HOME/.sfs/authkeys
.
Note that sfskey gen
is only useful for generating Rabin keys.
Use either sfskey register
or sfskey 2gen
to
generate 2-Schnorr keys.
-K
sfskey gen
asks the user to type random text with
which to seed the random number generator. The -K
option
suppresses that behavior.
-P
sfskey gen
should not ask for a passphrase and
the new key should be written to disk in unencrypted form.
-b nbits
-c cost
PwdCost
, pwdcost.
-l label
sfskey list
.
Otherwise, the user will be prompted for a name.
sfskey 2gen [-BEKP] [-a {hostid | -}] [-b nbits] [-c cost] [-k okeyname] [-l label] [-S | -s srpfile] [-w wkeyfile] [nkeyname]
-a
flag. All keypairs will correspond to the same
public key. The new keys will be saved locally to the files given
by nkeyname in the usual fashion: if nkeyname is of the
form prefix#, then sfskey 2gen
will look for the next
available 2-Schnorr key in $HOME/.sfs/authkeys
with the prefix
prefix. If no nkeyname is given, it will find the next
available keyname in $HOME/.sfs/authkeys
with the default
prefix (user@host).
Note that by default, this operation will update the public key, the
encrypted private key, the SRP information, and the server private key
share on all of the servers given. Specify -BES
to suppress
updates of these fields.
-a -
-a hostid
-k user@host
), then you can specify
that host by its simple hostname (e.g., -a host
). If SRP
was not used to connect to a host host, then -a
requires
a complete SFS host identifier (i.e., @Location,HostID).
-B
-E
-K
-P
-c cost
-l label
-s srpfile
sfskey gen
. These options behave similarly.
-S
-b nbits
-k keyname
sfskey
. By default, all keys from $HOME/.sfs/authkeys
are loaded and hashed. Remote keys and local keys in non-standard
locations can be loaded into the hash with this option. The keys
will in turn be used to authenticate you to the servers that you
intend to update.
-w wkeyfile
sfskey gethash [-6p] keyname
sfskey
uses SRP to establish a secure connection to
the authentication server.
-6
sfskey group [-a key] [-E] [-C] [-L version] [-m {+|-}membername] [-o {+|-}ownername] groupname
-a
is another way to retrieve the key.
With no options, sfskey
will query the authentication server for
the group and print out the result. The group owners and members listed
will be exactly as they appear in the authentication server's database.
The various options are described below.
-a key
sfskey
for this session. Keynames are
specified as described above, and can be remote (via SRP) or the path to a
local file. Usually it will not be necessary to specify keys in the keys
directory ($HOME/.sfs/authkeys
) as they are considered automatically.
-E
sfskey
will ask the authentication
server to "expand" the owners and members lists first by computing the
transitive closure of all groups and remote users. The expanded group
will contain only public key hashes and user names (local to the remote
authentication server).
-C
sfskey
to create a new group called
groupname. If the group already exists, sfskey
returns
an error.
-L
sfskey
to retrieve a group's changelog beginning
at version version up through the most recent version. The changelog
contains the updates made to the group's members list, plus the group's
current refresh and timeout values.
-m {+|-}membername
-o {+|-}ownername
sfskey
to add (+) or subtract (-) the given
member or owner name to or from the given group. membernames and
ownernames must be of the form "u=<user>", "g=<group>" or
"p=<pkhash>". The "<user>" and "<group>" names can be local or remote,
but remote names must contain the fully-qualified self-certifying hostname.
Duplicate member names and owner names are removed from the group before
it is updated. Removals of names that don't exist on the given list
are ignored. This option may be given more than once.
sfskey help
sfskey
commands and their usage.
sfskey hostid Location
sfskey hostid Location%port
sfskey hostid -
@Location,HostID
or @Location%port,HostID
to standard output. If
Location is simply -
, returns the name of the current
machine, which is not insecure.
-s service
sfs
(except when using
-
). This option selects a different SFS service. Possible
values for service are sfs
, authserv
, and
rex
.
sfskey kill
sfskey list [-ql]
-q
-l
sfskey norevokeset HostID ...
sfskey norevokelist
sfskey passwd [-Kp] [-S | -s srpfile] [-b nbits] [-c cost] [-l label] [arg1] [arg2] ...
sfskey passwd
command is a high-level command for "changing
passwords" in SFS. In the case of proactive keys, sfskey passwd
will simply refresh keys via sfskey 2edit
functionality. In
the case of Rabin keys, sfskey passwd
generates a new Rabin
key and updates the given servers. By default, sfskey passwd
assumes standard Rabin keys, and thus treats arg-i as
[user][@]host arguments. If host is a regular
hostname, then SRP will be required to authenticate the host. If host
is a full SFS pathname, then sfskey passwd
will look for keys
in $HOME/.sfs/authkeys
that can authenticate the user to that particular
server. In the case of proactive 2-Schnorr keys, sfskey passwd
will treat arg-i as local or remote keynames.
If no options or arguments are given, sfskey passwd
will look
to the default key given by $HOME/.sfs/identity
. If the default key
is a proactive 2-Schnorr key, then all current 2-Schnorr keys in
.sfs/authkeys
are refreshed. If the default key is a Rabin key,
then the user's key on the local machine is updated.
-p
sfskey passwd
operates under the assumption that the key to
update is a Rabin key.
-K
-S
-s srpfile
-b nbits
-c cost
-l label
sfskey gen
. Briefly,
-S
turns off SRP, -K
disables keyboard randomness
query, -s
is used to supply an SRP parameters file and is
mutually exclusive with -S
, -b
specifies the
size of the key in bits, -c
specifies the secret key
encryption cost, and -l
specifies the label for the key,
as seen in sfskey list
.
sfskey register [-fgpPK] [-S | -s srpfile] [-b nbits] [-c cost] [-u user] [-l label] [-w filename] [keyname]
sfskey register
command lets users who are logged into an
SFS file server register their public keys with the file server for the
first time. Subsequent changes to their public keys can be
authenticated with the old key, and must be performed using
sfskey update
or sfskey 2gen
. The superuser can also use
sfskey register
when creating accounts.
keyname is the private key to use. If keyname does not exist and
is a pathname, sfskey
will create it. The default keyname is
$HOME/.sfs/identity
, unless -u
is used, in which case
the default is to generate a new key in the current directory. For keys
that contain the special trailing character #
, sfskey
will implicitly determine whether the user intends to generate or access
a key. If the command is invoked as root with the -u
flag, then
generation is assumed. Similarly, if any of the options -bcgp
are used, generation is assumed. Otherwise, sfskey
will first
attempt to access the most recent key matching keyname, and then will
revert to generation if the access fails.
If a user wishes to reuse a public key already registered with another
server, the user can specify user@server
for
keyname.
-f
sfskey gen
will fail if a
record for the given user already exists on the server.
-g
prefix#
, sfskey register
will always generate
the next available key with the prefix prefix in the standard
keys directory ($HOME/.sfs/authkeys
). If sfskey
register
is being run as root with the -u
option, then
access to the standard keys directory $HOME/.sfs/authkeys
will
not be allowed. Hence, the key will simply be generated in the
current directory.
-p
-g
flag.
-K
-P
-l label
-b nbits
-c cost
-s srpfile
sfskey gen
. -K
and
-b
have no effect if the key already exists. They all imply the
-g
flag. If -p
is given, then -b will specify
the size of the modulus p used in 2-Schnorr. Without -p
,
-b
will specify the size of pq in Rabin.
-S
-u user
sfskey register
is run as root, specifies a particular
user to register.
-w filename
-p
flag. For security reasons, this should only be used when saving to
removable media (e.g., /floppy/complete-key-2
). It is a substantial
security risk to leave the complete key on a file system that might
be compromised.
sfsauthd_config
must have a Userfile
with the
-update
and -passwd
options to enable use of the
sfskey register
command (see sfsauthd_config).
sfskey reset
/sfs
directory, including all symbolic
links created by sfskey certprog
and sfskey add
, and
log the user out of all file systems.
Note that this is not the same as deleting private keys held by the
agent (use deleteall
for that). In particular, the effect of
logging the user out of all file systems will likely not be
visible--the user will automatically be logged in again on-demand.
sfskey revokegen [-r newkeyfile [-n newhost]] [-o oldhost] oldkeyfile
sfskey revokelist
sfskey revokeclear
sfskey revokeprog [-b [-f filter] [-e exclude]] prog [arg ...]
sfskey select [-f] keyname
$HOME/.sfs/identity
to point to the key given by keyname. It cannot be an SRP key.
-f
$HOME/.sfs/identity
is a regular
file, sfskey select
will overwrite it.
sfskey sesskill remotehost
rex
session to the server specified by remotehost,
where remotehost is any unique prefix of the remote host's
self-certifying hostname (found under the "TO" column in the output to
sfskey sesslist
).
sfskey sesslist
rex
sessions that the agent is maintaining.
sfskey srpgen [-b nbits] file
sfs_srp_params
file (see sfs_srp_params).
sfskey srpclear
sfskey srplist
user@host
. Sample output
of the sfskey srplist
command might be
% sfskey srplist
alice@pdos.lcs.mit.edu @amsterdam.lcs.mit.edu,bkfce6jdbmdbzfbct36qgvmpfwzs8exu
alice@redlab.lcs @redlab.lcs.mit.edu,gnze6vwxtwssr8mc5ibae7mtufhphzsk
alice@ludlow.scs.cs.nyu.edu @ludlow.scs.cs.nyu.edu,hcbafipmin3eqmsgak2m6heequppitiz
Currently, the agent consults this cache and adds new mappings to it
when a user invokes REX with a DNS (SRP) name. If the name is in the
agent's cache, REX will use the corresponding self-certifying hostname
to authenticate the server. If not, REX will use SRP to fetch the
server's public key and then add a new mapping to the agent's cache.
sfskey srpcacheprogclear
sfskey srpcacheproglist [-q]
sfskey srpcacheprog prog [arg ...]
The
srpcacheprog
command registers a command to be run by the agent in
order to manage an on-disk copy of the in-memory SRP name cache (described
above; see srplist). The agent will invoke the SRP cache management
program with zero arguments when it wants to load the on-disk cache into
memory and exactly one argument when it wants to add a new entry to the
on-disk cache. If no SRP cache management program is set, the agent will
simply maintain an in-memory version which will be lost when the agent
is restarted.
In the first case (load), the program output must consist of one
mapping per line. Each mapping must consist of the SRP name followed
by a single space followed by the self-certifying
hostname. See srplist, for an example of what each of these fields
might look like. In the second case (store), the agent's argument to
the program will consist of a single mapping, to be added to the
on-disk cache. The mapping will have the same format described above:
the SRP name followed by a single space followed by the
self-certifying hostname (no trailing newline).
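The load/store protocol above can be sketched as a small shell function; the cache location (`$HOME/.sfs/srpcache` via `SRP_CACHE`) and the name `srpcache` are hypothetical, not part of SFS:

```shell
#!/bin/sh
# Hypothetical SRP cache management program, registered with a command
# like: sfskey srpcacheprog /path/to/this-script
# Zero arguments: load -- print one "srpname hostname" mapping per line.
# One argument: store -- append the given mapping to the on-disk cache.
srpcache() {
    cache="${SRP_CACHE:-$HOME/.sfs/srpcache}"   # assumed cache location
    case $# in
    0)  [ -f "$cache" ] && cat "$cache"         # load the on-disk cache
        return 0 ;;                             # an empty cache is not an error
    1)  printf '%s\n' "$1" >> "$cache" ;;       # store one new mapping
    esac
}
# As a standalone script, dispatch with: srpcache "$@"
```

Each line the load case prints, and each argument the store case receives, has the format shown under srplist: the SRP name, a single space, and the self-certifying hostname.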
sfskey update [-fE] [-S | -s srp_params] [-r srpkey] [-a okeyname] [-k nkeyname] server1 server2 ...
$HOME/.sfs/identity
. Then you can run
sfskey update [user]@host
for each server on which
you need to change your public key.
To authenticate you to the servers on which updates are requested,
sfskey update
will first use the keys given via -a
arguments; it will then search keys in the standard key
directory--$HOME/.sfs/authkeys
.
At least one server argument is required. As usual, the string
"-" denotes the localhost. The servers specified can be either
full SFS hostnames of the form [user]@Location,HostID,
or standard hostnames of the form [user@]Location. In the
latter case, SRP is assumed, and the corresponding private key is
automatically loaded into sfskey
.
The new key that is being pushed to the server is given by the
-k
flag. If this is not provided, the default key
$HOME/.sfs/identity
will be assumed.
The -r
provides a shortcut for updating SRP information, if,
for instance, the authserver has changed its realm information. Invoking
sfskey update -r [user]@host
is equivalent to
sfskey update -k [user]@host [user@]host
.
Several options control sfskey update
's behavior:
-E
-S
-E
-a okeyname
sfskey
for this session. Keynames are
specified as described above, and can be remote (via SRP) or the path to a
local file. Usually it will not be necessary to specify keys in the keys
directory ($HOME/.sfs/authkeys
) as they are considered automatically.
-f
-f
flag
will force an update. Normally, the user is prompted to verify.
-k nkeyname
$HOME/.sfs/authkeys
. If this
flag is not specified, $HOME/.sfs/identity
is assumed.
Note that the -k
flag can be specified only once.
-r [user][@]host
sfskey update -k [user]@host [user@]host
.
Cannot be used with the -akS
options.
-s
sfskey
srpgen
, and specifies the parameters to use in generating SRP
information for the server. The default is to get SRP parameters from
the server, or look in
/usr/local/share/sfs/sfs_srp_params
.
sfskey user [-a key] username
-a
is another way to retrieve the key.
sfskey
will query the authentication server for the user and
print out the result.
-a key
sfskey
for this session. Keynames are
specified as described above, and can be remote (via SRP) or the path to a
local file. Usually it will not be necessary to specify keys in the keys
directory ($HOME/.sfs/authkeys
) as they are considered automatically.
rex
reference guiderex
is a remote execution facility which is integrated with
SFS. The program allows users to run programs on a remote machine
or obtain a shell. Like SFS file systems, remote execution servers can
be named by self-certifying path names.
The usage is as follows:
rex [-TAXpv] [-R port:lport] destination [command]
destination is one of the following:
-T
-A
sfsagent
running on the remote machine, rex
will
forward agent requests back to the sfsagent
running on the
local machine (e.g., when a user accesses an SFS file system or runs
sfskey
).
-X
rex
client will set up
a dummy X server which receives connections from clients on the remote
machine. These connections are forwarded over the encrypted
rex
channel to the local X server. rex
sets the
DISPLAY
environment variable appropriately on the remote side.
Furthermore, X connections are authenticated using a `spoofed'
MIT-MAGIC-COOKIE-1.
-p
rex
to connect to the destination even if
it cannot be resolved into a valid self-certifying path name.
-v
-R port:lport
The rex
command supports the escape sequences listed below.
Rex only recognizes the escape character `~' after a newline.
dirsearch
commanddirsearch
looks for a file name in one or more directories.
The usage is as follows:
dirsearch [-c | -l | -p | -q] dir1 [dir2 ...] name
Starting with dir1, the command searches each directory
specified for a file called name. If such a file is found,
dirsearch
exits with code 0 and, depending on its options,
may print the file's pathname, contents, or expanded symbolic link
contents. If none of the directories specified contain a file
name, dirsearch
exits with code 1 and prints no
output.
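The lookup order described above can be sketched in portable shell; `dirsearch_sketch` is a hypothetical stand-in for the real dirsearch binary and models only the print-pathname behavior and the 0/1 exit codes:

```shell
#!/bin/sh
# Sketch of dirsearch's lookup loop: the last argument is the file
# name, the preceding arguments are directories searched in order.
dirsearch_sketch() {
    n=$#
    # pick out the final argument (the name to look for)
    i=1
    for a in "$@"; do
        [ "$i" -eq "$n" ] && name=$a
        i=$((i + 1))
    done
    # search each directory in turn; print the first match and stop
    i=1
    for d in "$@"; do
        [ "$i" -eq "$n" ] && break
        if [ -e "$d/$name" ]; then
            printf '%s\n' "$d/$name"
            return 0        # found: exit code 0, like dirsearch
        fi
        i=$((i + 1))
    done
    return 1                # not found: exit code 1, no output
}
```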
dirsearch
is particularly useful for SFS certification
(see certprog) and revocation programs. As an example, suppose you
have a directory of symbolic links in your home directory called
.sfs/bookmarks
. The directory might contain the
following links:
sfs.fs.net -> /sfs/@sfs.fs.net,uzwadtctbjb3dg596waiyru8cx5kb4an
sfs.nyu.edu -> /sfs/@sfs.nyu.edu,hcbafipmin3eqmsgak2m6heequppitiz
If you execute the command:
sfskey certprog dirsearch -l ~/.sfs/bookmarks
Then the next time you access /sfs/sfs.fs.net
, that
pathname will automatically become a symbolic link to your bookmark.
Moreover, the same will happen on remote machines to which you log in
with the rex
command.
The following mutually exclusive options affect the behavior of
dirsearch
. If more than one option is specified, only the
last will have an effect.
-c
-l
dirsearch
will expand the symbolic link.
-p
Print the file's pathname (the default behavior); thus the only use of -p
is to undo any previous
-c
, -l
, or -q
option.
-q
Suppresses whatever output dirsearch
would print.
The exit code still indicates whether or not the file exists.
newaid
commandThe newaid
command allows root-owned processes to access SFS
file systems using the sfsagent
of a non-root user.
Additionally, if a system is configured to allow this,
newaid
permits non-root users to run multiple
sfsagent
processes, so that different processes owned by
that user access the SFS file system with different agents. (When
used in the latter mode, newaid
is similar in function to
the AFS program pagsh
.)
SFS maps file system requests to particular sfsagent
processes using the notion of agent ID, or aid. Every process
has a 64-bit aid associated with it. Ordinarily, a process's aid is
simply its 32-bit user ID. Thus, when a user runs sfsagent
,
both the agent and all of the user's processes have the same aid.
To allow different processes owned by the same user to have different
agents, a system administrator can reserve a range of group IDs for
the purpose of flagging different aids (see resvgids).
(Note that after changing ResvGids
, you must kill and restart
sfscd
for things to work properly.) If the range of
reserved group IDs is min...max, and the first
element of a process's grouplist, g0, is at least min and
not more than max, then a process's aid is computed as
((g0 - min + 1) << 32) | uid. The newaid
command therefore lets people insert any of the reserved group IDs at
the start of a process's group list.
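Assuming 64-bit shell arithmetic, the aid computation above can be checked directly; the reserved gid range starting at 70000 and the sample uid are purely illustrative:

```shell
#!/bin/sh
# Evaluate aid = ((g0 - min + 1) << 32) | uid for sample values.
min=70000   # hypothetical start of the reserved gid range
uid=1000    # the user's uid
g0=70001    # first gid on the process's grouplist
aid=$(( ((g0 - min + 1) << 32) | uid ))
echo "$aid"
```

With g0 equal to min, the aid is (1 << 32) | uid rather than the bare uid, so a process carrying any reserved gid is always distinguished from one carrying none.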
It is also possible for root-owned processes to be associated with a non-root agent. In this case, the reserved sfs-group (as a marker) and the target user's uid are actually placed in the process's grouplist, along with any reserved group ID needed to select amongst multiple agents of the same user.
The usage is:
newaid [-l] [-{u|U} uid] [-G | -g gid] [-C dir] [program arg ...]
After making appropriate changes to its user ID and/or grouplists,
newaid
executes the program specified on the command
line. If no program is specified, the program specified by the
environment variable SHELL
is used by default.
-l
Prepends a -
character to argv[0]
when executing program.
Command shells interpret this to mean that they are being run as
login shells, and usually exhibit slightly different behavior. (For
example csh
will execute the commands in a user's
.login
file.)
-u uid
-U uid
newaid
is invoked by a root-owned process, this
option sets the real uid to uid to run program, instead of
running it with uid 0. This in itself is not sufficient to "drop
privileges." In particular, newaid
still does not make any
changes to the process gid or grouplist, beyond manipulating
aid-specific groups. Since many root-owned processes also have
privileged groups in their grouplist, it is in general
insecure to use -U
unless you set both the gid and
the whole grouplist to something sensible (i.e., appropriately
unprivileged) before invoking newaid
.
This option is mostly of use for login
-like programs that
wish to create a session with a new aid, and do not wish to make the
setuid
system call themselves. As an example, the
rexd
daemon has the server's private key, yet must spawn the
proxy
program as an unprivileged user. If it dropped
privileges before executing proxy
, unprivileged users could
send it signals, risking core dumps. Moreover, attackers might be
able to exploit weaknesses in the operating system's ptrace
system call or /proc
file system to learn the private key.
rexd
therefore runs proxy
through
newaid
, giving it the -U
option.
-g gid
-G
newaid
simply picks the first aid under which no
agent is yet running. The -g
option explicitly specifies
that gid should be added to the start of the process's group
list (and any previous reserved gid should be removed). -G
says to remove any reserved gid, so that the aid of the resulting
process will just be the user's uid.
-C dir
ssu
commandThe ssu
command allows an unprivileged user to become root
on the local machine without changing his SFS credentials.
ssu
invokes the command su
to become root. Thus,
the access and password checks needed to become root are identical to
those of the local operating system's su
command.
ssu
also runs /usr/local/lib/sfs-0.8pre/newaid
to
alter the group list so that SFS can recognize the root shell as
belonging to the original user.
The usage is as follows:
ssu [-f | -m | -l | -c command]
-f
-m
su
command.
-l
-c command
ssu
to tell su
to run command rather
than running a shell.
Note, ssu
does not work on some versions of Linux because of a
bug in Linux. To see if this bug is present, run the command su
root -c ps
. If this command stops with a signal, your su
command is broken and you cannot use ssu
.
sfscd
commandsfscd [-d] [-l] [-L] [-f config-file]
sfscd
is the program to create and serve the /sfs
directory on a client machine. Ordinarily, you should not need to
configure sfscd
or give it any command-line options.
-d
-l
sfscd
will disallow access to a server running on
the same host. If the Location in a self-certifying pathname
resolves to an IP address of the local machine, any accesses to that
pathname will fail with the error EDEADLK
("Resource deadlock
avoided").
The reason for this behavior is that SFS is implemented using NFS. Many
operating systems can deadlock when there is a cycle in the mount
graph--in other words when two machines NFS mount each other, or, more
importantly when a machine NFS mounts itself. To allow a machine to
mount itself, you can run sfscd
with the -l
flag.
This may in fact work fine and not cause deadlock on non-BSD systems.
-L
-L
option disables a number of kludges that work
around bugs in the kernel. -L
is useful for people interested
in improving Linux's NFS support.
-f config-file
sfscd
configuration file
(see sfscd_config). The default, if -f
is unspecified, is
first to look for /etc/sfs/sfscd_config
, then
/usr/local/share/sfs/sfscd_config
.
sfssd
commandsfssd [-d] [-S sfs-config-file] [-f config-file]
sfssd
is the main server daemon run on SFS servers.
sfssd
itself does not serve any file systems. Rather, it acts
as a meta-server, accepting connections on TCP port 4 and passing them
off to the appropriate daemon. Ordinarily, sfssd
passes all
file system connections to sfsrwsd
, and all user-key
management connections to sfsauthd
. However, the
sfssd_config
file (see sfssd_config) allows a great deal of
customization, including support for "virtual servers," multiple
versions of the SFS software coexisting, and new SFS-related services
other than the file system and user authentication.
-d
-f config-file
sfssd
configuration file
(see sfssd_config). The default, if -f
is unspecified, is
first to look for /etc/sfs/sfssd_config
, then
/usr/local/share/sfs/sfssd_config
.
-S sfs-config-file
sfs_config
file
(see sfssd_config). If sfs-config-file begins with a /
,
then only this file is parsed. Otherwise, all the directories
/usr/local/share/sfs
and /etc/sfs
are searched in
order, and if no file named sfs-config-file is found but a file
sfs_config
is found, that file is parsed. However, the process
does not look in /etc/sfs
if sfs-config-file is
found in /usr/local/share/sfs. Thus, if you create a file
/etc/sfs/
sfs-config-file, it will override
/etc/sfs/sfs_config
while still incorporating the
defaults from /usr/local/share/sfs/sfs_config.
vidb
commandvidb
manually edits an SFS user-authentication file
(see sfs_users), acquiring locks to prevent concurrent updates from
overwriting each other. If sfsauthd
has been compiled with
Sleepycat database support, and the
name of the file ends in .db/
, vidb
will consider the
user authentication file to be a database directory, and convert the
contents into regular ASCII text for editing. If the name of the file
ends in .db
, vidb
assumes the user authentication file
is a database file (unless the pathname corresponds to an existing
directory). Note that database files (as opposed to directories) are
required to be read-only, and thus cannot be updated by vidb.
The usage is:
vidb [-w] [-R] {-S | -a [-f file] | [-e editor]} sfs-users-file
vidb
has the following options:
-a [-f file]
-a
option adds SFS user records in text form to a
database. The records are taken from standard input, or from
file if specified. Records for an existing user or group will
replace the values already in the database. Unlike vidb
's
ordinary mode of operation, -a
does not add all records
atomically. In the event of a system crash, some but not all of the
records may have been added to the database. Simply re-running the
same vidb
command after a crash is perfectly safe, however,
since previously added entries will just be overwritten (by
themselves) the second time through. For database files, because
-a
does not accumulate records into one large transaction, it
can be significantly more efficient than simply adding the records in
an editor, using vidb
's ordinary mode of operation.
-e editor
EDITOR
.
If there is no environment variable and -e
is not specified,
vidb
uses vi
.
-w
vidb
is to avoid concurrent edits to
the database and the corresponding inconsistencies that might result.
Ordinarily, if the database is already being edited, vidb
will just exit with an error message. The -w
flag tells
vidb
to wait until it can acquire the lock on the database
before launching the editor.
-R
-c
flag of the db_recover
utility, or the
DB_RECOVER_FATAL
flag of the API.) Essentially, -R
replays all of the database log records present in the supporting
files directory. You may need to use this, for example, when
restoring a database from backup tapes if the log files were backed up
more recently than the entire database. The -R
has no effect
on flat text databases, or if the -S
has been specified.
Warning: The authors have encountered bugs in the
catastrophic recovery code of at least some versions of the Sleepycat
database package. As a precaution, before attempting to use
-R
, we strongly recommend salvaging whatever records possible
from the database file itself using vidb -S
sfs-users-file >
saved_sfs_users
. If, subsequently,
the -R
option corrupts the database, you can at least salvage
some of the records from the saved_sfs_users
file.
-S
vidb
and sfsauthd
attempt to recover from any
previous incomplete transactions using the log. The -S
option opens and prints out the contents of a database without regard
to the log files. This is useful if you have lost the log files or
are worried that they are corrupt, or if you wish to examine the
contents of a database you have read but not write permission to.
Ordinarily, however, if you wish to dump the contents of a database to
standard output, use the command vidb -e cat
sfs-users-file.
Note:
vidb
should really recreate any publicly-readable versions
of user authentication databases (either by parsing
sfsauthd_config
for -pub=...
options to
Userfile
directives or signaling sfsauthd
).
Currently you must manually kill sfssd
or sfsauthd
for this to happen.
While vidb
attempts to make the smallest number of changes
to a database, editing sessions that add or remove a large number of
records can potentially exhaust resources such as locks. Sites with
large user databases can tune the database by creating a file called
DB_CONFIG
in the database directory.
The specifics of the configuration file are documented in the
Sleepycat database documentation. As an example, if performance is
slow and you run out of locks, you can set the cache size to 128MB and
increase the number of locks with the following DB_CONFIG
file:
set_cachesize 0 134217728 1
set_lk_max_locks 50000
set_lk_max_objects 50000
When editing a database, vidb
creates a temporary text file
in the /tmp
directory. For huge databases, it is conceivable
that /tmp
does not have enough space. If this happens,
/tmp
can be overridden with the TMPDIR
environment
variable.
funmount
commandThe funmount
command is executed as follows:
funmount path
funmount
forcibly attempts to unmount the file system
mounted on path. It is roughly equivalent to running
umount -f path
. However, on most operating systems the
umount
command does a great deal more than simply execute
the unmount
system call--for instance it may attempt to read
the attributes of the file system being unmounted and/or contact a
remote NFS server to notify it of the unmount operation. These extra
actions make umount
hang when a remote NFS server is
unavailable or a loopback server has crashed, which in turn causes the
client to become ever more wedged. funmount
can avoid such
situations when you are trying to salvage a machine with bad NFS
mounts without rebooting it.
SFS will get very confused if you ever unmount file systems from
beneath it. SFS's nfsmounter
program tries to clean up the
mess if the client software ever crashes. Running funmount
will generally only make things worse by confusing
nfsmounter
.
If /a
is a mount point, and /a/b
is another mount point,
unmounting /a
before /a/b
will cause the latter file
system to become "lost." Once a file system is lost, there is no
way to unmount it without rebooting. Worse yet, on some operating
systems, commands such as df
may hang because of a lost file
system.
Many operating systems will not let you unmount a file system (even
forcibly) if a process is using the file system's root directory (for
instance as a current working directory). Under such circumstances,
funmount
may fail. To unmount the file system you must find
and kill whatever process is using the directory. Utilities such as
fstat
and lsof
may be helpful for identifying
processes with a particular file system open.
sfsrwsd
daemon/usr/local/lib/sfs-0.8pre/sfsrwsd [-f config-file]
sfsrwsd
is the program implementing the SFS read-write server.
Ordinarily, you should never run sfsrwsd
directly, but rather
have sfssd
do so. Nonetheless, you must create a
configuration file for sfsrwsd
before running an SFS server.
See sfsrwsd_config, for what to put in your sfsrwsd_config
file.
-f config-file
sfsrwsd
configuration file
(see sfsrwsd_config). The default, if -f
is unspecified, is
/etc/sfs/sfsrwsd_config
.
sfsrosd
daemon/usr/local/lib/sfs-0.8pre/sfsrosd [-f config-file]
sfsrosd
is the program implementing the SFS read-only server.
Ordinarily, you should never run sfsrosd
directly, but rather
have sfssd
do so. Nonetheless, you must create a
configuration file for sfsrosd
before running an SFS server.
See sfsrosd_config, for what to put in your sfsrosd_config
file.
-f config-file
sfsrosd
configuration file
(see sfsrosd_config). The default, if -f
is unspecified, is
/etc/sfs/sfsrosd_config
.
sfsauthd
daemon/usr/local/lib/sfs-0.8pre/sfsauthd [-u sockfile] [-f config-file]
sfsauthd
is the program responsible for authenticating
users. sfsrwsd
and other daemons communicate with
sfsauthd
, forwarding it authentication requests from
sfsagent
processes on remote client machines.
sfsauthd
informs requesting daemons of whether
authentication requests are valid, and if so what local credentials to
associate with the remote user agent. The sfskey
program
also communicates directly with remote sfsauthd
processes
when retrieving and updating users' keys (with sfskey add
,
update
, register
, and more).
-f config-file
Specifies an alternate sfsauthd
configuration file,
sfsauthd_config. The default, if -f
is unspecified, is
/etc/sfs/sfsauthd_config
.
-u path
Specifies a Unix-domain socket, allowing
sfssd
to communicate with an already running sfsauthd
using
a directive like Service 2 -u path
in sfssd_config
(see sfssd_config).
sfsrwcd daemon
/usr/local/lib/sfs-0.8pre/sfsrwcd [-u unknown-user]
sfsrwcd
is the daemon that implements the client side of the
SFS read-write file system protocol. sfsrwcd
acts as an NFS
loopback server to the local machine's in-kernel NFS client, and as a
client to a remote SFS server speaking the read-write protocol. Most
SFS servers use the read-write file system protocol, though several
research projects have implemented other protocols.
The SFS read-write protocol has RPC program number 344444 and version
number 3. It closely resembles NFS3, but additionally supports leases
on attributes: for a short period after returning file attributes to
a client, the server commits to notifying the client when the
attributes change. Leases enable clients to cache file attributes
more aggressively. In addition, the SFS protocol is encrypted and
authenticated (via a message authentication code), and supports user
authentication via opaque messages, so that users' local
sfsagent
processes can cryptographically authenticate them
to remote servers.
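The lease idea described above can be sketched as follows. This is a purely illustrative model, not the actual sfsrwcd implementation or wire protocol; the class and method names are invented:

```python
import time

class AttrLeaseCache:
    """Toy model of attribute leasing: cached attributes may be served
    without a server round trip until the lease expires, and the server
    may invalidate them early if the attributes change."""

    def __init__(self, lease_secs):
        self.lease_secs = lease_secs
        self.cache = {}  # file handle -> (attrs, lease expiry time)

    def store(self, fh, attrs, now=None):
        now = time.time() if now is None else now
        self.cache[fh] = (attrs, now + self.lease_secs)

    def lookup(self, fh, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(fh)
        if entry and now < entry[1]:
            return entry[0]       # lease still valid: serve from cache
        self.cache.pop(fh, None)
        return None               # expired or absent: client must refetch

    def invalidate(self, fh):
        # Server notification: attributes changed within the lease window.
        self.cache.pop(fh, None)

# Usage: within the lease the cache answers; afterward it does not.
c = AttrLeaseCache(lease_secs=60)
c.store("fh1", {"size": 100}, now=1000)
assert c.lookup("fh1", now=1030) == {"size": 100}
assert c.lookup("fh1", now=1061) is None
```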
Ordinarily, sfsrwcd
is launched by sfscd
. The
file /usr/local/share/sfs/sfscd_config
(see sfscd_config)
contains a configuration directive instructing sfscd
to run
sfsrwcd
for the read-write file system protocol (program
344444, version 3):
Program 344444.3 sfsrwcd
You never need to run sfsrwcd
directly (in fact,
sfsrwcd
won't work without the sfscd
automounter).
However, you might wish to change the options with which
sfsrwcd
runs. To do so, create an alternate
sfscd_config
file in /etc/sfs/
. For instance, you
might use the line:
Program 344444.3 sfsrwcd -u unknown
-u unknown-user
sfsrwcd
will attempt to map remote user IDs to local user
IDs of authenticated users. Moreover, when a user belongs to a file's
group on a remote machine, sfsrwcd
will map the file's gid
to the user's local gid.
unknown-user must be the name of a user in the local password
file. When none of the local users have remote credentials
corresponding to a remote file's owner, sfsrwcd
maps the
file's uid to the numeric uid of unknown-user. Moreover, when a
user is not in the file's remote group, sfsrwcd
maps the
file's gid to the numeric gid of unknown-user in the password
file.
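The mapping rules just described can be sketched as follows. This is an illustrative model only (the function names are invented), not sfsrwcd's actual code:

```python
def map_uid(remote_uid, uid_map, unknown_uid):
    """Map a remote file owner to a local uid.  uid_map maps remote uids
    to local uids for users with authenticated remote credentials; any
    unmatched owner falls back to unknown-user's uid."""
    return uid_map.get(remote_uid, unknown_uid)

def map_gid(remote_gid, user_remote_gids, local_gid, unknown_gid):
    """If the local user belongs to the file's group on the remote
    machine, show the user's local gid; otherwise fall back to
    unknown-user's gid from the password file."""
    if remote_gid in user_remote_gids:
        return local_gid
    return unknown_gid

# Usage: remote uid 1000 is an authenticated user mapped to local 501;
# remote uid 1234 is nobody we know, so it maps to unknown-user (uid 99).
assert map_uid(1000, {1000: 501}, 99) == 501
assert map_uid(1234, {1000: 501}, 99) == 99
```

As the surrounding text notes, no single gid mapping may be right for all local users, which is why two users can see the same file as belonging to different groups.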
Note that even with the -u
option, if a local user's uid and
gid are the same as on the remote machine, no ID mapping occurs, as
the client and server are assumed to be in the same administrative
realm (though of course this might not be true).
ID mapping is not completely reliable, and may result in odd behavior. In particular for group IDs, no single mapping may work for all local users. Thus, one user may see a file belonging to one group, and another user may see the same file as belonging to unknown-user's group. Worse yet, the kernel may cache file attributes, so that if the two users look at the same file at roughly the same time, one user may see the other user's mapping.
Despite odd attributes that might result from kernel cache consistency
problems, ID mapping never changes the actual file permissions users
have on files. Nor does it affect the results of the access
system call. The primary reason for the -u flag is that the
Macintosh finder attempts to second-guess file permissions based on
numeric user and group ID, even when these values do not make sense on
the local machine. Thus, users can be denied access to files they
have legitimate access to (and which the access
system call
would show they had access to).
Note that when ID mapping is in effect, the chown
system call
(used by the chown
and chgrp
commands) is
disallowed, because its potentially confusing effects would be
concealed by the ID mapping.
nfsmounter daemon
/usr/local/lib/sfs-0.8pre/nfsmounter [-F] [-P] /prefix
nfsmounter
is a program that calls the mount
and
unmount
(or umount
, depending on the operating
system) system calls to create NFS mount points for NFS loopback
servers. An NFS loopback server is a user-level program that speaks
the NFS file system protocol, effectively pretending to be a remote
file server even though it is just a process on the local machine.
SFS is implemented as an NFS loopback server to gain portability,
since most operating systems have built-in NFS clients. Other file
systems built using the SFS file system toolkit also use
nfsmounter
.
The only thing you really need to know about nfsmounter is that you
should never send nfsmounter
a SIGKILL
signal,
e.g., using the kill -9
command. If an NFS loopback server
seems to be misbehaving, you can find the corresponding
nfsmounter
process through ps
(the prefix
argument will tell you which directory a particular
nfsmounter
process is handling, if there are multiple
loopback servers on your machine) and send it a SIGTERM
signal
(kill -15
). Upon receiving a SIGTERM
,
nfsmounter
will drop its connection to the NFS loopback
server, take over the UDP sockets corresponding to the mount point,
and do its best to unmount all the file systems.
The rest of this nfsmounter
description is mostly of
interest to people who are developing NFS loopback servers themselves.
nfsmounter
must be run as root. It expects its standard
input (file descriptor 0) to be a Unix-domain socket. The program
that spawned nfsmounter
communicates over that socket using
an RPC protocol defined in /usr/local/include/sfs/nfsmounter.x
.
As part of the mount process, the program that invoked
nfsmounter
must send it a copy of the server socket for the
NFS loopback server. When nfsmounter
detects an end of file
on standard input, it takes over these sockets so as to avoid having
processes hang (which would happen if the NFS loopback server simply
died) and attempts to unmount all file systems. Thus, it is safe for
NFS loopback servers simply to exit.
If the SFS_RUNINPLACE
environment variable is set to a directory
and nfsmounter
detects that its standard input is not a
Unix-domain socket, nfsmounter
will instead bind Unix-domain
socket $SFS_RUNINPLACE/runinplace/.nfsmounter
and wait for a
single connection. The sfscd
program knows to check for
this socket when SFS_RUNINPLACE
is set. This option makes it
easy to run sfscd
as a non-root user by starting
nfsmounter
first, which in turn facilitates debugging with
emacs (without having to run everything as root).
-F
Pass the -f
(force) flag to the
umount
command. If you are developing an NFS loopback
server that seems to panic the kernel a lot on exit, running
nfsmounter
with -F
might help.
-P
Use absolute pathnames in the
mount
system call.
Ordinarily, as a defensive measure, nfsmounter
changes
directory to the point where the mount is happening. This is to avoid
accidentally following a symbolic link and creating a mountpoint on a
directory not under prefix. However, calling mount
with
a relative pathname causes the /proc
file system or system
file system or system
calls like getfsstat
to return relative pathnames, which can
confuse some applications.
To fix the problem, after creating a mount point, nfsmounter
attempts to re-mount or update the mountpoint using the absolute
pathname. Unfortunately, this trick does not work on some BSD-derived
operating systems, including MacOS. Moreover, on the Macintosh in
particular, the finder gets very confused by relative mountpoint
names. Thus, SFS uses the -P
option to nfsmounter on the
Macintosh.
nfsmounter
gets very confused if you unmount file systems
out from under it.
On some versions of Linux, if you attempt to create an NFS loopback
mount but are not running portmap
, it appears to wedge the
mountpoint in a way that requires a reboot to recover. The reason is
that the Linux kernel's NFS client checks to see if the server is
running various auxiliary daemons used for locking, and gets into a
bad state if it cannot map the port. There should be a way to recover
from this situation, but the author of nfsmounter
does not
know how. Running portmap
after the fact does not help.
Perhaps nfsmounter
should have its own built-in portmap to
use in the event that port 111 is not yet bound by any process.
The following environment variables affect many of SFS's component
programs. (Note that for security reasons, the setuid programs
suidconnect
and newaid
interpret some of these
slightly differently--ignoring some and dropping privilege if others
are set.)
Like ACLNT_TRACE
and
ACLNT_TIME
, but print out RPCs received (as a server), rather
If a socket would otherwise be bound to INADDR_ANY
, it will be bound to
BINDADDR instead (unless BINDADDR is no longer a valid
local address).
If the FDLIM_SOFT
and FDLIM_HARD
environment variables are
not set, SFS saves the old limit values in these environment
variables.
An example of how this is used is rexd
, the remote
execution daemon. rexd
reduces the file descriptor limits
to the original values specified by these environment variables before
spawning an unprivileged user program. These variables ordinarily
should not be of concern to users of SFS, and are documented here
mostly for people who notice them and are curious.
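The save-and-restore pattern described above can be sketched as follows, assuming the limit in question is the file descriptor limit (RLIMIT_NOFILE); the function names are invented and this is not SFS's actual code:

```python
import resource

def save_fd_limits(env):
    """Record the pre-existing descriptor limits in FDLIM_SOFT and
    FDLIM_HARD if those variables are not already set, as SFS is
    described to do before raising its own limits."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    env.setdefault("FDLIM_SOFT", str(soft))
    env.setdefault("FDLIM_HARD", str(hard))

def restore_fd_limits(env):
    """What a daemon like rexd might do before spawning an unprivileged
    user program: reduce the limits back to the saved values."""
    if "FDLIM_SOFT" in env and "FDLIM_HARD" in env:
        resource.setrlimit(
            resource.RLIMIT_NOFILE,
            (int(env["FDLIM_SOFT"]), int(env["FDLIM_HARD"])))
```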
sfskey
connects to sfsagent
through the
SFS client daemon, sfscd
. However, by passing the
-S
option to sfsagent
, it is possible to have
sfsagent
bind an arbitrary Unix domain socket for
connections. SFS_AGENTSOCK
can be set to such a pathname,
and sfskey
will then connect to that socket.
As a special case, if SFS_AGENTSOCK
is set to a negative number,
this is interpreted to mean a file descriptor number already connected
to the agent. This feature is particularly useful when atomically
killing and starting sfsagent
with the -k
flag. In
this case, any program specified on the command line, or the default
/usr/local/share/sfs/agentrc
script, will be run with
SFS_AGENTSOCK
set to a file descriptor. Thus, if the script
loads keys into the agent by running sfskey
, these keys will
be loaded into the new agent (before it takes over), rather than into
the old agent.
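The SFS_AGENTSOCK convention just described can be sketched as follows. The helper name is invented, and the assumption that a negative value -n denotes descriptor number n is ours:

```python
def parse_agentsock(val):
    """Interpret an SFS_AGENTSOCK value.  A negative number denotes an
    already-connected file descriptor (assumed here to be the absolute
    value); anything else is a Unix-domain socket pathname."""
    try:
        n = int(val)
        if n < 0:
            return ("fd", -n)     # inherited, already-connected descriptor
    except ValueError:
        pass                      # not a number: treat as a pathname
    return ("path", val)

# Usage:
assert parse_agentsock("-5") == ("fd", 5)
assert parse_agentsock("/tmp/agent.sock") == ("path", "/tmp/agent.sock")
```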
Specifies an alternate sfs_config
file. By default,
SFS uses configuration files in
/usr/local/share/sfs/sfs_config
and
/etc/sfs/sfs_config
. sfssd
sets this
environment variable when given the -S
option, so that
subsidiary daemons read the same configuration file.
The host's name must resolve to a valid address (via DNS or
/etc/hosts
) for many of the servers to work.
The algorithm SFS uses to determine a host's name is as follows.
It checks the system's name with the gethostname
system call,
and if it is fully-qualified (i.e., has a ".domain" at the end) uses
that. Otherwise, it appends the default domain name to the system
name.
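The algorithm can be sketched as (function and parameter names invented):

```python
import socket

def sfs_hostname(name=None, default_domain=None):
    """Mimic SFS's hostname determination: use the system's hostname if
    it is already fully qualified (contains a dot); otherwise append the
    default domain name."""
    if name is None:
        name = socket.gethostname()
    if "." in name:
        return name               # already fully qualified
    if default_domain:
        return name + "." + default_domain
    return name

# Usage:
assert sfs_hostname("fs.example.com") == "fs.example.com"
assert sfs_hostname("fs", "example.com") == "fs.example.com"
```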
Sometimes SFS's algorithm will not produce the correct hostname. In
that case, you can specify the real hostname for each individual
daemon such as sfsrwsd
and sfsauthd
in their
configuration files. Or, you can just set the environment variable
SFS_HOSTNAME
before running sfssd
. Note that if you
do not have a DNS name, you can also set SFS_HOSTNAME
to the
numeric IPv4 address of your host, and then use the IP address as the
Location in self-certifying pathnames.
Specifies a TCP port number, and hence a %port
suffix that clients must append to the
hostname in the Location of the self-certifying pathname. By
default (or if SFS_PORT
is set to 0), the self-certifying
pathname contains no port number, which means to check DNS for SRV
records, and, if none are found, to use port 4.
Because servers have only one canonical self-certifying pathname,
setting SFS_PORT
to 4 is not the same thing as setting it to 0,
even without SRV records. If you set SFS_PORT
to 4, then
clients who do not specify %4
in the self-certifying pathname
will need to be redirected to a pathname containing %4
via a
symbolic link, and pwd
run on a client will show the
%4
as part of the self-certifying pathname.
Note further that the effects of this environment variable should not
be confused with the BindAddr
option in sfssd_config
,
BindAddr. For example, if you set up SRV records pointing to
TCP port 5 on your server, you might want to specify BindAddr
0.0.0.0 5
in sfssd_config
, but you almost certainly would not
want to set the SFS_PORT
environment variable to 5, as setting
SFS_PORT
to anything other than 0 means the self-certifying
pathname contains %5
, which in turn means DNS SRV records
should not be used. (I.e., a client accessing
@host.domain,hostid
would be redirected to
@host.domain%5,hostid
, which would bypass any SRV
records for host.domain
and, depending on DNS data, might not
even resolve to the same IP address as the pathname without a
%
.)
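The pathname forms implied by SFS_PORT can be sketched as follows (the helper name is invented):

```python
def self_certifying_path(location, hostid, port=0):
    """Build a self-certifying pathname /sfs/@Location[%port],HostID.
    Port 0 (the default) omits the %port suffix, which tells clients to
    consult DNS SRV records and fall back to TCP port 4."""
    if port:
        return "/sfs/@%s%%%d,%s" % (location, port, hostid)
    return "/sfs/@%s,%s" % (location, hostid)

# Usage: setting SFS_PORT to 4 yields a different canonical pathname
# than leaving it at 0, even though both end up on TCP port 4.
assert self_certifying_path("host.domain", "hostid") == "/sfs/@host.domain,hostid"
assert self_certifying_path("host.domain", "hostid", 4) == "/sfs/@host.domain%4,hostid"
```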
Specifies an alternate name for the /sfs
directory. Changing this for anything other than debugging purposes
is not recommended, as many symbolic links will break.
sfscd
and sfssd
. If you
wish to run SFS without installing it, however, these commands will
not be able to find the subsidiary daemons they are supposed to
launch. Setting SFS_RUNINPLACE
to the root of your build
directory allows SFS to be run without installing it. Because this
option is mainly used for development, however, several programs
behave slightly differently when it is set. sfscd
and
sfssd
both remain in the foreground and send their output to
standard error, rather than to the system log. Moreover,
sfsagent
does not take steps to protect itself from the
ptrace
system call, so that you can attach to it with the
debugger when running in place.
SFS ordinarily places its temporary files in the /tmp
directory or in protected subdirectories of
/tmp
. However, you can override the location by setting the
TMPDIR
environment variable.
Determines the current user name, as used by commands such as
sfskey login
. SFS looks first at the USER
environment variable, then uses the getlogin
system call, and
if that fails, looks up the current user ID in the system password
file.
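The lookup order can be sketched as follows (the function name is invented, and the environment is passed in so the logic is testable):

```python
import os
import pwd

def sfs_username(env=None):
    """Follow the lookup order described above: the USER environment
    variable first, then getlogin(), then the password-file entry for
    the current user ID."""
    env = os.environ if env is None else env
    user = env.get("USER")
    if user:
        return user
    try:
        return os.getlogin()
    except OSError:
        return pwd.getpwuid(os.getuid()).pw_name

# Usage: an explicit USER setting wins.
assert sfs_username({"USER": "alice"}) == "alice"
```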
SFS shares files between machines using cryptographically protected communication. As such, SFS can help eliminate security holes associated with insecure network file systems and let users share files where they could not do so before.
That said, there will very likely be security holes attackers can exploit because of SFS, that they could not have exploited otherwise. This chapter enumerates some of the security consequences of running SFS. The first section describes vulnerabilities that may result from the very existence of a global file system. The next section lists bugs potentially present in your operating system that may be much easier for attackers to exploit if you run SFS. Finally the last section attempts to point out weak points of the SFS implementation that may lead to vulnerabilities in the SFS software itself.
Many security holes can be exploited much more easily if the attacker
can create an arbitrary file on your system. As a simple example, if a
bug allows attackers to run any program on your machine, SFS allows them
to supply the program somewhere under /sfs
. Moreover, the file
can have any numeric user and group (though of course, SFS disables
setuid and devices).
. in PATH
Another potential problem is users putting the current working directory
(.
) in their PATH environment variables. If you are browsing
a file system whose owner you do not trust, that owner can run arbitrary
code as you by creating programs named things like ls
in the
directories you are browsing. Putting .
in the PATH has
always been a bad idea for security, but a global file system like SFS
makes it much worse.
Users need to be careful about using untrusted file systems as if they were trusted file systems. Any file system can name files in any other file system by symbolic links. Thus, when randomly overwriting files in a file system you do not trust, you can be tricked, by symbolic links, into overwriting files on the local disk or another SFS file system.
As an example of a seemingly appealing use of SFS that can cause
problems, consider doing a cvs
checkout from an untrusted CVS
repository, so as to peruse someone else's source code. If you run
cvs
on a repository you do not trust, the person hosting the
repository could replace the CVSROOT/history
with a symbolic
link to a file on some other file system, and cause you to append
garbage to that file.
This cvs
example may or may not be a problem. For instance,
if you are about to compile and run the software anyway, you are already
placing quite a bit of trust in the person running the CVS repository.
The important thing to keep in mind is that for most uses of a file
system, you are placing some amount of trust in the file server.
See resvgids, to see how users can run multiple agents with the
newaid
command. One way to cut down on trust is to access
untrusted file servers under a different agent with different private
keys. Nonetheless, this still allows the remote file servers to serve
symbolic links to the local file system in unexpected places.
Any user on the Internet can get the attributes of a
local-directory listed in an Export
directive
(see export). This is so users can run commands like ls -ld
on a self-certifying pathname in /sfs
, even if they cannot change
directory to that pathname or list files under it. If you wish to keep
attribute information secret on a local-directory, you will need
to export a higher directory. We may later reevaluate this design
decision, though allowing such anonymous users to get attributes
currently simplifies the client implementation.
The SFS read-write server software requires each SFS server to run an NFS server. Running an NFS server at all can constitute a security hole. In order to understand the full implications of running an SFS server, you must also understand NFS security.
NFS security relies on the secrecy of file handles. Each file on an
exported file system has associated with it an NFS file handle
(typically 24 to 32 bytes long). When mounting an NFS file system, the
mount
command on the client machine connects to a program
called mountd
on the server and asks for the file handle of
the root of the exported file system. mountd
enforces access
control by refusing to return this file handle to clients not authorized
to mount the file system.
Once a client has the file handle of a directory on the server, it sends NFS requests directly to the NFS server's kernel. The kernel performs no access control on the request (other than checking that the user the client claims to speak for has permission to perform the requested operation). The expectation is that all clients are trusted to speak for all users, and no machine can obtain a valid NFS file handle without being an authorized NFS client.
To prevent attackers from learning NFS file handles when using SFS, SFS encrypts all NFS file handles with a 20-byte key using the Blowfish encryption algorithm. Unfortunately, not all operating systems choose particularly good NFS file handles in the first place. Thus, attackers may be able to guess your file handles anyway. In general, NFS file handles contain the following 32-bit words:
In addition, NFS file handles can contain the following words:
Many of these words can be guessed outright by attackers without their needing to interact with any piece of software on the NFS server. For instance, the file system ID is often just the device number on which the physical file system resides. The i-number of the root directory in a file system is always 2. The i-number and generation number of the root directory can also be used as the i-number and generation number of the "exported directory".
On some operating systems, then, the only hard thing for an attacker to guess is the 32-bit generation number of some directory on the system. Worse yet, the generation numbers are sometimes not chosen with a good random number generator.
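To see how little an attacker must actually guess, one can enumerate candidate handle fields as the text describes: the file system ID is often just the device number, the root directory's i-number is always 2, leaving mainly the generation number. This is purely illustrative (names invented), not real handle-construction code:

```python
from itertools import product

def candidate_handles(device_numbers, generation_guesses):
    """Enumerate plausible raw NFS file handle fields.  Per the text,
    fsid is often the device number, the root i-number is 2, so only
    the root generation number remains to be guessed."""
    for dev, gen in product(device_numbers, generation_guesses):
        yield {"fsid": dev, "root_inum": 2, "root_gen": gen}

# Usage: one likely device number and a small generation search space
# already yields a complete candidate set.
cands = list(candidate_handles([0x801], range(3)))
assert len(cands) == 3
assert all(c["root_inum"] == 2 for c in cands)
```

This is why SFS encrypts file handles with Blowfish under a 20-byte key, and why `fsirand` (below) matters: it re-randomizes the one field an attacker cannot trivially derive.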
To minimize the risks of running an NFS server, you might consider taking the following precautions:
Some operating systems provide a utility called fsirand
that
re-randomizes all generation numbers in a file system. Running
fsirand
may result in much better generation numbers than,
say, a factory install of an operating system.
If you export a file system read-write only to
localhost
for SFS, but read-only to any client on which an
attacker may have learned an NFS file handle, you may be able to protect
the integrity of your file system under attack. (Note, however, that
unless you filter forged packets at your firewall, the attacker can put
whatever source address he wants on an NFS UDP packet.) See the
mountd
or exports
manual page for more detail.
Note: under no circumstances should you make your file system
"read-only to the world," as this will let anyone find out NFS file
handles. You want the kernel to think of the file system as read-only
for the world, but mountd
to refuse to give out file handles
to anybody but localhost
.
mountd -n
The mountd
command takes a flag -n
meaning "allow
mount requests from unprivileged ports." Do not ever use
this flag. Worse yet, some operating systems (notably HP-UX 9) always
exhibit this behavior regardless of whether the -n
flag has
been specified.
The -n
option to mountd
allows any user on an NFS
client to learn file handles and thus act as any other user. The
situation gets considerably worse when exporting file systems to
localhost
, however, as SFS requires. Then everybody on the
Internet can learn your NFS file handles. The reason is that the
portmap
command will forward mount requests and make them
appear to come from localhost
.
portmap forwarding
In order to support broadcast RPCs, the portmap
program will
program will
relay RPC requests to the machine it is running on, making them appear
to come from localhost
. That can have disastrous consequences in
conjunction with mountd -n
as described previously. It can also
be used to work around "read-mostly" export options by forwarding NFS
requests to the kernel from localhost
.
Operating systems are starting to ship with portmap
programs
that refuse to forward certain RPC calls including mount and NFS
requests. Wietse Venema has also written a portmap
replacement that has these properties, available from
ftp://ftp.porcupine.org/pub/security/index.html. It is also a
good idea to filter TCP and UDP ports 111 (portmap
) at your
firewall, if you have one.
Many NFS implementations have bugs. Many of those bugs rarely surface
when clients and servers with similar implementations talk to each other.
Examples of bugs we've found include servers crashing when they receive a
write request for an odd number of bytes, clients crashing when they
receive the error NFS3ERR_JUKEBOX
, and clients using
uninitialized memory when the server returns a lookup3resok
data
structure with obj_attributes
having attributes_follow
set
to false.
SFS allows potentially untrusted users to formulate NFS requests (though of course SFS requires file handles to decrypt correctly and stamps the request with the appropriate Unix uid/gid credentials). This may let bad users crash your server's kernel (or worse). Similarly, bad servers may be able to crash a client.
As a precaution, you may want to be careful about exporting any portion
of a file system to anonymous users with the R
or W
options to Export
(see export). When analyzing your NFS code
for security, you should know that even anonymous users can make the
following NFS RPC's on a local-directory in your
sfsrwsd_config
file: NFSPROC3_GETATTR
,
NFSPROC3_ACCESS
, NFSPROC3_FSINFO
, and
NFSPROC3_PATHCONF
.
On the client side, a bad, non-root user in collusion with a bad file server can possibly crash or deadlock the machine. Many NFS client implementations have inadequate locking that could lead to race conditions. Other implementations make assumptions about the hierarchical nature of a file system served by the server. By violating these assumptions (for example having two directories on a server each contain the other), a user may be able to deadlock the client and create unkillable processes.
logger buffer overrun
SFS pipes log messages through the logger
program to get them
program to get them
into the system log. SFS can generate arbitrarily long lines. If your
logger
does something stupid like call gets
, it may
suffer a buffer overrun. We assume no one does this, but feel the point
is worth mentioning, since not all logger programs come with source.
To avoid using logger
, you can run sfscd
and
sfssd
with the -d
flag and redirect standard error
wherever you wish manually.
The best way to attack the SFS software is probably to cause resource exhaustion. You can try to run SFS out of file descriptors, memory, CPU time, or mount points.
An attacker can run a server out of file descriptors by opening many
parallel TCP connections. Such attacks can be detected using the
netstat
command to see who is connecting to SFS (which
accepts connections on port 4). Users can run the client (also
sfsauthd
) out of descriptors by connecting many times using
the setgid program /usr/local/lib/sfs-0.8pre/suidconnect
.
These attacks can be traced using a tool like lsof, available from
ftp://vic.cc.purdue.edu/pub/tools/unix/lsof.
SFS enforces a maximum size of just over 64 K on all RPC requests. Nonetheless, a client could connect 1000 times, on each connection send the first 64 K of a slightly larger message, and just sit there. That would obviously consume about 64 Megabytes of memory, as SFS will wait patiently for the rest of the request.
A worse problem is that SFS servers do not currently flow-control clients. Thus, an attacker could make many RPCs but not read the replies, causing the SFS server to buffer arbitrarily much data and run out of memory. (Obviously the server eventually flushes any buffered data when the TCP connection closes.)
Connecting to an SFS server costs the server tens of milliseconds of CPU time. An attacker can try to burn a huge amount of the server's CPU time by connecting to the server many times. The effects of such attacks can be mitigated using hashcash, HashCost.
Finally, a user on a client can cause a large number of file systems to be mounted. If the operating system has a limit on the number of mount points, a user could run the client out of mount points.
If a TCP connection is reset, the SFS client will attempt to reconnect
to the server and retransmit whatever RPCs were pending at the time the
connection dropped. Not all NFS RPCs are idempotent however. Thus, an
attacker who caused a connection to reset at just the right time could,
for instance, cause a mkdir
command to return EEXIST
when in fact it did just create the directory.
SFS exchanges NFS traffic with the local operating system using the loopback interface. An attacker with physical access to the local Ethernet may be able to inject arbitrary packets into a machine, including packets to 127.0.0.1. Without packet filtering in place, an attacker can also send packets from anywhere making them appear to come from 127.0.0.1.
On the client, an attacker can forge NFS requests from the kernel to SFS, or forge replies from SFS to the kernel. The SFS client encrypts file handles before giving them to the operating system. Thus, the attacker is unlikely to be able to forge a request from the kernel to SFS that contain a valid file handle. In the other direction however, the reply does not need to contain a file handle. The attacker may well be able to convince the kernel of a forged reply from SFS. The attacker only needs to guess a (possibly quite predictable) 32-bit RPC XID number. Such an attack could result, for example, in a user getting the wrong data when reading a file.
On the server side, you also must assume the attacker cannot guess a valid NFS file handle (otherwise, you alrea