
NFSv4                                                         S. Shepler
Internet-Draft                                                 M. Eisler
Intended status: Standards Track                               D. Noveck
Expires: June 18, 2009                                           Editors
                                                       December 15, 2008


                     NFS Version 4 Minor Version 1
                 draft-ietf-nfsv4-minorversion1-29.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on June 18, 2009.

Copyright Notice

   Copyright (c) 2008 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.

Abstract

   This document describes NFS version 4 minor version one, including



Shepler, et al.           Expires June 18, 2009                 [Page 1]

Internet-Draft                   NFSv4.1                   December 2008


   features retained from the base protocol (NFS version 4 minor version
   zero which is specified in RFC3530) and protocol extensions made
   subsequently.  Major extensions introduced in NFS version 4 minor
   version one include: Sessions, Directory Delegations, and parallel
   NFS (pNFS).  NFS version 4 minor version one has no dependencies on
   NFS version 4 minor version zero, and is considered a separate
   protocol.  Thus this document neither updates nor obsoletes RFC3530.
   NFS minor version one is deemed superior to NFS minor version zero
   with no loss of functionality, and its use is preferred over version
   zero.  Both NFS minor version zero and one can be used simultaneously
   on the same network, between the same client and server.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [1].


































Shepler, et al.           Expires June 18, 2009                 [Page 2]

Internet-Draft                   NFSv4.1                   December 2008


Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .  12
     1.1.   The NFS Version 4 Minor Version 1 Protocol . . . . . . .  12
     1.2.   Scope of this Document . . . . . . . . . . . . . . . . .  12
     1.3.   NFSv4 Goals  . . . . . . . . . . . . . . . . . . . . . .  12
     1.4.   NFSv4.1 Goals  . . . . . . . . . . . . . . . . . . . . .  13
     1.5.   General Definitions  . . . . . . . . . . . . . . . . . .  13
     1.6.   Overview of NFSv4.1 Features . . . . . . . . . . . . . .  16
       1.6.1.   RPC and Security . . . . . . . . . . . . . . . . . .  16
       1.6.2.   Protocol Structure . . . . . . . . . . . . . . . . .  17
       1.6.3.   File System Model  . . . . . . . . . . . . . . . . .  17
       1.6.4.   Locking Facilities . . . . . . . . . . . . . . . . .  19
     1.7.   Differences from NFSv4.0 . . . . . . . . . . . . . . . .  20
   2.  Core Infrastructure . . . . . . . . . . . . . . . . . . . . .  21
     2.1.   Introduction . . . . . . . . . . . . . . . . . . . . . .  21
     2.2.   RPC and XDR  . . . . . . . . . . . . . . . . . . . . . .  21
       2.2.1.   RPC-based Security . . . . . . . . . . . . . . . . .  21
     2.3.   COMPOUND and CB_COMPOUND . . . . . . . . . . . . . . . .  24
     2.4.   Client Identifiers and Client Owners . . . . . . . . . .  25
       2.4.1.   Upgrade from NFSv4.0 to NFSv4.1  . . . . . . . . . .  29
       2.4.2.   Server Release of Client ID  . . . . . . . . . . . .  29
       2.4.3.   Resolving Client Owner Conflicts . . . . . . . . . .  30
     2.5.   Server Owners  . . . . . . . . . . . . . . . . . . . . .  31
     2.6.   Security Service Negotiation . . . . . . . . . . . . . .  31
       2.6.1.   NFSv4.1 Security Tuples  . . . . . . . . . . . . . .  32
       2.6.2.   SECINFO and SECINFO_NO_NAME  . . . . . . . . . . . .  32
       2.6.3.   Security Error . . . . . . . . . . . . . . . . . . .  32
     2.7.   Minor Versioning . . . . . . . . . . . . . . . . . . . .  37
     2.8.   Non-RPC-based Security Services  . . . . . . . . . . . .  39
       2.8.1.   Authorization  . . . . . . . . . . . . . . . . . . .  39
       2.8.2.   Auditing . . . . . . . . . . . . . . . . . . . . . .  39
       2.8.3.   Intrusion Detection  . . . . . . . . . . . . . . . .  40
     2.9.   Transport Layers . . . . . . . . . . . . . . . . . . . .  40
       2.9.1.   REQUIRED and RECOMMENDED Properties of Transports  .  40
       2.9.2.   Client and Server Transport Behavior . . . . . . . .  41
       2.9.3.   Ports  . . . . . . . . . . . . . . . . . . . . . . .  42
     2.10.  Session  . . . . . . . . . . . . . . . . . . . . . . . .  42
       2.10.1.  Motivation and Overview  . . . . . . . . . . . . . .  42
       2.10.2.  NFSv4 Integration  . . . . . . . . . . . . . . . . .  44
       2.10.3.  Channels . . . . . . . . . . . . . . . . . . . . . .  45
       2.10.4.  Server Scope . . . . . . . . . . . . . . . . . . . .  46
       2.10.5.  Trunking . . . . . . . . . . . . . . . . . . . . . .  49
       2.10.6.  Exactly Once Semantics . . . . . . . . . . . . . . .  52
       2.10.7.  RDMA Considerations  . . . . . . . . . . . . . . . .  65
       2.10.8.  Sessions Security  . . . . . . . . . . . . . . . . .  68
       2.10.9.  The Secret State Verifier (SSV) GSS Mechanism  . . .  73
       2.10.10. Session Mechanics - Steady State . . . . . . . . . .  77



Shepler, et al.           Expires June 18, 2009                 [Page 3]

Internet-Draft                   NFSv4.1                   December 2008


       2.10.11. Session Inactivity Timer . . . . . . . . . . . . . .  79
       2.10.12. Session Mechanics - Recovery . . . . . . . . . . . .  80
       2.10.13. Parallel NFS and Sessions  . . . . . . . . . . . . .  84
   3.  Protocol Constants and Data Types . . . . . . . . . . . . . .  85
     3.1.   Basic Constants  . . . . . . . . . . . . . . . . . . . .  85
     3.2.   Basic Data Types . . . . . . . . . . . . . . . . . . . .  86
     3.3.   Structured Data Types  . . . . . . . . . . . . . . . . .  87
   4.  Filehandles . . . . . . . . . . . . . . . . . . . . . . . . .  96
     4.1.   Obtaining the First Filehandle . . . . . . . . . . . . .  96
       4.1.1.   Root Filehandle  . . . . . . . . . . . . . . . . . .  96
       4.1.2.   Public Filehandle  . . . . . . . . . . . . . . . . .  97
     4.2.   Filehandle Types . . . . . . . . . . . . . . . . . . . .  97
       4.2.1.   General Properties of a Filehandle . . . . . . . . .  98
       4.2.2.   Persistent Filehandle  . . . . . . . . . . . . . . .  98
       4.2.3.   Volatile Filehandle  . . . . . . . . . . . . . . . .  99
     4.3.   One Method of Constructing a Volatile Filehandle . . . . 100
     4.4.   Client Recovery from Filehandle Expiration . . . . . . . 100
   5.  File Attributes . . . . . . . . . . . . . . . . . . . . . . . 101
     5.1.   REQUIRED Attributes  . . . . . . . . . . . . . . . . . . 103
     5.2.   RECOMMENDED Attributes . . . . . . . . . . . . . . . . . 103
     5.3.   Named Attributes . . . . . . . . . . . . . . . . . . . . 103
     5.4.   Classification of Attributes . . . . . . . . . . . . . . 105
     5.5.   Set-Only and Get-Only Attributes . . . . . . . . . . . . 106
     5.6.   REQUIRED Attributes - List and Definition References . . 106
     5.7.   RECOMMENDED Attributes - List and Definition
            References . . . . . . . . . . . . . . . . . . . . . . . 107
     5.8.   Attribute Definitions  . . . . . . . . . . . . . . . . . 109
       5.8.1.   Definitions of REQUIRED Attributes . . . . . . . . . 109
       5.8.2.   Definitions of Uncategorized RECOMMENDED
                Attributes . . . . . . . . . . . . . . . . . . . . . 111
     5.9.   Interpreting owner and owner_group . . . . . . . . . . . 117
     5.10.  Character Case Attributes  . . . . . . . . . . . . . . . 119
     5.11.  Directory Notification Attributes  . . . . . . . . . . . 120
     5.12.  pNFS Attribute Definitions . . . . . . . . . . . . . . . 120
     5.13.  Retention Attributes . . . . . . . . . . . . . . . . . . 122
   6.  Access Control Attributes . . . . . . . . . . . . . . . . . . 125
     6.1.   Goals  . . . . . . . . . . . . . . . . . . . . . . . . . 125
     6.2.   File Attributes Discussion . . . . . . . . . . . . . . . 126
       6.2.1.   Attribute 12: acl  . . . . . . . . . . . . . . . . . 126
       6.2.2.   Attribute 58: dacl . . . . . . . . . . . . . . . . . 141
       6.2.3.   Attribute 59: sacl . . . . . . . . . . . . . . . . . 141
       6.2.4.   Attribute 33: mode . . . . . . . . . . . . . . . . . 141
       6.2.5.   Attribute 74: mode_set_masked  . . . . . . . . . . . 142
     6.3.   Common Methods . . . . . . . . . . . . . . . . . . . . . 143
       6.3.1.   Interpreting an ACL  . . . . . . . . . . . . . . . . 143
       6.3.2.   Computing a Mode Attribute from an ACL . . . . . . . 144
     6.4.   Requirements . . . . . . . . . . . . . . . . . . . . . . 145
       6.4.1.   Setting the mode and/or ACL Attributes . . . . . . . 145



Shepler, et al.           Expires June 18, 2009                 [Page 4]

Internet-Draft                   NFSv4.1                   December 2008


       6.4.2.   Retrieving the mode and/or ACL Attributes  . . . . . 147
       6.4.3.   Creating New Objects . . . . . . . . . . . . . . . . 147
   7.  Single-server Namespace . . . . . . . . . . . . . . . . . . . 151
     7.1.   Server Exports . . . . . . . . . . . . . . . . . . . . . 152
     7.2.   Browsing Exports . . . . . . . . . . . . . . . . . . . . 152
     7.3.   Server Pseudo File System  . . . . . . . . . . . . . . . 152
     7.4.   Multiple Roots . . . . . . . . . . . . . . . . . . . . . 153
     7.5.   Filehandle Volatility  . . . . . . . . . . . . . . . . . 153
     7.6.   Exported Root  . . . . . . . . . . . . . . . . . . . . . 154
     7.7.   Mount Point Crossing . . . . . . . . . . . . . . . . . . 154
     7.8.   Security Policy and Namespace Presentation . . . . . . . 154
   8.  State Management  . . . . . . . . . . . . . . . . . . . . . . 155
     8.1.   Client and Session ID  . . . . . . . . . . . . . . . . . 156
     8.2.   Stateid Definition . . . . . . . . . . . . . . . . . . . 157
       8.2.1.   Stateid Types  . . . . . . . . . . . . . . . . . . . 157
       8.2.2.   Stateid Structure  . . . . . . . . . . . . . . . . . 158
       8.2.3.   Special Stateids . . . . . . . . . . . . . . . . . . 160
       8.2.4.   Stateid Lifetime and Validation  . . . . . . . . . . 161
       8.2.5.   Stateid Use for I/O Operations . . . . . . . . . . . 164
       8.2.6.   Stateid Use for SETATTR Operations . . . . . . . . . 165
     8.3.   Lease Renewal  . . . . . . . . . . . . . . . . . . . . . 165
     8.4.   Crash Recovery . . . . . . . . . . . . . . . . . . . . . 168
       8.4.1.   Client Failure and Recovery  . . . . . . . . . . . . 168
       8.4.2.   Server Failure and Recovery  . . . . . . . . . . . . 169
       8.4.3.   Network Partitions and Recovery  . . . . . . . . . . 174
     8.5.   Server Revocation of Locks . . . . . . . . . . . . . . . 179
     8.6.   Short and Long Leases  . . . . . . . . . . . . . . . . . 180
     8.7.   Clocks, Propagation Delay, and Calculating Lease
            Expiration . . . . . . . . . . . . . . . . . . . . . . . 180
     8.8.   Obsolete Locking Infrastructure From NFSv4.0 . . . . . . 181
   9.  File Locking and Share Reservations . . . . . . . . . . . . . 182
     9.1.   Opens and Byte-Range Locks . . . . . . . . . . . . . . . 182
       9.1.1.   State-owner Definition . . . . . . . . . . . . . . . 182
       9.1.2.   Use of the Stateid and Locking . . . . . . . . . . . 182
     9.2.   Lock Ranges  . . . . . . . . . . . . . . . . . . . . . . 185
     9.3.   Upgrading and Downgrading Locks  . . . . . . . . . . . . 186
     9.4.   Stateid Seqid Values and Byte-Range Locks  . . . . . . . 186
     9.5.   Issues with Multiple Open-Owners . . . . . . . . . . . . 187
     9.6.   Blocking Locks . . . . . . . . . . . . . . . . . . . . . 187
     9.7.   Share Reservations . . . . . . . . . . . . . . . . . . . 188
     9.8.   OPEN/CLOSE Operations  . . . . . . . . . . . . . . . . . 189
     9.9.   Open Upgrade and Downgrade . . . . . . . . . . . . . . . 190
     9.10.  Parallel OPENs . . . . . . . . . . . . . . . . . . . . . 191
     9.11.  Reclaim of Open and Byte-Range Locks . . . . . . . . . . 191
   10. Client-Side Caching . . . . . . . . . . . . . . . . . . . . . 192
     10.1.  Performance Challenges for Client-Side Caching . . . . . 192
     10.2.  Delegation and Callbacks . . . . . . . . . . . . . . . . 193
       10.2.1.  Delegation Recovery  . . . . . . . . . . . . . . . . 195



Shepler, et al.           Expires June 18, 2009                 [Page 5]

Internet-Draft                   NFSv4.1                   December 2008


     10.3.  Data Caching . . . . . . . . . . . . . . . . . . . . . . 198
       10.3.1.  Data Caching and OPENs . . . . . . . . . . . . . . . 198
       10.3.2.  Data Caching and File Locking  . . . . . . . . . . . 199
       10.3.3.  Data Caching and Mandatory File Locking  . . . . . . 201
       10.3.4.  Data Caching and File Identity . . . . . . . . . . . 201
     10.4.  Open Delegation  . . . . . . . . . . . . . . . . . . . . 202
       10.4.1.  Open Delegation and Data Caching . . . . . . . . . . 205
       10.4.2.  Open Delegation and File Locks . . . . . . . . . . . 206
       10.4.3.  Handling of CB_GETATTR . . . . . . . . . . . . . . . 206
       10.4.4.  Recall of Open Delegation  . . . . . . . . . . . . . 209
       10.4.5.  Clients that Fail to Honor Delegation Recalls  . . . 211
       10.4.6.  Delegation Revocation  . . . . . . . . . . . . . . . 212
       10.4.7.  Delegations via WANT_DELEGATION  . . . . . . . . . . 212
     10.5.  Data Caching and Revocation  . . . . . . . . . . . . . . 213
       10.5.1.  Revocation Recovery for Write Open Delegation  . . . 214
     10.6.  Attribute Caching  . . . . . . . . . . . . . . . . . . . 214
     10.7.  Data and Metadata Caching and Memory Mapped Files  . . . 216
     10.8.  Name and Directory Caching without Directory
            Delegations  . . . . . . . . . . . . . . . . . . . . . . 219
       10.8.1.  Name Caching . . . . . . . . . . . . . . . . . . . . 219
       10.8.2.  Directory Caching  . . . . . . . . . . . . . . . . . 220
     10.9.  Directory Delegations  . . . . . . . . . . . . . . . . . 221
       10.9.1.  Introduction to Directory Delegations  . . . . . . . 221
       10.9.2.  Directory Delegation Design  . . . . . . . . . . . . 222
       10.9.3.  Attributes in Support of Directory Notifications . . 223
       10.9.4.  Directory Delegation Recall  . . . . . . . . . . . . 223
       10.9.5.  Directory Delegation Recovery  . . . . . . . . . . . 224
   11. Multi-Server Namespace  . . . . . . . . . . . . . . . . . . . 224
     11.1.  Location Attributes  . . . . . . . . . . . . . . . . . . 225
     11.2.  File System Presence or Absence  . . . . . . . . . . . . 225
     11.3.  Getting Attributes for an Absent File System . . . . . . 226
       11.3.1.  GETATTR Within an Absent File System . . . . . . . . 227
       11.3.2.  READDIR and Absent File Systems  . . . . . . . . . . 228
     11.4.  Uses of Location Information . . . . . . . . . . . . . . 228
       11.4.1.  File System Replication  . . . . . . . . . . . . . . 229
       11.4.2.  File System Migration  . . . . . . . . . . . . . . . 230
       11.4.3.  Referrals  . . . . . . . . . . . . . . . . . . . . . 231
     11.5.  Location Entries and Server Identity . . . . . . . . . . 233
     11.6.  Additional Client-side Considerations  . . . . . . . . . 233
     11.7.  Effecting File System Transitions  . . . . . . . . . . . 234
       11.7.1.  File System Transitions and Simultaneous Access  . . 235
       11.7.2.  Simultaneous Use and Transparent Transitions . . . . 236
       11.7.3.  Filehandles and File System Transitions  . . . . . . 239
       11.7.4.  Fileids and File System Transitions  . . . . . . . . 239
       11.7.5.  Fsids and File System Transitions  . . . . . . . . . 240
       11.7.6.  The Change Attribute and File System Transitions . . 241
       11.7.7.  Lock State and File System Transitions . . . . . . . 241
       11.7.8.  Write Verifiers and File System Transitions  . . . . 246



Shepler, et al.           Expires June 18, 2009                 [Page 6]

Internet-Draft                   NFSv4.1                   December 2008


       11.7.9.  Readdir Cookies and Verifiers and File System
                Transitions  . . . . . . . . . . . . . . . . . . . . 246
       11.7.10. File System Data and File System Transitions . . . . 246
     11.8.  Effecting File System Referrals  . . . . . . . . . . . . 248
       11.8.1.  Referral Example (LOOKUP)  . . . . . . . . . . . . . 248
       11.8.2.  Referral Example (READDIR) . . . . . . . . . . . . . 252
     11.9.  The Attribute fs_locations . . . . . . . . . . . . . . . 254
     11.10. The Attribute fs_locations_info  . . . . . . . . . . . . 257
       11.10.1. The fs_locations_server4 Structure . . . . . . . . . 261
       11.10.2. The fs_locations_info4 Structure . . . . . . . . . . 266
       11.10.3. The fs_locations_item4 Structure . . . . . . . . . . 267
     11.11. The Attribute fs_status  . . . . . . . . . . . . . . . . 269
   12. Parallel NFS (pNFS) . . . . . . . . . . . . . . . . . . . . . 273
     12.1.  Introduction . . . . . . . . . . . . . . . . . . . . . . 273
     12.2.  pNFS Definitions . . . . . . . . . . . . . . . . . . . . 274
       12.2.1.  Metadata . . . . . . . . . . . . . . . . . . . . . . 275
       12.2.2.  Metadata Server  . . . . . . . . . . . . . . . . . . 275
       12.2.3.  pNFS Client  . . . . . . . . . . . . . . . . . . . . 275
       12.2.4.  Storage Device . . . . . . . . . . . . . . . . . . . 275
       12.2.5.  Storage Protocol . . . . . . . . . . . . . . . . . . 276
       12.2.6.  Control Protocol . . . . . . . . . . . . . . . . . . 276
       12.2.7.  Layout Types . . . . . . . . . . . . . . . . . . . . 277
       12.2.8.  Layout . . . . . . . . . . . . . . . . . . . . . . . 277
       12.2.9.  Layout Iomode  . . . . . . . . . . . . . . . . . . . 278
       12.2.10. Device IDs . . . . . . . . . . . . . . . . . . . . . 278
     12.3.  pNFS Operations  . . . . . . . . . . . . . . . . . . . . 280
     12.4.  pNFS Attributes  . . . . . . . . . . . . . . . . . . . . 281
     12.5.  Layout Semantics . . . . . . . . . . . . . . . . . . . . 281
       12.5.1.  Guarantees Provided by Layouts . . . . . . . . . . . 281
       12.5.2.  Getting a Layout . . . . . . . . . . . . . . . . . . 282
       12.5.3.  Layout Stateid . . . . . . . . . . . . . . . . . . . 283
       12.5.4.  Committing a Layout  . . . . . . . . . . . . . . . . 284
       12.5.5.  Recalling a Layout . . . . . . . . . . . . . . . . . 287
       12.5.6.  Revoking Layouts . . . . . . . . . . . . . . . . . . 295
       12.5.7.  Metadata Server Write Propagation  . . . . . . . . . 296
     12.6.  pNFS Mechanics . . . . . . . . . . . . . . . . . . . . . 296
     12.7.  Recovery . . . . . . . . . . . . . . . . . . . . . . . . 297
       12.7.1.  Recovery from Client Restart . . . . . . . . . . . . 298
       12.7.2.  Dealing with Lease Expiration on the Client  . . . . 298
       12.7.3.  Dealing with Loss of Layout State on the Metadata
                Server . . . . . . . . . . . . . . . . . . . . . . . 299
       12.7.4.  Recovery from Metadata Server Restart  . . . . . . . 300
       12.7.5.  Operations During Metadata Server Grace Period . . . 302
       12.7.6.  Storage Device Recovery  . . . . . . . . . . . . . . 302
     12.8.  Metadata and Storage Device Roles  . . . . . . . . . . . 302
     12.9.  Security Considerations for pNFS . . . . . . . . . . . . 303
   13. NFSv4.1 as a Storage Protocol in pNFS: the File Layout Type . 304
     13.1.  Client ID and Session Considerations . . . . . . . . . . 304



Shepler, et al.           Expires June 18, 2009                 [Page 7]

Internet-Draft                   NFSv4.1                   December 2008


       13.1.1.  Sessions Considerations for Data Servers . . . . . . 307
     13.2.  File Layout Definitions  . . . . . . . . . . . . . . . . 307
     13.3.  File Layout Data Types . . . . . . . . . . . . . . . . . 308
     13.4.  Interpreting the File Layout . . . . . . . . . . . . . . 312
       13.4.1.  Determining the Stripe Unit Number . . . . . . . . . 312
       13.4.2.  Interpreting the File Layout Using Sparse Packing  . 312
       13.4.3.  Interpreting the File Layout Using Dense Packing . . 315
       13.4.4.  Sparse and Dense Stripe Unit Packing . . . . . . . . 317
     13.5.  Data Server Multipathing . . . . . . . . . . . . . . . . 319
     13.6.  Operations Sent to NFSv4.1 Data Servers  . . . . . . . . 320
     13.7.  COMMIT Through Metadata Server . . . . . . . . . . . . . 322
     13.8.  The Layout Iomode  . . . . . . . . . . . . . . . . . . . 324
     13.9.  Metadata and Data Server State Coordination  . . . . . . 324
       13.9.1.  Global Stateid Requirements  . . . . . . . . . . . . 324
       13.9.2.  Data Server State Propagation  . . . . . . . . . . . 325
     13.10. Data Server Component File Size  . . . . . . . . . . . . 327
     13.11. Layout Revocation and Fencing  . . . . . . . . . . . . . 328
     13.12. Security Considerations for the File Layout Type . . . . 328
   14. Internationalization  . . . . . . . . . . . . . . . . . . . . 329
     14.1.  Stringprep profile for the utf8str_cs type . . . . . . . 330
     14.2.  Stringprep profile for the utf8str_cis type  . . . . . . 332
     14.3.  Stringprep profile for the utf8str_mixed type  . . . . . 333
     14.4.  UTF-8 Capabilities . . . . . . . . . . . . . . . . . . . 334
     14.5.  UTF-8 Related Errors . . . . . . . . . . . . . . . . . . 335
   15. Error Values  . . . . . . . . . . . . . . . . . . . . . . . . 335
     15.1.  Error Definitions  . . . . . . . . . . . . . . . . . . . 336
       15.1.1.  General Errors . . . . . . . . . . . . . . . . . . . 338
       15.1.2.  Filehandle Errors  . . . . . . . . . . . . . . . . . 340
       15.1.3.  Compound Structure Errors  . . . . . . . . . . . . . 341
       15.1.4.  File System Errors . . . . . . . . . . . . . . . . . 343
       15.1.5.  State Management Errors  . . . . . . . . . . . . . . 345
       15.1.6.  Security Errors  . . . . . . . . . . . . . . . . . . 345
       15.1.7.  Name Errors  . . . . . . . . . . . . . . . . . . . . 346
       15.1.8.  Locking Errors . . . . . . . . . . . . . . . . . . . 347
       15.1.9.  Reclaim Errors . . . . . . . . . . . . . . . . . . . 348
       15.1.10. pNFS Errors  . . . . . . . . . . . . . . . . . . . . 349
       15.1.11. Session Use Errors . . . . . . . . . . . . . . . . . 350
       15.1.12. Session Management Errors  . . . . . . . . . . . . . 351
       15.1.13. Client Management Errors . . . . . . . . . . . . . . 352
       15.1.14. Delegation Errors  . . . . . . . . . . . . . . . . . 353
       15.1.15. Attribute Handling Errors  . . . . . . . . . . . . . 353
       15.1.16. Obsoleted Errors . . . . . . . . . . . . . . . . . . 354
     15.2.  Operations and their valid errors  . . . . . . . . . . . 355
     15.3.  Callback operations and their valid errors . . . . . . . 371
     15.4.  Errors and the operations that use them  . . . . . . . . 373
   16. NFSv4.1 Procedures  . . . . . . . . . . . . . . . . . . . . . 388
     16.1.  Procedure 0: NULL - No Operation . . . . . . . . . . . . 388
     16.2.  Procedure 1: COMPOUND - Compound Operations  . . . . . . 389



Shepler, et al.           Expires June 18, 2009                 [Page 8]

Internet-Draft                   NFSv4.1                   December 2008


   17. Operations: REQUIRED, RECOMMENDED, or OPTIONAL  . . . . . . . 400
   18. NFSv4.1 Operations  . . . . . . . . . . . . . . . . . . . . . 403
     18.1.  Operation 3: ACCESS - Check Access Rights  . . . . . . . 403
     18.2.  Operation 4: CLOSE - Close File  . . . . . . . . . . . . 409
     18.3.  Operation 5: COMMIT - Commit Cached Data . . . . . . . . 410
     18.4.  Operation 6: CREATE - Create a Non-Regular File Object . 413
     18.5.  Operation 7: DELEGPURGE - Purge Delegations Awaiting
            Recovery . . . . . . . . . . . . . . . . . . . . . . . . 416
     18.6.  Operation 8: DELEGRETURN - Return Delegation . . . . . . 417
     18.7.  Operation 9: GETATTR - Get Attributes  . . . . . . . . . 417
     18.8.  Operation 10: GETFH - Get Current Filehandle . . . . . . 419
     18.9.  Operation 11: LINK - Create Link to a File . . . . . . . 420
     18.10. Operation 12: LOCK - Create Lock . . . . . . . . . . . . 423
     18.11. Operation 13: LOCKT - Test For Lock  . . . . . . . . . . 427
     18.12. Operation 14: LOCKU - Unlock File  . . . . . . . . . . . 428
     18.13. Operation 15: LOOKUP - Lookup Filename . . . . . . . . . 430
     18.14. Operation 16: LOOKUPP - Lookup Parent Directory  . . . . 431
     18.15. Operation 17: NVERIFY - Verify Difference in
            Attributes . . . . . . . . . . . . . . . . . . . . . . . 433
     18.16. Operation 18: OPEN - Open a Regular File . . . . . . . . 434
     18.17. Operation 19: OPENATTR - Open Named Attribute
            Directory  . . . . . . . . . . . . . . . . . . . . . . . 453
     18.18. Operation 21: OPEN_DOWNGRADE - Reduce Open File Access . 454
     18.19. Operation 22: PUTFH - Set Current Filehandle . . . . . . 456
     18.20. Operation 23: PUTPUBFH - Set Public Filehandle . . . . . 456
     18.21. Operation 24: PUTROOTFH - Set Root Filehandle  . . . . . 458
     18.22. Operation 25: READ - Read from File  . . . . . . . . . . 459
     18.23. Operation 26: READDIR - Read Directory . . . . . . . . . 461
     18.24. Operation 27: READLINK - Read Symbolic Link  . . . . . . 465
     18.25. Operation 28: REMOVE - Remove File System Object . . . . 466
     18.26. Operation 29: RENAME - Rename Directory Entry  . . . . . 468
     18.27. Operation 31: RESTOREFH - Restore Saved Filehandle . . . 472
     18.28. Operation 32: SAVEFH - Save Current Filehandle . . . . . 473
     18.29. Operation 33: SECINFO - Obtain Available Security  . . . 474
     18.30. Operation 34: SETATTR - Set Attributes . . . . . . . . . 478
     18.31. Operation 37: VERIFY - Verify Same Attributes  . . . . . 481
     18.32. Operation 38: WRITE - Write to File  . . . . . . . . . . 482
     18.33. Operation 40: BACKCHANNEL_CTL - Backchannel Control  . . 486
     18.34. Operation 41: BIND_CONN_TO_SESSION - Associate
            Connection with Session  . . . . . . . . . . . . . . . . 488
     18.35. Operation 42: EXCHANGE_ID - Instantiate Client ID  . . . 491
     18.36. Operation 43: CREATE_SESSION - Create New Session and
            Confirm Client ID  . . . . . . . . . . . . . . . . . . . 509
     18.37. Operation 44: DESTROY_SESSION - Destroy a Session  . . . 519
     18.38. Operation 45: FREE_STATEID - Free Stateid with No
            Locks  . . . . . . . . . . . . . . . . . . . . . . . . . 520
     18.39. Operation 46: GET_DIR_DELEGATION - Get a directory
            delegation . . . . . . . . . . . . . . . . . . . . . . . 521



Shepler, et al.           Expires June 18, 2009                 [Page 9]

Internet-Draft                   NFSv4.1                   December 2008


     18.40. Operation 47: GETDEVICEINFO - Get Device Information . . 525
     18.41. Operation 48: GETDEVICELIST - Get All Device Mappings
            for a File System  . . . . . . . . . . . . . . . . . . . 527
     18.42. Operation 49: LAYOUTCOMMIT - Commit Writes Made Using
            a Layout . . . . . . . . . . . . . . . . . . . . . . . . 529
     18.43. Operation 50: LAYOUTGET - Get Layout Information . . . . 532
     18.44. Operation 51: LAYOUTRETURN - Release Layout
            Information  . . . . . . . . . . . . . . . . . . . . . . 542
     18.45. Operation 52: SECINFO_NO_NAME - Get Security on
            Unnamed Object . . . . . . . . . . . . . . . . . . . . . 546
     18.46. Operation 53: SEQUENCE - Supply Per-Procedure
            Sequencing and Control . . . . . . . . . . . . . . . . . 547
     18.47. Operation 54: SET_SSV - Update SSV for a Client ID . . . 553
     18.48. Operation 55: TEST_STATEID - Test Stateids for
            Validity . . . . . . . . . . . . . . . . . . . . . . . . 555
     18.49. Operation 56: WANT_DELEGATION - Request Delegation . . . 557
     18.50. Operation 57: DESTROY_CLIENTID - Destroy a Client ID . . 561
     18.51. Operation 58: RECLAIM_COMPLETE - Indicates Reclaims
            Finished . . . . . . . . . . . . . . . . . . . . . . . . 561
     18.52. Operation 10044: ILLEGAL - Illegal operation . . . . . . 564
   19. NFSv4.1 Callback Procedures . . . . . . . . . . . . . . . . . 564
     19.1.  Procedure 0: CB_NULL - No Operation  . . . . . . . . . . 565
     19.2.  Procedure 1: CB_COMPOUND - Compound Operations . . . . . 565
   20. NFSv4.1 Callback Operations . . . . . . . . . . . . . . . . . 569
     20.1.  Operation 3: CB_GETATTR - Get Attributes . . . . . . . . 569
     20.2.  Operation 4: CB_RECALL - Recall a Delegation . . . . . . 570
     20.3.  Operation 5: CB_LAYOUTRECALL - Recall Layout from
            Client . . . . . . . . . . . . . . . . . . . . . . . . . 571
     20.4.  Operation 6: CB_NOTIFY - Notify Client of Directory
            Changes  . . . . . . . . . . . . . . . . . . . . . . . . 575
     20.5.  Operation 7: CB_PUSH_DELEG - Offer Previously
            Requested Delegation to Client . . . . . . . . . . . . . 579
     20.6.  Operation 8: CB_RECALL_ANY - Keep Any N Recallable
            Objects  . . . . . . . . . . . . . . . . . . . . . . . . 580
     20.7.  Operation 9: CB_RECALLABLE_OBJ_AVAIL - Signal
            Resources for Recallable Objects . . . . . . . . . . . . 583
     20.8.  Operation 10: CB_RECALL_SLOT - Change Flow Control
            Limits . . . . . . . . . . . . . . . . . . . . . . . . . 584
     20.9.  Operation 11: CB_SEQUENCE - Supply Backchannel
            Sequencing and Control . . . . . . . . . . . . . . . . . 585
     20.10. Operation 12: CB_WANTS_CANCELLED - Cancel Pending
            Delegation Wants . . . . . . . . . . . . . . . . . . . . 587
     20.11. Operation 13: CB_NOTIFY_LOCK - Notify Client of
            Possible Lock Availability . . . . . . . . . . . . . . . 588
     20.12. Operation 14: CB_NOTIFY_DEVICEID - Notify Client of
            Device ID Changes  . . . . . . . . . . . . . . . . . . . 590
     20.13. Operation 10044: CB_ILLEGAL - Illegal Callback
            Operation  . . . . . . . . . . . . . . . . . . . . . . . 592



Shepler, et al.           Expires June 18, 2009                [Page 10]

Internet-Draft                   NFSv4.1                   December 2008


   21. Security Considerations . . . . . . . . . . . . . . . . . . . 592
   22. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 594
     22.1.  Named Attribute Definitions  . . . . . . . . . . . . . . 594
       22.1.1.  Initial Registry . . . . . . . . . . . . . . . . . . 595
       22.1.2.  Updating Registrations . . . . . . . . . . . . . . . 595
     22.2.  Device ID Notifications  . . . . . . . . . . . . . . . . 595
       22.2.1.  Initial Registry . . . . . . . . . . . . . . . . . . 596
       22.2.2.  Updating Registrations . . . . . . . . . . . . . . . 596
     22.3.  Object Recall Types  . . . . . . . . . . . . . . . . . . 596
       22.3.1.  Initial Registry . . . . . . . . . . . . . . . . . . 598
       22.3.2.  Updating Registrations . . . . . . . . . . . . . . . 598
     22.4.  Layout Types . . . . . . . . . . . . . . . . . . . . . . 598
       22.4.1.  Initial Registry . . . . . . . . . . . . . . . . . . 599
       22.4.2.  Updating Registrations . . . . . . . . . . . . . . . 599
       22.4.3.  Guidelines for Writing Layout Type Specifications  . 599
     22.5.  Path Variable Definitions  . . . . . . . . . . . . . . . 601
       22.5.1.  Path Variables Registry  . . . . . . . . . . . . . . 601
       22.5.2.  Values for the ${ietf.org:CPU_ARCH} Variable . . . . 603
       22.5.3.  Values for the ${ietf.org:OS_TYPE} Variable  . . . . 603
   23. References  . . . . . . . . . . . . . . . . . . . . . . . . . 604
     23.1.  Normative References . . . . . . . . . . . . . . . . . . 604
     23.2.  Informative References . . . . . . . . . . . . . . . . . 607
   Appendix A.  Acknowledgments  . . . . . . . . . . . . . . . . . . 608
   Appendix B.  RFC Editor Notes . . . . . . . . . . . . . . . . . . 611
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . . 611


























Shepler, et al.           Expires June 18, 2009                [Page 11]

Internet-Draft                   NFSv4.1                   December 2008


1.  Introduction

1.1.  The NFS Version 4 Minor Version 1 Protocol

   The NFS version 4 minor version 1 (NFSv4.1) protocol is the second
   minor version of the NFS version 4 (NFSv4) protocol.  The first minor
   version, NFSv4.0, is described in [29].  It generally follows the
   guidelines for minor versioning listed in Section 10 of RFC 3530.
   However, it diverges from guidelines 11 ("a client and server
   that supports minor version X must support minor versions 0 through
   X-1"), and 12 ("no features may be introduced as mandatory in a minor
   version").  These divergences are due to the introduction of the
   sessions model for managing non-idempotent operations and the
   RECLAIM_COMPLETE operation.  These two new features are
   infrastructural in nature and simplify implementation of existing and
   other new features.  Making them anything but REQUIRED would add
   undue complexity to protocol definition and implementation.  NFSv4.1
   accordingly updates the Minor Versioning guidelines (Section 2.7).

   As a minor version, NFSv4.1 is consistent with the overall goals for
   NFSv4, but extends the protocol so as to better meet those goals,
   based on experiences with NFSv4.0.  In addition, NFSv4.1 has adopted
   some additional goals, which motivate some of the major extensions in
   NFSv4.1.

1.2.  Scope of this Document

   This document describes the NFSv4.1 protocol.  With respect to
   NFSv4.0, this document does not:

   o  describe the NFSv4.0 protocol, except where needed to contrast
      with NFSv4.1.

   o  modify the specification of the NFSv4.0 protocol.

   o  clarify the NFSv4.0 protocol.

1.3.  NFSv4 Goals

   The NFSv4 protocol is a further revision of the NFS protocol defined
   already by NFSv3 [30].  It retains the essential characteristics of
   previous versions: easy recovery; independence of transport
   protocols, operating systems and file systems; simplicity; and good
   performance.  NFSv4 has the following goals:

   o  Improved access and good performance on the Internet.

      The protocol is designed to transit firewalls easily, perform well



Shepler, et al.           Expires June 18, 2009                [Page 12]

Internet-Draft                   NFSv4.1                   December 2008


      where latency is high and bandwidth is low, and scale to very
      large numbers of clients per server.

   o  Strong security with negotiation built into the protocol.

      The protocol builds on the work of the ONCRPC working group in
      supporting the RPCSEC_GSS protocol.  Additionally, the NFSv4.1
      protocol provides a mechanism that allows clients and servers to
      negotiate security and requires clients and servers to support a
      minimal set of security schemes.

   o  Good cross-platform interoperability.

      The protocol features a file system model that provides a useful,
      common set of features that does not unduly favor one file system
      or operating system over another.

   o  Designed for protocol extensions.

      The protocol is designed to accept standard extensions within a
      framework that enables and encourages backward compatibility.

1.4.  NFSv4.1 Goals

   NFSv4.1 has the following goals, within the framework established by
   the overall NFSv4 goals.

   o  To correct significant structural weaknesses and oversights
      discovered in the base protocol.

   o  To add clarity and specificity to areas left unaddressed or not
      addressed in sufficient detail in the base protocol.  However, as
      stated in Section 1.2, it is not a goal to clarify the NFSv4.0
      protocol in the NFSv4.1 specification.

   o  To add specific features based on experience with the existing
      protocol and recent industry developments.

   o  To provide protocol support to take advantage of clustered server
      deployments including the ability to provide scalable parallel
      access to files distributed among multiple servers.

1.5.  General Definitions

   The following definitions are provided to establish an appropriate
   context for the reader.





Shepler, et al.           Expires June 18, 2009                [Page 13]

Internet-Draft                   NFSv4.1                   December 2008


   Byte  This document defines a byte as an octet, i.e., a datum exactly
      8 bits in length.

   Client  The "client" is the entity that accesses the NFS server's
      resources.  The client may be an application which contains the
      logic to access the NFS server directly.  The client may also be
      the traditional operating system client that provides remote file
      system services for a set of applications.

      A client is uniquely identified by a Client Owner.

      With reference to file locking, the client is also the entity that
      maintains a set of locks on behalf of one or more applications.
      This client is responsible for crash or failure recovery for those
      locks it manages.

      Note that multiple clients may share the same transport and
      connection and multiple clients may exist on the same network
      node.

   Client ID  A 64-bit quantity used as a unique, short-hand reference
      to a client-supplied Verifier and client owner.  The server is
      responsible for supplying the client ID.

   Client Owner  The client owner is a unique string, opaque to the
      server, which identifies a client.  Multiple network connections
      and source network addresses originating from those connections
      may share a client owner.  The server is expected to treat
      requests from connections with the same client owner as coming
      from the same client.

   File System  The collection of objects on a server (as identified by
      the major identifier of a Server Owner, which is defined later in
      this section), that share the same fsid attribute (see
      Section 5.8.1.9).

   Lease  An interval of time defined by the server for which the client
      is irrevocably granted a lock.  At the end of a lease period the
      lock may be revoked if the lease has not been extended.  The lock
      must be revoked if a conflicting lock has been granted after the
      lease interval.

      All leases granted by a server have the same fixed interval.  Note
      that the fixed interval was chosen to alleviate the expense a
      server would have in maintaining state about variable length
      leases across server failures.





Shepler, et al.           Expires June 18, 2009                [Page 14]

Internet-Draft                   NFSv4.1                   December 2008


   Lock  The term "lock" is used to refer to byte-range (in UNIX
      environments, also known as record) locks, share reservations,
      delegations, or layouts unless specifically stated otherwise.

   Secret State Verifier (SSV)  The SSV is a unique secret key shared
      between a client and server.  The SSV serves as the secret key for
      an internal (that is, internal to NFSv4.1) GSS mechanism (the SSV
      GSS mechanism, see Section 2.10.9).  The SSV GSS mechanism uses
      the SSV to compute Message Integrity Code (MIC) and Wrap tokens.
      See Section 2.10.8.3 for more details on how NFSv4.1 uses the SSV
      and the SSV GSS mechanism.

   Server  The "Server" is the entity responsible for coordinating
      client access to a set of file systems and is identified by a
      Server owner.  A server can span multiple network addresses.

   Server Owner  The "Server Owner" identifies the server to the client.
      The server owner consists of a major and minor identifier.  When
      the client has two connections each to a peer with the same major
      identifier, the client assumes both peers are the same server (the
      server namespace is the same via each connection), and assumes
      that lock state is sharable across both connections.  When each
      peer has both the same major and minor identifier, the client
      assumes each connection might be associable with the same
      session.

   Stable Storage  Stable storage is storage from which data stored by
      an NFSv4.1 server can be recovered without data loss from multiple
      power failures (including cascading power failures, that is,
      several power failures in quick succession), operating system
      failures, and/or hardware failure of components other than the
      storage medium itself (such as disk, nonvolatile RAM, flash
      memory, etc.).

      Some examples of stable storage that are allowable for an NFS
      server include:

      1.  Media commit of data, that is, the modified data has been
          successfully written to the disk media, for example, the disk
          platter.

      2.  An immediate reply disk drive with battery-backed on-drive
          intermediate storage or uninterruptible power system (UPS).

      3.  Server commit of data with battery-backed intermediate storage
          and recovery software.

      4.  Cache commit with uninterruptible power system (UPS) and
          recovery software.



Shepler, et al.           Expires June 18, 2009                [Page 15]

Internet-Draft                   NFSv4.1                   December 2008


   Stateid  A 128-bit quantity returned by a server that uniquely
      defines the open and locking state provided by the server for a
      specific open-owner or lock-owner/open-owner pair for a specific
      file and type of lock.

   Verifier  A 64-bit quantity generated by the client that the server
      can use to determine if the client has restarted and lost all
      previous lock state.
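
   Purely as an illustration of the sizes just described (the
   normative XDR definitions appear in Section 3, and the internal
   structure of the stateid is discussed in Section 8.2.2), these
   quantities can be sketched as C declarations; the type names
   mirror the XDR types used later in this document.

      /* Illustrative C equivalents of the quantities defined above;
       * the XDR definitions in Section 3 are normative. */
      #include <stdint.h>

      typedef uint64_t clientid4;            /* Client ID: 64 bits */
      typedef uint8_t  verifier4[8];         /* Verifier: 64 bits  */

      typedef struct {                       /* Stateid: 128 bits  */
          uint32_t seqid;                    /* sequence number    */
          uint8_t  other[12];                /* opaque remainder   */
      } stateid4;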

1.6.  Overview of NFSv4.1 Features

   To provide a reasonable context for the reader, the major features
   of the NFSv4.1 protocol will be reviewed in brief.  This review is
   intended to serve both the reader who is familiar with previous
   versions of the NFS protocol and the reader who is new to the NFS
   protocols.  The reader new to the NFS protocols is still expected
   to have a set of fundamental knowledge: the reader should be
   familiar with the XDR and RPC protocols as described in [2] and
   [3], and a basic knowledge of file systems and distributed file
   systems is expected as well.

   In general this specification of NFSv4.1 will not distinguish those
   features added in minor version one from those present in the base
   protocol but will treat NFSv4.1 as a unified whole.  See Section 1.7
   for a summary of the differences between NFSv4.0 and NFSv4.1.

1.6.1.  RPC and Security

   As with previous versions of NFS, the External Data Representation
   (XDR) and Remote Procedure Call (RPC) mechanisms used for the NFSv4.1
   protocol are those defined in [2] and [3].  To meet end-to-end
   security requirements, the RPCSEC_GSS framework [4] is used to extend
   the basic RPC security.  With the use of RPCSEC_GSS, various
   mechanisms can be provided to offer authentication, integrity, and
   privacy to the NFSv4 protocol.  Kerberos V5 is used as described in
   [5] to provide one security framework.  With the use of RPCSEC_GSS,
   other mechanisms may also be specified and used for NFSv4.1 security.

   To enable in-band security negotiation, the NFSv4.1 protocol has
   operations which provide the client a method of querying the server
   about its policies regarding which security mechanisms must be used
   for access to the server's file system resources.  With this, the
   client can securely match the security mechanism that meets the
   policies specified at both the client and server.
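
   As a non-normative sketch of the kind of matching involved, the
   fragment below intersects the server's ordered list of acceptable
   security mechanisms (such as the list returned by SECINFO, see
   Section 2.6.2) with the set the client is configured to use,
   taking the first mutually acceptable entry.  The type and function
   names are illustrative and are not part of the protocol.

      /* Illustrative only: pick the first server-preferred security
       * mechanism that the client is also willing to use.  A real
       * client would compare the flavor and GSS-triple data returned
       * by SECINFO (Section 18.29) rather than plain integers. */
      #include <stddef.h>

      typedef int mechanism_t;  /* stand-in for a security tuple */

      /* Returns an index into server_list, or -1 if the client and
       * server security policies do not overlap. */
      static int choose_mechanism(const mechanism_t *server_list,
                                  size_t nserver,
                                  const mechanism_t *client_ok,
                                  size_t nclient)
      {
          for (size_t i = 0; i < nserver; i++)
              for (size_t j = 0; j < nclient; j++)
                  if (server_list[i] == client_ok[j])
                      return (int)i;
          return -1;
      }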

   NFSv4.1 introduces parallel access (see Section 1.6.2.2), which is
   called pNFS.  The security framework described in this section is
   significantly modified by the introduction of pNFS (see



Shepler, et al.           Expires June 18, 2009                [Page 16]

Internet-Draft                   NFSv4.1                   December 2008


   Section 12.9), because data access is sometimes not over RPC.  The
   level of significance varies with the Storage Protocol (see
   Section 12.2.5) and can be as low as zero impact (see Section 13.12).

1.6.2.  Protocol Structure

1.6.2.1.  Core Protocol

   Unlike NFSv3, which used a series of ancillary protocols (e.g., NLM,
   NSM, MOUNT), within all minor versions of NFSv4 a single RPC protocol
   is used to make requests to the server.  Facilities that had been
   separate protocols, such as locking, are now integrated within a
   single unified protocol.

1.6.2.2.  Parallel Access

   Minor version one supports high-performance data access to a
   clustered server implementation by enabling a separation of metadata
   access and data access, with the latter done to multiple servers in
   parallel.

   Such parallel data access is controlled by recallable objects known
   as "layouts", which are integrated into the protocol locking model.
   Clients direct requests for data access to a set of data servers
   specified by the layout via a data storage protocol which may be
   NFSv4.1 or may be another protocol.

   Because the protocols used for parallel data access are not
   necessarily RPC-based, the RPC-based security model (Section 1.6.1)
   is obviously impacted (see Section 12.9).  The degree of impact
   varies with the Storage Protocol (see Section 12.2.5) used for data
   access, and can be as low as zero (see Section 13.12).

1.6.3.  File System Model

   The general file system model used for the NFSv4.1 protocol is the
   same as previous versions.  The server file system is hierarchical
   with the regular files contained within being treated as opaque byte
   streams.  In a slight departure, file and directory names are encoded
   with UTF-8 to deal with the basics of internationalization.

   The NFSv4.1 protocol does not require a separate protocol to provide
   for the initial mapping between path name and filehandle.  All file
   systems exported by a server are presented as a tree so that all file
   systems are reachable from a special per-server global root
   filehandle.  This allows LOOKUP operations to be used to perform
   functions previously provided by the MOUNT protocol.  The server
   provides any necessary pseudo file systems to bridge any gaps that



Shepler, et al.           Expires June 18, 2009                [Page 17]

Internet-Draft                   NFSv4.1                   December 2008


   arise between exported file systems.

1.6.3.1.  Filehandles

   As in previous versions of the NFS protocol, opaque filehandles are
   used to identify individual files and directories.  Lookup-type and
   create operations translate file and directory names to filehandles
   which are then used to identify objects in subsequent operations.

   The NFSv4.1 protocol provides support for persistent filehandles,
   guaranteed to be valid for the lifetime of the file system object
   designated.  In addition, it allows servers to provide filehandles
   with more limited validity guarantees, called volatile filehandles.

1.6.3.2.  File Attributes

   The NFSv4.1 protocol has a rich and extensible file object attribute
   structure, which is divided into REQUIRED, RECOMMENDED, and named
   attributes (see Section 5).

   Several (but not all) of the REQUIRED attributes are derived from the
   attributes of NFSv3 (see the definition of the fattr3 data type in
   [30]).  An example of a REQUIRED attribute is the file object's type
   (Section 5.8.1.2) so that regular files can be distinguished from
   directories (also known as folders in some operating environments)
   and other types of objects.  REQUIRED attributes are discussed in
   Section 5.1.

   Three examples of RECOMMENDED attributes are acl, sacl, and dacl.
   These attributes define an Access Control List (ACL) on a file object
   (Section 6).  An ACL provides directory and file access control
   beyond the model used in NFSv3.  The ACL definition allows for
   specification of specific sets of permissions for individual users
   and groups.  In addition, ACL inheritance allows propagation of
   access permissions and restriction down a directory tree as file
   system objects are created.  RECOMMENDED attributes are discussed in
   Section 5.2.

   A named attribute is an opaque byte stream that is associated with a
   directory or file and referred to by a string name.  Named attributes
   are meant to be used by client applications as a method to associate
   application-specific data with a regular file or directory.  NFSv4.1
   modifies named attributes relative to NFSv4.0 by tightening the
   allowed operations in order to prevent the development of non-
   interoperable implementations.  Named attributes are discussed in
   Section 5.3.




Shepler, et al.           Expires June 18, 2009                [Page 18]

Internet-Draft                   NFSv4.1                   December 2008


1.6.3.3.  Multi-server Namespace

   NFSv4.1 contains a number of features to allow implementation of
   namespaces that cross server boundaries and that allow and facilitate
   a non-disruptive transfer of support for individual file systems
   between servers.  They are all based upon attributes that allow one
   file system to specify alternate or new locations for that file
   system.

   These attributes may be used together with the concept of absent file
   systems, which provide specifications for additional locations but no
   actual file system content.  This allows a number of important
   facilities:

   o  Location attributes may be used with absent file systems to
      implement referrals whereby one server may direct the client to a
      file system provided by another server.  This allows extensive
      multi-server namespaces to be constructed.

   o  Location attributes may be provided for present file systems to
      provide the locations of alternate file system instances or
      replicas to be used in the event that the current file system
      instance becomes unavailable.

   o  Location attributes may be provided when a previously present file
      system becomes absent.  This allows non-disruptive migration of
      file systems to alternate servers.

1.6.4.  Locking Facilities

   As mentioned previously, NFSv4.1 is a single protocol that includes
   locking facilities.  These locking facilities include support for
   many types of locks, including several kinds of recallable locks.
   Recallable locks such as delegations allow the client to be assured
   that certain events will not occur so long as that lock is held.
   When circumstances change, the lock is recalled via a callback
   request.  The assurances provided by delegations allow more extensive
   caching to be done safely when circumstances allow it.

   The types of locks are:

   o  Share reservations as established by OPEN operations.

   o  Byte-range locks.

   o  File delegations, which are recallable locks that assure the
      holder that inconsistent opens and file changes cannot occur so
      long as the delegation is held.



Shepler, et al.           Expires June 18, 2009                [Page 19]

Internet-Draft                   NFSv4.1                   December 2008


   o  Directory delegations, which are recallable locks that assure the
      holder that inconsistent directory modifications cannot occur so
      long as the delegation is held.

   o  Layouts, which are recallable objects that assure the holder that
      access to the file data may be performed directly by the client
      and that no change to the data's location inconsistent with that
      access may be made so long as the layout is held.

   All locks for a given client are tied together under a single client-
   wide lease.  All requests made on sessions associated with the client
   renew that lease.  When leases are not promptly renewed, locks are
   subject to revocation.  In the event of server restart, clients have
   the opportunity to safely reclaim their locks within a special grace
   period.
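
   A minimal sketch of the lease rule stated above follows; the names
   and the simple time arithmetic are illustrative assumptions (lease
   renewal and expiration are treated in detail in Section 8.3 and
   Section 8.7).

      /* Illustrative only: a lease remains valid as long as requests
       * keep renewing it; once a full lease interval passes without
       * renewal, the locks it covers become subject to revocation. */
      #include <stdbool.h>
      #include <stdint.h>

      static bool lease_expired(uint64_t now_seconds,
                                uint64_t last_renewal_seconds,
                                uint64_t lease_time_seconds)
      {
          return now_seconds > last_renewal_seconds + lease_time_seconds;
      }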

1.7.  Differences from NFSv4.0

   The following summarizes the major differences between minor version
   one and the base protocol:

   o  Implementation of the sessions model (Section 2.10).

   o  Parallel access to data (Section 12).

   o  Addition of the RECLAIM_COMPLETE operation to better structure the
      lock reclamation process (Section 18.51).

   o  Enhanced delegation support as follows.

      *  Delegations on directories and other file types in addition to
         regular files (Section 18.39, Section 18.49).

      *  Operations to optimize acquisition of recalled or denied
         delegations (Section 18.49, Section 20.5, Section 20.7).

      *  Notifications of changes to files and directories
         (Section 18.39, Section 20.4).

      *  A method to allow a server to indicate it is recalling one or
         more delegations for resource management reasons, and thus a
         method to allow the client to pick which delegations to return
         (Section 20.6).

   o  Attributes can be set atomically during exclusive file create via
      the OPEN operation (see the new EXCLUSIVE4_1 creation method in
      Section 18.16).




Shepler, et al.           Expires June 18, 2009                [Page 20]

Internet-Draft                   NFSv4.1                   December 2008


   o  Open files can be preserved if removed and the hard link count
      ("hard link" is defined in an Open Group [6] standard) goes to
      zero, thus obviating the need for clients to rename deleted files
      to partially hidden names -- colloquially called "silly rename"
      (see the new OPEN4_RESULT_PRESERVE_UNLINKED reply flag in
      Section 18.16).

   o  Improved compatibility with Microsoft Windows for Access Control
      Lists (Section 6.2.3, Section 6.2.2, Section 6.4.3.2).

   o  Data retention (Section 5.13).

   o  Identification of the implementation of the NFS client and server
      (Section 18.35).

   o  Support for notification of the availability of byte-range locks
      (see the new OPEN4_RESULT_MAY_NOTIFY_LOCK reply flag in
      Section 18.16 and see Section 20.11).

   o  In NFSv4.1, LIPKEY and SPKM-3 are not required security mechanisms
      [31].


2.  Core Infrastructure

2.1.  Introduction

   NFSv4.1 relies on core infrastructure common to nearly every
   operation.  This core infrastructure is described in the remainder of
   this section.

2.2.  RPC and XDR

   The NFSv4.1 protocol is a Remote Procedure Call (RPC) application
   that uses RPC version 2 and the corresponding eXternal Data
   Representation (XDR) as defined in [3] and [2].

2.2.1.  RPC-based Security

   Previous NFS versions have been thought of as having a host-based
   authentication model, where the NFS server authenticates the NFS
   client, and trusts the client to authenticate all users.  Actually,
   NFS has always depended on RPC for authentication.  One of the first
   forms of RPC authentication, AUTH_SYS, had no strong authentication,
   and required a host-based authentication approach.  NFSv4.1 also
   depends on RPC for basic security services, and mandates RPC support
   for a user-based authentication model.  The user-based authentication
   model has user principals authenticated by a server, and in turn the



Shepler, et al.           Expires June 18, 2009                [Page 21]

Internet-Draft                   NFSv4.1                   December 2008


   server authenticated by user principals.  RPC provides some basic
   security services which are used by NFSv4.1.

2.2.1.1.  RPC Security Flavors

   As described in section 7.2 "Authentication" of [3], RPC security is
   encapsulated in the RPC header, via a security or authentication
   flavor, and information specific to the specified security flavor.
   Every RPC header conveys information used to identify and
   authenticate a client and server.  As discussed in Section 2.2.1.1.1,
   some security flavors provide additional security services.

   NFSv4.1 clients and servers MUST implement RPCSEC_GSS.  (This
   requirement to implement is not a requirement to use.)  Other
   flavors, such as AUTH_NONE, and AUTH_SYS, MAY be implemented as well.

2.2.1.1.1.  RPCSEC_GSS and Security Services

   RPCSEC_GSS ([4]) uses the functionality of GSS-API [7].  This allows
   for the use of various security mechanisms by the RPC layer without
   the additional implementation overhead of adding RPC security
   flavors.

2.2.1.1.1.1.  Identification, Authentication, Integrity, Privacy

   Via the GSS-API, RPCSEC_GSS can be used to identify and authenticate
   users on clients to servers, and servers to users.  It can also
   perform integrity checking on the entire RPC message, including the
   RPC header, and the arguments or results.  Finally, privacy, usually
   via encryption, is a service available with RPCSEC_GSS.  Privacy is
   performed on the arguments and results.  Note that if privacy is
   selected, integrity, authentication, and identification are enabled.
   If privacy is not selected, but integrity is selected, authentication
   and identification are enabled.  If integrity and privacy are not
   selected, but authentication is enabled, identification is enabled.
   RPCSEC_GSS does not provide identification as a separate service.

   Although GSS-API has an authentication service distinct from its
   privacy and integrity services, GSS-API's authentication service is
   not used for RPCSEC_GSS's authentication service.  Instead, each RPC
   request and response header is integrity protected with the GSS-API
   integrity service, and this allows RPCSEC_GSS to offer per-RPC
   authentication and identity.  See [4] for more information.

   NFSv4.1 clients and servers MUST support RPCSEC_GSS's integrity and
   authentication service.  NFSv4.1 servers MUST support RPCSEC_GSS's
   privacy service.  NFSv4.1 clients SHOULD support RPCSEC_GSS's privacy
   service.



Shepler, et al.           Expires June 18, 2009                [Page 22]

Internet-Draft                   NFSv4.1                   December 2008


2.2.1.1.1.2.  Security mechanisms for NFSv4.1

   RPCSEC_GSS, via GSS-API, normalizes access to mechanisms that provide
   security services.  Therefore NFSv4.1 clients and servers MUST
   support the Kerberos V5 security mechanism.

   The use of RPCSEC_GSS requires selection of: mechanism, quality of
   protection (QOP), and service (authentication, integrity, privacy).
   For the mandated security mechanisms, NFSv4.1 specifies that a QOP of
   zero (0) is used, leaving it up to the mechanism or the mechanism's
   configuration to map QOP zero to an appropriate level of protection.
   Each mandated mechanism specifies a minimum set of cryptographic
   algorithms for implementing integrity and privacy.  NFSv4.1 clients
   and servers MUST be implemented on operating environments that comply
   with the REQUIRED cryptographic algorithms of each REQUIRED
   mechanism.

2.2.1.1.1.2.1.  Kerberos V5

   The Kerberos V5 GSS-API mechanism as described in [5] MUST be
   implemented with the RPCSEC_GSS services as specified in the
   following table:


      column descriptions:
      1 == number of pseudo flavor
      2 == name of pseudo flavor
      3 == mechanism's OID
      4 == RPCSEC_GSS service
      5 == NFSv4.1 clients MUST support
      6 == NFSv4.1 servers MUST support

      1      2        3                    4                     5   6
      ------------------------------------------------------------------
      390003 krb5     1.2.840.113554.1.2.2 rpc_gss_svc_none      yes yes
      390004 krb5i    1.2.840.113554.1.2.2 rpc_gss_svc_integrity yes yes
      390005 krb5p    1.2.840.113554.1.2.2 rpc_gss_svc_privacy    no yes

   Note that the number and name of the pseudo flavor are presented
   here as a mapping aid to the implementor.  Because the NFSv4.1
   protocol includes a method to negotiate security and it understands
   the GSS-API mechanism, the pseudo flavor is not needed.  The pseudo
   flavor is needed for NFSv3 since the security negotiation is done
   via the MOUNT protocol as described in [32].
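
   The following non-normative sketch (in Python) restates the table
   above as a lookup structure; the identifier names are illustrative
   and are not part of the protocol.

   KRB5_OID = "1.2.840.113554.1.2.2"

   # pseudo-flavor number -> (name, mechanism OID, RPCSEC_GSS service,
   #                          client MUST support, server MUST support)
   KRB5_PSEUDO_FLAVORS = {
       390003: ("krb5",  KRB5_OID, "rpc_gss_svc_none",      True,  True),
       390004: ("krb5i", KRB5_OID, "rpc_gss_svc_integrity", True,  True),
       390005: ("krb5p", KRB5_OID, "rpc_gss_svc_privacy",   False, True),
   }

   def server_must_support(pseudo_flavor):
       """True if an NFSv4.1 server MUST support this pseudo flavor."""
       return KRB5_PSEUDO_FLAVORS[pseudo_flavor][4]

   if __name__ == "__main__":
       assert server_must_support(390005)         # krb5p: server MUST
       assert not KRB5_PSEUDO_FLAVORS[390005][3]  # client not required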

   At the time NFSv4.1 was specified, AES with HMAC-SHA1 was a REQUIRED
   algorithm set for Kerberos V5.  In contrast, when NFSv4.0 was
   specified, weaker algorithm sets were REQUIRED for Kerberos V5, and



Shepler, et al.           Expires June 18, 2009                [Page 23]

Internet-Draft                   NFSv4.1                   December 2008


   were REQUIRED in the NFSv4.0 specification, because the Kerberos V5
   specification at the time did not specify stronger algorithms.  The
   NFSv4.1 specification does not specify REQUIRED algorithms for
   Kerberos V5, and instead, the implementor is expected to track the
   evolution of the Kerberos V5 standard if and when stronger algorithms
   are specified.

2.2.1.1.1.2.1.1.  Security Considerations for Cryptographic Algorithms
                  in Kerberos V5

   When deploying NFSv4.1, the strength of the security achieved depends
   on the existing Kerberos V5 infrastructure.  The algorithms of
   Kerberos V5 are not directly exposed to or selectable by the client
   or server, so there is some due diligence required by the user of
   NFSv4.1 to ensure that security is acceptable where needed.

2.2.1.1.1.3.  GSS Server Principal

   Regardless of what security mechanism under RPCSEC_GSS is being used,
   the NFS server MUST identify itself in GSS-API via a
   GSS_C_NT_HOSTBASED_SERVICE name type.  GSS_C_NT_HOSTBASED_SERVICE
   names are of the form:

        service@hostname

   For NFS, the "service" element is

        nfs

   Implementations of security mechanisms will convert nfs@hostname to
   various different forms.  For Kerberos V5 the following form is
   RECOMMENDED:

        nfs/hostname
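
   As a non-normative illustration, the following Python sketch forms
   the GSS_C_NT_HOSTBASED_SERVICE name for an NFS server and renders it
   in the RECOMMENDED Kerberos V5 form; the helper names are
   illustrative and are not part of any GSS-API binding.

   def gss_hostbased_name(hostname, service="nfs"):
       """GSS_C_NT_HOSTBASED_SERVICE form, e.g. "nfs@server.example"."""
       return "%s@%s" % (service, hostname)

   def krb5_principal(hostbased_name, realm=None):
       """Kerberos V5 rendering of a host-based service name,
       e.g. "nfs/server.example" (optionally with a realm)."""
       service, hostname = hostbased_name.split("@", 1)
       principal = "%s/%s" % (service, hostname)
       return "%s@%s" % (principal, realm) if realm else principal

   if __name__ == "__main__":
       name = gss_hostbased_name("server.example.net")
       assert name == "nfs@server.example.net"
       assert krb5_principal(name) == "nfs/server.example.net"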

2.3.  COMPOUND and CB_COMPOUND

   A significant departure from the versions of the NFS protocol before
   NFSv4 is the introduction of the COMPOUND procedure.  For the NFSv4
   protocol, in all minor versions, there are exactly two RPC
   procedures, NULL and COMPOUND.  The COMPOUND procedure is defined as
   a series of individual operations and these operations perform the
   sorts of functions performed by traditional NFS procedures.

   The operations combined within a COMPOUND request are evaluated in
   order by the server, without any atomicity guarantees.  A limited set
   of facilities exist to pass results from one operation to another.
   Once an operation returns a failing result, the evaluation ends and



Shepler, et al.           Expires June 18, 2009                [Page 24]

Internet-Draft                   NFSv4.1                   December 2008


   the results of all evaluated operations are returned to the client.

   With the use of the COMPOUND procedure, the client is able to build
   simple or complex requests.  These COMPOUND requests allow for a
   reduction in the number of RPCs needed for logical file system
   operations.  For example, multi-component lookup requests can be
   constructed by combining multiple LOOKUP operations.  Those can be
   further combined with operations such as GETATTR, READDIR, or OPEN
   plus READ to do more complicated sets of operation without incurring
   additional latency.
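
   The following non-normative Python sketch illustrates the evaluation
   model described above: operations run strictly in order, with no
   atomicity, and evaluation stops at the first failure.  The handler
   functions and status values are placeholders for illustration.

   NFS4_OK = 0

   def evaluate_compound(operations, handlers, context):
       """Run each (opname, args) in order; stop after the first
       failure and return the results accumulated so far."""
       results = []
       for op_name, op_args in operations:
           status, result = handlers[op_name](context, op_args)
           results.append((op_name, status, result))
           if status != NFS4_OK:
               break          # later operations are not evaluated
       return results

   if __name__ == "__main__":
       def ok(ctx, args):   return NFS4_OK, args
       def fail(ctx, args): return 2, None      # e.g. NFS4ERR_NOENT

       handlers = {"PUTROOTFH": ok, "LOOKUP": ok,
                   "GETATTR": ok, "LOOKUP_MISSING": fail}
       ops = [("PUTROOTFH", None), ("LOOKUP", "a"), ("LOOKUP", "b"),
              ("LOOKUP_MISSING", "c"), ("GETATTR", None)]
       out = evaluate_compound(ops, handlers, context={})
       assert len(out) == 4   # the GETATTR after the failure is skipped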

   NFSv4.1 also contains a considerable set of callback operations in
   which the server makes an RPC directed at the client.  Callback RPCs
   have a similar structure to that of the normal server requests.  In
   all minor versions of the NFSv4 protocol there are two callback RPC
   procedures, CB_NULL and CB_COMPOUND.  The CB_COMPOUND procedure is
   defined in an analogous fashion to that of COMPOUND with its own set
   of callback operations.

   The addition of new server and callback operations within the
   COMPOUND and CB_COMPOUND request framework provides a means of
   extending the protocol in subsequent minor versions.

   Except for a small number of operations needed for session creation,
   server requests and callback requests are performed within the
   context of a session.  Sessions provide a client context for every
   request and support robust reply protection for non-idempotent
   requests.

2.4.  Client Identifiers and Client Owners

   For each operation that obtains or depends on locking state, the
   specific client needs to be identifiable by the server.

   Each distinct client instance is represented by a client ID.  A
   client ID is a 64-bit identifier representing a specific client at a
   given time.  The client ID is changed whenever the client re-
   initializes, and may change when the server re-initializes.  Client
   IDs are used to support lock identification and crash recovery.

   During steady state operation, the client ID associated with each
   operation is derived from the session (see Section 2.10) on which the
   operation is sent.  A session is associated with a client ID when the
   session is created.

   Unlike NFSv4.0, the only NFSv4.1 operations possible before a client
   ID is established are those needed to establish the client ID.




Shepler, et al.           Expires June 18, 2009                [Page 25]

Internet-Draft                   NFSv4.1                   December 2008


   A sequence of an EXCHANGE_ID operation followed by a CREATE_SESSION
   operation using that client ID (eir_clientid as returned from
   EXCHANGE_ID) is required to establish and confirm the client ID on
   the server.  Establishment of identification by a new incarnation of
   the client also has the effect of immediately releasing any locking
   state that a previous incarnation of that same client might have had
   on the server.  Such released state would include all lock, share
   reservation, layout state, and where the server is not supporting the
   CLAIM_DELEGATE_PREV claim type, all delegation state associated with
   the same client with the same identity.  For discussion of delegation
   state recovery, see Section 10.2.1.  For discussion of layout state
   recovery see Section 12.7.1.

   Releasing such state requires that the server be able to determine
   that one client instance is the successor of another.  Where this
   cannot be done, for any of a number of reasons, the locking state
   will remain for a time subject to lease expiration (see Section 8.3)
   and the new client will need to wait for such state to be removed, if
   it makes conflicting lock requests.

   Client identification is encapsulated in the following Client Owner
   data type:


   struct client_owner4 {
           verifier4       co_verifier;
           opaque          co_ownerid<NFS4_OPAQUE_LIMIT>;
   };

   The first field, co_verifier, is a client incarnation verifier.  The
   server will start the process of canceling the client's leased state
   if co_verifier is different than what the server has previously
   recorded for the identified client (as specified in the co_ownerid
   field).

   The second field, co_ownerid, is a variable-length string that
   uniquely defines the client so that subsequent instances of the same
   client bear the same co_ownerid with a different verifier.

   There are several considerations for how the client generates the
   co_ownerid string:

   o  The string should be unique so that multiple clients do not
      present the same string.  The consequences of two clients
      presenting the same string range from one client getting an error
      to one client having its leased state abruptly and unexpectedly
      canceled.




Shepler, et al.           Expires June 18, 2009                [Page 26]

Internet-Draft                   NFSv4.1                   December 2008


   o  The string should be selected so that subsequent incarnations
      (e.g. restarts) of the same client cause the client to present the
      same string.  The implementor is cautioned against an approach
      that requires the string to be recorded in a local file because
      this precludes the use of the implementation in an environment
      where there is no local disk and all file access is from an
      NFSv4.1 server.

   o  The string should be the same for each server network address that
      the client accesses.  This way, if a server has multiple
      interfaces, the client can trunk traffic over multiple network
      paths as described in Section 2.10.5.  (Note: the precise opposite
      was advised in the NFSv4.0 specification [29].)

   o  The algorithm for generating the string should not assume that the
      client's network address will not change, unless the client
      implementation knows it is using statically assigned network
      addresses.  This includes changes between client incarnations and
      even changes while the client is still running in its current
      incarnation.  Thus with dynamic address assignment, if the client
      includes just the client's network address in the co_ownerid
      string, there is a real risk that after the client gives up the
      network address, another client, using a similar algorithm for
      generating the co_ownerid string, would generate a conflicting
      co_ownerid string.

   Given the above considerations, an example of a well-generated
   co_ownerid string is one that includes the following (a brief,
   non-normative sketch follows this list):

   o  If applicable, the client's statically assigned network address.

   o  Additional information that tends to be unique, such as one or
      more of:

      *  The client machine's serial number (for privacy reasons, it is
         best to perform some one way function on the serial number).

      *  A MAC address (again, a one way function should be performed).

      *  The timestamp of when the NFSv4.1 software was first installed
         on the client (though this is subject to the previously
         mentioned caution about using information that is stored in a
         file, because the file might only be accessible over NFSv4.1).

      *  A true random number.  However since this number ought to be
         the same between client incarnations, this shares the same
         problem as that of using the timestamp of the software
         installation.



Shepler, et al.           Expires June 18, 2009                [Page 27]

Internet-Draft                   NFSv4.1                   December 2008


   o  For a user level NFSv4.1 client, it should contain additional
      information to distinguish the client from other user level
      clients running on the same host, such as a process identifier or
      other unique sequence.
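
   The following non-normative Python sketch shows one way a client
   implementation might construct a co_ownerid string along the lines
   suggested above.  The particular inputs, the hash, and the separator
   are illustrative assumptions, not protocol requirements.

   import hashlib
   import uuid

   def make_co_ownerid(static_address=None, install_timestamp=None,
                       process_tag=None):
       parts = ["nfsv41-client"]
       if static_address is not None:   # only if statically assigned
           parts.append(static_address)
       # One-way function over the MAC address (uuid.getnode() may fall
       # back to a random value if no MAC address is available).
       mac = uuid.getnode()
       parts.append(hashlib.sha256(mac.to_bytes(6, "big")).hexdigest())
       if install_timestamp is not None:
           parts.append(str(install_timestamp))
       if process_tag is not None:      # user-level clients only
           parts.append(process_tag)
       return "/".join(parts)

   if __name__ == "__main__":
       ownerid = make_co_ownerid(install_timestamp=1229299200)
       assert len(ownerid.encode()) <= 1024   # NFS4_OPAQUE_LIMIT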

   The client ID is assigned by the server (the eir_clientid result from
   EXCHANGE_ID) and should be chosen so that it will not conflict with a
   client ID previously assigned by the server.  This applies across
   server restarts.

   In the event of a server restart, a client may find out that its
   current client ID is no longer valid when it receives an
   NFS4ERR_STALE_CLIENTID error.  The precise circumstances depend on
   the characteristics of the sessions involved, specifically whether
   the session is persistent (see Section 2.10.6.5), but in each case
   the client will receive this error when it attempts to establish a
   new session with the existing client ID.  The error indicates that a
   new client ID needs to be obtained via EXCHANGE_ID and the new
   session established with that client ID.

   When a session is not persistent, the client will find out that it
   needs to create a new session as a result of getting an
   NFS4ERR_BADSESSION, since the session in question was lost as part of
   a server restart.  When the existing client ID is presented to a
   server as part of creating a session and that client ID is not
   recognized, as would happen after a server restart, the server will
   reject the request with the error NFS4ERR_STALE_CLIENTID.

   In the case of the session being persistent, the client will re-
   establish communication using the existing session after the restart.
   This session will be associated with the existing client ID but may
   only be used to retransmit operations that the client previously
   transmitted and did not see replies to.  Replies to operations that
   the server previously performed will come from the reply cache,
   otherwise NFS4ERR_DEADSESSION will be returned.  Hence, such a
   session is referred to as "dead".  In this situation, in order to
   perform new operations, the client needs to establish a new session.
   If an attempt is made to establish this new session with the existing
   client ID, the server will reject the request with
   NFS4ERR_STALE_CLIENTID.

   When NFS4ERR_STALE_CLIENTID is received in either of these
   situations, the client needs to obtain a new client ID by use of the
   EXCHANGE_ID operation, then use that client ID as the basis of a new
   session, and then proceed to any other necessary recovery for the
   server restart case (See Section 8.4.2).




Shepler, et al.           Expires June 18, 2009                [Page 28]

Internet-Draft                   NFSv4.1                   December 2008


   See the descriptions of EXCHANGE_ID (Section 18.35) and
   CREATE_SESSION (Section 18.36) for a complete specification of these
   operations.

2.4.1.  Upgrade from NFSv4.0 to NFSv4.1

   To facilitate upgrade from NFSv4.0 to NFSv4.1, a server may compare a
   client_owner4 in an EXCHANGE_ID with an nfs_client_id4 established
   using the SETCLIENTID operation of NFSv4.0.  A server that does so
   will allow an upgraded client to avoid waiting until the lease (i.e.,
   the lease established by the NFSv4.0 client instance) expires.  This
   requires the client_owner4 be constructed the same way as the
   nfs_client_id4.  If the latter's contents included the server's
   network address (per the recommendations of the NFSv4.0 specification
   [29]), and the NFSv4.1 client does not wish to use a client ID that
   prevents trunking, it should send two EXCHANGE_ID operations.  The
   first EXCHANGE_ID will have a client_owner4 equal to the
   nfs_client_id4.  This will clear the state created by the NFSv4.0
   client.  The second EXCHANGE_ID will not have the server's network
   address.  The state created for the second EXCHANGE_ID will not have
   to wait for lease expiration, because there will be no state to
   expire.

2.4.2.  Server Release of Client ID

   NFSv4.1 introduces a new operation called DESTROY_CLIENTID
   (Section 18.50) which the client SHOULD use to destroy a client ID it
   no longer needs.  This permits graceful, bilateral release of a
   client ID.  The operation cannot be used if there are sessions
   associated with the client ID, or state with an unexpired lease.

   If the server determines that the client holds no associated state
   for its client ID (including sessions, opens, locks, delegations,
   layouts, and wants), the server may choose to unilaterally release
   the client ID in order to conserve resources.  If the client contacts
   the server after this release, the server MUST ensure the client
   receives the appropriate error so that it will use the EXCHANGE_ID/
   CREATE_SESSION sequence to establish a new client ID.  The server
   ought to be very hesitant to release a client ID since the resulting
   work on the client to recover from such an event will be the same
   burden as if the server had failed and restarted.  Typically a server
   would not release a client ID unless there had been no activity from
   that client for many minutes.  As long as there are sessions, opens,
   locks, delegations, layouts, or wants, the server MUST NOT release
   the client ID.  See Section 2.10.12.1.4 for discussion on releasing
   inactive sessions.





Shepler, et al.           Expires June 18, 2009                [Page 29]

Internet-Draft                   NFSv4.1                   December 2008


2.4.3.  Resolving Client Owner Conflicts

   When the server gets an EXCHANGE_ID for a client owner that currently
   has no state, or that has state, but the lease has expired, the
   server MUST allow the EXCHANGE_ID, and confirm the new client ID if
   followed by the appropriate CREATE_SESSION.

   When the server gets an EXCHANGE_ID for a new incarnation of a client
   owner that currently has an old incarnation with state and an
   unexpired lease, the server is allowed to dispose of the state of the
   previous incarnation of the client owner if one of the following are
   true:

   o  The principal that created the client ID for the client owner is
      the same as the principal that is issuing the EXCHANGE_ID.  Note
      that if the client ID was created with SP4_MACH_CRED state
      protection (Section 18.35), the principal MUST be based on
      RPCSEC_GSS authentication, the RPCSEC_GSS service used MUST be
      integrity or privacy, and the same GSS mechanism and principal
      MUST be used as that used when the client ID was created.

   o  The client ID was established with SP4_SSV protection
      (Section 18.35, Section 2.10.8.3) and the client sends the
      EXCHANGE_ID with the security flavor set to RPCSEC_GSS using the
      GSS SSV mechanism (Section 2.10.9).

   o  The client ID was established with SP4_SSV protection, and under
      the conditions described herein, the EXCHANGE_ID was sent with
      SP4_MACH_CRED state protection.  Because the SSV might not persist
      across client and server restart, and because the first time a
      client sends EXCHANGE_ID to a server it does not have an SSV, the
      client MAY send the subsequent EXCHANGE_ID without an SSV
      RPCSEC_GSS handle.  Instead, as with SP4_MACH_CRED protection, the
      principal MUST be based on RPCSEC_GSS authentication, the
      RPCSEC_GSS service used MUST be integrity or privacy, and the same
      GSS mechanism and principal MUST be used as that used when the
      client ID was created.

   If none of the above situations apply, the server MUST return
   NFS4ERR_CLID_INUSE.

   If the server accepts the principal and co_ownerid as matching that
   which created the client ID, and the co_verifier in the EXCHANGE_ID
   differs from the co_verifier used when the client ID was created,
   then after the server receives a CREATE_SESSION that confirms the
   client ID, the server deletes state.  If the co_verifier values are
   the same (e.g., the client is either updating properties of the
   client ID (Section 18.35), or the client is attempting trunking



Shepler, et al.           Expires June 18, 2009                [Page 30]

Internet-Draft                   NFSv4.1                   December 2008


   (Section 2.10.5)), the server MUST NOT delete state.
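
   The decision process above can be summarized by the following
   simplified, non-normative Python sketch.  The record fields and
   helper values are illustrative; only the outcomes (allow,
   NFS4ERR_CLID_INUSE, and whether state is deleted after a confirming
   CREATE_SESSION) are taken from the text.

   NFS4ERR_CLID_INUSE = 10017

   def exchange_id_decision(existing, request):
       """existing: unexpired state for this client owner, or None;
       request: fields of the incoming EXCHANGE_ID."""
       if existing is None or existing["lease_expired"]:
           return "allow", False      # nothing to delete

       same_principal = (
           request["flavor"] == "RPCSEC_GSS" and
           request["gss_service"] in ("integrity", "privacy") and
           request["gss_mech"] == existing["gss_mech"] and
           request["principal"] == existing["principal"])
       ssv_case = (existing["state_protect"] == "SP4_SSV" and
                   request["gss_mech"] == "SSV")

       if not (same_principal or ssv_case):
           return "error", NFS4ERR_CLID_INUSE

       # Delete the old incarnation's state (after CREATE_SESSION
       # confirms) only if the incarnation verifier changed.
       delete_state = request["co_verifier"] != existing["co_verifier"]
       return "allow", delete_state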

2.5.  Server Owners

   The Server Owner is similar to a Client Owner (Section 2.4), but
   unlike the Client Owner, there is no shorthand server ID.  The Server
   Owner is defined in the following data type:


   struct server_owner4 {
    uint64_t       so_minor_id;
    opaque         so_major_id<NFS4_OPAQUE_LIMIT>;
   };

   The Server Owner is returned from EXCHANGE_ID.  When the so_major_id
   fields are the same in two EXCHANGE_ID results, the connections each
   EXCHANGE_ID was sent over can be assumed to address the same Server
   (as defined in Section 1.5).  If the so_minor_id fields are also the
   same, then not only do both connections connect to the same server,
   but the session can be shared across both connections.  The reader is
   cautioned that multiple servers may deliberately or accidentally
   claim to have the same so_major_id or so_major_id/so_minor_id; the
   reader should examine Section 2.10.5 and Section 18.35 in order to
   avoid acting on falsely matching Server Owner values.
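
   The interpretation of matching Server Owner values can be sketched
   non-normatively, in Python, as follows; as noted above, a real
   client must also verify such claims before acting on them.

   def trunking_hint(result_a, result_b):
       """result_*: dicts holding the so_major_id (opaque bytes) and
       so_minor_id (integer) from two EXCHANGE_ID results."""
       if result_a["so_major_id"] != result_b["so_major_id"]:
           return "different servers"
       if result_a["so_minor_id"] != result_b["so_minor_id"]:
           return "same server; sessions cannot be shared"
       return "same server; the session can be shared across both"

   if __name__ == "__main__":
       r1 = {"so_major_id": b"example-server", "so_minor_id": 7}
       r2 = {"so_major_id": b"example-server", "so_minor_id": 7}
       assert trunking_hint(r1, r2).startswith("same server; the session")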

   The considerations for generating a so_major_id are similar to that
   for generating a co_ownerid string (see Section 2.4).  The
   consequences of two servers generating conflicting so_major_id values
   are less dire than they are for co_ownerid conflicts because the
   client can use RPCSEC_GSS to compare the authenticity of each server
   (see Section 2.10.5).

2.6.  Security Service Negotiation

   With the NFSv4.1 server potentially offering multiple security
   mechanisms, the client needs a method to determine or negotiate which
   mechanism is to be used for its communication with the server.  The
   NFS server may have multiple points within its file system namespace
   that are available for use by NFS clients.  These points can be
   considered security policy boundaries, and in some NFS
   implementations are tied to NFS export points.  In turn the NFS
   server may be configured such that each of these security policy
   boundaries may have different or multiple security mechanisms in use.

   The security negotiation between client and server SHOULD be done
   with a secure channel to eliminate the possibility of a third party
   intercepting the negotiation sequence and forcing the client and
   server to choose a lower level of security than required or desired.



Shepler, et al.           Expires June 18, 2009                [Page 31]

Internet-Draft                   NFSv4.1                   December 2008


   See Section 21 for further discussion.

2.6.1.  NFSv4.1 Security Tuples

   An NFS server can assign one or more "security tuples" to each
   security policy boundary in its namespace.  Each security tuple
   consists of a security flavor (see Section 2.2.1.1), and if the
   flavor is RPCSEC_GSS, a GSS-API mechanism OID, a GSS-API quality of
   protection, and an RPCSEC_GSS service.

2.6.2.  SECINFO and SECINFO_NO_NAME

   The SECINFO and SECINFO_NO_NAME operations allow the client to
   determine, on a per filehandle basis, what security tuple is to be
   used for server access.  In general, the client will not have to use
   either operation except during initial communication with the server
   or when the client crosses security policy boundaries at the server.
   However, the server's policies may also change at any time and force
   the client to negotiate a new security tuple.

   Where the use of different security tuples would affect the type of
   access that would be allowed if a request was sent over the same
   connection used for the SECINFO or SECINFO_NO_NAME operation (e.g.
   read-only vs. read-write) access, security tuples that allow greater
   access should be presented first.  Where the general level of access
   is the same and different security flavors limit the range of
   principals whose privileges are recognized (e.g. allowing or
   disallowing root access), flavors supporting the greatest range of
   principals should be listed first.
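
   A non-normative sketch of such an ordering policy, in Python, is
   shown below; the ranking keys are local server bookkeeping, not
   protocol fields.

   def order_security_tuples(candidates):
       """candidates: iterable of dicts with illustrative keys:
          access      -- 2 for read-write, 1 for read-only
          principals  -- breadth of principals recognized (larger is
                         wider, e.g. whether root access is allowed)
          tuple       -- the (flavor, OID, QOP, service) value itself"""
       ranked = sorted(candidates,
                       key=lambda c: (c["access"], c["principals"]),
                       reverse=True)
       return [c["tuple"] for c in ranked]

   if __name__ == "__main__":
       tuples = [
           {"access": 1, "principals": 1,
            "tuple": ("RPCSEC_GSS", "krb5i")},
           {"access": 2, "principals": 1,
            "tuple": ("RPCSEC_GSS", "krb5p")},
           {"access": 2, "principals": 2,
            "tuple": ("RPCSEC_GSS", "krb5")},
       ]
       assert order_security_tuples(tuples)[0] == ("RPCSEC_GSS", "krb5")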

2.6.3.  Security Error

   Based on the assumption that each NFSv4.1 client and server MUST
   support a minimum set of security (i.e., Kerberos V5 under
   RPCSEC_GSS), the NFS client will initiate file access to the server
   with one of the minimal security tuples.  During communication with
   the server, the client may receive an NFS error of NFS4ERR_WRONGSEC.
   This error allows the server to notify the client that the security
   tuple currently being used contravenes the server's security policy.
   The client is then responsible for determining (see Section 2.6.3.1)
   what security tuples are available at the server and choosing one
   which is appropriate for the client.

2.6.3.1.  Using NFS4ERR_WRONGSEC, SECINFO, and SECINFO_NO_NAME

   This section explains the mechanics of NFSv4.1 security
   negotiation.




Shepler, et al.           Expires June 18, 2009                [Page 32]

Internet-Draft                   NFSv4.1                   December 2008


2.6.3.1.1.  Put Filehandle Operations

   The term "put filehandle operation" refers to PUTROOTFH, PUTPUBFH,
   PUTFH, and RESTOREFH.  Each of the subsections herein describes how
   the server handles a subseries of operations that starts with a put
   filehandle operation.

2.6.3.1.1.1.  Put Filehandle Operation + SAVEFH

   The client is saving a filehandle for a future RESTOREFH, LINK, or
   RENAME.  SAVEFH MUST NOT return NFS4ERR_WRONGSEC.  To determine
   whether the put filehandle operation returns NFS4ERR_WRONGSEC or not,
   the server implementation pretends SAVEFH is not in the series of
   operations and examines which of the situations described in the
   other subsections of Section 2.6.3.1.1 apply.

2.6.3.1.1.2.  Two or More Put Filehandle Operations

   For a series of N put filehandle operations, the server MUST NOT
   return NFS4ERR_WRONGSEC to the first N-1 put filehandle operations.
   The N'th put filehandle operation is handled as if it is the first in
   a subseries of operations.  For example if the server received PUTFH,
   PUTROOTFH, LOOKUP, then the PUTFH is ignored for NFS4ERR_WRONGSEC
   purposes, and the PUTROOTFH, LOOKUP subseries is processed
   according to Section 2.6.3.1.1.3.

2.6.3.1.1.3.  Put Filehandle Operation + LOOKUP (or OPEN of an Existing
              Name)

   This situation also applies to a put filehandle operation followed by
   a LOOKUP or an OPEN operation that specifies an existing component
   name.

   In this situation, the client is potentially crossing a security
   policy boundary, and the set of security tuples the parent directory
   supports may differ from those of the child.  The server
   implementation may decide whether to impose any restrictions on
   security policy administration.  There are at least three approaches
   (sec_policy_child is the tuple set of the child export,
   sec_policy_parent is that of the parent).

     a)  sec_policy_child <= sec_policy_parent (<= for subset).  This
      means that the set of security tuples specified on the security
      policy of a child directory is always a subset of that of its
      parent directory.






Shepler, et al.           Expires June 18, 2009                [Page 33]

Internet-Draft                   NFSv4.1                   December 2008


     b)  sec_policy_child ^ sec_policy_parent != {} (^ for intersection,
      {} for the empty set).  This means that the security tuples
      specified on the security policy of a child directory always have
      a non-empty intersection with that of the parent.

     c)  sec_policy_child ^ sec_policy_parent == {}.  This means that
      the set of tuples specified on the security policy of a child
      directory may not intersect with that of the parent.  In other
      words, there are no restrictions on how the system administrator
      may set up these tuples.

   In order for a server to support approaches (b) (for the case when a
   client chooses a flavor that is not a member of sec_policy_parent)
   and (c), the put filehandle operation cannot return NFS4ERR_WRONGSEC
   when there is a security tuple mismatch.  Instead, it should be
   returned from the LOOKUP (or OPEN by existing component name) that
   follows.

   Since the above guideline does not contradict approach (a), it should
   be followed in general.  Even if approach (a) is implemented, it is
   possible for the security tuple used to be acceptable for the target
   of LOOKUP but not for the filehandles used in the put filehandle
   operation.  The put filehandle operation could be a PUTROOTFH or
   PUTPUBFH, where the client cannot know the security tuples for the
   root or public filehandle.  Or the security policy for the filehandle
   used by the put filehandle operation could have changed since the
   time the filehandle was obtained.

   Therefore, an NFSv4.1 server MUST NOT return NFS4ERR_WRONGSEC in
   response to the put filehandle operation if the operation is
   immediately followed by a LOOKUP or an OPEN by component name.

2.6.3.1.1.4.  Put Filehandle Operation + LOOKUPP

   Since SECINFO only works its way down, there is no way LOOKUPP can
   return NFS4ERR_WRONGSEC without SECINFO_NO_NAME.  SECINFO_NO_NAME
   solves this issue via style SECINFO_STYLE4_PARENT, which works in the
   opposite direction from SECINFO.  As with Section 2.6.3.1.1.3, a put
   filehandle operation that is followed by a LOOKUPP MUST NOT return
   NFS4ERR_WRONGSEC.  If the server does not support SECINFO_NO_NAME,
   the client's only recourse is to send the put filehandle operation,
   LOOKUPP, GETFH sequence of operations with every security tuple it
   supports.

   Regardless of whether SECINFO_NO_NAME is supported, an NFSv4.1 server
   MUST NOT return NFS4ERR_WRONGSEC in response to a put filehandle
   operation if the operation is immediately followed by a LOOKUPP.




Shepler, et al.           Expires June 18, 2009                [Page 34]

Internet-Draft                   NFSv4.1                   December 2008


2.6.3.1.1.5.  Put Filehandle Operation + SECINFO/SECINFO_NO_NAME

   A security sensitive client is allowed to choose a strong security
   tuple when querying a server to determine a file object's permitted
   security tuples.  The security tuple chosen by the client does not
   have to be included in the tuple list of the security policy of
   either the parent directory indicated in the put filehandle
   operation, or the child file object indicated in SECINFO (or any
   parent directory indicated in SECINFO_NO_NAME).  Of course the
   server has to be configured for whatever security tuple the client
   selects, otherwise the request will fail at the RPC layer with an
   appropriate authentication error.

   In theory, there is no connection between the security flavor used by
   SECINFO or SECINFO_NO_NAME and those supported by the security
   policy.  But in practice, the client may start looking for strong
   flavors from those supported by the security policy, followed by
   those in the REQUIRED set.

   The NFSv4.1 server MUST NOT return NFS4ERR_WRONGSEC to a put
   filehandle operation that is immediately followed by SECINFO or
   SECINFO_NO_NAME.  The NFSv4.1 server MUST NOT return NFS4ERR_WRONGSEC
   from SECINFO or SECINFO_NO_NAME.

2.6.3.1.1.6.  Put Filehandle Operation + Nothing

   The NFSv4.1 server MUST NOT return NFS4ERR_WRONGSEC.

2.6.3.1.1.7.  Put Filehandle Operation + Anything Else

   "Anything Else" includes OPEN by filehandle.

   The security policy enforcement applies to the filehandle specified
   in the put filehandle operation.  Therefore the put filehandle
   operation MUST return NFS4ERR_WRONGSEC when there is a security tuple
   mismatch.  This avoids the complexity of adding NFS4ERR_WRONGSEC as
   an allowable error to every other operation.

   A COMPOUND containing the series put filehandle operation +
   SECINFO_NO_NAME (style SECINFO_STYLE4_CURRENT_FH) is an efficient way
   for the client to recover from NFS4ERR_WRONGSEC.
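
   The recovery pattern can be sketched, non-normatively, as follows.
   The send_compound() helper and the policy of picking the first
   permitted tuple are assumptions made only for illustration.

   NFS4_OK = 0
   NFS4ERR_WRONGSEC = 10016

   def access_with_recovery(send_compound, fh, ops, tuple_in_use):
       """Try the request; on NFS4ERR_WRONGSEC, query the permitted
       tuples for the current filehandle and retry once."""
       status, results = send_compound(tuple_in_use,
                                       [("PUTFH", fh)] + list(ops))
       if status != NFS4ERR_WRONGSEC:
           return status, results

       status, results = send_compound(
           tuple_in_use,
           [("PUTFH", fh),
            ("SECINFO_NO_NAME", "SECINFO_STYLE4_CURRENT_FH")])
       if status != NFS4_OK:
           return status, results
       permitted = results[-1]          # tuples from SECINFO_NO_NAME

       return send_compound(permitted[0], [("PUTFH", fh)] + list(ops))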

   The NFSv4.1 server MUST NOT return NFS4ERR_WRONGSEC to any operation
   other than a put filehandle operation, LOOKUP, LOOKUPP, and OPEN (by
   component name).






Shepler, et al.           Expires June 18, 2009                [Page 35]

Internet-Draft                   NFSv4.1                   December 2008


2.6.3.1.1.8.  Operations after SECINFO and SECINFO_NO_NAME

   Suppose a client sends a COMPOUND procedure containing the series
   SEQUENCE, PUTFH, SECINFO_NO_NAME, READ, and suppose the security tuple
   used does not match that required for the target file.  By rule (see
   Section 2.6.3.1.1.5), neither PUTFH nor SECINFO_NO_NAME can return
   NFS4ERR_WRONGSEC.  By rule (see Section 2.6.3.1.1.7), READ cannot
   return NFS4ERR_WRONGSEC.  The issue is resolved by the fact that
   SECINFO and SECINFO_NO_NAME consume the current filehandle (note that
   this is a change from NFSv4.0).  This leaves no current filehandle
   for READ to use, and READ returns NFS4ERR_NOFILEHANDLE.

2.6.3.1.2.  LINK and RENAME

   The LINK and RENAME operations use both the current and saved
   filehandles.  When the current filehandle is injected into a series
   of operations via a put filehandle operation, the server MUST return
   NFS4ERR_WRONGSEC, per Section 2.6.3.1.1.  LINK and RENAME MAY return
   NFS4ERR_WRONGSEC if the security policy of the saved filehandle
   rejects the security flavor used in the COMPOUND request's
   credentials.  If the server does so, then if there is no intersection
   between the security policies of saved and current filehandles, this
   means it will be impossible for the client to perform the intended
   LINK or RENAME operation.

   For example, suppose the client sends this COMPOUND request:
   SEQUENCE, PUTFH bFH, SAVEFH, PUTFH aFH, RENAME "c" "d", where
   filehandles bFH and aFH refer to different directories.  Suppose no
   common security tuple exists between the security policies of aFH and
   bFH.  If the client sends the request using credentials acceptable to
   bFH's security policy but not aFH's policy, then the PUTFH aFH
   operation will fail with NFS4ERR_WRONGSEC.  After a SECINFO_NO_NAME
   request, the client sends SEQUENCE, PUTFH bFH, SAVEFH, PUTFH aFH,
   RENAME "c" "d", using credentials acceptable to aFH's security
   policy, but not bFH's policy.  The server returns NFS4ERR_WRONGSEC on
   the RENAME operation.

   To prevent a client from looping endlessly, alternating a request
   containing LINK or RENAME with a request containing SECINFO_NO_NAME,
   the server MUST detect when the security policies of the current and
   saved filehandles have no mutually acceptable security tuple, and
   MUST NOT return NFS4ERR_WRONGSEC in that situation.  Instead the
   server MUST return NFS4ERR_XDEV.

   Thus while a server MAY return NFS4ERR_WRONGSEC from LINK and RENAME,
   the server implementor may reasonably decide the consequences are not
   worth the security benefits, and so allow the security policy of the
   current filehandle to override that of the saved filehandle.



Shepler, et al.           Expires June 18, 2009                [Page 36]

Internet-Draft                   NFSv4.1                   December 2008


2.7.  Minor Versioning

   To address the requirement of an NFS protocol that can evolve as the
   need arises, the NFSv4.1 protocol contains the rules and framework to
   allow for future minor changes or versioning.

   The base assumption with respect to minor versioning is that any
   future accepted minor version will be documented in one or more
   standards track RFCs.  Minor version zero of the NFSv4 protocol is
   represented by [29], and minor version one is represented by this
   document [[Comment.1: RFC Editor: change "document" to "RFC" when we
   publish]].  The COMPOUND and CB_COMPOUND procedures support the
   encoding of the minor version being requested by the client.

   The following items represent the basic rules for the development of
   minor versions.  Note that a future minor version may modify or add
   to the following rules as part of the minor version definition.

   1.   Procedures are not added or deleted

        To maintain the general RPC model, NFSv4 minor versions will not
        add to or delete procedures from the NFS program.

   2.   Minor versions may add operations to the COMPOUND and
        CB_COMPOUND procedures.

        The addition of operations to the COMPOUND and CB_COMPOUND
        procedures does not affect the RPC model.

        *  Minor versions may append attributes to the bitmap4 that
           represents sets of attributes and the fattr4 that represents
           sets of attribute values.

           This allows for the expansion of the attribute model to allow
           for future growth or adaptation.

        *  Minor version X must append any new attributes after the last
           documented attribute.

           Since attribute results are specified as an opaque array of
           per-attribute XDR encoded results, the complexity of adding
           new attributes in the midst of the current definitions would
           be too burdensome.

   3.   Minor versions must not modify the structure of an existing
        operation's arguments or results.

        Again the complexity of handling multiple structure definitions



Shepler, et al.           Expires June 18, 2009                [Page 37]

Internet-Draft                   NFSv4.1                   December 2008


        for a single operation is too burdensome.  New operations should
        be added instead of modifying existing structures for a minor
        version.

        This rule does not preclude the following adaptations in a minor
        version.

        *  adding bits to flag fields such as new attributes to
           GETATTR's bitmap4 data type and providing corresponding
           variants of opaque arrays, such as a notify4 used together
           with such bitmaps.

        *  adding bits to existing attributes like ACLs that have flag
           words

        *  extending enumerated types (including NFS4ERR_*) with new
           values

        *  adding cases to a switched union

   4.   Minor versions must not modify the structure of existing
        attributes.

   5.   Minor versions must not delete operations.

        This prevents the potential reuse of a particular operation
        "slot" in a future minor version.

   6.   Minor versions must not delete attributes.

   7.   Minor versions must not delete flag bits or enumeration values.

   8.   Minor versions may declare an operation MUST NOT be implemented.

        Specifying an operation MUST NOT be implemented is equivalent to
        obsoleting an operation.  For the client, it means that the
        operation should not be sent to the server.  For the server, an
        NFS error can be returned as opposed to "dropping" the request
        as an XDR decode error.  This approach allows for the
        obsolescence of an operation while maintaining its structure so
        that a future minor version can reintroduce the operation.

        1.  Minor versions may declare an attribute MUST NOT be
            implemented.

        2.  Minor versions may declare a flag bit or enumeration value
            MUST NOT be implemented.




Shepler, et al.           Expires June 18, 2009                [Page 38]

Internet-Draft                   NFSv4.1                   December 2008


   9.   Minor versions may downgrade features from REQUIRED to
        RECOMMENDED, or RECOMMENDED to OPTIONAL.

   10.  Minor versions may upgrade features from OPTIONAL to RECOMMENDED
        or RECOMMENDED to REQUIRED.

   11.  A client and server that support minor version X should support
        minor versions 0 (zero) through X-1 as well.

   12.  Except for infrastructural changes, a minor version must not
        introduce REQUIRED new features.

        This rule allows for the introduction of new functionality and
        forces the use of implementation experience before designating a
        feature as REQUIRED.  On the other hand, some classes of
        features are infrastructural and have broad effects.  Allowing
        infrastructural features to be RECOMMENDED or OPTIONAL
        complicates implementation of the minor version.

   13.  A client MUST NOT attempt to use a stateid, filehandle, or
        similar returned object from the COMPOUND procedure with minor
        version X for another COMPOUND procedure with minor version Y,
        where X != Y.

2.8.  Non-RPC-based Security Services

   As described in Section 2.2.1.1.1.1, NFSv4.1 relies on RPC for
   identification, authentication, integrity, and privacy.  NFSv4.1
   itself provides or enables additional security services as described
   in the next several subsections.

2.8.1.  Authorization

   Authorization to access a file object via an NFSv4.1 operation is
   ultimately determined by the NFSv4.1 server.  A client can
   predetermine its access to a file object via the OPEN (Section 18.16)
   and the ACCESS (Section 18.1) operations.

   Principals with appropriate access rights can modify the
   authorization on a file object via the SETATTR (Section 18.30)
   operation.  Attributes that affect access rights include: mode,
   owner, owner_group, acl, dacl, and sacl.  See Section 5.

2.8.2.  Auditing

   NFSv4.1 provides auditing on a per file object basis, via the acl and
   sacl attributes as described in Section 6.  It is outside the scope
   of this specification to specify audit log formats or management



Shepler, et al.           Expires June 18, 2009                [Page 39]

Internet-Draft                   NFSv4.1                   December 2008


   policies.

2.8.3.  Intrusion Detection

   NFSv4.1 provides alarm control on a per file object basis, via the
   acl and sacl attributes as described in Section 6.  Alarms may serve
   as the basis for intrusion detection.  It is outside the scope of
   this specification to specify heuristics for detecting intrusion via
   alarms.

2.9.  Transport Layers

2.9.1.  REQUIRED and RECOMMENDED Properties of Transports

   NFSv4.1 works over RDMA and non-RDMA-based transports with the
   following attributes:

   o  The transport supports reliable delivery of data, which NFSv4.1
      requires but neither NFSv4.1 nor RPC has facilities for ensuring.
      [33]

   o  The transport delivers data in the order it was sent.  Ordered
      delivery simplifies detection of transmit errors, and simplifies
      the sending of arbitrary sized requests and responses, via the
      record marking protocol [3].

   Where an NFSv4.1 implementation supports operation over the IP
   network protocol, any transport used between NFS and IP MUST be among
   the IETF-approved congestion control transport protocols.  At the
   time this document was written, the only two transports that had the
   above attributes were TCP and SCTP.  To enhance the possibilities for
   interoperability, an NFSv4.1 implementation MUST support operation
   over the TCP transport protocol.

   Even if NFSv4.1 is used over a non-IP network protocol, it is
   RECOMMENDED that the transport support congestion control.

   It is permissible for a connectionless transport to be used under
   NFSv4.1; however, reliable and in-order delivery of data combined
   with congestion control by the connectionless transport is REQUIRED;
   as a consequence, UDP by itself MUST NOT be used as an NFSv4.1
   transport.
   NFSv4.1 assumes that a client transport address and server transport
   address used to send data over a transport together constitute a
   connection, even if the underlying transport eschews the concept of a
   connection.






Shepler, et al.           Expires June 18, 2009                [Page 40]

Internet-Draft                   NFSv4.1                   December 2008


2.9.2.  Client and Server Transport Behavior

   If a connection-oriented transport (e.g.  TCP) is used, the client
   and server SHOULD use long lived connections for at least three
   reasons:

   1.  This will prevent the weakening of the transport's congestion
       control mechanisms via short lived connections.

   2.  This will improve performance for the WAN environment by
       eliminating the need for connection setup handshakes.

   3.  The NFSv4.1 callback model differs from NFSv4.0, and requires the
       client and server to maintain a client-created backchannel (see
       Section 2.10.3.1) for the server to use.

   In order to reduce congestion, if a connection-oriented transport is
   used, and the request is not the NULL procedure,

   o  A requester MUST NOT retry a request unless the connection the
      request was sent over was lost before the reply was received.

   o  A replier MUST NOT silently drop a request, even if the request is
      a retry.  (The silent drop behavior of RPCSEC_GSS [4] does not
      apply because this behavior happens at the RPCSEC_GSS layer, a
      lower layer in the request processing).  Instead, the replier
      SHOULD return an appropriate error (see Section 2.10.6.1) or it
      MAY disconnect the connection.
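
   A non-normative sketch of the requester side of these rules, in
   Python, follows; the transport object, its call() method, and the
   ConnectionLost exception are assumptions for illustration.

   class ConnectionLost(Exception):
       pass

   def send_request(transport, request, reconnect):
       """Retry only when the connection is lost before the reply
       arrives; any reply, including an error reply, ends the call."""
       while True:
           try:
               return transport.call(request)
           except ConnectionLost:
               transport = reconnect()   # retry over a new connection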

   When sending a reply, the replier MUST send the reply to the same
   full network address (e.g. if using an IP-based transport, the source
   port of the requester is part of the full network address) that the
   requester sent the request from.  If using a connection-oriented
   transport, replies MUST be sent on the same connection the request
   was received from.

   If a connection is dropped after the replier receives the request but
   before the replier sends the reply, the replier might have a pending
   reply.  If a connection is established with the same source and
   destination full network address as the dropped connection, then the
   replier MUST NOT send the reply until the client retries the request.
   The reason for this prohibition is that the client MAY retry a
   request over a different connection than is associated with the
   session.

   When using RDMA transports there are other reasons for not tolerating
   retries over the same connection:




Shepler, et al.           Expires June 18, 2009                [Page 41]

Internet-Draft                   NFSv4.1                   December 2008


   o  RDMA transports use "credits" to enforce flow control, where a
      credit is a right to a peer to transmit a message.  If one peer
      were to retransmit a request (or reply), it would consume an
      additional credit.  If the replier retransmitted a reply, it would
      certainly result in an RDMA connection loss, since the requester
      would typically only post a single receive buffer for each
      request.  If the requester retransmitted a request, the additional
      credit consumed on the server might lead to RDMA connection
      failure unless the client accounted for it and decreased its
      available credit, leading to wasted resources.

   o  RDMA credits present a new issue to the reply cache in NFSv4.1.
      The reply cache may be used when a connection within a session is
      lost, such as after the client reconnects.  Credit information is
      a dynamic property of the RDMA connection, and stale values must
      not be replayed from the cache.  This implies that the reply cache
      contents must not be blindly used when replies are sent from it,
      and credit information appropriate to the channel must be
      refreshed by the RPC layer.

   In addition, as described in Section 2.10.6.2, while a session is
   active, the NFSv4.1 requester MUST NOT stop waiting for a reply.

2.9.3.  Ports

   Historically, NFSv3 servers have listened over TCP port 2049.  The
   registered port 2049 [34] for the NFS protocol should be the default
   configuration.  NFSv4.1 clients SHOULD NOT use the RPC binding
   protocols as described in [35].

2.10.  Session

   NFSv4.1 clients and servers MUST support and MUST use the session
   feature as described in this section.

2.10.1.  Motivation and Overview

   Previous versions and minor versions of NFS have suffered from the
   following:

   o  Lack of support for Exactly Once Semantics (EOS).  This includes
      lack of support for EOS through server failure and recovery.

   o  Limited callback support, including no support for sending
      callbacks through firewalls, and races between replies to normal
      requests and callbacks.





Shepler, et al.           Expires June 18, 2009                [Page 42]

Internet-Draft                   NFSv4.1                   December 2008


   o  Limited trunking over multiple network paths.

   o  Requiring machine credentials for fully secure operation.

   Through the introduction of a session, NFSv4.1 addresses the above
   shortfalls with practical solutions:

   o  EOS is enabled by a reply cache with a bounded size, making it
      feasible to keep the cache in persistent storage and enable EOS
      through server failure and recovery.  One reason that previous
      revisions of NFS did not support EOS was that some EOS
      approaches often limited parallelism.  As will be explained in
      Section 2.10.6, NFSv4.1 supports both EOS and unlimited
      parallelism.

   o  The NFSv4.1 client (defined in Section 1.5, Paragraph 2) creates
      transport connections and provides them to the server to use for
      sending callback requests, thus solving the firewall issue
      (Section 18.34).  Races between responses from client requests,
      and callbacks caused by the requests are detected via the
      session's sequencing properties which are a consequence of EOS
      (Section 2.10.6.3).

   o  The NFSv4.1 client can add an arbitrary number of connections to
      the session, and thus provide trunking (Section 2.10.5).

   o  The NFSv4.1 client and server produce a session key independent
      of client and server machine credentials, which can be used to
      compute a digest for protecting critical session management
      operations (Section 2.10.8.3).

   o  The NFSv4.1 client can also create secure RPCSEC_GSS contexts for
      use by the session's backchannel that do not require the server to
      authenticate to a client machine principal (Section 2.10.8.2).

   A session is a dynamically created, long-lived server object created
   by a client, used over time from one or more transport connections.
   Its function is to maintain the server's state relative to the
   connection(s) belonging to a client instance.  This state is entirely
   independent of the connection itself, and indeed the state exists
   whether the connection exists or not.  A client may have one or more
   sessions associated with it so that client-associated state may be
   accessed using any of the sessions associated with that client's
   client ID, when connections are associated with those sessions.  When
   no connections are associated with any of a client ID's sessions for
   an extended time, such objects as locks, opens, delegations, layouts,
   etc. are subject to expiration.  The session serves as an object
   representing a means of access by a client to the associated client



Shepler, et al.           Expires June 18, 2009                [Page 43]

Internet-Draft                   NFSv4.1                   December 2008


   state on the server, independent of the physical means of access to
   that state.

   A single client may create multiple sessions.  A single session MUST
   NOT serve multiple clients.

2.10.2.  NFSv4 Integration

   Sessions are part of NFSv4.1 and not NFSv4.0.  Normally, a major
   infrastructure change such as sessions would require a new major
   version number to an ONC RPC program like NFS.  However, because
   NFSv4 encapsulates its functionality in a single procedure, COMPOUND,
   and because COMPOUND can support an arbitrary number of operations,
   sessions have been added to NFSv4.1 with little difficulty.  COMPOUND
   includes a minor version number field, and for NFSv4.1 this minor
   version is set to 1.  When the NFSv4 server processes a COMPOUND with
   the minor version set to 1, it expects a different set of operations
   than it does for NFSv4.0.  NFSv4.1 defines the SEQUENCE operation,
   which is required for every COMPOUND that operates over an
   established session, with the exception of some session
   administration operations, such as DESTROY_SESSION (Section 18.37).

2.10.2.1.  SEQUENCE and CB_SEQUENCE

   In NFSv4.1, when the SEQUENCE operation is present, it MUST be the
   first operation in the COMPOUND procedure.  The primary purpose of
   SEQUENCE is to carry the session identifier.  The session identifier
   associates all other operations in the COMPOUND procedure with a
   particular session.  SEQUENCE also contains required information for
   maintaining EOS (see Section 2.10.6).  Session-enabled NFSv4.1
   COMPOUND requests thus have the form:

       +-----+--------------+-----------+------------+-----------+----
       | tag | minorversion | numops    |SEQUENCE op | op + args | ...
       |     |   (== 1)     | (limited) |  + args    |           |
       +-----+--------------+-----------+------------+-----------+----

   and the replies have the form:

       +------------+-----+--------+-------------------------------+--//
       |last status | tag | numres |status + SEQUENCE op + results |  //
       +------------+-----+--------+-------------------------------+--//
               //-----------------------+----
               // status + op + results | ...
               //-----------------------+----
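
   The following fragment is a non-normative sketch, in C, of the
   ordering rule illustrated above: an NFSv4.1 COMPOUND sent over an
   established session carries minor version 1 and has SEQUENCE as its
   first operation.  The structure, field names, and operation numbers
   shown are illustrative only and are not the XDR definitions used by
   the protocol.

      /* Illustrative only; not the protocol's XDR definitions. */
      #include <stdbool.h>
      #include <stdio.h>

      enum { OP_SEQUENCE = 53, OP_PUTFH = 22, OP_GETATTR = 9 };

      struct compound_args {
          unsigned int minorversion;  /* 1 for NFSv4.1              */
          unsigned int numops;
          unsigned int ops[16];       /* operation codes, in order  */
      };

      /* True if the request is well formed for use over an
       * established session: minor version 1, SEQUENCE first. */
      static bool session_compound_ok(const struct compound_args *c)
      {
          return c->minorversion == 1 &&
                 c->numops >= 1 &&
                 c->ops[0] == OP_SEQUENCE;
      }

      int main(void)
      {
          struct compound_args c = { 1, 3,
              { OP_SEQUENCE, OP_PUTFH, OP_GETATTR } };
          printf("well formed: %s\n",
                 session_compound_ok(&c) ? "yes" : "no");
          return 0;
      }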

   A CB_COMPOUND procedure request and reply has a similar form to
   COMPOUND, but instead of a SEQUENCE operation, there is a CB_SEQUENCE
   operation.  CB_COMPOUND also has an additional field called
   "callback_ident", which is superfluous in NFSv4.1 and MUST be ignored
   by the client.  CB_SEQUENCE has the same information as SEQUENCE, and
   also includes other information needed to resolve callback races
   (Section 2.10.6.3).

2.10.2.2.  Client ID and Session Association

   Each client ID (Section 2.4) can have zero or more active sessions.
   A client ID and associated session are required to perform file
   access in NFSv4.1.  Each time a session is used (whether by a client
   sending a request to the server, or the client replying to a callback
   request from the server), the state leased to its associated client
   ID is automatically renewed.

   State such as share reservations, locks, delegations, and layouts
   (Section 1.6.4) is tied to the client ID.  Client state is not tied
   to any individual session.  Successive state changing operations from
   a given state owner MAY go over different sessions, provided the
   session is associated with the same client ID.  A callback MAY arrive
   over a different session than the one that originally
   acquired the state pertaining to the callback.  For example, if
   session A is used to acquire a delegation, a request to recall the
   delegation MAY arrive over session B if both sessions are associated
   with the same client ID.  Section 2.10.8.1 and Section 2.10.8.2
   discuss the security considerations around callbacks.

2.10.3.  Channels

   A channel is not a connection.  A channel represents the direction
   in which ONC RPC requests are sent.

   Each session has one or two channels: the fore channel and the
   backchannel.  Because there are at most two channels per session, and
   because each channel has a distinct purpose, channels are not
   assigned identifiers.

   The fore channel is used for ordinary requests from the client to the
   server, and carries COMPOUND requests and responses.  A session
   always has a fore channel.

   The backchannel is used for callback requests from server to client,
   and carries CB_COMPOUND requests and responses.  Whether there is a
   backchannel is a decision made by the client; however, many features
   of NFSv4.1 require a backchannel.  NFSv4.1 servers MUST support
   backchannels.

   Each session has resources for each channel, including separate reply
   caches (see Section 2.10.6.1).  Note that even the backchannel
   requires a reply cache because some callback operations are
   nonidempotent.

2.10.3.1.  Association of Connections, Channels, and Sessions

   Each channel is associated with zero or more transport connections
   (whether of the same transport protocol or different transport
   protocols).  A connection can be associated with one channel or both
   channels of a session; the client and server negotiate whether a
   connection will carry traffic for one channel or both channels via
   the CREATE_SESSION (Section 18.36) and the BIND_CONN_TO_SESSION
   (Section 18.34) operations.  When a session is created via
   CREATE_SESSION, the connection that transported the CREATE_SESSION
   request is automatically associated with the fore channel, and
   optionally the backchannel.  If the client specifies no state
   protection (Section 18.35) when the session is created, then when
   SEQUENCE is transmitted on a different connection, the connection is
   automatically associated with the fore channel of the session
   specified in the SEQUENCE operation.

   A connection's association with a session is not exclusive.  A
   connection associated with the channel(s) of one session may be
   simultaneously associated with the channel(s) of other sessions
   including sessions associated with other client IDs.

   It is permissible for connections of multiple transport types to be
   associated with the same channel.  For example both a TCP and RDMA
   connection can be associated with the fore channel.  In the event an
   RDMA and non-RDMA connection are associated with the same channel,
   the maximum number of slots SHOULD be at least one more than the
   total number of RDMA credits (Section 2.10.6.1).  This way, if all RDMA
   credits are used, the non-RDMA connection can have at least one
   outstanding request.  If a server supports multiple transport types,
   it MUST allow a client to associate connections from each transport
   to a channel.

   It is permissible for a connection of one type of transport to be
   associated with the fore channel, and a connection of a different
   type to be associated with the backchannel.

2.10.4.  Server Scope

   Servers each specify a server scope value in the form of an opaque
   string eir_server_scope returned as part of the results of an
   EXCHANGE_ID operation.  The purpose of the server scope is to allow a
   group of servers to indicate to clients that a set of servers sharing
   the same server scope value have arranged to use compatible values of
   otherwise opaque identifiers.  Thus the identifiers generated by one
   server of that set may be presented to another of that same scope.

   The use of such compatible values does not imply that a value
   generated by one server will always be accepted by another.  In most
   cases, it will not.  However, a server will not accept a value
   generated by another inadvertently.  When it does accept it, it will
   be because it is recognized as valid and carrying the same meaning as
   on another server of the same scope.

   When servers are of the same server scope, this compatibility of
   values applies to the following identifiers:

   o  Filehandle values.  A filehandle value accepted by two servers of
      the same server scope denotes the same object.  A write done to
      one server is reflected immediately in a read done to the other
      and locks obtained on one server conflict with those requested on
      the other.

   o  Session ID values.  A session ID value accepted by two servers of
      the same server scope denotes the same session.

   o  Client ID values.  A client ID value accepted as valid by two
      servers of the same server scope is associated with two clients
      with the same client owner and verifier.

   o  State ID values when the corresponding client ID is recognized as
      valid.  If the same stateid value is accepted as valid on two
      servers of the same scope and the client IDs on the two servers
      represent the same client owner and verifier, then the two stateid
      values designate the same set of locks and are for the same file.

   o  Server owner values.  When the server scope values are the same,
      server owner values may be validly compared.  In cases where the
      server scopes are different, server owner values are treated as
      different even if they contain identical bytes.
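
   As a non-normative illustration of the last point, a client-side
   comparison routine might gate any comparison of server owner values
   on equality of the server scope values, treating both as opaque byte
   strings.  The structure below is a hypothetical simplification, not
   part of the protocol.

      /* Hypothetical sketch: compare server owners only within the
       * same server scope; both values are opaque byte strings. */
      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>

      struct server_identity {
          const unsigned char *scope;     /* eir_server_scope             */
          size_t               scope_len;
          const unsigned char *major_id;  /* eir_server_owner.so_major_id */
          size_t               major_len;
      };

      static bool opaque_eq(const unsigned char *a, size_t alen,
                            const unsigned char *b, size_t blen)
      {
          return alen == blen && memcmp(a, b, alen) == 0;
      }

      /* Server owner values from different scopes compare unequal even
       * if the bytes are identical. */
      static bool same_server_owner(const struct server_identity *x,
                                    const struct server_identity *y)
      {
          if (!opaque_eq(x->scope, x->scope_len, y->scope, y->scope_len))
              return false;
          return opaque_eq(x->major_id, x->major_len,
                           y->major_id, y->major_len);
      }

      int main(void)
      {
          struct server_identity a = { (const unsigned char *)"X", 1,
                                       (const unsigned char *)"owner", 5 };
          struct server_identity b = { (const unsigned char *)"Y", 1,
                                       (const unsigned char *)"owner", 5 };
          printf("%d\n", same_server_owner(&a, &b));   /* prints 0 */
          return 0;
      }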

   The co-ordination among servers required to provide such
   compatibility can be quite minimal, and limited to a simple partition
   of the ID space.  The recognition of common values requires
   additional implementation, but this can be tailored to the specific
   situations in which that recognition is desired.

   Clients will have occasion to compare the server scope values of
   multiple servers under a number of circumstances, each of which will
   be discussed under the appropriate functional section.





   o  When server owner values received in response to EXCHANGE_ID
      operations issued to multiple network addresses are compared for
      the purpose of determining the validity of various forms of
      trunking, as described in Section 2.10.5.

   o  When network or server reconfiguration causes the same network
      address to possibly be directed to different servers, with the
      necessity for the client to determine when lock reclaim should be
      attempted, as described in Section 8.4.2.1.

   o  When file system migration causes the transfer of responsibility
      for a file system between servers and the client needs to
      determine whether state has been transferred with the file system
      (as described in Section 11.7.7) or whether the client needs to
      reclaim state on a similar basis as in the case of server restart,
      as described in Section 8.4.2.

   When two EXCHANGE_ID replies, each from a different server network
   address, have the same server scope, there are a number of ways a
   client can validate that the common server scope is due to two
   servers cooperating in a group.

   o  If both EXCHANGE_ID requests were sent with RPCSEC_GSS
      authentication and the server principal is the same for both
      targets, the equality of server scope is validated.  It is
      RECOMMENDED that two servers intending to share the same server
      scope also share the same principal name.

   o  The client may accept the appearance of the second server in
      fs_locations or fs_locations_info attribute for a relevant file
      system.  For example, if there is a migration event for a
      particular file system or there are locks to be reclaimed on a
      particular file system, the attributes for that particular file
      system may be used.  The client sends the GETATTR request to the
      first server for the fs_locations or fs_locations_info attribute
      with RPCSEC_GSS authentication.  It may need to do this in advance
      of the need to verify the common server scope.  If the client
      successfully authenticates the reply to GETATTR, and the GETATTR
      request and reply containing the fs_locations or fs_locations_info
      attribute refers to the second server, then the equality of server
      scope is supported.  A client may choose to limit the use of this
      form of support to information relevant to the specific file
      system involved (e.g. a file system being migrated).








2.10.5.  Trunking

   Trunking is the use of multiple connections between a client and
   server in order to increase the speed of data transfer.  NFSv4.1
   supports two types of trunking: session trunking and client ID
   trunking.

   NFSv4.1 servers MUST support both forms of trunking within the
   context of a single server network address and MUST support both
   forms within the context of the set of network addresses used to
   access a single server.  NFSv4.1 servers in a clustered configuration
   MAY allow network addresses for different servers to use client ID
   trunking.

   Clients may use either form of trunking as long as they do not, when
   trunking between different server network addresses, violate the
   servers' mandates as to the kinds of trunking to be allowed (see
   below).  With regard to callback channels, the client MUST allow the
   server to choose among all callback channels valid for a given client
   ID and MUST support trunking when the connections supporting the
   backchannel allow session or client ID trunking to be used for
   callbacks.

   Session trunking is essentially the association of multiple
   connections, each with potentially different target and/or source
   network addresses, to the same session.  When the target network
   addresses (server addresses) of the two connections are the same, the
   server MUST support such session trunking.  When the target network
   addresses are different, the server MAY indicate such support using
   the data returned by the EXCHANGE_ID operation (see below).

   Client ID trunking is the association of multiple sessions to the
   same client ID.  Servers MUST support client ID trunking for two
   target network addresses whenever they allow session trunking for
   those same two network addresses.  In addition, a server MAY, by
   presenting the same major server owner ID (Section 2.5) and server
   scope (Section 2.10.4), allow an additional case of client ID
   trunking.  When two servers return the same major server owner and
   server scope, it means that the two servers are cooperating on
   locking state management which is a prerequisite for client ID
   trunking.

   Understanding and distinguishing when the client is allowed to use
   session and client ID trunking requires understanding how the results
   of the EXCHANGE_ID (Section 18.35) operation identify a server.
   Suppose a client sends EXCHANGE_ID over two different connections
   each with a possibly different target network address but each
   EXCHANGE_ID operation has the same value in the eia_clientowner
   field.  If the same NFSv4.1 server is listening over each connection,
   then each EXCHANGE_ID result MUST return the same values of
   eir_clientid, eir_server_owner.so_major_id and eir_server_scope.  The
   client can then treat each connection as referring to the same server
   (subject to verification, see Paragraph 8 later in this section), and
   it can use each connection to trunk requests and replies.  The
   client's choice is whether session trunking or client ID trunking
   applies.

   Session Trunking.  If the eia_clientowner argument is the same in two
      different EXCHANGE_ID requests, and the eir_clientid,
      eir_server_owner.so_major_id, eir_server_owner.so_minor_id, and
      eir_server_scope results match in both EXCHANGE_ID results, then
      the client is permitted to perform session trunking.  If the
      client has no session mapping to the tuple of eir_clientid,
      eir_server_owner.so_major_id, eir_server_scope,
      eir_server_owner.so_minor_id, then it creates the session via a
      CREATE_SESSION operation over one of the connections, which
      associates the connection to the session.  If there is a session
      for the tuple, the client can send BIND_CONN_TO_SESSION to
      associate the connection to the session.

      Of course, if the client does not desire to use session trunking,
      it is not required to do so.  It can invoke CREATE_SESSION on the
      connection.  This will result in client ID trunking as described
      below.  It can also decide to drop the connection if it does not
      choose to use trunking.


   Client ID Trunking.  If the eia_clientowner argument is the same in
      two different EXCHANGE_ID requests, and the eir_clientid,
      eir_server_owner.so_major_id, and eir_server_scope results match
      in both EXCHANGE_ID results, then the client is permitted to
      perform client ID trunking (regardless of whether the
      eir_server_owner.so_minor_id results match).  The client can
      associate each connection with different sessions, where each
      session is associated with the same server.

      The client completes the act of client ID trunking by invoking
      CREATE_SESSION on each connection, using the same client ID that
      was returned in eir_clientid.  These invocations create two
      sessions and also associate each connection with its respective
      session.  The client is free to choose not to use client ID
      trunking by simply dropping the connection at this point.

      When doing client ID trunking, locking state is shared across
      sessions associated with that same client ID.  This requires the
      server to coordinate state across sessions.
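
   The following non-normative sketch, in C, captures the decision rule
   described in the two items above.  The structure and field names are
   hypothetical stand-ins for the EXCHANGE_ID results; in particular,
   the opaque values are represented as C strings for brevity.

      /* Hypothetical sketch of the trunking decision; not the
       * protocol's XDR types. */
      #include <stdio.h>
      #include <string.h>

      struct exchange_id_res {
          unsigned long long clientid;     /* eir_clientid                 */
          const char *so_major_id;         /* eir_server_owner.so_major_id */
          unsigned long long so_minor_id;  /* eir_server_owner.so_minor_id */
          const char *server_scope;        /* eir_server_scope             */
      };

      /* TRUNK_SESSION means session trunking (and therefore also client
       * ID trunking) is permitted. */
      enum trunking { TRUNK_NONE, TRUNK_CLIENTID, TRUNK_SESSION };

      static enum trunking trunking_allowed(const struct exchange_id_res *a,
                                            const struct exchange_id_res *b)
      {
          if (a->clientid != b->clientid ||
              strcmp(a->so_major_id, b->so_major_id) != 0 ||
              strcmp(a->server_scope, b->server_scope) != 0)
              return TRUNK_NONE;
          /* Same client ID, major server owner ID, and server scope:
           * client ID trunking is permitted.  If the minor server owner
           * IDs also match, session trunking is permitted as well. */
          return (a->so_minor_id == b->so_minor_id)
                     ? TRUNK_SESSION : TRUNK_CLIENTID;
      }

      int main(void)
      {
          struct exchange_id_res r1 = { 7, "major-A", 1, "scope-X" };
          struct exchange_id_res r2 = { 7, "major-A", 2, "scope-X" };
          printf("%d\n", (int)trunking_allowed(&r1, &r2)); /* 1: client ID */
          return 0;
      }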



   The client should be prepared for the possibility that
   eir_server_owner values may be different on subsequent EXCHANGE_ID
   requests made to the same network address, as a result of various
   sorts of reconfiguration events.  When this happens and the changes
   result in the invalidation of previously valid forms of trunking, the
   client should cease to use those forms, either by dropping
   connections or by adding sessions.  For a discussion of lock reclaim
   as it relates to such reconfiguration events, see Section 8.4.2.1.

   When two servers over two connections claim matching or partially
   matching eir_server_owner, eir_server_scope, and eir_clientid values,
   the client does not have to trust the servers' claims.  The client
   may verify these claims before trunking traffic in the following
   ways:

   o  For session trunking, clients SHOULD reliably verify whether
      connections between different network paths are in fact associated
      with the same NFSv4.1 server and usable on the same session, and
      servers MUST allow clients to perform reliable verification.  When
      a client ID is created, the client SHOULD specify that
      BIND_CONN_TO_SESSION is to be verified according to the SP4_SSV or
      SP4_MACH_CRED (Section 18.35) state protection options.  For
      SP4_SSV, reliable verification depends on a shared secret (the
      SSV) that is established via the SET_SSV (Section 18.47)
      operation.

      When a new connection is associated with the session (via the
      BIND_CONN_TO_SESSION operation, see Section 18.34), if the client
      specified SP4_SSV state protection for the BIND_CONN_TO_SESSION
      operation, the client MUST send the BIND_CONN_TO_SESSION with
      RPCSEC_GSS protection, using integrity or privacy, and an
      RPCSEC_GSS handle created with the GSS SSV mechanism
      (Section 2.10.9).

      If the client mistakenly tries to associate a connection to a
      session of a wrong server, the server will either reject the
      attempt because it is not aware of the session identifier of the
      BIND_CONN_TO_SESSION arguments, or it will reject the attempt
      because the RPCSEC_GSS authentication fails.  Even if the server
      mistakenly or maliciously accepts the connection association
      attempt, the RPCSEC_GSS verifier it computes in the response will
      not be verified by the client, so the client will know it cannot
      use the connection for trunking the specified session.

      If the client specified SP4_MACH_CRED state protection, the
      BIND_CONN_TO_SESSION operation will use RPCSEC_GSS integrity or
      privacy, using the same credential that was used when the client
      ID was created.  Mutual authentication via RPCSEC_GSS assures the
      client that the connection is associated with the correct session
      of the correct server.


   o  For client ID trunking, the client has at least two options for
      verifying that the same client ID obtained from two different
      EXCHANGE_ID operations came from the same server.  The first
      option is to use RPCSEC_GSS authentication when issuing each
      EXCHANGE_ID.  Each time an EXCHANGE_ID is sent with RPCSEC_GSS
      authentication, the client notes the principal name of the GSS
      target.  If the EXCHANGE_ID results indicate client ID trunking is
      possible, and the GSS targets' principal names are the same, the
      servers are the same and client ID trunking is allowed.

      The second option for verification is to use SP4_SSV protection.
      When the client sends EXCHANGE_ID it specifies SP4_SSV protection.
      The first EXCHANGE_ID the client sends always has to be confirmed
      by a CREATE_SESSION call.  The client then sends SET_SSV.  Later
      the client sends EXCHANGE_ID to a second destination network
      address different from the one the first EXCHANGE_ID was sent to.
      The client checks that each EXCHANGE_ID reply has the same
      eir_clientid, eir_server_owner.so_major_id, and eir_server_scope.
      If so, the client verifies the claim by issuing a CREATE_SESSION
      to the second destination address, protected with RPCSEC_GSS
      integrity using an RPCSEC_GSS handle returned by the second
      EXCHANGE_ID.  If the server accepts the CREATE_SESSION request,
      and if the client verifies the RPCSEC_GSS verifier and integrity
      codes, then the client has proof the second server knows the SSV,
      and thus the two servers are co-operating for the purposes of
      specifying server scope and client ID trunking.

2.10.6.  Exactly Once Semantics

   Via the session, NFSv4.1 offers Exactly Once Semantics (EOS) for
   requests sent over a channel.  EOS is supported on both the fore and
   back channels.

   Each COMPOUND or CB_COMPOUND request that is sent with a leading
   SEQUENCE or CB_SEQUENCE operation MUST be executed by the receiver
   exactly once.  This requirement holds regardless of whether the
   request is sent with reply caching specified (see
   Section 2.10.6.1.3).  The requirement holds even if the requester is
   issuing the request over a session created between a pNFS data client
   and pNFS data server.  To understand the rationale for this
   requirement, divide the requests into three classifications:

   o  Nonidempotent requests.




   o  Idempotent modifying requests.

   o  Idempotent non-modifying requests.

   An example of a non-idempotent request is RENAME.  It is obvious that
   if a replier executes the same RENAME request twice, and the first
   execution succeeds, the re-execution will fail.  If the replier
   returns the result from the re-execution, this result is incorrect.
   Therefore, EOS is required for nonidempotent requests.

   An example of an idempotent modifying request is a COMPOUND request
   containing a WRITE operation.  Repeated execution of the same WRITE
   has the same effect as execution of that write a single time.
   Nevertheless, enforcing EOS for WRITEs and other idempotent modifying
   requests is necessary to avoid data corruption.

   Suppose a client sends WRITE A to a noncompliant server that does not
   enforce EOS, and receives no response, perhaps due to a network
   partition.  The client reconnects to the server and re-sends WRITE A.
   Now the server has two outstanding instances of WRITE A.  The server
   can be in a situation in which it executes and replies to the retry
   of A, while the first A is still waiting in the server's internal I/O
   system for some resource.  Upon receiving the reply to the second
   attempt of WRITE A, the client believes its write is done so it is
   free to send WRITE B which overlaps the range of A. When the original
   A is dispatched from the server's I/O system, and executed (thus the
   second time A will have been written), then what has been written by
   B can be overwritten and thus corrupted.

   An example of an idempotent non-modifying request is a COMPOUND
   containing SEQUENCE, PUTFH, READLINK and nothing else.  The re-
   execution of such a request will not cause data corruption or
   produce an incorrect result.  Nonetheless, to keep the implementation
   simple, the replier MUST enforce EOS for all requests, whether
   idempotent and non-modifying or not.

   Note that true and complete EOS is not possible unless the server
   persists the reply cache in stable storage, or unless the server is
   somehow implemented so that it never requires a restart (indeed, if
   such a server exists, the distinction between a reply cache kept in
   stable storage and one that is not is meaningless).  See
   Section 2.10.6.5 for a discussion of persistence in the reply cache.
   Regardless, even if the server does not persist the reply cache, EOS
   improves robustness and correctness over previous versions of NFS
   because the legacy duplicate request/reply caches were based on the
   ONC RPC transaction identifier (XID).  Section 2.10.6.1 explains the
   shortcomings of the XID as a basis for a reply cache and describes
   how NFSv4.1 sessions improve upon the XID.



2.10.6.1.  Slot Identifiers and Reply Cache

   The RPC layer provides a transaction ID (XID), which, while required
   to be unique, is not convenient for tracking requests for two
   reasons.  First, the XID is only meaningful to the requester; it
   cannot be interpreted by the replier except to test for equality with
   previously sent requests.  When consulting an RPC-based duplicate
   request cache, the opaqueness of the XID requires a computationally
   expensive lookup (often via a hash that includes XID and source
   address).  NFSv4.1 requests use a non-opaque slot ID which is an
   index into a slot table, which is far more efficient.  Second,
   because RPC requests can be executed by the replier in any order,
   there is no bound on the number of requests that may be outstanding
   at any time.  To achieve perfect EOS using ONC RPC would require
   storing all replies in the reply cache.  XIDs are 32 bits; storing
   over four billion (2^32) replies in the reply cache is not practical.
   In practice, previous versions of NFS have chosen to store a fixed
   number of replies in the cache, and use a least recently used (LRU)
   approach to replacing cache entries with new entries when the cache
   is full.  In NFSv4.1, the number of outstanding requests is bounded
   by the size of the slot table, and a sequence ID per slot is used to
   tell the replier when it is safe to delete a cached reply.

   In the NFSv4.1 reply cache, when the requester sends a new request,
   it selects a slot ID in the range 0..N, where N is the replier's
   current maximum slot ID granted to the requester on the session over
   which the request is to be sent.  The value of N starts out as equal
   to ca_maxrequests - 1 (Section 18.36), but can be adjusted by the
   response to SEQUENCE or CB_SEQUENCE as described later in this
   section.  The slot ID must not be in use by any of the requests that
   the requester already has active on the session.  "Unused" here means
   the requester has no outstanding request for that slot ID.

   A slot contains a sequence ID and the cached reply corresponding to
   the request sent with that sequence ID.  The sequence ID is a 32 bit
   unsigned value, and is therefore in the range 0..0xFFFFFFFF (2^32 -
   1).  The first time a slot is used, the requester MUST specify a
   sequence ID of one (1) (Section 18.36).  Each time a slot is reused,
   the request MUST specify a sequence ID that is one greater than that
   of the previous request on the slot.  If the previous sequence ID was
   0xFFFFFFFF, then the next request for the slot MUST have the sequence
   ID set to zero (i.e. (2^32 - 1) + 1 mod 2^32).
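
   A minimal, non-normative sketch of the sequence ID arithmetic
   described above follows; the function name is hypothetical.

      /* Sequence IDs start at 1 on first use of a slot and increment
       * by one on each reuse, wrapping from 0xFFFFFFFF to 0. */
      #include <stdint.h>
      #include <stdio.h>

      static uint32_t next_seqid(uint32_t seqid)
      {
          /* Unsigned 32-bit arithmetic wraps: 0xFFFFFFFF + 1 == 0. */
          return seqid + 1;
      }

      int main(void)
      {
          printf("%u\n", next_seqid(1));           /* 2 */
          printf("%u\n", next_seqid(0xFFFFFFFFu)); /* 0 */
          return 0;
      }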

   The sequence ID accompanies the slot ID in each request.  It is for
   the critical check at the server: it is used to efficiently determine
   whether a request using a certain slot ID is a retransmit or a new,
   never-before-seen request.  It is not feasible for the client to
   assert that it is retransmitting to implement this, because for any
   given request the client cannot know whether the server has seen it
   unless the server actually replies.  Of course, if the client has
   seen the server's reply, the client would not retransmit.

   The replier compares each received request's sequence ID with the
   last one previously received for that slot ID, to see if the new
   request is:

   o  A new request, in which the sequence ID is one greater than that
      previously seen in the slot (accounting for sequence wraparound).
      The replier proceeds to execute the new request, and the replier
      MUST increase the slot's sequence ID by one.

   o  A retransmitted request, in which the sequence ID is equal to that
      currently recorded in the slot.  If the original request has
      executed to completion, the replier returns the cached reply.  See
      Section 2.10.6.2 for direction on how the replier deals with
      retries of requests that are still in progress.

   o  A misordered retry, in which the sequence ID is less than
      (accounting for sequence wraparound) that previously seen in the
      slot.  The replier MUST return NFS4ERR_SEQ_MISORDERED (as the
      result from SEQUENCE or CB_SEQUENCE).

   o  A misordered new request, in which the sequence ID is two or more
      greater than (accounting for sequence wraparound) that previously
      seen in the slot.  Note that because the sequence ID MUST
      wraparound to zero (0) once it reaches 0xFFFFFFFF, a misordered
      new request and a misordered retry cannot be distinguished.  Thus,
      the replier MUST return NFS4ERR_SEQ_MISORDERED (as the result from
      SEQUENCE or CB_SEQUENCE).
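
   The list above can be summarized by a small, non-normative check of
   the sort a replier might apply per slot.  The slot structure and
   return values are hypothetical, and the sketch ignores the handling
   of requests that are still in progress (Section 2.10.6.2).

      /* Hypothetical replier-side classification of an arriving
       * (slot ID, sequence ID) pair against the slot's state. */
      #include <stdint.h>
      #include <stdio.h>

      struct slot {
          uint32_t seqid;   /* sequence ID of the last request seen;
                               the cached reply would accompany it   */
      };

      enum seq_check {
          SEQ_NEW,          /* execute; replier bumps slot->seqid    */
          SEQ_RETRANSMIT,   /* return the cached reply               */
          SEQ_MISORDERED    /* return NFS4ERR_SEQ_MISORDERED         */
      };

      static enum seq_check check_sequence(const struct slot *s,
                                           uint32_t req_seqid)
      {
          if (req_seqid == (uint32_t)(s->seqid + 1))  /* wraps at 2^32 */
              return SEQ_NEW;
          if (req_seqid == s->seqid)
              return SEQ_RETRANSMIT;
          /* Either a misordered retry or a request that skips ahead by
           * two or more; the two cases cannot be distinguished. */
          return SEQ_MISORDERED;
      }

      int main(void)
      {
          struct slot s = { 5 };
          printf("%d %d %d\n",
                 (int)check_sequence(&s, 6),   /* SEQ_NEW        */
                 (int)check_sequence(&s, 5),   /* SEQ_RETRANSMIT */
                 (int)check_sequence(&s, 4));  /* SEQ_MISORDERED */
          return 0;
      }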

   Unlike the XID, the slot ID is always within a specific range; this
   has two implications.  The first implication is that for a given
   session, the replier need only cache the results of a limited number
   of COMPOUND requests.  The second implication derives from the
   first, which is that unlike XID-indexed reply caches (also known as
   duplicate request caches - DRCs), the slot ID-based reply cache
   cannot be overflowed.  Through use of the sequence ID to identify
   retransmitted requests, the replier does not need to actually cache
   the request itself, reducing the storage requirements of the reply
   cache further.  These facilities make it practical to maintain all
   the required entries for an effective reply cache.

   The slot ID, sequence ID, and session ID therefore take over the
   traditional role of the XID and source network address in the
   replier's reply cache implementation.  This approach is considerably
   more portable and completely robust - it is not subject to the
   reassignment of ports as clients reconnect over IP networks.  In
   addition, the RPC XID is not used in the reply cache, enhancing
   robustness of the cache in the face of any rapid reuse of XIDs by the
   requester.  While the replier does not care about the XID for the
   purposes of reply cache management (but the replier MUST return the
   same XID that was in the request), nonetheless there are
   considerations for the XID in NFSv4.1 that are the same as all other
   previous versions of NFS.  The RPC XID remains in each message and
   needs to be formulated in NFSv4.1 requests as in any other ONC RPC
   request.  The reasons include:

   o  The RPC layer retains its existing semantics and implementation.

   o  The requester and replier must be able to interoperate at the RPC
      layer, prior to the NFSv4.1 decoding of the SEQUENCE or
      CB_SEQUENCE operation.

   o  If an operation is being used that does not start with SEQUENCE or
      CB_SEQUENCE (e.g.  BIND_CONN_TO_SESSION), then the RPC XID is
      needed for correct operation to match the reply to the request.

   o  The SEQUENCE or CB_SEQUENCE operation may generate an error.  If
      so, the embedded slot ID, sequence ID, and session ID (if present)
      in the request will not be in the reply, and the requester has
      only the XID to match the reply to the request.

   Given that well-formulated XIDs continue to be required, this raises
   the question of why SEQUENCE and CB_SEQUENCE replies have a session
   ID, slot ID, and sequence ID.  Having the session ID in the reply
   means the requester does not have to use the XID to look up the
   session ID, which would be necessary if the connection were
   associated with multiple sessions.  Having the slot ID and sequence
   ID in the reply means the requester does not have to use the XID to
   look up the slot ID and sequence ID.  Furthermore, since the XID is
   only 32 bits, it is
   too small to guarantee the re-association of a reply with its request
   ([36]); having session ID, slot ID, and sequence ID in the reply
   allows the client to validate that the reply in fact belongs to the
   matched request.

   The SEQUENCE (and CB_SEQUENCE) operation also carries a
   "highest_slotid" value which carries additional requester slot usage
   information.  The requester MUST always indicate the slot ID
   representing the outstanding request with the highest-numbered slot
   value.  The requester should in all cases provide the most
   conservative value possible, although it can be increased somewhat
   above the actual instantaneous usage to maintain some minimum or
   optimal level.  This provides a way for the requester to yield unused
   request slots back to the replier, which in turn can use the
   information to reallocate resources.

   The replier responds with both a new target highest_slotid, and an
   enforced highest_slotid, described as follows:

   o  The target highest_slotid is an indication to the requester of the
      highest_slotid the replier wishes the requester to be using.  This
      permits the replier to withdraw (or add) resources from a
      requester that has been found to not be using them, in order to
      more fairly share resources among a varying level of demand from
      other requesters.  The requester must always comply with the
      replier's value updates, since they indicate newly established
      hard limits on the requester's access to session resources.
      However, because of request pipelining, the requester may have
      active requests in flight reflecting prior values, therefore the
      replier must not immediately require the requester to comply.


   o  The enforced highest_slotid indicates the highest slot ID the
      requester is permitted to use on a subsequent SEQUENCE or
      CB_SEQUENCE operation.  The replier's enforced highest_slotid
      SHOULD be no less than the highest_slotid the requester indicated
      in the SEQUENCE or CB_SEQUENCE arguments.

      If a replier detects the client is being intransigent, i.e. it
      fails in a series of requests to honor the target highest_slotid
      even though the replier knows there are no outstanding requests at
      higher slot IDs, it MAY take more forceful action.  When faced
      with intransigence, the replier MAY reply with a new enforced
      highest_slotid that is less than its previous enforced
      highest_slotid.  Thereafter, if the requester continues to send
      requests with a highest_slotid that is greater than the replier's
      new enforced highest_slotid the server MAY return
      NFS4ERR_BAD_HIGHSLOT, unless the slot ID in the request is greater
      than the new enforced highest_slotid, and the request is a retry.

      The replier SHOULD retain the slots it wants to retire until the
      requester sends a request with a highest_slotid less than or equal
      to the replier's new enforced highest_slotid.  Also if a request
      is received with a slot that is higher than the new enforced
      highest_slotid, and the sequence ID is one higher than what is in
      the slot's reply cache, then the server can both retire the slot
      and return NFS4ERR_BADSLOT (however the server MUST NOT do one and
      not the other).  (The reason it is safe to retire the slot is
      that, by using the next sequence ID, the client is
      indicating it has received the previous reply for the slot.)  Once
      the replier has forcibly lowered the enforced highest_slotid, the
      requester is only allowed to send retries to the to-be-retired
      slots.


   o  The requester SHOULD use the lowest available slot when issuing a
      new request.  This way, the replier may be able to retire slot
      entries faster.  However, where the replier is actively adjusting
      its granted highest_slotid, it will not be able to use only the
      receipt of the slot ID and highest_slotid in the request.  Neither
      the slot ID nor the highest_slotid used in a request may reflect
      the replier's current idea of the requester's session limit,
      because the request may have been sent from the requester before
      the update was received.  Therefore, in the downward adjustment
      case, the replier may have to retain a number of reply cache
      entries at least as large as the old value of maximum requests
      outstanding, until it can infer that the requester has seen a
      reply containing the new granted highest_slotid.  The replier can
      infer that the requester has seen such a reply when it receives a new
      request with the same slot ID as the request replied to and the
      next higher sequence ID.
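
   A non-normative sketch of requester-side slot selection follows: the
   requester picks the lowest free slot that does not exceed the
   replier's enforced highest_slotid, and reports the highest slot
   currently in use.  The table representation is hypothetical.

      /* Hypothetical requester-side slot table. */
      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_SLOTS 64

      struct slot_table {
          bool     in_use[MAX_SLOTS];
          unsigned enforced_highest;  /* from the replier's last reply */
      };

      /* Lowest free slot no greater than enforced_highest, or -1. */
      static int pick_slot(struct slot_table *t)
      {
          for (unsigned i = 0; i <= t->enforced_highest && i < MAX_SLOTS;
               i++) {
              if (!t->in_use[i]) {
                  t->in_use[i] = true;
                  return (int)i;
              }
          }
          return -1;
      }

      /* Value the requester would report as its highest slot in use. */
      static int highest_in_use(const struct slot_table *t)
      {
          int highest = -1;
          for (unsigned i = 0; i < MAX_SLOTS; i++)
              if (t->in_use[i])
                  highest = (int)i;
          return highest;
      }

      int main(void)
      {
          struct slot_table t = { .enforced_highest = 7 };
          int slot = pick_slot(&t);
          printf("slot %d, highest in use %d\n", slot, highest_in_use(&t));
          return 0;
      }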

2.10.6.1.1.  Caching of SEQUENCE and CB_SEQUENCE Replies

   When a SEQUENCE or CB_SEQUENCE operation is successfully executed,
   its reply MUST always be cached.  Specifically, session ID, sequence
   ID, and slot ID MUST be cached in the reply cache.  The reply from
   SEQUENCE also includes the highest slot ID, target highest slot ID,
   and status flags.  Instead of caching these values, the server MAY
   re-compute the values from the current state of the fore channel,
   session and/or client ID as appropriate.  Similarly, the reply from
   CB_SEQUENCE includes a highest slot ID and target highest slot ID.
   The client MAY re-compute the values from the current state of the
   session as appropriate.

   Regardless of whether a replier is re-computing highest slot ID,
   target slot ID, and status on replies to retries or not, the
   requester MUST NOT assume the values are being re-computed whenever
   it receives a reply after a retry is sent, since it has no way of
   knowing whether the reply it has received was sent by the server in
   response to the retry, or is a delayed response to the original
   request.  Therefore, it may be the case that highest slot ID, target
   slot ID, or status bits may reflect the state of affairs when the
   request was first executed.  Although acting based on such delayed
   information is valid, it may cause the receiver to do unneeded work.
   Requesters MAY choose to send additional requests to get the current
   state of affairs or use the state of affairs reported by subsequent
   requests, in preference to acting immediately on data which may be
   out of date.




2.10.6.1.2.  Errors from SEQUENCE and CB_SEQUENCE

   Any time SEQUENCE or CB_SEQUENCE return an error, the sequence ID of
   the slot MUST NOT change.  The replier MUST NOT modify the reply
   cache entry for the slot whenever an error is returned from SEQUENCE
   or CB_SEQUENCE.

2.10.6.1.3.  Optional Reply Caching

   On a per-request basis the requester can choose to direct the replier
   to cache the reply to all operations after the first operation
   (SEQUENCE or CB_SEQUENCE) via the sa_cachethis or csa_cachethis
   fields of the arguments to SEQUENCE or CB_SEQUENCE.  The reason it
   would not direct the replier to cache the entire reply is that the
   request is composed of all idempotent operations [33].  Caching the
   reply may offer little benefit.  If the reply is too large (see
   Section 2.10.6.4), it may not be cacheable anyway.  Even if the reply
   to an idempotent request is small enough to cache, unnecessarily caching
   the reply slows down the server and increases RPC latency.

   Whether the requester requests the reply to be cached or not has no
   effect on the slot processing.  If the results of SEQUENCE or
   CB_SEQUENCE are NFS4_OK, then the slot's sequence ID MUST be
   incremented by one.  If a requester does not direct the replier to
   cache the reply, the replier MUST do one of following:

   o  The replier can cache the entire original reply.  Even though
      sa_cachethis or csa_cachethis are FALSE, the replier is always
      free to cache.  It may choose this approach in order to simplify
      implementation.

   o  The replier enters into its reply cache a reply consisting of the
      original results to the SEQUENCE or CB_SEQUENCE operation, and
      with the next operation in COMPOUND or CB_COMPOUND having the
      error NFS4ERR_RETRY_UNCACHED_REP.  Thus if the requester later
      retries the request, it will get NFS4ERR_RETRY_UNCACHED_REP.
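
   A non-normative sketch of the second option follows.  The structure
   is a hypothetical simplification of a reply cache entry for a
   two-operation request, and the numeric error value is illustrative.

      /* Hypothetical reply cache entry for a two-operation request. */
      #include <stdbool.h>
      #include <stdio.h>

      #define NFS4ERR_RETRY_UNCACHED_REP 10068  /* value illustrative */

      struct cached_reply {
          int  sequence_status;     /* cached result of SEQUENCE       */
          int  second_op_status;    /* cached result for operation 2   */
          bool full_reply_present;  /* true if all results were cached */
      };

      static void cache_reply(struct cached_reply *entry, bool cachethis,
                              int sequence_status, int second_op_status)
      {
          entry->sequence_status = sequence_status;
          if (cachethis) {
              entry->second_op_status   = second_op_status;
              entry->full_reply_present = true;
          } else {
              /* A later retry of this request will receive this error
               * for the operation following SEQUENCE. */
              entry->second_op_status   = NFS4ERR_RETRY_UNCACHED_REP;
              entry->full_reply_present = false;
          }
      }

      int main(void)
      {
          struct cached_reply entry;
          cache_reply(&entry, false, 0, 0);
          printf("%d\n", entry.second_op_status);  /* 10068 */
          return 0;
      }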

2.10.6.2.  Retry and Replay of Reply

   A requester MUST NOT retry a request, unless the connection it used
   to send the request disconnects.  The requester can then reconnect
   and re-send the request, or it can re-send the request over a
   different connection that is associated with the same session.

   If the requester is a server wanting to re-send a callback operation
   over the backchannel of a session, the requester of course cannot
   reconnect because only the client can associate connections with the
   backchannel.  The server can re-send the request over another
   connection that is bound to the same session's backchannel.  If there
   is no such connection, the server MUST indicate that the session has
   no backchannel by setting the SEQ4_STATUS_CB_PATH_DOWN_SESSION flag
   bit in the response to the next SEQUENCE operation from the client.
   The client MUST then associate a connection with the session (or
   destroy the session).

   Note that it is not fatal for a client to retry without a disconnect
   between the request and retry.  However the retry does consume
   resources, especially with RDMA, where each request, retry or not,
   consumes a credit.  Retries for no reason, especially retries sent
   shortly after the previous attempt, are a poor use of network
   bandwidth and defeat the purpose of a transport's inherent congestion
   control system.

   A requester MUST wait for a reply to a request before using the slot
   for another request.  If it does not wait for a reply, then the
   requester does not know what sequence ID to use for the slot on its
   next request.  For example, suppose a requester sends a request with
   sequence ID 1, and does not wait for the response.  The next time it
   uses the slot, it sends the new request with sequence ID 2.  If the
   replier has not seen the request with sequence ID 1, then the replier
   is not expecting sequence ID 2, and rejects the requester's new
   request with NFS4ERR_SEQ_MISORDERED (as the result from SEQUENCE or
   CB_SEQUENCE).

   RDMA fabrics do not guarantee that the memory handles (Steering Tags)
   within each RPC/RDMA "chunk" ([8]) are valid on a scope outside that
   of a single connection.  Therefore, handles used by the direct
   operations become invalid after connection loss.  The server must
   ensure that any RDMA operations which must be replayed from the reply
   cache use the newly provided handle(s) from the most recent request.

   A retry might be sent while the original request is still in progress
   on the replier.  The replier SHOULD deal with the issue by returning
   NFS4ERR_DELAY as the reply to the SEQUENCE or CB_SEQUENCE operation,
   but implementations MAY return NFS4ERR_SEQ_MISORDERED.  Since errors
   from
   SEQUENCE and CB_SEQUENCE are never recorded in the reply cache, this
   approach allows the results of the execution of the original request
   to be properly recorded in the reply cache (assuming the requester
   specified the reply to be cached).

2.10.6.3.  Resolving Server Callback Races

   It is possible for server callbacks to arrive at the client before
   the reply from related fore channel operations.  For example, a
   client may have been granted a delegation to a file it has opened,
   but the reply to the OPEN (informing the client of the granting of
   the delegation) may be delayed in the network.  If a conflicting
   operation arrives at the server, it will recall the delegation using
   the backchannel, which may be on a different transport connection,
   perhaps even a different network, or even a different session
   associated with the same client ID.

   The presence of a session between client and server alleviates this
   issue.  When a session is in place, each client request is uniquely
   identified by its { session ID, slot ID, sequence ID } triple.  By
   the rules under which slot entries (reply cache entries) are retired,
   the server has knowledge whether the client has "seen" each of the
   server's replies.  The server can therefore provide sufficient
   information to the client to allow it to recognize when a seemingly
   erroneous or conflicting callback is in fact a race condition.

   For each client operation which might result in some sort of server
   callback, the server SHOULD "remember" the { session ID, slot ID,
   sequence ID } triple of the client request until the slot ID
   retirement rules allow the server to determine that the client has,
   in fact, seen the server's reply.  Until the time the { session ID,
   slot ID, sequence ID } request triple can be retired, any recalls of
   the associated object MUST carry an array of these referring
   identifiers (in the CB_SEQUENCE operation's arguments), for the
   benefit of the client.  After this time, it is not necessary for the
   server to provide this information in related callbacks, since it is
   certain that a race condition can no longer occur.

   The CB_SEQUENCE operation which begins each server callback carries a
   list of "referring" { session ID, slot ID, sequence ID } triples.  If
   the client finds the request corresponding to the referring session
   ID, slot ID and sequence ID to be currently outstanding (i.e. the
   server's reply has not been seen by the client), it can determine
   that the callback has raced the reply, and act accordingly.  If the
   client does not find the request corresponding to the referring triple
   to be outstanding (including the case of a session ID referring to a
   destroyed session), then there is no race with respect to this
   triple.  The server SHOULD limit the referring triples to requests
   that refer to just those that apply to the objects referred to in the
   CB_COMPOUND procedure.
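
   A non-normative sketch of the client-side check follows: the client
   scans the referring triples carried by CB_SEQUENCE against its own
   list of outstanding requests; a match indicates the callback has
   raced the reply to that request.  The types are hypothetical.

      /* Hypothetical referring-triple check on the client. */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      struct triple {
          unsigned char sessionid[16];  /* opaque session ID */
          uint32_t      slotid;
          uint32_t      seqid;
      };

      static bool triple_eq(const struct triple *a, const struct triple *b)
      {
          return memcmp(a->sessionid, b->sessionid, 16) == 0 &&
                 a->slotid == b->slotid && a->seqid == b->seqid;
      }

      /* True if any referring triple matches a request the client still
       * considers outstanding (i.e. the callback raced the reply). */
      static bool callback_races_reply(const struct triple *outstanding,
                                       size_t n_out,
                                       const struct triple *referring,
                                       size_t n_ref)
      {
          for (size_t i = 0; i < n_ref; i++)
              for (size_t j = 0; j < n_out; j++)
                  if (triple_eq(&referring[i], &outstanding[j]))
                      return true;
          return false;
      }

      int main(void)
      {
          struct triple out = { { 0 }, 3, 7 };
          struct triple ref = { { 0 }, 3, 7 };
          printf("%d\n", callback_races_reply(&out, 1, &ref, 1)); /* 1 */
          return 0;
      }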

   The client must not simply wait forever for the expected server reply
   to arrive before responding to the CB_COMPOUND that won the race,
   because it is possible that it will be delayed indefinitely.  The
   client should assume the likely case that the reply will arrive
   within the average round trip time for COMPOUND requests to the
   server, and wait that period of time.  If that period of time expires,
   it can respond to the CB_COMPOUND with NFS4ERR_DELAY.




   There are other scenarios under which callbacks may race replies.
   Among them are pNFS layout recalls as described in Section 12.5.5.2.

2.10.6.4.  COMPOUND and CB_COMPOUND Construction Issues

   Very large requests and replies may pose both buffer management
   issues (especially with RDMA) and reply cache issues.  When the
   session is created (Section 18.36), for each channel (fore and
   back), the client and server negotiate the maximum sized request they
   will send or process (ca_maxrequestsize), the maximum sized reply
   they will return or process (ca_maxresponsesize), and the maximum
   sized reply they will store in the reply cache
   (ca_maxresponsesize_cached).

   If a request exceeds ca_maxrequestsize, the reply will have the
   status NFS4ERR_REQ_TOO_BIG.  A replier MAY return NFS4ERR_REQ_TOO_BIG
   as the status for the first operation (SEQUENCE or CB_SEQUENCE) in the
   request (which means no operations in the request executed, and the
   state of the slot in the reply cache is unchanged), or it MAY opt to
   return it on a subsequent operation in the same COMPOUND or
   CB_COMPOUND request (which means at least one operation did execute
   and the state of the slot in reply cache does change).  The replier
   SHOULD set NFS4ERR_REQ_TOO_BIG on the operation that exceeds
   ca_maxrequestsize.

   If a reply exceeds ca_maxresponsesize, the reply will have the status
   NFS4ERR_REP_TOO_BIG.  A replier MAY return NFS4ERR_REP_TOO_BIG as the
   status for the first operation (SEQUENCE or CB_SEQUENCE) in the request,
   or it MAY opt to return it on a subsequent operation (in the same
   COMPOUND or CB_COMPOUND reply).  A replier MAY return
   NFS4ERR_REP_TOO_BIG in the reply to SEQUENCE or CB_SEQUENCE, even if
   the response would still exceed ca_maxresponsesize.
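
   The following non-normative sketch shows size checks of this kind
   against the channel attributes negotiated at session creation.  The
   structure is a hypothetical simplification and the numeric error
   values are illustrative.

      /* Hypothetical size checks against negotiated channel attributes. */
      #include <stdint.h>
      #include <stdio.h>

      #define NFS4_OK              0
      #define NFS4ERR_REQ_TOO_BIG  10066   /* value illustrative */
      #define NFS4ERR_REP_TOO_BIG  10067   /* value illustrative */

      struct channel_attrs {
          uint32_t ca_maxrequestsize;
          uint32_t ca_maxresponsesize;
          uint32_t ca_maxresponsesize_cached;
      };

      static int check_sizes(const struct channel_attrs *attrs,
                             uint32_t request_bytes, uint32_t reply_bytes)
      {
          if (request_bytes > attrs->ca_maxrequestsize)
              return NFS4ERR_REQ_TOO_BIG;
          if (reply_bytes > attrs->ca_maxresponsesize)
              return NFS4ERR_REP_TOO_BIG;
          return NFS4_OK;
      }

      int main(void)
      {
          struct channel_attrs a = { 1048576, 1048576, 4096 };
          printf("%d\n", check_sizes(&a, 2097152, 512));  /* 10066 */
          return 0;
      }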

   If sa_cachethis or csa_cachethis are TRUE, then the replier MUST
   cache a reply except if an error is returned by the SEQUENCE or
   CB_SEQUENCE operation (see Section 2.10.6.1.2).  If the reply exceeds
   ca_maxresponsesize_cached, (and sa_cachethis or csa_cachethis are
   TRUE) then the server MUST return NFS4ERR_REP_TOO_BIG_TO_CACHE.  Even
   if NFS4ERR_REP_TOO_BIG_TO_CACHE (or any other error for that matter)
   is returned on an operation other than the first operation (SEQUENCE or
   CB_SEQUENCE), then the reply MUST be cached if sa_cachethis or
   csa_cachethis are TRUE.  For example, if a COMPOUND has eleven
   operations, including SEQUENCE, the fifth operation is a RENAME, and
   the tenth operation is a READ for one million bytes, the server may
   return NFS4ERR_REP_TOO_BIG_TO_CACHE on the tenth operation.  Since
   the server executed several operations, especially the non-idempotent
   RENAME, the client's request to cache the reply needs to be honored
   in order for correct operation of exactly once semantics.  If the
   client retries the request, the server will have cached a reply that
   contains results for ten of the eleven requested operations, with the
   tenth operation having a status of NFS4ERR_REP_TOO_BIG_TO_CACHE.

   A client needs to take care, when sending operations that change the
   current filehandle (except for PUTFH, PUTPUBFH, PUTROOTFH, and
   RESTOREFH), not to exceed the maximum reply buffer size before the
   GETFH operation.  Otherwise the client will have to retry the
   operation that changed the current filehandle, in order to obtain the
   desired filehandle.  For the OPEN operation (see Section 18.16),
   retry is not always available as an option.  The following guidelines
   for the handling of filehandle changing operations are advised:

   o  Within the same COMPOUND procedure, a client SHOULD send GETFH
      immediately after a current filehandle changing operation.  A
      client MUST send GETFH after a current filehandle changing
      operation that is also non-idempotent (e.g., the OPEN operation),
      unless the operation is RESTOREFH.  RESTOREFH is an exception,
      because even though it is non-idempotent, the filehandle that
      RESTOREFH produces originated from an operation that is either
      idempotent (e.g.  PUTFH, LOOKUP) or non-idempotent (e.g.  OPEN,
      CREATE).  If
      the origin is non-idempotent, then because the client MUST send
      GETFH after the origin operation, the client can recover if
      RESTOREFH returns an error.

   o  A server MAY return NFS4ERR_REP_TOO_BIG or
      NFS4ERR_REP_TOO_BIG_TO_CACHE (if sa_cachethis is TRUE) on a
      filehandle changing operation if the reply would be too large on
      the next operation.

   o  A server SHOULD return NFS4ERR_REP_TOO_BIG or
      NFS4ERR_REP_TOO_BIG_TO_CACHE (if sa_cachethis is TRUE) on a
      filehandle changing non-idempotent operation if the reply would be
      too large on the next operation, especially if the operation is
      OPEN.

   o  A server MAY return NFS4ERR_UNSAFE_COMPOUND to a non-idempotent
      current filehandle changing operation, if it looks at the next
      operation (in the same COMPOUND procedure) and finds it is not
      GETFH.  The server SHOULD do this if it is unable to determine in
      advance whether the total response size would exceed
      ca_maxresponsesize_cached or ca_maxresponsesize.

2.10.6.5.  Persistence

   Since the reply cache is bounded, it is practical for the reply cache
   to persist across server restarts.  The replier MUST persist the
   following information if it agreed to persist the session (when the
   session was created; see Section 18.36):

   o  The session ID.

   o  The slot table including the sequence ID and cached reply for each
      slot.

   The above are sufficient for a replier to provide EOS semantics for
   any requests that were sent and executed before the server restarted.
   If the replier is a client, then there is no need for it to persist
   any more information, unless the client will be persisting all other
   state across client restart, in which case the server will never see
   any NFSv4.1-level protocol manifestation of a client restart.  If the
   replier is a server, then with just the slot table and session ID
   persisting, any requests the client retries after the server restart
   will return the results that are cached in the reply cache, and any
   new requests (i.e. requests whose sequence ID is one (1) greater than
   the slot's sequence ID) MUST be rejected with NFS4ERR_DEADSESSION
   (returned by
   SEQUENCE).  Such a session is considered dead.  A server MAY re-
   animate a session after a server restart so that the session will
   accept new requests as well as retries.  To re-animate a session the
   server needs to persist additional information through server
   restart:

   o  The client ID.  This is a prerequisite to let the client create
      more sessions associated with the same client ID as the persisted
      session.

   o  The client ID's sequence ID that is used for creating sessions
      (see Section 18.35 and Section 18.36).  This is a prerequisite to
      let the client create more sessions.

   o  The principal that created the client ID.  This allows the server
      to authenticate the client when it sends EXCHANGE_ID.

   o  The SSV, if SP4_SSV state protection was specified when the client
      ID was created (see Section 18.35).  This lets the client create
      new sessions, and associate connections with the new and existing
      sessions.

   o  The properties of the client ID as defined in Section 18.35.
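
   As a non-normative illustration, the information enumerated above
   might be grouped as in the following C sketch.  All type and field
   names are hypothetical; only the information itself is called for by
   the lists above.

   /* Sketch of per-session state a server might persist. */
   #include <stddef.h>
   #include <stdint.h>

   #define NFS4_SESSIONID_SIZE 16

   struct persisted_slot {
       uint32_t  seqid;        /* last sequence ID executed on the slot */
       size_t    reply_len;    /* length of the cached, encoded reply   */
       uint8_t  *cached_reply; /* XDR-encoded reply for the slot        */
   };

   struct persisted_session {
       /* Minimum needed to provide EOS semantics for retries: */
       uint8_t   sessionid[NFS4_SESSIONID_SIZE];
       uint32_t  nslots;
       struct persisted_slot *slots;

       /* Additional state needed to re-animate the session so that it
        * accepts new requests, not just retries: */
       uint64_t  clientid;        /* the client ID                      */
       uint32_t  clientid_seqid;  /* sequence ID used by CREATE_SESSION */
       char     *principal;       /* principal that created client ID   */
       uint8_t  *ssv;             /* SSV, if SP4_SSV was specified      */
       size_t    ssv_len;
       /* ...plus the client ID properties from Section 18.35 */
   };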

   A persistent reply cache places certain demands on the server.  The
   execution of the sequence of operations (starting with SEQUENCE) and
   placement of its results in the persistent cache MUST be atomic.  If
   a client retries a sequence of operations that was previously
   executed on the server, the only acceptable outcomes are either the
   original cached reply or an indication that the client ID or session
   has been lost (indicating a catastrophic loss of the reply cache or a
   session that has been deleted because the client failed to use the
   session for an extended period of time).

   A server could fail and restart in the middle of a COMPOUND procedure
   that contains one or more non-idempotent or idempotent-but-modifying
   operations.  This creates an even greater challenge for atomic
   execution and placement of results in the reply cache.  One way to
   view the problem is as a single transaction consisting of each
   operation in the COMPOUND followed by storing the result in
   persistent storage, then finally a transaction commit.  If there is a
   failure before the transaction is committed, then the server rolls
   back the transaction.  If the server itself fails, then when it
   restarts, its recovery logic could roll back the transaction before
   starting the NFSv4.1 server.
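
   To make the transactional view above concrete, here is a minimal,
   non-normative sketch in C.  It assumes the cached reply for a slot
   lives in its own file, uses write-to-a-temporary-file followed by
   rename() as the commit point, and relies on hypothetical helpers
   (execute_compound() and rollback_compound()) standing in for the
   server proper.

   #include <stdio.h>
   #include <unistd.h>

   /* Hypothetical helpers provided by the NFSv4.1 server itself. */
   extern int  execute_compound(const void *req, void **reply,
                                size_t *len);
   extern void rollback_compound(const void *req);

   /* Execute a COMPOUND and commit its reply to the persistent reply
    * cache atomically; rename() is the commit point. */
   int execute_and_cache(const void *req, const char *slot_path)
   {
       void *reply;
       size_t len;
       char tmp[4096];
       FILE *f;

       if (execute_compound(req, &reply, &len) != 0)
           return -1;                       /* nothing to commit */

       snprintf(tmp, sizeof(tmp), "%s.tmp", slot_path);
       f = fopen(tmp, "wb");
       if (f == NULL || fwrite(reply, 1, len, f) != len ||
           fflush(f) != 0 || fsync(fileno(f)) != 0) {
           if (f != NULL)
               fclose(f);
           rollback_compound(req);          /* failure before commit */
           return -1;
       }
       fclose(f);

       /* Commit: after a crash, either the old cached reply or the
        * complete new one is visible, never a partial write. */
       if (rename(tmp, slot_path) != 0) {
           rollback_compound(req);
           return -1;
       }
       return 0;
   }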

   While the description of the implementation for atomic execution of
   the request and caching of the reply is beyond the scope of this
   document, an example implementation for NFSv2 [37] is described in
   [38].

2.10.7.  RDMA Considerations

   A complete discussion of the operation of RPC-based protocols over
   RDMA transports is in [8].  A discussion of the operation of NFSv4,
   including NFSv4.1, over RDMA is in [9].  Where RDMA is considered,
   this specification assumes the use of such a layering; it addresses
   only the upper layer issues relevant to making best use of RPC/RDMA.

2.10.7.1.  RDMA Connection Resources

   RDMA requires its consumers to register memory and post buffers of a
   specific size and number for receive operations.

   Registration of memory can be a relatively high-overhead operation,
   since it requires pinning of buffers, assignment of attributes (e.g.
   readable/writable), and initialization of hardware translation.
   Preregistration is desirable to reduce overhead.  These registrations
   are specific to hardware interfaces and even to RDMA connection
   endpoints; therefore, negotiation of their limits is desirable to
   manage resources effectively.

   Following basic registration, these buffers must be posted by the RPC
   layer to handle receives.  These buffers remain in use by the RPC/
   NFSv4.1 implementation; the size and number of them must be known to
   the remote peer in order to avoid RDMA errors which would cause a
   fatal error on the RDMA connection.

   NFSv4.1 manages slots as resources on a per session basis (see
   Section 2.10), while RDMA connections manage credits on a per
   connection basis.  This means that in order for a peer to send data
   over RDMA to a remote buffer, it has to have both an NFSv4.1 slot,
   and an RDMA credit.  If multiple RDMA connections are associated with
   a session, and the total number of credits across all RDMA
   connections associated with the session is X, and the number of slots
   in the session is Y, then the maximum number of outstanding requests
   is the lesser of X and Y.
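
   This combined limit can be expressed with a small, non-normative
   helper (all names are illustrative):

   #include <stddef.h>
   #include <stdint.h>

   /* Outstanding requests are bounded both by the session's slot count
    * and by the sum of RDMA credits across the session's connections. */
   static uint32_t max_outstanding(uint32_t session_slots,
                                   const uint32_t *conn_credits,
                                   size_t nconns)
   {
       uint32_t total = 0;

       for (size_t i = 0; i < nconns; i++)
           total += conn_credits[i];
       return session_slots < total ? session_slots : total;
   }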

2.10.7.2.  Flow Control

   Previous versions of NFS do not provide flow control; instead they
   rely on the windowing provided by transports like TCP to throttle
   requests.  This does not work with RDMA, which provides no operation
   flow control and will terminate a connection in error when limits are
   exceeded.  Limits such as maximum number of requests outstanding are
   therefore negotiated when a session is created (see the
   ca_maxrequests field in Section 18.36).  These limits then provide
   the maxima within which each connection associated with the session's
   channel(s) must remain.  RDMA connections are managed within
   these limits as described in section 3.3 ("Flow Control"[[Comment.2:
   RFC Editor: please verify section and title of the RPCRDMA document
   which is currently at
   http://tools.ietf.org/html/draft-ietf-nfsv4-rpcrdma-08#section-3.3]])
   of [8]; if there are multiple RDMA connections, then the maximum
   number of requests for a channel will be divided among the RDMA
   connections.  Put a different way, the onus is on the replier to
   ensure that the total number of RDMA credits across all connections
   associated with the replier's channel does not exceed the channel's
   maximum number of outstanding requests.

   The limits may also be modified dynamically at the replier's choosing
   by manipulating certain parameters present in each NFSv4.1 reply.  In
   addition, the CB_RECALL_SLOT callback operation (see Section 20.8)
   can be sent by a server to a client to return RDMA credits to the
   server, thereby lowering the maximum number of requests a client can
   have outstanding to the server.

2.10.7.3.  Padding

   Header padding is requested by each peer at session initiation (see
   the ca_headerpadsize argument to CREATE_SESSION in Section 18.36),
   and subsequently used by the RPC RDMA layer, as described in [8].
   Zero padding is permitted.

   Padding leverages the useful property that RDMA preserves the
   alignment of data, even when the data is placed into anonymous
   (untagged) buffers.
   If requested, client inline writes will insert appropriate pad bytes
   within the request header to align the data payload on the specified
   boundary.  The client is encouraged to add sufficient padding (up to
   the negotiated size) so that the "data" field of the NFSv4.1 WRITE
   operation is aligned.  Most servers can make good use of such
   padding, which allows them to chain receive buffers in such a way
   that any data carried by client requests will be placed into
   appropriate buffers at the server, ready for file system processing.
   The receiver's RPC layer encounters no overhead from skipping over
   pad bytes, and the RDMA layer's high performance makes the insertion
   and transmission of padding on the sender a significant optimization.
   In this way, the need for servers to perform RDMA Read to satisfy all
   but the largest client writes is obviated.  An added benefit is the
   reduction of message round trips on the network - a potentially good
   trade, where latency is present.

   The value to choose for padding is subject to a number of criteria.
   A primary source of variable-length data in the RPC header is the
   authentication information, the form of which is client-determined,
   possibly in response to server specification.  The contents of
   COMPOUNDs, sizes of strings such as those passed to RENAME, etc. all
   go into the determination of a maximal NFSv4.1 request size and
   therefore minimal buffer size.  The client must select its offered
   value carefully, so as not to overburden the server, and vice versa.
   The benefit of an appropriate padding value is higher performance.

                    Sender gather:
        |RPC Request|Pad  bytes|Length| -> |User data...|
        \------+----------------------/      \
                \                             \
                 \    Receiver scatter:        \-----------+- ...
            /-----+----------------\            \           \
            |RPC Request|Pad|Length|   ->  |FS buffer|->|FS buffer|->...

   In the above case, the server may recycle buffers left unused by the
   actual received request to the next posted receive, or may pass the
   now-complete buffers by reference for normal write processing.  For a
   server that can make use of it, this removes any need to copy
   incoming data, without resorting to complicated end-to-end buffer
   advertisement and management.  This includes most kernel-based
   and integrated server designs, among many others.  The client may
   perform similar optimizations, if desired.

2.10.7.4.  Dual RDMA and Non-RDMA Transports

   Some RDMA transports (e.g., RFC5040 [10]) permit a "streaming" (non-
   RDMA) phase, where ordinary traffic might flow before "stepping up"
   to RDMA mode, commencing RDMA traffic.  Some RDMA transports start
   connections always in RDMA mode.  NFSv4.1 allows, but does not
   assume, a streaming phase before RDMA mode.  When a connection is
   associated with a session, the client and server negotiate whether
   the connection is used in RDMA or non-RDMA mode (see Section 18.36
   and Section 18.34).

2.10.8.  Sessions Security

2.10.8.1.  Session Callback Security

   Via session / connection association, NFSv4.1 improves security over
   that provided by NFSv4.0 for the backchannel.  The connection is
   client-initiated (see Section 18.34), and subject to the same
   firewall and routing checks as the fore channel.  At the client's
   option (see Section 18.35), connection association is fully
   authenticated before being activated (see Section 18.34).  Traffic
   from the server over the backchannel is authenticated exactly as the
   client specifies (see Section 2.10.8.2).

2.10.8.2.  Backchannel RPC Security

   When the NFSv4.1 client establishes the backchannel, it informs the
   server of the security flavors and principals to use when sending
   requests.  If the security flavor is RPCSEC_GSS, the client expresses
   the principal in the form of an established RPCSEC_GSS context.  The
   server is free to use any of the flavor/principal combinations the
   client offers, but it MUST NOT use unoffered combinations.  This way,
   the client need not provide a target GSS principal for the
   backchannel as it did with NFSv4.0, nor does the server have to
   implement an RPCSEC_GSS initiator as it did with NFSv4.0 [29].

   The CREATE_SESSION (Section 18.36) and BACKCHANNEL_CTL
   (Section 18.33) operations allow the client to specify flavor/
   principal combinations.

   Also note that the SP4_SSV state protection mode (see Section 18.35
   and Section 2.10.8.3) has the side benefit of providing SSV-derived
   RPCSEC_GSS contexts (Section 2.10.9).

2.10.8.3.  Protection from Unauthorized State Changes

   As described to this point in the specification, the state model of
   NFSv4.1 is vulnerable to an attacker that sends a SEQUENCE operation
   with a forged session ID and with a slot ID that it expects the
   legitimate client to use next.  When the legitimate client uses the
   slot ID with the same sequence number, the server returns the
   attacker's result from the reply cache, which disrupts the legitimate
   client and thus denies service to it.  Similarly, an attacker could
   send a CREATE_SESSION with a forged client ID to create a new session
   associated with the client ID.  The attacker could send requests
   using the new session that change locking state, such as LOCKU
   operations to release locks the legitimate client has acquired.
   Setting a security policy on the file that requires RPCSEC_GSS
   credentials when manipulating the file's state is one potential
   workaround, but it has the disadvantage of preventing a legitimate
   client from releasing state when RPCSEC_GSS is required to do so but
   a GSS context cannot be obtained (possibly because the user has
   logged off the client).

   NFSv4.1 provides three options to a client for state protection which
   are specified when a client creates a client ID via EXCHANGE_ID
   (Section 18.35).

   The first (SP4_NONE) is to simply waive state protection.

   The other two options (SP4_MACH_CRED and SP4_SSV) share several
   traits:

   o  An RPCSEC_GSS-based credential is used to authenticate client ID
      and session maintenance operations, including creating and
      destroying a session, associating a connection with the session,
      and destroying the client ID.

   o  Because RPCSEC_GSS is used to authenticate client ID and session
      maintenance, the attacker cannot associate a rogue connection with
      a legitimate session, or associate a rogue session with a
      legitimate client ID in order to maliciously alter the client ID's
      lock state via CLOSE, LOCKU, DELEGRETURN, LAYOUTRETURN, etc.

   o  In cases where the server's security policies on a portion of its
      namespace require RPCSEC_GSS authentication, a client may have to
      use an RPCSEC_GSS credential to remove per-file state (e.g.,
      LOCKU, CLOSE, etc.).  The server may require that the principal
      that removes the state match certain criteria (e.g., the principal
      might have to be the same as the one that acquired the state).
      However, the client might not have an RPCSEC_GSS context for such
      a principal, and might not be able to create such a context
      (perhaps because the user has logged off).  When the client
      establishes SP4_MACH_CRED or SP4_SSV protection, it can specify a
      list of operations that the server MUST allow using the machine
      credential (if SP4_MACH_CRED is used) or the SSV credential (if
      SP4_SSV is used).

   The SP4_MACH_CRED state protection option uses a machine credential
   where the principal that creates the client ID MUST also be the
   principal that performs client ID and session maintenance operations.

   The security of the machine credential state protection approach
   depends entirely on safeguarding the per-machine credential.
   Assuming a proper safeguard, using the per-machine credential for
   operations like CREATE_SESSION, BIND_CONN_TO_SESSION,
   DESTROY_SESSION, and DESTROY_CLIENTID will prevent an attacker from
   associating a rogue connection with a session, or associating a rogue
   session with a client ID.

   There are at least three scenarios for the SP4_MACH_CRED option:

   1.  The system administrator configures a unique, permanent per-
       machine credential for one of the mandated GSS mechanisms (e.g.,
       if Kerberos V5 is used, a "keytab" containing a principal derived
       from a client host name could be used).

   2.  The client is used by a single user, and so the client ID and its
       sessions are used by just that user.  If the user's credential
       expires, then session and client ID maintenance cannot occur, but
       since the client has a single user, only that user is
       inconvenienced.

   3.  The physical client has multiple users, but the client
       implementation has a unique client ID for each user.  This is
       effectively the same as the second scenario, but a disadvantage
       is that each user needs to be allocated at least one session, so
       the approach suffers from lack of economy.

   The SP4_SSV protection option uses the SSV (Section 1.5), via
   RPCSEC_GSS and the SSV GSS mechanism (Section 2.10.9), to protect
   state from attack.  The SP4_SSV protection option is intended for the
   situation in which a client has multiple active users and a system
   administrator wants to avoid the burden of installing a permanent
   machine credential on each client.  The SSV is established
   and updated on the server via SET_SSV (see Section 18.47).  To
   prevent eavesdropping, a client SHOULD send SET_SSV via RPCSEC_GSS
   with the privacy service.  Several aspects of the SSV make it
   intractable for an attacker to guess the SSV, and thus associate
   rogue connections with a session, and rogue sessions with a client
   ID:

   o  The arguments to and results of SET_SSV include digests of the old
      and new SSV, respectively.

   o  Because the initial value of the SSV is zero and therefore known,
      the client that opts for SP4_SSV protection and opts to apply
      SP4_SSV protection to BIND_CONN_TO_SESSION and CREATE_SESSION MUST
      send at least one SET_SSV operation before the first
      BIND_CONN_TO_SESSION operation or before the second CREATE_SESSION
      operation on a
      client ID.  If it does not, the SSV mechanism will not generate
      tokens (Section 2.10.9).  A client SHOULD send SET_SSV as soon as
      a session is created.

   o  A SET_SSV request does not replace the SSV with the argument to
      SET_SSV.  Instead, the current SSV on the server is logically
      exclusive ORed (XORed) with the argument to SET_SSV.  Each time a
      new principal uses a client ID for the first time, the client
      SHOULD send a SET_SSV with that principal's RPCSEC_GSS
      credentials, with RPCSEC_GSS service set to RPC_GSS_SVC_PRIVACY.

   Here are the types of attacks that can be attempted by an attacker
   named Eve on a victim named Bob, and how SP4_SSV protection foils
   each attack:

   o  Suppose Eve is the first user to log into a legitimate client.
      Eve's use of an NFSv4.1 file system will cause the legitimate
      client to create a client ID with SP4_SSV protection, specifying
      that the BIND_CONN_TO_SESSION operation MUST use the SSV
      credential.  Eve's use of the file system also causes an SSV to be
      created.  The SET_SSV operation that creates the SSV will be
      protected by the RPCSEC_GSS context created by the legitimate
      client which uses Eve's GSS principal and credentials.  Eve can
      eavesdrop on the network while her RPCSEC_GSS context is created,
      and the SET_SSV using her context is sent.  Even if the legitimate
      client sends the SET_SSV with RPC_GSS_SVC_PRIVACY, because Eve
      knows her own credentials, she can decrypt the SSV.  Eve can
      compute an RPCSEC_GSS credential that BIND_CONN_TO_SESSION will
      accept, and so associate a new connection with the legitimate
      session.  Eve can change the slot ID and sequence state of a
      legitimate session, and/or the SSV state, in such a way that when
      Bob accesses the server via the same legitimate client, the
      legitimate client will be unable to use the session.

      The client's only recourse is to create a new client ID for Bob to
      use, and establish a new SSV for the client ID.  The client will
      be unable to delete the old client ID, and will let the lease on
      the old client ID expire.

      Once the legitimate client establishes an SSV over the new session
      using Bob's RPCSEC_GSS context, Eve can use the new session via
      the legitimate client, but she cannot disrupt Bob. Moreover,
      because the client SHOULD have modified the SSV due to Eve using
      the new session, Bob cannot get revenge on Eve by associating a
      rogue connection with the session.

      The question is: how did the legitimate client detect that Eve
      had hijacked the old session?  When the client detects that a new
      principal, Bob, wants to use the session, it SHOULD have sent a
      SET_SSV, which leads to the following sub-scenarios:


      *  Let us suppose that from the rogue connection, Eve sent a
         SET_SSV with the same slot ID and sequence ID that the
         legitimate client later uses.  The server will assume the
         SET_SSV sent with Bob's credentials is a retry, and return to
         the legitimate client the reply it sent Eve. However, unless
         Eve can correctly guess the SSV the legitimate client will use,
         the digest verification checks in the SET_SSV response will
         fail.  That is an indication to the client that the session has
         apparently been hijacked.


      *  Alternatively, Eve sent a SET_SSV with a different slot ID than
         the legitimate client uses for its SET_SSV.  Then the digest
         verification of the SET_SSV sent with Bob's credentials fails
         on the server, and the error returned to the client makes it
         apparent that the session has been hijacked.


      *  Alternatively, Eve sent an operation other than SET_SSV, but
         with the same slot ID and sequence that the legitimate client
         uses for its SET_SSV.  The server returns to the legitimate
         client the response it sent Eve. The client sees that the
         response is not at all what it expects.  The client assumes
         either session hijacking or a server bug, and either way
         destroys the old session.


   o  Eve associates a rogue connection with the session as above, and
      then destroys the session.  Again, Bob goes to use the server from
      the legitimate client, which sends a SET_SSV using Bob's
      credentials.  The client receives an error that indicates the
      session does not exist.  When the client tries to create a new
      session, this will fail because the SSV it has does not match the
      one the server has, and now the client knows the session was
      hijacked.  The legitimate client establishes a new client ID.


   o  If Eve creates a connection before the legitimate client
      establishes an SSV, because the initial value of the SSV is zero
      and therefore known, Eve can send a SET_SSV that will pass the
         digest verification check.  However, because the new connection
         has
      not been associated with the session, the SET_SSV is rejected for
      that reason.

   In summary, an attacker's disruption of state when SP4_SSV protection
   is in use is limited to the formative period of a client ID, its
   first session, and the establishment of the SSV.  Once a non-
   malicious user uses the client ID, the client quickly detects any
   hijack and rectifies the situation.  Once a non-malicious user
   successfully modifies the SSV, the attacker cannot use NFSv4.1
   operations to disrupt the non-malicious user.

   Note that neither the SP4_MACH_CRED nor the SP4_SSV protection
   approach prevents hijacking of a transport connection that has
   previously been associated with a session.  If the goal of a counter-
   threat strategy is to prevent connection hijacking, the use of IPsec
   is RECOMMENDED.

   If a connection hijack occurs, the hijacker could in theory change
   locking state and negatively impact the service to legitimate
   clients.  However, if the server is configured to require the use of
   RPCSEC_GSS with integrity or privacy on the affected file objects,
   and if the EXCHGID4_FLAG_BIND_PRINC_STATEID capability (Section
   18.35) is in force, this will thwart unauthorized attempts to change
   locking state.

2.10.9.  The Secret State Verifier (SSV) GSS Mechanism

   The SSV provides the secret key for a GSS mechanism internal to
   NFSv4.1 that NFSv4.1 uses for state protection.  Contexts for this
   mechanism are not established via the RPCSEC_GSS protocol.  Instead,
   the contexts are automatically created when EXCHANGE_ID specifies
   SP4_SSV protection.  The only tokens defined are the PerMsgToken
   (emitted by GSS_GetMIC) and the SealedMessage token (emitted by
   GSS_Wrap).

   The mechanism OID for the SSV mechanism is:
   iso.org.dod.internet.private.enterprise.Michael Eisler.nfs.ssv_mech
   (1.3.6.1.4.1.28882.1.1).  While the SSV mechanism does not define any
   initial context tokens, the OID can be used to let servers indicate
   that the SSV mechanism is acceptable whenever the client sends a
   SECINFO or SECINFO_NO_NAME operation (see Section 2.6).

   The SSV mechanism defines four subkeys derived from the SSV value.
   Each time SET_SSV is invoked the subkeys are recalculated by the
   client and server.  The calculation of each of the four subkeys
   depends on each of the four respective ssv_subkey4 enumerated values.
   The calculation uses the HMAC [11] algorithm, using the current SSV
   as the key, the one way hash algorithm as negotiated by EXCHANGE_ID,
   and the input text as represented by the XDR encoded enumeration
   value for that subkey of data type ssv_subkey4.  If the length of the
   output of the HMAC algorithm exceeds the length of the key of the
   encryption algorithm (which is also negotiated by EXCHANGE_ID), then
   the subkey MUST be truncated from the HMAC output, i.e., if the
   subkey is N bytes long, then the first N bytes of the HMAC output
   MUST be used for the subkey.  The specification of EXCHANGE_ID states
   that the length of the output of the HMAC algorithm MUST NOT be less
   than the length of the subkey needed for the encryption algorithm
   (see
   Section 18.35).


   /* Input for computing subkeys */
   enum ssv_subkey4 {
           SSV4_SUBKEY_MIC_I2T     = 1,
           SSV4_SUBKEY_MIC_T2I     = 2,
           SSV4_SUBKEY_SEAL_I2T    = 3,
           SSV4_SUBKEY_SEAL_T2I    = 4
   };
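
   As a non-normative illustration, deriving one subkey might look like
   the following.  OpenSSL's HMAC() and SHA-256 are used purely for
   illustration; the actual one way hash algorithm and the encryption
   key length are whatever EXCHANGE_ID negotiated.

   #include <stdint.h>
   #include <string.h>
   #include <openssl/hmac.h>

   /* subkey = first subkey_len bytes of
    *          HMAC(current SSV, XDR-encoded ssv_subkey4 value) */
   static void derive_subkey(const uint8_t *ssv, size_t ssv_len,
                             uint32_t subkey_id, /* an ssv_subkey4 value */
                             uint8_t *subkey, size_t subkey_len)
   {
       /* XDR encodes an enum as a four-byte big-endian integer. */
       uint8_t input[4] = {
           (uint8_t)(subkey_id >> 24), (uint8_t)(subkey_id >> 16),
           (uint8_t)(subkey_id >> 8),  (uint8_t)subkey_id
       };
       uint8_t digest[EVP_MAX_MD_SIZE];
       unsigned int digest_len = 0;

       HMAC(EVP_sha256(), ssv, (int)ssv_len,
            input, sizeof(input), digest, &digest_len);

       /* Truncation: use only the first subkey_len bytes of the HMAC
        * output as the subkey. */
       memcpy(subkey, digest, subkey_len);
   }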


   The subkey derived from SSV4_SUBKEY_MIC_I2T is used for calculating
   message integrity codes (MICs) that originate from the NFSv4.1
   client, whether as part of a request over the fore channel, or a
   response over the backchannel.  The subkey derived from
   SSV4_SUBKEY_MIC_T2I is used for MICs originating from the NFSv4.1
   server.  The subkey derived from SSV4_SUBKEY_SEAL_I2T is used for
   encrypting text originating from the NFSv4.1 client, and the subkey
   derived from SSV4_SUBKEY_SEAL_T2I is used for encrypting text
   originating from the NFSv4.1 server.

   The PerMsgToken description is based on an XDR definition:


   /* Input for computing smt_hmac */
   struct ssv_mic_plain_tkn4 {
     uint32_t        smpt_ssv_seq;
     opaque          smpt_orig_plain<>;
   };




   /* SSV GSS PerMsgToken token */
   struct ssv_mic_tkn4 {
     uint32_t        smt_ssv_seq;
     opaque          smt_hmac<>;
   };


   The field smt_hmac is an HMAC calculated by using the subkey derived
   from SSV4_SUBKEY_MIC_I2T or SSV4_SUBKEY_MIC_T2I as the key, the one
   way hash algorithm as negotiated by EXCHANGE_ID, and the input text
   as represented by data of type ssv_mic_plain_tkn4.  The field
   smpt_ssv_seq is the same as smt_ssv_seq.  The field smpt_orig_plain
   is the "message" input passed to GSS_GetMIC() (see Section 2.3.1 of
   [7]).  The caller of GSS_GetMIC() provides a pointer to a buffer
   containing the plain text.  The SSV mechanism's entry point for
   GSS_GetMIC() encodes this into an opaque array, and the encoding will
   include an initial four byte length, plus any necessary padding.
   Prepended to this will be the XDR encoded value of smpt_ssv_seq thus
   making up an XDR encoding of a value of data type ssv_mic_plain_tkn4,
   which in turn is the input into the HMAC.

   The token emitted by GSS_GetMIC() is XDR encoded and of XDR data type
   ssv_mic_tkn4.  The field smt_ssv_seq comes from the SSV sequence
   number which is equal to 1 after SET_SSV (Section 18.47) is called
   the first time on a client ID.  Thereafter, the SSV sequence number
   is incremented on each SET_SSV.  Thus smt_ssv_seq represents the
   version of the SSV at the time GSS_GetMIC() was called.  As noted in
   Section 18.35, the client and server can maintain multiple concurrent
   versions of the SSV.  This allows the SSV to be changed without
   serializing all RPC calls that use the SSV mechanism with SET_SSV
   operations.  Once the HMAC is calculated, it is XDR encoded into
   smt_hmac, which will include an initial four byte length, and any
   necessary padding.  Prepended to this will be the XDR encoded value
   of smt_ssv_seq.
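
   A non-normative sketch of that construction follows.  The XDR
   helpers are hand-rolled for brevity, and OpenSSL with SHA-256 stands
   in for the one way hash algorithm negotiated by EXCHANGE_ID; no
   GSS-API framing is shown.

   #include <stdint.h>
   #include <stdlib.h>
   #include <string.h>
   #include <openssl/hmac.h>

   static size_t xdr_put_u32(uint8_t *p, uint32_t v)
   {
       p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
       return 4;
   }

   /* XDR opaque<>: four-byte length, data, zero pad to a multiple of
    * four bytes. */
   static size_t xdr_put_opaque(uint8_t *p, const uint8_t *d,
                                uint32_t len)
   {
       size_t n = xdr_put_u32(p, len);

       memcpy(p + n, d, len);
       n += len;
       while (n % 4)
           p[n++] = 0;
       return n;
   }

   /* Build the HMAC input (ssv_mic_plain_tkn4) from the SSV sequence
    * number and the plaintext, then compute smt_hmac. */
   static int compute_smt_hmac(const uint8_t *mic_subkey,
                               size_t key_len, uint32_t ssv_seq,
                               const uint8_t *plain, uint32_t plain_len,
                               uint8_t *hmac_out, unsigned int *hmac_len)
   {
       uint8_t *buf = malloc(4 + 4 + (size_t)plain_len + 3);
       size_t n;

       if (buf == NULL)
           return -1;
       /* smpt_ssv_seq, then smpt_orig_plain as an XDR opaque: */
       n  = xdr_put_u32(buf, ssv_seq);
       n += xdr_put_opaque(buf + n, plain, plain_len);

       HMAC(EVP_sha256(), mic_subkey, (int)key_len,
            buf, n, hmac_out, hmac_len);
       free(buf);
       return 0;
   }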

   The SealedMessage description is based on an XDR definition:


   /* Input for computing ssct_encr_data and ssct_hmac */
   struct ssv_seal_plain_tkn4 {
     opaque          sspt_confounder<>;
     uint32_t        sspt_ssv_seq;
     opaque          sspt_orig_plain<>;
     opaque          sspt_pad<>;
   };




   /* SSV GSS SealedMessage token */
   struct ssv_seal_cipher_tkn4 {
     uint32_t      ssct_ssv_seq;
     opaque        ssct_iv<>;
     opaque        ssct_encr_data<>;
     opaque        ssct_hmac<>;
   };

   The token emitted by GSS_Wrap() is XDR encoded and of XDR data type
   ssv_seal_cipher_tkn4.

   The ssct_ssv_seq field has the same meaning as smt_ssv_seq.

   The ssct_encr_data field is the result of encrypting a value of the
   XDR encoded data type ssv_seal_plain_tkn4.  The encryption key is the
   subkey derived from SSV4_SUBKEY_SEAL_I2T or SSV4_SUBKEY_SEAL_T2I, and
   the encryption algorithm is that negotiated by EXCHANGE_ID.

   The ssct_iv field is the initialization vector (IV) for the
   encryption algorithm (if applicable) and is sent in clear text.  The
   content and size of the IV MUST comply with the specification of the
   encryption algorithm.  For example, the id-aes256-CBC algorithm MUST
   use a 16 byte initialization vector (IV) which MUST be unpredictable
   for each instance of a value of type ssv_seal_plain_tkn4 that is
   encrypted with a particular SSV key.

   The ssct_hmac field is the result of computing an HMAC using the
   value of the XDR encoded data type ssv_seal_plain_tkn4 as the input
   text.  The
   key is the subkey derived from SSV4_SUBKEY_MIC_I2T or
   SSV4_SUBKEY_MIC_T2I, and the one way hash algorithm is that
   negotiated by EXCHANGE_ID.

   The sspt_confounder field is a random value.

   The sspt_ssv_seq field is the same as ssct_ssv_seq.

   The sspt_orig_plain field is the original plaintext and is the
   "input_message" input passed to GSS_Wrap() (see Section 2.3.3 of
   [7]).  As with the handling of the plaintext by the SSV mechanism's
   GSS_GetMIC() entry point, the entry point for GSS_Wrap() expects a
   pointer to the plaintext, and will XDR encode an opaque array into
   sspt_orig_plain representing the plain text, along with the other
   fields of an instance of data type ssv_seal_plain_tkn4.

   The sspt_pad field is present to support encryption algorithms that
   require inputs to be in fixed sized blocks.  The content of sspt_pad
   is zero filled except for the length.  Beware that the XDR encoding
   of ssv_seal_plain_tkn4 contains three variable length arrays, and so
   each array consumes four bytes for an array length, and each array
   that follows the length is always padded to a multiple of four bytes
   per the XDR standard.

   For example suppose the encryption algorithm uses 16 byte blocks, and
   the sspt_confounder is three bytes long, and the sspt_orig_plain
   field is 15 bytes long.  The XDR encoding of sspt_confounder uses
   eight bytes (4 + 3 + 1 byte pad), the XDR encoding of sspt_ssv_seq
   uses four bytes, the XDR encoding of sspt_orig_plain uses 20 bytes (4
   + 15 + 1 byte pad), and the smallest XDR encoding of the sspt_pad
   field is four bytes.  This totals 36 bytes.  The next multiple of 16
   is 48, thus the length field of sspt_pad needs to be set to 12 bytes,
   or a total encoding of 16 bytes.  The total number of XDR encoded
   bytes is thus 8 + 4 + 20 + 16 = 48.
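
   The arithmetic above can be reproduced with a short, non-normative
   helper (names hypothetical): each XDR item is rounded up to a four-
   byte multiple, and the total is then padded out to the cipher block
   size.

   #include <stdio.h>

   /* XDR opaque<>: four-byte length plus data, rounded up to a
    * multiple of four bytes. */
   static size_t xdr_opaque_size(size_t len)
   {
       return 4 + ((len + 3) & ~(size_t)3);
   }

   /* Zero-fill bytes needed in sspt_pad so that the XDR encoding of
    * ssv_seal_plain_tkn4 is a multiple of the cipher block size. */
   static size_t sspt_pad_fill(size_t confounder_len, size_t plain_len,
                               size_t block_size)
   {
       size_t base, total;

       base  = xdr_opaque_size(confounder_len); /* sspt_confounder */
       base += 4;                               /* sspt_ssv_seq    */
       base += xdr_opaque_size(plain_len);      /* sspt_orig_plain */
       base += 4;                               /* sspt_pad length */
       total = ((base + block_size - 1) / block_size) * block_size;
       return total - base;
   }

   int main(void)
   {
       /* 16-byte blocks, 3-byte confounder, 15-byte plaintext:
        * 8 + 4 + 20 + (4 + 12) = 48, so sspt_pad carries 12 bytes. */
       printf("%zu\n", sspt_pad_fill(3, 15, 16));
       return 0;
   }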

   GSS_Wrap() emits a token that is an XDR encoding of a value of data
   type ssv_seal_cipher_tkn4.  Note that regardless of whether or not
   the caller of GSS_Wrap() requests confidentiality, the token always
   has confidentiality.  This is because the SSV mechanism is for
   RPCSEC_GSS, and RPCSEC_GSS never produces GSS_wrap() tokens without
   confidentiality.

   There is one SSV per client ID.  Effectively there is a single GSS
   context for a client ID / SSV pair.  All SSV mechanism RPCSEC_GSS
   handles of a client ID / SSV pair share the same GSS context.  SSV
   GSS contexts do not expire except when the SSV is destroyed (causes
   would include the client ID being destroyed or a server restart).
   Since one purpose of context expiration is to replace keys that have
   been in use for "too long" and hence are vulnerable to compromise by
   brute force or accident, the client can replace the SSV key by
   sending periodic SET_SSV operations, cycling through different users'
   RPCSEC_GSS credentials.  This way, the SSV is replaced without
   destroying the SSV's GSS contexts.

   SSV RPCSEC_GSS handles can be expired or deleted by the server at any
   time and the EXCHANGE_ID operation can be used to create more SSV
   RPCSEC_GSS handles.  Expiration of SSV RPCSEC_GSS handles does not
   imply that the SSV or its GSS context have expired.

   The client MUST establish an SSV via SET_SSV before the SSV GSS
   context can be used to emit tokens from GSS_Wrap() and GSS_GetMIC().
   If SET_SSV has not been successfully called, attempts to emit tokens
   MUST fail.

   The SSV mechanism does not support replay detection and sequencing in
   its tokens because RPCSEC_GSS does not use those features (See
   Section 5.2.2 "Context Creation Requests" in [4]).

2.10.10.  Session Mechanics - Steady State

2.10.10.1.  Obligations of the Server

   The server has the primary obligation to monitor the state of
   backchannel resources that the client has created for the server
   (RPCSEC_GSS contexts and backchannel connections).  If these
   resources vanish, the server takes action as specified in
   Section 2.10.12.2.

2.10.10.2.  Obligations of the Client

   The client SHOULD honor the following obligations in order to utilize
   the session:

   o  Keep a necessary session from going idle on the server.  A client
      that requires a session, but nonetheless is not sending operations,
      risks having the session be destroyed by the server.  This is
      because sessions consume resources, and resource limitations may
      force the server to cull an inactive session.  A server MAY
      consider a session to be inactive if the client has not used the
      session before the session inactivity timer (Section 2.10.11) has
      expired.

   o  Destroy the session when not needed.  If a client has multiple
      sessions, one of which has no requests waiting for replies, and
      has been idle for some period of time, it SHOULD destroy the
      session.

   o  Maintain GSS contexts for the backchannel.  If the client requires
      the server to use the RPCSEC_GSS security flavor for callbacks,
      then it needs to be sure the contexts handed to the server via
      BACKCHANNEL_CTL are unexpired.

   o  Preserve a connection for a backchannel.  The server requires a
      backchannel in order to gracefully recall recallable state, or
      notify the client of certain events.  Note that if the connection
      is not being used for the fore channel, there is no way for the
      client to tell whether the connection is still alive (e.g., the
      server restarted without sending a disconnect).  The onus is on the
      server, not the client, to determine if the backchannel's
      connection is alive, and to indicate in the response to a SEQUENCE
      operation when the last connection associated with a session's
      backchannel has disconnected.

2.10.10.3.  Steps the Client Takes To Establish a Session

   If the client does not have a client ID, the client sends EXCHANGE_ID
   to establish a client ID.  If it opts for SP4_MACH_CRED or SP4_SSV
   protection, in the spo_must_enforce list of operations, it SHOULD at
   minimum specify: CREATE_SESSION, DESTROY_SESSION,
   BIND_CONN_TO_SESSION, BACKCHANNEL_CTL, and DESTROY_CLIENTID.  If the
   client opts for SP4_SSV protection, it needs to ask for SSV-based
   RPCSEC_GSS handles.

   The client uses the client ID to send a CREATE_SESSION on a
   connection to the server.  The results of CREATE_SESSION indicate
   whether or not the server will persist the session reply cache
   through a server restart, and the client notes this for future
   reference.

   If the client specified SP4_SSV state protection when the client ID
   was created, then it SHOULD send SET_SSV in the first COMPOUND after
   the session is created.  Each time a new principal goes to use the
   client ID, it SHOULD send a SET_SSV again.

   If the client wants to use delegations, layouts, directory
   notifications, or any other state that requires a backchannel, then
   it needs to add a connection to the backchannel if CREATE_SESSION did
   not already do so.  The client creates a connection, and calls
   BIND_CONN_TO_SESSION to associate the connection with the session and
   the session's backchannel.  If CREATE_SESSION did not already do so,
   the client MUST tell the server what security is required in order
   for the client to accept callbacks.  The client does this via
   BACKCHANNEL_CTL.  If the client selected SP4_MACH_CRED or SP4_SSV
   protection when it called EXCHANGE_ID, then the client SHOULD specify
   that the backchannel use RPCSEC_GSS contexts for security.

   If the client wants to use additional connections for the
   backchannel, then it needs to call BIND_CONN_TO_SESSION on each
   connection it wants to use with the session.  If the client wants to
   use additional connections for the fore channel, then it needs to
   call BIND_CONN_TO_SESSION if it specified SP4_SSV or SP4_MACH_CRED
   state protection when the client ID was created.

   At this point the session has reached steady state.

2.10.11.  Session Inactivity Timer

   The server MAY maintain a session inactivity timer for each session.
   If the session inactivity timer expires, then the server MAY destroy
   the session.  To avoid losing a session due to inactivity, the client
   MUST renew the session inactivity timer.  The length of session
   inactivity timer MUST NOT be less than the lease_time attribute
   (Section 5.8.1.11).  As with lease renewal (Section 8.3), when the
   server receives a SEQUENCE operation, it resets the session
   inactivity timer, and MUST NOT allow the timer to expire while the
   rest of the operations in the COMPOUND procedure's request are still
   executing.  Once the last operation has finished, the server MUST set
   the session inactivity timer to expire no sooner than the sum of the
   current time and the value of the lease_time attribute.
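
   A minimal, non-normative sketch of this bookkeeping follows; all
   names are hypothetical, and lease_time is the attribute from
   Section 5.8.1.11.

   #include <stdint.h>
   #include <time.h>

   struct session_timer {
       time_t   expires_at;   /* 0 while a COMPOUND is executing      */
       uint32_t lease_time;   /* seconds; >= the lease_time attribute */
   };

   /* SEQUENCE received: reset the timer and pin the session so that it
    * cannot expire while the rest of the COMPOUND executes. */
   static void sequence_received(struct session_timer *t)
   {
       t->expires_at = 0;
   }

   /* Last operation finished: expire no sooner than now + lease_time. */
   static void compound_finished(struct session_timer *t)
   {
       t->expires_at = time(NULL) + t->lease_time;
   }

   static int session_is_inactive(const struct session_timer *t)
   {
       return t->expires_at != 0 && time(NULL) > t->expires_at;
   }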

2.10.12.  Session Mechanics - Recovery

2.10.12.1.  Events Requiring Client Action

   The following events require client action to recover.

2.10.12.1.1.  RPCSEC_GSS Context Loss by Callback Path

   If all RPCSEC_GSS contexts granted by the client to the server for
   callback use have expired, the client MUST establish a new context
   via BACKCHANNEL_CTL.  The sr_status_flags field of the SEQUENCE
   results indicates when callback contexts are nearly expired, or fully
   expired (see Section 18.46.3).

2.10.12.1.2.  Connection Loss

   If the client loses the last connection of the session and wants to
   retain the session, then it needs to create a new connection, and if,
   when the client ID was created, BIND_CONN_TO_SESSION was specified in
   the spo_must_enforce list, the client MUST use BIND_CONN_TO_SESSION
   to associate the connection with the session.

   If there was a request outstanding at the time of the connection
   loss, then if the client wants to continue to use the session, it
   MUST retry the request, as described in Section 2.10.6.2.  Note that
   it is not necessary to retry requests over a connection with the same
   source network address or the same destination network address as the
   lost connection.  As long as the session ID, slot ID, and sequence ID
   in the retry match that of the original request, the server will
   recognize the request as a retry if it executed the request prior to
   disconnect.

   If the connection that was lost was the last one associated with the
   backchannel, and the client wants to retain the backchannel and/or
   not put recallable state at risk of revocation, the client needs to
   reconnect, and if it does, it MUST associate the connection to the
   session and backchannel via BIND_CONN_TO_SESSION.  The server SHOULD
   indicate when it has no callback connection via the sr_status_flags
   result from SEQUENCE.

2.10.12.1.3.  Backchannel GSS Context Loss

   Via the sr_status_flags result of the SEQUENCE operation or other
   means, the client will learn if some or all of the RPCSEC_GSS
   contexts it assigned to the backchannel have been lost.  If the
   client wants to retain the backchannel and/or not put recallable
   state at risk of revocation, the client needs to use
   BACKCHANNEL_CTL to assign new contexts.

2.10.12.1.4.  Loss of Session

   The replier might lose a record of the session.  Causes include:

   o  Replier failure and restart

   o  A catastrophe that causes the reply cache to be corrupted or lost
      on the media it was stored on.  This applies even if the replier
      indicated in the CREATE_SESSION results that it would persist the
      cache.

   o  The server purges the session of a client that has been inactive
      for a very extended period of time.

   o  As a result of configuration changes among a set of clustered
      servers, a network address previously connected to one server
      becomes connected to a different server which has no knowledge of
      the session in question.  Such a configuration change will
      generally only happen when the original server ceases to function
      for a time.

   Loss of reply cache is equivalent to loss of session.  The replier
   indicates loss of session to the requester by returning
   NFS4ERR_BADSESSION on the next operation that uses the session ID
   that refers to the lost session.

   After an event like a server restart, the client may have lost its
   connections.  The client assumes for the moment that the session has
   not been lost.  It reconnects, and if it specified connection
   association enforcement when the session was created, it invokes
   BIND_CONN_TO_SESSION using the session ID.  Otherwise, it invokes
   SEQUENCE.  If BIND_CONN_TO_SESSION or SEQUENCE returns
   NFS4ERR_BADSESSION, the client knows the session is not available to
   it when communicating with that network address.  If the connection
   survives session loss, then the next SEQUENCE operation the client
   sends over the connection will get back NFS4ERR_BADSESSION.  The
   client again knows the session was lost.

   Here is one suggested algorithm for the client when it gets
   NFS4ERR_BADSESSION.  It is not obligatory in that, if a client does
   not want to take advantage of such features as trunking, it may omit
   parts of it.  However, it is a useful example that draws attention to
   various possible recovery issues (a condensed sketch in C follows the
   numbered steps):

   1.  If the client has other connections to other server network
       addresses associated with the same session, attempt a COMPOUND
       with a single operation, SEQUENCE, on each of the other
       connections.

   2.  If the attempts succeed, the session is still alive, and this is
       a strong indicator that the server's network address has moved.
       The client might send an EXCHANGE_ID on the connection that
       returned NFS4ERR_BADSESSION to see if there are opportunities for
       client ID trunking (i.e., the same client ID and so_major_id are
       returned).  The client might use DNS to see if the moved network
       address was replaced with another, so that the performance and
       availability benefits of session trunking can continue.

   3.  If the SEQUENCE requests fail with NFS4ERR_BADSESSION then the
       session no longer exists on any of the server network addresses
       the client has connections associated with that session ID.  It
       is possible the session is still alive and available on other
       network addresses.  The client sends an EXCHANGE_ID on all the
       connections to see if the server owner is still listening on
       those network addresses.  If the same server owner is returned,
       but a new client ID is returned, this is a strong indicator of a
       server restart.  If both the same server owner and same client ID
       are returned, then this is a strong indication that the server
       did delete the session, and the client will need to send a
       CREATE_SESSION if it has no other sessions for that client ID.
       If a different server owner is returned, the client can use DNS
       to find other network addresses.  If it does not, or if DNS does
       not find any other addresses for the server, then the client will
       be unable to provide NFSv4.1 service, and fatal errors should be
       returned to processes that were using the server.  If the client
       is using a "mount" paradigm, unmounting the server is advised.

   4.  If the client knows of no other connections associated with the
       session ID, and of no server network addresses that are, or have
       been, associated with the session ID, then the client can use DNS
       to find other network addresses.  If it does not, or if DNS does
       not find any other addresses for the server, then the client will
       be unable to provide NFSv4.1 service, and fatal errors should be
       returned to processes that were using the server.  If the client
       is using a "mount" paradigm, unmounting the server is advised.
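
   The condensed sketch below renders the four steps in C.  Every
   helper function is a hypothetical stand-in for the SEQUENCE and
   EXCHANGE_ID exchanges and DNS queries described above, and the
   outcome names are illustrative only.

   enum recovery_outcome {
       SESSION_STILL_ALIVE,   /* address moved; session remains usable */
       SERVER_RESTARTED,      /* same server owner, new client ID      */
       SESSION_DELETED,       /* same server owner and client ID       */
       SERVER_UNREACHABLE     /* return fatal errors; unmount          */
   };

   extern int try_sequence_on_other_connections(void);
   extern int exchange_id_same_server_owner(void);
   extern int exchange_id_same_client_id(void);
   extern int dns_finds_other_addresses(void);

   static enum recovery_outcome
   recover_from_badsession(int have_other_connections)
   {
       if (have_other_connections) {
           /* Steps 1 and 2: probe other connections with SEQUENCE. */
           if (try_sequence_on_other_connections())
               return SESSION_STILL_ALIVE;

           /* Step 3: EXCHANGE_ID to see who is listening. */
           if (exchange_id_same_server_owner()) {
               if (exchange_id_same_client_id())
                   return SESSION_DELETED;   /* send CREATE_SESSION */
               return SERVER_RESTARTED;
           }
       }

       /* Steps 3 and 4: fall back to DNS for other addresses. */
       if (dns_finds_other_addresses())
           return SESSION_STILL_ALIVE;

       return SERVER_UNREACHABLE;
   }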

   If there is a reconfiguration event that results in the same network
   address being assigned to servers where the eir_server_scope value is
   different, it cannot be guaranteed that a session ID generated by the
   first server will be recognized as invalid by the second.  Therefore,
   in managing server reconfigurations among servers with different
   server scope values, it is necessary to make sure that all clients
   have disconnected from the first server before effecting the
   reconfiguration.  Nonetheless, clients should not assume that servers
   will always adhere to this requirement; clients MUST be prepared to
   deal with unexpected effects of server reconfigurations.  Even where
   a session ID is inappropriately recognized as valid, it is likely
   that either the connection will not be recognized as valid, or that a
   sequence value for a slot will not be correct.  Therefore, when a
   client receives results indicating such unexpected errors, the use of
   EXCHANGE_ID to determine the current server configuration and present
   the client to the server is RECOMMENDED.

   A variation on the above is that after a server's network address
   moves, there is no NFSv4.1 server listening.  For example, there may
   be no listener on port 2049, the NFSv4 server may return
   NFS4ERR_MINOR_VERS_MISMATCH, the NFS server may return a
   PROG_MISMATCH error, the RPC listener on port 2049 may return
   PROG_MISMATCH, or attempts to reconnect to the network address may
   time out.  These outcomes SHOULD be treated as equivalent to SEQUENCE
   returning NFS4ERR_BADSESSION for these purposes.

   When the client detects session loss, it needs to call CREATE_SESSION
   to recover.  Any non-idempotent operations that were in progress
   might have been performed on the server at the time of session loss.
   The client has no general way to recover from this.

   Note that loss of session does not imply loss of lock, open,
   delegation, or layout state because locks, opens, delegations, and
   layouts are tied to the client ID and depend on the client ID, not
   the session.  Nor does loss of lock, open, delegation, or layout
   state imply loss of session state, because the session depends on the
   client ID; loss of client ID however does imply loss of session,
   lock, open, delegation, and layout state.  See Section 8.4.2.  A
   session can survive a server restart, but lock recovery may still be
   needed.

   It is possible CREATE_SESSION will fail with NFS4ERR_STALE_CLIENTID
   (e.g. the server restarts and does not preserve client ID state).  If
   so, the client needs to call EXCHANGE_ID, followed by CREATE_SESSION.

2.10.12.2.  Events Requiring Server Action

   The following events require server action to recover.

2.10.12.2.1.  Client Crash and Restart

   As described in Section 18.35, a restarted client sends EXCHANGE_ID
   in such a way that it causes the server to delete any sessions it
   had.

2.10.12.2.2.  Client Crash with No Restart

   If a client crashes and never comes back, it will never send
   EXCHANGE_ID with its old client owner.  Thus the server has session
   state that will never be used again.  After an extended period of
   time, and if the server has resource constraints, it MAY destroy the
   old session as well as locking state.

2.10.12.2.3.  Extended Network Partition

   To the server, an extended network partition may be no different
   from a client crash with no restart (see Section 2.10.12.2.2).
   Unless the server can discern that there is a network partition, it
   is free to treat the situation as if the client has crashed
   permanently.

2.10.12.2.4.  Backchannel Connection Loss

   If there were callback requests outstanding at the time of a
   connection loss, then the server MUST retry the requests, as
   described in Section 2.10.6.2.  Note that it is not necessary to
   retry requests over a connection with the same source network address
   or the same destination network address as the lost connection.  As
   long as the session ID, slot ID, and sequence ID in the retry match
   that of the original request, the callback target will recognize the
   request as a retry if it did see the request prior to disconnect.

   If the lost connection was the last one associated with the
   backchannel, then the server MUST indicate that in the
   sr_status_flags field of every SEQUENCE reply until the backchannel
   is reestablished.  There are two situations, each of which uses a
   different status flag: no connectivity for the session's backchannel,
   and no connectivity for any session backchannel of the client.  See
   Section 18.46 for a description of the appropriate flags in
   sr_status_flags.

2.10.12.2.5.  GSS Context Loss

   The server SHOULD monitor when the number of RPCSEC_GSS contexts
   assigned to the backchannel reaches one, and when that one context is
   near expiry (i.e., between one and two periods of lease time),
   indicate so in the sr_status_flags field of all SEQUENCE replies.
   The server MUST indicate when all of the backchannel's assigned
   RPCSEC_GSS contexts have expired in the sr_status_flags field of all
   SEQUENCE replies.

2.10.13.  Parallel NFS and Sessions

   A client and server can potentially be a non-pNFS implementation, a
   metadata server implementation, a data server implementation, or two
   or three types of implementations.  The EXCHGID4_FLAG_USE_NON_PNFS,
   EXCHGID4_FLAG_USE_PNFS_MDS, and EXCHGID4_FLAG_USE_PNFS_DS flags (not
   mutually exclusive) are passed in the EXCHANGE_ID arguments and
   results to allow the client to indicate how it wants to use sessions
   created under the client ID, and to allow the server to indicate how
   it will allow the sessions to be used.  See Section 13.1 for pNFS
   sessions considerations.


3.  Protocol Constants and Data Types

   The syntax and semantics to describe the data types of the NFSv4.1
   protocol are defined in the XDR RFC4506 [2] and RPC RFC1831 [3]
   documents.  The next sections build upon the XDR data types to define
   constants, types and structures specific to this protocol.  The full
   list of XDR data types is in [12].

3.1.  Basic Constants

   const NFS4_FHSIZE               = 128;
   const NFS4_VERIFIER_SIZE        = 8;
   const NFS4_OPAQUE_LIMIT         = 1024;
   const NFS4_SESSIONID_SIZE       = 16;

   const NFS4_INT64_MAX            = 0x7fffffffffffffff;
   const NFS4_UINT64_MAX           = 0xffffffffffffffff;
   const NFS4_INT32_MAX            = 0x7fffffff;
   const NFS4_UINT32_MAX           = 0xffffffff;

   const NFS4_MAXFILELEN           = 0xffffffffffffffff;
   const NFS4_MAXFILEOFF           = 0xfffffffffffffffe;

   Except where noted, all these constants are defined in bytes.

   o  NFS4_FHSIZE is the maximum size of a filehandle.

   o  NFS4_VERIFIER_SIZE is the fixed size of a verifier.

   o  NFS4_OPAQUE_LIMIT is the maximum size of certain opaque
      information.

   o  NFS4_SESSIONID_SIZE is the fixed size of a session identifier.

   o  NFS4_INT64_MAX is the maximum value of a signed 64 bit integer.

   o  NFS4_UINT64_MAX is the maximum value of an unsigned 64 bit
      integer.

   o  NFS4_INT32_MAX is the maximum value of a signed 32 bit integer.

   o  NFS4_UINT32_MAX is the maximum value of an unsigned 32 bit
      integer.

   o  NFS4_MAXFILELEN is the maximum length of a regular file.

   o  NFS4_MAXFILEOFF is the maximum offset into a regular file.

3.2.  Basic Data Types

                  These are the base NFSv4.1 data types.

   +---------------+---------------------------------------------------+
   | Data Type     | Definition                                        |
   +---------------+---------------------------------------------------+
   | int32_t       | typedef int int32_t;                              |
   | uint32_t      | typedef unsigned int uint32_t;                    |
   | int64_t       | typedef hyper int64_t;                            |
   | uint64_t      | typedef unsigned hyper uint64_t;                  |
   | attrlist4     | typedef opaque attrlist4<>;                       |
   |               | Used for file/directory attributes.               |
   | bitmap4       | typedef uint32_t bitmap4<>;                       |
   |               | Used in attribute array encoding.                 |
   | changeid4     | typedef uint64_t changeid4;                       |
   |               | Used in the definition of change_info4.           |
   | clientid4     | typedef uint64_t clientid4;                       |
   |               | Shorthand reference to client identification.     |
   | count4        | typedef uint32_t count4;                          |
   |               | Various count parameters (READ, WRITE, COMMIT).   |
   | length4       | typedef uint64_t length4;                         |
   |               | The length of a byte range within a file.         |
   | mode4         | typedef uint32_t mode4;                           |
   |               | Mode attribute data type.                         |
   | nfs_cookie4   | typedef uint64_t nfs_cookie4;                     |
   |               | Opaque cookie value for READDIR.                  |
   | nfs_fh4       | typedef opaque nfs_fh4<NFS4_FHSIZE>;              |
   |               | Filehandle definition.                            |
   | nfs_ftype4    | enum nfs_ftype4;                                  |
   |               | Various defined file types.                       |
   | nfsstat4      | enum nfsstat4;                                    |
   |               | Return value for operations.                      |
   | offset4       | typedef uint64_t offset4;                         |
   |               | Various offset designations (READ, WRITE, LOCK,   |
   |               | COMMIT).                                          |
   | qop4          | typedef uint32_t qop4;                            |
   |               | Quality of protection designation in SECINFO.     |
   | sec_oid4      | typedef opaque sec_oid4<>;                        |
   |               | Security Object Identifier. The sec_oid4 data     |
   |               | type is not really opaque. Instead it contains an |
   |               | ASN.1 OBJECT IDENTIFIER as used by GSS-API in the |
   |               | mech_type argument to GSS_Init_sec_context. See   |
   |               | [7] for details.                                  |
   | sequenceid4   | typedef uint32_t sequenceid4;                     |
   |               | Sequence number used for various session          |
   |               | operations (EXCHANGE_ID, CREATE_SESSION,          |
   |               | SEQUENCE, CB_SEQUENCE).                           |
   | seqid4        | typedef uint32_t seqid4;                          |
   |               | Sequence identifier used for file locking.        |
   | sessionid4    | typedef opaque sessionid4[NFS4_SESSIONID_SIZE];   |
   |               | Session identifier.                               |
   | slotid4       | typedef uint32_t slotid4;                         |
   |               | Sequencing artifact for various session           |
   |               | operations (SEQUENCE, CB_SEQUENCE).               |
   | utf8string    | typedef opaque utf8string<>;                      |
   |               | UTF-8 encoding for strings.                       |
   | utf8str_cis   | typedef utf8string utf8str_cis;                   |
   |               | Case-insensitive UTF-8 string.                    |
   | utf8str_cs    | typedef utf8string utf8str_cs;                    |
   |               | Case-sensitive UTF-8 string.                      |
   | utf8str_mixed | typedef utf8string utf8str_mixed;                 |
   |               | UTF-8 strings with a case sensitive prefix and a  |
   |               | case insensitive suffix.                          |
   | component4    | typedef utf8str_cs component4;                    |
   |               | Represents path name components.                  |
   | linktext4     | typedef utf8str_cs linktext4;                     |
   |               | Symbolic link contents ("symbolic link" is        |
   |               | defined in an Open Group [13] standard).          |
   | pathname4     | typedef component4 pathname4<>;                   |
   |               | Represents path name for fs_locations.            |
   | verifier4     | typedef opaque verifier4[NFS4_VERIFIER_SIZE];     |
   |               | Verifier used for various operations (COMMIT,     |
   |               | CREATE, EXCHANGE_ID, OPEN, READDIR, WRITE)        |
   |               | NFS4_VERIFIER_SIZE is defined as 8.               |
   +---------------+---------------------------------------------------+

                          End of Base Data Types

                                  Table 1

3.3.  Structured Data Types

3.3.1.  nfstime4

   struct nfstime4 {
           int64_t         seconds;
           uint32_t        nseconds;
   };

   The nfstime4 data type gives the number of seconds and nanoseconds
   since midnight or 0 hour January 1, 1970 Coordinated Universal Time
   (UTC).  Values greater than zero for the seconds field denote dates
   after the 0 hour January 1, 1970.  Values less than zero for the
   seconds field denote dates before the 0 hour January 1, 1970.  In
   both cases, the nseconds field is to be added to the seconds field
   for the final time representation.  For example, if the time to be
   represented is one-half second before 0 hour January 1, 1970, the
   seconds field would have a value of negative one (-1) and the
   nseconds field would have a value of one-half second (500000000).
   Values greater than 999,999,999 for nseconds are invalid.

   This data type is used to pass time and date information.  A server
   converts to and from its local representation of time when processing
   time values, preserving as much accuracy as possible.  If the
   precision of timestamps stored for a file system object is less than
   defined, loss of precision can occur.  An adjunct time maintenance
   protocol is RECOMMENDED to reduce client and server time skew.
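
   The following C fragment is a non-normative sketch (not part of the
   protocol; the structure and helper names are illustrative only)
   showing one way to pack a signed count of nanoseconds since the
   epoch into the convention above, including the negative-seconds
   example given earlier.

   #include <inttypes.h>
   #include <stdint.h>
   #include <stdio.h>

   struct nfstime4 {
           int64_t  seconds;
           uint32_t nseconds;
   };

   /* Keep nseconds in [0, 999999999] so that it is always added to
    * the (possibly negative) seconds field, as described above. */
   static struct nfstime4 nanos_to_nfstime4(int64_t nanos)
   {
           struct nfstime4 t;
           int64_t sec = nanos / 1000000000;
           int64_t rem = nanos % 1000000000;

           if (rem < 0) {
                   sec -= 1;
                   rem += 1000000000;
           }
           t.seconds  = sec;
           t.nseconds = (uint32_t)rem;
           return t;
   }

   int main(void)
   {
           /* One-half second before the epoch: seconds == -1,
            * nseconds == 500000000, matching the example above. */
           struct nfstime4 t = nanos_to_nfstime4(-500000000);

           printf("%" PRId64 " %" PRIu32 "\n", t.seconds, t.nseconds);
           return 0;
   }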

3.3.2.  time_how4

   enum time_how4 {
           SET_TO_SERVER_TIME4 = 0,
           SET_TO_CLIENT_TIME4 = 1
   };

3.3.3.  settime4

   union settime4 switch (time_how4 set_it) {
    case SET_TO_CLIENT_TIME4:
            nfstime4       time;
    default:
            void;
   };

   The time_how4 and settime4 data types are used for setting timestamps
   in file object attributes.  If set_it is SET_TO_SERVER_TIME4, then
   the server uses its local representation of time for the time value.

3.3.4.  specdata4

   struct specdata4 {
    uint32_t specdata1; /* major device number */
    uint32_t specdata2; /* minor device number */
   };

   This data type represents the device numbers for the device file
   types NF4CHR and NF4BLK.

3.3.5.  fsid4

   struct fsid4 {
           uint64_t        major;
           uint64_t        minor;
   };

3.3.6.  chg_policy4

   struct change_policy4 {
           uint64_t        cp_major;
           uint64_t        cp_minor;
   };

   The chg_policy4 data type is used for the change_policy RECOMMENDED
   attribute.  It provides a change sequencing indication analogous to
   the change attribute.  To enable the server to present a value valid
   across server re-initialization without requiring persistent
   storage, two 64-bit quantities are used, allowing one to be a server
   instance ID and the second to be incremented non-persistently within
   a given server instance.

3.3.7.  fattr4

   struct fattr4 {
           bitmap4         attrmask;
           attrlist4       attr_vals;
   };

   The fattr4 data type is used to represent file and directory
   attributes.

   The bitmap is a counted array of 32 bit integers used to contain bit
   values.  The position of the integer in the array that contains bit n
   can be computed from the expression (n / 32) and its bit within that
   integer is (n mod 32).


   0            1
   +-----------+-----------+-----------+--
   |  count    | 31  ..  0 | 63  .. 32 |
   +-----------+-----------+-----------+--
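
   As a non-normative illustration (the helper names are not part of
   the protocol), the following C fragment sets and tests attribute bit
   n in such a counted array of 32-bit words using the (n / 32) and
   (n mod 32) rule above; the example bits 4 (size) and 19 (filehandle)
   are the attribute numbers assigned later in this document.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Set attribute bit n in a bitmap4-style array of "count" words. */
   static void bitmap_set(uint32_t *words, uint32_t count, uint32_t n)
   {
           if (n / 32 < count)
                   words[n / 32] |= (uint32_t)1 << (n % 32);
   }

   /* Test whether attribute bit n is set. */
   static bool bitmap_isset(const uint32_t *words, uint32_t count,
                            uint32_t n)
   {
           return n / 32 < count &&
               (words[n / 32] & ((uint32_t)1 << (n % 32))) != 0;
   }

   int main(void)
   {
           uint32_t words[2] = { 0, 0 };   /* count == 2 */

           bitmap_set(words, 2, 4);        /* size       */
           bitmap_set(words, 2, 19);       /* filehandle */
           printf("%d %d\n", bitmap_isset(words, 2, 4),
                  bitmap_isset(words, 2, 19));
           return 0;
   }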

3.3.8.  change_info4

   struct change_info4 {
           bool            atomic;
           changeid4       before;
           changeid4       after;
   };

   This data type is used with the CREATE, LINK, OPEN, REMOVE, and
   RENAME operations to let the client know the value of the change
   attribute for the directory in which the target file system object
   resides.

3.3.9.  netaddr4

   struct netaddr4 {
           /* see struct rpcb in RFC 1833 */
           string na_r_netid<>; /* network id */
           string na_r_addr<>;  /* universal address */
   };

   The netaddr4 data type is used to identify network transport
   endpoints.  The na_r_netid and na_r_addr fields respectively contain
   a netid and uaddr.  The netid and uaddr concepts are defined in [14].
   The netid and uaddr formats for TCP over IPv4 and TCP over IPv6 are
   defined in [14], specifically Tables 2 and 3 and Sections 4.2.3.3 and
   4.2.3.4.
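
   As a non-normative example of the conventions defined in [14], a TCP
   over IPv4 endpoint at address 192.0.2.10, port 2049, would be
   carried as na_r_netid "tcp" and na_r_addr "192.0.2.10.8.1", where
   the final two components are the high- and low-order octets of the
   port (2049 = 8 * 256 + 1).  The following C sketch (the helper name
   is illustrative only) formats such a universal address.

   #include <stdint.h>
   #include <stdio.h>

   /* Format "h1.h2.h3.h4.p1.p2" for a TCP-over-IPv4 endpoint. */
   static void format_uaddr_ipv4(const uint8_t ip[4], uint16_t port,
                                 char *buf, size_t buflen)
   {
           snprintf(buf, buflen, "%d.%d.%d.%d.%d.%d",
                    ip[0], ip[1], ip[2], ip[3],
                    port >> 8, port & 0xff);
   }

   int main(void)
   {
           const uint8_t ip[4] = { 192, 0, 2, 10 };
           char uaddr[32];

           format_uaddr_ipv4(ip, 2049, uaddr, sizeof(uaddr));
           printf("netid=tcp uaddr=%s\n", uaddr);  /* 192.0.2.10.8.1 */
           return 0;
   }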

3.3.10.  state_owner4

   struct state_owner4 {
           clientid4       clientid;
           opaque          owner<NFS4_OPAQUE_LIMIT>;
   };

   typedef state_owner4 open_owner4;
   typedef state_owner4 lock_owner4;

   The state_owner4 data type is the base type for the open_owner4
   (Section 3.3.10.1) and lock_owner4 (Section 3.3.10.2) data types.

3.3.10.1.  open_owner4

   This data type is used to identify the owner of open state.

3.3.10.2.  lock_owner4

   This structure is used to identify the owner of byte-range locking
   state.

3.3.11.  open_to_lock_owner4

   struct open_to_lock_owner4 {
           seqid4          open_seqid;
           stateid4        open_stateid;
           seqid4          lock_seqid;
           lock_owner4     lock_owner;
   };

   This data type is used for the first LOCK operation done for an
   open_owner4.  It provides both the open_stateid and lock_owner such
   that the transition is made from a valid open_stateid sequence to
   that of the new lock_stateid sequence.  Using this mechanism avoids
   the confirmation of the lock_owner/lock_seqid pair since it is tied
   to established state in the form of the open_stateid/open_seqid.

3.3.12.  stateid4

   struct stateid4 {
           uint32_t        seqid;
           opaque          other[12];
   };

   This data type is used for the various state sharing mechanisms
   between the client and server.  The client never modifies a value of
   data type stateid.  The starting value of the seqid field is
   undefined.  The server is required to increment the seqid field by
   one (1) at each transition of the stateid.  This is important since
   the client will inspect the seqid in OPEN stateids to determine the
   order of OPEN processing done by the server.
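
   As a non-normative illustration of the seqid rule above (the
   structure mirrors the XDR, but the helper names are illustrative and
   not an API defined by this document), a client might compare two
   stateids that designate the same state as follows; handling of seqid
   wraparound is deliberately omitted from this sketch.

   #include <stdbool.h>
   #include <stdint.h>
   #include <string.h>

   struct stateid4 {
           uint32_t seqid;
           uint8_t  other[12];
   };

   /* Same "other" field: the stateids designate the same state. */
   bool same_state(const struct stateid4 *a, const struct stateid4 *b)
   {
           return memcmp(a->other, b->other, sizeof(a->other)) == 0;
   }

   /* For stateids designating the same state, a larger seqid reflects
    * a later transition (e.g., later OPEN processing by the server). */
   bool is_later(const struct stateid4 *a, const struct stateid4 *b)
   {
           return same_state(a, b) && b->seqid > a->seqid;
   }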

3.3.13.  layouttype4

   enum layouttype4 {
           LAYOUT4_NFSV4_1_FILES   = 0x1,
           LAYOUT4_OSD2_OBJECTS    = 0x2,
           LAYOUT4_BLOCK_VOLUME    = 0x3
   };

   This data type indicates what type of layout is being used.  The file
   server advertises the layout types it supports through the
   fs_layout_type file system attribute (Section 5.12.1).  A client asks
   for layouts of a particular type in LAYOUTGET, and processes those
   layouts in its layout-type-specific logic.

   The layouttype4 data type is 32 bits in length.  The range
   represented by the layout type is split into three parts.  Type 0x0
   is reserved.  Types within the range 0x00000001-0x7FFFFFFF are
   globally unique and are assigned according to the description in
   Section 22.4; they are maintained by IANA.  Types within the range
   0x80000000-0xFFFFFFFF are site specific and for private use only.
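
   The following C fragment is a non-normative restatement of the
   ranges above (the enum and function names are illustrative only).

   #include <stdint.h>

   enum layouttype_range {
           LAYOUTTYPE_RESERVED,        /* 0x0                     */
           LAYOUTTYPE_IANA,            /* 0x00000001 - 0x7FFFFFFF */
           LAYOUTTYPE_SITE_SPECIFIC    /* 0x80000000 - 0xFFFFFFFF */
   };

   enum layouttype_range classify_layouttype(uint32_t lt)
   {
           if (lt == 0)
                   return LAYOUTTYPE_RESERVED;
           if (lt <= 0x7FFFFFFF)
                   return LAYOUTTYPE_IANA;
           return LAYOUTTYPE_SITE_SPECIFIC;
   }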

   The LAYOUT4_NFSV4_1_FILES enumeration specifies that the NFSv4.1 file
   layout type, as defined in Section 13, is to be used.  The
   LAYOUT4_OSD2_OBJECTS enumeration specifies that the object layout, as
   defined in [39], is to be used.  Similarly, the LAYOUT4_BLOCK_VOLUME
   enumeration specifies that the block/volume layout, as defined in
   [40], is to be used.

3.3.14.  deviceid4

   const NFS4_DEVICEID4_SIZE = 16;

   typedef opaque  deviceid4[NFS4_DEVICEID4_SIZE];

   Layout information includes device IDs that specify a storage device
   through a compact handle.  Addressing and type information is
   obtained with the GETDEVICEINFO operation.  Device IDs are not
   guaranteed to be valid across metadata server restarts.  A device ID
   is unique per client ID and layout type.  See Section 12.2.10 for
   more details.

3.3.15.  device_addr4

   struct device_addr4 {
           layouttype4             da_layout_type;
           opaque                  da_addr_body<>;
   };

   The device address is used to set up a communication channel with the
   storage device.  Different layout types will require different data
   types to define how they communicate with storage devices.  The
   opaque da_addr_body field is interpreted based on the specified
   da_layout_type field.

   This document defines the device address for the NFSv4.1 file layout
   (see Section 13.3), which identifies a storage device by network IP
   address and port number.  This is sufficient for the clients to
   communicate with the NFSv4.1 storage devices, and may be sufficient
   for other layout types as well.  Device types for object storage
   devices and block storage devices (e.g., SCSI volume labels) are
   defined by their respective layout specifications.

3.3.16.  layout_content4

   struct layout_content4 {
           layouttype4 loc_type;
           opaque      loc_body<>;
   };

   The loc_body field is interpreted based on the layout type
   (loc_type).  This document defines the loc_body for the NFSv4.1 file
   layout type; see Section 13.3 for its definition.

3.3.17.  layout4

   struct layout4 {
           offset4                 lo_offset;
           length4                 lo_length;
           layoutiomode4           lo_iomode;
           layout_content4         lo_content;
   };

   The layout4 data type defines a layout for a file.  The layout type
   specific data is opaque within lo_content.  Since layouts are sub-
   dividable, the offset and length together with the file's filehandle,
   the client ID, iomode, and layout type, identify the layout.

3.3.18.  layoutupdate4

   struct layoutupdate4 {
           layouttype4             lou_type;
           opaque                  lou_body<>;
   };

   The layoutupdate4 data type is used by the client to return updated
   layout information to the metadata server via the LAYOUTCOMMIT
   (Section 18.42) operation.  This data type provides a channel to pass
   layout type specific information (in field lou_body) back to the
   metadata server.  E.g., for the block/volume layout type this could
   include the list of reserved blocks that were written.  The contents
   of the opaque lou_body argument are determined by the layout type.
   The NFSv4.1 file-based layout does not use this data type; if
   lou_type is LAYOUT4_NFSV4_1_FILES, the lou_body field MUST have a
   zero length.

3.3.19.  layouthint4

   struct layouthint4 {
           layouttype4             loh_type;
           opaque                  loh_body<>;
   };

   The layouthint4 data type is used by the client to pass in a hint
   about the type of layout it would like created for a particular file.
   It is the data type specified by the layout_hint attribute described
   in Section 5.12.4.  The metadata server may ignore the hint, or may
   selectively ignore fields within the hint.  This hint should be
   provided at create time as part of the initial attributes within
   OPEN.  The loh_body field is specific to the type of layout
   (loh_type).  The NFSv4.1 file-based layout uses the
   nfsv4_1_file_layouthint4 data type as defined in Section 13.3.

3.3.20.  layoutiomode4

   enum layoutiomode4 {
           LAYOUTIOMODE4_READ      = 1,
           LAYOUTIOMODE4_RW        = 2,
           LAYOUTIOMODE4_ANY       = 3
   };

   The iomode specifies whether the client intends to just read or both
   read and write the data represented by the layout.  While the
   LAYOUTIOMODE4_ANY iomode MUST NOT be used in the arguments to the
   LAYOUTGET operation, it MAY be used in the arguments to the
   LAYOUTRETURN and CB_LAYOUTRECALL operations.  The LAYOUTIOMODE4_ANY
   iomode specifies that layouts pertaining to both LAYOUTIOMODE4_READ
   and LAYOUTIOMODE4_RW iomodes are being returned or recalled,
   respectively.  The metadata server's use of the iomode may depend on
   the layout type being used.  The storage devices MAY validate I/O
   accesses against the iomode and reject invalid accesses.

3.3.21.  nfs_impl_id4

   struct nfs_impl_id4 {
           utf8str_cis   nii_domain;
           utf8str_cs    nii_name;
           nfstime4      nii_date;
   };

   This data type is used to identify client and server implementation
   details.  The nii_domain field is the DNS domain name that the
   implementer is associated with.  The nii_name field is the product
   name of the implementation and is completely free form.  It is
   RECOMMENDED that the nii_name be used to distinguish machine
   architecture, machine platforms, revisions, versions, and patch
   levels.  The nii_date field is the timestamp of when the software
   instance was published or built.

3.3.22.  threshold_item4

   struct threshold_item4 {
           layouttype4     thi_layout_type;
           bitmap4         thi_hintset;
           opaque          thi_hintlist<>;
   };

   This data type contains a list of hints specific to a layout type for
   helping the client determine when it should send I/O directly through
   the metadata server versus the storage devices.  The data type
   consists of the layout type (thi_layout_type), a bitmap (thi_hintset)
   describing the set of hints supported by the server (they may differ
   based on the layout type), and a list of hints (thi_hintlist), whose
   content is determined by the hintset bitmap.  See the mdsthreshold
   attribute for more details.

   The thi_hintset field is a bitmap of the following values:

   +-------------------------+---+---------+---------------------------+
   | name                    | # | Data    | Description               |
   |                         |   | Type    |                           |
   +-------------------------+---+---------+---------------------------+
   | threshold4_read_size    | 0 | length4 | The file size below which |
   |                         |   |         | it is RECOMMENDED to read |
   |                         |   |         | data through the MDS.     |
   | threshold4_write_size   | 1 | length4 | The file size below which |
   |                         |   |         | it is RECOMMENDED to      |
   |                         |   |         | write data through the    |
   |                         |   |         | MDS.                      |
   | threshold4_read_iosize  | 2 | length4 | For read I/O sizes below  |
   |                         |   |         | this threshold it is      |
   |                         |   |         | RECOMMENDED to read data  |
   |                         |   |         | through the MDS           |
   | threshold4_write_iosize | 3 | length4 | For write I/O sizes below |
   |                         |   |         | this threshold it is      |
   |                         |   |         | RECOMMENDED to write data |
   |                         |   |         | through the MDS           |
   +-------------------------+---+---------+---------------------------+
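
   As a non-normative sketch of how a client might apply one of these
   hints (the function and parameter names are illustrative and not
   part of the protocol), the threshold4_read_size hint could drive a
   simple policy choice:

   #include <stdbool.h>
   #include <stdint.h>

   /* If the server supplied threshold4_read_size (hint bit 0) for the
    * file's layout type, read files smaller than that size through
    * the metadata server instead of the storage devices. */
   bool read_through_mds(bool have_read_size_hint,
                         uint64_t threshold_read_size,
                         uint64_t file_size)
   {
           if (!have_read_size_hint)
                   return false;          /* no hint; use the layout */
           return file_size < threshold_read_size;
   }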

3.3.23.  mdsthreshold4

   struct mdsthreshold4 {
           threshold_item4 mth_hints<>;
   };

   This data type holds an array of elements of data type
   threshold_item4, each of which is valid for a particular layout type.
   An array is necessary because a server can support multiple layout
   types for a single file.


4.  Filehandles

   The filehandle in the NFS protocol is a per server unique identifier
   for a file system object.  The contents of the filehandle are opaque
   to the client.  Therefore, the server is responsible for translating
   the filehandle to an internal representation of the file system
   object.

4.1.  Obtaining the First Filehandle

   The operations of the NFS protocol are defined in terms of one or
   more filehandles.  Therefore, the client needs a filehandle to
   initiate communication with the server.  With the NFSv3 protocol
   (RFC1813 [30]), there exists an ancillary protocol to obtain this
   first filehandle.  The MOUNT protocol, RPC program number 100005,
   provides the mechanism of translating a string based file system path
   name to a filehandle which can then be used by the NFS protocols.

   The MOUNT protocol has deficiencies in the area of security and use
   via firewalls.  This is one reason that the use of the public
   filehandle was introduced in RFC2054 [41] and RFC2055 [42].  With the
   use of the public filehandle in combination with the LOOKUP operation
   in the NFSv3 protocol, it has been demonstrated that the MOUNT
   protocol is unnecessary for viable interaction between NFS client and
   server.

   Therefore, the NFSv4.1 protocol will not use an ancillary protocol
   for translation from string based path names to a filehandle.  Two
   special filehandles will be used as starting points for the NFS
   client.

4.1.1.  Root Filehandle

   The first of the special filehandles is the ROOT filehandle.  The
   ROOT filehandle is the "conceptual" root of the file system name
   space at the NFS server.  The client uses or starts with the ROOT
   filehandle by employing the PUTROOTFH operation.  The PUTROOTFH
   operation instructs the server to set the "current" filehandle to the
   ROOT of the server's file tree.  Once this PUTROOTFH operation is
   used, the client can then traverse the entirety of the server's file
   tree with the LOOKUP operation.  A complete discussion of the server
   name space is in Section 7.

4.1.2.  Public Filehandle

   The second special filehandle is the PUBLIC filehandle.  Unlike the
   ROOT filehandle, the PUBLIC filehandle may be bound to or represent
   an arbitrary file system object at the server.  The server is
   responsible for this binding.  It may be that the PUBLIC filehandle
   and the ROOT filehandle refer to the same file system object.
   However, it is up to the administrative software at the server and
   the policies of the server administrator to define the binding of the
   PUBLIC filehandle and server file system object.  The client may not
   make any assumptions about this binding.  The client uses the PUBLIC
   filehandle via the PUTPUBFH operation.

4.2.  Filehandle Types

   In the NFSv3 protocol, there was one type of filehandle with a single
   set of semantics.  This type of filehandle is termed "persistent" in
   NFSv4.1.  The semantics of a persistent filehandle remain the same as
   before.  A new type of filehandle introduced in NFSv4.1 is the
   "volatile" filehandle, which attempts to accommodate certain server
   environments.

   The volatile filehandle type was introduced to address server
   functionality or implementation issues which make correct
   implementation of a persistent filehandle infeasible.  Some server
   environments do not provide a file system level invariant that can be
   used to construct a persistent filehandle.  The underlying server
   file system may not provide the invariant or the server's file system
   programming interfaces may not provide access to the needed
   invariant.  Volatile filehandles may ease the implementation of
   server functionality such as hierarchical storage management or file
   system reorganization or migration.  However, the volatile filehandle
   increases the implementation burden for the client.

   Since the client will need to handle persistent and volatile
   filehandles differently, a file attribute is defined which may be
   used by the client to determine the filehandle types being returned
   by the server.

4.2.1.  General Properties of a Filehandle

   The filehandle contains all the information the server needs to
   distinguish an individual file.  To the client, the filehandle is
   opaque.  The client stores filehandles for use in a later request and
   can compare two filehandles from the same server for equality by
   doing a byte-by-byte comparison.  However, the client MUST NOT
   otherwise interpret the contents of filehandles.  If two filehandles
   from the same server are equal, they MUST refer to the same file.
   Servers SHOULD try to maintain a one-to-one correspondence between
   filehandles and files but this is not required.  Clients MUST use
   filehandle comparisons only to improve performance, not for correct
   behavior.  All clients need to be prepared for situations in which it
   cannot be determined whether two filehandles denote the same object
   and in such cases, avoid making invalid assumptions which might cause
   incorrect behavior.  Further discussion of filehandle and attribute
   comparison in the context of data caching is presented in
   Section 10.3.4.
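
   The following C fragment is a non-normative sketch of the equality
   rule above (the structure and function names are illustrative):
   equality of two filehandles from the same server implies they refer
   to the same object, while inequality implies nothing.

   #include <stdbool.h>
   #include <string.h>

   struct nfs_fh4 {
           unsigned int  len;              /* at most NFS4_FHSIZE */
           unsigned char data[128];        /* NFS4_FHSIZE == 128  */
   };

   bool fh_equal(const struct nfs_fh4 *a, const struct nfs_fh4 *b)
   {
           return a->len == b->len &&
               memcmp(a->data, b->data, a->len) == 0;
   }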

   As an example, in the case that two different path names when
   traversed at the server terminate at the same file system object, the
   server SHOULD return the same filehandle for each path.  This can
   occur if a hard link (see [6]) is used to create two file names which
   refer to the same underlying file object and associated data.  For
   example, if paths /a/b/c and /a/d/c refer to the same file, the
   server SHOULD return the same filehandle for both path names
   traversals.

4.2.2.  Persistent Filehandle

   A persistent filehandle is defined as having a fixed value for the
   lifetime of the file system object to which it refers.  Once the
   server creates the filehandle for a file system object, the server
   MUST accept the same filehandle for the object for the lifetime of
   the object.  If the server restarts, the NFS server MUST honor the
   same filehandle value as it did in the server's previous
   instantiation.  Similarly, if the file system is migrated, the new
   NFS server MUST honor the same filehandle as the old NFS server.

   The persistent filehandle will become stale or invalid when the
   file system object is removed.  When the server is presented with a
   persistent filehandle that refers to a deleted object, it MUST return
   an error of NFS4ERR_STALE.  A filehandle may become stale when the
   file system containing the object is no longer available.  The file
   system may become unavailable if it exists on removable media and the
   media is no longer available at the server or the file system in
   whole has been destroyed or the file system has simply been removed
   from the server's name space (i.e. unmounted in a UNIX environment).

4.2.3.  Volatile Filehandle

   A volatile filehandle does not share the same longevity
   characteristics of a persistent filehandle.  The server may determine
   that a volatile filehandle is no longer valid at many different
   points in time.  If the server can definitively determine that a
   volatile filehandle refers to an object that has been removed, the
   server should return NFS4ERR_STALE to the client (as is the case for
   persistent filehandles).  In all other cases where the server
   determines that a volatile filehandle can no longer be used, it
   should return an error of NFS4ERR_FHEXPIRED.

   The REQUIRED attribute "fh_expire_type" is used by the client to
   determine what type of filehandle the server is providing for a
   particular file system.  This attribute is a bitmask with the
   following values:

   FH4_PERSISTENT  The value of FH4_PERSISTENT is used to indicate a
      persistent filehandle, which is valid until the object is removed
      from the file system.  The server will not return
      NFS4ERR_FHEXPIRED for this filehandle.  FH4_PERSISTENT is defined
      as a value in which none of the bits specified below are set.

   FH4_VOLATILE_ANY  The filehandle may expire at any time, except as
      specifically excluded (i.e., FH4_NOEXPIRE_WITH_OPEN).

   FH4_NOEXPIRE_WITH_OPEN  May only be set when FH4_VOLATILE_ANY is set.
      If this bit is set, then the meaning of FH4_VOLATILE_ANY is
      qualified to exclude any expiration of the filehandle when it is
      open.

   FH4_VOL_MIGRATION  The filehandle will expire as a result of a file
      system transition (migration or replication), in those cases in
      which the continuity of filehandle use is not specified by
      _handle_ class information within the fs_locations_info attribute.
      When this bit is set, clients without access to fs_locations_info
      information should assume filehandles will expire on file system
      transitions.

   FH4_VOL_RENAME  The filehandle will expire during rename.  This
      includes a rename by the requesting client or a rename by any
      other client.  If FH4_VOLATILE_ANY is set, FH4_VOL_RENAME is
      redundant.

   Servers which provide volatile filehandles that may expire while open
   require special care as regards handling of RENAMEs and REMOVEs.
   This situation can arise if FH4_VOL_MIGRATION or FH4_VOL_RENAME is
   set, if FH4_VOLATILE_ANY is set and FH4_NOEXPIRE_WITH_OPEN not set,
   or if a non-readonly file system has a transition target in a
   different _handle_ class.  In these cases, the server should deny a
   RENAME or REMOVE that would affect an OPEN file of any of the
   components leading to the OPEN file.  In addition, the server should
   deny all RENAME or REMOVE requests during the grace period after a
   server restart, in order to make sure that reclaims of files where
   filehandles may have expired do not do a reclaim for the wrong file.

   Volatile filehandles are especially suitable for implementation of
   the pseudo file systems used to bridge exports.  See Section 7.5 for
   a discussion of this.

4.3.  One Method of Constructing a Volatile Filehandle

   A volatile filehandle, while opaque to the client, could contain:

   [volatile bit = 1 | server boot time | slot | generation number]

   o  slot is an index in the server volatile filehandle table

   o  generation number is the generation number for the table entry/
      slot

   When the client presents a volatile filehandle, the server makes the
   following checks, which assume that the check for the volatile bit
   has passed.  If the server boot time is less than the current server
   boot time, return NFS4ERR_FHEXPIRED.  If slot is out of range, return
   NFS4ERR_BADHANDLE.  If the generation number does not match, return
   NFS4ERR_FHEXPIRED.

   When the server restarts, the table is gone (it is volatile).

   If volatile bit is 0, then it is a persistent filehandle with a
   different structure following it.
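
   The following C fragment is a non-normative sketch of the scheme
   above as a server might implement it; the structure layout, field
   widths, and return constants are illustrative only (the NFS4ERR_*
   names refer to the protocol errors, but their numeric values are not
   shown here).

   #include <stdint.h>

   struct volatile_fh {
           uint8_t  volatile_bit;   /* 1 for this format              */
           uint64_t boot_time;      /* server boot time when issued   */
           uint32_t slot;           /* index into the volatile table  */
           uint32_t generation;     /* generation of that table slot  */
   };

   enum fh_check {
           FH_OK,
           FH_EXPIRED,      /* maps to NFS4ERR_FHEXPIRED */
           FH_BADHANDLE     /* maps to NFS4ERR_BADHANDLE */
   };

   /* Assumes the check for the volatile bit has already passed. */
   enum fh_check check_volatile_fh(const struct volatile_fh *fh,
                                   uint64_t current_boot_time,
                                   const uint32_t *generations,
                                   uint32_t table_size)
   {
           if (fh->boot_time < current_boot_time)
                   return FH_EXPIRED;      /* issued before a restart */
           if (fh->slot >= table_size)
                   return FH_BADHANDLE;    /* slot out of range       */
           if (generations[fh->slot] != fh->generation)
                   return FH_EXPIRED;      /* slot has been reused    */
           return FH_OK;
   }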

4.4.  Client Recovery from Filehandle Expiration

   If possible, the client SHOULD recover from the receipt of an
   NFS4ERR_FHEXPIRED error.  The client must take on additional
   responsibility so that it may prepare itself to recover from the
   expiration of a volatile filehandle.  If the server returns
   persistent filehandles, the client does not need these additional
   steps.

   For volatile filehandles, most commonly the client will need to store
   the component names leading up to and including the file system
   object in question.  With these names, the client should be able to
   recover by finding a filehandle in the name space that is still
   available or by starting at the root of the server's file system name
   space.

   If the expired filehandle refers to an object that has been removed
   from the file system, obviously the client will not be able to
   recover from the expired filehandle.

   It is also possible that the expired filehandle refers to a file that
   has been renamed.  If the file was renamed by another client, again
   it is possible that the original client will not be able to recover.
   However, in the case that the client itself is renaming the file and
   the file is open, it is possible that the client may be able to
   recover.  The client can determine the new path name based on the
   processing of the rename request.  The client can then regenerate the
   new filehandle based on the new path name.  The client could also use
   the compound operation mechanism to construct a set of operations
   like:

             RENAME A B
             LOOKUP B
             GETFH

   Note that the COMPOUND procedure does not provide atomicity.  This
   example only reduces the overhead of recovering from an expired
   filehandle.


5.  File Attributes

   To meet the requirements of extensibility and increased
   interoperability with non-UNIX platforms, attributes need to be
   handled in a flexible manner.  The NFSv3 fattr3 structure contains a
   fixed list of attributes that not all clients and servers are able to
   support or care about.  The fattr3 structure cannot be extended as
   new needs arise and it provides no way to indicate non-support.  With
   the NFSv4.1 protocol, the client is able to query what attributes the
   server supports and construct requests with only those supported
   attributes (or a subset thereof).

   To this end, attributes are divided into three groups: REQUIRED,
   RECOMMENDED, and named.  Both REQUIRED and RECOMMENDED attributes are
   supported in the NFSv4.1 protocol by a specific and well-defined
   encoding and are identified by number.  They are requested by setting
   a bit in the bit vector sent in the GETATTR request; the server
   response includes a bit vector to list what attributes were returned
   in the response.  New REQUIRED or RECOMMENDED attributes may be added
   to the NFSv4 protocol as part of a new minor version by publishing a
   standards-track RFC which allocates a new attribute number value and
   defines the encoding for the attribute.  See Section 2.7 for further
   discussion.

   Named attributes are accessed by the new OPENATTR operation, which
   accesses a hidden directory of attributes associated with a file
   system object.  OPENATTR takes a filehandle for the object and
   returns the filehandle for the attribute hierarchy.  The filehandle
   for the named attributes is a directory object accessible by LOOKUP
   or READDIR and contains files whose names represent the named
   attributes and whose data bytes are the value of the attribute.  For
   example:

        +----------+-----------+---------------------------------+
        | LOOKUP   | "foo"     | ; look up file                  |
        | GETATTR  | attrbits  |                                 |
        | OPENATTR |           | ; access foo's named attributes |
        | LOOKUP   | "x11icon" | ; look up specific attribute    |
        | READ     | 0,4096    | ; read stream of bytes          |
        +----------+-----------+---------------------------------+

   Named attributes are intended for data needed by applications rather
   than by an NFS client implementation.  NFS implementors are strongly
   encouraged to define their new attributes as RECOMMENDED attributes
   by bringing them to the IETF standards-track process.

   The set of attributes which are classified as REQUIRED is
   deliberately small since servers need to do whatever it takes to
   support them.  A server should support as many of the RECOMMENDED
   attributes as possible but by their definition, the server is not
   required to support all of them.  Attributes are deemed REQUIRED if
   the data is both needed by a large number of clients and is not
   otherwise reasonably computable by the client when support is not
   provided on the server.

   Note that the hidden directory returned by OPENATTR is a convenience
   for protocol processing.  The client should not make any assumptions
   about the server's implementation of named attributes and whether the
   underlying file system at the server has a named attribute directory
   or not.  Therefore, operations such as SETATTR and GETATTR on the
   named attribute directory are undefined.

5.1.  REQUIRED Attributes

   These MUST be supported by every NFSv4.1 client and server in order
   to ensure a minimum level of interoperability.  The server MUST store
   and return these attributes and the client MUST be able to function
   with an attribute set limited to these attributes.  With just the
   REQUIRED attributes some client functionality may be impaired or
   limited in some ways.  A client may ask for any of these attributes
   to be returned by setting a bit in the GETATTR request and the server
   must return their value.

5.2.  RECOMMENDED Attributes

   These attributes are understood well enough to warrant support in the
   NFSv4.1 protocol.  However, they may not be supported on all clients
   and servers.  A client may ask for any of these attributes to be
   returned by setting a bit in the GETATTR request but must handle the
   case where the server does not return them.  A client MAY ask for the
   set of attributes the server supports and SHOULD NOT request
   attributes the server does not support.  A server should be tolerant
   of requests for unsupported attributes and simply not return them
   rather than considering the request an error.  It is expected that
   servers will support all attributes they comfortably can and only
   fail to support attributes which are difficult to support in their
   operating environments.  A server should provide attributes whenever
   it does not have to "tell lies" to the client.  For example, a file
   modification time should be either an accurate time or should not be
   supported by the server.  This will not always be comfortable to
   clients, but the client is better positioned to decide whether and
   how to fabricate or construct an attribute or whether to do without
   the attribute.

5.3.  Named Attributes

   These attributes are not supported by direct encoding in the NFSv4
   protocol but are accessed by string names rather than numbers and
   correspond to an uninterpreted stream of bytes which are stored with
   the file system object.  The name space for these attributes may be
   accessed by using the OPENATTR operation.  The OPENATTR operation
   returns a filehandle for a virtual "named attribute directory" and
   further perusal and modification of the name space may be done using
   operations that work on more typical directories.  In particular,
   READDIR may be used to get a list of such named attributes and LOOKUP
   and OPEN may select a particular attribute.  Creation of a new named
   attribute may be the result of an OPEN specifying file creation.

   Once an OPEN is done, named attributes may be examined and changed by
   normal READ and WRITE operations using the filehandles and stateids
   returned by OPEN.

   Named attributes and the named attribute directory may have their own
   (non-named) attributes.  Each of these objects MUST have all of the
   REQUIRED attributes and may have additional RECOMMENDED attributes.
   However, the set of attributes for named attributes and the named
   attribute directory need not be as large as, and typically will not
   be as large as, that for other objects in that file system.

   Named attributes and the named attribute directory may be the target
   of delegations (in the case of the named attribute directory these
   will be directory delegations).  However, since granting of
   delegations or not is within the server's discretion, a server need
   not support delegations on named attributes or the named attribute
   directory.

   It is RECOMMENDED that servers support arbitrary named attributes.  A
   client should not depend on the ability to store any named attributes
   in the server's file system.  If a server does support named
   attributes, a client which is also able to handle them should be able
   to copy a file's data and metadata with complete transparency from
   one location to another; this would imply that names allowed for
   regular directory entries are valid for named attribute names as
   well.

   In NFSv4.1, the structure of named attribute directories is
   restricted in a number of ways, in order to prevent the development
   of non-interoperable implementations in which some servers support a
   fully general hierarchical directory structure for named attributes
   while others support only a limited set that is nonetheless fully
   adequate to the feature's goals.  In such an environment, clients or
   applications might come to depend on non-portable extensions.  The
   restrictions are:

   o  CREATE is not allowed in a named attribute directory.  Thus, such
      objects as symbolic links and special files are not allowed to be
      named attributes.  Further, directories may not be created in a
      named attribute directory so no hierarchical structure of named
      attributes for a single object is allowed.

   o  If OPENATTR is done on a named attribute directory or on a named
      attribute, the server MUST return NFS4ERR_WRONG_TYPE.

   o  Doing a RENAME of a named attribute to a different named attribute
      directory or to an ordinary (i.e. non-named-attribute) directory
      is not allowed.

   o  Creating hard links between named attribute directories or between
      named attribute directories and ordinary directories is not
      allowed.

   Names of attributes will not be controlled by this document or other
   IETF standards track documents.  See Section 22.1 for further
   discussion.

5.4.  Classification of Attributes

   Each of the REQUIRED and RECOMMENDED attributes can be classified in
   one of three categories: per server (i.e. the value of the attribute
   will be the same for all file objects that share the same server
   owner; see Section 2.5 for a definition of server owner), per file
   system (i.e. the value of the attribute will be the same for some or
   all file objects that share the same fsid attribute (Section 5.8.1.9)
   and Server Owner), or per file system object.  Note that it is
   possible that some per file system attributes may vary within the
   file system, depending on the value of the "homogeneous"
   (Section 5.8.2.16) attribute.  Note that the attributes
   time_access_set and time_modify_set are not listed in this section
   because they are write-only attributes corresponding to time_access
   and time_modify, and are used in a special instance of SETATTR.

   o  The per server attribute is:

         lease_time

   o  The per file system attributes are:

         supported_attrs, suppattr_exclcreat, fh_expire_type,
         link_support, symlink_support, unique_handles, aclsupport,
         cansettime, case_insensitive, case_preserving,
         chown_restricted, files_avail, files_free, files_total,
         fs_locations, homogeneous, maxfilesize, maxname, maxread,
         maxwrite, no_trunc, space_avail, space_free, space_total,
         time_delta, change_policy, fs_status, fs_layout_type,
         fs_locations_info, fs_charset_cap

   o  The per file system object attributes are:

         type, change, size, named_attr, fsid, rdattr_error, filehandle,
         acl, archive, fileid, hidden, maxlink, mimetype, mode,
         numlinks, owner, owner_group, rawdev, space_used, system,
         time_access, time_backup, time_create, time_metadata,
         time_modify, mounted_on_fileid, dir_notif_delay,
         dirent_notif_delay, dacl, sacl, layout_type, layout_hint,
         layout_blksize, layout_alignment, mdsthreshold, retention_get,
         retention_set, retentevt_get, retentevt_set, retention_hold,
         mode_set_masked

   For quota_avail_hard, quota_avail_soft, and quota_used see their
   definitions below for the appropriate classification.

5.5.  Set-Only and Get-Only Attributes

   Some REQUIRED and RECOMMENDED attributes are set-only, i.e. they can
   be set via SETATTR but not retrieved via GETATTR.  Similarly, some
   REQUIRED and RECOMMENDED attributes are get-only, i.e. they can be
   retrieved via GETATTR but not set via SETATTR.  If a client attempts
   to set a get-only attribute or get a set-only attribute, the server
   MUST return NFS4ERR_INVAL.

5.6.  REQUIRED Attributes - List and Definition References

   The list of REQUIRED attributes appears in Table 2.  The meanings of
   the columns of the table are:

   o  Name: the name of the attribute.

   o  Id: the number assigned to the attribute.  In the event of
      conflicts between the assigned number and [12], the latter is
      likely authoritative, but should be resolved with Errata to this
      document and/or [12].  See [43] for the Errata process.

   o  Data Type: The XDR data type of the attribute.

   o  Acc: Access allowed to the attribute.  R means read-only (GETATTR
      may retrieve, SETATTR may not set).  W means write-only (SETATTR
      may set, GETATTR may not retrieve).  R W means read/write (GETATTR
      may retrieve, SETATTR may set).

   o  Defined in: the section of this specification that describes the
      attribute.

     +--------------------+----+------------+-----+------------------+
     | Name               | Id | Data Type  | Acc | Defined in:      |
     +--------------------+----+------------+-----+------------------+
     | supported_attrs    | 0  | bitmap4    | R   | Section 5.8.1.1  |
     | type               | 1  | nfs_ftype4 | R   | Section 5.8.1.2  |
     | fh_expire_type     | 2  | uint32_t   | R   | Section 5.8.1.3  |
     | change             | 3  | uint64_t   | R   | Section 5.8.1.4  |
     | size               | 4  | uint64_t   | R W | Section 5.8.1.5  |
     | link_support       | 5  | bool       | R   | Section 5.8.1.6  |
     | symlink_support    | 6  | bool       | R   | Section 5.8.1.7  |
     | named_attr         | 7  | bool       | R   | Section 5.8.1.8  |
     | fsid               | 8  | fsid4      | R   | Section 5.8.1.9  |
     | unique_handles     | 9  | bool       | R   | Section 5.8.1.10 |
     | lease_time         | 10 | nfs_lease4 | R   | Section 5.8.1.11 |
     | rdattr_error       | 11 | enum       | R   | Section 5.8.1.12 |
     | filehandle         | 19 | nfs_fh4    | R   | Section 5.8.1.13 |
     | suppattr_exclcreat | 75 | bitmap4    | R   | Section 5.8.1.14 |
     +--------------------+----+------------+-----+------------------+

                                  Table 2

5.7.  RECOMMENDED Attributes - List and Definition References

   The RECOMMENDED attributes are defined in Table 3.  The meanings of
   the column headers are the same as in Table 2; see Section 5.6 for
   those meanings.

   +--------------------+----+----------------+-----+------------------+
   | Name               | Id | Data Type      | Acc | Defined in:      |
   +--------------------+----+----------------+-----+------------------+
   | acl                | 12 | nfsace4<>      | R W | Section 6.2.1    |
   | aclsupport         | 13 | uint32_t       | R   | Section 6.2.1.2  |
   | archive            | 14 | bool           | R W | Section 5.8.2.1  |
   | cansettime         | 15 | bool           | R   | Section 5.8.2.2  |
   | case_insensitive   | 16 | bool           | R   | Section 5.8.2.3  |
   | case_preserving    | 17 | bool           | R   | Section 5.8.2.4  |
   | change_policy      | 60 | chg_policy4    | R   | Section 5.8.2.5  |
   | chown_restricted   | 18 | bool           | R   | Section 5.8.2.6  |
   | dacl               | 58 | nfsacl41       | R W | Section 6.2.2    |
   | dir_notif_delay    | 56 | nfstime4       | R   | Section 5.11.1   |
   | dirent_notif_delay | 57 | nfstime4       | R   | Section 5.11.2   |
   | fileid             | 20 | uint64_t       | R   | Section 5.8.2.7  |
   | files_avail        | 21 | uint64_t       | R   | Section 5.8.2.8  |
   | files_free         | 22 | uint64_t       | R   | Section 5.8.2.9  |
   | files_total        | 23 | uint64_t       | R   | Section 5.8.2.10 |
   | fs_charset_cap     | 76 | uint32_t       | R   | Section 5.8.2.11 |
   | fs_layout_type     | 62 | layouttype4<>  | R   | Section 5.12.1   |
   | fs_locations       | 24 | fs_locations   | R   | Section 5.8.2.12 |
   | fs_locations_info  | 67 | *              | R   | Section 5.8.2.13 |
   | fs_status          | 61 | fs4_status     | R   | Section 5.8.2.14 |
   | hidden             | 25 | bool           | R W | Section 5.8.2.15 |
   | homogeneous        | 26 | bool           | R   | Section 5.8.2.16 |
   | layout_alignment   | 66 | uint32_t       | R   | Section 5.12.2   |
   | layout_blksize     | 65 | uint32_t       | R   | Section 5.12.3   |
   | layout_hint        | 63 | layouthint4    |   W | Section 5.12.4   |
   | layout_type        | 64 | layouttype4<>  | R   | Section 5.12.5   |
   | maxfilesize        | 27 | uint64_t       | R   | Section 5.8.2.17 |
   | maxlink            | 28 | uint32_t       | R   | Section 5.8.2.18 |
   | maxname            | 29 | uint32_t       | R   | Section 5.8.2.19 |
   | maxread            | 30 | uint64_t       | R   | Section 5.8.2.20 |
   | maxwrite           | 31 | uint64_t       | R   | Section 5.8.2.21 |
   | mdsthreshold       | 68 | mdsthreshold4  | R   | Section 5.12.6   |
   | mimetype           | 32 | utf8<>         | R W | Section 5.8.2.22 |
   | mode               | 33 | mode4          | R W | Section 6.2.4    |
   | mode_set_masked    | 74 | mode_masked4   |   W | Section 6.2.5    |
   | mounted_on_fileid  | 55 | uint64_t       | R   | Section 5.8.2.23 |
   | no_trunc           | 34 | bool           | R   | Section 5.8.2.24 |
   | numlinks           | 35 | uint32_t       | R   | Section 5.8.2.25 |
   | owner              | 36 | utf8<>         | R W | Section 5.8.2.26 |
   | owner_group        | 37 | utf8<>         | R W | Section 5.8.2.27 |
   | quota_avail_hard   | 38 | uint64_t       | R   | Section 5.8.2.28 |
   | quota_avail_soft   | 39 | uint64_t       | R   | Section 5.8.2.29 |
   | quota_used         | 40 | uint64_t       | R   | Section 5.8.2.30 |
   | rawdev             | 41 | specdata4      | R   | Section 5.8.2.31 |
   | retentevt_get      | 71 | retention_get4 | R   | Section 5.13.3   |
   | retentevt_set      | 72 | retention_set4 |   W | Section 5.13.4   |
   | retention_get      | 69 | retention_get4 | R   | Section 5.13.1   |
   | retention_hold     | 73 | uint64_t       | R W | Section 5.13.5   |
   | retention_set      | 70 | retention_set4 |   W | Section 5.13.2   |
   | sacl               | 59 | nfsacl41       | R W | Section 6.2.3    |
   | space_avail        | 42 | uint64_t       | R   | Section 5.8.2.32 |
   | space_free         | 43 | uint64_t       | R   | Section 5.8.2.33 |
   | space_total        | 44 | uint64_t       | R   | Section 5.8.2.34 |
   | space_used         | 45 | uint64_t       | R   | Section 5.8.2.35 |
   | system             | 46 | bool           | R W | Section 5.8.2.36 |
   | time_access        | 47 | nfstime4       | R   | Section 5.8.2.37 |
   | time_access_set    | 48 | settime4       |   W | Section 5.8.2.38 |
   | time_backup        | 49 | nfstime4       | R W | Section 5.8.2.39 |
   | time_create        | 50 | nfstime4       | R W | Section 5.8.2.40 |
   | time_delta         | 51 | nfstime4       | R   | Section 5.8.2.41 |
   | time_metadata      | 52 | nfstime4       | R   | Section 5.8.2.42 |
   | time_modify        | 53 | nfstime4       | R   | Section 5.8.2.43 |
   | time_modify_set    | 54 | settime4       |   W | Section 5.8.2.44 |
   +--------------------+----+----------------+-----+------------------+

                                  Table 3

   * fs_locations_info4

5.8.  Attribute Definitions

5.8.1.  Definitions of REQUIRED Attributes

5.8.1.1.  Attribute 0: supported_attrs

   The bit vector which would retrieve all REQUIRED and RECOMMENDED
   attributes that are supported for this object.  The scope of this
   attribute applies to all objects with a matching fsid.

5.8.1.2.  Attribute 1: type

   Designates the type of an object in terms of one of a number of
   special constants:

   o  NF4REG designates a regular file.

   o  NF4DIR designates a directory.

   o  NF4BLK designates a block device special file.

   o  NF4CHR designates a character device special file.

   o  NF4LNK designates a symbolic link.

   o  NF4SOCK designates a named socket special file.

   o  NF4FIFO designates a fifo special file.

   o  NF4ATTRDIR designates a named attribute directory.

   o  NF4NAMEDATTR designates a named attribute.

   Within the explanatory text and operation descriptions, the following
   phrases will be used with the meanings given below:

   o  The phrase "is a directory" means that the object is of type
      NF4DIR or of type NF4ATTRDIR.

   o  The phrase "is a special file" means that the object is of one of
      the types NF4BLK, NF4CHR, NF4SOCK, or NF4FIFO.

   o  The phrase "is an ordinary file" means that the object is of type
      NF4REG or of type NF4NAMEDATTR.







5.8.1.3.  Attribute 2: fh_expire_type

   Server uses this to specify filehandle expiration behavior to the
   client.  See Section 4 for additional description.

5.8.1.4.  Attribute 3: change

   A value created by the server that the client can use to determine if
   file data, directory contents or attributes of the object have been
   modified.  The server may return the object's time_metadata attribute
   for this attribute's value but only if the file system object can not
   be updated more frequently than the resolution of time_metadata.

5.8.1.5.  Attribute 4: size

   The size of the object in bytes.

5.8.1.6.  Attribute 5: link_support

   True, if the object's file system supports hard links.

5.8.1.7.  Attribute 6: symlink_support

   True, if the object's file system supports symbolic links.

5.8.1.8.  Attribute 7: named_attr

   True, if this object has named attributes.  In other words, this
   object has a non-empty named attribute directory.

5.8.1.9.  Attribute 8: fsid

   Unique file system identifier for the file system holding this
   object. fsid contains major and minor components each of which are of
   data type uint64_t.

5.8.1.10.  Attribute 9: unique_handles

   True, if two distinct filehandles are guaranteed to refer to two
   different file system objects.

5.8.1.11.  Attribute 10: lease_time

   Duration of leases at server in seconds.









5.8.1.12.  Attribute 11: rdattr_error

   Error returned from an attempt to retrieve attributes during a
   READDIR operation.

5.8.1.13.  Attribute 19: filehandle

   The filehandle of this object (primarily for READDIR requests).

5.8.1.14.  Attribute 75: suppattr_exclcreat

   The bit vector which would set all REQUIRED and RECOMMENDED
   attributes that are supported by the EXCLUSIVE4_1 method of file
   creation via the OPEN operation.  The scope of this attribute applies
   to all objects with a matching fsid.

5.8.2.  Definitions of Uncategorized RECOMMENDED Attributes

   The definitions of most of the RECOMMENDED attributes follow.
   Collections that share a common category are defined in other
   sections.

5.8.2.1.  Attribute 14: archive

   True, if this file has been archived since the time of last
   modification (deprecated in favor of time_backup).

5.8.2.2.  Attribute 15: cansettime

   True, if the server is able to change the times for a file system
   object as specified in a SETATTR operation.

5.8.2.3.  Attribute 16: case_insensitive

   True, if file name comparisons on this file system are case
   insensitive.

5.8.2.4.  Attribute 17: case_preserving

   True, if file name case on this file system is preserved.

5.8.2.5.  Attribute 60: change_policy

   A value created by the server that the client can use to determine if
   some server policy related to the current file system has been
   subject to change.  If the value remains the same then the client can
   be sure that the values of the attributes related to fs location and
   the fss_type field of the fs_status attribute have not changed.  On
   the other hand, a change in this value does not necessarily imply a
   change in policy.  It is up to the client to interrogate the server
   to determine if some policy relevant to it has changed.  See
   Section 3.3.6 for details.

   This attribute MUST change when the value returned by the
   fs_locations or fs_locations_info attribute changes, when a file
   system goes from read-only to writable or vice versa, or when the
   allowable set of security flavors for the file system or any part
   thereof is changed.
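
   The following non-normative C sketch shows the comparison a client
   might perform: it caches the last change_policy value seen for a
   file system and, when a newly fetched value differs, re-interrogates
   the server for the policies it cares about.  The structure and
   function names are hypothetical, and the attribute value is modeled
   as a single 64-bit integer purely for illustration.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Illustrative per-file-system client state. */
   struct fs_policy_cache {
           uint64_t change_policy;    /* last value seen            */
           bool     valid;            /* has a value been cached?   */
   };

   /*
    * Returns true if the client needs to re-fetch policy-related
    * attributes (fs_locations, fs_locations_info, fs_status, ...).
    */
   static bool
   policy_refresh_needed(struct fs_policy_cache *c, uint64_t fetched)
   {
           bool changed = !c->valid || c->change_policy != fetched;

           c->change_policy = fetched;
           c->valid = true;
           return changed;
   }

   int
   main(void)
   {
           struct fs_policy_cache c = { 0, false };

           printf("%d\n", policy_refresh_needed(&c, 7));  /* 1 */
           printf("%d\n", policy_refresh_needed(&c, 7));  /* 0 */
           printf("%d\n", policy_refresh_needed(&c, 8));  /* 1 */
           return 0;
   }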

5.8.2.6.  Attribute 18: chown_restricted

   If TRUE, the server will reject any request to change either the
   owner or the group associated with a file if the caller is not a
   privileged user (for example, "root" in UNIX operating environments
   or in Windows 2000 the "Take Ownership" privilege).

5.8.2.7.  Attribute 20: fileid

   A number uniquely identifying the file within the file system.

5.8.2.8.  Attribute 21: files_avail

   File slots available to this user on the file system containing this
   object - this should be the smallest relevant limit.

5.8.2.9.  Attribute 22: files_free

   Free file slots on the file system containing this object - this
   should be the smallest relevant limit.

5.8.2.10.  Attribute 23: files_total

   Total file slots on the file system containing this object.

5.8.2.11.  Attribute 76: fs_charset_cap

   Character set capabilities for this file system.  See Section 14.4.

5.8.2.12.  Attribute 24: fs_locations

   Locations where this file system may be found.  If the server returns
   NFS4ERR_MOVED as an error, this attribute MUST be supported.  See
   Section 11.9 for more details.








5.8.2.13.  Attribute 67: fs_locations_info

   Full function file system location.  See Section 11.10 for more
   details.

5.8.2.14.  Attribute 61: fs_status

   Generic file system type information.  See Section 11.11 for more
   details.

5.8.2.15.  Attribute 25: hidden

   True, if the file is considered hidden with respect to the Windows
   API.

5.8.2.16.  Attribute 26: homogeneous

   True, if this object's file system is homogeneous, i.e., the
   per-file-system attributes have the same values for all of the file
   system's objects.

5.8.2.17.  Attribute 27: maxfilesize

   Maximum supported file size for the file system of this object.

5.8.2.18.  Attribute 28: maxlink

   Maximum number of links for this object.

5.8.2.19.  Attribute 29: maxname

   Maximum file name size supported for this object.

5.8.2.20.  Attribute 30: maxread

   Maximum read size supported for this object.

5.8.2.21.  Attribute 31: maxwrite

   Maximum write size supported for this object.  This attribute SHOULD
   be supported if the file is writable.  Lack of this attribute can
   lead to the client either wasting bandwidth or not receiving the best
   performance.

5.8.2.22.  Attribute 32: mimetype

   MIME body type/subtype of this object.







5.8.2.23.  Attribute 55: mounted_on_fileid

   Like fileid, but if the target filehandle is the root of a file
   system, this attribute represents the fileid of the underlying
   directory.

   UNIX-based operating environments connect a file system into the
   namespace by connecting (mounting) the file system onto the existing
   file object (the mount point, usually a directory) of an existing
   file system.  When the mount point's parent directory is read via an
   API like readdir(), the return results are directory entries, each
   with a component name and a fileid.  The fileid of the mount point's
   directory entry will be different from the fileid that the stat()
   system call returns.  The stat() system call is returning the fileid
   of the root of the mounted file system, whereas readdir() is
   returning the fileid stat() would have returned before any file
   systems were mounted on the mount point.

   Unlike NFSv3, NFSv4.1 allows a client's LOOKUP request to cross other
   file systems.  The client detects the file system crossing whenever
   the filehandle argument of LOOKUP has an fsid attribute different
   from that of the filehandle returned by LOOKUP.  A UNIX-based client
   will consider this a "mount point crossing".  UNIX has a legacy
   scheme for allowing a process to determine its current working
   directory.  This relies on readdir() of a mount point's parent and
   stat() of the mount point returning fileids as previously described.
   The mounted_on_fileid attribute corresponds to the fileid that
   readdir() would have returned as described previously.

   While the NFSv4.1 client could simply fabricate a fileid
   corresponding to what mounted_on_fileid provides (and if the server
   does not support mounted_on_fileid, the client has no choice), there
   is a risk that the client will generate a fileid that conflicts with
   one that is already assigned to another object in the file system.
   Instead, if the server can provide the mounted_on_fileid, the
   potential for client operational problems in this area is eliminated.

   If the server detects that there is no mounted point at the target
   file object, then the value for mounted_on_fileid that it returns is
   the same as that of the fileid attribute.

   The mounted_on_fileid attribute is RECOMMENDED, so the server SHOULD
   provide it if possible, and for a UNIX-based server, this is
   straightforward.  Usually, mounted_on_fileid will be requested during
   a READDIR operation, in which case it is trivial (at least for UNIX-
   based servers) to return mounted_on_fileid since it is equal to the
   fileid of a directory entry returned by readdir().  If
   mounted_on_fileid is requested in a GETATTR operation, the server
   should obey an invariant that has it returning a value that is equal
   to the file object's entry in the object's parent directory, i.e.
   what readdir() would have returned.  Some operating environments
   allow a series of two or more file systems to be mounted onto a
   single mount point.  In this case, for the server to obey the
   aforementioned invariant, it will need to find the base mount point,
   and not the intermediate mount points.
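
   To make the invariant above concrete, the following non-normative C
   sketch returns the value a server might report for
   mounted_on_fileid.  The structure and field names (fs_object,
   is_fs_root, mounted_on_dir_fileid) are hypothetical stand-ins for
   whatever state a server implementation actually keeps.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Hypothetical server-side view of a file system object. */
   struct fs_object {
           bool     is_fs_root;  /* root of a mounted file system */
           uint64_t fileid;      /* fileid within its own fs      */
           /* fileid of the directory entry this fs is mounted on,
            * i.e. in the underlying (base) file system. */
           uint64_t mounted_on_dir_fileid;
   };

   /* mounted_on_fileid: what readdir() of the parent would return. */
   static uint64_t
   get_mounted_on_fileid(const struct fs_object *o)
   {
           return o->is_fs_root ? o->mounted_on_dir_fileid : o->fileid;
   }

   int
   main(void)
   {
           struct fs_object plain = { false, 1234, 0 };
           struct fs_object root  = { true, 2, 5678 };

           printf("plain file: %llu\n", (unsigned long long)
                  get_mounted_on_fileid(&plain));
           printf("fs root:    %llu\n", (unsigned long long)
                  get_mounted_on_fileid(&root));
           return 0;
   }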

5.8.2.24.  Attribute 34: no_trunc

   If this attribute is TRUE, then if the client uses a file name longer
   than name_max, an error will be returned instead of the name being
   truncated.

5.8.2.25.  Attribute 35: numlinks

   Number of hard links to this object.

5.8.2.26.  Attribute 36: owner

   The string name of the owner of this object.

5.8.2.27.  Attribute 37: owner_group

   The string name of the group ownership of this object.

5.8.2.28.  Attribute 38: quota_avail_hard

   The value in bytes which represents the amount of additional disk
   space beyond the current allocation that can be allocated to this
   file or directory before further allocations will be refused.  It is
   understood that this space may be consumed by allocations to other
   files or directories.

5.8.2.29.  Attribute 39: quota_avail_soft

   The value in bytes which represents the amount of additional disk
   space that can be allocated to this file or directory before the user
   may reasonably be warned.  It is understood that this space may be
   consumed by allocations to other files or directories though there is
   a rule as to which other files or directories.

5.8.2.30.  Attribute 40: quota_used

   The value in bytes which represents the amount of disk space used by
   this file or directory and possibly a number of other similar files
   or directories, where the set of "similar" meets at least the
   criterion that allocating space to any file or directory in the set
   will reduce the "quota_avail_hard" of every other file or directory
   in the set.

   Note that there may be a number of distinct but overlapping sets of
   files or directories for which a quota_used value is maintained.
   E.g. "all files with a given owner", "all files with a given group
   owner". etc.  The server is at liberty to choose any of those sets
   when providing the content of the quota_used attribute, but should do
   so in a repeatable way.  The rule may be configured per file system
   or may be "choose the set with the smallest quota".

5.8.2.31.  Attribute 41: rawdev

   Raw device identifier; the UNIX device major/minor node information.
   If the value of type is not NF4BLK or NF4CHR, the value returned
   SHOULD NOT be considered useful.

5.8.2.32.  Attribute 42: space_avail

   Disk space in bytes available to this user on the file system
   containing this object - this should be the smallest relevant limit.

5.8.2.33.  Attribute 43: space_free

   Free disk space in bytes on the file system containing this object -
   this should be the smallest relevant limit.

5.8.2.34.  Attribute 44: space_total

   Total disk space in bytes on the file system containing this object.

5.8.2.35.  Attribute 45: space_used

   Number of file system bytes allocated to this object.

5.8.2.36.  Attribute 46: system

   This attribute is TRUE if this file is a "system" file with respect
   to the Windows operating environment.

5.8.2.37.  Attribute 47: time_access

   The time_access attribute represents the time of last access to the
   object by a read that was satisfied by the server.  The notion of
   what is an "access" depends on the server's operating environment
   and/or the server's file system semantics.  For example, for servers
   obeying POSIX semantics, time_access would be updated only by the
   READ and READDIR operations and not any of the operations that
   modify the content of the object [15], [16], [17].  Of course,
   setting the corresponding time_access_set attribute is another way
   to modify the time_access attribute.

   Whenever the file object resides on a writable file system, the
   server should make best efforts to record time_access into stable
   storage.  However, to mitigate the performance effects of doing so,
   and most especially whenever the server is satisfying the read of the
   object's content from its cache, the server MAY cache access time
   updates and lazily write them to stable storage.  It is also
   acceptable to give administrators of the server the option to disable
   time_access updates.

5.8.2.38.  Attribute 48: time_access_set

   Set the time of last access to the object.  SETATTR use only.

5.8.2.39.  Attribute 49: time_backup

   The time of last backup of the object.

5.8.2.40.  Attribute 50: time_create

   The time of creation of the object.  This attribute does not have any
   relation to the traditional UNIX file attribute "ctime" or "change
   time".

5.8.2.41.  Attribute 51: time_delta

   Smallest useful server time granularity.

5.8.2.42.  Attribute 52: time_metadata

   The time of last metadata modification of the object.

5.8.2.43.  Attribute 53: time_modify

   The time of last modification to the object.

5.8.2.44.  Attribute 54: time_modify_set

   Set the time of last modification to the object.  SETATTR use only.

5.9.  Interpreting owner and owner_group

   The RECOMMENDED attributes "owner" and "owner_group" (and also users
   and groups within the "acl" attribute) are represented in terms of a
   UTF-8 string.  To avoid a representation that is tied to a particular
   underlying implementation at the client or server, the use of the
   UTF-8 string has been chosen.  Note that section 6.1 of RFC2624 [44]
   provides additional rationale.  It is expected that the client and
   server will have their own local representation of owner and
   owner_group that is used for local storage or presentation to the end
   user.  Therefore, it is expected that when these attributes are
   transferred between the client and server that the local
   representation is translated to a syntax of the form "user@
   dns_domain".  This will allow for a client and server that do not use
   the same local representation the ability to translate to a common
   syntax that can be interpreted by both.

   Similarly, security principals may be represented in different ways
   by different security mechanisms.  Servers normally translate these
   representations into a common format, generally that used by local
   storage, to serve as a means of identifying the users corresponding
   to these security principals.  When these local identifiers are
   translated to the form of the owner attribute, associated with files
   created by such principals they identify, in a common format, the
   users associated with each corresponding set of security principals.

   The translation used to interpret owner and group strings is not
   specified as part of the protocol.  This allows various solutions to
   be employed.  For example, a local translation table may be consulted
   that maps between a numeric identifier to the user@dns_domain syntax.
   A name service may also be used to accomplish the translation.  A
   server may provide a more general service, not limited by any
   particular translation (which would only translate a limited set of
   possible strings) by storing the owner and owner_group attributes in
   local storage without any translation or it may augment a translation
   method by storing the entire string for attributes for which no
   translation is available while using the local representation for
   those cases in which a translation is available.

   Servers that do not provide support for all possible values of the
   owner and owner_group attributes, SHOULD return an error
   (NFS4ERR_BADOWNER) when a string is presented that has no
   translation, as the value to be set for a SETATTR of the owner,
   owner_group, or acl attributes.  When a server does accept an owner
   or owner_group value as valid on a SETATTR (and similarly for the
   owner and group strings in an acl), it is promising to return that
   same string when a corresponding GETATTR is done.  Configuration
   changes (including changes from the mapping of the string to the
   local representation) and ill-constructed name translations (those
   that contain aliasing) may make that promise impossible to honor.
   Servers should make appropriate efforts to avoid a situation in which
   these attributes have their values changed when no real change to
   ownership has occurred.





   The "dns_domain" portion of the owner string is meant to be a DNS
   domain name.  For example, user@example.org.  Servers should accept
   as valid a set of users for at least one domain.  A server may treat
   other domains as having no valid translations.  A more general
   service is provided when a server is capable of accepting users for
   multiple domains, or for all domains, subject to security
   constraints.

   In the case where there is no translation available to the client or
   server, the attribute value will be constructed without the "@".
   Therefore, the absence of the @ from the owner or owner_group
   attribute signifies that no translation was available at the sender
   and that the receiver of the attribute should not use that string as
   a basis for translation into its own internal format.  Even though
   the attribute value can not be translated, it may still be useful.
   In the case of a client, the attribute string may be used for local
   display of ownership.

   To provide a greater degree of compatibility with NFSv3, which
   identified users and groups by 32-bit unsigned user identifiers and
   group identifiers, owner and group strings that consist of decimal
   numeric values with no leading zeros can be given a special
   interpretation by clients and servers which choose to provide such
   support.  The receiver may treat such a user or group string as
   representing the same user as would be represented by an NFSv3 uid or
   gid having the corresponding numeric value.  A server is not
   obligated to accept such a string, but may return an NFS4ERR_BADOWNER
   instead.  To avoid this mechanism being used to subvert user and
   group translation, so that a client might pass all of the owners and
   groups in numeric form, a server SHOULD return an NFS4ERR_BADOWNER
   error when there is a valid translation for the user or owner
   designated in this way.  In that case, the client must use the
   appropriate name@domain string and not the special form for
   compatibility.
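
   As a non-normative illustration of the special numeric form, the C
   sketch below shows how a receiver that chooses to support it might
   classify an owner string: a string of decimal digits with no
   leading zeros may be treated as an NFSv3-style identifier, a string
   containing "@" is a name@dns_domain identifier, and anything else
   is an untranslated name.  The function and type names are
   hypothetical.

   #include <ctype.h>
   #include <stdio.h>
   #include <string.h>

   enum owner_form {
           OWNER_NUMERIC,          /* NFSv3-style uid/gid string    */
           OWNER_NAME_AT_DOMAIN,   /* name@dns_domain               */
           OWNER_UNTRANSLATED      /* no "@", not numeric           */
   };

   /* Classify an owner or owner_group string per the text above. */
   static enum owner_form
   classify_owner(const char *s)
   {
           size_t i, len = strlen(s);

           if (strchr(s, '@') != NULL)
                   return OWNER_NAME_AT_DOMAIN;

           /* Decimal digits only, with no leading zero. */
           if (len > 0 && !(s[0] == '0' && len > 1)) {
                   for (i = 0; i < len; i++)
                           if (!isdigit((unsigned char)s[i]))
                                   break;
                   if (i == len)
                           return OWNER_NUMERIC;
           }
           return OWNER_UNTRANSLATED;
   }

   int
   main(void)
   {
           printf("%d %d %d\n", classify_owner("1001"),
                  classify_owner("user@example.org"),
                  classify_owner("nobody"));
           return 0;
   }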

   The owner string "nobody" may be used to designate an anonymous user,
   which will be associated with a file created by a security principal
   that cannot be mapped through normal means to the owner attribute.
   Users and implementations of NFSv4.1 SHOULD NOT use "nobody" to
   designate a real user whose access is not anonymous.

5.10.  Character Case Attributes

   With respect to the case_insensitive and case_preserving attributes,
   each UCS-4 character (which UTF-8 encodes) can be mapped according to
   Appendix B.2 of RFC3454 [18].  For general character handling and
   internationalization issues, see Section 14.






5.11.  Directory Notification Attributes

   As described in Section 18.39, the client can request a minimum delay
   for notifications of changes to attributes, but the server is free to
   ignore what the client requests.  The client can determine in advance
   what notification delays the server will accept by issuing a GETATTR
   for either or both of two directory notification attributes.  When
   the client calls the GET_DIR_DELEGATION operation and asks for
   attribute change notifications, it should request notification delays
   that are no less than the values in the server-provided attributes.

5.11.1.  Attribute 56: dir_notif_delay

   The dir_notif_delay attribute is the minimum number of seconds the
   server will delay before notifying the client of a change to the
   directory's attributes.

5.11.2.  Attribute 57: dirent_notif_delay

   The dirent_notif_delay attribute is the minimum number of seconds the
   server will delay before notifying the client of a change to a file
   object that has an entry in the directory.
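
   The following non-normative C sketch illustrates the client-side
   rule described at the start of this section: before calling
   GET_DIR_DELEGATION, the delays the client would like are clamped so
   that they are never smaller than the server-provided attribute
   values.  The sketch uses whole seconds for simplicity, and the
   names are illustrative.

   #include <stdint.h>
   #include <stdio.h>

   /* Server-advertised minimum delays, in whole seconds. */
   struct dir_notif_attrs {
           uint64_t dir_notif_delay;      /* attribute 56 */
           uint64_t dirent_notif_delay;   /* attribute 57 */
   };

   /* Never ask for a delay smaller than the server's minimum. */
   static uint64_t
   clamp_delay(uint64_t wanted, uint64_t server_min)
   {
           return wanted < server_min ? server_min : wanted;
   }

   int
   main(void)
   {
           struct dir_notif_attrs srv = { 5, 2 };  /* from GETATTR */
           uint64_t d;

           d = clamp_delay(1, srv.dir_notif_delay);
           printf("dir delay to request:    %llu\n",
                  (unsigned long long)d);
           d = clamp_delay(10, srv.dirent_notif_delay);
           printf("dirent delay to request: %llu\n",
                  (unsigned long long)d);
           return 0;
   }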

5.12.  pNFS Attribute Definitions

5.12.1.  Attribute 62: fs_layout_type

   The fs_layout_type attribute (see Section 3.3.13) applies to a file
   system and indicates what layout types are supported by the file
   system.  When the client encounters a new fsid, the client SHOULD
   obtain the value for the fs_layout_type attribute associated with the
   new file system.  This attribute is used by the client to determine
   if the layout types supported by the server match any of the client's
   supported layout types.

5.12.2.  Attribute 66: layout_alignment

   When a client holds layouts on files of a file system, the
   layout_alignment attribute indicates the preferred alignment for I/O
   to files on that file system.  Where possible, the client should send
   READ and WRITE operations with offsets that are whole multiples of
   the layout_alignment attribute.

5.12.3.  Attribute 65: layout_blksize

   When a client holds layouts on files of a file system, the
   layout_blksize attribute indicates the preferred block size for I/O
   to files on that file system.  Where possible, the client should send
   READ operations with a count argument that is a whole multiple of
   layout_blksize, and WRITE operations with a data argument of size
   that is a whole multiple of layout_blksize.
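
   As a non-normative sketch of how a pNFS client might apply these
   two attributes, the C fragment below rounds an I/O offset down to a
   multiple of layout_alignment and rounds a count up to a whole
   multiple of layout_blksize.  The helper names are hypothetical, and
   a real client would also bound the resulting count by the maxread
   or maxwrite attribute.

   #include <stdint.h>
   #include <stdio.h>

   /* Round an offset down to the preferred alignment. */
   static uint64_t
   align_offset(uint64_t offset, uint64_t layout_alignment)
   {
           if (layout_alignment == 0)
                   return offset;
           return offset - (offset % layout_alignment);
   }

   /* Round a count up to a whole multiple of the block size. */
   static uint64_t
   round_count(uint64_t count, uint64_t layout_blksize)
   {
           if (layout_blksize == 0 || count % layout_blksize == 0)
                   return count;
           return count + (layout_blksize - count % layout_blksize);
   }

   int
   main(void)
   {
           printf("offset 70000 -> %llu\n",
                  (unsigned long long)align_offset(70000, 65536));
           printf("count  10000 -> %llu\n",
                  (unsigned long long)round_count(10000, 8192));
           return 0;
   }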

5.12.4.  Attribute 63: layout_hint

   The layout_hint attribute (see Section 3.3.19) may be set on newly
   created files to influence the metadata server's choice for the
   file's layout.  If possible, this attribute is one of those set in
   the initial attributes within the OPEN operation.  The metadata
   server may choose to ignore this attribute.  The layout_hint
   attribute is a sub-set of the layout structure returned by LAYOUTGET.
   For example, instead of specifying particular devices, this would be
   used to suggest the stripe width of a file.  The server
   implementation determines which fields within the layout will be
   used.

5.12.5.  Attribute 64: layout_type

   This attribute lists the layout type(s) available for a file.  The
   value returned by the server is for informational purposes only.  The
   client will use the LAYOUTGET operation to obtain the information
   needed in order to perform I/O. For example, the specific device
   information for the file and its layout.

5.12.6.  Attribute 68: mdsthreshold

   This attribute is a server provided hint used to communicate to the
   client when it is more efficient to send READ and WRITE operations to
   the metadata server or the data server.  The two types of thresholds
   described are file size thresholds and I/O size thresholds.  If a
   file's size is smaller than the file size threshold, data accesses
   SHOULD be sent to the metadata server.  If an I/O request has a
   length that is below the I/O size threshold, the I/O SHOULD be sent
   to the metadata server.  Each threshold type is specified separately
   for READ and WRITE.

   The server MAY provide both types of thresholds for a file.  If both
   file size and I/O size are provided, the client SHOULD reach or
   exceed both thresholds before issuing its READ or WRITE requests to
   the data server.  Alternatively, if only one of the specified
   thresholds are reached or exceeded, the I/O requests are sent to the
   metadata server.

   For each threshold type, a value of 0 indicates no READ or WRITE
   should be sent to the metadata server, while a value of all 1s
   indicates all READS or WRITES should be sent to the metadata server.
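
   The following non-normative C sketch shows one way a client could
   apply the two threshold types when routing a request.  The
   structure here is a simplification of the actual mdsthreshold
   attribute (which carries separate READ and WRITE values); the
   values 0 and all 1s correspond to the "never" and "always" cases
   described above.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Simplified per-file thresholds for one direction (e.g. READ). */
   struct io_threshold {
           uint64_t file_size_threshold;  /* 0 = never use MDS      */
           uint64_t io_size_threshold;    /* all 1s = always MDS    */
   };

   /*
    * Return true if the request should go to the metadata server:
    * the file is smaller than the file-size threshold, or the
    * request is shorter than the I/O-size threshold.  Otherwise both
    * thresholds are reached or exceeded and the data server is used.
    */
   static bool
   send_to_mds(const struct io_threshold *t, uint64_t file_size,
               uint64_t io_len)
   {
           return file_size < t->file_size_threshold ||
                  io_len < t->io_size_threshold;
   }

   int
   main(void)
   {
           struct io_threshold rd = { 65536, 4096 };

           printf("small file, small read -> %d\n",
                  send_to_mds(&rd, 1024, 512));
           printf("large file, large read -> %d\n",
                  send_to_mds(&rd, 1048576, 8192));
           return 0;
   }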






   The attribute is available on a per filehandle basis.  If the current
   filehandle refers to a non-pNFS file or directory, the metadata
   server should return an attribute that is representative of the
   filehandle's file system.  It is suggested that this attribute is
   queried as part of the OPEN operation.  Due to dynamic system
   changes, the client should not assume that the attribute will remain
   constant for any specific time period, thus it should be periodically
   refreshed.

5.13.  Retention Attributes

   Retention is a concept whereby a file object can be placed in an
   immutable, undeletable, unrenamable state for a fixed or infinite
   duration of time.  Once in this "retained" state, the file cannot be
   moved out of the state until the duration of retention has been
   reached.

   When retention is enabled, retention MUST extend to the data of the
   file, and the name of the file.  The server MAY extend retention to
   any other property of the file, including any subset of REQUIRED,
   RECOMMENDED, and named attributes, with the exceptions noted in this
   section.

   Servers MAY support or not support retention on any file object type.

   The five retention attributes are explained in the next subsections.

5.13.1.  Attribute 69: retention_get

   If retention is enabled for the associated file, this attribute's
   value represents the retention begin time of the file object.  This
   attribute's value is only readable with the GETATTR operation and
   MUST NOT be modified by the SETATTR operation (Section 5.5).  The
   value of the attribute consists of:

   const RET4_DURATION_INFINITE    = 0xffffffffffffffff;
   struct retention_get4 {
           uint64_t        rg_duration;
           nfstime4        rg_begin_time<1>;
   };

   The field rg_duration is the duration in seconds indicating how long
   the file will be retained once retention is enabled.  The field
   rg_begin_time is an array of up to one absolute time value.  If the
   array is zero length, no beginning retention time has been
   established, and retention is not enabled.  If rg_duration is equal
   to RET4_DURATION_INFINITE, the file, once retention is enabled, will
   be retained for an infinite duration.





   If (as soon as) rg_duration is zero, then rg_begin_time will be of
   zero length, and again, retention is not (no longer) enabled.
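
   The following non-normative C sketch shows how a client might
   interpret a decoded retention_get4 value: retention is enabled only
   when the rg_begin_time array has length one, and the remaining
   retention (unless infinite) runs from that begin time for
   rg_duration seconds.  The C rendering of the XDR types and the
   helper names are illustrative only.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   #define RET4_DURATION_INFINITE 0xffffffffffffffffULL

   /* Illustrative C rendering of the XDR retention_get4 value. */
   struct retention_get4 {
           uint64_t rg_duration;
           uint32_t rg_begin_time_len;   /* 0 or 1               */
           int64_t  rg_begin_seconds;    /* valid when len == 1  */
   };

   /* Retention is enabled only if a begin time is present and the
    * duration is non-zero. */
   static bool
   retention_enabled(const struct retention_get4 *rg)
   {
           return rg->rg_begin_time_len == 1 && rg->rg_duration != 0;
   }

   /* Seconds of retention remaining at time "now"; 0 if retention
    * is not enabled or has already expired. */
   static uint64_t
   retention_remaining(const struct retention_get4 *rg, int64_t now)
   {
           int64_t end;

           if (!retention_enabled(rg))
                   return 0;
           if (rg->rg_duration == RET4_DURATION_INFINITE)
                   return RET4_DURATION_INFINITE;
           end = rg->rg_begin_seconds + (int64_t)rg->rg_duration;
           return now >= end ? 0 : (uint64_t)(end - now);
   }

   int
   main(void)
   {
           struct retention_get4 rg = { 3600, 1, 1000000 };

           printf("enabled %d, remaining %llu\n",
                  retention_enabled(&rg), (unsigned long long)
                  retention_remaining(&rg, 1001000));
           return 0;
   }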

5.13.2.  Attribute 70: retention_set

   This attribute is used to set the retention duration and optionally
   enable retention for the associated file object.  This attribute is
   only modifiable via the SETATTR operation and MUST NOT be retrieved
   by the GETATTR operation (Section 5.5).  This attribute corresponds
   to retention_get.  The value of the attribute consists of:

   struct retention_set4 {
           bool            rs_enable;
           uint64_t        rs_duration<1>;
   };

   If the client sets rs_enable to TRUE, then it is enabling retention
   on the file object with the begin time of retention starting from the
   server's current time and date.  The duration of the retention can
   also be provided if the rs_duration array is of length one.  The
   duration is the time in seconds from the begin time of retention, and
   if set to RET4_DURATION_INFINITE, the file is to be retained forever.
   If retention is enabled, with no duration specified in either this
   SETATTR or a previous SETATTR, the duration defaults to zero seconds.
   The server MAY restrict the enabling of retention or the duration of
   retention on the basis of the ACE4_WRITE_RETENTION ACL permission.
   The enabling of retention MUST NOT prevent the enabling of event-
   based retention nor the modification of the retention_hold attribute.

   The following rules apply to both the retention_set and retentevt_set
   attributes.

   o  As long as retention is not enabled, the client is permitted to
      decrease the duration.

   o  The duration can always be set to an equal or higher value, even
      if retention is enabled.  Note that once retention is enabled, the
      actual duration (as returned by the retention_get or retentevt_get
      attributes, see Section 5.13.1 or Section 5.13.3), is constantly
      counting down to zero (one unit per second), unless the duration
      was set to RET4_DURATION_INFINITE.  Thus it will not be possible
      for the client to precisely extend the duration on a file that has
      retention enabled.

   o  While retention is enabled, attempts to disable retention or
      decrease the retention's duration MUST fail with the error
      NFS4ERR_INVAL.






   o  If the principal attempting to change retention_set or
      retentevt_set does not have ACE4_WRITE_RETENTION permissions, the
      attempt MUST fail with NFS4ERR_ACCESS.
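
   The rules above lend themselves to a compact server-side check.
   The following non-normative C sketch, using illustrative names and
   deliberately simplified state, rejects attempts to disable
   retention or to decrease the duration while retention is enabled
   (NFS4ERR_INVAL), and rejects requests from principals lacking
   ACE4_WRITE_RETENTION (NFS4ERR_ACCESS).

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   #define NFS4_OK          0
   #define NFS4ERR_ACCESS   13
   #define NFS4ERR_INVAL    22

   /* Illustrative, simplified server state and request. */
   struct retention_state {
           bool     enabled;
           uint64_t duration;      /* seconds */
   };

   struct retention_set_req {
           bool     rs_enable;
           bool     has_duration;  /* rs_duration<1> present? */
           uint64_t rs_duration;
   };

   static int
   check_retention_set(const struct retention_state *st,
                       const struct retention_set_req *req,
                       bool caller_has_write_retention)
   {
           if (!caller_has_write_retention)
                   return NFS4ERR_ACCESS;

           if (st->enabled) {
                   /* Cannot disable, and cannot decrease duration. */
                   if (!req->rs_enable)
                           return NFS4ERR_INVAL;
                   if (req->has_duration &&
                       req->rs_duration < st->duration)
                           return NFS4ERR_INVAL;
           }
           return NFS4_OK;
   }

   int
   main(void)
   {
           struct retention_state st = { true, 3600 };
           struct retention_set_req shorten = { true, true, 60 };

           printf("shorten while enabled -> %d\n",
                  check_retention_set(&st, &shorten, true));
           return 0;
   }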

5.13.3.  Attribute 71: retentevt_get

   Get the event-based retention duration, and if enabled, the event-
   based retention begin time of the file object.  This attribute is
   like retention_get but refers to event-based retention.  The event
   that triggers event-based retention is not defined by the NFSv4.1
   specification.

5.13.4.  Attribute 72: retentevt_set

   Set the event-based retention duration, and optionally enable event-
   based retention on the file object.  This attribute corresponds to
   retentevt_get, is like retention_set, but refers to event-based
   retention.  When event based retention is set, the file MUST be
   retained even if non-event-based retention has been set, and the
   duration of non-event-based retention has been reached.  Conversely,
   when non-event-based retention has been set, the file MUST be
   retained even if event-based retention has been set, and the duration
   of event-based retention has been reached.  The server MAY restrict
   the enabling of event-based retention or the duration of event-based
   retention on the basis of the ACE4_WRITE_RETENTION ACL permission.
   The enabling of event-based retention MUST NOT prevent the enabling
   of non-event-based retention nor the modification of the
   retention_hold attribute.

5.13.5.  Attribute 73: retention_hold

   Get or set administrative retention holds, one hold per bit position.

   This attribute allows one to 64 administrative holds, one hold per
   bit on the attribute.  If retention_hold is not zero, then the file
   MUST NOT be deleted, renamed, or modified, even if the duration on
   enabled event or non-event-based retention has been reached.  The
   server MAY restrict the modification of retention_hold on the basis
   of the ACE4_WRITE_RETENTION_HOLD ACL permission.  The enabling of
   administrative retention holds does not prevent the enabling of
   event-based or non-event-based retention.

   If the principal attempting to change retention_hold does not have
   ACE4_WRITE_RETENTION_HOLD permissions, the attempt MUST fail with
   NFS4ERR_ACCESS.
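
   A non-normative C sketch of the bit manipulation this attribute
   implies follows: each of the 64 bit positions is an independent
   administrative hold, and a file remains held as long as any bit is
   set.  The function names are illustrative.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Set or clear an individual administrative hold (bit 0..63). */
   static uint64_t set_hold(uint64_t holds, unsigned bit)
   {
           return holds | ((uint64_t)1 << bit);
   }

   static uint64_t clear_hold(uint64_t holds, unsigned bit)
   {
           return holds & ~((uint64_t)1 << bit);
   }

   /* A file is held as long as any bit remains set. */
   static bool file_is_held(uint64_t holds)
   {
           return holds != 0;
   }

   int
   main(void)
   {
           uint64_t holds = 0;

           holds = set_hold(holds, 3);     /* e.g. a "legal hold" */
           printf("held: %d\n", file_is_held(holds));
           holds = clear_hold(holds, 3);
           printf("held: %d\n", file_is_held(holds));
           return 0;
   }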








6.  Access Control Attributes

   Access Control Lists (ACLs) are file attributes that specify fine
   grained access control.  This chapter covers the "acl", "dacl",
   "sacl", "aclsupport", "mode", "mode_set_masked" file attributes, and
   their interactions.  Note that file attributes may apply to any file
   system object.

6.1.  Goals

   ACLs and modes represent two well established models for specifying
   permissions.  This chapter specifies requirements that attempt to
   meet the following goals:

   o  If a server supports the mode attribute, it should provide
      reasonable semantics to clients that only set and retrieve the
      mode attribute.

   o  If a server supports ACL attributes, it should provide reasonable
      semantics to clients that only set and retrieve those attributes.

   o  On servers that support the mode attribute, if ACL attributes have
      never been set on an object, via inheritance or explicitly, the
      behavior should be traditional UNIX-like behavior.

   o  On servers that support the mode attribute, if the ACL attributes
      have been previously set on an object, either explicitly or via
      inheritance:

      *  Setting only the mode attribute should effectively control the
         traditional UNIX-like permissions of read, write, and execute
         on owner, owner_group, and other.

      *  Setting only the mode attribute should provide reasonable
         security.  For example, setting a mode of 000 should be enough
         to ensure that future opens for read or write by any principal
         fail, regardless of a previously existing or inherited ACL.

   o  NFSv4.1 may introduce different semantics relating to the mode and
      ACL attributes, but it does not render invalid any previously
      existing implementations.  Additionally, this chapter provides
      clarifications based on previous implementations and discussions
      around them.

   o  On servers that support both the mode and the acl or dacl
      attributes, the server must keep the two consistent with each
      other.  The value of the mode attribute (with the exception of the
      three high order bits described in Section 6.2.4), must be
      determined entirely by the value of the ACL, so that use of the
      mode is never required for anything other than setting the three
      high order bits.  See Section 6.4.1 for exact requirements.

   o  When a mode attribute is set on an object, the ACL attributes may
      need to be modified so as to not conflict with the new mode.  In
      such cases, it is desirable that the ACL keep as much information
      as possible.  This includes information about inheritance, AUDIT
      and ALARM ACEs, and permissions granted and denied that do not
      conflict with the new mode.

6.2.  File Attributes Discussion

6.2.1.  Attribute 12: acl

   The NFSv4.1 ACL attribute contains an array of access control entries
   (ACEs) that are associated with the file system object.  Although the
   client can read and write the acl attribute, the server is
   responsible for using the ACL to perform access control.  The client
   can use the OPEN or ACCESS operations to check access without
   modifying or reading data or metadata.

   The NFS ACE structure is defined as follows:

   typedef uint32_t        acetype4;


   typedef uint32_t aceflag4;


   typedef uint32_t        acemask4;


   struct nfsace4 {
           acetype4        type;
           aceflag4        flag;
           acemask4        access_mask;
           utf8str_mixed   who;
   };

   To determine if a request succeeds, the server processes each nfsace4
   entry in order.  Only ACEs which have a "who" that matches the
   requester are considered.  Each ACE is processed until all of the
   bits of the requester's access have been ALLOWED.  Once a bit (see
   below) has been ALLOWED by an ACCESS_ALLOWED_ACE, it is no longer
   considered in the processing of later ACEs.  If an ACCESS_DENIED_ACE
   is encountered where the requester's access still has unALLOWED bits
   in common with the "access_mask" of the ACE, the request is denied.





   When the ACL is fully processed, if there are bits in the requester's
   mask that have not been ALLOWED or DENIED, access is denied.
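
   The evaluation algorithm just described can be summarized in a
   short, non-normative C sketch.  It assumes a caller-supplied match
   on the "who" field (simplified here to a string comparison) and
   ignores AUDIT and ALARM ACEs, which, as noted below, do not affect
   the access decision; a real server would also apply the special
   considerations for unsupported bits and for file owners discussed
   elsewhere in this chapter.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>
   #include <string.h>

   #define ACE4_ACCESS_ALLOWED_ACE_TYPE 0x00000000
   #define ACE4_ACCESS_DENIED_ACE_TYPE  0x00000001

   struct nfsace4 {
           uint32_t type;
           uint32_t flag;
           uint32_t access_mask;
           const char *who;      /* simplified from utf8str_mixed */
   };

   /* Simplified "who" match: exact string comparison. */
   static bool who_matches(const char *who, const char *requester)
   {
           return strcmp(who, requester) == 0;
   }

   /*
    * Evaluate an ACL for "requester" asking for the bits in
    * "wanted".  Returns true only if every wanted bit is ALLOWED
    * before being DENIED; bits never ALLOWED are denied.
    */
   static bool
   acl_allows(const struct nfsace4 *acl, size_t nace,
              const char *requester, uint32_t wanted)
   {
           uint32_t remaining = wanted;   /* bits not yet ALLOWED */
           size_t i;

           for (i = 0; i < nace && remaining != 0; i++) {
                   const struct nfsace4 *ace = &acl[i];

                   if (!who_matches(ace->who, requester))
                           continue;
                   if (ace->type == ACE4_ACCESS_DENIED_ACE_TYPE &&
                       (remaining & ace->access_mask) != 0)
                           return false;   /* explicit denial */
                   if (ace->type == ACE4_ACCESS_ALLOWED_ACE_TYPE)
                           remaining &= ~ace->access_mask;
           }
           return remaining == 0;
   }

   int
   main(void)
   {
           struct nfsace4 acl[] = {
                   { ACE4_ACCESS_ALLOWED_ACE_TYPE, 0, 0x1, "alice" },
                   { ACE4_ACCESS_DENIED_ACE_TYPE,  0, 0x2, "alice" },
           };

           printf("read 0x1:  %d\n",
                  acl_allows(acl, 2, "alice", 0x1));
           printf("write 0x2: %d\n",
                  acl_allows(acl, 2, "alice", 0x2));
           return 0;
   }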

   Unlike the ALLOW and DENY ACE types, the ALARM and AUDIT ACE types do
   not affect a requester's access, and instead are for triggering
   events as a result of a requester's access attempt.  Therefore, AUDIT
   and ALARM ACEs are processed only after processing ALLOW and DENY
   ACEs.

   The NFSv4.1 ACL model is quite rich.  Some server platforms may
   provide access control functionality that goes beyond the UNIX-style
   mode attribute, but which is not as rich as the NFS ACL model.  So
   that users can take advantage of this more limited functionality, the
   server may support the acl attributes by mapping between its ACL
   model and the NFSv4.1 ACL model.  Servers must ensure that the ACL
   they actually store or enforce is at least as strict as the NFSv4 ACL
   that was set.  It is tempting to accomplish this by rejecting any ACL
   that falls outside the small set that can be represented accurately.
   However, such an approach can render ACLs unusable without special
   client-side knowledge of the server's mapping, which defeats the
   purpose of having a common NFSv4 ACL protocol.  Therefore servers
   should accept every ACL that they can without compromising security.
   To help accomplish this, servers may make a special exception, in the
   case of unsupported permission bits, to the rule that bits not
   ALLOWED or DENIED by an ACL must be denied.  For example, a UNIX-
   style server might choose to silently allow read attribute
   permissions even though an ACL does not explicitly allow those
   permissions.  (An ACL that explicitly denies permission to read
   attributes should still be rejected.)

   The situation is complicated by the fact that a server may have
   multiple modules that enforce ACLs.  For example, the enforcement for
   NFSv4.1 access may be different from, but not weaker than, the
   enforcement for local access, and both may be different from the
   enforcement for access through other protocols such as SMB.  So it
   may be useful for a server to accept an ACL even if not all of its
   modules are able to support it.

   The guiding principle with regard to NFSv4 access is that the server
   must not accept ACLs that appear to make access to the file more
   restrictive than it really is.

6.2.1.1.  ACE Type

   The constants used for the type field (acetype4) are as follows:








   const ACE4_ACCESS_ALLOWED_ACE_TYPE      = 0x00000000;
   const ACE4_ACCESS_DENIED_ACE_TYPE       = 0x00000001;
   const ACE4_SYSTEM_AUDIT_ACE_TYPE        = 0x00000002;
   const ACE4_SYSTEM_ALARM_ACE_TYPE        = 0x00000003;

   Only the ALLOWED and DENIED bit types may be used in the dacl
   attribute, and only the AUDIT and ALARM bit types may be used in the
   sacl attribute.  All four are permitted in the acl attribute.

   +------------------------------+--------------+---------------------+
   | Value                        | Abbreviation | Description         |
   +------------------------------+--------------+---------------------+
   | ACE4_ACCESS_ALLOWED_ACE_TYPE | ALLOW        | Explicitly grants   |
   |                              |              | the access defined  |
   |                              |              | in acemask4 to the  |
   |                              |              | file or directory.  |
   | ACE4_ACCESS_DENIED_ACE_TYPE  | DENY         | Explicitly denies   |
   |                              |              | the access defined  |
   |                              |              | in acemask4 to the  |
   |                              |              | file or directory.  |
   | ACE4_SYSTEM_AUDIT_ACE_TYPE   | AUDIT        | LOG (in a system    |
   |                              |              | dependent way) any  |
   |                              |              | access attempt to a |
   |                              |              | file or directory   |
   |                              |              | which uses any of   |
   |                              |              | the access methods  |
   |                              |              | specified in        |
   |                              |              | acemask4.           |
   | ACE4_SYSTEM_ALARM_ACE_TYPE   | ALARM        | Generate a system   |
   |                              |              | ALARM (system       |
   |                              |              | dependent) when any |
   |                              |              | access attempt is   |
   |                              |              | made to a file or   |
   |                              |              | directory for the   |
   |                              |              | access methods      |
   |                              |              | specified in        |
   |                              |              | acemask4.           |
   +------------------------------+--------------+---------------------+

    The "Abbreviation" column denotes how the types will be referred to
                   throughout the rest of this chapter.

6.2.1.2.  Attribute 13: aclsupport

   A server need not support all of the above ACE types.  This attribute
   indicates which ACE types are supported for the current file system.
   The bitmask constants used to represent the above definitions within
   the aclsupport attribute are as follows:





   const ACL4_SUPPORT_ALLOW_ACL    = 0x00000001;
   const ACL4_SUPPORT_DENY_ACL     = 0x00000002;
   const ACL4_SUPPORT_AUDIT_ACL    = 0x00000004;
   const ACL4_SUPPORT_ALARM_ACL    = 0x00000008;

   Servers which support either the ALLOW or DENY ACE type SHOULD
   support both ALLOW and DENY ACE types.

   Clients should not attempt to set an ACE unless the server claims
   support for that ACE type.  If the server receives a request to set
   an ACE that it cannot store, it MUST reject the request with
   NFS4ERR_ATTRNOTSUPP.  If the server receives a request to set an ACE
   that it can store but cannot enforce, the server SHOULD reject the
   request with NFS4ERR_ATTRNOTSUPP.

   Support for any of the ACL attributes is optional (albeit,
   RECOMMENDED).  However, a server that supports either of the new ACL
   attributes (dacl or sacl) MUST allow use of the new ACL attributes to
   access all of the ACE types which it supports.  In other words, if
   such a server supports ALLOW or DENY ACEs, then it MUST support the
   dacl attribute, and if it supports AUDIT or ALARM ACEs, then it MUST
   support the sacl attribute.
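
   A non-normative C sketch of the check implied above follows: when a
   SETATTR presents ACEs, the server compares each ACE type against
   its aclsupport bits and rejects unsupported types with
   NFS4ERR_ATTRNOTSUPP.  The helper names are illustrative.

   #include <stdint.h>
   #include <stdio.h>

   #define ACE4_ACCESS_ALLOWED_ACE_TYPE 0
   #define ACE4_ACCESS_DENIED_ACE_TYPE  1
   #define ACE4_SYSTEM_AUDIT_ACE_TYPE   2
   #define ACE4_SYSTEM_ALARM_ACE_TYPE   3

   #define ACL4_SUPPORT_ALLOW_ACL 0x00000001
   #define ACL4_SUPPORT_DENY_ACL  0x00000002
   #define ACL4_SUPPORT_AUDIT_ACL 0x00000004
   #define ACL4_SUPPORT_ALARM_ACL 0x00000008

   #define NFS4_OK             0
   #define NFS4ERR_ATTRNOTSUPP 10032

   /* Map an acetype4 value to the corresponding aclsupport bit. */
   static uint32_t support_bit(uint32_t type)
   {
           switch (type) {
           case ACE4_ACCESS_ALLOWED_ACE_TYPE:
                   return ACL4_SUPPORT_ALLOW_ACL;
           case ACE4_ACCESS_DENIED_ACE_TYPE:
                   return ACL4_SUPPORT_DENY_ACL;
           case ACE4_SYSTEM_AUDIT_ACE_TYPE:
                   return ACL4_SUPPORT_AUDIT_ACL;
           case ACE4_SYSTEM_ALARM_ACE_TYPE:
                   return ACL4_SUPPORT_ALARM_ACL;
           default:
                   return 0;      /* unknown type: never supported */
           }
   }

   /* Reject a SETATTR containing any ACE type the fs cannot store. */
   static int
   check_ace_types(uint32_t aclsupport, const uint32_t *types,
                   size_t n)
   {
           size_t i;

           for (i = 0; i < n; i++)
                   if ((aclsupport & support_bit(types[i])) == 0)
                           return NFS4ERR_ATTRNOTSUPP;
           return NFS4_OK;
   }

   int
   main(void)
   {
           uint32_t types[] = { ACE4_ACCESS_ALLOWED_ACE_TYPE,
                                ACE4_SYSTEM_AUDIT_ACE_TYPE };
           uint32_t support = ACL4_SUPPORT_ALLOW_ACL |
                              ACL4_SUPPORT_DENY_ACL;

           printf("result: %d\n",
                  check_ace_types(support, types, 2));
           return 0;
   }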

6.2.1.3.  ACE Access Mask

   The bitmask constants used for the access mask field are as follows:

   const ACE4_READ_DATA            = 0x00000001;
   const ACE4_LIST_DIRECTORY       = 0x00000001;
   const ACE4_WRITE_DATA           = 0x00000002;
   const ACE4_ADD_FILE             = 0x00000002;
   const ACE4_APPEND_DATA          = 0x00000004;
   const ACE4_ADD_SUBDIRECTORY     = 0x00000004;
   const ACE4_READ_NAMED_ATTRS     = 0x00000008;
   const ACE4_WRITE_NAMED_ATTRS    = 0x00000010;
   const ACE4_EXECUTE              = 0x00000020;
   const ACE4_DELETE_CHILD         = 0x00000040;
   const ACE4_READ_ATTRIBUTES      = 0x00000080;
   const ACE4_WRITE_ATTRIBUTES     = 0x00000100;
   const ACE4_WRITE_RETENTION      = 0x00000200;
   const ACE4_WRITE_RETENTION_HOLD = 0x00000400;

   const ACE4_DELETE               = 0x00010000;
   const ACE4_READ_ACL             = 0x00020000;
   const ACE4_WRITE_ACL            = 0x00040000;
   const ACE4_WRITE_OWNER          = 0x00080000;
   const ACE4_SYNCHRONIZE          = 0x00100000;






   Note that some masks have coincident values, for example,
   ACE4_READ_DATA and ACE4_LIST_DIRECTORY.  The mask entries
   ACE4_LIST_DIRECTORY, ACE4_ADD_FILE, and ACE4_ADD_SUBDIRECTORY are
   intended to be used with directory objects, while ACE4_READ_DATA,
   ACE4_WRITE_DATA, and ACE4_APPEND_DATA are intended to be used with
   non-directory objects.

6.2.1.3.1.  Discussion of Mask Attributes

   ACE4_READ_DATA

      Operation(s) affected:

         READ

         OPEN

      Discussion:

         Permission to read the data of the file.

         Servers SHOULD allow a user the ability to read the data of the
         file when only the ACE4_EXECUTE access mask bit is allowed.

   ACE4_LIST_DIRECTORY

      Operation(s) affected:

         READDIR

      Discussion:

         Permission to list the contents of a directory.

   ACE4_WRITE_DATA

      Operation(s) affected:

         WRITE

         OPEN

         SETATTR of size










      Discussion:

         Permission to modify a file's data.

   ACE4_ADD_FILE

      Operation(s) affected:

         CREATE

         LINK

         OPEN

         RENAME

      Discussion:

         Permission to add a new file in a directory.  The CREATE
         operation is affected when nfs_ftype4 is NF4LNK, NF4BLK,
         NF4CHR, NF4SOCK, or NF4FIFO.  (NF4DIR is not listed because it
         is covered by ACE4_ADD_SUBDIRECTORY.)  OPEN is affected when
         used to create a regular file.  LINK and RENAME are always
         affected.

   ACE4_APPEND_DATA

      Operation(s) affected:

         WRITE

         OPEN

         SETATTR of size

      Discussion:

         The ability to modify a file's data, but only starting at EOF.
         This allows for the notion of append-only files, by allowing
         ACE4_APPEND_DATA and denying ACE4_WRITE_DATA to the same user
         or group.  If a file has an ACL such as the one described above
         and a WRITE request is made for somewhere other than EOF, the
         server SHOULD return NFS4ERR_ACCESS.










   ACE4_ADD_SUBDIRECTORY

      Operation(s) affected:

         CREATE

         RENAME

      Discussion:

         Permission to create a subdirectory in a directory.  The CREATE
         operation is affected when nfs_ftype4 is NF4DIR.  The RENAME
         operation is always affected.

   ACE4_READ_NAMED_ATTRS

      Operation(s) affected:

         OPENATTR

      Discussion:

         Permission to read the named attributes of a file or to lookup
         the named attributes directory.  OPENATTR is affected when it
         is not used to create a named attribute directory.  This is
         when 1.) createdir is TRUE, but a named attribute directory
         already exists, or 2.) createdir is FALSE.

   ACE4_WRITE_NAMED_ATTRS

      Operation(s) affected:

         OPENATTR



      Discussion:

         Permission to write the named attributes of a file or to create
         a named attribute directory.  OPENATTR is affected when it is
         used to create a named attribute directory.  This is when
         createdir is TRUE and no named attribute directory exists.  The
         ability to check whether or not a named attribute directory
         exists depends on the ability to look it up, therefore, users
         also need the ACE4_READ_NAMED_ATTRS permission in order to
         create a named attribute directory.







   ACE4_EXECUTE

      Operation(s) affected:

         READ

         OPEN

         REMOVE

         RENAME

         LINK

         CREATE

      Discussion:

         Permission to execute a file.

         Servers SHOULD allow a user the ability to read the data of the
         file when only the ACE4_EXECUTE access mask bit is allowed.
         This is because there is no way to execute a file without
         reading the contents.  Though a server may treat ACE4_EXECUTE
         and ACE4_READ_DATA bits identically when deciding to permit a
         READ operation, it SHOULD still allow the two bits to be set
         independently in ACLs, and MUST distinguish between them when
         replying to ACCESS operations.  In particular, servers SHOULD
         NOT silently turn on one of the two bits when the other is set,
         as that would make it impossible for the client to correctly
         enforce the distinction between read and execute permissions.

         As an example, following a SETATTR of the following ACL:

         nfsuser:ACE4_EXECUTE:ALLOW

         A subsequent GETATTR of ACL for that file SHOULD return:

         nfsuser:ACE4_EXECUTE:ALLOW

         Rather than:

         nfsuser:ACE4_EXECUTE/ACE4_READ_DATA:ALLOW










   ACE4_EXECUTE

      Operation(s) affected:

         LOOKUP

      Discussion:

         Permission to traverse/search a directory.

   ACE4_DELETE_CHILD

      Operation(s) affected:

         REMOVE

         RENAME

      Discussion:

         Permission to delete a file or directory within a directory.
          See Section 6.2.1.3.2 for information on how ACE4_DELETE and
          ACE4_DELETE_CHILD interact.

   ACE4_READ_ATTRIBUTES

      Operation(s) affected:

         GETATTR of file system object attributes

         VERIFY

         NVERIFY

         READDIR

      Discussion:

         The ability to read basic attributes (non-ACLs) of a file.  On
         a UNIX system, basic attributes can be thought of as the stat
         level attributes.  Allowing this access mask bit would mean the
         entity can execute "ls -l" and stat.  If a READDIR operation
         requests attributes, this mask must be allowed for the READDIR
         to succeed.









   ACE4_WRITE_ATTRIBUTES

      Operation(s) affected:

         SETATTR of time_access_set, time_backup,

         time_create, time_modify_set, mimetype, hidden, system

      Discussion:

         Permission to change the times associated with a file or
         directory to an arbitrary value.  Also permission to change the
         mimetype, hidden and system attributes.  A user having
         ACE4_WRITE_DATA or ACE4_WRITE_ATTRIBUTES will be allowed to set
         the times associated with a file to the current server time.

   ACE4_WRITE_RETENTION

      Operation(s) affected:

         SETATTR of retention_set, retentevt_set.

      Discussion:

         Permission to modify the durations of event and non-event-based
         retention.  Also permission to enable event and non-event-based
         retention.  A server MAY behave such that setting
         ACE4_WRITE_ATTRIBUTES allows ACE4_WRITE_RETENTION.

   ACE4_WRITE_RETENTION_HOLD

      Operation(s) affected:

         SETATTR of retention_hold.

      Discussion:

         Permission to modify the administration retention holds.  A
         server MAY map ACE4_WRITE_ATTRIBUTES to
          ACE4_WRITE_RETENTION_HOLD.

   ACE4_DELETE

      Operation(s) affected:









         REMOVE

      Discussion:

         Permission to delete the file or directory.  See
          Section 6.2.1.3.2 for information on how ACE4_DELETE and
          ACE4_DELETE_CHILD interact.

   ACE4_READ_ACL

      Operation(s) affected:

         GETATTR of acl, dacl, or sacl

         NVERIFY

         VERIFY

      Discussion:

         Permission to read the ACL.

   ACE4_WRITE_ACL

      Operation(s) affected:

         SETATTR of acl and mode

      Discussion:

         Permission to write the acl and mode attributes.

   ACE4_WRITE_OWNER

      Operation(s) affected:

         SETATTR of owner and owner_group

      Discussion:

         Permission to write the owner and owner_group attributes.  On
         UNIX systems, this is the ability to execute chown() and
         chgrp().










   ACE4_SYNCHRONIZE

      Operation(s) affected:

         NONE

      Discussion:

         Permission to use the file object as a synchronization
         primitive for interprocess communication.  This permission is
         not enforced or interpreted by the NFSv4.1 server on behalf of
         the client.

         Typically, the ACE4_SYNCHRONIZE permission is only meaningful
         on local file systems, i.e. file systems not accessed via
         NFSv4.1.  The reason that the permission bit exists is that
         some operating environments, such as Windows, use
         ACE4_SYNCHRONIZE.

         For example, if a client copies a file that has
         ACE4_SYNCHRONIZE set from a local file system to an NFSv4.1
         server, and then later copies the file from the NFSv4.1 server
         to a local file system, it is likely that if ACE4_SYNCHRONIZE
         was set in the original file, the client will want it set in
         the second copy.  The first copy will not have the permission
         set unless the NFSv4.1 server has the means to set the
         ACE4_SYNCHRONIZE bit.  The second copy will not have the
         permission set unless the NFSv4.1 server has the means to
         retrieve the ACE4_SYNCHRONIZE bit.

   Server implementations need not provide the granularity of control
   that is implied by this list of masks.  For example, POSIX-based
   systems might not distinguish ACE4_APPEND_DATA (the ability to append
   to a file) from ACE4_WRITE_DATA (the ability to modify existing
   contents); both masks would be tied to a single "write" permission
   [19].  When such a server returns attributes to the client, it would
   show both ACE4_APPEND_DATA and ACE4_WRITE_DATA if and only if the
   write permission is enabled.

   If a server receives a SETATTR request that it cannot accurately
   implement, it should err in the direction of more restricted access,
   except in the previously discussed cases of execute and read.  For
   example, suppose a server cannot distinguish overwriting data from
   appending new data, as described in the previous paragraph.  If a
   client submits an ALLOW ACE where ACE4_APPEND_DATA is set but
   ACE4_WRITE_DATA is not (or vice versa), the server should either turn
   off ACE4_APPEND_DATA or reject the request with NFS4ERR_ATTRNOTSUPP.

6.2.1.3.2.  ACE4_DELETE vs. ACE4_DELETE_CHILD

   Two access mask bits govern the ability to delete a directory entry:
   ACE4_DELETE on the object itself (the "target"), and
   ACE4_DELETE_CHILD on the containing directory (the "parent").

   Many systems also take the "sticky bit" (MODE4_SVTX) on a directory
   to allow unlink only to a user that owns either the target or the
   parent; on some such systems the decision also depends on whether the
   target is writable.

   Servers SHOULD allow unlink if either ACE4_DELETE is permitted on the
   target, or ACE4_DELETE_CHILD is permitted on the parent.  (Note that
   this is true even if the parent or target explicitly denies one of
   these permissions.)

   If the ACLs in question neither explicitly ALLOW nor DENY either of
   the above, and if MODE4_SVTX is not set on the parent, then the
   server SHOULD allow the removal if and only if ACE4_ADD_FILE is
   permitted.  In the case where MODE4_SVTX is set, the server may also
   require the remover to own either the parent or the target, or may
   require the target to be writable.

   This allows servers to support something close to traditional UNIX-
   like semantics, with ACE4_ADD_FILE taking the place of the write bit.

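   The decision process above can be summarized by the following non-
   normative sketch.  The object and principal types and the helpers
   acl_permits(), acl_mentions(), is_owner(), is_writable(), and
   get_mode() are hypothetical stand-ins for whatever internal
   interfaces a server uses to evaluate its ACLs, mode bits, and
   ownership; the ACE4_* and MODE4_SVTX constants are those defined in
   this document.

   #include <stdbool.h>
   #include <stdint.h>

   struct object;                  /* hypothetical file or directory */
   struct principal;               /* hypothetical requester         */

   /* Hypothetical helpers, not defined here. */
   bool acl_permits(struct object *, struct principal *, uint32_t);
   bool acl_mentions(struct object *, struct principal *, uint32_t);
   bool is_owner(struct object *, struct principal *);
   bool is_writable(struct object *, struct principal *);
   uint32_t get_mode(struct object *);

   bool may_unlink(struct object *parent, struct object *target,
                   struct principal *who)
   {
       /* Either ACE4_DELETE on the target or ACE4_DELETE_CHILD on the
        * parent is sufficient, even if the other ACL explicitly
        * denies its bit. */
       if (acl_permits(target, who, ACE4_DELETE) ||
           acl_permits(parent, who, ACE4_DELETE_CHILD))
           return true;

       /* Neither bit explicitly ALLOWed or DENYed: fall back to
        * ACE4_ADD_FILE on the parent, with optional sticky-bit
        * restrictions. */
       if (!acl_mentions(target, who, ACE4_DELETE) &&
           !acl_mentions(parent, who, ACE4_DELETE_CHILD)) {
           if (!acl_permits(parent, who, ACE4_ADD_FILE))
               return false;
           if ((get_mode(parent) & MODE4_SVTX) == 0)
               return true;
           /* MODE4_SVTX is set: the server may also require
            * ownership of the parent or target, or a writable
            * target. */
           return is_owner(parent, who) || is_owner(target, who) ||
                  is_writable(target, who);
       }

       /* At least one of the two bits was explicitly DENYed and
        * neither was ALLOWed: refuse the removal. */
       return false;
   }
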
6.2.1.4.  ACE flag

   The bitmask constants used for the flag field are as follows:

   const ACE4_FILE_INHERIT_ACE             = 0x00000001;
   const ACE4_DIRECTORY_INHERIT_ACE        = 0x00000002;
   const ACE4_NO_PROPAGATE_INHERIT_ACE     = 0x00000004;
   const ACE4_INHERIT_ONLY_ACE             = 0x00000008;
   const ACE4_SUCCESSFUL_ACCESS_ACE_FLAG   = 0x00000010;
   const ACE4_FAILED_ACCESS_ACE_FLAG       = 0x00000020;
   const ACE4_IDENTIFIER_GROUP             = 0x00000040;
   const ACE4_INHERITED_ACE                = 0x00000080;

   A server need not support any of these flags.  If the server supports
   flags that are similar to, but not exactly the same as, these flags,
   the implementation may define a mapping between the protocol-defined
   flags and the implementation-defined flags.

   For example, suppose a client tries to set an ACE with
   ACE4_FILE_INHERIT_ACE set but not ACE4_DIRECTORY_INHERIT_ACE.  If the
   server does not support any form of ACL inheritance, the server
   should reject the request with NFS4ERR_ATTRNOTSUPP.  If the server
   supports a single "inherit ACE" flag that applies to both files and
   directories, the server may reject the request (i.e., requiring the
   client to set both the file and directory inheritance flags).  The
   server may also accept the request and silently turn on the
   ACE4_DIRECTORY_INHERIT_ACE flag.

6.2.1.4.1.  Discussion of Flag Bits

   ACE4_FILE_INHERIT_ACE
      Any non-directory file in any sub-directory will get this ACE
      inherited.

   ACE4_DIRECTORY_INHERIT_ACE
      Can be placed on a directory and indicates that this ACE should be
      added to each new directory created.
      If this flag is set in an ACE in an ACL attribute to be set on a
      non-directory file system object, the operation attempting to set
      the ACL SHOULD fail with NFS4ERR_ATTRNOTSUPP.

   ACE4_INHERIT_ONLY_ACE
      Can be placed on a directory but does not apply to the directory;
      ALLOW and DENY ACEs with this bit set do not affect access to the
      directory, and AUDIT and ALARM ACEs with this bit set do not
      trigger log or alarm events.  Such ACEs only take effect once they
      are applied (with this bit cleared) to newly created files and
      directories as specified by the above two flags.
      If this flag is present on an ACE, but neither
      ACE4_DIRECTORY_INHERIT_ACE nor ACE4_FILE_INHERIT_ACE is present,
      then an operation attempting to set such an attribute SHOULD fail
      with NFS4ERR_ATTRNOTSUPP.

   ACE4_NO_PROPAGATE_INHERIT_ACE
      Can be placed on a directory.  This flag tells the server that
      inheritance of this ACE should stop at newly created child
      directories.

   ACE4_INHERITED_ACE
      Indicates that this ACE is inherited from a parent directory.  A
      server that supports automatic inheritance will place this flag on
      any ACEs inherited from the parent directory when creating a new
      object.  Client applications will use this to perform automatic
      inheritance.  Clients and servers MUST clear this bit in the acl
      attribute; it may only be used in the dacl and sacl attributes.

   ACE4_SUCCESSFUL_ACCESS_ACE_FLAG

   ACE4_FAILED_ACCESS_ACE_FLAG
      The ACE4_SUCCESSFUL_ACCESS_ACE_FLAG (SUCCESS) and
      ACE4_FAILED_ACCESS_ACE_FLAG (FAILED) flag bits may be set only on
      ACE4_SYSTEM_AUDIT_ACE_TYPE (AUDIT) and ACE4_SYSTEM_ALARM_ACE_TYPE
      (ALARM) ACE types.  If during the processing of the file's ACL,
      the server encounters an AUDIT or ALARM ACE that matches the
      principal attempting the OPEN, the server notes that fact, and the
      presence, if any, of the SUCCESS and FAILED flags encountered in
      the AUDIT or ALARM ACE.  Once the server completes the ACL
      processing, it then notes if the operation succeeded or failed.
      If the operation succeeded, and if the SUCCESS flag was set for a
      matching AUDIT or ALARM ACE, then the appropriate AUDIT or ALARM
      event occurs.  If the operation failed, and if the FAILED flag was
      set for the matching AUDIT or ALARM ACE, then the appropriate
      AUDIT or ALARM event occurs.  Either or both of the SUCCESS and
      FAILED flags can be set, but if neither is set, the AUDIT or
      ALARM ACE is not useful.

      The previously described processing applies to ACCESS operations
      even when they return NFS4_OK.  For the purposes of AUDIT and
      ALARM, we consider an ACCESS operation to be a "failure" if it
      fails to return a bit that was requested and supported.

   ACE4_IDENTIFIER_GROUP
      Indicates that the "who" refers to a GROUP as defined under UNIX
      or a GROUP ACCOUNT as defined under Windows.  Clients and servers
      MUST ignore the ACE4_IDENTIFIER_GROUP flag on ACEs with a who
      value equal to one of the special identifiers outlined in
      Section 6.2.1.5.

6.2.1.5.  ACE Who

   The "who" field of an ACE is an identifier that specifies the
   principal or principals to whom the ACE applies.  It may refer to a
   user or a group, with the flag bit ACE4_IDENTIFIER_GROUP specifying
   which.

   There are several special identifiers which need to be understood
   universally, rather than in the context of a particular DNS domain.
   Some of these identifiers cannot be understood when an NFS client
   accesses the server, but have meaning when a local process accesses
   the file.  The ability to display and modify these permissions is
   permitted over NFS, even if none of the access methods on the server
   understands the identifiers.

   +---------------+--------------------------------------------------+
   | Who           | Description                                      |
   +---------------+--------------------------------------------------+
   | OWNER         | The owner of the file                            |
   | GROUP         | The group associated with the file.              |
   | EVERYONE      | The world, including the owner and owning group. |
   | INTERACTIVE   | Accessed from an interactive terminal.           |
   | NETWORK       | Accessed via the network.                        |
   | DIALUP        | Accessed as a dialup user to the server.         |
   | BATCH         | Accessed from a batch job.                       |
   | ANONYMOUS     | Accessed without any authentication.             |
   | AUTHENTICATED | Any authenticated user (opposite of ANONYMOUS)   |
   | SERVICE       | Access from a system service.                    |
   +---------------+--------------------------------------------------+

                                  Table 4

   To avoid conflict, these special identifiers are distinguished by an
   appended "@" and should appear in the form "xxxx@" (with no domain
   name after the "@").  For example: ANONYMOUS@.

   The ACE4_IDENTIFIER_GROUP flag MUST be ignored on entries with these
   special identifiers.  When encoding entries with these special
   identifiers, the ACE4_IDENTIFIER_GROUP flag SHOULD be set to zero.

6.2.1.5.1.  Discussion of EVERYONE@

   It is important to note that "EVERYONE@" is not equivalent to the
   UNIX "other" entity.  This is because, by definition, UNIX "other"
   does not include the owner or owning group of a file.  "EVERYONE@"
   means literally everyone, including the owner or owning group.

6.2.2.  Attribute 58: dacl

   The dacl attribute is like the acl attribute, but dacl allows just
   ALLOW and DENY ACEs.  The dacl attribute supports automatic
   inheritance (see Section 6.4.3.2).

6.2.3.  Attribute 59: sacl

   The sacl attribute is like the acl attribute, but sacl allows just
   AUDIT and ALARM ACEs.  The sacl attribute supports automatic
   inheritance (see Section 6.4.3.2).

6.2.4.  Attribute 33: mode

   The NFSv4.1 mode attribute is based on the UNIX mode bits.  The
   following bits are defined:

   const MODE4_SUID = 0x800;  /* set user id on execution */
   const MODE4_SGID = 0x400;  /* set group id on execution */
   const MODE4_SVTX = 0x200;  /* save text even after use */
   const MODE4_RUSR = 0x100;  /* read permission: owner */
   const MODE4_WUSR = 0x080;  /* write permission: owner */
   const MODE4_XUSR = 0x040;  /* execute permission: owner */
   const MODE4_RGRP = 0x020;  /* read permission: group */
   const MODE4_WGRP = 0x010;  /* write permission: group */
   const MODE4_XGRP = 0x008;  /* execute permission: group */
   const MODE4_ROTH = 0x004;  /* read permission: other */
   const MODE4_WOTH = 0x002;  /* write permission: other */
   const MODE4_XOTH = 0x001;  /* execute permission: other */

   Bits MODE4_RUSR, MODE4_WUSR, and MODE4_XUSR apply to the principal
   identified in the owner attribute.  Bits MODE4_RGRP, MODE4_WGRP, and
   MODE4_XGRP apply to principals identified in the owner_group
   attribute but who are not identified in the owner attribute.  Bits
   MODE4_ROTH, MODE4_WOTH, MODE4_XOTH apply to any principal that does
   not match that in the owner attribute, and does not have a group
   matching that of the owner_group attribute.

   Bits within the mode other than those specified above are not defined
   by this protocol.  A server MUST NOT return bits other than those
   defined above in a GETATTR or READDIR operation, and it MUST return
   NFS4ERR_INVAL if bits other than those defined above are set in a
   SETATTR, CREATE, OPEN, VERIFY or NVERIFY operation.

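   As a non-normative illustration of the rule above, a server might
   validate an incoming mode value as follows.  MODE4_ALL_BITS is a
   hypothetical convenience constant (not defined by this protocol)
   covering the twelve bits listed above; NFS4ERR_INVAL is the error
   defined by this protocol.

   #include <stdint.h>

   #define MODE4_ALL_BITS 0x0FFF  /* MODE4_SUID through MODE4_XOTH */

   /* Returns 0 if only defined mode bits are present, otherwise the
    * NFS4ERR_INVAL error required for SETATTR, CREATE, OPEN, VERIFY,
    * and NVERIFY. */
   int check_mode4(uint32_t mode)
   {
       if (mode & ~(uint32_t)MODE4_ALL_BITS)
           return NFS4ERR_INVAL;
       return 0;
   }
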
6.2.5.  Attribute 74: mode_set_masked

   The mode_set_masked attribute is a write-only attribute that allows
   individual bits in the mode attribute to be set or reset, without
   changing others.  It allows, for example, the bits MODE4_SUID,
   MODE4_SGID, and MODE4_SVTX to be modified while leaving unmodified
   any of the nine low-order mode bits devoted to permissions.

   In instances where the nine low-order bits are left unmodified,
   neither the acl nor the dacl attribute should be automatically
   modified as discussed in Section 6.4.1.

   The mode_set_masked attribute consists of two words each in the form
   of a mode4.  The first consists of the value to be applied to the
   current mode value and the second is a mask.  Only bits set to one in
   the mask word are changed (set or reset) in the file's mode.  All
   other bits in the mode remain unchanged.  Bits in the first word that
   correspond to bits which are zero in the mask are ignored, except
   that undefined bits are checked for validity and can result in
   NFS4ERR_INVAL as described below.

   The mode_set_masked attribute is only valid in a SETATTR operation.
   If it is used in a CREATE or OPEN operation, the server MUST return
   NFS4ERR_INVAL.

   Bits not defined as valid in the mode attribute are not valid in
   either word of the mode_set_masked attribute.  The server MUST return
   NFS4ERR_INVAL if any of those are on in a SETATTR.  If the mode and
   mode_set_masked attributes are both specified in the same SETATTR,
   the server MUST also return NFS4ERR_INVAL.

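   The following non-normative sketch shows one way a server might
   apply mode_set_masked within SETATTR, following the rules above.
   The two mode4 words are represented as plain 32-bit values; the
   constant 0x0FFF simply covers the twelve mode bits defined in
   Section 6.2.4.

   #include <stdint.h>

   /* Apply mode_set_masked (value, mask) to the current mode.
    * Returns 0 on success or NFS4ERR_INVAL if undefined bits are
    * present in either word. */
   int apply_mode_set_masked(uint32_t *cur_mode,
                             uint32_t value, uint32_t mask)
   {
       /* Bits not defined for the mode attribute are not valid in
        * either word. */
       if ((value | mask) & ~(uint32_t)0x0FFF)
           return NFS4ERR_INVAL;

       /* Only bits set in the mask change; all other mode bits are
        * left as they were.  Value bits under a zero mask bit are
        * ignored. */
       *cur_mode = (*cur_mode & ~mask) | (value & mask);
       return 0;
   }
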
6.3.  Common Methods

   The requirements in this section will be referred to in future
   sections, especially Section 6.4.

6.3.1.  Interpreting an ACL

6.3.1.1.  Server Considerations

   The server uses the algorithm described in Section 6.2.1 to determine
   whether an ACL allows access to an object.  However, the ACL might
   not be the sole determiner of access.  For example:

   o  In the case of a file system exported as read-only, the server may
      deny write permissions even though an object's ACL grants it.

   o  Server implementations MAY grant ACE4_WRITE_ACL and ACE4_READ_ACL
      permissions to prevent a situation from arising in which there is
      no valid way to ever modify the ACL.

   o  All servers will allow a user the ability to read the data of the
      file when only the execute permission is granted (i.e., if the ACL
      denies the user the ACE4_READ_DATA access and allows the user
      ACE4_EXECUTE, the server will allow the user to read the data of
      the file).

   o  Many servers have the notion of owner-override in which the owner
      of the object is allowed to override accesses that are denied by
      the ACL.  This may be helpful, for example, to allow users
      continued access to open files on which the permissions have
      changed.

   o  Many servers have the notion of a "superuser" that has privileges
      beyond an ordinary user.  The superuser may be able to read or
      write data or metadata in ways that would not be permitted by the
      ACL.

   o  A retention attribute might also block access otherwise allowed by
      ACLs (see Section 5.13).

6.3.1.2.  Client Considerations

   Clients SHOULD NOT do their own access checks based on their
   interpretation of the ACL, but rather should use the OPEN and ACCESS
   operations to do access checks.  This allows the client to act on
   the results of having the server determine whether or not access
   should be granted based on its interpretation of the ACL.

   Clients must be aware of situations in which an object's ACL will
   define a certain access even though the server will not enforce it.
   In general, but especially in these situations, the client needs to
   do its part in the enforcement of access as defined by the ACL.  To
   do this, the client MAY send the appropriate ACCESS operation prior
   to servicing the request of the user or application in order to
   determine whether the user or application should be granted the
   access requested.  For examples in which the ACL may define accesses
   that the server doesn't enforce see Section 6.3.1.1.

6.3.2.  Computing a Mode Attribute from an ACL

   The following method can be used to calculate the MODE4_R*, MODE4_W*
   and MODE4_X* bits of a mode attribute, based upon an ACL.

   First, for each of the special identifiers OWNER@, GROUP@, and
   EVERYONE@, evaluate the ACL in order, considering only ALLOW and DENY
   ACEs for the identifier EVERYONE@ and for the identifier under
   consideration.  The result of the evaluation will be an NFSv4 ACL
   mask showing exactly which bits are permitted to that identifier.

   Then translate the calculated mask for OWNER@, GROUP@, and EVERYONE@
   into mode bits for, respectively, the user, group, and other, as
   follows:

   1.  Set the read bit (MODE4_RUSR, MODE4_RGRP, or MODE4_ROTH) if and
       only if ACE4_READ_DATA is set in the corresponding mask.

   2.  Set the write bit (MODE4_WUSR, MODE4_WGRP, or MODE4_WOTH) if and
       only if ACE4_WRITE_DATA and ACE4_APPEND_DATA are both set in the
       corresponding mask.

   3.  Set the execute bit (MODE4_XUSR, MODE4_XGRP, or MODE4_XOTH), if
       and only if ACE4_EXECUTE is set in the corresponding mask.

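   The computation above is illustrated by the following non-normative
   sketch.  The helper compute_allowed_mask() is hypothetical: it is
   assumed to evaluate the ACL in order, considering only ALLOW and
   DENY ACEs whose who value is EVERYONE@ or the identifier under
   consideration, and to return the access mask bits that end up
   permitted to that identifier.  The ACE4_* and MODE4_* constants are
   those defined in this document.

   #include <stdint.h>

   uint32_t compute_allowed_mask(const nfsace4 *aces, int nace,
                                 const char *who);   /* hypothetical */

   uint32_t mode_from_acl(const nfsace4 *aces, int nace)
   {
       static const struct { const char *who; int shift; } ids[3] = {
           { "OWNER@",    6 },  /* yields MODE4_RUSR/WUSR/XUSR */
           { "GROUP@",    3 },  /* yields MODE4_RGRP/WGRP/XGRP */
           { "EVERYONE@", 0 }   /* yields MODE4_ROTH/WOTH/XOTH */
       };
       uint32_t mode = 0;

       for (int i = 0; i < 3; i++) {
           uint32_t m = compute_allowed_mask(aces, nace, ids[i].who);

           if (m & ACE4_READ_DATA)
               mode |= MODE4_ROTH << ids[i].shift;
           if ((m & ACE4_WRITE_DATA) && (m & ACE4_APPEND_DATA))
               mode |= MODE4_WOTH << ids[i].shift;
           if (m & ACE4_EXECUTE)
               mode |= MODE4_XOTH << ids[i].shift;
       }
       return mode;
   }
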
6.3.2.1.  Discussion

   Some server implementations also add bits permitted to named users
   and groups to the group bits (MODE4_RGRP, MODE4_WGRP, and
   MODE4_XGRP).

   Implementations are discouraged from doing this, because it has been
   found to cause confusion for users who see members of a file's group
   denied access that the mode bits appear to allow.  (The presence of
   DENY ACEs may also lead to such behavior, but DENY ACEs are expected
   to be more rarely used.)

   The same user confusion seen when fetching the mode also results if
   setting the mode does not effectively control permissions for the
   owner, group, and other users; this motivates some of the
   requirements that follow.

6.4.  Requirements

   The server that supports both mode and ACL must take care to
   synchronize the MODE4_*USR, MODE4_*GRP, and MODE4_*OTH bits with the
   ACEs which have respective who fields of "OWNER@", "GROUP@", and
   "EVERYONE@" so that the client can see semantically equivalent access
   permissions exist whether the client asks for owner, owner_group and
   mode attributes, or for just the ACL.

   In this section, much is made of the methods in Section 6.3.2.  Many
   requirements refer to this section.  But note that the methods have
   behaviors specified with "SHOULD".  This is intentional, to avoid
   invalidating existing implementations that compute the mode according
   to the withdrawn POSIX ACL draft (1003.1e draft 17), rather than by
   actual permissions on owner, group, and other.

6.4.1.  Setting the mode and/or ACL Attributes

   In the case where a server supports the sacl or dacl attribute, in
   addition to the acl attribute, the server MUST fail a request to set
   the acl attribute simultaneously with a dacl or sacl attribute.  The
   error to be given is NFS4ERR_ATTRNOTSUPP.

6.4.1.1.  Setting mode and not ACL

   When any of the nine low-order mode bits are subject to change,
   either because the mode attribute was set or because the
   mode_set_masked attribute was set and the mask included one or more
   bits from the nine low-order mode bits, and no ACL attribute is
   explicitly set, the acl and dacl attributes must be modified in
   accordance with the updated value of those bits.  This must happen
   even if the value of the low-order bits is the same after the mode is
   set as before.

   Note that any AUDIT or ALARM ACEs (hence any ACEs in the sacl
   attribute) are unaffected by changes to the mode.

   In cases in which the permissions bits are subject to change, the acl
   and dacl attributes MUST be modified such that the mode computed via
   the method in Section 6.3.2 yields the low-order nine bits (MODE4_R*,
   MODE4_W*, MODE4_X*) of the mode attribute as modified by the
   attribute change.  The ACL attributes SHOULD also be modified such
   that:

   1.  If MODE4_RGRP is not set, entities explicitly listed in the ACL
       other than OWNER@ and EVERYONE@ SHOULD NOT be granted
       ACE4_READ_DATA.

   2.  If MODE4_WGRP is not set, entities explicitly listed in the ACL
       other than OWNER@ and EVERYONE@ SHOULD NOT be granted
       ACE4_WRITE_DATA or ACE4_APPEND_DATA.

   3.  If MODE4_XGRP is not set, entities explicitly listed in the ACL
       other than OWNER@ and EVERYONE@ SHOULD NOT be granted
       ACE4_EXECUTE.

   Access mask bits other than those listed above, appearing in ALLOW
   ACEs, MAY also be disabled.

   Note that ACEs with the flag ACE4_INHERIT_ONLY_ACE set do not affect
   the permissions of the ACL itself, nor do ACEs of the type AUDIT and
   ALARM.  As such, it is desirable to leave these ACEs unmodified when
   modifying the ACL attributes.

   Also note that the requirement may be met by discarding the acl and
   dacl, in favor of an ACL that represents the mode and only the mode.
   This is permitted, but it is preferable for a server to preserve as
   much of the ACL as possible without violating the above requirements.
   Discarding the ACL makes it effectively impossible for a file created
   with a mode attribute to inherit an ACL (see Section 6.4.3).

6.4.1.2.  Setting ACL and not mode

   When setting the acl or dacl and not setting the mode or
   mode_set_masked attributes, the permission bits of the mode need to
   be derived from the ACL.  In this case, the ACL attribute SHOULD be
   set as given.  The nine low-order bits of the mode attribute
   (MODE4_R*, MODE4_W*, MODE4_X*) MUST be modified to match the result
   of the method Section 6.3.2.  The three high-order bits of the mode
   (MODE4_SUID, MODE4_SGID, MODE4_SVTX) SHOULD remain unchanged.

6.4.1.3.  Setting both ACL and mode

   When setting both the mode (includes use of either the mode attribute
   or the mode_set_masked attribute) and the acl or dacl attributes in
   the same operation, the attributes MUST be applied in this order:
   mode (or mode_set_masked), then ACL.  The mode-related attribute is
   set as given, then the ACL attribute is set as given, possibly
   changing the final mode, as described above in Section 6.4.1.2.

6.4.2.  Retrieving the mode and/or ACL Attributes

   This section applies only to servers that support both the mode and
   ACL attributes.

   Some server implementations may have a concept of "objects without
   ACLs", meaning that all permissions are granted and denied according
   to the mode attribute, and that no ACL attribute is stored for that
   object.  If an ACL attribute is requested of such a server, the
   server SHOULD return an ACL that does not conflict with the mode;
   that is to say, the ACL returned SHOULD represent the nine low-order
   bits of the mode attribute (MODE4_R*, MODE4_W*, MODE4_X*) as
   described in Section 6.3.2.

   For other server implementations, the ACL attribute is always present
   for every object.  Such servers SHOULD store at least the three high-
   order bits of the mode attribute (MODE4_SUID, MODE4_SGID,
   MODE4_SVTX).  The server SHOULD return a mode attribute if one is
   requested, and the low-order nine bits of the mode (MODE4_R*,
   MODE4_W*, MODE4_X*) MUST match the result of applying the method in
   Section 6.3.2 to the ACL attribute.

6.4.3.  Creating New Objects

   If a server supports any ACL attributes, it may use the ACL
   attributes on the parent directory to compute an initial ACL
   attribute for a newly created object.  This will be referred to as
   the inherited ACL within this section.  The act of adding one or more
   ACEs to the inherited ACL that are based upon ACEs in the parent
   directory's ACL will be referred to as inheriting an ACE within this
   section.

   Implementors should standardize on what the behavior of CREATE and
   OPEN must be depending on the presence or absence of the mode and ACL
   attributes.

   1.  If just the mode is given in the call:

       In this case, inheritance SHOULD take place, but the mode MUST be
       applied to the inherited ACL as described in Section 6.4.1.1,
       thereby modifying the ACL.


   2.  If just the ACL is given in the call:

       In this case, inheritance SHOULD NOT take place, and the ACL as
       defined in the CREATE or OPEN will be set without modification,
       and the mode modified as in Section 6.4.1.2


   3.  If both mode and ACL are given in the call:

       In this case, inheritance SHOULD NOT take place, and both
       attributes will be set as described in Section 6.4.1.3.


   4.  If neither mode nor ACL are given in the call:

       In the case where an object is being created without any initial
       attributes at all, e.g. an OPEN operation with an opentype4 of
       OPEN4_CREATE and a createmode4 of EXCLUSIVE4, inheritance SHOULD
       NOT take place (note that EXCLUSIVE4_1 is a better choice of
       createmode4, since it does permit initial attributes).  Instead,
       the server SHOULD set permissions to deny all access to the newly
       created object.  It is expected that the appropriate client will
       set the desired attributes in a subsequent SETATTR operation, and
       the server SHOULD allow that operation to succeed, regardless of
       what permissions the object is created with.  For example, an
       empty ACL denies all permissions, but the server should allow the
       owner's SETATTR to succeed even though WRITE_ACL is implicitly
       denied.

       In other cases, inheritance SHOULD take place, and no
       modifications to the ACL will happen.  The mode attribute, if
       supported, MUST be as computed in Section 6.3.2, with the
       MODE4_SUID, MODE4_SGID and MODE4_SVTX bits clear.  If no
       inheritable ACEs exist on the parent directory, the rules for
       creating acl, dacl or sacl attributes are implementation defined.
       If either the dacl or sacl attribute is supported, then the
       ACL4_DEFAULTED flag SHOULD be set on the newly created
       attributes.

6.4.3.1.  The Inherited ACL

   If the object being created is not a directory, the inherited ACL
   SHOULD NOT inherit ACEs from the parent directory ACL unless the
   ACE4_FILE_INHERIT_ACE flag is set.

   If the object being created is a directory, the inherited ACL should
   inherit all inheritable ACEs from the parent directory, i.e., those
   that have the ACE4_FILE_INHERIT_ACE or ACE4_DIRECTORY_INHERIT_ACE
   flag set.
   If the inheritable ACE has ACE4_FILE_INHERIT_ACE set, but
   ACE4_DIRECTORY_INHERIT_ACE is clear, the inherited ACE on the newly
   created directory MUST have the ACE4_INHERIT_ONLY_ACE flag set to
   prevent the directory from being affected by ACEs meant for non-
   directories.

   When a new directory is created, the server MAY split any inherited
   ACE which is both inheritable and effective (in other words, which
   has neither ACE4_INHERIT_ONLY_ACE nor ACE4_NO_PROPAGATE_INHERIT_ACE
   set), into two ACEs, one with no inheritance flags, and one with
   ACE4_INHERIT_ONLY_ACE set.  (In the case of a dacl or sacl attribute,
   both of those ACEs SHOULD also have the ACE4_INHERITED_ACE flag set.)
   This makes it simpler to modify the effective permissions on the
   directory without modifying the ACE which is to be inherited to the
   new directory's children.

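   The following non-normative sketch gathers the inheritance rules
   above into one routine run at object creation time.  The handling of
   ACE4_NO_PROPAGATE_INHERIT_ACE is one possible interpretation of
   Section 6.2.1.4.1, and the OPTIONAL splitting of ACEs that are both
   effective and inheritable is omitted; the output array is assumed to
   be large enough.

   #include <stdbool.h>
   #include <stdint.h>

   /* Build the inherited ACL of a new object from its parent
    * directory's ACL.  "auto_inh" is true when building a dacl or
    * sacl, in which case ACE4_INHERITED_ACE is set on each inherited
    * ACE.  Returns the number of ACEs written to "out". */
   int build_inherited_acl(const nfsace4 *parent, int nace,
                           bool is_dir, bool auto_inh, nfsace4 *out)
   {
       const uint32_t inh_flags = ACE4_FILE_INHERIT_ACE |
                                  ACE4_DIRECTORY_INHERIT_ACE |
                                  ACE4_NO_PROPAGATE_INHERIT_ACE |
                                  ACE4_INHERIT_ONLY_ACE;
       int n = 0;

       for (int i = 0; i < nace; i++) {
           nfsace4 ace = parent[i];

           if (!(ace.flag & (ACE4_FILE_INHERIT_ACE |
                             ACE4_DIRECTORY_INHERIT_ACE)))
               continue;                       /* not inheritable */

           if (!is_dir) {
               if (!(ace.flag & ACE4_FILE_INHERIT_ACE))
                   continue;
               /* On a non-directory the inherited ACE is purely
                * effective. */
               ace.flag &= ~inh_flags;
           } else if (!(ace.flag & ACE4_DIRECTORY_INHERIT_ACE)) {
               if (ace.flag & ACE4_NO_PROPAGATE_INHERIT_ACE)
                   continue;   /* cannot apply here or propagate */
               /* File-only inheritance: keep the ACE for future
                * children, but do not let it affect the directory
                * itself. */
               ace.flag |= ACE4_INHERIT_ONLY_ACE;
           } else if (ace.flag & ACE4_NO_PROPAGATE_INHERIT_ACE) {
               /* Effective on this new directory, but inheritance
                * stops here. */
               ace.flag &= ~inh_flags;
           }

           if (auto_inh)
               ace.flag |= ACE4_INHERITED_ACE;
           out[n++] = ace;
       }
       return n;
   }
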
6.4.3.2.  Automatic Inheritance

   The acl attribute consists only of an array of ACEs, but the sacl
   (Section 6.2.3) and dacl (Section 6.2.2) attributes also include an
   additional flag field.

   struct nfsacl41 {
           aclflag4        na41_flag;
           nfsace4         na41_aces<>;
   };

   The flag field applies to the entire sacl or dacl; three flag values
   are defined:

   const ACL4_AUTO_INHERIT         = 0x00000001;
   const ACL4_PROTECTED            = 0x00000002;
   const ACL4_DEFAULTED            = 0x00000004;

   and all other bits must be cleared.  The ACE4_INHERITED_ACE flag may
   be set in the ACEs of the sacl or dacl (whereas it must always be
   cleared in the acl).

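   As a small non-normative illustration, the "all other bits must be
   cleared" requirement can be checked as follows; ACL4_DEFINED_FLAGS
   is a hypothetical convenience constant, not part of the protocol.

   #include <stdbool.h>

   #define ACL4_DEFINED_FLAGS \
           (ACL4_AUTO_INHERIT | ACL4_PROTECTED | ACL4_DEFAULTED)

   bool nfsacl41_flags_valid(const nfsacl41 *a)
   {
       return (a->na41_flag & ~ACL4_DEFINED_FLAGS) == 0;
   }
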
   Together these features allow a server to support automatic
   inheritance, which we now explain in more detail.

   Inheritable ACEs are normally inherited by child objects only at the
   time that the child objects are created; later modifications to
   inheritable ACEs do not result in modifications to inherited ACEs on
   descendants.

   However, the dacl and sacl provide an OPTIONAL mechanism which allows
   a client application to propagate changes to inheritable ACEs to an
   entire directory hierarchy.

   A server that supports this performs inheritance at object creation
   time in the normal way, and SHOULD set the ACE4_INHERITED_ACE flag on
   any inherited ACEs as they are added to the new object.

   A client application such as an ACL editor may then propagate changes
   to inheritable ACEs on a directory by recursively traversing that
   directory's descendants and modifying each ACL encountered to remove
   any ACEs with the ACE4_INHERITED_ACE flag and to replace them by the
   new inheritable ACEs (also with the ACE4_INHERITED_ACE flag set).  It
   uses the existing ACE inheritance flags in the obvious way to decide
   which ACEs to propagate.  (Note that it may encounter further
   inheritable ACEs when descending the directory hierarchy, and that
   those will also need to be taken into account when propagating
   inheritable ACEs to further descendants.)

   The reach of this propagation may be limited in two ways: first,
   automatic inheritance is not performed from any directory ACL that
   has the ACL4_AUTO_INHERIT flag cleared; and second, automatic
   inheritance stops wherever an ACL with the ACL4_PROTECTED flag is
   set, preventing modification of that ACL and also (if the ACL is set
   on a directory) of the ACL on any of the object's descendants.

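   A non-normative sketch of this client-side propagation, for the dacl
   attribute only, follows.  The object handle and the helpers
   first_child(), next_child(), is_dir(), read_dacl(), write_dacl(),
   strip_inherited(), and append_inherited() are hypothetical client-
   application internals; append_inherited() is assumed to apply the
   usual inheritance flags and to set ACE4_INHERITED_ACE on each ACE it
   adds.

   #include <stdbool.h>
   #include <stddef.h>

   struct object;                 /* hypothetical client-side handle */

   struct object *first_child(struct object *dir);
   struct object *next_child(struct object *dir, struct object *prev);
   bool is_dir(struct object *obj);
   nfsacl41 read_dacl(struct object *obj);
   void write_dacl(struct object *obj, const nfsacl41 *acl);
   void strip_inherited(nfsacl41 *acl);
   void append_inherited(nfsacl41 *acl, const nfsacl41 *src, bool dir);

   void propagate(struct object *dir, const nfsacl41 *src)
   {
       /* No automatic inheritance from a directory whose ACL has
        * ACL4_AUTO_INHERIT cleared. */
       if (!(src->na41_flag & ACL4_AUTO_INHERIT))
           return;

       for (struct object *c = first_child(dir); c != NULL;
            c = next_child(dir, c)) {
           nfsacl41 acl = read_dacl(c);

           /* ACL4_PROTECTED stops propagation at this object and,
            * for a directory, below it. */
           if (acl.na41_flag & ACL4_PROTECTED)
               continue;

           /* Replace previously inherited ACEs with the new
            * inheritable ACEs. */
           strip_inherited(&acl);
           append_inherited(&acl, src, is_dir(c));
           write_dacl(c, &acl);

           /* Inheritable ACEs now present on the child govern
            * propagation to its own descendants. */
           if (is_dir(c))
               propagate(c, &acl);
       }
   }
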
   This propagation is performed independently for the sacl and the dacl
   attributes; thus the ACL4_AUTO_INHERIT and ACL4_PROTECTED flags may
   be independently set for the sacl and the dacl, and propagation of
   one type of acl may continue down a hierarchy even where propagation
   of the other acl has stopped.

   New objects should be created with a dacl and a sacl that both have
   the ACL4_PROTECTED flag cleared and the ACL4_AUTO_INHERIT flag set to
   the same value as that on, respectively, the dacl or sacl of the
   parent object.

   Both the dacl and sacl attributes are RECOMMENDED, and a server may
   support one without supporting the other.

   A server that supports both the old acl attribute and one or both of
   the new dacl or sacl attributes must do so in such a way as to keep
   all three attributes consistent with each other.  Thus the ACEs
   reported in the acl attribute should be the union of the ACEs
   reported in the dacl and sacl attributes, except that the
   ACE4_INHERITED_ACE flag must be cleared from the ACEs in the acl.
   And of course a client that queries only the acl will be unable to
   determine the values of the sacl or dacl flag fields.

   When a client performs a SETATTR for the acl attribute, the server
   SHOULD set the ACL4_PROTECTED flag to true on both the sacl and the
   dacl.  By using the acl attribute, as opposed to the dacl or sacl
   attributes, the client signals that it may not understand automatic
   inheritance, and thus cannot be trusted to set an ACL for which
   automatic inheritance would make sense.

   When a client application queries an ACL, modifies it, and sets it
   again, it should leave any ACEs marked with ACE4_INHERITED_ACE
   unchanged, in their original order, at the end of the ACL.  If the
   application is unable to do this, it should set the ACL4_PROTECTED
   flag.  This behavior is not enforced by servers, but violations of
   this rule may lead to unexpected results when applications perform
   automatic inheritance.

   If a server also supports the mode attribute, it SHOULD set the mode
   in such a way that leaves inherited ACEs unchanged, in their original
   order, at the end of the ACL.  If it is unable to do so, it SHOULD
   set the ACL4_PROTECTED flag on the file's dacl.

   Finally, in the case where the request that creates a new file or
   directory does not also set permissions for that file or directory,
   and there are also no ACEs to inherit from the parent directory,
   then the server's choice of ACL for the new object is implementation-
   dependent.  In this case, the server SHOULD set the ACL4_DEFAULTED
   flag on the ACL it chooses for the new object.  An application
   performing automatic inheritance takes the ACL4_DEFAULTED flag as a
   sign that the ACL should be completely replaced by one generated
   using the automatic inheritance rules.


7.  Single-server Namespace

   This chapter describes the NFSv4 single-server namespace.  Single-
   server namespaces may be presented directly to clients, or they may
   be used as a basis to form larger multi-server namespaces (e.g. site-
   wide or organization-wide) to be presented to clients, as described
   in Section 11.

7.1.  Server Exports

   On a UNIX server, the namespace describes all the files reachable by
   pathnames under the root directory or "/".  On a Windows server the
   namespace constitutes all the files on disks named by mapped disk
   letters.  NFS server administrators rarely make the entire server's
   file system namespace available to NFS clients.  More often portions
   of the namespace are made available via an "export" feature.  In
   previous versions of the NFS protocol, the root filehandle for each
   export is obtained through the MOUNT protocol; the client sent a
   string that identified the export name within the namespace and the
   server returned the root filehandle for that export.  The MOUNT
   protocol also provided an EXPORTS procedure that enumerated the
   server's exports.

7.2.  Browsing Exports

   The NFSv4.1 protocol provides a root filehandle that clients can use
   to obtain filehandles for the exports of a particular server, via a
   series of LOOKUP operations within a COMPOUND, to traverse a path.  A
   common user experience is to use a graphical user interface (perhaps
   a file "Open" dialog window) to find a file via progressive browsing
   through a directory tree.  The client must be able to move from one
   export to another export via single-component, progressive LOOKUP
   operations.

   This style of browsing is not well supported by the NFSv3 protocol.
   In NFSv3, the client expects all LOOKUP operations to remain within a
   single server file system.  For example, the device attribute will
   not change.  This prevents a client from taking namespace paths that
   span exports.

   In the case of NFSv3, an automounter on the client can obtain a
   snapshot of the server's namespace using the EXPORTS procedure of the
   MOUNT protocol.  If it understands the server's pathname syntax, it
   can create an image of the server's namespace on the client.  The
   parts of the namespace that are not exported by the server are filled
   in with directories that might be constructed similarly to an NFSv4.1
   "pseudo file system" (see Section 7.3) that allows the user to browse
   from one mounted file system to another.  There is a drawback to this
   representation of the server's namespace on the client: it is static.
   If the server administrator adds a new export the client will be
   unaware of it.

7.3.  Server Pseudo File System

   NFSv4.1 servers avoid this namespace inconsistency by presenting all
   the exports for a given server within the framework of a single
   namespace, for that server.  An NFSv4.1 client uses LOOKUP and
   READDIR operations to browse seamlessly from one export to another.

   Where there are portions of the server namespace that are not
   exported, clients require some way of traversing those portions to
   reach actual exported file systems.  A technique that servers may
   use to provide for this is to bridge the unexported portions of the
   namespace via a "pseudo file system" that provides a view of
   exported directories only.  A pseudo file system has a unique fsid
   and behaves like a normal, read-only file system.

   Based on the construction of the server's namespace, it is possible
   that multiple pseudo file systems may exist.  For example,

           /a              pseudo file system
           /a/b            real file system
           /a/b/c          pseudo file system
           /a/b/c/d        real file system

   Each of the pseudo file systems is considered a separate entity and
   therefore MUST have its own fsid, unique among all the fsids for that
   server.

7.4.  Multiple Roots

   Certain operating environments are sometimes described as having
   "multiple roots".  In such environments individual file systems are
   commonly represented by disk or volume names.  NFSv4 servers for
   these platforms can construct a pseudo file system above these root
   names so that disk letters or volume names are simply directory names
   in the pseudo root.

7.5.  Filehandle Volatility

   The nature of the server's pseudo file system is that it is a logical
   representation of file system(s) available from the server.
   Therefore, the pseudo file system is most likely constructed
   dynamically when the server is first instantiated.  It is expected
   that the pseudo file system may not have an on-disk counterpart from
   which persistent filehandles could be constructed.  Even though it is
   preferable that the server provide persistent filehandles for the
   pseudo file system, the NFS client should expect that pseudo file
   system filehandles are volatile.  This can be confirmed by checking
   the associated "fh_expire_type" attribute for those filehandles in
   question.  If the filehandles are volatile, the NFS client must be
   prepared to recover a filehandle value (e.g. with a series of LOOKUP
   operations) when receiving an error of NFS4ERR_FHEXPIRED.

   Because it is quite likely that servers will implement pseudo file
   systems using volatile filehandles, clients need to be prepared for
   them, rather than assuming that all filehandles will be persistent.

7.6.  Exported Root

   If the server's root file system is exported, one might conclude that
   a pseudo file system is unneeded.  This is not necessarily so.
   Assume the following file systems on a server:

           /       fs1  (exported)
           /a      fs2  (not exported)
           /a/b    fs3  (exported)

   Because fs2 is not exported, fs3 cannot be reached with simple
   LOOKUPs.  The server must bridge the gap with a pseudo file system.

7.7.  Mount Point Crossing

   The server file system environment may be constructed in such a way
   that one file system contains a directory which is 'covered' or
   mounted upon by a second file system.  For example:

           /a/b            (file system 1)
           /a/b/c/d        (file system 2)

   The pseudo file system for this server may be constructed to look
   like:

           /               (place holder/not exported)
           /a/b            (file system 1)
           /a/b/c/d        (file system 2)

   It is the server's responsibility to present a complete pseudo file
   system to the client.  If the client sends a lookup request
   for the path "/a/b/c/d", the server's response is the filehandle of
   the root of the file system "/a/b/c/d".  In previous versions of the
   NFS protocol, the server would respond with the filehandle of
   directory "/a/b/c/d" within the file system "/a/b".

   The NFS client will be able to determine if it crosses a server mount
   point by a change in the value of the "fsid" attribute.

7.8.  Security Policy and Namespace Presentation

   Because NFSv4 clients possess the ability to change the security
   mechanisms used, after determining what is allowed, by using SECINFO
   and SECINFO_NONAME, the server SHOULD NOT present a different view of
   the namespace based on the security mechanism being used by a client.
   Instead, it should present a consistent view and return
   NFS4ERR_WRONGSEC if an attempt is made to access data with an
   inappropriate security mechanism.

   If security considerations make it necessary to hide the existence of
   a particular file system, as opposed to all of the data within it,
   the server can apply the security policy of a shared resource in the
   server's namespace to components of the resource's ancestors.  For
   example:

           /                           (place holder/not exported)
           /a/b                        (file system 1)
           /a/b/MySecretProject        (file system 2)


   The /a/b/MySecretProject directory is a real file system and is the
   shared resource.  Suppose the security policy for /a/b/
   MySecretProject is Kerberos with integrity and it is desired to limit
   knowledge of the existence of this file system.  In this case, the
   server should apply the same security policy to /a/b.  This allows
   for knowledge of the existence of a file system to be secured when
   desirable.

   For the case of the use of multiple, disjoint security mechanisms in
   the server's resources, applying that sort of policy would result in
   the higher-level file system not being accessible using any security
   flavor, making that higher-level file system inaccessible.
   Therefore, that sort of configuration is not compatible with hiding
   the existence (as opposed to the contents) of a file system from
   clients using multiple disjoint sets of security flavors.

   In other circumstances, a desirable policy is that the security of a
   particular object in the server's namespace include the union of all
   security mechanisms of all direct descendants.
   convenient practice, unless strong security requirements dictate
   otherwise, is to make all of the pseudo file system accessible by all
   of the valid security mechanisms.

   Where there is concern about the security of data on the network,
   clients should use strong security mechanisms to access the pseudo
   file system in order to prevent man-in-the-middle attacks.


8.  State Management

   Integrating locking into the NFS protocol necessarily causes it to be
   stateful.  With the inclusion of such features as share reservations,
   file and directory delegations, recallable layouts, and support for
   mandatory byte-range locking, the protocol becomes substantially more
   dependent on proper management of state than the traditional
   combination of NFS and NLM [45].  These features include expanded
   locking facilities, which provide some measure of interclient
   exclusion, but the state also offers features not readily providable
   using a stateless model.  There are three components to making this
   state manageable:

   o  Clear division between client and server

   o  Ability to reliably detect inconsistency in state between client
      and server

   o  Simple and robust recovery mechanisms

   In this model, the server owns the state information.  The client
   requests changes in locks and the server responds with the changes
   made.  Non-client-initiated changes in locking state are infrequent.
   The client receives prompt notification of such changes and can
   adjust its view of the locking state to reflect the server's changes.

   Individual pieces of state created by the server and passed to the
   client at its request are represented by 128-bit stateids.  These
   stateids may represent a particular open file, a set of byte-range
   locks held by a particular owner, or a recallable delegation of
   privileges to access a file in particular ways, or at a particular
   location.

   In all cases, there is a transition from the most general information
   which represents a client as a whole to the eventual lightweight
   stateid used for most client and server locking interactions.  The
   details of this transition will vary with the type of object but it
   always starts with a client ID.

8.1.  Client and Session ID

   A client must establish a client ID (see Section 2.4) and then one or
   more session IDs (see Section 2.10) before performing any operations
   to open, lock, delegate, or obtain a layout for a file object.  Each
   session ID is associated with a specific client ID, and thus serves
   as a shorthand reference to an NFSv4.1 client.

   For some types of locking interactions, the client will represent
   some number of internal locking entities called "owners", which
   normally correspond to processes internal to the client.  For other
   types of locking-related objects, such as delegations and layouts, no
   such intermediate entities are provided for, and the locking-related
   objects are considered to be transferred directly between the server
   and a unitary client.

8.2.  Stateid Definition

   When the server grants a lock of any type (including opens, byte-
   range locks, delegations, and layouts) it responds with a unique
   stateid, that represents a set of locks (often a single lock) for the
   same file, of the same type, and sharing the same ownership
   characteristics.  Thus opens of the same file by different open-
   owners each have an identifying stateid.  Similarly, each set of
   byte-range locks on a file owned by a specific lock-owner has its own
   identifying stateid.  Delegations and layouts also have associated
   stateids by which they may be referenced.  The stateid is used as a
   shorthand reference to a lock or set of locks and given a stateid the
   server can determine the associated state-owner or state-owners (in
   the case of an open-owner/lock-owner pair) and the associated
   filehandle.  When stateids are used, the current filehandle must be
   the one associated with that stateid.

   All stateids associated with a given client ID are associated with a
   common lease which represents the claim of those stateids and the
   objects they represent to be maintained by the server.  See
   Section 8.3 for a discussion of leases.

   The server may assign stateids independently for different clients.
   A stateid with the same bit pattern for one client may designate an
   entirely different set of locks for a different client.  The stateid
   is always interpreted with respect to the client ID associated with
   the current session.  Stateids apply to all sessions associated with
   the given client ID and the client may use a stateid obtained from
   one session on another session associated with the same client ID.

8.2.1.  Stateid Types

   With the exception of special stateids (see Section 8.2.3), each
   stateid represents locking objects of one of a set of types defined
   by the NFSv4.1 protocol.  Note that in all these cases, where we
   speak of guarantee, it is understood there are situations such as a
   client restart, or lock revocation, that allow the guarantee to be
   voided.

   o  Stateids may represent opens of files.

      Each stateid in this case represents the open state for a given
      client ID/open-owner/filehandle triple.  Such stateids are subject
      to change (with consequent incrementing of the stateid's seqid) in
      response to OPENs that result in upgrade and OPEN_DOWNGRADE
      operations.

   o  Stateids may represent sets of byte-range locks.

      All locks held on a particular file by a particular owner and all
      gotten under the aegis of a particular open file are associated
      with a single stateid with the seqid being incremented whenever
      LOCK and LOCKU operations affect that set of locks.

   o  Stateids may represent file delegations, which are recallable
      guarantees by the server to the client, that other clients will
      not reference, or will not modify a particular file, until the
      delegation is returned.  In NFSv4.1, file delegations may be
      obtained on both regular and non-regular files.

      A stateid represents a single delegation held by a client for a
      particular filehandle.

   o  Stateids may represent directory delegations, which are recallable
      guarantees by the server to the client, that other clients will
      not modify the directory, until the delegation is returned.

      A stateid represents a single delegation held by a client for a
      particular directory filehandle.

   o  Stateids may represent layouts, which are recallable guarantees by
      the server to the client, that particular files may be accessed
      via an alternate data access protocol at specific locations.  Such
      access is limited to particular sets of byte ranges and may
      proceed until those byte ranges are reduced or the layout is
      returned.

      A stateid represents the set of all layouts held by a particular
      client for a particular filehandle with a given layout type.  The
      seqid is updated as the layouts of that set change with layout
      stateid changing operations such as LAYOUTGET and LAYOUTRETURN.

8.2.2.  Stateid Structure

   Stateids are divided into two fields, a 96-bit "other" field
   identifying the specific set of locks and a 32-bit "seqid" sequence
   value.  Except in the case of special stateids (see Section 8.2.3), a
   particular value of the "other" field denotes a set of locks of the
   same type (for example byte-range locks, opens, delegations, or
   layouts), for a specific file or directory, and sharing the same
   ownership characteristics.  The seqid designates a specific instance
   of such a set of locks, and is incremented to indicate changes in
   such a set of locks, either by the addition or deletion of locks from
   the set, a change in the byte-range they apply to, or an upgrade or
   downgrade in the type of one or more locks.

   When such a set of locks is first created the server returns a
   stateid with seqid value of one.  On subsequent operations which
   modify the set of locks the server is required to increment the seqid
   field by one (1) whenever it returns a stateid for the same state-
   owner/file/type combination and there is some change in the set of
   locks actually designated.  In this case the server will return a
   stateid with an other field the same as previously used for that
   state-owner/file/type combination, with an incremented seqid field.
   This pattern continues until the seqid is incremented past
   NFS4_UINT32_MAX, and one (not zero) is the next seqid value.

   The purpose of the incrementing of the seqid is to allow the server
   to communicate to the client the order in which operations that
   modified locking state associated with a stateid have been processed
   and to make it possible for the client to send requests that are
   conditional on the set of locks not having changed since the stateid
   in question was returned.

   Except for layout stateids (Section 12.5.3), when a client sends a
   stateid to the server, it has two choices with regard to the seqid
   sent.  It may set the seqid to zero to indicate to the server that it
   wishes the most up-to-date seqid for that stateid's "other" field to
   be used.  This would be the common choice in the case of a stateid
   sent with a READ or WRITE operation.  It also may set a non-zero
   value in which case the server checks if that seqid is the correct
   one.  In that case the server is required to return
   NFS4ERR_OLD_STATEID if the seqid is lower than the most current value
   and NFS4ERR_BAD_STATEID if the seqid is greater than the most current
   value.  This would be the common choice in the case of stateids sent
   with a CLOSE or OPEN_DOWNGRADE.  Because OPENs may be sent in
   parallel for the same owner, a client might close a file without
   knowing that an OPEN upgrade had been done by the server, changing
   the lock in question.  If CLOSE were sent with a zero seqid, the OPEN
   upgrade would be canceled before the client even received an
   indication that an upgrade had happened.

   When a stateid is sent by the server to client as part of a callback
   operation, it is not subject to checking for a current seqid and
   returning NFS4ERR_OLD_STATEID.  This is because the client is not in
   a position to know the most up-to-date seqid and thus cannot verify
   it.  Unless specially noted, the seqid value for a stateid sent by
   the server to the client as part of a callback is required to be zero
   with NFS4ERR_BAD_STATEID returned if it is not.

   In making comparisons between seqids, both by the client in
   determining the order of operations and by the server in determining
   whether NFS4ERR_OLD_STATEID is to be returned, the possibility of
   the seqid having wrapped around past the NFS4_UINT32_MAX value needs
   to be taken into account.  When two seqid values are being compared,
   the total count of slots for all sessions associated with the
   current client is used to do this.  When one seqid value is less
   than this total slot count and another seqid value is greater than
   NFS4_UINT32_MAX minus the total slot count, the latter is to be
   treated as lower than the former, despite the fact that it is
   numerically greater.

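   The following non-normative sketch illustrates seqid advancement and
   a wraparound-aware comparison consistent with the description above;
   "nslots" is the total count of slots for all sessions associated
   with the client, and NFS4_UINT32_MAX has the value defined by this
   protocol.

   #include <stdint.h>

   #define NFS4_UINT32_MAX 0xFFFFFFFFu

   /* The seqid after NFS4_UINT32_MAX is one, never zero. */
   uint32_t seqid_next(uint32_t seqid)
   {
       return (seqid == NFS4_UINT32_MAX) ? 1 : seqid + 1;
   }

   /* Compare two seqids for the same "other" value.  Returns -1, 0,
    * or 1 when "a" is to be treated as lower than, equal to, or
    * higher than "b". */
   int seqid_cmp(uint32_t a, uint32_t b, uint32_t nslots)
   {
       if (a == b)
           return 0;
       /* A small value just past the wrap is newer than a value
        * within the slot count of NFS4_UINT32_MAX. */
       if (a < nslots && b > NFS4_UINT32_MAX - nslots)
           return 1;
       if (b < nslots && a > NFS4_UINT32_MAX - nslots)
           return -1;
       return (a < b) ? -1 : 1;
   }

   A server using such a comparison would, for example, return
   NFS4ERR_OLD_STATEID when a seqid presented by the client compares
   lower than the current seqid for that "other" value.
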
8.2.3.  Special Stateids

   Stateid values whose "other" field is either all zeros or all ones
   are reserved.  They may not be assigned by the server but have
   special meanings defined by the protocol.  The particular meaning
   depends on whether the "other" field is all zeros or all ones and the
   specific value of the "seqid" field.

   The following combinations of "other" and "seqid" are defined in
   NFSv4.1:

   o  When "other" and "seqid" are both zero, the stateid is treated as
      a special anonymous stateid, which can be used in READ, WRITE, and
      SETATTR requests to indicate the absence of any open state
      associated with the request.  When an anonymous stateid value is
      used, and an existing open denies the form of access requested,
      then access will be denied to the request.  This stateid MUST NOT
      be used on operations to data servers (Section 13.6).

   o  When "other" and "seqid" are both all ones, the stateid is a
      special read bypass stateid.  When this value is used in WRITE or
      SETATTR, it is treated like the anonymous value.  When used in
      READ, the server MAY grant access, even if access would normally
      be denied to READ requests.  This stateid MUST NOT be used on
      operations to data servers.

   o  When "other" is zero and "seqid" is one, the stateid represents
      the current stateid, which is whatever value is the last stateid
      returned by an operation within the COMPOUND.  In the case of an
      OPEN, the stateid returned for the open file, and not the
      delegation is used.  The stateid passed to the operation in place
      of the special value has its "seqid" value set to zero, except
      when the current stateid is used by the operation CLOSE or
      OPEN_DOWNGRADE.  If there is no operation in the COMPOUND which
      has returned a stateid value, the server MUST return the error
      NFS4ERR_BAD_STATEID.  As illustrated in Figure 6, if the value of
      a current stateid is a special stateid, and the stateid of an
      operation's arguments has "other" set to zero, and "seqid" set to
      one, then the server MUST return the error NFS4ERR_BAD_STATEID.

   o  When "other" is zero and "seqid" is NFS4_UINT32_MAX, the stateid
      represents a reserved stateid value defined to be invalid.  When
      this stateid is used, the server MUST return the error
      NFS4ERR_BAD_STATEID.

   If a stateid value is used which has all zero or all ones in the
   "other" field, but does not match one of the cases above, the server
   MUST return the error NFS4ERR_BAD_STATEID.

   Special stateids, unlike other stateids, are not associated with
   individual client IDs or filehandles and can be used with all valid
   client IDs and filehandles.  In the case of a special stateid
   designating the current stateid, the current stateid value
   substituted for the special stateid is associated with a particular
   client ID and filehandle, and so, if it is used where the current
   filehandle does not match that associated with the current stateid,
   the operation to which the stateid is passed will return
   NFS4ERR_BAD_STATEID.

8.2.4.  Stateid Lifetime and Validation

   Stateids must remain valid until either a client restart or a server
   restart or until the client returns all of the locks associated with
   the stateid by means of an operation such as CLOSE or DELEGRETURN.
   If the locks are lost due to revocation, the stateid remains a valid
   designation of that revoked state until the client frees it by using
   FREE_STATEID.  Stateids associated with byte-range locks are an
   exception.  They remain valid even if a LOCKU frees all remaining
   locks, so long as the open file with which they are associated
   remains open, unless the client does a FREE_STATEID to cause the
   stateid to be freed.

   It should be noted that there are situations in which the client's
   locks become invalid, without the client requesting they be returned.
   These include lease expiration and a number of forms of lock
   revocation within the lease period.  It is important to note that in
   these situations, the stateid remains valid and the client can use it
   to determine the disposition of the associated lost locks.

   An "other" value must never be reused for a different purpose (i.e.
   different filehandle, owner, or type of locks) within the context of
   a single client ID.  A server may retain the "other" value for the
   same purpose beyond the point where it may otherwise be freed but if
   it does so, it must maintain "seqid" continuity with previous values.

   One mechanism that may be used to satisfy the requirement that the
   server recognize invalid and out-of-date stateids is for the server
   to divide the "other" field of the stateid into two fields.

   o  An index into a table of locking-state structures.

   o  A generation number which is incremented on each allocation of a
      table entry for a particular use.

   And then store in each table entry,

   o  The client ID with which the stateid is associated.

   o  The current generation number for the (at most one) valid stateid
      sharing this index value.

   o  The filehandle of the file on which the locks are taken.

   o  An indication of the type of stateid (open, byte-range lock, file
      delegation, directory delegation, layout).

   o  The last "seqid" value returned corresponding to the current
      "other" value.

   o  An indication of the current status of the locks associated with
      this stateid.  In particular, whether these have been revoked and
      if so, for what reason.
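
   The following is a purely illustrative sketch of such a division in
   C.  The type names, field widths, and the choice of a 32-bit index
   plus a 64-bit generation number are arbitrary choices of this
   example, not protocol requirements.

   #include <stdint.h>

   #define NFS4_FHSIZE 128

   struct file_handle {               /* placeholder for an nfs_fh4 */
           uint32_t len;
           uint8_t  data[NFS4_FHSIZE];
   };

   struct stateid_other {           /* one way to split the 12-byte  */
           uint32_t table_index;    /* "other" field: a table index  */
           uint64_t generation;     /* plus a generation number      */
   };

   enum stateid_type {
           SID_OPEN, SID_BYTE_RANGE_LOCK, SID_FILE_DELEGATION,
           SID_DIR_DELEGATION, SID_LAYOUT
   };

   struct state_table_entry {
           uint64_t           clientid;       /* owning client ID      */
           uint64_t           generation;     /* currently valid value */
           struct file_handle filehandle;     /* file the locks are on */
           enum stateid_type  type;           /* open, lock, ...       */
           uint32_t           last_seqid;     /* last "seqid" returned */
           int                revoked;        /* non-zero if revoked   */
           int                revoke_reason;  /* why, if revoked       */
   };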

   With this information, an incoming stateid can be validated and the
   appropriate error returned when necessary.  Special and non-special
   stateids are handled separately.  (See Section 8.2.3 for a discussion
   of special stateids.)

   Note that stateids are implicitly qualified by the current client ID,
   as derived from the client ID associated with the current session.
   Note however, that the semantics of the session will prevent stateids
   associated with a previous client or server instance from being
   analyzed by this procedure.

   If server restart has resulted in an invalid client ID or a session
   ID which is invalid, SEQUENCE will return an error and the operation
   that takes a stateid as an argument will never be processed.

   If there has been a server restart where there is a persistent
   session, and all leased state has been lost, then the session in
   question will, although valid, be marked as dead, and any operation
   not satisfied by means of the reply cache will receive the error
   NFS4ERR_DEADSESSION, and thus not be processed as indicated below.

   When a stateid is being tested, and the "other" field is all zeros or
   all ones, a check that the "other" and "seqid" fields match a defined
   combination for a special stateid is done and the results determined
   as follows:

   o  If the "other" and "seqid" fields do not match a defined
      combination associated with a special stateid, the error
      NFS4ERR_BAD_STATEID is returned.

   o  If the special stateid is one designating the current stateid, and
      there is a current stateid, then the current stateid is
      substituted for the special stateid and the checks appropriate to
      non-special stateids are performed.

   o  If the combination is valid in general but is not appropriate to
      the context in which the stateid is used (e.g. an all-zero stateid
      is used when an open stateid is required in a LOCK operation), the
      error NFS4ERR_BAD_STATEID is also returned.

   o  Otherwise, the check is completed and the special stateid is
      accepted as valid.

   When a stateid is being tested, and the "other" field is neither all
   zeros nor all ones, the following procedure could be used to validate
   an incoming stateid and return an appropriate error, when necessary,
   assuming that the "other" field would be divided into a table index
   and an entry generation.

   o  If the table index field is outside the range of the associated
      table, return NFS4ERR_BAD_STATEID.

   o  If the selected table entry is of a different generation than that
      specified in the incoming stateid, return NFS4ERR_BAD_STATEID.

   o  If the selected table entry does not match the current filehandle,
      return NFS4ERR_BAD_STATEID.

   o  If the client ID in the table entry does not match the client ID
      associated with the current session, return NFS4ERR_BAD_STATEID.

   o  If the stateid represents revoked state, then return
      NFS4ERR_EXPIRED, NFS4ERR_ADMIN_REVOKED, or NFS4ERR_DELEG_REVOKED,
      as appropriate.

   o  If the stateid type is not valid for the context in which the
      stateid appears, return NFS4ERR_BAD_STATEID.  Note that a stateid
      may be valid in general, as would be reported by the TEST_STATEID
      operation, but be invalid for a particular operation, as, for
      example, when a stateid which doesn't represent byte-range locks
      is passed to the non-from_open case of LOCK or to LOCKU, or when a
      stateid which does not represent an open is passed to CLOSE or
      OPEN_DOWNGRADE.  In such cases, the server MUST return
      NFS4ERR_BAD_STATEID.

   o  If the "seqid" field is not zero, and it is greater than the
      current sequence value corresponding to the current "other"
      field, return NFS4ERR_BAD_STATEID.

   o  If the "seqid" field is not zero, and it is less than the current
      sequence value corresponding to the current "other" field,
      return NFS4ERR_OLD_STATEID.

   o  Otherwise, the stateid is valid, and the table entry should
      contain any additional information about the type of stateid and
      information associated with that particular type of stateid, such
      as the associated set of locks, e.g. open-owner and lock-owner
      information, as well as information on the specific locks
      themselves, such as open modes and byte ranges.
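
   The following sketch shows one way these checks might be coded,
   reusing the illustrative types sketched earlier in this section and
   the seqid comparison sketched in Section 8.2.2.  The names are
   arbitrary, and the error constants merely stand in for the
   corresponding NFSv4.1 status values; a real server would also
   distinguish among the various revocation errors.

   #include <stdint.h>
   #include <string.h>

   enum nfsstat_subset {              /* numeric values elided here */
           NFS4_OK, NFS4ERR_BAD_STATEID, NFS4ERR_OLD_STATEID,
           NFS4ERR_ADMIN_REVOKED
   };

   struct server_state {
           struct state_table_entry *table;        /* see above      */
           uint32_t                  table_size;
           uint32_t                  total_slots;  /* all sessions   */
   };

   static int
   fh_equal(const struct file_handle *a, const struct file_handle *b)
   {
           return a->len == b->len &&
                  memcmp(a->data, b->data, a->len) == 0;
   }

   static enum nfsstat_subset
   check_stateid(const struct server_state *srv,
                 const struct stateid_other *other, uint32_t seqid,
                 const struct file_handle *current_fh,
                 uint64_t session_clientid, enum stateid_type wanted)
   {
           const struct state_table_entry *e;

           if (other->table_index >= srv->table_size)
                   return NFS4ERR_BAD_STATEID;
           e = &srv->table[other->table_index];
           if (e->generation != other->generation)
                   return NFS4ERR_BAD_STATEID;
           if (!fh_equal(&e->filehandle, current_fh))
                   return NFS4ERR_BAD_STATEID;
           if (e->clientid != session_clientid)
                   return NFS4ERR_BAD_STATEID;
           if (e->revoked)
                   /* or NFS4ERR_EXPIRED / NFS4ERR_DELEG_REVOKED */
                   return NFS4ERR_ADMIN_REVOKED;
           if (e->type != wanted)
                   return NFS4ERR_BAD_STATEID;
           if (seqid != 0 && seqid != e->last_seqid) {
                   if (seqid_is_lower(seqid, e->last_seqid,
                                      srv->total_slots))
                           return NFS4ERR_OLD_STATEID;
                   return NFS4ERR_BAD_STATEID;
           }
           return NFS4_OK;
   }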

8.2.5.  Stateid Use for I/O Operations

   Clients performing I/O operations need to select an appropriate
   stateid based on the locks (including opens and delegations) held by
   the client and the various types of state-owners issuing the I/O
   requests.  SETATTR operations which change the file size are treated
   like I/O operations in this regard.

   The following rules, applied in order of decreasing priority, govern
   the selection of the appropriate stateid.  In following these rules,
   the client will only consider locks of which it has actually received
   notification by an appropriate operation response or callback.  Note
   that the rules are slightly different in the case of I/O to data
   servers when file layouts are being used (see Section 13.9.1).

   o  If the client holds a delegation for the file in question, the
      delegation stateid SHOULD be used.

   o  Otherwise, if the entity corresponding to the lock-owner (e.g. a
      process) issuing the I/O has a lock stateid for the associated
      open file, then the lock stateid for that lock-owner and open
      file SHOULD be used.

   o  If there is no lock stateid, then the open stateid for the open
      file in question SHOULD be used.

   o  Finally, if none of the above apply, then a special stateid SHOULD
      be used.
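
   The following is an illustrative sketch of these rules in C.  The
   structures represent hypothetical client-side bookkeeping, not
   protocol elements, and all names are arbitrary.

   #include <stdint.h>

   typedef struct {             /* simplified rendering of stateid4 */
           uint32_t seqid;
           uint8_t  other[12];
   } stateid4;

   struct client_file_state {           /* per open file */
           int      have_delegation;
           stateid4 delegation_stateid;
           int      have_open;
           stateid4 open_stateid;
   };

   struct lock_owner_state {            /* per lock-owner, per file */
           int      have_lock_stateid;
           stateid4 lock_stateid;
   };

   static const stateid4 anonymous_stateid;  /* all-zero special value */

   static stateid4
   select_io_stateid(const struct client_file_state *f,
                     const struct lock_owner_state *lo)
   {
           if (f->have_delegation)
                   return f->delegation_stateid;
           if (lo != NULL && lo->have_lock_stateid)
                   return lo->lock_stateid;
           if (f->have_open)
                   return f->open_stateid;
           return anonymous_stateid;
   }

   As noted in Section 8.2.2, the client would then typically set the
   seqid of the chosen stateid to zero when sending READ or WRITE.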

   Ignoring these rules may result in situations in which the server
   does not have information necessary to properly process the request.
   For example, when mandatory byte-range locks are in effect, if the
   stateid does not indicate the proper lock-owner, via a lock stateid,
   a request might be avoidably rejected.

   The server, however, should not try to enforce these ordering rules
   and should use whatever information is available to properly process
   I/O requests.  In particular, when a client has a delegation for a
   given file, it SHOULD take note of this fact in processing a
   request, even if it is sent with a special stateid.

8.2.6.  Stateid Use for SETATTR Operations

   Because each operation is associated with a session ID and from that
   the client ID can be determined, operations do not need to include a
   stateid for the server to be able to determine whether they should
   cause a delegation to be recalled or are to be treated as done within
   the scope of the delegation.

   In the case of SETATTR operations, a stateid is present.  In cases
   other than those which set the file size, the client may send either
   a special stateid or, when a delegation is held for the file in
   question, a delegation stateid.  While the server SHOULD validate the
   stateid and may use the stateid to optimize the determination as to
   whether a delegation is held, it SHOULD note the presence of a
   delegation even when a special stateid is sent, and MUST accept a
   valid delegation stateid when sent.

8.3.  Lease Renewal

   Each client/server pair, as represented by a client ID, has a single
   lease.  The purpose of the lease is to allow the client to indicate
   to the server, in a low-overhead way, that it is active, and thus
   that the server is to retain the client's locks.  This arrangement
   allows the server to remove stale locking-related objects that are
   held by a client that has crashed or is otherwise unreachable, once
   the relevant lease expires.  This in turn allows other clients to
   obtain conflicting locks without being delayed indefinitely by
   inactive or unreachable clients.  It is not a mechanism for cache
   consistency and lease renewals may not be denied if the lease
   interval has not expired.

   Since each session is associated with a specific client (identified
   by the client's client ID), any operation sent on that session is an
   indication that the associated client is reachable.  When a request
   is sent for a given session, successful execution of a SEQUENCE
   operation (or successful retrieval of the result of SEQUENCE from the
   reply cache) on an unexpired lease will result in the lease being
   implicitly renewed, for the standard renewal period (equal to the
   lease_time attribute).

   If the client ID's lease has not expired when the server receives a
   SEQUENCE operation, then the server MUST renew the lease.  If the
   client ID's lease has expired when the server receives a SEQUENCE
   operation, the server MAY renew the lease; this depends on whether
   any state was revoked as a result of the client's failure to renew
   the lease before expiration.

   Absent other activity that would renew the lease, a COMPOUND
   consisting of a single SEQUENCE operation will suffice.  The client
   should also take communication-related delays into account and take
   steps to ensure that the renewal messages actually reach the server
   in good time.  For example:

   o  When trunking is in effect, the client should consider issuing
      multiple requests on different connections, in order to ensure
      that renewal occurs, even in the event of blockage in the path
      used for one of those connections.

   o  Transport retransmission delays might become so large as to
      approach or exceed the length of the lease period.  This may be
      particularly likely when the server is unresponsive due to a
      restart; see Section 8.4.2.1.  If the client implementation is not
      careful, transport retransmission delays can result in the client
      failing to detect a server restart before the grace period ends.
      The scenario is that the client is using a transport with
      exponential back off, such that the maximum retransmission timeout
      exceeds both the grace period and the lease_time attribute.  A
      network partition causes the client's connection's retransmission
      interval to back off, and even after the partition heals, the next
      transport-level retransmission is sent after the server has
      restarted and its grace period ends.

      The client MUST either recover from the ensuing NFS4ERR_NO_GRACE
      errors, or it MUST ensure that despite transport level
      retransmission intervals that exceed the lease_time, nonetheless a
      SEQUENCE operation is sent that renews the lease before
      expiration.  The client can achieve this by associating a new
      connection with the session, and sending a SEQUENCE operation on
      it.  However, if the attempt to establish a new connection is
      delayed for some reason (e.g. exponential backoff of the
      connection establishment packets), the client will have to abort
      the connection establishment attempt before the lease expires, and
      attempt to re-connect.

   If the server renews the lease upon receiving a SEQUENCE operation,
   the server MUST NOT allow the lease to expire while the rest of the
   operations in the COMPOUND procedure's request are still executing.
   Once the last operation has finished, and the response to COMPOUND
   has been sent, the server MUST set the lease to expire no sooner than
   the sum of current time and the value of the lease_time attribute.

   A client ID's lease can expire when it has been at least the lease
   interval (lease_time) since the last lease-renewing SEQUENCE
   operation was sent on any of the client ID's sessions and there are
   no active COMPOUND operations on any such sessions.

   Because the SEQUENCE operation is the basic mechanism to renew a
   lease, and because it must be done at least once for each lease
   period, it is the natural mechanism whereby the server will inform
   the client of changes in the lease status that the client needs to be
   informed of.  The client should inspect the status flags
   (sr_status_flags) returned by SEQUENCE and take the appropriate
   action (see Section 18.46.3 for details, and the sketch following
   the list below).

   o  The status bits SEQ4_STATUS_CB_PATH_DOWN and
      SEQ4_STATUS_CB_PATH_DOWN_SESSION indicate problems with the
      backchannel which the client may need to address in order to
      receive callback requests.

   o  The status bits SEQ4_STATUS_CB_GSS_CONTEXTS_EXPIRING and
      SEQ4_STATUS_CB_GSS_CONTEXTS_EXPIRED indicate problems with GSS
      contexts for the backchannel which the client may have to address
      to allow callback requests to be sent to it.

   o  The status bits SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED,
      SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED,
      SEQ4_STATUS_ADMIN_STATE_REVOKED, and
      SEQ4_STATUS_RECALLABLE_STATE_REVOKED notify the client of lock
      revocation events.  When these bits are set, the client should use
      TEST_STATEID to find what stateids have been revoked and use
      FREE_STATEID to acknowledge loss of the associated state.

   o  The status bit SEQ4_STATUS_LEASE_MOVED indicates that
      responsibility for lease renewal has been transferred to one or
      more new servers.

   o  The status bit SEQ4_STATUS_RESTART_RECLAIM_NEEDED indicates that
      due to server restart the client must reclaim locking state.

   o  The status bit SEQ4_STATUS_BACKCHANNEL_FAULT indicates the server
      has encountered an unrecoverable fault with the backchannel (e.g.
      it has lost track of a sequence ID for a slot in the backchannel).
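
   The fragment below is an illustrative sketch of such client-side
   handling.  The handler functions are hypothetical placeholders; the
   SEQ4_STATUS bit values are those defined by the protocol (see
   Section 18.46).

   #include <stdint.h>

   #define SEQ4_STATUS_CB_PATH_DOWN               0x00000001
   #define SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED  0x00000008
   #define SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED 0x00000010
   #define SEQ4_STATUS_ADMIN_STATE_REVOKED        0x00000020
   #define SEQ4_STATUS_RECALLABLE_STATE_REVOKED   0x00000040
   #define SEQ4_STATUS_LEASE_MOVED                0x00000080
   #define SEQ4_STATUS_RESTART_RECLAIM_NEEDED     0x00000100
   #define SEQ4_STATUS_BACKCHANNEL_FAULT          0x00000400

   /* Hypothetical client routines. */
   void repair_backchannel(void);          /* e.g. BIND_CONN_TO_SESSION */
   void test_and_free_revoked_state(void); /* TEST_STATEID/FREE_STATEID */
   void handle_lease_moved(void);
   void reclaim_after_restart(void);       /* see Section 8.4.2.1       */

   /* "flags" is the sr_status_flags value returned by SEQUENCE. */
   void
   process_sequence_status(uint32_t flags)
   {
           if (flags & (SEQ4_STATUS_CB_PATH_DOWN |
                        SEQ4_STATUS_BACKCHANNEL_FAULT))
                   repair_backchannel();
           if (flags & (SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED |
                        SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED |
                        SEQ4_STATUS_ADMIN_STATE_REVOKED |
                        SEQ4_STATUS_RECALLABLE_STATE_REVOKED))
                   test_and_free_revoked_state();
           if (flags & SEQ4_STATUS_LEASE_MOVED)
                   handle_lease_moved();
           if (flags & SEQ4_STATUS_RESTART_RECLAIM_NEEDED)
                   reclaim_after_restart();
   }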

8.4.  Crash Recovery

   A critical requirement in crash recovery is that both the client and
   the server know when the other has failed.  Additionally, it is
   required that a client sees a consistent view of data across server
   restarts.  All READ and WRITE operations that may have been queued
   within the client or network buffers must wait until the client has
   successfully recovered the locks protecting the READ and WRITE
   operations.  Any that reach the server before the server can safely
   determine that the client has recovered enough locking state to be
   sure that such operations can be safely processed must be rejected.
   This will happen because either:

   o  The state presented is no longer valid since it is associated with
      a now invalid client ID.  In this case the client will receive
      either an NFS4ERR_BADSESSION or NFS4ERR_DEADSESSION error, and any
      attempt to attach a new session to that invalid client ID will
      result in an NFS4ERR_STALE_CLIENTID error.

   o  Subsequent recovery of locks may make execution of the operation
      inappropriate (NFS4ERR_GRACE).

8.4.1.  Client Failure and Recovery

   In the event that a client fails, the server may release the client's
   locks when the associated lease has expired.  Conflicting locks from
   another client may only be granted after this lease expiration.  As
   discussed in Section 8.3, when a client has not failed and re-
   establishes its lease before expiration occurs, requests for
   conflicting locks will not be granted.

   To minimize client delay upon restart, lock requests are associated
   with an instance of the client by a client-supplied verifier.  This
   verifier is part of the client_owner4 sent in the initial EXCHANGE_ID
   call made by the client.  The server returns a client ID as a result
   of the EXCHANGE_ID operation.  The client then confirms the use of
   the client ID by establishing a session associated with that client
   ID (see Section 18.36.3 for a description how this is done).  All
   locks, including opens, byte-range locks, delegations, and layouts
   obtained by sessions using that client ID are associated with that
   client ID.

   Since the verifier will be changed by the client upon each
   initialization, the server can compare a new verifier to the verifier
   associated with currently held locks and determine that they do not
   match.  This signifies the client's new instantiation and subsequent
   loss (upon confirmation of the new client ID) of locking state.  As a
   result, the server is free to release all locks held which are
   associated with the old client ID which was derived from the old
   verifier.  At this point conflicting locks from other clients, kept
   waiting while the lease had not yet expired, can be granted.  In
   addition, all stateids associated with the old client ID can also be
   freed, as they are no longer reference-able.

   Note that the verifier must have the same uniqueness properties as
   the verifier for the COMMIT operation.
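
   The following is an illustrative sketch only.  It shows one common
   way a client might construct such a verifier, by using its boot
   time; any value that changes on each client instantiation and has
   the required uniqueness properties would serve.  The structure is a
   simplified C rendering of client_owner4.

   #include <stdint.h>
   #include <string.h>
   #include <time.h>

   #define NFS4_VERIFIER_SIZE 8

   struct client_owner {
           uint8_t     co_verifier[NFS4_VERIFIER_SIZE];
           const char *co_ownerid;   /* stable per-client identifier */
   };

   static void
   make_client_owner(struct client_owner *co, const char *ownerid)
   {
           /* Sampled once, when the client instance starts. */
           uint64_t boot_time = (uint64_t)time(NULL);

           memcpy(co->co_verifier, &boot_time, sizeof(boot_time));
           co->co_ownerid = ownerid; /* e.g. host name plus an
                                        implementation identifier */
   }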

8.4.2.  Server Failure and Recovery

   If the server loses locking state (usually as a result of a restart),
   it must allow clients time to discover this fact and re-establish the
   lost locking state.  The client must be able to re-establish the
   locking state without having the server deny valid requests because
   the server has granted conflicting access to another client.
   Likewise, if there is a possibility that clients have not yet re-
   established their locking state for a file, and that such locking
   state might make it invalid to perform READ or WRITE operations, for
   example through the establishment of mandatory locks, the server must
   disallow READ and WRITE operations for that file.

   A client can determine that loss of locking state has occurred via
   several methods.

   1.  When a SEQUENCE (most common) or other operation returns
       NFS4ERR_BADSESSION, this may mean the session has been destroyed,
       but the client ID is still valid.  The client sends a
       CREATE_SESSION request with the client ID to re-establish the
       session.  If CREATE_SESSION fails with NFS4ERR_STALE_CLIENTID,
       the client must establish a new client ID (see Section 8.1) and
       re-establish its lock state with the new client ID, after the
       CREATE_SESSION operation succeeds (see Section 8.4.2.1).

   2.  When a SEQUENCE (most common) or other operation on a persistent
       session returns NFS4ERR_DEADSESSION, this indicates that a
       session is no longer usable for new operations, i.e. those not
       satisfied from the reply cache.  Once all pending operations are
       determined to be either performed before the retry or not
       performed, the client sends a CREATE_SESSION request with the
       client ID to re-establish the session.  If CREATE_SESSION fails
       with NFS4ERR_STALE_CLIENTID, the client must establish a new
       client ID (see Section 8.1) and re-establish its lock state
       with the new client ID after the CREATE_SESSION operation
       succeeds (Section 8.4.2.1).

   3.  When an operation that is neither SEQUENCE nor preceded by
       SEQUENCE (for example, CREATE_SESSION or DESTROY_SESSION)
       returns NFS4ERR_STALE_CLIENTID, the client MUST establish a new
       client ID (Section 8.1) and re-establish its lock state
       (Section 8.4.2.1).
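
   The following is an illustrative sketch of the overall client
   recovery flow implied by the cases above and by Section 8.4.2.1.
   The rpc_* helpers, the structures, and the simplified error handling
   are hypothetical, not a prescribed implementation.

   struct nfs_open;

   struct nfs_client {
           struct nfs_open *open_list;      /* opens to reclaim */
           /* ... client ID, sessions, verifier, ... */
   };

   struct nfs_open {
           struct nfs_open *next;
           /* ... per-open reclaim information ... */
   };

   /* Hypothetical wrappers around the corresponding operations. */
   int  rpc_exchange_id(struct nfs_client *);
   int  rpc_create_session(struct nfs_client *);
   int  rpc_open_reclaim(struct nfs_client *, struct nfs_open *);
   int  rpc_reclaim_complete(struct nfs_client *, int rca_one_fs);
   void mark_open_lost(struct nfs_open *);

   /* Stand-ins for the protocol's status values. */
   enum { NFS4_OK, NFS4ERR_STALE_CLIENTID, NFS4ERR_NO_GRACE };

   static int
   recover_after_server_restart(struct nfs_client *clp)
   {
           struct nfs_open *op;
           int status;

           /* Cases 1 and 2: re-establish a session on the existing
            * client ID; if that client ID is stale (case 3), obtain a
            * new one with EXCHANGE_ID first. */
           status = rpc_create_session(clp);
           if (status == NFS4ERR_STALE_CLIENTID) {
                   status = rpc_exchange_id(clp);
                   if (status == NFS4_OK)
                           status = rpc_create_session(clp);
           }
           if (status != NFS4_OK)
                   return status;

           /* Reclaim opens with CLAIM_PREVIOUS inside the grace
            * period; byte-range locks are reclaimed analogously with
            * LOCK (reclaim = TRUE). */
           for (op = clp->open_list; op != NULL; op = op->next) {
                   status = rpc_open_reclaim(clp, op);
                   if (status == NFS4ERR_NO_GRACE)
                           mark_open_lost(op);  /* cannot be reclaimed */
           }

           /* Tell the server reclaiming is finished: a global
            * RECLAIM_COMPLETE, i.e. rca_one_fs = FALSE. */
           return rpc_reclaim_complete(clp, 0);
   }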

8.4.2.1.  State Reclaim

   When state information and the associated locks are lost as a result
   of a server restart, the protocol must provide a way to cause that
   state to be re-established.  The approach used is to define, for most
   types of locking state (layouts are an exception), a request whose
   function is to allow the client to re-establish on the server a lock
   first obtained from a previous instance.  Generally these requests
   are variants of the requests normally used to create locks of that
   type and are referred to as "reclaim-type" requests, and the process
   of re-establishing such locks is referred to as "reclaiming" them.

   Because each client must have an opportunity to reclaim all of the
   locks that it has without the possibility that some other client will
   be granted a conflicting lock, a special period called the "grace
   period" is devoted to the reclaim process.  During this period,
   requests creating client IDs and sessions are handled normally, but
   locking requests are subject to special restrictions.  Only reclaim-
   type locking requests are allowed, unless the server can reliably
   determine (through state persistently maintained across restart
   instances), that granting any such lock cannot possibly conflict with
   a subsequent reclaim.  When a request is made to obtain a new lock
   (i.e. not a reclaim-type request) during the grace period and such a
   determination cannot be made, the server must return the error
   NFS4ERR_GRACE.

   Once a session is established using the new client ID, the client
   will use reclaim-type locking requests (e.g.  LOCK requests with
   reclaim set to TRUE and OPEN operations with a claim type of
   CLAIM_PREVIOUS; see Section 9.11) to re-establish its locking state.
   Once this is done, or if there is no such locking state to reclaim,
   the client sends a global RECLAIM_COMPLETE operation, i.e. one with
   the rca_one_fs argument set to FALSE, to indicate that it has
   reclaimed all of the locking state that it will reclaim.  Once a
   client sends such a RECLAIM_COMPLETE operation, it may attempt non-
   reclaim locking operations, although it may get NFS4ERR_GRACE errors
   on those operations until the period of special handling is over.
   See Section 11.7.7 for a discussion of the analogous handling of
   lock reclamation in the case of file systems transitioning from
   server to server.

   During the grace period, the server must reject READ and WRITE
   operations and non-reclaim locking requests (i.e. other LOCK and OPEN
   operations) with an error of NFS4ERR_GRACE, unless it can guarantee
   that these may be done safely, as described below.

   The grace period may last until all clients which are known to
   possibly have had locks have done a global RECLAIM_COMPLETE
   operation, indicating that they have finished reclaiming the locks
   they held before the server restart.  This means that a client which
   has done a RECLAIM_COMPLETE must be prepared to receive an
   NFS4ERR_GRACE when attempting to acquire new locks.  In order for the
   server to know that all clients with possible prior lock state have
   done a RECLAIM_COMPLETE, the server must maintain in stable storage a
   list of clients which may have such locks.  The server may also
   terminate the grace period before all clients have done a global
   RECLAIM_COMPLETE.  The server SHOULD NOT terminate the grace period
   before a time equal to the lease period in order to give clients an
   opportunity to find out about the server restart, as a result of
   issuing requests on associated sessions with a frequency governed by
   the lease time.  Note that when a client does not issue such requests
   (or they are issued by the client but not received by the server), it
   is possible for the grace period to expire before the client finds
   out that the server restart has occurred.

   Some additional time in order to allow a client to establish a new
   client ID and session and to effect lock reclaims may be added to the
   lease time.  Note that analogous rules apply to file system-specific
   grace periods discussed in Section 11.7.7.

   If the server can reliably determine that granting a non-reclaim
   request will not conflict with reclamation of locks by other clients,
   the NFS4ERR_GRACE error does not have to be returned even within the
   grace period, although NFS4ERR_GRACE must always be returned to
   clients attempting a non-reclaim lock request before doing their own
   global RECLAIM_COMPLETE.  For the server to be able to service READ
   and WRITE operations during the grace period, it must again be able
   to guarantee that no possible conflict could arise between a
   potential reclaim locking request and the READ or WRITE operation.
   If the server is unable to offer that guarantee, the NFS4ERR_GRACE
   error must be returned to the client.

   For a server to provide simple, valid handling during the grace
   period, the easiest method is to simply reject all non-reclaim
   locking requests and READ and WRITE operations by returning the
   NFS4ERR_GRACE error.  However, a server may keep information about
   granted locks in stable storage.  With this information, the server
   could determine if a regular lock or READ or WRITE operation can be
   safely processed.

   For example, if the server maintained on stable storage summary
   information on whether mandatory locks exist, either mandatory byte-
   range locks, or share reservations specifying deny modes, many
   requests could be allowed during the grace period.  If it is known
   that no such share reservations exist, OPEN requests that do not
   specify deny modes may be safely granted.  If, in addition, it is
   known that no mandatory byte-range locks exist, either through
   information stored on stable storage or simply because the server
   does not support such locks, READ and WRITE requests may be safely
   processed during the grace period.  Another important case is where
   it is known that no mandatory byte-range locks exist, either because
   the server does not provide support for them, or because their
   absence is known from persistently recorded data.  In this case, READ
   and WRITE operations specifying stateids derived from reclaim-type
   operations may be validly processed during the grace period because
   the fact of the valid reclaim ensures that no lock subsequently
   granted can prevent the I/O.

   To reiterate, for a server that allows non-reclaim lock and I/O
   requests to be processed during the grace period, it MUST determine
   that no lock subsequently reclaimed will be rejected and that no lock
   subsequently reclaimed would have prevented any I/O operation
   processed during the grace period.

   Clients should be prepared for the return of NFS4ERR_GRACE errors for
   non-reclaim lock and I/O requests.  In this case the client should
   employ a retry mechanism for the request.  A delay (on the order of
   several seconds) between retries should be used to avoid overwhelming
   the server.  Further discussion of the general issue is included in
   [46].  The client must account for the server that can perform I/O
   and non-reclaim locking requests within the grace period as well as
   those that cannot do so.

   A reclaim-type locking request outside the server's grace period can
   only succeed if the server can guarantee that no conflicting lock or
   I/O request has been granted since restart.

   A server may, upon restart, establish a new value for the lease
   period.  Therefore, clients should, once a new client ID is
   established, refetch the lease_time attribute and use it as the basis
   for lease renewal for the lease associated with that server.
   However, the server must establish, for this restart event, a grace
   period at least as long as the lease period for the previous server
   instantiation.  This allows the client state obtained during the
   previous server instance to be reliably re-established.

   The possibility exists that, because of server configuration
   events, the client will be communicating with a server different
   from the one on which the locks were obtained, as shown by the
   combination of eir_server_scope and eir_server_owner.  This leads to
   the issue of whether and when the client should attempt to reclaim
   locks previously obtained on what is being reported as a different
   server.  The rules to resolve this question are as follows:

   o  If the server scope is different the client should not attempt to
      reclaim locks.  In this situation no lock reclaim is possible.
      Any attempt to re-obtain the locks with non-reclaim operations is
      problematic since there is no guarantee that the existing
      filehandles will be recognized by the new server, or that if
      recognized, they denote the same objects.  It is best to treat the
      locks as having been revoked by the reconfiguration event.

   o  If the server scope is the same, the client should attempt to
      reclaim locks, even if the eir_server_owner value is different.
      In this situation, it is the responsibility of the server to
      return NFS4ERR_NO_GRACE if it cannot provide correct support for
      lock reclaim operations, including the prevention of edge
      conditions.

   The eir_server_owner field is not used in making this determination.
   Its function is to specify trunking possibilities for the client (see
   Section 2.10.5) and not to control lock reclaim.

8.4.2.1.1.  Security Considerations for State Reclaim

   During the grace period, a client can reclaim state it believes or
   asserts it had before the server restarted.  Unless the server
   maintained a complete record of all the state the client had, the
   server has little choice but to trust the client.  (Of course if the
   server maintained a complete record, then it would not have to force
   the client to reclaim state after server restart.)  While the server
   has to trust the client to tell the truth, such trust does not have
   any negative consequences for security.  The fundamental rule for the
   server when processing reclaim requests is that it MUST NOT grant the
   reclaim if an equivalent non-reclaim request would not be granted
   during steady-state due to access control or access conflict issues.
   For example an OPEN request during a reclaim will be refused with
   NFS4ERR_ACCESS if the principal making the request does not have
   access to open the file according to the discretionary ACL
   (Section 6.2.2) on the file.

   Nonetheless, it is possible that a client operating in error or
   maliciously could, during reclaim, prevent another client from
   reclaiming access to state.  For example, an attacker could send an
   OPEN reclaim operation with a deny mode that prevents another client
   from reclaiming the open state it had before the server restarted.
   The attacker could perform the same denial of service during steady
   state prior to server restart, as long as the attacker had
   permissions.  Given that the attack vectors are equivalent, the
   grace period does not offer any additional opportunity for denial of
   service, and any concerns about this attack vector, whether during
   the grace period or steady state, are addressed the same way: use
   RPCSEC_GSS for authentication, and limit access to the file only to
   principals the owner of the file trusts.

   Note that if prior to restart the server had client IDs with the
   EXCHGID4_FLAG_BIND_PRINC_STATEID (Section 18.35) capability set, then
   the server SHOULD record in stable storage the client owner and the
   principal that established the client ID via EXCHANGE_ID.  If the
   server does not, then there is a risk a client will be unable to
   reclaim state if it does not have a credential for a principal that
   was originally authorized to establish the state.

8.4.3.  Network Partitions and Recovery

   If the duration of a network partition is greater than the lease
   period provided by the server, the server will not have received a
   lease renewal from the client.  If this occurs, the server may free
   all locks held for the client, or it may allow the lock state to
   remain for a considerable period, subject to the constraint that if a
   request for a conflicting lock is made, locks associated with an
   expired lease do not prevent such a conflicting lock from being
   granted but MUST be revoked as necessary so as not to interfere with
   such conflicting requests.

   If the server chooses to delay freeing of lock state until there is
   a conflict, it may either free all of the client's locks once there
   is a conflict, or it may only revoke the minimum set of locks
   necessary to allow conflicting requests.  When it adopts the finer-
   grained approach, it must revoke all locks associated with a given
   stateid, even if the conflict is with only a subset of locks.

   When the server chooses to free all of a client's lock state, either
   immediately upon lease expiration or as a result of the first
   attempt to obtain a conflicting lock, the server may report the loss
   of lock state in a number of ways.

   The server may choose to invalidate the session and the associated
   client ID.  In this case, once the client can communicate with the
   server, it will receive an NFS4ERR_BADSESSION error.  Upon attempting
   to create a new session, it would get an NFS4ERR_STALE_CLIENTID.
   Upon creating the new client ID and new session, it would attempt to
   reclaim locks and not be allowed to do so by the server.

   Another possibility is for the server to maintain the session and
   client ID but for all stateids held by the client to become invalid
   or stale.  Once the client can reach the server after such a network
   partition, the status returned by the SEQUENCE operation will
   indicate a loss of locking state, i.e. the flag
   SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED will be set in sr_status_flags.
   In addition, all I/O submitted by the client with the now invalid
   stateids will fail with the server returning the error
   NFS4ERR_EXPIRED.  Once the client learns of the loss of locking
   state, it will suitably notify the applications that held the
   invalidated locks.  The client should then take action to free
   invalidated stateids, either by establishing a new client ID using a
   new verifier or by doing a FREE_STATEID operation to release each of
   the invalidated stateids.

   When the server adopts a finer-grained approach to revocation of
   locks when a client's lease has expired, only a subset of stateids
   will normally become invalid during a network partition.  When the
   client can communicate with the server after such a network partition
   heals, the status returned by the SEQUENCE operation will indicate a
   partial loss of locking state
   (SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED).  In addition, operations,
   including I/O submitted by the client, with the now invalid stateids
   will fail with the server returning the error NFS4ERR_EXPIRED.  Once
   the client learns of the loss of locking state, it will use the
   TEST_STATEID operation on all of its stateids to determine which
   locks have been lost and then suitably notify the applications that
   held the invalidated locks.  The client can then release the
   invalidated locking state and acknowledge the revocation of the
   associated locks by doing a FREE_STATEID operation on each of the
   invalidated stateids.

   When a network partition is combined with a server restart, there are
   edge conditions that place requirements on the server in order to
   avoid silent data corruption following the server restart.  Two of
   these edge conditions are known, and are discussed below.

   The first edge condition arises as a result of scenarios such as the
   following:

   1.  Client A acquires a lock.

   2.  Client A and server experience mutual network partition, such
       that client A is unable to renew its lease.

   3.  Client A's lease expires, and the server releases the lock.

   4.  Client B acquires a lock that would have conflicted with that of
       Client A.

   5.  Client B releases its lock.

   6.  Server restarts.

   7.  Network partition between client A and server heals.

   8.  Client A connects to new server instance and finds out about
       server restart.

   9.  Client A reclaims its lock within the server's grace period.

   Thus, at the final step, the server has erroneously granted client
   A's lock reclaim.  If client B modified the object the lock was
   protecting, client A will experience object corruption.

   The second known edge condition arises in situations such as the
   following:

   1.   Client A acquires one or more locks.

   2.   Server restarts.

   3.   Client A and server experience mutual network partition, such
        that client A is unable to reclaim all of its locks within the
        grace period.

   4.   Server's reclaim grace period ends.  Client A has either no
        locks or an incomplete set of locks known to the server.

   5.   Client B acquires a lock that would have conflicted with a lock
        of client A that was not reclaimed.

   6.   Client B releases the lock.

   7.   Server restarts a second time.

   8.   Network partition between client A and server heals.

   9.   Client A connects to new server instance and finds out about
        server restart.

   10.  Client A reclaims its lock within the server's grace period.

   As with the first edge condition, the final step of the scenario of
   the second edge condition has the server erroneously granting client
   A's lock reclaim.

   Solving the first and second edge conditions requires either that
   the server always assume after it restarts that some edge condition
   occurs, and thus return NFS4ERR_NO_GRACE for all reclaim attempts,
   or that the server record some information in stable storage.  The
   amount of information the server records in stable storage is in
   inverse proportion to how harsh the server intends to be whenever
   edge conditions arise.  A server that is completely tolerant of all
   edge conditions will record in stable storage every lock that is
   acquired, removing the lock record from stable storage only when the
   lock is released.  For the two edge conditions discussed above, the
   harshest a server can be, and still support a grace period for
   reclaims, requires only that the server record some minimal
   information in stable storage.  For example, a server implementation
   could, for each client, save in stable storage a record containing
   the following (a sketch of such a record appears after this list):

   o  the co_ownerid field from the client_owner4 presented in the
      EXCHANGE_ID operation.

   o  a boolean that indicates if the client's lease expired or if there
      was administrative intervention (see Section 8.5) to revoke a
      byte-range lock, share reservation, or delegation and there has
      been no acknowledgement, via FREE_STATEID, of such revocation.

   o  a boolean that indicates whether the client may have locks that
      it believes to be reclaimable in situations in which the grace
      period was terminated, making the server's view of lock
      reclaimability suspect.  The server will set this for any client
      record in stable storage where the client has not done a suitable
      RECLAIM_COMPLETE (global or file system-specific depending on the
      target of the lock request) before it grants any new (i.e. not
      reclaimed) lock to any client.
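
   A purely illustrative C rendering of such a record might look like
   the following; the field names and sizes are arbitrary.

   #include <stdint.h>

   #define CO_OWNERID_MAX 1024               /* NFS4_OPAQUE_LIMIT */

   struct client_reclaim_record {
           uint32_t co_ownerid_len;             /* co_ownerid from the */
           uint8_t  co_ownerid[CO_OWNERID_MAX]; /* EXCHANGE_ID request */

           uint8_t  lease_expired_or_revoked;   /* unacknowledged
                                                   revocation; see
                                                   Section 8.5 */

           uint8_t  reclaim_suspect;            /* grace period ended
                                                   before a suitable
                                                   RECLAIM_COMPLETE */
   };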

   Assuming the above record keeping, for the first edge condition,
   after the server restarts, the record that client A's lease expired
   means that another client could have acquired a conflicting byte-
   range lock, share reservation, or delegation.  Hence the server must
   reject a reclaim from client A with the error NFS4ERR_NO_GRACE.

   For the second edge condition, after the server restarts for a second
   time, the indication that the client had not completed its reclaims
   at the time at which the grace period ended means that the server
   must reject a reclaim from client A with the error NFS4ERR_NO_GRACE.

   When either edge condition occurs, the client's attempt to reclaim
   locks will result in the error NFS4ERR_NO_GRACE.  When this is
   received, or after the client restarts with no lock state, the client
   will send a global RECLAIM_COMPLETE.  When the RECLAIM_COMPLETE is
   received, the server and client are again in agreement regarding
   reclaimable locks and both booleans in persistent storage can be
   reset, to be set again only when there is a subsequent event that
   causes lock reclaim operations to be questionable.

   Regardless of the level and approach to record keeping, the server
   MUST implement one of the following strategies (which apply to
   reclaims of share reservations, byte-range locks, and delegations):

   1.  Reject all reclaims with NFS4ERR_NO_GRACE.  This is extremely
       unforgiving, but necessary if the server does not record lock
       state in stable storage.

   2.  Record sufficient state in stable storage such that all known
       edge conditions involving server restart, including the two noted
       in this section, are detected.  It is acceptable to erroneously
       recognize an edge condition and not allow a reclaim, when, with
       sufficient knowledge it would be allowed.  The error the server
       would return in this case is NFS4ERR_NO_GRACE.  Note it is not
       known if there are other edge conditions.

       In the event that, after a server restart, the server determines
       that there is unrecoverable damage or corruption to the
       information in stable storage, then for all clients and/or locks
       which may be affected, the server MUST return NFS4ERR_NO_GRACE.

   A mandate for the client's handling of the NFS4ERR_NO_GRACE error is
   outside the scope of this specification, since the strategies for
   such handling are very dependent on the client's operating
   environment.  However, one potential approach is described below.

   When the client receives NFS4ERR_NO_GRACE, it could examine the
   change attribute of the objects the client is trying to reclaim state
   for, and use that to determine whether to re-establish the state via
   normal OPEN or LOCK requests.  This is acceptable provided the
   client's operating environment allows it.  In other words, the
   client implementor is advised to document this behavior for users.
   The client could also inform the application that its byte-range
   lock or share reservations (whether they were delegated or not) have
   been lost, such as via a UNIX signal, a GUI pop-up window, etc.  See
   Section 10.5 for a discussion of what the client should do to deal
   with unreclaimed delegations on client state.

   For further discussion of revocation of locks see Section 8.5.

8.5.  Server Revocation of Locks

   At any point, the server can revoke locks held by a client and the
   client must be prepared for this event.  When the client detects that
   its locks have been or may have been revoked, the client is
   responsible for validating the state information between itself and
   the server.  Validating locking state for the client means that it
   must verify or reclaim state for each lock currently held.

   The first occasion of lock revocation is upon server restart.  Note
   that this includes situations in which sessions are persistent and
   locking state is lost.  In this class of instances, the client will
   receive an error (NFS4ERR_STALE_CLIENTID) on an operation that takes
   a client ID (usually as part of recovery in response to a problem
   with the current session), and the client will proceed with normal
   crash recovery as described in Section 8.4.2.1.

   The second occasion of lock revocation is the inability to renew the
   lease before expiration, as discussed in Section 8.4.3.  While this
   is considered a rare or unusual event, the client must be prepared to
   recover.  The server is responsible for determining the precise
   consequences of the lease expiration, informing the client of the
   scope of the lock revocation decided upon.  The client then uses the
   status information provided by the server in the SEQUENCE results
   (field sr_status_flags, see Section 18.46.3) to synchronize its
   locking state with that of the server, in order to recover.

   The third occasion of lock revocation can occur as a result of
   revocation of locks within the lease period, either because of
   administrative intervention, or because a recallable lock (a
   delegation or layout) was not returned within the lease period after
   having been recalled.  While these are considered rare events, they
   are possible and the client must be prepared to deal with them.  When
   either of these events occurs, the client finds out about the
   situation through the status returned by the SEQUENCE operation.  Any
   use of stateids associated with locks revoked during the lease period
   will receive the error NFS4ERR_ADMIN_REVOKED or
   NFS4ERR_DELEG_REVOKED, as appropriate.

   In all situations in which a subset of locking state may have been
   revoked, which include all cases in which locking state is revoked
   within the lease period, it is up to the client to determine which
   locks have been revoked and which have not.  It does this by using
   the TEST_STATEID operation on the appropriate set of stateids.  Once
   the set of revoked locks has been determined, the applications can be
   notified, and the invalidated stateids can be freed and lock
   revocation acknowledged by using FREE_STATEID.

8.6.  Short and Long Leases

   When determining the time period for the server lease, the usual
   lease tradeoffs apply.  Short leases are good for fast server
   recovery at a cost of increased operations to effect lease renewal
   (when there are no other operations during the period to effect lease
   renewal as a side-effect).  Long leases are certainly kinder and
   gentler to servers trying to handle very large numbers of clients.
   The number of extra requests to effect lease renewal drops in
   inverse proportion to the lease time.  The disadvantages of long
   leases include the possibility of slower recovery after certain
   failures.
   After server failure, a longer grace period may be required when some
   clients do not promptly reclaim their locks and do a global
   RECLAIM_COMPLETE.  In the event of client failure, there can be a
   longer period for leases to expire thus forcing conflicting requests
   to wait.

   Long leases are practical if the server can store lease state in non-
   volatile memory.  Upon recovery, the server can reconstruct the lease
   state from its non-volatile memory and continue operation with its
   clients and therefore long leases would not be an issue.

8.7.  Clocks, Propagation Delay, and Calculating Lease Expiration

   To avoid the need for synchronized clocks, lease times are granted by
   the server as a time delta.  However, there is a requirement that the
   client and server clocks do not drift excessively over the duration
   of the lease.  There is also the issue of propagation delay across
   the network which could easily be several hundred milliseconds as
   well as the possibility that requests will be lost and need to be
   retransmitted.

   To take propagation delay into account, the client should subtract it
   from lease times (e.g. if the client estimates the one-way
   propagation delay as 200 milliseconds, then it can assume that the
   lease is already 200 milliseconds old when it gets it).  In addition,
   it will take another 200 milliseconds to get a response back to the
   server.  So the client must send a lease renewal or write data back
   to the server at least 400 milliseconds before the lease would
   expire.  If the propagation delay varies over the life of the lease
   (e.g. the client is on a mobile host), the client will need to
   continuously subtract the increase in propagation delay from the
   lease times.
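
   The arithmetic can be illustrated by the following sketch, in which
   the names are arbitrary and all times are kept in milliseconds.

   #include <stdint.h>

   /*
    * Latest local time at which a lease-renewing SEQUENCE must be
    * sent.  "granted_at" is the client's local time when the reply
    * granting (or last renewing) the lease was received; the lease was
    * already about one_way_delay_ms old at that point, and the renewal
    * needs another one_way_delay_ms to reach the server.
    */
   static uint64_t
   renewal_deadline_ms(uint64_t granted_at, uint64_t lease_time_ms,
                       uint64_t one_way_delay_ms)
   {
           return granted_at + lease_time_ms - 2 * one_way_delay_ms;
   }

   With the 200 millisecond estimate used above, this places the
   renewal at least 400 milliseconds before the nominal expiration of
   the lease.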

   The server's lease period configuration should take into account the
   network distance of the clients that will be accessing the server's
   resources.  It is expected that the lease period will take into
   account the network propagation delays and other network delay
   factors for the client population.  Since the protocol does not allow
   for an automatic method to determine an appropriate lease period, the
   server's administrator may have to tune the lease period.

8.8.  Obsolete Locking Infrastructure From NFSv4.0

   There are a number of operations and fields within existing
   operations that no longer have a function in NFSv4.1.  In one way or
   another, these changes are all due to the implementation of sessions
   which provides client context and exactly once semantics as a base
   feature of the protocol, separate from locking itself.

   The following NFSv4.0 operations MUST NOT be implemented in NFSv4.1.
   The server MUST return NFS4ERR_NOTSUPP if these operations are found
   in an NFSv4.1 COMPOUND.

   o  SETCLIENTID since its function has been replaced by EXCHANGE_ID.

   o  SETCLIENTID_CONFIRM since client ID confirmation now happens by
      means of CREATE_SESSION.

   o  OPEN_CONFIRM because state-owner-based seqids have been replaced
      by the sequence ID in the SEQUENCE operation.

   o  RELEASE_LOCKOWNER because lock-owners with no associated locks do
      not have any sequence-related state and so can be deleted by the
      server at will.

   o  RENEW because every SEQUENCE operation for a session causes lease
      renewal, making a separate operation superfluous.

   Also, there are a number of fields, present in existing operations
   related to locking, that have no use in minor version one.  They were
   used in minor version zero to perform functions now provided in a
   different fashion.

   o  Sequence ids used to sequence requests for a given state-owner and
      to provide retry protection, now provided via sessions.

   o  Client IDs used to identify the client associated with a given
      request.  Client identification is now available using the client
      ID associated with the current session, without needing an
      explicit client ID field.

   Such vestigial fields in existing operations have no function in
   NFSv4.1 and are ignored by the server.  Note that client IDs in
   operations new to NFSv4.1 (such as CREATE_SESSION and
   DESTROY_CLIENTID) are not ignored.

9.  File Locking and Share Reservations

   To support Win32 share reservations it is necessary to provide
   operations which atomically open or create files.  Having a separate
   share/unshare operation would not allow correct implementation of the
   Win32 OpenFile API.  In order to correctly implement share semantics,
   the previous NFS protocol mechanisms used when a file is opened or
   created (LOOKUP, CREATE, ACCESS) need to be replaced.  The NFSv4.1
   protocol defines an OPEN operation which is capable of atomically
   looking up, creating, and locking a file on the server.

9.1.  Opens and Byte-Range Locks

   It is assumed that manipulating a byte-range lock is rare when
   compared to READ and WRITE operations.  It is also assumed that
   server restarts and network partitions are relatively rare.
   Therefore it is important that the READ and WRITE operations have a
   lightweight mechanism to indicate if they possess a held lock.  A
   byte-range lock request contains the heavyweight information required
   to establish a lock and uniquely define the owner of the lock.

9.1.1.  State-owner Definition

   When opening a file or requesting a byte-range lock, the client must
   specify an identifier which represents the owner of the requested
   lock.  This identifier is in the form of a state-owner, represented
   in the protocol by a state_owner4, a variable-length opaque array
   which, when concatenated with the current client ID, uniquely defines
   the owner of a lock managed by the client.  This may be a thread ID,
   process ID, or other unique value.

   Owners of opens and owners of byte-range locks are separate entities
   and remain separate even if the same opaque arrays are used to
   designate owners of each.  The protocol distinguishes between open-
   owners (represented by open_owner4 structures) and lock-owners
   (represented by lock_owner4 structures).

   Each open is associated with a specific open-owner while each byte-
   range lock is associated with a lock-owner and an open-owner, the
   latter being the open-owner associated with the open file under which
   the LOCK operation was done.  Delegations and layouts, on the other
   hand, are not associated with a specific owner but are associated
   with the client as a whole (identified by a client ID).

9.1.2.  Use of the Stateid and Locking

   All READ, WRITE and SETATTR operations contain a stateid.  For the
   purposes of this section, SETATTR operations which change the size
   attribute of a file are treated as if they are writing the area
   between the old and new size (i.e. the range truncated or added to
   the file by means of the SETATTR), even where SETATTR is not
   explicitly mentioned in the text.  The stateid passed to one of these
   operations must be one that represents an open, a set of byte-range
   locks, or a delegation, or it may be a special stateid representing
   anonymous access or the special bypass stateid.

   If the state-owner performs a READ or WRITE in a situation in which
   it has established a byte-range lock or share reservation on the
   server (any OPEN constitutes a share reservation), the stateid
   (previously returned by the server) must be used to indicate what
   locks, including both byte-range locks and share reservations, are
   held by the state-owner.  If no state is established by the client,
   either byte-range lock or share reservation, a special stateid for
   anonymous state (zero as "other" and "seqid") is used.  (See
   Section 8.2.3 for a description of 'special' stateids in general.)
   Regardless of whether a stateid for anonymous state or a stateid
   returned by the server is used, if there is a conflicting share
   reservation or mandatory byte-range lock held on the file, the server
   MUST refuse to service the READ or WRITE operation.

   Share reservations are established by OPEN operations and by their
   nature are mandatory in that when the OPEN denies READ or WRITE
   operations, that denial results in such operations being rejected
   with error NFS4ERR_LOCKED.  Byte-range locks may be implemented by
   the server as either mandatory or advisory, or the choice of
   mandatory or advisory behavior may be determined by the server on the
   basis of the file being accessed (for example, some UNIX-based
   servers support a "mandatory lock bit" on the mode attribute such
   that if set, byte-range locks are required on the file before I/O is
   possible).  When byte-range locks are advisory, they only prevent the
   granting of conflicting lock requests and have no effect on READs or
   WRITEs.  Mandatory byte-range locks, however, prevent conflicting I/O
   operations.  When they are attempted, they are rejected with
   NFS4ERR_LOCKED.  When the client gets NFS4ERR_LOCKED on a file it
   knows it has the proper share reservation for, it will need to send a
   LOCK request on the region of the file that includes the region the
   I/O was to be performed on, with an appropriate locktype (i.e.
   READ*_LT for a READ operation, WRITE*_LT for a WRITE operation).

   Note that for UNIX environments that support mandatory file locking,
   the distinction between advisory and mandatory locking is subtle.  In
   fact, advisory and mandatory byte-range locks are exactly the same in
   so far as the APIs and requirements on implementation.  If the
   mandatory lock attribute is set on the file, the server checks to see
   if the lock-owner has an appropriate shared (read) or exclusive
   (write) byte-range lock on the region it wishes to read or write to.
   If there is no appropriate lock, the server checks if there is a
   conflicting lock (which can be done by attempting to acquire the
   conflicting lock on behalf of the lock-owner, and if successful,
   release the lock after the READ or WRITE is done), and if there is,
   the server returns NFS4ERR_LOCKED.
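
   The following C-like pseudocode summarizes the check described above;
   the helper functions are illustrative and are not defined by this
   specification:

      /* Server-side sketch of the mandatory-locking check for a READ
       * or WRITE request. */
      nfsstat4
      check_byte_range_locks(file, lock_owner, range, is_write)
      {
          if (!mandatory_locking_in_effect(file))
              return NFS4_OK;           /* advisory locks: no I/O check */

          if (owner_holds_lock(file, lock_owner, range,
                               is_write ? WRITE_LT : READ_LT))
              return NFS4_OK;           /* appropriate lock already held */

          if (conflicting_lock_exists(file, lock_owner, range, is_write))
              return NFS4ERR_LOCKED;    /* conflicting lock: reject I/O */

          return NFS4_OK;
      }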

   For Windows environments, byte-range locks are always mandatory, so
   the server always checks for byte-range locks during I/O requests.

   Thus, the NFSv4.1 LOCK operation does not need to distinguish between
   advisory and mandatory byte-range locks.  It is the NFSv4.1 server's
   processing of the READ and WRITE operations that introduces the
   distinction.

   Every stateid which is validly passed to READ, WRITE or SETATTR, with
   the exception of special stateid values, defines an access mode for
   the file (i.e.  READ, WRITE, or READ-WRITE).

   o  For stateids associated with opens, this is the mode defined by
      the original OPEN which caused the allocation of the open stateid
      and as modified by subsequent OPENs and OPEN_DOWNGRADEs for the
      same open-owner/file pair.

   o  For stateids returned by byte-range lock requests, the appropriate
      mode is the access mode for the open stateid associated with the
      lock set represented by the stateid.

   o  For delegation stateids the access mode is based on the type of
      delegation.

   When a READ, WRITE, or SETATTR (which specifies the size attribute)
   is done, the operation is subject to checking against the access mode
   to verify that the operation is appropriate given the stateid with
   which the operation is associated.

   In the case of WRITE-type operations (i.e.  WRITEs and SETATTRs which
   set size), the server MUST verify that the access mode allows writing
   and MUST return an NFS4ERR_OPENMODE error if it does not.  In the
   case of READ, the server may perform the corresponding check on the
   access mode, or it may choose to allow READ on opens for WRITE only,
   to accommodate clients whose write implementation may unavoidably do
   reads (e.g. due to buffer cache constraints).  However, even if READs
   are allowed in these circumstances, the server MUST still check for
   locks that conflict with the READ (e.g. another OPEN specifies denial
   of READs).  Note that a server which does enforce the access mode
   check on READs need not explicitly check for conflicting share
   reservations since the existence of OPEN for read access guarantees
   that no conflicting share reservation can exist.
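
   A C-like sketch of this access-mode check follows; the flag
   server_enforces_read_mode represents a server policy choice and is
   not a protocol element:

      /* Check the presented stateid's access mode for a READ, WRITE,
       * or size-changing SETATTR (special stateids excluded). */
      nfsstat4
      check_access_mode(stateid_access, is_write_type)
      {
          if (is_write_type) {              /* WRITE or SETATTR of size */
              if (!(stateid_access & OPEN4_SHARE_ACCESS_WRITE))
                  return NFS4ERR_OPENMODE;
          } else if (server_enforces_read_mode) {
              if (!(stateid_access & OPEN4_SHARE_ACCESS_READ))
                  return NFS4ERR_OPENMODE;
          }
          return NFS4_OK;     /* conflicting-lock checks are still required */
      }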



   The read bypass special stateid (all bits of "other" and "seqid" set
   to one) indicates a desire to bypass locking checks.  The server MAY
   allow READ operations to bypass locking checks at the server, when
   this special stateid is used.  However, WRITE operations with this
   special stateid value MUST NOT bypass locking checks and are treated
   exactly the same as if a special stateid for anonymous state were
   used.

   A lock may not be granted while a READ or WRITE operation using one
   of the special stateids is being performed and the scope of the lock
   to be granted would conflict with the READ or WRITE operation.  This
   can occur when:

   o  A mandatory byte-range lock is requested with a range that
      conflicts with the range of the READ or WRITE operation.  For the
      purposes of this paragraph, a conflict occurs when a shared lock is
      requested and a WRITE operation is being performed, or an
      exclusive lock is requested and either a READ or a WRITE operation
      is being performed.

   o  A share reservation is requested which denies reading and/or
      writing and the corresponding operation is being performed.

   o  A delegation is to be granted and the delegation type would
      prevent the I/O operation, i.e.  READ and WRITE conflict with a
      write delegation and WRITE conflicts with a read delegation.

   When a client holds a delegation, it needs to ensure that the stateid
   sent conveys the association of the operation with the delegation, to
   avoid having the delegation needlessly recalled.  When the delegation
   stateid, an open stateid associated with that delegation, or a
   stateid representing byte-range locks derived from such an open is
   used, the server knows that the READ, WRITE, or SETATTR does not
   conflict with the delegation, but is sent under the
   aegis of the delegation.  Even though it is possible for the server
   to determine from the client ID (via the session ID) that the client
   does in fact have a delegation, the server is not obliged to check
   this, so using a special stateid can result in avoidable recall of
   the delegation.

9.2.  Lock Ranges

   The protocol allows a lock-owner to request a lock with a byte range
   and then either upgrade, downgrade, or unlock a sub-range of the
   initial lock, or a range that consists of a range which overlaps,
   fully or partially, that initial lock or a combination of a set of
   existing locks for the same lock-owner.  It is expected that this
   will be an uncommon type of request.  In any case, servers or server
   file systems may not be able to support sub-range lock semantics.  In
   the event that a server receives a locking request that represents a
   sub-range of current locking state for the lock-owner, the server is
   allowed to return the error NFS4ERR_LOCK_RANGE to signify that it
   does not support sub-range lock operations.  Therefore, the client
   should be prepared to receive this error and, if appropriate, report
   the error to the requesting application.

   The client is discouraged from combining multiple independent locking
   ranges that happen to be adjacent into a single request since the
   server may not support sub-range requests and for reasons related to
   the recovery of file locking state in the event of server failure.
   As discussed in Section 8.4.2, the server may employ certain
   optimizations during recovery that work effectively only when the
   client's behavior during lock recovery is similar to the client's
   locking behavior prior to server failure.

9.3.  Upgrading and Downgrading Locks

   If a client has a write lock on a byte-range, it can request an
   atomic downgrade of the lock to a read lock via the LOCK request, by
   setting the type to READ_LT.  If the server supports atomic
   downgrade, the request will succeed.  If not, it will return
   NFS4ERR_LOCK_NOTSUPP.  The client should be prepared to receive this
   error, and if appropriate, report the error to the requesting
   application.

   If a client has a read lock on a byte-range, it can request an atomic
   upgrade of the lock to a write lock via the LOCK request by setting
   the type to WRITE_LT or WRITEW_LT.  If the server does not support
   atomic upgrade, it will return NFS4ERR_LOCK_NOTSUPP.  If the upgrade
   can be achieved without an existing conflict, the request will
   succeed.  Otherwise, the server will return either NFS4ERR_DENIED or
   NFS4ERR_DEADLOCK.  The error NFS4ERR_DEADLOCK is returned if the
   client sent the LOCK request with the type set to WRITEW_LT and the
   server has detected a deadlock.  The client should be prepared to
   receive such errors and if appropriate, report the error to the
   requesting application.

9.4.  Stateid Seqid Values and Byte-Range Locks

   When a lock or unlock request is done, passing a stateid, the stateid
   returned has the same "other" value and a "seqid" value that is
   incremented to reflect the occurrence of the lock or unlock request.
   The server MUST increment the value of the "seqid" field whenever
   there is any change to the locking status of any byte offset as
   described by any of locks covered by the stateid.  A change in
   locking status includes a change from locked to unlocked or the
   reverse or a change from being locked for read to being locked for
   write or the reverse.

   When there is no such change, as, for example, when a range already
   locked for write is locked again for write, the server MAY increment
   the "seqid" value.

9.5.  Issues with Multiple Open-Owners

   When the same file is opened by multiple open-owners, a client will
   have multiple open stateids for that file, each associated with a
   different open-owner.  In that case, there can be multiple LOCK and
   LOCKU requests for the same lock-owner issued using the different
   open stateids, and so a situation may arise in which there are
   multiple stateids, each representing byte-range locks on the same
   file and held by the same lock-owner but each associated with a
   different open-owner.

   In such a situation, the locking status of each byte (i.e. whether it
   is locked, the read or write mode of the lock and the lock-owner
   holding the lock) MUST reflect the last LOCK or LOCKU operation done
   for the lock-owner in question, independent of the stateid through
   which the request was issued.

   When a byte is locked by the lock-owner in question, the open-owner
   to which that lock is assigned SHOULD be that of the open-owner
   associated with the stateid through which the last LOCK of that byte
   was done.  When there is a change in the open-owner associated with
   locks for the stateid through which a LOCK or LOCKU was done, the
   "seqid" field of the stateid MUST be incremented, even if the
   locking, in terms of lock-owners, has not changed.  When there is a
   change to the set of locked bytes associated with a different stateid
   for the same lock-owner, i.e. associated with a different open-owner,
   the "seqid" value for that stateid MUST NOT be incremented.

9.6.  Blocking Locks

   Some clients require the support of blocking locks.  While NFSv4.1
   provides a callback when a previously unavailable lock becomes
   available, this is an OPTIONAL feature and clients cannot depend on
   its presence.  Clients need to be prepared to continually poll for
   the lock.  This presents a fairness problem.  Two of the lock types,
   READW and WRITEW, are used to indicate to the server that the client
   is requesting a blocking lock.  When the callback is not used, the
   server should maintain an ordered list of pending blocking locks.
   When the conflicting lock is released, the server may wait for the
   period of time equal to lease_time for the first waiting client to
   re-request the lock.  After the lease period expires, the next
   waiting client request is allowed the lock.  Clients are required to
   poll at an interval sufficiently small that they are likely to
   acquire the lock in a timely manner.  The server is not required to
   maintain a list of pending blocked locks, as it is used to increase
   fairness and not for correct operation.  Because of the unordered
   nature of crash recovery, storing of lock state to stable storage
   would be required to guarantee ordered granting of blocking locks.

   Servers may also note the lock types and delay returning denial of
   the request to allow extra time for a conflicting lock to be
   released, allowing a successful return.  In this way, clients can
   avoid the burden of needlessly frequent polling for blocking locks.
   The server should take care about the length of this delay in the
   event the client retransmits the request.

   If a server receives a blocking lock request, denies it, and then
   later receives a nonblocking request for the same lock, which is also
   denied, then it should remove the lock in question from its list of
   pending blocking locks.  Clients should use such a nonblocking
   request to indicate to the server that this is the last time they
   intend to poll for the lock, as may happen when the process
   requesting the lock is interrupted.  This is a courtesy to the
   server, to prevent it from unnecessarily waiting a lease period
   before granting other lock requests.  However, clients are not
   required to perform this courtesy, and servers must not depend on
   them doing so.  Also, clients must be prepared for the possibility
   that this final locking request will be accepted.

   When the server indicates, via the flag OPEN4_RESULT_MAY_NOTIFY_LOCK,
   that CB_NOTIFY_LOCK callbacks will be done for the current open file,
   the client should take notice of this, but, since this is a hint,
   cannot rely on a CB_NOTIFY_LOCK always being done.  A client may
   reasonably reduce the frequency with which it polls for a denied
   lock, since the greater latency that might occur is likely to be
   eliminated given a prompt callback, but it still needs to poll.  When
   it receives a CB_NOTIFY_LOCK it should promptly try to obtain the
   lock, but it should be aware that other clients may be polling and the
   server is under no obligation to reserve the lock for that particular
   client.
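
   As an illustrative client-side sketch (the helper names are
   hypothetical), a polling loop that takes the
   OPEN4_RESULT_MAY_NOTIFY_LOCK hint into account might look like this:

      /* Poll for a blocking byte-range lock; wake early if a
       * CB_NOTIFY_LOCK arrives, but never rely on one being sent. */
      for (;;) {
          status = send_lock(file, range, WRITEW_LT);  /* blocking type */
          if (status == NFS4_OK)
              break;                            /* lock acquired */
          if (status != NFS4ERR_DENIED)
              return status;                    /* report other errors */
          interval = may_notify_lock ? slow_poll_ms : fast_poll_ms;
          wait_for_notify_or_timeout(interval); /* callback or timer */
      }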

9.7.  Share Reservations

   A share reservation is a mechanism to control access to a file.  It
   is a separate and independent mechanism from byte-range locking.
   When a client opens a file, it sends an OPEN operation to the server
   specifying the type of access required (READ, WRITE, or BOTH) and the
   type of access to deny others (deny NONE, READ, WRITE, or BOTH).  If
   the OPEN fails the client will fail the application's open request.

   Pseudo-code definition of the semantics:

           if (request.access == 0) {
             return (NFS4ERR_INVAL);
           } else {
             if ((request.access & file_state.deny) ||
                 (request.deny & file_state.access)) {
               return (NFS4ERR_SHARE_DENIED);
             }
           }
           return (NFS4_OK);

   When doing this checking of share reservations on OPEN, the current
   file_state used in the algorithm includes bits that reflect all
   current opens, including those for the open-owner making the new OPEN
   request.

   The constants used for the OPEN and OPEN_DOWNGRADE operations for the
   access and deny fields are as follows:

   const OPEN4_SHARE_ACCESS_READ   = 0x00000001;
   const OPEN4_SHARE_ACCESS_WRITE  = 0x00000002;
   const OPEN4_SHARE_ACCESS_BOTH   = 0x00000003;

   const OPEN4_SHARE_DENY_NONE     = 0x00000000;
   const OPEN4_SHARE_DENY_READ     = 0x00000001;
   const OPEN4_SHARE_DENY_WRITE    = 0x00000002;
   const OPEN4_SHARE_DENY_BOTH     = 0x00000003;

9.8.  OPEN/CLOSE Operations

   To provide correct share semantics, a client MUST use the OPEN
   operation to obtain the initial filehandle and indicate the desired
   access and what access, if any, to deny.  Even if the client intends
   to use a special stateid for anonymous state or read bypass, it must
   still obtain the filehandle for the regular file with the OPEN
   operation so the appropriate share semantics can be applied.  For
   clients that do not have a deny mode built into their open
   programming interfaces, deny equal to NONE should be used.

   The OPEN operation with the CREATE flag also subsumes the CREATE
   operation for regular files as used in previous versions of the NFS
   protocol.  This allows a create with a share to be done atomically.

   The CLOSE operation removes all share reservations held by the open-
   owner on that file.  If byte-range locks are held, the client SHOULD
   release all locks before issuing a CLOSE.  The server MAY free all
   outstanding locks on CLOSE but some servers may not support the CLOSE
   of a file that still has byte-range locks held.  The server MUST
   return failure, NFS4ERR_LOCKS_HELD, if any locks would exist after
   the CLOSE.

   The LOOKUP operation will return a filehandle without establishing
   any lock state on the server.  Without a valid stateid, the server
   will assume the client has the least access.  For example, a file
   opened with deny READ/WRITE using a filehandle obtained through
   LOOKUP could only be read using the special read bypass stateid and
   could not be written at all because it would not have a valid stateid
   and the special anonymous stateid would not be allowed access.

9.9.  Open Upgrade and Downgrade

   When an OPEN is done for a file and the open-owner for which the open
   is being done already has the file open, the result is to upgrade the
   open file status maintained on the server to include the access and
   deny bits specified by the new OPEN as well as those for the existing
   OPEN.  The result is that there is one open file, as far as the
   protocol is concerned, and it includes the union of the access and
   deny bits for all of the OPEN requests completed.  The open is
   represented by a single stateid whose "other" value matches that of
   the original open, and whose "seqid" value is incremented to reflect
   the occurrence of the upgrade.  The increment is required even in
   cases in which the "upgrade" results in no change to the open mode
   (e.g. an OPEN is done for read when the existing open file is opened
   for read-write).  Only a single CLOSE will be done to reset the
   effects of both OPENs.  The client may use the stateid returned by
   the OPEN effecting the upgrade or a stateid sharing the same "other"
   field and a seqid of zero, although care needs to be taken as far as
   upgrades which happen while the CLOSE is pending.  Note that the
   client, when issuing the OPEN, may not know that the same file is in
   fact being opened.  The above only applies if both OPENs result in
   the OPENed object being designated by the same filehandle.
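
   A brief sketch of the server-side bookkeeping implied by the above
   (the field names are illustrative, not protocol elements):

      /* The resulting open reflects the union of all OPENs done by the
       * open-owner for the file, and "seqid" always advances. */
      open_state.share_access |= new_open.share_access;
      open_state.share_deny   |= new_open.share_deny;
      open_state.stateid.seqid++;   /* even if the mode is unchanged */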

   When the server chooses to export multiple filehandles corresponding
   to the same file object and returns different filehandles on two
   different OPENs of the same file object, the server MUST NOT "OR"
   together the access and deny bits and coalesce the two open files.
   Instead the server must maintain separate OPENs with separate
   stateids and will require separate CLOSEs to free them.

   When multiple open files on the client are merged into a single open
   file object on the server, the close of one of the open files (on the
   client) may necessitate change of the access and deny status of the
   open file on the server.  This is because the union of the access and
   deny bits for the remaining opens may be smaller (i.e. a proper
   subset) than previously.  The OPEN_DOWNGRADE operation is used to
   make the necessary change and the client should use it to update the
   server so that share reservation requests by other clients are
   handled properly.  The stateid returned has the same "other" field as
   that passed to the server.  The "seqid" value in the returned stateid
   MUST be incremented, even in situations in which there is no change
   to the access and deny bits for the file.

9.10.  Parallel OPENs

   Unlike the case of NFSv4.0, in which OPEN operations for the same
   open-owner are inherently serialized because of the owner-based
   seqid, multiple OPENs for the same open-owner may be done in
   parallel.  When clients do this, they may encounter situations in
   which, because of the existence of hard links, two OPEN operations
   may turn out to open the same file, with a later OPEN performed being
   an upgrade of the first, with this fact only visible to the client
   once the operations complete.

   In this situation, clients may determine the order in which the OPENs
   were performed by examining the stateids returned by the OPENs.
   Stateids that share a common value of the "other" field can be
   recognized as having opened the same file, with the order of the
   operations determinable from the order of the "seqid" fields, mod any
   possible wraparound of the 32-bit field.
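
   For example, a client might compare two "seqid" values taken from
   stateids with the same "other" field using serial-number-style
   arithmetic; this is an illustrative sketch, not a protocol
   requirement:

      #include <stdint.h>

      /* Returns non-zero if seqid "a" is later than seqid "b",
       * allowing for wraparound of the 32-bit field. */
      int
      seqid_is_later(uint32_t a, uint32_t b)
      {
          return (int32_t)(a - b) > 0;
      }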

   When the possibility exists that the client will send multiple OPENs
   for the same open-owner in parallel, an open upgrade may happen
   without the client knowing beforehand that it could.  Because of this
   possibility, CLOSEs and OPEN_DOWNGRADEs should generally be sent with
   a non-zero seqid in the stateid, to avoid the possibility that the
   status change associated with an open upgrade is inadvertently lost.

9.11.  Reclaim of Open and Byte-Range Locks

   Special forms of the LOCK and OPEN operations are provided when it is
   necessary to re-establish byte-range locks or opens after a server
   failure.

   o  To reclaim existing opens, an OPEN operation is performed using a
      CLAIM_PREVIOUS.  Because the client, in this type of situation,
      will have already opened the file and have the filehandle of the
      target file, this operation requires that the current filehandle
      be the target file, rather than a directory, and no file name is
      specified.

   o  To reclaim byte-range locks, a LOCK operation with the reclaim
      parameter set to true is used.

   Reclaims of opens associated with delegations are discussed in
   Section 10.2.1.


10.  Client-Side Caching

   Client-side caching of data, of file attributes, and of file names is
   essential to providing good performance with the NFS protocol.
   Providing distributed cache coherence is a difficult problem and
   previous versions of the NFS protocol have not attempted it.
   Instead, several NFS client implementation techniques have been used
   to reduce the problems that a lack of coherence poses for users.
   These techniques have not been clearly defined by earlier protocol
   specifications and it is often unclear what is valid or invalid
   client behavior.

   The NFSv4.1 protocol uses many techniques similar to those that have
   been used in previous protocol versions.  The NFSv4.1 protocol does
   not provide distributed cache coherence.  However, it defines a more
   limited set of caching guarantees to allow locks and share
   reservations to be used without destructive interference from client
   side caching.

   In addition, the NFSv4.1 protocol introduces a delegation mechanism
   which allows many decisions normally made by the server to be made
   locally by clients.  This mechanism provides efficient support of the
   common cases where sharing is infrequent or where sharing is read-
   only.

10.1.  Performance Challenges for Client-Side Caching

   Caching techniques used in previous versions of the NFS protocol have
   been successful in providing good performance.  However, several
   scalability challenges can arise when those techniques are used with
   very large numbers of clients.  This is particularly true when
   clients are geographically distributed which classically increases
   the latency for cache revalidation requests.

   The previous versions of the NFS protocol repeat their file data
   cache validation requests at the time the file is opened.  This
   behavior can have serious performance drawbacks.  A common case is
   one in which a file is only accessed by a single client.  Therefore,
   sharing is infrequent.

   In this case, repeated reference to the server to find that no
   conflicts exist is expensive.  A better option with regards to
   performance is to allow a client that repeatedly opens a file to do
   so without reference to the server.  This is done until potentially
   conflicting operations from another client actually occur.

   A similar situation arises in connection with file locking.  Sending
   file lock and unlock requests to the server as well as the read and
   write requests necessary to make data caching consistent with the
   locking semantics (see Section 10.3.2) can severely limit
   performance.  When locking is used to provide protection against
   infrequent conflicts, a large penalty is incurred.  This penalty may
   discourage the use of file locking by applications.

   The NFSv4.1 protocol provides more aggressive caching strategies with
   the following design goals:

   o  Compatibility with a large range of server semantics.

   o  Providing the same caching benefits as previous versions of the
      NFS protocol when unable to support the more aggressive model.

   o  Requirements for aggressive caching are organized so that a large
      portion of the benefit can be obtained even when not all of the
      requirements can be met.

   The appropriate requirements for the server are discussed in later
   sections in which specific forms of caching are covered (see
   Section 10.4).

10.2.  Delegation and Callbacks

   Recallable delegation of server responsibilities for a file to a
   client improves performance by avoiding repeated requests to the
   server in the absence of inter-client conflict.  With the use of a
   "callback" RPC from server to client, a server recalls delegated
   responsibilities when another client engages in sharing of a
   delegated file.

   A delegation is passed from the server to the client, specifying the
   object of the delegation and the type of delegation.  There are
   different types of delegations but each type contains a stateid to be
   used to represent the delegation when performing operations that
   depend on the delegation.  This stateid is similar to those
   associated with locks and share reservations but differs in that the
   stateid for a delegation is associated with a client ID and may be
   used on behalf of all the open-owners for the given client.  A
   delegation is made to the client as a whole and not to any specific
   process or thread of control within it.

   The backchannel is established by CREATE_SESSION and
   BIND_CONN_TO_SESSION, and the client is required to maintain it.
   Because the backchannel may be down, even temporarily, correct
   protocol operation does not depend on it.  Preliminary testing of
   backchannel functionality by means of a CB_COMPOUND procedure with a
   single operation, CB_SEQUENCE, can be used to check the continuity of
   the backchannel.  A server avoids delegating responsibilities until
   it has determined that the backchannel exists.  Because the granting
   of a delegation is always conditional upon the absence of conflicting
   access, clients MUST NOT assume that a delegation will be granted and
   they MUST always be prepared for OPENs, WANT_DELEGATIONs, and
   GET_DIR_DELEGATIONs to be processed without any delegations being
   granted.

   Once granted, a delegation behaves in many ways like a lock.  There
   is an associated lease that is subject to renewal together with all
   of the other leases held by that client.

   Unlike locks, an operation by a second client to a delegated file
   will cause the server to recall a delegation through a callback.  For
   individual operations, we will describe, under IMPLEMENTATION, when
   such operations are required to effect a recall.  A number of points
   should be noted, however.

   o  The server is free to recall a delegation whenever it feels it is
      desirable and may do so even if no operations requiring recall are
      being done.

   o  Operations done outside the NFSv4 protocol, due to, for example,
      access by other protocols, or by local access, also need to result
      in delegation recall when they make analogous changes to file
      system data.  What is crucial is if the change would invalidate
      the guarantees provided by the delegation.  When this is possible,
      the delegation needs to be recalled and MUST be returned or
      revoked before allowing the operation to proceed.

   o  The semantics of the file system are crucial in defining when
      delegation recall is required.  If a particular change within a
      specific implementation causes change to a file attribute, then
      delegation recall is required, whether or not that operation has
      been specifically listed as requiring delegation recall.  Again, what
      is critical is whether the guarantees provided by the delegation
      are being invalidated.

   Despite those caveats, the implementation sections for a number of
   operations describe situations in which delegation recall would be
   required under some common circumstances:

   o  For GETATTR, see Section 18.7.4.

   o  For OPEN, see Section 18.16.4.

   o  For READ, see Section 18.22.4.

   o  For REMOVE, see Section 18.25.4.

   o  For RENAME, see Section 18.26.4.

   o  For SETATTR, see Section 18.30.4.

   o  For WRITE, see Section 18.32.4.

   On recall, the client holding the delegation needs to flush modified
   state (such as modified data) to the server and return the
   delegation.  The conflicting request will not be acted on until the
   recall is complete.  The recall is considered complete when the
   client returns the delegation or the server times out its wait for
   the delegation to be returned and revokes the delegation as a result of
   the timeout.  In the interim, the server will either delay responding
   to conflicting requests or respond to them with NFS4ERR_DELAY.
   Following the resolution of the recall, the server has the
   information necessary to grant or deny the second client's request.
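
   As an illustrative client-side sketch (the helper names are
   hypothetical), handling the recall of a write delegation might
   proceed as follows:

      /* Respond to CB_RECALL for a write delegation. */
      void
      handle_cb_recall(struct delegation *d)
      {
          write_back_dirty_data(d->file);  /* WRITE and COMMIT to server */
          establish_local_opens(d);        /* e.g. OPEN w/ CLAIM_DELEGATE_CUR */
          send_delegreturn(d->stateid);    /* return the delegation */
      }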

   At the time the client receives a delegation recall, it may have
   substantial state that needs to be flushed to the server.  Therefore,
   the server should allow sufficient time for the delegation to be
   returned since it may involve numerous RPCs to the server.  If the
   server is able to determine that the client is diligently flushing
   state to the server as a result of the recall, the server may extend
   the usual time allowed for a recall.  However, the time allowed for
   recall completion should not be unbounded.

   An example of this is when responsibility to mediate opens on a given
   file is delegated to a client (see Section 10.4).  The server will
   not know what opens are in effect on the client.  Without this
   knowledge the server will be unable to determine if the access and
   deny state for the file allows any particular open until the
   delegation for the file has been returned.

   A client failure or a network partition can result in failure to
   respond to a recall callback.  In this case, the server will revoke
   the delegation which in turn will render useless any modified state
   still on the client.

10.2.1.  Delegation Recovery

   There are three situations that delegation recovery needs to deal
   with:

   o  Client restart

   o  Server restart

   o  Network partition (full or backchannel-only)

   In the event the client restarts, the failure to renew the lease will
   result in the revocation of byte-range locks and share reservations.
   Delegations, however, may be treated a bit differently.

   There will be situations in which delegations will need to be
   reestablished after a client restarts.  The reason for this is the
   client may have file data stored locally and this data was associated
   with the previously held delegations.  The client will need to
   reestablish the appropriate file state on the server.

   To allow for this type of client recovery, the server MAY extend the
   period for delegation recovery beyond the typical lease expiration
   period.  This implies that requests from other clients that conflict
   with these delegations will need to wait.  Because the normal recall
   process may require significant time for the client to flush changed
   state to the server, other clients need to be prepared for delays that
   occur because of a conflicting delegation.  This longer interval
   would increase the window for clients to restart and consult stable
   storage so that the delegations can be reclaimed.  For open
   delegations, such delegations are reclaimed using OPEN with a claim
   type of CLAIM_DELEGATE_PREV or CLAIM_DELEG_PREV_FH (See Section 10.5
   and Section 18.16 for discussion of open delegation and the details
   of OPEN respectively).

   A server MAY support claim types of CLAIM_DELEGATE_PREV and
   CLAIM_DELEG_PREV_FH, and if it does, it MUST NOT remove delegations
   upon a CREATE_SESSION that confirms a client ID created by
   EXCHANGE_ID, and instead MUST, for a period of time no less than that
   of the value of the lease_time attribute, maintain the client's
   delegations to allow time for the client to send CLAIM_DELEGATE_PREV
   requests.  The server that supports CLAIM_DELEGATE_PREV and/or
   CLAIM_DELEG_PREV_FH MUST support the DELEGPURGE operation.

   When the server restarts, delegations are reclaimed (using the OPEN
   operation with CLAIM_PREVIOUS) in a similar fashion to byte-range
   locks and share reservations.  However, there is a slight semantic
   difference.  In the normal case if the server decides that a
   delegation should not be granted, it performs the requested action
   (e.g.  OPEN) without granting any delegation.  For reclaim, the
   server grants the delegation but a special designation is applied so
   that the client treats the delegation as having been granted but
   recalled by the server.  Because of this, the client has the duty to
   write all modified state to the server and then return the
   delegation.  This process of handling delegation reclaim reconciles
   three principles of the NFSv4.1 protocol:

   o  Upon reclaim, a client reporting resources assigned to it by an
      earlier server instance must be granted those resources.

   o  The server has unquestionable authority to determine whether
      delegations are to be granted and, once granted, whether they are
      to be continued.

   o  The use of callbacks is not to be depended upon until the client
      has proven its ability to receive them.

   When a client needs to reclaim a delegation and there is no
   associated open, the client may use the CLAIM_PREVIOUS variant of the
   WANT_DELEGATION operation.  However, since the server is not required
   to support this operation, an alternative is to reclaim via a dummy
   open together with the delegation using an OPEN of type
   CLAIM_PREVIOUS.  The dummy open file can be released using a CLOSE to
   re-establish the original state to be reclaimed, a delegation without
   an associated open.

   When a client has more than a single open associated with a
   delegation, state for those additional opens can be established using
   OPEN operations of type CLAIM_DELEGATE_CUR.  When these are used to
   establish opens associated with reclaimed delegations, the server
   MUST allow them when made within the grace period.

   When a network partition occurs, delegations are subject to freeing
   by the server when the lease renewal period expires.  This is similar
   to the behavior for locks and share reservations.  For delegations,
   however, the server may extend the period in which conflicting
   requests are held off.  Eventually the occurrence of a conflicting
   request from another client will cause revocation of the delegation.
   A loss of the backchannel (e.g. by later network configuration
   change) will have the same effect.  A recall request will fail and
   revocation of the delegation will result.

   A client normally finds out about revocation of a delegation when it
   uses a stateid associated with a delegation and receives one of the
   errors NFS4ERR_EXPIRED, NFS4ERR_ADMIN_REVOKED, or
   NFS4ERR_DELEG_REVOKED.  It also may find out about delegation
   revocation after a client restart when it attempts to reclaim a
   delegation and receives that same error.  Note that in the case of a
   revoked write open delegation, there are issues because data may have
   been modified by the client whose delegation is revoked and
   separately by other clients.  See Section 10.5.1 for a discussion of
   such issues.  Note also that when delegations are revoked,
   information about the revoked delegation will be written by the
   server to stable storage (as described in Section 8.4.3).  This is
   done to deal with the case in which a server restarts after revoking
   a delegation but before the client holding the revoked delegation is
   notified about the revocation.

10.3.  Data Caching

   When applications share access to a set of files, they need to be
   implemented so as to take account of the possibility of conflicting
   access by another application.  This is true whether the applications
   in question execute on different clients or reside on the same
   client.

   Share reservations and byte-range locks are the facilities the
   NFSv4.1 protocol provides to allow applications to coordinate access
   by using mutual exclusion facilities.  The NFSv4.1 protocol's data
   caching must be implemented such that it does not invalidate the
   assumptions that those using these facilities depend upon.

10.3.1.  Data Caching and OPENs

   In order to avoid invalidating the sharing assumptions that
   applications rely on, NFSv4.1 clients should not provide cached data
   to applications or modify it on behalf of an application when it
   would not be valid to obtain or modify that same data via a READ or
   WRITE operation.

   Furthermore, in the absence of open delegation (see Section 10.4),
   two additional rules apply.  Note that these rules are obeyed in
   practice by many NFSv3 clients.

   o  First, cached data present on a client must be revalidated after
      doing an OPEN.  Revalidating means that the client fetches the
      change attribute from the server, compares it with the cached
      change attribute, and if different, declares the cached data (as
      well as the cached attributes) as invalid.  This is to ensure that
      the data for the OPENed file is still correctly reflected in the
      client's cache.  This validation must be done at least when the
      client's OPEN operation includes DENY=WRITE or BOTH thus
      terminating a period in which other clients may have had the
      opportunity to open the file with WRITE access.  Clients may
      choose to do the revalidation more often (i.e. at OPENs specifying
      DENY=NONE) to parallel the NFSv3 protocol's practice for the
      benefit of users assuming this degree of cache revalidation.

      Since the change attribute is updated for data and metadata
      modifications, some client implementors may be tempted to use the
      time_modify attribute and not the change attribute to validate
      cached data, so that metadata changes do not spuriously invalidate
      clean data.  The implementor is cautioned against this approach.
      The change attribute is guaranteed to change for each update to the
      file, whereas time_modify is guaranteed to change only at the
      granularity of the time_delta attribute.  Use by the client's data
      cache validation logic of time_modify and not change runs the risk
      of the client incorrectly marking stale data as valid.  Thus any
      cache validation approach by the client MUST include the use of
      the change attribute.

   o  Second, modified data must be flushed to the server before closing
      a file OPENed for write.  This is complementary to the first rule.
      If the data is not flushed at CLOSE, the revalidation done after
      the client OPENs a file is unable to achieve its purpose.  The other
      aspect to flushing the data before close is that the data must be
      committed to stable storage, at the server, before the CLOSE
      operation is requested by the client.  In the case of a server
      restart and a CLOSEd file, it may not be possible to retransmit
      the data to be written to the file.  Hence, this requirement.
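
   An illustrative sketch of the two rules above, using hypothetical
   helper names:

      /* Rule 1: revalidate cached data when a file is OPENed. */
      void
      open_revalidate(cached_file, server_change)
      {
          if (server_change != cached_file.change) {
              invalidate_cached_data(cached_file);   /* may be stale */
              invalidate_cached_attrs(cached_file);
          }
          cached_file.change = server_change;
      }

      /* Rule 2: flush modified data before CLOSE of a file OPENed for
       * write; the data must reach stable storage at the server. */
      void
      flush_before_close(cached_file)
      {
          write_back_dirty_data(cached_file);    /* WRITE modified data */
          commit_to_stable_storage(cached_file); /* COMMIT before CLOSE */
      }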

10.3.2.  Data Caching and File Locking

   For those applications that choose to use file locking instead of
   share reservations to exclude inconsistent file access, there is an
   analogous set of constraints that apply to client side data caching.
   These rules are effective only if the file locking is used in a way
   that matches the actual READ and WRITE operations executed.  This is
   as opposed to file locking that is
   based on pure convention.  For example, it is possible to manipulate
   a two-megabyte file by dividing the file into two one-megabyte
   regions and protecting access to the two regions by file locks on
   bytes zero and one.  A lock for write on byte zero of the file would
   represent the right to do READ and WRITE operations on the first
   region.  A lock for write on byte one of the file would represent the
   right to do READ and WRITE operations on the second region.  As long
   as all applications manipulating the file obey this convention, they
   will work on a local file system.  However, they may not work with
   the NFSv4.1 protocol unless clients refrain from data caching.

   The rules for data caching in the file locking environment are:

   o  First, when a client obtains a file lock for a particular region,
      the data cache corresponding to that region (if any cache data
      exists) must be revalidated.  If the change attribute indicates
      that the file may have been updated since the cached data was
      obtained, the client must flush or invalidate the cached data for
      the newly locked region.  A client might choose to invalidate all
      of the non-modified cached data that it has for the file, but the
      only requirement for correct operation is to invalidate all of the
      data in the newly locked region.

   o  Second, before releasing a write lock for a region, all modified
      data for that region must be flushed to the server.  The modified
      data must also be written to stable storage.
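
   An illustrative sketch of these two rules, again with hypothetical
   helper names:

      /* Rule 1: on obtaining a byte-range lock, revalidate the cache
       * for the newly locked region. */
      void
      on_lock_granted(cached_file, range, server_change)
      {
          if (server_change != cached_file.change)
              invalidate_cached_range(cached_file, range);
      }

      /* Rule 2: before releasing a write lock, flush the modified data
       * for that region to stable storage at the server. */
      void
      before_unlock(cached_file, range)
      {
          write_back_dirty_range(cached_file, range);  /* exact range */
          commit_range(cached_file, range);            /* before LOCKU */
      }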

   Note that flushing data to the server and the invalidation of cached
   data must reflect the actual byte ranges locked or unlocked.
   Rounding these up or down to reflect client cache block boundaries
   will cause problems if not carefully done.  For example, writing a
   modified block when only half of that block is within an area being
   unlocked may cause invalid modification to the region outside the
   unlocked area.  This, in turn, may be part of a region locked by
   another client.  Clients can avoid this situation by synchronously
   performing portions of write operations that overlap that portion
   (initial or final) that is not a full block.  Similarly, invalidating
   a locked area which is not an integral number of full buffer blocks
   would require the client to read one or two partial blocks from the
   server if the revalidation procedure shows that the data which the
   client possesses may not be valid.

   The data that is written to the server as a prerequisite to the
   unlocking of a region must be written, at the server, to stable
   storage.  The client may accomplish this either with synchronous
   writes or by following asynchronous writes with a COMMIT operation.
   This is required because retransmission of the modified data after a
   server restart might conflict with a lock held by another client.

   A client implementation may choose to accommodate applications which
   use byte-range locking in non-standard ways (e.g. using a byte-range
   lock as a global semaphore) by flushing to the server more data upon
   a LOCKU than is covered by the locked range.  This may include
   modified data within files other than the one for which the unlocks
   are being done.  In such cases, the client must not interfere with
   applications whose READs and WRITEs are being done only within the
   bounds of byte-range locks which the application holds.  For example,
   an application locks a single byte of a file and proceeds to write
   that single byte.  A client that chose to handle a LOCKU by flushing
   all modified data to the server could validly write that single byte
   in response to an unrelated unlock.  However, it would not be valid
   to write the entire block in which that single written byte was
   located since it includes an area that is not locked and might be
   locked by another client.  Client implementations can avoid this
   problem by dividing files with modified data into those for which all
   modifications are done to areas covered by an appropriate byte-range
   lock and those for which there are modifications not covered by a
   byte-range lock.  Any writes done for the former class of files must
   not include areas not locked and thus not modified on the client.

10.3.3.  Data Caching and Mandatory File Locking

   Client side data caching needs to respect mandatory file locking when
   it is in effect.  The presence of mandatory file locking for a given
   file is indicated when the client gets back NFS4ERR_LOCKED from a
   READ or WRITE on a file it has an appropriate share reservation for.
   When mandatory locking is in effect for a file, the client must check
   for an appropriate file lock for data being read or written.  If a
   lock exists for the range being read or written, the client may
   satisfy the request using the client's validated cache.  If an
   appropriate file lock is not held for the range of the read or write,
   the read or write request must not be satisfied by the client's cache
   and the request must be sent to the server for processing.  When a
   read or write request partially overlaps a locked region, the request
   should be subdivided into multiple pieces with each region (locked or
   not) treated appropriately.

10.3.4.  Data Caching and File Identity

   When clients cache data, the file data needs to be organized
   according to the file system object to which the data belongs.  For
   NFSv3 clients, the typical practice has been to assume for the
   purpose of caching that distinct filehandles represent distinct file
   system objects.  The client then has the choice to organize and
   maintain the data cache on this basis.

   In the NFSv4.1 protocol, there is now the possibility to have
   significant deviations from a "one filehandle per object" model
   because a filehandle may be constructed on the basis of the object's
   pathname.  Therefore, clients need a reliable method to determine if
   two filehandles designate the same file system object.  If clients
   were simply to assume that all distinct filehandles denote distinct
   objects and proceed to do data caching on this basis, caching
   inconsistencies would arise between the distinct client side objects
   which mapped to the same server side object.

   By providing a method to differentiate filehandles, the NFSv4.1
   protocol alleviates a potential functional regression in comparison
   with the NFSv3 protocol.  Without this method, caching
   inconsistencies within the same client could occur and this has not
   been present in previous versions of the NFS protocol.  Note that it
   is possible to have such inconsistencies with applications executing
   on multiple clients but that is not the issue being addressed here.

   For the purposes of data caching, the following steps allow an
   NFSv4.1 client to determine whether two distinct filehandles denote
   the same server side object (a brief sketch follows the list):

   o  If GETATTR directed to two filehandles returns different values of
      the fsid attribute, then the filehandles represent distinct
      objects.

   o  If GETATTR for any file with an fsid that matches the fsid of the
      two filehandles in question returns a unique_handles attribute
      with a value of TRUE, then the two objects are distinct.

   o  If GETATTR directed to the two filehandles does not return the
      fileid attribute for both of the handles, then it cannot be
      determined whether the two objects are the same.  Therefore,
      operations which depend on that knowledge (e.g. client side data
      caching) cannot be done reliably.  Note that if GETATTR does not
      return the fileid attribute for both filehandles, it will return
      it for neither of the filehandles, since the fsid for both
      filehandles is the same.

   o  If GETATTR directed to the two filehandles returns different
      values for the fileid attribute, then they are distinct objects.

   o  Otherwise they are the same object.
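
   The sketch below applies these steps to GETATTR results obtained for
   the two filehandles.  The fattrs structure is a hypothetical,
   simplified view of the relevant attributes; only the order of the
   checks mirrors the list above.

       #include <stdbool.h>
       #include <stdint.h>

       struct fattrs {
           uint64_t fsid_major, fsid_minor; /* fsid attribute        */
           bool     unique_handles;         /* per-fsid attribute    */
           bool     have_fileid;            /* fileid was returned?  */
           uint64_t fileid;
       };

       enum fh_cmp { FH_DISTINCT, FH_SAME, FH_UNKNOWN };

       static enum fh_cmp same_object(const struct fattrs *a,
                                      const struct fattrs *b)
       {
           if (a->fsid_major != b->fsid_major ||
               a->fsid_minor != b->fsid_minor)
               return FH_DISTINCT;      /* different fsid            */
           if (a->unique_handles)
               return FH_DISTINCT;      /* unique_handles is TRUE    */
           if (!a->have_fileid || !b->have_fileid)
               return FH_UNKNOWN;       /* cannot tell; do not cache */
           if (a->fileid != b->fileid)
               return FH_DISTINCT;      /* different fileid          */
           return FH_SAME;              /* otherwise the same object */
       }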

10.4.  Open Delegation

   When a file is being OPENed, the server may delegate further handling
   of opens and closes for that file to the opening client.  Any such
   delegation is recallable, since the circumstances that allowed for
   the delegation are subject to change.  In particular, the server may
   receive a conflicting OPEN from another client; the server must then
   recall the delegation before deciding whether the OPEN from the other
   client may be granted.  Making a delegation is up to the server and
   clients should not assume that any particular OPEN either will or
   will not result in an open delegation.  The following is a typical
   set of conditions that servers might use in deciding whether OPEN
   should be delegated:

   o  The client must be able to respond to the server's callback
      requests.  If a backchannel has been established, the server will
      send a CB_COMPOUND request, containing a single operation,
      CB_SEQUENCE, for a test of backchannel availability.

   o  The client must have responded properly to previous recalls.

   o  There must be no current open conflicting with the requested
      delegation.

   o  There should be no current delegation that conflicts with the
      delegation being requested.

   o  The probability of future conflicting open requests should be low
      based on the recent history of the file.

   o  The existence of any server-specific semantics of OPEN/CLOSE that
      would make the required handling incompatible with the prescribed
      handling that the delegated client would apply (see below).

   There are two types of open delegations, read and write.  A read open
   delegation allows a client to handle, on its own, requests to open a
   file for reading that do not deny read access to others.  Multiple
   read open delegations may be outstanding simultaneously and do not
   conflict.  A write open delegation allows the client to handle, on
   its own, all opens.  Only one write open delegation may exist for a
   given file at a given time and it is inconsistent with any read open
   delegations.

   When a client has a read open delegation, it is assured that neither
   the contents, the attributes (with the exception of time_access), nor
   the names of any links to the file will change without its knowledge,
   so long as the delegation is held.  When a client has a write open
   delegation, it may modify the file data locally since no other client
   will be accessing the file's data.  The client holding a write
   delegation may only locally affect file attributes which are
   intimately connected with the file data: size, change, time_access,
   time_metadata, and time_modify.  All other attributes must be
   reflected on the server.

   When a client has an open delegation, it does not need to send OPENs
   or CLOSEs to the server.  Instead the client may update the
   appropriate status internally.  For a read open delegation, opens
   that cannot be handled locally (opens for write or that deny read
   access) must be sent to the server.

   When an open delegation is made, the reply to the OPEN contains an
   open delegation structure which specifies the following:

   o  the type of delegation (read or write).

   o  space limitation information to control flushing of data on close
      (write open delegation only, see Section 10.4.1).

   o  an nfsace4 specifying read and write permissions.

   o  a stateid to represent the delegation for READ and WRITE.

   The delegation stateid is separate and distinct from the stateid for
   the OPEN proper.  The standard stateid, unlike the delegation
   stateid, is associated with a particular lock-owner and will continue
   to be valid after the delegation is recalled and the file remains
   open.

   When a request internal to the client is made to open a file and an
   open delegation is in effect, it will be accepted or rejected solely
   on the basis of the following conditions.  Any requirement for other
   checks to be made by the delegate should result in open delegation
   being denied so that the checks can be made by the server itself.

   o  The access and deny bits for the request and the file as described
      in Section 9.7.

   o  The read and write permissions as determined below.

   The nfsace4 passed with the delegation can be used to avoid frequent
   ACCESS calls.  The permission check should be as follows (a brief
   sketch appears after the list):

   o  If the nfsace4 indicates that the open may be done, then it should
      be granted without reference to the server.

   o  If the nfsace4 indicates that the open may not be done, then an
      ACCESS request must be sent to the server to obtain the definitive
      answer.
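
   A minimal sketch of this check follows.  The helper prototypes
   delegation_ace_allows and access_op_allows are hypothetical stand-
   ins for full nfsace4 evaluation (Section 6) and for sending an
   ACCESS operation to the server.

       #include <stdbool.h>
       #include <stdint.h>

       typedef uint32_t access4;   /* requested access bits */

       /* Hypothetical helpers, not defined here. */
       bool delegation_ace_allows(const void *deleg_ace, access4 want);
       bool access_op_allows(access4 want);   /* sends ACCESS */

       static bool local_open_permitted(const void *deleg_ace,
                                        access4 want)
       {
           if (delegation_ace_allows(deleg_ace, want))
               return true;   /* grant locally, no server round trip */

           /* The nfsace4 may be more restrictive than the real ACL,
            * so a local denial is not definitive: ask the server. */
           return access_op_allows(want);
       }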

   The server may return an nfsace4 that is more restrictive than the
   actual ACL of the file.  This includes an nfsace4 that specifies
   denial of all access.  Note that some common practices such as
   mapping the traditional user "root" to the user "nobody" (see
   Section 5.9) may make it incorrect to return the actual ACL of the
   file in the delegation response.

   The use of a delegation together with various other forms of caching
   creates the possibility that no server authentication and
   authorization will ever be performed for a given user since all of
   the user's requests might be satisfied locally.  Where the client is
   depending on the server for authentication and authorization, the
   client should be sure authentication and authorization occurs for
   each user by use of the ACCESS operation.  This should be the case
   even if an ACCESS operation would not be required otherwise.  As
   mentioned before, the server may enforce frequent authentication by
   returning an nfsace4 denying all access with every open delegation.

10.4.1.  Open Delegation and Data Caching

   An OPEN delegation allows much of the message overhead associated
   with the opening and closing of files to be eliminated.  An open when an
   open delegation is in effect does not require that a validation
   message be sent to the server.  The continued endurance of the "read
   open delegation" provides a guarantee that no OPEN for write and thus
   no write has occurred.  Similarly, when closing a file opened for
   write and if write open delegation is in effect, the data written
   does not have to be written to the server until the open delegation
   is recalled.  The continued endurance of the open delegation provides
   a guarantee that no open and thus no read or write has been done by
   another client.

   For the purposes of open delegation, READs and WRITEs done without an
   OPEN are treated as the functional equivalents of a corresponding
   type of OPEN.  Although clients SHOULD NOT use special stateids when
   an open exists, delegation handling on the server can use the client
   ID associated with the current session to determine if the operation
   has been done by the holder of the delegation, in which case, no
   recall is necessary, or by another client, in which case the
   delegation must be recalled and the I/O must not proceed until the
   delegation is returned or revoked.
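
   The sketch below shows the shape of that server-side decision.  The
   delegation structure and the helper are simplified for illustration
   and are not part of the protocol.

       #include <stdbool.h>
       #include <stdint.h>

       typedef uint64_t clientid4;

       struct delegation {
           clientid4 holder;    /* client ID holding the delegation  */
           bool      is_write;  /* write (vs. read) open delegation  */
       };

       /* Decide whether a READ or WRITE sent with a special stateid
        * requires the delegation to be recalled before the I/O may
        * proceed. */
       static bool must_recall(const struct delegation *d,
                               clientid4 requester, bool write_io)
       {
           if (requester == d->holder)
               return false;        /* the holder's own I/O          */
           if (!d->is_write && !write_io)
               return false;        /* a read does not conflict with
                                       a read delegation             */
           return true;             /* conflicting access: recall    */
       }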

   With delegations, a client is able to avoid writing data to the
   server when the CLOSE of a file is serviced.  The file close system
   call is the usual point at which the client is notified of a lack of
   stable storage for the modified file data generated by the
   application.  At the close, file data is written to the server and
   through normal accounting the server is able to determine if the
   available file system space for the data has been exceeded (i.e.
   server returns NFS4ERR_NOSPC or NFS4ERR_DQUOT).  This accounting
   includes quotas.  The introduction of delegations requires that an
   alternative method be in place for the same type of communication to
   occur between client and server.

   In the delegation response, the server provides either the limit of
   the size of the file or the number of modified blocks and associated
   block size.  The server must ensure that the client will be able to
   write modified data to the server of a size equal to that provided in
   the original delegation.  The server must make this assurance for all
   outstanding delegations.  Therefore, the server must be careful in
   its management of available space for new or modified data taking
   into account available file system space and any applicable quotas.
   The server can recall delegations as a result of managing the
   available file system space.  The client should abide by the server's
   stated space limits for delegations.  If the client exceeds the stated
   limits for the delegation, the server's behavior is undefined.

   Based on server conditions, quotas or available file system space,
   the server may grant write open delegations with very restrictive
   space limitations.  The limitations may be defined in a way that will
   always force modified data to be flushed to the server on close.

   With respect to authentication, flushing modified data to the server
   after a CLOSE has occurred may be problematic.  For example, the user
   of the application may have logged off the client and unexpired
   authentication credentials may not be present.  In this case, the
   client may need to take special care to ensure that local unexpired
   credentials will in fact be available.  This may be accomplished by
   tracking the expiration time of credentials and flushing data well in
   advance of their expiration or by making private copies of
   credentials to assure their availability when needed.
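
   One simple form of such tracking is sketched below.  The fifteen-
   minute margin is an arbitrary illustrative value, not a protocol
   requirement.

       #include <stdbool.h>
       #include <time.h>

       /* Flush delegated, modified data well before the user's
        * credentials expire, so that a post-CLOSE flush never needs
        * credentials that are no longer available. */
       #define FLUSH_MARGIN_SECONDS (15 * 60)

       static bool should_flush_now(time_t credential_expiry)
       {
           if (credential_expiry == 0)    /* no expiring credential */
               return false;
           return time(NULL) + FLUSH_MARGIN_SECONDS >=
               credential_expiry;
       }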

10.4.2.  Open Delegation and File Locks

   When a client holds a write open delegation, lock operations are
   performed locally.  This includes those required for mandatory file
   locking.  This can be done since the delegation implies that there
   can be no conflicting locks.  Similarly, all of the revalidations
   that would normally be associated with obtaining locks and the
   flushing of data associated with the releasing of locks need not be
   done.

   When a client holds a read open delegation, lock operations are not
   performed locally.  All lock operations, including those requesting
   non-exclusive locks, are sent to the server for resolution.

10.4.3.  Handling of CB_GETATTR

   The server needs to employ special handling for a GETATTR where the
   target is a file that has a write open delegation in effect.  The
   reason for this is that the client holding the write delegation may
   have modified the data and the server needs to reflect this change to
   the second client that submitted the GETATTR.  Therefore, the client
   holding the write delegation needs to be interrogated.  The server
   will use the CB_GETATTR operation.  The only attributes that the
   server can reliably query via CB_GETATTR are size and change.

   Since CB_GETATTR is being used to satisfy another client's GETATTR
   request, the server only needs to know if the client holding the
   delegation has a modified version of the file.  If the client's copy
   of the delegated file is not modified (data or size), the server can
   satisfy the second client's GETATTR request from the attributes
   stored locally at the server.  If the file is modified, the server
   only needs to know about this modified state.  If the server
   determines that the file is currently modified, it will respond to
   the second client's GETATTR as if the file had been modified locally
   at the server.

   Since the form of the change attribute is determined by the server
   and is opaque to the client, the client and server need to agree on a
   method of communicating the modified state of the file.  For the size
   attribute, the client will report its current view of the file size.
   For the change attribute, the handling is more involved.

   For the client, the following steps will be taken when receiving a
   write delegation:

   o  The value of the change attribute will be obtained from the server
      and cached.  Let this value be represented by c.

   o  The client will create a value greater than c that will be used
      for communicating that modified data is held at the client.  Let
      this value be represented by d.

   o  When the client is queried via CB_GETATTR for the change
      attribute, it checks to see if it holds modified data.  If the
      file is modified, the value d is returned for the change attribute
      value.  If this file is not currently modified, the client returns
      the value c for the change attribute.

   For simplicity of implementation, the client MAY for each CB_GETATTR
   return the same value d.  This is true even if, between successive
   CB_GETATTR operations, the client again modifies the file's data
   or metadata in its cache.  The client can return the same value
   because the only requirement is that the client be able to indicate
   to the server that the client holds modified data.  Therefore, the
   value of d may always be c + 1.
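
   A client-side sketch of this scheme follows, assuming the change
   attribute is carried as a 64-bit unsigned integer; the surrounding
   state structure is a local invention of this example.

       #include <stdbool.h>
       #include <stdint.h>

       typedef uint64_t changeid4;

       struct write_deleg_state {
           changeid4 c;          /* change value cached at grant     */
           bool      modified;   /* dirty data or metadata cached?   */
       };

       /* Change attribute value to report in a CB_GETATTR reply.  Any
        * value greater than c signals that modified state is held;
        * c + 1 is the simplest choice and may be reused on every
        * CB_GETATTR. */
       static changeid4
       cb_getattr_change(const struct write_deleg_state *s)
       {
           return s->modified ? s->c + 1 : s->c;
       }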

   While the change attribute is opaque to the client in the sense that
   it has no idea what units of time, if any, the server is counting
   change with, it is not opaque in that the client has to treat it as
   an unsigned integer, and the server has to be able to see the results
   of the client's changes to that integer.  Therefore, the server MUST
   encode the change attribute in network order when sending it to the
   client.  The client MUST decode it from network order to its native
   order when receiving it, and the client MUST encode it in network
   order when sending it to the server.  For this reason, change is
   defined as an unsigned integer rather than an opaque array of bytes.

   For the server, the following steps will be taken when providing a
   write delegation:

   o  Upon providing a write delegation, the server will cache a copy of
      the change attribute in the data structure it uses to record the
      delegation.  Let this value be represented by sc.

   o  When a second client sends a GETATTR operation on the same file to
      the server, the server obtains the change attribute from the first
      client.  Let this value be cc.

   o  If the value cc is equal to sc, the file is not modified and the
      server returns the current values for change, time_metadata, and
      time_modify (for example) to the second client.

   o  If the value cc is NOT equal to sc, the file is currently modified
      at the first client and most likely will be modified at the server
      at a future time.  The server then uses its current time to
      construct attribute values for time_metadata and time_modify.  A
      new value of sc, which we will call nsc, is computed by the
      server, such that nsc >= sc + 1.  The server then returns the
      constructed time_metadata, time_modify, and nsc values to the
      requester.  The server replaces sc in the delegation record with
      nsc.  To prevent the possibility of time_modify, time_metadata,
      and change from appearing to go backward (which would happen if
      the client holding the delegation fails to write its modified data
      to the server before the delegation is revoked or returned), the
      server SHOULD update the file's metadata record with the
      constructed attribute values.  For reasons of reasonable
      performance, committing the constructed attribute values to stable
      storage is OPTIONAL.

   As discussed earlier in this section, the client MAY return the same
   cc value on subsequent CB_GETATTR calls, even if the file was
   modified in the client's cache yet again between successive
   CB_GETATTR calls.  Therefore, the server must assume that the file
   has been modified yet again, and MUST take care to ensure that the
   new nsc it constructs and returns is greater than the previous nsc it
   returned.  An example implementation's delegation record would
   satisfy this mandate by including a boolean field (let us call it
   "modified") that is set to FALSE when the delegation is granted, and
   an sc value set at the time of grant to the change attribute value.
   The modified field would be set to TRUE the first time cc != sc, and
   would stay TRUE until the delegation is returned or revoked.  The
   processing for constructing nsc, time_modify, and time_metadata would
   use this pseudo code:

        if (!modified) {
            /* not yet known modified: query both change and size */
            do CB_GETATTR for change and size;

            if (cc != sc)
                modified = TRUE;
        } else {
            /* already known modified: only the size is needed */
            do CB_GETATTR for size;
        }

        if (modified) {
            /* construct nsc > sc and synthesized times, and save them */
            sc = sc + 1;
            time_modify = time_metadata = current_time;
            update sc, time_modify, time_metadata into file's metadata;
        }


   This would return to the client (that sent GETATTR) the attributes
   it requested, but would make sure the size comes from what
   CB_GETATTR returned.  The server would not update the file's
   metadata with the client's modified size.

   In the case that the file attribute size is different from the
   server's current value, the server treats this as a modification
   regardless of the value of the change attribute retrieved via
   CB_GETATTR and responds to the second client as in the last step.

   This methodology resolves issues of clock differences between client
   and server and other scenarios where the use of CB_GETATTR breaks
   down.

   It should be noted that the server is under no obligation to use
   CB_GETATTR and therefore the server MAY simply recall the delegation
   to avoid its use.

10.4.4.  Recall of Open Delegation

   The following events necessitate recall of an open delegation:

   o  Potentially conflicting OPEN request (or READ/WRITE done with
      "special" stateid)

   o  SETATTR sent by another client

   o  REMOVE request for the file

   o  RENAME request for the file as either source or target of the
      RENAME

   Whether a RENAME of a directory in the path leading to the file
   results in recall of an open delegation depends on the semantics of
   the server's file system.  If that file system denies such RENAMEs
   when a file is open, the recall must be performed to determine
   whether the file in question is, in fact, open.

   In addition to the situations above, the server may choose to recall
   open delegations at any time if resource constraints make it
   advisable to do so.  Clients should always be prepared for the
   possibility of recall.

   When a client receives a recall for an open delegation, it needs to
   update state on the server before returning the delegation.  These
   same updates must be done whenever a client chooses to return a
   delegation voluntarily.  The following items of state need to be
   dealt with:

   o  If the file associated with the delegation is no longer open and
      no previous CLOSE operation has been sent to the server, a CLOSE
      operation must be sent to the server.

   o  If a file has other open references at the client, then OPEN
      operations must be sent to the server.  The appropriate stateids
      will be provided by the server for subsequent use by the client
      since the delegation stateid will no longer be valid.  These OPEN
      requests are done with the claim type of CLAIM_DELEGATE_CUR.  This
      will allow the presentation of the delegation stateid so that the
      client can establish the appropriate rights to perform the OPEN.
      (see Section 18.16, which describes the OPEN operation, for
      details).

   o  If there are granted file locks, the corresponding LOCK operations
      need to be performed.  This applies to the write open delegation
      case only.

   o  For a write open delegation, if at the time of recall the file is
      not open for write, all modified data for the file must be flushed
      to the server.  If the delegation had not existed, the client
      would have done this data flush before the CLOSE operation.

   o  For a write open delegation when a file is still open at the time
      of recall, any modified data for the file needs to be flushed to
      the server.

   o  With the write open delegation in place, it is possible that the
      file was truncated during the duration of the delegation.  For
      example, the truncation could have occurred as a result of an OPEN
      UNCHECKED with a size attribute value of zero.  Therefore, if a
      truncation of the file has occurred and this operation has not
      been propagated to the server, the truncation must occur before
      any modified data is written to the server.

   In the case of write open delegation, file locking imposes some
   additional requirements.  To precisely maintain the associated
   invariant, it is required to flush any modified data in any region
   for which a write lock was released while the write delegation was in
   effect.  However, because the write open delegation implies no other
   locking by other clients, a simpler implementation is to flush all
   modified data for the file (as described just above) if any write
   lock has been released while the write open delegation was in effect.

   An implementation need not wait until delegation recall (or deciding
   to voluntarily return a delegation) to perform any of the above
   actions, if implementation considerations (e.g. resource availability
   constraints) make that desirable.  Generally, however, the fact that
   the actual open state of the file may continue to change makes it not
   worthwhile to send information about opens and closes to the server,
   except as part of delegation return.  Only in the case of closing the
   open that resulted in obtaining the delegation would clients be
   likely to do this early, since, in that case, the close once done
   will not be undone.  Regardless of the client's choices on scheduling
   these actions, all must be performed before the delegation is
   returned, including (when applicable) the close that corresponds to
   the open that resulted in the delegation.  These actions can be
   performed either in previous requests or in previous operations in
   the same COMPOUND request.
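
   The fragment below sketches one possible ordering of these steps for
   returning a write open delegation.  Every helper named here is
   hypothetical; each stands for one or more protocol requests (OPEN
   with CLAIM_DELEGATE_CUR, LOCK, WRITE/COMMIT, CLOSE, DELEGRETURN),
   and error handling and the voluntary-return case are omitted.

       /* Hypothetical helpers; each issues one or more requests. */
       void reopen_opens_claim_delegate_cur(void);  /* OPEN(s)      */
       void reestablish_byte_range_locks(void);     /* LOCK(s)      */
       void flush_modified_data(void);              /* WRITE/COMMIT */
       void close_if_no_longer_open(void);          /* CLOSE        */
       void delegreturn(void);                      /* DELEGRETURN  */

       void return_write_delegation(void)
       {
           reopen_opens_claim_delegate_cur();
           reestablish_byte_range_locks();
           flush_modified_data();
           close_if_no_longer_open();
           delegreturn();   /* only after all of the state above has
                               been established on the server       */
       }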

10.4.5.  Clients that Fail to Honor Delegation Recalls

   A client may fail to respond to a recall for various reasons, such as
   a failure of the backchannel from server to the client.  The client
   may be unaware of a failure in the backchannel.  This lack of
   awareness could result in the client finding out long after the
   failure that its delegation has been revoked, and another client has
   modified the data for which the client had a delegation.  This is
   especially a problem for the client that held a write delegation.

   Status bits returned by SEQUENCE operations help to provide an
   alternate way of informing the client of issues regarding the status
   of the backchannel and of recalled delegations.  When the backchannel
   is not available, the server returns the status bit
   SEQ4_STATUS_CB_PATH_DOWN on SEQUENCE operations.  The client can
   react by attempting to re-establish the backchannel and by returning
   recallable objects if a backchannel cannot be successfully re-
   established.

   Whether the backchannel is functioning or not, it may be that the
   recalled delegation is not returned.  Note that the client's lease
   might still be renewed, even though the recalled delegation is not
   returned.  In this situation, servers SHOULD revoke delegations that
   are not returned in a period of time equal to the lease period.  This
   period of time should allow the client time to note the backchannel-
   down status and re-establish the backchannel.

   When delegations are revoked, the server will return with the
   SEQ4_STATUS_RECALLABLE_STATE_REVOKED status bit set on subsequent
   SEQUENCE operations.  The client should note this and then use
   TEST_STATEID to find which delegations have been recalled.
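
   A sketch of this client reaction follows.  The two flag values are
   those given by the protocol's XDR definition; the helper functions
   and the specific recovery operations chosen (for example,
   BIND_CONN_TO_SESSION to re-establish the backchannel) are
   assumptions of this example.

       #include <stdbool.h>
       #include <stdint.h>

       /* Flag values per the NFSv4.1 XDR definition. */
       #define SEQ4_STATUS_CB_PATH_DOWN              0x00000001
       #define SEQ4_STATUS_RECALLABLE_STATE_REVOKED  0x00000040

       /* Hypothetical client actions. */
       bool reestablish_backchannel(void); /* BIND_CONN_TO_SESSION */
       void return_recallable_objects(void);  /* DELEGRETURN       */
       void find_revoked_delegations(void);   /* TEST_STATEID      */

       static void handle_sequence_status(uint32_t sr_status_flags)
       {
           if (sr_status_flags & SEQ4_STATUS_CB_PATH_DOWN) {
               if (!reestablish_backchannel())
                   return_recallable_objects();
           }
           if (sr_status_flags & SEQ4_STATUS_RECALLABLE_STATE_REVOKED)
               find_revoked_delegations();
       }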

10.4.6.  Delegation Revocation

   At the point a delegation is revoked, if there are associated opens
   on the client, these opens may or may not be revoked.  If no lock or
   open is granted that is inconsistent with the existing open, the
   stateid for the open may remain valid, and be disconnected from the
   revoked delegation, just as would be the case if the delegation were
   returned.

   For example, if an OPEN for read-write with DENY=NONE is associated
   with the delegation, granting of another such OPEN to a different
   client will revoke the delegation but need not revoke the OPEN, since
   no lock inconsistent with that OPEN has been granted.  On the other
   hand, if an OPEN denying write is granted, then the existing open
   must be revoked.

   When opens and/or locks are revoked, the applications holding these
   opens or locks need to be notified.  This notification usually occurs
   by returning errors for READ/WRITE operations or when a close is
   attempted for the open file.

   If no opens exist for the file at the point the delegation is
   revoked, then notification of the revocation is unnecessary.
   However, if there is modified data present at the client for the
   file, the user of the application should be notified.  Unfortunately,
   it may not be possible to notify the user since active applications
   may not be present at the client.  See Section 10.5.1 for additional
   details.

10.4.7.  Delegations via WANT_DELEGATION

   In addition to providing delegations as part of the reply to OPEN
   operations, servers MAY provide delegations separate from open, via
   the OPTIONAL WANT_DELEGATION operation.  This allows delegations to
   be obtained in advance of an OPEN that might benefit from them, for
   objects which are not a valid target of OPEN, or to deal with cases
   in which a delegation has been recalled and the client wants to make
   an attempt to re-establish it if the absence of use by other clients
   allows that.

   The WANT_DELEGATION operation may be performed on any type of file
   object other than a directory.

   When a delegation is obtained using WANT_DELEGATION, any open files
   for the same filehandle held by that client are to be treated as
   subordinate to the delegation, just as if they had been created using
   an OPEN of type CLAIM_DELEGATE_CUR.  They are otherwise unchanged as
   to seqid, access and deny modes, and the relationship with byte-range
   locks.  Similarly, existing byte-range locks subordinate to an open
   which becomes subordinate to a delegation, become indirectly
   subordinate to that new delegation.

   The WANT_DELEGATION operation provides for delivery of delegations
   via callbacks, when the delegations are not immediately available.
   When a requested delegation is available, it is delivered to the
   client via a CB_PUSH_DELEG operation.  When this happens, open files
   for the same filehandle become subordinate to the new delegation at
   the point at which the delegation is delivered, just as if they had
   been created using an OPEN of type CLAIM_DELEGATE_CUR.  The same
   applies to existing byte-range locks subordinate to such an open.

10.5.  Data Caching and Revocation

   When locks and delegations are revoked, the assumptions upon which
   successful caching depend are no longer guaranteed.  For any locks or
   share reservations that have been revoked, the corresponding state-
   owner needs to be notified.  This notification includes applications
   with a file open that has a corresponding delegation which has been
   revoked.  Cached data associated with the revocation must be removed
   from the client.  In the case of modified data existing in the
   client's cache, that data must be removed from the client without it
   being written to the server.  As mentioned, the assumptions made by
   the client are no longer valid at the point when a lock or delegation
   has been revoked.  For example, another client may have been granted
   a conflicting lock after the revocation of the lock at the first
   client.  Therefore, the data within the lock range may have been
   modified by the other client.  Obviously, the first client is unable
   to guarantee to the application what has occurred to the file in the
   case of revocation.

   Notification to a state-owner will in many cases consist of simply
   returning an error on the next and all subsequent READs/WRITEs to the
   open file or on the close.  Where the methods available to a client
   make such notification impossible because errors for certain
   operations may not be returned, more drastic action such as signals
   or process termination may be appropriate.  The justification for
   this is that an invariant on which an application depends may be
   violated.  Depending on how errors are typically treated for the
   client operating environment, further levels of notification
   including logging, console messages, and GUI pop-ups may be
   appropriate.

10.5.1.  Revocation Recovery for Write Open Delegation

   Revocation recovery for a write open delegation poses the special
   issue of modified data in the client cache while the file is not
   open.  In this situation, any client which does not flush modified
   data to the server on each close must ensure that the user receives
   appropriate notification of the failure as a result of the
   revocation.  Since such situations may require human action to
   correct problems, notification schemes in which the appropriate user
   or administrator is notified may be necessary.  Logging and console
   messages are typical examples.

   If there is modified data on the client, it must not be flushed
   normally to the server.  A client may attempt to provide a copy of
   the file data as modified during the delegation under a different
   name in the file system name space to ease recovery.  Note that when
   the client can determine that the file has not been modified by any
   other client, or when the client has a complete cached copy of the
   file in question, such a saved copy of the client's view of the file
   may be of particular value for recovery.  In other cases, recovery
   using a copy of the file based partially on the client's cached data
   and partially on the server's copy as modified by other clients will
   be anything but straightforward, so clients may avoid saving file
   contents in these situations or mark the results specially to warn
   users of possible problems.

   Saving of such modified data in delegation revocation situations may
   be limited to files of a certain size or might be used only when
   sufficient disk space is available within the target file system.
   Such saving may also be restricted to situations when the client has
   sufficient buffering resources to keep the cached copy available
   until it is properly stored to the target file system.

10.6.  Attribute Caching

   This section pertains to the caching of a file's attributes on a
   client when that client does not hold a delegation on the file.

   The attributes discussed in this section do not include named
   attributes.  Individual named attributes are analogous to files and
   caching of the data for these needs to be handled just as data
   caching is for ordinary files.  Similarly, LOOKUP results from an
   OPENATTR directory are to be cached on the same basis as any other
   pathnames and similarly for directory contents.

   Clients may cache file attributes obtained from the server and use
   them to avoid subsequent GETATTR requests.  Such caching is write
   through in that modification to file attributes is always done by
   means of requests to the server and should not be done locally and
   cached.  The exceptions to this are modifications to attributes that
   are intimately connected with data caching.  Therefore, extending a
   file by writing data to the local data cache is reflected immediately
   in the size as seen on the client without this change being
   immediately reflected on the server.  Normally such changes are not
   propagated directly to the server but when the modified data is
   flushed to the server, analogous attribute changes are made on the
   server.  When open delegation is in effect, the modified attributes
   may be returned to the server in reaction to a CB_RECALL call.

   The result of local caching of attributes is that the attribute
   caches maintained on individual clients will not be coherent.
   Changes made in one order on the server may be seen in a different
   order on one client and in a third order on a different client.

   The typical file system application programming interfaces do not
   provide means to atomically modify or interrogate attributes for
   multiple files at the same time.  The following rules provide an
   environment where the potential incoherences mentioned above can be
   reasonably managed.  These rules are derived from the practice of
   previous NFS protocols.

   o  All attributes for a given file (per-fsid attributes excepted) are
      cached as a unit at the client so that no non-serializability can
      arise within the context of a single file.

   o  An upper time boundary is maintained on how long a client cache
      entry can be kept without being refreshed from the server.

   o  When operations are performed that change attributes at the
      server, the updated attribute set is requested as part of the
      containing RPC.  This includes directory operations that update
      attributes indirectly.  This is accomplished by following the
      modifying operation with a GETATTR operation and then using the
      results of the GETATTR to update the client's cached attributes.

   Note that if the full set of attributes to be cached is requested by
   READDIR, the results can be cached by the client on the same basis as
   attributes obtained via GETATTR.

   A client may validate its cached version of attributes for a file by
   fetching just the change and time_access attributes and assuming
   that if the change attribute has the same value as it did when the
   attributes were cached, then no attributes other than time_access
   have changed.  The reason why time_access is also fetched is because
   many servers operate in environments where the operation that updates
   change does not update time_access.  For example, POSIX file
   semantics do not update access time when a file is modified by the
   write system call [17].  Therefore, the client that wants a current
   time_access value should fetch it with change during the attribute
   cache validation processing and update its cached time_access.
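
   The following sketch shows such a revalidation step.  The cache
   entry layout and the staleness_bound parameter are hypothetical
   client-side details, not protocol elements.

       #include <stdbool.h>
       #include <stdint.h>
       #include <time.h>

       typedef uint64_t changeid4;

       struct attr_cache_entry {
           changeid4 change;       /* change value when cached       */
           uint64_t  time_access;  /* cached time_access (seconds)   */
           time_t    expires;      /* staleness bound for this entry */
           bool      valid;
       };

       /* Apply the result of a GETATTR that fetched just change and
        * time_access. */
       static void revalidate(struct attr_cache_entry *e,
                              changeid4 new_change,
                              uint64_t new_time_access,
                              time_t staleness_bound)
       {
           if (e->valid && new_change == e->change) {
               /* Only time_access can have changed: refresh it and
                * extend the cache lifetime. */
               e->time_access = new_time_access;
               e->expires = time(NULL) + staleness_bound;
           } else {
               /* change moved: cached attributes (and any cached
                * data) can no longer be trusted. */
               e->valid = false;
           }
       }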

   The client may maintain a cache of modified attributes for those
   attributes intimately connected with data of modified regular files
   (size, time_modify, and change).  Other than those three attributes,
   the client MUST NOT maintain a cache of modified attributes.
   Instead, attribute changes are immediately sent to the server.

   In some operating environments, the equivalent to time_access is
   expected to be implicitly updated by each read of the content of the
   file object.  If an NFS client is caching the content of a file
   object, whether it is a regular file, directory, or symbolic link,
   the client SHOULD NOT update the time_access attribute (via SETATTR
   or a small READ or READDIR request) on the server with each read that
   is satisfied from cache.  The reason is that this can defeat the
   performance benefits of caching content, especially since an explicit
   SETATTR of time_access may alter the change attribute on the server.
   If the change attribute changes, clients that are caching the content
   will think the content has changed, and will re-read unmodified data
   from the server.  Nor is the client encouraged to maintain a modified
   version of time_access in its cache, since this would mean that the
   client will either eventually have to write the access time to the
   server with bad performance effects, or it would never update the
   server's time_access, thereby resulting in a situation where an
   application that caches access time between a close and open of the
   same file observes the access time oscillating between the past and
   present.  The time_access attribute always means the time of last
   access to a file by a read that was satisfied by the server.  This
   way clients will tend to see only time_access changes that go forward
   in time.

10.7.  Data and Metadata Caching and Memory Mapped Files

   Some operating environments include the capability for an application
   to map a file's content into the application's address space.  Each
   time the application accesses a memory location that corresponds to a
   block that has not been loaded into the address space, a page fault
   occurs and the file is read (or if the block does not exist in the
   file, the block is allocated and then instantiated in the
   application's address space).

   As long as each memory mapped access to the file requires a page
   fault, the relevant attributes of the file that are used to detect
   access and modification (time_access, time_metadata, time_modify, and
   change) will be updated.  However, in many operating environments,
   when page faults are not required these attributes will not be
   updated on reads or updates to the file via memory access
   (regardless of whether the file is a local file or is being accessed
   remotely).  A
   client or server MAY fail to update attributes of a file that is
   being accessed via memory mapped I/O. This has several implications:

   o  If there is an application on the server that has memory mapped a
      file that a client is also accessing, the client may not be able
      to get a consistent value of the change attribute to determine
      whether its cache is stale or not.  A server that knows that the
      file is memory mapped could always pessimistically return updated
      values for change so as to force the application to always get the
      most up to date data and metadata for the file.  However, due to
      the negative performance implications of this, such behavior is
      OPTIONAL.

   o  If the memory mapped file is not being modified on the server, and
      instead is just being read by an application via the memory mapped
      interface, the client will not see an updated time_access
      attribute.  However, in many operating environments, neither will
      any process running on the server.  Thus NFS clients are at no
      disadvantage with respect to local processes.

   o  If there is another client that is memory mapping the file, and if
      that client is holding a write delegation, the same set of issues
      as discussed in the previous two bullet items apply.  So, when a
      server does a CB_GETATTR to a file that the client has modified in
      its cache, the reply from CB_GETATTR will not necessarily be
      accurate.  As discussed earlier, the client's obligation is to
      report that the file has been modified since the delegation was
      granted, not whether it has been modified again between successive
      CB_GETATTR calls, and the server MUST assume that any file the
      client has modified in cache has been modified again between
      successive CB_GETATTR calls.  Depending on the nature of the
      client's memory management system, this weak obligation may not be
      possible.  A client MAY return stale information in CB_GETATTR
      whenever the file is memory mapped.

   o  The mixture of memory mapping and file locking on the same file is
      problematic.  Consider the following scenario, where a page size
      on each client is 8192 bytes.

      *  Client A memory maps first page (8192 bytes) of file X

      *  Client B memory maps first page (8192 bytes) of file X

      *  Client A write locks first 4096 bytes

      *  Client B write locks second 4096 bytes

      *  Client A, via a STORE instruction, modifies part of its locked
         region.

      *  Simultaneous to client A, client B executes a STORE on part of
         its locked region.

   Here the challenge is for each client to resynchronize to get a
   correct view of the first page.  In many operating environments, the
   virtual memory management systems on each client only know a page is
   modified, not that a subset of the page corresponding to the
   respective lock regions has been modified.  So it is not possible for
   each client to do the right thing, which is to only write to the
   server that portion of the page that is locked.  For example, if
   client A simply writes out the page, and then client B writes out the
   page, client A's data is lost.

   Moreover, if mandatory locking is enabled on the file, then we have a
   different problem.  When clients A and B execute the STORE
   instructions, the resulting page faults require a byte-range lock on
   the entire page.  Each client then tries to extend its locked range
   to the entire page, which results in a deadlock.  Communicating the
   NFS4ERR_DEADLOCK error to a STORE instruction is difficult at best.

   If a client is locking the entire memory mapped file, there is no
   problem with advisory or mandatory byte-range locking, at least until
   the client unlocks a region in the middle of the file.

   Given the above issues, the following are permitted:

   o  Clients and servers MAY deny memory mapping a file for which they
      know there are byte-range locks.

   o  Clients and servers MAY deny a byte-range lock on a file they know
      is memory mapped.

   o  A client MAY deny memory mapping a file that it knows requires
      mandatory locking for I/O. If mandatory locking is enabled after
      the file is opened and mapped, the client MAY deny the application
      further access to its mapped file.

10.8.  Name and Directory Caching without Directory Delegations

   The NFSv4.1 directory delegation facility (described in Section 10.9
   below) is OPTIONAL for servers to implement.  Even where it is
   implemented, it may not always be functional because of resource
   availability issues or other constraints.  Thus, it is important to
   understand how name and directory caching are done in the absence of
   directory delegations.  Those topics are discussed next, in
   Section 10.8.1.

10.8.1.  Name Caching

   The results of LOOKUP and READDIR operations may be cached to avoid
   the cost of subsequent LOOKUP operations.  Just as in the case of
   attribute caching, inconsistencies may arise among the various client
   caches.  To mitigate the effects of these inconsistencies and given
   the context of typical file system APIs, an upper time boundary is
   maintained on how long a client name cache entry can be kept without
   verifying that the entry has not been made invalid by a directory
   change operation performed by another client.

   When a client is not making changes to a directory for which there
   exist name cache entries, the client needs to periodically fetch
   attributes for that directory to ensure that it is not being
   modified.  After determining that no modification has occurred, the
   expiration time for the associated name cache entries may be updated
   to be the current time plus the name cache staleness bound.

   When a client is making changes to a given directory, it needs to
   determine whether there have been changes made to the directory by
   other clients.  It does this by using the change attribute as
   reported before and after the directory operation in the associated
   change_info4 value returned for the operation.  The server is able to
   communicate to the client whether the change_info4 data is provided
   atomically with respect to the directory operation.  If the change
   values are provided atomically, the client has a basis for
   determining, given proper care, whether other clients are modifying
   the directory in question.

   The simplest way to enable the client to make this determination is
   for the client to serialize all changes made to a specific directory.
   When this is done, and the server provides before and after values of
   the change attribute atomically, the client can simply compare the
   after value of the change attribute from one operation on a directory
   with the before value on the next subsequent operation modifying that
   directory.  When these are equal, the client is assured that no other
   client is modifying the directory in question.
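
   For the serialized case, the comparison can be as simple as the
   sketch below.  The change_info4 layout follows the protocol's XDR;
   the surrounding cache logic is hypothetical, and last_after is
   assumed to have been initialized from the directory's change
   attribute when the name cache entries were created.

       #include <stdbool.h>
       #include <stdint.h>

       typedef uint64_t changeid4;

       struct change_info4 {       /* as defined by the NFSv4.1 XDR  */
           bool      atomic;
           changeid4 before;
           changeid4 after;
       };

       /* Returns true if the name cache entries for the directory may
        * be kept (no other client modified it), false if they must be
        * purged.  *last_after holds the "after" value saved from this
        * client's previous operation on the directory. */
       static bool
       name_cache_still_valid(const struct change_info4 *ci,
                              changeid4 *last_after)
       {
           bool ok = ci->atomic && ci->before == *last_after;

           *last_after = ci->after; /* basis for the next comparison */
           return ok;
       }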

   When such serialization is not used, and there may be multiple
   simultaneous outstanding operations modifying a single directory sent
   from a single client, making this sort of determination can be more
   complicated, since two such operations which are recognized as
   complete in a different order than they were actually performed,
   might give an appearance consistent with modification being made by
   another client.  Where this appears to happen, the client needs to
   await the completion of all such modifications that were started
   previously, to see if the outstanding before and after change numbers
   can be sorted into a chain such that the before value of one change
   number matches the after value of a previous one, in a chain
   consistent with this client being the only one modifying the
   directory.

   In either of these cases, the client is able to determine whether the
   directory is being modified by another client.  If the comparison
   indicates that the directory was updated by another client, the name
   cache associated with the modified directory is purged from the
   client.  If the comparison indicates no modification, the name cache
   can be updated on the client to reflect the directory operation and
   the associated timeout extended.  The post-operation change value
   needs to be saved as the basis for future change_info4 comparisons.

   As demonstrated by the scenario above, name caching requires that the
   client revalidate name cache data by inspecting the change attribute
   of a directory at the point when the name cache item was cached.
   This requires that the server update the change attribute for
   directories when the contents of the corresponding directory are
   modified.  For a client to use the change_info4 information
   appropriately and correctly, the server must report the pre and post
   operation change attribute values atomically.  When the server is
   unable to report the before and after values atomically with respect
   to the directory operation, the server must indicate that fact in the
   change_info4 return value.  When the information is not atomically
   reported, the client should not assume that other clients have not
   changed the directory.

10.8.2.  Directory Caching

   The results of READDIR operations may be used to avoid subsequent
   READDIR operations.  Just as in the cases of attribute and name
   caching, inconsistencies may arise among the various client caches.
   To mitigate the effects of these inconsistencies, and given the
   context of typical file system APIs, the following rules should be
   followed:

   o  Cached READDIR information for a directory which is not obtained
      in a single READDIR operation must always be a consistent snapshot
      of directory contents.  This is determined by using a GETATTR
      before the first READDIR and after the last READDIR that
      contributes to the cache.

   o  An upper time boundary is maintained to indicate the length of
      time a directory cache entry is considered valid before the client
      must revalidate the cached information.

   The revalidation technique parallels that discussed in the case of
   name caching.  When the client is not changing the directory in
   question, checking the change attribute of the directory with GETATTR
   is adequate.  The lifetime of the cache entry can be extended at
   these checkpoints.  When a client is modifying the directory, the
   client needs to use the change_info4 data to determine whether there
   are other clients modifying the directory.  If it is determined that
   no other client modifications are occurring, the client may update
   its directory cache to reflect its own changes.

   As demonstrated previously, directory caching requires that the
   client revalidate directory cache data by inspecting the change
   attribute of a directory at the point when the directory was cached.
   This requires that the server update the change attribute for
   directories when the contents of the corresponding directory are
   modified.  For a client to use the change_info4 information
   appropriately and correctly, the server must report the pre and post
   operation change attribute values atomically.  When the server is
   unable to report the before and after values atomically with respect
   to the directory operation, the server must indicate that fact in the
   change_info4 return value.  When the information is not atomically
   reported, the client should not assume that other clients have not
   changed the directory.

10.9.  Directory Delegations

10.9.1.  Introduction to Directory Delegations

   Directory caching for the NFSv4.1 protocol, as previously described,
   is similar to file caching in previous versions.  Clients typically
   cache directory information for a duration determined by the client.
   At the end of a predefined timeout, the client will query the server
   to see if the directory has been updated.  By caching attributes,
   clients reduce the number of GETATTR calls made to the server to
   validate attributes.  Furthermore, frequently accessed files and
   directories, such as the current working directory, have their
   attributes cached on the client so that some NFS operations can be
   performed without having to make an RPC call.  By caching name and
   inode information about most recently looked up entries in a
   Directory Name Lookup Cache (DNLC), clients do not need to send
   LOOKUP calls to the server every time these files are accessed.

   This caching approach works reasonably well at reducing network
   traffic in many environments.  However, it does not address
   environments where there are numerous queries for files that do not
   exist.  In these cases of "misses", the client sends requests to the
   server in order to provide reasonable application semantics and
   promptly detect the creation of new directory entries.  Examples of
   high miss activity include compilation in software development
   environments.  The current behavior of NFS limits its potential
   scalability and wide-area sharing effectiveness in these types of
   environments.  Other distributed stateful file system architectures
   such as AFS and DFS have proven that adding state around directory
   contents can greatly reduce network traffic in high-miss
   environments.

   Delegation of directory contents is an OPTIONAL feature of NFSv4.1.
   Directory delegations provide traffic reduction benefits similar to
   those of file delegations.  By allowing clients to cache directory
   contents (in a read-only fashion) while being notified of changes,
   the client can avoid making frequent requests to interrogate the
   contents of slowly-changing directories, reducing network traffic and
   improving client performance.  It can also simplify the task of
   determining whether other clients are making changes to the directory
   when the client itself is making many changes to the directory and
   changes are not serialized.

   Directory delegations allow improved namespace cache consistency to
   be achieved through delegations and synchronous recalls, in the
   absence of notifications.  In addition, if time-based consistency is
   sufficient, asynchronous notifications can provide performance
   benefits for the client, and possibly the server, under some common
   operating conditions such as slowly-changing and/or very large
   directories.

10.9.2.  Directory Delegation Design

   NFSv4.1 introduces the GET_DIR_DELEGATION (Section 18.39) operation
   to allow the client to ask for a directory delegation.  The
   delegation covers directory attributes and all entries in the
   directory.  If either of these change, the delegation will be
   recalled synchronously.  The operation causing the recall will have
   to wait until the recall is complete.  Any changes to directory
   entry attributes will not cause the delegation to be recalled.
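
   The following non-normative sketch (in Python) illustrates one way a
   client might combine a directory delegation with notifications.  The
   "client" object and its helpers (send_get_dir_delegation,
   read_directory, return_delegation, and the cache) are hypothetical
   placeholders for implementation-specific mechanisms; only the
   decision flow is of interest.

      # Non-normative sketch of client-side directory delegation use.
      # All helper names on "client" are hypothetical placeholders.

      def cache_directory(client, dir_fh):
          # Request a read-only delegation, registering interest in
          # entry additions/removals and directory attribute changes.
          deleg = client.send_get_dir_delegation(
              dir_fh, notifications=("entry_add_remove", "dir_attrs"))
          if deleg is None:
              return None     # server declined; fall back to timed caching
          client.cache[dir_fh] = client.read_directory(dir_fh)
          return deleg

      def on_notification(client, dir_fh, change):
          # Asynchronous notification: update the cache in place; the
          # delegation itself is not recalled.
          client.cache[dir_fh].apply(change)

      def on_recall(client, dir_fh):
          # Synchronous recall: the cached contents can no longer be
          # trusted, so drop them and return the delegation promptly.
          client.cache.pop(dir_fh, None)
          client.return_delegation(dir_fh)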

   In addition to asking for delegations, a client can also ask for
   notifications for certain events.  These events include changes to
   the directory's attributes and/or its contents.  If a client asks for
   notification for a certain event, the server will notify the client
   when that event occurs.  This will not result in the delegation being
   recalled for that client.  The notifications are asynchronous and
   provide a way of avoiding recalls in situations where a directory is
   changing often enough that the pure recall model may not be
   effective, while still allowing the client to obtain substantial
   benefit.  In the
   absence of notifications, once the delegation is recalled the client
   has to refresh its directory cache which might not be very efficient
   for very large directories.

   The delegation is read-only and the client may not make changes to
   the directory other than by performing NFSv4.1 operations that modify
   the directory or the associated file attributes so that the server
   has knowledge of these changes.  In order to keep the client
   namespace synchronized with the server, the server will, if the
   client has requested notifications, notify the client holding the
   delegation of the changes made as a result.  This is to avoid any
   need for subsequent GETATTR or READDIR calls to the server.  If a
   single client is holding the delegation and that client makes any
   changes to the directory (i.e. the changes are made via operations
   sent through a session associated with the client ID holding the
   delegation), the delegation will not be recalled.  Multiple clients
   may hold a delegation on the same directory, but if any such client
   modifies the directory, the server MUST recall the delegation from
   the other clients, unless those clients have made provisions to be
   notified of that sort of modification.
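
   The recall-versus-notify decision just described can be summarized
   by the following non-normative sketch.  The Holder record, the
   notify and recall callables, and the set of requested notification
   kinds are illustrative assumptions, not protocol elements.

      from dataclasses import dataclass, field

      @dataclass
      class Holder:
          client_id: int
          requested_notifications: set = field(default_factory=set)

      def directory_modified(holders, modifying_client_id, change_kind,
                             notify, recall):
          for holder in holders:
              if holder.client_id == modifying_client_id:
                  # The holder itself made the change via its own
                  # session; its delegation is not recalled.
                  continue
              if change_kind in holder.requested_notifications:
                  notify(holder, change_kind)  # asynchronous notification
              else:
                  recall(holder)               # delegation MUST be recalled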

   Delegations can be recalled by the server at any time.  Normally, the
   server will recall the delegation when the directory changes in a way
   that is not covered by the notification, or when the directory
   changes and notifications have not been requested.  If another client
   removes the directory for which a delegation has been granted, the
   server will recall the delegation.

10.9.3.  Attributes in Support of Directory Notifications

   See Section 5.11 for a description of the attributes associated with
   directory notifications.

10.9.4.  Directory Delegation Recall

   The server will recall the directory delegation by sending a callback
   to the client.  It will use the same callback procedure as used for
   recalling file delegations.  The server will recall the delegation
   when the directory changes in a way that is not covered by the
   notification.  However, the server need not recall the delegation if
   attributes of an entry within the directory change.

   If the server notices that handing out a delegation for a directory
   is causing too many notifications to be sent out, it may decide not
   to hand out delegations for that directory, or recall those already
   granted.  If a client tries to remove the directory for which a
   delegation has been granted, the server will recall all associated
   delegations.

   The implementation sections for a number of operations describe
   situations in which notification or delegation recall would be
   required under some common circumstances.  In this regard, a set of
   caveats similar to those listed in Section 10.2 applies.

   o  For CREATE, see Section 18.4.4.

   o  For LINK, see Section 18.9.4.

   o  For OPEN, see Section 18.16.4.

   o  For REMOVE, see Section 18.25.4.

   o  For RENAME, see Section 18.26.4.

   o  For SETATTR, see Section 18.30.4.

10.9.5.  Directory Delegation Recovery

   Recovery from client or server restart for state on regular files has
   two main goals, avoiding the necessity of breaking application
   guarantees with respect to locked files and delivery of updates
   cached at the client.  Neither of these goals applies to directories
   protected by read delegations and notifications.  Thus, no provision
   is made for reclaiming directory delegations in the event of client
   or server restart.  The client can simply establish a directory
   delegation in the same fashion as was done initially.


11.  Multi-Server Namespace

   NFSv4.1 supports attributes that allow a namespace to extend beyond
   the boundaries of a single server.  It is RECOMMENDED that clients
   and servers support construction of such multi-server namespaces.
   Use of such multi-server namespaces is OPTIONAL, however, and for
   many purposes, single-server namespaces are perfectly acceptable.
   Use of
   multi-server namespaces can provide many advantages, however, by
   separating a file system's logical position in a namespace from the
   (possibly changing) logistical and administrative considerations that
   result in particular file systems being located on particular
   servers.

11.1.  Location Attributes

   NFSv4.1 contains RECOMMENDED attributes that allow file systems on
   one server to be associated with one or more instances of that file
   system on other servers.  These attributes specify such file system
   instances by specifying a server address target (either as a DNS name
   representing one or more IP addresses or as a literal IP address)
   together with the path of that file system within the associated
   single-server namespace.

   The fs_locations_info RECOMMENDED attribute allows specification of
   one or more file system instance locations where the data
   corresponding to a given file system may be found.  This attribute
   provides to the client, in addition to information about file system
   instance locations, significant information about the various file
   system instance choices (e.g. priority for use, writability,
   currency, etc.).  It also includes information to help the client
   efficiently effect as seamless a transition as possible among
   multiple file system instances, when and if that should be necessary.

   The fs_locations RECOMMENDED attribute is inherited from NFSv4.0 and
   only allows specification of the file system locations where the data
   corresponding to a given file system may be found.  Servers SHOULD
   make this attribute available whenever fs_locations_info is
   supported, but client use of fs_locations_info is to be preferred.
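
   As a non-normative illustration, a client that prefers
   fs_locations_info but can fall back to fs_locations might structure
   the choice as follows.  The getattr_reply dictionary stands in for
   the attribute values actually returned by a GETATTR request and is
   purely hypothetical.

      def choose_location_attribute(getattr_reply):
          # Prefer the richer fs_locations_info when the server returned
          # it; otherwise fall back to the NFSv4.0-style fs_locations.
          if "fs_locations_info" in getattr_reply:
              return getattr_reply["fs_locations_info"]
          return getattr_reply.get("fs_locations")    # may be None

      reply = {"fs_locations": ["serverA:/export/fs1",
                                "serverB:/export/fs1"]}
      assert choose_location_attribute(reply) == reply["fs_locations"]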

11.2.  File System Presence or Absence

   A given location in an NFSv4.1 namespace (typically but not
   necessarily a multi-server namespace) can have a number of file
   system instance locations associated with it (via the fs_locations or
   fs_locations_info attribute).  There may also be an actual current
   file system at that location, accessible via normal namespace
   operations (e.g.  LOOKUP).  In this case, the file system is said to
   be "present" at that position in the namespace and clients will
   typically use it, reserving use of additional locations specified via
   the location-related attributes to situations in which the principal
   location is no longer available.

   When there is no actual file system at the namespace location in
   question, the file system is said to be "absent".  An absent file
   system contains no files or directories other than the root.  Any
   reference to it, except to access a small set of attributes useful in
   determining alternate locations, will result in an error,
   NFS4ERR_MOVED.  Note that if the server ever returns the error
   NFS4ERR_MOVED, it MUST support the fs_locations attribute and SHOULD
   support the fs_locations_info and fs_status attributes.

   While the error name suggests that we have a case of a file system
   which once was present, and has only become absent later, this is
   only one possibility.  A position in the namespace may be permanently
   absent with the set of file system(s) designated by the location
   attributes being the only realization.  The name NFS4ERR_MOVED
   reflects an earlier, more limited conception of its function, but
   this error will be returned whenever the referenced file system is
   absent, whether it has moved or not.

   Except in the case of GETATTR-type operations (to be discussed
   later), when the current filehandle at the start of an operation is
   within an absent file system, that operation is not performed and the
   error NFS4ERR_MOVED is returned, to indicate that the file system is
   absent on the current server.

   Because a GETFH cannot succeed if the current filehandle is within an
   absent file system, filehandles within an absent file system cannot
   be transferred to the client.  When a client does have filehandles
   within an absent file system, it is the result of obtaining them when
   the file system was present, and having the file system become absent
   subsequently.

   It should be noted that because the check for the current filehandle
   being within an absent file system happens at the start of every
   operation, operations that change the current filehandle so that it
   is within an absent file system will not result in an error.  This
   allows such combinations as PUTFH-GETATTR and LOOKUP-GETATTR to be
   used to get attribute information, particularly location attribute
   information, as discussed below.

   The RECOMMENDED file system attribute fs_status can be used to
   interrogate the present/absent status of a given file system.

11.3.  Getting Attributes for an Absent File System

   When a file system is absent, most attributes are not available, but
   it is necessary to allow the client access to the small set of
   attributes that are available, and most particularly those that give
   information about the correct current locations for this file system,
   fs_locations and fs_locations_info.

11.3.1.  GETATTR Within an Absent File System

   As mentioned above, an exception is made for GETATTR in that
   attributes may be obtained for a filehandle within an absent file
   system.  This exception only applies if the attribute mask contains
   at least one attribute bit that indicates the client is interested in
   a result regarding an absent file system: fs_locations,
   fs_locations_info, or fs_status.  If none of these attributes is
   requested, GETATTR will result in an NFS4ERR_MOVED error.

   When a GETATTR is done on an absent file system, the set of supported
   attributes is very limited.  Many attributes, including those that
   are normally REQUIRED, will not be available on an absent file
   system.  In addition to the attributes mentioned above (fs_locations,
   fs_locations_info, fs_status), the following attributes SHOULD be
   available on absent file systems (in the case of RECOMMENDED
   attributes, at least to the same degree that they are available on
   present file systems).

   change_policy:  This attribute is useful for absent file systems and
      can be helpful in summarizing to the client when any of the
      location-related attributes changes.

   fsid:  This attribute should be provided so that the client can
      determine file system boundaries, including, in particular, the
      boundary between present and absent file systems.  This value must
      be different from any other fsid on the current server and need
      have no particular relationship to fsids on any particular
      destination to which the client might be directed.

   mounted_on_fileid:  For objects at the top of an absent file system
      this attribute needs to be available.  Since the fileid is one
      which is within the present parent file system, there should be no
      need to reference the absent file system to provide this
      information.

   Other attributes SHOULD NOT be made available for absent file
   systems, even when it is possible to provide them.  The server should
   not assume that more information is always better and should avoid
   gratuitously providing additional information.

   When a GETATTR operation includes a bit mask for one of the
   attributes fs_locations, fs_locations_info, or fs_status, but where
   the bit mask includes attributes which are not supported, GETATTR
   will not return an error, but will return the mask of the actual
   attributes supported with the results.

   Handling of VERIFY/NVERIFY is similar to GETATTR in that if the
   attribute mask does not include fs_locations, fs_locations_info, or
   fs_status, the error NFS4ERR_MOVED will result.  It differs in that
   any appearance in the attribute mask of an attribute not supported
   for an absent file system (and note that this will include some
   normally REQUIRED attributes), will also cause an NFS4ERR_MOVED
   result.
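
   The difference between GETATTR and VERIFY/NVERIFY handling on an
   absent file system can be expressed as the following non-normative
   sketch.  ABSENT_FS_ATTRS is an assumed example of the attribute set
   a particular server supplies for absent file systems; an actual
   server's set may differ.

      LOCATION_ATTRS = {"fs_locations", "fs_locations_info",
                        "fs_status"}
      ABSENT_FS_ATTRS = LOCATION_ATTRS | {"change_policy", "fsid",
                                          "mounted_on_fileid"}

      def getattr_on_absent_fs(requested):
          if not (requested & LOCATION_ATTRS):
              return "NFS4ERR_MOVED"
          # Unsupported attributes are silently omitted; the reply's
          # attribute mask reports what was actually returned.
          return requested & ABSENT_FS_ATTRS

      def verify_on_absent_fs(requested):
          if not (requested & LOCATION_ATTRS):
              return "NFS4ERR_MOVED"
          if requested - ABSENT_FS_ATTRS:
              # Any unsupported attribute in the mask also yields the
              # error for VERIFY/NVERIFY.
              return "NFS4ERR_MOVED"
          return "proceed with the comparison"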

11.3.2.  READDIR and Absent File Systems

   A READDIR performed when the current filehandle is within an absent
   file system will result in an NFS4ERR_MOVED error, since, unlike the
   case of GETATTR, no such exception is made for READDIR.

   Attributes for an absent file system may be fetched via a READDIR for
   a directory in a present file system, when that directory contains
   the root directories of one or more absent file systems.  In this
   case, the handling is as follows:

   o  If the attribute set requested includes one of the attributes
      fs_locations, fs_locations_info, or fs_status, then fetching of
      attributes proceeds normally and no NFS4ERR_MOVED indication is
      returned, even when the rdattr_error attribute is requested.

   o  If the attribute set requested does not include one of the
      attributes fs_locations, fs_locations_info, or fs_status, then if
      the rdattr_error attribute is requested, each directory entry for
   the root of an absent file system will report NFS4ERR_MOVED as
      the value of the rdattr_error attribute.

   o  If the attribute set requested does not include any of the
      attributes fs_locations, fs_locations_info, fs_status, or
   rdattr_error, then the occurrence of the root of an absent file
      system within the directory will result in the READDIR failing
      with an NFS4ERR_MOVED error.

   o  The unavailability of an attribute because of a file system's
      absence, even one that is ordinarily REQUIRED, does not result in
      any error indication.  The set of attributes returned for the root
      directory of the absent file system in that case is simply
      restricted to those actually available.
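
   The handling above reduces to the following non-normative per-entry
   decision, applied when a directory entry is the root of an absent
   file system:

      LOCATION_ATTRS = {"fs_locations", "fs_locations_info",
                        "fs_status"}

      def absent_root_entry_handling(requested_attrs):
          if requested_attrs & LOCATION_ATTRS:
              # Proceed normally; only the attributes actually available
              # on the absent file system are returned, with no error.
              return "return available attributes"
          if "rdattr_error" in requested_attrs:
              return "set rdattr_error to NFS4ERR_MOVED for this entry"
          return "fail the READDIR with NFS4ERR_MOVED"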

11.4.  Uses of Location Information

   The location-bearing attributes (fs_locations and fs_locations_info)
   provide, together with the possibility of absent file systems, a
   number of important facilities in providing reliable, manageable, and
   scalable data access.

   When a file system is present, these attributes can provide
   alternative locations, to be used to access the same data, in the
   event of server failures, communications problems, or other
   difficulties that make continued access to the current file system
   impossible or otherwise impractical.  Under some circumstances
   multiple alternative locations may be used simultaneously to provide
   higher performance access to the file system in question.  Provision
   of such alternate locations is referred to as "replication" although
   there are cases in which replicated sets of data are not in fact
   present, and the replicas are instead different paths to the same
   data.

   When a file system is present and becomes absent, clients can be
   given the opportunity to have continued access to their data, at an
   alternate location.  In this case, a continued attempt to use the
   data in the now-absent file system will result in an NFS4ERR_MOVED
   error and at that point the successor locations (typically only one
   but multiple choices are possible) can be fetched and used to
   continue access.  Transfer of the file system contents to the new
   location is referred to as "migration", but it should be kept in mind
   that there are cases in which this term can be used, like
   "replication", when there is no actual data migration per se.

   Where a file system was not previously present, specification of file
   system location provides a means by which file systems located on one
   server can be associated with a namespace defined by another server,
   thus allowing a general multi-server namespace facility.  A
   designation of such a location, in place of an absent file system, is
   called a "referral".

   Because client support for location-related attributes is OPTIONAL, a
   server may (but is not required to) take action to hide migration and
   referral events from such clients, by acting as a proxy, for example.
   The server can determine the presence of client support from the
   arguments of the EXCHANGE_ID operation (see Section 18.35.3).

11.4.1.  File System Replication

   The fs_locations and fs_locations_info attributes provide alternative
   locations, to be used to access data in place of or in addition to
   the current file system instance.  On first access to a file system,
   the client should obtain the value of the set of alternate locations
   by interrogating the fs_locations or fs_locations_info attribute,
   with the latter being preferred.

   In the event that server failures, communications problems, or other
   difficulties make continued access to the current file system
   impossible or otherwise impractical, the client can use the alternate
   locations as a way to get continued access to its data.  Depending on
   specific attributes of these alternate locations, as indicated within
   the fs_locations_info attribute, multiple locations may be used
   simultaneously, to provide higher performance through the
   exploitation of multiple paths between client and target file system.

   The alternate locations may be physical replicas of the (typically
   read-only) file system data, or they may reflect alternate paths to
   the same server or provide for the use of various forms of server
   clustering in which multiple servers provide alternate ways of
   accessing the same physical file system.  How these different modes
   of file system transition are represented within the fs_locations and
   fs_locations_info attributes and how the client deals with file
   system transition issues will be discussed in detail below.

   Multiple server addresses, whether they are derived from a single
   entry with a DNS name representing a set of IP addresses, or from
   multiple entries each with its own server address, may correspond to
   the same actual server.  The fact that two addresses correspond to
   the same server is shown by a common so_major_id field within the
   eir_server_owner field returned by EXCHANGE_ID (see Section 18.35.3).
   For a detailed discussion of how server address targets interact with
   the determination of server identity specified by the server owner
   field, see Section 11.5.

11.4.2.  File System Migration

   When a file system is present and becomes absent, clients can be
   given the opportunity to have continued access to their data, at an
   alternate location, as specified by the fs_locations or
   fs_locations_info attribute.  Typically, a client will be accessing
   the file system in question, get an NFS4ERR_MOVED error, and then use
   the fs_locations or fs_locations_info attribute to determine the new
   location of the data.  When fs_locations_info is used, additional
   information will be available which will define the nature of the
   client's handling of the transition to a new server.
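
   A non-normative sketch of that typical sequence follows.  The client
   object and its send, getattr, pick_location, connect, and send_at
   helpers are hypothetical; how the client maps its filehandles onto
   the new location is implementation-specific and is discussed in the
   sections below.

      def access_with_migration_recovery(client, fh, operation):
          status, result = client.send(fh, operation)
          if status != "NFS4ERR_MOVED":
              return result
          # The file system is now absent here; ask where it went.
          locations = client.getattr(fh, ["fs_locations_info",
                                          "fs_locations"])
          target = client.pick_location(locations)  # apply priorities
          client.connect(target)               # may reuse the client ID
          return client.send_at(target, fh, operation)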

   Such migration can be helpful in providing load balancing or general
   resource reallocation.  The protocol does not specify how the file
   system will be moved between servers.  It is anticipated that a
   number of different server-to-server transfer mechanisms might be
   used with the choice left to the server implementer.  The NFSv4.1
   protocol specifies the method used to communicate the migration event
   between client and server.

   The new location may be an alternate communication path to the same
   server, or, in the case of various forms of server clustering,
   another server providing access to the same physical file system.

   The client's responsibilities in dealing with this transition depend
   on the specific nature of the new access path and how and whether
   data was in fact migrated.  These issues will be discussed in detail
   below.

   When multiple server addresses correspond to the same actual server,
   as shown by a common value for the so_major_id field of the
   eir_server_owner field returned by EXCHANGE_ID, the location or
   locations may designate alternate server addresses in the form of
   specific server network addresses.  These can be used to access the
   file system in question at those addresses and when it is no longer
   accessible at the original address.

   Although a single successor location is typical, multiple locations
   may be provided, together with information that allows priority among
   the choices to be indicated, via information in the fs_locations_info
   attribute.  Where suitable clustering mechanisms make it possible to
   provide multiple identical file systems or paths to them, this allows
   the client the opportunity to deal with any resource or
   communications issues that might limit data availability.

   When an alternate location is designated as the target for migration,
   it must designate the same data (with metadata being the same to the
   degree indicated by the fs_locations_info attribute).  Where file
   systems are writable, a change made on the original file system must
   be visible on all migration targets.  Where a file system is not
   writable but represents a read-only copy (possibly periodically
   updated) of a writable file system, similar requirements apply to the
   propagation of updates.  Any change visible in the original file
   system must already be effected on all migration targets, to avoid
   any possibility that a client, in effecting a transition to the
   migration target, will see any reversion in file system state.

11.4.3.  Referrals

   Referrals provide a way of placing a file system in a location within
   the namespace essentially without respect to its physical location on
   a given server.  This allows a single server or a set of servers to
   present a multi-server namespace that encompasses file systems
   located on multiple servers.  Some likely uses of this include
   establishment of site-wide or organization-wide namespaces, or even
   knitting such together into a truly global namespace.

   Referrals occur when a client determines, upon first referencing a
   position in the current namespace, that it is part of a new file
   system and that the file system is absent.  When this occurs,
   typically by receiving the error NFS4ERR_MOVED, the actual location
   or locations of the file system can be determined by fetching the
   fs_locations or fs_locations_info attribute.

   The locations-related attribute may designate a single file system
   location or multiple file system locations, to be selected based on
   the needs of the client.  The server, in the fs_locations_info
   attribute, may specify priorities to be associated with various file
   system location choices.  The server may assign different priorities
   to different locations as reported to individual clients, in order to
   adapt to client physical location or to effect load balancing.  When
   both read-only and read-write file systems are present, some of the
   read-only locations may not be absolutely up-to-date (as they would
   have to be in the case of replication and migration).  Servers may
   also specify file system locations that include client-substituted
   variables so that different clients are referred to different file
   systems (with different data contents) based on client attributes
   such as CPU architecture.

   When the fs_locations_info attribute indicates that there are
   multiple possible targets listed, the relationships among them may be
   important to the client in selecting the one to use.  The same rules
   specified in Section 11.4.1 defining the appropriate standards for
   the data propagation, apply to these multiple replicas as well.  For
   example, the client might prefer a writable target on a server that
   has additional writable replicas to which it subsequently might
   switch.  Note that, as distinguished from the case of replication,
   there is no need to deal with the case of propagation of updates made
   by the current client, since the current client has not accessed the
   file system in question.

   Use of multi-server namespaces is enabled by NFSv4.1 but is not
   required.  The use of multi-server namespaces and their scope will
   depend on the applications used, and system administration
   preferences.

   Multi-server namespaces can be established by a single server
   providing a large set of referrals to all of the included file
   systems.  Alternatively, a single multi-server namespace may be
   administratively segmented with separate referral file systems (on
   separate servers) for each separately-administered portion of the
   namespace.  Any segment or the top-level referral file system may use
   replicated referral file systems for higher availability.

   Generally, multi-server namespaces are uniform, in
   that the same data made available to one client at a given location
   in the namespace is made available to all clients at that location.
   There are however facilities provided which allow different clients
   to be directed to different sets of data, so as to adapt to such
   client characteristics as CPU architecture.

11.5.  Location Entries and Server Identity

   As mentioned above, a single location entry may have a server address
   target in the form of a DNS name which may represent multiple IP
   addresses, while multiple location entries may have their own server
   address targets, that reference the same server.  Whether two IP
   addresses designate the same server is indicated by the existence of
   a common so_major_id field within the eir_server_owner field returned
   by EXCHANGE_ID (see Section 18.35.3), subject to further
   verification, for details of which see Section 2.10.5.
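
   In non-normative terms, the check amounts to comparing the
   so_major_id values that two addresses return from EXCHANGE_ID,
   subject to the further verification mentioned above.  The
   ServerOwner record and the example values below are illustrative
   only.

      from collections import namedtuple

      ServerOwner = namedtuple("ServerOwner",
                               ["so_major_id", "so_minor_id"])

      def appear_to_be_same_server(owner_a, owner_b):
          # A common so_major_id suggests the same server, subject to
          # the verification described in Section 2.10.5.
          return owner_a.so_major_id == owner_b.so_major_id

      a = ServerOwner(so_major_id=b"example-major-id", so_minor_id=1)
      b = ServerOwner(so_major_id=b"example-major-id", so_minor_id=2)
      assert appear_to_be_same_server(a, b)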

   When multiple addresses for the same server exist, the client may
   assume that for each file system in the namespace of a given server
   network address, there exist file systems at corresponding namespace
   locations for each of the other server network addresses.  It may do
   this even in the absence of explicit listing in fs_locations and
   fs_locations_info.  Such corresponding file system locations can be
   used as alternate locations, just as those explicitly specified via
   the fs_locations and fs_locations_info attributes.  Where these
   specific addresses are explicitly designated in the fs_locations_info
   attribute, the conditions of use specified in this attribute (e.g.
   priorities, specification of simultaneous use) may limit the client's
   use of these alternate locations.

   If a single location entry designates multiple server IP addresses,
   the client cannot assume that these addresses are multiple paths to
   the same server.  In most cases they will be, but the client MUST
   verify that before acting on that assumption.  When two server
   addresses are designated by a single location entry and they
   correspond to different servers, this normally indicates some sort of
   misconfiguration, and so the client should avoid using such
   location entries when alternatives are available.  When they are
   not, clients should pick one of the IP addresses and use it, without
   using others that
   are not directed to the same server.

11.6.  Additional Client-side Considerations

   When clients make use of servers that implement referrals,
   replication, and migration, care should be taken so that a user who
   mounts a given file system that includes a referral or a relocated
   file system continues to see a coherent picture of that user-side
   file system despite the fact that it contains a number of server-side
   file systems which may be on different servers.

   One important issue is upward navigation from the root of a server-
   side file system to its parent (specified as ".." in UNIX), in the
   case in which it transitions to that file system as a result of
   referral, migration, or a transition as a result of replication.

   When the client is at such a point, and it needs to ascend to the
   parent, it must go back to the parent as seen within the multi-server
   namespace rather than issuing a LOOKUPP call to the server, which
   would result in the parent within that server's single-server
   namespace.
   In order to do this, the client needs to remember the filehandles
   that represent such file system roots, and use these instead of
   issuing a LOOKUPP to the current server.  This will allow the client
   to present to applications a consistent namespace, where upward
   navigation and downward navigation are consistent.
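
   A non-normative sketch of this ".." handling follows: the client
   records, for the root filehandle of each file system reached through
   a referral, migration, or replication event, the filehandle of its
   parent directory in the multi-server namespace, and consults that
   map before falling back to LOOKUPP.  The function arguments are
   assumptions of this sketch.

      def parent_of(fh, fs_root_parents, lookupp):
          # fs_root_parents maps remembered file system root filehandles
          # to their parents in the multi-server namespace; lookupp
          # issues an ordinary LOOKUPP to the current server.
          if fh in fs_root_parents:
              return fs_root_parents[fh]  # cross back over the referral
          return lookupp(fh)              # ordinary case within a file system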

   Another issue concerns refresh of referral locations.  When referrals
   are used extensively, they may change as server configurations
   change.  It is expected that clients will cache information related
   to traversing referrals so that future client side requests are
   resolved locally without server communication.  This is usually
   rooted in client-side name lookup caching.  Clients should
   periodically purge this data for referral points in order to detect
   changes in location information.  When the change_policy attribute
   changes for directories that hold referral entries or for the
   referral entries themselves, clients should consider any associated
   cached referral information to be out of date.

11.7.  Effecting File System Transitions

   Transitions between file system instances, whether due to switching
   between replicas upon server unavailability, or in response to
   server-initiated migration events are best dealt with together.  This
   is so even though for the server, pragmatic considerations will
   normally force different implementation strategies for planned and
   unplanned transitions.  Even though the prototypical use cases of
   replication and migration contain distinctive sets of features, when
   all possibilities for these operations are considered, there is an
   underlying unity of these operations, from the client's point of
   view, that makes treating them together desirable.

   A number of methods are possible for servers to replicate data and to
   track client state in order to allow clients to transition between
   file system instances with a minimum of disruption.  Such methods
   vary between those that use inter-server clustering techniques to
   limit the changes seen by the client, to those that are less
   aggressive, use more standard methods of replicating data, and impose
   a greater burden on the client to adapt to the transition.

   The NFSv4.1 protocol does not impose choices on clients and servers
   with regard to that spectrum of transition methods.  In fact, there
   are many valid choices, depending on client and application
   requirements and their interaction with server implementation
   choices.  The NFSv4.1 protocol does define the specific choices that
   can be made, how these choices are communicated to the client and how
   the client is to deal with any discontinuities.

   In the sections below, references will be made to various possible
   server implementation choices as a way of illustrating the transition
   scenarios that clients may deal with.  The intent here is not to
   define or limit server implementations but rather to illustrate the
   range of issues that clients may face.

   In the discussion below, references will be made to a file system
   having a particular property or to two file systems (typically the
   source and destination) belonging to a common class of any of several
   types.  Two file systems that belong to such a class share some
   important aspect of file system behavior that clients may depend upon
   when present, to easily effect a seamless transition between file
   system instances.  Conversely, where the file systems do not belong
   to such a common class, the client has to deal with various sorts of
   implementation discontinuities which may cause performance or other
   issues in effecting a transition.

   Where the fs_locations_info attribute is available, such file system
   classification data will be made directly available to the client
   (see Section 11.10 for details).  When only fs_locations is
   available, default assumptions with regard to such classifications
   have to be inferred (see Section 11.9 for details).

   In cases in which one server is expected to accept opaque values from
   the client that originated from another server, the servers SHOULD
   encode the "opaque" values in big endian byte order.  If this is
   done, servers acting as replicas or immigrating file systems will be
   able to parse values like stateids, directory cookies, filehandles,
   etc. even if their native byte order is different from that of other
   servers cooperating in the replication and migration of the file
   system.
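
   As a non-normative example, a server whose native integer format is
   little endian might pack a 64-bit directory cookie in big endian
   (network) order before handing it to the client, so that a
   cooperating server with a different native byte order can still
   interpret it:

      import struct

      def pack_cookie(cookie_value):
          # 64-bit unsigned value, big endian (network) byte order.
          return struct.pack(">Q", cookie_value)

      def unpack_cookie(opaque_bytes):
          return struct.unpack(">Q", opaque_bytes)[0]

      assert unpack_cookie(pack_cookie(123456789)) == 123456789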

11.7.1.  File System Transitions and Simultaneous Access

   When a single file system may be accessed at multiple locations,
   whether this is because of an indication of file system identity as
   reported by the fs_locations or fs_locations_info attributes or
   because two file system instances have corresponding locations on
   server addresses which connect to the same server (as indicated by a
   common so_major_id field in the eir_server_owner field returned by
   EXCHANGE_ID), the client will, depending on specific circumstances as
   discussed below, either:

   o  The client accesses multiple instances simultaneously, as
      representing alternate paths to the same data and metadata.

   o  The client accesses one instance (or set of instances) and then
      transitions to an alternative instance (or set of instances) as a
      result of network issues, server unresponsiveness, or server-
      directed migration.  The transition may involve changes in
      filehandles, fileids, the change attribute, and/or locking state,
      depending on the attributes of the source and destination file
      system instances, as specified in the fs_locations_info attribute.

   Which of these choices is possible, and how a transition is effected,
   is governed by equivalence classes of file system instances as
   reported by the fs_locations_info attribute, and, for file system
   instances in the same location within multiple single-server
   namespaces, as indicated by the so_major_id field in the
   eir_server_owner field returned by EXCHANGE_ID.

11.7.2.  Simultaneous Use and Transparent Transitions

   When two file system instances have the same location within their
   respective single-server namespaces and those two server network
   addresses designate the same server (as indicated by the same
   so_major_id value in the eir_server_owner value returned in response
   to EXCHANGE_ID), those file system instances can be treated as the
   same, and either used together simultaneously or serially with no
   transition activity required on the part of the client.  In this case
   we refer to the transition as "transparent" and the client, in
   transferring access from one to the other, is acting as it would in
   the event that communication is interrupted, with a new connection
   and
   possibly a new session being established to continue access to the
   same file system.

   Whether simultaneous use of the two file system instances is valid is
   controlled by whether the fs_locations_info attribute shows the two
   instances as having the same _simultaneous-use_ class.  See
   Section 11.10.1 for information about the definition of the various
   use classes, including the _simultaneous-use_ class.

   Note that for two such file systems, any information within the
   fs_locations_info attribute that indicates the need for special
   transition activity, i.e. the appearance of the two file system
   instances with different _handle_, _fileid_, _write-verifier_,
   _change_, _readdir_ classes, indicates a serious problem and the
   client, if it allows transition to the file system instance at all,
   must not treat this as a transparent transition.  The server SHOULD
   NOT indicate that these instances belong to different _handle_,
   _fileid_, _write-verifier_, _change_, _readdir_ classes, whether the
   two instances are shown belonging to the same _simultaneous-use_
   class or not.

   Where these conditions do not apply, a non-transparent file system
   instance transition is required with the details depending on the
   respective _handle_, _fileid_, _write-verifier_, _change_, _readdir_
   classes of the two file system instances and whether the two server
   addresses in question have the same eir_server_scope value as
   reported
   by EXCHANGE_ID.

11.7.2.1.  Simultaneous Use of File System Instances

   When the conditions in Section 11.7.2 hold, in either of the
   following two cases, the client may use the two file system instances
   simultaneously.

   o  The fs_locations_info attribute does not contain separate per-
      network-address entries for file system instances at the distinct
      network addresses.  This includes the case in which the
      fs_locations_info attribute is unavailable.  In this case, the
      fact that the two server addresses connect to the same server (as
      indicated by the two addresses sharing the same so_major_id
      value and subsequently confirmed as described in Section 2.10.5)
      justifies simultaneous use and there is no fs_locations_info
      attribute information contradicting that.

   o  The fs_locations_info attribute indicates that two file system
      instances belong to the same _simultaneous-use_ class.

   In this case, the client may use both file system instances
   simultaneously, as representations of the same file system, whether
   that happens because the two network addresses connect to the same
   physical server or because different servers connect to clustered
   file systems and export their data in common.  When simultaneous use
   is in effect, any change made to one file system instance must be
   immediately reflected in the other file system instance(s).  Locks
   are treated as part of a common lease, associated with a common
   client ID.  Depending on the details of the eir_server_owner returned
   by EXCHANGE_ID, the two server instances may be accessed by different
   sessions or a single session in common.
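
   The two cases above can be summarized by the following
   non-normative check.  Its arguments are assumptions about what the
   client has already learned: whether the two addresses have been
   confirmed to connect to the same server, and the simultaneous-use
   classes (if any) that fs_locations_info reports for the two
   instances.

      def may_use_simultaneously(same_server_confirmed,
                                 class_a=None, class_b=None):
          if not same_server_confirmed:
              return False
          if class_a is None or class_b is None:
              # No per-network-address fs_locations_info entries (or no
              # fs_locations_info at all): the same-server check alone
              # justifies simultaneous use.
              return True
          return class_a == class_b    # same _simultaneous-use_ class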

11.7.2.2.  Transparent File System Transitions

   When the conditions in Section 11.7.2 hold and the
   fs_locations_info attribute explicitly shows the file system
   instances for these distinct network addresses as belonging to
   different _simultaneous-use_ classes, the file system instances
   should not be used by the client simultaneously, but rather serially
   with one being used unless and until communication difficulties, lack
   of responsiveness, or an explicit migration event causes another file
   system instance (or set of file system instances sharing a common
   _simultaneous-use_ class) to be used.

   When a change of file system instance is to be done, the client will
   use the same client ID already in effect.  If it already has
   connections to the new server address, these will be used.  Otherwise
   new connections to existing sessions or new sessions associated with
   the existing client ID are established as indicated by the
   eir_server_owner returned by EXCHANGE_ID.

   In all such transparent transition cases, the following apply:

   o  If filehandles are persistent they stay the same.  If filehandles
      are volatile, they either stay the same, or if they expire, the
      reason for expiration is not due to the file system transition.

   o  Fileid values do not change across the transition.

   o  The file system will have the same fsid in both the old and new
      locations.

   o  Change attribute values are consistent across the transition and
      do not have to be refetched.  When change attributes indicate that
      a cached object is still valid, it can remain cached.

   o  Client and state identifiers retain their validity across the
      transition, except where their staleness is recognized and
      reported by the new server.  Except where such staleness requires
      it, no lock reclamation is needed.  Any such staleness is an
      indication that the server should be considered to have restarted
      and is reported as discussed in Section 8.4.2.

   o  Write verifiers are presumed to retain their validity and can be
      used to compare with verifiers returned by COMMIT on the new
      server, with the expectation that if COMMIT on the new server
      returns an identical verifier, then that server has all of the
      data unstably written to the original server and has committed it
      to stable storage as requested.

   o  Readdir cookies are presumed to retain their validity and can be
      presented to subsequent READDIR requests together with the readdir
      verifier with which they are associated.  When the verifier is
      accepted as valid, the cookie will continue the READDIR operation
      so that the entire directory can be obtained by the client.

11.7.3.  Filehandles and File System Transitions

   There are a number of ways in which filehandles can be handled across
   a file system transition.  These can be divided into two broad
   classes depending upon whether the two file systems across which the
   transition happens share sufficient state to effect some sort of
   continuity of file system handling.

   When there is no such co-operation in filehandle assignment, the two
   file systems are reported as being in different _handle_ classes.  In
   this case, all filehandles are assumed to expire as part of the file
   system transition.  Note that this behavior does not depend on the
   fh_expire_type attribute and supersedes the specification of the
   FH4_VOL_MIGRATION bit, which only affects behavior when
   fs_locations_info is not available.

   When there is co-operation in filehandle assignment, the two file
   systems are reported as being in the same _handle_ class.  In this
   case, persistent filehandles remain valid after the file system
   transition, while volatile filehandles (excluding those that are only
   volatile due to the FH4_VOL_MIGRATION bit) are subject to expiration
   on the target server.

11.7.4.  Fileids and File System Transitions

   In NFSv4.0, the issue of continuity of fileids in the event of a file
   system transition was not addressed.  The general expectation had
   been that in situations in which the two file system instances are
   created by a single vendor using some sort of file system image copy,
   fileids will be consistent across the transition while in the
   analogous multi-vendor transitions they will not.  This poses
   difficulties, especially for the client without special knowledge of
   the transition mechanisms adopted by the server.  Note that although
   fileid is not a REQUIRED attribute, many servers support fileids and
   many clients provide APIs that depend on fileids.

   It is important to note that while clients themselves may have no
   trouble with a fileid changing as a result of a file system
   transition event, applications do typically have access to the fileid
   (e.g. via stat), and the result of this is that an application may
   work perfectly well if there is no file system instance transition or
   if any such transition is among instances created by a single vendor,
   yet be unable to deal with the situation in which a multi-vendor
   transition occurs at the wrong time.

   Providing the same fileids in a multi-vendor (multiple server
   vendors) environment has generally been held to be quite difficult.
   While there is work to be done, it needs to be pointed out that this
   difficulty is partly self-imposed.  Servers have typically identified
   fileid with inode number, i.e. with a quantity used to find the file
   in question.  This identification poses special difficulties for
   migration of a file system between vendors where assigning the same
   index to a given file may not be possible.  Note here that a fileid
   is not required to be useful to find the file in question, only that
   it is unique within the given file system.  Servers prepared to
   accept a fileid as a single piece of metadata and store it apart from
   the value used to index the file information can relatively easily
   maintain a fileid value across a migration event, allowing a truly
   transparent migration event.

   In any case, where servers can provide continuity of fileids, they
   should, and the client should be able to find out that such
   continuity is available and take appropriate action.  Information
   about the continuity (or lack thereof) of fileids across a file
   system transition is represented by specifying whether the file
   systems in question are of the same _fileid_ class.

   Note that when consistent fileids do not exist across a transition
   (either because there is no continuity of fileids or because fileid
   is not a supported attribute on one of the instances involved), and
   there are no reliable filehandles across a transition event (either
   because
   there is no filehandle continuity or because the filehandles are
   volatile), the client is in a position where it cannot verify that
   files it was accessing before the transition are the same objects.
   It is forced to assume that no object has been renamed, and, unless
   there are guarantees that provide this (e.g. the file system is read-
   only), problems for applications may occur.  Therefore, use of such
   configurations should be limited to situations where the problems
   that this may cause can be tolerated.

11.7.5.  Fsids and File System Transitions

   Since fsids are generally only unique on a per-server basis, it
   is likely that they will change during a file system transition.  One
   exception is the case of transparent transitions, but in that case we
   have multiple network addresses that are defined as the same server
   (as specified by a common value of the so_major_id field of
   eir_server_owner).  Clients should not make the fsids received from
   the server visible to applications since they may not be globally
   unique, and because they may change during a file system transition
   event.  Applications are best served if they are isolated from such
   transitions to the extent possible.

   Although normally, a single source file system will transition to a
   single target file system, there is a provision for splitting a
   single source file system into multiple target file systems, by
   specifying the FSLI4F_MULTI_FS flag.

11.7.5.1.  File System Splitting

   When a file system transition is made and the fs_locations_info
   indicates that the file system in question may be split into multiple
   file systems (via the FSLI4F_MULTI_FS flag), the client SHOULD do
   GETATTRs to determine the fsid attribute on all known objects within
   the file system undergoing transition to determine the new file
   system boundaries.

   Clients may maintain the fsids passed to existing applications by
   mapping all of the fsids for the descendant file systems to the
   common fsid used for the original file system.

   Splitting a file system may be done on a transition between file
   systems of the same _fileid_ class, since the fact that fileids are
   unique within the source file system ensures that they will be
   unique in each of the target file systems.
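
   A non-normative sketch of the fsid mapping mentioned above: the
   client presents the original file system's fsid to applications for
   every descendant file system produced by the split.  The fsid values
   shown are arbitrary examples.

      def build_fsid_map(original_fsid, descendant_fsids):
          # Map each new fsid back to the fsid applications already
          # know for the original file system.
          return {fsid: original_fsid for fsid in descendant_fsids}

      fsid_map = build_fsid_map((77, 1), [(77, 2), (77, 3)])
      assert fsid_map[(77, 3)] == (77, 1)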

11.7.6.  The Change Attribute and File System Transitions

   Since the change attribute is defined as a server-specific one,
   change attributes fetched from one server are normally presumed to be
   invalid on another server.  Such a presumption is troublesome since
   it would invalidate all cached change attributes, requiring
   refetching.  Even more disruptive, the absence of any assured
   continuity for the change attribute means that even if the same value
   is retrieved on refetch, no conclusions can be drawn as to whether the
   object in question has changed.  The identical change attribute could
   be merely an artifact of a modified file with a different change
   attribute construction algorithm, with that new algorithm just
   happening to result in an identical change value.

   When the two file systems have consistent change attribute formats,
   and this fact is communicated to the client by reporting them as
   being in the same _change_ class, the client may assume a continuity
   of change
   attribute construction and handle this situation just as it would be
   handled without any file system transition.

11.7.7.  Lock State and File System Transitions

   In a file system transition, the client needs to handle cases in
   which the two servers have cooperated in state management and in
   which they have not.  Cooperation by two servers in state management
   requires coordination of client IDs.  Before the client attempts to
   use a client ID associated with one server in a request to the server
   of the other file system, it must eliminate the possibility that two
   non-cooperating servers have assigned the same client ID by accident.
   The client needs to compare the eir_server_scope values returned by
   each server.  If the scope values do not match, then the servers have
   not cooperated in state management.  If the scope values match, then
   this indicates the servers have cooperated in assigning client IDs to
   the point that they will reject client IDs that refer to state they
   do not know about.  See Section 2.10.4 for more information about the
   use of server scope.
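
   A non-normative summary of that comparison, as a client might apply
   it when deciding how to treat existing state after a file system
   transition:

      def state_handling_after_transition(source_scope, dest_scope):
          if source_scope == dest_scope:
              # The servers have cooperated in assigning client IDs, so
              # the existing client ID (and possibly its stateids) may
              # be presented to the destination server.
              return "try existing client ID and stateids"
          # No cooperation: establish a new client ID on the destination
          # and reclaim locks there only if the server permits it.
          return "establish a new client ID; reclaim if permitted"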

   In the case of migration, the servers involved in the migration of a
   file system SHOULD transfer all server state from the original to the
   new server.  When this is done, it must be done in a way that is
   transparent to the client.  With replication, such a degree of common
   state is typically not the case.  Clients, however, should use the
   information provided by the eir_server_scope returned by EXCHANGE_ID
   (as modified by the validation procedures described in
   Section 2.10.4) to determine whether such sharing may be in effect,
   rather than making assumptions based on the reason for the
   transition.

   This state transfer will reduce disruption to the client when a file
   system transition occurs.  If the servers are successful in
   transferring all state, the client can attempt to establish sessions
   associated with the client ID used for the source file system
   instance.  If the server accepts that as a valid client ID, then the
   client may use the existing stateids associated with that client ID
   for the old file system instance in connection with that same client
   ID on the transitioned file system instance.  If the
   client in question already had a client ID on the target system, it
   may interrogate the stateid values from the source system under that
   new client ID, with the assurance that if they are accepted as valid,
   then they represent validly transferred lock state for the source
   file system, transferred to the target server.

   When the two servers belong to the same server scope, it does not
   mean that when dealing with the transition, the client will not have
   to reclaim state.  However, it does mean that the client may proceed
   using its current client ID when establishing communication with the
   new server and the new server will either recognize the client ID as
   valid, or reject it, in which case locks must be reclaimed by the
   client.

   File systems co-operating in state management may actually share
   state or simply divide the identifier space so as to recognize (and
   reject as stale) each other's stateids and client IDs.  Servers which
   do share state may not do so under all conditions or at all times.
   The requirement for the server is that if it cannot be sure in
   accepting a client ID that it reflects the locks the client was
   given, it must treat all associated state as stale and report it as
   such to the client.

   When the two file system instances are on servers that do not share a
   server scope value, the client must establish a new client ID on the
   destination, if it does not have one already, and reclaim locks if
   allowed by the server.  In this case, old stateids and client IDs
   should not be presented to the new server since there is no assurance
   that they will not conflict with IDs valid on that server.  Note that
   in this case lock reclaim may be attempted even when the servers
   involved in the transfer have different server scope values (see
   Section 8.4.2.1 for the contrary case of reclaim after server reboot.
   Servers with different server scope values may co-operate to allow
   reclaim for locks associated with the transfer of a file system even
   if they do not co-operate sufficiently to share a server scope.

   In either case, when actual locks are not known to be maintained, the
   destination server may establish a grace period specific to the given
   file system, with non-reclaim locks being rejected for that file
   system, even though normal locks are being granted for other file
   systems.  Clients should not infer the absence of a grace period for
   file systems being transitioned to a server from responses to
   requests for other file systems.

   In the case of lock reclamation for a given file system after a file
   system transition, edge conditions can arise similar to those for
   reclaim after server restart (although in the case of the planned
   state transfer associated with migration, these can be avoided by
   securely recording lock state as part of state migration).  Unless
   the destination server can guarantee that locks will not be
   incorrectly granted, the destination server should not allow lock
   reclaims and should avoid establishing a grace period.

   Once all locks have been reclaimed, or there were no locks to
   reclaim, the client indicates that there are no more reclaims to be
   done for the file system in question by issuing a RECLAIM_COMPLETE
   operation with the rca_one_fs parameter set to true.  Once this has
   been done, non-reclaim locking operations may be done, and any
   subsequent request to do reclaims will be rejected with the error
   NFS4ERR_NO_GRACE.
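
   The following non-normative C sketch summarizes the client-side
   handling just described.  The helper functions are hypothetical
   placeholders for the client's actual request machinery; only the
   control flow is intended to be illustrative.

   #include <stdbool.h>

   /*
    * Hypothetical stubs standing in for actual NFSv4.1 requests; a
    * real client would issue EXCHANGE_ID, lock reclaims, and a
    * RECLAIM_COMPLETE with rca_one_fs set to TRUE.
    */
   static bool scopes_match(void)               { return true;  }
   static bool existing_clientid_accepted(void) { return false; }
   static void establish_new_clientid(void)     { }
   static bool reclaim_all_locks(void)          { return true;  }
   static void reclaim_complete_one_fs(void)    { }

   static void handle_fs_transition(void)
   {
           if (scopes_match() && existing_clientid_accepted()) {
                   /* State was transferred; stateids associated with
                    * the existing client ID remain usable. */
                   return;
           }

           /* Otherwise obtain a client ID on the destination (if one
            * is not already held) and reclaim locks during any grace
            * period the destination provides. */
           establish_new_clientid();
           if (reclaim_all_locks())
                   reclaim_complete_one_fs();
   }

   int main(void) { handle_fs_transition(); return 0; }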

   Information about client identity may be propagated between servers
   in the form of client_owner4 and associated verifiers, under the
   assumption that the client presents the same values to all the
   servers with which it deals.

   Servers are encouraged to provide facilities to allow locks to be
   reclaimed on the new server after a file system transition.  Often,



Shepler, et al.           Expires June 18, 2009               [Page 243]

Internet-Draft                   NFSv4.1                   December 2008


   however, in cases in which the two servers do not share a server
   scope value, such facilities may not be available and the client
   should be prepared to re-obtain locks, even though it is possible
   that the
   client may have its LOCK or OPEN request denied due to a conflicting
   lock.

   The consequences of having no facilities available to reclaim locks
   on the new server will depend on the type of environment.  In some
   environments, such as the transition between read-only file systems,
   such denial of locks should not pose large difficulties in practice.
   When an attempt to re-establish a lock on a new server is denied, the
   client should treat the situation as if its original lock had been
   revoked.  Note that when the lock is granted, the client cannot
   assume that no conflicting lock could have been granted in the
   interim.  Where change attribute continuity is present, the client
   may check the change attribute to detect unwanted file
   modifications.  Where even this is not available, and the file system
   is not read-only, a client may reasonably treat all pending locks as
   having been revoked.
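
   As a non-normative illustration, and assuming a cached change
   attribute value (changeid4) from before the transition is
   available, such a check might look like the following; the
   same_change_class parameter reflects whether the two instances are
   known to share a change class.

   #include <stdbool.h>
   #include <stdint.h>

   /* Returns true when the file may have been modified (or when no
    * meaningful comparison is possible). */
   static bool possibly_modified(bool same_change_class,
                                 uint64_t cached_change,
                                 uint64_t current_change)
   {
           if (!same_change_class)
                   return true;    /* no basis for comparison */
           return current_change != cached_change;
   }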

11.7.7.1.  Leases and File System Transitions

   In the case of lease renewal, the client may not be submitting
   requests for a file system that has been transferred to another
   server.  This can occur because of the lease renewal mechanism: the
   client renews the lease associated with all file systems when
   submitting a request on an associated session, regardless of the
   specific file system being referenced.

   In order for the client to schedule renewal of leases where there is
   locking state that may have been relocated to the new server, the
   client must find out about lease relocation before those leases
   expire.  To accomplish this, the SEQUENCE operation will return the
   status bit SEQ4_STATUS_LEASE_MOVED, if responsibility for any of the
   locking state renewed has been transferred to a new server.  This
   will continue until the client receives an NFS4ERR_MOVED error for
   each of the file systems for which there has been locking state
   relocation.

   When a client receives an SEQ4_STATUS_LEASE_MOVED indication, it
   should perform an operation on each file system on that server for
   which the client holds locking state.  The client may choose to
   reference all file systems in the interests of simplicity, but it
   must reference every file system for which there was locking state
   that has moved.  Once the client receives an
   NFS4ERR_MOVED error for each file system, the SEQ4_STATUS_LEASE_MOVED
   indication is cleared.  The client can terminate the process of



Shepler, et al.           Expires June 18, 2009               [Page 244]

Internet-Draft                   NFSv4.1                   December 2008


   checking file systems once this indication is cleared (but only if
   the client has received a reply for all outstanding SEQUENCE requests
   on all sessions it has with the server), since there are no others
   for which locking state has moved.

   A client may use GETATTR of the fs_status (or fs_locations_info)
   attribute on all of the file systems to get absence indications in a
   single (or a few) request(s), since absent file systems will not
   cause an error in this context.  However, it still must do an
   operation which receives NFS4ERR_MOVED on each file system, in order
   to clear the SEQ4_STATUS_LEASE_MOVED indication.
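
   The following non-normative sketch shows one way a client might
   organize this probing.  The helpers fs_probe_moved() (an operation,
   such as a per-file-system GETATTR, that would receive NFS4ERR_MOVED
   for a moved file system) and lease_moved_flag_set() (the
   SEQ4_STATUS_LEASE_MOVED bit from the most recent SEQUENCE replies)
   are hypothetical placeholders for the client's RPC machinery.

   #include <stdbool.h>
   #include <stddef.h>

   struct client_fs {
           const char *root;
           bool        has_locking_state;
   };

   /* Hypothetical stubs; a real client would issue the requests. */
   static bool fs_probe_moved(const struct client_fs *fs)
   {
           (void)fs;
           return false;
   }

   static bool lease_moved_flag_set(void)
   {
           return false;
   }

   static void clear_lease_moved(struct client_fs *fss, size_t n)
   {
           for (size_t i = 0; i < n; i++) {
                   if (!fss[i].has_locking_state)
                           continue;
                   if (fs_probe_moved(&fss[i])) {
                           /* Locking state for this file system has
                            * moved; obtain the new location and renew
                            * or reclaim leases there. */
                   }
                   /* Checking the flag assumes replies to all
                    * outstanding SEQUENCE requests have been seen. */
                   if (!lease_moved_flag_set())
                           break;
           }
   }

   int main(void)
   {
           struct client_fs fss[] = { { "/this/is/the/path", true } };

           clear_lease_moved(fss, 1);
           return 0;
   }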

   Once the set of file systems with transferred locking state has been
   determined, the client can follow the normal process to obtain the
   new server information (through the fs_locations and
   fs_locations_info attributes) and perform renewal of those leases on
   the new server, unless information in the fs_locations_info
   attribute shows that no state could have been transferred.  If the
   server has
   not had state transferred to it transparently, the client will
   receive NFS4ERR_STALE_CLIENTID from the new server, as described
   above, and the client can then reclaim locks as is done in the event
   of server failure.

11.7.7.2.  Transitions and the Lease_time Attribute

   In order that the client may appropriately manage its leases in the
   case of a file system transition, the destination server must
   establish proper values for the lease_time attribute.

   When state is transferred transparently, that state should include
   the correct value of the lease_time attribute.  The lease_time
   attribute on the destination server must never be less than that on
   the source since this would result in premature expiration of leases
   granted by the source server.  Upon transitions in which state is
   transferred transparently, the client is under no obligation to re-
   fetch the lease_time attribute and may continue to use the value
   previously fetched (on the source server).

   If state has not been transferred transparently, either because the
   associated servers are shown as having different eir_server_scope
   strings or because the client ID is rejected when presented to the
   new server, the client should fetch the value of lease_time on the
   new (i.e. destination) server, and use it for subsequent locking
   requests.  However, the server must respect a grace period at least
   as long as the lease_time on the source server, in order to ensure
   that clients have ample time to reclaim their locks before
   potentially
   conflicting non-reclaimed locks are granted.




Shepler, et al.           Expires June 18, 2009               [Page 245]

Internet-Draft                   NFSv4.1                   December 2008


11.7.8.  Write Verifiers and File System Transitions

   In a file system transition, the two file systems may be clustered in
   the handling of unstably written data.  When this is the case, and
   the two file systems belong to the same _write-verifier_ class, write
   verifiers returned from one system may be compared to those returned
   by the other and superfluous writes avoided.

   When two file systems belong to different _write-verifier_ classes,
   any verifier generated by one must not be compared to one provided by
   the other.  Instead, it should be treated as not equal even when the
   values are identical.
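
   As a non-normative illustration, a client might encapsulate this
   rule as below, where cls_a and cls_b are the write-verifier class
   numbers known for the two instances (for example, from the
   FSLI4BX_CLWRITEVER byte of fs_locations_info, described in
   Section 11.10.1), with a zero or differing class forbidding
   comparison.  NFS4_VERIFIER_SIZE is the verifier length defined by
   the protocol's XDR.

   #include <stdbool.h>
   #include <string.h>

   #define NFS4_VERIFIER_SIZE 8

   /* Returns true only when the two verifiers may be compared and are
    * byte-wise identical; otherwise they are treated as unequal. */
   static bool
   write_verifiers_match(unsigned char cls_a, unsigned char cls_b,
                         const unsigned char va[NFS4_VERIFIER_SIZE],
                         const unsigned char vb[NFS4_VERIFIER_SIZE])
   {
           if (cls_a == 0 || cls_a != cls_b)
                   return false;   /* different classes: never equal */
           return memcmp(va, vb, NFS4_VERIFIER_SIZE) == 0;
   }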

11.7.9.  Readdir Cookies and Verifiers and File System Transitions

   In a file system transition, the two file systems may be consistent
   in their handling of READDIR cookies and verifiers.  When this is the
   case, and the two file systems belong to the same _readdir_ class,
   READDIR cookies and verifiers from one system may be recognized by
   the other and READDIR operations started on one server may be validly
   continued on the other, simply by presenting the cookie and verifier
   returned by a READDIR operation done on the first file system to the
   second.

   When two file systems belong to different _readdir_ classes, any
   READDIR cookie and verifier generated by one is not valid on the
   second, and must not be presented to that server by the client.  The
   client should act as if the verifier was rejected.

11.7.10.  File System Data and File System Transitions

   When multiple replicas exist and are used simultaneously or in
   succession by a client, applications using them will normally expect
   that they contain either the same data or data which is consistent
   with the normal sorts of changes that are made by other clients
   updating the data of the file system (with metadata being the same
   to the degree indicated by the fs_locations_info attribute).
   However, when
   multiple file systems are presented as replicas of one another, the
   precise relationship between the data of one and the data of another
   is not, as a general matter, specified by the NFSv4.1 protocol.  It
   is quite possible to present as replicas file systems where the data
   of those file systems is sufficiently different that some
   applications have problems dealing with the transition between
   replicas.  The namespace will typically be constructed so that
   applications can choose an appropriate level of support, so that in
   one position in the namespace a varied set of replicas will be listed
   while in another only those that are up-to-date may be considered
   replicas.  The protocol does define three special cases of the



Shepler, et al.           Expires June 18, 2009               [Page 246]

Internet-Draft                   NFSv4.1                   December 2008


   relationship among replicas to be specified by the server and relied
   upon by clients:

   o  When multiple server addresses correspond to the same actual
      server, as indicated by a common so_major_id field within the
      eir_server_owner field returned by EXCHANGE_ID, the client may
      depend on the fact that changes to data, metadata, or locks made
      on one file system are immediately reflected on others.

   o  When multiple replicas exist and are used simultaneously by a
      client (see the FSLIB4_CLSIMUL definition within
      fs_locations_info), they must designate the same data.  Where file
      systems are writable, a change made on one instance must be
      visible on all instances, immediately upon the earlier of the
      return of the modifying requester or the visibility of that change
      on any of the associated replicas.  This allows a client to use
      these replicas simultaneously without any special adaptation to
      the fact that there are multiple replicas.  In this case, locks,
      whether shared or byte-range, and delegations obtained on one
      replica are immediately reflected on all replicas, even though
      these locks
      will be managed under a set of client IDs.

   o  When one replica is designated as the successor instance to
      another existing instance after the return of NFS4ERR_MOVED
      (i.e., the case of migration), the client may depend on the fact
      that all
      changes securely made to data (uncommitted writes are dealt with
      in Section 11.7.8) on the original instance are made to the
      successor image.

   o  Where a file system is not writable but represents a read-only
      copy (possibly periodically updated) of a writable file system,
      clients have similar requirements with regard to the propagation
      of updates.  They may need a guarantee that any change visible on
      the original file system instance must be immediately visible on
      any replica before the client transitions access to that replica,
      in order to avoid any possibility that a client, in effecting a
      transition to a replica, will see any reversion in file system
      state.  The specific means by which this will be prevented varies
      based on fs4_status_type reported as part of the fs_status
      attribute (see Section 11.11).  Since these file systems are
      presumed not to be suitable for simultaneous use, there is no
      specification of how locking is handled and it generally will be
      the case that locks obtained on one file system will be separate
      from those on others.  Since these are going to be read-only file
      systems, this is not expected to pose an issue for clients or
      applications.





Shepler, et al.           Expires June 18, 2009               [Page 247]

Internet-Draft                   NFSv4.1                   December 2008


11.8.  Effecting File System Referrals

   Referrals are effected when an absent file system is encountered, and
   one or more alternate locations are made available by the
   fs_locations or fs_locations_info attributes.  The client will
   typically get an NFS4ERR_MOVED error, fetch the appropriate location
   information and proceed to access the file system on a different
   server, even though it retains its logical position within the
   original namespace.  Referrals differ from migration events in that
   they happen only when the client has not previously referenced the
   file system in question (so there is nothing to transition).
   Referrals can only come into effect when an absent file system is
   encountered at its root.

   The examples given in the sections below are somewhat artificial in
   that an actual client will not typically do a multi-component lookup,
   but will have cached information regarding the upper levels of the
   name hierarchy.  However, these examples are chosen to make the
   required behavior clear and easy to put within the scope of a small
   number of requests, without getting unduly into details of how
   specific clients might choose to cache things.

11.8.1.  Referral Example (LOOKUP)

   Let us suppose that the following COMPOUND is sent in an environment
   in which /this/is/the/path is absent from the target server.  This
   may be for a number of reasons.  It may be the case that the file
   system has moved, or, it may be the case that the target server is
   functioning mainly, or solely, to refer clients to the servers on
   which various file systems are located.

   o  PUTROOTFH

   o  LOOKUP "this"

   o  LOOKUP "is"

   o  LOOKUP "the"

   o  LOOKUP "path"

   o  GETFH

   o  GETATTR fsid,fileid,size,time_modify

   Under the given circumstances, the following will be the result.





Shepler, et al.           Expires June 18, 2009               [Page 248]

Internet-Draft                   NFSv4.1                   December 2008


   o  PUTROOTFH --> NFS_OK.  The current fh is now the root of the
      pseudo-fs.

   o  LOOKUP "this" --> NFS_OK.  The current fh is for /this and is
      within the pseudo-fs.

   o  LOOKUP "is" --> NFS_OK.  The current fh is for /this/is and is
      within the pseudo-fs.

   o  LOOKUP "the" --> NFS_OK.  The current fh is for /this/is/the and
      is within the pseudo-fs.

   o  LOOKUP "path" --> NFS_OK.  The current fh is for /this/is/the/path
      and is within a new, absent file system, but ... the client will
      never see the value of that fh.

   o  GETFH --> NFS4ERR_MOVED.  Fails because current fh is in an absent
      file system at the start of the operation and the spec makes no
      exception for GETFH.

   o  GETATTR fsid,fileid,size,time_modify.  Not executed because the
      failure of the GETFH stops processing of the COMPOUND.

   Given the failure of the GETFH, the client has the job of determining
   the root of the absent file system and where to find that file
   system, i.e. the server and path relative to that server's root fh.
   Note here that in this example, the client did not obtain filehandles
   and attribute information (e.g. fsid) for the intermediate
   directories, so that it would not be sure where the absent file
   system starts.  It could be the case, for example, that /this/is/the
   is the root of the moved file system and that the reason that the
   lookup of "path" succeeded is that the file system was not absent on
   that operation but was moved between the last LOOKUP and the GETFH
   (since COMPOUND is not atomic).  Even if we had the fsids for all of
   the intermediate directories, we would have no way of knowing that
   /this/is/the/path was the root of a new file system, since we don't
   yet have its fsid.

   In order to get the necessary information, let us re-send the chain
   of LOOKUPs with GETFHs and GETATTRs to at least get the fsids so we
   can be sure where the appropriate file system boundaries are.  The
   client could choose to get fs_locations_info at the same time but in
   most cases the client will have a good guess as to where the file
   system boundaries are (based on where NFS4ERR_MOVED was, and was
   not, received), making fetching of fs_locations_info unnecessary.






Shepler, et al.           Expires June 18, 2009               [Page 249]

Internet-Draft                   NFSv4.1                   December 2008


   OP01:  PUTROOTFH --> NFS_OK

   -  Current fh is root of pseudo-fs.

   OP02:  GETATTR(fsid) --> NFS_OK

   -  Just for completeness.  Normally, clients will know the fsid of
      the pseudo-fs as soon as they establish communication with a
      server.

   OP03:  LOOKUP "this" --> NFS_OK

   OP04:  GETATTR(fsid) --> NFS_OK

   -  Get current fsid to see where file system boundaries are.  The
      fsid will be that for the pseudo-fs in this example, so no
      boundary.

   OP05:  GETFH --> NFS_OK

   -  Current fh is for /this and is within pseudo-fs.

   OP06:  LOOKUP "is" --> NFS_OK

   -  Current fh is for /this/is and is within pseudo-fs.

   OP07:  GETATTR(fsid) --> NFS_OK

   -  Get current fsid to see where file system boundaries are.  The
      fsid will be that for the pseudo-fs in this example, so no
      boundary.

   OP08:  GETFH --> NFS_OK

   -  Current fh is for /this/is and is within pseudo-fs.

   OP09:  LOOKUP "the" --> NFS_OK

   -  Current fh is for /this/is/the and is within pseudo-fs.

   OP10:  GETATTR(fsid) --> NFS_OK

   -  Get current fsid to see where file system boundaries are.  The
      fsid will be that for the pseudo-fs in this example, so no
      boundary.






Shepler, et al.           Expires June 18, 2009               [Page 250]

Internet-Draft                   NFSv4.1                   December 2008


   OP11:  GETFH --> NFS_OK

   -  Current fh is for /this/is/the and is within pseudo-fs.

   OP12:  LOOKUP "path" --> NFS_OK

   -  Current fh is for /this/is/the/path and is within a new, absent
      file system, but ...

   -  The client will never see the value of that fh

   OP13:  GETATTR(fsid, fs_locations_info) --> NFS_OK

   -  We are getting the fsid to know where the file system boundaries
      are.  In this operation the fsid will be different than that of
      the parent directory (which in turn was retrieved in OP10).  Note
      that the fsid we are given will not necessarily be preserved at
      the new location.  That fsid might be different and in fact the
      fsid we have for this file system might be a valid fsid of a
      different file system on that new server.

   -  In this particular case, we are pretty sure anyway that what has
      moved is /this/is/the/path rather than /this/is/the since we have
      the fsid of the latter and it is that of the pseudo-fs, which
      presumably cannot move.  However, in other examples, we might not
      have this kind of information to rely on (e.g. /this/is/the might
      be a non-pseudo file system separate from /this/is/the/path), so
      we need to have another reliable source of information on the
      boundary of the file system which is moved.  If, for example, the
      file system "/this/is" had moved, we would have a case of
      migration rather than referral, and once the boundaries of the
      migrated file system were clear, we could fetch
      fs_locations_info.

   -  We are fetching fs_locations_info because the fact that we got an
      NFS4ERR_MOVED at this point means that it is most likely that
      this is a referral and we need the destination.  Even if it is the
      case
      that "/this/is/the" is a file system which has migrated, we will
      still need the location information for that file system.

   OP14:  GETFH --> NFS4ERR_MOVED

   -  Fails because current fh is in an absent file system at the start
      of the operation and the spec makes no exception for GETFH.  Note
      that this means the server will never send the client a filehandle
      from within an absent file system.

   Given the above, the client knows where the root of the absent file
   system is (/this/is/the/path), by noting where the change of fsid



Shepler, et al.           Expires June 18, 2009               [Page 251]

Internet-Draft                   NFSv4.1                   December 2008


   occurred (between "the" and "path").  The fs_locations_info attribute
   also gives the client the actual location of the absent file system,
   so that the referral can proceed.  The server gives the client the
   bare minimum of information about the absent file system so that
   there will be very little scope for problems of conflict between
   information sent by the referring server and information of the file
   system's home.  No filehandles and very few attributes are present on
   the referring server and the client can treat those it receives as
   basically transient information with the function of enabling the
   referral.

11.8.2.  Referral Example (READDIR)

   Another context in which a client may encounter referrals is when it
   does a READDIR on a directory in which some of the sub-directories
   are
   the roots of absent file systems.

   Suppose such a directory is read as follows:

   o  PUTROOTFH

   o  LOOKUP "this"

   o  LOOKUP "is"

   o  LOOKUP "the"

   o  READDIR (fsid, size, time_modify, mounted_on_fileid)

   In this case, because rdattr_error is not requested,
   fs_locations_info is not requested, and some of the attributes cannot
   provided, the result will be an NFS4ERR_MOVED error on the READDIR,
   with the detailed results as follows:

   o  PUTROOTFH --> NFS_OK.  The current fh is at the root of the
      pseudo-fs.

   o  LOOKUP "this" --> NFS_OK.  The current fh is for /this and is
      within the pseudo-fs.

   o  LOOKUP "is" --> NFS_OK.  The current fh is for /this/is and is
      within the pseudo-fs.

   o  LOOKUP "the" --> NFS_OK.  The current fh is for /this/is/the and
      is within the pseudo-fs.

   o  READDIR (fsid, size, time_modify, mounted_on_fileid) -->
      NFS4ERR_MOVED.  Note that the same error would have been returned



Shepler, et al.           Expires June 18, 2009               [Page 252]

Internet-Draft                   NFSv4.1                   December 2008


      if /this/is/the had migrated, when in fact it is because the
      directory contains the root of an absent file system.

   So now suppose that we re-send with rdattr_error:

   o  PUTROOTFH

   o  LOOKUP "this"

   o  LOOKUP "is"

   o  LOOKUP "the"

   o  READDIR (rdattr_error, fsid, size, time_modify, mounted_on_fileid)

   The results will be:

   o  PUTROOTFH --> NFS_OK.  The current fh is at the root of the
      pseudo-fs.

   o  LOOKUP "this" --> NFS_OK.  The current fh is for /this and is
      within the pseudo-fs.

   o  LOOKUP "is" --> NFS_OK.  The current fh is for /this/is and is
      within the pseudo-fs.

   o  LOOKUP "the" --> NFS_OK.  The current fh is for /this/is/the and
      is within the pseudo-fs.

   o  READDIR (rdattr_error, fsid, size, time_modify, mounted_on_fileid)
      --> NFS_OK.  The attributes for the directory entry with the
      component named "path" will only contain rdattr_error with the
      value NFS4ERR_MOVED, together with an fsid value and a value for
      mounted_on_fileid.

   So suppose we do another READDIR to get fs_locations_info (although
   we could have used a GETATTR directly, as in Section 11.8.1).

   o  PUTROOTFH

   o  LOOKUP "this"

   o  LOOKUP "is"

   o  LOOKUP "the"

   o  READDIR (rdattr_error, fs_locations_info, mounted_on_fileid, fsid,
      size, time_modify)



Shepler, et al.           Expires June 18, 2009               [Page 253]

Internet-Draft                   NFSv4.1                   December 2008


   The results would be:

   o  PUTROOTFH --> NFS_OK.  The current fh is at the root of the
      pseudo-fs.

   o  LOOKUP "this" --> NFS_OK.  The current fh is for /this and is
      within the pseudo-fs.

   o  LOOKUP "is" --> NFS_OK.  The current fh is for /this/is and is
      within the pseudo-fs.

   o  LOOKUP "the" --> NFS_OK.  The current fh is for /this/is/the and
      is within the pseudo-fs.

   o  READDIR (rdattr_error, fs_locations_info, mounted_on_fileid, fsid,
      size, time_modify) --> NFS_OK.  The attributes will be as shown
      below.

   The attributes for the directory entry with the component named
   "path" will only contain

   o  rdattr_error (value: NFS_OK)

   o  fs_locations_info

   o  mounted_on_fileid (value: unique fileid within referring file
      system)

   o  fsid (value: unique value within referring server)

   The attributes for entry "path" will not contain size or time_modify
   because these attributes are not available within an absent file
   system.

11.9.  The Attribute fs_locations

   The fs_locations attribute is structured in the following way:


   struct fs_location4 {
           utf8str_cis     server<>;
           pathname4       rootpath;
   };


   struct fs_locations4 {
           pathname4       fs_root;
           fs_location4    locations<>;



Shepler, et al.           Expires June 18, 2009               [Page 254]

Internet-Draft                   NFSv4.1                   December 2008


   };

   The fs_location4 data type is used to represent the location of a
   file system by providing a server name and the path to the root of
   the file system within that server's namespace.  When a set of
   servers have corresponding file systems at the same path within their
   namespaces, an array of server names may be provided.  An entry in
   the server array is a UTF-8 string and represents one of a
   traditional DNS host name, IPv4 address, or IPv6 address, or a zero-
   length string.  An IPv4 or IPv6 address is represented as a universal
   address (see Section 3.3.9 and [14]), minus the netid, and either
   with or without the trailing ".p1.p2" suffix that represents the port
   number.  If the suffix is omitted, then the default port, 2049,
   SHOULD be assumed.  A zero-length string SHOULD be used to indicate
   the current address being used for the RPC call.  It is not a
   requirement that all servers that share the same rootpath be listed
   in one fs_location4 instance.  The array of server names is provided
   for convenience.  Servers that share the same rootpath may also be
   listed in separate fs_location4 entries in the fs_locations
   attribute.
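
   As a non-normative illustration, the port encoded in the optional
   ".p1.p2" suffix of an IPv4 universal address is p1 * 256 + p2, per
   the universal address format referenced above; the sketch below
   extracts it, falling back to the default port when the suffix is
   absent.  Handling of DNS names, IPv6 universal addresses, and the
   zero-length string is omitted.

   #include <stdio.h>

   /* Returns the port for an IPv4 server string, 2049 when no
    * ".p1.p2" suffix is present, or -1 for a non-IPv4 string. */
   static int uaddr4_port(const char *s)
   {
           unsigned int h1, h2, h3, h4, p1, p2;
           int n = sscanf(s, "%u.%u.%u.%u.%u.%u",
                          &h1, &h2, &h3, &h4, &p1, &p2);

           if (n == 6)
                   return (int)(p1 * 256 + p2);   /* explicit port */
           if (n == 4)
                   return 2049;                   /* default port */
           return -1;
   }

   int main(void)
   {
           printf("%d\n", uaddr4_port("192.0.2.5.8.1"));  /* 2049 */
           printf("%d\n", uaddr4_port("192.0.2.5"));      /* 2049 */
           return 0;
   }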

   The fs_locations4 data type and fs_locations attribute contain an
   array of such locations.  Since the namespace of each server may be
   constructed differently, the "fs_root" field is provided.  The path
   represented by fs_root represents the location of the file system in
   the current server's namespace, i.e. that of the server from which
   the fs_locations attribute was obtained.  The fs_root path is meant
   to aid the client by clearly referencing the root of the file system
   whose locations are being reported, no matter what object within the
   current file system the current filehandle designates.  The fs_root
   is simply the pathname the client used to reach the object on the
   current server, that object being the one to which the fs_locations
   attribute applies.

   When the fs_locations attribute is interrogated and there are no
   alternate file system locations, the server SHOULD return a zero-
   length array of fs_location4 structures, together with a valid
   fs_root.

   As an example, suppose there is a replicated file system located at
   two servers (servA and servB).  At servA, the file system is located
   at path "/a/b/c".  At, servB the file system is located at path
   "/x/y/z".  If the client were to obtain the fs_locations value for
   the directory at "/a/b/c/d", it might not necessarily know that the
   file system's root is located in servA's namespace at "/a/b/c".  When
   the client switches to servB, it will need to determine that the
   directory it first referenced at servA is now represented by the path
   "/x/y/z/d" on servB.  To facilitate this, the fs_locations attribute



Shepler, et al.           Expires June 18, 2009               [Page 255]

Internet-Draft                   NFSv4.1                   December 2008


   provided by servA would have an fs_root value of "/a/b/c" and two
   entries in fs_locations.  One entry in fs_locations will be for
   itself (servA) and the other will be for servB with a path of
   "/x/y/z".  With this information, the client is able to substitute
   "/x/y/z" for the "/a/b/c" at the beginning of its access path and
   construct "/x/y/z/d" to use for the new server.

   Note that: there is no requirement that the number of components in
   each rootpath be the same; there is no relation between the number of
   components in a rootpath and in fs_root; and none of the components
   in each rootpath and fs_root have to be the same.  In the above
   example,
   we could have had a third element in the locations array, with server
   equal to "servC", and rootpath equal to "/I/II", and a fourth element
   in locations with server equal to "servD", and rootpath equal to
   "/aleph/beth/gimel/daleth/he".

   The relationship of fs_root to a rootpath is that the client
   replaces the pathname indicated in fs_root for the current server
   with the substitute indicated in rootpath for the new server.

   For an example for a referred or migrated file system, suppose there
   is a file system located at serv1.  At serv1, the file system is
   located at "/az/buky/vedi/glagoli".  The client finds that object at
   "glagoli" has migrated (or is a referral).  The client gets the
   fs_locations attribute, which contains an fs_root of "/az/buky/vedi/
   glagoli", and one element in the locations array, with server equal
   to "serv2", and rootpath equal to "/izhitsa/fita".  The client
   replaces "/az/buky/vedi/glagoli" with "/izhitsa/fita", and uses the
   latter pathname on "serv2".

   Thus, the server MUST return an fs_root that is equal to the path the
   client used to reach the object the fs_locations attribute applies
   to.  Otherwise the client cannot determine the new path to use on the
   new server.
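
   The following non-normative sketch shows the substitution for the
   examples above, treating pathnames as slash-separated strings
   purely for illustration (the protocol encodes them as arrays of
   components).  It assumes, per the requirement above, that fs_root
   is a prefix of the path the client used on the current server.

   #include <stdio.h>
   #include <string.h>

   /* Replace the fs_root prefix of 'path' with 'rootpath'. */
   static void substitute(const char *fs_root, const char *rootpath,
                          const char *path, char *out, size_t outlen)
   {
           size_t n = strlen(fs_root);

           snprintf(out, outlen, "%s%s", rootpath, path + n);
   }

   int main(void)
   {
           char buf[256];

           substitute("/a/b/c", "/x/y/z", "/a/b/c/d", buf, sizeof(buf));
           printf("%s\n", buf);    /* prints /x/y/z/d */
           return 0;
   }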

   Since the fs_locations attribute lacks information defining various
   attributes of the various file system choices presented, it SHOULD
   only be interrogated and used when fs_locations_info is not
   available.  When fs_locations is used, information about the specific
   locations should be assumed based on the following rules.

   The following rules are general and apply irrespective of the
   context.

   o  All listed file system instances should be considered as of the
      same _handle_ class, if and only if, the current fh_expire_type
      attribute does not include the FH4_VOL_MIGRATION bit.  Note that
      in the case of referral, filehandle issues do not apply since



Shepler, et al.           Expires June 18, 2009               [Page 256]

Internet-Draft                   NFSv4.1                   December 2008


      there can be no filehandles known within the current file system
      nor is there any access to the fh_expire_type attribute on the
      referring (absent) file system.

   o  All listed file system instances should be considered as of the
      same _fileid_ class, if and only if, the fh_expire_type attribute
      indicates persistent filehandles and does not include the
      FH4_VOL_MIGRATION bit.  Note that in the case of referral, fileid
      issues do not apply since there can be no fileids known within the
      referring (absent) file system nor is there any access to the
      fh_expire_type attribute.

   o  All file system instances should be considered as of different
      _change_ classes.

   For other class assignments, handling of file system transitions
   depends on the reasons for the transition:

   o  When the transition is due to migration, that is, the client was
      directed to a new file system after receiving an NFS4ERR_MOVED
      error, the target should be treated as being of the same _write-
      verifier_ class as the source.

   o  When the transition is due to failover to another replica, that
      is, the client selected another replica without receiving an
      NFS4ERR_MOVED error, the target should be treated as being of a
      different _write-verifier_ class from the source.

   The specific choices reflect typical implementation patterns for
   failover and controlled migration respectively.  Since other choices
   are possible and useful, this information is better obtained by using
   fs_locations_info.  When a server implementation needs to communicate
   other choices, it MUST support the fs_locations_info attribute.

   See Section 21 for a discussion on the recommendations for the
   security flavor to be used by any GETATTR operation that requests the
   "fs_locations" attribute.

11.10.  The Attribute fs_locations_info

   The fs_locations_info attribute is intended as a more functional
   replacement for fs_locations which will continue to exist and be
   supported.  Clients can use it to get a more complete set of
   information about alternative file system locations.  When the server
   does not support fs_locations_info, fs_locations can be used to get a
   subset of the information.  A server which supports fs_locations_info
   MUST support fs_locations as well.




Shepler, et al.           Expires June 18, 2009               [Page 257]

Internet-Draft                   NFSv4.1                   December 2008


   There is additional information present in fs_locations_info that is
   not available in fs_locations:

   o  Attribute continuity information to allow a client to select a
      location which meets the transparency requirements of the
      applications accessing the data and to take advantage of
      optimizations that server guarantees as to attribute continuity
      may provide (e.g. change attribute).

   o  File System identity information which indicates when multiple
      replicas, from the client's point of view, correspond to the same
      target file system, allowing them to be used interchangeably,
      without disruption, as multiple paths to the same thing.

   o  Information which will bear on the suitability of various
      replicas, depending on the use that the client intends.  For
      example, many applications need an absolutely up-to-date copy
      (e.g. those that write), while others may only need access to the
      most up-to-date copy reasonably available.

   o  Server-derived preference information for replicas, which can be
      used to implement load-balancing while giving the client the
      entire file system list to be used in case the primary fails.

   The fs_locations_info attribute is structured similarly to the
   fs_locations attribute.  A top-level structure (fs_locations_info4)
   contains the entire attribute including the root pathname of the file
   system and an array of lower-level structures that define replicas
   that share a common root path on their respective servers.  The
   lower-level structure in turn (fs_locations_item4) contains a
   specific pathname and information on one or more individual server
   replicas.  At that lowest level, fs_locations_info has an
   fs_locations_server4 structure that contains per-server-replica
   information in addition to the server name.  This per-server-replica
   information includes a nominally opaque array, fls_info, in which
   specific pieces of information are located at the specific indices
   listed below.

   The attribute will always contain at least a single
   fs_locations_server entry.  Typically, this will be an entry with the
   FSLI4GF_CUR_REQ flag set, although in the case of a referral there
   will be no entry with that flag set.

   It should be noted that fs_locations_info attributes returned by
   servers for various replicas may differ for various reasons.  One
   server may know about a set of replicas that are not known to other
   servers.  Further, compatibility attributes may differ.  Filehandles
   might be of the same class going from replica A to replica B but not



Shepler, et al.           Expires June 18, 2009               [Page 258]

Internet-Draft                   NFSv4.1                   December 2008


   going in the reverse direction.  This might happen because the
   filehandles are the same but replica B's server implementation might
   not have provision to note and report that equivalence.

   The fs_locations_info attribute consists of a root pathname
   (fli_fs_root, just like fs_root in the fs_locations attribute),
   together with an array of fs_location_item4 structures.  The
   fs_location_item4 structures in turn consist of a root pathname
   (fli_rootpath) together with an array (fli_entries) of elements of
   data type fs_locations_server4, all defined as follows.

   /*
    * Defines an individual server replica
    */
   struct  fs_locations_server4 {
           int32_t         fls_currency;
           opaque          fls_info<>;
           utf8str_cis     fls_server;
   };

   /*
    * Byte indices of items within
    * fls_info: flag fields, class numbers,
    * bytes indicating ranks and orders.
    */
   const FSLI4BX_GFLAGS            = 0;
   const FSLI4BX_TFLAGS            = 1;

   const FSLI4BX_CLSIMUL           = 2;
   const FSLI4BX_CLHANDLE          = 3;
   const FSLI4BX_CLFILEID          = 4;
   const FSLI4BX_CLWRITEVER        = 5;
   const FSLI4BX_CLCHANGE          = 6;
   const FSLI4BX_CLREADDIR         = 7;

   const FSLI4BX_READRANK          = 8;
   const FSLI4BX_WRITERANK         = 9;
   const FSLI4BX_READORDER         = 10;
   const FSLI4BX_WRITEORDER        = 11;

   /*
    * Bits defined within the general flag byte.
    */
   const FSLI4GF_WRITABLE          = 0x01;
   const FSLI4GF_CUR_REQ           = 0x02;
   const FSLI4GF_ABSENT            = 0x04;
   const FSLI4GF_GOING             = 0x08;
   const FSLI4GF_SPLIT             = 0x10;



Shepler, et al.           Expires June 18, 2009               [Page 259]

Internet-Draft                   NFSv4.1                   December 2008


   /*
    * Bits defined within the transport flag byte.
    */
   const FSLI4TF_RDMA              = 0x01;

   /*
    * Defines a set of replicas sharing
    * a common value of the root path
    * within the corresponding
    * single-server namespaces.
    */
   struct  fs_locations_item4 {
           fs_locations_server4    fli_entries<>;
           pathname4               fli_rootpath;
   };

   /*
    * Defines the overall structure of
    * the fs_locations_info attribute.
    */
   struct  fs_locations_info4 {
           uint32_t                fli_flags;
           int32_t                 fli_valid_for;
           pathname4               fli_fs_root;
           fs_locations_item4      fli_items<>;
   };

   /*
    * Flag bits in fli_flags.
    */
   const FSLI4IF_VAR_SUB           = 0x00000001;

   typedef fs_locations_info4 fattr4_fs_locations_info;

   As noted above, the fs_locations_info attribute, when supported, may
   be requested of absent file systems without causing NFS4ERR_MOVED to
   be returned.  It is generally expected that the attribute will be
   available for both present and absent file systems, even if only a
   single
   fs_locations_server4 entry is present, designating the current
   (present) file system, or two fs_locations_server4 entries
   designating the previous location of an absent file system (the one
   just referenced) and its successor location.  Servers are strongly
   urged to support this attribute on all file systems if they support
   it on any file system.

   The data presented in the fs_locations_info attribute may be obtained
   by the server in any number of ways, including specification by the
   administrator or by current protocols for transferring data among



Shepler, et al.           Expires June 18, 2009               [Page 260]

Internet-Draft                   NFSv4.1                   December 2008


   replicas and protocols not yet developed.  NFSv4.1 only defines how
   this information is presented by the server to the client.

11.10.1.  The fs_locations_server4 Structure

   The fs_locations_server4 structure consists of the following items:

   o  An indication of file system up-to-date-ness (fls_currency) in
      terms of approximate seconds before the present.  This value is
      relative to the master copy.  A negative value indicates that the
      server is unable to give any reasonably useful value here.  A zero
      indicates that the file system is the actual writable data or a
      reliably coherent and fully up-to-date copy.  Positive values
      indicate how out-of-date this copy can normally be before it is
      considered for update.  Such a value is not a guarantee that such
      updates will always be performed on the required schedule but
      instead serves as a hint about how far the copy of the data would
      be expected to be behind the most up-to-date copy.

   o  A counted array of one-byte values (fls_info) containing
      information about the particular file system instance.  This data
      includes general flags, transport capability flags, file system
      equivalence class information, and selection priority information.
      The encoding will be discussed below.

   o  The server string (fls_server).  For the case of the replica
      currently being accessed (via GETATTR), a zero-length string MAY
      be used to indicate the current address being used for the RPC
      call.  The fls_server field can also be an IPv4 or IPv6 address,
      formatted the same way as an IPv4 or IPv6 address in the "server"
      field of the fs_location4 data type (see Section 11.9).

   Data within the fls_info array is in the form of 8-bit data items
   with constants giving the offsets within the array of various values
   describing this particular file system instance.  This style of
   definition was chosen, in preference to explicit XDR structure
   definitions for these values, for a number of reasons.

   o  The kinds of data in the fls_info array, representing flags, file
      system classes, and priorities among a set of file systems
      representing the same data, are such that eight bits provides a
      quite acceptable range of values.  Even where there might be more
      than 256 such file system instances, having more than 256 distinct
      classes or priorities is unlikely.

   o  Explicit definition of the various specific data items within XDR
      would limit expandability in that any extension within a
      subsequent minor version would require yet another attribute,



Shepler, et al.           Expires June 18, 2009               [Page 261]

Internet-Draft                   NFSv4.1                   December 2008


      leading to specification and implementation clumsiness.

   o  Such explicit definitions would also make it impossible to propose
      standards-track extensions apart from a full minor version.

   This encoding scheme can be adapted to the specification of multi-
   byte numeric values, even though none are currently defined.  If
   extensions are made via standards-track RFCs, multi-byte quantities
   will be encoded as a range of bytes with a range of indices, with
   the bytes interpreted in big-endian byte order.  Further, any such
   index
   assignments are constrained so that the relevant quantities will not
   cross XDR word boundaries.

   The set of fls_info data is subject to expansion in a future minor
   version, or in a standards-track RFC, within the context of a single
   minor version.  The server SHOULD NOT send and the client MUST NOT
   use indices within the fls_info array that are not defined in
   standards-track RFCs.

   The fls_info array contains within it:

   o  Two 8-bit flag fields, one devoted to general file-system
      characteristics and a second reserved for transport-related
      capabilities.

   o  Six 8-bit class values which define various file system
      equivalence classes as explained below.

   o  Four 8-bit priority values which govern file system selection as
      explained below.
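
   As a non-normative illustration of the layout just listed, a client
   might extract the defined items from a received fls_info array as
   follows, using the byte indices defined earlier in this section;
   any further bytes (reserved for future standards-track extension)
   are simply ignored.

   #include <stdint.h>
   #include <stddef.h>

   struct fls_info_view {
           uint8_t gflags, tflags;
           uint8_t clsimul, clhandle, clfileid;
           uint8_t clwritever, clchange, clreaddir;
           uint8_t readrank, writerank, readorder, writeorder;
   };

   /* Copy the currently defined fls_info items into a view; returns
    * -1 when the array is shorter than the defined items. */
   static int parse_fls_info(const uint8_t *info, size_t len,
                             struct fls_info_view *v)
   {
           if (len < 12)
                   return -1;
           v->gflags     = info[0];   /* FSLI4BX_GFLAGS     */
           v->tflags     = info[1];   /* FSLI4BX_TFLAGS     */
           v->clsimul    = info[2];   /* FSLI4BX_CLSIMUL    */
           v->clhandle   = info[3];   /* FSLI4BX_CLHANDLE   */
           v->clfileid   = info[4];   /* FSLI4BX_CLFILEID   */
           v->clwritever = info[5];   /* FSLI4BX_CLWRITEVER */
           v->clchange   = info[6];   /* FSLI4BX_CLCHANGE   */
           v->clreaddir  = info[7];   /* FSLI4BX_CLREADDIR  */
           v->readrank   = info[8];   /* FSLI4BX_READRANK   */
           v->writerank  = info[9];   /* FSLI4BX_WRITERANK  */
           v->readorder  = info[10];  /* FSLI4BX_READORDER  */
           v->writeorder = info[11];  /* FSLI4BX_WRITEORDER */
           return 0;
   }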

   The general file system characteristics flag (at byte index
   FSLI4BX_GFLAGS) has the following bits defined within it:

   o  FSLI4GF_WRITABLE indicates that this file system target is
      writable, allowing it to be selected by clients which may need to
      write on this file system.  When the current file system instance
      is writable, and is defined as of the same simultaneous use class
      (as specified by the value at index FSLI4BX_CLSIMUL) to which the
      client was previously writing, then it must incorporate within its
      data any committed write made on the source file system instance.
      See Section 11.7.8 which discusses the write-verifier class.
      While there is no harm in not setting this flag for a file system
      that turns out to be writable, turning the flag on for a
      read-only file system can cause problems for clients which select
      a
      migration or replication target based on it and then find
      themselves unable to write.




Shepler, et al.           Expires June 18, 2009               [Page 262]

Internet-Draft                   NFSv4.1                   December 2008


   o  FSLI4GF_CUR_REQ indicates that this replica is the one on which
      the request is being made.  Only a single server entry may have
      this flag set and in the case of a referral, no entry will have
      it.

   o  FSLI4GF_ABSENT indicates that this entry corresponds to an absent
      file system replica.  It can only be set if FSLI4GF_CUR_REQ is
      set.  When both such bits are set it indicates that a file system
      instance is not usable but that the information in the entry can
      be used to determine the sorts of continuity available when
      switching from this replica to other possible replicas.  Since
      this bit can only be true if FSLI4GF_CUR_REQ is true, the value
      could be determined using the fs_status attribute but the
      information is also made available here for the convenience of the
      client.  An entry with this bit, since it represents a true file
      system (albeit absent), does not appear in the event of a
      referral, but only where a file system has been accessed at this
      location and has subsequently been migrated.

   o  FSLI4GF_GOING indicates that a replica, while still available,
      should not be used further.  The client, if using it, should make
      an orderly transfer to another file system instance as
      expeditiously as possible.  It is expected that file systems going
      out of service will be announced as FSLI4GF_GOING some time before
      the actual loss of service and that the valid_for value will be
      sufficiently small to allow clients to detect and act on scheduled
      events while large enough that the cost of the requests to fetch
      the fs_locations_info values will not be excessive.  Values on the
      order of ten minutes seem reasonable.

      When this flag is seen as part of a transition into a new file
      system, a client might choose to transfer immediately to another
      replica, or it may reference the current file system and only
      transition when a migration event occurs.  Similarly, when this
      flag appears as a replica in the referral, clients would likely
      avoid being referred to this instance whenever there is another
      choice.

   o  FSLI4GF_SPLIT indicates that when a transition occurs from the
      current file system instance to this one, the replacement may
      consist of multiple file systems.  In this case, the client has to
      be prepared for the possibility that objects on the same file
      system before migration will be on different ones after.  Note
      that FSLI4GF_SPLIT is not incompatible with the file systems
      belonging to the same _fileid_ class since, if one has a set of
      fileids that are unique within a file system, each subset assigned
      to a smaller file system after migration would not have any
      conflicts internal to that file system.



Shepler, et al.           Expires June 18, 2009               [Page 263]

Internet-Draft                   NFSv4.1                   December 2008


      A client, in the case of a split file system, will interrogate
      existing files with which it has continuing connection (it is free
      simply to forget cached filehandles).  If the client remembers the
      directory filehandle associated with each open file, it may
      proceed upward using LOOKUPP to find the new file system
      boundaries.  Note that in the event of a referral, there will not
      be any such files and so these actions will not be performed.
      Instead, a reference to a portion of the original file system now
      split off into other file systems will encounter an fsid change
      and possibly a further referral.

      Once the client recognizes that one file system has been split
      into two, it can prevent the disruption of running applications by
      presenting the two file systems as a single one until a convenient
      point to recognize the transition, such as a restart.  This would
      require a mapping from the server's fsids to fsids as seen by the
      client but this is already necessary for other reasons.  As noted
      above, existing fileids within the two descendant file systems
      will not conflict.  Providing non-conflicting fileids for newly-
      created files on the split file systems is the responsibility of
      the server (or servers working in concert).  The server can encode
      filehandles such that filehandles generated before the split event
      can be discerned from those generated after the split, allowing
      the server to determine when the need for emulating two file
      systems as one is over.

      Although it is possible for this flag to be present in the event
      of referral, it would generally be of little interest to the
      client, since the client is not expected to have information
      regarding the current contents of the absent file system.

   The transport-flag field (at byte index FSLI4BX_TFLAGS) contains the
   following bits related to the transport capabilities of the specific
   file system.

   o  FSLI4TF_RDMA indicates that this file system provides NFSv4.1 file
      system access using an RDMA-capable transport.

   Attribute continuity and file system identity information are
   expressed by defining equivalence relations on the sets of file
   systems presented to the client.  Each such relation is expressed as
   a set of file system equivalence classes.  For each relation, a file
   system has an 8-bit class number.  Two file systems belong to the
   same class if both have identical non-zero class numbers.  Zero is
   treated as non-matching.  Most often, the relevant question for the
   client will be whether a given replica is identical-to/
   continuous-with the current one in a given respect but the
   information should be available also as to whether two other replicas



Shepler, et al.           Expires June 18, 2009               [Page 264]

Internet-Draft                   NFSv4.1                   December 2008


   match in that respect as well.
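
   A non-normative sketch of this matching rule follows; it applies to
   each of the class bytes (simultaneous-use, handle, fileid,
   write-verifier, change, and readdir classes) described below.

   #include <stdbool.h>
   #include <stdint.h>

   /* Two instances are in the same class for a given relation only
    * when both class numbers are non-zero and identical. */
   static bool same_class(uint8_t a, uint8_t b)
   {
           return a != 0 && a == b;
   }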

   The following fields specify the file system's class numbers for the
   equivalence relations used in determining the nature of file system
   transitions.  See Section 11.7 for details about how this information
   is to be used.  Servers may assign these values as they wish, so long
   as file system instances that share the same value have the specified
   relationship to one another; conversely, file systems which have the
   specified relationship to one another share a common class value.  As
   each instance entry is added, the relationships of this instance to
   previously entered instances can be consulted, and if one is found
   that bears the specified relationship, that entry's class value can
   be copied to the new entry.  When no such previous entry exists, a
   new value for that byte index, not previously used, can be selected,
   most likely by incrementing the value of the last class value
   assigned for that index.

   o  The field with byte index FSLI4BX_CLSIMUL defines the
      simultaneous-use class for the file system.

   o  The field with byte index FSLI4BX_CLHANDLE defines the handle
      class for the file system.

   o  The field with byte index FSLI4BX_CLFILEID defines the fileid
      class for the file system.

   o  The field with byte index FSLI4BX_CLWRITEVER defines the write-
      verifier class for the file system.

   o  The field with byte index FSLI4BX_CLCHANGE defines the change
      class for the file system.

   o  The field with byte index FSLI4BX_CLREADDIR defines the readdir
      class for the file system.

   Server-specified preference information is also provided via 8-bit
   values within the fls_info array.  The values provide a rank and an
   order (see below) to be used with separate values specifiable for the
   cases of read-only and writable file systems.  These values are
   compared for different file systems to establish the server-specified
   preference, with lower values indicating "more preferred".

   Rank is used to express a strict server-imposed ordering on clients,
   with lower values indicating "more preferred."  Clients should
   attempt to use all replicas with a given rank before they use one
   with a higher rank.  Only if all of those file systems are
   unavailable should the client proceed to those of a higher rank.
   Because specifying a rank will override client preferences, servers



Shepler, et al.           Expires June 18, 2009               [Page 265]

Internet-Draft                   NFSv4.1                   December 2008


   should be conservative about using this mechanism, particularly when
   the environment is one in which client communication characteristics
   are not tightly controlled and visible to the server.

   Within a rank, the order value is used to specify the server's
   preference to guide the client's selection when the client's own
   preferences are not controlling, with lower values of order
   indicating "more preferred."  If replicas are approximately equal in
   all respects, clients should defer to the order specified by the
   server.  When clients look at server latency as part of their
   selection, they are free to use this criterion but it is suggested
   that when latency differences are not significant, the server-
   specified order should guide selection.

   o  The field at byte index FSLI4BX_READRANK gives the rank value to
      be used for read-only access.

   o  The field at byte index FSLI4BX_READORDER gives the order value to
      be used for read-only access.

   o  The field at byte index FSLI4BX_WRITERANK gives the rank value to
      be used for writable access.

   o  The field at byte index FSLI4BX_WRITEORDER gives the order value
      to be used for writable access.

   Depending on the potential need for write access by a given client,
   one of the pairs of rank and order values is used.  The read rank and
   order should only be used if the client knows that only reading will
   ever be done or if it is prepared to switch to a different replica in
   the event that any write access capability is required in the future.
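
   As a non-normative illustration of the intended use of these values,
   the Python sketch below selects a replica from a list; the function
   and attribute names (choose_replica, read_rank, write_order, addr,
   and so on) are illustrative only and not part of this protocol.

      def choose_replica(replicas, want_write, latency):
          # Each replica is assumed to carry the four preference fields
          # described above plus an address usable as a key into the
          # client's own latency estimates.
          def rank(r):
              return r.write_rank if want_write else r.read_rank
          def order(r):
              return r.write_order if want_write else r.read_order

          # Rank imposes a strict server-specified ordering: consider
          # only replicas of the lowest (most preferred) rank present.
          best = min(rank(r) for r in replicas)
          candidates = [r for r in replicas if rank(r) == best]

          # Within that rank, defer to the server-specified order and
          # use the client's own latency measurements only to break
          # ties.
          return min(candidates,
                     key=lambda r: (order(r), latency.get(r.addr, 0)))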

11.10.2.  The fs_locations_info4 Structure

   The fs_locations_info4 structure, encoding the fs_locations_info
   attribute, contains the following:

   o  The fli_flags field which contains general flags that affect the
      interpretation of this fs_locations_info4 structure and all
      fs_locations_item4 structures within it.  The only flag currently
      defined is FSLI4IF_VAR_SUB.  All bits in the fli_flags field which
      are not defined should always be returned as zero.

   o  The fli_fs_root field which contains the pathname of the root of
      the current file system on the current server, just as it does in
      the fs_locations4 structure.





Shepler, et al.           Expires June 18, 2009               [Page 266]

Internet-Draft                   NFSv4.1                   December 2008


   o  An array called fli_items of fs_locations_item4 structures, which
      contain information about replicas of the current file system.
      Where the current file system is actually present, or has been
      present, i.e. this is not a referral situation, one of the
      fs_locations_item4 structures will contain an fs_locations_server4
      for the current server.  This structure will have FSLI4GF_ABSENT
      set if the current file system is absent, i.e. normal access to it
      will return NFS4ERR_MOVED.

   o  The fli_valid_for field specifies a time in seconds for which it
      is reasonable for a client to use the fs_locations_info attribute
      without refetch.  The fli_valid_for value does not provide a
      guarantee of validity since servers can unexpectedly go out of
      service or become inaccessible for any number of reasons.  Clients
      are well-advised to refetch this information for an actively
      accessed file system every fli_valid_for seconds.  This is
      particularly
      important when file system replicas may go out of service in a
      controlled way using the FSLI4GF_GOING flag to communicate an
      ongoing change.  The server should set fli_valid_for to a value
      which allows well-behaved clients to notice the FSLI4GF_GOING flag
      and make an orderly switch before the loss of service becomes
      effective.  If this value is zero, then no refetch interval is
      appropriate and the client need not refetch this data on any
      particular schedule.  In the event of a transition to a new file
      system instance, a new value of the fs_locations_info attribute
      will be fetched at the destination and it is to be expected that
      this may have a different fli_valid_for value, which the client
      should then use, in the same fashion as the previous value.

   The FSLI4IF_VAR_SUB flag within fli_flags controls whether variable
   substitution is to be enabled.  See Section 11.10.3 for an
   explanation of variable substitution.

11.10.3.  The fs_locations_item4 Structure

   The fs_locations_item4 structure contains a pathname (in the field
   fli_rootpath) which encodes the path of the target file system
   replicas on the set of servers designated by the included
   fs_locations_server4 entries.  The precise manner in which this
   target location is specified depends on the value of the
   FSLI4IF_VAR_SUB flag within the associated fs_locations_info4
   structure.

   If this flag is not set, then fli_rootpath simply designates the
   location of the target file system within each server's single-server
   namespace just as it does for the rootpath within the fs_location4
   structure.  When this bit is set, however, component entries of a
   certain form are subject to client-specific variable substitution so



Shepler, et al.           Expires June 18, 2009               [Page 267]

Internet-Draft                   NFSv4.1                   December 2008


   as to allow a degree of namespace non-uniformity in order to
   accommodate the selection of client-specific file system targets to
   adapt to different client architectures or other characteristics.

   When such substitution is in effect, a variable beginning with the
   string "${" and ending with the string "}" and containing a colon is
   to be replaced by the client-specific value associated with that
   variable.  The string "unknown" should be used by the client when it
   has no value for such a variable.  The pathname resulting from such
   substitutions is used to designate the target file system, so that
   different clients may have different file systems corresponding to
   that location in the multi-server namespace.

   As mentioned above, such substituted pathname variables contain a
   colon.  The part before the colon is to be a DNS domain name with the
   part after being a case-insensitive alphanumeric string.

   Where the domain is "ietf.org", only variable names defined in this
   document or subsequent standards-track RFCs are subject to such
   substitution.  Organizations are free to use their domain names to
   create their own sets of client-specific variables, to be subject to
   such substitution.  In cases where such variables are intended to be
   used more broadly than a single organization, publication of an
   informational RFC defining such variables is RECOMMENDED.

   The variable ${ietf.org:CPU_ARCH} is used to denote the CPU
   architecture for which object files are compiled.  This
   specification does not limit the acceptable values (except that they
   must be valid UTF-8 strings), but such values as "x86", "x86_64",
   and "sparc" would be expected to be used in line with industry
   practice.

   The variable ${ietf.org:OS_TYPE} is used to denote the operating
   system and thus the kernel and library APIs for which code might be
   compiled.  This specification does not limit the acceptable values
   (except that they must be valid UTF-8 strings) but such values as
   "linux" and "freebsd" would be expected to be used in line with
   industry practice.

   The variable ${ietf.org:OS_VERSION} is used to denote the operating
   system version and thus the specific details of versioned interfaces
   for which code might be compiled.  This specification does not limit
   the acceptable values (except that they must be valid UTF-8 strings).
   However, combinations of numbers and letters with interspersed dots
   would be expected to be used in line with industry practice, with the
   details of the version format depending on the specific value of the
   variable ${ietf.org:OS_TYPE} with which it is used.
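
   The following non-normative Python sketch shows one way a client
   might apply this substitution to the components of fli_rootpath; the
   function name and the dictionary of client values are illustrative
   only.

      import re

      def substitute_components(rootpath, client_values):
          # rootpath is the list of component strings from
          # fli_rootpath; client_values maps variable names (with the
          # part after the colon folded to lower case) to
          # client-specific values, e.g. {"ietf.org:cpu_arch": "x86"}.
          def expand(component):
              m = re.fullmatch(r"\$\{([^:}]+):([^}]+)\}", component)
              if m is None:
                  return component
              key = m.group(1) + ":" + m.group(2).lower()
              # "unknown" is used when the client has no value for
              # the variable.
              return client_values.get(key, "unknown")
          return [expand(c) for c in rootpath]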

   Use of these variables could result in directing different clients



Shepler, et al.           Expires June 18, 2009               [Page 268]

Internet-Draft                   NFSv4.1                   December 2008


   to different file systems on the same server, as appropriate to
   particular clients.  In cases in which the target file systems are
   located on different servers, a single server could serve as a
   referral point so that each valid combination of variable values
   would designate a referral hosted on a single server, with the
   targets of those referrals on a number of different servers.

   Because namespace administration is affected by the values selected
   to substitute for various variables, clients should provide
   convenient means of determining what variable substitutions a client
   will implement, as well as, where appropriate, providing means to
   control the substitutions to be used.  The exact means by which this
   will be done is outside the scope of this specification.

   Although variable substitution is most suitable for use in the
   context of referrals, it may be used in the context of replication
   and migration.  If it is used in these contexts, the server must
   ensure that no matter what values the client presents for the
   substituted variables, the result is always a valid successor file
   system instance to that from which a transition is occurring, i.e.
   that the data is identical or represents a later image of a writable
   file system.

   Note that when fli_rootpath is a null pathname (that is, one with
   zero components), the file system designated is at the root of the
   specified server, whether the FSLI4IF_VAR_SUB flag within the
   associated fs_locations_info4 structure is set or not.

11.11.  The Attribute fs_status

   In an environment in which multiple copies of the same basic set of
   data are available, information regarding the particular source of
   such data and the relationships among different copies can be very
   helpful in providing consistent data to applications.

















Shepler, et al.           Expires June 18, 2009               [Page 269]

Internet-Draft                   NFSv4.1                   December 2008


   enum fs4_status_type {
           STATUS4_FIXED = 1,
           STATUS4_UPDATED = 2,
           STATUS4_VERSIONED = 3,
           STATUS4_WRITABLE = 4,
           STATUS4_REFERRAL = 5
   };

   struct fs4_status {
           bool            fss_absent;
           fs4_status_type fss_type;
           utf8str_cs      fss_source;
           utf8str_cs      fss_current;
           int32_t         fss_age;
           nfstime4        fss_version;
   };

   The boolean fss_absent indicates whether the file system is currently
   absent.  This value will be set if the file system was previously
   present and becomes absent, or if the file system has never been
   present and the type is STATUS4_REFERRAL.  When this boolean is set
   and the type is not STATUS4_REFERRAL, the remaining information in
   the fs4_status reflects that which was last valid when the file
   system was present.

   The fss_type field indicates the kind of file system image
   represented.  This is of particular importance when using the version
   values to determine appropriate succession of file system images.
   When fss_absent is set and the file system was previously present,
   the value of fss_type reflected is that which was in effect when the
   file system was last present.  Five values are distinguished:

   o  STATUS4_FIXED which indicates a read-only image in the sense that
      it will never change.  The possibility is allowed that, as a
      result of migration or switch to a different image, changed data
      can be accessed, but within the confines of this instance, no
      change is allowed.  The client can use this fact to cache
      aggressively.

   o  STATUS4_VERSIONED which indicates that the image, like the
      STATUS4_UPDATED case, is updated externally, but it provides a
      guarantee that the server will carefully update an associated
      version value so that the client can protect itself from a
      situation in which it reads data from one version of the file
      system, and then later reads data from an earlier version of the
      same file system.  See below for a discussion of how this can be
      done.




Shepler, et al.           Expires June 18, 2009               [Page 270]

Internet-Draft                   NFSv4.1                   December 2008


   o  STATUS4_UPDATED which indicates an image that cannot be updated by
      the user writing to it but may be changed externally, typically
      because it is a periodically updated copy of another writable file
      system somewhere else.  In this case, version information is not
      provided and the client does not have the responsibility of making
      sure that this version only advances upon a file system instance
      transition.  In this case, it is the responsibility of the server
      to make sure that the data presented after a file system instance
      transition is a proper successor image and includes all changes
      seen by the client and any change made before all such changes.

   o  STATUS4_WRITABLE which indicates that the file system is an actual
      writable one.  The client need not, of course, actually write to
      the file system, but once it does, it should not accept a
      transition to anything other than a writable instance of that same
      file system.

   o  STATUS4_REFERRAL which indicates that the file system in question
      is absent and has never been present on this server.

   Note that in the STATUS4_UPDATED and STATUS4_VERSIONED cases, the
   server is responsible for the appropriate handling of locks that are
   inconsistent with external changes to the data.  If a server gives
   out delegations, they SHOULD be recalled before an inconsistent
   change is made to the data, and MUST be revoked if this is not
   possible.
   Similarly, if an open is inconsistent with data that is changed (the
   open denies WRITE and the data is changed), that lock SHOULD be
   considered administratively revoked.

   The opaque strings fss_source and fss_current provide a way of
   presenting information about the source of the file system image
   being present.  It is not intended that the client do anything with
   this information other than make it available to administrative
   tools.  It
   is intended that this information be helpful when researching
   possible problems with a file system image that might arise when it
   is unclear if the correct image is being accessed and if not, how
   that image came to be made.  This kind of diagnostic information will
   be helpful, if, as seems likely, copies of file systems are made in
   many different ways (e.g. simple user-level copies, file system-level
   point-in-time copies, clones of the underlying storage), under a
   variety of administrative arrangements.  In such environments,
   determining how a given set of data was constructed can be very
   helpful in resolving problems.

   The opaque string fss_source is used to indicate the source of a
   given file system with the expectation that tools capable of creating
   a file system image propagate this information, when that is
   possible.  It is understood that this may not always be possible



Shepler, et al.           Expires June 18, 2009               [Page 271]

Internet-Draft                   NFSv4.1                   December 2008


   since a user-level copy may be thought of as creating a new data set
   and the tools used may have no mechanism to propagate this data.
   When a file system is initially created, it is desirable to associate
   with it data regarding how the file system was created, where it was
   created, by whom, etc.  Making this information available in this
   attribute in a human-readable string form will be helpful for
   applications and system administrators and also serves to make it
   available when the original file system is used to make subsequent
   copies.

   The opaque string fss_current should provide whatever information is
   available about the source of the current copy.  Such information
   might include the tool that created it, any relevant parameters to
   that tool, the time at which the copy was done, the user making the
   change, the server on which the change was made, etc.  All
   information should be in a human-readable string form.

   The field fss_age provides an indication of how out-of-date the file
   system currently is with respect to its ultimate data source (in case
   of cascading data updates).  This complements the fls_currency field
   of fs_locations_server4 (see Section 11.10) in the following way: the
   information in fls_currency gives a bound for how out of date the
   data in a file system might typically get, while the value in fss_age
   gives a bound on how out of date that data actually is.  Negative
   values imply that no information is available.  A zero means that
   this data is known to be current.  A positive value means that this
   data is known to be no older than that number of seconds with respect
   to the ultimate data source.  Using this value, the client may be
   able to decide that a data copy is too old, so that it may search for
   a newer version to use.
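
   A non-normative sketch of how a client might interpret fss_age when
   deciding whether a copy is recent enough follows; the function name
   and the staleness threshold are illustrative only.

      def copy_recent_enough(fss_age, max_staleness):
          # A negative fss_age means no information is available.
          if fss_age < 0:
              return None
          # Zero means the data is known to be current; a positive
          # value bounds, in seconds, how far the data lags its
          # ultimate source.
          return fss_age <= max_staleness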

   The fss_version field provides a version identification, in the form
   of a time value, such that successive versions always have later time
   values.  When the fss_type is anything other than STATUS4_VERSIONED,
   the server may provide such a value but there is no guarantee as to
   its validity and clients will not use it except to provide additional
   information to add to fss_source and fss_current.

   When fss_type is STATUS4_VERSIONED, servers SHOULD provide a value
   of fss_version which progresses monotonically whenever any new
   version of the data is established.  This allows the client, if
   reliable image
   progression is important to it, to fetch this attribute as part of
   each COMPOUND where data or metadata from the file system is used.

   When it is important to the client to make sure that only valid
   successor images are accepted, it must make sure that it does not
   read data or metadata from the file system without updating its sense
   of the current state of the image, to avoid the possibility that the



Shepler, et al.           Expires June 18, 2009               [Page 272]

Internet-Draft                   NFSv4.1                   December 2008


   fs_status which the client holds will be one for an earlier image,
   and so accept a new file system instance which is later than that but
   still earlier than updated data read by the client.

   In order to do this reliably, it must do a GETATTR of the fs_status
   attribute that follows any interrogation of data or metadata within
   the file system in question.  Often this is most conveniently done by
   appending such a GETATTR after all other operations that reference a
   given file system.  When errors occur between reading file system
   data and performing such a GETATTR, care must be exercised to make
   sure that the data in question is not used before obtaining the
   proper fs_status value.  In this connection, when an OPEN is done
   within such a versioned file system and the associated GETATTR of
   fs_status is not successfully completed, the open file in question
   must not be accessed until that fs_status is fetched.

   The procedure above will ensure that before using any data from the
   file system the client has in hand a newly-fetched current version of
   the file system image.  Multiple values for multiple requests in
   flight can be resolved by assembling them into the required partial
   order (and the elements should form a total order within it) and
   using the last.  The client may then, when switching among file
   system instances, decline to use an instance which does not have an
   fss_type of STATUS4_VERSIONED or whose fss_version field is earlier
   than the last one obtained from the predecessor file system instance.
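
   The following non-normative Python fragment illustrates this use of
   fss_version; the constant is taken from the fs4_status_type
   enumeration above, and the function names are illustrative only.

      STATUS4_VERSIONED = 3   # from the fs4_status_type enumeration

      def current_version(versions_in_flight):
          # fss_version values from multiple requests in flight form a
          # total order for a given instance; the latest one is the
          # client's current sense of the file system image.
          return max(versions_in_flight)

      def acceptable_successor(candidate, last_version):
          # Decline an instance that is not versioned or whose
          # fss_version is earlier than the last value obtained from
          # the predecessor file system instance.
          return (candidate.fss_type == STATUS4_VERSIONED and
                  candidate.fss_version >= last_version)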


12.  Parallel NFS (pNFS)

12.1.  Introduction

   pNFS is an OPTIONAL feature within NFSv4.1; the pNFS feature set
   allows direct client access to the storage devices containing file
   data.  When file data for a single NFSv4 server is stored on multiple
   and/or higher throughput storage devices (by comparison to the
   server's throughput capability), the result can be significantly
   better file access performance.  The relationship among multiple
   clients, a single server, and multiple storage devices for pNFS
   (server and clients have access to all storage devices) is shown in
   Figure 1.











Shepler, et al.           Expires June 18, 2009               [Page 273]

Internet-Draft                   NFSv4.1                   December 2008


       +-----------+
       |+-----------+                                 +-----------+
       ||+-----------+                                |           |
       |||           |        NFSv4.1 + pNFS          |           |
       +||  Clients  |<------------------------------>|   Server  |
        +|           |                                |           |
         +-----------+                                |           |
              |||                                     +-----------+
              |||                                           |
              |||                                           |
              ||| Storage        +-----------+              |
              ||| Protocol       |+-----------+             |
              ||+----------------||+-----------+  Control   |
              |+-----------------|||           |    Protocol|
              +------------------+||  Storage  |------------+
                                  +|  Devices  |
                                   +-----------+

                                 Figure 1

   In this model, the clients, server, and storage devices are
   responsible for managing file access.  This is in contrast to NFSv4
   without pNFS where it is primarily the server's responsibility; some
   of this responsibility may be delegated to the client under strictly
   specified conditions.  See Section 12.2.6 for a discussion of the
   Control Protocol.  See Section 12.2.5 for a discussion of the Storage
   Protocol.

   pNFS takes the form of OPTIONAL operations that manage protocol
   objects called 'layouts' (Section 12.2.7) which contain a byte-range
   and storage location information.  The layout is managed in a similar
   fashion as NFSv4.1 data delegations.  For example, the layout is
   leased, recallable and revocable.  However, layouts are distinct
   abstractions and are manipulated with new operations.  When a client
   holds a layout, it is granted the ability to directly access the
   byte-range at the storage location specified in the layout.

   There are interactions between layouts and other NFSv4.1 abstractions
   such as data delegations and byte-range locking.  Delegation issues
   are discussed in Section 12.5.5.  Byte range locking issues are
   discussed in Section 12.2.9 and Section 12.5.1.

12.2.  pNFS Definitions

   NFSv4.1's pNFS feature provides parallel data access to a file system
   that stripes its content across multiple storage servers.  The first
   instantiation of pNFS, as part of NFSv4.1, separates the file system
   protocol processing into two parts: metadata processing and data



Shepler, et al.           Expires June 18, 2009               [Page 274]

Internet-Draft                   NFSv4.1                   December 2008


   processing.  Data consist of the contents of regular files which are
   striped across storage servers.  Data striping occurs in at least two
   ways: on a file-by-file basis, and within sufficiently large files,
   on a block-by-block basis.  In contrast, striped access to metadata
   by pNFS clients is not provided in NFSv4.1, even though the file
   system back end of a pNFS server might stripe metadata.  Metadata
   consist of everything else, including the contents of non-regular
   files (e.g. directories); see Section 12.2.1.  The metadata
   functionality is implemented by an NFSv4.1 server that supports pNFS
   and the operations described in (Section 18); such a server is called
   a metadata server (Section 12.2.2).

   The data functionality is implemented by one or more storage devices,
   each of which is accessed by the client via a storage protocol.  A
   subset (defined in Section 13.6) of NFSv4.1 is one such storage
   protocol.  New terms are introduced to the NFSv4.1 nomenclature and
   existing terms are clarified to allow for the description of the pNFS
   feature.

12.2.1.  Metadata

   Information about a file system object, such as its name, location
   within the namespace, owner, ACL and other attributes.  Metadata may
   also include storage location information and this will vary based on
   the underlying storage mechanism that is used.

12.2.2.  Metadata Server

   An NFSv4.1 server which supports the pNFS feature.  A variety of
   architectural choices exists for the metadata server and its use of
   file system information held at the server.  Some servers may contain
   metadata only for file objects residing at the metadata server while
   the file data resides on associated storage devices.  Other metadata
   servers may hold both metadata and a varying degree of file data.

12.2.3.  pNFS Client

   An NFSv4.1 client that supports pNFS operations and supports at least
   one storage protocol for performing I/O to storage devices.

12.2.4.  Storage Device

   A storage device stores a regular file's data, but leaves metadata
   management to the metadata server.  A storage device could be another
   NFSv4.1 server, an object storage device (OSD), a block device
   accessed over a SAN (e.g., either Fibre Channel or iSCSI SAN), or
   some
   other entity.




Shepler, et al.           Expires June 18, 2009               [Page 275]

Internet-Draft                   NFSv4.1                   December 2008


12.2.5.  Storage Protocol

   As noted in Figure 1, the storage protocol is the method used by
   the client to store and retrieve data directly from the storage
   devices.

   The NFSv4.1 pNFS feature has been structured to allow for a variety
   of storage protocols to be defined and used.  One example storage
   protocol is NFSv4.1 itself (as documented in Section 13).  Other
   options for the storage protocol are described elsewhere and include:

   o  Block/volume protocols such as iSCSI ([47]), and FCP ([48]).  The
      block/volume protocol support can be independent of the addressing
      structure of the block/volume protocol used, allowing more than
      one protocol to access the same file data and enabling
      extensibility to other block/volume protocols.  See [40] for a
      layout specification that allows pNFS to use block/volume storage
      protocols.

   o  Object protocols such as OSD over iSCSI or Fibre Channel [49].
      See [39] for a layout specification that allows pNFS to use object
      storage protocols.

   It is possible that various storage protocols are available to both
   client and server and it may be possible that a client and server do
   not have a matching storage protocol available to them.  Because of
   this, the pNFS server MUST support normal NFSv4.1 access to any file
   accessible by the pNFS feature; this will allow for continued
   interoperability between an NFSv4.1 client and server.

12.2.6.  Control Protocol

   As noted in Figure 1, the control protocol is used by the
   exported file system between the metadata server and storage devices.
   Specification of such protocols is outside the scope of the NFSv4.1
   protocol.  Such control protocols would be used to control activities
   such as the allocation and deallocation of storage, the management of
   state required by the storage devices to perform client access
   control, and, depending on the storage protocol, the enforcement of
   authentication and authorization so that restrictions that would be
   enforced by the metadata server are also enforced by the storage
   device.

   A particular control protocol is not REQUIRED by NFSv4.1 but
   requirements are placed on the control protocol for maintaining
   attributes like modify time, the change attribute, and the end-of-
   file (EOF) position.  Note that if pNFS is layered over a clustered,
   parallel file system (e.g.  PVFS [50]), the mechanisms that enable



Shepler, et al.           Expires June 18, 2009               [Page 276]

Internet-Draft                   NFSv4.1                   December 2008


   clustering and parallelism in that file system can be considered the
   control protocol.

12.2.7.  Layout Types

   A layout describes the mapping of a file's data to the storage
   devices that hold the data.  A layout is said to belong to a specific
   layout type (data type layouttype4, see Section 3.3.13).  The layout
   type allows for variants to handle different storage protocols, such
   as those associated with block/volume [40], object [39], and file
   (Section 13) layout types.  A metadata server, along with its control
   protocol, MUST support at least one layout type.  A private sub-range
   of the layout type name space is also defined.  Values from the
   private layout type range MAY be used for internal testing or
   experimentation.

   As an example, the organization of the file layout type could be an
   array of tuples (e.g., device ID, filehandle), along with a
   definition of how the data is stored across the devices (e.g.,
   striping).  A block/volume layout might be an array of tuples that
   store <device ID, block_number, block count> along with information
   about block size and the associated file offset of the block number.
   An object layout might be an array of tuples <device ID, object ID>
   and an additional structure (i.e., the aggregation map) that defines
   how the logical byte sequence of the file data is serialized into the
   different objects.  Note that the actual layouts are typically more
   complex than these simple expository examples.

   Requests for pNFS-related operations will often specify a layout
   type.  Examples of such operations are GETDEVICEINFO and LAYOUTGET.
   The response for these operations will include structures such as a
   device_addr4 or a layout4, each of which includes a layout type
   within it.  The layout type sent by the server MUST always be the
   same one requested by the client.  When a server sends a response
   that includes a different layout type, the client SHOULD ignore the
   response and behave as if the server had returned an error response.

12.2.8.  Layout

   A layout defines how a file's data is organized on one or more
   storage devices.  There are many potential layout types; each of the
   layout types are differentiated by the storage protocol used to
   access data and in the aggregation scheme that lays out the file data
   on the underlying storage devices.  A layout is precisely identified
   by the following tuple: <client ID, filehandle, layout type, iomode,
   range>; where filehandle refers to the filehandle of the file on the
   metadata server.




Shepler, et al.           Expires June 18, 2009               [Page 277]

Internet-Draft                   NFSv4.1                   December 2008


   It is important to define when layouts overlap and/or conflict with
   each other.  For two layouts with overlapping byte ranges to actually
   overlap each other, both layouts must be of the same layout type,
   correspond to the same filehandle, and have the same iomode.  Layouts
   conflict when they overlap and differ in the content of the layout
   (i.e., the storage device/file mapping parameters differ).  Note that
   differing iomodes do not lead to conflicting layouts.  It is
   permissible for layouts with different iomodes, pertaining to the
   same byte range, to be held by the same client.  An example of this
   would be copy-on-write functionality for a block/volume layout type.
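
   The following non-normative sketch expresses these definitions in
   Python; the field names mirror the tuple above, byte ranges are
   simplified to (offset, length) pairs, and same_mapping stands in for
   a layout-type-specific comparison of the storage device/file mapping
   parameters.

      def ranges_intersect(a, b):
          # a and b are (offset, length) pairs; the special
          # "to end of file" length value is not modeled here.
          return a[0] < b[0] + b[1] and b[0] < a[0] + a[1]

      def layouts_overlap(x, y):
          # Overlap requires the same layout type, filehandle, and
          # iomode in addition to intersecting byte ranges.
          return (x.layout_type == y.layout_type and
                  x.filehandle == y.filehandle and
                  x.iomode == y.iomode and
                  ranges_intersect(x.range, y.range))

      def layouts_conflict(x, y, same_mapping):
          # Conflict: the layouts overlap but the storage device/file
          # mapping parameters they contain differ.
          return layouts_overlap(x, y) and not same_mapping(x, y)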

12.2.9.  Layout Iomode

   The layout iomode (data type layoutiomode4, see Section 3.3.20)
   indicates to the metadata server the client's intent to perform
   either just read operations or a mixture of I/O possibly containing
   read and write operations.  For certain layout types, it is useful
   for a client to specify this intent at the time it sends LAYOUTGET
   (Section 18.43).  For example, for block/volume-based protocols,
   block allocation could occur when a READ/WRITE iomode is specified.
   A
   special LAYOUTIOMODE4_ANY iomode is defined and can only be used for
   LAYOUTRETURN and CB_LAYOUTRECALL, not for LAYOUTGET.  It specifies
   that layouts pertaining to both READ and READ/WRITE iomodes are being
   returned or recalled, respectively.

   A storage device may validate I/O with regard to the iomode; this is
   dependent upon storage device implementation and layout type.  Thus,
   if the client's layout iomode is inconsistent with the I/O being
   performed, the storage device may reject the client's I/O with an
   error indicating that a new layout with the correct iomode should be
   obtained via LAYOUTGET.  For example, if a client gets a layout with
   a READ iomode and performs a WRITE to a storage device, the storage
   device is allowed to reject that WRITE.
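
   A non-normative sketch of such a check, as a storage device might
   apply it, follows; the numeric constants are the values of the
   layoutiomode4 enumeration (Section 3.3.20), and the function name is
   illustrative only.

      LAYOUTIOMODE4_READ = 1
      LAYOUTIOMODE4_RW = 2

      def io_permitted_by_iomode(layout_iomode, is_write):
          # A WRITE covered only by a READ-iomode layout may be
          # rejected; the client then obtains a new layout with the
          # correct iomode via LAYOUTGET.
          if is_write:
              return layout_iomode == LAYOUTIOMODE4_RW
          # READ operations are permitted under either iomode.
          return True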

   The use of the layout iomode does not conflict with OPEN share modes
   or byte-range lock requests; open mode and lock conflicts are
   enforced as they are without the use of pNFS, and are logically
   separate from the pNFS layout level.  Open modes and locks are the
   preferred method for restricting user access to data files.  For
   example, an OPEN of read, deny-write does not conflict with a
   LAYOUTGET containing an iomode of READ/WRITE performed by another
   client.  Applications that depend on writing into the same file
   concurrently may use byte-range locking to serialize their accesses.

12.2.10.  Device IDs

   The device ID (data type deviceid4, see Section 3.3.14) identifies a
   group of storage devices.  The scope of a device ID is the pair



Shepler, et al.           Expires June 18, 2009               [Page 278]

Internet-Draft                   NFSv4.1                   December 2008


   <client ID, layout type>.  In practice, a significant amount of
   information may be required to fully address a storage device.
   Rather than embedding all such information in a layout, layouts embed
   device IDs.  The NFSv4.1 operation GETDEVICEINFO (Section 18.40) is
   used to retrieve the complete address information (including all
   device addresses for the device ID) regarding the storage device
   according to its layout type and device ID.  For example, the address
   of an NFSv4.1 data server or of an object storage device could be an
   IP address and port.  The address of a block storage device could be
   a volume label.

   Clients cannot expect the mapping between a device ID and its storage
   device address(es) to persist across metadata server restart.  See
   Section 12.7.4 for a description of how recovery works in that
   situation.

   A device ID lives as long as there is a layout referring to the
   device ID.  If there are no layouts referring to the device ID, the
   server is free to delete the device ID at any time.  Once a device
   ID is deleted by the server, the server MUST NOT reuse the device ID
   for the same layout type and client ID again.  This requirement is
   feasible because the device ID is 16 bytes long, leaving sufficient
   room to store a generation number if the server's implementation
   requires
   most of the rest of the device ID's content to be reused.  This
   requirement is necessary because otherwise the race conditions
   between asynchronous notification of device ID addition and deletion
   would be too difficult to sort out.
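
   As a non-normative illustration of the point about generation
   numbers, a server might construct device IDs as sketched below; the
   12/4-byte split and the function name are illustrative and not
   required by this protocol.

      import struct

      def make_deviceid(internal_id, generation):
          # Pack a 12-byte server-internal identifier together with a
          # 4-byte generation counter into the 16-byte deviceid4, so
          # that a deleted device ID need never be reused for the same
          # layout type and client ID.
          body = internal_id.ljust(12, b"\x00")[:12]
          return body + struct.pack(">I", generation)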

   Device ID to device address mappings are not leased, and can be
   changed at any time.  (Note that while device ID to device address
   mappings are likely to change after the metadata server restarts, the
   server is not required to change the mappings.)  A server has two
   choices for changing mappings.  It can recall all layouts referring
   to the device ID or it can use a notification mechanism.

   The NFSv4.1 protocol has no optimal way to recall all layouts that
   referred to a particular device ID (unless the server associates a
   single device ID with a single fsid or a single client ID; in which
   case, CB_LAYOUTRECALL has options for recalling all layouts
   associated with the fsid, client ID pair or just the client ID).

   Via a notification mechanism (see Section 20.12), device ID to device
   address mappings can change over the duration of server operation
   without recalling or revoking the layouts that refer to the device
   ID.
   The notification mechanism can also delete a device ID, but only if
   the client has no layouts referring to the device ID.  A notification
   of a change to a device ID to device address mapping will immediately
   or eventually invalidate some or all of the device ID's mappings.



Shepler, et al.           Expires June 18, 2009               [Page 279]

Internet-Draft                   NFSv4.1                   December 2008


   The server MUST support notifications and the client must request
   them before they can be used.  For further information about the
   notification types, see Section 20.12.

12.3.  pNFS Operations

   NFSv4.1 has several operations that are needed for pNFS servers,
   regardless of layout type or storage protocol.  These operations are
   all sent to a metadata server and summarized here.  While pNFS is an
   OPTIONAL feature, if pNFS is implemented, some operations are
   REQUIRED in order to comply with pNFS.  See Section 17.

   These are the fore channel pNFS operations:

   GETDEVICEINFO.  As noted previously (Section 12.2.10), GETDEVICEINFO
      (Section 18.40) returns the mapping of device ID to storage device
      address.

   GETDEVICELIST  (Section 18.41) allows clients to fetch all device
      IDs for a specific file system.

   LAYOUTGET  (Section 18.43) is used by a client to get a layout for a
      file.

   LAYOUTCOMMIT  (Section 18.42) is used to inform the metadata server
      of the client's intent to commit data which has been written to
      the storage device (the storage device as originally indicated in
      the return value of LAYOUTGET).

   LAYOUTRETURN  (Section 18.44) is used to return layouts for a file,
      an FSID, or a client ID.

   These are the backchannel pNFS operations:

   CB_LAYOUTRECALL  (Section 20.3) recalls a layout or all layouts
      belonging to a file system, or all layouts belonging to a client
      ID.

   CB_RECALL_ANY  (Section 20.6) tells a client that it needs to return
      some number of recallable objects, including layouts, to the
      metadata server.

   CB_RECALLABLE_OBJ_AVAIL  (Section 20.7) tells a client that a
      recallable object that it was denied (in the case of pNFS, a
      layout denied by LAYOUTGET) due to resource exhaustion is now
      available.






Shepler, et al.           Expires June 18, 2009               [Page 280]

Internet-Draft                   NFSv4.1                   December 2008


   CB_NOTIFY_DEVICEID  (Section 20.12) notifies the client of changes
      to
      device IDs.

12.4.  pNFS Attributes

   A number of attributes specific to pNFS are listed and described in
   Section 5.12.

12.5.  Layout Semantics

12.5.1.  Guarantees Provided by Layouts

   Layouts grant to the client the ability to access data located at a
   storage device with the appropriate storage protocol.  The client is
   guaranteed that the layout will be recalled when one of two things
   occurs: either a conflicting layout is requested or the state
   encapsulated by the layout becomes invalid (this can happen when an
   event directly or indirectly modifies the layout).  When a layout is
   recalled and
   returned by the client, the client continues with the ability to
   access file data with normal NFSv4.1 operations through the metadata
   server.  Only the ability to access the storage devices is affected.

   The requirement of NFSv4.1, that all user access rights MUST be
   obtained through the appropriate open, lock, and access operations,
   is not modified with the existence of layouts.  Layouts are provided
   to NFSv4.1 clients and user access still follows the rules of the
   protocol as if they did not exist.  It is a requirement that for a
   client to access a storage device, a layout must be held by the
   client.  If a storage device receives an I/O for a byte range for
   which the client does not hold a layout, the storage device SHOULD
   reject that I/O request.  Note that the act of modifying a file for
   which a layout is held does not necessarily conflict with the
   holding of the layout that describes the file being modified.
   Therefore, it is the requirement of the storage protocol or layout
   type that determines the necessary behavior.  For example, block/
   volume layout types require that the layout's iomode agree with the
   type of I/O being performed.

   Depending upon the layout type and storage protocol in use, storage
   device access permissions may be granted by LAYOUTGET and may be
   encoded within the type-specific layout.  For an example of storage
   device access permissions, see an object-based protocol such as
   [49].  If access permissions are encoded within the layout, the
   metadata server SHOULD recall the layout when those permissions
   become invalid for any reason; for example, when a file becomes
   unwritable or inaccessible to a client.  Note that clients are still
   required to
   perform the appropriate access operations with open, lock and access
   as described above.  The degree to which it is possible for the



Shepler, et al.           Expires June 18, 2009               [Page 281]

Internet-Draft                   NFSv4.1                   December 2008


   client to circumvent these access operations and the consequences of
   doing so must be clearly specified by the individual layout type
   specifications.  In addition, these specifications must be clear
   about the requirements and non-requirements for the checking
   performed by the server.

   In the presence of pNFS functionality, mandatory file locks MUST
   behave as they would without pNFS.  Therefore, if mandatory file
   locks and layouts are provided simultaneously, the storage device
   MUST be able to enforce the mandatory file locks.  For example, if
   one client obtains a mandatory lock and a second client accesses the
   storage device, the storage device MUST appropriately restrict I/O
   for the byte range of the mandatory file lock.  If the storage device
   is incapable of providing this check in the presence of mandatory
   file locks, the metadata server then MUST NOT grant layouts and
   mandatory file locks simultaneously.

12.5.2.  Getting a Layout

   A client obtains a layout with the LAYOUTGET operation.  The metadata
   server will grant layouts of a particular type (e.g., block/volume,
   object, or file).  The client selects an appropriate layout type that
   the server supports and the client is prepared to use.  The layout
   returned to the client might not exactly match the requested byte
   range as described in Section 18.43.3.  As needed, a client may make
   multiple LAYOUTGET requests; these might result in multiple
   overlapping, non-conflicting layouts (see Section 12.2.8).

   In order to get a layout, the client must first have opened the file
   via the OPEN operation.  When a client has no layout on a file, it
   MUST present a stateid as returned by OPEN, a delegation stateid, or
   a byte-range lock stateid in the loga_stateid argument.  A successful
   LAYOUTGET result includes a layout stateid.  The first successful
   LAYOUTGET processed by the server using a non-layout stateid as an
   argument MUST have the "seqid" field of the layout stateid in the
   response set to one.  Thereafter, the client MUST use a layout
   stateid (see Section 12.5.3) on future invocations of LAYOUTGET on
   the file, and the "seqid" MUST NOT be set to zero.  Once the layout
   has been retrieved, it can be held across multiple OPEN and CLOSE
   sequences.  Therefore, a client may hold a layout for a file that is
   not currently open by any user on the client.  This allows for the
   caching of layouts beyond CLOSE.

   The storage protocol used by the client to access the data on the
   storage device is determined by the layout's type.  The client is
   responsible for matching the layout type with an available method to
   interpret and use the layout.  The method for this layout type
   selection is outside the scope of the pNFS functionality.



Shepler, et al.           Expires June 18, 2009               [Page 282]

Internet-Draft                   NFSv4.1                   December 2008


   Although the metadata server is in control of the layout for a file,
   the pNFS client can provide hints to the server when a file is opened
   or created about the preferred layout type and aggregation schemes.
   pNFS introduces a layout_hint (Section 5.12.4) attribute that the
   client can set at file creation time to provide a hint to the server
   for new files.  Setting this attribute separately, after the file
   has been created, might make it difficult, or impossible, for the
   server
   implementation to comply.

   Because the EXCLUSIVE4 createmode4 does not allow the setting of
   attributes at file creation time, NFSv4.1 introduces the EXCLUSIVE4_1
   createmode4, which does allow attributes to be set at file creation
   time.  In addition, if the session is created with persistent reply
   caches, EXCLUSIVE4_1 is neither necessary nor allowed.  Instead,
   GUARDED4 both works better and is prescribed.  Table 10 in
   Section 18.16.3 summarizes how a client is allowed to send an
   exclusive create.

12.5.3.  Layout Stateid

   As with all other stateids, the layout stateid consists of a "seqid"
   and "other" field.  Once a layout stateid is changed, the "other"
   field will stay constant unless the stateid is revoked, or the client
   returns all layouts on the file and the server disposes of the
   stateid.  The "seqid" field is initially set to one, and is never
   zero on any NFSv4.1 operation that uses layout stateids, whether it
   is a fore channel or backchannel operation.  After the layout stateid
   is established, the server increments by one the value of the "seqid"
   in each subsequent LAYOUTGET and LAYOUTRETURN response, and in each
   CB_LAYOUTRECALL request.

   Given the design goal of pNFS to provide parallelism, the layout
   stateid differs from other stateid types in that the client is
   expected to send LAYOUTGET and LAYOUTRETURN operations in parallel.
   The "seqid" value is used by the client to properly sort responses to
   LAYOUTGET and LAYOUTRETURN.  The "seqid" is also used to prevent race
   conditions between LAYOUTGET and CB_LAYOUTRECALL.  Given that the
   processing rules for layout stateids differ from those for other
   stateid types, only the pNFS sections of this document should be
   considered to
   determine proper layout stateid handling.

   Once the client receives a layout stateid, it MUST use the correct
   "seqid" for subsequent LAYOUTGET or LAYOUTRETURN operations.  The
   correct "seqid" is defined as the highest "seqid" value from
   responses of fully processed LAYOUTGET or LAYOUTRETURN operations or
   arguments of a fully processed CB_LAYOUTRECALL operation.  Since the
   server is incrementing the "seqid" value on each layout operation,
   the client may determine the order of operation processing by



Shepler, et al.           Expires June 18, 2009               [Page 283]

Internet-Draft                   NFSv4.1                   December 2008


   inspecting the "seqid" value.  In the case of overlapping layout
   ranges, the ordering information will provide the client the
   knowledge of which layout ranges are held.  Note that overlapping
   layout ranges may occur because of the client's specific requests or
   because the server is allowed to expand the range of a requested
   layout and notify the client in the LAYOUTRETURN results.  Additional
   layout stateid sequencing requirements are provided in
   Section 12.5.5.2.

   The client's receipt of a "seqid" is not sufficient for subsequent
   use.  The client must fully process the operations before the "seqid"
   can be used.  For LAYOUTGET results, if the client is not using the
   forgetful model (Section 12.5.5.1), it MUST first update its record
   of what ranges of the file's layout it has before using the seqid.
   For LAYOUTRETURN results, the client MUST delete the range from its
   record of what ranges of the file's layout it had before using the
   seqid.  For CB_LAYOUTRECALL arguments, the client MUST send a
   response to the recall before using the seqid.  The fundamental
   requirement in client processing is that the "seqid" is used to
   provide the order of processing.  LAYOUTGET results may be processed
   in parallel.  LAYOUTRETURN results may be processed in parallel.
   LAYOUTGET and LAYOUTRETURN responses may be processed in parallel as
   long as the ranges do not overlap.  CB_LAYOUTRECALL requests MUST be
   processed in "seqid" order at all times.

   Once a client has no more layouts on a file, the layout stateid is no
   longer valid, and MUST NOT be used.  Any attempt to use such a layout
   stateid will result in NFS4ERR_BAD_STATEID.
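
   The following non-normative sketch shows one way a client might
   track the correct "seqid" for a file's layout stateid; the class and
   method names are illustrative only.

      class LayoutStateidTracker:
          def __init__(self, other, first_seqid):
              self.other = other        # constant while layouts held
              self.seqid = first_seqid  # from the first LAYOUTGET reply

          def note_fully_processed(self, seqid):
              # Called only once a LAYOUTGET or LAYOUTRETURN result has
              # been fully processed, or a CB_LAYOUTRECALL has been
              # responded to; the highest such value is the correct
              # "seqid" for subsequent requests.
              self.seqid = max(self.seqid, seqid)

          def stateid_for_request(self):
              return (self.seqid, self.other)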

12.5.4.  Committing a Layout

   Allowing for varying storage protocol capabilities, the pNFS
   protocol does not require the metadata server and storage devices to
   have a consistent view of file attributes and data location mappings.
   Data location mapping refers to aspects such as which offsets store
   data as opposed to storing holes (see Section 13.4.4 for a
   discussion).  Related issues arise for storage protocols where a
   layout may hold provisionally allocated blocks where the allocation
   of those blocks does not survive a complete restart of both the
   client and server.  Because of this inconsistency, it is necessary to
   re-synchronize the client with the metadata server and its storage
   devices and make any potential changes available to other clients.
   This is accomplished by use of the LAYOUTCOMMIT operation.

   The LAYOUTCOMMIT operation is responsible for committing a modified
   layout to the metadata server.  The data should be written and
   committed to the appropriate storage devices before the LAYOUTCOMMIT
   occurs.  The scope of the LAYOUTCOMMIT operation depends on the



Shepler, et al.           Expires June 18, 2009               [Page 284]

Internet-Draft                   NFSv4.1                   December 2008


   storage protocol in use.  It is important to note that the level of
   synchronization is from the point of view of the client which sent
   the LAYOUTCOMMIT.  The updated state on the metadata server need only
   reflect the state as of the client's last operation previous to the
   LAYOUTCOMMIT.  It is not REQUIRED to maintain a global view that
   accounts for other clients' I/O that may have occurred within the
   same time frame.

   For block/volume-based layouts, LAYOUTCOMMIT may require updating the
   block list that comprises the file and committing this layout to
   stable storage.  For file-based layouts, synchronization of
   attributes between the metadata server and storage devices,
   primarily the size attribute, is required.

   The control protocol is free to synchronize the attributes before it
   receives a LAYOUTCOMMIT; however, upon successful completion of a
   LAYOUTCOMMIT, state that exists on the metadata server that describes
   the file MUST be in sync with the state existing on the storage
   devices that comprise that file as of the issuing client's last
   operation.  Thus, a client that queries the size of a file between a
   WRITE to a storage device and the LAYOUTCOMMIT may observe a size
   that does not reflect the actual data written.

   The client MUST have a layout in order to issue LAYOUTCOMMIT.

12.5.4.1.  LAYOUTCOMMIT and change/time_modify

   The change and time_modify attributes may be updated by the server
   when the LAYOUTCOMMIT operation is processed.  The reason for this is
   that some layout types do not support the update of these attributes
   when the storage devices process I/O operations.  If the client has
   a layout with the LAYOUTIOMODE4_RW iomode on the file, the client
   MAY
   provide a suggested value to the server for time_modify within the
   arguments to LAYOUTCOMMIT.  Based on the layout type, the provided
   value may or may not be used.  The server should sanity check the
   client provided values before they are used.  For example, the server
   should ensure that time does not flow backwards.  The client always
   has the option to set time_modify through an explicit SETATTR
   operation.

   For some layout protocols, the storage device is able to notify the
   metadata server of the occurrence of an I/O and as a result the
   change and time_modify attributes may be updated at the metadata
   server.  For a metadata server that is capable of monitoring updates
   to the change and time_modify attributes, LAYOUTCOMMIT processing is
   not required to update the change attribute; in this case the
   metadata server must ensure that no further update to the data has
   occurred since the last update of the attributes; file-based



Shepler, et al.           Expires June 18, 2009               [Page 285]

Internet-Draft                   NFSv4.1                   December 2008


   protocols may have enough information to make this determination or
   may update the change attribute upon each file modification.  This
   also applies for the time_modify attribute.  If the server
   implementation is able to determine that the file has not been
   modified since the last time_modify update, the server need not
   update time_modify at LAYOUTCOMMIT.  At LAYOUTCOMMIT completion, the
   updated attributes should be visible if that file was modified since
   the latest previous LAYOUTCOMMIT or LAYOUTGET.

12.5.4.2.  LAYOUTCOMMIT and size

   The size of a file may be updated when the LAYOUTCOMMIT operation is
   used by the client.  One of the fields in the argument to
   LAYOUTCOMMIT is loca_last_write_offset; this field indicates the
   highest byte offset written but not yet committed with the
   LAYOUTCOMMIT operation.  The data type of loca_last_write_offset is
   newoffset4 and is switched on a boolean value, no_newoffset, that
   indicates if a previous write occurred or not.  If no_newoffset is
   FALSE, an offset is not given.  If the client has a layout with
   LAYOUTIOMODE4_RW iomode on the file, with an lo_offset and lo_length
   that overlaps loca_last_write_offset, then the client MAY set
   no_newoffset to TRUE and provide an offset that will update the file
   size.  Keep in mind that offset is not the same as length, though
   they are related.  For example, a loca_last_write_offset value of
   zero means that one byte was written at offset zero, and so the
   length of the file is at least one byte.

   The metadata server may do one of the following:

   1.  Update the file's size using the last write offset provided by
       the client as either the true file size or as a hint of the file
       size.  If the metadata server has a method available, any new
       value for file size should be sanity checked.  For example, the
       file must not be truncated if the client presents a last write
       offset less than the file's current size.

   2.  Ignore the client provided last write offset; the metadata server
       must have sufficient knowledge from other sources to determine
       the file's size.  For example, the metadata server queries the
       storage devices with the control protocol.

   The method chosen to update the file's size will depend on the
   storage device's and/or the control protocol's capabilities.  For
   example, if the storage devices are block devices with no knowledge
   of file size, the metadata server must rely on the client to set the
   last write offset appropriately.
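
   A non-normative sketch of the first choice above, as a metadata
   server might apply it, follows; the function name is illustrative,
   and no_newoffset and offset correspond to the newoffset4 switch
   described earlier.

      def apply_last_write_offset(current_size, no_newoffset, offset):
          # A FALSE no_newoffset means no offset is given; leave the
          # size alone.
          if not no_newoffset:
              return current_size
          # The highest written offset implies a minimum length of
          # offset + 1 (an offset of zero means at least one byte was
          # written).
          implied_size = offset + 1
          # Sanity check: never truncate on the basis of a last write
          # offset below the current size.
          return max(current_size, implied_size)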

   The results of LAYOUTCOMMIT contain a new size value in the form of a



Shepler, et al.           Expires June 18, 2009               [Page 286]

Internet-Draft                   NFSv4.1                   December 2008


   newsize4 union data type.  If the file's size is set as a result of
   LAYOUTCOMMIT, the metadata server must reply with the new size;
   otherwise the new size is not provided.  If the file size is updated,
   the metadata server SHOULD update the storage devices such that the
   new file size is reflected when LAYOUTCOMMIT processing is complete.
   For example, the client should be able to READ up to the new file
   size.

   The client can extend the length of a file or truncate a file by
   sending a SETATTR operation to the metadata server with the size
   attribute specified.  If the size specified is larger than the
   current size of the file, the file is "zero extended", i.e., zeroes
   are implicitly added between the file's previous EOF and the new EOF.
   (In many implementations the zero extended region of the file
   consists of unallocated holes in the file.)  When the client writes
   past EOF via WRITE, the SETATTR operation does not need to be used.

12.5.4.3.  LAYOUTCOMMIT and layoutupdate

   The LAYOUTCOMMIT argument contains a loca_layoutupdate field
   (Section 18.42.1) of data type layoutupdate4 (Section 3.3.18).  This
   argument is a layout type-specific structure.  The structure can be
   used to pass arbitrary layout type-specific information from the
   client to the metadata server at LAYOUTCOMMIT time.  For example, if
   using a block/volume layout, the client can indicate to the metadata
   server which reserved or allocated blocks the client used or did not
   use.  The content of loca_layoutupdate (field lou_body) need not be
   the same layout type-specific content returned by LAYOUTGET
   (Section 18.43.2) in the loc_body field of the lo_content field of
   the logr_layout field.  The content of loca_layoutupdate is defined
   by the layout type specification and is opaque to LAYOUTCOMMIT.

12.5.5.  Recalling a Layout

   Since a layout protects a client's access to a file via a direct
   client-storage-device path, a layout need only be recalled when it is
   semantically unable to serve this function.  Typically, this occurs
   when the layout no longer encapsulates the true location of the file
   over the byte range it represents.  Any operation or action, such as
   server driven restriping or load balancing, that changes the layout
   will result in a recall of the layout.  A layout is recalled by the
   CB_LAYOUTRECALL callback operation (see Section 20.3) and returned
   with LAYOUTRETURN (Section 18.44).  The CB_LAYOUTRECALL operation may
   recall a layout identified by a byte range, all the layouts
   associated with a file system (FSID), or all layouts associated with
   a client ID.  Section 12.5.5.2 discusses sequencing issues
   surrounding the getting, returning, and recalling of layouts.

   An iomode is also specified when recalling a layout.  Generally, the
   iomode in the recall request must match the layout being returned;
   for example, a recall with an iomode of LAYOUTIOMODE4_RW should cause
   the client to only return LAYOUTIOMODE4_RW layouts and not
   LAYOUTIOMODE4_READ layouts.  However, a special LAYOUTIOMODE4_ANY
   enumeration is defined to enable recalling a layout of any iomode; in
   other words, the client must return both read-only and read/write
   layouts.

   A REMOVE operation SHOULD cause the metadata server to recall the
   layout to prevent the client from accessing a non-existent file and
   to reclaim state stored on the client.  Since a REMOVE may be delayed
   until the last close of the file has occurred, the recall may also be
   delayed until this time.  After the last reference on the file has
   been released and the file has been removed, the client should no
   longer be able to perform I/O using the layout.  In the case of a
   files-based layout, the data server SHOULD return NFS4ERR_STALE in
   response to any operation on the removed file.

   Once a layout has been returned, the client MUST NOT send I/Os to the
   storage devices for the file, byte range, and iomode represented by
   the returned layout.  If a client does send an I/O to a storage
   device for which it does not hold a layout, the storage device SHOULD
   reject the I/O.

   Although pNFS does not alter the file data caching capabilities of
   clients, or their semantics, it recognizes that some clients may
   perform more aggressive write-behind caching to optimize the benefits
   provided by pNFS.  However, write-behind caching may negatively
   affect the latency in returning a layout in response to a
   CB_LAYOUTRECALL; this is similar to file delegations and the impact
   that file data caching has on DELEGRETURN.  Client implementations
   SHOULD limit the amount of unwritten data they have outstanding at
   any one time in order to prevent excessively long responses to
   CB_LAYOUTRECALL.  Once a layout is recalled, a server MUST wait one
   lease period before taking further action.  As soon as a lease period
   has passed, the server may choose to fence the client's access to the
   storage devices if the server perceives the client has taken too long
   to return a layout.  However, just as in the case of data delegation
   and DELEGRETURN, the server may choose to wait given that the client
   is showing forward progress on its way to returning the layout.  This
   forward progress can take the form of successful interaction with the
   storage devices or sub-portions of the layout being returned by the
   client.  The server can also limit exposure to these problems by
   limiting the byte ranges initially provided in the layouts and thus
   the amount of outstanding modified data.

12.5.5.1.  Layout Recall Callback Robustness

   It has been assumed thus far that pNFS client state for a file
   exactly matches the pNFS server state for that file and client
   regarding layout ranges and iomode.  This assumption leads to the
   implication that any callback results in a LAYOUTRETURN or set of
   LAYOUTRETURNs that exactly match the range in the callback, since
   both client and server agree about the state being maintained.
   However, it can be useful if this assumption does not always hold.
   For example:

   o  If conflicts that require callbacks are very rare, and a server
      can use a multi-file callback to recover per-client resources
      (e.g., via a FSID recall, or a multi-file recall within a single
      compound), the result may be significantly less client-server pNFS
      traffic.

   o  It may be useful for servers to maintain information about what
      ranges are held by a client on a coarse-grained basis, leading to
      the server's layout ranges being beyond those actually held by the
      client.  In the extreme, a server could manage conflicts on a per-
      file basis, only issuing whole-file callbacks even though clients
      may request and be granted sub-file ranges.

   o  It may be useful for clients to "forget" details about what
      layouts and ranges the client actually has, leading to the
      server's layout ranges being beyond what the client "thinks"
      it has.  As long as the client does not assume it has layouts that
      are beyond what the server has granted, this is a safe practice.
      When a client forgets what ranges and layouts it has, and it
      receives a CB_LAYOUTRECALL operation, the client MUST follow up
      with a LAYOUTRETURN for what the server recalled, or alternatively
      return the NFS4ERR_NOMATCHING_LAYOUT error if it has no layout to
      return in the recalled range.

   o  In order to avoid errors, it is vital that a client not assign
      itself layout permissions beyond what the server has granted and
      that the server not forget layout permissions that have been
      granted.  On the other hand, if a server believes that a client
      holds a layout that the client does not know about, it is useful
      for the client to cleanly indicate completion of the requested
      recall either by issuing a LAYOUTRETURN for the entire requested
      range or by returning an NFS4ERR_NOMATCHING_LAYOUT error to the
      CB_LAYOUTRECALL.

   Thus, in light of the above, it is useful for a server to be able to
   send callbacks for layout ranges it has not granted to a client, and
   for a client to return ranges it does not hold.  A pNFS client MUST
   always return layouts that comprise the full range specified by the
   recall.  Note, the full recalled layout range need not be returned as
   part of a single operation, but may be returned in portions.  This
   allows the client to stage the flushing of dirty data, layout
   commits, and returns.  Also, it indicates to the metadata server that
   the client is making progress.

   When a layout is returned, the client MUST NOT have any outstanding
   I/O requests to the storage devices involved in the layout.
   Rephrasing, the client MUST NOT return the layout while it has
   outstanding I/O requests to the storage device.

   Even with this requirement for the client, it is possible that I/O
   requests may be presented to a storage device no longer allowed to
   perform them.  Since the server has no strict control as to when the
   client will return the layout, the server may later decide to
   unilaterally revoke the client's access to the storage devices as
   provided by the layout.  In choosing to revoke access, the server
   must deal with the possibility of lingering I/O requests; those
   outstanding I/O requests are still in flight to storage devices
   identified by the revoked layout.  All layout type specifications
   MUST define whether unilateral layout revocation by the metadata
   server is supported; if it is, the specification must also describe
   how lingering writes are processed.  For example, storage devices
   identified by the revoked layout could be fenced off from the client
   that held the layout.

   In order to ensure client/server convergence with regard to layout
   state, the final LAYOUTRETURN operation in a sequence of LAYOUTRETURN
   operations for a particular recall, MUST specify the entire range
   being recalled, echoing the recalled layout type, iomode, recall/
   return type (FILE, FSID, or ALL), and byte range; even if layouts
   pertaining to partial ranges were previously returned.  In addition,
   if the client holds no layouts that overlap the range being
   recalled, the client should return the NFS4ERR_NOMATCHING_LAYOUT
   error code to CB_LAYOUTRECALL.  This allows the server to update its
   view of the client's layout state.
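
   As a non-normative illustration of these requirements, the following
   C sketch shows the decision a client makes when it finishes
   processing a CB_LAYOUTRECALL of a byte range; the enum and function
   names are hypothetical.

   #include <stdbool.h>
   #include <stdint.h>

   typedef uint64_t offset4;
   typedef uint64_t length4;

   enum recall_reply {
           DO_FINAL_LAYOUTRETURN,  /* send LAYOUTRETURN for full range */
           DO_NOMATCHING_LAYOUT    /* answer NFS4ERR_NOMATCHING_LAYOUT */
   };

   /*
    * Sketch: once the recall is processed, the final LAYOUTRETURN must
    * echo the entire recalled range (and the recalled layout type,
    * iomode, and return type), even if sub-ranges were already
    * returned by earlier LAYOUTRETURNs.  If no held layout overlapped
    * the recalled range, the recall is instead answered with
    * NFS4ERR_NOMATCHING_LAYOUT.
    */
   static enum recall_reply
   finish_recall(bool held_overlapping_layout,
                 offset4 recalled_offset, length4 recalled_length,
                 offset4 *ret_offset, length4 *ret_length)
   {
           if (!held_overlapping_layout)
                   return DO_NOMATCHING_LAYOUT;
           *ret_offset = recalled_offset;
           *ret_length = recalled_length;
           return DO_FINAL_LAYOUTRETURN;
   }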

12.5.5.2.  Sequencing of Layout Operations

   As with other stateful operations, pNFS requires the correct
   sequencing of layout operations. pNFS uses the "seqid" in the layout
   stateid to provide the correct sequencing between regular operations
   and callbacks.  It is the server's responsibility to avoid
   inconsistencies regarding the layouts provided and the client's
   responsibility to properly serialize its layout requests and layout
   returns.

12.5.5.2.1.  Layout Recall and Return Sequencing

   One critical issue with regard to layout operations sequencing
   concerns callbacks.  The protocol must defend against races between
   the reply to a LAYOUTGET or LAYOUTRETURN operation and a subsequent
   CB_LAYOUTRECALL.  A client MUST NOT process a CB_LAYOUTRECALL that
   implies one or more outstanding LAYOUTGET or LAYOUTRETURN operations
   to which the client has not yet received a reply.  The client detects
   such a CB_LAYOUTRECALL by examining the "seqid" field of the recall's
   layout stateid.  If the "seqid" is not one higher than what the
   client currently has recorded, and the client has at least one
   LAYOUTGET and/or LAYOUTRETURN operation outstanding, the client knows
   the server sent the CB_LAYOUTRECALL after sending a response to an
   outstanding LAYOUTGET or LAYOUTRETURN.  The client MUST wait before
   processing such a CB_LAYOUTRECALL until it processes all replies for
   outstanding LAYOUTGET and LAYOUTRETURN operations for the
   corresponding file with seqid less than the seqid given by
   CB_LAYOUTRECALL (lor_stateid, see Section 20.3).
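
   The check described above can be expressed as the following
   non-normative C sketch; the helper names are hypothetical, and the
   zero-skipping increment anticipates the seqid arithmetic of
   Section 12.5.5.2.1.4.

   #include <stdbool.h>
   #include <stdint.h>

   typedef uint32_t seqid4;

   /* Modulo-2^32 increment that skips zero (Section 12.5.5.2.1.4). */
   static seqid4
   next_seqid(seqid4 s)
   {
           return (s == UINT32_MAX) ? 1 : s + 1;
   }

   /*
    * Sketch: a CB_LAYOUTRECALL whose stateid seqid is not exactly one
    * greater than the seqid the client has recorded, while LAYOUTGET
    * and/or LAYOUTRETURN replies are still outstanding for the file,
    * was sent after replies the client has not yet processed; its
    * processing must be deferred until those replies are handled.
    */
   static bool
   must_defer_recall(seqid4 recorded_seqid, seqid4 recall_seqid,
                     unsigned outstanding_layout_ops)
   {
           if (recall_seqid == next_seqid(recorded_seqid))
                   return false;   /* recall is current; process it */
           return outstanding_layout_ops > 0;
   }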

   In addition to the seqid-based mechanism, Section 2.10.6.3 describes
   the sessions mechanism for allowing the client to detect callback
   race conditions and delay processing such a CB_LAYOUTRECALL.  The
   server MAY reference conflicting operations in the CB_SEQUENCE that
   precedes the CB_LAYOUTRECALL.  Because the server has already sent
   replies for these operations before issuing the callback, the replies
   may race with the CB_LAYOUTRECALL.  The client MUST wait for all the
   referenced calls to complete and update its view of the layout state
   before processing the CB_LAYOUTRECALL.

12.5.5.2.1.1.  Get/Return Sequencing

   The protocol allows the client to send concurrent LAYOUTGET and
   LAYOUTRETURN operations to the server.  The protocol does not provide
   any means for the server to process the requests in the same order in
   which they were created.  However, through the use of the "seqid"
   field in the layout stateid, the client can determine the order in
   which parallel outstanding operations were processed by the server.
   Thus, when a layout retrieved by an outstanding LAYOUTGET operation
   intersects with a layout returned by an outstanding LAYOUTRETURN on
   the same file, the order in which the two conflicting operations are
   processed determines the final state of the overlapping layout.  The
   order is determined by the "seqid" returned in each operation: the
   operation with the higher seqid was executed later.

   It is permissible for the client to send in parallel multiple
   LAYOUTGET operations for the same file or multiple LAYOUTRETURN
   operations for the same file, and a mix of both.

   It is permissible for the client to use the current stateid (see
   Section 16.2.3.1.2) for LAYOUTGET operations, for example, when
   compounding LAYOUTGETs or compounding OPEN and LAYOUTGETs.  It is
   also permissible to use the current stateid when compounding
   LAYOUTRETURNs.

   It is permissible for the client to use the current stateid when
   combining LAYOUTRETURN and LAYOUTGET operations for the same file in
   the same COMPOUND request since the server MUST process these in
   order.  However, if a client does send such COMPOUND requests, it
   MUST NOT have more than one outstanding for the same file at the same
   time and MUST NOT have other LAYOUTGET or LAYOUTRETURN operations
   outstanding at the same time for that same file.

12.5.5.2.1.2.  Client Considerations

   Consider a pNFS client that has sent a LAYOUTGET and that, before it
   receives the reply to the LAYOUTGET, receives a CB_LAYOUTRECALL for
   the same file with an overlapping range.  There are two
   possibilities, which the client can distinguish via the layout
   stateid in the recall.

   1.  The server processed the LAYOUTGET before issuing the recall, so
       the LAYOUTGET must be waited for because it may be carrying
       layout information that will need to be returned to deal with the
       CB_LAYOUTRECALL.

   2.  The server sent the callback before receiving the LAYOUTGET.  The
       server will not respond to the LAYOUTGET until the
       CB_LAYOUTRECALL is processed.

   If these possibilities cannot be distinguished, a deadlock could
   result, as the client must wait for the LAYOUTGET response before
   processing the recall in the first case, but that response will not
   arrive until after the recall is processed in the second case.  Note
   that in the first case, the "seqid" in the layout stateid of the
   recall is two greater than what the client has recorded, whereas in
   the second case it is only one greater than what the client has
   recorded.  This difference allows the client to disambiguate between
   the two cases and know precisely which possibility applies.

   In case 1 the client knows it needs to wait for the LAYOUTGET
   response before processing the recall (or the client can return
   NFS4ERR_DELAY).

   In case 2 the client will not wait for the LAYOUTGET response before
   processing the recall, because waiting would cause deadlock.
   Therefore, the action at the client will only require waiting in the
   case that the client has not yet seen the server's earlier responses
   to the LAYOUTGET operation(s).

   The recall process can be considered completed when the final
   LAYOUTRETURN operation for the recalled range is completed.  The
   LAYOUTRETURN uses the layout stateid (with seqid) specified in
   CB_LAYOUTRECALL.  If the client uses multiple LAYOUTRETURNs in
   processing the recall, the first LAYOUTRETURN will use the layout
   stateid as specified in CB_LAYOUTRECALL.  Subsequent LAYOUTRETURNs
   will use the highest seqid, as is the usual case.

12.5.5.2.1.3.  Server Considerations

   Consider a race from the metadata server's point of view.  The
   metadata server has sent a CB_LAYOUTRECALL and receives an
   overlapping LAYOUTGET for the same file before the LAYOUTRETURN(s)
   that respond to the CB_LAYOUTRECALL.  There are three cases:

   1.  The client sent the LAYOUTGET before processing the
       CB_LAYOUTRECALL.  The "seqid" in the layout stateid of LAYOUTGET
       is two less than the "seqid" in CB_LAYOUTRECALL.  The server
       returns NFS4ERR_RECALLCONFLICT to the client, which indicates to
       the client that there is a pending recall.

   2.  The client sent the LAYOUTGET after processing the
       CB_LAYOUTRECALL, but the LAYOUTGET arrived before the
       LAYOUTRETURN and the response to CB_LAYOUTRECALL that completed
       that processing.  The "seqid" in the layout stateid of LAYOUTGET
       is equal to or greater than that of the "seqid" in
       CB_LAYOUTRECALL.  The server has not received a response to the
       CB_LAYOUTRECALL, so it returns NFS4ERR_RECALLCONFLICT.

   3.  The client sent the LAYOUTGET after processing the
       CB_LAYOUTRECALL, the server received the CB_LAYOUTRECALL
       response, but the LAYOUTGET arrived before the LAYOUTRETURN that
       completed that processing.  The "seqid" in the layout stateid of
       LAYOUTGET is equal to that of the "seqid" in CB_LAYOUTRECALL.
       The server has received a response to the CB_LAYOUTRECALL, so it
       returns NFS4ERR_RETURNCONFLICT.
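
   The three cases above can be summarized by the following
   non-normative C sketch of the metadata server's choice of error; the
   names are hypothetical, and a real implementation would perform the
   seqid comparison using the zero-skipping modulo arithmetic of
   Section 12.5.5.2.1.4.

   #include <stdbool.h>
   #include <stdint.h>

   typedef uint32_t seqid4;

   enum layoutget_disposition {
           PROCESS_LAYOUTGET,      /* no recall race                */
           RECALLCONFLICT,         /* return NFS4ERR_RECALLCONFLICT */
           RETURNCONFLICT          /* return NFS4ERR_RETURNCONFLICT */
   };

   /*
    * Sketch: handling of a LAYOUTGET that overlaps an outstanding
    * CB_LAYOUTRECALL.  recall_seqid is the seqid sent in the recall;
    * arg_seqid is the seqid in the LAYOUTGET's layout stateid.
    */
   static enum layoutget_disposition
   layoutget_vs_recall(bool recall_outstanding, bool cb_reply_received,
                       seqid4 recall_seqid, seqid4 arg_seqid)
   {
           if (!recall_outstanding)
                   return PROCESS_LAYOUTGET;
           if (arg_seqid == recall_seqid - 2)  /* case 1 */
                   return RECALLCONFLICT;
           if (!cb_reply_received)             /* case 2 */
                   return RECALLCONFLICT;
           return RETURNCONFLICT;              /* case 3 */
   }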

12.5.5.2.1.4.  Wraparound and Validation of Seqid

   The rules for layout stateid processing differ from other stateids in
   the protocol because the "seqid" value cannot be zero and the
   stateid's "seqid" value changes in a CB_LAYOUTRECALL operation.  The
   non-zero requirement combined with the inherent parallelism of layout
   operations means that a set of LAYOUTGET and LAYOUTRETURN operations
   may contain the same value for "seqid".  The server uses a slightly
   modified version of the modulo arithmetic as described in
   Section 2.10.6.1 when incrementing the layout stateid's "seqid".  The
   modification to that modulo arithmetic description is to not use
   zero.  The modulo arithmetic is also used for the comparisons of
   "seqid" values in the processing of CB_LAYOUTRECALL events as
   described above in Section 12.5.5.2.1.3.

   Just as the server validates the "seqid" in the event of
   CB_LAYOUTRECALL usage, as described in Section 12.5.5.2.1.3, the
   server also validates the "seqid" value to ensure that it is within
   an appropriate range.  This range represents the degree of
   parallelism the server supports for layout stateids.  If the client
   is sending multiple layout operations to the server in parallel, by
   definition, the "seqid" value in the supplied stateid will not be the
   current "seqid" as held by the server.  The range of parallelism
   spans from the highest or current "seqid" to a "seqid" value in the
   past.  To assist in the discussion, the server's current "seqid"
   value for a layout stateid is defined as: SERVER_CURRENT_SEQID.  The
   lowest "seqid" value that is acceptable to the server is represented
   by PAST_SEQID.  And the value for the range of valid "seqid"s or
   range of parallelism is VALID_SEQID_RANGE.  Therefore, the following
   holds: VALID_SEQID_RANGE = SERVER_CURRENT_SEQID - PAST_SEQID.  In the
   following, all arithmetic is the modulo arithmetic as described
   above.

   The server MUST support a minimum VALID_SEQID_RANGE.  The minimum is
   defined as: VALID_SEQID_RANGE = summation of 1..N of
   (ca_maxoperations(i) - 1) where N is the number of session fore
   channels and ca_maxoperations(i) is the value of the ca_maxoperations
   returned from CREATE_SESSION of the i'th session.  The reason for
   minus 1 is to allow for the required SEQUENCE operation.  The server
   MAY support a VALID_SEQID_RANGE value larger than the minimum.  The
   maximum VALID_SEQID_RANGE is (2 ^ 32 - 2) (accounts for 0 not being a
   valid "seqid" value).

   If the server finds the "seqid" is zero, the NFS4ERR_BAD_STATEID
   error is returned to the client.  The server further validates the
   "seqid" to ensure it is within the range of parallelism,
   VALID_SEQID_RANGE.  If the "seqid" value is outside of that range,
   the error NFS4ERR_OLD_STATEID is returned to the client.  Upon
   receipt of NFS4ERR_OLD_STATEID, the client updates the stateid in the
   layout request based on processing of other layout requests and re-
   sends the operation to the server.
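
   The following non-normative C sketch illustrates the zero-skipping
   seqid arithmetic and the validation described above; the numeric
   error values are those assigned to NFS4ERR_BAD_STATEID and
   NFS4ERR_OLD_STATEID, and the function names are hypothetical.

   #include <stdint.h>

   typedef uint32_t seqid4;

   /* seqid arithmetic is modulo 2^32 but never produces zero. */
   static seqid4
   seqid_increment(seqid4 s)
   {
           return (s == UINT32_MAX) ? 1 : s + 1;
   }

   /* Number of increments from "from" forward to "to", skipping 0. */
   static uint32_t
   seqid_distance(seqid4 from, seqid4 to)
   {
           uint32_t d = to - from;

           if (to < from)          /* path wrapped past the skipped 0 */
                   d -= 1;
           return d;
   }

   /*
    * Sketch of the validation above.  VALID_SEQID_RANGE is at least
    * the sum over all session fore channels of (ca_maxoperations - 1).
    * Returns 0 on success, 10025 (NFS4ERR_BAD_STATEID) for a zero
    * seqid, or 10024 (NFS4ERR_OLD_STATEID) for a seqid outside the
    * range of parallelism.
    */
   static int
   validate_layout_seqid(seqid4 arg_seqid, seqid4 server_seqid,
                         uint32_t valid_seqid_range)
   {
           if (arg_seqid == 0)
                   return 10025;
           if (seqid_distance(arg_seqid, server_seqid) >
               valid_seqid_range)
                   return 10024;
           return 0;
   }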

12.5.5.2.1.5.  Bulk Recall and Return

   pNFS supports recalling and returning all layouts that are for files
   belonging to a particular fsid (LAYOUTRECALL4_FSID,
   LAYOUTRETURN4_FSID) or client ID (LAYOUTRECALL4_ALL,
   LAYOUTRETURN4_ALL).  There are no "bulk" stateids, so detection of
   races via the seqid is not possible.  The server MUST NOT initiate
   bulk recall while another recall is in progress, or the corresponding
   LAYOUTRETURN is in progress or pending.  In the event the server
   sends a bulk recall while the client has pending or in progress
   LAYOUTRETURN, CB_LAYOUTRECALL, or LAYOUTGET, the client returns
   NFS4ERR_DELAY.  In the event the client sends a LAYOUTGET or
   LAYOUTRETURN while a bulk recall is in progress, the server returns
   NFS4ERR_RECALLCONFLICT.  If the client sends a LAYOUTGET or
   LAYOUTRETURN after the server receives NFS4ERR_DELAY from a bulk
   recall, then to ensure forward progress, the server MAY return
   NFS4ERR_RECALLCONFLICT.

   Once a CB_LAYOUTRECALL of LAYOUTRECALL4_ALL is sent, the server MUST
   NOT allow the client to use any layout stateid except for
   LAYOUTCOMMIT operations.  Once the client receives a CB_LAYOUTRECALL
   of LAYOUTRECALL4_ALL, it MUST NOT use any layout stateid except for
   LAYOUTCOMMIT operations.  Once a LAYOUTRETURN of LAYOUTRETURN4_ALL is
   sent, all layout stateids granted to the client ID are freed.  The
   client MUST NOT use the layout stateids again.  It MUST use LAYOUTGET
   to obtain new layout stateids.

   Once a CB_LAYOUTRECALL of LAYOUTRECALL4_FSID is sent, the server MUST
   NOT allow the client to use any layout stateid that refers to a file
   with the specified fsid except for LAYOUTCOMMIT operations.  Once the
   client receives a CB_LAYOUTRECALL of LAYOUTRECALL4_FSID, it MUST NOT
   use any layout stateid that refers to a file with the specified fsid
   except for LAYOUTCOMMIT operations.  Once a LAYOUTRETURN of
   LAYOUTRETURN4_FSID is sent, all layout stateids granted to the
   referenced fsid are freed.  The client MUST NOT use the layout
   stateids for files with the referenced fsid again.  It MUST use
   LAYOUTGET to obtain new layout stateids for files with the referenced
   fsid.

   If the server has sent a bulk CB_LAYOUTRECALL, and receives a
   LAYOUTGET, or a LAYOUTRETURN with a stateid, the server MUST return
   NFS4ERR_RECALLCONFLICT.  If the server has sent a bulk
   CB_LAYOUTRECALL, and receives a LAYOUTRETURN with an lr_returntype
   that is not equal to the lor_recalltype of the CB_LAYOUTRECALL, the
   server MUST return NFS4ERR_RECALLCONFLICT.

12.5.6.  Revoking Layouts

   Parallel NFS permits servers to revoke layouts from clients that fail
   to respond to recalls and/or fail to renew their lease in time.
   Whether the server revokes the layout or not depends on the layout
   type, and what actions are taken with respect to the client's I/O to
   data servers is also layout type specific.

12.5.7.  Metadata Server Write Propagation

   Asynchronous writes written through the metadata server may be
   propagated lazily to the storage devices.  For data written
   asynchronously through the metadata server, a client performing a
   read at the appropriate storage device is not guaranteed to see the
   newly written data until a COMMIT occurs at the metadata server.
   While the write is pending, reads to the storage device may give out
   either the old data, the new data, or a mixture of new and old.  Upon
   completion of a synchronous WRITE or COMMIT (for asynchronously
   written data), the metadata server MUST ensure that storage devices
   give out the new data and that the data has been written to stable
   storage.  If the server implements its storage in any way such that
   it cannot obey these constraints, then it MUST recall the layouts to
   prevent reads being done that cannot be handled correctly.  Note that
   the layouts MUST be recalled prior to the server responding to the
   associated WRITE operations.

12.6.  pNFS Mechanics

   This section describes the operations flow taken by a pNFS client to
   a metadata server and storage device.

   When a pNFS client encounters a new FSID, it sends a GETATTR to the
   NFSv4.1 server for the fs_layout_type (Section 5.12.1) attribute.  If
   the attribute returns at least one layout type, and the layout types
   returned are among the set supported by the client, the client knows
   that pNFS is a possibility for the file system.  If, from the server
   that returned the new FSID, the client does not have a client ID that
   came from an EXCHANGE_ID result that returned
   EXCHGID4_FLAG_USE_PNFS_MDS, it MUST send an EXCHANGE_ID to the server
   with the EXCHGID4_FLAG_USE_PNFS_MDS bit set.  If the server's
   response does not have EXCHGID4_FLAG_USE_PNFS_MDS, then contrary to
   what the fs_layout_type attribute said, the server does not support
   pNFS, and the client will not be able to use pNFS to that server; in
   this case, the server MUST return NFS4ERR_NOTSUPP in response to any
   pNFS operation.

   The client then creates a session, requesting a persistent session,
   so that exclusive creates can be done with a single round trip via
   createmode4 of GUARDED4.  If the session ends up not being
   persistent, the client will use EXCLUSIVE4_1 for exclusive creates.

   If a file is to be created on a pNFS-enabled file system, the client
   uses the OPEN operation.  Among the normal set of attributes that may
   be provided upon OPEN for file creation, there is an OPTIONAL
   layout_hint attribute.  The client's use of layout_hint allows the
   client to express its preference for a layout type and its associated
   layout details.  The use of a createmode4 of UNCHECKED4, GUARDED4, or
   EXCLUSIVE4_1 will allow the client to provide the layout_hint
   attribute at create time.  The client MUST NOT use EXCLUSIVE4 (see
   Table 10).  It is RECOMMENDED that the client combine a GETATTR
   operation with the OPEN in the same COMPOUND.  The GETATTR may then
   retrieve the layout_type attribute for the newly created file.  The
   client will then know what layout type the server has chosen for the
   file and therefore what storage protocol the client must use.

   If the client wants to open an existing file, then it also includes a
   GETATTR to determine what layout type the file supports.

   The GETATTR in either the file creation or plain file open case can
   also include the layout_blksize and layout_alignment attributes so
   that the client can determine optimal offsets and lengths for I/O on
   the file.

   Assuming the client supports the layout type returned by GETATTR and
   it chooses to use pNFS for data access, it then sends LAYOUTGET using
   the filehandle and stateid returned by OPEN, specifying the range it
   wants to do I/O on.  The response is a layout, which may be a subset
   of the range for which the client asked.  It also includes device IDs
   and a description of how data is organized (or in the case of
   writing, how data is to be organized) across the devices.  The device
   IDs and data description are encoded in a format that is specific to
   the layout type, but one that the client is expected to understand.

   When the client wants to send an I/O, it determines which device ID
   it needs to send the I/O command to by examining the data description
   in the layout.  It then sends a GETDEVICEINFO to find the device
   address(es) of the device ID.  The client then sends the I/O request
   to one of the device ID's device addresses, using the storage protocol
   defined for the layout type.  Note that if a client has multiple I/Os
   to send, these I/O requests may be done in parallel.

   If the I/O was a WRITE, then at some point the client may want to use
   LAYOUTCOMMIT to commit the modification time and the new size of the
   file (if it believes it extended the file size) to the metadata
   server and the modified data to the file system.
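
   The flow just described can be summarized by the following
   non-normative C outline.  The pnfs_* helpers merely stand for
   sending the named NFSv4.1 operations; they are hypothetical
   placeholders, not a real client API.

   #include <stdbool.h>
   #include <stddef.h>

   struct pnfs_fs;         /* per new FSID        */
   struct pnfs_file;       /* per open file       */
   struct pnfs_layout;     /* result of LAYOUTGET */

   /* Placeholders for the operations named in the text. */
   bool pnfs_getattr_fs_layout_type(struct pnfs_fs *fs);
   bool pnfs_exchange_id_use_pnfs_mds(struct pnfs_fs *fs);
   struct pnfs_file *pnfs_open_with_layout_hint(struct pnfs_fs *fs,
                                                const char *name);
   struct pnfs_layout *pnfs_layoutget(struct pnfs_file *f);
   bool pnfs_getdeviceinfo_and_write(struct pnfs_layout *l);
   void pnfs_layoutcommit(struct pnfs_file *f);

   /* Returns false if the client falls back to ordinary NFSv4.1 I/O. */
   bool
   pnfs_write_flow(struct pnfs_fs *fs, const char *name)
   {
           struct pnfs_file *f;
           struct pnfs_layout *l;

           if (!pnfs_getattr_fs_layout_type(fs))   /* fs_layout_type */
                   return false;
           if (!pnfs_exchange_id_use_pnfs_mds(fs)) /* EXCHANGE_ID    */
                   return false;
           f = pnfs_open_with_layout_hint(fs, name); /* OPEN+GETATTR */
           if (f == NULL)
                   return false;
           l = pnfs_layoutget(f);                    /* LAYOUTGET    */
           if (l == NULL)
                   return false;
           if (!pnfs_getdeviceinfo_and_write(l)) /* GETDEVICEINFO+IO */
                   return false;
           pnfs_layoutcommit(f);                     /* LAYOUTCOMMIT */
           return true;
   }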

12.7.  Recovery

   Recovery is complicated by the distributed nature of the pNFS
   protocol.  In general, crash recovery for layouts is similar to crash
   recovery for delegations in the base NFSv4.1 protocol.  However, the
   client's ability to perform I/O without contacting the metadata
   server introduces subtleties that must be handled correctly if the
   possibility of file system corruption is to be avoided.

12.7.1.  Recovery from Client Restart

   Client recovery for layouts is similar to client recovery for other
   lock and delegation state.  When a pNFS client restarts, it will
   lose all information about the layouts that it previously owned.
   There are two methods by which the server can reclaim these resources
   and allow otherwise conflicting layouts to be provided to other
   clients.

   The first is through the expiry of the client's lease.  If the client
   recovery time is longer than the lease period, the client's lease
   will expire and the server will know that state may be released.  For
   layouts the server may release the state immediately upon lease
   expiry or it may allow the layout to persist awaiting possible lease
   revival, as long as no other layout conflicts.

   The second is through the client restarting in less time than it
   takes for the lease period to expire.  In such a case, the client
   will contact the server through the standard EXCHANGE_ID protocol.
   The server will find that the client's co_ownerid matches the
   co_ownerid of the previous client invocation, but that the verifier
   is different.  The server uses this as a signal to release all layout
   state associated with the client's previous invocation.  In this
   scenario, the data written by the client but not covered by a
   successful LAYOUTCOMMIT is in an undefined state; it may have been
   written or it may now be lost.  This is acceptable behavior and it is
   the client's responsibility to use LAYOUTCOMMIT to achieve the
   desired level of stability.

12.7.2.  Dealing with Lease Expiration on the Client

   If a client believes its lease has expired, it MUST NOT send I/O to
   the storage device until it has validated its lease.  The client can
   send a SEQUENCE operation to the metadata server.  If the SEQUENCE
   operation is successful, but sr_status_flag has
   SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED,
   SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED, or
   SEQ4_STATUS_ADMIN_STATE_REVOKED set, the client MUST NOT use
   currently held layouts.  The client has two choices to recover from
   the lease expiration.  First, for all modified but uncommitted data,
   write it to the metadata server using the FILE_SYNC4 flag for the
   WRITEs or WRITE and COMMIT.  Second, the client reestablishes a
   client ID and session with the server, obtains new layouts and
   device ID to device address mappings for the modified data ranges,
   and then writes the data to the storage devices with the newly
   obtained layouts.

   If sr_status_flags from the metadata server has
   SEQ4_STATUS_RESTART_RECLAIM_NEEDED set (or SEQUENCE returns
   NFS4ERR_BAD_SESSION and CREATE_SESSION returns
   NFS4ERR_STALE_CLIENTID) then the metadata server has restarted, and
   the client SHOULD recover using the methods described in
   Section 12.7.4.

   If sr_status_flags from the metadata server has
   SEQ4_STATUS_LEASE_MOVED set, then the client recovers by following
   the procedure described in Section 11.7.7.1.  After that, the client
   may get an indication that the layout state was not moved with the
   file system.  The client recovers as in the other applicable
   situations discussed in Paragraph 1 or Paragraph 2 of this section.

   If sr_status_flags reports no loss of state, then the lease for the
   layouts the client has are valid and renewed, and the client can once
   again send I/O requests to the storage devices.
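
   A non-normative C sketch of this decision follows.  The booleans are
   assumed to have been derived from the sr_status_flags bits named
   above (or from the NFS4ERR_BAD_SESSION / NFS4ERR_STALE_CLIENTID
   errors); the names are hypothetical.

   #include <stdbool.h>

   enum lease_recovery_action {
           RESUME_IO,              /* lease valid; layouts usable     */
           REWRITE_OR_REFETCH,     /* state revoked; see this section */
           MDS_RESTART_RECOVERY,   /* recover per Section 12.7.4      */
           LEASE_MOVED_RECOVERY    /* recover per Section 11.7.7.1    */
   };

   /* Sketch: choose a recovery path from the SEQUENCE result. */
   static enum lease_recovery_action
   after_sequence(bool restart_reclaim_needed, bool lease_moved,
                  bool state_revoked)
   {
           if (restart_reclaim_needed)
                   return MDS_RESTART_RECOVERY;
           if (lease_moved)
                   return LEASE_MOVED_RECOVERY;
           if (state_revoked)
                   return REWRITE_OR_REFETCH;
           return RESUME_IO;       /* no loss of state reported */
   }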

   While clients SHOULD NOT send I/Os to storage devices that may extend
   past the lease expiration time period, this is not always possible;
   consider, for example, an extended network partition that starts
   after the I/O is sent and does not heal until the I/O request is
   received by the storage device.  Thus, the metadata server and/or
   storage devices are
   responsible for protecting themselves from I/Os that are sent before
   the lease expires, but arrive after the lease expires.  See
   Section 12.7.3.

12.7.3.  Dealing with Loss of Layout State on the Metadata Server

   This is a description of the case where all of the following are
   true:

   o  the metadata server has not restarted

   o  a pNFS client's layouts have been discarded (usually because the
      client's lease expired) and are invalid

   o  an I/O from the pNFS client arrives at the storage device

   The metadata server and its storage devices MUST solve this by
   fencing the client.  In other words, they must prevent the execution
   of I/O operations from the client to the storage devices after
   layout state loss.  The details of how fencing is done are specific
   to the layout type.  The solution for NFSv4.1 file-based layouts is
   described in Section 13.11, and for other layout types in their
   respective
   external specification documents.

12.7.4.  Recovery from Metadata Server Restart

   The pNFS client will discover that the metadata server has restarted
   via the methods described in Section 8.4.2 and discussed in a pNFS-
   specific context in Paragraph 2 of Section 12.7.2.  The client MUST
   stop using layouts and delete the device ID to device address
   mappings it previously received from the metadata server.  Having
   done that, if the client wrote data to the storage device without
   committing the layouts via LAYOUTCOMMIT, then the client has
   additional work to do in order to have the client, metadata server
   and storage device(s) all synchronized on the state of the data.

   o  If the client has data still modified and unwritten in the
      client's memory, the client has only two choices.

      1.  The client can obtain a layout via LAYOUTGET after the
          server's grace period and write the data to the storage
          devices.

      2.  The client can write that data through the metadata server
          using the WRITE (Section 18.32) operation, and then obtain
          layouts as desired.

   o  If the client asynchronously wrote data to the storage device, but
      still has a copy of the data in its memory, then it has available
      to it the recovery options listed above in the previous bullet
      point.  If the metadata server is also in its grace period, the
      client has available to it the options below in the next bullet
      item.

   o  The client does not have a copy of the data in its memory and the
      metadata server is still in its grace period.  The client cannot
      use LAYOUTGET (within or outside the grace period) to reclaim a
      layout because the contents of the response from LAYOUTGET may not
      match what it had previously.  The range might be different or it
      might get the same range but the content of the layout might be
      different.  Even if the content of the layout appears to be the
      same, the device IDs may map to different device addresses, and
      even if the device addresses are the same, the device addresses
      could have been assigned to a different storage device.  The
      option of retrieving the data from the storage device and writing
      it to the metadata server per the recovery scenario described
      above is not available because, again, the mappings of range to
      device ID, device ID to device address, device address to physical
      device are stale and new mappings via new LAYOUTGET do not solve
      the problem.

      The only recovery option for this scenario is to send a
      LAYOUTCOMMIT in reclaim mode, which the metadata server will
      accept as long as it is in its grace period.  The use of
      LAYOUTCOMMIT in reclaim mode informs the metadata server that the
      layout has changed.  It is critical that the metadata server
      receive this information before its grace period ends, and thus
      before it
      starts allowing updates to the file system.

      To send LAYOUTCOMMIT in reclaim mode, the client sets the
      loca_reclaim field of the operation's arguments (Section 18.42.1)
      to TRUE.  During the metadata server's recovery grace period (and
      only during the recovery grace period) the metadata server is
      prepared to accept LAYOUTCOMMIT requests with the loca_reclaim
      field set to TRUE.

      When loca_reclaim is TRUE, the client is attempting to commit
      changes to the layout that occurred prior to the restart of the
      metadata server.  The metadata server applies some consistency
      checks on the loca_layoutupdate field of the arguments to
      determine whether the client can commit the data written to the
      storage device to the file system.  The loca_layoutupdate field is
      of data type layoutupdate4, and contains layout type-specific
      content (in the lou_body field of loca_layoutupdate).  The layout
      type-specific information that loca_layoutupdate might have is
      discussed in Section 12.5.4.3.  If the metadata server's
      consistency checks on loca_layoutupdate succeed, then the metadata
      server MUST commit the data (as described by the loca_offset,
      loca_length, and loca_layoutupdate fields of the arguments) that
      was written to the storage device.  If the metadata server's
      consistency checks on loca_layoutupdate fail, the metadata server
      rejects the LAYOUTCOMMIT operation, and makes no changes to the
      file system.  However, any time LAYOUTCOMMIT with loca_reclaim
      TRUE fails, the pNFS client has lost all the data in the range
      defined by <loca_offset, loca_length>.  A client can defend
      against this risk by caching all data in its memory, whether
      written synchronously or asynchronously, and not releasing the
      cached data until a successful LAYOUTCOMMIT.  This condition does
      not hold true for all layout types; for example, files-based
      storage devices need not suffer from this limitation.

   o  The client does not have a copy of the data in its memory and the
      metadata server is no longer in its grace period; i.e. the
      metadata server returns NFS4ERR_NO_GRACE.  As with the scenario in
      the above bullet item, the failure of LAYOUTCOMMIT means the data
      in the range <loca_offset, loca_length> is lost.  The defense
      against
      the risk is the same; cache all written data on the client until a
      successful LAYOUTCOMMIT.

12.7.5.  Operations During Metadata Server Grace Period

   Some of the recovery scenarios thus far have noted that some
   operations, namely WRITE and LAYOUTGET, might be permitted during the
   metadata server's grace period.  The metadata server may allow these
   operations during its grace period.  For LAYOUTGET, the metadata
   server must reliably determine that servicing such a request will not
   conflict with an impending LAYOUTCOMMIT reclaim request.  For WRITE,
   it must reliably determine that it will not conflict with an
   impending OPEN, or with a LOCK where the file has mandatory file
   locking enabled.

   As mentioned previously, some operations, namely WRITE and
   LAYOUTGET, may be rejected during the metadata server's grace period,
   because to
   provide simple, valid handling during the grace period, the easiest
   method is to simply reject all non-reclaim pNFS requests and WRITE
   operations by returning the NFS4ERR_GRACE error.  However, depending
   on the storage protocol (which is specific to the layout type) and
   metadata server implementation, the metadata server may be able to
   determine that a particular request is safe.  For example, a metadata
   server may save provisional allocation mappings for each file to
   stable storage, as well as information about potentially conflicting
   OPEN share modes and mandatory byte-range locks that might have been
   in effect at the time of restart, and use this information during the
   recovery grace period to determine that a WRITE request is safe.

12.7.6.  Storage Device Recovery

   Recovery from storage device restart is mostly dependent upon the
   layout type in use.  However, there are a few general techniques a
   client can use if it discovers a storage device has crashed while
   holding modified, uncommitted data that was asynchronously written.
   First and foremost, it is important to realize that the client is the
   only one that has the information necessary to recover non-committed
   data, since it holds the modified data and probably nothing else
   does.  Second, the best solution is for the client to err on the side
   of caution and attempt to re-write the modified data through another
   path.

   The client SHOULD immediately write the data to the metadata server,
   with the stable field in the WRITE4args set to FILE_SYNC4.  Once it
   does this, there is no need to wait for the original storage device.

12.8.  Metadata and Storage Device Roles

   If the same physical hardware is used to implement both a metadata
   server and storage device, then the same hardware entity is to be
   understood to be implementing two distinct roles and it is important
   that it be clearly understood on behalf of which role the hardware is
   executing at any given time.

   Two sub-cases can be distinguished.

   1.  The storage device uses NFSv4.1 as the storage protocol, i.e.,
       the same physical hardware is used to implement both a metadata
       server and data server.  See Section 13.1 for a description of
       how multiple roles are handled.

   2.  The storage device does not use NFSv4.1 as the storage protocol,
       and the same physical hardware is used to implement both a
       metadata server and storage device.  Whether distinct network
       addresses are used to access the metadata server and storage
       device is immaterial, because it is always clear to the pNFS
       client and server, from the upper-layer protocol being used
       (NFSv4.1 or non-NFSv4.1), to which role the request to the
       common server network address is directed.

12.9.  Security Considerations for pNFS

   pNFS separates file system metadata and data and provides access to
   both.  There are pNFS-specific operations (listed in Section 12.3)
   that provide access to the metadata; all existing NFSv4.1
   conventional (non-pNFS) security mechanisms and features apply to
   accessing the metadata.  The combination of components in a pNFS
   system (see Figure 1) is required to preserve the security properties
   of NFSv4.1 with respect to an entity accessing a storage device from
   a client, including security countermeasures to defend against
   threats
   that NFSv4.1 provides defenses for in environments where these
   threats are considered significant.

   In some cases, the security countermeasures for connections to
   storage devices may take the form of physical isolation or a
   recommendation not to use pNFS in an environment.  For example, it
   may be impractical to provide confidentiality protection for some
   storage protocols to protect against eavesdropping; in environments
   where eavesdropping on such protocols is of sufficient concern to
   require countermeasures, physical isolation of the communication
   channel (e.g., via direct connection from client(s) to storage
   device(s)) and/or a decision to forgo use of pNFS (e.g., and fall
   back to conventional NFSv4.1) may be appropriate courses of action.

   Where communication with storage devices is subject to the same
   threats as client to metadata server communication, the protocols
   used for that communication need to provide security mechanisms as
   strong as or no weaker than those available via RPCSEC_GSS for
   NFSv4.1.  Except for the storage protocol used for the
   LAYOUT4_NFSV4_1_FILES layout (see Section 13), i.e. except for
   NFSv4.1, it is beyond the scope of this document to specify the
   security mechanisms for storage access protocols.

   pNFS implementations MUST NOT remove NFSv4.1's access controls.  The
   combination of clients, storage devices, and the metadata server are
   responsible for ensuring that all client to storage device file data
   access respects NFSv4.1's ACLs and file open modes.  This entails
   performing both of these checks on every access in the client, the
   storage device, or both (as applicable; when the storage device is an
   NFSv4.1 server, the storage device is ultimately responsible for
   controlling access as described in Section 13.9.2).  If a pNFS
   configuration performs these checks only in the client, the risk of a
   misbehaving client obtaining unauthorized access is an important
   consideration in determining when it is appropriate to use such a
   pNFS configuration.  Such layout types SHOULD NOT be used when
   client-only access checks do not provide sufficient assurance that
   NFSv4.1 access control is being applied correctly.  (This is not a
   problem for the file layout type described in Section 13 because the
   storage access protocol for LAYOUT4_NFSV4_1_FILES is NFSv4.1, and
   thus the security model for storage device access via
   LAYOUT4_NFSV4_1_FILES is the same as that of the metadata server.)
   For handling of access control specific to a layout, the reader
   should examine the layout specification, such as the NFSv4.1/
   files-based layout (Section 13) of this document, the blocks layout
   [40], and objects layout [39].


13.  NFSv4.1 as a Storage Protocol in pNFS: the File Layout Type

   This section describes the semantics and format of NFSv4.1 file-based
   layouts for pNFS.  NFSv4.1 file-based layouts use the
   LAYOUT4_NFSV4_1_FILES layout type.  The LAYOUT4_NFSV4_1_FILES type
   defines how file data is striped across multiple NFSv4.1 data
   servers.

13.1.  Client ID and Session Considerations

   Sessions are a REQUIRED feature of NFSv4.1, and this extends to both
   the metadata server and file-based (NFSv4.1-based) data servers.

   The role a server plays in pNFS is determined by the result it
   returns from EXCHANGE_ID.  The roles are:

   o  metadata server (EXCHGID4_FLAG_USE_PNFS_MDS is set in the result
      eir_flags),

   o  data server (EXCHGID4_FLAG_USE_PNFS_DS)

   o  non-metadata server (EXCHGID4_FLAG_USE_NON_PNFS).  This is an
      NFSv4.1 server that does not support operations (e.g.  LAYOUTGET)
      or attributes that pertain to pNFS.

   The client MAY request zero or more of EXCHGID4_FLAG_USE_NON_PNFS,
   EXCHGID4_FLAG_USE_PNFS_DS, or EXCHGID4_FLAG_USE_PNFS_MDS, even though
   some combinations (e.g.  EXCHGID4_FLAG_USE_NON_PNFS |
   EXCHGID4_FLAG_USE_PNFS_MDS) are contradictory.  The server however
   MUST only return the following acceptable combinations:

        +--------------------------------------------------------+
        | Acceptable Results from EXCHANGE_ID                    |
        +--------------------------------------------------------+
        | EXCHGID4_FLAG_USE_PNFS_MDS                             |
        | EXCHGID4_FLAG_USE_PNFS_MDS | EXCHGID4_FLAG_USE_PNFS_DS |
        | EXCHGID4_FLAG_USE_PNFS_DS                              |
        | EXCHGID4_FLAG_USE_NON_PNFS                             |
        | EXCHGID4_FLAG_USE_PNFS_DS | EXCHGID4_FLAG_USE_NON_PNFS |
        +--------------------------------------------------------+

   As the above table implies, a server can have one or two roles.  A
   server can be both a metadata server and a data server or it can be
   both a data server and non-metadata server.  In addition to returning
   two roles in EXCHANGE_ID's results, and thus serving both roles via a
   common client ID, a server can serve two roles by returning a unique
   client ID and server owner for each role in each of two EXCHANGE_ID
   results, with each result indicating each role.

   In the case of a server with concurrent pNFS roles that are served by
   a common client ID, if the EXCHANGE_ID request from the client has
   zero or a combination of the bits set in eia_flags, the server result
   should set bits which represent the higher of the acceptable
   combination of the server roles, with a preference to match the roles
   requested by the client.  Thus if a client request has
   (EXCHGID4_FLAG_USE_NON_PNFS | EXCHGID4_FLAG_USE_PNFS_MDS |
   EXCHGID4_FLAG_USE_PNFS_DS) flags set, and the server is both a
   metadata server and a data server, serving both the roles by a common
   client ID, the server SHOULD return with (EXCHGID4_FLAG_USE_PNFS_MDS
   | EXCHGID4_FLAG_USE_PNFS_DS) set.
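
   A non-normative C sketch of this flag selection follows, for a
   server that is both a metadata server and a data server and serves
   both roles via a common client ID.  The flag values are those
   assigned to the EXCHANGE_ID pNFS role flags elsewhere in this
   document; the function name is hypothetical.

   #include <stdint.h>

   #define EXCHGID4_FLAG_USE_NON_PNFS   0x00010000
   #define EXCHGID4_FLAG_USE_PNFS_MDS   0x00020000
   #define EXCHGID4_FLAG_USE_PNFS_DS    0x00040000

   /*
    * Sketch: return the acceptable pNFS role combination that best
    * matches what the client requested.  A request carrying only
    * EXCHGID4_FLAG_USE_NON_PNFS (or no pNFS flags at all) is treated
    * as expressing no pNFS role preference, since this server does not
    * implement the non-pNFS role.
    */
   static uint32_t
   select_pnfs_roles(uint32_t eia_flags)
   {
           uint32_t req;

           req = eia_flags & (EXCHGID4_FLAG_USE_PNFS_MDS |
                              EXCHGID4_FLAG_USE_PNFS_DS);
           if (req != 0)
                   return req;     /* match the roles requested */

           /* No pNFS role requested: advertise both roles. */
           return EXCHGID4_FLAG_USE_PNFS_MDS |
                  EXCHGID4_FLAG_USE_PNFS_DS;
   }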

   In the case of a server that has multiple concurrent pNFS roles, each
   role served by a unique client ID, if the client specifies zero or a
   combination of roles in the request, the server results SHOULD return
   only one of the roles from the combination specified by the client
   request.  If the role specified by the server result does not match
   the intended use by the client, the client should send another
   EXCHANGE_ID specifying just the pNFS role it is interested in.

   If a pNFS metadata client gets a layout that refers it to an NFSv4.1
   data server, it needs a client ID on that data server.  If it does
   not yet have a client ID from the server that had the
   EXCHGID4_FLAG_USE_PNFS_DS flag set in the EXCHANGE_ID results, then
   the client needs to send an EXCHANGE_ID to the data server, using the
   same co_ownerid as it sent to the metadata server, with the
   EXCHGID4_FLAG_USE_PNFS_DS flag set in the arguments.  If the server's
   EXCHANGE_ID results have EXCHGID4_FLAG_USE_PNFS_DS set, then the
   client may use the client ID to create sessions that will exchange
   pNFS data operations.  The client ID returned by the data server has
   no relationship with the client ID returned by a metadata server
   unless the client IDs are equal and the server owners and server
   scopes of the data server and metadata server are equal.

   In NFSv4.1, the session ID in the SEQUENCE operation implies the
   client ID, which in turn might be used by the server to map the
   stateid to the right client/server pair.  However, when a data server
   is presented with a READ or WRITE operation with a stateid, because
   the stateid is associated with client ID on a metadata server, and
   because the session ID in the preceding SEQUENCE operation is tied to
   the client ID of the data server, the data server has no obvious way
   to determine the metadata server from the COMPOUND procedure, and
   thus has no way to validate the stateid.  One RECOMMENDED approach is
   for pNFS servers to encode metadata server routing and/or identity
   information in the data server filehandles as returned in the layout.

   If metadata server routing and/or identity information is encoded in
   data server filehandles, when the metadata server identity or
   location changes, the data server filehandles it gave out will become
   invalid (stale), and so the metadata server MUST first recall the
   layouts.  Invalidating a data server filehandle does not render the
   NFS client's data cache invalid.  The client's cache should map a
   data server filehandle to a metadata server filehandle, and a
   metadata server filehandle to cached data.

   If a server is both a metadata server and a data server, the server
   might need to distinguish operations on files that are directed to
   the metadata server from those that are directed to the data server.
   It is RECOMMENDED that the values of the filehandles returned by the
   LAYOUTGET operation be different from the value of the filehandle
   returned by the OPEN of the same file.

   Another scenario is for the metadata server and the storage device to
   be distinct from one client's point of view, and the roles reversed
   from another client's point of view.  For example, in the cluster
   file system model, a metadata server to one client might be a data
   server to another client.  If NFSv4.1 is being used as the storage
   protocol, then pNFS servers need to encode the values of filehandles
   according to their specific roles.

13.1.1.  Sessions Considerations for Data Servers

   Section 2.10.10.2 states that a client has to keep its lease renewed
   in order to prevent a session from being deleted by the server.  If
   the reply to EXCHANGE_ID has just the EXCHGID4_FLAG_USE_PNFS_DS role
   set, then as noted in Section 13.6 the client will not be able to
   determine the data server's lease_time attribute, because GETATTR
   will not be permitted.  Instead, the rule is that any time a client
   receives a layout referring it to a data server that returns just the
   EXCHGID4_FLAG_USE_PNFS_DS role, the client MAY assume that the
   lease_time attribute from the metadata server that returned the
   layout applies to the data server.  Thus the data server MUST be
   aware of the values of all lease_time attributes of all metadata
   servers it is providing I/O for, and MUST use the maximum of all such
   lease_time values as the lease interval for all client IDs and
   sessions established on it.

   For example, if one metadata server has a lease_time attribute of 20
   seconds, and a second metadata server has a lease_time attribute of
   10 seconds, then if both servers return layouts that refer to an
   EXCHGID4_FLAG_USE_PNFS_DS-only data server, the data server MUST
   renew a client's lease if the interval between two SEQUENCE
   operations on different COMPOUND requests is less than 20 seconds.
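
   The following non-normative Python sketch shows the resulting lease
   interval calculation for the example above; the variable names are
   illustrative only.

      # Non-normative: the data server's lease interval is the maximum
      # of the lease_time values of the metadata servers it serves.
      metadata_server_lease_times = [20, 10]
      data_server_lease_interval = max(metadata_server_lease_times)  # 20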

13.2.  File Layout Definitions

   The following definitions apply to the LAYOUT4_NFSV4_1_FILES layout
   type, and may be applicable to other layout types.

   Unit.  A unit is a fixed-size quantity of data written to a data
      server.

   Pattern.  A pattern is a method of distributing one or more equal
      sized units across a set of data servers.  A pattern is iterated
      one or more times.

   Stripe.  A stripe is a set of data distributed across a set of data
      servers in a pattern before that pattern repeats.

   Stripe Count.  A stripe count is the number of units in a pattern.

   Stripe Width.  A stripe width is the size of a stripe in bytes.  The
      stripe width = the stripe count * the size of the stripe unit.

   Hereafter, this document will refer to a unit that is written in a
   pattern as a "stripe unit".

   A pattern may have more stripe units than data servers.  If so, some
   data servers will have more than one stripe unit per stripe.  A data
   server that has multiple stripe units per stripe MAY store each unit
   in a different data file (and depending on the implementation, will
   possibly assign a unique data filehandle to each data file).

13.3.  File Layout Data Types

   The high-level NFSv4.1 file layout data types are
   nfsv4_1_file_layouthint4, nfsv4_1_file_layout_ds_addr4, and
   nfsv4_1_file_layout4.

   The SETATTR operation supports a layout hint attribute
   (Section 5.12.4).  When the client sets a layout hint (data type
   layouthint4) with a layout type of LAYOUT4_NFSV4_1_FILES (the
   loh_type field), the loh_body field contains a value of data type
   nfsv4_1_file_layouthint4.

   const NFL4_UFLG_MASK            = 0x0000003F;
   const NFL4_UFLG_DENSE           = 0x00000001;
   const NFL4_UFLG_COMMIT_THRU_MDS = 0x00000002;
   const NFL4_UFLG_STRIPE_UNIT_SIZE_MASK
                                   = 0xFFFFFFC0;

   typedef uint32_t nfl_util4;


   enum filelayout_hint_care4 {
           NFLH4_CARE_DENSE        = NFL4_UFLG_DENSE,

           NFLH4_CARE_COMMIT_THRU_MDS
                                   = NFL4_UFLG_COMMIT_THRU_MDS,

           NFLH4_CARE_STRIPE_UNIT_SIZE
                                   = 0x00000040,

           NFLH4_CARE_STRIPE_COUNT = 0x00000080
   };

   /* Encoded in the loh_body field of type layouthint4: */

   struct nfsv4_1_file_layouthint4 {
           uint32_t        nflh_care;
           nfl_util4       nflh_util;
           count4          nflh_stripe_count;
   };

   The generic layout hint structure is described in Section 3.3.19.
   The client uses the layout hint in the layout_hint (Section 5.12.4)
   attribute to indicate the preferred type of layout to be used for a
   newly created file.  The LAYOUT4_NFSV4_1_FILES layout type-specific
   content for the layout hint is composed of three fields.  The first
   field, nflh_care, is a set of flags indicating which values of the
   hint the client cares about.  If the NFLH4_CARE_DENSE flag is set,
   then the client indicates in the second field, nflh_util, a
   preference for how the data file is packed (Section 13.4.4), which is
   controlled by the value of nflh_util & NFL4_UFLG_DENSE.  If the
   NFLH4_CARE_COMMIT_THRU_MDS flag is set, then the client indicates a
   preference for whether the client should send COMMIT operations to
   the metadata server or data server (Section 13.7), which is
   controlled by the value of nflh_util & NFL4_UFLG_COMMIT_THRU_MDS.  If
   the NFLH4_CARE_STRIPE_UNIT_SIZE flag is set, the client indicates its
   preferred stripe unit size, which is indicated in nflh_util &
   NFL4_UFLG_STRIPE_UNIT_SIZE_MASK (thus the stripe unit size MUST be a
   multiple of 64 bytes).  The minimum stripe unit size is 64 bytes.  If
   the NFLH4_CARE_STRIPE_COUNT flag is set, the client indicates in the
   third field, nflh_stripe_count, the stripe count.  The stripe count
   multiplied by the stripe unit size is the stripe width.

   When LAYOUTGET returns a LAYOUT4_NFSV4_1_FILES layout (indicated in
   the loc_type field of the lo_content field), the loc_body field of
   the lo_content field contains a value of data type
   nfsv4_1_file_layout4.  Among other content, nfsv4_1_file_layout4 has
   a storage device ID (field nfl_deviceid) of data type deviceid4.  The
   GETDEVICEINFO operation maps a device ID to a storage device address
   (type device_addr4).  When GETDEVICEINFO returns a device address
   with a layout type of LAYOUT4_NFSV4_1_FILES (the da_layout_type
   field), the da_addr_body field contains a value of data type
   nfsv4_1_file_layout_ds_addr4.


   typedef netaddr4 multipath_list4<>;

   /* Encoded in the da_addr_body field of type device_addr4: */
   struct nfsv4_1_file_layout_ds_addr4 {
           uint32_t        nflda_stripe_indices<>;
           multipath_list4 nflda_multipath_ds_list<>;
   };

   The nfsv4_1_file_layout_ds_addr4 data type represents the device
   address.  It is composed of two fields:

   1.  nflda_multipath_ds_list: An array of lists of data servers, where
       each list can have one or more elements, and each element
       represents a data server address (see Section 13.5) that may
       serve equally as the target of I/O operations.  The length of
       this array might be different from the stripe count.

   2.  nflda_stripe_indices: An array of indices used to index into
       nflda_multipath_ds_list.  The value of each element of
       nflda_stripe_indices MUST be less than the number of elements in
       nflda_multipath_ds_list.  Each element of nflda_multipath_ds_list
       SHOULD be referred to by one or more elements of
       nflda_stripe_indices.  The number of elements in
       nflda_stripe_indices is always equal to the stripe count.



   /* Encoded in the loc_body field of type layout_content4: */
   struct nfsv4_1_file_layout4 {
            deviceid4      nfl_deviceid;
            nfl_util4      nfl_util;
            uint32_t       nfl_first_stripe_index;
            offset4        nfl_pattern_offset;
            nfs_fh4        nfl_fh_list<>;
   };

   The nfsv4_1_file_layout4 data type represents the layout.  It is
   composed of the following fields:

   1.  nfl_deviceid: The device ID which maps to a value of type
       nfsv4_1_file_layout_ds_addr4.

   2.  nfl_util: Like the nflh_util field of data type
       nfsv4_1_file_layouthint4, a compact representation of how the
       data on a file on each data server is packed, whether the client
       should send COMMIT operations to the metadata server or data
       server, and the stripe unit size.  If a server returns two or
       more overlapping layouts, each stripe unit size in each
       overlapping layout MUST be the same.

   3.  nfl_first_stripe_index: The index into the first element of the
       nflda_stripe_indices array to use.

   4.  nfl_pattern_offset: This field is the logical offset into the
       file where the striping pattern starts.  It is required for
       converting the client's logical I/O offset (e.g. the current
       offset in a POSIX file descriptor before the read() or write()
       system call is sent) into the stripe unit number (see
       Section 13.4.1).

       If dense packing is used, then nfl_pattern_offset is also needed
       to convert the client's logical I/O offset to an offset on the
       file on the data server corresponding to the stripe unit number
       (see Section 13.4.4).

       Note that nfl_pattern_offset is not always the same as lo_offset.
       For example, via the LAYOUTGET operation, a client might request
       a layout starting at offset 1000 of a file that has its striping
       pattern start at offset 0.


   5.  nfl_fh_list: An array of data server filehandles for each list of
       data servers in each element of the nflda_multipath_ds_list
       array.  The number of elements in nfl_fh_list depends on whether
       sparse or dense packing is being used.

       *  If sparse packing is being used, the number of elements in
          nfl_fh_list MUST be one of three values:

          +  Zero.  This means that filehandles used for each data
             server are the same as the filehandle returned by the OPEN
             operation from the metadata server.

          +  One. This means that every data server uses the same
             filehandle: what is specified in nfl_fh_list[0].

          +  The same number of elements in nflda_multipath_ds_list.
             Thus, in this case, when issuing an I/O to any data server
             in nflda_multipath_ds_list[X], the filehandle in
             nfl_fh_list[X] MUST be used.

          See the discussion on sparse packing in Section 13.4.4.


       *  If dense packing is being used, the number of elements in
          nfl_fh_list MUST be the same as the number of elements in
          nflda_stripe_indices.  Thus when issuing I/O to any data
          server in nflda_multipath_ds_list[nflda_stripe_indices[Y]],
          the filehandle in nfl_fh_list[Y] MUST be used.  In addition,
          any time there exist i and j (i != j) such that the
          intersection of
          nflda_multipath_ds_list[nflda_stripe_indices[i]] and
          nflda_multipath_ds_list[nflda_stripe_indices[j]] is not empty,
          then nfl_fh_list[i] MUST NOT equal nfl_fh_list[j].  In other
          words, when dense packing is being used, if a data server
          appears in two or more units of a striping pattern, each
          reference to the data server MUST use a different filehandle.

          Indeed, if there are multiple striping patterns, as indicated
          by the presence of multiple objects of data type layout4
          (either returned in one or multiple LAYOUTGET operations), and
          a data server is the target of a unit of one pattern and
          another unit of another pattern, then each reference to each
          data server MUST use a different filehandle.

          See the discussion on dense packing in Section 13.4.4.

   The details on the interpretation of the layout are in Section 13.4.

13.4.  Interpreting the File Layout

13.4.1.  Determining the Stripe Unit Number

   To find the stripe unit number that corresponds to the client's
   logical file offset, the pattern offset will also be used.  The i'th
   stripe unit (SUi) is:

       relative_offset = file_offset - nfl_pattern_offset;
       SUi = floor(relative_offset / stripe_unit_size);
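
   The following non-normative Python sketch performs the same
   calculation; the function name is illustrative only.

      # Non-normative sketch of the stripe unit number calculation.
      def stripe_unit_number(file_offset, nfl_pattern_offset,
                             stripe_unit_size):
          relative_offset = file_offset - nfl_pattern_offset
          return relative_offset // stripe_unit_size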

13.4.2.  Interpreting the File Layout Using Sparse Packing

   When sparse packing is used, the algorithm for determining the
   filehandle and set of data server network addresses to write stripe
   unit i (SUi) to is:

      stripe_count = number of elements in nflda_stripe_indices;

      j = (SUi + nfl_first_stripe_index) % stripe_count;

      idx = nflda_stripe_indices[j];

      fh_count = number of elements in nfl_fh_list;
      ds_count = number of elements in nflda_multipath_ds_list;

      switch (fh_count) {
        case ds_count:
          fh = nfl_fh_list[idx];
          break;

        case 1:
          fh = nfl_fh_list[0];
          break;

        case 0:
          fh = filehandle returned by OPEN;
          break;

        default:
          throw a fatal exception;
          break;
      }

      address_list = nflda_multipath_ds_list[idx];


   The client would then select a data server from address_list, and
   send a READ or WRITE operation using the filehandle specified in fh.

   Consider the following example:

   Suppose we have a device address consisting of seven data servers,
   arranged in three equivalence (Section 13.5) classes:

      { A, B, C, D }, { E }, { F, G }

   Where A through G are network addresses.

   Then

      nflda_multipath_ds_list<> = { A, B, C, D }, { E }, { F, G }

   i.e.

      nflda_multipath_ds_list[0] = { A, B, C, D }

      nflda_multipath_ds_list[1] = { E }

      nflda_multipath_ds_list[2] = { F, G }

   Suppose the striping index array is:

      nflda_stripe_indices<> = { 2, 0, 1, 0 }

   Now suppose the client gets a layout which has a device ID that maps
   to the above device address.  The initial index,

      nfl_first_stripe_index = 2,

   and

      nfl_fh_list = { 0x36, 0x87, 0x67 }.

   If the client wants to write to SU0, the set of valid { network
   address, filehandle } combinations for SUi are determined by:

      nfl_first_stripe_index = 2

   So

      idx = nflda_stripe_indices[(0 + 2) % 4]

         = nflda_stripe_indices[2]

         = 1

   So

      nflda_multipath_ds_list[1] = { E }

   and

      nfl_fh_list[1] = { 0x87 }

   The client can thus write SU0 to { 0x87, { E } }.

   The destinations of the first thirteen storage units are:

                    +-----+------------+--------------+
                    | SUi | filehandle | data servers |
                    +-----+------------+--------------+
                    | 0   | 87         | E            |
                    | 1   | 36         | A,B,C,D      |
                    | 2   | 67         | F,G          |
                    | 3   | 36         | A,B,C,D      |
                    | 4   | 87         | E            |
                    | 5   | 36         | A,B,C,D      |
                    | 6   | 67         | F,G          |
                    | 7   | 36         | A,B,C,D      |
                    | 8   | 87         | E            |
                    | 9   | 36         | A,B,C,D      |
                    | 10  | 67         | F,G          |
                    | 11  | 36         | A,B,C,D      |
                    | 12  | 87         | E            |
                    +-----+------------+--------------+
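
   The following non-normative Python sketch applies the sparse-packing
   algorithm above to the example layout and reproduces the table; the
   function name and the stand-in OPEN filehandle are illustrative only.

      # Non-normative illustration of the sparse-packing calculation,
      # using the device address and layout from the example above.
      nflda_multipath_ds_list = [["A", "B", "C", "D"], ["E"],
                                 ["F", "G"]]
      nflda_stripe_indices = [2, 0, 1, 0]
      nfl_first_stripe_index = 2
      nfl_fh_list = [0x36, 0x87, 0x67]
      open_fh = 0x01   # stands in for the filehandle returned by OPEN

      def sparse_target(su):
          """Return (filehandle, data servers) for stripe unit su."""
          stripe_count = len(nflda_stripe_indices)
          j = (su + nfl_first_stripe_index) % stripe_count
          idx = nflda_stripe_indices[j]
          fh_count = len(nfl_fh_list)
          ds_count = len(nflda_multipath_ds_list)
          if fh_count == ds_count:
              fh = nfl_fh_list[idx]
          elif fh_count == 1:
              fh = nfl_fh_list[0]
          elif fh_count == 0:
              fh = open_fh
          else:
              raise ValueError("invalid nfl_fh_list length")
          return fh, nflda_multipath_ds_list[idx]

      for su in range(13):
          fh, servers = sparse_target(su)
          print(su, format(fh, "x"), ",".join(servers))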

13.4.3.  Interpreting the File Layout Using Dense Packing

   When dense packing is used, the algorithm for determining the
   filehandle and set of data server network addresses to write stripe
   unit i (SUi) to is:


      stripe_count = number of elements in nflda_stripe_indices;

      j = (SUi + nfl_first_stripe_index) % stripe_count;

      idx = nflda_stripe_indices[j];

      fh_count = number of elements in nfl_fh_list;
      ds_count = number of elements in nflda_multipath_ds_list;

      switch (fh_count) {
        case stripe_count:
          fh = nfl_fh_list[j];
          break;

        default:
          throw a fatal exception;
          break;
      }

      address_list = nflda_multipath_ds_list[idx];


   The client would then select a data server from address_list, and
   send a READ or WRITE operation using the filehandle specified in fh.

   Consider the following example (which is the same as the sparse
   packing example, except for the filehandle list):

   Suppose we have a device address consisting of seven data servers,
   arranged in three equivalence (Section 13.5) classes:

      { A, B, C, D }, { E }, { F, G }

   Where A through G are network addresses.

   Then

      nflda_multipath_ds_list<> = { A, B, C, D }, { E }, { F, G }

   i.e.

      nflda_multipath_ds_list[0] = { A, B, C, D }

      nflda_multipath_ds_list[1] = { E }

      nflda_multipath_ds_list[2] = { F, G }

   Suppose the striping index array is:

      nflda_stripe_indices<> = { 2, 0, 1, 0 }

   Now suppose the client gets a layout which has a device ID that maps
   to the above device address.  The initial index,

      nfl_first_stripe_index = 2,

   and

      nfl_fh_list = { 0x67, 0x37, 0x87, 0x36 }.

   The interesting examples for dense packing are SU1 and SU3, because
   each stripe unit refers to the same data server list, yet MUST use a
   different filehandle.  If the client wants to write to SU1, the set
   of valid { network address, filehandle } combinations for SUi are
   determined by:

      nfl_first_stripe_index = 2

   So

      j = (1 + 2) % 4 = 3

      idx = nflda_stripe_indices[j]

         = nflda_stripe_indices[3]

         = 0

   So

      nflda_multipath_ds_list[0] = { A, B, C, D }

   and

      nfl_fh_list[3] = { 0x36 }

   The client can thus write SU1 to { 0x36, { A, B, C, D } }.

   For SU3, j = (3 + 2) % 4 = 1, and nflda_stripe_indices[1] = 0.  Then
   nflda_multipath_ds_list[0] = { A, B, C, D }, and nfl_fh_list[1] =
   0x37.  The client can thus write SU3 to { 0x37, { A, B, C, D } }.

   The destinations of the first thirteen storage units are:

                    +-----+------------+--------------+
                    | SUi | filehandle | data servers |
                    +-----+------------+--------------+
                    | 0   | 87         | E            |
                    | 1   | 36         | A,B,C,D      |
                    | 2   | 67         | F,G          |
                    | 3   | 37         | A,B,C,D      |
                    | 4   | 87         | E            |
                    | 5   | 36         | A,B,C,D      |
                    | 6   | 67         | F,G          |
                    | 7   | 37         | A,B,C,D      |
                    | 8   | 87         | E            |
                    | 9   | 36         | A,B,C,D      |
                    | 10  | 67         | F,G          |
                    | 11  | 37         | A,B,C,D      |
                    | 12  | 87         | E            |
                    +-----+------------+--------------+
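
   A similar non-normative Python sketch covers the dense-packing case;
   only the filehandle selection differs, indexing nfl_fh_list by j
   rather than by idx.  The function name is illustrative only.

      # Non-normative illustration of the dense-packing calculation,
      # using the layout from the example above.
      nflda_multipath_ds_list = [["A", "B", "C", "D"], ["E"],
                                 ["F", "G"]]
      nflda_stripe_indices = [2, 0, 1, 0]
      nfl_first_stripe_index = 2
      nfl_fh_list = [0x67, 0x37, 0x87, 0x36]

      def dense_target(su):
          """Return (filehandle, data servers) for stripe unit su."""
          stripe_count = len(nflda_stripe_indices)
          if len(nfl_fh_list) != stripe_count:
              raise ValueError("dense packing needs one filehandle "
                               "per stripe unit in the pattern")
          j = (su + nfl_first_stripe_index) % stripe_count
          idx = nflda_stripe_indices[j]
          return nfl_fh_list[j], nflda_multipath_ds_list[idx]

      for su in range(13):
          fh, servers = dense_target(su)
          print(su, format(fh, "x"), ",".join(servers))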

13.4.4.  Sparse and Dense Stripe Unit Packing

   The flag NFL4_UFLG_DENSE of the nfl_util4 data type (field nflh_util
   of the data type nfsv4_1_file_layouthint4 and field nfl_util of data
   type nfsv4_1_file_layout4) specifies how the data is packed within
   the data file on a data server.  It allows for two different
   data packings: sparse and dense.  The packing type determines the
   calculation that will be made to map the client visible file offset
   to the offset within the data file located on the data server.

   If nfl_util & NFL4_UFLG_DENSE is zero, this means that sparse packing
   is being used.  Hence the logical offsets of the file as viewed by a
   client issuing READs and WRITEs directly to the metadata server are
   the same offsets each data server uses when storing a stripe unit.
   The effect then, for striping patterns consisting of at least two
   stripe units, is for each data server file to be sparse or holey.
   For example, if there is a pattern with three stripe units, the
   stripe unit size is 4096 bytes, and there are three data servers in
   the pattern, then the file on data server 1 will have stripe units
   0, 3, 6, 9, ... filled, data server 2's file will have stripe units
   1, 4, 7, 10, ... filled, and data server 3's file will have stripe
   units 2, 5, 8, 11, ... filled.  The unfilled stripe units of each
   file will be holes; hence, the files on each data server are sparse.

   If sparse packing is being used and a client attempts I/O to one of
   the holes, then an error MUST be returned by the data server.  Using
   the above example, if data server 3 received a READ or WRITE request
   for block 4, the data server would return NFS4ERR_PNFS_IO_HOLE.  Thus
   data servers need to understand the striping pattern in order to
   support sparse packing.

   If nfl_util & NFL4_UFLG_DENSE is one, this means that dense packing
   is being used and the data server files have no holes.  Dense packing
   might be selected because the data server does not (efficiently)
   support holey files, or because the data server cannot recognize
   read-ahead unless there are no holes.  If dense packing is indicated
   in the layout, the data files will be packed.  Using the example
   striping pattern and stripe unit size that was used for the sparse
   packing example, the corresponding dense packing would have all
   stripe units of all data files filled.  Logical stripe units 0, 3, 6,
   ... of the file would live on stripe units 0, 1, 2, ... of the file
   of data server 1, logical stripe units 1, 4, 7, ... of the file would
   live on stripe units 0, 1, 2, ... of the file of data server 2, and
   logical stripe units 2, 5, 8, ... of the file would live on stripe
   units 0, 1, 2, ... of the file of data server 3.

   Because dense packing does not leave holes on the data servers, the
   pNFS client is allowed to write to any offset of any data file of any
   data server in the stripe.  Thus the data servers need not know the
   file's striping pattern.

   The calculation to determine the byte offset within the data file for
   dense data server layouts is:

      stripe_width = stripe_unit_size * N;
         where N = number of elements in nflda_stripe_indices.

      relative_offset = file_offset - nfl_pattern_offset;

      data_file_offset = floor(relative_offset / stripe_width)
         * stripe_unit_size
         + relative_offset % stripe_unit_size
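
   The following non-normative Python sketch performs this calculation;
   the function name is illustrative, and the sample values (a 4096-byte
   stripe unit and a three-unit pattern) follow the earlier packing
   examples.

      # Non-normative sketch of the dense-packing offset calculation.
      def dense_data_file_offset(file_offset, nfl_pattern_offset,
                                 stripe_unit_size, stripe_count):
          stripe_width = stripe_unit_size * stripe_count
          relative_offset = file_offset - nfl_pattern_offset
          return ((relative_offset // stripe_width) * stripe_unit_size
                  + relative_offset % stripe_unit_size)

      # Logical offset 16384 is in stripe unit 4, the second unit held
      # by its data server, so it maps to offset 4096 in that data
      # server's file.
      print(dense_data_file_offset(16384, 0, 4096, 3))   # prints 4096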

   If dense packing is being used, and a data server appears more than
   once in a striping pattern, then to distinguish one stripe unit from
   another, the data server MUST use a different filehandle.  Let's
   suppose there are two data servers.  Logical stripe units 0, 3, 6 are
   served by data server 1, logical stripe units 1, 4, 7 are served by
   data server 2, and logical stripe units 2, 5, 8 are also served by
   data server 2.  Unless data server 2 has two filehandles (each
   referring to a different data file), then, for example, a write to
   logical stripe unit 1 overwrites the write to logical stripe unit 2,
   because both logical stripe units are located in the same stripe unit
   (0) of data server 2.

13.5.  Data Server Multipathing

   The NFSv4.1 file layout supports multipathing to multiple data server
   addresses.  Data server-level multipathing is used for bandwidth
   scaling via trunking (Section 2.10.5) and for higher availability of
   use in the case of a data server failure.  Multipathing allows the
   client to switch to another data server address, which may be that
   of another data server that is exporting the same data stripe unit,
   without having to contact the metadata server for a new layout.

   To support data server multipathing, each element of the
   nflda_multipath_ds_list contains an array of one or more data server
   network addresses.  This array (data type multipath_list4) represents
   a list of data servers (each identified by a network address), with
   it being possible that some data servers will appear in the list
   multiple times.

   The client is free to use any of the network addresses as a
   destination to send data server requests.  If some network addresses
   are less optimal paths to the data than others, then the MDS SHOULD
   NOT include those network addresses in an element of
   nflda_multipath_ds_list.  If less optimal network addresses exist to
   provide fail over, the RECOMMENDED method to offer the addresses is
   to provide them in a replacement device ID to device address mapping,
   or a replacement device ID.  When a client finds that no data server
   in an element of nflda_multipath_ds_list responds, it SHOULD send a
   GETDEVICEINFO to attempt to replace the existing device ID to device
   address mappings.  If the MDS detects that all data servers
   represented by an element of nflda_multipath_ds_list are unavailable,
   the MDS SHOULD send a CB_NOTIFY_DEVICEID (if the client has indicated
   it wants device ID notifications for changed device IDs) to change
   the device ID to device address mappings to the available data
   servers.  If the device ID itself will be replaced, the MDS SHOULD
   recall all layouts with the device ID, and thus force the client to
   get new layouts and device ID mappings via LAYOUTGET and
   GETDEVICEINFO.

   Generally, if two network addresses appear in an element of
   nflda_multipath_ds_list, they will designate the same data server,
   the two data server addresses will support the implementation of
   client ID or session trunking (the latter is RECOMMENDED) as defined
   in Section 2.10.5, and the two data server addresses will share the
   same server owner or major ID of the server owner.  It is not always
   necessary for the two data server addresses to designate the same
   server with trunking being used.  For example, the data could be
   read-only, and the data consist of exact replicas.

13.6.  Operations Sent to NFSv4.1 Data Servers

   Clients accessing data on an NFSv4.1 data server MUST send only the
   NULL procedure and COMPOUND procedures whose operations are taken
   only from two restricted subsets of the operations defined as valid
   NFSv4.1 operations.  Clients MUST use the filehandle specified by the
   layout when accessing data on NFSv4.1 data servers.

   The first of these operation subsets consists of management
   operations.  This subset consists of the BACKCHANNEL_CTL,
   BIND_CONN_TO_SESSION, CREATE_SESSION, DESTROY_CLIENTID,
   DESTROY_SESSION, EXCHANGE_ID, SECINFO_NO_NAME, SET_SSV, and SEQUENCE
   operations.  The client may use these operations in order to set up
   and maintain the appropriate client IDs, sessions, and security
   contexts involved in communication with the data server.  Henceforth
   these will be referred to as data-server housekeeping operations.

   The second subset consists of COMMIT, READ, WRITE, and PUTFH.  These
   operations MUST be used with a current filehandle specified by the
   layout.  In the case of PUTFH, the new current filehandle MUST be one
   taken from the layout.  Henceforth, these will be referred to as
   data-server I/O operations.  As described in Section 12.5.1, a client
   MUST NOT send an I/O to a data server for which it does not hold a
   valid layout; the data server MUST reject such an I/O.

   Unless the server has a concurrent non-data-server personality, i.e.
   EXCHANGE_ID results returned (EXCHGID4_FLAG_USE_PNFS_DS |
   EXCHGID4_FLAG_USE_PNFS_MDS) or (EXCHGID4_FLAG_USE_PNFS_DS |
   EXCHGID4_FLAG_USE_NON_PNFS), see Section 13.1, any attempted use of
   operations against a data server other than those specified in the
   two subsets above MUST return NFS4ERR_NOTSUPP to the client.

   When the server has concurrent data server and non-data-server
   personalities, each COMPOUND sent by the client MUST be constructed
   so that it is appropriate to one of the two personalities, and MUST
   NOT contain operations directed to a mix of those personalities.  The
   server MUST enforce this.  To understand the constraints, operations
   within a COMPOUND are divided into the following three classes:

   1.  An operation which is ambiguous regarding its personality
       assignment.  These include all of the data-server housekeeping
       operations.  Additionally, if the server has assigned filehandles
       so that the ones defined by the layout are the same as those used
       by the metadata server, all operations using such filehandles are
       within this class, with the following exception: if the
       operation uses a stateid that is incompatible with a data-server
       personality (e.g., a special stateid or a stateid with a
       non-zero seqid field; see Section 13.9.1), the operation is in
       class 3, as described below.  A COMPOUND
       containing multiple class 1 operations (and operations of no
       other class) MAY be sent to a server with multiple concurrent
       data server and non-data-server personalities.

   2.  An operation which is unambiguously referable to the data server
       personality.  These are data-server I/O operations where the
       filehandle is one that can only be validly directed to the data-
       server personality.

   3.  An operation which is unambiguously referable to the non-data-
       server personality.  These include all COMPOUND operations that
       are neither data-server housekeeping nor data-server I/O
       operations plus data-server I/O operations where the current fh
       (or the one to be made the current fh in the case of PUTFH) is
       one that is only valid on the metadata server or where a stateid
       is used that is incompatible with the data server, i.e. is a
       special stateid or has a non-zero seqid value.

   When a COMPOUND first executes an operation from class 3 above, it
   acts as a normal COMPOUND on any other server and the data server
   personality ceases to be relevant.  There are no special restrictions
   on the operations in the COMPOUND to limit them to those for a data
   server.  When a PUTFH is done, filehandles derived from the layout
   are not valid.  If their format is not normally acceptable, then
   NFS4ERR_BADHANDLE MUST result.  Similarly, filehandles derived from
   layouts that are used as the current filehandle of other operations
   are not normally usable on the metadata server; using them will
   result in NFS4ERR_STALE.

   When a COMPOUND first executes an operation from class 2, which would
   be PUTFH where the filehandle is one from a layout, the COMPOUND
   henceforth is interpreted with respect to the data server
   personality.  Operations outside the two classes discussed above MUST
   result in NFS4ERR_NOTSUPP.  Filehandles are validated using the rules
   of the data server, resulting in NFS4ERR_BADHANDLE and/or
   NFS4ERR_STALE even when they would not normally do so when addressed
   to the non-data-server personality.  Stateids must obey the rules of
   the data server in that any use of special stateids or stateids with
   non-zero seqid values must result in NFS4ERR_BAD_STATEID.

   Until the server first executes an operation from class 2 or class 3,
   the client MUST NOT depend on the operation being executed by either
   the data-server or the non-data-server personality.  The server MUST
   pick one personality consistently for a given COMPOUND, with the only
   possible transition being a single one when the first operation from
   class 2 or class 3 is executed.

   Because of the complexity induced by assigning filehandles so they
   can be used on both a data server and a metadata server, it is
   RECOMMENDED that where the same server can have both personalities,
   the server assign separate unique filehandles to both personalities.
   This makes it unambiguous for which server a given request is
   intended.

   GETATTR and SETATTR MUST be directed to the metadata server.  In the
   case of a SETATTR of the size attribute, the control protocol is
   responsible for propagating size updates/truncations to the data
   servers.  In the case of extending WRITEs to the data servers, the
   new size must be visible on the metadata server once a LAYOUTCOMMIT
   has completed (see Section 12.5.4.2).  Section 13.10, describes the
   mechanism by which the client is to handle data server files that do
   not reflect the metadata server's size.

13.7.  COMMIT Through Metadata Server

   The file layout provides two alternate means of providing for the
   commit of data written through data servers.  The flag
   NFL4_UFLG_COMMIT_THRU_MDS in the field nfl_util of the file layout
   (data type nfsv4_1_file_layout4) is an indication from the metadata
   server to the client of the REQUIRED way of performing COMMIT, either
   by sending the COMMIT to the data server or the metadata server.
   These two methods of dealing with the issue correspond to broad
   styles of implementation for a pNFS server supporting the files
   layout type.

   o  When the flag is FALSE, COMMIT operations MUST be sent to the
      data server to which the corresponding WRITE operations were sent.
      This approach is most useful when striping of files is implemented
      as part of pNFS server, with the individual data servers each
      implementing their own file systems.

   o  When the flag is TRUE, COMMIT operations MUST be sent to the
      metadata server, rather than to the individual data servers.  This
      approach is most useful when the pNFS server is implemented on top
      of a clustered file system.  In such an implementation, sending
      COMMITs to multiple data servers may result in repeated writes of
      metadata blocks as each individual COMMIT is executed, to the
      detriment of write performance.  Sending a single COMMIT to the
      metadata server can provide more efficiency when there exists a
      clustered file system capable of implementing such a co-ordinated
      COMMIT.

      If nfl_util & NFL4_UFLG_COMMIT_THRU_MDS is TRUE, then in order to
      maintain the current NFSv4.1 commit and recovery model, the data
      servers MUST return a common writeverf verifier in all WRITE
      responses for a given file layout, and the metadata server's
      COMMIT implementation must return the same writeverf.  The value
      of the writeverf verifier MUST be changed at the metadata server
      or any data server that is referenced in the layout, whenever
      there is a server event that can possibly lead to loss of
      uncommitted data.  The scope of the verifier can be for a file or
      for the entire pNFS server.  It might be more difficult for the
      server to maintain the verifier at the file level but the benefit
      is that only events that impact a given file will require recovery
      action.

   Note that if the layout specified dense packing, then the offset used
   in a COMMIT to the MDS may differ from the offset used in a COMMIT to
   the data server.

   The single COMMIT to the metadata server will return a verifier, and
   the client should compare it to all the verifiers from the WRITEs and
   fail the COMMIT if any of the verifiers do not match.  If the COMMIT
   to the metadata server fails, the client should re-send WRITEs for
   all the modified data in the file.  The client should treat modified
   data with a mismatched verifier as a WRITE failure and try to recover
   by reissuing the WRITEs to the original data server or using another
   path to that data if the layout has not been recalled.  Another
   option is for the client to get a new layout or to rewrite the data
   through the metadata server.  If nfl_util &
   NFL4_UFLG_COMMIT_THRU_MDS is FALSE, sending a COMMIT to the metadata
   server might have no effect.  If nfl_util & NFL4_UFLG_COMMIT_THRU_MDS
   is FALSE, a COMMIT sent to the metadata server should be used only to
   commit data that was written to the metadata server.  See
   Section 12.7.6 for recovery options.
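
   As a non-normative illustration of the client-side check described
   above, the following Python sketch compares the verifier returned by
   a single COMMIT sent to the metadata server against the verifiers
   saved from earlier WRITEs; the WriteRecord type and the other names
   are illustrative only and not part of the protocol.

      # Non-normative sketch of the verifier comparison performed by a
      # client when NFL4_UFLG_COMMIT_THRU_MDS is set in the layout.
      from dataclasses import dataclass

      @dataclass
      class WriteRecord:
          offset: int
          length: int
          writeverf: bytes    # verifier from the WRITE response

      def commit_covers_writes(commit_verf, writes):
          """Fail the COMMIT if any saved WRITE verifier differs."""
          return all(w.writeverf == commit_verf for w in writes)

      writes = [WriteRecord(0, 4096, b"v1"),
                WriteRecord(4096, 4096, b"v1")]
      if not commit_covers_writes(b"v1", writes):
          # On a mismatch, the client re-sends the WRITEs for the
          # modified data or rewrites it through the metadata server.
          pass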

13.8.  The Layout Iomode

   The layout iomode need not be used by the metadata server when
   servicing NFSv4.1 file-based layouts, although in some circumstances
   it may be useful.  For example, if the server implementation supports
   reading from read-only replicas or mirrors, it would be useful for
   the server to return a layout enabling the client to do so.  As such,
   the client SHOULD set the iomode based on its intent to read or write
   the data.  The client may default to an iomode of LAYOUTIOMODE4_RW.
   The iomode need not be checked by the data servers when clients
   perform I/O. However, the data servers SHOULD still validate that the
   client holds a valid layout and return an error if the client does
   not.

13.9.  Metadata and Data Server State Coordination

13.9.1.  Global Stateid Requirements

   When the client sends I/O to a data server, the stateid used MUST NOT
   be a layout stateid as returned by LAYOUTGET or sent by
   CB_LAYOUTRECALL.  Permitted stateids are based on one of the
   following: an open stateid (the stateid field of data type OPEN4resok
   as returned by OPEN), a delegation stateid (the stateid field of data
   types open_read_delegation4 and open_write_delegation4 as returned by
   OPEN or WANT_DELEGATION, or as sent by CB_PUSH_DELEG), or a stateid
   returned by the LOCK or LOCKU operations.  The stateid sent to the
   data server MUST be sent with the seqid set to zero, indicating the
   most current version of that stateid, rather than indicating a
   specific non-zero seqid value.  In no case is the use of special
   stateid values allowed.

   The stateid used for I/O MUST have the same effect and be subject to
   the same validation on a data server as it would if the I/O was being
   performed on the metadata server itself in the absence of pNFS.  This
   has the implication that stateids are globally valid on both the
   metadata and data servers.  This requires the metadata server to
   propagate changes in lock and open state to the data servers, so that
   the data servers can validate I/O accesses.  This is discussed
   further in Section 13.9.2.  Depending on when stateids are
   propagated, the existence of a valid stateid on the data server may
   act as proof of a valid layout.

   Clients performing I/O operations need to select an appropriate
   stateid based on the locks (including opens and delegations) held by
   the client and the various types of state-owners issuing the I/O
   requests.  The rules for doing so when referencing data servers are
   somewhat different from those discussed in Section 8.2.5 which apply
   when accessing metadata servers.

   The following rules, applied in order of decreasing priority, govern
   the selection of the appropriate stateid (a non-normative sketch of
   this selection follows the list):

   o  If the client holds a delegation for the file in question, the
      delegation stateid should be used.

   o  Otherwise, there must be an open stateid for the current open-
      owner, and that open stateid for the open file in question is
      used, unless mandatory locking prevents that.  See below.

   o  If the data server had previously responded with NFS4ERR_LOCKED to
      use of the open stateid, then the client should use the lock
      stateid whenever one exists for that open file with the current
      lock-owner.

   o  Special stateids should never be used and if used the data server
      MUST reject the I/O with an NFS4ERR_BAD_STATEID error.
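
   The following non-normative Python sketch applies these rules; the
   parameter names and the got_nfs4err_locked flag are illustrative
   only.

      # Non-normative sketch of the stateid selection rules above.  In
      # every case, the chosen stateid is sent with its seqid field set
      # to zero, and special stateids are never used.
      def select_io_stateid(delegation_stateid, open_stateid,
                            lock_stateid, got_nfs4err_locked):
          if delegation_stateid is not None:
              return delegation_stateid
          if got_nfs4err_locked and lock_stateid is not None:
              return lock_stateid
          # There must be an open stateid for the current open-owner.
          return open_stateid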

13.9.2.  Data Server State Propagation

   Since the metadata server, which handles lock and open-mode state
   changes, as well as ACLs, might not be co-located with the data
   servers where I/O accesses are validated, the server implementation
   MUST take care of propagating changes of this state to the data
   servers.  Once the propagation to the data servers is complete, the
   full effect of those changes MUST be in effect at the data servers.
   However, some state changes need not be propagated immediately,
   although all changes SHOULD be propagated promptly.  These state
   propagations have an impact on the design of the control protocol,
   even though the control protocol is outside of the scope of this
   specification.  Immediate propagation refers to the synchronous
   propagation of state from the metadata server to the data server(s);
   the propagation must be complete before returning to the client.

13.9.2.1.  Lock State Propagation

   If the pNFS server supports mandatory locking, any mandatory locks on
   a file MUST be made effective at the data servers before the request
   that establishes them returns to the caller.  The effect MUST be the
   same as if the mandatory lock state were synchronously propagated to
   the data servers, even though the details of the control protocol may
   avoid actual transfer of the state under certain circumstances.

   On the other hand, since advisory lock state is not used for checking
   I/O accesses at the data servers, there is no semantic reason for
   propagating advisory lock state to the data servers.  Since updates
   to advisory locks neither confer nor remove privileges, these changes
   need not be propagated immediately, and may not need to be propagated
   promptly.  The updates to advisory locks need only be propagated when
   the data server needs to resolve a question about a stateid.  In
   fact, if byte-range locking is not mandatory (i.e., is advisory) the
   clients are advised not to use the lock-based stateids for I/O at
   all.  The stateids returned by open are sufficient and eliminate
   overhead for this kind of state propagation.

   If a client gets back an NFS4ERR_LOCKED error from a data server,
   this is an indication that mandatory byte-range locking is in force.
   The client recovers from this by getting a byte-range lock that
   covers the affected range and re-sends the I/O with the stateid of
   the byte-range lock.

13.9.2.2.  Open and Deny Mode Validation

   Open and deny mode validation MUST be performed against the open and
   deny mode(s) held by the data servers.  When access is reduced or a
   deny mode made more restrictive (because of CLOSE or
   OPEN_DOWNGRADE), the data server MUST prevent any I/Os that would be
   denied if performed on the metadata server.  When access is
   expanded, the data server MUST make sure that no requests are
   subsequently rejected because of open or deny issues that no longer
   apply, given the previous relaxation.

13.9.2.3.  File Attributes

   Since the SETATTR operation has the ability to modify state that is
   visible on both the metadata and data servers (e.g., the size), care
   must be taken to ensure that the resultant state across the set of
   data servers is consistent, especially when truncating or growing the
   file.

   As described earlier, the LAYOUTCOMMIT operation is used to ensure
   that the metadata is synchronized with changes made to the data
   servers.  For the NFSv4.1-based data storage protocol, it is
   necessary to re-synchronize state such as the size attribute, and the
   setting of mtime/change/atime.  See Section 12.5.4 for a full
   description of the semantics regarding LAYOUTCOMMIT and attribute
   synchronization.  It should be noted, that by using an NFSv4.1-based
   layout type, it is possible to synchronize this state before
   LAYOUTCOMMIT occurs.  For example, the control protocol can be used
   to query the attributes present on the data servers.

   Any changes to file attributes that control authorization or access
   as reflected by ACCESS calls or READs and WRITEs on the metadata
   server MUST be propagated to the data servers for enforcement on
   READ and WRITE I/O calls.  If the changes made on the metadata server
   result in more restrictive access permissions for any user, those
   changes MUST be propagated to the data servers synchronously.

   The OPEN operation (Section 18.16.4) does not impose any requirement
   that I/O operations on an open file have the same credentials as the
   OPEN itself (unless EXCHGID4_FLAG_BIND_PRINC_STATEID is set when
   EXCHANGE_ID creates the client ID) and so requires the server's READ
   and WRITE operations to perform appropriate access checking.  Changes
   to ACLs also require new access checking by READ and WRITE on the
   server.  The propagation of access right changes due to changes in
   ACLs may be asynchronous only if the server implementation is able to
   determine that the updated ACL is not more restrictive for any user
   specified in the old ACL.  Due to the relative infrequency of ACL
   updates, it is suggested that all changes be propagated
   synchronously.

13.10.  Data Server Component File Size

   A potential problem exists when a component data file on a particular
   data server is grown past EOF; the problem exists for both dense and
   sparse layouts.  Imagine the following scenario: a client creates a
   new file (size == 0) and writes to byte 131072; the client then seeks
   to the beginning of the file and reads byte 100.  The client should
   receive 0s back as a result of the READ.  However, if the READ falls
   on a data server other than the one that received the client's
   original WRITE, the data server servicing the READ may still believe
   that the file's size is 0 and return no data with the EOF flag set.
   The data server can only return 0s if it knows that the file's size
   has been extended.  This would require the immediate propagation of
   the file's size to all data servers, which is potentially very
   costly.
   Therefore, the client that has initiated the extension of the file's
   size MUST be prepared to deal with these EOF conditions; the EOF'ed
   or short READs will be treated as a hole in the file and the NFS
   client will substitute 0s for the data when the offset is less than
   the client's view of the file size.
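
   The following non-normative Python sketch illustrates the zero-fill
   behavior described above; ds_read stands in for a READ sent to the
   data server, and all names are illustrative only.

      # Non-normative sketch of client-side handling of EOF'ed or short
      # READs below the client's view of the file size.
      def pnfs_read(ds_read, offset, count, client_file_size):
          data, eof = ds_read(offset, count)
          expected = min(count, max(client_file_size - offset, 0))
          if len(data) < expected:
              # Treat the missing range as a hole; substitute zeros.
              data = data + b"\0" * (expected - len(data))
          return data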

   The NFSv4.1 protocol only provides close-to-open file data cache
   semantics, meaning that when the file is closed, all modified data is
   written to the server.  When a subsequent OPEN of the file is done,
   the change attribute is inspected for a difference from a cached
   value for the change attribute.  For the case above, this means that
   a LAYOUTCOMMIT will be done at close (along with the data WRITEs) and
   will update the file's size and change attribute.  Access from
   another client after that point will result in the appropriate size
   being returned.

13.11.  Layout Revocation and Fencing

   As described in Section 12.7, the layout type-specific storage
   protocol is responsible for handling the effects of I/Os started
   before lease expiration, extending through lease expiration.  The
   LAYOUT4_NFSV4_1_FILES layout type can prevent all I/Os to data
   servers from being executed after lease expiration, without relying
   on a precise client lease timer and without requiring data servers to
   maintain lease timers.  However, while a LAYOUT4_NFSV4_1_FILES pNFS
   server is free to deny the client all access to the data servers,
   because it supports revocation of layouts, it is also free to perform
   a denial on a per-file basis only when revoking a layout.

   In addition to lease expiration, the reasons a layout can be revoked
   include: the client's failure to respond to a CB_LAYOUTRECALL, a
   restart of the metadata server, or administrative intervention.
   Regardless of the reason, once a client's layout has been revoked,
   the pNFS server MUST prevent the client from issuing I/O for the
   affected file to any data server; in other words, it MUST fence the
   client from the affected file on the data servers.

   Fencing works as follows.  As described in Section 13.1, in COMPOUND
   procedure requests to the data server, the data filehandle provided
   by the PUTFH operation and the stateid in the READ or WRITE operation
   are used to validate that the client has a valid layout for the I/O
   being performed; if it does not, the I/O is rejected with
   NFS4ERR_PNFS_NO_LAYOUT.  The server can simply check the stateid, and
   additionally, make the data filehandle stale if the layout specified
   a data filehandle that is different from the metadata server's
   filehandle for the file (see the nfl_fh_list description in
   Section 13.3).

   Before the metadata server takes any action to invalidate layout
   state given out by a previous instance, it must make sure that all
   layout state from that previous instance is invalidated at the data
   servers.  This means that a metadata server may not restripe a file
   until it has contacted all of the data servers to invalidate the
   layouts from the previous instance, nor may it give out mandatory
   locks that conflict with layouts from the previous instance without
   either doing a specific invalidation (as it would have to do anyway)
   or doing a global data server invalidation.

13.12.  Security Considerations for the File Layout Type

   The NFSv4.1 file layout type MUST adhere to the security
   considerations outlined in Section 12.9.  NFSv4.1 data servers MUST
   make all of the required access checks on each READ or WRITE I/O as
   determined by the NFSv4.1 protocol.  If the metadata server would
   deny a READ or WRITE operation on a given file due to its ACL, mode
   attribute, open mode, open deny mode, mandatory lock state, or any
   other attributes and state, the data server MUST also deny the READ
   or WRITE operation.  This impacts the control protocol and the
   propagation of state from the metadata server to the data servers;
   see Section 13.9.2 for more details.

   The methods for authentication, integrity, and privacy for file
   layout-based data servers are the same as those used by metadata
   servers.  Metadata and data servers use ONC RPC security flavors to
   authenticate, and SECINFO and SECINFO_NO_NAME to negotiate the
   security mechanism and services to be used.  Thus when using the
   LAYOUT4_NFSV4_1_FILES layout type, the impact on the RPC-based
   security model due to pNFS (as alluded to in Section 1.6.1 and
   Section 1.6.2.2) is zero.

   For a given file object, a metadata server MAY require different
   security parameters (secinfo4 value) than the data server.  For a
   given file object with multiple data servers, the secinfo4 value
   SHOULD be the same across all data servers.  If the secinfo4 values
   across a metadata server and its data servers differ for a specific
   file, the mapping of the principal to the server's internal user
   identifier MUST be the same in order for the access control checks
   based on ACL, mode, open and deny mode, and mandatory locking to be
   consistent across the pNFS server.

   If an NFSv4.1 implementation supports pNFS and supports NFSv4.1 file
   layouts, then the implementation MUST support the SECINFO_NO_NAME
   operation, on both the metadata and data servers.


14.  Internationalization

   The primary area in which NFSv4.1 needs to deal with
   internationalization, or I18N, is that of file names and other
   strings as used within the protocol.  The choice of string
   representation must allow reasonable name/string access to clients
   which use various languages.  The UTF-8 encoding of the UCS as
   defined by ISO10646 [20] allows for this type of access and follows
   the policy described in "IETF Policy on Character Sets and
   Languages", RFC2277 [21].

   RFC3454 [18], otherwise known as "stringprep", documents a framework
   for using Unicode/UTF-8 in networking protocols, so as "to increase
   the likelihood that string input and string comparison work in ways
   that make sense for typical users throughout the world."  A protocol
   must define a profile of stringprep "in order to fully specify the
   processing options."  The remainder of this Internationalization
   section defines the NFSv4.1 stringprep profiles.  Much of the
   terminology used for the remainder of this section comes from
   stringprep.

   There are three UTF-8 string types defined for NFSv4.1: utf8str_cs,
   utf8str_cis, and utf8str_mixed.  Separate profiles are defined for
   each.  Each profile defines the following, as required by stringprep:

   o  The intended applicability of the profile

   o  The character repertoire that is the input and output to
      stringprep (which is Unicode 3.2 for the referenced version of
      stringprep).  However, NFSv4.1 implementations are not limited to
      3.2.

   o  The mapping tables from stringprep used (as described in section 3
      of stringprep)

   o  Any additional mapping tables specific to the profile

   o  The Unicode normalization used, if any (as described in section 4
      of stringprep)

   o  The tables from stringprep listing of characters that are
      prohibited as output (as described in section 5 of stringprep)

   o  The bidirectional string testing used, if any (as described in
      section 6 of stringprep)

   o  Any additional characters that are prohibited as output specific
      to the profile

   Stringprep discusses Unicode characters, whereas NFSv4.1 renders
   UTF-8 characters.  Since there is a one-to-one mapping from UTF-8 to
   Unicode, when the remainder of this document refers to Unicode, the
   reader should assume UTF-8.

   Much of the text for the profiles comes from RFC3491 [22].

14.1.  Stringprep profile for the utf8str_cs type

   Every use of the utf8str_cs type definition in the NFSv4 protocol
   specification follows the profile named nfs4_cs_prep.

14.1.1.  Intended applicability of the nfs4_cs_prep profile

   The utf8str_cs type is a case sensitive string of UTF-8 characters.
   Its primary use in NFSv4.1 is for naming components and pathnames.
   Components and pathnames are stored on the server's file system.  Two
   valid distinct UTF-8 strings might be the same after processing via
   the utf8str_cs profile.  If the strings are two names inside a
   directory, the NFSv4.1 server will need to either:

   o  disallow the creation of a second name if its post processed form
      collides with that of an existing name, or

   o  allow the creation of the second name, but arrange so that after
      post processing, the second name is different than the post
      processed form of the first name.

14.1.2.  Character repertoire of nfs4_cs_prep

   The nfs4_cs_prep profile uses Unicode 3.2, as defined in stringprep's
   Appendix A.1.  However, NFSv4.1 implementations are not limited to
   3.2.

14.1.3.  Mapping used by nfs4_cs_prep

   The nfs4_cs_prep profile specifies mapping using the following tables
   from stringprep:

      Table B.1

   Table B.2 is normally not part of the nfs4_cs_prep profile as it is
   primarily for dealing with case-insensitive comparisons.  However, if
   the NFSv4.1 file server supports the case_insensitive file system
   attribute, and if case_insensitive is TRUE, the NFSv4.1 server MUST
   use Table B.2 (in addition to Table B.1) when processing utf8str_cs
   strings, and the NFSv4.1 client MUST assume that Table B.2 (in
   addition to Table B.1) is being used.

   If the case_preserving attribute is present and set to FALSE, then
   the NFSv4.1 server MUST use Table B.2 to map case when processing
   utf8str_cs strings.  Whether the server maps from lower to upper case
   or from upper to lower case is an implementation dependency.
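
   As a purely illustrative sketch (not part of the protocol), a server
   honoring case_insensitive=TRUE might compare names as follows;
   Python's str.casefold() is used here as an approximation of, not a
   substitute for, stringprep Table B.2 case mapping.

   def names_match(name_a, name_b):
       # Case-insensitive comparison of two component names.
       return name_a.casefold() == name_b.casefold()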

14.1.4.  Normalization used by nfs4_cs_prep

   The nfs4_cs_prep profile does not specify a normalization form.  A
   later revision of this specification may specify a particular
   normalization form.  Therefore, the server and client can expect that
   they may receive unnormalized characters within protocol requests and
   responses.  If the operating environment requires normalization, then
   the implementation must normalize utf8str_cs strings within the
   protocol before presenting the information to an application (at the
   client) or local file system (at the server).
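
   For illustration only, and assuming an operating environment that
   requires normalization form C (the profile itself mandates no
   form), a client might normalize names received from the server
   before presenting them to applications:

   import unicodedata

   def present_name(raw_bytes):
       # Decode the on-the-wire UTF-8 octets and normalize to NFC.
       name = raw_bytes.decode("utf-8")
       return unicodedata.normalize("NFC", name)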




14.1.5.  Prohibited output for nfs4_cs_prep

   The nfs4_cs_prep profile RECOMMENDS prohibiting the use of the
   following tables from stringprep:

      Table C.5

      Table C.6

14.1.6.  Bidirectional output for nfs4_cs_prep

   The nfs4_cs_prep profile does not specify any checking of
   bidirectional strings.

14.2.  Stringprep profile for the utf8str_cis type

   Every use of the utf8str_cis type definition in the NFSv4.1 protocol
   specification follows the profile named nfs4_cis_prep.

14.2.1.  Intended applicability of the nfs4_cis_prep profile

   The utf8str_cis type is a case insensitive string of UTF-8
   characters.  Its primary use in NFSv4.1 is for naming NFS servers.

14.2.2.  Character repertoire of nfs4_cis_prep

   The nfs4_cis_prep profile uses Unicode 3.2, as defined in
   stringprep's Appendix A.1.  However, NFSv4.1 implementations are not
   limited to 3.2.

14.2.3.  Mapping used by nfs4_cis_prep

   The nfs4_cis_prep profile specifies mapping using the following
   tables from stringprep:

      Table B.1

      Table B.2

14.2.4.  Normalization used by nfs4_cis_prep

   The nfs4_cis_prep profile specifies using Unicode normalization form
   KC, as described in stringprep.

14.2.5.  Prohibited output for nfs4_cis_prep

   The nfs4_cis_prep profile specifies prohibiting using the following
   tables from stringprep:

      Table C.1.2

      Table C.2.2

      Table C.3

      Table C.4

      Table C.5

      Table C.6

      Table C.7

      Table C.8

      Table C.9

14.2.6.  Bidirectional output for nfs4_cis_prep

   The nfs4_cis_prep profile specifies checking bidirectional strings as
   described in stringprep's section 6.

14.3.  Stringprep profile for the utf8str_mixed type

   Every use of the utf8str_mixed type definition in the NFSv4.1
   protocol specification follows the profile named nfs4_mixed_prep.

14.3.1.  Intended applicability of the nfs4_mixed_prep profile

   The utf8str_mixed type is a string of UTF-8 characters, with a prefix
   that is case sensitive, a separator equal to '@', and a suffix that
   is a fully qualified domain name.  Its primary use in NFSv4.1 is for
   naming principals identified in an Access Control Entry.
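
   The following sketch is illustrative only.  It splits a
   utf8str_mixed principal into the prefix (processed under the case-
   sensitive rules) and the domain suffix (processed under the case-
   insensitive rules); splitting on the final '@' is an assumption of
   this example, not a statement of the protocol.

   def split_principal(principal):
       prefix, sep, suffix = principal.rpartition("@")
       if sep != "@":
           raise ValueError("missing '@' separator")
       # prefix: Table B.1 only; suffix: Tables B.1 and B.2.
       return prefix, suffix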

14.3.2.  Character repertoire of nfs4_mixed_prep

   The nfs4_mixed_prep profile uses Unicode 3.2, as defined in
   stringprep's Appendix A.1.  However, NFSv4.1 implementations are not
   limited to 3.2.

14.3.3.  Mapping used by nfs4_mixed_prep

   For the prefix and the separator of a utf8str_mixed string, the
   nfs4_mixed_prep profile specifies mapping using the following table
   from stringprep:

      Table B.1

   For the suffix of a utf8str_mixed string, the nfs4_mixed_prep profile
   specifies mapping using the following tables from stringprep:

      Table B.1

      Table B.2

14.3.4.  Normalization used by nfs4_mixed_prep

   The nfs4_mixed_prep profile specifies using Unicode normalization
   form KC, as described in stringprep.

14.3.5.  Prohibited output for nfs4_mixed_prep

   The nfs4_mixed_prep profile specifies prohibiting using the following
   tables from stringprep:

      Table C.1.2

      Table C.2.2

      Table C.3

      Table C.4

      Table C.5

      Table C.6

      Table C.7

      Table C.8

      Table C.9

14.3.6.  Bidirectional output for nfs4_mixed_prep

   The nfs4_mixed_prep profile specifies checking bidirectional strings
   as described in stringprep's section 6.

14.4.  UTF-8 Capabilities

   const FSCHARSET_CAP4_CONTAINS_NON_UTF8  = 0x1;
   const FSCHARSET_CAP4_ALLOWS_ONLY_UTF8   = 0x2;

   typedef uint32_t        fs_charset_cap4;



   Because some operating environments and file systems do not enforce
   character set encodings, NFSv4.1 supports the fs_charset_cap
   attribute (Section 5.8.2.11) that indicates to the client a file
   system's UTF-8 capabilities.  The attribute is an integer containing
   a pair of flags.  The first flag is FSCHARSET_CAP4_CONTAINS_NON_UTF8,
   which, if set to one, tells the client that the file system contains
   non-UTF-8 characters, and that the server will not convert non-UTF-8
   characters to UTF-8 if the client reads a symlink or directory, nor
   will operations with component names or pathnames in the arguments
   convert the strings to UTF-8.  The second flag is
   FSCHARSET_CAP4_ALLOWS_ONLY_UTF8, which, if set to one, indicates that
   the server will accept (and generate) only UTF-8 characters on the
   file system.  If FSCHARSET_CAP4_ALLOWS_ONLY_UTF8 is set to one,
   FSCHARSET_CAP4_CONTAINS_NON_UTF8 MUST be set to zero.
   FSCHARSET_CAP4_ALLOWS_ONLY_UTF8 SHOULD always be set to one.
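
   A minimal client-side sketch of interpreting these flags follows;
   it is illustrative only and simply mirrors the consistency rule
   stated above.

   FSCHARSET_CAP4_CONTAINS_NON_UTF8 = 0x1
   FSCHARSET_CAP4_ALLOWS_ONLY_UTF8  = 0x2

   def describe_charset_cap(cap):
       only_utf8 = bool(cap & FSCHARSET_CAP4_ALLOWS_ONLY_UTF8)
       non_utf8  = bool(cap & FSCHARSET_CAP4_CONTAINS_NON_UTF8)
       if only_utf8 and non_utf8:
           raise ValueError("inconsistent fs_charset_cap from server")
       return only_utf8, non_utf8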

14.5.  UTF-8 Related Errors

   Where the client sends an invalid UTF-8 string, the server should
   return NFS4ERR_INVAL (see Table 5).  This includes cases in which
   inappropriate prefixes are detected and where the count includes
   trailing bytes that do not constitute a full UCS character.

   Where the client-supplied string is valid UTF-8 but contains
   characters that are not supported by the server as a value for that
   string (e.g., names containing characters outside of Unicode plane 0
   on file systems that fail to support such characters despite their
   presence in the Unicode standard), the server should return
   NFS4ERR_BADCHAR.

   Where a UTF-8 string is used as a file name, and the file system,
   while supporting all of the characters within the name, does not
   allow that particular name to be used, the server should return the
   error NFS4ERR_BADNAME (Table 5).  This includes situations in which
   the server file system imposes a normalization constraint on name
   strings, but will also include such situations as file system
   prohibitions of "." and ".." as file names for certain operations,
   and other such constraints.
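
   A server-side sketch of choosing among these errors for a candidate
   name is shown below.  It is illustrative only; the particular checks
   (UTF-8 validity, plane-0-only characters, and the names "." and
   "..") are examples of the error classes above, not a normative or
   exhaustive list.

   def check_name(raw_bytes):
       try:
           name = raw_bytes.decode("utf-8")
       except UnicodeDecodeError:
           return "NFS4ERR_INVAL"        # not valid UTF-8
       if any(ord(c) > 0xFFFF for c in name):
           return "NFS4ERR_BADCHAR"      # character not supported
       if name in (".", ".."):
           return "NFS4ERR_BADNAME"      # characters fine, name not
       return "NFS4_OK"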


15.  Error Values

   NFS error numbers are assigned to failed operations within a Compound
   (COMPOUND or CB_COMPOUND) request.  A Compound request contains a
   number of NFS operations that have their results encoded in sequence
   in a Compound reply.  The results of successful operations will
   consist of an NFS4_OK status followed by the encoded results of the
   operation.  If an NFS operation fails, an error status will be
   entered in the reply and the Compound request will be terminated.
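
   The following Python sketch models, for illustration only, how a
   client might walk such a reply: per-operation results appear in
   order, and an error status ends the sequence.  The representation
   of a reply as (opcode, status, data) tuples is an assumption of
   this example.

   def split_compound_reply(reply):
       completed, error = [], 0
       for opcode, status, data in reply:
           if status != 0:               # 0 == NFS4_OK
               error = status            # Compound terminated here
               break
           completed.append((opcode, data))
       return completed, error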

15.1.  Error Definitions

                        Protocol Error Definitions

    +-----------------------------------+--------+-------------------+
    | Error                             | Number | Description       |
    +-----------------------------------+--------+-------------------+
    | NFS4_OK                           | 0      | Section 15.1.3.1  |
    | NFS4ERR_ACCESS                    | 13     | Section 15.1.6.1  |
    | NFS4ERR_ATTRNOTSUPP               | 10032  | Section 15.1.15.1 |
    | NFS4ERR_ADMIN_REVOKED             | 10047  | Section 15.1.5.1  |
    | NFS4ERR_BACK_CHAN_BUSY            | 10057  | Section 15.1.12.1 |
    | NFS4ERR_BADCHAR                   | 10040  | Section 15.1.7.1  |
    | NFS4ERR_BADHANDLE                 | 10001  | Section 15.1.2.1  |
    | NFS4ERR_BADIOMODE                 | 10049  | Section 15.1.10.1 |
    | NFS4ERR_BADLAYOUT                 | 10050  | Section 15.1.10.2 |
    | NFS4ERR_BADNAME                   | 10041  | Section 15.1.7.2  |
    | NFS4ERR_BADOWNER                  | 10039  | Section 15.1.15.2 |
    | NFS4ERR_BADSESSION                | 10052  | Section 15.1.11.1 |
    | NFS4ERR_BADSLOT                   | 10053  | Section 15.1.11.2 |
    | NFS4ERR_BADTYPE                   | 10007  | Section 15.1.4.1  |
    | NFS4ERR_BADXDR                    | 10036  | Section 15.1.1.1  |
    | NFS4ERR_BAD_COOKIE                | 10003  | Section 15.1.1.2  |
    | NFS4ERR_BAD_HIGH_SLOT             | 10077  | Section 15.1.11.3 |
    | NFS4ERR_BAD_RANGE                 | 10042  | Section 15.1.8.1  |
    | NFS4ERR_BAD_SEQID                 | 10026  | Section 15.1.16.1 |
    | NFS4ERR_BAD_SESSION_DIGEST        | 10051  | Section 15.1.12.2 |
    | NFS4ERR_BAD_STATEID               | 10025  | Section 15.1.5.2  |
    | NFS4ERR_CB_PATH_DOWN              | 10048  | Section 15.1.11.4 |
    | NFS4ERR_CLID_INUSE                | 10017  | Section 15.1.13.2 |
    | NFS4ERR_CLIENTID_BUSY             | 10074  | Section 15.1.13.1 |
    | NFS4ERR_COMPLETE_ALREADY          | 10054  | Section 15.1.9.1  |
    | NFS4ERR_CONN_NOT_BOUND_TO_SESSION | 10055  | Section 15.1.11.6 |
    | NFS4ERR_DEADLOCK                  | 10045  | Section 15.1.8.2  |
    | NFS4ERR_DEADSESSION               | 10078  | Section 15.1.11.5 |
    | NFS4ERR_DELAY                     | 10008  | Section 15.1.1.3  |
    | NFS4ERR_DELEG_ALREADY_WANTED      | 10056  | Section 15.1.14.1 |
    | NFS4ERR_DELEG_REVOKED             | 10087  | Section 15.1.5.3  |
    | NFS4ERR_DENIED                    | 10010  | Section 15.1.8.3  |
    | NFS4ERR_DIRDELEG_UNAVAIL          | 10084  | Section 15.1.14.2 |
    | NFS4ERR_DQUOT                     | 69     | Section 15.1.4.2  |
    | NFS4ERR_ENCR_ALG_UNSUPP           | 10079  | Section 15.1.13.3 |
    | NFS4ERR_EXIST                     | 17     | Section 15.1.4.3  |
    | NFS4ERR_EXPIRED                   | 10011  | Section 15.1.5.4  |
    | NFS4ERR_FBIG                      | 27     | Section 15.1.4.4  |
    | NFS4ERR_FHEXPIRED                 | 10014  | Section 15.1.2.2  |
    | NFS4ERR_FILE_OPEN                 | 10046  | Section 15.1.4.5  |
    | NFS4ERR_GRACE                     | 10013  | Section 15.1.9.2  |
    | NFS4ERR_HASH_ALG_UNSUPP           | 10072  | Section 15.1.13.4 |
    | NFS4ERR_INVAL                     | 22     | Section 15.1.1.4  |
    | NFS4ERR_IO                        | 5      | Section 15.1.4.6  |
    | NFS4ERR_ISDIR                     | 21     | Section 15.1.2.3  |
    | NFS4ERR_LAYOUTTRYLATER            | 10058  | Section 15.1.10.3 |
    | NFS4ERR_LAYOUTUNAVAILABLE         | 10059  | Section 15.1.10.4 |
    | NFS4ERR_LEASE_MOVED               | 10031  | Section 15.1.16.2 |
    | NFS4ERR_LOCKED                    | 10012  | Section 15.1.8.4  |
    | NFS4ERR_LOCKS_HELD                | 10037  | Section 15.1.8.5  |
    | NFS4ERR_LOCK_NOTSUPP              | 10043  | Section 15.1.8.6  |
    | NFS4ERR_LOCK_RANGE                | 10028  | Section 15.1.8.7  |
    | NFS4ERR_MINOR_VERS_MISMATCH       | 10021  | Section 15.1.3.2  |
    | NFS4ERR_MLINK                     | 31     | Section 15.1.4.7  |
    | NFS4ERR_MOVED                     | 10019  | Section 15.1.2.4  |
    | NFS4ERR_NAMETOOLONG               | 63     | Section 15.1.7.3  |
    | NFS4ERR_NOENT                     | 2      | Section 15.1.4.8  |
    | NFS4ERR_NOFILEHANDLE              | 10020  | Section 15.1.2.5  |
    | NFS4ERR_NOMATCHING_LAYOUT         | 10060  | Section 15.1.10.5 |
    | NFS4ERR_NOSPC                     | 28     | Section 15.1.4.9  |
    | NFS4ERR_NOTDIR                    | 20     | Section 15.1.2.6  |
    | NFS4ERR_NOTEMPTY                  | 66     | Section 15.1.4.10 |
    | NFS4ERR_NOTSUPP                   | 10004  | Section 15.1.1.5  |
    | NFS4ERR_NOT_ONLY_OP               | 10081  | Section 15.1.3.3  |
    | NFS4ERR_NOT_SAME                  | 10027  | Section 15.1.15.3 |
    | NFS4ERR_NO_GRACE                  | 10033  | Section 15.1.9.3  |
    | NFS4ERR_NXIO                      | 6      | Section 15.1.16.3 |
    | NFS4ERR_OLD_STATEID               | 10024  | Section 15.1.5.5  |
    | NFS4ERR_OPENMODE                  | 10038  | Section 15.1.8.8  |
    | NFS4ERR_OP_ILLEGAL                | 10044  | Section 15.1.3.4  |
    | NFS4ERR_OP_NOT_IN_SESSION         | 10071  | Section 15.1.3.5  |
    | NFS4ERR_PERM                      | 1      | Section 15.1.6.2  |
    | NFS4ERR_PNFS_IO_HOLE              | 10075  | Section 15.1.10.6 |
    | NFS4ERR_PNFS_NO_LAYOUT            | 10080  | Section 15.1.10.7 |
    | NFS4ERR_RECALLCONFLICT            | 10061  | Section 15.1.14.3 |
    | NFS4ERR_RECLAIM_BAD               | 10034  | Section 15.1.9.4  |
    | NFS4ERR_RECLAIM_CONFLICT          | 10035  | Section 15.1.9.5  |
    | NFS4ERR_REJECT_DELEG              | 10085  | Section 15.1.14.4 |
    | NFS4ERR_REP_TOO_BIG               | 10066  | Section 15.1.3.6  |
    | NFS4ERR_REP_TOO_BIG_TO_CACHE      | 10067  | Section 15.1.3.7  |
    | NFS4ERR_REQ_TOO_BIG               | 10065  | Section 15.1.3.8  |
    | NFS4ERR_RESTOREFH                 | 10030  | Section 15.1.16.4 |
    | NFS4ERR_RETRY_UNCACHED_REP        | 10068  | Section 15.1.3.9  |
    | NFS4ERR_RETURNCONFLICT            | 10086  | Section 15.1.10.8 |
    | NFS4ERR_ROFS                      | 30     | Section 15.1.4.11 |
    | NFS4ERR_SAME                      | 10009  | Section 15.1.15.4 |
    | NFS4ERR_SHARE_DENIED              | 10015  | Section 15.1.8.9  |
    | NFS4ERR_SEQUENCE_POS              | 10064  | Section 15.1.3.10 |
    | NFS4ERR_SEQ_FALSE_RETRY           | 10076  | Section 15.1.11.7 |
    | NFS4ERR_SEQ_MISORDERED            | 10063  | Section 15.1.11.8 |
    | NFS4ERR_SERVERFAULT               | 10006  | Section 15.1.1.6  |
    | NFS4ERR_STALE                     | 70     | Section 15.1.2.7  |
    | NFS4ERR_STALE_CLIENTID            | 10022  | Section 15.1.13.5 |
    | NFS4ERR_STALE_STATEID             | 10023  | Section 15.1.16.5 |
    | NFS4ERR_SYMLINK                   | 10029  | Section 15.1.2.8  |
    | NFS4ERR_TOOSMALL                  | 10005  | Section 15.1.1.7  |
    | NFS4ERR_TOO_MANY_OPS              | 10070  | Section 15.1.3.11 |
    | NFS4ERR_UNKNOWN_LAYOUTTYPE        | 10062  | Section 15.1.10.9 |
    | NFS4ERR_UNSAFE_COMPOUND           | 10069  | Section 15.1.3.12 |
    | NFS4ERR_WRONGSEC                  | 10016  | Section 15.1.6.3  |
    | NFS4ERR_WRONG_CRED                | 10082  | Section 15.1.6.4  |
    | NFS4ERR_WRONG_TYPE                | 10083  | Section 15.1.2.9  |
    | NFS4ERR_XDEV                      | 18     | Section 15.1.4.12 |
    +-----------------------------------+--------+-------------------+

                                  Table 5

15.1.1.  General Errors

   This section deals with errors that are applicable to a broad set of
   different purposes.

15.1.1.1.  NFS4ERR_BADXDR (Error Code 10036)

   The arguments for this operation do not match those specified in the
   XDR definition.  This includes situations in which the request ends
   before all the arguments have been seen.  Note that this error
   applies when fixed enumerations (these include booleans) have a value
   within the input stream which is not valid for the enum.  A replier
   may pre-parse all operations for a Compound procedure before doing
   any operation execution and return RPC-level XDR errors in that case.

15.1.1.2.  NFS4ERR_BAD_COOKIE (Error Code 10003)

   Used for operations that provide a set of information indexed by some
   quantity provided by the client, or by a cookie sent by the server
   for an earlier invocation.  Where the value cannot be used for its
   intended purpose, this error results.

15.1.1.3.  NFS4ERR_DELAY (Error Code 10008)

   For any of a number of reasons, the replier could not process this
   operation in what was deemed a reasonable time.  The client should
   wait and then try the request with a new slot and sequence value.

   Some examples of situations that might lead to this error:

   o  A server that supports hierarchical storage receives a request to
      process a file that had been migrated.

   o  An operation requires a delegation recall to proceed and waiting
      for this delegation recall makes processing this request in a
      timely fashion impossible.

   In such cases, the error NFS4ERR_DELAY allows these preparatory
   operations to proceed without holding up client resources such as a
   session slot.  After delaying for a period of time, the client can
   then re-send the operation in question (but not with the same slot
   ID and sequence ID; one or both MUST be different on the re-send).

   Note that without the ability to return NFS4ERR_DELAY and the
   client's willingness to re-send when receiving it, deadlock might
   well result.  E.g., if a recall is done, and if the delegation return
   or operations preparatory to delegation return are held up by other
   operations that need the delegation to be returned, session slots
   might not be available.  The result could be deadlock.
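
   For illustration only, a client's retry of an operation that drew
   NFS4ERR_DELAY might look like the sketch below.  acquire_slot and
   send are caller-supplied callables standing in for real session
   machinery; both names are hypothetical.

   import time

   NFS4ERR_DELAY = 10008

   def retry_on_delay(request, acquire_slot, send, tries=5):
       wait = 0.5                        # initial backoff in seconds
       for attempt in range(tries):
           slot_id, seq_id = acquire_slot()   # fresh slot and seqid
           status = send(request, slot_id, seq_id)
           if status != NFS4ERR_DELAY:
               return status
           time.sleep(wait * (2 ** attempt))
       return NFS4ERR_DELAY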

15.1.1.4.  NFS4ERR_INVAL (Error Code 22)

   The arguments for this operation are not valid for some reason, even
   though they do match those specified in the XDR definition for the
   request.

15.1.1.5.  NFS4ERR_NOTSUPP (Error Code 10004)

   Operation not supported, either because the operation is an OPTIONAL
   one and is not supported by this server or because the operation MUST
   NOT be implemented in the current minor version.

15.1.1.6.  NFS4ERR_SERVERFAULT (Error Code 10006)

   An error occurred on the server which does not map to any of the
   specific legal NFSv4.1 protocol error values.  The client should
   translate this into an appropriate error.  UNIX clients may choose to
   translate this to EIO.

15.1.1.7.  NFS4ERR_TOOSMALL (Error Code 10005)

   Used where an operation returns a variable amount of data, with a
   limit specified by the client.  Where the data returned cannot fit
   within the limit specified by the client, this error results.





15.1.2.  Filehandle Errors

   These errors deal with the situation in which the current or saved
   filehandle, or the filehandle passed to PUTFH intended to become the
   current filehandle, is invalid in some way.  This includes situations
   in which the filehandle is a valid filehandle in general but is not
   of the appropriate object type for the current operation.

   Where the error description indicates a problem with the current or
   saved filehandle, it is to be understood that filehandles are only
   checked for the condition if they are implicit arguments of the
   operation in question.

15.1.2.1.  NFS4ERR_BADHANDLE (Error Code 10001)

   Illegal NFS filehandle for the current server.  The current
   filehandle failed internal consistency checks.  Once accepted as
   valid (by PUTFH), no subsequent status change can cause the
   filehandle to generate this error.

15.1.2.2.  NFS4ERR_FHEXPIRED (Error Code 10014)

   A current or saved filehandle which is an argument to the current
   operation is volatile and has expired at the server.

15.1.2.3.  NFS4ERR_ISDIR (Error Code 21)

   The current or saved filehandle designates a directory when the
   current operation does not allow a directory to be accepted as the
   target of this operation.

15.1.2.4.  NFS4ERR_MOVED (Error Code 10019)

   The file system which contains the current filehandle object is not
   present at the server.  It may have been relocated or migrated to
   another server, or it may never have been present.  The client may
   obtain the new file system location by obtaining the "fs_locations"
   or "fs_locations_info" attribute for the current filehandle.  For
   further discussion, refer to Section 11.2.

15.1.2.5.  NFS4ERR_NOFILEHANDLE (Error Code 10020)

   The logical current or saved filehandle value is required by the
   current operation and is not set.  This may be a result of a
   malformed COMPOUND operation (i.e., no PUTFH or PUTROOTFH before an
   operation that requires the current filehandle be set).





15.1.2.6.  NFS4ERR_NOTDIR (Error Code 20)

   The current (or saved) filehandle designates an object which is not a
   directory for an operation in which a directory is required.

15.1.2.7.  NFS4ERR_STALE (Error Code 70)

   The current or saved filehandle value designating an argument to the
   current operation is invalid.  The file referred to by that
   filehandle no longer exists or access to it has been revoked.

15.1.2.8.  NFS4ERR_SYMLINK (Error Code 10029)

   The current filehandle designates a symbolic link when the current
   operation does not allow a symbolic link as the target.

15.1.2.9.  NFS4ERR_WRONG_TYPE (Error Code 10083)

   The current (or saved) filehandle designates an object which is of an
   invalid type for the current operation and there is no more specific
   error (such as NFS4ERR_ISDIR or NFS4ERR_SYMLINK) that applies.  Note
   that in NFSv4.0, such situations generally resulted in the less
   specific error NFS4ERR_INVAL.

15.1.3.  Compound Structure Errors

   This section deals with errors that relate to overall structure of a
   Compound request (by which we mean to include both COMPOUND and
   CB_COMPOUND), rather than to particular operations.

   There are a number of basic constraints on the operations that may
   appear in a Compound request.  Sessions adds to these basic
   constraints by requiring a Sequence operation (either SEQUENCE or
   CB_SEQUENCE) at the start of the Compound.

15.1.3.1.  NFS4_OK (Error Code 0)

   Indicates the operation completed successfully, in that all of the
   constituent operations completed without error.

15.1.3.2.  NFS4ERR_MINOR_VERS_MISMATCH (Error code 10021)

   The minor version specified is not one that the current listener
   supports.  This value is returned in the overall status for the
   Compound but is not associated with a specific operation since the
   results will specify a result count of zero.





15.1.3.3.  NFS4ERR_NOT_ONLY_OP (Error Code 10081)

   Certain operations, which are allowed to be executed outside of a
   session, MUST be the only operation within a COMPOUND.  This error
   results when that constraint is not met.

15.1.3.4.  NFS4ERR_OP_ILLEGAL (Error Code 10044)

   The operation code is not a valid one for the current Compound
   procedure.  The opcode in the result stream matched with this error
   is the ILLEGAL value, although the value that appears in the request
   stream may be different.  Where an illegal value appears and the
   replier pre-parses all operations for a Compound procedure before
   doing any operation execution, an RPC-level XDR error may be returned
   in this case.

15.1.3.5.  NFS4ERR_OP_NOT_IN_SESSION (Error Code 10071)

   Most forward operations and all callback operations are only valid
   within the context of a session, so that the Compound request in
   question MUST begin with a Sequence operation.  If an attempt is made
   to execute these operations outside the context of a session, this
   error results.

15.1.3.6.  NFS4ERR_REP_TOO_BIG (Error Code 10066)

   The reply to a Compound would exceed the channel's negotiated maximum
   response size.

15.1.3.7.  NFS4ERR_REP_TOO_BIG_TO_CACHE (Error Code 10067)

   The reply to a Compound would exceed the channel's negotiated maximum
   size for replies cached in the reply cache when the Sequence for the
   current request specifies that this request is to be cached.

15.1.3.8.  NFS4ERR_REQ_TOO_BIG (Error Code 10065)

   The Compound request exceeds the channel's negotiated maximum size
   for requests.

15.1.3.9.  NFS4ERR_RETRY_UNCACHED_REP (Error Code 10068)

   The requester has attempted a retry of a Compound which it previously
   requested not be placed in the reply cache.







15.1.3.10.  NFS4ERR_SEQUENCE_POS (Error Code 10064)

   A Sequence operation appeared in a position other than the first
   operation of a Compound request.

15.1.3.11.  NFS4ERR_TOO_MANY_OPS (Error Code 10070)

   The Compound request has too many operations, exceeding the count
   negotiated when the session was created.

15.1.3.12.  NFS4ERR_UNSAFE_COMPOUND (Error Code 10069)

   The client has sent a COMPOUND request with an unsafe mix of
   operations, specifically with a non-idempotent operation changing the
   current filehandle which is not followed by a GETFH.
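
   For illustration only, a client can avoid this error by pairing the
   non-idempotent operation with a GETFH in the same COMPOUND, as in
   the sketch below.  Operations are modeled as plain tuples purely
   for this example.

   def open_compound(slot_id, seq_id, dir_fh, name):
       return [
           ("SEQUENCE", slot_id, seq_id),   # required first operation
           ("PUTFH", dir_fh),               # directory of the new file
           ("OPEN", name),                  # non-idempotent, sets CFH
           ("GETFH",),                      # avoids UNSAFE_COMPOUND
       ]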

15.1.4.  File System Errors

   These errors describe situations which occurred in the underlying
   file system implementation rather than in the protocol or any NFSv4.x
   feature.

15.1.4.1.  NFS4ERR_BADTYPE (Error Code 10007)

   An attempt was made to create an object with an inappropriate type
   specified to CREATE.  This may be because the type is undefined,
   because it is a type not supported by the server, or because it is a
   type for which create is not intended, such as a regular file or
   named attribute, for which OPEN is used to do the file creation.

15.1.4.2.  NFS4ERR_DQUOT (Error Code 69)

   Resource (quota) hard limit exceeded.  The user's resource limit on
   the server has been exceeded.

15.1.4.3.  NFS4ERR_EXIST (Error Code 17)

   A file of the specified target name (when creating, renaming or
   linking) already exists.

15.1.4.4.  NFS4ERR_FBIG (Error Code 27)

   File too large.  The operation would have caused a file to grow
   beyond the server's limit.







15.1.4.5.  NFS4ERR_FILE_OPEN (Error Code 10046)

   The operation is not allowed because a file involved in the operation
   is currently open.  Servers may, but are not required to, disallow
   linking-to, removing, or renaming open files.

15.1.4.6.  NFS4ERR_IO (Error Code 5)

   Indicates that an I/O error occurred for which the file system was
   unable to provide recovery.

15.1.4.7.  NFS4ERR_MLINK (Error Code 31)

   The request would have caused the server's limit for the number of
   hard links a file may have to be exceeded.

15.1.4.8.  NFS4ERR_NOENT (Error Code 2)

   Indicates no such file or directory.  The file or directory name
   specified does not exist.

15.1.4.9.  NFS4ERR_NOSPC (Error Code 28)

   Indicates no space left on device.  The operation would have caused
   the server's file system to exceed its limit.

15.1.4.10.  NFS4ERR_NOTEMPTY (Error Code 66)

   An attempt was made to remove a directory that was not empty.

15.1.4.11.  NFS4ERR_ROFS (Error Code 30)

   Indicates a read-only file system.  A modifying operation was
   attempted on a read-only file system.

15.1.4.12.  NFS4ERR_XDEV (Error Code 18)

   Indicates an attempt to do an operation, such as linking, that
   inappropriately crosses a boundary.  This may be due to such
   boundaries as:

   o  That between file systems (where the fsids are different).

   o  That between different named attribute directories or between a
      named attribute directory and an ordinary directory.

   o  That between regions of a file system that the file system
      implementation treats as separate (for example, for space
      accounting purposes), and where cross-connection between the
      regions is not allowed.

15.1.5.  State Management Errors

   These errors indicate problems with the stateid (or one of the
   stateids) passed to a given operation.  This includes situations in
   which the stateid is invalid as well as situations in which the
   stateid is valid but designates revoked locking state.  Depending on
   the operation, the stateid when valid may designate opens, byte-range
   locks, file or directory delegations, layouts, or device maps.

15.1.5.1.  NFS4ERR_ADMIN_REVOKED (Error Code 10047)

   A stateid designates locking state of any type that has been revoked
   due to administrative interaction, possibly while the lease is valid.

15.1.5.2.  NFS4ERR_BAD_STATEID (Error Code 10026)

   A stateid does not properly designate any valid state.  See
   Section 8.2.4 and Section 8.2.3 for a discussion of how stateids are
   validated.

15.1.5.3.  NFS4ERR_DELEG_REVOKED (Error Code 10087)

   A stateid designates recallable locking state of any type (delegation
   or layout) that has been revoked due to the failure of the client to
   return the lock when it was recalled.

15.1.5.4.  NFS4ERR_EXPIRED (Error Code 10011)

   A stateid designates locking state of any type that has been revoked
   due to expiration of the client's lease, either immediately upon
   lease expiration, or following a later request for a conflicting
   lock.

15.1.5.5.  NFS4ERR_OLD_STATEID (Error Code 10024)

   A stateid with a non-zero seqid value does not match the current
   seqid for the state designated by the user.
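
   A server-side sketch of this check follows; it is illustrative only
   and deliberately simplified (it treats any mismatch of a non-zero
   seqid as NFS4ERR_OLD_STATEID).

   def check_stateid_seqid(presented_seqid, current_seqid):
       # A seqid of zero designates the most recent state, so only
       # non-zero values are compared against the current seqid.
       if presented_seqid != 0 and presented_seqid != current_seqid:
           return "NFS4ERR_OLD_STATEID"
       return "NFS4_OK"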

15.1.6.  Security Errors

   These are the various permission-related errors in NFSv4.1.







15.1.6.1.  NFS4ERR_ACCESS (Error Code 13)

   Indicates permission denied.  The caller does not have the correct
   permission to perform the requested operation.  Contrast this with
   NFS4ERR_PERM (Section 15.1.6.2), which restricts itself to owner or
   privileged user permission failures, and NFS4ERR_WRONG_CRED
   (Section 15.1.6.4) which deals with appropriate permission to delete
   or modify transient objects, based on the credentials of the user
   that created them.

15.1.6.2.  NFS4ERR_PERM (Error Code 1)

   Indicates requester is not the owner.  The operation was not allowed
   because