--- 1/draft-ietf-nfsv4-rfc3530bis-07.txt 2011-03-04 21:16:38.000000000 +0100 +++ 2/draft-ietf-nfsv4-rfc3530bis-08.txt 2011-03-04 21:16:39.000000000 +0100 @@ -1,18 +1,18 @@ NFSv4 T. Haynes Internet-Draft D. Noveck Intended status: Standards Track Editors -Expires: August 31, 2011 February 27, 2011 +Expires: September 5, 2011 March 04, 2011 NFS Version 4 Protocol - draft-ietf-nfsv4-rfc3530bis-07.txt + draft-ietf-nfsv4-rfc3530bis-08.txt Abstract The Network File System (NFS) version 4 is a distributed filesystem protocol which owes heritage to NFS protocol version 2, RFC 1094, and version 3, RFC 1813. Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the mount protocol. In addition, support for strong security (and its negotiation), compound operations, client caching, and internationalization have been added. Of course, @@ -42,21 +42,21 @@ and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt. The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html. - This Internet-Draft will expire on August 31, 2011. + This Internet-Draft will expire on September 5, 2011. Copyright Notice Copyright (c) 2011 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. 
Please review these documents @@ -75,40 +75,40 @@ the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English. Table of Contents 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 8 1.1. Changes since RFC 3530 . . . . . . . . . . . . . . . . . 8 - 1.2. Changes since RFC 3010 . . . . . . . . . . . . . . . . . 8 + 1.2. Changes since RFC 3010 . . . . . . . . . . . . . . . . . 9 1.3. NFS Version 4 Goals . . . . . . . . . . . . . . . . . . 10 1.4. Inconsistencies of this Document with the companion document NFS Version 4 Protocol . . . . . . . . . . . . 10 - 1.5. Overview of NFS version 4 Features . . . . . . . . . . . 11 + 1.5. Overview of NFSv4 Features . . . . . . . . . . . . . . . 11 1.5.1. RPC and Security . . . . . . . . . . . . . . . . . . 11 1.5.2. Procedure and Operation Structure . . . . . . . . . 11 1.5.3. Filesystem Model . . . . . . . . . . . . . . . . . . 12 1.5.4. OPEN and CLOSE . . . . . . . . . . . . . . . . . . . 14 1.5.5. File Locking . . . . . . . . . . . . . . . . . . . . 14 1.5.6. Client Caching and Delegation . . . . . . . . . . . 14 1.6. General Definitions . . . . . . . . . . . . . . . . . . 15 2. Protocol Data Types . . . . . . . . . . . . . . . . . . . . . 17 2.1. Basic Data Types . . . . . . . . . . . . . . . . . . . . 17 - 2.2. Structured Data Types . . . . . . . . . . . . . . . . . 18 + 2.2. Structured Data Types . . . . . . . . . . . . . . . . . 19 3. RPC and Security Flavor . . . . . . . . . . . . . . . . . . . 24 3.1. Ports and Transports . . . . . . . . . . . . . . . . . . 24 3.1.1. Client Retransmission Behavior . . . . . . . . . . . 25 3.2. Security Flavors . . . . . . . . . . . . . . . . . . . . 25 - 3.2.1. Security mechanisms for NFS version 4 . . . . . . . 26 + 3.2.1. Security mechanisms for NFSv4 . 
. . . . . . . . . . 26 3.3. Security Negotiation . . . . . . . . . . . . . . . . . . 28 3.3.1. SECINFO . . . . . . . . . . . . . . . . . . . . . . 28 3.3.2. Security Error . . . . . . . . . . . . . . . . . . . 28 3.3.3. Callback RPC Authentication . . . . . . . . . . . . 29 4. Filehandles . . . . . . . . . . . . . . . . . . . . . . . . . 31 4.1. Obtaining the First Filehandle . . . . . . . . . . . . . 31 4.1.1. Root Filehandle . . . . . . . . . . . . . . . . . . 31 4.1.2. Public Filehandle . . . . . . . . . . . . . . . . . 31 4.2. Filehandle Types . . . . . . . . . . . . . . . . . . . . 32 4.2.1. General Properties of a Filehandle . . . . . . . . . 32 @@ -174,176 +174,182 @@ 8. NFS Server Name Space . . . . . . . . . . . . . . . . . . . . 101 8.1. Server Exports . . . . . . . . . . . . . . . . . . . . . 101 8.2. Browsing Exports . . . . . . . . . . . . . . . . . . . . 101 8.3. Server Pseudo Filesystem . . . . . . . . . . . . . . . . 101 8.4. Multiple Roots . . . . . . . . . . . . . . . . . . . . . 102 8.5. Filehandle Volatility . . . . . . . . . . . . . . . . . 102 8.6. Exported Root . . . . . . . . . . . . . . . . . . . . . 102 8.7. Mount Point Crossing . . . . . . . . . . . . . . . . . . 103 8.8. Security Policy and Name Space Presentation . . . . . . 103 9. File Locking and Share Reservations . . . . . . . . . . . . . 104 - 9.1. Locking . . . . . . . . . . . . . . . . . . . . . . . . 105 + 9.1. Opens and Byte-Range Locks . . . . . . . . . . . . . . . 105 9.1.1. Client ID . . . . . . . . . . . . . . . . . . . . . 105 - 9.1.2. Server Release of Clientid . . . . . . . . . . . . . 108 - 9.1.3. lock_owner and stateid Definition . . . . . . . . . 109 - 9.1.4. Use of the stateid and Locking . . . . . . . . . . . 110 - 9.1.5. Sequencing of Lock Requests . . . . . . . . . . . . 112 - 9.1.6. Recovery from Replayed Requests . . . . . . . . . . 113 - 9.1.7. Releasing lock_owner State . . . . . . . . . . . . . 114 - 9.1.8. Use of Open Confirmation . . . . . . . . . . . . . . 
114 - 9.2. Lock Ranges . . . . . . . . . . . . . . . . . . . . . . 115 - 9.3. Upgrading and Downgrading Locks . . . . . . . . . . . . 116 - 9.4. Blocking Locks . . . . . . . . . . . . . . . . . . . . . 116 - 9.5. Lease Renewal . . . . . . . . . . . . . . . . . . . . . 117 - 9.6. Crash Recovery . . . . . . . . . . . . . . . . . . . . . 118 - 9.6.1. Client Failure and Recovery . . . . . . . . . . . . 118 - 9.6.2. Server Failure and Recovery . . . . . . . . . . . . 119 - 9.6.3. Network Partitions and Recovery . . . . . . . . . . 120 - 9.7. Recovery from a Lock Request Timeout or Abort . . . . . 124 - 9.8. Server Revocation of Locks . . . . . . . . . . . . . . . 124 - 9.9. Share Reservations . . . . . . . . . . . . . . . . . . . 125 - 9.10. OPEN/CLOSE Operations . . . . . . . . . . . . . . . . . 126 - 9.10.1. Close and Retention of State Information . . . . . . 127 - 9.11. Open Upgrade and Downgrade . . . . . . . . . . . . . . . 127 - 9.12. Short and Long Leases . . . . . . . . . . . . . . . . . 128 + 9.1.2. Server Release of Client ID . . . . . . . . . . . . 108 + 9.1.3. Stateid Definition . . . . . . . . . . . . . . . . . 109 + 9.1.4. lock_owner . . . . . . . . . . . . . . . . . . . . . 117 + 9.1.5. Use of the Stateid and Locking . . . . . . . . . . . 117 + 9.1.6. Sequencing of Lock Requests . . . . . . . . . . . . 119 + 9.1.7. Recovery from Replayed Requests . . . . . . . . . . 120 + 9.1.8. Releasing lock_owner State . . . . . . . . . . . . . 121 + 9.1.9. Use of Open Confirmation . . . . . . . . . . . . . . 121 + 9.2. Lock Ranges . . . . . . . . . . . . . . . . . . . . . . 122 + 9.3. Upgrading and Downgrading Locks . . . . . . . . . . . . 123 + 9.4. Blocking Locks . . . . . . . . . . . . . . . . . . . . . 123 + 9.5. Lease Renewal . . . . . . . . . . . . . . . . . . . . . 124 + 9.6. Crash Recovery . . . . . . . . . . . . . . . . . . . . . 125 + 9.6.1. Client Failure and Recovery . . . . . . . . . . . . 125 + 9.6.2. Server Failure and Recovery . . . . . . . . . . . . 
126 + 9.6.3. Network Partitions and Recovery . . . . . . . . . . 127 + 9.7. Recovery from a Lock Request Timeout or Abort . . . . . 133 + 9.8. Server Revocation of Locks . . . . . . . . . . . . . . . 133 + 9.9. Share Reservations . . . . . . . . . . . . . . . . . . . 135 + 9.10. OPEN/CLOSE Operations . . . . . . . . . . . . . . . . . 135 + 9.10.1. Close and Retention of State Information . . . . . . 136 + 9.11. Open Upgrade and Downgrade . . . . . . . . . . . . . . . 137 + 9.12. Short and Long Leases . . . . . . . . . . . . . . . . . 137 9.13. Clocks, Propagation Delay, and Calculating Lease - Expiration . . . . . . . . . . . . . . . . . . . . . . . 129 - 9.14. Migration, Replication and State . . . . . . . . . . . . 129 - 9.14.1. Migration and State . . . . . . . . . . . . . . . . 130 - 9.14.2. Replication and State . . . . . . . . . . . . . . . 130 - 9.14.3. Notification of Migrated Lease . . . . . . . . . . . 131 - 9.14.4. Migration and the Lease_time Attribute . . . . . . . 132 - 10. Client-Side Caching . . . . . . . . . . . . . . . . . . . . . 132 - 10.1. Performance Challenges for Client-Side Caching . . . . . 133 - 10.2. Delegation and Callbacks . . . . . . . . . . . . . . . . 134 - 10.2.1. Delegation Recovery . . . . . . . . . . . . . . . . 135 - 10.3. Data Caching . . . . . . . . . . . . . . . . . . . . . . 137 - 10.3.1. Data Caching and OPENs . . . . . . . . . . . . . . . 138 - 10.3.2. Data Caching and File Locking . . . . . . . . . . . 139 - 10.3.3. Data Caching and Mandatory File Locking . . . . . . 140 - 10.3.4. Data Caching and File Identity . . . . . . . . . . . 141 - 10.4. Open Delegation . . . . . . . . . . . . . . . . . . . . 142 - 10.4.1. Open Delegation and Data Caching . . . . . . . . . . 144 - 10.4.2. Open Delegation and File Locks . . . . . . . . . . . 145 - 10.4.3. Handling of CB_GETATTR . . . . . . . . . . . . . . . 146 - 10.4.4. Recall of Open Delegation . . . . . . . . . . . . . 149 - 10.4.5. 
Clients that Fail to Honor Delegation Recalls . . . 151 - 10.4.6. Delegation Revocation . . . . . . . . . . . . . . . 151 - 10.5. Data Caching and Revocation . . . . . . . . . . . . . . 152 - 10.5.1. Revocation Recovery for Write Open Delegation . . . 152 - 10.6. Attribute Caching . . . . . . . . . . . . . . . . . . . 153 - 10.7. Data and Metadata Caching and Memory Mapped Files . . . 155 - 10.8. Name Caching . . . . . . . . . . . . . . . . . . . . . . 157 - 10.9. Directory Caching . . . . . . . . . . . . . . . . . . . 158 - 11. Minor Versioning . . . . . . . . . . . . . . . . . . . . . . 159 - 12. Internationalization . . . . . . . . . . . . . . . . . . . . 162 - 12.1. Use of UTF-8 . . . . . . . . . . . . . . . . . . . . . . 163 - 12.1.1. Relation to Stringprep . . . . . . . . . . . . . . . 163 - 12.1.2. Normalization, Equivalence, and Confusability . . . 164 - 12.2. String Type Overview . . . . . . . . . . . . . . . . . . 166 - 12.2.1. Overall String Class Divisions . . . . . . . . . . . 167 - 12.2.2. Divisions by Typedef Parent types . . . . . . . . . 168 - 12.2.3. Individual Types and Their Handling . . . . . . . . 168 - 12.3. Errors Related to Strings . . . . . . . . . . . . . . . 170 - 12.4. Types with Pre-processing to Resolve Mixture Issues . . 171 - 12.4.1. Processing of Principal Strings . . . . . . . . . . 171 - 12.4.2. Processing of Server Id Strings . . . . . . . . . . 171 - 12.5. String Types without Internationalization Processing . . 172 - 12.6. Types with Processing Defined by Other Internet Areas . 172 - 12.7. String Types with NFS-specific Processing . . . . . . . 173 - 12.7.1. Handling of File Name Components . . . . . . . . . . 174 - 12.7.2. Processing of Link Text . . . . . . . . . . . . . . 183 - 12.7.3. Processing of Principal Prefixes . . . . . . . . . . 184 - 13. Error Values . . . . . . . . . . . . . . . . . . . . . . . . 185 - 13.1. Error Definitions . . . . . . . . . . . . . . . . . . . 185 - 13.1.1. General Errors . . . . . . . . . . . 
. . . . . . . . 187 - 13.1.2. Filehandle Errors . . . . . . . . . . . . . . . . . 188 - 13.1.3. Compound Structure Errors . . . . . . . . . . . . . 189 - 13.1.4. File System Errors . . . . . . . . . . . . . . . . . 190 - 13.1.5. State Management Errors . . . . . . . . . . . . . . 192 - 13.1.6. Security Errors . . . . . . . . . . . . . . . . . . 193 - 13.1.7. Name Errors . . . . . . . . . . . . . . . . . . . . 193 - 13.1.8. Locking Errors . . . . . . . . . . . . . . . . . . . 194 - 13.1.9. Reclaim Errors . . . . . . . . . . . . . . . . . . . 195 - 13.1.10. Client Management Errors . . . . . . . . . . . . . . 196 - 13.1.11. Attribute Handling Errors . . . . . . . . . . . . . 196 - 13.2. Operations and their valid errors . . . . . . . . . . . 197 - 13.3. Callback operations and their valid errors . . . . . . . 205 - 13.4. Errors and the operations that use them . . . . . . . . 205 - 14. NFS version 4 Requests . . . . . . . . . . . . . . . . . . . 209 - 14.1. Compound Procedure . . . . . . . . . . . . . . . . . . . 210 - 14.2. Evaluation of a Compound Request . . . . . . . . . . . . 210 - 14.3. Synchronous Modifying Operations . . . . . . . . . . . . 211 - 14.4. Operation Values . . . . . . . . . . . . . . . . . . . . 212 - 15. NFS version 4 Procedures . . . . . . . . . . . . . . . . . . 212 - 15.1. Procedure 0: NULL - No Operation . . . . . . . . . . . . 212 - 15.2. Procedure 1: COMPOUND - Compound Operations . . . . . . 212 - 15.3. Operation 3: ACCESS - Check Access Rights . . . . . . . 215 - 15.4. Operation 4: CLOSE - Close File . . . . . . . . . . . . 218 - 15.5. Operation 5: COMMIT - Commit Cached Data . . . . . . . . 219 - 15.6. Operation 6: CREATE - Create a Non-Regular File Object . 221 + Expiration . . . . . . . . . . . . . . . . . . . . . . . 138 + 9.14. Migration, Replication and State . . . . . . . . . . . . 138 + 9.14.1. Migration and State . . . . . . . . . . . . . . . . 139 + 9.14.2. Replication and State . . . . . . . . . . . . . . . 140 + 9.14.3. 
Notification of Migrated Lease . . . . . . . . . . . 140 + 9.14.4. Migration and the Lease_time Attribute . . . . . . . 141 + 10. Client-Side Caching . . . . . . . . . . . . . . . . . . . . . 141 + 10.1. Performance Challenges for Client-Side Caching . . . . . 142 + 10.2. Delegation and Callbacks . . . . . . . . . . . . . . . . 143 + 10.2.1. Delegation Recovery . . . . . . . . . . . . . . . . 145 + 10.3. Data Caching . . . . . . . . . . . . . . . . . . . . . . 147 + 10.3.1. Data Caching and OPENs . . . . . . . . . . . . . . . 147 + 10.3.2. Data Caching and File Locking . . . . . . . . . . . 148 + 10.3.3. Data Caching and Mandatory File Locking . . . . . . 149 + 10.3.4. Data Caching and File Identity . . . . . . . . . . . 150 + + 10.4. Open Delegation . . . . . . . . . . . . . . . . . . . . 151 + 10.4.1. Open Delegation and Data Caching . . . . . . . . . . 153 + 10.4.2. Open Delegation and File Locks . . . . . . . . . . . 155 + 10.4.3. Handling of CB_GETATTR . . . . . . . . . . . . . . . 155 + 10.4.4. Recall of Open Delegation . . . . . . . . . . . . . 158 + 10.4.5. OPEN Delegation Race with CB_RECALL . . . . . . . . 160 + 10.4.6. Clients that Fail to Honor Delegation Recalls . . . 161 + 10.4.7. Delegation Revocation . . . . . . . . . . . . . . . 162 + 10.5. Data Caching and Revocation . . . . . . . . . . . . . . 162 + 10.5.1. Revocation Recovery for Write Open Delegation . . . 163 + 10.6. Attribute Caching . . . . . . . . . . . . . . . . . . . 163 + 10.7. Data and Metadata Caching and Memory Mapped Files . . . 165 + 10.8. Name Caching . . . . . . . . . . . . . . . . . . . . . . 167 + 10.9. Directory Caching . . . . . . . . . . . . . . . . . . . 168 + 11. Minor Versioning . . . . . . . . . . . . . . . . . . . . . . 169 + 12. Internationalization . . . . . . . . . . . . . . . . . . . . 172 + 12.1. Use of UTF-8 . . . . . . . . . . . . . . . . . . . . . . 173 + 12.1.1. Relation to Stringprep . . . . . . . . . . . . . . . 173 + 12.1.2. 
Normalization, Equivalence, and Confusability . . . 174 + 12.2. String Type Overview . . . . . . . . . . . . . . . . . . 177 + 12.2.1. Overall String Class Divisions . . . . . . . . . . . 177 + 12.2.2. Divisions by Typedef Parent types . . . . . . . . . 178 + 12.2.3. Individual Types and Their Handling . . . . . . . . 179 + 12.3. Errors Related to Strings . . . . . . . . . . . . . . . 180 + 12.4. Types with Pre-processing to Resolve Mixture Issues . . 181 + 12.4.1. Processing of Principal Strings . . . . . . . . . . 181 + 12.4.2. Processing of Server Id Strings . . . . . . . . . . 181 + 12.5. String Types without Internationalization Processing . . 182 + 12.6. Types with Processing Defined by Other Internet Areas . 182 + 12.7. String Types with NFS-specific Processing . . . . . . . 183 + 12.7.1. Handling of File Name Components . . . . . . . . . . 184 + 12.7.2. Processing of Link Text . . . . . . . . . . . . . . 193 + 12.7.3. Processing of Principal Prefixes . . . . . . . . . . 194 + 13. Error Values . . . . . . . . . . . . . . . . . . . . . . . . 195 + 13.1. Error Definitions . . . . . . . . . . . . . . . . . . . 195 + 13.1.1. General Errors . . . . . . . . . . . . . . . . . . . 197 + 13.1.2. Filehandle Errors . . . . . . . . . . . . . . . . . 198 + 13.1.3. Compound Structure Errors . . . . . . . . . . . . . 199 + 13.1.4. File System Errors . . . . . . . . . . . . . . . . . 200 + 13.1.5. State Management Errors . . . . . . . . . . . . . . 202 + 13.1.6. Security Errors . . . . . . . . . . . . . . . . . . 203 + 13.1.7. Name Errors . . . . . . . . . . . . . . . . . . . . 203 + 13.1.8. Locking Errors . . . . . . . . . . . . . . . . . . . 204 + 13.1.9. Reclaim Errors . . . . . . . . . . . . . . . . . . . 205 + 13.1.10. Client Management Errors . . . . . . . . . . . . . . 206 + 13.1.11. Attribute Handling Errors . . . . . . . . . . . . . 206 + 13.2. Operations and their valid errors . . . . . . . . . . . 207 + 13.3. Callback operations and their valid errors . . . . . . 
. 214 + 13.4. Errors and the operations that use them . . . . . . . . 214 + 14. NFSv4 Requests . . . . . . . . . . . . . . . . . . . . . . . 219 + 14.1. Compound Procedure . . . . . . . . . . . . . . . . . . . 219 + 14.2. Evaluation of a Compound Request . . . . . . . . . . . . 220 + 14.3. Synchronous Modifying Operations . . . . . . . . . . . . 221 + 14.4. Operation Values . . . . . . . . . . . . . . . . . . . . 221 + 15. NFSv4 Procedures . . . . . . . . . . . . . . . . . . . . . . 221 + 15.1. Procedure 0: NULL - No Operation . . . . . . . . . . . . 221 + 15.2. Procedure 1: COMPOUND - Compound Operations . . . . . . 222 + 15.3. Operation 3: ACCESS - Check Access Rights . . . . . . . 227 + 15.4. Operation 4: CLOSE - Close File . . . . . . . . . . . . 230 + 15.5. Operation 5: COMMIT - Commit Cached Data . . . . . . . . 231 + 15.6. Operation 6: CREATE - Create a Non-Regular File Object . 233 15.7. Operation 7: DELEGPURGE - Purge Delegations Awaiting - Recovery . . . . . . . . . . . . . . . . . . . . . . . . 224 - 15.8. Operation 8: DELEGRETURN - Return Delegation . . . . . . 225 - 15.9. Operation 9: GETATTR - Get Attributes . . . . . . . . . 225 - 15.10. Operation 10: GETFH - Get Current Filehandle . . . . . . 226 - 15.11. Operation 11: LINK - Create Link to a File . . . . . . . 227 - 15.12. Operation 12: LOCK - Create Lock . . . . . . . . . . . . 229 - 15.13. Operation 13: LOCKT - Test For Lock . . . . . . . . . . 233 - 15.14. Operation 14: LOCKU - Unlock File . . . . . . . . . . . 234 - 15.15. Operation 15: LOOKUP - Lookup Filename . . . . . . . . . 236 - 15.16. Operation 16: LOOKUPP - Lookup Parent Directory . . . . 237 + Recovery . . . . . . . . . . . . . . . . . . . . . . . . 236 + 15.8. Operation 8: DELEGRETURN - Return Delegation . . . . . . 237 + 15.9. Operation 9: GETATTR - Get Attributes . . . . . . . . . 237 + 15.10. Operation 10: GETFH - Get Current Filehandle . . . . . . 239 + 15.11. Operation 11: LINK - Create Link to a File . . . . . . . 240 + 15.12. 
Operation 12: LOCK - Create Lock . . . . . . . . . . . . 241 + 15.13. Operation 13: LOCKT - Test For Lock . . . . . . . . . . 245 + 15.14. Operation 14: LOCKU - Unlock File . . . . . . . . . . . 247 + 15.15. Operation 15: LOOKUP - Lookup Filename . . . . . . . . . 248 + 15.16. Operation 16: LOOKUPP - Lookup Parent Directory . . . . 250 15.17. Operation 17: NVERIFY - Verify Difference in - Attributes . . . . . . . . . . . . . . . . . . . . . . . 238 - 15.18. Operation 18: OPEN - Open a Regular File . . . . . . . . 239 + Attributes . . . . . . . . . . . . . . . . . . . . . . . 250 + 15.18. Operation 18: OPEN - Open a Regular File . . . . . . . . 252 15.19. Operation 19: OPENATTR - Open Named Attribute - Directory . . . . . . . . . . . . . . . . . . . . . . . 249 - 15.20. Operation 20: OPEN_CONFIRM - Confirm Open . . . . . . . 250 - 15.21. Operation 21: OPEN_DOWNGRADE - Reduce Open File Access . 252 - 15.22. Operation 22: PUTFH - Set Current Filehandle . . . . . . 253 - 15.23. Operation 23: PUTPUBFH - Set Public Filehandle . . . . . 253 - 15.24. Operation 24: PUTROOTFH - Set Root Filehandle . . . . . 255 - 15.25. Operation 25: READ - Read from File . . . . . . . . . . 255 - 15.26. Operation 26: READDIR - Read Directory . . . . . . . . . 258 - 15.27. Operation 27: READLINK - Read Symbolic Link . . . . . . 261 - 15.28. Operation 28: REMOVE - Remove Filesystem Object . . . . 262 - 15.29. Operation 29: RENAME - Rename Directory Entry . . . . . 264 - 15.30. Operation 30: RENEW - Renew a Lease . . . . . . . . . . 266 - 15.31. Operation 31: RESTOREFH - Restore Saved Filehandle . . . 267 - 15.32. Operation 32: SAVEFH - Save Current Filehandle . . . . . 268 - 15.33. Operation 33: SECINFO - Obtain Available Security . . . 269 - 15.34. Operation 34: SETATTR - Set Attributes . . . . . . . . . 272 - 15.35. Operation 35: SETCLIENTID - Negotiate Clientid . . . . . 275 - 15.36. Operation 36: SETCLIENTID_CONFIRM - Confirm Clientid . . 278 - 15.37. 
Operation 37: VERIFY - Verify Same Attributes . . . . . 282 - 15.38. Operation 38: WRITE - Write to File . . . . . . . . . . 283 + Directory . . . . . . . . . . . . . . . . . . . . . . . 262 + 15.20. Operation 20: OPEN_CONFIRM - Confirm Open . . . . . . . 263 + 15.21. Operation 21: OPEN_DOWNGRADE - Reduce Open File Access . 265 + 15.22. Operation 22: PUTFH - Set Current Filehandle . . . . . . 266 + 15.23. Operation 23: PUTPUBFH - Set Public Filehandle . . . . . 267 + 15.24. Operation 24: PUTROOTFH - Set Root Filehandle . . . . . 268 + 15.25. Operation 25: READ - Read from File . . . . . . . . . . 269 + 15.26. Operation 26: READDIR - Read Directory . . . . . . . . . 271 + 15.27. Operation 27: READLINK - Read Symbolic Link . . . . . . 275 + 15.28. Operation 28: REMOVE - Remove Filesystem Object . . . . 276 + 15.29. Operation 29: RENAME - Rename Directory Entry . . . . . 278 + 15.30. Operation 30: RENEW - Renew a Lease . . . . . . . . . . 280 + 15.31. Operation 31: RESTOREFH - Restore Saved Filehandle . . . 281 + 15.32. Operation 32: SAVEFH - Save Current Filehandle . . . . . 282 + 15.33. Operation 33: SECINFO - Obtain Available Security . . . 282 + 15.34. Operation 34: SETATTR - Set Attributes . . . . . . . . . 285 + 15.35. Operation 35: SETCLIENTID - Negotiate Client ID . . . . 288 + 15.36. Operation 36: SETCLIENTID_CONFIRM - Confirm Client ID . 292 + 15.37. Operation 37: VERIFY - Verify Same Attributes . . . . . 295 + 15.38. Operation 38: WRITE - Write to File . . . . . . . . . . 297 15.39. Operation 39: RELEASE_LOCKOWNER - Release Lockowner - State . . . . . . . . . . . . . . . . . . . . . . . . . 287 - - 15.40. Operation 10044: ILLEGAL - Illegal operation . . . . . . 288 - 16. NFS version 4 Callback Procedures . . . . . . . . . . . . . . 289 - 16.1. Procedure 0: CB_NULL - No Operation . . . . . . . . . . 289 - 16.2. Procedure 1: CB_COMPOUND - Compound Operations . . . . . 290 - 16.2.6. Operation 3: CB_GETATTR - Get Attributes . . . . . . 291 - 16.2.7. 
Operation 4: CB_RECALL - Recall an Open Delegation . 292 + State . . . . . . . . . . . . . . . . . . . . . . . . . 301 + 15.40. Operation 10044: ILLEGAL - Illegal operation . . . . . . 302 + 16. NFSv4 Callback Procedures . . . . . . . . . . . . . . . . . . 302 + 16.1. Procedure 0: CB_NULL - No Operation . . . . . . . . . . 303 + 16.2. Procedure 1: CB_COMPOUND - Compound Operations . . . . . 303 + 16.2.6. Operation 3: CB_GETATTR - Get Attributes . . . . . . 305 + 16.2.7. Operation 4: CB_RECALL - Recall an Open Delegation . 306 16.2.8. Operation 10044: CB_ILLEGAL - Illegal Callback - Operation . . . . . . . . . . . . . . . . . . . . . 293 - 17. Security Considerations . . . . . . . . . . . . . . . . . . . 294 - 18. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 296 - 18.1. Named Attribute Definition . . . . . . . . . . . . . . . 296 - 18.2. ONC RPC Network Identifiers (netids) . . . . . . . . . . 296 - 19. References . . . . . . . . . . . . . . . . . . . . . . . . . 297 - 19.1. Normative References . . . . . . . . . . . . . . . . . . 297 - 19.2. Informative References . . . . . . . . . . . . . . . . . 298 - Appendix A. Acknowledgments . . . . . . . . . . . . . . . . . . 300 - Appendix B. RFC Editor Notes . . . . . . . . . . . . . . . . . . 300 - Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 301 + Operation . . . . . . . . . . . . . . . . . . . . . 307 + 17. Security Considerations . . . . . . . . . . . . . . . . . . . 308 + 18. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 309 + 18.1. Named Attribute Definitions . . . . . . . . . . . . . . 309 + 18.1.1. Initial Registry . . . . . . . . . . . . . . . . . . 310 + 18.1.2. Updating Registrations . . . . . . . . . . . . . . . 310 + 18.2. ONC RPC Network Identifiers (netids) . . . . . . . . . . 310 + 18.2.1. Initial Registry . . . . . . . . . . . . . . . . . . 312 + 18.2.2. Updating Registrations . . . . . . . . . . . . . . . 312 + 19. References . . . . . . . . . . . . . . 
. . . . . . . . . . . 312 + 19.1. Normative References . . . . . . . . . . . . . . . . . . 312 + 19.2. Informative References . . . . . . . . . . . . . . . . . 313 + Appendix A. Acknowledgments . . . . . . . . . . . . . . . . . . 315 + Appendix B. RFC Editor Notes . . . . . . . . . . . . . . . . . . 316 + Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 316 1. Introduction 1.1. Changes since RFC 3530 This document, together with the companion XDR description document [2], obsoletes RFC 3530 [11] as the authoritative document describing NFSv4. It does not introduce any over-the-wire protocol changes, in the sense that previously valid requests remain valid. However, some requests previously defined as invalid, although not @@ -366,28 +372,35 @@ these names are the province of the receiving entity. o Updating handling of domain names to reflect IDNA. o Restructuring of string types to more appropriately reflect the reality of required string processing. o LIPKEY SPKM/3 has been moved from being REQUIRED to OPTIONAL. o Some clarification on a client re-establishing callback - information to the new server if state has been migrated + information to the new server if state has been migrated. + + o A third edge case was added for Courtesy locks and network + partitions. + + o The definition of stateid was strengthened, which had the side + effect of introducing a semantic change in a COMPOUND structure + having a current stateid and a saved stateid. 1.2. Changes since RFC 3010 - This definition of the NFS version 4 protocol replaces or obsoletes - the definition present in [12]. While portions of the two documents - have remained the same, there have been substantive changes in - others. The changes made between [12] and this document represent + This definition of the NFSv4 protocol replaces or obsoletes the + definition present in [12]. While portions of the two documents have + remained the same, there have been substantive changes in others. 
+ The changes made between [12] and this document represent implementation experience and further review of the protocol. While some modifications were made for ease of implementation or clarification, most updates represent errors or situations where the [12] definition was untenable. The following list is not inclusive of all changes but presents some of the most notable changes or additions made: o The state model has added an open_owner4 identifier. This was done to accommodate Posix based clients and the model they use for @@ -419,33 +432,33 @@ o Added a new operation RELEASE_LOCKOWNER to enable notifying the server that a lock_owner4 will no longer be used by the client. o RENEW operation changes to identify the client correctly and allow for additional error returns. o Verify error return possibilities for all operations. o Remove use of the pathname4 data type from LOOKUP and OPEN in favor of having the client construct a sequence of LOOKUP - operations to achieive the same effect. + operations to achieve the same effect. o Clarification of the internationalization issues and adoption of the new stringprep profile framework. 1.3. NFS Version 4 Goals - The NFS version 4 protocol is a further revision of the NFS protocol - defined already by versions 2 [13] and 3 [14]. It retains the - essential characteristics of previous versions: design for easy - recovery, independent of transport protocols, operating systems and - filesystems, simplicity, and good performance. The NFS version 4 - revision has the following goals: + The NFSv4 protocol is a further revision of the NFS protocol defined + already by versions 2 [13] and 3 [14]. It retains the essential + characteristics of previous versions: design for easy recovery, + independent of transport protocols, operating systems and + filesystems, simplicity, and good performance. The NFSv4 revision + has the following goals: o Improved access and good performance on the Internet. 
The protocol is designed to transit firewalls easily, perform well where latency is high and bandwidth is low, and scale to very large numbers of clients per server. o Strong security with negotiation built into the protocol. The protocol builds on the work of the ONCRPC working group in @@ -469,278 +482,288 @@ Version 4 Protocol [2], NFS Version 4 Protocol, contains the definitions in XDR description language of the constructs used by the protocol. Inside this document, several of the constructs are reproduced for purposes of explanation. The reader is warned of the possibility of errors in the reproduced constructs outside of [2]. For any part of the document that is inconsistent with [2], [2] is to be considered authoritative. -1.5. Overview of NFS version 4 Features +1.5. Overview of NFSv4 Features To provide a reasonable context for the reader, the major features of - NFS version 4 protocol will be reviewed in brief. This will be done - to provide an appropriate context for both the reader who is familiar + NFSv4 protocol will be reviewed in brief. This will be done to + provide an appropriate context for both the reader who is familiar with the previous versions of the NFS protocol and the reader that is new to the NFS protocols. For the reader new to the NFS protocols, there is still a fundamental knowledge that is expected. The reader should be familiar with the XDR and RPC protocols as described in [3] and [15]. A basic knowledge of filesystems and distributed filesystems is expected as well. 1.5.1. RPC and Security As with previous versions of NFS, the External Data Representation - (XDR) and Remote Procedure Call (RPC) mechanisms used for the NFS - version 4 protocol are those defined in [3] and [15]. To meet end to - end security requirements, the RPCSEC_GSS framework [4] will be used - to extend the basic RPC security. 
With the use of RPCSEC_GSS, - various mechanisms can be provided to offer authentication, - integrity, and privacy to the NFS version 4 protocol. Kerberos V5 - will be used as described in [16] to provide one security framework. - The LIPKEY GSS-API mechanism described in [5] will be used to provide - for the use of user password and server public key by the NFS version - 4 protocol. With the use of RPCSEC_GSS, other mechanisms may also be - specified and used for NFS version 4 security. + (XDR) and Remote Procedure Call (RPC) mechanisms used for the NFSv4 + protocol are those defined in [3] and [15]. To meet end to end + security requirements, the RPCSEC_GSS framework [4] will be used to + extend the basic RPC security. With the use of RPCSEC_GSS, various + mechanisms can be provided to offer authentication, integrity, and + privacy to the NFS version 4 protocol. Kerberos V5 will be used as + described in [16] to provide one security framework. The LIPKEY GSS- + API mechanism described in [5] will be used to provide for the use of + user password and server public key by the NFSv4 protocol. With the + use of RPCSEC_GSS, other mechanisms may also be specified and used + for NFS version 4 security. - To enable in-band security negotiation, the NFS version 4 protocol - has added a new operation which provides the client a method of - querying the server about its policies regarding which security - mechanisms must be used for access to the server's filesystem - resources. With this, the client can securely match the security - mechanism that meets the policies specified at both the client and - server. + To enable in-band security negotiation, the NFSv4 protocol has added + a new operation which provides the client a method of querying the + server about its policies regarding which security mechanisms must be + used for access to the server's filesystem resources. 
With this, the + client can securely match the security mechanism that meets the + policies specified at both the client and server. 1.5.2. Procedure and Operation Structure A significant departure from the previous versions of the NFS - protocol is the introduction of the COMPOUND procedure. For the NFS - version 4 protocol, there are two RPC procedures, NULL and COMPOUND. - The COMPOUND procedure is defined in terms of operations and these + protocol is the introduction of the COMPOUND procedure. For the + NFSv4 protocol, there are two RPC procedures, NULL and COMPOUND. The + COMPOUND procedure is defined in terms of operations and these operations correspond more closely to the traditional NFS procedures. With the use of the COMPOUND procedure, the client is able to build simple or complex requests. These COMPOUND requests allow for a reduction in the number of RPCs needed for logical filesystem operations. For example, without previous contact with a server a client will be able to read data from a file in one request by combining LOOKUP, OPEN, and READ operations in a single COMPOUND RPC. With previous versions of the NFS protocol, this type of single request was not possible. The model used for COMPOUND is very simple. There is no logical OR or ANDing of operations. The operations combined within a COMPOUND request are evaluated in order by the server. Once an operation returns a failing result, the evaluation ends and the results of all evaluated operations are returned to the client. - The NFS version 4 protocol continues to have the client refer to a - file or directory at the server by a "filehandle". The COMPOUND - procedure has a method of passing a filehandle from one operation to - another within the sequence of operations. There is a concept of a - "current filehandle" and "saved filehandle". Most operations use the - "current filehandle" as the filesystem object to operate upon. 
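The COMPOUND evaluation model described above — strictly in-order evaluation, stopping at the first failing operation, and returning the results of every operation evaluated so far — can be sketched as follows. This is a hypothetical illustration only, not the protocol's XDR encoding; the operation callables and the numeric error code are invented for the example.

```python
# Illustrative sketch of COMPOUND evaluation: operations run strictly
# in order, evaluation stops at the first failure, and the results of
# every evaluated operation are returned to the client.
# NFS4_OK and the example error code are placeholders, not spec values.

NFS4_OK = 0

def evaluate_compound(operations):
    """Evaluate a list of (name, op) pairs; each op returns a status."""
    results = []
    for name, op in operations:
        status = op()
        results.append((name, status))
        if status != NFS4_OK:
            break          # first failure ends evaluation of the request
    return results

# Example: LOOKUP succeeds, OPEN fails, so READ is never evaluated.
ops = [
    ("LOOKUP", lambda: NFS4_OK),
    ("OPEN",   lambda: 13),      # hypothetical error status
    ("READ",   lambda: NFS4_OK),
]
print(evaluate_compound(ops))   # [('LOOKUP', 0), ('OPEN', 13)]
```

Note that, as the text says, there is no logical OR or ANDing: the server never skips or reorders operations, it only truncates evaluation at the first failure.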
The - "saved filehandle" is used as temporary filehandle storage within a - COMPOUND procedure as well as an additional operand for certain - operations. + The NFSv4 protocol continues to have the client refer to a file or + directory at the server by a "filehandle". The COMPOUND procedure + has a method of passing a filehandle from one operation to another + within the sequence of operations. There is a concept of a "current + filehandle" and "saved filehandle". Most operations use the "current + filehandle" as the filesystem object to operate upon. The "saved + filehandle" is used as temporary filehandle storage within a COMPOUND + procedure as well as an additional operand for certain operations. 1.5.3. Filesystem Model - The general filesystem model used for the NFS version 4 protocol is - the same as previous versions. The server filesystem is hierarchical - with the regular files contained within being treated as opaque byte - streams. In a slight departure, file and directory names are encoded - with UTF-8 to deal with the basics of internationalization. + The general filesystem model used for the NFSv4 protocol is the same + as previous versions. The server filesystem is hierarchical with the + regular files contained within being treated as opaque byte streams. + In a slight departure, file and directory names are encoded with + UTF-8 to deal with the basics of internationalization. - The NFS version 4 protocol does not require a separate protocol to - provide for the initial mapping between path name and filehandle. - Instead of using the older MOUNT protocol for this mapping, the - server provides a ROOT filehandle that represents the logical root or - top of the filesystem tree provided by the server. The server - provides multiple filesystems by gluing them together with pseudo - filesystems. These pseudo filesystems provide for potential gaps in - the path names between real filesystems. 
+ The NFSv4 protocol does not require a separate protocol to provide + for the initial mapping between path name and filehandle. Instead of + using the older MOUNT protocol for this mapping, the server provides + a ROOT filehandle that represents the logical root or top of the + filesystem tree provided by the server. The server provides multiple + filesystems by gluing them together with pseudo filesystems. These + pseudo filesystems provide for potential gaps in the path names + between real filesystems. 1.5.3.1. Filehandle Types In previous versions of the NFS protocol, the filehandle provided by the server was guaranteed to be valid or persistent for the lifetime of the filesystem object to which it referred. For some server implementations, this persistence requirement has been difficult to - meet. For the NFS version 4 protocol, this requirement has been - relaxed by introducing another type of filehandle, volatile. With - persistent and volatile filehandle types, the server implementation - can match the abilities of the filesystem at the server along with - the operating environment. The client will have knowledge of the - type of filehandle being provided by the server and can be prepared - to deal with the semantics of each. + meet. For the NFSv4 protocol, this requirement has been relaxed by + introducing another type of filehandle, volatile. With persistent + and volatile filehandle types, the server implementation can match + the abilities of the filesystem at the server along with the + operating environment. The client will have knowledge of the type of + filehandle being provided by the server and can be prepared to deal + with the semantics of each. 1.5.3.2. Attribute Types - The NFS version 4 protocol introduces three classes of filesystem or - file attributes. 
Like the additional filehandle type, the - classification of file attributes has been done to ease server - implementations along with extending the overall functionality of the - NFS protocol. This attribute model is structured to be extensible - such that new attributes can be introduced in minor revisions of the - protocol without requiring significant rework. + The NFSv4 protocol has a rich and extensible file object attribute + structure, which is divided into REQUIRED, RECOMMENDED, and named + attributes (see Section 5). - The three classifications are: mandatory, recommended and named - attributes. This is a significant departure from the previous - attribute model used in the NFS protocol. Previously, the attributes - for the filesystem and file objects were a fixed set of mainly UNIX - attributes. If the server or client did not support a particular - attribute, it would have to simulate the attribute the best it could. + Several (but not all) of the REQUIRED attributes are derived from the + attributes of NFSv3 (see definition of the fattr3 data type in [14]). + An example of a REQUIRED attribute is the file object's type + (Section 5.8.1.2) so that regular files can be distinguished from + directories (also known as folders in some operating environments) + and other types of objects. REQUIRED attributes are discussed in + Section 5.1. - Mandatory attributes are the minimal set of file or filesystem - attributes that must be provided by the server and must be properly - represented by the server. Recommended attributes represent - different filesystem types and operating environments. The - recommended attributes will allow for better interoperability and the - inclusion of more operating environments. The mandatory and - recommended attribute sets are traditional file or filesystem - attributes. The third type of attribute is the named attribute. 
A - named attribute is an opaque byte stream that is associated with a + An example of three RECOMMENDED attributes are acl, sacl, and dacl. + These attributes define an Access Control List (ACL) on a file object + (Section 6). An ACL provides directory and file access control + beyond the model used in NFSv3. The ACL definition allows for + specification of specific sets of permissions for individual users + and groups. In addition, ACL inheritance allows propagation of + access permissions and restriction down a directory tree as file + system objects are created. RECOMMENDED attributes are discussed in + Section 5.2. + + A named attribute is an opaque byte stream that is associated with a directory or file and referred to by a string name. Named attributes are meant to be used by client applications as a method to associate - application specific data with a regular file or directory. + application-specific data with a regular file or directory. NFSv4.1 + modifies named attributes relative to NFSv4.0 by tightening the + allowed operations in order to prevent the development of non- + interoperable implementations. Named attributes are discussed in + Section 5.3. - One significant addition to the recommended set of file attributes is - the Access Control List (ACL) attribute. This attribute provides for - directory and file access control beyond the model used in previous - versions of the NFS protocol. The ACL definition allows for - specification of user and group level access control. +1.5.3.3. Multi-server Namespace -1.5.3.3. Filesystem Replication and Migration + NFSv4 contains a number of features to allow implementation of + namespaces that cross server boundaries and that allow and facilitate + a non-disruptive transfer of support for individual file systems + between servers. They are all based upon attributes that allow one + file system to specify alternate or new locations for that file + system.
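The location-attribute idea introduced above — a file system object carrying alternate or new locations that a client can fall back to — can be sketched as follows. All names here (the class, the `present` flag, the server/path pairs) are illustrative assumptions, not protocol-defined structures.

```python
# Minimal sketch of the multi-server namespace idea: a file system may
# list alternate locations; when the current instance is absent or
# unavailable, the client selects one of the alternates (a referral or
# migration target).  Illustrative only, not the fs_locations encoding.

class FileSystem:
    def __init__(self, locations, present=True):
        self.locations = locations   # list of (server, path) pairs
        self.present = present       # absent fs => locations only, no content

def choose_location(fs, current):
    """Prefer the current instance if it is still present; else fail over."""
    if fs.present and current in fs.locations:
        return current
    # current instance absent or unavailable: pick an alternate location
    for loc in fs.locations:
        if loc != current:
            return loc
    return None

# A previously present file system has become absent (migration case):
fs = FileSystem([("serverA", "/export"), ("serverB", "/export")],
                present=False)
print(choose_location(fs, ("serverA", "/export")))  # ('serverB', '/export')
```

In the referral case described below, the "absent" file system never had content on the first server at all; the client discovers the real location only through the location attributes.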
- With the use of a special file attribute, the ability to inform the - client of filesystem locations on another server is enabled. The - filesystem locations attribute provides a method for the client to - probe the server about the location of a filesystem. In the event - that a fileystems is not present on server the client will receive an - error when attempting to operate on the filesystem and it can then - query as to the correct filesystem location. Thus is allowed - construction of multi-server namespaces.. + These attributes may be used together with the concept of absent file + systems, which provide specifications for additional locations but no + actual file system content. This allows a number of important + facilities: - These features also allow file system replication and migration. In - the event of a migration of a filesystem, the client will receive an - error when operating on the filesystem and it can then query location - attribute to determine the new file system location. Similar steps - are used for replication, the client is able to query the server for - the multiple available locations of a particular filesystem. From - this information, the client can use its own policies to access the - appropriate filesystem location. + o Location attributes may be used with absent file systems to + implement referrals whereby one server may direct the client to a + file system provided by another server. This allows extensive + multi-server namespaces to be constructed. + + o Location attributes may be provided for present file systems to + provide the locations of alternate file system instances or + replicas to be used in the event that the current file system + instance becomes unavailable. + + o Location attributes may be provided when a previously present file + system becomes absent. This allows non-disruptive migration of + file systems to alternate servers. 1.5.4. 
OPEN and CLOSE - The NFS version 4 protocol introduces OPEN and CLOSE operations. The - OPEN operation provides a single point where file lookup, creation, - and share semantics can be combined. The CLOSE operation also - provides for the release of state accumulated by OPEN. + The NFSv4 protocol introduces OPEN and CLOSE operations. The OPEN + operation provides a single point where file lookup, creation, and + share semantics can be combined. The CLOSE operation also provides + for the release of state accumulated by OPEN. 1.5.5. File Locking - With the NFS version 4 protocol, the support for byte range file - locking is part of the NFS protocol. The file locking support is - structured so that an RPC callback mechanism is not required. This - is a departure from the previous versions of the NFS file locking - protocol, Network Lock Manager (NLM). The state associated with file - locks is maintained at the server under a lease-based model. The - server defines a single lease period for all state held by a NFS - client. If the client does not renew its lease within the defined - period, all state associated with the client's lease may be released - by the server. The client may renew its lease with use of the RENEW + With the NFSv4 protocol, the support for byte range file locking is + part of the NFS protocol. The file locking support is structured so + that an RPC callback mechanism is not required. This is a departure + from the previous versions of the NFS file locking protocol, Network + Lock Manager (NLM). The state associated with file locks is + maintained at the server under a lease-based model. The server + defines a single lease period for all state held by an NFS client. If + the client does not renew its lease within the defined period, all + state associated with the client's lease may be released by the + server. The client may renew its lease with use of the RENEW operation or implicitly by use of other operations (primarily READ). 1.5.6.
Client Caching and Delegation - The file, attribute, and directory caching for the NFS version 4 - protocol is similar to previous versions. Attributes and directory - information are cached for a duration determined by the client. At - the end of a predefined timeout, the client will query the server to - see if the related filesystem object has been updated. + The file, attribute, and directory caching for the NFSv4 protocol is + similar to previous versions. Attributes and directory information + are cached for a duration determined by the client. At the end of a + predefined timeout, the client will query the server to see if the + related filesystem object has been updated. For file data, the client checks its cache validity when the file is opened. A query is sent to the server to determine if the file has been changed. Based on this information, the client determines if the data cache for the file should be kept or released. Also, when the file is closed, any modified data is written to the server. If an application wants to serialize access to file data, file locking of the file data ranges in question should be used. - The major addition to NFS version 4 in the area of caching is the - ability of the server to delegate certain responsibilities to the - client. When the server grants a delegation for a file to a client, - the client is guaranteed certain semantics with respect to the - sharing of that file with other clients. At OPEN, the server may - provide the client either a read or write delegation for the file. - If the client is granted a read delegation, it is assured that no - other client has the ability to write to the file for the duration of - the delegation. If the client is granted a write delegation, the - client is assured that no other client has read or write access to - the file. + The major addition to NFSv4 in the area of caching is the ability of + the server to delegate certain responsibilities to the client.
When + the server grants a delegation for a file to a client, the client is + guaranteed certain semantics with respect to the sharing of that file + with other clients. At OPEN, the server may provide the client + either an OPEN_DELEGATE_READ or OPEN_DELEGATE_WRITE delegation for the + file. If the client is granted an OPEN_DELEGATE_READ delegation, it + is assured that no other client has the ability to write to the file + for the duration of the delegation. If the client is granted an + OPEN_DELEGATE_WRITE delegation, the client is assured that no other + client has read or write access to the file. Delegations can be recalled by the server. If another client requests access to the file in such a way that the access conflicts with the granted delegation, the server is able to notify the initial client and recall the delegation. This requires that a callback path exist between the server and client. If this callback path does not exist, then delegations cannot be granted. The essence of a delegation is that it allows the client to locally service operations such as OPEN, CLOSE, LOCK, LOCKU, READ, or WRITE without immediate interaction with the server. 1.6. General Definitions The following definitions are provided for the purpose of providing an appropriate context for the reader. - Client The "client" is the entity that accesses the NFS server's - resources. The client may be an application which contains the + Byte In this document, a byte is an octet, i.e., a datum exactly 8 + bits in length. + + Client The client is the entity that accesses the NFS server's + resources. The client may be an application that contains the logic to access the NFS server directly. The client may also be - the traditional operating system client remote filesystem services - for a set of applications. + the traditional operating system client that provides remote + filesystem services for a set of applications.
- In the case of file locking the client is the entity that - maintains a set of locks on behalf of one or more applications. - This client is responsible for crash or failure recovery for those - locks it manages. + With reference to byte-range locking, the client is also the + entity that maintains a set of locks on behalf of one or more + applications. This client is responsible for crash or failure + recovery for those locks it manages. Note that multiple clients may share the same transport and - multiple clients may exist on the same network node. + connection and multiple clients may exist on the same network + node. - Clientid A 64-bit quantity used as a unique, short-hand reference to - a client supplied Verifier and ID. The server is responsible for - supplying the Clientid. + Client ID A 64-bit quantity used as a unique, short-hand reference + to a client supplied Verifier and ID. The server is responsible + for supplying the Client ID. + + File System The file system is the collection of objects on a server + that share the same fsid attribute (see Section 5.8.1.9). Lease An interval of time defined by the server for which the client is irrevocably granted a lock. At the end of a lease period the lock may be revoked if the lease has not been extended. The lock must be revoked if a conflicting lock has been granted after the lease interval. All leases granted by a server have the same fixed interval. Note that the fixed interval was chosen to alleviate the expense a server would have in maintaining state about variable length leases across server failures. Lock The term "lock" is used to refer to both record (byte-range) locks as well as share reservations unless specifically stated otherwise. Server The "Server" is the entity responsible for coordinating client access to a set of filesystems. 
- Stable Storage NFS version 4 servers must be able to recover without - data loss from multiple power failures (including cascading power + Stable Storage NFSv4 servers must be able to recover without data + loss from multiple power failures (including cascading power failures, that is, several power failures in quick succession), operating system failures, and hardware failure of components other than the storage medium itself (for example, disk, nonvolatile RAM). Some examples of stable storage that are allowable for an NFS server include: 1. Media commit of data, that is, the modified data has been successfully written to the disk media, for example, the disk @@ -748,26 +771,24 @@ 2. An immediate reply disk drive with battery-backed on-drive intermediate storage or uninterruptible power system (UPS). 3. Server commit of data with battery-backed intermediate storage and recovery software. 4. Cache commit with uninterruptible power system (UPS) and recovery software. - Stateid A 128-bit quantity returned by a server that uniquely - defines the open and locking state provided by the server for a - specific open or lock owner for a specific file. - - Stateids composed of all bits 0 or all bits 1 have special meaning - and are reserved values. + Stateid A stateid is a 128-bit quantity returned by a server that + uniquely defines the open and locking states provided by the + server for a specific open-owner or lock-owner/open-owner pair for + a specific file and type of lock. Verifier A 64-bit quantity generated by the client that the server can use to determine if the client has restarted and lost all previous lock state. 2. Protocol Data Types The syntax and semantics to describe the data types of the NFS version 4 protocol are defined in the XDR [15] and RPC [3] documents. The next sections build upon the XDR data types to define types and @@ -970,21 +989,21 @@ 2.2.10. 
clientaddr4 struct clientaddr4 { /* see struct rpcb in RFC 1833 */ string r_netid<>; /* network id */ string r_addr<>; /* universal address */ }; The clientaddr4 structure is used as part of the SETCLIENTID operation to either specify the address of the client that is using a - clientid or as part of the callback registration. The r_netid and + client ID or as part of the callback registration. The r_netid and r_addr fields are specified in [17], but they are underspecified in [17] as far as what they should look like for specific protocols. For TCP over IPv4 and for UDP over IPv4, the format of r_addr is the US-ASCII string: h1.h2.h3.h4.p1.p2 The prefix, "h1.h2.h3.h4", is the standard textual form for representing an IPv4 address, which is always four octets long. @@ -1080,116 +1100,114 @@ This structure is used for the various state sharing mechanisms between the client and server. For the client, this data structure is read-only. The starting value of the seqid field is undefined. The server is required to increment the seqid field monotonically at each transition of the stateid. This is important since the client will inspect the seqid in OPEN stateids to determine the order of OPEN processing done by the server. 3. RPC and Security Flavor - The NFS version 4 protocol is a Remote Procedure Call (RPC) - application that uses RPC version 2 and the corresponding eXternal - Data Representation (XDR) as defined in [3] and [15]. The RPCSEC_GSS - security flavor as defined in [4] MUST be used as the mechanism to - deliver stronger security for the NFS version 4 protocol. + The NFSv4 protocol is a Remote Procedure Call (RPC) application that + uses RPC version 2 and the corresponding eXternal Data Representation + (XDR) as defined in [3] and [15]. The RPCSEC_GSS security flavor as + defined in [4] MUST be used as the mechanism to deliver stronger + security for the NFSv4 protocol. 3.1. 
Ports and Transports - Historically, NFS version 2 and version 3 servers have resided on - port 2049. The registered port 2049 [19] for the NFS protocol should - be the default configuration. Using the registered port for NFS - services means the NFS client will not need to use the RPC binding - protocols as described in [17]; this will allow NFS to transit - firewalls. + Historically, NFSv2 and NFSv3 servers have resided on port 2049. The + registered port 2049 [19] for the NFS protocol SHOULD be the default + configuration. Using the registered port for NFS services means the + NFS client will not need to use the RPC binding protocols as + described in [17]; this will allow NFS to transit firewalls. - Where an NFS version 4 implementation supports operation over the IP - network protocol, the supported transports between NFS and IP MUST be - among the IETF-approved congestion control transport protocols, which + Where an NFSv4 implementation supports operation over the IP network + protocol, the supported transports between NFS and IP MUST be among + the IETF-approved congestion control transport protocols, which include TCP and SCTP. To enhance the possibilities for - interoperability, an NFS version 4 implementation MUST support - operation over the TCP transport protocol, at least until such time - as a standards track RFC revises this requirement to use a different - IETF-approved congestion control transport protocol. + interoperability, an NFSv4 implementation MUST support operation over + the TCP transport protocol, at least until such time as a standards + track RFC revises this requirement to use a different IETF-approved + congestion control transport protocol. If TCP is used as the transport, the client and server SHOULD use persistent connections. This will prevent the weakening of TCP's congestion control via short lived connections and will improve performance for the WAN environment by eliminating the need for SYN handshakes. 
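The argument above for persistent connections — amortizing the TCP SYN handshake over many RPCs instead of paying it per request — can be illustrated with a toy accounting model. This is purely a sketch for intuition; no real sockets, RPC framing, or NFS traffic are involved, and the class and function names are invented.

```python
# Toy model of connection reuse: a persistent connection performs one
# TCP handshake for N requests, while short-lived connections perform
# one handshake per request.  Illustrative accounting only.

class Transport:
    def __init__(self):
        self.handshakes = 0
        self.connected = False

    def connect(self):
        self.handshakes += 1      # stands in for a SYN handshake
        self.connected = True

    def call(self, persistent):
        if not self.connected:
            self.connect()
        if not persistent:        # short-lived: tear down after the reply
            self.connected = False

def handshakes_for(num_requests, persistent):
    t = Transport()
    for _ in range(num_requests):
        t.call(persistent)
    return t.handshakes

print(handshakes_for(100, persistent=True))   # 1
print(handshakes_for(100, persistent=False))  # 100
```

Beyond the handshake count, the text's stronger point is that short-lived connections defeat TCP's congestion control, which adapts per connection; that effect is not modeled here.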
- As noted in Section 17, the authentication model for NFS version 4 - has moved from machine-based to principal-based. However, this - modification of the authentication model does not imply a technical - requirement to move the TCP connection management model from whole - machine-based to one based on a per user model. In particular, NFS - over TCP client implementations have traditionally multiplexed - traffic for multiple users over a common TCP connection between an - NFS client and server. This has been true, regardless whether the - NFS client is using AUTH_SYS, AUTH_DH, RPCSEC_GSS or any other - flavor. Similarly, NFS over TCP server implementations have assumed - such a model and thus scale the implementation of TCP connection - management in proportion to the number of expected client machines. - - It is intended that NFS version 4 will not modify this connection - management model. NFS version 4 clients that violate this assumption - can expect scaling issues on the server and hence reduced service. + As noted in Section 17, the authentication model for NFSv4 has moved + from machine-based to principal-based. However, this modification of + the authentication model does not imply a technical requirement to + move the TCP connection management model from whole machine-based to + one based on a per user model. In particular, NFS over TCP client + implementations have traditionally multiplexed traffic for multiple + users over a common TCP connection between an NFS client and server. + This has been true, regardless whether the NFS client is using + AUTH_SYS, AUTH_DH, RPCSEC_GSS or any other flavor. Similarly, NFS + over TCP server implementations have assumed such a model and thus + scale the implementation of TCP connection management in proportion + to the number of expected client machines. It is intended that NFSv4 + will not modify this connection management model. 
NFSv4 clients that + violate this assumption can expect scaling issues on the server and + hence reduced service. Note that for various timers, the client and server should avoid inadvertent synchronization of those timers. For further discussion of the general issue refer to [20]. 3.1.1. Client Retransmission Behavior When processing a request received over a reliable transport such as - TCP, the NFS version 4 server MUST NOT silently drop the request, - except if the transport connection has been broken. Given such a - contract between NFS version 4 clients and servers, clients MUST NOT - retry a request unless one or both of the following are true: + TCP, the NFSv4 server MUST NOT silently drop the request, except if + the transport connection has been broken. Given such a contract + between NFSv4 clients and servers, clients MUST NOT retry a request + unless one or both of the following are true: o The transport connection has been broken o The procedure being retried is the NULL procedure Since reliable transports, such as TCP, do not always synchronously inform a peer when the other peer has broken the connection (for - example, when an NFS server reboots), the NFS version 4 client may - want to actively "probe" the connection to see if has been broken. - Use of the NULL procedure is one recommended way to do so. So, when - a client experiences a remote procedure call timeout (of some - arbitrary implementation specific amount), rather than retrying the - remote procedure call, it could instead issue a NULL procedure call - to the server. If the server has died, the transport connection - break will eventually be indicated to the NFS version 4 client. The - client can then reconnect, and then retry the original request. If - the NULL procedure call gets a response, the connection has not - broken. 
The client can decide to wait longer for the original - request's response, or it can break the transport connection and - reconnect before re-sending the original request. + example, when an NFS server reboots), the NFSv4 client may want to + actively "probe" the connection to see if it has been broken. Use of + the NULL procedure is one recommended way to do so. So, when a + client experiences a remote procedure call timeout (of some arbitrary + implementation specific amount), rather than retrying the remote + procedure call, it could instead issue a NULL procedure call to the + server. If the server has died, the transport connection break will + eventually be indicated to the NFSv4 client. The client can then + reconnect, and then retry the original request. If the NULL + procedure call gets a response, the connection has not broken. The + client can decide to wait longer for the original request's response, + or it can break the transport connection and reconnect before re- + sending the original request. For callbacks from the server to the client, the same rules apply, but the server doing the callback becomes the client, and the client receiving the callback becomes the server. 3.2. Security Flavors Traditional RPC implementations have included AUTH_NONE, AUTH_SYS, AUTH_DH, and AUTH_KRB4 as security flavors. With [4] an additional security flavor of RPCSEC_GSS has been introduced which uses the functionality of GSS-API [6]. This allows for the use of various security mechanisms by the RPC layer without the additional - implementation overhead of adding RPC security flavors. For NFS - version 4, the RPCSEC_GSS security flavor MUST be used to enable the - mandatory security mechanism. Other flavors, such as, AUTH_NONE, - AUTH_SYS, and AUTH_DH MAY be implemented as well. + implementation overhead of adding RPC security flavors. For NFSv4, + the RPCSEC_GSS security flavor MUST be used to enable the mandatory + security mechanism.
Other flavors, such as, AUTH_NONE, AUTH_SYS, and + AUTH_DH MAY be implemented as well. -3.2.1. Security mechanisms for NFS version 4 +3.2.1. Security mechanisms for NFSv4 The use of RPCSEC_GSS requires selection of: mechanism, quality of protection, and service (authentication, integrity, privacy). The remainder of this document will refer to these three parameters of the RPCSEC_GSS security as the security triple. 3.2.1.1. Kerberos V5 as a security triple The Kerberos V5 GSS-API mechanism as described in [16] MUST be implemented and provide the following security triples. @@ -1207,23 +1225,22 @@ 390003 krb5 1.2.840.113554.1.2.2 DES MAC MD5 rpc_gss_svc_none 390004 krb5i 1.2.840.113554.1.2.2 DES MAC MD5 rpc_gss_svc_integrity 390005 krb5p 1.2.840.113554.1.2.2 DES MAC MD5 rpc_gss_svc_privacy for integrity, and 56 bit DES for privacy. Note that the pseudo flavor is presented here as a mapping aid to the implementor. Because this NFS protocol includes a method to negotiate security and it understands the GSS-API mechanism, the - pseudo flavor is not needed. The pseudo flavor is needed for NFS - version 3 since the security negotiation is done via the MOUNT - protocol. + pseudo flavor is not needed. The pseudo flavor is needed for NFSv3 + since the security negotiation is done via the MOUNT protocol. For a discussion of NFS' use of RPCSEC_GSS and Kerberos V5, please see [21]. Users and implementors are warned that 56 bit DES is no longer considered state of the art in terms of resistance to brute force attacks. Once a revision to [16] is available that adds support for AES, implementors are urged to incorporate AES into their NFSv4 over Kerberos V5 protocol stacks, and users are similarly urged to migrate to the use of AES. @@ -1284,65 +1301,65 @@ Even though LIPKEY is layered over SPKM-3, SPKM-3 is specified as a mandatory set of triples to handle the situations where the initiator (the client) is anonymous or where the initiator has its own certificate. 
If the initiator is anonymous, there will not be a user name and password to send to the target (the server). If the initiator has its own certificate, then using passwords is superfluous. 3.3. Security Negotiation - With the NFS version 4 server potentially offering multiple security + With the NFSv4 server potentially offering multiple security mechanisms, the client needs a method to determine or negotiate which mechanism is to be used for its communication with the server. The NFS server may have multiple points within its filesystem name space that are available for use by NFS clients. In turn the NFS server may be configured such that each of these entry points may have different or multiple security mechanisms in use. - The security negotiation between client and server must be done with - a secure channel to eliminate the possibility of a third party + The security negotiation between client and server SHOULD be done + with a secure channel to eliminate the possibility of a third party intercepting the negotiation sequence and forcing the client and server to choose a lower level of security than required or desired. See Section 17 for further discussion. 3.3.1. SECINFO The new SECINFO operation will allow the client to determine, on a per filehandle basis, what security triple is to be used for server access. In general, the client will not have to use the SECINFO operation except during initial communication with the server or when the client crosses policy boundaries at the server. It is possible that the server's policies change during the client's interaction therefore forcing the client to negotiate a new security triple. 3.3.2. 
Security Error - Based on the assumption that each NFS version 4 client and server - must support a minimum set of security (i.e., LIPKEY, SPKM-3, and + Based on the assumption that each NFSv4 client and server MUST + support a minimum set of security (i.e., LIPKEY, SPKM-3, and Kerberos-V5 all under RPCSEC_GSS), the NFS client will start its communication with the server with one of the minimal security triples. During communication with the server, the client may receive an NFS error of NFS4ERR_WRONGSEC. This error allows the server to notify the client that the security triple currently being used is not appropriate for access to the server's filesystem resources. The client is then responsible for determining what security triples are available at the server and choosing one that is appropriate for the client. See Section 15.33 for further discussion of how the client will respond to the NFS4ERR_WRONGSEC error and use SECINFO. 3.3.3. Callback RPC Authentication Except as noted elsewhere in this section, the callback RPC (described later) MUST mutually authenticate the NFS server to the - principal that acquired the clientid (also described later), using + principal that acquired the client ID (also described later), using the security flavor the original SETCLIENTID operation used. For AUTH_NONE, there are no principals, so this is a non-issue. AUTH_SYS has no notions of mutual authentication or a server principal, so the callback from the server simply uses the AUTH_SYS credential that the user used when he set up the delegation. For AUTH_DH, one commonly used convention is that the server uses the credential corresponding to this AUTH_DH principal:
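The SECINFO/NFS4ERR_WRONGSEC interaction described in Section 3.3.2 above can be sketched as a simple renegotiation loop. The dictionary of per-filehandle policy and the pseudo-flavor names below are invented stand-ins; `secinfo` here stands in for the real SECINFO operation.

```python
# Toy model of NFS4ERR_WRONGSEC recovery: when the current triple is
# rejected, consult SECINFO for the filehandle and pick a triple both
# sides support. All names and policies here are illustrative.

def secinfo(server_policy, fh):
    """Server's acceptable triples for a filehandle, preferred first."""
    return server_policy[fh]

def negotiate(server_policy, fh, current, client_triples):
    if current in server_policy[fh]:
        return current                    # no error; keep using it
    # Server would return NFS4ERR_WRONGSEC: renegotiate via SECINFO.
    for triple in secinfo(server_policy, fh):
        if triple in client_triples:
            return triple
    raise RuntimeError("no mutually supported security triple")

policy = {"/export/secure": ["krb5p", "krb5i"]}
assert negotiate(policy, "/export/secure", "krb5i", {"krb5i"}) == "krb5i"
assert negotiate(policy, "/export/secure", "sys", {"krb5p", "sys"}) == "krb5p"
```

The loop also illustrates why the negotiation should happen over a secure channel: an attacker who can rewrite the SECINFO reply could steer the client toward the weakest common triple.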
Obtaining the First Filehandle The operations of the NFS protocol are defined in terms of one or more filehandles. Therefore, the client needs a filehandle to - initiate communication with the server. With the NFS version 2 - protocol [13] and the NFS version 3 protocol [14], there exists an - ancillary protocol to obtain this first filehandle. The MOUNT - protocol, RPC program number 100005, provides the mechanism of - translating a string based filesystem path name to a filehandle which - can then be used by the NFS protocols. + initiate communication with the server. With the NFSv2 protocol [13] + and the NFSv3 protocol [14], there exists an ancillary protocol to + obtain this first filehandle. The MOUNT protocol, RPC program number + 100005, provides the mechanism of translating a string based + filesystem path name to a filehandle which can then be used by the + NFS protocols. The MOUNT protocol has deficiencies in the area of security and use via firewalls. This is one reason that the use of the public filehandle was introduced in [23] and [24]. With the use of the - public filehandle in combination with the LOOKUP operation in the NFS - version 2 and 3 protocols, it has been demonstrated that the MOUNT + public filehandle in combination with the LOOKUP operation in the + NFSv2 and NFSv3 protocols, it has been demonstrated that the MOUNT protocol is unnecessary for viable interaction between NFS client and server. - Therefore, the NFS version 4 protocol will not use an ancillary - protocol for translation from string based path names to a - filehandle. Two special filehandles will be used as starting points - for the NFS client. + Therefore, the NFSv4 protocol will not use an ancillary protocol for + translation from string based path names to a filehandle. Two + special filehandles will be used as starting points for the NFS + client. 4.1.1. Root Filehandle The first of the special filehandles is the ROOT filehandle. 
The ROOT filehandle is the "conceptual" root of the filesystem name space at the NFS server. The client uses or starts with the ROOT filehandle by employing the PUTROOTFH operation. The PUTROOTFH operation instructs the server to set the "current" filehandle to the ROOT of the server's file tree. Once this PUTROOTFH operation is used, the client can then traverse the entirety of the server's file @@ -1468,26 +1486,26 @@ for this binding. It may be that the PUBLIC filehandle and the ROOT filehandle refer to the same filesystem object. However, it is up to the administrative software at the server and the policies of the server administrator to define the binding of the PUBLIC filehandle and server filesystem object. The client may not make any assumptions about this binding. The client uses the PUBLIC filehandle via the PUTPUBFH operation. 4.2. Filehandle Types - In the NFS version 2 and 3 protocols, there was one type of - filehandle with a single set of semantics. This type of filehandle - is termed "persistent" in NFS Version 4. The semantics of a - persistent filehandle remain the same as before. A new type of - filehandle introduced in NFS Version 4 is the "volatile" filehandle, - which attempts to accommodate certain server environments. + In the NFSv2 and NFSv3 protocols, there was one type of filehandle + with a single set of semantics. This type of filehandle is termed + "persistent" in NFS Version 4. The semantics of a persistent + filehandle remain the same as before. A new type of filehandle + introduced in NFS Version 4 is the "volatile" filehandle, which + attempts to accommodate certain server environments. The volatile filehandle type was introduced to address server functionality or implementation issues which make correct implementation of a persistent filehandle infeasible. Some server environments do not provide a filesystem level invariant that can be used to construct a persistent filehandle. 
The underlying server filesystem may not provide the invariant or the server's filesystem programming interfaces may not provide access to the needed invariant. Volatile filehandles may ease the implementation of server functionality such as hierarchical storage management or @@ -2128,21 +2146,21 @@ 5.8.2.9. Attribute 23: files_total Total file slots on the file system containing this object. 5.8.2.10. Attribute 24: fs_locations Locations where this file system may be found. If the server returns NFS4ERR_MOVED as an error, this attribute MUST be supported. The server can specify a root path by setting an array of zero path - compenents. Other than this special case, the server MUST not + components. Other than this special case, the server MUST NOT present empty path components to the client. 5.8.2.11. Attribute 25: hidden TRUE, if the file is considered hidden with respect to the Windows API. 5.8.2.12. Attribute 26: homogeneous TRUE, if this object's file system is homogeneous, i.e., all objects @@ -3907,23 +3925,23 @@ [31], an additional attribute "fs_locations_info" is presented, which will define the specific choices that can be made, how these choices are communicated to the client, and how the client is to deal with any discontinuities. In the sections below, references will be made to various possible server implementation choices as a way of illustrating the transition scenarios that clients may deal with. The intent here is not to define or limit server implementations but rather to illustrate the range of issues that clients may face.
Again, as the NFSv4.0 - protocol does not have an explict means of communicating these issues - to the client, the intent is to document the problems that can be - faced in a multi-server name space and allow the client to use the + protocol does not have an explicit means of communicating these + issues to the client, the intent is to document the problems that can + be faced in a multi-server name space and allow the client to use the inferred transitions available via fs_locations and other attributes (see Section 7.9.1). In the discussion below, references will be made to a file system having a particular property or to two file systems (typically the source and destination) belonging to a common class of any of several types. Two file systems that belong to such a class share some important aspects of file system behavior that clients may depend upon when present, to easily effect a seamless transition between file system instances. Conversely, where the file systems do not @@ -3947,21 +3965,21 @@ 7.7.1. File System Transitions and Simultaneous Access When a single file system may be accessed at multiple locations, either because of an indication of file system identity as reported by the fs_locations attribute, the client will, depending on specific circumstances as discussed below, either: o Access multiple instances simultaneously, each of which represents an alternate path to the same data and metadata. - o Acesses one instance (or set of instances) and then transition to + o Access one instance (or set of instances) and then transition to an alternative instance (or set of instances) as a result of network issues, server unresponsiveness, or server-directed migration. 7.7.2. Filehandles and File System Transitions There are a number of ways in which filehandles can be handled across a file system transition.
These can be divided into two broad classes depending upon whether the two file systems across which the transition happens share sufficient state to effect some sort of @@ -4729,74 +4747,73 @@ portions of the name space are made available via an "export" feature. In previous versions of the NFS protocol, the root filehandle for each export is obtained through the MOUNT protocol; the client sends a string that identifies the export of name space and the server returns the root filehandle for it. The MOUNT protocol supports an EXPORTS procedure that will enumerate the server's exports. 8.2. Browsing Exports - The NFS version 4 protocol provides a root filehandle that clients - can use to obtain filehandles for these exports via a multi-component - LOOKUP. A common user experience is to use a graphical user - interface (perhaps a file "Open" dialog window) to find a file via - progressive browsing through a directory tree. The client must be - able to move from one export to another export via single-component, - progressive LOOKUP operations. + The NFSv4 protocol provides a root filehandle that clients can use to + obtain filehandles for these exports via a multi-component LOOKUP. A + common user experience is to use a graphical user interface (perhaps + a file "Open" dialog window) to find a file via progressive browsing + through a directory tree. The client must be able to move from one + export to another export via single-component, progressive LOOKUP + operations. - This style of browsing is not well supported by the NFS version 2 and - 3 protocols. The client expects all LOOKUP operations to remain - within a single server filesystem. For example, the device attribute - will not change. This prevents a client from taking name space paths - that span exports. + This style of browsing is not well supported by the NFSv2 and NFSv3 + protocols. The client expects all LOOKUP operations to remain within + a single server filesystem. 
For example, the device attribute will + not change. This prevents a client from taking name space paths that + span exports. An automounter on the client can obtain a snapshot of the server's name space using the EXPORTS procedure of the MOUNT protocol. If it understands the server's pathname syntax, it can create an image of the server's name space on the client. The parts of the name space that are not exported by the server are filled in with a "pseudo filesystem" that allows the user to browse from one mounted filesystem to another. There is a drawback to this representation of the server's name space on the client: it is static. If the server administrator adds a new export the client will be unaware of it. 8.3. Server Pseudo Filesystem - NFS version 4 servers avoid this name space inconsistency by - presenting all the exports within the framework of a single server - name space. An NFS version 4 client uses LOOKUP and READDIR - operations to browse seamlessly from one export to another. Portions - of the server name space that are not exported are bridged via a - "pseudo filesystem" that provides a view of exported directories - only. A pseudo filesystem has a unique fsid and behaves like a - normal, read only filesystem. + NFSv4 servers avoid this name space inconsistency by presenting all + the exports within the framework of a single server name space. An + NFSv4 client uses LOOKUP and READDIR operations to browse seamlessly + from one export to another. Portions of the server name space that + are not exported are bridged via a "pseudo filesystem" that provides + a view of exported directories only. A pseudo filesystem has a + unique fsid and behaves like a normal, read only filesystem. Based on the construction of the server's name space, it is possible that multiple pseudo filesystems may exist. 
For example, /a pseudo filesystem /a/b real filesystem /a/b/c pseudo filesystem /a/b/c/d real filesystem Each of the pseudo filesystems are considered separate entities and therefore will have a unique fsid. 8.4. Multiple Roots The DOS and Windows operating environments are sometimes described as having "multiple roots". Filesystems are commonly represented as - disk letters. MacOS represents filesystems as top level names. NFS - version 4 servers for these platforms can construct a pseudo file - system above these root names so that disk letters or volume names - are simply directory names in the pseudo root. + disk letters. MacOS represents filesystems as top level names. + NFSv4 servers for these platforms can construct a pseudo file system + above these root names so that disk letters or volume names are + simply directory names in the pseudo root. 8.5. Filehandle Volatility The nature of the server's pseudo filesystem is that it is a logical representation of filesystem(s) available from the server. Therefore, the pseudo filesystem is most likely constructed dynamically when the server is first instantiated. It is expected that the pseudo filesystem may not have an on disk counterpart from which persistent filehandles could be constructed. Even though it is preferable that the server provide persistent filehandles for the @@ -4877,67 +4894,80 @@ For the case of the use of multiple, disjoint security mechanisms in the server's resources, the security for a particular object in the server's namespace should be the union of all security mechanisms of all direct descendants. 9. File Locking and Share Reservations Integrating locking into the NFS protocol necessarily causes it to be stateful. With the inclusion of share reservations the protocol becomes substantially more dependent on state than the traditional - combination of NFS and NLM [32]. There are three components to - making this state manageable: + combination of NFS and NLM (Network Lock Manager) [32]. 
There are + three components to making this state manageable: - o Clear division between client and server + o clear division between client and server - o Ability to reliably detect inconsistency in state between client + o ability to reliably detect inconsistency in state between client and server - o Simple and robust recovery mechanisms + o simple and robust recovery mechanisms In this model, the server owns the state information. The client - communicates its view of this state to the server as needed. The - client is also able to detect inconsistent state before modifying a - file. + requests changes in locks and the server responds with the changes + made. Non-client-initiated changes in locking state are infrequent. + The client receives prompt notification of such changes and can + adjust its view of the locking state to reflect the server's changes. + + Individual pieces of state created by the server and passed to the + client at its request are represented by 128-bit stateids. These + stateids may represent a particular open file, a set of byte-range + locks held by a particular owner, or a recallable delegation of + privileges to access a file in particular ways or at a particular + location. + + In all cases, there is a transition from the most general information + that represents a client as a whole to the eventual lightweight + stateid used for most client and server locking interactions. The + details of this transition will vary with the type of object but it + always starts with a client ID. To support Win32 share reservations it is necessary to atomically OPEN or CREATE files. Having a separate share/unshare operation would not allow correct implementation of the Win32 OpenFile API. In order to correctly implement share semantics, the previous NFS protocol mechanisms used when a file is opened or created (LOOKUP, - CREATE, ACCESS) need to be replaced. 
The NFS version 4 protocol has - an OPEN operation that subsumes the NFS version 3 methodology of - LOOKUP, CREATE, and ACCESS. However, because many operations require - a filehandle, the traditional LOOKUP is preserved to map a file name - to filehandle without establishing state on the server. The policy - of granting access or modifying files is managed by the server based - on the client's state. These mechanisms can implement policy ranging - from advisory only locking to full mandatory locking. + CREATE, ACCESS) need to be replaced. The NFSv4 protocol has an OPEN + operation that subsumes the NFSv3 methodology of LOOKUP, CREATE, and + ACCESS. However, because many operations require a filehandle, the + traditional LOOKUP is preserved to map a file name to filehandle + without establishing state on the server. The policy of granting + access or modifying files is managed by the server based on the + client's state. These mechanisms can implement policy ranging from + advisory only locking to full mandatory locking. -9.1. Locking +9.1. Opens and Byte-Range Locks - It is assumed that manipulating a lock is rare when compared to READ - and WRITE operations. It is also assumed that crashes and network - partitions are relatively rare. Therefore it is important that the - READ and WRITE operations have a lightweight mechanism to indicate if - they possess a held lock. A lock request contains the heavyweight - information required to establish a lock and uniquely define the lock - owner. + It is assumed that manipulating a byte-range lock is rare when + compared to READ and WRITE operations. It is also assumed that + server restarts and network partitions are relatively rare. + Therefore it is important that the READ and WRITE operations have a + lightweight mechanism to indicate if they possess a held lock. A + byte-range lock request contains the heavyweight information required + to establish a lock and uniquely define the owner of the lock. 
The following sections describe the transition from the heavy weight information to the eventual stateid used for most client and server locking and lease interactions. 9.1.1. Client ID For each LOCK request, the client must identify itself to the server. - This is done in such a way as to allow for correct lock identification and crash recovery. A sequence of a SETCLIENTID operation followed by a SETCLIENTID_CONFIRM operation is required to establish the identification onto the server. Establishment of identification by a new incarnation of the client also has the effect of immediately breaking any leased state that a previous incarnation of the client might have had on the server, as opposed to forcing the new client incarnation to wait for the leases to expire. Breaking the lease state amounts to the server removing all lock, share reservation, and, where the server is not supporting the @@ -4938,20 +4968,35 @@ identification by a new incarnation of the client also has the effect of immediately breaking any leased state that a previous incarnation of the client might have had on the server, as opposed to forcing the new client incarnation to wait for the leases to expire. Breaking the lease state amounts to the server removing all lock, share reservation, and, where the server is not supporting the CLAIM_DELEGATE_PREV claim type, all delegation state associated with same client with the same identity. For discussion of delegation state recovery, see Section 10.2.1. + Owners of opens and owners of byte-range locks are separate entities + and remain separate even if the same opaque arrays are used to + designate owners of each. The protocol distinguishes between open- + owners (represented by open_owner4 structures) and lock-owners + (represented by lock_owner4 structures). 
+ + Each open is associated with a specific open-owner while each byte-range lock is associated with a lock-owner and an open-owner, the latter being the open-owner associated with the open file under which the LOCK operation was done. + + Unlike the text in NFSv4.1 [31], this text treats "lock_owner" as + meaning both an open_owner4 and a lock_owner4. Also, a "lock" can + refer to both a byte-range lock and a share lock. + Client identification is encapsulated in the following structure: struct nfs_client_id4 { verifier4 verifier; opaque id; }; The first field, verifier is a client incarnation verifier that is used to detect client reboots. Only if the verifier is different from that which the server has previously recorded the client (as
@@ -4998,247 +5043,557 @@ algorithm for generating the id string, will generate a conflicting id string. Given the above considerations, an example of a well generated id string is one that includes: o The server's network address. o The client's network address. - o For a user level NFS version 4 client, it should contain - additional information to distinguish the client from other user - level clients running on the same host, such as an universally - unique identifier (UUID). + o For a user level NFSv4 client, it should contain additional + information to distinguish the client from other user level + clients running on the same host, such as a universally unique + identifier (UUID). o Additional information that tends to be unique, such as one or more of: * The client machine's serial number (for privacy reasons, it is best to perform some one way function on the serial number). * A MAC address. - * The timestamp of when the NFS version 4 software was first - installed on the client (though this is subject to the - previously mentioned caution about using information that is - stored in a file, because the file might only be accessible - over NFS version 4). + * The timestamp of when the NFSv4 software was first installed on + the client (though this is subject to the previously mentioned + caution about using information that is stored in a file, + because the file might only be accessible over NFSv4). * A true random number. However, since this number ought to be the same between client incarnations, this shares the same problem as that of using the timestamp of the software installation. As a security measure, the server MUST NOT cancel a client's leased - state if the principal established the state for a given id string is - not the same as the principal issuing the SETCLIENTID. + state if the principal that established the state for a given id + string is not the same as the principal issuing the SETCLIENTID.
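A minimal sketch of an id string generator with the properties listed above, assuming the caller supplies a stable per-instance UUID (how that UUID is persisted across client restarts is outside the sketch). The field layout and the use of SHA-256 as the one-way function over the serial number are illustrative choices, not requirements of the protocol.

```python
# Illustrative id string construction: per-server-address uniqueness,
# stability across restarts, and a hashed (not raw) serial number.
import hashlib

def make_id_string(server_addr, client_addr, serial_number, instance_uuid):
    """instance_uuid distinguishes user-level clients on one host and
    must be stable across that client's incarnations."""
    # one-way function over the serial number, per the privacy caution
    serial_digest = hashlib.sha256(serial_number.encode()).hexdigest()[:16]
    return "/".join([server_addr, client_addr, serial_digest, instance_uuid])

s1 = make_id_string("10.0.0.1", "10.0.0.2", "SN-1234", "uuid-aaaa")
s2 = make_id_string("10.0.0.9", "10.0.0.2", "SN-1234", "uuid-aaaa")
assert s1 != s2        # differs per server network address
assert s1 == make_id_string("10.0.0.1", "10.0.0.2", "SN-1234", "uuid-aaaa")
assert "SN-1234" not in s1   # raw serial number never appears
```

Determinism for fixed inputs is what makes the string safe across reboots; only the inputs that are supposed to vary (server address, instance UUID) change the result.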
Note that SETCLIENTID and SETCLIENTID_CONFIRM have a secondary purpose of establishing the information the server needs to make callbacks to the client for the purpose of supporting delegations. It is permitted to change this information via SETCLIENTID and SETCLIENTID_CONFIRM within the same incarnation of the client without removing the client's leased state. Once a SETCLIENTID and SETCLIENTID_CONFIRM sequence has successfully completed, the client uses the shorthand client identifier, of type clientid4, instead of the longer and less compact nfs_client_id4 - structure. This shorthand client identifier (a clientid) is assigned - by the server and should be chosen so that it will not conflict with - a clientid previously assigned by the server. This applies across - server restarts or reboots. When a clientid is presented to a server - and that clientid is not recognized, as would happen after a server - reboot, the server will reject the request with the error - NFS4ERR_STALE_CLIENTID. When this happens, the client must obtain a - new clientid by use of the SETCLIENTID operation and then proceed to - any other necessary recovery for the server reboot case (See - Section 9.6.2). + structure. This shorthand client identifier (a client ID) is + assigned by the server and should be chosen so that it will not + conflict with a client ID previously assigned by the server. This + applies across server restarts or reboots. When a client ID is + presented to a server and that client ID is not recognized, as would + happen after a server reboot, the server will reject the request with + the error NFS4ERR_STALE_CLIENTID. When this happens, the client must + obtain a new client ID by use of the SETCLIENTID operation and then + proceed to any other necessary recovery for the server reboot case + (See Section 9.6.2).
The client must also employ the SETCLIENTID operation when it receives an NFS4ERR_STALE_STATEID error using a stateid derived from - its current clientid, since this also indicates a server reboot which - has invalidated the existing clientid (see Section 9.1.3 for + its current client ID, since this also indicates a server reboot + which has invalidated the existing client ID (see Section 9.1.4 for details). See the detailed descriptions of SETCLIENTID and SETCLIENTID_CONFIRM for a complete specification of the operations. -9.1.2. Server Release of Clientid +9.1.2. Server Release of Client ID If the server determines that the client holds no associated state - for its clientid, the server may choose to release the clientid. The - server may make this choice for an inactive client so that resources - are not consumed by those intermittently active clients. If the - client contacts the server after this release, the server must ensure - the client receives the appropriate error so that it will use the - SETCLIENTID/SETCLIENTID_CONFIRM sequence to establish a new identity. - It should be clear that the server must be very hesitant to release a - clientid since the resulting work on the client to recover from such - an event will be the same burden as if the server had failed and - restarted. Typically a server would not release a clientid unless - there had been no activity from that client for many minutes. + for its client ID, the server may choose to release the client ID. + The server may make this choice for an inactive client so that + resources are not consumed by those intermittently active clients. + If the client contacts the server after this release, the server must + ensure the client receives the appropriate error so that it will use + the SETCLIENTID/SETCLIENTID_CONFIRM sequence to establish a new + identity.
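The reboot-recovery path described above (NFS4ERR_STALE_CLIENTID, then SETCLIENTID and SETCLIENTID_CONFIRM, then a retry) can be sketched with an in-memory stand-in for the server. The class and method names are invented for illustration; the error and operation names follow the spec.

```python
# Toy walk-through of client ID recovery after a server reboot.
# ToyServer is an invented stand-in, not a real NFSv4 implementation.

class ToyServer:
    def __init__(self):
        self.next_id = 100
        self.confirmed = set()

    def setclientid(self, id_string):
        self.next_id += 1
        return self.next_id, "confirm-cookie"   # (client ID, verifier)

    def setclientid_confirm(self, clientid, cookie):
        self.confirmed.add(clientid)

    def op(self, clientid):
        if clientid not in self.confirmed:
            return "NFS4ERR_STALE_CLIENTID"
        return "NFS4_OK"

def run_with_recovery(server, clientid, id_string):
    status = server.op(clientid)
    if status == "NFS4ERR_STALE_CLIENTID":
        # server rebooted: obtain a new client ID, confirm it, retry
        clientid, cookie = server.setclientid(id_string)
        server.setclientid_confirm(clientid, cookie)
        status = server.op(clientid)
    return clientid, status

srv = ToyServer()
cid, status = run_with_recovery(srv, 42, "client-A")   # 42 is stale
assert status == "NFS4_OK" and cid == 101
```

A real client would also perform lock reclaim (Section 9.6.2) after re-establishing its identity; the sketch covers only the identity step.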
It should be clear that the server must be very hesitant + to release a client ID since the resulting work on the client to + recover from such an event will be the same burden as if the server + had failed and restarted. Typically a server would not release a + client ID unless there had been no activity from that client for many + minutes. Note that if the id string in a SETCLIENTID request is properly constructed, and if the client takes care to use the same principal for each successive use of SETCLIENTID, then, barring an active denial of service attack, NFS4ERR_CLID_INUSE should never be returned. However, client bugs, server bugs, or perhaps a deliberate change of the principal owner of the id string (such as the case of a client that changes security flavors, and under the new flavor, there is no mapping to the previous owner) will in rare cases result in NFS4ERR_CLID_INUSE. - In that event, when the server gets a SETCLIENTID for a client id + In that event, when the server gets a SETCLIENTID for a client ID that currently has no state, or it has state, but the lease has expired, rather than returning NFS4ERR_CLID_INUSE, the server MUST - allow the SETCLIENTID, and confirm the new clientid if followed by + allow the SETCLIENTID, and confirm the new client ID if followed by the appropriate SETCLIENTID_CONFIRM. -9.1.3. lock_owner and stateid Definition +9.1.3. Stateid Definition - When requesting a lock, the client must present to the server the - clientid and an identifier for the owner of the requested lock. - These two fields are referred to as the lock_owner and the definition - of those fields are: + When the server grants a lock of any type (including opens, byte- + range locks, and delegations), it responds with a unique stateid that + represents a set of locks (often a single lock) for the same file, of + the same type, and sharing the same ownership characteristics. 
Thus, + opens of the same file by different open-owners each have an + identifying stateid. Similarly, each set of byte-range locks on a + file owned by a specific lock-owner has its own identifying stateid. + Delegations also have associated stateids by which they may be + referenced. The stateid is used as a shorthand reference to a lock + or set of locks, and given a stateid, the server can determine the + associated state-owner or state-owners (in the case of an open-owner/ + lock-owner pair) and the associated filehandle. When stateids are + used, the current filehandle must be the one associated with that + stateid. - o A clientid returned by the server as part of the client's use of - the SETCLIENTID operation. + All stateids associated with a given client ID are associated with a + common lease that represents the claim of those stateids and the + objects they represent to be maintained by the server. See + Section 9.5 for a discussion of the lease. - o A variable length opaque array used to uniquely define the owner - of a lock managed by the client. + The server may assign stateids independently for different clients. - This may be a thread id, process id, or other unique value. + A stateid with the same bit pattern for one client may designate an + entirely different set of locks for a different client. The stateid + is always interpreted with respect to the client ID associated with + the current session. - When the server grants the lock, it responds with a unique stateid. - The stateid is used as a shorthand reference to the lock_owner, since - the server will be maintaining the correspondence between them. +9.1.3.1. Stateid Types - The server is free to form the stateid in any manner that it chooses - as long as it is able to recognize invalid and out-of-date stateids. - This requirement includes those stateids generated by earlier - instances of the server. From this, the client can be properly - notified of a server restart.
This notification will occur when the - client presents a stateid to the server from a previous - instantiation. + With the exception of special stateids (see Section 9.1.3.3), each + stateid represents locking objects of one of a set of types defined + by the NFSv4 protocol. Note that in all these cases, where we speak + of guarantee, it is understood there are situations such as a client + restart, or lock revocation, that allow the guarantee to be voided. - The server must be able to distinguish the following situations and - return the error as specified: + o Stateids may represent opens of files. - o The stateid was generated by an earlier server instance (i.e., - before a server reboot). The error NFS4ERR_STALE_STATEID should - be returned. + Each stateid in this case represents the OPEN state for a given + client ID/open-owner/filehandle triple. Such stateids are subject + to change (with consequent incrementing of the stateid's seqid) in + response to OPENs that result in upgrade and OPEN_DOWNGRADE + operations. - o The stateid was generated by the current server instance but the - stateid no longer designates the current locking state for the - lockowner-file pair in question (i.e., one or more locking - operations has occurred). The error NFS4ERR_OLD_STATEID should be - returned. + o Stateids may represent sets of byte-range locks. - This error condition will only occur when the client issues a - locking request which changes a stateid while an I/O request that - uses that stateid is outstanding. + All locks held on a particular file by a particular owner and all + gotten under the aegis of a particular open file are associated + with a single stateid with the seqid being incremented whenever + LOCK and LOCKU operations affect that set of locks. - o The stateid was generated by the current server instance but the - stateid does not designate a locking state for any active - lockowner-file pair. The error NFS4ERR_BAD_STATEID should be - returned. 
+ o Stateids may represent file delegations, which are recallable + guarantees by the server to the client, that other clients will + not reference, or will not modify a particular file, until the + delegation is returned. - This error condition will occur when there has been a logic error - on the part of the client or server. This should not happen. + A stateid represents a single delegation held by a client for a + particular filehandle. - One mechanism that may be used to satisfy these requirements is for - the server to, +9.1.3.2. Stateid Structure - o divide the "other" field of each stateid into two fields: + Stateids are divided into two fields, a 96-bit "other" field + identifying the specific set of locks and a 32-bit "seqid" sequence + value. Except in the case of special stateids (see Section 9.1.3.3), + a particular value of the "other" field denotes a set of locks of the + same type (for example, byte-range locks, opens, delegations, or + layouts), for a specific file or directory, and sharing the same + ownership characteristics. The seqid designates a specific instance + of such a set of locks, and is incremented to indicate changes in + such a set of locks, either by the addition or deletion of locks from + the set, a change in the byte-range they apply to, or an upgrade or + downgrade in the type of one or more locks. - * A server verifier which uniquely designates a particular server - instantiation. + When such a set of locks is first created, the server returns a + stateid with seqid value of one. On subsequent operations that + modify the set of locks, the server is required to increment the + "seqid" field by one whenever it returns a stateid for the same + state-owner/file/type combination and there is some change in the set + of locks actually designated. In this case, the server will return a + stateid with an "other" field the same as previously used for that + state-owner/file/type combination, with an incremented "seqid" field. 
+ This pattern continues until the seqid is incremented past + NFS4_UINT32_MAX, and one (not zero) is the next seqid value. The + purpose of the incrementing of the seqid is to allow the server to + communicate to the client the order in which operations that modified + locking state associated with a stateid have been processed and to + make it possible for the client to send requests that are conditional + on the set of locks not having changed since the stateid in question + was returned. - * An index into a table of locking-state structures. + When a client sends a stateid to the server, it has two choices with + regard to the seqid sent. It may set the seqid to zero to indicate + to the server that it wishes the most up-to-date seqid for that + stateid's "other" field to be used. This would be the common choice + in the case of a stateid sent with a READ or WRITE operation. It + also may set a non-zero value, in which case the server checks if + that seqid is the correct one. In that case, the server is required + to return NFS4ERR_OLD_STATEID if the seqid is lower than the most + current value and NFS4ERR_BAD_STATEID if the seqid is greater than + the most current value. This would be the common choice in the case + of stateids sent with a CLOSE or OPEN_DOWNGRADE. Because OPENs may + be sent in parallel for the same owner, a client might close a file + without knowing that an OPEN upgrade had been done by the server, + changing the lock in question. If CLOSE were sent with a zero seqid, + the OPEN upgrade would be cancelled before the client even received + an indication that an upgrade had happened. - o utilize the "seqid" field of each stateid, such that seqid is - monotonically incremented for each stateid that is associated with - the same index into the locking-state table. + When a stateid is sent by the server to the client as part of a + callback operation, it is not subject to checking for a current seqid + and returning NFS4ERR_OLD_STATEID. 
This is because the client is not + in a position to know the most up-to-date seqid and thus cannot + verify it. Unless specially noted, the seqid value for a stateid + sent by the server to the client as part of a callback is required to + be zero with NFS4ERR_BAD_STATEID returned if it is not. - By matching the incoming stateid and its field values with the state - held at the server, the server is able to easily determine if a - stateid is valid for its current instantiation and state. If the - stateid is not valid, the appropriate error can be supplied to the - client. + In making comparisons between seqids, both by the client in + determining the order of operations and by the server in determining + whether the NFS4ERR_OLD_STATEID is to be returned, the possibility of + the seqid being swapped around past the NFS4_UINT32_MAX value needs + to be taken into account. When two seqid values are being compared, + the total count of slots for all sessions associated with the current + client is used to do this. When one seqid value is less than this + total slot count and another seqid value is greater than + NFS4_UINT32_MAX minus the total slot count, the former is to be + treated as lower than the latter, despite the fact that it is + numerically greater. -9.1.4. Use of the stateid and Locking +9.1.3.3. Special Stateids + + Stateid values whose "other" field is either all zeros or all ones + are reserved. They may not be assigned by the server but have + special meanings defined by the protocol. The particular meaning + depends on whether the "other" field is all zeros or all ones and the + specific value of the "seqid" field. + + The following combinations of "other" and "seqid" are defined in + NFSv4: + + o When "other" and "seqid" are both zero, the stateid is treated as + a special anonymous stateid, which can be used in READ, WRITE, and + SETATTR requests to indicate the absence of any open state + associated with the request. 
When an anonymous stateid value is + used, and an existing open denies the form of access requested, + then access will be denied to the request. + + o When "other" and "seqid" are both all ones, the stateid is a + special READ bypass stateid. When this value is used in WRITE or + SETATTR, it is treated like the anonymous value. When used in + READ, the server MAY grant access, even if access would normally + be denied to READ requests. + + o When "other" is zero and "seqid" is one, the stateid represents + the current stateid, which is whatever value is the last stateid + returned by an operation within the COMPOUND. In the case of an + OPEN, the stateid returned for the open file, and not the + delegation is used. The stateid passed to the operation in place + of the special value has its "seqid" value set to zero, except + when the current stateid is used by the operation CLOSE or + OPEN_DOWNGRADE. If there is no operation in the COMPOUND which + has returned a stateid value, the server MUST return the error + NFS4ERR_BAD_STATEID. As illustrated in Figure 5, if the value of + a current stateid is a special stateid, and the stateid of an + operation's arguments has "other" set to zero, and "seqid" set to + one, then the server MUST return the error NFS4ERR_BAD_STATEID. + + o When "other" is zero and "seqid" is NFS4_UINT32_MAX, the stateid + represents a reserved stateid value defined to be invalid. When + this stateid is used, the server MUST return the error + NFS4ERR_BAD_STATEID. + + If a stateid value is used which has all zero or all ones in the + "other" field, but does not match one of the cases above, the server + MUST return the error NFS4ERR_BAD_STATEID. + + Special stateids, unlike other stateids, are not associated with + individual client IDs or filehandles and can be used with all valid + client IDs and filehandles. 
In the case of a special stateid + designating the current stateid, the current stateid value + substituted for the special stateid is associated with a particular + client ID and filehandle, and so, if it is used where current + filehandle does not match that associated with the current stateid, + the operation to which the stateid is passed will return + NFS4ERR_BAD_STATEID. + +9.1.3.4. Stateid Lifetime and Validation + + Stateids must remain valid until either a client restart or a server + restart or until the client returns all of the locks associated with + the stateid by means of an operation such as CLOSE or DELEGRETURN. + If the locks are lost due to revocation as long as the client ID is + valid, the stateid remains a valid designation of that revoked state. + Stateids associated with byte-range locks are an exception. They + remain valid even if a LOCKU frees all remaining locks, so long as + the open file with which they are associated remains open. + + It should be noted that there are situations in which the client's + locks become invalid, without the client requesting they be returned. + These include lease expiration and a number of forms of lock + revocation within the lease period. It is important to note that in + these situations, the stateid remains valid and the client can use it + to determine the disposition of the associated lost locks. + + An "other" value must never be reused for a different purpose (i.e. + different filehandle, owner, or type of locks) within the context of + a single client ID. A server may retain the "other" value for the + same purpose beyond the point where it may otherwise be freed but if + it does so, it must maintain "seqid" continuity with previous values. + + One mechanism that may be used to satisfy the requirement that the + server recognize invalid and out-of-date stateids is for the server + to divide the "other" field of the stateid into two fields. + + o An index into a table of locking-state structures. 
+ + o A generation number which is incremented on each allocation of a + table entry for a particular use. + + And then store in each table entry, + + o The client ID with which the stateid is associated. + + o The current generation number for the (at most one) valid stateid + sharing this index value. + + o The filehandle of the file on which the locks are taken. + + o An indication of the type of stateid (open, byte-range lock, file + delegation). + + o The last "seqid" value returned corresponding to the current + "other" value. + + o An indication of the current status of the locks associated with + this stateid. In particular, whether these have been revoked and + if so, for what reason. + + With this information, an incoming stateid can be validated and the + appropriate error returned when necessary. Special and non-special + stateids are handled separately. (See Section 9.1.3.3 for a + discussion of special stateids.) + + When a stateid is being tested, and the "other" field is all zeros or + all ones, a check that the "other" and "seqid" fields match a defined + combination for a special stateid is done and the results determined + as follows: + + o If the "other" and "seqid" fields do not match a defined + combination associated with a special stateid, the error + NFS4ERR_BAD_STATEID is returned. + + o If the special stateid is one designating the current stateid, and + there is a current stateid, then the current stateid is + substituted for the special stateid and the checks appropriate to + non-special stateids are performed. + + o If the combination is valid in general but is not appropriate to + the context in which the stateid is used (e.g., an all-zero stateid + is used when an open stateid is required in a LOCK operation), the + error NFS4ERR_BAD_STATEID is also returned. + + o Otherwise, the check is completed and the special stateid is + accepted as valid.
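The checks for special stateids enumerated above can be condensed into a small dispatch. The following Python fragment is an illustrative sketch only: the function name, the returned labels, and the byte-array representation of the 96-bit "other" field are invented for the example and are not part of the protocol.

```python
# Illustrative sketch of the special-stateid combinations defined above.
# The representation (12-byte "other", 32-bit "seqid") follows the stateid
# structure described in this section; names and labels are hypothetical.
NFS4_UINT32_MAX = 0xFFFFFFFF
OTHER_ZEROS = b"\x00" * 12  # 96-bit "other" field, all zeros
OTHER_ONES = b"\xff" * 12   # 96-bit "other" field, all ones

def classify_special(other: bytes, seqid: int) -> str:
    """Classify a stateid whose "other" field is all zeros or all ones."""
    if other == OTHER_ZEROS and seqid == 0:
        return "anonymous"      # READ/WRITE/SETATTR with no open state
    if other == OTHER_ONES and seqid == NFS4_UINT32_MAX:
        return "read-bypass"    # READ may bypass share denial
    if other == OTHER_ZEROS and seqid == 1:
        return "current"        # substitute the current stateid
    if other == OTHER_ZEROS and seqid == NFS4_UINT32_MAX:
        return "invalid"        # reserved; always NFS4ERR_BAD_STATEID
    return "bad"                # undefined combination: NFS4ERR_BAD_STATEID
```

Any reserved "other" value whose "seqid" matches none of these combinations falls through to "bad", mirroring the rule that undefined combinations draw NFS4ERR_BAD_STATEID.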
+ + When a stateid is being tested, and the "other" field is neither all + zeros nor all ones, the following procedure could be used to validate + an incoming stateid and return an appropriate error, when necessary, + assuming that the "other" field would be divided into a table index + and an entry generation. + + o If the table index field is outside the range of the associated + table, return NFS4ERR_BAD_STATEID. + + o If the selected table entry is of a different generation than that + specified in the incoming stateid, return NFS4ERR_BAD_STATEID. + + o If the selected table entry does not match the current filehandle, + return NFS4ERR_BAD_STATEID. + + o If the client ID in the table entry does not match the client ID + associated with the current session, return NFS4ERR_BAD_STATEID. + + o If the stateid represents revoked state, then return + NFS4ERR_EXPIRED, NFS4ERR_ADMIN_REVOKED, or NFS4ERR_DELEG_REVOKED, + as appropriate. + + o If the stateid type is not valid for the context in which the + stateid appears, return NFS4ERR_BAD_STATEID. Note that a stateid + may be valid in general, but be invalid for a particular + operation, as, for example, when a stateid which does not represent + byte-range locks is passed to the non-from_open case of LOCK or to + LOCKU, or when a stateid which does not represent an open is + passed to CLOSE or OPEN_DOWNGRADE. In such cases, the server MUST + return NFS4ERR_BAD_STATEID. + + o If the "seqid" field is not zero, and it is greater than the + current sequence value corresponding to the current "other" field, + return NFS4ERR_BAD_STATEID. + + o If the "seqid" field is not zero, and it is less than the current + sequence value corresponding to the current "other" field, return + NFS4ERR_OLD_STATEID.
+ + o Otherwise, the stateid is valid and the table entry should contain + any additional information about the type of stateid and + information associated with that particular type of stateid, such + as the associated set of locks, such as open-owner and lock-owner + information, as well as information on the specific locks, such as + open modes and byte ranges. + +9.1.3.5. Stateid Use for I/O Operations + + Clients performing I/O operations need to select an appropriate + stateid based on the locks (including opens and delegations) held by + the client and the various types of state-owners sending the I/O + requests. SETATTR operations that change the file size are treated + like I/O operations in this regard. + + The following rules, applied in order of decreasing priority, govern + the selection of the appropriate stateid. In following these rules, + the client will only consider locks of which it has actually received + notification by an appropriate operation response or callback. + + o If the client holds a delegation for the file in question, the + delegation stateid SHOULD be used. + + o Otherwise, if the entity corresponding to the lock-owner (e.g., a + process) sending the I/O has a byte-range lock stateid for the + associated open file, then the byte-range lock stateid for that + lock-owner and open file SHOULD be used. + + o If there is no byte-range lock stateid, then the OPEN stateid for + the open file in question SHOULD be used. + + o Finally, if none of the above apply, then a special stateid SHOULD + be used. + + Ignoring these rules may result in situations in which the server + does not have information necessary to properly process the request. + For example, when mandatory byte-range locks are in effect, if the + stateid does not indicate the proper lock-owner, via a lock stateid, + a request might be avoidably rejected. 
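The priority rules above amount to a simple ordered fall-through. Below is an illustrative client-side sketch; the four arguments stand in for whatever lock-state bookkeeping a client keeps for the file and the sending lock-owner, and none of the names are defined by the protocol.

```python
def select_io_stateid(delegation_stateid, lock_stateid, open_stateid,
                      special_stateid):
    """Pick the stateid for a READ/WRITE (or size-changing SETATTR),
    applying the rules above in decreasing priority. Each of the first
    three arguments is the relevant stateid if the client holds that
    kind of state for the file, else None; special_stateid would be,
    for example, the anonymous stateid."""
    if delegation_stateid is not None:
        return delegation_stateid   # delegation held for the file
    if lock_stateid is not None:
        return lock_stateid         # byte-range locks for this lock-owner
    if open_stateid is not None:
        return open_stateid         # plain OPEN state for the file
    return special_stateid          # no locks known to the client
```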
+ + The server however should not try to enforce these ordering rules and + should use whatever information is available to properly process I/O + requests. In particular, when a client has a delegation for a given + file, it SHOULD take note of this fact in processing a request, even + if it is sent with a special stateid. + +9.1.3.6. Stateid Use for SETATTR Operations + + In the case of SETATTR operations, a stateid is present. In cases + other than those that set the file size, the client may send either a + special stateid or, when a delegation is held for the file in + question, a delegation stateid. While the server SHOULD validate the + stateid and may use the stateid to optimize the determination as to + whether a delegation is held, it SHOULD note the presence of a + delegation even when a special stateid is sent, and MUST accept a + valid delegation stateid when sent. + +9.1.4. lock_owner + + When requesting a lock, the client must present to the server the + client ID and an identifier for the owner of the requested lock. + These two fields are referred to as the lock_owner and the definition + of those fields are: + + o A client ID returned by the server as part of the client's use of + the SETCLIENTID operation. + + o A variable length opaque array used to uniquely define the owner + of a lock managed by the client. + + This may be a thread id, process id, or other unique value. + + When the server grants the lock, it responds with a unique stateid. + The stateid is used as a shorthand reference to the lock_owner, since + the server will be maintaining the correspondence between them. + +9.1.5. Use of the Stateid and Locking All READ, WRITE and SETATTR operations contain a stateid. 
For the purposes of this section, SETATTR operations which change the size attribute of a file are treated as if they are writing the area between the old and new size (i.e., the range truncated or added to the file by means of the SETATTR), even where SETATTR is not - explicitly mentioned in the text. + explicitly mentioned in the text. The stateid passed to one of these + operations must be one that represents an OPEN, a set of byte-range + locks, or a delegation, or it may be a special stateid representing + anonymous access or the special bypass stateid. If the lock_owner performs a READ or WRITE in a situation in which it has established a lock or share reservation on the server (any OPEN constitutes a share reservation) the stateid (previously returned by - the server) must be used to indicate what locks, including both - record locks and share reservations, are held by the lockowner. If - no state is established by the client, either record lock or share + the server) must be used to indicate what locks, including both byte- + range locks and share reservations, are held by the lockowner. If no + state is established by the client, either byte-range lock or share reservation, a stateid of all bits 0 is used. Regardless of whether a stateid of all bits 0, or a stateid returned by the server is used, - if there is a conflicting share reservation or mandatory record lock - held on the file, the server MUST refuse to service the READ or WRITE - operation. + if there is a conflicting share reservation or mandatory byte-range + lock held on the file, the server MUST refuse to service the READ or + WRITE operation. Share reservations are established by OPEN operations and by their nature are mandatory in that when the OPEN denies READ or WRITE operations, that denial results in such operations being rejected - with error NFS4ERR_LOCKED.
Record locks may be implemented by the - server as either mandatory or advisory, or the choice of mandatory or - advisory behavior may be determined by the server on the basis of the - file being accessed (for example, some UNIX-based servers support a - "mandatory lock bit" on the mode attribute such that if set, record - locks are required on the file before I/O is possible). When record - locks are advisory, they only prevent the granting of conflicting - lock requests and have no effect on READs or WRITEs. Mandatory - record locks, however, prevent conflicting I/O operations. When they - are attempted, they are rejected with NFS4ERR_LOCKED. When the - client gets NFS4ERR_LOCKED on a file it knows it has the proper share - reservation for, it will need to issue a LOCK request on the region - of the file that includes the region the I/O was to be performed on, - with an appropriate locktype (i.e., READ*_LT for a READ operation, - WRITE*_LT for a WRITE operation). + with error NFS4ERR_LOCKED. Byte-range locks may be implemented by + the server as either mandatory or advisory, or the choice of + mandatory or advisory behavior may be determined by the server on the + basis of the file being accessed (for example, some UNIX-based + servers support a "mandatory lock bit" on the mode attribute such + that if set, byte-range locks are required on the file before I/O is + possible). When byte-range locks are advisory, they only prevent the + granting of conflicting lock requests and have no effect on READs or + WRITEs. Mandatory byte-range locks, however, prevent conflicting I/O + operations. When they are attempted, they are rejected with + NFS4ERR_LOCKED. 
When the client gets NFS4ERR_LOCKED on a file it + knows it has the proper share reservation for, it will need to issue + a LOCK request on the region of the file that includes the region the + I/O was to be performed on, with an appropriate locktype (i.e., + READ*_LT for a READ operation, WRITE*_LT for a WRITE operation). - With NFS version 3, there was no notion of a stateid so there was no - way to tell if the application process of the client sending the READ - or WRITE operation had also acquired the appropriate record lock on + With NFSv3, there was no notion of a stateid so there was no way to + tell if the application process of the client sending the READ or + WRITE operation had also acquired the appropriate byte-range lock on the file. Thus there was no way to implement mandatory locking. With the stateid construct, this barrier has been removed. Note that for UNIX environments that support mandatory file locking, the distinction between advisory and mandatory locking is subtle. In - fact, advisory and mandatory record locks are exactly the same in so - far as the APIs and requirements on implementation. If the mandatory - lock attribute is set on the file, the server checks to see if the - lockowner has an appropriate shared (read) or exclusive (write) - record lock on the region it wishes to read or write to. If there is - no appropriate lock, the server checks if there is a conflicting lock - (which can be done by attempting to acquire the conflicting lock on - the behalf of the lockowner, and if successful, release the lock - after the READ or WRITE is done), and if there is, the server returns - NFS4ERR_LOCKED. + fact, advisory and mandatory byte-range locks are exactly the same in + so far as the APIs and requirements on implementation. If the + mandatory lock attribute is set on the file, the server checks to see + if the lockowner has an appropriate shared (read) or exclusive + (write) byte-range lock on the region it wishes to read or write to. 
+ If there is no appropriate lock, the server checks if there is a + conflicting lock (which can be done by attempting to acquire the + conflicting lock on the behalf of the lockowner, and if successful, + release the lock after the READ or WRITE is done), and if there is, + the server returns NFS4ERR_LOCKED. - For Windows environments, there are no advisory record locks, so the - server always checks for record locks during I/O requests. + For Windows environments, there are no advisory byte-range locks, so + the server always checks for byte-range locks during I/O requests. - Thus, the NFS version 4 LOCK operation does not need to distinguish - between advisory and mandatory record locks. It is the NFS version 4 + Thus, the NFSv4 LOCK operation does not need to distinguish between + advisory and mandatory byte-range locks. It is the NFS version 4 server's processing of the READ and WRITE operations that introduces the distinction. Every stateid other than the special stateid values noted in this section, whether returned by an OPEN-type operation (i.e., OPEN, OPEN_DOWNGRADE), or by a LOCK-type operation (i.e., LOCK or LOCKU), defines an access mode for the file (i.e., READ, WRITE, or READ- WRITE) as established by the original OPEN which began the stateid sequence, and as modified by subsequent OPENs and OPEN_DOWNGRADEs within that stateid sequence. When a READ, WRITE, or SETATTR which @@ -5267,21 +5622,21 @@ A lock may not be granted while a READ or WRITE operation using one of the special stateids is being performed and the range of the lock request conflicts with the range of the READ or WRITE operation. For the purposes of this paragraph, a conflict occurs when a shared lock is requested and a WRITE operation is being performed, or an exclusive lock is requested and either a READ or a WRITE operation is being performed. A SETATTR that sets size is treated similarly to a WRITE as discussed above. -9.1.5. Sequencing of Lock Requests +9.1.6. 
Sequencing of Lock Requests Locking is different than most NFS operations as it requires "at- most-one" semantics that are not provided by ONCRPC. ONCRPC over a reliable transport is not sufficient because a sequence of locking requests may span multiple TCP connections. In the face of retransmission or reordering, lock or unlock requests must have a well defined and consistent behavior. To accomplish this, each lock request contains a sequence number that is a consecutively increasing integer. Different lock_owners have different sequences. The server maintains the last sequence number (L) received and the response that @@ -5314,58 +5669,57 @@ request and response on a given lock_owner must be cached as long as the lock state exists on the server. The client MUST monotonically increment the sequence number for the CLOSE, LOCK, LOCKU, OPEN, OPEN_CONFIRM, and OPEN_DOWNGRADE operations. This is true even in the event that the previous operation that used the sequence number received an error. The only exception to this rule is if the previous operation received one of the following errors: NFS4ERR_STALE_CLIENTID, NFS4ERR_STALE_STATEID, NFS4ERR_BAD_STATEID, NFS4ERR_BAD_SEQID, NFS4ERR_BADXDR, - NFS4ERR_RESOURCE, NFS4ERR_NOFILEHANDLE, NFS4ERR_LEASE_MOVED, or - NFS4ERR_MOVED. + NFS4ERR_RESOURCE, NFS4ERR_NOFILEHANDLE, or NFS4ERR_MOVED. -9.1.6. Recovery from Replayed Requests +9.1.7. Recovery from Replayed Requests As described above, the sequence number is per lock_owner. As long as the server maintains the last sequence number received and follows the methods described above, there are no risks of a Byzantine router re-sending old requests. The server need only maintain the (lock_owner, sequence number) state as long as there are open files or closed files with locks outstanding. 
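The per-lock_owner sequencing described in this section can be modeled as a last-request replay cache. This sketch is only a model under stated assumptions: it accepts any seqid as the first request from a new lock_owner, treats a repeat of the last seqid as a retransmission to be answered from the cache, and returns the NFS4ERR_BAD_SEQID label for anything else; the class name and error-string representation are invented for the example.

```python
class LockOwnerSequence:
    """Model of per-lock_owner at-most-once sequencing (a sketch, not a
    complete server): remembers the last seqid and its cached response."""

    def __init__(self):
        self.last_seqid = None       # last sequence number (L) received
        self.cached_response = None  # response returned for last_seqid

    def process(self, seqid, execute):
        if self.last_seqid is not None and seqid == self.last_seqid:
            # Retransmission of the last request: replay the cached reply.
            return self.cached_response
        if self.last_seqid is None or seqid == self.last_seqid + 1:
            # First request from this owner, or the next one in sequence.
            self.cached_response = execute()
            self.last_seqid = seqid
            return self.cached_response
        # Misordered or stale request.
        return "NFS4ERR_BAD_SEQID"
```

A real server would also discard this state when the lock_owner no longer holds open or locking state, as discussed under "Releasing lock_owner State".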
LOCK, LOCKU, OPEN, OPEN_DOWNGRADE, and CLOSE each contain a sequence number and therefore the risk of the replay of these operations resulting in undesired effects is non-existent while the server maintains the lock_owner state. -9.1.7. Releasing lock_owner State +9.1.8. Releasing lock_owner State When a particular lock_owner no longer holds open or file locking state at the server, the server may choose to release the sequence number state associated with the lock_owner. The server may make this choice based on lease expiration, for the reclamation of server memory, or other implementation specific details. In any event, the server is able to do this safely only when the lock_owner no longer is being utilized by the client. The server may choose to hold the lock_owner state in the event that retransmitted requests are received. However, the period to hold this state is implementation specific. In the case that a LOCK, LOCKU, OPEN_DOWNGRADE, or CLOSE is retransmitted after the server has previously released the lock_owner state, the server will find that the lock_owner has no files open and an error will be returned to the client. If the lock_owner does have a file open, the stateid will not match and again an error is returned to the client. -9.1.8. Use of Open Confirmation +9.1.9. Use of Open Confirmation In the case that an OPEN is retransmitted and the lock_owner is being used for the first time or the lock_owner state has been previously released by the server, the use of the OPEN_CONFIRM operation will prevent incorrect behavior. When the server observes the use of the lock_owner for the first time, it will direct the client to perform the OPEN_CONFIRM for the corresponding OPEN. This sequence establishes the use of a lock_owner and associated sequence number. 
   Since the OPEN_CONFIRM sequence connects a new open_owner on the
   server with an existing open_owner on a client, the sequence number
   may have any valid value.  The OPEN_CONFIRM step assures the server
   that the value received is the correct one.

9.5.  Lease Renewal

   The purpose of a lease is to allow a server to remove stale locks
   that are held by a client that has crashed or is otherwise
   unreachable.  It is not a mechanism for cache consistency and lease
   renewals may not be denied if the lease interval has not expired.

   The following events cause implicit renewal of all of the leases for
   a given client (i.e., all those sharing a given client ID).  Each of
   these is a positive indication that the client is still active and
   that the associated state held at the server, for the client, is
   still valid.

   o  An OPEN with a valid client ID.

   o  Any operation made with a valid stateid (CLOSE, DELEGPURGE,
      DELEGRETURN, LOCK, LOCKU, OPEN, OPEN_CONFIRM, OPEN_DOWNGRADE,
      READ, RENEW, SETATTR, or WRITE).  This does not include the
      special stateids of all bits 0 or all bits 1.

      Note that if the client had restarted or rebooted, the client
      would not be making these requests without issuing the
      SETCLIENTID/SETCLIENTID_CONFIRM sequence.  The use of the
      SETCLIENTID/SETCLIENTID_CONFIRM sequence (one that changes the
      client verifier) notifies the server to drop the locking state
      associated with the client.  SETCLIENTID/SETCLIENTID_CONFIRM never
      renews a lease.

      If the server has rebooted, the stateids (NFS4ERR_STALE_STATEID
      error) or the client ID (NFS4ERR_STALE_CLIENTID error) will not be
      valid, hence preventing spurious renewals.

   This approach allows for low overhead lease renewal which scales
   well.  In the typical case no extra RPC calls are required for lease
   renewal and in the worst case one RPC is required every lease period
   (i.e., a RENEW operation).
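Because every qualifying operation renews all of a client's leases at
once, a server can keep a single expiration time per client and touch it
on any operation carrying a valid client ID or stateid.  A rough sketch
of that bookkeeping (invented names; illustrative only):

```c
#include <assert.h>

/* Sketch: one lease expiration time per client, renewed implicitly.
 * The structure is invented for illustration, not prescribed. */

struct nfs_client {
    long lease_expiry;   /* absolute time at which the lease lapses */
};

/* Called for any operation with a valid client ID or stateid
 * (OPEN, CLOSE, LOCK, LOCKU, READ, WRITE, RENEW, ...). */
static void
implicit_renew(struct nfs_client *c, long now, long lease_period)
{
    c->lease_expiry = now + lease_period;
}

static int
lease_expired(const struct nfs_client *c, long now)
{
    return now > c->lease_expiry;
}
```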
   The number of locks held by the client is not a factor since all
   state for the client is involved with the lease renewal action.

   Since all operations that create a new lease also renew existing
   leases, the server must maintain a common lease expiration time for
   all valid leases for a given client.  This lease time can then be
   easily updated upon implicit lease renewal actions.

9.6.1.  Client Failure and Recovery

   In the event that a client fails, the server may recover the client's
   locks when the associated leases have expired.  Conflicting locks
   from another client may only be granted after this lease expiration.
   If the client is able to restart or reinitialize within the lease
   period the client may be forced to wait the remainder of the lease
   period before obtaining new locks.

   To minimize client delay upon restart, lock requests are associated
   with an instance of the client by a client supplied verifier.  This
   verifier is part of the initial SETCLIENTID call made by the client.
   The server returns a client ID as a result of the SETCLIENTID
   operation.  The client then confirms the use of the client ID with
   SETCLIENTID_CONFIRM.  The client ID in combination with an opaque
   owner field is then used by the client to identify the lock owner for
   OPEN.  This chain of associations is then used to identify all locks
   for a particular client.

   Since the verifier will be changed by the client upon each
   initialization, the server can compare a new verifier to the verifier
   associated with currently held locks and determine that they do not
   match.  This signifies the client's new instantiation and subsequent
   loss of locking state.  As a result, the server is free to release
   all locks held which are associated with the old client ID which was
   derived from the old verifier.  Note that the verifier must have the
   same uniqueness properties as the verifier for the COMMIT operation.

9.6.2.  Server Failure and Recovery

   If the server loses locking state (usually as a result of a restart
   or reboot), it must allow clients time to discover this fact and re-
   establish the lost locking state.  The client must be able to re-
   establish the locking state without having the server deny valid
   requests because the server has granted conflicting access to another
   client.  Likewise, if there is the possibility that clients have not
   yet re-established their locking state for a file, the server must
   disallow READ and WRITE operations for that file.  The duration of
   this recovery period is equal to the duration of the lease period.

   A client can determine that server failure (and thus loss of locking
   state) has occurred, when it receives one of two errors.  The
   NFS4ERR_STALE_STATEID error indicates a stateid invalidated by a
   reboot or restart.  The NFS4ERR_STALE_CLIENTID error indicates a
   client ID invalidated by reboot or restart.  When either of these are
   received, the client must establish a new client ID (see
   Section 9.1.1) and re-establish the locking state as discussed below.

   The period of special handling of locking and READs and WRITEs, equal
   in duration to the lease period, is referred to as the "grace
   period".  During the grace period, clients recover locks and the
   associated state by reclaim-type locking requests (i.e., LOCK
   requests with reclaim set to true and OPEN operations with a claim
   type of CLAIM_PREVIOUS).  During the grace period, the server must
   reject READ and WRITE operations and non-reclaim locking requests
   (i.e., other LOCK and OPEN operations) with an error of
   NFS4ERR_GRACE.

   Further discussion of the general issue is included in [20].  The
   client must account for the server that is able to perform I/O and
   non-reclaim locking requests within the grace period as well as those
   that cannot do so.
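During the grace period, the server's admission decision reduces to a
simple test.  A sketch (the status names mirror the protocol's error
codes; the enum, values, and function are invented for illustration):

```c
#include <assert.h>

/* Sketch of grace-period request filtering.  The numeric values are
 * illustrative stand-ins for the protocol's status codes. */

#define NFS4_OK        0
#define NFS4ERR_GRACE  10013

enum req_kind { REQ_IO, REQ_RECLAIM_LOCK, REQ_NEW_LOCK };

static int
admit_during_grace(int in_grace, enum req_kind kind)
{
    if (!in_grace)
        return NFS4_OK;
    /* Only reclaim-type requests (LOCK with reclaim set to true, OPEN
     * with CLAIM_PREVIOUS) are admitted; READ/WRITE and non-reclaim
     * locking requests are rejected with NFS4ERR_GRACE. */
    return (kind == REQ_RECLAIM_LOCK) ? NFS4_OK : NFS4ERR_GRACE;
}
```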
   A reclaim-type locking request outside the server's grace period can
   only succeed if the server can guarantee that no conflicting lock or
   I/O request has been granted since reboot or restart.

   A server may, upon restart, establish a new value for the lease
   period.  Therefore, clients should, once a new client ID is
   established, refetch the lease_time attribute and use it as the basis
   for lease renewal for the lease associated with that server.
   However, the server must establish, for this restart event, a grace
   period at least as long as the lease period for the previous server
   instantiation.  This allows the client state obtained during the
   previous server instance to be reliably re-established.

9.6.3.  Network Partitions and Recovery

   If the duration of a network partition is greater than the lease
   period provided by the server, the server will not have received a
   lease renewal from the client.  If this occurs, the server may free
   all locks held for the client.  As a result, all stateids held by the
   client will become invalid or stale.  Once the client is able to
   reach the server after such a network partition, all I/O submitted by
   the client with the now invalid stateids will fail with the server
   returning the error NFS4ERR_EXPIRED.  Once this error is received,
   the client will suitably notify the application that held the lock.

9.6.3.1.  Courtesy Locks

   As a courtesy to the client or as an optimization, the server may
   continue to hold locks on behalf of a client for which recent
   communication has extended beyond the lease period.  If the server
   receives a lock or I/O request that conflicts with one of these
   courtesy locks, the server must free the courtesy lock and grant the
   new request.

   If the server does not reboot before the network partition is healed,
   when the original client tries to access a courtesy lock which was
   freed, the server SHOULD send back an NFS4ERR_BAD_STATEID to the
   client.
   If the client tries to access a courtesy lock which was not freed,
   then the server should mark all of the courtesy locks as implicitly
   being renewed.

   When a network partition is combined with a server reboot, there are
   edge conditions that place requirements on the server in order to
   avoid silent data corruption following the server reboot.  Two of
   these edge conditions are known, and are discussed below.

9.6.3.1.1.  First Server Edge Condition

   The first edge condition has the following scenario:

   1.  Client A acquires a lock.

   2.  Client A and server experience mutual network partition, such
       that client A is unable to renew its lease.

   3.  Client A's lease expires, so server releases lock.

   4.  Client B acquires a lock that would have conflicted with that of
       Client A.

   5.  Client B releases the lock.

   6.  Server reboots.

   7.  Network partition between client A and server heals.

   8.  Client A issues a RENEW operation, and gets back a
       NFS4ERR_STALE_CLIENTID.

   9.  Client A reclaims its lock within the server's grace period.

   Thus, at the final step, the server has erroneously granted client
   A's lock reclaim.  If client B modified the object the lock was
   protecting, client A will experience object corruption.

9.6.3.1.2.  Second Server Edge Condition

   The second known edge condition follows:

   1.   Client A acquires a lock.

   2.   Server reboots.

   3.   Client A and server experience mutual network partition, such
        that client A is unable to reclaim its lock within the grace
        period.

   4.   Server's reclaim grace period ends.  Client A has no locks
        recorded on server.

   5.   Server reboots a second time.

   6.   Client B acquires a lock that would have conflicted with that
        of client A.

   7.   Client B releases the lock.

   8.   Network partition between client A and server heals.

   9.   Client A issues a RENEW operation, and gets back a
        NFS4ERR_STALE_CLIENTID.

   10.  Client A reclaims its lock within the server's grace period.

   As with the first edge condition, the final step of the scenario of
   the second edge condition has the server erroneously granting client
   A's lock reclaim.
9.6.3.1.3.  Handling Server Edge Conditions

   Solving these edge conditions requires that the server either assume
   after it reboots that an edge condition has occurred, and thus return
   NFS4ERR_NO_GRACE for all reclaim attempts, or that the server record
   some information in stable storage.  The amount of information the
   server records in stable storage is in inverse proportion to how
   harsh the server wants to be whenever the edge conditions occur.  The
   server that is completely tolerant of all edge conditions will record
   in stable storage every lock that is acquired, removing the lock
   record from stable storage only when the lock is unlocked by the
   client and the lock's lockowner advances the sequence number such
   that the lock release is not the last stateful event for the
   lockowner's sequence.  For the two aforementioned edge conditions,
   the harshest a server can be, and still support a grace period for
   reclaims, requires that the server record some minimal information in
   stable storage.  For example, a server implementation could, for each
   client, save in stable storage a record containing:

   o  the client's id string

   o  a boolean that indicates if the client's lease expired or if there
      was administrative intervention (see Section 9.8) to revoke a
      byte-range lock, share reservation, or delegation

   o  a timestamp that is updated the first time after a server boot or
      reboot the client acquires byte-range locking, share reservation,
      or delegation state on the server.  The timestamp need not be
      updated on subsequent lock requests until the server reboots.
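The minimal per-client record described above might be laid out as
follows (a sketch only; the protocol mandates the information kept, not
any particular layout, and all names and sizes here are invented):

```c
#include <assert.h>
#include <time.h>

/* Sketch of the minimal stable-storage record described above.
 * Field names, sizes, and types are illustrative, not mandated. */
struct client_stable_record {
    char   id_string[1024];   /* the client's id string */
    int    revoked;           /* lease expired, or administrative
                                 revocation of a byte-range lock,
                                 share reservation, or delegation */
    time_t first_state_time;  /* first acquisition of locking, share
                                 reservation, or delegation state
                                 after a server boot; not updated
                                 again until the next reboot */
};
```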
   The server implementation would also record in the stable storage the
   timestamps from the two most recent server reboots.

   Assuming the above record keeping, for the first edge condition,
   after the server reboots, the record that client A's lease expired
   means that another client could have acquired a conflicting byte-
   range lock, share reservation, or delegation.  Hence the server must
   reject a reclaim from client A with the error NFS4ERR_NO_GRACE.

   For the second edge condition, after the server reboots for a second
   time, the record that the client had an unexpired byte-range lock,
   share reservation, or delegation established before the server's
   previous incarnation means that the server must reject a reclaim from
   client A with the error NFS4ERR_NO_GRACE.

   Regardless of the level and approach to record keeping, the server
   MUST implement one of the following strategies (which apply to
   reclaims of share reservations, byte-range locks, and delegations):

   1.  Reject all reclaims with NFS4ERR_NO_GRACE.  This is superharsh,
       but necessary if the server does not want to record lock state in
       stable storage.

   2.  Record sufficient state in stable storage such that all known
       edge conditions involving server reboot, including the two noted
       in this section, are detected.  False positives are acceptable.
       Note that at this time, it is not known if there are other edge
       conditions.

   In the event that, after a server reboot, the server determines that
   there is unrecoverable damage or corruption to the stable storage,
   then for all clients and/or locks affected, the server MUST return
   NFS4ERR_NO_GRACE.

9.6.3.1.4.  Client Edge Condition

   A third edge condition affects the client and not the server.  If the
   server reboots in the middle of the client reclaiming some locks and
   then a network partition is established, the client might be in the
   situation of having reclaimed some, but not all locks.
   In that case, a conservative client would assume that the
   non-reclaimed locks were revoked.

   The third known edge condition follows:

   1.   Client A acquires a lock 1.

   2.   Client A acquires a lock 2.

   3.   Server reboots.

   4.   Client A issues a RENEW operation, and gets back a
        NFS4ERR_STALE_CLIENTID.

   5.   Client A reclaims its lock 1 within the server's grace period.

   6.   Client A and server experience mutual network partition, such
        that client A is unable to reclaim its remaining locks within
        the grace period.

   7.   Server's reclaim grace period ends.  Client A has no locks
        recorded on server.

   8.   Server reboots a second time.

   9.   Network partition between client A and server heals.

   10.  Client A issues a RENEW operation, and gets back a
        NFS4ERR_STALE_CLIENTID.

   11.  Client A reclaims its lock 1 within the server's grace period.

   During the partition, client A decided that the server had revoked
   lock 2.  After the partition, it was able to reclaim lock 1, but made
   no attempt to reclaim lock 2.  After the grace period, it is free to
   try to reestablish lock 2 via LOCK operations.

   Note that the other two edge conditions are able to interact with
   this third edge condition.  Another client B may have established a
   conflicting lock during the partition, made some changes, and then
   released the lock before the second server reboot.

9.6.3.1.5.  Client's Handling of NFS4ERR_NO_GRACE

   A mandate for the client's handling of the NFS4ERR_NO_GRACE error is
   outside the scope of this specification, since the strategies for
   such handling are very dependent on the client's operating
   environment.  However, one potential approach is described below.

   When the client receives NFS4ERR_NO_GRACE, it could examine the
   change attribute of the objects the client is trying to reclaim state
   for, and use that to determine whether to re-establish the state via
   normal OPEN or LOCK requests.
   This is acceptable provided the client's operating environment allows
   it.  In other words, the client implementor is advised to document
   this behavior for users.  The client could also inform the
   application that its byte-range lock or share reservations (whether
   they were delegated or not) have been lost, such as via a UNIX
   signal, a GUI pop-up window, etc.  See Section 10.5 for a discussion
   of what the client should do for dealing with unreclaimed delegations
   on client state.

   For further discussion of revocation of locks see Section 9.8.

9.6.3.2.  Client's Reaction to a Freed Lock

   There is no way for a client to predetermine how a given server is
   going to behave during a network partition.  When the partition
   heals, either the client still has all of its locks, it has some of
   its locks, or it has none of them.  The client will be able to
   examine the various error return values to determine its response.

   NFS4ERR_EXPIRED:

      All locks have been revoked during the partition.  The client
      should use a SETCLIENTID to recover.

   NFS4ERR_ADMIN_REVOKED:

      The current lock has been revoked during the partition and there
      is no clue as to whether the server rebooted.

   NFS4ERR_BAD_STATEID:

      The current lock has been revoked during the partition and the
      server did not reboot.  Other locks MAY still be renewed.  The
      client need not do a SETCLIENTID and instead SHOULD probe via a
      RENEW call.

   NFS4ERR_NO_GRACE:

      The current lock has been revoked during the partition and the
      server rebooted.  The server might have no information on the
      other locks.  They may still be renewable.

   NFS4ERR_OLD_STATEID:

      The server has not rebooted.  The client SHOULD handle this error
      as it normally would.

9.7.  Recovery from a Lock Request Timeout or Abort

   In the event a lock request times out, a client may decide to not
   retry the request.
   The client may also abort the request when the process for which it
   was issued is terminated (e.g., in UNIX due to a signal).  It is
   possible though that the server received the request and acted upon
   it.  This would change the state on the server without the client
   being aware of the change.  It is paramount that the client
   re-synchronize state with the server before it attempts any other
   operation that takes a seqid and/or a stateid with the same
   lock_owner.

   If an I/O request using the stateid is successful, the client has
   renewed the locks governed by that stateid and re-established the
   appropriate state between itself and the server.  If the I/O request
   is not successful, then one or more of the locks associated with the
   stateid was revoked by the server and the client must notify the
   owner.

9.9.  Share Reservations

   A share reservation is a mechanism to control access to a file.  It
   is a separate and independent mechanism from byte-range locking.
   When a client opens a file, it issues an OPEN operation to the server
   specifying the type of access required (READ, WRITE, or BOTH) and the
   type of access to deny others (OPEN4_SHARE_DENY_NONE,
   OPEN4_SHARE_DENY_READ, OPEN4_SHARE_DENY_WRITE, or
   OPEN4_SHARE_DENY_BOTH).  If the OPEN fails the client will fail the
   application's open request.

   Pseudo-code definition of the semantics:

      if (request.access == 0)
              return (NFS4ERR_INVAL)
      else if ((request.access & file_state.deny) ||
               (request.deny & file_state.access))
              return (NFS4ERR_DENIED)

   This checking of share reservations on OPEN is done with no exception
   for an existing open for the same lock_owner.

   The constants used for the OPEN and OPEN_DOWNGRADE operations for the
   access and deny fields are as follows:

      const OPEN4_SHARE_ACCESS_READ   = 0x00000001;
      const OPEN4_SHARE_ACCESS_WRITE  = 0x00000002;
      const OPEN4_SHARE_ACCESS_BOTH   = 0x00000003;

      const OPEN4_SHARE_DENY_NONE    = 0x00000000;
      const OPEN4_SHARE_DENY_READ    = 0x00000001;
      const OPEN4_SHARE_DENY_WRITE   = 0x00000002;
      const OPEN4_SHARE_DENY_BOTH    = 0x00000003;

9.10.  OPEN/CLOSE Operations

   To provide correct share semantics, a client MUST use the OPEN
   operation to obtain the initial filehandle and indicate the desired
   access and what access, if any, to deny.  Even if the client intends
   to use a stateid of all 0's or all 1's, it must still obtain the
   filehandle for the regular file with the OPEN operation so the
   appropriate share semantics can be applied.  Clients that do not have
   a deny mode built into their programming interfaces for opening a
   file should request a deny mode of OPEN4_SHARE_DENY_NONE.

   The OPEN operation with the CREATE flag also subsumes the CREATE
   operation for regular files as used in previous versions of the NFS
   protocol.  This allows a create with a share to be done atomically.

   The CLOSE operation removes all share reservations held by the
   lock_owner on that file.  If byte-range locks are held, the client
   SHOULD release all locks before issuing a CLOSE.  The server MAY free
   all outstanding locks on CLOSE but some servers may not support the
   CLOSE of a file that still has byte-range locks held.  The server
   MUST return failure, NFS4ERR_LOCKS_HELD, if any locks would exist
   after the CLOSE.

   The LOOKUP operation will return a filehandle without establishing
   any lock state on the server.  Without a valid stateid, the server
   will assume the client has the least access.
   For example, if one client opened a file with OPEN4_SHARE_DENY_BOTH
   and another client accesses the file via a filehandle obtained
   through LOOKUP, the second client could only read the file using the
   special read bypass stateid.  The second client could not WRITE the
   file at all because it would not have a valid stateid from OPEN and
   the special anonymous stateid would not be allowed access.

9.10.1.  Close and Retention of State Information

   Since a CLOSE operation requests deallocation of a stateid, dealing
   with retransmission of the CLOSE may pose special difficulties, since
   the state information, which normally would be used to determine the
   state of the open file being designated, might be deallocated,
   resulting in an NFS4ERR_BAD_STATEID error.

   Servers may deal with this problem in a number of ways.

9.11.  Open Upgrade and Downgrade

   When the server chooses to export multiple filehandles corresponding
   to the same file object and returns different filehandles on two
   different OPENs of the same file object, the server MUST NOT "OR"
   together the access and deny bits and coalesce the two open files.
   Instead the server must maintain separate OPENs with separate
   stateids and will require separate CLOSEs to free them.

   When multiple open files on the client are merged into a single open
   file object on the server, the close of one of the open files (on the
   client) may necessitate change of the access and deny status of the
   open file on the server.  This is because the union of the access and
   deny bits for the remaining opens may be smaller (i.e., a proper
   subset) than previously.  The OPEN_DOWNGRADE operation is used to
   make the necessary change and the client should use it to update the
   server so that share reservation requests by other clients are
   handled properly.  The stateid returned has the same "other" field as
   that passed to the server.  The "seqid" value in the returned stateid
   MUST be incremented, even in situations in which there is no change
   to the access and deny bits for the file.

9.12.  Short and Long Leases

   When determining the time period for the server lease, the usual
   lease tradeoffs apply.  Short leases are good for fast server
   recovery at a cost of increased RENEW or READ (with zero length)
   requests.  Longer leases are certainly kinder and gentler to servers
   trying to handle very large numbers of clients.  The number of RENEW
   requests drops in proportion to the lease time.  The disadvantages of
   long leases are slower recovery after server failure (the server must
   wait for the leases to expire and the grace period to elapse before
   granting new lock requests) and increased file contention (if the
   client fails to transmit an unlock request then the server must wait
   for lease expiration before granting new locks).

   In the absence of an automatic method to determine an appropriate
   lease period, the server's administrator may have to tune the lease
   period.

9.14.  Migration, Replication and State

   When responsibility for handling a given file system is transferred
   to a new server (migration) or the client chooses to use an alternate
   server (e.g., in response to server unresponsiveness) in the context
   of file system replication, the appropriate handling of state shared
   between the client and server (i.e., locks, leases, stateids, and
   client IDs) is as described below.  The handling differs between
   migration and replication.  For related discussion of file server
   state and recovery of such see the sections under Section 9.6.

   If a server replica or a server immigrating a filesystem agrees to,
   or is expected to, accept opaque values from the client that
   originated from another server, then it is a wise implementation
   practice for the servers to encode the "opaque" values in network
   byte order.  This way, servers acting as replicas or immigrating
   filesystems will be able to parse values like stateids, directory
   cookies, filehandles, etc. even if their native byte order is
   different from that of other servers cooperating in the replication
   and migration of the filesystem.

9.14.1.  Migration and State

   In the case of migration, the servers involved in the migration of a
   filesystem SHOULD transfer all server state from the original to the
   new server.  This must be done in a way that is transparent to the
   client.
   This state transfer will ease the client's transition when a
   filesystem migration occurs.  If the servers are successful in
   transferring all state, the client will continue to use stateids
   assigned by the original server.  Therefore the new server must
   recognize these stateids as valid.  This holds true for the client ID
   as well.  Since responsibility for an entire filesystem is
   transferred with a migration event, there is no possibility that
   conflicts will arise on the new server as a result of the transfer of
   locks.

   As part of the transfer of information between servers, leases would
   be transferred as well.  The leases being transferred to the new
   server will typically have a different expiration time from those for
   the same client, previously on the old server.  To maintain the
   property that all leases on a given server for a given client expire
   at the same time, the server should advance the expiration time to
   the later of the leases being transferred or the leases already
   present.  This allows the client to maintain lease renewal of both
   classes without special effort.

   A client SHOULD re-establish new callback information with the new
   server as soon as possible, according to sequences described in
   Section 15.35 and Section 15.36.  This ensures that server operations
   are not blocked by the inability to recall delegations.

9.14.2.  Replication and State

   Since client switch-over in the case of replication is not under
   server control, the handling of state is different.  In this case,
   leases, stateids and client IDs do not have validity across a
   transition from one server to another.  The client must re-establish
   its locks on the new server.  This can be compared to the
   re-establishment of locks by means of reclaim-type requests after a
   server reboot.  The difference is that the server has no provision to
   distinguish requests reclaiming locks from those obtaining new locks
   or to defer the latter.
   Thus, a client re-establishing a lock on the new server (by means of
   a LOCK or OPEN request), may have the requests denied due to a
   conflicting lock.  Since replication is intended for read-only use of
   filesystems, such denial of locks should not pose large difficulties
   in practice.  When an attempt to re-establish a lock on a new server
   is denied, the client should treat the situation as if its original
   lock had been revoked.

10.  Client-Side Caching

   Client-side caching of data, of file attributes, and of file names is
   essential to providing good performance with the NFS protocol.
   Providing distributed cache coherence is a difficult problem and
   previous versions of the NFS protocol have not attempted it.
   Instead, several NFS client implementation techniques have been used
   to reduce the problems that a lack of coherence poses for users.
   These techniques have not been clearly defined by earlier protocol
   specifications and it is often unclear what is valid or invalid
   client behavior.

   The NFSv4 protocol uses many techniques similar to those that have
   been used in previous protocol versions.  The NFSv4 protocol does not
   provide distributed cache coherence.  However, it defines a more
   limited set of caching guarantees to allow locks and share
   reservations to be used without destructive interference from client
   side caching.
   In addition, the NFSv4 protocol introduces a delegation mechanism
   which allows many decisions normally made by the server to be made
   locally by clients.  This mechanism provides efficient support of the
   common cases where sharing is infrequent or where sharing is
   read-only.

10.1.  Performance Challenges for Client-Side Caching

   Caching techniques used in previous versions of the NFS protocol have
   been successful in providing good performance.  However, several
   scalability challenges can arise when those techniques are used with
   very large numbers of clients.  This is particularly true when
   clients are geographically distributed which classically increases
   the latency for cache re-validation requests.

   The previous versions of the NFS protocol repeat their file data
   cache validation requests at the time the file is opened.  This
   behavior can have serious performance drawbacks.  A common case is
   one in which a file is only accessed by a single client.  Therefore,
   sharing is infrequent.  In this case, repeated reference to the
   server to find that no conflicts exist is expensive.  A better option
   with regards to performance is to allow a client that repeatedly
   opens a file to do so without reference to the server.  This is done
   until potentially conflicting operations from another client actually
   occur.

   A similar situation arises in connection with file locking.  Sending
   file lock and unlock requests to the server as well as the read and
   write requests necessary to make data caching consistent with the
   locking semantics (see Section 10.3.2) can severely limit
   performance.  When locking is used to provide protection against
   infrequent conflicts, a large penalty is incurred.  This penalty may
   discourage the use of file locking by applications.
   The NFSv4 protocol provides more aggressive caching strategies with
   the following design goals:

   o  Compatibility with a large range of server semantics.

   o  Provide the same caching benefits as previous versions of the NFS
      protocol when unable to provide the more aggressive model.

   o  Requirements for aggressive caching are organized so that a large
      portion of the benefit can be obtained even when not all of the
      requirements can be met.

10.2.  Delegation

   Using a "callback" RPC from server to client, a server recalls
   delegated responsibilities when another client engages in sharing of
   a delegated file.

   A delegation is passed from the server to the client, specifying the
   object of the delegation and the type of delegation.  There are
   different types of delegations but each type contains a stateid to be
   used to represent the delegation when performing operations that
   depend on the delegation.  This stateid is similar to those
   associated with locks and share reservations but differs in that the
   stateid for a delegation is associated with a client ID and may be
   used on behalf of all the open_owners for the given client.  A
   delegation is made to the client as a whole and not to any specific
   process or thread of control within it.

   Because callback RPCs may not work in all environments (due to
   firewalls, for example), correct protocol operation does not depend
   on them.  Preliminary testing of callback functionality by means of a
   CB_NULL procedure determines whether callbacks can be supported.  The
   CB_NULL procedure checks the continuity of the callback path.  A
   server makes a preliminary assessment of callback availability to a
   given client and avoids delegating responsibilities until it has
   determined that callbacks are working.
The client could then determine which delegations it may not need and preemptively release them. 10.2.1. Delegation Recovery There are three situations that delegation recovery must deal with: o Client reboot or restart + o Server reboot or restart o Network partition (full or callback-only) In the event the client reboots or restarts, the failure to renew - leases will result in the revocation of record locks and share + leases will result in the revocation of byte-range locks and share reservations. Delegations, however, may be treated a bit differently. There will be situations in which delegations will need to be reestablished after a client reboots or restarts. The reason for this is the client may have file data stored locally and this data was associated with the previously held delegations. The client will need to reestablish the appropriate file state on the server. To allow for this type of client recovery, the server MAY extend the @@ -6422,31 +6886,31 @@ A server MAY support a claim type of CLAIM_DELEGATE_PREV, but if it does, it MUST NOT remove delegations upon SETCLIENTID_CONFIRM, and instead MUST, for a period of time no less than that of the value of the lease_time attribute, maintain the client's delegations to allow time for the client to issue CLAIM_DELEGATE_PREV requests. The server that supports CLAIM_DELEGATE_PREV MUST support the DELEGPURGE operation. When the server reboots or restarts, delegations are reclaimed (using - the OPEN operation with CLAIM_PREVIOUS) in a similar fashion to - record locks and share reservations. However, there is a slight + the OPEN operation with CLAIM_PREVIOUS) in a similar fashion to byte- + range locks and share reservations. However, there is a slight semantic difference. In the normal case if the server decides that a delegation should not be granted, it performs the requested action (e.g., OPEN) without granting any delegation. 
For reclaim, the server grants the delegation but a special designation is applied so that the client treats the delegation as having been granted but recalled by the server. Because of this, the client has the duty to write all modified state to the server and then return the delegation. This process of handling delegation reclaim reconciles - three principles of the NFS version 4 protocol: + three principles of the NFSv4 protocol: o Upon reclaim, a client reporting resources assigned to it by an earlier server instance must be granted those resources. o The server has unquestionable authority to determine whether delegations are to be granted and, once granted, whether they are to be continued. o The use of callbacks is not to be depended upon until the client has proven its ability to receive them. @@ -6464,71 +6928,71 @@ requests are held off. Eventually the occurrence of a conflicting request from another client will cause revocation of the delegation. A loss of the callback path (e.g., by later network configuration change) will have the same effect. A recall request will fail and revocation of the delegation will result. A client normally finds out about revocation of a delegation when it uses a stateid associated with a delegation and receives the error NFS4ERR_EXPIRED. It also may find out about delegation revocation after a client reboot when it attempts to reclaim a delegation and - receives that same error. Note that in the case of a revoked write - open delegation, there are issues because data may have been modified - by the client whose delegation is revoked and separately by other - clients. See Section 10.5.1 for a discussion of such issues. Note - also that when delegations are revoked, information about the revoked - delegation will be written by the server to stable storage (as - described in Section 9.6). 
This is done to deal with the case in - which a server reboots after revoking a delegation but before the - client holding the revoked delegation is notified about the - revocation. + receives that same error. Note that in the case of a revoked + OPEN_DELEGATE_WRITE delegation, there are issues because data may + have been modified by the client whose delegation is revoked and + separately by other clients. See Section 10.5.1 for a discussion of + such issues. Note also that when delegations are revoked, + information about the revoked delegation will be written by the + server to stable storage (as described in Section 9.6). This is done + to deal with the case in which a server reboots after revoking a + delegation but before the client holding the revoked delegation is + notified about the revocation. 10.3. Data Caching When applications share access to a set of files, they need to be implemented so as to take account of the possibility of conflicting access by another application. This is true whether the applications in question execute on different clients or reside on the same client. - Share reservations and record locks are the facilities the NFS + Share reservations and byte-range locks are the facilities the NFS version 4 protocol provides to allow applications to coordinate - access by providing mutual exclusion facilities. The NFS version 4 + access by providing mutual exclusion facilities. The NFSv4 protocol's data caching must be implemented such that it does not invalidate the assumptions that those using these facilities depend upon. 10.3.1. Data Caching and OPENs In order to avoid invalidating the sharing assumptions that - applications rely on, NFS version 4 clients should not provide cached - data to applications or modify it on behalf of an application when it - would not be valid to obtain or modify that same data via a READ or - WRITE operation. 
+ applications rely on, NFSv4 clients should not provide cached data to + applications or modify it on behalf of an application when it would + not be valid to obtain or modify that same data via a READ or WRITE + operation. Furthermore, in the absence of open delegation (see Section 10.4) two additional rules apply. Note that these rules are obeyed in practice - by many NFS version 2 and version 3 clients. + by many NFSv2 and NFSv3 clients. o First, cached data present on a client must be revalidated after doing an OPEN. Revalidating means that the client fetches the change attribute from the server, compares it with the cached change attribute, and if different, declares the cached data (as well as the cached attributes) as invalid. This is to ensure that the data for the OPENed file is still correctly reflected in the client's cache. This validation must be done at least when the client's OPEN operation includes DENY=WRITE or BOTH thus terminating a period in which other clients may have had the opportunity to open the file with WRITE access. Clients may choose to do the revalidation more often (i.e., at OPENs - specifying DENY=NONE) to parallel the NFS version 3 protocol's - practice for the benefit of users assuming this degree of cache + specifying DENY=NONE) to parallel the NFSv3 protocol's practice + for the benefit of users assuming this degree of cache revalidation. Since the change attribute is updated for data and metadata modifications, some client implementors may be tempted to use the time_modify attribute and not change to validate cached data, so that metadata changes do not spuriously invalidate clean data. The implementor is cautioned in this approach. The change attribute is guaranteed to change for each update to the file, whereas time_modify is guaranteed to change only at the granularity of the time_delta attribute. 
Use by the client's data cache validation logic of time_modify and not change runs the risk of the client incorrectly marking stale data as valid. @@ -6554,21 +7018,21 @@ operations executed. This is as opposed to file locking that is based on pure convention. For example, it is possible to manipulate a two-megabyte file by dividing the file into two one-megabyte regions and protecting access to the two regions by file locks on bytes zero and one. A lock for write on byte zero of the file would represent the right to do READ and WRITE operations on the first region. A lock for write on byte one of the file would represent the right to do READ and WRITE operations on the second region. As long as all applications manipulating the file obey this convention, they will work on a local filesystem. However, they may not work with the - NFS version 4 protocol unless clients refrain from data caching. + NFSv4 protocol unless clients refrain from data caching. The rules for data caching in the file locking environment are: o First, when a client obtains a file lock for a particular region, the data cache corresponding to that region (if any cached data exists) must be revalidated. If the change attribute indicates that the file may have been updated since the cached data was obtained, the client must flush or invalidate the cached data for the newly locked region. A client might choose to invalidate all of non-modified cached data that it has for the file but the only @@ -6595,39 +7059,39 @@ client possesses may not be valid. The data that is written to the server as a prerequisite to the unlocking of a region must be written, at the server, to stable storage. The client may accomplish this either with synchronous writes or by following asynchronous writes with a COMMIT operation. This is required because retransmission of the modified data after a server reboot might conflict with a lock held by another client. 
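The two cache-consistency rules above — revalidate the newly locked region when a lock is obtained, and get modified data to stable storage before the region is unlocked — can be pictured with a small non-normative sketch; the class and callback names are invented for the example:

```python
# Non-normative sketch of the byte-range locking cache rules.

class LockedRegionCache:
    def __init__(self, cached_change):
        self.cached_change = cached_change  # change attr when data cached
        self.dirty_ranges = []              # modified (offset, length) pairs

    def on_lock(self, server_change, invalidate_region):
        # Rule 1: if the change attribute indicates the file may have
        # been updated since the data was cached, flush or invalidate
        # the cached data for the newly locked region.
        if server_change != self.cached_change:
            invalidate_region()
            self.cached_change = server_change

    def on_unlock(self, write_range, commit):
        # Rule 2: before LOCKU, modified data must reach stable storage
        # at the server -- synchronous WRITEs, or asynchronous WRITEs
        # followed by a COMMIT.
        for rng in self.dirty_ranges:
            write_range(rng)
        commit()
        self.dirty_ranges.clear()
```

The COMMIT in `on_unlock` reflects the requirement that retransmission of modified data after a server reboot must not conflict with a lock subsequently held by another client.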
A client implementation may choose to accommodate applications which - use record locking in non-standard ways (e.g., using a record lock as - a global semaphore) by flushing to the server more data upon a LOCKU - than is covered by the locked range. This may include modified data - within files other than the one for which the unlocks are being done. - In such cases, the client must not interfere with applications whose - READs and WRITEs are being done only within the bounds of record - locks which the application holds. For example, an application locks - a single byte of a file and proceeds to write that single byte. A - client that chose to handle a LOCKU by flushing all modified data to - the server could validly write that single byte in response to an - unrelated unlock. However, it would not be valid to write the entire - block in which that single written byte was located since it includes - an area that is not locked and might be locked by another client. - Client implementations can avoid this problem by dividing files with - modified data into those for which all modifications are done to - areas covered by an appropriate record lock and those for which there - are modifications not covered by a record lock. Any writes done for - the former class of files must not include areas not locked and thus - not modified on the client. + use byte-range locking in non-standard ways (e.g., using a byte-range + lock as a global semaphore) by flushing to the server more data upon + a LOCKU than is covered by the locked range. This may include + modified data within files other than the one for which the unlocks + are being done. In such cases, the client must not interfere with + applications whose READs and WRITEs are being done only within the + bounds of byte-range locks which the application holds. For example, an + application locks a single byte of a file and proceeds to write that + single byte.
A client that chose to handle a LOCKU by flushing all + modified data to the server could validly write that single byte in + response to an unrelated unlock. However, it would not be valid to + write the entire block in which that single written byte was located + since it includes an area that is not locked and might be locked by + another client. Client implementations can avoid this problem by + dividing files with modified data into those for which all + modifications are done to areas covered by an appropriate byte-range + lock and those for which there are modifications not covered by a + byte-range lock. Any writes done for the former class of files must + not include areas not locked and thus not modified on the client. 10.3.3. Data Caching and Mandatory File Locking Client side data caching needs to respect mandatory file locking when it is in effect. The presence of mandatory file locking for a given file is indicated when the client gets back NFS4ERR_LOCKED from a READ or WRITE on a file it has an appropriate share reservation for. When mandatory locking is in effect for a file, the client must check for an appropriate file lock for data being read or written. If a lock exists for the range being read or written, the client may @@ -6636,60 +7100,63 @@ the read or write request must not be satisfied by the client's cache and the request must be sent to the server for processing. When a read or write request partially overlaps a locked region, the request should be subdivided into multiple pieces with each region (locked or not) treated appropriately. 10.3.4. Data Caching and File Identity When clients cache data, the file data needs to be organized according to the filesystem object to which the data belongs. 
For - NFS version 3 clients, the typical practice has been to assume for - the purpose of caching that distinct filehandles represent distinct + NFSv3 clients, the typical practice has been to assume for the + purpose of caching that distinct filehandles represent distinct filesystem objects. The client then has the choice to organize and maintain the data cache on this basis. - In the NFS version 4 protocol, there is now the possibility to have + In the NFSv4 protocol, there is now the possibility to have significant deviations from a "one filehandle per object" model because a filehandle may be constructed on the basis of the object's pathname. Therefore, clients need a reliable method to determine if two filehandles designate the same filesystem object. If clients were simply to assume that all distinct filehandles denote distinct objects and proceed to do data caching on this basis, caching inconsistencies would arise between the distinct client side objects which mapped to the same server side object. - By providing a method to differentiate filehandles, the NFS version 4 + By providing a method to differentiate filehandles, the NFSv4 protocol alleviates a potential functional regression in comparison - with the NFS version 3 protocol. Without this method, caching + with the NFSv3 protocol. Without this method, caching inconsistencies within the same client could occur and this has not been present in previous versions of the NFS protocol. Note that it is possible to have such inconsistencies with applications executing on multiple clients but that is not the issue being addressed here. 
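The identity determination that the protocol's attributes make possible can be pictured as executable logic. The following non-normative sketch applies the fsid, unique_handles, and fileid checks; representing GETATTR results as dictionaries is an assumption of the example, not part of the protocol:

```python
# Non-normative sketch: decide whether two distinct filehandles denote
# the same server-side object, using GETATTR results (dicts here).

SAME, DISTINCT, UNKNOWN = "same", "distinct", "unknown"

def same_object(attrs1, attrs2):
    # Different fsid values: the objects live on different filesystems
    # and are therefore distinct.
    if attrs1["fsid"] != attrs2["fsid"]:
        return DISTINCT
    # unique_handles TRUE for this fsid: distinct filehandles always
    # denote distinct objects.
    if attrs1.get("unique_handles"):
        return DISTINCT
    # Without fileid for both handles, identity cannot be determined,
    # so operations depending on it (e.g., client-side data caching)
    # cannot be done reliably.
    if "fileid" not in attrs1 or "fileid" not in attrs2:
        return UNKNOWN
    return SAME if attrs1["fileid"] == attrs2["fileid"] else DISTINCT
```

A client that gets UNKNOWN back must fall back to treating the filehandles as if they might alias, rather than caching under each independently.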
- For the purposes of data caching, the following steps allow an NFS - version 4 client to determine whether two distinct filehandles denote - the same server side object: + For the purposes of data caching, the following steps allow an NFSv4 + client to determine whether two distinct filehandles denote the same + server side object: o If GETATTR directed to two filehandles returns different values of the fsid attribute, then the filehandles represent distinct objects. o If GETATTR for any file with an fsid that matches the fsid of the two filehandles in question returns a unique_handles attribute with a value of TRUE, then the two objects are distinct. o If GETATTR directed to the two filehandles does not return the fileid attribute for both of the handles, then it cannot be determined whether the two objects are the same. Therefore, operations which depend on that knowledge (e.g., client side data - caching) cannot be done reliably. + caching) cannot be done reliably. Note that if GETATTR does not + return the fileid attribute for both filehandles, it will return + it for neither of the filehandles, since the fsid for both + filehandles is the same. o If GETATTR directed to the two filehandles returns different values for the fileid attribute, then they are distinct objects. o Otherwise they are the same object. 10.4. Open Delegation When a file is being OPENed, the server may delegate further handling of opens and closes for that file to the opening client. Any such @@ -6715,50 +7182,52 @@ o There should be no current delegation that conflicts with the delegation being requested. o The probability of future conflicting open requests should be low based on the recent history of the file. o The existence of any server-specific semantics of OPEN/CLOSE that would make the required handling incompatible with the prescribed handling that the delegated client would apply (see below). - There are two types of open delegations, read and write. 
A read open - delegation allows a client to handle, on its own, requests to open a - file for reading that do not deny read access to others. Multiple - read open delegations may be outstanding simultaneously and do not - conflict. A write open delegation allows the client to handle, on - its own, all opens. Only one write open delegation may exist for a - given file at a given time and it is inconsistent with any read open - delegations. + There are two types of open delegations, OPEN_DELEGATE_READ and + OPEN_DELEGATE_WRITE. An OPEN_DELEGATE_READ delegation allows a client + to handle, on its own, requests to open a file for reading that do + not deny read access to others. Multiple OPEN_DELEGATE_READ + delegations may be outstanding simultaneously and do not conflict. An + OPEN_DELEGATE_WRITE delegation allows the client to handle, on its + own, all opens. Only one OPEN_DELEGATE_WRITE delegation may exist + for a given file at a given time and it is inconsistent with any + OPEN_DELEGATE_READ delegations. - When a client has a read open delegation, it may not make any changes - to the contents or attributes of the file but it is assured that no - other client may do so. When a client has a write open delegation, - it may modify the file data since no other client will be accessing - the file's data. The client holding a write delegation may only - affect file attributes which are intimately connected with the file - data: size, time_modify, change. + When a client has an OPEN_DELEGATE_READ delegation, it may not make + any changes to the contents or attributes of the file but it is + assured that no other client may do so. When a client has an + OPEN_DELEGATE_WRITE delegation, it may modify the file data since no + other client will be accessing the file's data. The client holding an + OPEN_DELEGATE_WRITE delegation may only affect file attributes which + are intimately connected with the file data: size, time_modify, + change.
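The compatibility rule just stated — any number of read delegations may coexist, while a write delegation excludes all others — can be captured in a short non-normative check; the function name and list representation are invented for the example:

```python
# Non-normative sketch of delegation compatibility on a single file.

OPEN_DELEGATE_READ = "OPEN_DELEGATE_READ"
OPEN_DELEGATE_WRITE = "OPEN_DELEGATE_WRITE"

def may_grant(requested, outstanding):
    """outstanding: delegation types other clients hold on the file."""
    if requested == OPEN_DELEGATE_WRITE:
        # Only one OPEN_DELEGATE_WRITE delegation may exist at a time,
        # and it is inconsistent with any OPEN_DELEGATE_READ delegations.
        return len(outstanding) == 0
    # OPEN_DELEGATE_READ delegations conflict only with an existing
    # OPEN_DELEGATE_WRITE delegation.
    return OPEN_DELEGATE_WRITE not in outstanding
```

A real server would of course also weigh the heuristics listed above (open history, callback path availability) before granting, even when this check passes.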
When a client has an open delegation, it does not send OPENs or CLOSEs to the server but updates the appropriate status internally. - For a read open delegation, opens that cannot be handled locally - (opens for write or that deny read access) must be sent to the - server. + For an OPEN_DELEGATE_READ delegation, opens that cannot be handled + locally (opens for write or that deny read access) must be sent to + the server. When an open delegation is made, the response to the OPEN contains an open delegation structure which specifies the following: o the type of delegation (read or write) o space limitation information to control flushing of data on close - (write open delegation only, see Section 10.4.1) + (OPEN_DELEGATE_WRITE delegation only, see Section 10.4.1) o an nfsace4 specifying read and write permissions o a stateid to represent the delegation for READ and WRITE The delegation stateid is separate and distinct from the stateid for the OPEN proper. The standard stateid, unlike the delegation stateid, is associated with a particular lock_owner and will continue to be valid after the delegation is recalled and the file remains open. @@ -6799,37 +7268,37 @@ each user by use of the ACCESS operation. This should be the case even if an ACCESS operation would not be required otherwise. As mentioned before, the server may enforce frequent authentication by returning an nfsace4 denying all access with every open delegation. 10.4.1. Open Delegation and Data Caching OPEN delegation allows much of the message overhead associated with the opening and closing of files to be eliminated. An open when an open delegation is in effect does not require that a validation message be - sent to the server. The continued endurance of the "read open - delegation" provides a guarantee that no OPEN for write and thus no - write has occurred. + sent to the server. The continued endurance of the + "OPEN_DELEGATE_READ delegation" provides a guarantee that no OPEN for + write and thus no write has occurred.
Similarly, when closing a file opened for write - and if write open delegation is in effect, the data written does not - have to be flushed to the server until the open delegation is - recalled. The continued endurance of the open delegation provides a - guarantee that no open and thus no read or write has been done by - another client. + Similarly, when closing a file + opened for write and if an OPEN_DELEGATE_WRITE delegation is in effect, + the data written does not have to be flushed to the server until the + open delegation is recalled. The continued endurance of the open + delegation provides a guarantee that no open and thus no read or + write has been done by another client. For the purposes of open delegation, READs and WRITEs done without an OPEN are treated as the functional equivalents of a corresponding type of OPEN. This refers to the READs and WRITEs that use the special stateids consisting of all zero bits or all one bits. Therefore, READs or WRITEs with a special stateid done by another - client will force the server to recall a write open delegation. A - WRITE with a special stateid done by another client will force a - recall of read open delegations. + client will force the server to recall an OPEN_DELEGATE_WRITE + delegation. A WRITE with a special stateid done by another client + will force a recall of OPEN_DELEGATE_READ delegations. With delegations, a client is able to avoid writing data to the server when the CLOSE of a file is serviced. The file close system call is the usual point at which the client is notified of a lack of stable storage for the modified file data generated by the application.
At the close, file data is written to the server and through normal accounting the server is able to determine if the available filesystem space for the data has been exceeded (i.e., server returns NFS4ERR_NOSPC or NFS4ERR_DQUOT). This accounting includes quotas. The introduction of delegations requires that a @@ -6843,56 +7312,59 @@ original delegation. The server must make this assurance for all outstanding delegations. Therefore, the server must be careful in its management of available space for new or modified data taking into account available filesystem space and any applicable quotas. The server can recall delegations as a result of managing the available filesystem space. The client should abide by the server's state space limits for delegations. If the client exceeds the stated limits for the delegation, the server's behavior is undefined. Based on server conditions, quotas or available filesystem space, the - server may grant write open delegations with very restrictive space - limitations. The limitations may be defined in a way that will - always force modified data to be flushed to the server on close. + server may grant OPEN_DELEGATE_WRITE delegations with very + restrictive space limitations. The limitations may be defined in a + way that will always force modified data to be flushed to the server + on close. With respect to authentication, flushing modified data to the server after a CLOSE has occurred may be problematic. For example, the user of the application may have logged off the client and unexpired authentication credentials may not be present. In this case, the client may need to take special care to ensure that local unexpired credentials will in fact be available. This may be accomplished by tracking the expiration time of credentials and flushing data well in advance of their expiration or by making private copies of credentials to assure their availability when needed. 10.4.2. 
Open Delegation and File Locks - When a client holds a write open delegation, lock operations may be - performed locally. This includes those required for mandatory file - locking. This can be done since the delegation implies that there - can be no conflicting locks. Similarly, all of the revalidations - that would normally be associated with obtaining locks and the - flushing of data associated with the releasing of locks need not be - done. + When a client holds an OPEN_DELEGATE_WRITE delegation, lock operations + may be performed locally. This includes those required for mandatory + file locking. This can be done since the delegation implies that + there can be no conflicting locks. Similarly, all of the + revalidations that would normally be associated with obtaining locks + and the flushing of data associated with the releasing of locks need + not be done. - When a client holds a read open delegation, lock operations are not - performed locally. All lock operations, including those requesting - non-exclusive locks, are sent to the server for resolution. + When a client holds an OPEN_DELEGATE_READ delegation, lock operations + are not performed locally. All lock operations, including those + requesting non-exclusive locks, are sent to the server for + resolution. 10.4.3. Handling of CB_GETATTR The server needs to employ special handling for a GETATTR where the - target is a file that has a write open delegation in effect. + target is a file that has an OPEN_DELEGATE_WRITE delegation in effect.
+ The reason for this is that the client holding the + OPEN_DELEGATE_WRITE delegation may have modified the data and the + server needs to reflect this change to the second client that + submitted the GETATTR. Therefore, the client holding the + OPEN_DELEGATE_WRITE delegation needs to be interrogated. The server will use the CB_GETATTR operation. The only attributes that the server can reliably query via CB_GETATTR are size and change. Since CB_GETATTR is being used to satisfy another client's GETATTR request, the server only needs to know if the client holding the delegation has a modified version of the file. If the client's copy of the delegated file is not modified (data or size), the server can satisfy the second client's GETATTR request from the attributes stored locally at the server. If the file is modified, the server only needs to know about this modified state. If the server @@ -6897,25 +7369,24 @@ stored locally at the server. If the file is modified, the server only needs to know about this modified state. If the server determines that the file is currently modified, it will respond to the second client's GETATTR as if the file had been modified locally at the server. Since the form of the change attribute is determined by the server and is opaque to the client, the client and server need to agree on a method of communicating the modified state of the file. For the size attribute, the client will report its current view of the file size. - For the change attribute, the handling is more involved. For the client, the following steps will be taken when receiving - a write delegation: + an OPEN_DELEGATE_WRITE delegation: o The value of the change attribute will be obtained from the server and cached. Let this value be represented by c. o The client will create a value greater than c that will be used for communicating that modified data is held at the client. Let this value be represented by d.
o When the client is queried via CB_GETATTR for the change attribute, it checks to see if it holds modified data. If the @@ -6933,28 +7404,28 @@ While the change attribute is opaque to the client in the sense that it has no idea what units of time, if any, the server is counting change with, it is not opaque in that the client has to treat it as an unsigned integer, and the server has to be able to see the results of the client's changes to that integer. Therefore, the server MUST encode the change attribute in network order when sending it to the client. The client MUST decode it from network order to its native order when receiving it and the client MUST encode it in network order when sending it to the server. For this reason, change is defined as - an unsigned integer rather than an opaque array of octets. + an unsigned integer rather than an opaque array of bytes. For the server, the following steps will be taken when providing - a write delegation: + an OPEN_DELEGATE_WRITE delegation: - o Upon providing a write delegation, the server will cache a copy of - the change attribute in the data structure it uses to record the - delegation. Let this value be represented by sc. + o Upon providing an OPEN_DELEGATE_WRITE delegation, the server will + cache a copy of the change attribute in the data structure it uses + to record the delegation. Let this value be represented by sc. o When a second client sends a GETATTR operation on the same file to the server, the server obtains the change attribute from the first client. Let this value be cc. o If the value cc is equal to sc, the file is not modified and the server returns the current values for change, time_metadata, and time_modify (for example) to the second client. o If the value cc is NOT equal to sc, the file is currently modified @@ -6967,34 +7438,34 @@ requester. The server replaces sc in the delegation record with nsc.
To prevent the possibility of time_modify, time_metadata, and change from appearing to go backward (which would happen if the client holding the delegation fails to write its modified data to the server before the delegation is revoked or returned), the server SHOULD update the file's metadata record with the constructed attribute values. For reasons of reasonable performance, committing the constructed attribute values to stable storage is OPTIONAL. - As discussed earlier in this section, the client MAY return the - same cc value on subsequent CB_GETATTR calls, even if the file was + As discussed earlier in this section, the client MAY return the same + cc value on subsequent CB_GETATTR calls, even if the file was modified in the client's cache yet again between successive CB_GETATTR calls. Therefore, the server must assume that the file has been modified yet again, and MUST take care to ensure that the - new nsc it constructs and returns is greater than the previous nsc - it returned. An example implementation's delegation record would + new nsc it constructs and returns is greater than the previous nsc it + returned. An example implementation's delegation record would satisfy this mandate by including a boolean field (let us call it - "modified") that is set to false when the delegation is granted, - and an sc value set at the time of grant to the change attribute - value. The modified field would be set to true the first time cc - != sc, and would stay true until the delegation is returned or - revoked. The processing for constructing nsc, time_modify, and - time_metadata would use this pseudo code: + "modified") that is set to FALSE when the delegation is granted, and + an sc value set at the time of grant to the change attribute value. + The modified field would be set to TRUE the first time cc != sc, and + would stay TRUE until the delegation is returned or revoked. 
The + processing for constructing nsc, time_modify, and time_metadata would + use this pseudo code: if (!modified) { do CB_GETATTR for change and size; if (cc != sc) modified = TRUE; } else { do CB_GETATTR for size; } @@ -6994,30 +7465,29 @@ if (cc != sc) modified = TRUE; } else { do CB_GETATTR for size; } if (modified) { sc = sc + 1; time_modify = time_metadata = current_time; - update sc, time_modify, time_metadata into file's metadata; } - return to client (that sent GETATTR) the attributes - it requested, but make sure size comes from what - CB_GETATTR returned. Do not update the file's metadata - with the client's modified size. + This would return to the client (that sent GETATTR) the attributes it + requested, but make sure size comes from what CB_GETATTR returned. + The server would not update the file's metadata with the client's + modified size. - o In the case that the file attribute size is different than the + In the case that the file attribute size is different than the server's current value, the server treats this as a modification regardless of the value of the change attribute retrieved via CB_GETATTR and responds to the second client as in the last step. This methodology resolves issues of clock differences between client and server and other scenarios where the use of CB_GETATTR breaks down. It should be noted that the server is under no obligation to use CB_GETATTR and therefore the server MAY simply recall the delegation @@ -7061,109 +7531,147 @@ o If a file has other open references at the client, then OPEN operations must be sent to the server. The appropriate stateids will be provided by the server for subsequent use by the client since the delegation stateid will no longer be valid. These OPEN requests are done with the claim type of CLAIM_DELEGATE_CUR. This will allow the presentation of the delegation stateid so that the client can establish the appropriate rights to perform the OPEN. (see Section 15.18 for details.)
o If there are granted file locks, the corresponding LOCK operations - need to be performed. This applies to the write open delegation - case only. + need to be performed. This applies to the OPEN_DELEGATE_WRITE + delegation case only. - o For a write open delegation, if at the time of recall the file is - not open for write, all modified data for the file must be flushed - to the server. If the delegation had not existed, the client - would have done this data flush before the CLOSE operation. + o For an OPEN_DELEGATE_WRITE delegation, if at the time of recall + the file is not open for write, all modified data for the file + must be flushed to the server. If the delegation had not existed, + the client would have done this data flush before the CLOSE + operation. - o For a write open delegation when a file is still open at the time - of recall, any modified data for the file needs to be flushed to - the server. + o For an OPEN_DELEGATE_WRITE delegation when a file is still open at + the time of recall, any modified data for the file needs to be + flushed to the server. - o With the write open delegation in place, it is possible that the - file was truncated during the duration of the delegation. For - example, the truncation could have occurred as a result of an OPEN - UNCHECKED with a size attribute value of zero. Therefore, if a - truncation of the file has occurred and this operation has not - been propagated to the server, the truncation must occur before - any modified data is written to the server. + o With the OPEN_DELEGATE_WRITE delegation in place, it is possible + that the file was truncated during the duration of the delegation. + For example, the truncation could have occurred as a result of an + OPEN UNCHECKED4 with a size attribute value of zero. Therefore, + if a truncation of the file has occurred and this operation has + not been propagated to the server, the truncation must occur + before any modified data is written to the server.
- In the case of write open delegation, file locking imposes some - additional requirements. To precisely maintain the associated + In the case of OPEN_DELEGATE_WRITE delegation, file locking imposes + some additional requirements. To precisely maintain the associated invariant, it is required to flush any modified data in any region - for which a write lock was released while the write delegation was in - effect. However, because the write open delegation implies no other - locking by other clients, a simpler implementation is to flush all - modified data for the file (as described just above) if any write - lock has been released while the write open delegation was in effect. + for which a write lock was released while the OPEN_DELEGATE_WRITE + delegation was in effect. However, because the OPEN_DELEGATE_WRITE + delegation implies no other locking by other clients, a simpler + implementation is to flush all modified data for the file (as + described just above) if any write lock has been released while the + OPEN_DELEGATE_WRITE delegation was in effect. An implementation need not wait until delegation recall (or deciding to voluntarily return a delegation) to perform any of the above actions, if implementation considerations (e.g., resource availability constraints) make that desirable. Generally, however, the fact that the actual open state of the file may continue to change makes it not worthwhile to send information about opens and closes to the server, except as part of delegation return. Only in the case of closing the open that resulted in obtaining the delegation would clients be likely to do this early, since, in that case, the close once done will not be undone. Regardless of the client's choices on scheduling these actions, all must be performed before the delegation is returned, including (when applicable) the close that corresponds to the open that resulted in the delegation. 
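The attribute construction shown in the pseudo code earlier in this section can be sketched more concretely as follows. This is a minimal illustration only; the class and method names are hypothetical, and a real server would work with actual change attribute values and decide separately whether to commit the constructed attributes to stable storage:

```python
import time

class DelegationRecord:
    """Hypothetical per-delegation state, as described in the text:
    'sc' caches the change attribute value at the time of grant, and
    'modified' starts FALSE and latches TRUE the first time cc != sc."""

    def __init__(self, sc):
        self.sc = sc
        self.modified = False

    def construct_attrs(self, cb_getattr):
        # cb_getattr() stands in for the CB_GETATTR callback; it
        # returns the client's current change value (cc) and size.
        if not self.modified:
            cc, size = cb_getattr()        # ask for change and size
            if cc != self.sc:
                self.modified = True
        else:
            _, size = cb_getattr()         # only size is needed
        if self.modified:
            # Construct nsc so it is always greater than the previous
            # value returned, and fabricate the time attributes.
            self.sc += 1
            now = time.time()
            return self.sc, size, now, now  # nsc, size, time_modify, time_metadata
        return self.sc, size, None, None    # attributes unchanged
```

Note that, as required above, the size always comes from what CB_GETATTR returned, and the client's modified size is never written into the file's metadata.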
These actions can be performed either in previous requests or in previous operations in the same COMPOUND request. -10.4.5. Clients that Fail to Honor Delegation Recalls +10.4.5. OPEN Delegation Race with CB_RECALL + + The server informs the client of recall via a CB_RECALL. A race case + which may develop is when the delegation is immediately recalled + before the COMPOUND which established the delegation is returned to + the client. As the CB_RECALL provides both a stateid and a + filehandle for which the client has no mapping, it cannot honor the + recall attempt. At this point, the client has two choices, either do + not respond or respond with NFS4ERR_BADHANDLE. If it does not + respond, then it runs the risk of the server deciding to not grant it + further delegations. + + If instead it does reply with NFS4ERR_BADHANDLE, then both the client + and the server might be able to detect that a race condition is + occurring. The client can keep a list of pending delegations. When + it receives a CB_RECALL for an unknown delegation, it can cache the + stateid and filehandle on a list of pending recalls. When it is + provided with a delegation, it would only use it if it was not on the + pending recall list. Upon the next CB_RECALL, it could immediately + return the delegation. + + In turn, the server can keep track of when it issues a delegation and + assume that if a client responds to the CB_RECALL with a + NFS4ERR_BADHANDLE, then the client has yet to receive the delegation. + The server SHOULD give the client a reasonable time both to get this + delegation and to return it before revoking the delegation. Unlike a + failed callback path, the server should periodically probe the client + with CB_RECALL to see if it has received the delegation and is ready + to return it. + + When the server finally determines that enough time has lapsed, it + SHOULD revoke the delegation and it SHOULD NOT revoke the lease. 
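The client-side bookkeeping described above, a pending-recall list matched against later-arriving grants, can be sketched as follows. The class and method names are hypothetical, and the delegation state is a placeholder:

```python
class DelegationRaceHandler:
    """Hypothetical sketch of client handling for the race in which a
    CB_RECALL arrives before the COMPOUND that granted the delegation."""

    def __init__(self):
        self.delegations = {}         # (filehandle, stateid) -> delegation
        self.pending_recalls = set()  # recalls for delegations not yet seen
        self.returned = []            # delegations sent back via DELEGRETURN

    def on_cb_recall(self, filehandle, stateid):
        key = (filehandle, stateid)
        if key in self.delegations:
            # Known delegation (possibly one granted after a cached
            # recall): return it immediately.
            self.pending_recalls.discard(key)
            self._return_delegation(key)
            return "NFS4_OK"
        # No mapping for this stateid and filehandle: cache it on the
        # pending-recall list and let the server detect the race.
        self.pending_recalls.add(key)
        return "NFS4ERR_BADHANDLE"

    def on_delegation_granted(self, filehandle, stateid):
        key = (filehandle, stateid)
        self.delegations[key] = object()   # placeholder delegation state
        # Only use the delegation if it is not on the pending-recall
        # list; a recalled delegation is returned on the next CB_RECALL.
        return key not in self.pending_recalls

    def _return_delegation(self, key):
        self.delegations.pop(key, None)    # DELEGRETURN would go here
        self.returned.append(key)
```

On the server side, a reply of NFS4ERR_BADHANDLE shortly after a grant would then be treated as a hint that the client has not yet seen the delegation, prompting the periodic CB_RECALL probes described above.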
+ During this extended recall process, the server SHOULD be renewing + the client lease. The intent here is that the client not pay too + onerous a burden for a condition caused by the server. + +10.4.6. Clients that Fail to Honor Delegation Recalls A client may fail to respond to a recall for various reasons, such as a failure of the callback path from server to the client. The client may be unaware of a failure in the callback path. This lack of awareness could result in the client finding out long after the failure that its delegation has been revoked, and another client has modified the data for which the client had a delegation. This is - especially a problem for the client that held a write delegation. + especially a problem for the client that held an OPEN_DELEGATE_WRITE + delegation. The server also has a dilemma in that the client that fails to respond to the recall might also be sending other NFS requests, including those that renew the lease before the lease expires. Without returning an error for those lease renewing operations, the server leads the client to believe that the delegation it has is in force. This difficulty is solved by the following rules: o When the callback path is down, the server MUST NOT revoke the delegation if one of the following occurs: * The client has issued a RENEW operation and the server has returned an NFS4ERR_CB_PATH_DOWN error. The server MUST renew - the lease for any record locks and share reservations the + the lease for any byte-range locks and share reservations the client has that the server has known about (as opposed to those locks and share reservations the client has established but not yet sent to the server, due to the delegation). The server SHOULD give the client a reasonable time to return its delegations to the server before revoking the client's delegations. * The client has not issued a RENEW operation for some period of time after the server attempted to recall the delegation.
This period of time MUST NOT be less than the value of the lease_time attribute. o When the client holds a delegation, it cannot rely on operations, except for RENEW, that take a stateid, to renew delegation leases across callback path failures. The client that wants to keep delegations in force across callback path failures must use RENEW to do so. -10.4.6. Delegation Revocation +10.4.7. Delegation Revocation At the point a delegation is revoked, if there are associated opens on the client, the applications holding these opens need to be notified. This notification usually occurs by returning errors for READ/WRITE operations or when a close is attempted for the open file. If no opens exist for the file at the point the delegation is revoked, then notification of the revocation is unnecessary. However, if there is modified data present at the client for the file, the user of the application should be notified. Unfortunately, @@ -7196,25 +7704,25 @@ operations may not be returned, more drastic action such as signals or process termination may be appropriate. The justification for this is that an invariant on which an application depends may be violated. Depending on how errors are typically treated for the client operating environment, further levels of notification including logging, console messages, and GUI pop-ups may be appropriate. 10.5.1. Revocation Recovery for Write Open Delegation - Revocation recovery for a write open delegation poses the special - issue of modified data in the client cache while the file is not - open. + Revocation recovery for an OPEN_DELEGATE_WRITE delegation poses the + special issue of modified data in the client cache while the file is + not open.
In this situation, any client which does not flush + modified data to the server on each close must ensure that the user + receives appropriate notification of the failure as a result of the revocation. Since such situations may require human action to correct problems, notification schemes in which the appropriate user or administrator is notified may be necessary. Logging and console messages are typical examples. If there is modified data on the client, it must not be flushed normally to the server. A client may attempt to provide a copy of the file data as modified during the delegation under a different name in the filesystem name space to ease recovery. Note that when the client can determine that the file has not been modified by any @@ -7258,21 +7766,21 @@ may be returned to the server in the response to a CB_RECALL call. The result of local caching of attributes is that the attribute caches maintained on individual clients will not be coherent. Changes made in one order on the server may be seen in a different order on one client and in a third order on a different client. The typical filesystem application programming interfaces do not provide means to atomically modify or interrogate attributes for multiple files at the same time. The following rules provide an - environment where the potential incoherences mentioned above can be + environment where the potential incoherency mentioned above can be reasonably managed. These rules are derived from the practice of previous NFS protocols. o All attributes for a given file (per-fsid attributes excepted) are cached as a unit at the client so that no non-serializability can arise within the context of a single file. o An upper time boundary is maintained on how long a client cache entry can be kept without being refreshed from the server. @@ -7359,33 +7867,33 @@ OPTIONAL. 
o If the memory mapped file is not being modified on the server, and instead is just being read by an application via the memory mapped interface, the client will not see an updated time_access attribute. However, in many operating environments, neither will any process running on the server. Thus NFS clients are at no disadvantage with respect to local processes. o If there is another client that is memory mapping the file, and if - that client is holding a write delegation, the same set of issues - as discussed in the previous two bullet items apply. So, when a - server does a CB_GETATTR to a file that the client has modified in - its cache, the response from CB_GETATTR will not necessarily be - accurate. As discussed earlier, the client's obligation is to - report that the file has been modified since the delegation was - granted, not whether it has been modified again between successive - CB_GETATTR calls, and the server MUST assume that any file the - client has modified in cache has been modified again between - successive CB_GETATTR calls. Depending on the nature of the - client's memory management system, this weak obligation may not be - possible. A client MAY return stale information in CB_GETATTR - whenever the file is memory mapped. + that client is holding a OPEN_DELEGATE_WRITE delegation, the same + set of issues as discussed in the previous two bullet items apply. + So, when a server does a CB_GETATTR to a file that the client has + modified in its cache, the response from CB_GETATTR will not + necessarily be accurate. As discussed earlier, the client's + obligation is to report that the file has been modified since the + delegation was granted, not whether it has been modified again + between successive CB_GETATTR calls, and the server MUST assume + that any file the client has modified in cache has been modified + again between successive CB_GETATTR calls. 
Depending on the + nature of the client's memory management system, this weak + obligation may not be possible. A client MAY return stale + information in CB_GETATTR whenever the file is memory mapped. o The mixture of memory mapping and file locking on the same file is problematic. Consider the following scenario, where the page size on each client is 8192 bytes. * Client A memory maps first page (8192 bytes) of file X * Client B memory maps first page (8192 bytes) of file X * Client A write locks first 4096 bytes @@ -7403,38 +7910,38 @@ virtual memory management systems on each client only know a page is modified, not that a subset of the page corresponding to the respective lock regions has been modified. So it is not possible for each client to do the right thing, which is to only write to the server that portion of the page that is locked. For example, if client A simply writes out the page, and then client B writes out the page, client A's data is lost. Moreover, if mandatory locking is enabled on the file, then we have a different problem. When clients A and B issue the STORE - instructions, the resulting page faults require a record lock on the - entire page. Each client then tries to extend their locked range to - the entire page, which results in a deadlock. + instructions, the resulting page faults require a byte-range lock on + the entire page. Each client then tries to extend their locked range + to the entire page, which results in a deadlock. Communicating the NFS4ERR_DEADLOCK error to a STORE instruction is difficult at best. If a client is locking the entire memory mapped file, there is no - problem with advisory or mandatory record locking, at least until the - client unlocks a region in the middle of the file. + problem with advisory or mandatory byte-range locking, at least until + the client unlocks a region in the middle of the file. 
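The page-granularity write-back problem in the scenario above can be illustrated with a small simulation (page size 8192, as in the example; the variable names are invented for illustration):

```python
PAGE = 8192
server_file = bytearray(PAGE)   # first page of file X on the server

# Clients A and B each memory map the first page; A byte-range locks
# and modifies the first 4096 bytes, B the second 4096 bytes. Each
# client's VM system only knows the whole page is dirty, not which
# subset of the page corresponds to its lock region.
page_a = bytearray(server_file)
page_b = bytearray(server_file)
page_a[0:4096] = b"A" * 4096
page_b[4096:8192] = b"B" * 4096

# Each client writes back the entire page, not just its locked region.
server_file[:] = page_a
server_file[:] = page_b          # B's write-back clobbers A's update

assert server_file[:4096] == b"\x00" * 4096   # A's data is lost
assert server_file[4096:] == b"B" * 4096
```

Writing back only the locked byte range in each client would avoid the lost update, which is exactly what the page-granularity VM bookkeeping makes impossible.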
Given the above issues the following are permitted: o Clients and servers MAY deny memory mapping a file they know there - are record locks for. + are byte-range locks for. - o Clients and servers MAY deny a record lock on a file they know is - memory mapped. + o Clients and servers MAY deny a byte-range lock on a file they know + is memory mapped. o A client MAY deny memory mapping a file that it knows requires mandatory locking for I/O. If mandatory locking is enabled after the file is opened and mapped, the client MAY deny the application further access to its mapped file. 10.8. Name Caching The results of LOOKUP and READDIR operations may be cached to avoid the cost of subsequent LOOKUP operations. Just as in the case of @@ -7522,74 +8030,77 @@ operation change attribute values atomically. When the server is unable to report the before and after values atomically with respect to the directory operation, the server must indicate that fact in the change_info4 return value. When the information is not atomically reported, the client should not assume that other clients have not changed the directory. 11. Minor Versioning To address the requirement of an NFS protocol that can evolve as the - need arises, the NFS version 4 protocol contains the rules and - framework to allow for future minor changes or versioning. + need arises, the NFSv4 protocol contains the rules and framework to + allow for future minor changes or versioning. The base assumption with respect to minor versioning is that any future accepted minor version must follow the IETF process and be documented in a standards track RFC. Therefore, each minor version number will correspond to an RFC. Minor version zero of the NFS - version 4 protocol is represented by this RFC. The COMPOUND - procedure will support the encoding of the minor version being - requested by the client. + version 4 protocol is represented by this RFC. 
The COMPOUND and + CB_COMPOUND procedures support the encoding of the minor version + being requested by the client. The following items represent the basic rules for the development of minor versions. Note that a future minor version may decide to modify or add to the following rules as part of the minor version definition. 1. Procedures are not added or deleted - To maintain the general RPC model, NFS version 4 minor versions - will not add to or delete procedures from the NFS program. + To maintain the general RPC model, NFSv4 minor versions will not + add to or delete procedures from the NFS program. 2. Minor versions may add operations to the COMPOUND and CB_COMPOUND procedures. The addition of operations to the COMPOUND and CB_COMPOUND procedures does not affect the RPC model. - 1. Minor versions may append attributes to GETATTR4args, - bitmap4, and GETATTR4res. + 1. Minor versions may append attributes to the bitmap4 that + represents sets of attributes and to the fattr4 that + represents sets of attribute values. This allows for the expansion of the attribute model to allow for future growth or adaptation. 2. Minor version X must append any new attributes after the last documented attribute. Since attribute results are specified as an opaque array of per-attribute XDR encoded results, the complexity of adding - new attributes in the midst of the current definitions will + new attributes in the midst of the current definitions would be too burdensome. 3. Minor versions must not modify the structure of an existing operation's arguments or results. - Again the complexity of handling multiple structure definitions + Again, the complexity of handling multiple structure definitions for a single operation is too burdensome. New operations should be added instead of modifying existing structures for a minor version. This rule does not preclude the following adaptations in a minor version. 
- * adding bits to flag fields such as new attributes to - GETATTR's bitmap4 data type + * adding bits to flag fields, such as new attributes to + GETATTR's bitmap4 data type, and providing corresponding + variants of opaque arrays, such as a notify4 used together + with such bitmaps * adding bits to existing attributes like ACLs that have flag words * extending enumerated types (including NFS4ERR_*) with new values 4. Minor versions must not modify the structure of existing attributes. @@ -7627,70 +8138,70 @@ 11. A client and server that support minor version X SHOULD support minor versions 0 (zero) through X-1 as well. 12. Except for infrastructural changes, no new features may be introduced as REQUIRED in a minor version. This rule allows for the introduction of new functionality and forces the use of implementation experience before designating a feature as REQUIRED. On the other hand, some classes of features are infrastructural and have broad effects. Allowing - such features to not be REQUIRED complicates implementation of - the minor version. + infrastructural features to be RECOMMENDED or OPTIONAL + complicates implementation of the minor version. 13. A client MUST NOT attempt to use a stateid, filehandle, or similar returned object from the COMPOUND procedure with minor version X for another COMPOUND procedure with minor version Y, where X != Y. 12. Internationalization - This chapter describes the string-handling aspects of the NFS version - 4 protocol, and how they address issues related to + This chapter describes the string-handling aspects of the NFSv4 + protocol, and how they address issues related to internationalization, including issues related to UTF-8, normalization, string preparation, case folding, and handling of internationalization issues related to domains. - The NFS version 4 protocol needs to deal with internationalization, - or I18N, with respect to file names and other strings as used within - the protocol. 
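These rules imply that a server dispatches on the minor version carried in each COMPOUND before processing any operation. A hypothetical sketch of that check follows; NFS4ERR_MINOR_VERS_MISMATCH is the error the protocol defines for an unsupported minor version, while the function shape and operation representation here are invented for illustration:

```python
SUPPORTED_MINOR_VERSIONS = {0}   # minor version zero only, per this RFC

NFS4_OK = 0
NFS4ERR_MINOR_VERS_MISMATCH = 10021

def compound_dispatch(minorversion, operations):
    """Hypothetical COMPOUND entry point: reject any request whose
    minorversion is unsupported before processing any operation."""
    if minorversion not in SUPPORTED_MINOR_VERSIONS:
        # No operations are processed on a minor version mismatch.
        return NFS4ERR_MINOR_VERS_MISMATCH, []
    results = [op() for op in operations]
    return NFS4_OK, results
```

The same check would apply to CB_COMPOUND, and, per rule 13, state objects such as stateids and filehandles obtained under one minor version are never carried into a COMPOUND using another.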
The choice of string representation must allow for + The NFSv4 protocol needs to deal with internationalization, or I18N, + with respect to file names and other strings as used within the + protocol. The choice of string representation must allow for reasonable name/string access to clients, applications, and users which use various languages. The UTF-8 encoding of the UCS as defined by [7] allows for this type of access and follows the policy described in "IETF Policy on Character Sets and Languages", [8]. In implementing such policies, it is important to understand and - respect the nature of NFS version 4 as a means by which client + respect the nature of NFSv4 as a means by which client implementations may invoke operations on remote file systems. Server implementations act as a conduit to a range of file system - implementations that the NFS version 4 server typically invokes - through a virtual-file-system interface. + implementations that the NFSv4 server typically invokes through a + virtual-file-system interface. Keeping this context in mind, one needs to understand that the file systems with which clients will be interacting will generally not be devoted solely to access using NFS version 4. Local access and its requirements will generally be important and often access over other remote file access protocols will be as well. It is generally a - functional requirement in practice for the users of the NFS version 4 + functional requirement in practice for the users of the NFSv4 protocol (although it may be formally out of scope for this document) for the implementation to allow files created by other protocols and by local operations on the file system to be accessed using NFS version 4 as well. 
It also needs to be understood that a considerable portion of file name processing will occur within the implementation of the file - system rather than within the limits of the NFS version 4 server + system rather than within the limits of the NFSv4 server implementation per se. As a result, certain aspects of name processing may change as the locus of processing moves from file system to file system. As a result of these factors, the protocol - cannot enforce uniformity of name-related processing upon NFS version - 4 server requests on the server as a whole. Because the server + cannot enforce uniformity of name-related processing upon NFSv4 + server requests on the server as a whole. Because the server interacts with existing file system implementations, the same server handling will produce different behavior when interacting with different file system implementations. To attempt to require uniform behavior, and treat the protocol server and the file system as a unified application, would considerably limit the usefulness of the protocol. 12.1. Use of UTF-8 As mentioned above, UTF-8 is used as a convenient way to encode @@ -7698,86 +8209,85 @@ requirements to avoid these issues since the mapping of ASCII names to UTF-8 is the identity. 12.1.1. Relation to Stringprep RFC 3454 [9], otherwise known as "stringprep", documents a framework for using Unicode/UTF-8 in networking protocols, intended "to increase the likelihood that string input and string comparison work in ways that make sense for typical users throughout the world." A protocol conforming to this framework must define a profile of - stringprep "in order to fully specify the processing options."
+ While NFSv4 does make normative references to stringprep and uses + elements of that framework, it does not, for reasons that are explained below, conform to that framework for all of the strings that are used within it. In addition to some specific issues which have caused stringprep to add confusion in handling certain characters for certain languages, there are a number of general reasons why stringprep profiles are not - suitable for describing NFS version 4. + suitable for describing NFSv4. o Restricting the character repertoire to Unicode 3.2, as required by stringprep, is unduly constricting. o Many of the character tables in stringprep are inappropriate because of this limited character repertoire, so that normative reference to stringprep is not desirable in many cases and instead, we allow more flexibility in the definition of case mapping tables. o Because of the presence of different file systems, the specifics of processing are not fully defined and some aspects that are defined are RECOMMENDED, rather than REQUIRED. Despite these issues, in many cases the general structure of stringprep profiles, consisting of sections which deal with the - applicability of the description, the character repertoire, charcter + applicability of the description, the character repertoire, character mapping, normalization, prohibited characters, and issues of the handling (i.e., possible prohibition) of bidirectional strings, is a convenient way to describe the string handling which is needed and will be used where appropriate. 12.1.2. Normalization, Equivalence, and Confusability Unicode has defined several equivalence relationships among the set of possible strings. Understanding the nature and purpose of these equivalence relations is important to understand the handling of - Unicode strings within NFS version 4. + Unicode strings within NFSv4.
Some string pairs are thought of as only differing in the way accents and other diacritics are encoded, as illustrated in the examples below. Such string pairs are called "canonically equivalent". Such equivalence can occur when there are precomposed characters, as an alternative to encoding a base character in addition to a combining accent. For example, the character LATIN SMALL LETTER E WITH ACUTE (U+00E9) is defined as canonically equivalent to the string consisting of LATIN SMALL LETTER E followed by COMBINING ACUTE ACCENT (U+0065, U+0301). When multiple combining diacritics are present, differences in the ordering are not reflected in resulting display and the strings are defined as canonically equivalent. For example, the string consisting of LATIN SMALL LETTER Q, COMBINING ACUTE ACCENT, COMBINING GRAVE ACCENT (U+0071, U+0301, U+0300) is canonically - quivalent to the string consisting of LATIN SMALL LETTER Q, + equivalent to the string consisting of LATIN SMALL LETTER Q, COMBINING GRAVE ACCENT, COMBINING ACUTE ACCENT (U+0071, U+0300, U+0301). When both situations are present, the number of canonically equivalent strings can be greater. Thus, the following strings are all canonically equivalent: LATIN SMALL LETTER E, COMBINING MACRON, COMBINING ACUTE ACCENT (U+0xxx, U+0304, U+0301) - LATIN SMALL LETTER E, COMBINING ACUTE ACCENT, COMBINING MACRON (U+0xxx, U+0301, U+0304) LATIN SMALL LETTER E WITH MACRON, COMBINING ACUTE ACCENT (U+0113, U+0301) LATIN SMALL LETTER E WITH ACUTE, COMBINING MACRON (U+00E9, U+0304) LATIN SMALL LETTER E WITH MACRON AND ACUTE (U+1E16) @@ -7782,157 +8292,158 @@ LATIN SMALL LETTER E WITH MACRON AND ACUTE (U+1E16) Additionally there is an equivalence relation of "compatibility equivalence". Two canonically equivalent strings are necessarily compatibility equivalent, although not the converse.
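These equivalence relations map directly onto the Unicode normalization forms. The following illustration, using Python's unicodedata module, is not part of the protocol; it simply demonstrates the two relations on examples from this section:

```python
import unicodedata

# Canonical equivalence: precomposed LATIN SMALL LETTER E WITH ACUTE
# (U+00E9) vs. base letter plus COMBINING ACUTE ACCENT (U+0065 U+0301).
precomposed = "\u00e9"
decomposed = "\u0065\u0301"
assert precomposed != decomposed                    # distinct code point sequences
assert unicodedata.normalize("NFC", decomposed) == precomposed
assert unicodedata.normalize("NFD", precomposed) == decomposed

# Compatibility equivalence: "x" followed by SUPERSCRIPT TWO (U+00B2)
# folds to the two-character string "x2" under NFKC, but canonical
# normalization (NFC) leaves it unchanged.
x_squared = "x\u00b2"
assert unicodedata.normalize("NFKC", x_squared) == "x2"
assert unicodedata.normalize("NFC", x_squared) == x_squared
```

A normalization form thus maps every member of an equivalence class to the same representative string, which is what makes it usable for comparing names.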
An example of compatibility equivalent strings which are not canonically equivalent are GREEK CAPITAL LETTER OMEGA (U+03A9) and OHM SIGN (U+2129). These are identical in appearance while other compatibility equivalent strings are not. Another example would be "x2" and the two character - string denoting x-squared which are clearly differnt in appearance + string denoting x-squared which are clearly different in appearance although compatibility equivalent and not canonically equivalent. These have Unicode encodings LATIN SMALL LETTER X, DIGIT TWO (U+0078, U+0032) and LATIN SMALL LETTER X, SUPERSCRIPT TWO (U+0078, U+00B2), One way to deal with these equivalence relations is via normalization. A normalization form maps all strings to a - correspondig normalized string in such a fashion that all strings + corresponding normalized string in such a fashion that all strings that are equivalent (canonically or compatibly, depending on the form) are mapped to the same value. Thus the image of the mapping is - a subset of Unicode strings conceived as the representives of the + a subset of Unicode strings conceived as the representatives of the equivalence classes defined by the chosen equivalence relation. - In the NFS version 4 protocol, handling of issues related to + In the NFSv4 protocol, handling of issues related to internationalization with regard to normalization follows one of two basic patterns: o For strings whose function is related to other internet standards, such as server and domain naming, the normalization form defined by the appropriate internet standards is used. 
For server and domain naming, this involves normalization form NFKC as specified in [10]. o For other strings, particularly those passed by the server to file system implementations, normalization requirements are the province of the file system and the job of this specification is not to specify a particular form but to make sure that - interoperability is maximmized, even when clients and server-based + interoperability is maximized, even when clients and server-based file systems have different preferences. A related but distinct issue concerns string confusability. This can - occur when two strings (including single-charcter strings) having a + occur when two strings (including single-character strings) have a similar appearance. There have been attempts to define uniform processing in an attempt to avoid such confusion (see stringprep [9]) but the results have often added confusion. Some examples of possible confusions and proposed processing intended to reduce/avoid confusions: o Deletion of characters believed to be invisible and appropriately ignored, justifying their deletion, including WORD JOINER (U+2060), and the ZERO WIDTH SPACE (U+200B). o Deletion of characters supposed to not bear semantics and only affect glyph choice, including the ZERO WIDTH NON-JOINER (U+200C) and the ZERO WIDTH JOINER (U+200D), where the deletion turns out to be a problem for Farsi speakers. o Prohibition of space characters such as the EM SPACE (U+2003), the EN SPACE (U+2002), and the THIN SPACE (U+2009). - In addition, character pairs which apprear very similar and could and + In addition, there are character pairs which appear very similar and often do result in confusion. In addition to what Unicode defines as "compatibility equivalence", there are a considerable number of additional character pairs that could cause confusion.
   This includes characters such as LATIN CAPITAL LETTER O (U+004F) and
   DIGIT ZERO (U+0030), and CYRILLIC SMALL LETTER ER (U+0440) and LATIN
   SMALL LETTER P (U+0070) (also with MATHEMATICAL BOLD SMALL P
   (U+1D429) and GREEK SMALL LETTER RHO (U+03C1), for good measure).

-  NFS version 4, as it does with normalization, takes a two-part
-  approach to this issue:
+  NFSv4, as it does with normalization, takes a two-part approach to
+  this issue:

   o  For strings whose function is related to other internet standards,
      such as server and domain naming, any string processing to address
      the confusability issue is defined by the appropriate internet
      standards.  For server and domain naming, this is the
      responsibility of IDNA as described in [10].

   o  For other strings, particularly those passed by the server to file
      system implementations, any such preparation requirements
      including the choice of how, or whether to address the
      confusability issue, are the responsibility of the file system to
      define, and for this specification to try to add its own set would
      add unacceptably to complexity, and make many files accessible
      locally and by other remote file access protocols, inaccessible by
-     NFS version 4.  This specification defines how the protocol
-     maximizes interoperability in the face of different file system
-     implementations .  NFS version 4 does allow file systems to map
-     and to reject characters, including those likely to result in
-     confusion, since file systems may choose to do such things.  It
-     defines what the client will see in such cases, in order to limit
-     problems that can arise when a file name is created and it appears
-     to have a different name from the one it is assigned when the name
-     is created.
+     NFSv4.  This specification defines how the protocol maximizes
+     interoperability in the face of different file system
+     implementations.
NFSv4 does allow file systems to map and to
+     reject characters, including those likely to result in confusion,
+     since file systems may choose to do such things.  It defines what
+     the client will see in such cases, in order to limit problems that
+     can arise when a file name is created and it appears to have a
+     different name from the one it is assigned when the name is
+     created.

12.2.  String Type Overview

+  12.2.1.  Overall String Class Divisions

-  NFS version 4 has to deal with a large set of diffreent types of
-  strings and because of the different role of each,
-  internationalization issues will be different for each:
+  NFSv4 has to deal with a large set of different types of strings and
+  because of the different role of each, internationalization issues
+  will be different for each:

   o  For some types of strings, the fundamental internationalization-
      related decisions are the province of the file system or the
      security-handling functions of the server and the protocol's job
      is to establish the rules under which file systems and servers are
      allowed to exercise this freedom, to avoid adding to confusion.

   o  In other cases, the fundamental internationalization issues are
      the responsibility of other IETF groups and our job is simply to
      reference those and perhaps make a few choices as to how they are
      to be used (e.g., U-labels vs. A-labels).

-  o  There are also cases in which a string has a small amount of NFS
-     version 4 processing which results in one or more strings being
-     referred to one of the other categories.
+  o  There are also cases in which a string has a small amount of NFSv4
+     processing which results in one or more strings being referred to
+     one of the other categories.
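The canonical and compatibility equivalences discussed above can be observed directly with Python's standard unicodedata module. This is an illustrative aside using the "x2"/x-squared example from the text, not part of the protocol specification:

```python
import unicodedata

# Canonical equivalence: precomposed vs. decomposed spellings of the
# same abstract character.  NFC maps both to a single representative.
precomposed = "\u00e9"    # LATIN SMALL LETTER E WITH ACUTE
decomposed = "e\u0301"    # LATIN SMALL LETTER E + COMBINING ACUTE ACCENT
assert precomposed != decomposed
assert unicodedata.normalize("NFC", decomposed) == precomposed

# Compatibility (but not canonical) equivalence: "x" + SUPERSCRIPT TWO
# versus "x2".  NFC preserves the distinction; NFKC folds it away.
x_squared = "x\u00b2"
assert unicodedata.normalize("NFC", x_squared) == x_squared
assert unicodedata.normalize("NFKC", x_squared) == "x2"
```

This is why the choice of form matters: a server or file system applying NFKC would merge names that an NFC-based peer keeps distinct.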
   We will divide strings to be dealt with into the following classes:

   MIX  indicating that there is a small amount of preparatory
      processing
-     that either picks an internationalization hadling mode or divides
+     that either picks an internationalization handling mode or divides
      the string into a set of (two) strings with a different mode of
      internationalization handling for each.  The details are
      discussed in the section "Types with Pre-processing to Resolve
      Mixture Issues".

   NIP  indicating that, for various reasons, there is no need for
      internationalization-specific processing to be performed.  The
      specifics of the various string types handled in this way are
      described in the section "String Types without
      Internationalization Processing".

   INET  indicating that the string needs to be processed in a fashion
-     goverened by non-NFS-specific internet specifications.  The
-     details are discussed in the section "Types with Processing
-     Defined by Other Internet Areas".
+     governed by non-NFS-specific internet specifications.  The details
+     are discussed in the section "Types with Processing Defined by
+     Other Internet Areas".

   NFS  indicating that the string needs to be processed in a fashion
      governed by NFSv4-specific considerations.  The primary focus is
      on enabling flexibility for the various file systems to be
      accessed and is described in the section "String Types with NFS-
      specific Processing".

12.2.2.  Divisions by Typedef Parent types

-  There are a number of different string types within NFS version 4 and
+  There are a number of different string types within NFSv4 and
   internationalization handling will be different for different types
   of strings.  Each of the types will be in one of four groups based on
   the parent type that specifies the nature of its relationship to utf8
   and ascii.

   utf8_should/USHOULD:  indicating that strings of this type SHOULD be
      UTF-8 but clients and servers will not check for valid UTF-8
      encoding.
   utf8val_should/UVSHOULD:  indicating that strings of this type SHOULD

@@ -8092,21 +8602,21 @@

   are no at-signs or the at-sign appears at the start or end of the
   string, see "Interpreting owner and owner_group".  Otherwise, the
   portion before the at-sign is dealt with as a prinpfx4 and the
   portion after is dealt with as a prinsfx4.

12.4.2.  Processing of Server Id Strings

   Server id strings typically appear in responses (as attribute values)
   and only appear in requests as an attribute value presented to VERIFY
   and NVERIFY.  With that exception, they are not subject to server
-  validation and posible rejection.  It is not expected that clients
+  validation and possible rejection.  It is not expected that clients
   will typically do such validation on receipt of responses but they
   may as a way to check for proper server behavior.  The responsibility
   for sending correct UTF-8 strings is with the server.

   Servers are identified by either server names or IP addresses.  Once
   an id has been identified as an IP address, then there is no
   processing specific to internationalization to be done, since such an
   address must be ASCII to be valid.

12.5.  String Types without Internationalization Processing

@@ -8123,32 +8633,32 @@

   comptag4 strings are an aid to debugging and the sender should avoid
   confusion by not using anything but valid UTF-8.  But any work
   validating the string or modifying it would only add complication to
   a mechanism whose basic function is best supported by making it not
   subject to any checking and having data maximally available to be
   looked at in a network trace.

   fattr4_mimetype strings need to be validated by matching against a
   list of valid mime types.  Since these are all ASCII, no
-  processing specific to internationaliztion is required since
+  processing specific to internationalization is required since
   anything that does not match is invalid and anything which does not
   obey the rules of UTF-8 will not be ASCII and consequently will not
   match, and will be invalid.
   svraddr4 strings, in order to be valid, need to be ASCII, but if you
   check them for validity, you have inherently checked that they are
   ASCII and thus UTF-8.

12.6.  Types with Processing Defined by Other Internet Areas

-  There are two types of strings which NFS version 4 deals with whose
+  There are two types of strings which NFSv4 deals with whose
   processing is defined by other Internet standards, and where issues
   related to different handling choices by server operating systems or
   server file systems do not apply.  These are as follows:

   o  Server names as they appear in the fs_locations attribute.  Note
      that for most purposes, such server names will only be sent by the
      server to the client.  The exception is use of the fs_locations
      attribute in a VERIFY or NVERIFY operation.

@@ -8183,63 +8693,63 @@

   domain returned on a GETATTR of the userid MUST be the same as that
   used when setting the userid by the SETATTR.  The server MAY
   implement VERIFY and NVERIFY without translating internal state to a
   string form, so that, for example, a user principal which represents
   a specific numeric user id will match a different principal string
   which represents the same numeric user id.

12.7.  String Types with NFS-specific Processing

-  For a number of data types within NFSv4, the primary responsbibility
+  For a number of data types within NFSv4, the primary responsibility
   for internationalization-related handling is that of some entity
   other than the server itself (see below for details).  In these
-  situations, the primary responsibility of NFS version 4 is to provide
-  a framework in which that other entity (file system and server
+  situations, the primary responsibility of NFSv4 is to provide a
+  framework in which that other entity (file system and server
   operating system principal naming framework) implements its own
   decisions while establishing rules to limit interoperability issues.
   This pattern applies to the following data types:

   o  In the case of name components (strings of type component4), the
      server-side file system implementation (of which there may be more
      than one for a particular server) deals with internationalization
-     issues, in a fashion that is appropriate to NFS version 4, other
-     remote file access protocols, and local file access methods.  See
+     issues, in a fashion that is appropriate to NFSv4, other remote
+     file access protocols, and local file access methods.  See
      "Handling of File Name Components" for the detailed treatment.

   o  In the case of link text strings (strings of type lintext4), the
      issues are similar, but file systems are restricted in the set of
      acceptable internationalization-related processing that they may
-     do, principally because symbolic links may contain name componetns
+     do, principally because symbolic links may contain name components
      that, when used, are presented to other file systems and/or other
      servers.  See "Processing of Link Text" for the detailed
      treatment.

   o  In the case of principal prefix strings, any decisions regarding
      internationalization are the responsibility of the server
      operating system, which may make its own rules regarding user and
      group name encoding.  See "Processing of Principal Prefixes" for
      the detailed treatment.

12.7.1.  Handling of File Name Components

   There are a number of places within client and server where file name
   components are processed:

-  o  On the client, file names may be processed as part of forming NFS
-     version 4 requests.  Any such processing will reflect specific
-     needs of the client's environment and will be treated as out-of-
-     scope from the viewpoint of this specification.
+  o  On the client, file names may be processed as part of forming
+     NFSv4 requests.  Any such processing will reflect specific needs
+     of the client's environment and will be treated as out-of-scope
+     from the viewpoint of this specification.
-  o  On the server, file names are processed as part of processing NFS
-     version 4 requests.  In practice, parts of the processing will be
+  o  On the server, file names are processed as part of processing
+     NFSv4 requests.  In practice, parts of the processing will be
      implemented within the NFS version 4 server while other parts will
      be implemented within the file system.  This processing is
      described in the sections below.  These sections are organized in
      a fashion parallel to a stringprep profile.  The same sorts of
      topics are dealt with but they differ in that there is a wider
      range of possible processing choices.

   o  On the server, file name components might potentially be subject
      to processing as part of generating NFS version 4 responses.  This
      specification assumes that this processing will be empty and that

@@ -8314,22 +8823,22 @@

   the server when that character is encountered.

   Strings are intended to be in UTF-8 format and servers SHOULD return
   NFS4ERR_INVAL, as discussed above, when the characters sent are not
   valid UTF-8.

   When the character repertoire consists of single-byte characters,
   UTF-8 is not enforced.  Such situations should be restricted to those
   where use is within a restricted environment where a single character
   mapping locale can be administratively enforced, allowing a file name
   to be treated as a string of bytes, rather than as a string of
   characters.  Such an arrangement might be
-  necessary when NFS version 4 access to a file system containing names
-  which are not valid UTF-8 needs to be provided.
+  necessary when NFSv4 access to a file system containing names which
+  are not valid UTF-8 needs to be provided.

   However, in any of the following situations, file names have to be
   treated as strings of Unicode characters and servers MUST return
   NFS4ERR_INVAL for file names that are not in UTF-8 format:

   o  Case-insensitive comparisons are specified by the file system and
      any characters sent contain non-ASCII byte codes.
   o  Any normalization constraints are enforced by the server or file
      system implementation.

@@ -8342,75 +8851,74 @@

   when the server does not enforce UTF-8 component4 strings and treats
   them as strings of bytes.  A client may determine that a given
   filesystem is operating in this mode by performing a LOOKUP using a
   non-UTF-8 string; if NFS4ERR_INVAL is not returned, then name
   components will be treated as opaque and those sorts of modifications
   will not be seen.

12.7.1.3.  Case-based Mapping Used for Component4 Strings

   Case-based mapping is not always a required part of server processing
-  of name components.  However, if the NFS version 4 file server
-  supports the case_insensitive file system attribute, and if the
-  case_insensitive attribute is true for a given file system, the NFS
-  version 4 server MUST use the Unicode case mapping tables for the
-  version of Unicode corresponding to the character repertoire.  In the
-  case where the character repertoire is UCS-2 or UCS-4, the case
-  mapping tables from the latest available version of Unicode SHOULD be
-  used.
+  of name components.  However, if the NFSv4 file server supports the
+  case_insensitive file system attribute, and if the case_insensitive
+  attribute is true for a given file system, the NFS version 4 server
+  MUST use the Unicode case mapping tables for the version of Unicode
+  corresponding to the character repertoire.  In the case where the
+  character repertoire is UCS-2 or UCS-4, the case mapping tables from
+  the latest available version of Unicode SHOULD be used.

   If the case_preserving attribute is present and set to false, then
-  the NFS version 4 server MUST use the corresponding Unicode case
-  mapping table to map case when processing component4 strings.
-  Whether the server maps from lower to upper case or the upper to
-  lower case is a matter for implementation choice.
+  the NFSv4 server MUST use the corresponding Unicode case mapping
+  table to map case when processing component4 strings.
Whether the
+  server maps from lower to upper case or from upper to lower case is a
+  matter for implementation choice.

   Stringprep Table B.2 should not be used for this purpose since it is
   limited to Unicode version 3.2 and also because it erroneously maps
   the German ligature eszett to the string "ss", whereas later versions
   of Unicode contain both lower-case and upper-case versions of Eszett
   (SMALL LETTER SHARP S and CAPITAL LETTER SHARP S).  Clients should be
   aware that servers may have mapped SMALL LETTER SHARP S to the string
   "ss" when case-insensitive mapping is in effect, with the result that
   a file whose name contains SMALL LETTER SHARP S may have that
   character replaced by "ss" or "SS".

12.7.1.4.  Other Mapping Used for Component4 Strings

-  Other than for issues of case mapping, an NFS version 4 server SHOULD
-  limit visible (i.e., those that change the name of file to reflect
-  those mappings to those from from a subset of the stringprep table
-  B.1.  Note particularly, the mapings from U+200C and U+200D to the
-  empty string should be avoided, due to their undesirable effect on
-  some strings in Farsi.
+  Other than for issues of case mapping, an NFSv4 server SHOULD limit
+  visible mappings (i.e., those that change the name of a file) to
+  those from a subset of the stringprep table B.1.  Note particularly,
+  the mappings from U+200C and U+200D to the empty string should be
+  avoided, due to their undesirable effect on some strings in Farsi.

   Table B.1 may be used but it should be used only if required by the
   local file system implementation.  For example, if the file system in
   question accepts file names containing the MONGOLIAN TODO SOFT HYPHEN
   character (U+1806) and they are distinct from the corresponding file
   names with this character removed, then using Table B.1 will cause
   functional problems when clients attempt to interact with that file
-  system.  The NFS version 4 server implementation including the
-  filesystem MUST NOT silently remove characters not within Table B.1.
+  system.  The NFSv4 server implementation including the filesystem
+  MUST NOT silently remove characters not within Table B.1.

   If an implementation wishes to eliminate other characters because it
   is believed that allowing component name versions that are otherwise
   the same but differ in whether they include the character will
   contribute to confusion, it has two options:

   o  Treat the characters as prohibited and return NFS4ERR_BADCHAR.

   o  Eliminate the character as part of the name matching processing,
      while retaining it when a file is created.  This would be
      analogous to file systems that are both case-insensitive and case-
-     preserving,as dicussed above, or those which are both
+     preserving, as discussed above, or those which are both
      normalization-insensitive and normalization-preserving, as
      discussed below.  The handling will be insensitive to the presence
      of the chosen characters while preserving the presence or absence
      of such characters within names.

   Note that the second of these choices is a desirable way to handle
   characters within table B.1, again with the exception of U+200C and
   U+200D, which can cause issues for Farsi.

   In addition to modification due to normalization, discussed below,

@@ -8421,27 +8929,27 @@

   The issues are best discussed separately for the server and the
   client.  It is important to note that the server and client may have
   different approaches to this area, and that the server choice may not
   match the client operating environment.  The issue of mismatches and
   how they may be best dealt with by the client is discussed in a later
   section.

12.7.1.5.1.  Server Normalization Issues for Component Strings

-  The NFS version 4 does not specify required use of a particular
-  normalization form for component4 strings.
Therefore, the server may
-  receive unnormalized strings or strings that reflect either
-  normalization form within protocol requests and responses.  If the
-  file system requires normalization, then the server implementation
-  must normalize component4 strings within the protocol server before
-  presenting the information to the local file system.
+  NFSv4 does not specify required use of a particular normalization
+  form for component4 strings.  Therefore, the server may receive
+  unnormalized strings or strings that reflect either normalization
+  form within protocol requests and responses.  If the file system
+  requires normalization, then the server implementation must normalize
+  component4 strings within the protocol server before presenting the
+  information to the local file system.

   With regard to normalization, servers have the following choices,
   with the possibility that different choices may be selected for
   different file systems.

   o  Implement a particular normalization form, either NFC or NFD, in
      which case file names received from a client are converted to that
      normalization form and as a consequence, the client will always
      receive names in that normalization form.  If this option is
      chosen, then it is impossible to create two files in the same

@@ -8483,21 +8991,21 @@

   normalization-unaware.

   We discuss below issues that may arise when each of these types of
   environments interact with the various types of file systems, with
   regard to normalization handling.  Note that complexity for the
   client is increased given that there are no file system attributes to
   determine the normalization handling present for that file system.
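The first of the server-side choices listed above (imposing a single normalization form) can be sketched as follows. The helper name and the choice of NFC are illustrative assumptions of this example, not protocol requirements:

```python
import unicodedata

def to_stored_form(component: str, form: str = "NFC") -> str:
    # Hypothetical server-side step: convert an incoming component4
    # string to the file system's chosen normalization form, so that
    # names are stored, matched, and returned in that one form.
    return unicodedata.normalize(form, component)

# Two canonically equivalent spellings collapse to the same stored
# name, so an attempt to create a file under the "other" spelling in
# the same directory collides, as the text notes.
assert to_stored_form("e\u0301tude") == to_stored_form("\u00e9tude")
```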
   Where the client has the ability to create files (file system not
   read-only and security allows it), attempting to create multiple
   files with canonically equivalent names and looking at success
-  paaaters and the names assigned by the server to these files can
+  patterns and the names assigned by the server to these files can
   serve as a way to determine the relevant information.

   Normalization-aware environments interoperate most normally with
   servers that either impose a given normalization form or those that
   implement name handling which is both normalization-insensitive and
   normalization-preserving.  However, clients need to be
   prepared to interoperate with servers that have normalization-
   sensitive file naming.  In this situation, the client needs to be
   prepared for the fact that a directory may contain multiple names
   that it considers equivalent.

@@ -8516,167 +9024,166 @@

   When it cannot be determined that a normalization-insensitive server
   file system is not involved, the client is generally best advised to
   process incoming name components so as to allow all name components
   in a canonical equivalence class to be grouped together.  When only a
   single member of a class exists, it should generally be mapped
   directly to the preferred normalization form, whether the name was of
   that form or not.  When the client sees multiple names that are
   canonically
-  equivalent, it is clear you have a file systen which is
+  equivalent, it is clear that the file system is
   normalization sensitive.  Clients should generally replace each
   canonically equivalent name with one that appends some distinguishing
   suffix, usually including a number.  The numbers should be assigned
   so that each distinct possible name within the set of canonically
   equivalent names has an assigned numeric value.
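The grouping-and-suffixing strategy just described can be sketched as follows. The function name and the exact suffix scheme are illustrative assumptions, not mandated client behavior:

```python
import unicodedata
from collections import defaultdict

def present_names(names):
    # Hypothetical client-side pass over names returned by READDIR:
    # group them by canonical equivalence (same NFD image).  A lone
    # class member maps to the preferred form (NFC here); multiple
    # members each receive a distinguishing numeric suffix.
    classes = defaultdict(list)
    for name in names:
        classes[unicodedata.normalize("NFD", name)].append(name)
    shown = {}
    for members in classes.values():
        if len(members) == 1:
            shown[members[0]] = unicodedata.normalize("NFC", members[0])
        else:
            for i, name in enumerate(members, 1):
                shown[name] = "%s-%d" % (unicodedata.normalize("NFC", name), i)
    return shown

shown = present_names(["\u00e9tude", "e\u0301tude", "plain"])
assert shown["plain"] == "plain"
assert len(set(shown.values())) == 3   # all presented names distinct
```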
   Note that for some cases in which there are multiple instances of
   strings that might be composed or decomposed and/or situations with
   multiple diacritics to be applied to the same character, the class
   might be large.

   When interacting with a normalization-sensitive filesystem, it may be
   that the environment contains clients or implementations local to the
   OS in which the file system is embedded, which use a different
   normalization form.  In such situations, a LOOKUP may well fail, even
   though the directory contains a name canonically equivalent to the
   name sought.  One solution to this problem is to re-do the LOOKUP in
   that situation with the name converted to the alternate normalization
   form.

   In the case in which normalization-unaware clients are involved in
-  the mix, LOOKUP can fail and then the second lOOKUP, described
-  above can also fail, even though there may well be a oanonically
+  the mix, LOOKUP can fail and then the second LOOKUP, described
+  above, can also fail, even though there may well be a canonically
   equivalent name in the directory.  One possible approach in that case
   is to use a READDIR to find the equivalent name and look that up,
   although this can greatly add to client implementation complexity.

   When interacting with a normalization-sensitive filesystem, the
   situation where the environment contains clients or implementations
   local to the OS in which the file system is embedded, which use a
   different normalization form can also cause issues when a file (or
   symlink or directory, etc.) is being created.  In such cases, you may
   be able to create an object of the specified name even though the
   directory contains a canonically equivalent name.  Similar issues can
   occur with LINK and RENAME.  The client can't really do much about
   such
-  sitautions, except be aware that they may occur.  That's one of
-  the reasons normalization-sensitive server file system
-  implementations can be problematic to use when internationalization
-  issues are important.
+  situations, except be aware that they may occur.  That's one of the
+  reasons normalization-sensitive server file system implementations
+  can be problematic to use when internationalization issues are
+  important.

   Normalization-unaware environments interoperate most normally with
   servers that implement normalization-sensitive file naming.  However,
   clients need to be prepared to interoperate with servers that impose
   a given normalization form or that implement name handling which is
   both normalization-insensitive and normalization-preserving.  In the
   former case, a file created with a given name may have its name
   changed to a different (although related) name.  In both cases, the
   client will have to deal with the fact that it is unable to create
   two names within a directory that are canonically equivalent.

   Note that although the client implementation itself and the kernel
-  implementation may be normalization-unware, treating name components
+  implementation may be normalization-unaware, treating name components
   as strings not subject to normalization, the environment as a whole
   may be normalization-aware if commonly used libraries result in an
   application environment where a single normalization form is used
   throughout.  Because of this, normalization-unaware environments may
   be relatively rare.

   The following suggestions may be helpful in handling interoperability
-  issues for truely normalization-unaware client environments, when
-  they interact with file systems other than those which are
-  normalization-sensitive.  The issues tend to be the inverse of those
-  for normalization-aware environments.  The implementer should be
-  careful not to erroneously treat the environment as normalization-
-  unaware, based solely on the details of the kernel implementation.
+  issues for truly normalization-unaware client environments, when they
+  interact with file systems other than those which are normalization-
+  sensitive.  The issues tend to be the inverse of those for
+  normalization-aware environments.
The implementer should be careful
+  not to erroneously treat the environment as normalization-unaware,
+  based solely on the details of the kernel implementation.

   Unless the file system is normalization-preserving, when files (or
   other objects) are created, the object name as reported by a READDIR
   of the associated directory may show a name different from the one
   used to create the object.  This behavior is something that the
   client has to accept.  Since it has no preferred normalization form,
   it has no way of converting the name to a preferred form.

   In situations where there is an attempt to create multiple objects in
   the same directory which have canonically-equivalent names, these
   file systems will either report that an object of that name already
   exists or simply open a file of that other name.

-  If it desired to have those two obects in the same directory, the
+  If it is desired to have those two objects in the same directory, the
   names must be made not canonically equivalent.  It is possible to
   append some distinguishing character to the name of the second object
   but in clients having a typical file API (such as POSIX), the fact
   that the name change occurred cannot be propagated back to the
   requester.  In cases where a client is application-specific, it may
   be possible for it to deal with such a collision by modifying the
   name and taking note of the changed name.

12.7.1.6.  Prohibited Characters for Component Names

-  The NFS version 4 protocol does not specify particular characters
-  that may not appear in component names.  File systems may have their
-  own set of prohibited characters for which the error NFS4ERR_BADCHAR
-  should be returned by the server.  Clients need to be prepared for
-  this error to occur whenever file name components are presented to
-  the server.
+  The NFSv4 protocol does not specify particular characters that may
+  not appear in component names.
File systems may have their own set + of prohibited characters for which the error NFS4ERR_BADCHAR should + be returned by the server. Clients need to be prepared for this + error to occur whenever file name components are presented to the + server. Clients whose character repertoire for acceptable characters in file name components is smaller than the entire scope of UCS-4 may need to deal with names returned by the server that contain characters outside that repertoire. It is up to the client whether it simply ignores these files or modifies the name to meet its own rules for acceptable names. Clients may encounter names that do not consist of valid UTF-8, if they interact with servers configured to allow this option. They are not required to deal with this case and may treat the server as not functioning correctly, or they may handle this as normal. Clients will normally make this a configuration option. As discussed above, a client can determine whether a particular file system is being supported by the server in this mode by issuing a LOOKUP specifying a name which is not valid UTF-8 and seeing if NFS4ERR_INVAL is returned. 12.7.1.7. Bidirectional String Checking for Component Names - The NFS version 4 protocol does not require processing of component - names to check for and reject bidirectional strings. Such processing - may be a part of the file system implementation but if so, its - particular form will be defined by the file system implementation. - When strings are rejected on this basis, the error NFS4ERR_BADNAME - would be returned. + The NFSv4 protocol does not require processing of component names to + check for and reject bidirectional strings. Such processing may be a + part of the file system implementation but if so, its particular form + will be defined by the file system implementation. When strings are + rejected on this basis, the error NFS4ERR_BADNAME would be returned. 
   Clients need to be prepared for the fact that the server may reject a
   file name component if it consists of a bidirectional string,
   returning NFS4ERR_BADNAME.

   Clients may encounter names with bidirectional strings returned in
   responses from the server.  If clients treat such strings as not
   valid file name components, it is up to the client whether it simply
   ignores these files or modifies the name component to meet its own
   rules for acceptable name component strings.

12.7.2.  Processing of Link Text

   Symbolic link text is defined as utf8val_should and therefore the
   server SHOULD validate link text on a CREATE and return NFS4ERR_INVAL
   if it is not valid UTF-8.  Note that file systems which treat names
   as strings of bytes are an exception for which such validation
-  need not be done.  One other situation in which an NFS version 4
-  might choose (or be configured) not to make such a check is when
-  links within file system reference names in another which is
-  configured to treat names as strings of bytes.
+  need not be done.  One other situation in which an NFSv4 server might
+  choose (or be configured) not to make such a check is when links
+  within one file system reference names in another which is configured
+  to treat names as strings of bytes.

   On the other hand, UTF-8 validation of symbolic link text need not be
   done on the data resulting from a READLINK.  Such data might have
   been stored by an NFS Version 4 server configured to allow non-UTF-8
   link text or it might have resulted from symbolic link text stored
   via local file system access or access via another remote file access
   protocol.

   Note that because of the role of the symbolic link, as data stored
   and read by the user, other sorts of validations or modifications

@@ -8676,36 +9183,35 @@

   been stored by an NFS Version 4 server configured to allow non-UTF-8
   link text or it might have resulted from symbolic link text stored
   via local file system access or access via another remote file access
   protocol.
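The UTF-8 validity check that a server enforcing these rules would apply before returning NFS4ERR_INVAL, whether to link text on CREATE or to strings generally, amounts to attempting a UTF-8 decode. A minimal sketch; the helper name is an assumption of this example, not protocol vocabulary:

```python
def is_valid_utf8(raw: bytes) -> bool:
    # The kind of check a server enforcing UTF-8 strings would apply
    # before returning NFS4ERR_INVAL (sketch, not protocol code).
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# 0xC0 0xAF is an overlong (hence invalid) UTF-8 sequence; a client
# could present such a name in a LOOKUP to probe whether a file
# system is being served in the byte-string (non-UTF-8) mode
# discussed earlier.
assert not is_valid_utf8(b"\xc0\xaf")
assert is_valid_utf8("ordinary-name".encode("utf-8"))
```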
Note that because of the role of the symbolic link, as data stored and read by the user, other sorts of validations or modifications should not be done. Note that when component names with the symbolic link text are used, such checks and modifications will be done at that time. In particular, o Limitation of the character repertoire MUST NOT be done. This - includes limitations to reflect a particular version of unicode, - or the inability of any particualr file system to store characters - beyond UCS-2. + includes limitations to reflect a particular version of Unicode, + or the inability of any particular file system to store + characters beyond UCS-2. o Name mapping, whether for case folding or otherwise, MUST NOT be done. o Checks for a type of normalization or normalization to a particular form MUST NOT be done. o Checks for specific characters excluded by the server or file system MUST NOT be done. - o Checks for bidrectional strings MUST NOT be done. + o Checks for bidirectional strings MUST NOT be done. 12.7.3. Processing of Principal Prefixes As mentioned above, users and groups are designated as a particular string at a specified domain. Servers will recognize a set of valid principals for one or more domains. With regard to the handling of these strings, the following rules MUST be followed: o The string MUST be checked by the server for valid UTF-8 and the error NFS4ERR_INVAL returned if it is not valid. @@ -8718,34 +9224,34 @@ with. o No character mapping is to be done, as for example table B.1 in stringprep, and no case mapping is to be done. The user and group names are to be treated as case-sensitive. o Strings must not be rejected based on their normalization. Servers should do normalization insensitive matching in converting a user or group to an internal id.
The client cannot assume that the server preserves normalization so a user set to one string - value may be returned as a string which differs in nomralization + value may be returned as a string which differs in normalization and the client must be prepared to deal with that, by, for - example, normalizing the string to the client's prferred form. + example, normalizing the string to the client's preferred form. o There are no checks for specific invalid characters but servers may limit the characters, with the result that any principal presented by the client which has such characters is treated as invalid. - o Specific checks for bidrectional strings are not done but servers + o Specific checks for bidirectional strings are not done but servers may limit the principal prefix strings to those which are unidirectional or are of a certain direction, with the result that any principal presented by the client which does not meet that - criterion will be treated as invaid. + criterion will be treated as invalid. 13. Error Values NFS error numbers are assigned to failed operations within a Compound (COMPOUND or CB_COMPOUND) request. A Compound request contains a number of NFS operations that have their results encoded in sequence in a Compound reply. The results of successful operations will consist of an NFS4_OK status followed by the encoded results of the operation. If an NFS operation fails, an error status will be entered in the reply and the Compound request will be terminated. @@ -8855,25 +9361,20 @@ Some examples of situations that might lead to this error: o A server that supports hierarchical storage receives a request to process a file that had been migrated. o An operation requires a delegation recall to proceed and waiting for this delegation recall makes processing this request in a timely fashion impossible.
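Returning to the principal-prefix rules of Section 12.7.3 above, the normalization-insensitive matching recommended there might be sketched as follows (the table and helper names are hypothetical, not part of the protocol):

```python
# Hypothetical sketch of normalization-insensitive principal matching:
# "user@domain" strings are matched to an internal id regardless of the
# Unicode normalization form the client sent, while case is NOT folded,
# since principals are case-sensitive.
import unicodedata

_users = {}  # NFC-normalized "user@domain" -> internal id


def register(principal: str, uid: int) -> None:
    _users[unicodedata.normalize("NFC", principal)] = uid


def lookup(principal: str):
    # Normalize the incoming string before the table lookup so that
    # composed and decomposed spellings of the same principal match.
    return _users.get(unicodedata.normalize("NFC", principal))
```

For example, a principal registered in composed form ("andr\u00e9@example.org") would still match when presented in decomposed form ("andre\u0301@example.org"), but a case-changed variant would not.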
- In such cases, the error NFS4ERR_DELAY allows these preparatory - operations to proceed without holding up client resources such as a - session slot. After delaying for period of time, the client can then - re-send the operation in question. - 13.1.1.4. NFS4ERR_INVAL (Error Code 22) The arguments for this operation are not valid for some reason, even though they do match those specified in the XDR definition for the request. 13.1.1.5. NFS4ERR_NOTSUPP (Error Code 10004) Operation not supported, either because the operation is an OPTIONAL one and is not supported by this server or because the operation MUST @@ -9166,21 +9667,21 @@ 13.1.7.2. NFS4ERR_BADNAME (Error Code 10041) A name string in a request consisted of valid UTF-8 characters supported by the server but the name is not supported by the server as a valid name for the current operation. An example might be creating a file or directory named ".." on a server whose file system uses that name for links to parent directories. This error should not be returned due to a normalization issue in a string. When a file system keeps names in a particular normalization - form, it is the server's responsiblity to do the approproriate + form, it is the server's responsibility to do the appropriate normalization, rather than rejecting the name. 13.1.7.3. NFS4ERR_NAMETOOLONG (Error Code 63) Returned when the filename in an operation exceeds the server's implementation limit. 13.1.8. Locking Errors This section deals with errors related to locking, both as to share @@ -9230,22 +9731,22 @@ A locking request was attempted which would require the upgrade or downgrade of a lock range already held by the owner when the server does not support atomic upgrade or downgrade of locks. 13.1.8.8.
NFS4ERR_LOCK_RANGE (Error Code 10028) A lock request is operating on a range that overlaps in part a currently held lock for the current lock owner and does not precisely match a single such lock where the server does not support this type - of request, and thus does not implement POSIX locking semantics. See - Section 15.12.5, Section 15.13.5, and Section 15.14.5 for a + of request, and thus does not implement POSIX locking semantics [35]. + See Section 15.12.5, Section 15.13.5, and Section 15.14.5 for a discussion of how this applies to LOCK, LOCKT, and LOCKU respectively. 13.1.8.9. NFS4ERR_OPENMODE (Error Code 10038) The client attempted a READ, WRITE, LOCK or other operation not sanctioned by the stateid passed (e.g., writing to a file opened only for read). 13.1.9. Reclaim Errors @@ -9284,21 +9785,21 @@ This section deals with errors associated with requests used to create and manage client IDs. 13.1.10.1. NFS4ERR_CLID_INUSE (Error Code 10017) The SETCLIENTID operation has found that a client id is already in use by another client. 13.1.10.2. NFS4ERR_STALE_CLIENTID (Error Code 10022) - A clientid not recognized by the server was used in a locking or + A client ID not recognized by the server was used in a locking or SETCLIENTID_CONFIRM request. 13.1.11. Attribute Handling Errors This section deals with errors specific to attribute handling within NFSv4. 13.1.11.1. NFS4ERR_ATTRNOTSUPP (Error Code 10032) An attribute specified is not supported by the server.
This error @@ -9497,24 +9998,25 @@ | | NFS4ERR_ISDIR, NFS4ERR_LEASE_MOVED, | | | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE, | | | NFS4ERR_OLD_STATEID, NFS4ERR_RESOURCE, | | | NFS4ERR_SERVERFAULT, NFS4ERR_STALE, | | | NFS4ERR_STALE_STATEID | | OPEN_DOWNGRADE | NFS4ERR_ADMIN_REVOKED, NFS4ERR_BADHANDLE, | | | NFS4ERR_BADXDR, NFS4ERR_BAD_SEQID, | | | NFS4ERR_BAD_STATEID, NFS4ERR_DELAY, | | | NFS4ERR_EXPIRED, NFS4ERR_FHEXPIRED, | | | NFS4ERR_INVAL, NFS4ERR_LEASE_MOVED, | - | | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE, | - | | NFS4ERR_OLD_STATEID, NFS4ERR_RESOURCE, | - | | NFS4ERR_ROFS, NFS4ERR_SERVERFAULT, | - | | NFS4ERR_STALE, NFS4ERR_STALE_STATEID | + | | NFS4ERR_LOCKS_HELD, NFS4ERR_MOVED, | + | | NFS4ERR_NOFILEHANDLE, NFS4ERR_OLD_STATEID, | + | | NFS4ERR_RESOURCE, NFS4ERR_ROFS, | + | | NFS4ERR_SERVERFAULT, NFS4ERR_STALE, | + | | NFS4ERR_STALE_STATEID | | PUTFH | NFS4ERR_BADHANDLE, NFS4ERR_BADXDR, | | | NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, | | | NFS4ERR_MOVED, NFS4ERR_SERVERFAULT, | | | NFS4ERR_STALE, NFS4ERR_WRONGSEC | | PUTPUBFH | NFS4ERR_DELAY, NFS4ERR_SERVERFAULT, | | | NFS4ERR_WRONGSEC | | PUTROOTFH | NFS4ERR_DELAY, NFS4ERR_SERVERFAULT, | | | NFS4ERR_WRONGSEC | | READ | NFS4ERR_ACCESS, NFS4ERR_ADMIN_REVOKED, | | | NFS4ERR_BADHANDLE, NFS4ERR_BADXDR, | @@ -9751,21 +10252,22 @@ | | REMOVE, RENAME, SETATTR, VERIFY, WRITE | | NFS4ERR_ISDIR | CLOSE, COMMIT, LINK, LOCK, LOCKT, | | | LOCKU, OPEN, OPEN_CONFIRM, READ, | | | READLINK, SETATTR, WRITE | | NFS4ERR_LEASE_MOVED | CLOSE, DELEGPURGE, DELEGRETURN, LOCK, | | | LOCKT, LOCKU, OPEN_CONFIRM, | | | OPEN_DOWNGRADE, READ, | | | RELEASE_LOCKOWNER, RENEW, SETATTR, | | | WRITE | | NFS4ERR_LOCKED | READ, SETATTR, WRITE | - | NFS4ERR_LOCKS_HELD | CLOSE, RELEASE_LOCKOWNER | + | NFS4ERR_LOCKS_HELD | CLOSE, OPEN_DOWNGRADE, | + | | RELEASE_LOCKOWNER | | NFS4ERR_LOCK_NOTSUPP | LOCK | | NFS4ERR_LOCK_RANGE | LOCK, LOCKT, LOCKU | | NFS4ERR_MLINK | LINK | | NFS4ERR_MOVED | ACCESS, CLOSE, COMMIT, CREATE, | | | DELEGRETURN, GETATTR, GETFH, LINK, | | | 
LOCK, LOCKT, LOCKU, LOOKUP, LOOKUPP, | | | NVERIFY, OPEN, OPENATTR, OPEN_CONFIRM, | | | OPEN_DOWNGRADE, PUTFH, READ, READDIR, | | | READLINK, REMOVE, RENAME, RESTOREFH, | | | SAVEFH, SECINFO, SETATTR, VERIFY, | @@ -9840,35 +10342,34 @@ | NFS4ERR_STALE_STATEID | CLOSE, DELEGRETURN, LOCK, LOCKU, | | | OPEN_CONFIRM, OPEN_DOWNGRADE, READ, | | | SETATTR, WRITE | | NFS4ERR_SYMLINK | COMMIT, LOOKUP, LOOKUPP, OPEN, READ, | | | WRITE | | NFS4ERR_TOOSMALL | READDIR | | NFS4ERR_WRONGSEC | LINK, LOOKUP, LOOKUPP, OPEN, PUTFH, | | | PUTPUBFH, PUTROOTFH, RENAME, RESTOREFH | | NFS4ERR_XDEV | LINK, RENAME | +--------------------------+----------------------------------------+ - Table 11 -14. NFS version 4 Requests +14. NFSv4 Requests - For the NFS version 4 RPC program, there are two traditional RPC - procedures: NULL and COMPOUND. All other functionality is defined as - a set of operations and these operations are defined in normal XDR/ - RPC syntax and semantics. However, these operations are encapsulated - within the COMPOUND procedure. This requires that the client combine - one or more of the NFS version 4 operations into a single request. + For the NFSv4 RPC program, there are two traditional RPC procedures: + NULL and COMPOUND. All other functionality is defined as a set of + operations and these operations are defined in normal XDR/RPC syntax + and semantics. However, these operations are encapsulated within the + COMPOUND procedure. This requires that the client combine one or + more of the NFSv4 operations into a single request. The NFS4_CALLBACK program is used to provide server to client - signaling and is constructed in a similar fashion as the NFS version - 4 program. The procedures CB_NULL and CB_COMPOUND are defined in the + signaling and is constructed in a similar fashion as the NFSv4 + program. The procedures CB_NULL and CB_COMPOUND are defined in the same way as NULL and COMPOUND are within the NFS program. 
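A minimal sketch of the COMPOUND evaluation model introduced above: operations are evaluated in sequence, results are encoded in order, and the first failing operation terminates the request (the handler table and names are invented for illustration):

```python
# Invented skeleton of a COMPOUND evaluator. Each handler takes the
# operation's arguments and returns (status, result). Evaluation stops
# at the first non-NFS4_OK status, and only the results of operations
# actually evaluated appear in the reply.
NFS4_OK = 0


def eval_compound(operations, handlers):
    """operations: list of (opcode, args); handlers: opcode -> callable.
    Returns (final_status, list of (opcode, status, result))."""
    results = []
    status = NFS4_OK
    for opcode, args in operations:
        status, res = handlers[opcode](args)
        results.append((opcode, status, res))
        if status != NFS4_OK:
            break                # an error terminates the Compound request
    return status, results
```

The COMPOUND reply's overall status is thus the status of the last operation evaluated.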
The CB_COMPOUND request also encapsulates the remaining operations of the NFS4_CALLBACK program. There is no predefined RPC program number for the NFS4_CALLBACK program. It is up to the client to specify a program number in the "transient" program range. The program and port number of the NFS4_CALLBACK program are provided by the client as part of the SETCLIENTID/SETCLIENTID_CONFIRM sequence. The program and port can be changed by another SETCLIENTID/SETCLIENTID_CONFIRM sequence, and it is possible to use the sequence to change them within a client incarnation without removing relevant leased client @@ -9936,23 +10437,23 @@ operation within the procedure. Each operation assumes a "current" and "saved" filehandle that is available as part of the execution context of the compound request. Operations may set, change, or return the current filehandle. The "saved" filehandle is used for temporary storage of a filehandle value and as operands for the RENAME and LINK operations. 14.3. Synchronous Modifying Operations - NFS version 4 operations that modify the filesystem are synchronous. - When an operation is successfully completed at the server, the client - can depend that any data associated with the request is now on stable + NFSv4 operations that modify the filesystem are synchronous. When an + operation is successfully completed at the server, the client can + depend that any data associated with the request is now on stable storage (the one exception is in the case of the file data in a WRITE operation with the UNSTABLE option specified). This implies that any previous operations within the same compound request are also reflected in stable storage. This behavior enables the client's ability to recover from a partially executed compound request which may have resulted from the failure of the server.
For example, if a compound request contains operations A and B and the server is unable to send a response to the client, depending on the progress the server made in servicing the request, the result of both @@ -9960,21 +10461,21 @@ be reflected. The server must not have just the results of operation B in stable storage. 14.4. Operation Values The operations encoded in the COMPOUND procedure are identified by operation values. To avoid overlap with the RPC procedure numbers, operations 0 (zero) and 1 are not defined. Operation 2 is not defined but reserved for future use with minor versioning. -15. NFS version 4 Procedures +15. NFSv4 Procedures 15.1. Procedure 0: NULL - No Operation 15.1.1. SYNOPSIS 15.1.2. ARGUMENT void; @@ -10082,44 +10584,171 @@ NFS4ERR_OP_ILLEGAL, as described in the next paragraph, is returned to the client. If an operation array contains an operation 2 and the minorversion field is non-zero and the server does not support the minor version, the server returns an error of NFS4ERR_MINOR_VERS_MISMATCH. Therefore, the NFS4ERR_MINOR_VERS_MISMATCH error takes precedence over all other errors. It is possible that the server receives a request that contains an operation that is less than the first legal operation (OP_ACCESS) or - greater than the last legal operation (OP_RELEASE_LOCKOWNER). - - In this case, the server's response will encode the opcode OP_ILLEGAL + greater than the last legal operation (OP_RELEASE_LOCKOWNER). In + this case, the server's response will encode the opcode OP_ILLEGAL rather than the illegal opcode of the request. The status field in the ILLEGAL return results will be set to NFS4ERR_OP_ILLEGAL. The COMPOUND procedure's return results will also be NFS4ERR_OP_ILLEGAL. The definition of the "tag" in the request is left to the implementor. It may be used to summarize the content of the compound request for the benefit of packet sniffers and engineers debugging implementations.
However, the value of "tag" in the response SHOULD be the same value as provided in the request. This applies to the tag field of the CB_COMPOUND procedure as well. +15.2.4.1. Current Filehandle + + The current and saved filehandle are used throughout the protocol. + Most operations implicitly use the current filehandle as an argument + and many set the current filehandle as part of the results. The + combination of client specified sequences of operations and current + and saved filehandle arguments and results allows for greater + protocol flexibility. The best or easiest example of current + filehandle usage is a sequence like the following: + + PUTFH fh1 {fh1} + LOOKUP "compA" {fh2} + GETATTR {fh2} + LOOKUP "compB" {fh3} + GETATTR {fh3} + LOOKUP "compC" {fh4} + GETATTR {fh4} + GETFH + + Figure 1 + + In this example, the PUTFH (Section 15.22) operation explicitly sets + the current filehandle value while the result of each LOOKUP + operation sets the current filehandle value to the resultant file + system object. Also, the client is able to insert GETATTR operations + using the current filehandle as an argument. + + The PUTROOTFH (Section 15.24) and PUTPUBFH (Section 15.23) operations + also set the current filehandle. The above example would replace + "PUTFH fh1" with PUTROOTFH or PUTPUBFH with no filehandle argument in + order to achieve the same effect (on the assumption that "compA" is + directly below the root of the namespace). + + Along with the current filehandle, there is a saved filehandle. + While the current filehandle is set as the result of operations like + LOOKUP, the saved filehandle must be set directly with the use of the + SAVEFH operation. The SAVEFH operation copies the current + filehandle value to the saved value. The saved filehandle value is + used in combination with the current filehandle value for the LINK + and RENAME operations.
The RESTOREFH operation will copy the saved + filehandle value to the current filehandle value; as a result, the + saved filehandle value may be used as a sort of "scratch" area for + the client's series of operations. + +15.2.4.2. Current Stateid + + The COMPOUND processing environment also has a current stateid and a + saved stateid, which allows for the passing of stateids between + operations. + + A "current stateid" is the stateid that is associated with the + current filehandle. The current stateid may only be changed by an + operation that modifies the current filehandle or returns a stateid. + If an operation returns a stateid it MUST set the current stateid to + the returned value. If an operation sets the current filehandle but + does not return a stateid, the current stateid MUST be set to the + all-zeros special stateid, i.e. (seqid, other) = (0, 0). If an + operation uses a stateid as an argument but does not return a + stateid, the current stateid MUST NOT be changed. E.g., PUTFH, + PUTROOTFH, and PUTPUBFH will change the current server state from + {ocfh, (osid)} to {cfh, (0, 0)} while LOCK will change the current + state from {cfh, (osid)} to {cfh, (nsid)}. Operations like LOOKUP + that transform a current filehandle and component name into a new + current filehandle will also change the current stateid to {0, 0}. + The SAVEFH and RESTOREFH operations will save and restore both the + current filehandle and the current stateid as a set. + + The following example is the common case of a simple READ operation + with a supplied stateid showing that the PUTFH initializes the + current stateid to (0, 0). The subsequent READ with stateid (sid1) + leaves the current stateid unchanged, but does evaluate the + operation. + + PUTFH fh1 - -> {fh1, (0, 0)} + READ (sid1), 0, 1024 {fh1, (0, 0)} -> {fh1, (0, 0)} + + Figure 2 + + This next example performs an OPEN with the root filehandle and as a + result generates stateid (sid1).
The next operation specifies the + READ with the argument stateid set such that (seqid, other) are equal + to (1, 0), but the current stateid set by the previous operation is + actually used when the operation is evaluated. This allows correct + interaction with any existing, potentially conflicting, locks. + + PUTROOTFH - -> {fh1, (0, 0)} + OPEN "compA" {fh1, (0, 0)} -> {fh2, (sid1)} + READ (1, 0), 0, 1024 {fh2, (sid1)} -> {fh2, (sid1)} + CLOSE (1, 0) {fh2, (sid1)} -> {fh2, (sid2)} + + Figure 3 + + This next example is similar to the second in how it passes the + stateid sid2 generated by the LOCK operation to the next READ + operation. This allows the client to explicitly surround a single + I/O operation with a lock and its appropriate stateid to guarantee + correctness with other client locks. The example also shows how + SAVEFH and RESTOREFH can save and later re-use a filehandle and + stateid, passing them as the current filehandle and stateid to a READ + operation. + + PUTFH fh1 - -> {fh1, (0, 0)} + LOCK 0, 1024, (sid1) {fh1, (sid1)} -> {fh1, (sid2)} + READ (1, 0), 0, 1024 {fh1, (sid2)} -> {fh1, (sid2)} + LOCKU 0, 1024, (1, 0) {fh1, (sid2)} -> {fh1, (sid3)} + SAVEFH {fh1, (sid3)} -> {fh1, (sid3)} + + PUTFH fh2 {fh1, (sid3)} -> {fh2, (0, 0)} + WRITE (1, 0), 0, 1024 {fh2, (0, 0)} -> {fh2, (0, 0)} + + RESTOREFH {fh2, (0, 0)} -> {fh1, (sid3)} + READ (1, 0), 1024, 1024 {fh1, (sid3)} -> {fh1, (sid3)} + + Figure 4 + + The final example shows a disallowed use of the current stateid. The + client is attempting to implicitly pass anonymous special stateid, + (0,0) to the READ operation. The server MUST return + NFS4ERR_BAD_STATEID in the reply to the READ operation. + + PUTFH fh1 - -> {fh1, (0, 0)} + READ (1, 0), 0, 1024 {fh1, (0, 0)} -> NFS4ERR_BAD_STATEID + + Figure 5 + 15.2.5. IMPLEMENTATION Since an error of any type may occur after only a portion of the operations have been evaluated, the client must be prepared to recover from any failure. 
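The current filehandle and current stateid rules illustrated in Figures 2 through 5 above can be modeled with a small sketch (the class and method names are invented; only the transition rules come from the text):

```python
# Invented state model for the rules above: installing a new current
# filehandle resets the current stateid to the special all-zeros value,
# an operation that returns a stateid installs it, and SAVEFH/RESTOREFH
# move the (filehandle, stateid) pair as a set.
ZERO_SID = (0, 0)


class CompoundState:
    def __init__(self):
        self.cfh, self.sid = None, ZERO_SID
        self.saved = (None, ZERO_SID)

    def set_cfh(self, fh):
        # PUTFH/PUTROOTFH/PUTPUBFH, LOOKUP, ...: new current filehandle
        # and no stateid returned, so the current stateid becomes (0, 0).
        self.cfh, self.sid = fh, ZERO_SID

    def returns_stateid(self, sid):
        # OPEN, LOCK, CLOSE, ...: MUST install the returned stateid.
        self.sid = sid

    def savefh(self):
        self.saved = (self.cfh, self.sid)

    def restorefh(self):
        self.cfh, self.sid = self.saved
```

An operation such as READ, which uses a stateid argument but returns none, simply leaves the pair unchanged.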
If the source of an NFS4ERR_RESOURCE error was a complex or lengthy set of operations, it is likely that if the number of operations were reduced the server would be able to evaluate them successfully. Therefore, the client is responsible for dealing with this type of complexity in recovery. + The client SHOULD NOT construct a COMPOUND which mixes operations for + different client IDs. + 15.3. Operation 3: ACCESS - Check Access Rights 15.3.1. SYNOPSIS (cfh), accessreq -> supported, accessrights 15.3.2. ARGUMENT const ACCESS4_READ = 0x00000001; const ACCESS4_LOOKUP = 0x00000002; @@ -10194,35 +10823,35 @@ In general, it is not sufficient for the client to attempt to deduce access permissions by inspecting the uid, gid, and mode fields in the file attributes or by attempting to interpret the contents of the ACL attribute. This is because the server may perform uid or gid mapping or enforce additional access control restrictions. It is also possible that the server may not be in the same ID space as the client. In these cases (and perhaps others), the client cannot reliably perform an access check with only current file attributes. - In the NFS version 2 protocol, the only reliable way to determine - whether an operation was allowed was to try it and see if it - succeeded or failed. Using the ACCESS operation in the NFS version 4 - protocol, the client can ask the server to indicate whether or not - one or more classes of operations are permitted. The ACCESS - operation is provided to allow clients to check before doing a series - of operations which will result in an access failure. The OPEN - operation provides a point where the server can verify access to the - file object and method to return that information to the client. The - ACCESS operation is still useful for directory operations or for use - in the case the UNIX API "access" is used on the client. 
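The construction of an ACCESS reply can be sketched as follows (a hedged illustration: the ACCESS4_* constants appear in the ARGUMENT above, while the bitmasks supplied by the file system and the function name are assumptions):

```python
# Sketch of server-side ACCESS processing: the reply reports which of
# the requested bits the server can verify ("supported") and, of those,
# which are granted ("access") for the requesting principal.
ACCESS4_READ   = 0x00000001
ACCESS4_LOOKUP = 0x00000002


def access_reply(accessreq: int, verifiable: int, granted: int):
    """verifiable/granted: bitmasks the server derived for this object
    and principal (hypothetical inputs)."""
    supported = accessreq & verifiable   # only echo bits we can decide
    access = supported & granted         # never grant an unreported bit
    return supported, access
```

A bit absent from the "supported" mask tells the client nothing either way, which is why clients must still be prepared for access failures on subsequent operations.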
+ In the NFSv2 protocol, the only reliable way to determine whether an + operation was allowed was to try it and see if it succeeded or + failed. Using the ACCESS operation in the NFSv4 protocol, the client + can ask the server to indicate whether or not one or more classes of + operations are permitted. The ACCESS operation is provided to allow + clients to check before doing a series of operations which will + result in an access failure. The OPEN operation provides a point + where the server can verify access to the file object and method to + return that information to the client. The ACCESS operation is still + useful for directory operations or for use in the case the UNIX API + "access" is used on the client. The information returned by the server in response to an ACCESS call is not permanent. It was correct at the exact time that the server - performed the checks, but not necessarily afterwards. The server can + performed the checks, but not necessarily afterward. The server can revoke access permission at any time. The client should use the effective credentials of the user to build the authentication information in the ACCESS request used to determine access rights. It is the effective user and group credentials that are used in subsequent read and write operations. Many implementations do not directly support the ACCESS4_DELETE permission. Operating systems like UNIX will ignore the ACCESS4_DELETE bit if set on an access request on a non-directory @@ -10260,25 +10889,25 @@ 15.4.4. DESCRIPTION The CLOSE operation releases share reservations for the regular or named attribute file as specified by the current filehandle. The share reservations and other state information released at the server as a result of this CLOSE is only associated with the supplied stateid. The sequence id provides for the correct ordering. State associated with other OPENs is not affected. - If record locks are held, the client SHOULD release all locks before - issuing a CLOSE. 
The server MAY free all outstanding locks on CLOSE - but some servers may not support the CLOSE of a file that still has - record locks held. The server MUST return failure if any locks would - exist after the CLOSE. + If byte-range locks are held, the client SHOULD release all locks + before issuing a CLOSE. The server MAY free all outstanding locks on + CLOSE but some servers may not support the CLOSE of a file that still + has byte-range locks held. The server MUST return failure if any + locks would exist after the CLOSE. On success, the current filehandle retains its value. 15.4.5. IMPLEMENTATION Even though CLOSE returns a stateid, this stateid is not useful to the client and should be treated as deprecated. CLOSE "shuts down" the state associated with all OPENs for the file by a single open_owner. As noted above, CLOSE will either release all file locking state or return an error. Therefore, the stateid returned by @@ -10333,31 +10962,31 @@ verifier at each server event or instantiation that may lead to a loss of uncommitted data. Most commonly this occurs when the server is rebooted; however, other events at the server may result in uncommitted data loss as well. On success, the current filehandle retains its value. 15.5.5. IMPLEMENTATION The COMMIT operation is similar in operation and semantics to the - POSIX fsync(2) system call that synchronizes a file's state with the - disk (file data and metadata is flushed to disk or stable storage). - COMMIT performs the same operation for a client, flushing any - unsynchronized data and metadata on the server to the server's disk - or stable storage for the specified file. Like fsync(2), it may be - that there is some modified data or no modified data to synchronize. - The data may have been synchronized by the server's normal periodic - buffer synchronization activity. COMMIT should return NFS4_OK, - unless there has been an unexpected error. 
+ POSIX fsync() [36] system call that synchronizes a file's state with + the disk (file data and metadata is flushed to disk or stable + storage). COMMIT performs the same operation for a client, flushing + any unsynchronized data and metadata on the server to the server's + disk or stable storage for the specified file. Like fsync(), it may + be that there is some modified data or no modified data to + synchronize. The data may have been synchronized by the server's + normal periodic buffer synchronization activity. COMMIT should + return NFS4_OK, unless there has been an unexpected error. - COMMIT differs from fsync(2) in that it is possible for the client to + COMMIT differs from fsync() in that it is possible for the client to flush a range of the file (most likely triggered by a buffer- reclamation scheme on the client before file has been completely written). The server implementation of COMMIT is reasonably simple. If the server receives a full file COMMIT request, that is starting at offset 0 and count 0, it should do the equivalent of fsync()'ing the file. Otherwise, it should arrange to have the cached data in the range specified by offset and count to be flushed to stable storage. In both cases, any metadata associated with the file must be flushed @@ -10487,26 +11116,26 @@ MUST derive the owner (or the owner ACE). This would typically be from the principal indicated in the RPC credentials of the call, but the server's operating environment or filesystem semantics may dictate other methods of derivation. Similarly, if createattrs includes neither the group attribute nor a group ACE, and if the server's filesystem both supports and requires the notion of a group attribute (or group ACE), the server MUST derive the group attribute (or the corresponding owner ACE) for the file. 
This could be from the RPC call's credentials, such as the group principal if the credentials include it (such as with AUTH_SYS), from the group - identifier associated with the principal in the credentials (for - e.g., POSIX systems have a passwd database that has the group - identifier for every user identifier), inherited from directory the - object is created in, or whatever else the server's operating - environment or filesystem semantics dictate. This applies to the - OPEN operation too. + identifier associated with the principal in the credentials (e.g., + POSIX systems have a user database [37] that has the group identifier + for every user identifier), inherited from directory the object is + created in, or whatever else the server's operating environment or + filesystem semantics dictate. This applies to the OPEN operation + too. Conversely, it is possible the client will specify in createattrs an owner attribute or group attribute or ACL that the principal indicated the RPC call's credentials does not have permissions to create files for. The error to be returned in this instance is NFS4ERR_PERM. This applies to the OPEN operation too. 15.6.5. IMPLEMENTATION If the client desires to set attribute values after the create, a @@ -10612,35 +11241,63 @@ 15.9.4. DESCRIPTION The GETATTR operation will obtain attributes for the filesystem object specified by the current filehandle. The client sets a bit in the bitmap argument for each attribute value that it would like the server to return. The server returns an attribute bitmap that indicates the attribute values for which it was able to return, followed by the attribute values ordered lowest attribute number first. - The server must return a value for each attribute that the client + The server MUST return a value for each attribute that the client requests if the attribute is supported by the server. 
If the server does not support an attribute or cannot approximate a useful value - then it must not return the attribute value and must not set the - attribute bit in the result bitmap. The server must return an error - if it supports an attribute but cannot obtain its value. In that - case no attribute values will be returned. + then it MUST NOT return the attribute value and MUST NOT set the + attribute bit in the result bitmap. The server MUST return an error + if it supports an attribute on the target but cannot obtain its + value. In that case no attribute values will be returned. - All servers must support the mandatory attributes as specified in the - section "File Attributes". + File systems which are absent should be treated as having support for + a very small set of attributes as described in GETATTR Within an + Absent File System (Section 7.3.1), even if previously, when the file + system was present, more attributes were supported. + + All servers MUST support the REQUIRED attributes as specified in the + section File Attributes (Section 5), for all file systems, with the + exception of absent file systems. On success, the current filehandle retains its value. 15.9.5. IMPLEMENTATION + Suppose there is a OPEN_DELEGATE_WRITE delegation held by another + client for file in question and size and/or change are among the set + of attributes being interrogated. The server has two choices. + + First, the server can obtain the actual current value of these + attributes from the client holding the delegation by using the + CB_GETATTR callback. Second, the server, particularly when the + delegated client is unresponsive, can recall the delegation in + question. The GETATTR MUST NOT proceed until one of the following + occurs: + + o The requested attribute values are returned in the response to + CB_GETATTR. + + o The OPEN_DELEGATE_WRITE delegation is returned. + + o The OPEN_DELEGATE_WRITE delegation is revoked. 
+ Unless one of the above happens very quickly, one or more + NFS4ERR_DELAY errors will be returned while a delegation is + outstanding. + 15.10. Operation 10: GETFH - Get Current Filehandle 15.10.1. SYNOPSIS (cfh) -> filehandle 15.10.2. ARGUMENT /* CURRENT_FH: */ void; @@ -10806,21 +11463,21 @@ case NFS4_OK: LOCK4resok resok4; case NFS4ERR_DENIED: LOCK4denied denied; default: void; }; 15.12.4. DESCRIPTION - The LOCK operation requests a record lock for the byte range + The LOCK operation requests a byte-range lock for the byte range specified by the offset and length parameters. The lock type is also specified to be one of the nfs_lock_type4s. If this is a reclaim request, the reclaim parameter will be TRUE. Bytes in a file may be locked even if those bytes are not currently allocated to the file. To lock the file from a specific offset through the end-of-file (no matter how long the file actually is) use a length field with all bits set to 1 (one). If the length is zero, or if a length which is not all bits set to one is specified, and length when added to the offset exceeds the maximum 64-bit unsigned @@ -10865,51 +11522,53 @@ only LOCK for ranges that do not include any bytes already locked by that lock_owner and LOCKU of locks held by that lock_owner (specifying an exactly-matching range and type). Similarly, when the client makes a lock request that amounts to upgrading (changing from a read lock to a write lock) or downgrading (changing from write lock to a read lock) an existing byte-range lock, and the server does not support such a lock, the server will return NFS4ERR_LOCK_NOTSUPP. Such operations may not perfectly reflect the required semantics in the face of conflicting lock requests from other clients. + When a client holds an OPEN_DELEGATE_WRITE delegation, the client + holding that delegation is assured that there are no opens by other + clients. Thus, there can be no conflicting LOCK operations from such + clients.
Therefore, the client may be handling locking requests + locally, without doing LOCK operations on the server. If it does + that, it must be prepared to update the lock status on the server, by + sending appropriate LOCK and LOCKU operations before returning the + delegation. + + When one or more clients hold OPEN_DELEGATE_READ delegations, any + LOCK operation where the server is implementing mandatory locking + semantics MUST result in the recall of all such delegations. The + LOCK operation may not be granted until all such delegations are + returned or revoked. Except where this happens very quickly, one or + more NFS4ERR_DELAY errors will be returned to requests made while the + delegation remains outstanding. + The locker argument specifies the lock-owner that is associated with the LOCK request. The locker4 structure is a switched union that indicates whether the client has already created byte-range locking state associated with the current open file and lock-owner. In the case in which it has, the argument is just a stateid representing the set of locks associated with that open file and lock-owner, together - with a lock_seqid value which MAY be any value and MUST be ignored by + with a lock_seqid value that MAY be any value and MUST be ignored by the server. In the case where no byte-range locking state has been established, or the client does not have the stateid available, the argument contains the stateid of the open file with which this lock is to be associated, together with the lock-owner with which the lock is to be associated. The open_to_lock_owner case covers the very first lock done by a lock-owner for a given open file and offers a method to use the established state of the open_stateid to transition to the use of a lock stateid. 
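The choice between the two arms of the locker4 switched union described above can be illustrated with a short sketch (Python is used purely for illustration; the helper name make_locker and the dictionary layout are invented here, not part of the XDR definition):

```python
def make_locker(have_lock_state, open_stateid=None, open_seqid=0,
                lock_owner=None, lock_stateid=None):
    """Build the locker4 discriminated union for a LOCK request (sketch).

    If byte-range locking state already exists for this open file and
    lock-owner, send the existing lock stateid; the accompanying
    lock_seqid MAY be any value, since the server MUST ignore it.
    Otherwise use the open_to_lock_owner case: the open stateid plus
    the lock-owner with which the lock is to be associated.
    """
    if have_lock_state:
        return {"new_lock_owner": False,
                "lock_stateid": lock_stateid,
                "lock_seqid": 0}      # any value; ignored by the server
    return {"new_lock_owner": True,
            "open_seqid": open_seqid,
            "open_stateid": open_stateid,
            "lock_owner": lock_owner}
```

The first lock by a lock-owner on an open file would take the second branch; all later locks by that owner on that file would take the first.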
- The following fields of the locker parameter MAY be set to any value - by the client and MUST be ignored by the server: - - o The clientid field of the lock_owner field of the open_owner field - (locker.open_owner.lock_owner.clientid). The reason the server - MUST ignore the clientid field is that the server MUST derive the - client ID from the session ID from the SEQUENCE operation of the - COMPOUND request. - - o The open_seqid and lock_seqid fields of the open_owner field - (locker.open_owner.open_seqid and locker.open_owner.lock_seqid). - - o The lock_seqid field of the lock_owner field - (locker.lock_owner.lock_seqid). - 15.13. Operation 13: LOCKT - Test For Lock 15.13.1. SYNOPSIS (cfh) locktype, offset, length, owner -> {void, NFS4ERR_DENIED -> owner} 15.13.2. ARGUMENT struct LOCKT4args { @@ -10952,29 +11611,33 @@ If the server is unable to determine the exact offset and length of the conflicting lock, the same offset and length that were provided in the arguments should be returned in the denied results. Section 9 contains further discussion of the file locking mechanisms. LOCKT uses a lock_owner4 rather than a stateid4, as is used in LOCK to identify the owner. This is because the client does not have to open the file to test for the existence of a lock, so a stateid may not be available. - The test for conflicting locks should exclude locks for the current + The test for conflicting locks SHOULD exclude locks for the current lockowner. Note that since such locks are not examined the possible existence of overlapping ranges may not affect the results of LOCKT. If the server does examine locks that match the lockowner for the purpose of range checking, NFS4ERR_LOCK_RANGE may be returned. In the event that it returns NFS4_OK, clients may do a LOCK and receive NFS4ERR_LOCK_RANGE on the LOCK request because of the flexibility provided to the server.
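The conflict test just described can be sketched roughly as follows (an illustration only; the helper name lockt_check is invented here, and the all-ones length convention is ignored for brevity):

```python
READ_LT, WRITE_LT = "READ_LT", "WRITE_LT"

def lockt_check(held_locks, owner, locktype, offset, length):
    """Sketch of a server's LOCKT processing.

    held_locks: list of (owner, offset, length, locktype) tuples.
    Locks belonging to the requesting lock-owner are excluded, as the
    test SHOULD do.  Returns None when no conflict exists, otherwise
    the conflicting lock (as LOCK4denied would describe it).
    """
    end = offset + length
    for h_owner, h_off, h_len, h_type in held_locks:
        if h_owner == owner:
            continue                      # exclude the requester's own locks
        overlaps = h_off < end and offset < h_off + h_len
        if overlaps and (locktype == WRITE_LT or h_type == WRITE_LT):
            return (h_owner, h_off, h_len, h_type)
    return None
```

Two read locks never conflict; any overlap involving a write lock held by a different owner does.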
+ When a client holds an OPEN_DELEGATE_WRITE delegation, it may choose + (see Section 15.12.5) to handle LOCK requests locally. In such a + case, LOCKT requests will similarly be handled locally. + 15.14. Operation 14: LOCKU - Unlock File 15.14.1. SYNOPSIS (cfh) type, seqid, stateid, offset, length -> stateid 15.14.2. ARGUMENT struct LOCKU4args { /* CURRENT_FH: file */ @@ -10989,45 +11652,49 @@ union LOCKU4res switch (nfsstat4 status) { case NFS4_OK: stateid4 lock_stateid; default: void; }; 15.14.4. DESCRIPTION - The LOCKU operation unlocks the record lock specified by the + The LOCKU operation unlocks the byte-range lock specified by the parameters. The client may set the locktype field to any value that is legal for the nfs_lock_type4 enumerated type, and the server MUST accept any legal value for locktype. Any legal value for locktype has no effect on the success or failure of the LOCKU operation. The ranges are specified as for LOCK. The NFS4ERR_INVAL and NFS4ERR_BAD_RANGE errors are returned under the same circumstances as for LOCK. On success, the current filehandle retains its value. 15.14.5. IMPLEMENTATION If the area to be unlocked does not correspond exactly to a lock actually held by the lockowner the server may return the error NFS4ERR_LOCK_RANGE. This includes the case in which the area is not locked, where the area is a sub-range of the area locked, where it overlaps the area locked without matching exactly or the area specified includes multiple locks held by the lockowner. In all of - these cases, allowed by POSIX locking semantics, a client receiving - this error, should if it desires support for such operations, - simulate the operation using LOCKU on ranges corresponding to locks - it actually holds, possibly followed by LOCK requests for the sub- - ranges not being unlocked.
+ these cases, allowed by POSIX locking [35] semantics, a client + receiving this error should, if it desires support for such + operations, simulate the operation using LOCKU on ranges + corresponding to locks it actually holds, possibly followed by LOCK + requests for the sub-ranges not being unlocked. + + When a client holds an OPEN_DELEGATE_WRITE delegation, it may choose + (see Section 15.12.5) to handle LOCK requests locally. In such a + case, LOCKU requests will similarly be handled locally. 15.15. Operation 15: LOOKUP - Lookup Filename 15.15.1. SYNOPSIS (cfh), component -> (cfh) 15.15.2. ARGUMENT struct LOOKUP4args { @@ -11065,39 +11732,39 @@ filehandle): PUTFH (directory filehandle) LOOKUP "pub" GETFH LOOKUP "foo" GETFH LOOKUP "bar" GETFH - NFS version 4 servers depart from the semantics of previous NFS - versions in allowing LOOKUP requests to cross mountpoints on the - server. The client can detect a mountpoint crossing by comparing the - fsid attribute of the directory with the fsid attribute of the - directory looked up. If the fsids are different then the new - directory is a server mountpoint. UNIX clients that detect a - mountpoint crossing will need to mount the server's filesystem. This - needs to be done to maintain the file object identity checking - mechanisms common to UNIX clients. + NFSv4 servers depart from the semantics of previous NFS versions in + allowing LOOKUP requests to cross mountpoints on the server. The + client can detect a mountpoint crossing by comparing the fsid + attribute of the directory with the fsid attribute of the directory + looked up. If the fsids are different then the new directory is a + server mountpoint. UNIX clients that detect a mountpoint crossing + will need to mount the server's filesystem. This needs to be done to + maintain the file object identity checking mechanisms common to UNIX + clients.
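The fsid-based mountpoint detection described above can be sketched as follows (an illustrative client-side helper, invented here, with fsid4 modeled as a (major, minor) tuple):

```python
def find_mountpoints(lookup_results):
    """Report server mountpoint crossings along a LOOKUP chain (sketch).

    lookup_results: ordered (component, fsid) pairs obtained by
    interleaving LOOKUP with GETATTR of the fsid attribute.  A
    component whose fsid differs from its predecessor's lies in a
    different server filesystem, so a mountpoint was crossed there.
    """
    mountpoints, prev = [], None
    for component, fsid in lookup_results:
        if prev is not None and fsid != prev:
            mountpoints.append(component)
        prev = fsid
    return mountpoints
```

A UNIX client would establish a new mount for each component this reports, preserving its file object identity checks.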
Servers that limit NFS access to "shares" or "exported" filesystems should provide a pseudo-filesystem into which the exported filesystems can be integrated, so that clients can browse the server's name space. The clients' view of a pseudo filesystem will be limited to paths that lead to exported filesystems. Note: previous versions of the protocol assigned special semantics to - the names "." and "..". NFS version 4 assigns no special semantics - to these names. The LOOKUPP operator must be used to lookup a parent + the names "." and "..". NFSv4 assigns no special semantics to these + names. The LOOKUPP operator must be used to look up a parent directory. Note that this operation does not follow symbolic links. The client is responsible for all parsing of filenames including filenames that are modified by symbolic links encountered during the lookup process. If the current filehandle supplied is not a directory but a symbolic link, the error NFS4ERR_SYMLINK is returned as the error. For all other non-directory file types, the error NFS4ERR_NOTDIR is returned. @@ -11393,45 +12064,45 @@ because the server may maintain the state indefinitely as long as another client does not attempt to make a conflicting access to the same file. 15.18.5. DESCRIPTION The OPEN operation creates and/or opens a regular file in a directory with the provided name. If the file does not exist at the server and creation is desired, specification of the method of creation is provided by the openhow parameter. The client has the choice of - three creation methods: UNCHECKED, GUARDED, or EXCLUSIVE. + three creation methods: UNCHECKED4, GUARDED4, or EXCLUSIVE4. If the current filehandle is a named attribute directory, OPEN will then create or open a named attribute file. Note that exclusive create of a named attribute is not supported. If the createmode is EXCLUSIVE4 and the current filehandle is a named attribute directory, the server will return NFS4ERR_INVAL.
- UNCHECKED means that the file should be created if a file of that + UNCHECKED4 means that the file should be created if a file of that name does not exist and encountering an existing regular file of that name is not an error. For this type of create, createattrs specifies the initial set of attributes for the file. The set of attributes may include any writable attribute valid for regular files. When an - UNCHECKED create encounters an existing file, the attributes + UNCHECKED4 create encounters an existing file, the attributes specified by createattrs are not used, except that when a size of - zero is specified, the existing file is truncated. If GUARDED is + zero is specified, the existing file is truncated. If GUARDED4 is specified, the server checks for the presence of a duplicate object by name before performing the create. If a duplicate exists, an error of NFS4ERR_EXIST is returned as the status. If the object does - not exist, the request is performed as described for UNCHECKED. For - each of these cases (UNCHECKED and GUARDED) where the operation is + not exist, the request is performed as described for UNCHECKED4. For + each of these cases (UNCHECKED4 and GUARDED4) where the operation is successful, the server will return to the client an attribute mask signifying which attributes were successfully set for the object. - EXCLUSIVE specifies that the server is to follow exclusive creation + EXCLUSIVE4 specifies that the server is to follow exclusive creation semantics, using the verifier to ensure exclusive creation of the target. The server should check for the presence of a duplicate object by name. If the object does not exist, the server creates the object and stores the verifier with the object. If the object does exist and the stored verifier matches the client provided verifier, the server uses the existing object as the newly created object. If the stored verifier does not match, then an error of NFS4ERR_EXIST is returned.
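The EXCLUSIVE4 verifier comparison can be sketched as follows (a simplification for illustration only: the directory is modeled as a dictionary, attribute handling is omitted, and the helper name exclusive_create is invented here):

```python
NFS4_OK, NFS4ERR_EXIST = "NFS4_OK", "NFS4ERR_EXIST"

def exclusive_create(directory, name, verifier):
    """Server-side EXCLUSIVE4 create semantics (illustrative sketch).

    The verifier is stored with the newly created object.  A later
    create for the same name succeeds only when the stored verifier
    matches the one presented, i.e., when it is a retransmission of
    the original request.
    """
    obj = directory.get(name)
    if obj is None:
        directory[name] = {"verifier": verifier}   # kept in stable storage
        return NFS4_OK
    if obj["verifier"] == verifier:
        return NFS4_OK            # retransmission: reuse the existing object
    return NFS4ERR_EXIST
```

The matching-verifier branch is what makes a retransmitted exclusive create idempotent.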
No attributes may be provided in this case, since the server may use an attribute of the target object to store the verifier. If the server uses an attribute to store the exclusive @@ -11500,35 +12171,38 @@ reclaim (CLAIM_PREVIOUS) case, in which a delegation type is claimed. In this case, delegation will always be granted, although the server may specify an immediate recall in the delegation structure. The rflags returned by a successful OPEN allow the server to return information governing how the open file is to be handled. OPEN4_RESULT_CONFIRM indicates that the client MUST execute an OPEN_CONFIRM operation before using the open file. OPEN4_RESULT_LOCKTYPE_POSIX indicates the server's file locking - behavior supports the complete set of Posix locking techniques. From - this the client can choose to manage file locking state in a way to - handle a mis-match of file locking management. + behavior supports the complete set of POSIX locking techniques [35]. + From this the client can choose to manage file locking state in a way + to handle a mis-match of file locking management. If the component is of zero length, NFS4ERR_INVAL will be returned. The component is also subject to the normal UTF-8, character support, and name checks. See Section 12.3 for further discussion. When an OPEN is done and the specified open_owner already has the resulting filehandle open, the result is to "OR" together the new share and deny status together with the existing status. In this case, only a single CLOSE need be done, even though multiple OPENs were completed. When such an OPEN is done, checking of share reservations for the new OPEN proceeds normally, with no exception - for the existing OPEN held by the same owner. + for the existing OPEN held by the same owner. In this case, the + stateid returned has an "other" field that matches that of the + previous open, while the "seqid" field is incremented to reflect the + changed status due to the new open.
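The combining rule for a repeated OPEN by the same open-owner might look like the sketch below (the record layout is invented for illustration; the share bits use the protocol's OPEN4_SHARE_ACCESS_READ = 1 and OPEN4_SHARE_ACCESS_WRITE = 2 encoding, so BOTH = 3):

```python
OPEN4_SHARE_ACCESS_READ = 0x1
OPEN4_SHARE_ACCESS_WRITE = 0x2

def reopen(state, share_access, share_deny):
    """Combine a new OPEN with existing state for the same owner (sketch).

    The new share and deny bits are OR'ed into the existing ones; the
    stateid keeps its "other" field and increments its "seqid", so a
    single CLOSE suffices for all the OPENs.
    """
    state["share_access"] |= share_access
    state["share_deny"] |= share_deny
    state["stateid"]["seqid"] += 1     # "other" is left unchanged
    return state
```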
If the underlying filesystem at the server is only accessible in a read-only mode and the OPEN request has specified ACCESS_WRITE or ACCESS_BOTH, the server will return NFS4ERR_ROFS to indicate a read- only filesystem. As with the CREATE operation, the server MUST derive the owner, owner ACE, group, or group ACE if any of the four attributes are required and supported by the server's filesystem. For an OPEN with the EXCLUSIVE4 createmode, the server has no choice, since such OPEN @@ -11537,28 +12211,28 @@ corresponding ACEs) that the principal in the RPC call's credentials does not have authorization to create files for, then the server may return NFS4ERR_PERM. In the case of an OPEN which specifies a size of zero (e.g., truncation) and the file has named attributes, the named attributes are left as is. They are not removed. 15.18.6. IMPLEMENTATION - The OPEN operation contains support for EXCLUSIVE create. The - mechanism is similar to the support in NFS version 3 [14]. As in NFS - version 3, this mechanism provides reliable exclusive creation. - Exclusive create is invoked when the how parameter is EXCLUSIVE. In - this case, the client provides a verifier that can reasonably be - expected to be unique. A combination of a client identifier, perhaps - the client network address, and a unique number generated by the - client, perhaps the RPC transaction identifier, may be appropriate. + The OPEN operation contains support for EXCLUSIVE4 create. The + mechanism is similar to the support in NFSv3 [14]. As in NFSv3, this + mechanism provides reliable exclusive creation. Exclusive create is + invoked when the how parameter is EXCLUSIVE4. In this case, the + client provides a verifier that can reasonably be expected to be + unique. A combination of a client identifier, perhaps the client + network address, and a unique number generated by the client, perhaps + the RPC transaction identifier, may be appropriate.
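One way a client might build such a verifier is sketched below (the exact ingredients are the implementation's choice; this particular mix of an address checksum, the RPC xid, and the clock is only an example):

```python
import struct
import time
import zlib

def make_create_verifier(client_addr: bytes, xid: int) -> bytes:
    """Produce the 8-octet verifier4 for an EXCLUSIVE4 create (sketch).

    Uniqueness, not secrecy, is the goal: a CRC of the client address
    fills the first four octets, and the RPC transaction identifier
    mixed with the current time fills the last four.
    """
    return struct.pack(">II",
                       zlib.crc32(client_addr) & 0xFFFFFFFF,
                       (xid ^ int(time.time())) & 0xFFFFFFFF)
```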
If the object does not exist, the server creates the object and stores the verifier in stable storage. For filesystems that do not provide a mechanism for the storage of arbitrary file attributes, the server may use one or more elements of the object meta-data to store the verifier. The verifier must be stored in stable storage to prevent erroneous failure on retransmission of the request. It is assumed that an exclusive create is being performed because exclusive semantics are critical to the application. Because of the expected usage, exclusive CREATE does not rely solely on the normally volatile @@ -11587,29 +12261,29 @@ Once the client has performed a successful exclusive create, it must issue a SETATTR to set the correct object attributes. Until it does so, it should not rely upon any of the object attributes, since the server implementation may need to overload object meta-data to store the verifier. The subsequent SETATTR must not occur in the same COMPOUND request as the OPEN. This separation will guarantee that the exclusive create mechanism will continue to function properly in the face of retransmission of the request. - Use of the GUARDED attribute does not provide exactly-once semantics. - In particular, if a reply is lost and the server does not detect the - retransmission of the request, the operation can fail with + Use of the GUARDED4 attribute does not provide exactly-once + semantics. In particular, if a reply is lost and the server does not + detect the retransmission of the request, the operation can fail with NFS4ERR_EXIST, even though the create was performed successfully. The client would use this behavior in the case that the application has not requested an exclusive create but has asked to have the file truncated when the file is opened. 
In the case of the client timing - out and retransmitting the create request, the client can use GUARDED - to prevent against a sequence like: create, write, create + out and retransmitting the create request, the client can use + GUARDED4 to prevent a sequence like: create, write, create (retransmitted) from occurring. For SHARE reservations, the client must specify a value for share_access that is one of READ, WRITE, or BOTH. For share_deny, the client must specify one of NONE, READ, WRITE, or BOTH. If the client fails to do this, the server must return NFS4ERR_INVAL. Based on the share_access value (READ, WRITE, or BOTH) the client should check that the requester has the proper access rights to perform the specified operation. This would generally be the results @@ -11623,20 +12297,46 @@ version 4 protocol does not impose any requirement that READs and WRITEs issued for an open file have the same credentials as the OPEN itself, the server still must do appropriate access checking on the READs and WRITEs themselves. If the component provided to OPEN is a symbolic link, the error NFS4ERR_SYMLINK will be returned to the client. If the current filehandle is not a directory, the error NFS4ERR_NOTDIR will be returned. + If a COMPOUND contains an OPEN which establishes an + OPEN_DELEGATE_WRITE delegation, then a subsequent GETATTR inside that + COMPOUND SHOULD NOT result in a CB_GETATTR to the client. The server + SHOULD understand the GETATTR to be for the same client ID and avoid + querying the client, which will not be able to respond. This + sequence of OPEN, GETATTR SHOULD be understood as an atomic retrieval + of the initial size and change attribute. Further, the client SHOULD + NOT construct a COMPOUND which mixes operations for different client + IDs. + +15.18.7. Warning to Client Implementors + + OPEN resembles LOOKUP in that it generates a filehandle for the + client to use. Unlike LOOKUP though, OPEN creates server state on + the filehandle.
In normal circumstances, the client can only release + this state with a CLOSE operation. CLOSE uses the current filehandle + to determine which file to close. Therefore, the client MUST follow + every OPEN operation with a GETFH operation in the same COMPOUND + procedure. This will supply the client with the filehandle such that + CLOSE can be used appropriately. + + Simply waiting for the lease on the file to expire is insufficient + because the server may maintain the state indefinitely as long as + another client does not attempt to make a conflicting access to the + same file. + 15.19. Operation 19: OPENATTR - Open Named Attribute Directory 15.19.1. SYNOPSIS (cfh) createdir -> (cfh) 15.19.2. ARGUMENT struct OPENATTR4args { /* CURRENT_FH: object */ @@ -11714,42 +12414,41 @@ passed to the OPEN operation. If the server receives an unexpected sequence id with respect to the original open, then the server assumes that the client will not confirm the original OPEN and all state associated with the original OPEN is released by the server. On success, the current filehandle retains its value. 15.20.5. IMPLEMENTATION A given client might generate many open_owner4 data structures for a - given clientid. The client will periodically either dispose of its + given client ID. The client will periodically either dispose of its open_owner4s or stop using them for indefinite periods of time. The - latter situation is why the NFS version 4 protocol does not have an - explicit operation to exit an open_owner4: such an operation is of no - use in that situation. Instead, to avoid unbounded memory use, the - server needs to implement a strategy for disposing of open_owner4s - that have no current open state for any files and have not been used - recently. The time period used to determine when to dispose of - open_owner4s is an implementation choice. 
The time period should - certainly be no less than the lease time plus any grace period the - server wishes to implement beyond a lease time. The OPEN_CONFIRM - operation allows the server to safely dispose of unused open_owner4 - data structures. + latter situation is why the NFSv4 protocol does not have an explicit + operation to exit an open_owner4: such an operation is of no use in + that situation. Instead, to avoid unbounded memory use, the server + needs to implement a strategy for disposing of open_owner4s that have + no current open state for any files and have not been used recently. + The time period used to determine when to dispose of open_owner4s is + an implementation choice. The time period should certainly be no + less than the lease time plus any grace period the server wishes to + implement beyond a lease time. The OPEN_CONFIRM operation allows the + server to safely dispose of unused open_owner4 data structures. In the case that a client issues an OPEN operation and the server no longer has a record of the open_owner4, the server needs to ensure that this is a new OPEN and not a replay or retransmission. Servers must not require confirmation on OPENs that grant delegations - or are doing reclaim operations. See Section 9.1.8 for details. The + or are doing reclaim operations. See Section 9.1.9 for details. The server can easily avoid this by noting whether it has disposed of one - open_owner4 for the given clientid. If the server does not support + open_owner4 for the given client ID. If the server does not support delegation, it might simply maintain a single bit that notes whether any open_owner4 (for any client) has been disposed of. The server must hold unconfirmed OPEN state until one of three events occur. First, the client sends an OPEN_CONFIRM request with the appropriate sequence id and stateid within the lease period. In this case, the OPEN state on the server goes to confirmed, and the open_owner4 on the server is fully established. 
Second, the client sends another OPEN request with a sequence id that @@ -11817,20 +12516,24 @@ The share_access and share_deny bits specified in this operation replace the current ones for the specified open file. The share_access and share_deny bits specified must be exactly equal to the union of the share_access and share_deny bits specified for some subset of the OPENs in effect for current openowner on the current file. If that constraint is not respected, the error NFS4ERR_INVAL should be returned. Since share_access and share_deny bits are subsets of those already granted, it is not possible for this request to be denied because of conflicting share reservations. + As the OPEN_DOWNGRADE may change a file to be not-open-for-write and + a write byte-range lock might be held, the server may have to reject + the OPEN_DOWNGRADE with an NFS4ERR_LOCKS_HELD error. + On success, the current filehandle retains its value. 15.22. Operation 22: PUTFH - Set Current Filehandle 15.22.1. SYNOPSIS filehandle -> (cfh) 15.22.2. ARGUMENT @@ -11841,26 +12544,30 @@ 15.22.3. RESULT struct PUTFH4res { /* CURRENT_FH: */ nfsstat4 status; }; 15.22.4. DESCRIPTION Replaces the current filehandle with the filehandle provided as an - argument. + argument. Clears the current stateid. If the security mechanism used by the requester does not meet the requirements of the filehandle provided to this operation, the server MUST return NFS4ERR_WRONGSEC. + See Section 15.2.4.1 for more details on the current filehandle. + + See Section 15.2.4.2 for more details on the current stateid. + 15.22.5. IMPLEMENTATION Commonly used as the first operator in an NFS request to set the context for following operations. 15.23. Operation 23: PUTPUBFH - Set Public Filehandle 15.23.1. SYNOPSIS - -> (cfh) @@ -11877,59 +12584,59 @@ }; 15.23.4. DESCRIPTION Replaces the current filehandle with the filehandle that represents the public filehandle of the server's name space.
This filehandle may be different from the "root" filehandle which may be associated with some other directory on the server. The public filehandle represents the concepts embodied in [23], [24], - [35]. The intent for NFS version 4 is that the public filehandle + [38]. The intent for NFSv4 is that the public filehandle (represented by the PUTPUBFH operation) be used as a method of - providing WebNFS server compatibility with NFS versions 2 and 3. + providing WebNFS server compatibility with NFSv2 and NFSv3. The public filehandle and the root filehandle (represented by the PUTROOTFH operation) should be equivalent. If the public and root filehandles are not equivalent, then the public filehandle MUST be a descendant of the root filehandle. 15.23.5. IMPLEMENTATION Used as the first operator in an NFS request to set the context for following operations. - With the NFS version 2 and 3 public filehandle, the client is able to - specify whether the path name provided in the LOOKUP should be - evaluated as either an absolute path relative to the server's root or - relative to the public filehandle. [35] contains further discussion - of the functionality. With NFS version 4, that type of specification - is not directly available in the LOOKUP operation. The reason for - this is because the component separators needed to specify absolute - vs. relative are not allowed in NFS version 4. Therefore, the client - is responsible for constructing its request such that the use of - either PUTROOTFH or PUTPUBFH are used to signify absolute or relative + With the NFSv2 and 3 public filehandle, the client is able to specify + whether the path name provided in the LOOKUP should be evaluated as + either an absolute path relative to the server's root or relative to + the public filehandle. [38] contains further discussion of the + functionality. With NFSv4, that type of specification is not + directly available in the LOOKUP operation. 
The reason for this is + because the component separators needed to specify absolute vs. + relative are not allowed in NFSv4. Therefore, the client is + responsible for constructing its request such that the use of either + PUTROOTFH or PUTPUBFH are used to signify absolute or relative evaluation of an NFS URL respectively. - Note that there are warnings mentioned in [35] with respect to the + Note that there are warnings mentioned in [38] with respect to the use of absolute evaluation and the restrictions the server may place on that evaluation with respect to how much of its namespace has been - made available. These same warnings apply to NFS version 4. It is - likely, therefore that because of server implementation details, an - NFS version 3 absolute public filehandle lookup may behave - differently than an NFS version 4 absolute resolution. + made available. These same warnings apply to NFSv4. It is likely, + therefore, that because of server implementation details, an NFSv3 + absolute public filehandle lookup may behave differently than an + NFSv4 absolute resolution. - There is a form of security negotiation as described in [36] that + There is a form of security negotiation as described in [39] that uses the public filehandle as a method of employing SNEGO. This method - is not available with NFS version 4 as filehandles are not overloaded - with special meaning and therefore do not provide the same framework - as NFS versions 2 and 3. Clients should therefore use the security + is not available with NFSv4 as filehandles are not overloaded with + special meaning and therefore do not provide the same framework as + NFSv2 and NFSv3. Clients should therefore use the security negotiation mechanisms described in this RFC. 15.24. Operation 24: PUTROOTFH - Set Root Filehandle 15.24.1. SYNOPSIS - -> (cfh) 15.24.2. ARGUMENT @@ -11943,26 +12650,33 @@ }; 15.24.4.
DESCRIPTION Replaces the current filehandle with the filehandle that represents the root of the server's name space. From this filehandle a LOOKUP operation can locate any other filehandle on the server. This filehandle may be different from the "public" filehandle which may be associated with some other directory on the server. + PUTROOTFH also clears the current stateid. + + See Section 15.2.4.1 for more details on the current filehandle. + + See Section 15.2.4.2 for more details on the current stateid. + 15.24.5. IMPLEMENTATION Commonly used as the first operator in an NFS request to set the context for following operations. 15.25. Operation 25: READ - Read from File + 15.25.1. SYNOPSIS (cfh), stateid, offset, count -> eof, data 15.25.2. ARGUMENT struct READ4args { /* CURRENT_FH: file */ stateid4 stateid; offset4 offset; @@ -11995,24 +12709,25 @@ is returned with a data length set to 0 (zero) and eof is set to TRUE. The READ is subject to access permissions checking. If the client specifies a count value of 0 (zero), the READ succeeds and returns 0 (zero) bytes of data again subject to access permissions checking. The server may choose to return fewer bytes than specified by the client. The client needs to check for this condition and handle the condition appropriately. The stateid value for a READ request represents a value returned from - a previous record lock or share reservation request or the stateid - associated with a delegation. The stateid is used by the server to - verify that the associated share reservation and any record locks are - still valid and to update lease timeouts for the client. + a previous byte-range lock or share reservation request or the + stateid associated with a delegation. The stateid is used by the + server to verify that the associated share reservation and any byte- + range locks are still valid and to update lease timeouts for the + client. 
If the read ended at the end-of-file (formally, in a correctly formed READ request, if offset + count is equal to the size of the file), or the read request extends beyond the size of the file (if offset + count is greater than the size of the file), eof is returned as TRUE; otherwise it is FALSE. A successful READ of an empty file will always return eof as TRUE. If the current filehandle is not a regular file, an error will be returned to the client. In the case the current filehandle @@ -12022,39 +12737,49 @@ For a READ with a stateid value of all bits 0, the server MAY allow the READ to be serviced subject to mandatory file locks or the current share deny modes for the file. For a READ with a stateid value of all bits 1, the server MAY allow READ operations to bypass locking checks at the server. On success, the current filehandle retains its value. 15.25.5. IMPLEMENTATION - It is possible for the server to return fewer than count bytes of - data. If the server returns less than the count requested and eof is - set to FALSE, the client should issue another READ to get the - remaining data. A server may return less data than requested under - several circumstances. The file may have been truncated by another - client or perhaps on the server itself, changing the file size from - what the requesting client believes to be the case. This would - reduce the actual amount of data available to the client. It is - possible that the server may back off the transfer size and reduce - the read request return. Server resource exhaustion may also occur - necessitating a smaller read return. + If the server returns a "short read" (i.e., fewer data than requested + and eof is set to FALSE), the client should send another READ to get + the remaining data. A server may return less data than requested + under several circumstances. 
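The client-side loop implied by the short-read rule can be sketched as follows (read_op is a hypothetical callable standing in for one READ round trip, not a protocol element):

```python
def read_fully(read_op, offset, count):
    """Reissue READs until the request is satisfied or eof (sketch).

    read_op(offset, count) models one READ operation and returns
    (data, eof).  On a short read with eof FALSE, another READ is
    sent for the remaining byte range.
    """
    chunks = []
    while count > 0:
        data, eof = read_op(offset, count)
        chunks.append(data)
        offset += len(data)
        count -= len(data)
        if eof or not data:       # stop at end-of-file (plus a defensive guard)
            break
    return b"".join(chunks)
```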
The file may have been truncated by + another client or perhaps on the server itself, changing the file + size from what the requesting client believes to be the case. This + would reduce the actual amount of data available to the client. It + is possible that the server reduces the transfer size and so returns + a short read result. Server resource exhaustion may also result in a + short read. - If mandatory file locking is on for the file, and if the region - corresponding to the data to be read from file is write locked by an - owner not associated the stateid, the server will return the - NFS4ERR_LOCKED error. The client should try to get the appropriate - read record lock via the LOCK operation before re-attempting the - READ. When the READ completes, the client should release the record - lock via LOCKU. + If mandatory byte-range locking is in effect for the file, and if the + byte-range corresponding to the data to be read from the file is + WRITE_LT locked by an owner not associated with the stateid, the + server will return the NFS4ERR_LOCKED error. The client should try + to get the appropriate READ_LT via the LOCK operation before + reattempting the READ. When the READ completes, the client should + release the byte-range lock via LOCKU. + + If another client has an OPEN_DELEGATE_WRITE delegation for the file + being read, the delegation must be recalled, and the operation cannot + proceed until that delegation is returned or revoked. Except where + this happens very quickly, one or more NFS4ERR_DELAY errors will be + returned to requests made while the delegation remains outstanding. + Normally, delegations will not be recalled as a result of a READ + operation since the recall will occur as a result of an earlier OPEN. + However, since it is possible for a READ to be done with a special + stateid, the server needs to check for this case even though the + client should have done an OPEN previously. 15.26. Operation 26: READDIR - Read Directory 15.26.1.
SYNOPSIS (cfh), cookie, cookieverf, dircount, maxcount, attr_request -> cookieverf { cookie, name, attrs } 15.26.2. ARGUMENT @@ -12298,34 +13023,33 @@ removal. If the target is of zero length, NFS4ERR_INVAL will be returned. The target is also subject to the normal UTF-8, character support, and name checks. See Section 12.3 for further discussion. On success, the current filehandle retains its value. 15.28.5. IMPLEMENTATION - NFS versions 2 and 3 required a different operator RMDIR for - directory removal and REMOVE for non-directory removal. This allowed - clients to skip checking the file type when being passed a non- - directory delete system call (e.g., unlink() in POSIX) to remove a - directory, as well as the converse (e.g., a rmdir() on a non- - directory) because they knew the server would check the file type. - NFS version 4 REMOVE can be used to delete any directory entry - independent of its file type. The implementor of an NFS version 4 - client's entry points from the unlink() and rmdir() system calls - should first check the file type against the types the system call is - allowed to remove before issuing a REMOVE. Alternatively, the - implementor can produce a COMPOUND call that includes a LOOKUP/VERIFY - sequence to verify the file type before a REMOVE operation in the - same COMPOUND call. + NFSv3 required a different operator RMDIR for directory removal and + REMOVE for non-directory removal. This allowed clients to skip + checking the file type when being passed a non-directory delete + system call (e.g., unlink() [40] in POSIX) to remove a directory, as + well as the converse (e.g., a rmdir() on a non-directory) because + they knew the server would check the file type. NFSv4 REMOVE can be + used to delete any directory entry independent of its file type. 
The + implementor of an NFSv4 client's entry points from the unlink() and + rmdir() system calls should first check the file type against the + types the system call is allowed to remove before issuing a REMOVE. + Alternatively, the implementor can produce a COMPOUND call that + includes a LOOKUP/VERIFY sequence to verify the file type before a + REMOVE operation in the same COMPOUND call. The concept of last reference is server specific. However, if the numlinks field in the previous attributes of the object had the value 1, the client should not rely on referring to the object via a filehandle. Likewise, the client should not rely on the resources (disk space, directory entry, and so on) formerly associated with the object becoming immediately available. Thus, if a client needs to be able to continue to access a file after using REMOVE to remove it, the client should take steps to make sure that the file will still be accessible. The usual mechanism used is to RENAME the file from its @@ -12461,21 +13185,21 @@ operation. 15.30.5. IMPLEMENTATION When the client holds delegations, it needs to use RENEW to detect when the server has determined that the callback path is down. When the server has made such a determination, only the RENEW operation will renew the lease on delegations. If the server determines the callback path is down, it returns NFS4ERR_CB_PATH_DOWN. Even though it returns NFS4ERR_CB_PATH_DOWN, the server MUST renew the lease on - the record locks and share reservations that the client has + the byte-range locks and share reservations that the client has established on the server. If for some reason the lock and share reservation lease cannot be renewed, then the server MUST return an error other than NFS4ERR_CB_PATH_DOWN, even if the callback path is also down. In the event that the server has conditions such that it could return either NFS4ERR_CB_PATH_DOWN or NFS4ERR_LEASE_MOVED, NFS4ERR_LEASE_MOVED MUST be handled first.
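The error precedence just described can be sketched from the client's side. This is a minimal illustration, not part of the protocol: the handler functions are hypothetical placeholders, while the numeric error values are those assigned by the NFSv4 specification.

```c
#include <assert.h>

/* Status codes per the NFSv4 error number assignments. */
enum { NFS4_OK = 0, NFS4ERR_LEASE_MOVED = 10031, NFS4ERR_CB_PATH_DOWN = 10048 };

/* Hypothetical client actions; a real client would recover leases for
 * migrated filesystems or re-register its callback channel here. */
static int leases_moved_handled, cb_path_reestablished;
static void handle_lease_moved(void)  { leases_moved_handled = 1; }
static void reestablish_cb_path(void) { cb_path_reestablished = 1; }

/* Dispatch a RENEW reply status.  NFS4ERR_LEASE_MOVED is checked
 * first, matching the MUST above: a server that could return either
 * error returns NFS4ERR_LEASE_MOVED. */
static void renew_done(int status)
{
    switch (status) {
    case NFS4ERR_LEASE_MOVED:
        handle_lease_moved();    /* migrated state takes precedence */
        break;
    case NFS4ERR_CB_PATH_DOWN:
        reestablish_cb_path();   /* lease on locks/shares was still renewed */
        break;
    default:                     /* NFS4_OK or other errors */
        break;
    }
}
```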
The client that issues RENEW MUST choose the principal, RPC security flavor, and if applicable, GSS-API mechanism and service via one of the following algorithms: @@ -12721,103 +13444,112 @@ nfsstat4 status; bitmap4 attrsset; }; 15.34.4. DESCRIPTION The SETATTR operation changes one or more of the attributes of a filesystem object. The new attributes are specified with a bitmap and the attributes that follow the bitmap in bit order. - The stateid argument for SETATTR is used to provide file locking - context that is necessary for SETATTR requests that set the size - attribute. Since setting the size attribute modifies the file's + The stateid argument for SETATTR is used to provide byte-range + locking context that is necessary for SETATTR requests that set the + size attribute. Since setting the size attribute modifies the file's data, it has the same locking requirements as a corresponding WRITE. Any SETATTR that sets the size attribute is incompatible with a share - reservation that specifies DENY_WRITE. The area between the old end- - of-file and the new end-of-file is considered to be modified just as - would have been the case had the area in question been specified as - the target of WRITE, for the purpose of checking conflicts with - record locks, for those cases in which a server is implementing - mandatory record locking behavior. A valid stateid should always be - specified. When the file size attribute is not set, the special - stateid consisting of all bits zero should be passed. + reservation that specifies OPEN4_SHARE_DENY_WRITE. The area between + the old end-of-file and the new end-of-file is considered to be + modified just as would have been the case had the area in question + been specified as the target of WRITE, for the purpose of checking + conflicts with byte-range locks, for those cases in which a server is + implementing mandatory byte-range locking behavior. A valid stateid + SHOULD always be specified. 
When the file size attribute is not set, + the special stateid consisting of all bits zero MAY be passed. On either success or failure of the operation, the server will return the attrsset bitmask to represent what (if any) attributes were successfully set. The attrsset in the response is a subset of the bitmap4 that is part of the obj_attributes in the argument. On success, the current filehandle retains its value. 15.34.5. IMPLEMENTATION If the request specifies the owner attribute to be set, the server - should allow the operation to succeed if the current owner of the + SHOULD allow the operation to succeed if the current owner of the object matches the value specified in the request. Some servers may be implemented in a way as to prohibit the setting of the owner attribute unless the requester has privilege to do so. If the server is lenient in this one case of matching owner values, the client implementation may be simplified in cases of creation of an object - followed by a SETATTR. + (e.g., an exclusive create via OPEN) followed by a SETATTR. The file size attribute is used to request changes to the size of a - file. A value of 0 (zero) causes the file to be truncated, a value - less than the current size of the file causes data from new size to - the end of the file to be discarded, and a size greater than the - current size of the file causes logically zeroed data bytes to be - added to the end of the file. Servers are free to implement this - using holes or actual zero data bytes. Clients should not make any - assumptions regarding a server's implementation of this feature, - beyond that the bytes returned will be zeroed. Servers must support - extending the file size via SETATTR. + file. 
A value of zero causes the file to be truncated, a value less + than the current size of the file causes data from new size to the + end of the file to be discarded, and a size greater than the current + size of the file causes logically zeroed data bytes to be added to + the end of the file. Servers are free to implement this using holes + or actual zero data bytes. Clients should not make any assumptions + regarding a server's implementation of this feature, beyond that the + bytes returned will be zeroed. Servers MUST support extending the + file size via SETATTR. SETATTR is not guaranteed atomic. A failed SETATTR may partially - change a file's attributes. + change a file's attributes, hence the reason why the reply always + includes the status and the list of attributes that were set. + + If the object whose attributes are being changed has a file + delegation that is held by a client other than the one doing the + SETATTR, the delegation(s) must be recalled, and the operation cannot + proceed to actually change an attribute until each such delegation is + returned or revoked. In all cases in which delegations are recalled, + the server is likely to return one or more NFS4ERR_DELAY errors while + the delegation(s) remains outstanding, although it might not do that + if the delegations are returned quickly. Changing the size of a file with SETATTR indirectly changes the - time_modify. A client must account for this as size changes can - result in data deletion. + time_modify and change attributes. A client must account for this as + size changes can result in data deletion. The attributes time_access_set and time_modify_set are write-only attributes constructed as a switched union so the client can direct the server in setting the time values. If the switched union specifies SET_TO_CLIENT_TIME4, the client has provided an nfstime4 to be used for the operation. 
If the switch union does not specify SET_TO_CLIENT_TIME4, the server is to use its current time for the SETATTR operation. If server and client times differ, programs that compare client time to file times can break. A time maintenance protocol should be used to limit client/server time skew. Use of a COMPOUND containing a VERIFY operation specifying only the change attribute, immediately followed by a SETATTR, provides a means whereby a client may specify a request that emulates the - functionality of the SETATTR guard mechanism of NFS version 3. Since - the function of the guard mechanism is to avoid changes to the file + functionality of the SETATTR guard mechanism of NFSv3. Since the + function of the guard mechanism is to avoid changes to the file attributes based on stale information, delays between checking of the guard condition and the setting of the attributes have the potential to compromise this function, as would the corresponding delay in the - NFS version 4 emulation. Therefore, NFS version 4 servers should - take care to avoid such delays, to the degree possible, when - executing such a request. + NFSv4 emulation. Therefore, NFSv4 servers should take care to avoid + such delays, to the degree possible, when executing such a request. If the server does not support an attribute as requested by the client, the server should return NFS4ERR_ATTRNOTSUPP. A mask of the attributes actually set is returned by SETATTR in all - cases. That mask must not include attributes bits not requested to - be set by the client, and must be equal to the mask of attributes - requested to be set only if the SETATTR completes without error. + cases. That mask MUST NOT include attribute bits not requested to be + set by the client. If the attribute masks in the request and reply + are equal, the status field in the reply MUST be NFS4_OK. -15.35. Operation 35: SETCLIENTID - Negotiate Clientid +15.35. Operation 35: SETCLIENTID - Negotiate Client ID 15.35.1. 
SYNOPSIS client, callback, callback_ident -> clientid, setclientid_confirm 15.35.2. ARGUMENT struct SETCLIENTID4args { nfs_client_id4 client; cb_client4 callback; @@ -12839,30 +13571,30 @@ default: void; }; 15.35.4. DESCRIPTION The client uses the SETCLIENTID operation to notify the server of its intention to use a particular client identifier, callback, and callback_ident for subsequent requests that entail creating lock, share reservation, and delegation state on the server. Upon - successful completion the server will return a shorthand clientid + successful completion the server will return a shorthand client ID which, if confirmed via a separate step, will be used in subsequent - file locking and file open requests. Confirmation of the clientid + file locking and file open requests. Confirmation of the client ID must be done via the SETCLIENTID_CONFIRM operation to return the - clientid and setclientid_confirm values, as verifiers, to the server. - The reason why two verifiers are necessary is that it is possible to - use SETCLIENTID and SETCLIENTID_CONFIRM to modify the callback and - callback_ident information but not the shorthand clientid. In that - event, the setclientid_confirm value is effectively the only - verifier. + client ID and setclientid_confirm values, as verifiers, to the + server. The reason why two verifiers are necessary is that it is + possible to use SETCLIENTID and SETCLIENTID_CONFIRM to modify the + callback and callback_ident information but not the shorthand client + ID. In that event, the setclientid_confirm value is effectively the + only verifier. The callback information provided in this operation will be used if the client is provided an open delegation at a future point. Therefore, the client must correctly reflect the program and port numbers for the callback program at the time SETCLIENTID is used. The callback_ident value is used by the server on the callback. 
The client can leverage the callback_ident to eliminate the need for more than one callback RPC program number, while still being able to determine which server is initiating the callback. @@ -12871,21 +13603,21 @@ To understand how to implement SETCLIENTID, make the following notations. Let: x be the value of the client.id subfield of the SETCLIENTID4args structure. v be the value of the client.verifier subfield of the SETCLIENTID4args structure. - c be the value of the clientid field returned in the + c be the value of the client ID field returned in the SETCLIENTID4resok structure. k represent the value combination of the fields callback and callback_ident fields of the SETCLIENTID4args structure. s be the setclientid_confirm value returned in the SETCLIENTID4resok structure. { v, x, c, k, s } be a quintuple for a client record. A client record is confirmed if there has been a SETCLIENTID_CONFIRM @@ -12977,21 +13709,21 @@ The server returns { d, t }. The server awaits confirmation of { d, k } via SETCLIENTID_CONFIRM { d, t }. The server does NOT remove client (lock/share/ delegation) state for x. The server generates the clientid and setclientid_confirm values and must take care to ensure that these values are extremely unlikely to ever be regenerated. -15.36. Operation 36: SETCLIENTID_CONFIRM - Confirm Clientid +15.36. Operation 36: SETCLIENTID_CONFIRM - Confirm Client ID 15.36.1. SYNOPSIS clientid, verifier -> - 15.36.2. ARGUMENT struct SETCLIENTID_CONFIRM4args { clientid4 clientid; verifier4 setclientid_confirm; @@ -13000,40 +13732,40 @@ 15.36.3. RESULT struct SETCLIENTID_CONFIRM4res { nfsstat4 status; }; 15.36.4. DESCRIPTION This operation is used by the client to confirm the results from a previous call to SETCLIENTID. The client provides the server - supplied (from a SETCLIENTID response) clientid. The server responds - with a simple status of success or failure. + supplied (from a SETCLIENTID response) client ID. 
The server + responds with a simple status of success or failure. 15.36.5. IMPLEMENTATION The client must use the SETCLIENTID_CONFIRM operation to confirm the following two distinct cases: o The client's use of a new shorthand client identifier (as returned from the server in the response to SETCLIENTID), a new callback value (as specified in the arguments to SETCLIENTID) and a new callback_ident (as specified in the arguments to SETCLIENTID) value. The client's use of SETCLIENTID_CONFIRM in this case also confirms the removal of any of the client's previous relevant - leased state. Relevant leased client state includes record locks, - share reservations, and where the server does not support the - CLAIM_DELEGATE_PREV claim type, delegations. If the server + leased state. Relevant leased client state includes byte-range + locks, share reservations, and where the server does not support + the CLAIM_DELEGATE_PREV claim type, delegations. If the server supports CLAIM_DELEGATE_PREV, then SETCLIENTID_CONFIRM MUST NOT remove delegations for this client; relevant leased client state - would then just include record locks and share reservations. + would then just include byte-range locks and share reservations. o The client's re-use of an old, previously confirmed, shorthand client identifier, a new callback value, and a new callback_ident value. The client's use of SETCLIENTID_CONFIRM in this case MUST NOT result in the removal of any previous leased state (locks, share reservations, and delegations) We use the same notation and definitions for v, x, c, k, s, and unconfirmed and confirmed client records as introduced in the description of the SETCLIENTID operation. The arguments to @@ -13127,23 +13859,23 @@ renewed before the lease time expires via an operation from the client. 
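The two-step establishment of a client record can be modeled compactly. The sketch below, under stated assumptions, shows the server-side { v, x, c, k, s } record (the callback value k is omitted for brevity) being created unconfirmed by SETCLIENTID and confirmed only when both verifiers match; the types and helper names are illustrative, not taken from the protocol XDR.

```c
#include <assert.h>
#include <string.h>

typedef unsigned long long clientid4;
typedef unsigned long long verifier4;

struct client_record {          /* server-side { v, x, c, s } */
    verifier4 v;                /* client-supplied verifier          */
    char      x[64];            /* client-supplied id string         */
    clientid4 c;                /* shorthand client ID               */
    verifier4 s;                /* setclientid_confirm verifier      */
    int       confirmed;
};

/* SETCLIENTID: the server records an unconfirmed client record and
 * generates c and s (fixed here purely for illustration). */
static void setclientid(struct client_record *r, verifier4 v, const char *x)
{
    r->v = v;
    strncpy(r->x, x, sizeof(r->x) - 1);
    r->x[sizeof(r->x) - 1] = '\0';
    r->c = 0x1001;              /* server-generated shorthand client ID */
    r->s = 0xBEEF;              /* server-generated confirm verifier    */
    r->confirmed = 0;
}

/* SETCLIENTID_CONFIRM: both the client ID and the setclientid_confirm
 * verifier must match the unconfirmed record. */
static int setclientid_confirm(struct client_record *r, clientid4 c, verifier4 s)
{
    if (r->c == c && r->s == s) {
        r->confirmed = 1;
        return 0;               /* NFS4_OK */
    }
    return 10022;               /* NFS4ERR_STALE_CLIENTID */
}
```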
If the client cannot issue a SETCLIENTID_CONFIRM after a SETCLIENTID before a period of time equal to that of a lease expires, then the client is unlikely to be able to maintain state on the server during steady state operation. If the client does send a SETCLIENTID_CONFIRM for an unconfirmed record that the server has already deleted, the client will get NFS4ERR_STALE_CLIENTID back. If so, the client should then start over, and send SETCLIENTID to reestablish an unconfirmed client - record and get back an unconfirmed clientid and setclientid_confirm + record and get back an unconfirmed client ID and setclientid_confirm verifier. The client should then send the SETCLIENTID_CONFIRM to - confirm the clientid. + confirm the client ID. SETCLIENTID_CONFIRM does not establish or renew a lease. However, if SETCLIENTID_CONFIRM removes relevant leased client state, and that state does not include existing delegations, the server MUST allow the client a period of time no less than the value of the lease_time attribute, to reclaim (via the CLAIM_DELEGATE_PREV claim type of the OPEN operation) its delegations before removing unreclaimed delegations. 15.37. Operation 37: VERIFY - Verify Same Attributes @@ -13262,24 +13994,25 @@ stable is UNSTABLE4, the server is free to commit any part of the data and the metadata to stable storage, including all or none, before returning a reply to the client. There is no guarantee whether or when any uncommitted data will subsequently be committed to stable storage. The only guarantees made by the server are that it will not destroy any data without changing the value of verf and that it will not commit the data and metadata at a level less than that requested by the client. The stateid value for a WRITE request represents a value returned - from a previous record lock or share reservation request or the + from a previous byte-range lock or share reservation request or the stateid associated with a delegation.
The stateid is used by the - server to verify that the associated share reservation and any record - locks are still valid and to update lease timeouts for the client. + server to verify that the associated share reservation and any byte- + range locks are still valid and to update lease timeouts for the + client. Upon successful completion, the following results are returned. The count result is the number of bytes of data written to the file. The server may write fewer bytes than requested. If so, the actual number of bytes written starting at location, offset, is returned. The server also returns an indication of the level of commitment of the data and metadata via committed. If the server committed all data and metadata to stable storage, committed should be set to FILE_SYNC4. If the level of commitment was at least as strong as @@ -13288,23 +14021,23 @@ then committed must also be FILE_SYNC4: anything else constitutes a protocol violation. If stable was DATA_SYNC4, then committed may be FILE_SYNC4 or DATA_SYNC4: anything else constitutes a protocol violation. If stable was UNSTABLE4, then committed may be either FILE_SYNC4, DATA_SYNC4, or UNSTABLE4. The final portion of the result is the write verifier. The write verifier is a cookie that the client can use to determine whether the server has changed instance (boot) state between a call to WRITE and a subsequent call to either WRITE or COMMIT. This cookie must be - consistent during a single instance of the NFS version 4 protocol - service and must be unique between instances of the NFS version 4 - protocol server, where uncommitted data may be lost. + consistent during a single instance of the NFSv4 protocol service and + must be unique between instances of the NFSv4 protocol server, where + uncommitted data may be lost. 
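The client-side use of the write verifier described above can be sketched as follows: if the verifier returned by a later WRITE or COMMIT differs from the one cached with earlier UNSTABLE4 WRITEs, the server instance may have changed and the uncommitted data must be retransmitted. The struct and function names here are illustrative assumptions, not part of the protocol XDR.

```c
#include <assert.h>

typedef unsigned long long verifier4;

struct pending_write {
    verifier4 verf;    /* verifier returned with the UNSTABLE4 WRITE */
    int       dirty;   /* data still cached, not yet known stable    */
};

/* Process a COMMIT reply for previously unstable data.  Returns 1 if
 * the cached data must be retransmitted (server instance changed),
 * 0 if the data is now known to be on stable storage. */
static int commit_reply(struct pending_write *w, verifier4 commit_verf)
{
    if (w->verf != commit_verf) {
        w->dirty = 1;  /* verifier mismatch: resend the WRITE */
        return 1;
    }
    w->dirty = 0;      /* data committed to stable storage */
    return 0;
}
```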
If a client writes data to the server with the stable argument set to UNSTABLE4 and the reply yields a committed response of DATA_SYNC4 or UNSTABLE4, the client will follow up some time in the future with a COMMIT operation to synchronize outstanding asynchronous data and metadata with the server's stable storage, barring client error. It is possible that due to client crash or other error that a subsequent COMMIT will not be received by the server. For a WRITE with a stateid value of all bits 0, the server MAY allow @@ -13338,30 +14071,30 @@ 1. Repeated power failures. 2. Hardware failures (of any board, power supply, etc.). 3. Repeated software crashes, including reboot cycle. This definition does not address failure of the stable storage module itself. The verifier is defined to allow a client to detect different - instances of an NFS version 4 protocol server over which cached, - uncommitted data may be lost. In the most likely case, the verifier - allows the client to detect server reboots. This information is - required so that the client can safely determine whether the server - could have lost cached data. If the server fails unexpectedly and - the client has uncommitted data from previous WRITE requests (done - with the stable argument set to UNSTABLE4 and in which the result - committed was returned as UNSTABLE4 as well) it may not have flushed - cached data to stable storage. The burden of recovery is on the - client and the client will need to retransmit the data to the server. + instances of an NFSv4 protocol server over which cached, uncommitted + data may be lost. In the most likely case, the verifier allows the + client to detect server reboots. This information is required so + that the client can safely determine whether the server could have + lost cached data. 
If the server fails unexpectedly and the client + has uncommitted data from previous WRITE requests (done with the + stable argument set to UNSTABLE4 and in which the result committed + was returned as UNSTABLE4 as well) it may not have flushed cached + data to stable storage. The burden of recovery is on the client and + the client will need to retransmit the data to the server. A suggested verifier would be to use the time that the server was booted or the time the server was last started (if restarting the server without a reboot results in lost buffers). The committed field in the results allows the client to do more effective caching. If the server is committing all WRITE requests to stable storage, then it should return with committed set to FILE_SYNC4, regardless of the value of the stable field in the arguments. A server that uses an NVRAM accelerator may choose to @@ -13375,23 +14108,24 @@ NFS4ERR_ISDIR. If the current filehandle is not a regular file or a directory, the server will return NFS4ERR_INVAL. If mandatory file locking is on for the file, and the region corresponding to the data to be written to the file is read or write locked by an owner that is not associated with the stateid, the server will return NFS4ERR_LOCKED. If so, the client must check if the owner corresponding to the stateid used with the WRITE operation has a conflicting read lock that overlaps with the region that was to be written. If the stateid's owner has no conflicting read lock, then - the client should try to get the appropriate write record lock via - the LOCK operation before re-attempting the WRITE. When the WRITE - completes, the client should release the record lock via LOCKU. + the client should try to get the appropriate write byte-range lock + via the LOCK operation before re-attempting the WRITE. When the + WRITE completes, the client should release the byte-range lock via + LOCKU.
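The NFS4ERR_LOCKED recovery sequence for WRITE (LOCK, re-attempt the WRITE, then LOCKU) can be sketched as below. This is a simplified client-side illustration that omits the check for a conflicting read lock held by the same owner; the nfs_* helpers are hypothetical stand-ins for the corresponding COMPOUND requests.

```c
#include <assert.h>

enum { NFS4_OK = 0, NFS4ERR_LOCKED = 10012 };

/* Hypothetical stand-ins for LOCK, LOCKU, and WRITE requests; the
 * toy "server" grants the WRITE only while the lock is held. */
static int lock_granted;
static int nfs_lock_write_range(void) { lock_granted = 1; return NFS4_OK; }
static int nfs_locku(void)            { lock_granted = 0; return NFS4_OK; }
static int nfs_write(void)            { return lock_granted ? NFS4_OK : NFS4ERR_LOCKED; }

/* Retry a WRITE that failed with NFS4ERR_LOCKED: take the write
 * byte-range lock, reissue the WRITE, then release the lock. */
static int write_with_lock_recovery(void)
{
    int status = nfs_write();
    if (status == NFS4ERR_LOCKED) {
        if (nfs_lock_write_range() != NFS4_OK)
            return NFS4ERR_LOCKED;   /* give up; caller reports the error */
        status = nfs_write();        /* re-attempt under the lock */
        nfs_locku();                 /* release the byte-range lock */
    }
    return status;
}
```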
If the stateid's owner had a conflicting read lock, then the client has no choice but to return an error to the application that attempted the WRITE. The reason is that since the stateid's owner had a read lock, the server either attempted to temporarily effectively upgrade this read lock to a write lock, or the server has no upgrade capability. If the server attempted to upgrade the read lock and failed, it is pointless for the client to re-attempt the upgrade via the LOCK operation, because there might be another client also trying to upgrade. If two clients are blocked trying upgrade @@ -13462,21 +14197,21 @@ 15.40.5. IMPLEMENTATION A client will probably not send an operation with code OP_ILLEGAL but if it does, the response will be ILLEGAL4res just as it would be with any other invalid operation code. Note that if the server gets an illegal operation code that is not OP_ILLEGAL, and if the server checks for legal operation codes during the XDR decode phase, then the ILLEGAL4res would not be returned. -16. NFS version 4 Callback Procedures +16. NFSv4 Callback Procedures The procedures used for callbacks are defined in the following sections. In the interest of clarity, the terms "client" and "server" refer to NFS clients and servers, despite the fact that for an individual callback RPC, the sense of these terms would be precisely the opposite. 16.1. Procedure 0: CB_NULL - No Operation 16.1.1. SYNOPSIS @@ -13604,27 +14338,28 @@ union CB_GETATTR4res switch (nfsstat4 status) { case NFS4_OK: CB_GETATTR4resok resok4; default: void; }; 16.2.6.4. DESCRIPTION The CB_GETATTR operation is used by the server to obtain the current - modified state of a file that has been write delegated. The - attributes size and change are the only ones guaranteed to be + modified state of a file that has been OPEN_DELEGATE_WRITE delegated. + The attributes size and change are the only ones guaranteed to be serviced by the client. 
See Section 10.4.3 for a full description of how the client and server are to interact with the use of CB_GETATTR. If the filehandle specified is not one for which the client holds a - write open delegation, an NFS4ERR_BADHANDLE error is returned. + OPEN_DELEGATE_WRITE delegation, an NFS4ERR_BADHANDLE error is + returned. 16.2.6.5. IMPLEMENTATION The client returns attrmask bits and the associated attribute values only for the change attribute, and attributes that it may change (time_modify, and size). 16.2.7. Operation 4: CB_RECALL - Recall an Open Delegation 16.2.7.1. SYNOPSIS @@ -13728,26 +14463,26 @@ discussed as part of Section 3. Note that while NFSv4 mandates an end to end mutual authentication model, the "classic" model of machine authentication via IP address checking and AUTH_SYS identification can still be supported with the caveat that the AUTH_SYS flavor is neither MANDATORY nor RECOMMENDED by this specification, and so interoperability via AUTH_SYS is not assured. For reasons of reduced administration overhead, better performance - and/or reduction of CPU utilization, users of NFS version 4 - implementations may choose to not use security mechanisms that enable - integrity protection on each remote procedure call and response. The - use of mechanisms without integrity leaves the customer vulnerable to - an attacker in between the NFS client and server that modifies the - RPC request and/or the response. While implementations are free to + and/or reduction of CPU utilization, users of NFSv4 implementations + may choose to not use security mechanisms that enable integrity + protection on each remote procedure call and response. The use of + mechanisms without integrity leaves the customer vulnerable to an + attacker in between the NFS client and server that modifies the RPC + request and/or the response. 
While implementations are free to provide the option to use weaker security mechanisms, there are two operations in particular that warrant the implementation overriding user choices. The first such operation is SECINFO. It is recommended that the client issue the SECINFO call such that it is protected with a security flavor that has integrity protection, such as RPCSEC_GSS with a security triple that uses either rpc_gss_svc_integrity or rpc_gss_svc_privacy (rpc_gss_svc_privacy includes integrity protection) service. Without integrity protection encapsulating @@ -13765,46 +14500,100 @@ server controlled by the attacker. Because the operations SETCLIENTID/SETCLIENTID_CONFIRM are responsible for the release of client state, it is imperative that the principal used for these operations is checked against and match the previous use of these operations. See Section 9.1.1 for further discussion. 18. IANA Considerations -18.1. Named Attribute Definition + This section uses terms that are defined in [41]. - The NFS version 4 protocol provides for the association of named - attributes to files. The name space identifiers for these attributes - are defined as string names. The protocol does not define the - specific assignment of the name space for these file attributes. - Even though the name space is not specifically controlled to prevent - collisions, an IANA registry has been created for the registration of - NFS version 4 named attributes. Registration will be achieved - through the publication of an Informational RFC and will require not - only the name of the attribute but the syntax and semantics of the - named attribute contents; the intent is to promote interoperability - where common interests exist. While application developers are - allowed to define and use attributes as needed, they are encouraged - to register the attributes with IANA. +18.1. Named Attribute Definitions + + IANA will create a registry called the "NFSv4 Named Attribute + Definitions Registry". 
+
+   The NFSv4 protocol supports the association of a file with zero or
+   more named attributes.  The name space identifiers for these
+   attributes are defined as string names.  The protocol does not
+   define the specific assignment of the name space for these file
+   attributes.  An IANA registry will promote interoperability where
+   common interests exist.  While application developers are allowed
+   to define and use attributes as needed, they are encouraged to
+   register the attributes with IANA.
+
+   Such registered named attributes are presumed to apply to all minor
+   versions of NFSv4, including those defined subsequent to the
+   registration.  Where a named attribute is intended to be limited
+   with regard to the minor versions in which it may be used, the
+   assignment in the registry will clearly state the applicable
+   limits.
+
+   All assignments to the registry are made on a First Come First
+   Served basis, per section 4.1 of [41].  The policy for each
+   assignment is Specification Required, per section 4.1 of [41].
+
+   Under the NFSv4 specification, the name of a named attribute can in
+   theory be up to 2^32 - 1 bytes in length, but in practice NFSv4
+   clients and servers will be unable to handle a string that long.
+   IANA should reject any assignment request with a named attribute
+   name that exceeds 128 UTF-8 characters.  To give the IESG the
+   flexibility to set up bases of assignment of Experimental Use and
+   Standards Action, the prefixes of "EXPE" and "STDS" are Reserved.
+   The zero-length named attribute name is Reserved.
+
+   The prefix "PRIV" is allocated for Private Use.  A site that wants
+   to make use of unregistered named attributes without risk of
+   conflicting with an assignment in IANA's registry should use the
+   prefix "PRIV" in all of its named attributes.
+
+   Because some NFSv4 clients and servers have case-insensitive
+   semantics, the fifteen additional lower case and mixed case
+   permutations of each of "EXPE", "PRIV", and "STDS" are Reserved
+   (e.g.
"expe", "expE", "exPe", etc. are Reserved).  Similarly, IANA
+   must not allow two assignments that would conflict if both named
+   attributes were converted to a common case.
+
+   The registry of named attributes is a list of assignments, each
+   containing three fields.
+
+   1.  A US-ASCII string name that is the actual name of the
+       attribute.  This name must be unique.  This string name can be
+       1 to 128 UTF-8 characters long.
+
+   2.  A reference to the specification of the named attribute.  The
+       reference can consume up to 256 bytes (or more if IANA
+       permits).
+
+   3.  The point of contact of the registrant.  The point of contact
+       can consume up to 256 bytes (or more if IANA permits).
+
+18.1.1.  Initial Registry
+
+   There is no initial registry.
+
+18.1.2.  Updating Registrations
+
+   The registrant is always permitted to update the point of contact
+   field.  Any other change will require Expert Review or IESG
+   Approval.

18.2.  ONC RPC Network Identifiers (netids)

   Section 2.2 discussed the r_netid field and the corresponding r_addr
-  field of a clientaddr4 structure.  The NFS version 4 protocol depends
-  on the syntax and semantics of these fields to effectively
-  communicate callback information between client and server.
-  Therefore, an IANA registry has been created to include the values
-  defined in this document and to allow for future expansion based on
-  transport usage/availability.  Additions to this ONC RPC Network
-  Identifier registry must be done with the publication of an RFC.
+  field of a clientaddr4 structure.  The NFSv4 protocol depends on the
+  syntax and semantics of these fields to effectively communicate
+  callback information between client and server.  Therefore, an IANA
+  registry has been created to include the values defined in this
+  document and to allow for future expansion based on transport usage/
+  availability.  Additions to this ONC RPC Network Identifier registry
+  must be done with the publication of an RFC.
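The reservation and case-folding rules for the named attribute registry (Section 18.1) can be sketched in a few lines of Python.  This is an illustrative check only; the function name, the representation of existing assignments, and the rejection messages are hypothetical, not part of the protocol or of IANA procedure:

```python
# Hypothetical sketch of the Section 18.1 naming rules for the
# "NFSv4 Named Attribute Definitions Registry".  Nothing here is
# normative; names and messages are illustrative only.

RESERVED_PREFIXES = ("EXPE", "STDS")  # Reserved for IESG assignment bases
PRIVATE_PREFIX = "PRIV"               # Allocated for Private Use
MAX_NAME_LEN = 128                    # characters, per Section 18.1

def check_registration(name, existing):
    """Return None if `name` is registrable, else a reason string.

    `existing` is an iterable of already-assigned attribute names.
    Comparisons are case-folded because some NFSv4 clients and
    servers have case-insensitive semantics.
    """
    if len(name) == 0:
        return "the zero-length name is Reserved"
    if len(name) > MAX_NAME_LEN:
        return "name exceeds 128 characters"
    upper = name.upper()
    if upper.startswith(PRIVATE_PREFIX):
        return '"PRIV" prefix is allocated for Private Use'
    for prefix in RESERVED_PREFIXES:
        if upper.startswith(prefix):
            # Covers all 16 case permutations of the prefix.
            return 'prefix "%s" (in any case) is Reserved' % prefix
    # Two assignments must not conflict when converted to a common case.
    if any(name.casefold() == n.casefold() for n in existing):
        return "conflicts with an existing assignment under case folding"
    return None
```

Under this sketch, a request for "expEriment" would be rejected because it begins with one of the reserved case permutations of "EXPE", and "backup.date" would be rejected if "Backup.Date" were already assigned.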
The initial values for this registry are as follows (some of this
   text is replicated from section 2.2 for clarity):

   The Network Identifier (or r_netid for short) is used to specify a
   transport protocol and associated universal address (or r_addr for
   short).  The syntax of the Network Identifier is a US-ASCII string.

   The initial definitions for r_netid are:

   "tcp"  TCP over IP version 4

@@ -13838,35 +14627,40 @@

   x1:x2:x3:x4:x5:x6:x7:x8.p1.p2

   The suffix "p1.p2" is the service port, and is computed the same way
   as with universal addresses for "tcp" and "udp".  The prefix, "x1:x2:
   x3:x4:x5:x6:x7:x8", is the standard textual form for representing an
   IPv6 address as defined in Section 2.2 of [18].  Additionally, the
   two alternative forms specified in Section 2.2 of [18] are also
   acceptable.

-  As mentioned, the registration of new Network Identifiers will
-  require the publication of an Information RFC with similar detail as
-  listed above for the Network Identifier itself and corresponding
-  Universal Address.
+18.2.1.  Initial Registry
+
+   The initial registry consists of the netid assignments defined in
+   this document and listed above.
+
+18.2.2.  Updating Registrations
+
+   The registrant is always permitted to update the point of contact
+   field.  Any other change will require Expert Review or IESG
+   Approval.

19.  References

19.1.  Normative References

   [1]   Bradner, S., "Key words for use in RFCs to Indicate
         Requirement Levels", March 1997.

   [2]   Haynes, T. and D. Noveck, "NFSv4 Version 0 XDR Description",
         draft-ietf-nfsv4-rfc3530bis-dot-x-02 (work in progress),
-        Jul 2010.
+        Feb 2011.

   [3]   Srinivasan, R., "RPC: Remote Procedure Call Protocol
         Specification Version 2", RFC 1831, August 1995.

   [4]   Eisler, M., Chiu, A., and L. Ling, "RPCSEC_GSS Protocol
         Specification", RFC 2203, September 1997.

   [5]   Eisler, M., "LIPKEY - A Low Infrastructure Public Key
         Mechanism Using SPKM", RFC 2847, June 2000.

@@ -13897,22 +14691,22 @@

   [12]  Shepler, S., Callaghan, B., Robinson, D., Thurlow, R., Beame,
         C., Eisler, M., and D.
Noveck, "Network File System (NFS) version 4 Protocol", RFC 3010, December 2000. [13] Nowicki, B., "NFS: Network File System Protocol specification", RFC 1094, March 1989. [14] Callaghan, B., Pawlowski, B., and P. Staubach, "NFS Version 3 Protocol Specification", RFC 1813, June 1995. - [15] Srinivasan, R., "XDR: External Data Representation Standard", - RFC 1832, August 1995. + [15] Eisler, M., "XDR: External Data Representation Standard", + RFC 4506, May 2006. [16] Linn, J., "The Kerberos Version 5 GSS-API Mechanism", RFC 1964, June 1996. [17] Srinivasan, R., "Binding Protocols for ONC RPC Version 2", RFC 1833, August 1995. [18] Hinden, R. and S. Deering, "IP Version 6 Addressing Architecture", RFC 2373, July 1998. @@ -13963,32 +14757,57 @@ [32] The Open Group, "Protocols for Interworking: XNFS, Version 3W, ISBN 1-85912-184-5", February 1998. [33] Postel, J., "Transmission Control Protocol", STD 7, RFC 793, September 1981. [34] Juszczak, C., "Improving the Performance and Correctness of an NFS Server", USENIX Conference Proceedings , June 1990. - [35] Callaghan, B., "NFS URL Scheme", RFC 2224, October 1997. + [35] The Open Group, "Section 'fcntl()' of System Interfaces of The + Open Group Base Specifications Issue 6 IEEE Std 1003.1, 2004 + Edition, HTML Version (www.opengroup.org), ISBN 1931624232", + 2004. - [36] Chiu, A., Eisler, M., and B. Callaghan, "Security Negotiation + [36] The Open Group, "Section 'fsync()' of System Interfaces of The + Open Group Base Specifications Issue 6 IEEE Std 1003.1, 2004 + Edition, HTML Version (www.opengroup.org), ISBN 1931624232", + 2004. + + [37] The Open Group, "Section 'getpwnam()' of System Interfaces of + The Open Group Base Specifications Issue 6 IEEE Std 1003.1, + 2004 Edition, HTML Version (www.opengroup.org), ISBN + 1931624232", 2004. + + [38] Callaghan, B., "NFS URL Scheme", RFC 2224, October 1997. + + [39] Chiu, A., Eisler, M., and B. Callaghan, "Security Negotiation for WebNFS", RFC 2755, January 2000. 
-  [37]  Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA
+  [40]  The Open Group, "Section 'unlink()' of System Interfaces of The
+        Open Group Base Specifications Issue 6 IEEE Std 1003.1, 2004
+        Edition, HTML Version (www.opengroup.org), ISBN 1931624232",
+        2004.
+
+  [41]  Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA
         Considerations Section in RFCs", BCP 26, RFC 5226, May 2008.

Appendix A.  Acknowledgments

+  A bis document is certainly built on the shoulders of the first
+  attempt.  Spencer Shepler, Brent Callaghan, David Robinson, Robert
+  Thurlow, Carl Beame, Mike Eisler, and David Noveck are responsible
+  for a great deal of the effort in this work.
+
   Rob Thurlow clarified how a client should contact a new server if a
-  migration has occured.
+  migration has occurred.

   David Black, Nico Williams, Mike Eisler, Trond Myklebust, and James
   Lentini read many drafts of Section 12 and contributed numerous
   useful suggestions, without which the necessary revision of that
   section for this document would not have been possible.

   Peter Staubach read almost all of the drafts of Section 12 leading
   to the published result and his numerous comments were always useful
   and contributed substantially to improving the quality of the final
   result.