--- 1/draft-ietf-nfsv4-rfc3530bis-04.txt 2010-10-22 01:15:49.000000000 +0200 +++ 2/draft-ietf-nfsv4-rfc3530bis-05.txt 2010-10-22 01:15:49.000000000 +0200 @@ -1,18 +1,18 @@ NFSv4 T. Haynes Internet-Draft D. Noveck Intended status: Standards Track Editors -Expires: January 8, 2011 July 07, 2010 +Expires: April 24, 2011 October 21, 2010 NFS Version 4 Protocol - draft-ietf-nfsv4-rfc3530bis-04.txt + draft-ietf-nfsv4-rfc3530bis-05.txt Abstract The Network File System (NFS) version 4 is a distributed filesystem protocol which owes heritage to NFS protocol version 2, RFC 1094, and version 3, RFC 1813. Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the mount protocol. In addition, support for strong security (and its negotiation), compound operations, client caching, and internationalization have been added. Of course, @@ -42,21 +42,21 @@ and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt. The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html. - This Internet-Draft will expire on January 8, 2011. + This Internet-Draft will expire on April 24, 2011. Copyright Notice Copyright (c) 2010 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents @@ -142,208 +142,208 @@ 6.4. Requirements . . . . . . . . . . . . . . . . . . . . . . 70 6.4.1. Setting the mode and/or ACL Attributes . . . . . . . 71 6.4.2. Retrieving the mode and/or ACL Attributes . . . . . 72 6.4.3. 
Creating New Objects . . . . . . . . . . . . . . . . 72 7. Multi-Server Namespace . . . . . . . . . . . . . . . . . . . 74 7.1. Location Attributes . . . . . . . . . . . . . . . . . . 74 7.2. File System Presence or Absence . . . . . . . . . . . . 75 7.3. Getting Attributes for an Absent File System . . . . . . 76 7.3.1. GETATTR Within an Absent File System . . . . . . . . 76 7.3.2. READDIR and Absent File Systems . . . . . . . . . . 77 - 7.4. Uses of Location Information . . . . . . . . . . . . . . 78 + 7.4. Uses of Location Information . . . . . . . . . . . . . . 77 7.4.1. File System Replication . . . . . . . . . . . . . . 78 7.4.2. File System Migration . . . . . . . . . . . . . . . 79 7.4.3. Referrals . . . . . . . . . . . . . . . . . . . . . 80 7.5. Location Entries and Server Identity . . . . . . . . . . 80 - 7.6. Additional Client-side Considerations . . . . . . . . . 81 + 7.6. Additional Client-Side Considerations . . . . . . . . . 81 7.7. Effecting File System Transitions . . . . . . . . . . . 82 7.7.1. File System Transitions and Simultaneous Access . . 83 7.7.2. Filehandles and File System Transitions . . . . . . 83 7.7.3. Fileids and File System Transitions . . . . . . . . 84 7.7.4. Fsids and File System Transitions . . . . . . . . . 85 7.7.5. The Change Attribute and File System Transitions . . 85 7.7.6. Lock State and File System Transitions . . . . . . . 86 7.7.7. Write Verifiers and File System Transitions . . . . 88 7.7.8. Readdir Cookies and Verifiers and File System Transitions . . . . . . . . . . . . . . . . . . . . 88 - 7.7.9. File System Data and File System Transitions . . . . 88 + 7.7.9. File System Data and File System Transitions . . . . 89 7.8. Effecting File System Referrals . . . . . . . . . . . . 90 7.8.1. Referral Example (LOOKUP) . . . . . . . . . . . . . 90 7.8.2. Referral Example (READDIR) . . . . . . . . . . . . . 94 7.9. The Attribute fs_locations . . . . . . . . . . . . . . . 97 7.9.1. Inferring Transition Modes . . . . . . . . 
. . . . . 98 - 8. NFS Server Name Space . . . . . . . . . . . . . . . . . . . . 99 + 8. NFS Server Name Space . . . . . . . . . . . . . . . . . . . . 100 8.1. Server Exports . . . . . . . . . . . . . . . . . . . . . 100 8.2. Browsing Exports . . . . . . . . . . . . . . . . . . . . 100 8.3. Server Pseudo Filesystem . . . . . . . . . . . . . . . . 100 8.4. Multiple Roots . . . . . . . . . . . . . . . . . . . . . 101 8.5. Filehandle Volatility . . . . . . . . . . . . . . . . . 101 8.6. Exported Root . . . . . . . . . . . . . . . . . . . . . 101 8.7. Mount Point Crossing . . . . . . . . . . . . . . . . . . 102 8.8. Security Policy and Name Space Presentation . . . . . . 102 9. File Locking and Share Reservations . . . . . . . . . . . . . 103 9.1. Locking . . . . . . . . . . . . . . . . . . . . . . . . 104 9.1.1. Client ID . . . . . . . . . . . . . . . . . . . . . 104 9.1.2. Server Release of Clientid . . . . . . . . . . . . . 107 - 9.1.3. lock_owner and stateid Definition . . . . . . . . . 107 + 9.1.3. lock_owner and stateid Definition . . . . . . . . . 108 9.1.4. Use of the stateid and Locking . . . . . . . . . . . 109 9.1.5. Sequencing of Lock Requests . . . . . . . . . . . . 111 9.1.6. Recovery from Replayed Requests . . . . . . . . . . 112 - 9.1.7. Releasing lock_owner State . . . . . . . . . . . . . 112 + 9.1.7. Releasing lock_owner State . . . . . . . . . . . . . 113 9.1.8. Use of Open Confirmation . . . . . . . . . . . . . . 113 9.2. Lock Ranges . . . . . . . . . . . . . . . . . . . . . . 114 - 9.3. Upgrading and Downgrading Locks . . . . . . . . . . . . 114 + 9.3. Upgrading and Downgrading Locks . . . . . . . . . . . . 115 9.4. Blocking Locks . . . . . . . . . . . . . . . . . . . . . 115 - 9.5. Lease Renewal . . . . . . . . . . . . . . . . . . . . . 115 + 9.5. Lease Renewal . . . . . . . . . . . . . . . . . . . . . 116 9.6. Crash Recovery . . . . . . . . . . . . . . . . . . . . . 116 9.6.1. Client Failure and Recovery . . . . . . . . . . . . 117 9.6.2. 
Server Failure and Recovery . . . . . . . . . . . . 117 9.6.3. Network Partitions and Recovery . . . . . . . . . . 119 - 9.7. Recovery from a Lock Request Timeout or Abort . . . . . 122 + 9.7. Recovery from a Lock Request Timeout or Abort . . . . . 123 9.8. Server Revocation of Locks . . . . . . . . . . . . . . . 123 9.9. Share Reservations . . . . . . . . . . . . . . . . . . . 124 9.10. OPEN/CLOSE Operations . . . . . . . . . . . . . . . . . 125 - 9.10.1. Close and Retention of State Information . . . . . . 125 + 9.10.1. Close and Retention of State Information . . . . . . 126 9.11. Open Upgrade and Downgrade . . . . . . . . . . . . . . . 126 9.12. Short and Long Leases . . . . . . . . . . . . . . . . . 127 9.13. Clocks, Propagation Delay, and Calculating Lease Expiration . . . . . . . . . . . . . . . . . . . . . . . 127 9.14. Migration, Replication and State . . . . . . . . . . . . 128 9.14.1. Migration and State . . . . . . . . . . . . . . . . 128 9.14.2. Replication and State . . . . . . . . . . . . . . . 129 - 9.14.3. Notification of Migrated Lease . . . . . . . . . . . 129 + 9.14.3. Notification of Migrated Lease . . . . . . . . . . . 130 9.14.4. Migration and the Lease_time Attribute . . . . . . . 130 10. Client-Side Caching . . . . . . . . . . . . . . . . . . . . . 131 10.1. Performance Challenges for Client-Side Caching . . . . . 131 10.2. Delegation and Callbacks . . . . . . . . . . . . . . . . 132 - 10.2.1. Delegation Recovery . . . . . . . . . . . . . . . . 133 - 10.3. Data Caching . . . . . . . . . . . . . . . . . . . . . . 135 + 10.2.1. Delegation Recovery . . . . . . . . . . . . . . . . 134 + 10.3. Data Caching . . . . . . . . . . . . . . . . . . . . . . 136 10.3.1. Data Caching and OPENs . . . . . . . . . . . . . . . 136 10.3.2. Data Caching and File Locking . . . . . . . . . . . 137 - 10.3.3. Data Caching and Mandatory File Locking . . . . . . 138 + 10.3.3. Data Caching and Mandatory File Locking . . . . . . 139 10.3.4. 
Data Caching and File Identity . . . . . . . . . . . 139 10.4. Open Delegation . . . . . . . . . . . . . . . . . . . . 140 10.4.1. Open Delegation and Data Caching . . . . . . . . . . 142 - 10.4.2. Open Delegation and File Locks . . . . . . . . . . . 143 + 10.4.2. Open Delegation and File Locks . . . . . . . . . . . 144 10.4.3. Handling of CB_GETATTR . . . . . . . . . . . . . . . 144 10.4.4. Recall of Open Delegation . . . . . . . . . . . . . 147 10.4.5. Clients that Fail to Honor Delegation Recalls . . . 149 - 10.4.6. Delegation Revocation . . . . . . . . . . . . . . . 149 + 10.4.6. Delegation Revocation . . . . . . . . . . . . . . . 150 10.5. Data Caching and Revocation . . . . . . . . . . . . . . 150 - 10.5.1. Revocation Recovery for Write Open Delegation . . . 150 + 10.5.1. Revocation Recovery for Write Open Delegation . . . 151 10.6. Attribute Caching . . . . . . . . . . . . . . . . . . . 151 10.7. Data and Metadata Caching and Memory Mapped Files . . . 153 - 10.8. Name Caching . . . . . . . . . . . . . . . . . . . . . . 155 - 10.9. Directory Caching . . . . . . . . . . . . . . . . . . . 156 + 10.8. Name Caching . . . . . . . . . . . . . . . . . . . . . . 156 + 10.9. Directory Caching . . . . . . . . . . . . . . . . . . . 157 11. Minor Versioning . . . . . . . . . . . . . . . . . . . . . . 157 12. Internationalization . . . . . . . . . . . . . . . . . . . . 160 12.1. Use of UTF-8 . . . . . . . . . . . . . . . . . . . . . . 161 12.1.1. Relation to Stringprep . . . . . . . . . . . . . . . 161 12.1.2. Normalization, Equivalence, and Confusability . . . 162 - 12.2. String Type Overview . . . . . . . . . . . . . . . . . . 164 - 12.2.1. Overall String Class Divisions . . . . . . . . . . . 164 - 12.2.2. Divisions by Typedef Parent types . . . . . . . . . 165 - 12.2.3. Individual Types and Their Handling . . . . . . . . 166 - 12.3. Errors Related to Strings . . . . . . . . . . . . . . . 167 - 12.4. Types with Pre-processing to Resolve Mixture Issues . . 168 - 12.4.1. 
Processing of Principal Strings . . . . . . . . . . 168 - 12.4.2. Processing of Server Id Strings . . . . . . . . . . 168 - 12.5. String Types without Internationalization Processing . . 169 - 12.6. Types with Processing Defined by Other Internet Areas . 169 - 12.7. String Types with NFS-specific Processing . . . . . . . 170 - 12.7.1. Handling of File Came Components . . . . . . . . . . 171 - 12.7.2. Processing of Link Text . . . . . . . . . . . . . . 178 - 12.7.3. Processing of Principal Prefixes . . . . . . . . . . 179 - 13. Error Values . . . . . . . . . . . . . . . . . . . . . . . . 179 - 13.1. Error Definitions . . . . . . . . . . . . . . . . . . . 180 - 13.1.1. General Errors . . . . . . . . . . . . . . . . . . . 181 - 13.1.2. Filehandle Errors . . . . . . . . . . . . . . . . . 183 - 13.1.3. Compound Structure Errors . . . . . . . . . . . . . 184 - 13.1.4. File System Errors . . . . . . . . . . . . . . . . . 185 - 13.1.5. State Management Errors . . . . . . . . . . . . . . 187 - 13.1.6. Security Errors . . . . . . . . . . . . . . . . . . 188 - 13.1.7. Name Errors . . . . . . . . . . . . . . . . . . . . 188 - 13.1.8. Locking Errors . . . . . . . . . . . . . . . . . . . 189 - 13.1.9. Reclaim Errors . . . . . . . . . . . . . . . . . . . 190 - 13.1.10. Client Management Errors . . . . . . . . . . . . . . 191 - 13.1.11. Attribute Handling Errors . . . . . . . . . . . . . 191 - 13.2. Operations and their valid errors . . . . . . . . . . . 192 - 13.3. Callback operations and their valid errors . . . . . . . 199 - 13.4. Errors and the operations that use them . . . . . . . . 199 - 14. NFS version 4 Requests . . . . . . . . . . . . . . . . . . . 204 - 14.1. Compound Procedure . . . . . . . . . . . . . . . . . . . 204 - 14.2. Evaluation of a Compound Request . . . . . . . . . . . . 205 - 14.3. Synchronous Modifying Operations . . . . . . . . . . . . 206 - 14.4. Operation Values . . . . . . . . . . . . . . . . . . . . 206 - 15. NFS version 4 Procedures . . . . . . . . . 
. . . . . . . . . 206 - 15.1. Procedure 0: NULL - No Operation . . . . . . . . . . . . 206 - 15.2. Procedure 1: COMPOUND - Compound Operations . . . . . . 207 - 15.3. Operation 3: ACCESS - Check Access Rights . . . . . . . 209 - 15.4. Operation 4: CLOSE - Close File . . . . . . . . . . . . 212 - 15.5. Operation 5: COMMIT - Commit Cached Data . . . . . . . . 213 - 15.6. Operation 6: CREATE - Create a Non-Regular File Object . 216 + 12.2. String Type Overview . . . . . . . . . . . . . . . . . . 165 + 12.2.1. Overall String Class Divisions . . . . . . . . . . . 165 + 12.2.2. Divisions by Typedef Parent types . . . . . . . . . 166 + 12.2.3. Individual Types and Their Handling . . . . . . . . 167 + 12.3. Errors Related to Strings . . . . . . . . . . . . . . . 168 + 12.4. Types with Pre-processing to Resolve Mixture Issues . . 169 + 12.4.1. Processing of Principal Strings . . . . . . . . . . 169 + 12.4.2. Processing of Server Id Strings . . . . . . . . . . 169 + 12.5. String Types without Internationalization Processing . . 170 + 12.6. Types with Processing Defined by Other Internet Areas . 170 + 12.7. String Types with NFS-specific Processing . . . . . . . 171 + 12.7.1. Handling of File Name Components . . . . . . . . . . 172 + 12.7.2. Processing of Link Text . . . . . . . . . . . . . . 181 + 12.7.3. Processing of Principal Prefixes . . . . . . . . . . 182 + 13. Error Values . . . . . . . . . . . . . . . . . . . . . . . . 183 + 13.1. Error Definitions . . . . . . . . . . . . . . . . . . . 183 + 13.1.1. General Errors . . . . . . . . . . . . . . . . . . . 185 + 13.1.2. Filehandle Errors . . . . . . . . . . . . . . . . . 186 + 13.1.3. Compound Structure Errors . . . . . . . . . . . . . 187 + 13.1.4. File System Errors . . . . . . . . . . . . . . . . . 188 + 13.1.5. State Management Errors . . . . . . . . . . . . . . 190 + 13.1.6. Security Errors . . . . . . . . . . . . . . . . . . 191 + 13.1.7. Name Errors . . . . . . . . . . . . . . . . . . . . 191 + 13.1.8. 
Locking Errors . . . . . . . . . . . . . . . . . . . 192 + 13.1.9. Reclaim Errors . . . . . . . . . . . . . . . . . . . 193 + 13.1.10. Client Management Errors . . . . . . . . . . . . . . 194 + 13.1.11. Attribute Handling Errors . . . . . . . . . . . . . 194 + 13.2. Operations and their valid errors . . . . . . . . . . . 195 + 13.3. Callback operations and their valid errors . . . . . . . 203 + 13.4. Errors and the operations that use them . . . . . . . . 203 + 14. NFS version 4 Requests . . . . . . . . . . . . . . . . . . . 207 + 14.1. Compound Procedure . . . . . . . . . . . . . . . . . . . 208 + 14.2. Evaluation of a Compound Request . . . . . . . . . . . . 208 + 14.3. Synchronous Modifying Operations . . . . . . . . . . . . 209 + 14.4. Operation Values . . . . . . . . . . . . . . . . . . . . 210 + 15. NFS version 4 Procedures . . . . . . . . . . . . . . . . . . 210 + 15.1. Procedure 0: NULL - No Operation . . . . . . . . . . . . 210 + 15.2. Procedure 1: COMPOUND - Compound Operations . . . . . . 210 + 15.3. Operation 3: ACCESS - Check Access Rights . . . . . . . 213 + 15.4. Operation 4: CLOSE - Close File . . . . . . . . . . . . 216 + 15.5. Operation 5: COMMIT - Commit Cached Data . . . . . . . . 217 + 15.6. Operation 6: CREATE - Create a Non-Regular File Object . 219 15.7. Operation 7: DELEGPURGE - Purge Delegations Awaiting - Recovery . . . . . . . . . . . . . . . . . . . . . . . . 218 - 15.8. Operation 8: DELEGRETURN - Return Delegation . . . . . . 219 - 15.9. Operation 9: GETATTR - Get Attributes . . . . . . . . . 220 - 15.10. Operation 10: GETFH - Get Current Filehandle . . . . . . 221 - 15.11. Operation 11: LINK - Create Link to a File . . . . . . . 222 - 15.12. Operation 12: LOCK - Create Lock . . . . . . . . . . . . 224 - 15.13. Operation 13: LOCKT - Test For Lock . . . . . . . . . . 228 - 15.14. Operation 14: LOCKU - Unlock File . . . . . . . . . . . 229 - 15.15. Operation 15: LOOKUP - Lookup Filename . . . . . . . . . 230 - 15.16. 
Operation 16: LOOKUPP - Lookup Parent Directory . . . . 232 + Recovery . . . . . . . . . . . . . . . . . . . . . . . . 222 + 15.8. Operation 8: DELEGRETURN - Return Delegation . . . . . . 223 + 15.9. Operation 9: GETATTR - Get Attributes . . . . . . . . . 223 + 15.10. Operation 10: GETFH - Get Current Filehandle . . . . . . 224 + 15.11. Operation 11: LINK - Create Link to a File . . . . . . . 225 + 15.12. Operation 12: LOCK - Create Lock . . . . . . . . . . . . 227 + 15.13. Operation 13: LOCKT - Test For Lock . . . . . . . . . . 231 + 15.14. Operation 14: LOCKU - Unlock File . . . . . . . . . . . 232 + 15.15. Operation 15: LOOKUP - Lookup Filename . . . . . . . . . 233 + 15.16. Operation 16: LOOKUPP - Lookup Parent Directory . . . . 235 15.17. Operation 17: NVERIFY - Verify Difference in - Attributes . . . . . . . . . . . . . . . . . . . . . . . 233 - 15.18. Operation 18: OPEN - Open a Regular File . . . . . . . . 234 + Attributes . . . . . . . . . . . . . . . . . . . . . . . 236 + 15.18. Operation 18: OPEN - Open a Regular File . . . . . . . . 237 15.19. Operation 19: OPENATTR - Open Named Attribute - Directory . . . . . . . . . . . . . . . . . . . . . . . 243 - 15.20. Operation 20: OPEN_CONFIRM - Confirm Open . . . . . . . 244 - 15.21. Operation 21: OPEN_DOWNGRADE - Reduce Open File Access . 246 - 15.22. Operation 22: PUTFH - Set Current Filehandle . . . . . . 248 - 15.23. Operation 23: PUTPUBFH - Set Public Filehandle . . . . . 248 - 15.24. Operation 24: PUTROOTFH - Set Root Filehandle . . . . . 250 - 15.25. Operation 25: READ - Read from File . . . . . . . . . . 250 - 15.26. Operation 26: READDIR - Read Directory . . . . . . . . . 252 - 15.27. Operation 27: READLINK - Read Symbolic Link . . . . . . 256 - 15.28. Operation 28: REMOVE - Remove Filesystem Object . . . . 257 - 15.29. Operation 29: RENAME - Rename Directory Entry . . . . . 259 - 15.30. Operation 30: RENEW - Renew a Lease . . . . . . . . . . 261 - 15.31. 
Operation 31: RESTOREFH - Restore Saved Filehandle . . . 262 - 15.32. Operation 32: SAVEFH - Save Current Filehandle . . . . . 263 - 15.33. Operation 33: SECINFO - Obtain Available Security . . . 263 - 15.34. Operation 34: SETATTR - Set Attributes . . . . . . . . . 266 - 15.35. Operation 35: SETCLIENTID - Negotiate Clientid . . . . . 269 - 15.36. Operation 36: SETCLIENTID_CONFIRM - Confirm Clientid . . 272 - 15.37. Operation 37: VERIFY - Verify Same Attributes . . . . . 276 - 15.38. Operation 38: WRITE - Write to File . . . . . . . . . . 277 + Directory . . . . . . . . . . . . . . . . . . . . . . . 246 + 15.20. Operation 20: OPEN_CONFIRM - Confirm Open . . . . . . . 247 + 15.21. Operation 21: OPEN_DOWNGRADE - Reduce Open File Access . 249 + 15.22. Operation 22: PUTFH - Set Current Filehandle . . . . . . 251 + 15.23. Operation 23: PUTPUBFH - Set Public Filehandle . . . . . 251 + 15.24. Operation 24: PUTROOTFH - Set Root Filehandle . . . . . 253 + 15.25. Operation 25: READ - Read from File . . . . . . . . . . 253 + 15.26. Operation 26: READDIR - Read Directory . . . . . . . . . 255 + 15.27. Operation 27: READLINK - Read Symbolic Link . . . . . . 259 + 15.28. Operation 28: REMOVE - Remove Filesystem Object . . . . 260 + 15.29. Operation 29: RENAME - Rename Directory Entry . . . . . 262 + 15.30. Operation 30: RENEW - Renew a Lease . . . . . . . . . . 264 + 15.31. Operation 31: RESTOREFH - Restore Saved Filehandle . . . 265 + 15.32. Operation 32: SAVEFH - Save Current Filehandle . . . . . 266 + 15.33. Operation 33: SECINFO - Obtain Available Security . . . 266 + 15.34. Operation 34: SETATTR - Set Attributes . . . . . . . . . 269 + 15.35. Operation 35: SETCLIENTID - Negotiate Clientid . . . . . 272 + 15.36. Operation 36: SETCLIENTID_CONFIRM - Confirm Clientid . . 275 + 15.37. Operation 37: VERIFY - Verify Same Attributes . . . . . 279 + 15.38. Operation 38: WRITE - Write to File . . . . . . . . . . 280 15.39. Operation 39: RELEASE_LOCKOWNER - Release Lockowner - State . 
. . . . . . . . . . . . . . . . . . . . . . . . 281 + State . . . . . . . . . . . . . . . . . . . . . . . . . 284 - 15.40. Operation 10044: ILLEGAL - Illegal operation . . . . . . 282 - 16. NFS version 4 Callback Procedures . . . . . . . . . . . . . . 283 - 16.1. Procedure 0: CB_NULL - No Operation . . . . . . . . . . 283 - 16.2. Procedure 1: CB_COMPOUND - Compound Operations . . . . . 284 - 16.2.6. Operation 3: CB_GETATTR - Get Attributes . . . . . . 285 - 16.2.7. Operation 4: CB_RECALL - Recall an Open Delegation . 286 + 15.40. Operation 10044: ILLEGAL - Illegal operation . . . . . . 285 + 16. NFS version 4 Callback Procedures . . . . . . . . . . . . . . 286 + 16.1. Procedure 0: CB_NULL - No Operation . . . . . . . . . . 286 + 16.2. Procedure 1: CB_COMPOUND - Compound Operations . . . . . 287 + 16.2.6. Operation 3: CB_GETATTR - Get Attributes . . . . . . 288 + 16.2.7. Operation 4: CB_RECALL - Recall an Open Delegation . 289 16.2.8. Operation 10044: CB_ILLEGAL - Illegal Callback - Operation . . . . . . . . . . . . . . . . . . . . . 287 - 17. Security Considerations . . . . . . . . . . . . . . . . . . . 288 - 18. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 290 - 18.1. Named Attribute Definition . . . . . . . . . . . . . . . 290 - 18.2. ONC RPC Network Identifiers (netids) . . . . . . . . . . 290 - 19. References . . . . . . . . . . . . . . . . . . . . . . . . . 291 - 19.1. Normative References . . . . . . . . . . . . . . . . . . 291 - 19.2. Informative References . . . . . . . . . . . . . . . . . 292 - Appendix A. Acknowledgments . . . . . . . . . . . . . . . . . . 294 - Appendix B. RFC Editor Notes . . . . . . . . . . . . . . . . . . 294 - Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 294 + Operation . . . . . . . . . . . . . . . . . . . . . 290 + 17. Security Considerations . . . . . . . . . . . . . . . . . . . 291 + 18. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 293 + 18.1. Named Attribute Definition . . . 
. . . . . . . . . . . . 293 + 18.2. ONC RPC Network Identifiers (netids) . . . . . . . . . . 293 + 19. References . . . . . . . . . . . . . . . . . . . . . . . . . 294 + 19.1. Normative References . . . . . . . . . . . . . . . . . . 294 + 19.2. Informative References . . . . . . . . . . . . . . . . . 295 + Appendix A. Acknowledgments . . . . . . . . . . . . . . . . . . 297 + Appendix B. RFC Editor Notes . . . . . . . . . . . . . . . . . . 297 + Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 297 1. Introduction 1.1. Changes since RFC 3530 This document, together with the companion XDR description document [2], obsoletes RFC 3530 [11] as the authoritative document describing NFSv4. It does not introduce any over-the-wire protocol changes, in the sense that previously valid requests remain valid. However, some requests previously defined as invalid, although not @@ -363,21 +363,21 @@ o More liberal handling of internationalization for file names and user and group names, with the elimination of restrictions imposed by stringprep, with the recognition that rules for the forms of these names are the province of the receiving entity. o Updating handling of domain names to reflect IDNA. o Restructuring of string types to more appropriately reflect the reality of required string processing. - o LIPKEY SPKM/3 has been moved from being mandatory to optional + o LIPKEY SPKM/3 has been moved from being REQUIRED to OPTIONAL. o Some clarification on a client re-establishing callback information to the new server if state has been migrated 1.2. Changes since RFC 3010 This definition of the NFS version 4 protocol replaces or obsoletes the definition present in [12]. While portions of the two documents have remained the same, there have been substantive changes in others. 
The changes made between [12] and this document represent @@ -913,21 +913,21 @@ uint64_t major; uint64_t minor; }; This type is the filesystem identifier that is used as a mandatory attribute. 2.2.6. fs_location4 struct fs_location4 { - utf8val_must server<>; + utf8must server<>; pathname4 rootpath; }; 2.2.7. fs_locations4 struct fs_locations4 { pathname4 fs_root; fs_location4 locations<>; }; @@ -1813,21 +1813,21 @@ o CREATE is not allowed in a named attribute directory. Thus, such objects as symbolic links and special files are not allowed to be named attributes. Further, directories may not be created in a named attribute directory so no hierarchical structure of named attributes for a single object is allowed. o If OPENATTR is done on a named attribute directory or on a named attribute, the server MUST return NFS4ERR_WRONG_TYPE. o Doing a RENAME of a named attribute to a different named attribute - directory or to an ordinary (i.e. non-named-attribute) directory + directory or to an ordinary (i.e., non-named-attribute) directory is not allowed. o Creating hard links between named attribute directories or between named attribute directories and ordinary directories is not allowed. Names of attributes will not be controlled by this document or other IETF standards track documents. See Section 18 for further discussion. @@ -1861,23 +1861,23 @@ acl, archive, fileid, hidden, maxlink, mimetype, mode, numlinks, owner, owner_group, rawdev, space_used, system, time_access, time_backup, time_create, time_metadata, time_modify, mounted_on_fileid For quota_avail_hard, quota_avail_soft, and quota_used see their definitions below for the appropriate classification. 5.5. Set-Only and Get-Only Attributes - Some REQUIRED and RECOMMENDED attributes are set-only, i.e. they can + Some REQUIRED and RECOMMENDED attributes are set-only, i.e., they can be set via SETATTR but not retrieved via GETATTR. Similarly, some - REQUIRED and RECOMMENDED attributes are get-only, i.e. 
they can be + REQUIRED and RECOMMENDED attributes are get-only, i.e., they can be retrieved via GETATTR but not set via SETATTR. If a client attempts to set a get-only attribute or get a set-only attribute, the server MUST return NFS4ERR_INVAL. 5.6. REQUIRED Attributes - List and Definition References The list of REQUIRED attributes appears in Table 2. The meanings of the columns of the table are: o Name: the name of the attribute @@ -2126,21 +2126,21 @@ Locations where this file system may be found. If the server returns NFS4ERR_MOVED as an error, this attribute MUST be supported. 5.8.2.11. Attribute 25: hidden True, if the file is considered hidden with respect to the Windows API. 5.8.2.12. Attribute 26: homogeneous - True, if this object's file system is homogeneous, i.e. are per file + True, if this object's file system is homogeneous, i.e., are per file system attributes the same for all file system's objects. 5.8.2.13. Attribute 27: maxfilesize Maximum supported file size for the file system of this object. 5.8.2.14. Attribute 28: maxlink Maximum number of links for this object. @@ -2205,21 +2205,21 @@ the same as that of the fileid attribute. The mounted_on_fileid attribute is RECOMMENDED, so the server SHOULD provide it if possible, and for a UNIX-based server, this is straightforward. Usually, mounted_on_fileid will be requested during a READDIR operation, in which case it is trivial (at least for UNIX- based servers) to return mounted_on_fileid since it is equal to the fileid of a directory entry returned by readdir(). If mounted_on_fileid is requested in a GETATTR operation, the server should obey an invariant that has it returning a value that is equal - to the file object's entry in the object's parent directory, i.e. + to the file object's entry in the object's parent directory, i.e., what readdir() would have returned. Some operating environments allow a series of two or more file systems to be mounted onto a single mount point. 
In this case, for the server to obey the aforementioned invariant, it will need to find the base mount point, and not the intermediate mount points. 5.8.2.20. Attribute 34: no_trunc If this attribute is TRUE, then if the client uses a file name longer than name_max, an error will be returned instead of the name being @@ -3207,21 +3207,21 @@ be the sole determiner of access. For example: o In the case of a file system exported as read-only, the server may deny write permissions even though an object's ACL grants it. o Server implementations MAY grant ACE4_WRITE_ACL and ACE4_READ_ACL permissions to prevent a situation from arising in which there is no valid way to ever modify the ACL. o All servers will allow a user the ability to read the data of the - file when only the execute permission is granted (i.e. If the ACL + file when only the execute permission is granted (i.e., if the ACL denies the user the ACE4_READ_DATA access and allows the user ACE4_EXECUTE, the server will allow the user to read the data of the file). o Many servers have the notion of owner-override in which the owner of the object is allowed to override accesses that are denied by the ACL. This may be helpful, for example, to allow users continued access to open files on which the permissions have changed. @@ -3422,21 +3422,21 @@ and the mode modified as in Section 6.4.1.2 3. If both mode and ACL are given in the call: In this case, inheritance SHOULD NOT take place, and both attributes will be set as described in Section 6.4.1.3. 4. If neither mode nor ACL are given in the call: In the case where an object is being created without any initial - attributes at all, e.g. an OPEN operation with an opentype4 of + attributes at all, e.g., an OPEN operation with an opentype4 of OPEN4_CREATE and a createmode4 of EXCLUSIVE4, inheritance SHOULD NOT take place. Instead, the server SHOULD set permissions to deny all access to the newly created object. 
It is expected that the appropriate client will set the desired attributes in a subsequent SETATTR operation, and the server SHOULD allow that operation to succeed, regardless of what permissions the object is created with. For example, an empty ACL denies all permissions, but the server should allow the owner's SETATTR to succeed even though WRITE_ACL is implicitly denied. @@ -3468,22 +3468,22 @@ set), into two ACEs, one with no inheritance flags, and one with ACE4_INHERIT_ONLY_ACE set. This makes it simpler to modify the effective permissions on the directory without modifying the ACE which is to be inherited to the new directory's children. 7. Multi-Server Namespace NFSv4 supports attributes that allow a namespace to extend beyond the boundaries of a single server. It is RECOMMENDED that clients and servers support construction of such multi-server namespaces. Use of - such multi-server namespaces is OPTIONAL however, and for many - purposes, single-server namespace are perfectly acceptable. Use of + such multi-server namespaces is OPTIONAL, however, and for many + purposes, single-server namespaces are perfectly acceptable. Use of multi-server namespaces can provide many advantages, however, by separating a file system's logical position in a namespace from the (possibly changing) logistical and administrative considerations that result in particular file systems being located on particular servers. 7.1. Location Attributes NFSv4 contains RECOMMENDED attributes that allow file systems on one server to be associated with one or more instances of that file @@ -3496,47 +3496,47 @@ The fs_locations RECOMMENDED attribute allows specification of the file system locations where the data corresponding to a given file system may be found. 7.2. File System Presence or Absence A given location in an NFSv4 namespace (typically but not necessarily a multi-server namespace) can have a number of file system instance locations associated with it via the fs_locations attribute. 
There may also be an actual current file system at that location, - accessible via normal namespace operations (e.g. LOOKUP). In this + accessible via normal namespace operations (e.g., LOOKUP). In this case, the file system is said to be "present" at that position in the - namespace and clients will typically use it, reserving use of + namespace, and clients will typically use it, reserving use of additional locations specified via the location-related attributes to situations in which the principal location is no longer available. When there is no actual file system at the namespace location in question, the file system is said to be "absent". An absent file system contains no files or directories other than the root. Any reference to it, except to access a small set of attributes useful in determining alternate locations, will result in an error, NFS4ERR_MOVED. Note that if the server ever returns the error NFS4ERR_MOVED, it MUST support the fs_locations attribute. While the error name suggests that we have a case of a file system - which once was present, and has only become absent later, this is - only one possibility. A position in the namespace may be permanently + that once was present, and has only become absent later, this is only + one possibility. A position in the namespace may be permanently absent with the set of file system(s) designated by the location attributes being the only realization. The name NFS4ERR_MOVED reflects an earlier, more limited conception of its function, but this error will be returned whenever the referenced file system is absent, whether it has moved or not. Except in the case of GETATTR-type operations (to be discussed later), when the current filehandle at the start of an operation is within an absent file system, that operation is not performed and the - error NFS4ERR_MOVED returned, to indicate that the file system is + error NFS4ERR_MOVED is returned, to indicate that the file system is absent on the current server. 
Because a GETFH cannot succeed if the current filehandle is within an absent file system, filehandles within an absent file system cannot be transferred to the client. When a client does have filehandles within an absent file system, it is the result of obtaining them when the file system was present, and having the file system become absent subsequently. It should be noted that because the check for the current filehandle @@ -3561,74 +3561,73 @@ attributes may be obtained for a filehandle within an absent file system. This exception only applies if the attribute mask contains at least the fs_locations attribute bit, which indicates the client is interested in a result regarding an absent file system. If it is not requested, GETATTR will result in an NFS4ERR_MOVED error. When a GETATTR is done on an absent file system, the set of supported attributes is very limited. Many attributes, including those that are normally REQUIRED, will not be available on an absent file system. In addition to the fs_locations attribute, the following - attributes SHOULD be available on absent file systems, in the case of - RECOMMENDED attributes at least to the same degree that they are - available on present file systems. + attributes SHOULD be available on absent file systems. In the case + of RECOMMENDED attributes, they should be available at least to the + same degree that they are available on present file systems. fsid: This attribute should be provided so that the client can determine file system boundaries, including, in particular, the boundary between present and absent file systems. This value must be different from any other fsid on the current server and need have no particular relationship to fsids on any particular destination to which the client might be directed. - mounted_on_fileid: For objects at the top of an absent file system - this attribute needs to be available. 
Since the fileid is one - which is within the present parent file system, there should be no - need to reference the absent file system to provide this - information. + mounted_on_fileid: For objects at the top of an absent file system, + this attribute needs to be available. Since the fileid is within + the present parent file system, there should be no need to + reference the absent file system to provide this information. Other attributes SHOULD NOT be made available for absent file systems, even when it is possible to provide them. The server should not assume that more information is always better and should avoid gratuitously providing additional information. When a GETATTR operation includes a bit mask for the attribute - fs_locations, but where the bit mask includes attributes which are - not supported, GETATTR will not return an error, but will return the - mask of the actual attributes supported with the results. + fs_locations, but where the bit mask includes attributes that are not + supported, GETATTR will not return an error, but will return the mask + of the actual attributes supported with the results. Handling of VERIFY/NVERIFY is similar to GETATTR in that if the attribute mask does not include fs_locations the error NFS4ERR_MOVED will result. It differs in that any appearance in the attribute mask of an attribute not supported for an absent file system (and note - that this will include some normally REQUIRED attributes), will also + that this will include some normally REQUIRED attributes) will also cause an NFS4ERR_MOVED result. 7.3.2. READDIR and Absent File Systems A READDIR performed when the current filehandle is within an absent file system will result in an NFS4ERR_MOVED error, since, unlike the case of GETATTR, no such exception is made for READDIR. 
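The GETATTR exception for absent file systems described above can be sketched as follows. This is a non-normative model: the FATTR4_* bit numbers and the helper function are illustrative assumptions, not the protocol's XDR definitions.

```python
# Sketch of GETATTR against an absent file system.  The FATTR4_* bit
# numbers below are illustrative placeholders only.
NFS4_OK = 0
NFS4ERR_MOVED = 10019

FATTR4_SIZE = 4               # normally REQUIRED; unavailable when absent
FATTR4_FSID = 8
FATTR4_FS_LOCATIONS = 24
FATTR4_MOUNTED_ON_FILEID = 55

# Attributes a server can still supply for an absent file system.
ABSENT_FS_ATTRS = {FATTR4_FSID, FATTR4_FS_LOCATIONS, FATTR4_MOUNTED_ON_FILEID}

def getattr_on_absent_fs(requested_mask):
    """Return (status, attrs_returned) for a GETATTR whose current
    filehandle lies within an absent file system.  The exception to
    NFS4ERR_MOVED applies only when fs_locations is in the request;
    bits unsupported while absent are dropped from the result mask,
    not reported as an error."""
    if FATTR4_FS_LOCATIONS not in requested_mask:
        return NFS4ERR_MOVED, set()
    return NFS4_OK, set(requested_mask) & ABSENT_FS_ATTRS
```

The same mask-filtering behavior (returning the mask of attributes actually supported rather than an error) is what a client should expect when it over-requests attributes on an absent file system.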
Attributes for an absent file system may be fetched via a READDIR for a directory in a present file system, when that directory contains the root directories of one or more absent file systems. In this case, the handling is as follows: o If the attribute set requested includes fs_locations, then fetching of attributes proceeds normally and no NFS4ERR_MOVED indication is returned, even when the rdattr_error attribute is requested. o If the attribute set requested does not include fs_locations, then if the rdattr_error attribute is requested, each directory entry - for the root of an absent file system, will report NFS4ERR_MOVED - as the value of the rdattr_error attribute. + for the root of an absent file system will report NFS4ERR_MOVED as + the value of the rdattr_error attribute. o If the attribute set requested does not include either of the attributes fs_locations or rdattr_error then the occurrence of the root of an absent file system within the directory will result in the READDIR failing with an NFS4ERR_MOVED error. o The unavailability of an attribute because of a file system's absence, even one that is ordinarily REQUIRED, does not result in any error indication. The set of attributes returned for the root directory of the absent file system in that case is simply @@ -3638,34 +3637,34 @@ The location-bearing attribute of fs_locations provides, together with the possibility of absent file systems, a number of important facilities in providing reliable, manageable, and scalable data access. When a file system is present, these attributes can provide alternative locations, to be used to access the same data, in the event of server failures, communications problems, or other difficulties that make continued access to the current file system - impossible or otherwise impractical. Under some circumstances + impossible or otherwise impractical. 
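The per-entry READDIR cases for the root of an absent file system (Section 7.3.2) can be modeled by the following sketch; the attribute names and return shape are assumptions for illustration, not protocol data structures.

```python
# Non-normative model of the three READDIR cases when a directory
# entry is the root of an absent file system.
NFS4ERR_MOVED = 10019

def readdir_entry_for_absent_root(requested_attrs):
    """requested_attrs: set of attribute names in the READDIR request.
    Returns ("attrs", ...) when the entry can be returned, or
    ("fail", NFS4ERR_MOVED) when the whole READDIR must fail."""
    if "fs_locations" in requested_attrs:
        # Fetching proceeds normally; no NFS4ERR_MOVED indication,
        # even when rdattr_error is also requested.
        return ("attrs", "normal")
    if "rdattr_error" in requested_attrs:
        # The entry reports NFS4ERR_MOVED as its rdattr_error value.
        return ("attrs", NFS4ERR_MOVED)
    # Neither fs_locations nor rdattr_error: the READDIR itself fails.
    return ("fail", NFS4ERR_MOVED)
```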
Under some circumstances, multiple alternative locations may be used simultaneously to provide - higher performance access to the file system in question. Provision + higher-performance access to the file system in question. Provision of such alternate locations is referred to as "replication" although there are cases in which replicated sets of data are not in fact present, and the replicas are instead different paths to the same data. When a file system is present and becomes absent, clients can be given the opportunity to have continued access to their data, at an alternate location. In this case, a continued attempt to use the data in the now-absent file system will result in an NFS4ERR_MOVED - error and at that point the successor locations (typically only one - but multiple choices are possible) can be fetched and used to + error and, at that point, the successor locations (typically only one + although multiple choices are possible) can be fetched and used to continue access. Transfer of the file system contents to the new location is referred to as "migration", but it should be kept in mind that there are cases in which this term can be used, like "replication", when there is no actual data migration per se. Where a file system was not previously present, specification of file system location provides a means by which file systems located on one server can be associated with a namespace defined by another server, thus allowing a general multi-server namespace facility. A designation of such a location, in place of an absent file system, is @@ -3688,58 +3687,65 @@ impossible or otherwise impractical, the client can use the alternate locations as a way to get continued access to its data. Multiple locations may be used simultaneously, to provide higher performance through the exploitation of multiple paths between client and target file system. 
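A minimal client-side use of replica locations, preferring the principal location and reserving alternates for when it becomes unavailable, might look like the sketch below. The location strings and the reachability callback are hypothetical stand-ins for real fs_locations entries and RPC attempts.

```python
# Sketch: client-side failover across replica locations drawn from
# fs_locations.  "reachable" is a hypothetical callback standing in
# for an actual attempt to contact the server.
def pick_location(locations, reachable):
    """Return the first usable location, preferring the principal
    (first-listed) one and falling back to alternates only when it
    is unavailable."""
    for loc in locations:
        if reachable(loc):
            return loc
    raise RuntimeError("no replica location reachable")
```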
The alternate locations may be physical replicas of the (typically read-only) file system data, or they may reflect alternate paths to the same server or provide for the use of various forms of server clustering in which multiple servers provide alternate ways of - accessing the same physical file system. + accessing the same physical file system. How these different modes + of file system transition are represented within the fs_locations + attribute and how the client deals with file system transition issues + will be discussed in detail below. Multiple server addresses, whether they are derived from a single - entry with a DNS name representing a set of IP addresses, or from - multiple entries each with its own server address may correspond to + entry with a DNS name representing a set of IP addresses or from + multiple entries each with its own server address, may correspond to the same actual server. 7.4.2. File System Migration When a file system is present and becomes absent, clients can be given the opportunity to have continued access to their data, at an alternate location, as specified by the fs_locations attribute. Typically, a client will be accessing the file system in question, get an NFS4ERR_MOVED error, and then use the fs_locations attribute to determine the new location of the data. Such migration can be helpful in providing load balancing or general resource reallocation. The protocol does not specify how the file system will be moved between servers. It is anticipated that a number of different server-to-server transfer mechanisms might be - used with the choice left to the server implementer. The NFSv4 + used with the choice left to the server implementor. The NFSv4 protocol specifies the method used to communicate the migration event between client and server. 
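The migration sequence described above (an access fails with NFS4ERR_MOVED, the client consults fs_locations, then continues at a new location) can be sketched as follows. The server objects and the `read`/`locations_of` callables are illustrative assumptions, not real COMPOUND operations.

```python
# Sketch of a client's reaction to a migration event.
NFS4_OK = 0
NFS4ERR_MOVED = 10019

def access_with_migration(server, path, locations_of):
    """Attempt an access; on NFS4ERR_MOVED, fetch the file system's
    new locations (modeling a GETATTR of fs_locations) and retry at
    the listed locations in order."""
    status, data = server.read(path)
    if status != NFS4ERR_MOVED:
        return server, data
    for candidate in locations_of(server, path):
        status, data = candidate.read(path)
        if status == NFS4_OK:
            return candidate, data
    raise RuntimeError("file system absent at all listed locations")
```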
The new location may be an alternate communication path to the same - server, or, in the case of various forms of server clustering, - another server providing access to the same physical file system. + server or, in the case of various forms of server clustering, another + server providing access to the same physical file system. The + client's responsibilities in dealing with this transition depend on + the specific nature of the new access path as well as how and whether + data was in fact migrated. These issues will be discussed in detail + below. When an alternate location is designated as the target for migration, it must designate the same data. Where file systems are writable, a change made on the original file system must be visible on all migration targets. Where a file system is not writable but represents a read-only copy (possibly periodically updated) of a writable file system, similar requirements apply to the propagation of updates. Any change visible in the original file system must already be effected on all migration targets, to avoid any - possibility, that a client in effecting a transition to the migration - target will see any reversion in file system state. + possibility that a client, in effecting a transition to the migration + target, will see any reversion in file system state. 7.4.3. Referrals Referrals provide a way of placing a file system in a location within the namespace essentially without respect to its physical location on a given server. This allows a single server or a set of servers to present a multi-server namespace that encompasses file systems located on multiple servers. Some likely uses of this include establishment of site-wide or organization-wide namespaces, or even knitting such together into a truly global namespace. @@ -3750,281 +3756,282 @@ typically by receiving the error NFS4ERR_MOVED, the actual location or locations of the file system can be determined by fetching the fs_locations attribute. 
The locations-related attribute may designate a single file system location or multiple file system locations, to be selected based on the needs of the client. Use of multi-server namespaces is enabled by NFSv4 but is not required. The use of multi-server namespaces and their scope will - depend on the applications used, and system administration + depend on the applications used and system administration preferences. Multi-server namespaces can be established by a single server providing a large set of referrals to all of the included file systems. Alternatively, a single multi-server namespace may be administratively segmented with separate referral file systems (on - separate servers) for each separately-administered portion of the - namespace. Any segment or the top-level referral file system may use + separate servers) for each separately administered portion of the + namespace. The top-level referral file system or any segment may use replicated referral file systems for higher availability. Generally, multi-server namespaces are for the most part uniform, in that the same data made available to one client at a given location in the namespace is made available to all clients at that location. 7.5. Location Entries and Server Identity As mentioned above, a single location entry may have a server address - target in the form of a DNS name which may represent multiple IP + target in the form of a DNS name that may represent multiple IP addresses, while multiple location entries may have their own server - address targets, that reference the same server. + address targets that reference the same server. When multiple addresses for the same server exist, the client may assume that for each file system in the namespace of a given server network address, there exist file systems at corresponding namespace locations for each of the other server network addresses. It may do this even in the absence of explicit listing in fs_locations. 
Such corresponding file system locations can be used as alternate locations, just as those explicitly specified via the fs_locations attribute. If a single location entry designates multiple server IP addresses, the client cannot assume that these addresses are multiple paths to - the same server. In most case they will be, but the client MUST + the same server. In most cases, they will be, but the client MUST verify that before acting on that assumption. When two server addresses are designated by a single location entry and they correspond to different servers, this normally indicates some sort of - misconfiguration, and so the client should avoid use such location + misconfiguration, and so the client should avoid using such location entries when alternatives are available. When they are not, clients should pick one of the IP addresses and use it, without using others that are not directed to the same server. -7.6. Additional Client-side Considerations +7.6. Additional Client-Side Considerations When clients make use of servers that implement referrals, - replication, and migration, care should be taken so that a user who + replication, and migration, care should be taken that a user who mounts a given file system that includes a referral or a relocated file system continues to see a coherent picture of that user-side file system despite the fact that it contains a number of server-side - file systems which may be on different servers. + file systems that may be on different servers. One important issue is upward navigation from the root of a server- side file system to its parent (specified as ".." in UNIX), in the case in which it transitions to that file system as a result of referral, migration, or a transition as a result of replication.
When the client is at such a point, and it needs to ascend to the parent, it must go back to the parent as seen within the multi-server - namespace rather issuing a LOOKUPP call to the server, which would - result in the parent within that server's single-server namespace. - In order to do this, the client needs to remember the filehandles - that represent such file system roots, and use these instead of - issuing a LOOKUPP to the current server. This will allow the client - to present to applications a consistent namespace, where upward - navigation and downward navigation are consistent. + namespace rather than sending a LOOKUPP operation to the server, + which would result in the parent within that server's single-server + namespace. In order to do this, the client needs to remember the + filehandles that represent such file system roots and use these + instead of issuing a LOOKUPP operation to the current server. This + will allow the client to present to applications a consistent + namespace, where upward navigation and downward navigation are + consistent. Another issue concerns refresh of referral locations. When referrals are used extensively, they may change as server configurations change. It is expected that clients will cache information related - to traversing referrals so that future client side requests are + to traversing referrals so that future client-side requests are resolved locally without server communication. This is usually rooted in client-side name lookup caching. Clients should periodically purge this data for referral points in order to detect - changes in location information. When the change_policy attribute - changes for directories that hold referral entries or for the - referral entries themselves, clients should consider any associated - cached referral information to be out of date. + changes in location information. 7.7. 
Effecting File System Transitions Transitions between file system instances, whether due to switching - between replicas upon server unavailability, or in response to - server-initiated migration events are best dealt with together. This - is so even though for the server, pragmatic considerations will - normally force different implementation strategies for planned and - unplanned transitions. Even though the prototypical use cases of - replication and migration contain distinctive sets of features, when - all possibilities for these operations are considered, there is an + between replicas upon server unavailability or to server-initiated + migration events, are best dealt with together. This is so even + though, for the server, pragmatic considerations will normally force + different implementation strategies for planned and unplanned + transitions. Even though the prototypical use cases of replication + and migration contain distinctive sets of features, when all + possibilities for these operations are considered, there is an underlying unity of these operations, from the client's point of view, that makes treating them together desirable. A number of methods are possible for servers to replicate data and to track client state in order to allow clients to transition between file system instances with a minimum of disruption. Such methods vary between those that use inter-server clustering techniques to limit the changes seen by the client, to those that are less aggressive, use more standard methods of replicating data, and impose a greater burden on the client to adapt to the transition. The NFSv4 protocol does not impose choices on clients and servers - with regard to that spectrum of transition methods. The NFSv4.0 - protocol does not provide the servers a means of communicating the - transiation methods. 
In the NFSv4.1 protocol [27], an additional - attribute "fs_locations_info" is presented, which will define the - specific choices that can be made, how these choices are communicated - to the client and how the client is to deal with any discontinuities. + with regard to that spectrum of transition methods. In fact, there + are many valid choices, depending on client and application + requirements and their interaction with server implementation + choices. The NFSv4.0 protocol does not provide the servers a means + of communicating the transition methods. In the NFSv4.1 protocol + [27], an additional attribute "fs_locations_info" is presented, which + will define the specific choices that can be made, how these choices + are communicated to the client, and how the client is to deal with + any discontinuities. In the sections below, references will be made to various possible - server issues as a way of illustrating the transition scenarios that - clients may deal with. The intent here is not to define or limit - server implementations but rather to illustrate the range of issues - that clients may face. Again, as the NFSv4.0 protocol does not have - an explict means of communicating these issues to the client, the - intent is to document the problems that can be faced in a multi- - server name space and allow the client to use the inferred - transitions available via fs_locations and other attributes (see - Section 7.9.1). + server implementation choices as a way of illustrating the transition + scenarios that clients may deal with. The intent here is not to + define or limit server implementations but rather to illustrate the + range of issues that clients may face. 
Again, as the NFSv4.0 + protocol does not have an explicit means of communicating these issues + to the client, the intent is to document the problems that can be + faced in a multi-server namespace and allow the client to use the + inferred transitions available via fs_locations and other attributes + (see Section 7.9.1). In the discussion below, references will be made to a file system - having a particular property or of two file systems (typically the + having a particular property or to two file systems (typically the source and destination) belonging to a common class of any of several types. Two file systems that belong to such a class share some - important aspect of file system behavior that clients may depend upon - when present, to easily effect a seamless transition between file - system instances. Conversely, where the file systems do not belong - to such a common class, the client has to deal with various sorts of - implementation discontinuities which may cause performance or other - issues in effecting a transition. + important aspects of file system behavior that clients may depend + upon when present, to easily effect a seamless transition between + file system instances. Conversely, where the file systems do not + belong to such a common class, the client has to deal with various + sorts of implementation discontinuities that may cause performance or + other issues in effecting a transition. While fs_locations is available, default assumptions with regard to such classifications have to be inferred (see Section 7.9.1 for details). In cases in which one server is expected to accept opaque values from the client that originated from another server, the servers SHOULD - encode the "opaque" values in big endian byte order. If this is + encode the "opaque" values in big-endian byte order. If this is done, servers acting as replicas or immigrating file systems will be able to parse values like stateids, directory cookies, filehandles, - etc.
even if their native byte order is different from that of other + etc., even if their native byte order is different from that of other servers cooperating in the replication and migration of the file system. 7.7.1. File System Transitions and Simultaneous Access When a single file system may be accessed at multiple locations, - whether this is because of an indication of file system identity as - reported by the fs_locations attribute, the client will, depending on - specific circumstances as discussed below, either: + either because of an indication of file system identity as reported + by the fs_locations attribute, the client will, depending on specific + circumstances as discussed below, either: - o The client accesses multiple instances simultaneously, as - representing alternate paths to the same data and metadata. + o Access multiple instances simultaneously, each of which represents + an alternate path to the same data and metadata. - o The client accesses one instance (or set of instances) and then - transitions to an alternative instance (or set of instances) as a - result of network issues, server unresponsiveness, or server- - directed migration. + o Access one instance (or set of instances) and then transition to + an alternative instance (or set of instances) as a result of + network issues, server unresponsiveness, or server-directed + migration. 7.7.2. Filehandles and File System Transitions There are a number of ways in which filehandles can be handled across a file system transition. These can be divided into two broad classes depending upon whether the two file systems across which the transition happens share sufficient state to effect some sort of continuity of file system handling. - When there is no such co-operation in filehandle assignment, the two - file systems are reported as being in different _handle_ classes.
In + When there is no such cooperation in filehandle assignment, the two + file systems are reported as being in different handle classes. In this case, all filehandles are assumed to expire as part of the file system transition. Note that this behavior does not depend on fh_expire_type attribute and depends on the specification of the FH4_VOL_MIGRATION bit. When there is co-operation in filehandle assignment, the two file - systems are reported as being in the same _handle_ classes. In this + systems are reported as being in the same handle classes. In this case, persistent filehandles remain valid after the file system transition, while volatile filehandles (excluding those that are only volatile due to the FH4_VOL_MIGRATION bit) are subject to expiration on the target server. 7.7.3. Fileids and File System Transitions The issue of continuity of fileids in the event of a file system - transition needs to be addressed. The general expectation had been - that in situations in which the two file system instances are created - by a single vendor using some sort of file system image copy, fileids - will be consistent across the transition while in the analogous - multi-vendor transitions they will not. This poses difficulties, + transition needs to be addressed. The general expectation is that in + situations in which the two file system instances are created by a + single vendor using some sort of file system image copy, fileids will + be consistent across the transition, while in the analogous multi- + vendor transitions they will not. This poses difficulties, especially for the client without special knowledge of the transition mechanisms adopted by the server. Note that although fileid is not a REQUIRED attribute, many servers support fileids and many clients - provide API's that depend on fileids. + provide APIs that depend on fileids. 
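One server-side approach to fileid continuity, treating the fileid as an ordinary piece of stored metadata rather than identifying it with the inode number used to index the file, can be sketched as below. All class and method names here are illustrative, not part of any implementation.

```python
# Sketch: a server that stores fileid separately from the index
# (inode number) used to locate the file.  This lets a migration
# target preserve client-visible fileids even though its own inode
# numbering differs from the source's.
class MiniServer:
    def __init__(self):
        self.next_inode = 1
        self.by_inode = {}          # inode -> (fileid, data)

    def create(self, data, fileid=None):
        inode = self.next_inode
        self.next_inode += 1
        if fileid is None:
            fileid = inode          # typical identification of fileid with inode
        self.by_inode[inode] = (fileid, data)
        return inode

    def migrate_to(self, target):
        """Carry each file's fileid along as metadata; the target
        assigns its own inodes, but fileids are unchanged."""
        mapping = {}
        for inode, (fileid, data) in self.by_inode.items():
            mapping[inode] = target.create(data, fileid=fileid)
        return mapping
```

With this arrangement, an application that cached fileids (for example via stat) before the migration event sees no discontinuity afterward.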
It is important to note that while clients themselves may have no trouble with a fileid changing as a result of a file system transition event, applications do typically have access to the fileid - (e.g. via stat), and the result of this is that an application may - work perfectly well if there is no file system instance transition or - if any such transition is among instances created by a single vendor, + (e.g., via stat). The result is that an application may work + perfectly well if there is no file system instance transition or if + any such transition is among instances created by a single vendor, yet be unable to deal with the situation in which a multi-vendor - transition occurs, at the wrong time. + transition occurs at the wrong time. Providing the same fileids in a multi-vendor (multiple server vendors) environment has generally been held to be quite difficult. While there is work to be done, it needs to be pointed out that this difficulty is partly self-imposed. Servers have typically identified - fileid with inode number, i.e. with a quantity used to find the file + fileid with inode number, i.e., with a quantity used to find the file in question. This identification poses special difficulties for migration of a file system between vendors where assigning the same index to a given file may not be possible. Note here that a fileid is not required to be useful to find the file in question, only that it is unique within the given file system. Servers prepared to accept a fileid as a single piece of metadata and store it apart from the value used to index the file information can relatively easily maintain a fileid value across a migration event, allowing a truly transparent migration event. In any case, where servers can provide continuity of fileids, they should, and the client should be able to find out that such continuity is available and take appropriate action. 
Information about the continuity (or lack thereof) of fileids across a file system transition is represented by specifying whether the file - systems in question are of the same _fileid_ class. + systems in question are of the same fileid class. Note that when consistent fileids do not exist across a transition (either because there is no continuity of fileids or because fileid is not a supported attribute on one of the instances involved), and there are no reliable filehandles across a transition event (either because there is no filehandle continuity or because the filehandles are volatile), the client is in a position where it cannot verify that files it was accessing before the transition are the same objects. It is forced to assume that no object has been renamed, and, unless - there are guarantees that provide this (e.g. the file system is read- - only), problems for applications may occur. Therefore, use of such - configurations should be limited to situations where the problems - that this may cause can be tolerated. + there are guarantees that provide this (e.g., the file system is + read-only), problems for applications may occur. Therefore, use of + such configurations should be limited to situations where the + problems that this may cause can be tolerated. 7.7.4. Fsids and File System Transitions Since fsids are generally only unique on a per-server basis, it is likely that they will change during a file system transition. Clients should not make the fsids received from the server visible to applications since they may not be globally unique, and because they may change during a file system transition event. Applications are best served if they are isolated from such transitions to the extent possible. 7.7.5. The Change Attribute and File System Transitions Since the change attribute is defined as a server-specific one, change attributes fetched from one server are normally presumed to be invalid on another server.
Such a presumption is troublesome since it would invalidate all cached change attributes, requiring refetching. Even more disruptive, the absence of any assured continuity for the change attribute means that even if the same value - is retrieved on refetch no conclusions can drawn as to whether the - object in question has changed. The identical change attribute could - be merely an artifact of a modified file with a different change - attribute construction algorithm, with that new algorithm just + is retrieved on refetch, no conclusions can be drawn as to whether + the object in question has changed. The identical change attribute + could be merely an artifact of a modified file with a different + change attribute construction algorithm, with that new algorithm just happening to result in an identical change value. When the two file systems have consistent change attribute formats, - and we say that they are in the same _change_ class, the client may + and we say that they are in the same change class, the client may assume a continuity of change attribute construction and handle this situation just as it would be handled without any file system transition. 7.7.6. Lock State and File System Transitions In a file system transition, the client needs to handle cases in which the two servers have cooperated in state management and in which they have not. Cooperation by two servers in state management requires coordination of client IDs. Before the client attempts to @@ -4040,281 +4047,279 @@ This state transfer will reduce disruption to the client when a file system transition occurs. If the servers are successful in transferring all state, the client can attempt to establish sessions associated with the client ID used for the source file system instance. 
If the server accepts that as a valid client ID, then the client may use the existing stateids associated with that client ID for the old file system instance in connection with that same client ID in connection with the transitioned file system instance. - File systems co-operating in state management may actually share - state or simply divide the identifier space so as to recognize (and - reject as stale) each other's stateids and client IDs. Servers which - do share state may not do so under all conditions or at all times. - The requirement for the server is that if it cannot be sure in - accepting a client ID that it reflects the locks the client was - given, it must treat all associated state as stale and report it as - such to the client. + File systems cooperating in state management may actually share state + or simply divide the identifier space so as to recognize (and reject + as stale) each other's stateids and client IDs. Servers that do + share state may not do so under all conditions or at all times. If + the server cannot be sure when accepting a client ID that it reflects + the locks the client was given, the server must treat all associated + state as stale and report it as such to the client. The client must establish a new client ID on the destination, if it - does not have one already, and reclaim locks if possible. In this - case, old stateids and client IDs should not be presented to the new - server since there is no assurance that they will not conflict with - IDs valid on that server. + does not have one already, and reclaim locks if allowed by the + server. In this case, old stateids and client IDs should not be + presented to the new server since there is no assurance that they + will not conflict with IDs valid on that server. 
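The two outcomes above, depending on whether the destination server accepts the transferred client ID, can be summarized in a small sketch; the function name and the returned labels are assumptions for illustration.

```python
# Sketch: which state a client may keep using on the destination
# server after a file system transition.
def state_after_transition(dest_accepts_client_id):
    if dest_accepts_client_id:
        # Cooperating servers: existing stateids remain usable with
        # the same client ID on the transitioned file system.
        return {"client_id": "existing", "stateids": "existing"}
    # Otherwise: establish a new client ID and attempt reclaim; old
    # stateids and client IDs must not be presented to the new server.
    return {"client_id": "new", "stateids": "reclaimed"}
```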
When actual locks are not known to be maintained, the destination server may establish a grace period specific to the given file system, with non-reclaim locks being rejected for that file system, even though normal locks are being granted for other file systems. Clients should not infer the absence of a grace period for file systems being transitioned to a server from responses to requests for other file systems. In the case of lock reclamation for a given file system after a file system transition, edge conditions can arise similar to those for reclaim after server restart (although in the case of the planned state transfer associated with migration, these can be avoided by securely recording lock state as part of state migration). Unless the destination server can guarantee that locks will not be incorrectly granted, the destination server should not allow lock - reclaims and avoid establishing a grace period. (See Section 9.14 - for further details.) + reclaims and should avoid establishing a grace period. (See + Section 9.14 for further details.) Information about client identity may be propagated between servers in the form of client_owner4 and associated verifiers, under the assumption that the client presents the same values to all the servers with which it deals. Servers are encouraged to provide facilities to allow locks to be reclaimed on the new server after a file system transition. Often such facilities may not be available, and the client should be prepared to re-obtain locks, even though it is possible that the client may have its LOCK or OPEN request denied due to a conflicting lock. The consequences of having no facilities available to reclaim locks - on the sew server will depend on the type of environment. In some + on the new server will depend on the type of environment. In some environments, such as the transition between read-only file systems, such denial of locks should not pose large difficulties in practice.
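The per-file-system grace period described above admits only reclaim-type requests for the transitioned file system while other file systems keep granting normal locks. A minimal sketch of that admission rule, with an invented helper name and boolean inputs standing in for real server state:

```python
def allow_lock_request(fs_in_grace, is_reclaim):
    """Per-file-system grace period check, as sketched in the text above.

    While a transitioned file system is in its grace period, only
    reclaim-type lock requests are admitted for it; once the grace
    period ends (or for file systems not in grace), normal locks are
    granted and reclaims are refused.
    """
    if fs_in_grace:
        return is_reclaim   # only reclaims during the grace period
    return not is_reclaim   # no reclaims outside the grace period

assert allow_lock_request(fs_in_grace=True, is_reclaim=True)
assert not allow_lock_request(fs_in_grace=True, is_reclaim=False)
assert allow_lock_request(fs_in_grace=False, is_reclaim=False)
```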
When an attempt to re-establish a lock on a new server is denied, the client should treat the situation as if its original lock had been revoked. Note that when the lock is granted, the client cannot assume that no conflicting lock could have been granted in the interim. Where change attribute continuity is present, the client may check the change attribute to check for unwanted file modifications. Where even this is not available, and the file system is not read-only, a client may reasonably treat all pending locks as having been revoked. 7.7.6.1. Transitions and the Lease_time Attribute - In order that the client may appropriately manage its leases in the + In order that the client may appropriately manage its lease in the case of a file system transition, the destination server must establish proper values for the lease_time attribute. When state is transferred transparently, that state should include the correct value of the lease_time attribute. The lease_time attribute on the destination server must never be less than that on - the source since this would result in premature expiration of leases - granted by the source server. Upon transitions in which state is - transferred transparently, the client is under no obligation to re- - fetch the lease_time attribute and may continue to use the value + the source, since this would result in premature expiration of a + lease granted by the source server. Upon transitions in which state + is transferred transparently, the client is under no obligation to + refetch the lease_time attribute and may continue to use the value previously fetched (on the source server). If state has not been transferred transparently because the client ID is rejected when presented to the new server, the client should fetch - the value of lease_time on the new (i.e. destination) server, and use - it for subsequent locking requests. 
However the server must respect - a grace period at least as long as the lease_time on the source - server, in order to ensure that clients have ample time to reclaim - their lock before potentially conflicting non-reclaimed locks are - granted. + the value of lease_time on the new (i.e., destination) server, and + use it for subsequent locking requests. However, the server must + respect a grace period at least as long as the lease_time on the + source server, in order to ensure that clients have ample time to + reclaim their lock before potentially conflicting non-reclaimed locks + are granted. 7.7.7. Write Verifiers and File System Transitions In a file system transition, the two file systems may be clustered in the handling of unstably written data. When this is the case, and - the two file systems belong to the same _write-verifier_ class, write + the two file systems belong to the same write-verifier class, write verifiers returned from one system may be compared to those returned by the other and superfluous writes avoided. - When two file systems belong to different _write-verifier_ classes, - any verifier generated by one must not be compared to one provided by - the other. Instead, it should be treated as not equal even when the + When two file systems belong to different write-verifier classes, any + verifier generated by one must not be compared to one provided by the + other. Instead, it should be treated as not equal even when the values are identical. 7.7.8. Readdir Cookies and Verifiers and File System Transitions In a file system transition, the two file systems may be consistent in their handling of READDIR cookies and verifiers.
When this is the - case, and the two file systems belong to the same _readdir_ class, + case, and the two file systems belong to the same readdir class, READDIR cookies and verifiers from one system may be recognized by the other and READDIR operations started on one server may be validly continued on the other, simply by presenting the cookie and verifier returned by a READDIR operation done on the first file system to the second. - When two file systems belong to different _readdir_ classes, any + When two file systems belong to different readdir classes, any READDIR cookie and verifier generated by one is not valid on the second, and must not be presented to that server by the client. The client should act as if the verifier was rejected. 7.7.9. File System Data and File System Transitions When multiple replicas exist and are used simultaneously or in succession by a client, applications using them will normally expect - that they contain data the same data or data which is consistent with - the normal sorts of changes that are made by other clients updating - the data of the file system. (with metadata being the same to the - degree inferred by the fs_locations attribute). However, when + that they contain either the same data or data that is consistent + with the normal sorts of changes that are made by other clients + updating the data of the file system (with metadata being the same to + the degree inferred by the fs_locations attribute). However, when multiple file systems are presented as replicas of one another, the precise relationship between the data of one and the data of another is not, as a general matter, specified by the NFSv4 protocol. It is quite possible to present as replicas file systems where the data of those file systems is sufficiently different that some applications have problems dealing with the transition between replicas. 
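The class rules of the preceding sections (change, write-verifier, and readdir classes) share one shape: an opaque value from one file system may be compared to a value from another only when both belong to the same class; across classes, even bit-identical values must be treated as unequal. A minimal illustrative sketch, with an invented helper name:

```python
def verifiers_match(same_class, v1, v2):
    """Compare two opaque verifiers (e.g., write verifiers or READDIR
    verifiers) across a file system transition.

    Values from different classes must be treated as not equal even
    when they are bit-for-bit identical, so equality is meaningful
    only within a class.
    """
    return same_class and v1 == v2

# Same class: identical verifiers allow superfluous writes to be avoided.
assert verifiers_match(True, b"\x01\x02", b"\x01\x02")
# Different classes: identical bits prove nothing.
assert not verifiers_match(False, b"\x01\x02", b"\x01\x02")
```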
The namespace will typically be constructed so that applications can choose an appropriate level of support, so that in one position in - the namespace a varied set of replicas will be listed while in + the namespace a varied set of replicas will be listed, while in another only those that are up-to-date may be considered replicas. - The protocol does define three special cases of the relationship - among replicas to be specified by the server and relied upon by - clients: + The protocol does define four special cases of the relationship among + replicas to be specified by the server and relied upon by clients: o When multiple server addresses correspond to the same actual server, the client may depend on the fact that changes to data, metadata, or locks made on one file system are immediately reflected on others. o When multiple replicas exist and are used simultaneously by a client, they must designate the same data. Where file systems are writable, a change made on one instance must be visible on all instances, immediately upon the earlier of the return of the modifying requester or the visibility of that change on any of the associated replicas. This allows a client to use these replicas simultaneously without any special adaptation to the fact that - there are multiple replicas. In this case, locks, whether shared - or byte-range, and delegations obtained one replica are - immediately reflected on all replicas, even though these locks - will be managed under a set of client IDs. + there are multiple replicas. In this case, locks (whether share + reservations or byte-range locks) and delegations obtained on one + replica are immediately reflected on all replicas, even though + these locks will be managed under a set of client IDs. o When one replica is designated as the successor instance to - another existing instance after return NFS4ERR_MOVED (i.e.
the + another existing instance after the return of NFS4ERR_MOVED (i.e., + the case of migration), the client may depend on the fact that all - changes securely made to data (uncommitted writes are dealt with - in Section 7.7.7) on the original instance are made to the - successor image. + changes written to stable storage on the original instance are + written to stable storage of the successor (uncommitted writes are + dealt with in Section 7.7.7). o Where a file system is not writable but represents a read-only copy (possibly periodically updated) of a writable file system, clients have similar requirements with regard to the propagation of updates. They may need a guarantee that any change visible on the original file system instance must be immediately visible on any replica before the client transitions access to that replica, in order to avoid any possibility that a client, in effecting a transition to a replica, will see any reversion in file system - state. Since these file systems are presumed not to be suitable - for simultaneous use, there is no specification of how locking is - handled and it generally will be the case that locks obtained one - file system will be separate from those on others. Since these - are going to be read-only file systems, this is not expected to - pose an issue for clients or applications. + state. Since these file systems are presumed to be unsuitable for + simultaneous use, there is no specification of how locking is + handled; in general, locks obtained on one file system will be + separate from those on others. Since these are going to be read- + only file systems, this is not expected to pose an issue for + clients or applications. 7.8. Effecting File System Referrals Referrals are effected when an absent file system is encountered, and one or more alternate locations are made available by the fs_locations attribute.
The client will typically get an - NFS4ERR_MOVED error, fetch the appropriate location information and + NFS4ERR_MOVED error, fetch the appropriate location information, and proceed to access the file system on a different server, even though it retains its logical position within the original namespace. Referrals differ from migration events in that they happen only when the client has not previously referenced the file system in question (so there is nothing to transition). Referrals can only come into effect when an absent file system is encountered at its root. The examples given in the sections below are somewhat artificial in - that an actual client will not typically do a multi-component lookup, - but will have cached information regarding the upper levels of the - name hierarchy. However, these example are chosen to make the + that an actual client will not typically do a multi-component look + up, but will have cached information regarding the upper levels of + the name hierarchy. However, these examples are chosen to make the required behavior clear and easy to put within the scope of a small number of requests, without getting unduly into details of how specific clients might choose to cache things. 7.8.1. Referral Example (LOOKUP) Let us suppose that the following COMPOUND is sent in an environment in which /this/is/the/path is absent from the target server. This may be for a number of reasons. It may be the case that the file - system has moved, or, it may be the case that the target server is + system has moved, or it may be the case that the target server is functioning mainly, or solely, to refer clients to the servers on which various file systems are located. o PUTROOTFH o LOOKUP "this" o LOOKUP "is" - o LOOKUP "the" o LOOKUP "path" + o GETFH - o GETATTR fsid,fileid,size,time_modify + o GETATTR(fsid,fileid,size,time_modify) Under the given circumstances, the following will be the result. o PUTROOTFH --> NFS_OK.
The current fh is now the root of the pseudo-fs. o LOOKUP "this" --> NFS_OK. The current fh is for /this and is within the pseudo-fs. o LOOKUP "is" --> NFS_OK. The current fh is for /this/is and is within the pseudo-fs. o LOOKUP "the" --> NFS_OK. The current fh is for /this/is/the and is within the pseudo-fs. o LOOKUP "path" --> NFS_OK. The current fh is for /this/is/the/path and is within a new, absent file system, but ... the client will never see the value of that fh. o GETFH --> NFS4ERR_MOVED. Fails because current fh is in an absent - file system at the start of the operation and the spec makes no - exception for GETFH. + file system at the start of the operation, and the specification + makes no exception for GETFH. - o GETATTR fsid,fileid,size,time_modify. Not executed because the + o GETATTR(fsid,fileid,size,time_modify). Not executed because the failure of the GETFH stops processing of the COMPOUND. Given the failure of the GETFH, the client has the job of determining the root of the absent file system and where to find that file - system, i.e. the server and path relative to that server's root fh. + system, i.e., the server and path relative to that server's root fh. Note here that in this example, the client did not obtain filehandles - and attribute information (e.g. fsid) for the intermediate + and attribute information (e.g., fsid) for the intermediate directories, so that it would not be sure where the absent file system starts. It could be the case, for example, that /this/is/the is the root of the moved file system and that the reason that the lookup of "path" succeeded is that the file system was not absent on that operation but was moved between the last LOOKUP and the GETFH (since COMPOUND is not atomic). Even if we had the fsids for all of the intermediate directories, we could have no way of knowing that /this/is/the/path was the root of a new file system, since we don't yet have its fsid.
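The boundary-finding logic that the re-sent chain of LOOKUP/GETFH/GETATTR(fsid) operations enables can be sketched as below. This is a hypothetical client-side helper, not protocol behavior; the function name and the list-of-fsids representation are invented for the example:

```python
def fs_boundary_index(fsids_along_path):
    """Given the fsid returned by GETATTR(fsid) for each component of a
    looked-up path (outermost component first), return the index of the
    first component that starts a new file system, or None if the fsid
    never changes along the path.
    """
    for i in range(1, len(fsids_along_path)):
        if fsids_along_path[i] != fsids_along_path[i - 1]:
            return i  # fsid changed: this component roots a new fs
    return None

# /this, /this/is, and /this/is/the share the pseudo-fs fsid; "path"
# reports a different fsid, so it is the root of the absent file system.
assert fs_boundary_index(["pseudo", "pseudo", "pseudo", "new"]) == 3
assert fs_boundary_index(["pseudo", "pseudo"]) is None
```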
In order to get the necessary information, let us re-send the chain of LOOKUPs with GETFHs and GETATTRs to at least get the fsids so we can be sure where the appropriate file system boundaries are. The client could choose to get fs_locations at the same time but in most cases the client will have a good guess as to where file system - boundaries are (because of where and where not NFS4ERR_MOVED was + boundaries are (because of where NFS4ERR_MOVED was, and was not, received) making fetching of fs_locations unnecessary. OP01: PUTROOTFH --> NFS_OK - Current fh is root of pseudo-fs. OP02: GETATTR(fsid) --> NFS_OK - Just for completeness. Normally, clients will know the fsid of the pseudo-fs as soon as they establish communication with a @@ -4357,107 +4362,107 @@ OP11: GETFH --> NFS_OK - Current fh is for /this/is/the and is within pseudo-fs. OP12: LOOKUP "path" --> NFS_OK - Current fh is for /this/is/the/path and is within a new, absent file system, but ... - - The client will never see the value of that fh + - The client will never see the value of that fh. OP13: GETATTR(fsid, fs_locations) --> NFS_OK - We are getting the fsid to know where the file system boundaries - are. In this operation the fsid will be different than that of + are. In this operation, the fsid will be different than that of the parent directory (which in turn was retrieved in OP10). Note that the fsid we are given will not necessarily be preserved at - the new location. That fsid might be different and in fact the + the new location. That fsid might be different, and in fact the fsid we have for this file system might be a valid fsid of a different file system on that new server. - In this particular case, we are pretty sure anyway that what has moved is /this/is/the/path rather than /this/is/the since we have the fsid of the latter and it is that of the pseudo-fs, which presumably cannot move. However, in other examples, we might not - have this kind of information to rely on (e.g. 
/this/is/the might + be a non-pseudo file system separate from /this/is/the/path), so - we need to have another reliable source information on the - boundary of the file system which is moved. If, for example, the - file system "/this/is" had moved we would have a case of migration - rather than referral and once the boundaries of the migrated file + we need to have another reliable source of information on the boundary + of the file system that is moved. If, for example, the file + system /this/is had moved, we would have a case of migration + rather than referral, and once the boundaries of the migrated file system were clear, we could fetch fs_locations. - We are fetching fs_locations because the fact that we got an - NFS4ERR_MOVED at this point means that it most likely that this is - a referral and we need the destination. Even if it is the case - that "/this/is/the" is a file system which has migrated, we will + NFS4ERR_MOVED at this point means that it is most likely that this + is a referral and we need the destination. Even if it is the case + that /this/is/the is a file system that has migrated, we will still need the location information for that file system. OP14: GETFH --> NFS4ERR_MOVED + - Fails because current fh is in an absent file system at the start - of the operation and the spec makes no exception for GETFH. Note - that this means the server will never send the client a filehandle - from within an absent file system. + of the operation, and the specification makes no exception for + GETFH. Note that this means the server will never send the client + a filehandle from within an absent file system. Given the above, the client knows where the root of the absent file - system is (/this/is/the/path), by noting where the change of fsid + system is (/this/is/the/path) by noting where the change of fsid occurred (between "the" and "path").
The fs_locations attribute also gives the client the actual location of the absent file system, so that the referral can proceed. The server gives the client the bare minimum of information about the absent file system so that there will be very little scope for problems of conflict between information sent by the referring server and information of the file system's home. No filehandles and very few attributes are present on - the referring server and the client can treat those it receives as - basically transient information with the function of enabling the - referral. + the referring server, and the client can treat those it receives as + transient information with the function of enabling the referral. 7.8.2. Referral Example (READDIR) Another context in which a client may encounter referrals is when it - does a READDIR on directory in which some of the sub-directories are - the roots of absent file systems. + does a READDIR on a directory in which some of the sub-directories + are the roots of absent file systems. Suppose such a directory is read as follows: o PUTROOTFH o LOOKUP "this" o LOOKUP "is" o LOOKUP "the" o READDIR (fsid, size, time_modify, mounted_on_fileid) In this case, because rdattr_error is not requested, fs_locations is - not requested, and some of attributes cannot be provided, the result - will be an NFS4ERR_MOVED error on the READDIR, with the detailed - results as follows: + not requested, and some of the attributes cannot be provided, the + result will be an NFS4ERR_MOVED error on the READDIR, with the + detailed results as follows: o PUTROOTFH --> NFS_OK. The current fh is at the root of the pseudo-fs. o LOOKUP "this" --> NFS_OK. The current fh is for /this and is within the pseudo-fs. o LOOKUP "is" --> NFS_OK. The current fh is for /this/is and is within the pseudo-fs. o LOOKUP "the" --> NFS_OK. The current fh is for /this/is/the and is within the pseudo-fs. o READDIR (fsid, size, time_modify, mounted_on_fileid) --> NFS4ERR_MOVED. 
Note that the same error would have been returned - if /this/is/the had migrated, when in fact it is because the + if /this/is/the had migrated, but it is returned because the directory contains the root of an absent file system. So now suppose that we re-send with rdattr_error: o PUTROOTFH o LOOKUP "this" o LOOKUP "is" @@ -4511,173 +4516,177 @@ within the pseudo-fs. o LOOKUP "the" --> NFS_OK. The current fh is for /this/is/the and is within the pseudo-fs. o READDIR (rdattr_error, fs_locations, mounted_on_fileid, fsid, size, time_modify) --> NFS_OK. The attributes will be as shown below. The attributes for the directory entry with the component named - "path" will only contain + "path" will only contain: o rdattr_error (value: NFS_OK) o fs_locations o mounted_on_fileid (value: unique fileid within referring file system) o fsid (value: unique value within referring server) The attributes for entry "path" will not contain size or time_modify because these attributes are not available within an absent file system. 7.9. The Attribute fs_locations The fs_locations attribute is structured in the following way: struct fs_location4 { - utf8val_must server<>; + utf8must server<>; pathname4 rootpath; }; struct fs_locations4 { pathname4 fs_root; fs_location4 locations<>; }; The fs_location4 data type is used to represent the location of a file system by providing a server name and the path to the root of the file system within that server's namespace. When a set of servers have corresponding file systems at the same path within their namespaces, an array of server names may be provided. An entry in the server array is a UTF-8 string and represents one of a - traditional DNS host name, IPv4 address, or IPv6 address, or an zero- + traditional DNS host name, IPv4 address, IPv6 address, or a zero- length string. A zero-length string SHOULD be used to indicate the current address being used for the RPC call.
It is not a requirement that all servers that share the same rootpath be listed in one fs_location4 instance. The array of server names is provided for convenience. Servers that share the same rootpath may also be listed in separate fs_location4 entries in the fs_locations attribute. The fs_locations4 data type and fs_locations attribute contain an array of such locations. Since the namespace of each server may be constructed differently, the "fs_root" field is provided. The path represented by fs_root represents the location of the file system in - the current server's namespace, i.e. that of the server from which + the current server's namespace, i.e., that of the server from which the fs_locations attribute was obtained. The fs_root path is meant to aid the client by clearly referencing the root of the file system whose locations are being reported, no matter what object within the current file system the current filehandle designates. The fs_root is simply the pathname the client used to reach the object on the - current server, the object being that the fs_locations attribute - applies to. + current server (i.e., the object to which the fs_locations attribute + applies). When the fs_locations attribute is interrogated and there are no alternate file system locations, the server SHOULD return a zero- length array of fs_location4 structures, together with a valid fs_root. As an example, suppose there is a replicated file system located at two servers (servA and servB). At servA, the file system is located - at path "/a/b/c". At, servB the file system is located at path - "/x/y/z". If the client were to obtain the fs_locations value for - the directory at "/a/b/c/d", it might not necessarily know that the - file system's root is located in servA's namespace at "/a/b/c". When - the client switches to servB, it will need to determine that the - directory it first referenced at servA is now represented by the path - "/x/y/z/d" on servB. 
To facilitate this, the fs_locations attribute - provided by servA would have a fs_root value of "/a/b/c" and two - entries in fs_locations. One entry in fs_locations will be for - itself (servA) and the other will be for servB with a path of - "/x/y/z". With this information, the client is able to substitute - "/x/y/z" for the "/a/b/c" at the beginning of its access path and - construct "/x/y/z/d" to use for the new server. + at path /a/b/c. At servB, the file system is located at path /x/y/z. + If the client were to obtain the fs_locations value for the directory + at /a/b/c/d, it might not necessarily know that the file system's + root is located in servA's namespace at /a/b/c. When the client + switches to servB, it will need to determine that the directory it + first referenced at servA is now represented by the path /x/y/z/d on + servB. To facilitate this, the fs_locations attribute provided by + servA would have an fs_root value of /a/b/c and two entries in + fs_locations. One entry in fs_locations will be for itself (servA) + and the other will be for servB with a path of /x/y/z. With this + information, the client is able to substitute /x/y/z for the /a/b/c + at the beginning of its access path and construct /x/y/z/d to use for + the new server. Note that: there is no requirement that the number of components in each rootpath be the same; there is no relation between the number of - components in rootpath or fs_root; and the none of the components in - each rootpath and fs_root have to be the same. In the above example, - we could have had a third element in the locations array, with server + components in rootpath or fs_root, and none of the components in each + rootpath and fs_root have to be the same.
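The fs_root-for-rootpath substitution in the servA/servB example can be sketched as a small helper. This is illustrative only (the name `translate_path` is invented), under the assumption that pathnames are handled as component arrays, as pathname4 defines them:

```python
def translate_path(client_path, fs_root, rootpath):
    """Rewrite a pathname for a new server, per the fs_locations rules
    above: the fs_root prefix (the file system's path on the current
    server) is replaced by the chosen location's rootpath, and the
    trailing components are kept unchanged.
    """
    if client_path[:len(fs_root)] != fs_root:
        raise ValueError("fs_root must be a prefix of the client's path")
    return rootpath + client_path[len(fs_root):]

# servA exports the file system at /a/b/c; servB holds it at /x/y/z.
# The directory /a/b/c/d on servA becomes /x/y/z/d on servB.
assert translate_path(["a", "b", "c", "d"],
                      ["a", "b", "c"],
                      ["x", "y", "z"]) == ["x", "y", "z", "d"]
```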
In the above example, we + could have had a third element in the locations array, with server equal to "servC", and rootpath equal to "/I/II", and a fourth element - in locations with server equal to "servD", and rootpath equal to + in locations with server equal to "servD" and rootpath equal to "/aleph/beth/gimel/daleth/he". The relationship between fs_root and a rootpath is that the client replaces the pathname indicated in fs_root for the current server with the substitute indicated in rootpath for the new server. - For an example for a referred or migrated file system, suppose there + For an example of a referred or migrated file system, suppose there is a file system located at serv1. At serv1, the file system is - located at "/az/buky/vedi/glagoli". The client finds that object at - "glagoli" has migrated (or is a referral). The client gets the - fs_locations attribute, which contains an fs_root of "/az/buky/vedi/ - glagoli", and one element in the locations array, with server equal - to "serv2", and rootpath equal to "/izhitsa/fita". The client - replaces "/az/buky/vedi/glagoli" with "/izhitsa/fita", and uses the - latter pathname on "serv2". + located at /az/buky/vedi/glagoli. The client finds that the object at + glagoli has migrated (or is a referral). The client gets the + fs_locations attribute, which contains an fs_root of /az/buky/vedi/ + glagoli, and one element in the locations array, with server equal to + serv2, and rootpath equal to /izhitsa/fita. The client replaces /az/ + buky/vedi/glagoli with /izhitsa/fita, and uses the latter pathname on + serv2. Thus, the server MUST return an fs_root that is equal to the path the - client used to reach the object the fs_locations attribute applies - to. Otherwise the client cannot determine the new path to use on the - new server. + client used to reach the object to which the fs_locations attribute + applies. Otherwise, the client cannot determine the new path to use + on the new server. 7.9.1.
Inferring Transition Modes When fs_locations is used, information about the specific locations should be assumed based on the following rules. The following rules are general and apply irrespective of the context. o All listed file system instances should be considered as of the - same _handle_ class, if and only if, the current fh_expire_type + same handle class if and only if the current fh_expire_type attribute does not include the FH4_VOL_MIGRATION bit. Note that in the case of referral, filehandle issues do not apply since there can be no filehandles known within the current file system nor is there any access to the fh_expire_type attribute on the referring (absent) file system. o All listed file system instances should be considered as of the - same _fileid_ class, if and only if, the fh_expire_type attribute + same fileid class if and only if the fh_expire_type attribute indicates persistent filehandles and does not include the FH4_VOL_MIGRATION bit. Note that in the case of referral, fileid issues do not apply since there can be no fileids known within the referring (absent) file system nor is there any access to the fh_expire_type attribute. o All file system instances should be considered as of - different _change_ classes. + different change classes. + + o All file system instances should be considered as of + different readdir classes. For other class assignments, handling of file system transitions depends on the reasons for the transition: - o When the transition is due to migration, that is the client was - directed to new file system after receiving an NFS4ERR_MOVED - error, the target should be treated as being of the same _write- - verifier_ class as the source. + o When the transition is due to migration, that is, the client was + directed to a new file system after receiving an NFS4ERR_MOVED + error, the target should be treated as being of the same write- + verifier class as the source.
o When the transition is due to failover to another replica, that is, the client selected another replica without receiving an NFS4ERR_MOVED error, the target should be treated as being of a - different _write-verifier_ class from the source. + different write-verifier class from the source. The specific choices reflect typical implementation patterns for - failover and controlled migration respectively. + failover and controlled migration, respectively. See Section 17 for a discussion on the recommendations for the security flavor to be used by any GETATTR operation that requests the "fs_locations" attribute. 8. NFS Server Name Space + 8.1. Server Exports On a UNIX server the name space describes all the files reachable by pathnames under the root directory or "/". On a Windows NT server the name space constitutes all the files on disks named by mapped disk letters. NFS server administrators rarely make the entire server's filesystem name space available to NFS clients. More often portions of the name space are made available via an "export" feature. In previous versions of the NFS protocol, the root filehandle for each export is obtained through the MOUNT protocol; @@ -4892,24 +4901,23 @@ of the client might have had on the server, as opposed to forcing the new client incarnation to wait for the leases to expire. Breaking the lease state amounts to the server removing all lock, share reservation, and, where the server is not supporting the CLAIM_DELEGATE_PREV claim type, all delegation state associated with the same client with the same identity. For discussion of delegation state recovery, see Section 10.2.1. Client identification is encapsulated in the following structure: - struct SETCLIENTID4args { - nfs_client_id4 client; - cb_client4 callback; - uint32_t callback_ident; + struct nfs_client_id4 { + verifier4 verifier; + opaque id; }; The first field, verifier, is a client incarnation verifier that is used to detect client reboots.
Only if the verifier is different from that which the server has previously recorded for the client (as identified by the second field of the structure, id) does the server start the process of canceling the client's leased state. The second field, id, is a variable-length string that uniquely defines the client. @@ -5266,21 +5275,22 @@ request and response on a given lock_owner must be cached as long as the lock state exists on the server. The client MUST monotonically increment the sequence number for the CLOSE, LOCK, LOCKU, OPEN, OPEN_CONFIRM, and OPEN_DOWNGRADE operations. This is true even in the event that the previous operation that used the sequence number received an error. The only exception to this rule is if the previous operation received one of the following errors: NFS4ERR_STALE_CLIENTID, NFS4ERR_STALE_STATEID, NFS4ERR_BAD_STATEID, NFS4ERR_BAD_SEQID, NFS4ERR_BADXDR, - NFS4ERR_RESOURCE, NFS4ERR_NOFILEHANDLE. + NFS4ERR_RESOURCE, NFS4ERR_NOFILEHANDLE, NFS4ERR_LEASE_MOVED, or + NFS4ERR_MOVED. 9.1.6. Recovery from Replayed Requests As described above, the sequence number is per lock_owner. As long as the server maintains the last sequence number received and follows the methods described above, there are no risks of a Byzantine router re-sending old requests. The server need only maintain the (lock_owner, sequence number) state as long as there are open files or closed files with locks outstanding. @@ -5310,21 +5320,21 @@ returned to the client. 9.1.8. Use of Open Confirmation In the case that an OPEN is retransmitted and the lock_owner is being used for the first time or the lock_owner state has been previously released by the server, the use of the OPEN_CONFIRM operation will prevent incorrect behavior. When the server observes the use of the lock_owner for the first time, it will direct the client to perform the OPEN_CONFIRM for the corresponding OPEN. This sequence - establishes the use of an lock_owner and associated sequence number.
+ establishes the use of a lock_owner and associated sequence number. Since the OPEN_CONFIRM sequence connects a new open_owner on the server with an existing open_owner on a client, the sequence number may have any value. The OPEN_CONFIRM step assures the server that the value received is the correct one. (See Section 15.20 for further details.) There are a number of situations in which the requirement to confirm an OPEN would pose difficulties for the client and server, in that they would be prevented from acting in a timely fashion on information received, because that information would be provisional, @@ -6027,29 +6037,29 @@ When responsibility for handling a given file system is transferred to a new server (migration) or the client chooses to use an alternate server (e.g., in response to server unresponsiveness) in the context of file system replication, the appropriate handling of state shared between the client and server (i.e., locks, leases, stateids, and clientids) is as described below. The handling differs between migration and replication. For related discussion of file server state and recovery of such, see the sections under Section 9.6. - If server replica or a server immigrating a filesystem agrees to, or - is expected to, accept opaque values from the client that originated - from another server, then it is a wise implementation practice for - the servers to encode the "opaque" values in network byte order. - This way, servers acting as replicas or immigrating filesystems will - be able to parse values like stateids, directory cookies, - filehandles, etc. even if their native byte order is different from - other servers cooperating in the replication and migration of the - filesystem.
+ If a server replica or a server immigrating a filesystem agrees to, + or is expected to, accept opaque values from the client that + originated from another server, then it is a wise implementation + practice for the servers to encode the "opaque" values in network + byte order. This way, servers acting as replicas or immigrating + filesystems will be able to parse values like stateids, directory + cookies, filehandles, etc. even if their native byte order is + different from other servers cooperating in the replication and + migration of the filesystem. 9.14.1. Migration and State In the case of migration, the servers involved in the migration of a filesystem SHOULD transfer all server state from the original to the new server. This must be done in a way that is transparent to the client. This state transfer will ease the client's transition when a filesystem migration occurs. If the servers are successful in transferring all state, the client will continue to use stateids assigned by the original server. Therefore the new server must @@ -6064,21 +6074,21 @@ server will typically have a different expiration time from those for the same client, previously on the old server. To maintain the property that all leases on a given server for a given client expire at the same time, the server should advance the expiration time to the later of the leases being transferred or the leases already present. This allows the client to maintain lease renewal of both classes without special effort. The servers may choose not to transfer the state information upon migration. However, this choice is discouraged. In this case, when - the client presents state information from the original server (e.g. + the client presents state information from the original server (e.g., in a RENEW op or a READ op of zero length), the client must be prepared to receive either NFS4ERR_STALE_CLIENTID or NFS4ERR_STALE_STATEID from the new server. 
The client should then recover its state information as it normally would in response to a server failure. The new server must take care to allow for the recovery of state information as it would in the event of server restart. A client SHOULD re-establish new callback information with the new server as soon as possible, according to sequences described in @@ -6107,27 +6117,30 @@ In the case of lease renewal, the client may not be submitting requests for a filesystem that has been migrated to another server. This can occur because of the implicit lease renewal mechanism. The client renews leases for all filesystems when submitting a request to any one filesystem at the server. In order for the client to schedule renewal of leases that may have been relocated to the new server, the client must find out about lease relocation before those leases expire. To accomplish this, all - operations which implicitly renew leases for a client (i.e., OPEN, - CLOSE, READ, WRITE, RENEW, LOCK, LOCKT, LOCKU), will return the error + operations which implicitly renew leases for a client (such as OPEN, + CLOSE, READ, WRITE, RENEW, LOCK, and others) will return the error NFS4ERR_LEASE_MOVED if responsibility for any of the leases to be renewed has been transferred to a new server. This condition will continue until the client receives an NFS4ERR_MOVED error and the server receives the subsequent GETATTR(fs_locations) for an access to - each filesystem for which a lease has been moved to a new server. + each filesystem for which a lease has been moved to a new server. By + convention, the compound including the GETATTR(fs_locations) SHOULD + append a RENEW operation to permit the server to identify the client + doing the access. When a client receives an NFS4ERR_LEASE_MOVED error, it should perform an operation on each filesystem associated with the server in question.
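The client-side reaction described above (touch each filesystem, then use fs_locations to renew leases at the new server) can be sketched as follows. This is an illustrative model only: FakeClient, probe, get_fs_locations, and renew_at are hypothetical stand-ins, not a real NFSv4 client API; only the error-code values come from the protocol.

```python
# Hypothetical sketch of the NFS4ERR_LEASE_MOVED recovery described above.
NFS4_OK = 0
NFS4ERR_MOVED = 10019          # protocol-assigned error code

class FakeClient:
    """Toy client that remembers which filesystems have migrated."""
    def __init__(self, moved):
        self.moved = set(moved)
        self.renewed = []
    def probe(self, fs):
        # Stands in for a compound (e.g., PUTFH + GETATTR) on the filesystem.
        return NFS4ERR_MOVED if fs in self.moved else NFS4_OK
    def get_fs_locations(self, fs):
        # Stands in for GETATTR(fs_locations), with a RENEW appended so the
        # server can identify the client, per the convention above.
        return "server2:/" + fs
    def renew_at(self, location, fs):
        self.renewed.append((fs, location))

def handle_lease_moved(client, filesystems):
    # Touch each filesystem associated with the server; any that answers
    # NFS4ERR_MOVED has its leases renewed at the new location.
    for fs in filesystems:
        if client.probe(fs) == NFS4ERR_MOVED:
            client.renew_at(client.get_fs_locations(fs), fs)

c = FakeClient(moved=["exports/a"])
handle_lease_moved(c, ["exports/a", "exports/b"])
```

The design point is that the client discovers migrated leases by probing, rather than by waiting for them to expire.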
When the client receives an NFS4ERR_MOVED error, the client can follow the normal process to obtain the new server information (through the fs_locations attribute) and perform renewal of those leases on the new server. If the server has not had state transferred to it transparently, the client will receive either NFS4ERR_STALE_CLIENTID or NFS4ERR_STALE_STATEID from the new server, as described above, and the client can then recover state information @@ -6503,21 +6516,21 @@ The data that is written to the server as a prerequisite to the unlocking of a region must be written, at the server, to stable storage. The client may accomplish this either with synchronous writes or by following asynchronous writes with a COMMIT operation. This is required because retransmission of the modified data after a server reboot might conflict with a lock held by another client. A client implementation may choose to accommodate applications which use record locking in non-standard ways (e.g., using a record lock as - a global semaphore) by flushing to the server more data upon an LOCKU + a global semaphore) by flushing to the server more data upon a LOCKU than is covered by the locked range. This may include modified data within files other than the one for which the unlocks are being done. In such cases, the client must not interfere with applications whose READs and WRITEs are being done only within the bounds of record locks which the application holds. For example, an application locks a single byte of a file and proceeds to write that single byte. A client that chose to handle a LOCKU by flushing all modified data to the server could validly write that single byte in response to an unrelated unlock. 
However, it would not be valid to write the entire block in which that single written byte was located since it includes @@ -7490,64 +7502,66 @@ * adding bits to flag fields such as new attributes to GETATTR's bitmap4 data type * adding bits to existing attributes like ACLs that have flag words * extending enumerated types (including NFS4ERR_*) with new values - 4. Minor versions may not modify the structure of existing + 4. Minor versions must not modify the structure of existing attributes. - 5. Minor versions may not delete operations. + 5. Minor versions must not delete operations. This prevents the potential reuse of a particular operation "slot" in a future minor version. - 6. Minor versions may not delete attributes. + 6. Minor versions must not delete attributes. - 7. Minor versions may not delete flag bits or enumeration values. + 7. Minor versions must not delete flag bits or enumeration values. - 8. Minor versions may declare an operation as mandatory to NOT - implement. + 8. Minor versions may declare that an operation MUST NOT be implemented. - Specifying an operation as "mandatory to not implement" is + Specifying that an operation MUST NOT be implemented is equivalent to obsoleting an operation. For the client, it means - that the operation should not be sent to the server. For the + that the operation MUST NOT be sent to the server. For the server, an NFS error can be returned as opposed to "dropping" the request as an XDR decode error. This approach allows for the obsolescence of an operation while maintaining its structure so that a future minor version can reintroduce the operation. - 1. Minor versions may declare attributes mandatory to NOT - implement. + 1. Minor versions may declare that an attribute MUST NOT be + implemented. - 2. Minor versions may declare flag bits or enumeration values - as mandatory to NOT implement. + 2. Minor versions may declare that a flag bit or enumeration + value MUST NOT be implemented. - 9.
Minor versions may downgrade features from mandatory to - recommended, or recommended to optional. + 9. Minor versions may downgrade features from REQUIRED to + RECOMMENDED, or RECOMMENDED to OPTIONAL. - 10. Minor versions may upgrade features from optional to recommended - or recommended to mandatory. + 10. Minor versions may upgrade features from OPTIONAL to RECOMMENDED + or RECOMMENDED to REQUIRED. - 11. A client and server that support minor version X must support + 11. A client and server that support minor version X SHOULD support minor versions 0 (zero) through X-1 as well. - 12. No new features may be introduced as mandatory in a minor - version. + 12. Except for infrastructural changes, no new features may be + introduced as REQUIRED in a minor version. This rule allows for the introduction of new functionality and forces the use of implementation experience before designating a - feature as mandatory. + feature as REQUIRED. On the other hand, some classes of + features are infrastructural and have broad effects. Allowing + such features to not be REQUIRED complicates implementation of + the minor version. 13. A client MUST NOT attempt to use a stateid, filehandle, or similar returned object from the COMPOUND procedure with minor version X for another COMPOUND procedure with minor version Y, where X != Y. 12. Internationalization This chapter describes the string-handling aspects of the NFS version 4 protocol, and how they address issues related to @@ -7580,27 +7594,28 @@ for the implementation to allow files created by other protocols and by local operations on the file system to be accessed using NFS version 4 as well. It also needs to be understood that a considerable portion of file name processing will occur within the implementation of the file system rather than within the limits of the NFS version 4 server implementation per se. 
As a result, certain aspects of name processing may change as the locus of processing moves from file system to file system. As a result of these factors, the protocol - does not enforce uniformity of processing NFS version 4 server - requests on the server as a whole. Because the server interacts with - existing file system implementations, the same server handling will - produce different behavior when interacting with different file - system implementations. To attempt to require uniform behavior, and - treat the the protocol server and the file system as a unified - application, would considerably limit the usefulness of the protocol. + cannot enforce uniformity of name-related processing upon NFS version + 4 server requests on the server as a whole. Because the server + interacts with existing file system implementations, the same server + handling will produce different behavior when interacting with + different file system implementations. To attempt to require uniform + behavior, and treat the protocol server and the file system as a + unified application, would considerably limit the usefulness of the + protocol. 12.1. Use of UTF-8 As mentioned above, UTF-8 is used as a convenient way to encode Unicode which allows clients that have no internationalization requirements to avoid these issues since the mapping of ASCII names to UTF-8 is the identity. 12.1.1. Relation to Stringprep @@ -7610,106 +7625,135 @@ in ways that make sense for typical users throughout the world." A protocol conforming to this framework must define a profile of stringprep "in order to fully specify the processing options." NFS version 4, while it does make normative references to stringprep and uses elements of that framework, does not, for reasons that are explained below, conform to that framework for all of the strings that are used within it.
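The remark above that the mapping of ASCII names to UTF-8 is the identity can be checked directly; this is an illustrative sketch (the file name is hypothetical), not protocol text:

```python
# An ASCII file name encodes to the identical byte sequence under UTF-8,
# so a client restricted to ASCII names needs no Unicode-specific handling.
name = "README.txt"                      # hypothetical pure-ASCII name
utf8 = name.encode("utf-8")
assert utf8 == name.encode("ascii")      # identity mapping: same bytes
assert all(b < 0x80 for b in utf8)       # every byte stays 7-bit
assert utf8.decode("utf-8") == name      # round-trips unchanged
```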
In addition to some specific issues which have caused stringprep to add confusion in handling certain characters for certain languages, - there are a number of reasons why stringprep profiles are not + there are a number of general reasons why stringprep profiles are not suitable for describing NFS version 4. o Restricting the character repertoire to Unicode 3.2, as required by stringprep, is unduly constricting. o Many of the character tables in stringprep are inappropriate because of this limited character repertoire, so that normative reference to stringprep is not desirable in many cases and instead, we allow more flexibility in the definition of case mapping tables. o Because of the presence of different file systems, the specifics of processing are not fully defined and some aspects are RECOMMENDED, rather than REQUIRED. Despite these issues, in many cases the general structure of stringprep profiles, consisting of sections which deal with the applicability of the description, the character repertoire, character mapping, normalization, prohibited characters, and issues of the - handling (i.e. possible prohibition) of bidirectional strings, is a + handling (i.e., possible prohibition) of bidirectional strings, is a convenient way to describe the string handling which is needed and will be used where appropriate. 12.1.2. Normalization, Equivalence, and Confusability Unicode has defined several equivalence relationships among the set of possible strings. Understanding the nature and purpose of these equivalence relations is important to understand the handling of - unicode strings within NFS version 4. + Unicode strings within NFS version 4. - o Some string pairs are thought as only differing in the way accents - and other diacritics are encoded. Such string pairs are called - "canonically equivalent".
For example, the character LATIN SMALL - LETTER E WITH ACUTE (U+00E9) is defined as equivalent to the + Some string pairs are thought of as only differing in the way accents + and other diacritics are encoded, as illustrated in the examples + below. Such string pairs are called "canonically equivalent". + + Such equivalence can occur when there are precomposed characters, + as an alternative to encoding a base character in addition to a + combining accent. For example, the character LATIN SMALL LETTER E + WITH ACUTE (U+00E9) is defined as canonically equivalent to the string consisting of LATIN SMALL LETTER E followed by COMBINING ACUTE ACCENT (U+0065, U+0301). - o Additionally there is an equvalence relation of "compatibility + When multiple combining diacritics are present, differences in the + ordering are not reflected in resulting display and the strings + are defined as canonically equivalent. For example, the string + consisting of LATIN SMALL LETTER Q, COMBINING ACUTE ACCENT, + COMBINING GRAVE ACCENT (U+0071, U+0301, U+0300) is canonically + equivalent to the string consisting of LATIN SMALL LETTER Q, + COMBINING GRAVE ACCENT, COMBINING ACUTE ACCENT (U+0071, U+0300, + U+0301). + + When both situations are present, the number of canonically + equivalent strings can be greater. Thus, the following strings + are all canonically equivalent: + + LATIN SMALL LETTER E, COMBINING MACRON, COMBINING ACUTE + ACCENT (U+0065, U+0304, U+0301) + LATIN SMALL LETTER E, COMBINING ACUTE ACCENT, COMBINING MACRON + (U+0065, U+0301, U+0304) + + LATIN SMALL LETTER E WITH MACRON, COMBINING ACUTE ACCENT + (U+0113, U+0301) + + LATIN SMALL LETTER E WITH ACUTE, COMBINING MACRON (U+00E9, + U+0304) + + LATIN SMALL LETTER E WITH MACRON AND ACUTE (U+1E17) + + Additionally there is an equivalence relation of "compatibility equivalence". Two canonically equivalent strings are necessarily - compatibility equivalent, although not the converse.
An example - of compatibility equivalent strings which are not canonically - equivalent are GREEK CAPITAL LETTER OMEGA (U+03A9) and OHM SIGN - (U+2129). These are identical in appearance while other - compatibility equivalent strings are not. Another example would - be "x2" and the two character string denoting x-squared which are - clearly differnt in appearance although compatibility equivalent - and not canonically equivalent. These have Unicode encodings - LATIN SMALL LETTER X, DIGIT TWO (U+0078, U+0032) and LATIN SMALL - LETTER X, SUPERSCRIPT TWO (U+0078, U+00B2), + compatibility equivalent, although not the converse. An example of + compatibility equivalent strings which are not canonically equivalent + are GREEK CAPITAL LETTER OMEGA (U+03A9) and OHM SIGN (U+2126). These + are identical in appearance while other compatibility equivalent + strings are not. Another example would be "x2" and the two-character + string denoting x-squared which are clearly different in appearance + although compatibility equivalent and not canonically equivalent. + These have Unicode encodings LATIN SMALL LETTER X, DIGIT TWO (U+0078, + U+0032) and LATIN SMALL LETTER X, SUPERSCRIPT TWO (U+0078, U+00B2). One way to deal with these equivalence relations is via - normalization. A normalization form maps all strings to correspond - normalized string in such a fashion that all strings that are - equivalent (canonically or compatibly, depending on the form) are - mapped to the same value. Thus the image of the mapping is a subset - of Unicode strings conceived as the representives of the equivalence - classes defined by the chosed equivalence relation. + normalization. A normalization form maps all strings to a + corresponding normalized string in such a fashion that all strings + that are equivalent (canonically or compatibly, depending on the + form) are mapped to the same value.
Thus the image of the mapping is + a subset of Unicode strings conceived as the representatives of the + equivalence classes defined by the chosen equivalence relation. In the NFS version 4 protocol, handling of issues related to internationalization with regard to normalization follows one of two basic patterns: o For strings whose function is related to other internet standards, such as server and domain naming, the normalization form defined by the appropriate internet standards is used. For server and - domain naming, this involves normalization form NKFC as specified + domain naming, this involves normalization form NFKC as specified in [10]. o For other strings, particularly those passed by the server to file system implementations, normalization requirements are the province of the file system and the job of this specification is not to specify a particular form but to make sure that interoperability is maximized, even when clients and server-based - file systems may have different preferences. + file systems have different preferences. A related but distinct issue concerns string confusability. This can occur when two strings (including single-character strings) have a similar appearance. There have been attempts to define uniform processing in an attempt to avoid such confusion (see stringprep [9]) - but the results have often added to confusion. + but the results have often added confusion. Some examples of possible confusions and proposed processing intended to reduce/avoid confusions: - o Deletion of characters supposed to be invisible and appropriately + o Deletion of characters believed to be invisible and appropriately ignored, justifying their deletion, including WORD JOINER (U+2060), and the ZERO WIDTH SPACE (U+200B). o Deletion of characters supposed to not bear semantics and only affect glyph choice, including the ZERO WIDTH NON-JOINER (U+200C) and the ZERO WIDTH JOINER (U+200D), where the deletion turns out to be a problem for Farsi speakers.
o Prohibition of space characters such as the EM SPACE (U+2003), the EN SPACE (U+2002), and the THIN SPACE (U+2009). @@ -7734,275 +7778,277 @@ o For other strings, particularly those passed by the server to file system implementations, any such preparation requirements including the choice of how, or whether to address the confusability issue, are the responsibility of the file system to define, and for this specification to try to add its own set would add unacceptably to complexity, and make many files accessible locally and by other remote file access protocols, inaccessible by NFS version 4. This specification defines how the protocol maximizes interoperability in the face of different file system - implementations. - - NFS version 4 does allow file systems to map and to reject - characters, including those likely to result in confusion, since - file systems may choose to do such things. It defines what the - client will see in such cases, in order to limit problems that can - arise when a file name is created and it appears to have a - different name from the one it is assigned when the name is - created. + implementations. NFS version 4 does allow file systems to map + and to reject characters, including those likely to result in + confusion, since file systems may choose to do such things. It + defines what the client will see in such cases, in order to limit + problems that can arise when a file name is created and it appears + to have a different name from the one it is assigned when the name + is created. 12.2. String Type Overview 12.2.1.
Overall String Class Divisions - NFS version 4 has to deal with with a large set of diffreent types of + NFS version 4 has to deal with a large set of different types of strings and because of the different role of each, internationalization issues will be different for each: o For some types of strings, the fundamental internationalization- related decisions are the province of the file system or the security-handling functions of the server and the protocol's job is to establish the rules under which file systems and servers are allowed to exercise this freedom, to avoid adding to confusion. o In other cases, the fundamental internationalization issues are the responsibility of other IETF groups and our job is simply to reference those and perhaps make a few choices as to how they are - to be used (e.g. U-labels vs. A-labels). + to be used (e.g., U-labels vs. A-labels). o There are also cases in which a string has a small amount of NFS version 4 processing which results in one or more strings being referred to one of the other categories. We will divide strings to be dealt with into the following classes: MIX indicating that there is a small amount of preparatory processing - that either picks an appropriate modes of internationalization - handling or divides the string into a set of (two) strings with a - different mode internationalization handling for each. The - details are discussed in the section "Types with Pre-processing to - Resolve Mixture Issues". + that either picks an internationalization handling mode or divides + the string into a set of (two) strings with a different mode of + internationalization handling for each. The details are discussed + in the section "Types with Pre-processing to Resolve Mixture + Issues". NIP indicating that, for various reasons, there is no need for internationalization-specific processing to be performed.
The specifics of the various string types handled in this way are described in the section "String Types without Internationalization Processing". INET indicating that the string needs to be processed in a fashion - is goverened by non-NFS-specific internet specifications. The + governed by non-NFS-specific internet specifications. The details are discussed in the section "Types with Processing Defined by Other Internet Areas". - NFS indicating that the string needs to be processed in a fashion is - goverened by NFSv4-specific consideration. The primary focus is + NFS indicating that the string needs to be processed in a fashion + governed by NFSv4-specific considerations. The primary focus is on enabling flexibility for the various file systems to be accessed and is described in the section "String Types with NFS- specific Processing". 12.2.2. Divisions by Typedef Parent types There are a number of different string types within NFS version 4 and internationalization handling will be different for different types of strings. Each of the types will be in one of four groups based on the parent type that specifies the nature of its relationship to utf8 and ascii. - utf8_should/SHOULD: indicating that strings of this type should be + utf8_should/USHOULD: indicating that strings of this type SHOULD be UTF-8 but clients and servers will not check for valid UTF-8 encoding. - utf8val_should/VSHOULD: indicating that strings of this type should + utf8val_should/UVSHOULD: indicating that strings of this type SHOULD be and generally will be in the form of the UTF-8 encoding of Unicode. Strings in most cases will be checked by the server for valid UTF-8 but for certain file systems, such checking may be inhibited. - utf8val_must/VMUST: indicating that strings of this type must be in + utf8val_must/UVMUST: indicating that strings of this type MUST be in the form of the UTF-8 encoding of Unicode.
Strings will be - checked by the server for valid UTF-8 and the server should ensure + checked by the server for valid UTF-8 and the server SHOULD ensure that when sent to the client, they are valid UTF-8. - ascii_must/ASCII: indicating that strings of this type must be pure + ascii_must/ASCII: indicating that strings of this type MUST be pure ASCII, and thus automatically UTF-8. The processing of these strings must ensure that they have only ASCII characters but this need not be a separate step if any normally required check for validity inherently assures that only ASCII characters are present. + In those cases where UTF-8 is not required (USHOULD and UVSHOULD) and + strings that are not valid UTF-8 are received and accepted, the + receiver MUST NOT modify the strings. For example, setting + particular bits such as the high-order bit to zero MUST NOT be done. + 12.2.3. Individual Types and Their Handling The first table outlines the handling for the primary string types, - i.e. those not derived as a prefix or a suffix from a mixture type. + i.e., those not derived as a prefix or a suffix from a mixture type. - +-----------------+---------+-------+-------------------------------+ + +-----------------+----------+-------+------------------------------+ | Type | Parent | Class | Explanation | - +-----------------+---------+-------+-------------------------------+ - | comptag4 | SHOULD | NIP | Should be utf8 but no | + +-----------------+----------+-------+------------------------------+ + | comptag4 | USHOULD | NIP | Should be utf8 but no | | | | | validation by server or | | | | | client is to be done. | - | component4 | VSHOULD | NFS | Should be utf8 but clients | + | component4 | UVSHOULD | NFS | Should be utf8 but clients | | | | | may need to access file | - | | | | systems with a different name | - | | | | structure. files systems with | - | | | | non-utf8 names.
| - | linktext4 | VSHOULD | NFS | Should be utf8 since text may | - | | | | include name components. | - | | | | Because of the need to access | - | | | | existing file systems, this | - | | | | check may be inhibited. | + | | | | systems with a different | + | | | | name structure, such as file | + | | | | systems that have non-utf8 | + | | | | names. | + | linktext4 | UVSHOULD | NFS | Should be utf8 since text | + | | | | may include name components. | + | | | | Because of the need to | + | | | | access existing file | + | | | | systems, this check may be | + | | | | inhibited. | | fattr4_mimetype | ASCII | NIP | All mime types are ascii so | | | | | no specific utf8 processing | | | | | is required, given that you | | | | | are comparing to that list. | - +-----------------+---------+-------+-------------------------------+ + +-----------------+----------+-------+------------------------------+ Table 5 - There are a number of string types that are compound in that they may - consist of multiple conjoined strings with different utf8-related - processing for each. + There are a number of string types that are subject to preliminary + processing. This processing may take the form either of selecting + one of two possible forms based on the string contents or it may + consist of dividing the string into multiple conjoined strings each + with different utf8-related processing. +---------+--------+-------+----------------------------------------+ | Type | Parent | Class | Explanation | +---------+--------+-------+----------------------------------------+ - | prin4 | VMUST | MIX | Consists of two parts separated by an | + | prin4 | UVMUST | MIX | Consists of two parts separated by an | | | | | at-sign, a prinpfx4 and a prinsfx4. | | | | | These are described in the next table.
| - | server4 | VMUST | MIX | Is either an IP address (serveraddr4) | + | server4 | UVMUST | MIX | Is either an IP address (serveraddr4) | | | | | which has to be pure ascii or a server | | | | | name svrname4, which is described | | | | | immediately below. | +---------+--------+-------+----------------------------------------+ - Table 6 The last table describes the components of the compound types described above. - +----------+-------+------+-----------------------------------------+ + +----------+--------+------+----------------------------------------+ | Type | Class | Def | Explanation | - +----------+-------+------+-----------------------------------------+ + +----------+--------+------+----------------------------------------+ | svraddr4 | ASCII | NIP | Server as IP address, whether IPv4 or | - | | | | IPv6, | - | svrname4 | VMUST | INET | Server name as returned by server. Not | - | | | | sent by client, except in | + | | | | IPv6. | + | svrname4 | UVMUST | INET | Server name as returned by server. | + | | | | Not sent by client, except in | | | | | VERIFY/NVERIFY. | - | prinsfx4 | VMUST | INET | Suffix part of principal, in the form | + | prinsfx4 | UVMUST | INET | Suffix part of principal, in the form | | | | | of a domain name. | - | prinpfx4 | VMUST | NFS | Must match one of a list of valid users | - | | | | or groups for that particular domain. | - +----------+-------+------+-----------------------------------------+ + | prinpfx4 | UVMUST | NFS | Must match one of a list of valid | + | | | | users or groups for that particular | + | | | | domain. | + +----------+--------+------+----------------------------------------+ Table 7 12.3. Errors Related to Strings When the client sends an invalid UTF-8 string in a context in which - UTF-8 is required, the server MUST return an NFS4ERR_INVAL error. - When the client sends an invalid UTF-8 string in a context in which - UTF-8 is recommended, the server SHOULD return an NFS4ERR_INVAL - error. 
These situations apply to cases in which inappropriate - prefixes are detected and where the count includes trailing bytes - that do not constitute a full UCS character. + UTF-8 is REQUIRED, the server MUST return an NFS4ERR_INVAL error. + Within the framework of the previous section, this applies to strings + whose type is defined as utf8val_must or ascii_must. When the client + sends an invalid UTF-8 string in a context in which UTF-8 is + RECOMMENDED and the server should test for UTF-8, the server SHOULD + return an NFS4ERR_INVAL error. Within the framework of the previous + section, this applies to strings whose type is defined as + utf8val_should. These situations apply to cases in which + inappropriate prefixes are detected and where the count includes + trailing bytes that do not constitute a full UCS character. - Where the client supplied string is valid UTF-8 but contains + Where the client-supplied string is valid UTF-8 but contains characters that are not supported by the server file system as a value for that string (e.g., names containing characters that have more than two octets on a file system that supports UCS-2 characters only, file name components containing slashes on file systems that do - not allow them in filename file name components), the server should - MUST return an NFS4ERR_BADCHAR error. + not allow them in file name components), the server MUST return an + NFS4ERR_BADCHAR error. Where a UTF-8 string is used as a file name component, and the file system, while supporting all of the characters within the name, does not allow that particular name to be used, the server should return the error NFS4ERR_BADNAME. This includes file system prohibitions of "." and ".." as file names for certain operations, and other such similar constraints. It does not include use of strings with non- preferred normalization modes. 
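The discrimination among NFS4ERR_INVAL, NFS4ERR_BADCHAR, and NFS4ERR_BADNAME described above can be sketched as follows. This is an illustrative model only, not implementation text from the specification; the numeric codes mirror the NFSv4 error values, while the prohibited-character and reserved-name sets are assumptions chosen for the example:

```python
# Illustrative model of the server-side checks described above.
# The prohibited-character and reserved-name sets are assumptions.
NFS4_OK = 0
NFS4ERR_INVAL = 22
NFS4ERR_BADCHAR = 10040
NFS4ERR_BADNAME = 10041

PROHIBITED_CHARS = {"/"}      # e.g., a file system that forbids slashes
RESERVED_NAMES = {".", ".."}  # names disallowed for certain operations

def check_component(raw: bytes) -> int:
    try:
        name = raw.decode("utf-8")
    except UnicodeDecodeError:
        return NFS4ERR_INVAL        # not valid UTF-8
    if any(ch in PROHIBITED_CHARS for ch in name):
        return NFS4ERR_BADCHAR      # valid UTF-8, unsupported character
    if name in RESERVED_NAMES:
        return NFS4ERR_BADNAME      # characters fine, name not allowed
    return NFS4_OK
```

Note the ordering: UTF-8 validity is decided before any per-character or whole-name test, matching the error definitions above.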
   Where a UTF-8 string is used as a file name component, the file
   system implementation MUST NOT return NFS4ERR_BADNAME, simply due to
-  a normalization mismatch.  In such cases the implementation MAY
+  a normalization mismatch.  In such cases the implementation SHOULD
   convert the string to its own preferred normalization mode before
   performing the operation.  As a result, a client cannot assume that
   a file created with a name it specifies will have that name when the
   directory is read.  It may instead have the name converted to the
   file system's preferred normalization form.

-  Where a UTF-8 string is used as other than a file name component and
-  the string does not meet the normalization requirements specified for
-  it, the error NFS4ERR_INVAL is returned.
+  Where a UTF-8 string is used other than as a file name component (or
+  as symbolic link text) and the string does not meet the normalization
+  requirements specified for it, the error NFS4ERR_INVAL is returned.

12.4.  Types with Pre-processing to Resolve Mixture Issues

12.4.1.  Processing of Principal Strings

   Strings denoting principals (users or groups) MUST be UTF-8 but since
   they consist of a principal prefix, an at-sign, and a domain, all
   three of which either are checked for being UTF-8, or inherently are
   UTF-8, checking the string as a whole for being UTF-8 is not
   required.  Although a server implementation may choose to make this
   check on the string as a whole, for example in converting it to
   Unicode, the description within this document will reflect a
   processing model in which such checking happens after the division
   into a principal prefix and suffix, the latter being in the form of a
   domain name.

   The string should be scanned for at-signs.  If there is more than one
   at-sign, the string is considered invalid.
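The at-sign scan just described can be sketched as follows; `split_principal` is a hypothetical helper, not part of the protocol, and the special cases are simply returned to the caller for the handling described in "Interpreting owner and owner_group":

```python
def split_principal(s: str):
    """Divide a principal string into (prinpfx4, prinsfx4).

    Returns None for the special cases (no at-sign, or an at-sign
    at the start or end of the string), which receive the handling
    described in "Interpreting owner and owner_group".
    """
    if s.count("@") > 1:
        # More than one at-sign: the string is considered invalid.
        raise ValueError("invalid principal: more than one at-sign")
    if "@" not in s or s.startswith("@") or s.endswith("@"):
        return None
    prefix, _, suffix = s.partition("@")
    return prefix, suffix
```

Each part would then be checked separately: the suffix as a domain name, the prefix against the valid users or groups for that domain.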
   For cases in which there
-  are no at-signs or the at-sign appears at the start of end of the
-  string see Interpreting owner and owner_group  Otherwise, the portion
-  before the at-sign is dealt with as a prinpfx4 and the portion after
-  is dealt with as a prinsfx4.
+  are no at-signs or the at-sign appears at the start or end of the
+  string, see "Interpreting owner and owner_group".  Otherwise, the
+  portion before the at-sign is dealt with as a prinpfx4 and the
+  portion after is dealt with as a prinsfx4.

12.4.2.  Processing of Server Id Strings

   Server id strings typically appear in responses (as attribute values)
-  and only appear in requests as attribute value presented to VERIFY
+  and only appear in requests as an attribute value presented to VERIFY
   and NVERIFY.  With that exception, they are not subject to server
   validation and possible rejection.  It is not expected that clients
   will typically do such validation on receipt of responses but they
   may do so as a way to check for proper server behavior.  The
   responsibility for sending correct UTF-8 strings is with the server.

-  Servers are identified by either server names of IP addresses.  Once
+  Servers are identified by either server names or IP addresses.  Once
   an id has been identified as an IP address, then there is no
   processing specific to internationalization to be done, since such an
   address must be ASCII to be valid.

-  Identifiers which are not valid IP addresses are treated as server
-  names for which see below.  There are fifteen top-level domains that
-  consist of two characters, each within the range a-f.  Given that, it
-  is possible to have a string such as bb.bb.bb.bb, which might be
-  either an IP address or a server name.  It is recommended that in
-  such cases, a check for a valid server name be done first and the
-  string interpreted as an IP address only if it found that the string
-  is not a server name.
-
   12.5.
String Types without Internationalization Processing

   There are a number of types of strings which, for a number of
   different reasons, do not require any internationalization-specific
-  handling, such as valdiation of UTF-8, normaliztion, or character
+  handling, such as validation of UTF-8, normalization, or character
   mapping or checking.  This does not necessarily mean that the strings
   need not be UTF-8.  In some cases, other checking on the string
   ensures that they are valid UTF-8, without doing any checking
   specific to internationalization.  The following are the specific
   types:

   comptag4 strings are an aid to debugging and the sender should avoid
      confusion by not using anything but valid UTF-8.  But any work
-     validating the string or modifying it would just add complication
+     validating the string or modifying it would only add complication
      to a mechanism whose basic function is best supported by making it
      not subject to any checking and having data maximally available to
      be looked at in a network trace.

   fattr4_mimetype strings need to be validated by matching against a
      list of valid mime types.  Since these are all ASCII, no
      processing specific to internationalization is required since
      anything that does not match is invalid and anything which does
      not obey the rules of UTF-8 will not be ASCII and consequently
      will not match, and will be invalid.

@@ -8022,83 +8068,85 @@

   o  Server names as they appear in the fs_locations attribute.  Note
      that for most purposes, such server names will only be sent by the
      server to the client.  The exception is use of the fs_locations
      attribute in a VERIFY or NVERIFY operation.

   o  Principal suffixes which are used to denote sets of users and
      groups, and are in the form of domain names.

   The general rules for handling all of these domain-related strings
-  are similar and independent of role of the sender or receiver as
-  client or sender, although the consequences of failure to obey these
-  rules may be different for client or server.
+  are similar and independent of the role of the sender or receiver as
+  client or server, although the consequences of failure to obey these
+  rules may be different for client or server.  The server can report
+  errors when it is sent invalid strings, whereas the client will
+  simply ignore invalid strings or use a default value in their place.

   The string sent SHOULD be in the form of a U-label although it MAY be
   in the form of an A-label or a UTF-8 string that would not map to
   itself when canonicalized by applying ToUnicode(ToASCII(...)).  The
   receiver needs to be able to accept domain and server names in any of
   the formats allowed.  The server MUST reject, using the error
   NFS4ERR_INVAL, a string which is not valid UTF-8 or which begins with
   "xn--" and violates the rules for a valid A-label.

   When a domain string is part of id@domain or group@domain, the server
   SHOULD map domain strings which are A-labels or are UTF-8 domain
   names which are not U-labels, to the corresponding U-label, using
   ToUnicode(domain) or ToUnicode(ToASCII(domain)).  As a result, the
   domain name returned within a userid on a GETATTR may not match that
   sent when the userid is set using SETATTR, although when this
   happens, the domain will be in the form of a U-label.  When the
   server does not map domain strings which are not U-labels into a
   U-label, which it MAY do, it MUST NOT modify the domain and the
   domain returned on a GETATTR of the userid MUST be the same as that
-  using when setting the userid by the SETATTTR.
+  used when setting the userid by SETATTR.

   The server MAY implement VERIFY and NVERIFY without translating
   internal state to a string form, so that, for example, a user
   principal which represents a specific numeric user id, will match a
   different principal string which represents the same numeric user id.

12.7.
String Types with NFS-specific Processing

   For a number of data types within NFSv4, the primary responsibility
   for internationalization-related handling is that of some entity
   other than the server itself (see below for details).  In these
   situations, the primary responsibility of NFS version 4 is to provide
   a framework in which that other entity (file system and server
-  operating system principal naming framework) to implement its own
+  operating system principal naming framework) implements its own
   decisions while establishing rules to limit interoperability issues.
   This pattern applies to the following data types:

   o  In the case of name components (strings of type component4), the
      server-side file system implementation (of which there may be more
      than one for a particular server) deals with internationalization
      issues, in a fashion that is appropriate to NFS version 4, other
      remote file access protocols, and local file access methods.  See
-     "Handling of File Came Components" for the detailed treatment.
+     "Handling of File Name Components" for the detailed treatment.

   o  In the case of link text strings (strings of type linktext4), the
      issues are similar, but file systems are restricted in the set of
      acceptable internationalization-related processing that they may
      do, principally because symbolic links may contain name components
      that, when used, are presented to other file systems and/or other
      servers.  See "Processing of Link Text" for the detailed
      treatment.

   o  In the case of principal prefix strings, any decisions regarding
      internationalization are the responsibility of the server
      operating system, which may make its own rules regarding user and
      group name encoding.  See "Processing of Principal Prefixes" for
      the detailed treatment.

-12.7.1.  Handling of File Came Components

+12.7.1.
Handling of File Name Components There are a number of places within client and server where file name components are processed: o On the client, file names may be processed as part of forming NFS version 4 requests. Any such processing will reflect specific needs of the client's environment and will be treated as out-of- scope from the viewpoint of this specification. o On the server, file names are processed as part of processing NFS @@ -8164,44 +8212,44 @@ o One alternate character repertoire is to represent file name components as strings of bytes with no protocol-defined encoding of multi-byte characters. Most typically, implementations that support this single-byte alternative will make it available as an option set by an administrator for all file systems within a server or for some particular file systems. If a server accepts non-UTF-8 strings anywhere within a specific file system, then it MUST do so throughout the entire file system. - o Another alternate character repertoires is the set of codepoints, + o Another alternate character repertoire is the set of codepoints, representable by the file system, most typically UCS-4. Individual file system implementations may have more restricted character repertoires, as for example file system that only are capable of storing names consisting of UCS-2 characters. When this is the case, and the character repertoire is not restricted to single-byte characters, characters not within that repertoire are treated as prohibited and the error NFS4ERR_BADCHAR is returned by the server when that character is encountered. Strings are intended to be in UTF-8 format and servers SHOULD return NFS4ERR_INVAL, as discussed above, when the characters sent are not valid UTF-8. When the character repertoire consists of single-byte characters, UTF-8 is not enforced. 
Such situations should be restricted to those where use is within a restricted environment where a single character mapping locale can be administratively - enforced, allowing a file name to be treated as string of bytes, + enforced, allowing a file name to be treated as a string of bytes, rather than as a string of characters. Such an arrangement might be necessary when NFS version 4 access to a file system containing names which are not valid UTF-8 needs to be provided. However, in any of the following situations, file names have to be - treated as strings of characters and servers MUST return + treated as strings of Unicode characters and servers MUST return NFS4ERR_INVAL when file names that are not in UTF-8 format: o Case-insensitive comparisons are specified by the file system and any characters sent contain non-ASCII byte codes. o Any normalization constraints are enforced by the server or file system implementation. o The server accepts a given name when creating a file and reports a different one when the directory is being examined. @@ -8214,24 +8262,24 @@ non-UTF-8 string, if NFS4ERR_INVAL is not returned, then name components will be treated as opaque and those sorts of modifications will not be seen. 12.7.1.3. Case-based Mapping Used for Component4 Strings Case-based mapping is not always a required part of server processing of name components. However, if the NFS version 4 file server supports the case_insensitive file system attribute, and if the case_insensitive attribute is true for a given file system, the NFS - version 4 server must use the Unicode case mapping tables for the + version 4 server MUST use the Unicode case mapping tables for the version of Unicode corresponding to the character repertoire. In the case where the character repertoire is UCS-2 or UCS-4, the case - mapping tables from the latest available version of Unicode should be + mapping tables from the latest available version of Unicode SHOULD be used. 
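As an illustration (not a statement of the mandated mapping tables), case-insensitive comparison using full Unicode case mapping can be approximated with Python's `str.casefold()`, which maps U+00DF (LATIN SMALL LETTER SHARP S) to "ss":

```python
def component_eq_case_insensitive(a: str, b: str) -> bool:
    # Full Unicode case folding; U+00DF (LATIN SMALL LETTER SHARP S)
    # folds to "ss", so "Strasse" and a name containing sharp s
    # compare equal under this mapping.
    return a.casefold() == b.casefold()
```

This is the kind of behavior clients are warned about in the SHARP S discussion: a name containing SHARP S may be treated as matching, or stored as, the corresponding "ss" form.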
   If the case_preserving attribute is present and set to false, then
   the NFS version 4 server MUST use the corresponding Unicode case
   mapping table to map case when processing component4 strings.
   Whether the server maps from lower to upper case or from upper to
   lower case is a matter for implementation choice.

   Stringprep Table B.2 should not be used for this purpose since it is
   limited to Unicode version 3.2 and also because it erroneously maps

@@ -8240,143 +8288,257 @@

   (SMALL LETTER SHARP S and CAPITAL LETTER SHARP S).  Clients should be
   aware that servers may have mapped SMALL LETTER SHARP S to the string
   "ss" when case-insensitive mapping is in effect, with the result that
   a file whose name contains SMALL LETTER SHARP S may have that
   character replaced by "ss" or "SS".

12.7.1.4.  Other Mapping Used for Component4 Strings

   Other than for issues of case mapping, an NFS version 4 server SHOULD
-  limit visible (i.e. those that change the name of file to reflect
+  limit visible mappings (i.e., those that change the name of a file)
   to those from a subset of the stringprep table B.1.  Note
   particularly, the mappings from U+200C and U+200D to the empty string
   should be avoided, due to their undesirable effect on some strings in
   Farsi.  Table B.1 may be used but it should be used only if required
   by the local file system implementation.

   For example, if the file system in question accepts file names
   containing the MONGOLIAN TODO SOFT HYPHEN character (U+1806) and they
   are distinct from the corresponding file names with this character
   removed, then using Table B.1 will cause functional problems when
   clients attempt to interact with that file system.  The NFS version 4
   server implementation, including the file system, MUST NOT silently
   remove characters not within Table B.1.
   If an implementation wishes to eliminate other characters because it
   is believed that allowing component name versions that both include
-  the character and not have while otherwise the same, will contribute
-  to confusion, it has two options:
+  the character and omit it, while being otherwise the same, will
+  contribute to confusion, it has two options:

   o  Treat the characters as prohibited and return NFS4ERR_BADCHAR.

   o  Eliminate the character as part of the name matching processing,
      while retaining it when a file is created.  This would be
      analogous to file systems that are both case-insensitive and
      case-preserving, as discussed above, or those which are both
      normalization-insensitive and normalization-preserving, as
-     discussed below.  The handling will be insensitive to presence of
-     the chosen characters while preserving the presence or absence of
-     such chatacters within names.
+     discussed below.  The handling will be insensitive to the presence
+     of the chosen characters while preserving the presence or absence
+     of such characters within names.

   Note that the second of these choices is a desirable way to handle
   characters within table B.1, again with the exception of U+200C and
   U+200D, which can cause issues for Farsi.

   In addition to modification due to normalization, discussed below,
   clients have to be able to deal with name modifications and other
   consequences of character mapping on the server, as discussed above.

12.7.1.5.  Normalization Issues for Component Strings

   The issues are best discussed separately for the server and the
   client.  It is important to note that the server and client may have
   different approaches to this area, and that the server choice may not
-  match the client operating environment so the issue of mismatches and
-  how they will be dealt with by the client is discussed in a later
+  match the client operating environment.  The issue of mismatches and
+  how they may be best dealt with by the client is discussed in a later
   section.

12.7.1.5.1.
Server Normalization Issues for Component Strings The NFS version 4 does not specify required use of a particular normalization form for component4 strings. Therefore, the server may receive unnormalized strings or strings that reflect either normalization form within protocol requests and responses. If the - operating environment requires normalization, then the server - implementation must normalize component4 strings within the protocol - server before presenting the information to the local file system. + file system requires normalization, then the server implementation + must normalize component4 strings within the protocol server before + presenting the information to the local file system. With regard to normalization, servers have the following choices, with the possibility that different choices may be selected for different file systems. o Implement a particular normalization form, either NFC, or NFD, in which case file names received from a client are converted to that normalization form and as a consequence, the client will always receive names in that normalization form. If this option is chosen, then it is impossible to create two files in the same directory that have different names which map to the same name when normalized. o Implement handling which is both normalization-insensitive and normalization-preserving. This makes it impossible to create two files in the same directory that have two different canonically - equivalent name, i.e. names which map to the same name when + equivalent names, i.e., names which map to the same name when normalized. However, unlike the previous option, clients will not have the names that they present modified to meet the server's normalization constraints. o Implement normalization-sensitive handling without enforcing a normalization form constraint on file names. 
This exposes the client to the possibility that two files can be created in the same directory which have different names which map to the same - name when normalized. This may be a significant issue when client - which use different normalization forms are used on the same file - system, but this issue needs to be set against the difficulty of - providing other sorts of normalization handling for some existing - file systems. + name when normalized. This may be a significant issue when + clients which use different normalization forms are used on the + same file system, but this issue needs to be set against the + difficulty of providing other sorts of normalization handling for + some existing file systems. 12.7.1.5.2. Client Normalization Issues for Component Strings The client, in processing name components, needs to deal with the fact that the server may impose normalization on file name components presented to it. As a result, a file can be created within a - directory and that name may have different name due to normalization - at the server. + directory and that name be different from that sent by the client due + to normalization at the server. Client operating environments differ in their handling of canonically - equivalent name. Some environments treat canonically equivalent + equivalent names. Some environments treat canonically equivalent strings as essentially equal and we will call these environments normalization-aware. Others, because of the pattern of their development with regard to these issues treat different strings as different, even if they are canonically equivalent. We call these normalization-unaware. + We discuss below issues that may arise when each of these types of + environments interact with the various types of file systems, with + regard to normalization handling. 
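As a concrete sketch of the second server option described above (normalization-insensitive, normalization-preserving name handling), the following Python fragment uses NFC-normalized keys for matching while storing names exactly as created; all names here are hypothetical, and this is an illustration of the behavior, not implementation text from the specification:

```python
import unicodedata

class Directory:
    """Sketch of normalization-insensitive, normalization-preserving
    name handling: names are matched by their NFC form but stored,
    and returned by readdir, exactly as the client sent them."""

    def __init__(self):
        self._by_key = {}  # NFC key -> (name as created, object)

    def create(self, name, obj):
        key = unicodedata.normalize("NFC", name)
        if key in self._by_key:
            # A canonically equivalent name already exists.
            raise FileExistsError(self._by_key[key][0])
        self._by_key[key] = (name, obj)

    def lookup(self, name):
        key = unicodedata.normalize("NFC", name)
        return self._by_key[key][1]

    def readdir(self):
        # Normalization-preserving: names come back as created.
        return [stored for stored, _ in self._by_key.values()]
```

Under this model it is impossible to create two files in the same directory whose names are canonically equivalent, yet the client's chosen form is never rewritten.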
   Note that complexity for the
+  client is increased given that there are no file system attributes to
+  determine the normalization handling present for that file system.
+  Where the client has the ability to create files (file system not
+  read-only and security allows it), attempting to create multiple
+  files with canonically equivalent names and looking at success
+  patterns and the names assigned by the server to these files can
+  serve as a way to determine the relevant information.
+
   Normalization-aware environments interoperate most normally with
   servers that either impose a given normalization form or those that
   implement name handling which is both normalization-insensitive and
   normalization-preserving.  However, clients need to be prepared to
   interoperate with servers that have normalization-sensitive file
   naming.  In this situation, the client needs to be prepared for the
   fact that a directory may contain multiple names that it considers
   equivalent.

+  The following suggestions may be helpful in handling interoperability
+  issues for normalization-aware client environments, when they
+  interact with normalization-sensitive file systems.
+
+     When READDIR is done, the names returned may include names that do
+     not match the client's normalization form, but instead are other
+     names canonically equivalent to the normalized name.
+
+     When it can be determined that a normalization-insensitive server
+     file system is not involved, the client can simply normalize file
+     name component strings to its preferred normalization form.
+
+     When it cannot be determined that a normalization-insensitive
+     server file system is not involved, the client is generally best
+     advised to process incoming name components so as to allow all
+     name components in a canonical equivalence class to be together.
+     When only a single member of the class exists, it should generally
+     be mapped directly to the preferred normalization form, whether
+     the name was of that form or not.
+
+     When the client sees multiple names that are canonically
+     equivalent, it is clear that the file system is normalization-
+     sensitive.  Clients should generally replace each canonically
+     equivalent name with one that appends some distinguishing suffix,
+     usually including a number.  The numbers should be assigned so
+     that each distinct possible name within the set of canonically
+     equivalent names has an assigned numeric value.  Note that for
+     some cases in which there are multiple instances of strings that
+     might be composed or decomposed and/or situations with multiple
+     diacritics to be applied to the same character, the class might
+     be large.
+
+     When interacting with a normalization-sensitive file system, it
+     may be that the environment contains clients or implementations
+     local to the OS in which the file system is embedded, which use a
+     different normalization form.  In such situations, a LOOKUP may
+     well fail, even though the directory contains a name canonically
+     equivalent to the name sought.  One solution to this problem is
+     to re-do the LOOKUP in that situation with the name converted to
+     the alternate normalization form.
+
+     In the case in which normalization-unaware clients are involved
+     in the mix, LOOKUP can fail and then the second LOOKUP, described
+     above, can also fail, even though there may well be a canonically
+     equivalent name in the directory.  One possible approach in that
+     case is to use a READDIR to find the equivalent name and look
+     that name up, although this can greatly add to client
+     implementation complexity.
+
+     When interacting with a normalization-sensitive file system, the
+     situation where the environment contains clients or
+     implementations local to the OS in which the file system is
+     embedded, which use a different normalization form, can also
+     cause issues when a file (or symlink or directory, etc.) is being
+     created.
In such cases, it may be possible to create an object of
+     the specified name even though the directory contains a
+     canonically equivalent name.  Similar issues can occur with LINK
+     and RENAME.  There is little the client can do about such
+     situations, except be aware that they may occur.  That is one of
+     the reasons normalization-sensitive server file system
+     implementations can be problematic to use when
+     internationalization issues are important.
+
   Normalization-unaware environments interoperate most normally with
   servers that implement normalization-sensitive file naming.  However,
   clients need to be prepared to interoperate with servers that impose
   a given normalization form or that implement name handling which is
   both normalization-insensitive and normalization-preserving.  In the
   former case, a file created with a given name may find it changed to
   a different (although related) name.  In both cases, the client will
   have to deal with the fact that it is unable to create two names
   within a directory that are canonically equivalent.

+  Note that although the client implementation itself and the kernel
+  implementation may be normalization-unaware, treating name components
+  as strings not subject to normalization, the environment as a whole
+  may be normalization-aware if commonly used libraries result in an
+  application environment where a single normalization form is used
+  throughout.  Because of this, normalization-unaware environments may
+  be relatively rare.
+
+  The following suggestions may be helpful in handling interoperability
+  issues for truly normalization-unaware client environments, when they
+  interact with file systems other than those which are normalization-
+  sensitive.  The issues tend to be the inverse of those for
+  normalization-aware environments.  The implementer should be careful
+  not to erroneously treat the environment as normalization-unaware,
+  based solely on the details of the kernel implementation.
+
+     Unless the file system is normalization-preserving, when files
+     (or other objects) are created, the object name as reported by a
+     READDIR of the associated directory may show a name different
+     from the one used to create the object.  This behavior is
+     something that the client has to accept.  Since it has no
+     preferred normalization form, it has no way of converting the
+     name to a preferred form.
+
+     In situations where there is an attempt to create multiple
+     objects in the same directory which have canonically equivalent
+     names, these file systems will either report that an object of
+     that name already exists or simply open a file of that other
+     name.
+
+     If it is desired to have those two objects in the same directory,
+     the names must be made not canonically equivalent.  It is
+     possible to append some distinguishing character to the name of
+     the second object but in clients having a typical file API (such
+     as POSIX), the fact that the name change occurred cannot be
+     propagated back to the requester.
+
+     In cases where a client is application-specific, it may be
+     possible for it to deal with such a collision by modifying the
+     name and taking note of the changed name.
+
12.7.1.6.  Prohibited Characters for Component Names

   The NFS version 4 protocol does not specify particular characters
   that may not appear in component names.  File systems may have their
   own set of prohibited characters for which the error NFS4ERR_BADCHAR
   should be returned by the server.  Clients need to be prepared for
   this error to occur whenever file name components are presented to
   the server.

   Clients whose character repertoire for acceptable characters in file

@@ -8410,28 +8572,28 @@

   returning NFS4ERR_BADNAME.  Clients may encounter names with
   bidirectional strings returned in responses from the server.
   If clients treat such strings as not
   valid file name components, it is up to the client whether it simply
   ignores these files or modifies the name component to meet its own
   rules for acceptable name component strings.

12.7.2.  Processing of Link Text

-  Symbolic link text is defined as utf8_should and therefore the server
-  SHOULD validate link text on a CREATE and return NFS4ERR_INVAL if it
-  is is not valid UTF-8.  Note that file systems which treat names as
-  strings of byte are an exception for which such validation need not
-  be done.  One other situation in which an NFS version 4 might choose
-  (or be configured) not to make such a check is when links within file
-  system reference names in another which is configured to treat names
-  as strings of bytes.
+  Symbolic link text is defined as utf8val_should and therefore the
+  server SHOULD validate link text on a CREATE and return NFS4ERR_INVAL
+  if it is not valid UTF-8.  Note that file systems which treat names
+  as strings of bytes are an exception for which such validation need
+  not be done.  One other situation in which an NFS version 4 server
+  might choose (or be configured) not to make such a check is when
+  links within one file system reference names in another which is
+  configured to treat names as strings of bytes.

   On the other hand, UTF-8 validation of symbolic link text need not be
   done on the data resulting from a READLINK.  Such data might have
   been stored by an NFS Version 4 server configured to allow non-UTF-8
   link text or it might have resulted from symbolic link text stored
   via local file system access or access via another remote file access
   protocol.

   Note that because of the role of the symbolic link, as data stored
   and read by the user, other sorts of validations or modifications

@@ -8684,21 +8846,21 @@

   present at the server.  It may have been relocated, migrated to
   another server, or may never have been present.
The client may obtain the new file system location by obtaining the "fs_locations" attribute for the current filehandle. For further discussion, refer to Section 7. 13.1.2.5. NFS4ERR_NOFILEHANDLE (Error Code 10020) The logical current or saved filehandle value is required by the current operation and is not set. This may be a result of a - malformed COMPOUND operation (i.e. no PUTFH or PUTROOTFH before an + malformed COMPOUND operation (i.e., no PUTFH or PUTROOTFH before an operation that requires the current filehandle be set). 13.1.2.6. NFS4ERR_NOTDIR (Error Code 20) The current (or saved) filehandle designates an object which is not a directory for an operation in which a directory is required. 13.1.2.7. NFS4ERR_STALE (Error Code 70) The current or saved filehandle value designating an argument to the
NFS4ERR_GRACE (Error Code 10013) The server is in its recovery or grace period which should at least @@ -9242,33 +9404,34 @@ | | NFS4ERR_DQUOT, NFS4ERR_FHEXPIRED, | | | NFS4ERR_IO, NFS4ERR_MOVED, NFS4ERR_NOENT, | | | NFS4ERR_NOFILEHANDLE, NFS4ERR_NOSPC, | | | NFS4ERR_NOTSUPP, NFS4ERR_RESOURCE, | | | NFS4ERR_ROFS, NFS4ERR_SERVERFAULT, | | | NFS4ERR_STALE | | OPEN_CONFIRM | NFS4ERR_ADMIN_REVOKED, NFS4ERR_BADHANDLE, | | | NFS4ERR_BAD_SEQID, NFS4ERR_BAD_STATEID, | | | NFS4ERR_BADXDR, NFS4ERR_EXPIRED, | | | NFS4ERR_FHEXPIRED, NFS4ERR_INVAL, | - | | NFS4ERR_ISDIR, NFS4ERR_MOVED, | - | | NFS4ERR_NOFILEHANDLE, NFS4ERR_OLD_STATEID, | - | | NFS4ERR_RESOURCE, NFS4ERR_SERVERFAULT, | - | | NFS4ERR_STALE, NFS4ERR_STALE_STATEID | + | | NFS4ERR_ISDIR, NFS4ERR_LEASE_MOVED, | + | | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE, | + | | NFS4ERR_OLD_STATEID, NFS4ERR_RESOURCE, | + | | NFS4ERR_SERVERFAULT, NFS4ERR_STALE, | + | | NFS4ERR_STALE_STATEID | | OPEN_DOWNGRADE | NFS4ERR_ADMIN_REVOKED, NFS4ERR_BADHANDLE, | | | NFS4ERR_BADXDR, NFS4ERR_BAD_SEQID, | | | NFS4ERR_BAD_STATEID, NFS4ERR_DELAY, | | | NFS4ERR_EXPIRED, NFS4ERR_FHEXPIRED, | - | | NFS4ERR_INVAL, NFS4ERR_MOVED, | - | | NFS4ERR_NOFILEHANDLE, NFS4ERR_OLD_STATEID, | - | | NFS4ERR_RESOURCE, NFS4ERR_ROFS, | - | | NFS4ERR_SERVERFAULT, NFS4ERR_STALE, | - | | NFS4ERR_STALE_STATEID | + | | NFS4ERR_INVAL, NFS4ERR_LEASE_MOVED, | + | | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE, | + | | NFS4ERR_OLD_STATEID, NFS4ERR_RESOURCE, | + | | NFS4ERR_ROFS, NFS4ERR_SERVERFAULT, | + | | NFS4ERR_STALE, NFS4ERR_STALE_STATEID | | PUTFH | NFS4ERR_BADHANDLE, NFS4ERR_BADXDR, | | | NFS4ERR_DELAY, NFS4ERR_FHEXPIRED, | | | NFS4ERR_MOVED, NFS4ERR_SERVERFAULT, | | | NFS4ERR_STALE, NFS4ERR_WRONGSEC | | PUTPUBFH | NFS4ERR_DELAY, NFS4ERR_SERVERFAULT, | | | NFS4ERR_WRONGSEC | | PUTROOTFH | NFS4ERR_DELAY, NFS4ERR_SERVERFAULT, | | | NFS4ERR_WRONGSEC | | READ | NFS4ERR_ACCESS, NFS4ERR_ADMIN_REVOKED, | | | NFS4ERR_BADHANDLE, NFS4ERR_BADXDR, | @@ -9344,32 +9507,33 @@ | | 
NFS4ERR_NOTDIR, NFS4ERR_RESOURCE, | | | NFS4ERR_SERVERFAULT, NFS4ERR_STALE | | SETATTR | NFS4ERR_ACCESS, NFS4ERR_ADMIN_REVOKED, | | | NFS4ERR_ATTRNOTSUPP, NFS4ERR_BADCHAR, | | | NFS4ERR_BADHANDLE, NFS4ERR_BADOWNER, | | | NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, | | | NFS4ERR_DELAY, NFS4ERR_DQUOT, | | | NFS4ERR_EXPIRED, NFS4ERR_FBIG, | | | NFS4ERR_FHEXPIRED, NFS4ERR_GRACE, | | | NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_ISDIR, | - | | NFS4ERR_LOCKED, NFS4ERR_MOVED, | - | | NFS4ERR_NOFILEHANDLE, NFS4ERR_NOSPC, | - | | NFS4ERR_OLD_STATEID, NFS4ERR_OPENMODE, | - | | NFS4ERR_PERM, NFS4ERR_RESOURCE, | - | | NFS4ERR_ROFS, NFS4ERR_SERVERFAULT, | - | | NFS4ERR_STALE, NFS4ERR_STALE_STATEID | + | | NFS4ERR_LEASE_MOVED, NFS4ERR_LOCKED, | + | | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE, | + | | NFS4ERR_NOSPC, NFS4ERR_OLD_STATEID, | + | | NFS4ERR_OPENMODE, NFS4ERR_PERM, | + | | NFS4ERR_RESOURCE, NFS4ERR_ROFS, | + | | NFS4ERR_SERVERFAULT, NFS4ERR_STALE, | + | | NFS4ERR_STALE_STATEID | | SETCLIENTID | NFS4ERR_BADXDR, NFS4ERR_CLID_INUSE, | - | | NFS4ERR_INVAL, NFS4ERR_RESOURCE, | - | | NFS4ERR_SERVERFAULT | + | | NFS4ERR_DELAY, NFS4ERR_INVAL, | + | | NFS4ERR_RESOURCE, NFS4ERR_SERVERFAULT | | SETCLIENTID_CONFIRM | NFS4ERR_BADXDR, NFS4ERR_CLID_INUSE, | - | | NFS4ERR_RESOURCE, NFS4ERR_SERVERFAULT, | - | | NFS4ERR_STALE_CLIENTID | + | | NFS4ERR_DELAY, NFS4ERR_RESOURCE, | + | | NFS4ERR_SERVERFAULT, NFS4ERR_STALE_CLIENTID | | VERIFY | NFS4ERR_ACCESS, NFS4ERR_ATTRNOTSUPP, | | | NFS4ERR_BADCHAR, NFS4ERR_BADHANDLE, | | | NFS4ERR_BADXDR, NFS4ERR_DELAY, | | | NFS4ERR_FHEXPIRED, NFS4ERR_GRACE, | | | NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_MOVED, | | | NFS4ERR_NOFILEHANDLE, NFS4ERR_NOT_SAME, | | | NFS4ERR_RESOURCE, NFS4ERR_SERVERFAULT, | | | NFS4ERR_STALE | | WRITE | NFS4ERR_ACCESS, NFS4ERR_ADMIN_REVOKED, | | | NFS4ERR_BADXDR, NFS4ERR_BADHANDLE, | @@ -9460,20 +9625,21 @@ | | OPEN_DOWNGRADE, READ, SETATTR, WRITE | | NFS4ERR_CB_PATH_DOWN | RENEW | | NFS4ERR_CLID_INUSE | SETCLIENTID, SETCLIENTID_CONFIRM | | 
NFS4ERR_DEADLOCK | LOCK | | NFS4ERR_DELAY | ACCESS, CB_GETATTR, CB_RECALL, CLOSE, | | | CREATE, GETATTR, LINK, LOCK, LOCKT, | | | LOOKUPP, NVERIFY, OPEN, OPENATTR, | | | OPEN_DOWNGRADE, PUTFH, PUTPUBFH, | | | PUTROOTFH, READ, READDIR, READLINK, | | | REMOVE, RENAME, SECINFO, SETATTR, | + | | SETCLIENTID, SETCLIENTID_CONFIRM, | | | VERIFY, WRITE | | NFS4ERR_DENIED | LOCK, LOCKT | | NFS4ERR_DQUOT | CREATE, LINK, OPEN, OPENATTR, RENAME, | | | SETATTR, WRITE | | NFS4ERR_EXIST | CREATE, LINK, OPEN, RENAME | | NFS4ERR_EXPIRED | CLOSE, DELEGRETURN, LOCK, LOCKU, OPEN, | | | OPEN_CONFIRM, OPEN_DOWNGRADE, READ, | | | RELEASE_LOCKOWNER, RENEW, SETATTR, | | | WRITE | | NFS4ERR_FBIG | OPEN, SETATTR, WRITE | @@ -9497,22 +9663,24 @@ | | RENAME, SECINFO, SETATTR, SETCLIENTID, | | | VERIFY, WRITE | | NFS4ERR_IO | ACCESS, COMMIT, CREATE, GETATTR, LINK, | | | LOOKUP, LOOKUPP, NVERIFY, OPEN, | | | OPENATTR, READ, READDIR, READLINK, | | | REMOVE, RENAME, SETATTR, VERIFY, WRITE | | NFS4ERR_ISDIR | CLOSE, COMMIT, LINK, LOCK, LOCKT, | | | LOCKU, OPEN, OPEN_CONFIRM, READ, | | | READLINK, SETATTR, WRITE | | NFS4ERR_LEASE_MOVED | CLOSE, DELEGPURGE, DELEGRETURN, LOCK, | - | | LOCKT, LOCKU, READ, RELEASE_LOCKOWNER, | - | | RENEW, WRITE | + | | LOCKT, LOCKU, OPEN_CONFIRM, | + | | OPEN_DOWNGRADE, READ, | + | | RELEASE_LOCKOWNER, RENEW, SETATTR, | + | | WRITE | | NFS4ERR_LOCKED | READ, SETATTR, WRITE | | NFS4ERR_LOCKS_HELD | CLOSE, RELEASE_LOCKOWNER | | NFS4ERR_LOCK_NOTSUPP | LOCK | | NFS4ERR_LOCK_RANGE | LOCK, LOCKT, LOCKU | | NFS4ERR_MLINK | LINK | | NFS4ERR_MOVED | ACCESS, CLOSE, COMMIT, CREATE, | | | DELEGRETURN, GETATTR, GETFH, LINK, | | | LOCK, LOCKT, LOCKU, LOOKUP, LOOKUPP, | | | NVERIFY, OPEN, OPENATTR, OPEN_CONFIRM, | | | OPEN_DOWNGRADE, PUTFH, READ, READDIR, | @@ -10923,21 +11091,21 @@ (cfh), stateid, cinfo, rflags, open_confirm, attrset delegation 15.18.2. 
ARGUMENT /* * Various definitions for OPEN */ enum createmode4 { UNCHECKED4 = 0, GUARDED4 = 1, - EXCLUSIVE4 = 2, + EXCLUSIVE4 = 2 }; union createhow4 switch (createmode4 mode) { case UNCHECKED4: case GUARDED4: fattr4 createattrs; case EXCLUSIVE4: verifier4 createverf; }; @@ -10976,21 +11144,21 @@ enum open_delegation_type4 { OPEN_DELEGATE_NONE = 0, OPEN_DELEGATE_READ = 1, OPEN_DELEGATE_WRITE = 2 }; enum open_claim_type4 { CLAIM_NULL = 0, CLAIM_PREVIOUS = 1, CLAIM_DELEGATE_CUR = 2, - CLAIM_DELEGATE_PREV = 3, + CLAIM_DELEGATE_PREV = 3 }; struct open_claim_delegate_cur4 { stateid4 delegate_stateid; component4 file; }; union open_claim4 switch (open_claim_type4 claim) { /* * No special rights to file. @@ -11728,24 +11896,24 @@ is returned with a data length set to 0 (zero) and eof is set to TRUE. The READ is subject to access permissions checking. If the client specifies a count value of 0 (zero), the READ succeeds and returns 0 (zero) bytes of data, again subject to access permissions checking. The server may choose to return fewer bytes than specified by the client. The client needs to check for this condition and handle the condition appropriately. The stateid value for a READ request represents a value returned from - a previous record lock or share reservation request. The stateid is - used by the server to verify that the associated share reservation - and any record locks are still valid and to update lease timeouts for - the client. + a previous record lock or share reservation request or the stateid + associated with a delegation. The stateid is used by the server to + verify that the associated share reservation and any record locks are + still valid and to update lease timeouts for the client.
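The READ size semantics in this section (a read starting at or beyond end-of-file returning zero bytes with eof TRUE, zero-length reads succeeding, short reads being permitted, and eof depending on how offset + count relates to the file size) can be sketched as a small model. This is illustrative only; the helper name `read_result` and its shape are invented for this example and are not part of the protocol:

```python
def read_result(file_data: bytes, offset: int, count: int):
    """Illustrative model of NFSv4 READ result semantics; returns
    the (data, eof) pair a server would send.  Not normative."""
    size = len(file_data)
    if offset >= size:
        # A read starting at or beyond end-of-file returns zero bytes
        # with eof TRUE; this also covers a READ of an empty file,
        # which always reports eof TRUE.
        return b"", True
    # A count of zero succeeds and returns zero bytes; a server may
    # also legitimately return fewer bytes than requested.
    data = file_data[offset:offset + count]
    # eof is TRUE when the read ends at or extends past the end of
    # the file (offset + count >= size); otherwise it is FALSE.
    return data, (offset + count) >= size
```

For a 5-byte file, for example, a READ with offset 0 and count 5 returns all 5 bytes with eof TRUE, while offset 0 and count 3 returns 3 bytes with eof FALSE.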
If the read ended at the end-of-file (formally, in a correctly formed READ request, if offset + count is equal to the size of the file), or the read request extends beyond the size of the file (if offset + count is greater than the size of the file), eof is returned as TRUE; otherwise it is FALSE. A successful READ of an empty file will always return eof as TRUE. If the current filehandle is not a regular file, an error will be returned to the client. In the case the current filehandle @@ -11892,22 +12060,22 @@ For some filesystem environments, the directory entries "." and ".." have special meaning and in other environments, they may not. If the server supports these special entries within a directory, they should not be returned to the client as part of the READDIR response. To enable some client environments, the cookie values of 0, 1, and 2 are to be considered reserved. Note that the UNIX client will use these values when combining the server's response and local representations to enable a fully formed UNIX directory presentation to the application. - For READDIR arguments, cookie values of 1 and 2 should not be used - and for READDIR results cookie values of 0, 1, and 2 should not be + For READDIR arguments, cookie values of 1 and 2 SHOULD NOT be used + and for READDIR results cookie values of 0, 1, and 2 MUST NOT be returned. On success, the current filehandle retains its value. 15.26.5. IMPLEMENTATION The server's filesystem directory representations can differ greatly. A client's programming interfaces may also be bound to the local operating environment in a way that does not translate well into the NFS protocol. Therefore the use of the dircount and maxcount fields @@ -12198,21 +12366,23 @@ When the client holds delegations, it needs to use RENEW to detect when the server has determined that the callback path is down. When the server has made such a determination, only the RENEW operation will renew the lease on delegations. 
If the server determines the callback path is down, it returns NFS4ERR_CB_PATH_DOWN. Even though it returns NFS4ERR_CB_PATH_DOWN, the server MUST renew the lease on the record locks and share reservations that the client has established on the server. If for some reason the lock and share reservation lease cannot be renewed, then the server MUST return an error other than NFS4ERR_CB_PATH_DOWN, even if the callback path is - also down. + also down. In the event that the server has conditions such that it + could return either NFS4ERR_CB_PATH_DOWN or NFS4ERR_LEASE_MOVED, + NFS4ERR_LEASE_MOVED MUST be handled first. The client that issues RENEW MUST choose the principal, RPC security flavor, and if applicable, GSS-API mechanism and service via one of the following algorithms: o The client uses the same principal, RPC security flavor -- and if the flavor was RPCSEC_GSS -- the same mechanism and service that was used when the client id was established via SETCLIENTID_CONFIRM. @@ -12991,24 +13161,24 @@ stable is UNSTABLE4, the server is free to commit any part of the data and the metadata to stable storage, including all or none, before returning a reply to the client. There is no guarantee whether or when any uncommitted data will subsequently be committed to stable storage. The only guarantees made by the server are that it will not destroy any data without changing the value of verf and that it will not commit the data and metadata at a level less than that requested by the client. The stateid value for a WRITE request represents a value returned - from a previous record lock or share reservation request. The - stateid is used by the server to verify that the associated share - reservation and any record locks are still valid and to update lease - timeouts for the client. + from a previous record lock or share reservation request or the + stateid associated with a delegation.
The stateid is used by the + server to verify that the associated share reservation and any record + locks are still valid and to update lease timeouts for the client. Upon successful completion, the following results are returned. The count result is the number of bytes of data written to the file. The server may write fewer bytes than requested. If so, the actual number of bytes written starting at location, offset, is returned. The server also returns an indication of the level of commitment of the data and metadata via committed. If the server committed all data and metadata to stable storage, committed should be set to FILE_SYNC4. If the level of commitment was at least as strong as @@ -13685,56 +13855,59 @@ NFS Server", USENIX Conference Proceedings, June 1990. [31] Callaghan, B., "NFS URL Scheme", RFC 2224, October 1997. [32] Chiu, A., Eisler, M., and B. Callaghan, "Security Negotiation for WebNFS", RFC 2755, January 2000. [33] Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 5226, May 2008. - [34] Noveck, D. and R. Burnett, "Implementation Guide for Referrals - in NFSv4", draft-ietf-nfsv4-referrals-00 (work in progress), - July 2005. - Appendix A. Acknowledgments Rob Thurlow clarified how a client should contact a new server if a migration has occurred. - David Black, Nico Williams, Mike Eisler and Trond Myklebust read many - drafts of Section 12 and contributed numerous useful suggestions, - without which the necessary revision of that section for this - document would not have been possible. + David Black, Nico Williams, Mike Eisler, Trond Myklebust, and James + Lentini read many drafts of Section 12 and contributed numerous + useful suggestions, without which the necessary revision of that + section for this document would not have been possible.
Peter Staubach read almost all of the drafts of Section 12 leading to the published result and his numerous comments were always useful and contributed substantially to improving the quality of the final result. + James Lentini graciously read the rewrite of Section 7 and his + comments were vital in improving the quality of that effort. + + Rob Thurlow, Sorin Faibish, James Lentini, Bruce Fields, and Trond + Myklebust were faithful attendants of the biweekly triage meeting and + accepted many an action item. + Appendix B. RFC Editor Notes [RFC Editor: please remove this section prior to publishing this document as an RFC] [RFC Editor: prior to publishing this document as an RFC, please replace all occurrences of RFCTBD10 with RFCxxxx where xxxx is the RFC number of this document] Authors' Addresses Thomas Haynes - Oracle + NetApp 9110 E 66th St Tulsa, OK 74133 USA Phone: +1 918 307 1415 - Email: tom.haynes@oracle.com + Email: thomas@netapp.com David Noveck EMC 32 Coslin Drive Southborough, MA 01772 US Phone: +1 508 305 8404 Email: novecd@emc.com