NFSv4                                                          T. Haynes
Internet-Draft                                                 D. Noveck
Intended status: Standards Track                                 Editors
Expires: August 31, September 5, 2011                               February 27,                                March 04, 2011

                         NFS Version 4 Protocol
                   draft-ietf-nfsv4-rfc3530bis-07.txt
                   draft-ietf-nfsv4-rfc3530bis-08.txt

Abstract

   The Network File System (NFS) version 4 is a distributed filesystem
   protocol which owes heritage to NFS protocol version 2, RFC 1094, and
   version 3, RFC 1813.  Unlike earlier versions, the NFS version 4
   protocol supports traditional file access while integrating support
   for file locking and the mount protocol.  In addition, support for
   strong security (and its negotiation), compound operations, client
   caching, and internationalization have been added.  Of course,
   attention has been applied to making NFS version 4 operate well in an
   Internet environment.

   This document, together with the companion XDR description document,
   replaces RFC 3530 as the definition of the NFS version 4 protocol.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [1].

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on August 31, September 5, 2011.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   8
     1.1.   Changes since RFC 3530 . . . . . . . . . . . . . . . . .   8
     1.2.   Changes since RFC 3010 . . . . . . . . . . . . . . . . .   8   9
     1.3.   NFS Version 4 Goals  . . . . . . . . . . . . . . . . . .  10
     1.4.   Inconsistencies of this Document with the companion
            document NFS Version 4 Protocol  . . . . . . . . . . . .  10
     1.5.   Overview of NFS version 4 NFSv4 Features . . . . . . . . . . . . . . .  11
       1.5.1.   RPC and Security . . . . . . . . . . . . . . . . . .  11
       1.5.2.   Procedure and Operation Structure  . . . . . . . . .  11
       1.5.3.   Filesystem Model . . . . . . . . . . . . . . . . . .  12
       1.5.4.   OPEN and CLOSE . . . . . . . . . . . . . . . . . . .  14
       1.5.5.   File Locking . . . . . . . . . . . . . . . . . . . .  14
       1.5.6.   Client Caching and Delegation  . . . . . . . . . . .  14
     1.6.   General Definitions  . . . . . . . . . . . . . . . . . .  15
   2.  Protocol Data Types . . . . . . . . . . . . . . . . . . . . .  17
     2.1.   Basic Data Types . . . . . . . . . . . . . . . . . . . .  17
     2.2.   Structured Data Types  . . . . . . . . . . . . . . . . .  18  19
   3.  RPC and Security Flavor . . . . . . . . . . . . . . . . . . .  24
     3.1.   Ports and Transports . . . . . . . . . . . . . . . . . .  24
       3.1.1.   Client Retransmission Behavior . . . . . . . . . . .  25
     3.2.   Security Flavors . . . . . . . . . . . . . . . . . . . .  25
       3.2.1.   Security mechanisms for NFS version 4 NFSv4  . . . . . . . . . . .  26
     3.3.   Security Negotiation . . . . . . . . . . . . . . . . . .  28
       3.3.1.   SECINFO  . . . . . . . . . . . . . . . . . . . . . .  28
       3.3.2.   Security Error . . . . . . . . . . . . . . . . . . .  28
       3.3.3.   Callback RPC Authentication  . . . . . . . . . . . .  29
   4.  Filehandles . . . . . . . . . . . . . . . . . . . . . . . . .  31
     4.1.   Obtaining the First Filehandle . . . . . . . . . . . . .  31
       4.1.1.   Root Filehandle  . . . . . . . . . . . . . . . . . .  31
       4.1.2.   Public Filehandle  . . . . . . . . . . . . . . . . .  31
     4.2.   Filehandle Types . . . . . . . . . . . . . . . . . . . .  32
       4.2.1.   General Properties of a Filehandle . . . . . . . . .  32
       4.2.2.   Persistent Filehandle  . . . . . . . . . . . . . . .  33
       4.2.3.   Volatile Filehandle  . . . . . . . . . . . . . . . .  33
       4.2.4.   One Method of Constructing a Volatile Filehandle . .  35
     4.3.   Client Recovery from Filehandle Expiration . . . . . . .  35
   5.  File Attributes . . . . . . . . . . . . . . . . . . . . . . .  36
     5.1.   REQUIRED Attributes  . . . . . . . . . . . . . . . . . .  37
     5.2.   RECOMMENDED Attributes . . . . . . . . . . . . . . . . .  37
     5.3.   Named Attributes . . . . . . . . . . . . . . . . . . . .  38
     5.4.   Classification of Attributes . . . . . . . . . . . . . .  39
     5.5.   Set-Only and Get-Only Attributes . . . . . . . . . . . .  40
     5.6.   REQUIRED Attributes - List and Definition References . .  40
     5.7.   RECOMMENDED Attributes - List and Definition
            References . . . . . . . . . . . . . . . . . . . . . . .  41
     5.8.   Attribute Definitions  . . . . . . . . . . . . . . . . .  42
       5.8.1.   Definitions of REQUIRED Attributes . . . . . . . . .  42
       5.8.2.   Definitions of Uncategorized RECOMMENDED
                Attributes . . . . . . . . . . . . . . . . . . . . .  44
     5.9.   Interpreting owner and owner_group . . . . . . . . . . .  50
     5.10.  Character Case Attributes  . . . . . . . . . . . . . . .  53
   6.  Access Control Attributes . . . . . . . . . . . . . . . . . .  53
     6.1.   Goals  . . . . . . . . . . . . . . . . . . . . . . . . .  54
     6.2.   File Attributes Discussion . . . . . . . . . . . . . . .  54
       6.2.1.   Attribute 12: acl  . . . . . . . . . . . . . . . . .  54
       6.2.2.   Attribute 33: mode . . . . . . . . . . . . . . . . .  68
     6.3.   Common Methods . . . . . . . . . . . . . . . . . . . . .  69
       6.3.1.   Interpreting an ACL  . . . . . . . . . . . . . . . .  69
       6.3.2.   Computing a Mode Attribute from an ACL . . . . . . .  70
     6.4.   Requirements . . . . . . . . . . . . . . . . . . . . . .  71
       6.4.1.   Setting the mode and/or ACL Attributes . . . . . . .  72
       6.4.2.   Retrieving the mode and/or ACL Attributes  . . . . .  73
       6.4.3.   Creating New Objects . . . . . . . . . . . . . . . .  73
   7.  Multi-Server Namespace  . . . . . . . . . . . . . . . . . . .  75
     7.1.   Location Attributes  . . . . . . . . . . . . . . . . . .  75
     7.2.   File System Presence or Absence  . . . . . . . . . . . .  76
     7.3.   Getting Attributes for an Absent File System . . . . . .  77
       7.3.1.   GETATTR Within an Absent File System . . . . . . . .  77
       7.3.2.   READDIR and Absent File Systems  . . . . . . . . . .  78
     7.4.   Uses of Location Information . . . . . . . . . . . . . .  78
       7.4.1.   File System Replication  . . . . . . . . . . . . . .  79
       7.4.2.   File System Migration  . . . . . . . . . . . . . . .  80
       7.4.3.   Referrals  . . . . . . . . . . . . . . . . . . . . .  81
     7.5.   Location Entries and Server Identity . . . . . . . . . .  81
     7.6.   Additional Client-Side Considerations  . . . . . . . . .  82
     7.7.   Effecting File System Transitions  . . . . . . . . . . .  83
       7.7.1.   File System Transitions and Simultaneous Access  . .  84
       7.7.2.   Filehandles and File System Transitions  . . . . . .  84
       7.7.3.   Fileids and File System Transitions  . . . . . . . .  85
       7.7.4.   Fsids and File System Transitions  . . . . . . . . .  86
       7.7.5.   The Change Attribute and File System Transitions . .  86
       7.7.6.   Lock State and File System Transitions . . . . . . .  87
       7.7.7.   Write Verifiers and File System Transitions  . . . .  89
       7.7.8.   Readdir Cookies and Verifiers and File System
                Transitions  . . . . . . . . . . . . . . . . . . . .  89
       7.7.9.   File System Data and File System Transitions . . . .  90
     7.8.   Effecting File System Referrals  . . . . . . . . . . . .  91
       7.8.1.   Referral Example (LOOKUP)  . . . . . . . . . . . . .  91
       7.8.2.   Referral Example (READDIR) . . . . . . . . . . . . .  95
     7.9.   The Attribute fs_locations . . . . . . . . . . . . . . .  98
       7.9.1.   Inferring Transition Modes . . . . . . . . . . . . .  99
   8.  NFS Server Name Space . . . . . . . . . . . . . . . . . . . . 101
     8.1.   Server Exports . . . . . . . . . . . . . . . . . . . . . 101
     8.2.   Browsing Exports . . . . . . . . . . . . . . . . . . . . 101
     8.3.   Server Pseudo Filesystem . . . . . . . . . . . . . . . . 101
     8.4.   Multiple Roots . . . . . . . . . . . . . . . . . . . . . 102
     8.5.   Filehandle Volatility  . . . . . . . . . . . . . . . . . 102
     8.6.   Exported Root  . . . . . . . . . . . . . . . . . . . . . 102
     8.7.   Mount Point Crossing . . . . . . . . . . . . . . . . . . 103
     8.8.   Security Policy and Name Space Presentation  . . . . . . 103
   9.  File Locking and Share Reservations . . . . . . . . . . . . . 104
     9.1.   Locking  . . . . . . . . .   Opens and Byte-Range Locks . . . . . . . . . . . . . . . 105
       9.1.1.   Client ID  . . . . . . . . . . . . . . . . . . . . . 105
       9.1.2.   Server Release of Clientid . Client ID  . . . . . . . . . . . . 108
       9.1.3.   lock_owner and stateid   Stateid Definition . . . . . . . . . . . . . . . . . 109
       9.1.4.   lock_owner . . . . . . . . . . . . . . . . . . . . . 117
       9.1.5.   Use of the stateid Stateid and Locking . . . . . . . . . . . 110
       9.1.5. 117
       9.1.6.   Sequencing of Lock Requests  . . . . . . . . . . . . 112
       9.1.6. 119
       9.1.7.   Recovery from Replayed Requests  . . . . . . . . . . 113
       9.1.7. 120
       9.1.8.   Releasing lock_owner State . . . . . . . . . . . . . 114
       9.1.8. 121
       9.1.9.   Use of Open Confirmation . . . . . . . . . . . . . . 114 121
     9.2.   Lock Ranges  . . . . . . . . . . . . . . . . . . . . . . 115 122
     9.3.   Upgrading and Downgrading Locks  . . . . . . . . . . . . 116 123
     9.4.   Blocking Locks . . . . . . . . . . . . . . . . . . . . . 116 123
     9.5.   Lease Renewal  . . . . . . . . . . . . . . . . . . . . . 117 124
     9.6.   Crash Recovery . . . . . . . . . . . . . . . . . . . . . 118 125
       9.6.1.   Client Failure and Recovery  . . . . . . . . . . . . 118 125
       9.6.2.   Server Failure and Recovery  . . . . . . . . . . . . 119 126
       9.6.3.   Network Partitions and Recovery  . . . . . . . . . . 120 127
     9.7.   Recovery from a Lock Request Timeout or Abort  . . . . . 124 133
     9.8.   Server Revocation of Locks . . . . . . . . . . . . . . . 124 133
     9.9.   Share Reservations . . . . . . . . . . . . . . . . . . . 125 135
     9.10.  OPEN/CLOSE Operations  . . . . . . . . . . . . . . . . . 126 135
       9.10.1.  Close and Retention of State Information . . . . . . 127 136
     9.11.  Open Upgrade and Downgrade . . . . . . . . . . . . . . . 127 137
     9.12.  Short and Long Leases  . . . . . . . . . . . . . . . . . 128 137
     9.13.  Clocks, Propagation Delay, and Calculating Lease
            Expiration . . . . . . . . . . . . . . . . . . . . . . . 129 138
     9.14.  Migration, Replication and State . . . . . . . . . . . . 129 138
       9.14.1.  Migration and State  . . . . . . . . . . . . . . . . 130 139
       9.14.2.  Replication and State  . . . . . . . . . . . . . . . 130 140
       9.14.3.  Notification of Migrated Lease . . . . . . . . . . . 131 140
       9.14.4.  Migration and the Lease_time Attribute . . . . . . . 132 141
   10. Client-Side Caching . . . . . . . . . . . . . . . . . . . . . 132 141
     10.1.  Performance Challenges for Client-Side Caching . . . . . 133 142
     10.2.  Delegation and Callbacks . . . . . . . . . . . . . . . . 134 143
       10.2.1.  Delegation Recovery  . . . . . . . . . . . . . . . . 135 145
     10.3.  Data Caching . . . . . . . . . . . . . . . . . . . . . . 137 147
       10.3.1.  Data Caching and OPENs . . . . . . . . . . . . . . . 138 147
       10.3.2.  Data Caching and File Locking  . . . . . . . . . . . 139 148
       10.3.3.  Data Caching and Mandatory File Locking  . . . . . . 140 149
       10.3.4.  Data Caching and File Identity . . . . . . . . . . . 141 150

     10.4.  Open Delegation  . . . . . . . . . . . . . . . . . . . . 142 151
       10.4.1.  Open Delegation and Data Caching . . . . . . . . . . 144 153
       10.4.2.  Open Delegation and File Locks . . . . . . . . . . . 145 155
       10.4.3.  Handling of CB_GETATTR . . . . . . . . . . . . . . . 146 155
       10.4.4.  Recall of Open Delegation  . . . . . . . . . . . . . 149 158
       10.4.5.  OPEN Delegation Race with CB_RECALL  . . . . . . . . 160
       10.4.6.  Clients that Fail to Honor Delegation Recalls  . . . 151
       10.4.6. 161
       10.4.7.  Delegation Revocation  . . . . . . . . . . . . . . . 151 162
     10.5.  Data Caching and Revocation  . . . . . . . . . . . . . . 152 162
       10.5.1.  Revocation Recovery for Write Open Delegation  . . . 152 163
     10.6.  Attribute Caching  . . . . . . . . . . . . . . . . . . . 153 163
     10.7.  Data and Metadata Caching and Memory Mapped Files  . . . 155 165
     10.8.  Name Caching . . . . . . . . . . . . . . . . . . . . . . 157 167
     10.9.  Directory Caching  . . . . . . . . . . . . . . . . . . . 158 168
   11. Minor Versioning  . . . . . . . . . . . . . . . . . . . . . . 159 169
   12. Internationalization  . . . . . . . . . . . . . . . . . . . . 162 172
     12.1.  Use of UTF-8 . . . . . . . . . . . . . . . . . . . . . . 163 173
       12.1.1.  Relation to Stringprep . . . . . . . . . . . . . . . 163 173
       12.1.2.  Normalization, Equivalence, and Confusability  . . . 164 174
     12.2.  String Type Overview . . . . . . . . . . . . . . . . . . 166 177
       12.2.1.  Overall String Class Divisions . . . . . . . . . . . 167 177
       12.2.2.  Divisions by Typedef Parent types  . . . . . . . . . 168 178
       12.2.3.  Individual Types and Their Handling  . . . . . . . . 168 179
     12.3.  Errors Related to Strings  . . . . . . . . . . . . . . . 170 180
     12.4.  Types with Pre-processing to Resolve Mixture Issues  . . 171 181
       12.4.1.  Processing of Principal Strings  . . . . . . . . . . 171 181
       12.4.2.  Processing of Server Id Strings  . . . . . . . . . . 171 181
     12.5.  String Types without Internationalization Processing . . 172 182
     12.6.  Types with Processing Defined by Other Internet Areas  . 172 182
     12.7.  String Types with NFS-specific Processing  . . . . . . . 173 183
       12.7.1.  Handling of File Name Components . . . . . . . . . . 174 184
       12.7.2.  Processing of Link Text  . . . . . . . . . . . . . . 183 193
       12.7.3.  Processing of Principal Prefixes . . . . . . . . . . 184 194
   13. Error Values  . . . . . . . . . . . . . . . . . . . . . . . . 185 195
     13.1.  Error Definitions  . . . . . . . . . . . . . . . . . . . 185 195
       13.1.1.  General Errors . . . . . . . . . . . . . . . . . . . 187 197
       13.1.2.  Filehandle Errors  . . . . . . . . . . . . . . . . . 188 198
       13.1.3.  Compound Structure Errors  . . . . . . . . . . . . . 189 199
       13.1.4.  File System Errors . . . . . . . . . . . . . . . . . 190 200
       13.1.5.  State Management Errors  . . . . . . . . . . . . . . 192 202
       13.1.6.  Security Errors  . . . . . . . . . . . . . . . . . . 193 203
       13.1.7.  Name Errors  . . . . . . . . . . . . . . . . . . . . 193 203
       13.1.8.  Locking Errors . . . . . . . . . . . . . . . . . . . 194 204
       13.1.9.  Reclaim Errors . . . . . . . . . . . . . . . . . . . 195 205
       13.1.10. Client Management Errors . . . . . . . . . . . . . . 196 206
       13.1.11. Attribute Handling Errors  . . . . . . . . . . . . . 196 206
     13.2.  Operations and their valid errors  . . . . . . . . . . . 197 207
     13.3.  Callback operations and their valid errors . . . . . . . 205 214
     13.4.  Errors and the operations that use them  . . . . . . . . 205 214
   14. NFS version 4 NFSv4 Requests  . . . . . . . . . . . . . . . . . . . 209 . . . . 219
     14.1.  Compound Procedure . . . . . . . . . . . . . . . . . . . 210 219
     14.2.  Evaluation of a Compound Request . . . . . . . . . . . . 210 220
     14.3.  Synchronous Modifying Operations . . . . . . . . . . . . 211 221
     14.4.  Operation Values . . . . . . . . . . . . . . . . . . . . 212 221
   15. NFS version 4 NFSv4 Procedures  . . . . . . . . . . . . . . . . . . 212 . . . . 221
     15.1.  Procedure 0: NULL - No Operation . . . . . . . . . . . . 212 221
     15.2.  Procedure 1: COMPOUND - Compound Operations  . . . . . . 212 222
     15.3.  Operation 3: ACCESS - Check Access Rights  . . . . . . . 215 227
     15.4.  Operation 4: CLOSE - Close File  . . . . . . . . . . . . 218 230
     15.5.  Operation 5: COMMIT - Commit Cached Data . . . . . . . . 219 231
     15.6.  Operation 6: CREATE - Create a Non-Regular File Object . 221 233
     15.7.  Operation 7: DELEGPURGE - Purge Delegations Awaiting
            Recovery . . . . . . . . . . . . . . . . . . . . . . . . 224 236
     15.8.  Operation 8: DELEGRETURN - Return Delegation . . . . . . 225 237
     15.9.  Operation 9: GETATTR - Get Attributes  . . . . . . . . . 225 237
     15.10. Operation 10: GETFH - Get Current Filehandle . . . . . . 226 239
     15.11. Operation 11: LINK - Create Link to a File . . . . . . . 227 240
     15.12. Operation 12: LOCK - Create Lock . . . . . . . . . . . . 229 241
     15.13. Operation 13: LOCKT - Test For Lock  . . . . . . . . . . 233 245
     15.14. Operation 14: LOCKU - Unlock File  . . . . . . . . . . . 234 247
     15.15. Operation 15: LOOKUP - Lookup Filename . . . . . . . . . 236 248
     15.16. Operation 16: LOOKUPP - Lookup Parent Directory  . . . . 237 250
     15.17. Operation 17: NVERIFY - Verify Difference in
            Attributes . . . . . . . . . . . . . . . . . . . . . . . 238 250
     15.18. Operation 18: OPEN - Open a Regular File . . . . . . . . 239 252
     15.19. Operation 19: OPENATTR - Open Named Attribute
            Directory  . . . . . . . . . . . . . . . . . . . . . . . 249 262
     15.20. Operation 20: OPEN_CONFIRM - Confirm Open  . . . . . . . 250 263
     15.21. Operation 21: OPEN_DOWNGRADE - Reduce Open File Access . 252 265
     15.22. Operation 22: PUTFH - Set Current Filehandle . . . . . . 253 266
     15.23. Operation 23: PUTPUBFH - Set Public Filehandle . . . . . 253 267
     15.24. Operation 24: PUTROOTFH - Set Root Filehandle  . . . . . 255 268
     15.25. Operation 25: READ - Read from File  . . . . . . . . . . 255 269
     15.26. Operation 26: READDIR - Read Directory . . . . . . . . . 258 271
     15.27. Operation 27: READLINK - Read Symbolic Link  . . . . . . 261 275
     15.28. Operation 28: REMOVE - Remove Filesystem Object  . . . . 262 276
     15.29. Operation 29: RENAME - Rename Directory Entry  . . . . . 264 278
     15.30. Operation 30: RENEW - Renew a Lease  . . . . . . . . . . 266 280
     15.31. Operation 31: RESTOREFH - Restore Saved Filehandle . . . 267 281
     15.32. Operation 32: SAVEFH - Save Current Filehandle . . . . . 268 282
     15.33. Operation 33: SECINFO - Obtain Available Security  . . . 269 282
     15.34. Operation 34: SETATTR - Set Attributes . . . . . . . . . 272 285
     15.35. Operation 35: SETCLIENTID - Negotiate Clientid . Client ID  . . . . 275 288
     15.36. Operation 36: SETCLIENTID_CONFIRM - Confirm Clientid . Client ID  . 278 292
     15.37. Operation 37: VERIFY - Verify Same Attributes  . . . . . 282 295
     15.38. Operation 38: WRITE - Write to File  . . . . . . . . . . 283 297
     15.39. Operation 39: RELEASE_LOCKOWNER - Release Lockowner
            State  . . . . . . . . . . . . . . . . . . . . . . . . . 287 301
     15.40. Operation 10044: ILLEGAL - Illegal operation . . . . . . 288 302
   16. NFS version 4 NFSv4 Callback Procedures . . . . . . . . . . . . . . 289 . . . . 302
     16.1.  Procedure 0: CB_NULL - No Operation  . . . . . . . . . . 289 303
     16.2.  Procedure 1: CB_COMPOUND - Compound Operations . . . . . 290 303
       16.2.6.  Operation 3: CB_GETATTR - Get Attributes . . . . . . 291 305
       16.2.7.  Operation 4: CB_RECALL - Recall an Open Delegation . 292 306
       16.2.8.  Operation 10044: CB_ILLEGAL - Illegal Callback
                Operation  . . . . . . . . . . . . . . . . . . . . . 293 307
   17. Security Considerations . . . . . . . . . . . . . . . . . . . 294 308
   18. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 296 309
     18.1.  Named Attribute Definition Definitions  . . . . . . . . . . . . . . 309
       18.1.1.  Initial Registry . . . . . . . . . . . . . . . . . . 296 310
       18.1.2.  Updating Registrations . . . . . . . . . . . . . . . 310
     18.2.  ONC RPC Network Identifiers (netids) . . . . . . . . . . 296 310
       18.2.1.  Initial Registry . . . . . . . . . . . . . . . . . . 312
       18.2.2.  Updating Registrations . . . . . . . . . . . . . . . 312
   19. References  . . . . . . . . . . . . . . . . . . . . . . . . . 297 312
     19.1.  Normative References . . . . . . . . . . . . . . . . . . 297 312
     19.2.  Informative References . . . . . . . . . . . . . . . . . 298 313
   Appendix A.  Acknowledgments  . . . . . . . . . . . . . . . . . . 300 315
   Appendix B.  RFC Editor Notes . . . . . . . . . . . . . . . . . . 300 316
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . . 301 316

1.  Introduction

1.1.  Changes since RFC 3530

   This document, together with the companion XDR description document
   [2], obsoletes RFC 3530 [11] as the authoritative document describing
   NFSv4.  It does not introduce any over-the-wire protocol changes, in
   the sense that previously valid requests requests remain valid.
   However, some requests previously defined as invalid, although not
   generally rejected, are now explicitly allowed, in that
   internationalization handling has been generalized and liberalized.
   The main changes from RFC 3530 are:

   o  The XDR definition has been moved to a companion document [2]

   o  Updates for the latest IETF intellectual property statements

   o  There is a restructured and more complete explanation of multi-
      server namespace features.  In particular, this explanation
      explicitly describes handling of inter-server referrals, even
      where neither migration nor replication is involved.

   o  More liberal handling of internationalization for file names and
      user and group names, with the elimination of restrictions imposed
      by stringprep, with the recognition that rules for the forms of
      these name are the province of the receiving entity.

   o  Updating handling of domain names to reflect IDNA.

   o  Restructuring of string types to more appropriately reflect the
      reality of required string processing.

   o  LIPKEY SPKM/3 has been moved from being REQUIRED to OPTIONAL.

   o  Some clarification on a client re-establishing callback
      information to the new server if state has been migrated migrated.

   o  A third edge case was added for Courtesy locks and network
      partitions.

   o  The definintion of stateid was strengthened, which had the side
      effect of introducing a semantic change in a COMPOUND structure
      having a current stateid and a saved stateid.

1.2.  Changes since RFC 3010

   This definition of the NFS version 4 NFSv4 protocol replaces or obsoletes the
   definition present in [12].  While portions of the two documents have
   remained the same, there have been substantive changes in others.
   The changes made between [12] and this document represent
   implementation experience and further review of the protocol.  While
   some modifications were made for ease of implementation or
   clarification, most updates represent errors or situations where the
   [12] definition were untenable.

   The following list is not all inclusive of all changes but presents
   some of the most notable changes or additions made:

   o  The state model has added an open_owner4 identifier.  This was
      done to accommodate Posix based clients and the model they use for
      file locking.  For Posix clients, an open_owner4 would correspond
      to a file descriptor potentially shared amongst a set of processes
      and the lock_owner4 identifier would correspond to a process that
      is locking a file.

   o  Clarifications and error conditions were added for the handling of
      the owner and group attributes.  Since these attributes are string
      based (as opposed to the numeric uid/gid of previous versions of
      NFS), translations may not be available and hence the changes
      made.

   o  Clarifications for the ACL and mode attributes to address
      evaluation and partial support.

   o  For identifiers that are defined as XDR opaque, limits were set on
      their size.

   o  Added the mounted_on_filed attribute to allow Posix clients to
      correctly construct local mounts.

   o  Modified the SETCLIENTID/SETCLIENTID_CONFIRM operations to deal
      correctly with confirmation details along with adding the ability
      to specify new client callback information.  Also added
      clarification of the callback information itself.

   o  Added a new operation LOCKOWNER_RELEASE to enable notifying the
      server that a lock_owner4 will no longer be used by the client.

   o  RENEW operation changes to identify the client correctly and allow
      for additional error returns.

   o  Verify error return possibilities for all operations.

   o  Remove use of the pathname4 data type from LOOKUP and OPEN in
      favor of having the client construct a sequence of LOOKUP
      operations to achieive achieve the same effect.

   o  Clarification of the internationalization issues and adoption of
      the new stringprep profile framework.

1.3.  NFS Version 4 Goals

   The NFS version 4 NFSv4 protocol is a further revision of the NFS protocol defined
   already by versions 2 [13] and 3 [14].  It retains the essential
   characteristics of previous versions: design for easy recovery,
   independent of transport protocols, operating systems and
   filesystems, simplicity, and good performance.  The NFS version 4 NFSv4 revision
   has the following goals:

   o  Improved access and good performance on the Internet.

      The protocol is designed to transit firewalls easily, perform well
      where latency is high and bandwidth is low, and scale to very
      large numbers of clients per server.

   o  Strong security with negotiation built into the protocol.

      The protocol builds on the work of the ONCRPC working group in
      supporting the RPCSEC_GSS protocol.  Additionally, the NFS version
      4 protocol provides a mechanism to allow clients and servers the
      ability to negotiate security and require clients and servers to
      support a minimal set of security schemes.

   o  Good cross-platform interoperability.

      The protocol features a filesystem model that provides a useful,
      common set of features that does not unduly favor one filesystem
      or operating system over another.

   o  Designed for protocol extensions.

      The protocol is designed to accept standard extensions that do not
      compromise backward compatibility.

1.4.  Inconsistencies of this Document with the companion document NFS
      Version 4 Protocol

   [2], NFS Version 4 Protocol, contains the definitions in XDR
   description language of the constructs used by the protocol.  Inside
   this document, several of the constructs are reproduced for purposes
   of explanation.  The reader is warned of the possibility of errors in
   the reproduced constructs outside of [2].  For any part of the
   document that is inconsistent with [2], [2] is to be considered
   authoritative.

1.5.  Overview of NFS version 4 NFSv4 Features

   To provide a reasonable context for the reader, the major features of
   NFS version 4
   NFSv4 protocol will be reviewed in brief.  This will be done to
   provide an appropriate context for both the reader who is familiar
   with the previous versions of the NFS protocol and the reader that is
   new to the NFS protocols.  For the reader new to the NFS protocols,
   there is still a fundamental knowledge that is expected.  The reader
   should be familiar with the XDR and RPC protocols as described in [3]
   and [15].  A basic knowledge of filesystems and distributed
   filesystems is expected as well.

1.5.1.  RPC and Security

   As with previous versions of NFS, the External Data Representation
   (XDR) and Remote Procedure Call (RPC) mechanisms used for the NFS
   version 4 NFSv4
   protocol are those defined in [3] and [15].  To meet end to end
   security requirements, the RPCSEC_GSS framework [4] will be used to
   extend the basic RPC security.  With the use of RPCSEC_GSS, various
   mechanisms can be provided to offer authentication, integrity, and
   privacy to the NFS version 4 protocol.  Kerberos V5 will be used as
   described in [16] to provide one security framework.  The LIPKEY GSS-API GSS-
   API mechanism described in [5] will be used to provide for the use of
   user password and server public key by the NFS version
   4 NFSv4 protocol.  With the
   use of RPCSEC_GSS, other mechanisms may also be specified and used
   for NFS version 4 security.

   To enable in-band security negotiation, the NFS version 4 NFSv4 protocol has added
   a new operation which provides the client a method of querying the
   server about its policies regarding which security mechanisms must be
   used for access to the server's filesystem resources.  With this, the
   client can securely match the security mechanism that meets the
   policies specified at both the client and server.

1.5.2.  Procedure and Operation Structure

   A significant departure from the previous versions of the NFS
   protocol is the introduction of the COMPOUND procedure.  For the NFS
   version 4
   NFSv4 protocol, there are two RPC procedures, NULL and COMPOUND.  The
   COMPOUND procedure is defined in terms of operations and these
   operations correspond more closely to the traditional NFS procedures.

   With the use of the COMPOUND procedure, the client is able to build
   simple or complex requests.  These COMPOUND requests allow for a
   reduction in the number of RPCs needed for logical filesystem
   operations.  For example, without previous contact with a server a
   client will be able to read data from a file in one request by
   combining LOOKUP, OPEN, and READ operations in a single COMPOUND RPC.
   With previous versions of the NFS protocol, this type of single
   request was not possible.

   The model used for COMPOUND is very simple.  There is no logical OR
   or ANDing of operations.  The operations combined within a COMPOUND
   request are evaluated in order by the server.  Once an operation
   returns a failing result, the evaluation ends and the results of all
   evaluated operations are returned to the client.

   The NFS version 4 NFSv4 protocol continues to have the client refer to a file or
   directory at the server by a "filehandle".  The COMPOUND procedure
   has a method of passing a filehandle from one operation to another
   within the sequence of operations.  There is a concept of a "current
   filehandle" and "saved filehandle".  Most operations use the "current
   filehandle" as the filesystem object to operate upon.  The "saved
   filehandle" is used as temporary filehandle storage within a COMPOUND
   procedure as well as an additional operand for certain operations.

1.5.3.  Filesystem Model

   The general filesystem model used for the NFS version 4 NFSv4 protocol is the same
   as previous versions.  The server filesystem is hierarchical with the
   regular files contained within being treated as opaque byte streams.
   In a slight departure, file and directory names are encoded with
   UTF-8 to deal with the basics of internationalization.

   The NFS version 4 NFSv4 protocol does not require a separate protocol to provide
   for the initial mapping between path name and filehandle.  Instead of
   using the older MOUNT protocol for this mapping, the server provides
   a ROOT filehandle that represents the logical root or top of the
   filesystem tree provided by the server.  The server provides multiple
   filesystems by gluing them together with pseudo filesystems.  These
   pseudo filesystems provide for potential gaps in the path names
   between real filesystems.

1.5.3.1.  Filehandle Types

   In previous versions of the NFS protocol, the filehandle provided by
   the server was guaranteed to be valid or persistent for the lifetime
   of the filesystem object to which it referred.  For some server
   implementations, this persistence requirement has been difficult to
   meet.  For the NFS version 4 NFSv4 protocol, this requirement has been relaxed by
   introducing another type of filehandle, volatile.  With persistent
   and volatile filehandle types, the server implementation can match
   the abilities of the filesystem at the server along with the
   operating environment.  The client will have knowledge of the type of
   filehandle being provided by the server and can be prepared to deal
   with the semantics of each.

1.5.3.2.  Attribute Types

   The NFS version 4 NFSv4 protocol introduces three classes of filesystem or has a rich and extensible file attributes.  Like the additional filehandle type, the
   classification object attribute
   structure, which is divided into REQUIRED, RECOMMENDED, and named
   attributes (see Section 5).

   Several (but not all) of file the REQUIRED attributes has been done to ease server
   implementations along with extending are derived from the overall functionality
   attributes of NFSv3 (see definition of the
   NFS protocol.  This fattr3 data type in [14]).
   An example of a REQUIRED attribute model is structured to be extensible
   such the file object's type
   (Section 5.8.1.2) so that new attributes regular files can be introduced in minor revisions of the
   protocol without requiring significant rework.

   The three classifications are: mandatory, recommended and named
   attributes.  This is a significant departure distinguished from the previous
   attribute model used
   directories (also known as folders in the NFS protocol.  Previously, the attributes
   for the filesystem some operating environments)
   and file objects were a fixed set other types of mainly UNIX
   attributes.  If the server or client did not support a particular
   attribute, it would have to simulate the attribute the best it could.

   Mandatory objects.  REQUIRED attributes are the minimal set discussed in
   Section 5.1.

   An example of file or filesystem three RECOMMENDED attributes that must be provided by the server are acl, sacl, and must be properly
   represented by the server.  Recommended dacl.
   These attributes represent
   different filesystem types define an Access Control List (ACL) on a file object
   ((Section 6).  An ACL provides directory and operating environments. file access control
   beyond the model used in NFSv3.  The
   recommended attributes will allow ACL definition allows for better interoperability
   specification of specific sets of permissions for individual users
   and the
   inclusion groups.  In addition, ACL inheritance allows propagation of more operating environments.  The mandatory
   access permissions and
   recommended attribute sets are traditional restriction down a directory tree as file or filesystem
   attributes.  The third type of attribute is the named attribute.
   system objects are created.  RECOMMENDED attributes are discussed in
   Section 5.2.

   A named attribute is an opaque byte stream that is associated with a
   directory or file and referred to by a string name.  Named attributes
   are meant to be used by client applications as a method to associate
   application specific
   application-specific data with a regular file or directory.

   One significant addition  NFSv4.1
   modifies named attributes relative to NFSv4.0 by tightening the recommended set
   allowed operations in order to prevent the development of file non-
   interoperable implementations.  Named attributes is
   the Access Control List (ACL) attribute.  This attribute provides for
   directory and file access control beyond the model used are discussed in previous
   versions
   Section 5.3.

1.5.3.3.  Multi-server Namespace

   NFSv4 contains a number of the NFS protocol.  The ACL definition allows for
   specification features to allow implementation of user
   namespaces that cross server boundaries and group level access control.

1.5.3.3.  Filesystem Replication that allow and Migration

   With the use of facilitate
   a special non-disruptive transfer of support for individual file attribute, the ability systems
   between servers.  They are all based upon attributes that allow one
   file system to inform the
   client of filesystem locations on another server is enabled.  The
   filesystem specify alternate or new locations attribute provides a method for that file
   system.

   These attributes may be used together with the client to
   probe the server about the location concept of absent file
   systems, which provide specifications for additional locations but no
   actual file system content.  This allows a filesystem.  In the event
   that a fileystems is not present on number of important
   facilities:

   o  Location attributes may be used with absent file systems to
      implement referrals whereby one server may direct the client will receive an
   error when attempting to operate on the filesystem and it can then
   query as a
      file system provided by another server.  This allows extensive
      multi-server namespaces to be constructed.

   o  Location attributes may be provided for present file systems to
      provide the correct filesystem location.  Thus is allowed
   construction locations of multi-server namespaces..

   These features also allow alternate file system replication and migration.  In instances or
      replicas to be used in the event of that the current file system
      instance becomes unavailable.

   o  Location attributes may be provided when a previously present file
      system becomes absent.  This allows non-disruptive migration of a filesystem, the client will receive an
   error when operating on the filesystem and it can then query location
   attribute to determine the new
      file system location.  Similar steps
   are used for replication, the client is able to query the server for
   the multiple available locations of a particular filesystem.  From
   this information, the client can use its own policies systems to access the
   appropriate filesystem location. alternate servers.

1.5.4.  OPEN and CLOSE

   The NFS version 4 NFSv4 protocol introduces OPEN and CLOSE operations.  The OPEN
   operation provides a single point where file lookup, creation, and
   share semantics can be combined.  The CLOSE operation also provides
   for the release of state accumulated by OPEN.

1.5.5.  File Locking

   With the NFS version 4 NFSv4 protocol, the support for byte range file locking is
   part of the NFS protocol.  The file locking support is structured so
   that an RPC callback mechanism is not required.  This is a departure
   from the previous versions of the NFS file locking protocol, Network
   Lock Manager (NLM).  The state associated with file locks is
   maintained at the server under a lease-based model.  The server
   defines a single lease period for all state held by a NFS client.  If
   the client does not renew its lease within the defined period, all
   state associated with the client's lease may be released by the
   server.  The client may renew its lease with use of the RENEW
   operation or implicitly by use of other operations (primarily READ).

1.5.6.  Client Caching and Delegation

   The file, attribute, and directory caching for the NFS version 4 NFSv4 protocol is
   similar to previous versions.  Attributes and directory information
   are cached for a duration determined by the client.  At the end of a
   predefined timeout, the client will query the server to see if the
   related filesystem object has been updated.

   For file data, the client checks its cache validity when the file is
   opened.  A query is sent to the server to determine if the file has
   been changed.  Based on this information, the client determines if
   the data cache for the file should kept or released.  Also, when the
   file is closed, any modified data is written to the server.

   If an application wants to serialize access to file data, file
   locking of the file data ranges in question should be used.

   The major addition to NFS version 4 NFSv4 in the area of caching is the ability of
   the server to delegate certain responsibilities to the client.  When
   the server grants a delegation for a file to a client, the client is
   guaranteed certain semantics with respect to the sharing of that file
   with other clients.  At OPEN, the server may provide the client
   either a read OPEN_DELEGATE_READ or write OPEN_DELEGATE_WRITE delegation for the
   file.  If the client is granted a read OPEN_DELEGATE_READ delegation, it
   is assured that no other client has the ability to write to the file
   for the duration of the delegation.  If the client is granted a write
   OPEN_DELEGATE_WRITE delegation, the client is assured that no other
   client has read or write access to the file.

   Delegations can be recalled by the server.  If another client
   requests access to the file in such a way that the access conflicts
   with the granted delegation, the server is able to notify the initial
   client and recall the delegation.  This requires that a callback path
   exist between the server and client.  If this callback path does not
   exist, then delegations cannot be granted.  The essence of a
   delegation is that it allows the client to locally service operations
   such as OPEN, CLOSE, LOCK, LOCKU, READ, or WRITE without immediate
   interaction with the server.

1.6.  General Definitions

   The following definitions are provided for the purpose of providing
   an appropriate context for the reader.

   Byte  In this document, a byte is an octet, i.e., a datum exactly 8
      bits in length.

   Client  The "client" client is the entity that accesses the NFS server's
      resources.  The client may be an application which that contains the
      logic to access the NFS server directly.  The client may also be
      the traditional operating system client that provides remote
      filesystem services for a set of applications.

      In the case of file locking

      With reference to byte-range locking, the client is also the
      entity that maintains a set of locks on behalf of one or more
      applications.  This client is responsible for crash or failure
      recovery for those locks it manages.

      Note that multiple clients may share the same transport and
      connection and multiple clients may exist on the same network
      node.

   Clientid

   Client ID  A 64-bit quantity used as a unique, short-hand reference
      to a client supplied Verifier and ID.  The server is responsible
      for supplying the Clientid. Client ID.

   File System  The file system is the collection of objects on a server
      that share the same fsid attribute (see Section 5.8.1.9).

   Lease  An interval of time defined by the server for which the client
      is irrevocably granted a lock.  At the end of a lease period the
      lock may be revoked if the lease has not been extended.  The lock
      must be revoked if a conflicting lock has been granted after the
      lease interval.

      All leases granted by a server have the same fixed interval.  Note
      that the fixed interval was chosen to alleviate the expense a
      server would have in maintaining state about variable length
      leases across server failures.

   Lock  The term "lock" is used to refer to both record (byte-range)
      locks as well as share reservations unless specifically stated
      otherwise.

   Server  The "Server" is the entity responsible for coordinating
      client access to a set of filesystems.

   Stable Storage  NFS version 4  NFSv4 servers must be able to recover without data
      loss from multiple power failures (including cascading power
      failures, that is, several power failures in quick succession),
      operating system failures, and hardware failure of components
      other than the storage medium itself (for example, disk,
      nonvolatile RAM).

      Some examples of stable storage that are allowable for an NFS
      server include:

      1.  Media commit of data, that is, the modified data has been
          successfully written to the disk media, for example, the disk
          platter.

      2.  An immediate reply disk drive with battery-backed on-drive
          intermediate storage or uninterruptible power system (UPS).

      3.  Server commit of data with battery-backed intermediate storage
          and recovery software.

      4.  Cache commit with uninterruptible power system (UPS) and
          recovery software.

   Stateid  A stateid is a 128-bit quantity returned by a server that
      uniquely defines the open and locking state states provided by the
      server for a specific open open-owner or lock owner lock-owner/open-owner pair for
      a specific file.

      Stateids composed of all bits 0 or all bits 1 have special meaning file and are reserved values. type of lock.

   Verifier  A 64-bit quantity generated by the client that the server
      can use to determine if the client has restarted and lost all
      previous lock state.

2.  Protocol Data Types

   The syntax and semantics to describe the data types of the NFS
   version 4 protocol are defined in the XDR [15] and RPC [3] documents.
   The next sections build upon the XDR data types to define types and
   structures specific to this protocol.

2.1.  Basic Data Types

                   These are the base NFSv4 data types.

   +----------------+--------------------------------------------------+
   | Data Type      | Definition                                       |
   +----------------+--------------------------------------------------+
   | int32_t        | typedef int int32_t;                             |
   | uint32_t       | typedef unsigned int uint32_t;                   |
   | int64_t        | typedef hyper int64_t;                           |
   | uint64_t       | typedef unsigned hyper uint64_t;                 |
   | attrlist4      | typedef opaque attrlist4<>;                      |
   |                | Used for file/directory attributes.              |
   | bitmap4        | typedef uint32_t bitmap4<>;                      |
   |                | Used in attribute array encoding.                |
   | changeid4      | typedef uint64_t changeid4;                      |
   |                | Used in the definition of change_info4.          |
   | clientid4      | typedef uint64_t clientid4;                      |
   |                | Shorthand reference to client identification.    |
   | count4         | typedef uint32_t count4;                         |
   |                | Various count parameters (READ, WRITE, COMMIT).  |
   | length4        | typedef uint64_t length4;                        |
   |                | Describes LOCK lengths.                          |
   | mode4          | typedef uint32_t mode4;                          |
   |                | Mode attribute data type.                        |
   | nfs_cookie4    | typedef uint64_t nfs_cookie4;                    |
   |                | Opaque cookie value for READDIR.                 |
   | nfs_fh4        | typedef opaque nfs_fh4<NFS4_FHSIZE>;             |
   |                | Filehandle definition.                           |
   | nfs_ftype4     | enum nfs_ftype4;                                 |
   |                | Various defined file types.                      |
   | nfsstat4       | enum nfsstat4;                                   |
   |                | Return value for operations.                     |
   | offset4        | typedef uint64_t offset4;                        |
   |                | Various offset designations (READ, WRITE, LOCK,  |
   |                | COMMIT).                                         |
   | qop4           | typedef uint32_t qop4;                           |
   |                | Quality of protection designation in SECINFO.    |
   | sec_oid4       | typedef opaque sec_oid4<>;                       |
   |                | Security Object Identifier.  The sec_oid4 data   |
   |                | type is not really opaque.  Instead it contains  |
   |                | an ASN.1 OBJECT IDENTIFIER as used by GSS-API in |
   |                | the mech_type argument to GSS_Init_sec_context.  |
   |                | See [6] for details.                             |
   | seqid4         | typedef uint32_t seqid4;                         |
   |                | Sequence identifier used for file locking.       |
   | utf8string     | typedef opaque utf8string<>;                     |
   |                | UTF-8 encoding for strings.                      |
   | utf8_should    | typedef utf8string utf8_should;                  |
   |                | String expected to be UTF8 but no validation     |
   | utf8val_should | typedef utf8string utf8val_should;               |
   |                | String SHOULD be sent UTF8 and SHOULD be         |
   |                | validated                                        |
   | utf8val_must   | typedef utf8string utf8val_must;                 |
   |                | String MUST be sent UTF8 and MUST be validated   |
   | ascii_must     | typedef utf8string ascii_must;                   |
   |                | String MUST be sent as ASCII and thus is         |
   |                | automatically UTF8                               |
   | comptag4       | typedef utf8_should comptag4;                    |
   |                | Tag should be UTF8 but is not checked            |
   | component4     | typedef utf8val_should component4;               |
   |                | Represents path name components.                 |
   | linktext4      | typedef utf8val_should linktext4;                |
   |                | Symbolic link contents.                          |
   | pathname4      | typedef component4 pathname4<>;                  |
   |                | Represents path name for fs_locations.           |
   | nfs_lockid4    | typedef uint64_t nfs_lockid4;                    |
   | verifier4      | typedef opaque verifier4[NFS4_VERIFIER_SIZE];    |
   |                | Verifier used for various operations (COMMIT,    |
   |                | CREATE, EXCHANGE_ID, OPEN, READDIR, WRITE)       |
   |                | NFS4_VERIFIER_SIZE is defined as 8.              |
   +----------------+--------------------------------------------------+
                          End of Base Data Types

                                  Table 1

2.2.  Structured Data Types

2.2.1.  nfstime4

   struct nfstime4 {
           int64_t         seconds;
           uint32_t        nseconds;
   };

   The nfstime4 structure gives the number of seconds and nanoseconds
   since midnight or 0 hour January 1, 1970 Coordinated Universal Time
   (UTC).  Values greater than zero for the seconds field denote dates
   after the 0 hour January 1, 1970.  Values less than zero for the
   seconds field denote dates before the 0 hour January 1, 1970.  In
   both cases, the nseconds field is to be added to the seconds field
   for the final time representation.  For example, if the time to be
   represented is one-half second before 0 hour January 1, 1970, the
   seconds field would have a value of negative one (-1) and the
   nseconds fields would have a value of one-half second (500000000).
   Values greater than 999,999,999 for nseconds are considered invalid.

   This data type is used to pass time and date information.  A server
   converts to and from its local representation of time when processing
   time values, preserving as much accuracy as possible.  If the
   precision of timestamps stored for a filesystem object is less than
   defined, loss of precision can occur.  An adjunct time maintenance
   protocol is recommended to reduce client and server time skew.

2.2.2.  time_how4

   enum time_how4 {
           SET_TO_SERVER_TIME4 = 0,
           SET_TO_CLIENT_TIME4 = 1
   };

2.2.3.  settime4

   union settime4 switch (time_how4 set_it) {
    case SET_TO_CLIENT_TIME4:
            nfstime4       time;
    default:
            void;
   };
   The above definitions are used as the attribute definitions to set
   time values.  If set_it is SET_TO_SERVER_TIME4, then the server uses
   its local representation of time for the time value.

2.2.4.  specdata4

   struct specdata4 {
    uint32_t specdata1; /* major device number */
    uint32_t specdata2; /* minor device number */
   };

   This data type represents additional information for the device file
   types NF4CHR and NF4BLK.

2.2.5.  fsid4

   struct fsid4 {
           uint64_t        major;
           uint64_t        minor;
   };

   This type is the filesystem identifier that is used as a mandatory
   attribute.

2.2.6.  fs_location4

   struct fs_location4 {
           utf8must        server<>;
           pathname4       rootpath;
   };

2.2.7.  fs_locations4

   struct fs_locations4 {
           pathname4       fs_root;
           fs_location4    locations<>;
   };

   The fs_location4 and fs_locations4 data types are used for the
   fs_locations recommended attribute which is used for migration and
   replication support.

2.2.8.  fattr4

   struct fattr4 {
           bitmap4         attrmask;
           attrlist4       attr_vals;
   };
   The fattr4 structure is used to represent file and directory
   attributes.

   The bitmap is a counted array of 32 bit integers used to contain bit
   values.  The position of the integer in the array that contains bit n
   can be computed from the expression (n / 32) and its bit within that
   integer is (n mod 32).

                     0            1
   +-----------+-----------+-----------+--
   |  count    | 31  ..  0 | 63  .. 32 |
   +-----------+-----------+-----------+--

2.2.9.  change_info4

   struct change_info4 {
           bool            atomic;
           changeid4       before;
           changeid4       after;
   };

   This structure is used with the CREATE, LINK, REMOVE, RENAME
   operations to let the client know the value of the change attribute
   for the directory in which the target filesystem object resides.

2.2.10.  clientaddr4

   struct clientaddr4 {
           /* see struct rpcb in RFC 1833 */
           string r_netid<>;    /* network id */
           string r_addr<>;     /* universal address */
   };

   The clientaddr4 structure is used as part of the SETCLIENTID
   operation to either specify the address of the client that is using a
   clientid
   client ID or as part of the callback registration.  The r_netid and
   r_addr fields are specified in [17], but they are underspecified in
   [17] as far as what they should look like for specific protocols.

   For TCP over IPv4 and for UDP over IPv4, the format of r_addr is the
   US-ASCII string:

   h1.h2.h3.h4.p1.p2

   The prefix, "h1.h2.h3.h4", is the standard textual form for
   representing an IPv4 address, which is always four octets long.

   Assuming big-endian ordering, h1, h2, h3, and h4, are respectively,
   the first through fourth octets each converted to ASCII-decimal.
   Assuming big-endian ordering, p1 and p2 are, respectively, the first
   and second octets each converted to ASCII-decimal.  For example, if a
   host, in big-endian order, has an address of 0x0A010307 and there is
   a service listening on, in big endian order, port 0x020F (decimal
   527), then the complete universal address is "10.1.3.7.2.15".

   For TCP over IPv4 the value of r_netid is the string "tcp".  For UDP
   over IPv4 the value of r_netid is the string "udp".

   For TCP over IPv6 and for UDP over IPv6, the format of r_addr is the
   US-ASCII string:

   x1:x2:x3:x4:x5:x6:x7:x8.p1.p2

   The suffix "p1.p2" is the service port, and is computed the same way
   as with universal addresses for TCP and UDP over IPv4.  The prefix,
   "x1:x2:x3:x4:x5:x6:x7:x8", is the standard textual form for
   representing an IPv6 address as defined in Section 2.2 of [18].
   Additionally, the two alternative forms specified in Section 2.2 of
   [18] are also acceptable.

   For TCP over IPv6 the value of r_netid is the string "tcp6".  For UDP
   over IPv6 the value of r_netid is the string "udp6".

2.2.11.  cb_client4

   struct cb_client4 {
           unsigned int    cb_program;
           clientaddr4     cb_location;
   };

   This structure is used by the client to inform the server of its call
   back address; includes the program number and client address.

2.2.12.  nfs_client_id4

   struct nfs_client_id4 {
           verifier4       verifier;
           opaque          id<NFS4_OPAQUE_LIMIT>;
   };

   This structure is part of the arguments to the SETCLIENTID operation.
   NFS4_OPAQUE_LIMIT is defined as 1024.

2.2.13.  open_owner4

   struct open_owner4 {
           clientid4       clientid;
           opaque          owner<NFS4_OPAQUE_LIMIT>;
   };

   This structure is used to identify the owner of open state.
   NFS4_OPAQUE_LIMIT is defined as 1024.

2.2.14.  lock_owner4

   struct lock_owner4 {
           clientid4       clientid;
           opaque          owner<NFS4_OPAQUE_LIMIT>;
   };

   This structure is used to identify the owner of file locking state.
   NFS4_OPAQUE_LIMIT is defined as 1024.

2.2.15.  open_to_lock_owner4

   struct open_to_lock_owner4 {
           seqid4          open_seqid;
           stateid4        open_stateid;
           seqid4          lock_seqid;
           lock_owner4     lock_owner;
   };

   This structure is used for the first LOCK operation done for an
   open_owner4.  It provides both the open_stateid and lock_owner such
   that the transition is made from a valid open_stateid sequence to
   that of the new lock_stateid sequence.  Using this mechanism avoids
   the confirmation of the lock_owner/lock_seqid pair since it is tied
   to established state in the form of the open_stateid/open_seqid.

2.2.16.  stateid4

   struct stateid4 {
           uint32_t        seqid;
           opaque          other[12];
   };

   This structure is used for the various state sharing mechanisms
   between the client and server.  For the client, this data structure
   is read-only.  The starting value of the seqid field is undefined.
   The server is required to increment the seqid field monotonically at
   each transition of the stateid.  This is important since the client
   will inspect the seqid in OPEN stateids to determine the order of
   OPEN processing done by the server.

3.  RPC and Security Flavor

   The NFS version 4 NFSv4 protocol is a Remote Procedure Call (RPC) application that
   uses RPC version 2 and the corresponding eXternal Data Representation
   (XDR) as defined in [3] and [15].  The RPCSEC_GSS security flavor as
   defined in [4] MUST be used as the mechanism to deliver stronger
   security for the NFS version 4 NFSv4 protocol.

3.1.  Ports and Transports

   Historically, NFS version 2 NFSv2 and version 3 NFSv3 servers have resided on port 2049.  The
   registered port 2049 [19] for the NFS protocol should SHOULD be the default
   configuration.  Using the registered port for NFS services means the
   NFS client will not need to use the RPC binding protocols as
   described in [17]; this will allow NFS to transit firewalls.

   Where an NFS version 4 NFSv4 implementation supports operation over the IP network
   protocol, the supported transports between NFS and IP MUST be among
   the IETF-approved congestion control transport protocols, which
   include TCP and SCTP.  To enhance the possibilities for
   interoperability, an NFS version 4 NFSv4 implementation MUST support operation over
   the TCP transport protocol, at least until such time as a standards
   track RFC revises this requirement to use a different IETF-approved
   congestion control transport protocol.

   If TCP is used as the transport, the client and server SHOULD use
   persistent connections.  This will prevent the weakening of TCP's
   congestion control via short lived connections and will improve
   performance for the WAN environment by eliminating the need for SYN
   handshakes.

   As noted in Section 17, the authentication model for NFS version 4 NFSv4 has moved
   from machine-based to principal-based.  However, this modification of
   the authentication model does not imply a technical requirement to
   move the TCP connection management model from whole machine-based to
   one based on a per user model.  In particular, NFS over TCP client
   implementations have traditionally multiplexed traffic for multiple
   users over a common TCP connection between an NFS client and server.
   This has been true, regardless whether the NFS client is using
   AUTH_SYS, AUTH_DH, RPCSEC_GSS or any other flavor.  Similarly, NFS
   over TCP server implementations have assumed such a model and thus
   scale the implementation of TCP connection management in proportion
   to the number of expected client machines.  It is intended that NFS version 4 NFSv4
   will not modify this connection management model.  NFS version 4  NFSv4 clients that
   violate this assumption can expect scaling issues on the server and
   hence reduced service.

   Note that for various timers, the client and server should avoid
   inadvertent synchronization of those timers.  For further discussion
   of the general issue refer to [20].

3.1.1.  Client Retransmission Behavior

   When processing a request received over a reliable transport such as
   TCP, the NFS version 4 NFSv4 server MUST NOT silently drop the request, except if
   the transport connection has been broken.  Given such a contract
   between NFS version 4 NFSv4 clients and servers, clients MUST NOT retry a request
   unless one or both of the following are true:

   o  The transport connection has been broken

   o  The procedure being retried is the NULL procedure

   Since reliable transports, such as TCP, do not always synchronously
   inform a peer when the other peer has broken the connection (for
   example, when an NFS server reboots), the NFS version 4 NFSv4 client may want to
   actively "probe" the connection to see if has been broken.  Use of
   the NULL procedure is one recommended way to do so.  So, when a
   client experiences a remote procedure call timeout (of some arbitrary
   implementation specific amount), rather than retrying the remote
   procedure call, it could instead issue a NULL procedure call to the
   server.  If the server has died, the transport connection break will
   eventually be indicated to the NFS version 4 NFSv4 client.  The client can then
   reconnect, and then retry the original request.  If the NULL
   procedure call gets a response, the connection has not broken.  The
   client can decide to wait longer for the original request's response,
   or it can break the transport connection and reconnect before re-sending re-
   sending the original request.

   For callbacks from the server to the client, the same rules apply,
   but the server doing the callback becomes the client, and the client
   receiving the callback becomes the server.

3.2.  Security Flavors

   Traditional RPC implementations have included AUTH_NONE, AUTH_SYS,
   AUTH_DH, and AUTH_KRB4 as security flavors.  With [4] an additional
   security flavor of RPCSEC_GSS has been introduced which uses the
   functionality of GSS-API [6].  This allows for the use of various
   security mechanisms by the RPC layer without the additional
   implementation overhead of adding RPC security flavors.  For NFS
   version 4, NFSv4,
   the RPCSEC_GSS security flavor MUST be used to enable the mandatory
   security mechanism.  Other flavors, such as, AUTH_NONE, AUTH_SYS, and
   AUTH_DH MAY be implemented as well.

3.2.1.  Security mechanisms for NFS version 4 NFSv4

   The use of RPCSEC_GSS requires selection of: mechanism, quality of
   protection, and service (authentication, integrity, privacy).  The
   remainder of this document will refer to these three parameters of
   the RPCSEC_GSS security as the security triple.

3.2.1.1.  Kerberos V5 as a security triple

   The Kerberos V5 GSS-API mechanism as described in [16] MUST be
   implemented and provide the following security triples.

   column descriptions:

   1 == number of pseudo flavor
   2 == name of pseudo flavor
   3 == mechanism's OID
   4 == mechanism's algorithm(s)
   5 == RPCSEC_GSS service

   1      2     3                    4             5
   --------------------------------------------------------------------
   390003 krb5  1.2.840.113554.1.2.2 DES MAC MD5   rpc_gss_svc_none
   390004 krb5i 1.2.840.113554.1.2.2 DES MAC MD5   rpc_gss_svc_integrity
   390005 krb5p 1.2.840.113554.1.2.2 DES MAC MD5   rpc_gss_svc_privacy
                                     for integrity,
                                     and 56 bit DES
                                     for privacy.

   Note that the pseudo flavor is presented here as a mapping aid to the
   implementor.  Because this NFS protocol includes a method to
   negotiate security and it understands the GSS-API mechanism, the
   pseudo flavor is not needed.  The pseudo flavor is needed for NFS
   version 3 NFSv3
   since the security negotiation is done via the MOUNT protocol.

   For a discussion of NFS' use of RPCSEC_GSS and Kerberos V5, please
   see [21].

   Users and implementors are warned that 56 bit DES is no longer
   considered state of the art in terms of resistance to brute force
   attacks.  Once a revision to [16] is available that adds support for
   AES, implementors are urged to incorporate AES into their NFSv4 over
   Kerberos V5 protocol stacks, and users are similarly urged to migrate
   to the use of AES.

3.2.1.2.  LIPKEY as a security triple

   The LIPKEY GSS-API mechanism as described in [5] MAY be implemented
   and provide the following security triples.  The definition of the
   columns matches those in Section 3.2.1.1.

   1      2        3                   4              5
   --------------------------------------------------------------------
   390006 lipkey   1.3.6.1.5.5.9       negotiated  rpc_gss_svc_none
   390007 lipkey-i 1.3.6.1.5.5.9       negotiated  rpc_gss_svc_integrity
   390008 lipkey-p 1.3.6.1.5.5.9       negotiated  rpc_gss_svc_privacy

   The mechanism algorithm is listed as "negotiated".  This is because
   LIPKEY is layered on SPKM-3 and in SPKM-3 [5] the confidentiality and
   integrity algorithms are negotiated.  Since SPKM-3 specifies HMAC-MD5
   for integrity as MANDATORY, 128 bit cast5CBC for confidentiality for
   privacy as MANDATORY, and further specifies that HMAC-MD5 and
   cast5CBC MUST be listed first before weaker algorithms, specifying
   "negotiated" in column 4 does not impair interoperability.  In the
   event an SPKM-3 peer does not support the mandatory algorithms, the
   other peer is free to accept or reject the GSS-API context creation.

   Because SPKM-3 negotiates the algorithms, subsequent calls to
   LIPKEY's GSS_Wrap() and GSS_GetMIC() by RPCSEC_GSS will use a quality
   of protection value of 0 (zero).  See section 5.2 of [22] for an
   explanation.

   LIPKEY uses SPKM-3 to create a secure channel in which to pass a user
   name and password from the client to the server.  Once the user name
   and password have been accepted by the server, calls to the LIPKEY
   context are redirected to the SPKM-3 context.  See [5] for more
   details.

3.2.1.3.  SPKM-3 as a security triple

   The SPKM-3 GSS-API mechanism as described in [5] MAY be implemented
   and provide the following security triples.  The definition of the
   columns matches those in Section 3.2.1.1.

   1      2        3                   4              5
   --------------------------------------------------------------------
   390009 spkm3    1.3.6.1.5.5.1.3     negotiated  rpc_gss_svc_none
   390010 spkm3i   1.3.6.1.5.5.1.3     negotiated  rpc_gss_svc_integrity
   390011 spkm3p   1.3.6.1.5.5.1.3     negotiated  rpc_gss_svc_privacy

   For a discussion as to why the mechanism algorithm is listed as
   "negotiated", see Section 3.2.1.2.

   Because SPKM-3 negotiates the algorithms, subsequent calls to
   SPKM-3's GSS_Wrap() and GSS_GetMIC() by RPCSEC_GSS will use a quality
   of protection value of 0 (zero).  See section 5.2 of [22] for an
   explanation.

   Even though LIPKEY is layered over SPKM-3, SPKM-3 is specified as a
   mandatory set of triples to handle the situations where the initiator
   (the client) is anonymous or where the initiator has its own
   certificate.  If the initiator is anonymous, there will not be a user
   name and password to send to the target (the server).  If the
   initiator has its own certificate, then using passwords is
   superfluous.

3.3.  Security Negotiation

   With the NFS version 4 NFSv4 server potentially offering multiple security
   mechanisms, the client needs a method to determine or negotiate which
   mechanism is to be used for its communication with the server.  The
   NFS server may have multiple points within its filesystem name space
   that are available for use by NFS clients.  In turn the NFS server
   may be configured such that each of these entry points may have
   different or multiple security mechanisms in use.

   The security negotiation between client and server must SHOULD be done
   with a secure channel to eliminate the possibility of a third party
   intercepting the negotiation sequence and forcing the client and
   server to choose a lower level of security than required or desired.
   See Section 17 for further discussion.

3.3.1.  SECINFO

   The new SECINFO operation will allow the client to determine, on a
   per filehandle basis, what security triple is to be used for server
   access.  In general, the client will not have to use the SECINFO
   operation except during initial communication with the server or when
   the client crosses policy boundaries at the server.  It is possible
   that the server's policies change during the client's interaction
   therefore forcing the client to negotiate a new security triple.

3.3.2.  Security Error

   Based on the assumption that each NFS version 4 NFSv4 client and server
   must MUST
   support a minimum set of security (i.e., LIPKEY, SPKM-3, and
   Kerberos-V5 all under RPCSEC_GSS), the NFS client will start its
   communication with the server with one of the minimal security
   triples.  During communication with the server, the client may
   receive an NFS error of NFS4ERR_WRONGSEC.  This error allows the
   server to notify the client that the security triple currently being
   used is not appropriate for access to the server's filesystem
   resources.  The client is then responsible for determining what
   security triples are available at the server and choose one which is
   appropriate for the client.  See Section 15.33 for further discussion
   of how the client will respond to the NFS4ERR_WRONGSEC error and use
   SECINFO.

3.3.3.  Callback RPC Authentication

   Except as noted elsewhere in this section, the callback RPC
   (described later) MUST mutually authenticate the NFS server to the
   principal that acquired the clientid client ID (also described later), using
   the security flavor the original SETCLIENTID operation used.

   For AUTH_NONE, there are no principals, so this is a non-issue.

   AUTH_SYS has no notions of mutual authentication or a server
   principal, so the callback from the server simply uses the AUTH_SYS
   credential that the user used when he set up the delegation.

   For AUTH_DH, one commonly used convention is that the server uses the
   credential corresponding to this AUTH_DH principal:

   unix.host@domain

   where host and domain are variables corresponding to the name of
   server host and directory services domain in which it lives such as a
   Network Information System domain or a DNS domain.

   Because LIPKEY is layered over SPKM-3, it is permissible for the
   server to use SPKM-3 and not LIPKEY for the callback even if the
   client used LIPKEY for SETCLIENTID.

   Regardless of what security mechanism under RPCSEC_GSS is being used,
   the NFS server, MUST identify itself in GSS-API via a
   GSS_C_NT_HOSTBASED_SERVICE name type.  GSS_C_NT_HOSTBASED_SERVICE
   names are of the form:

   service@hostname

   For NFS, the "service" element is

   nfs

   Implementations of security mechanisms will convert nfs@hostname to
   various different forms.  For Kerberos V5 and LIPKEY, the following
   form is RECOMMENDED:

   nfs/hostname

   For Kerberos V5, nfs/hostname would be a server principal in the
   Kerberos Key Distribution Center database.  This is the same
   principal the client acquired a GSS-API context for when it issued
   the SETCLIENTID operation, therefore, the realm name for the server
   principal must be the same for the callback as it was for the
   SETCLIENTID.

   For LIPKEY, this would be the username passed to the target (the NFS
   version 4 client that receives the callback).

   It should be noted that LIPKEY may not work for callbacks, since the
   LIPKEY client uses a user id/password.  If the NFS client receiving
   the callback can authenticate the NFS server's user name/password
   pair, and if the user that the NFS server is authenticating to has a
   public key certificate, then it works.

   In situations where the NFS client uses LIPKEY and uses a per-host
   principal for the SETCLIENTID operation, instead of using LIPKEY for
   SETCLIENTID, it is RECOMMENDED that SPKM-3 with mutual authentication
   be used.  This effectively means that the client will use a
   certificate to authenticate and identify the initiator to the target
   on the NFS server.  Using SPKM-3 and not LIPKEY has the following
   advantages:

   o  When the server does a callback, it must authenticate to the
      principal used in the SETCLIENTID.  Even if LIPKEY is used,
      because LIPKEY is layered over SPKM-3, the NFS client will need to
      have a certificate that corresponds to the principal used in the
      SETCLIENTID operation.  From an administrative perspective, having
      a user name, password, and certificate for both the client and
      server is redundant.

   o  LIPKEY was intended to minimize additional infrastructure
      requirements beyond a certificate for the target, and the
      expectation is that existing password infrastructure can be
      leveraged for the initiator.  In some environments, a per-host
      password does not exist yet.  If certificates are used for any
      per-host principals, then additional password infrastructure is
      not needed.

   o  In cases when a host is both an NFS client and server, it can
      share the same per-host certificate.

4.  Filehandles

   The filehandle in the NFS protocol is a per server unique identifier
   for a filesystem object.  The contents of the filehandle are opaque
   to the client.  Therefore, the server is responsible for translating
   the filehandle to an internal representation of the filesystem
   object.

4.1.  Obtaining the First Filehandle

   The operations of the NFS protocol are defined in terms of one or
   more filehandles.  Therefore, the client needs a filehandle to
   initiate communication with the server.  With the NFS version 2 NFSv2 protocol [13]
   and the NFS version 3 NFSv3 protocol [14], there exists an ancillary protocol to
   obtain this first filehandle.  The MOUNT protocol, RPC program number
   100005, provides the mechanism of translating a string based
   filesystem path name to a filehandle which can then be used by the
   NFS protocols.

   The MOUNT protocol has deficiencies in the area of security and use
   via firewalls.  This is one reason that the use of the public
   filehandle was introduced in [23] and [24].  With the use of the
   public filehandle in combination with the LOOKUP operation in the NFS
   version 2
   NFSv2 and 3 NFSv3 protocols, it has been demonstrated that the MOUNT
   protocol is unnecessary for viable interaction between NFS client and
   server.

   Therefore, the NFS version 4 NFSv4 protocol will not use an ancillary protocol for
   translation from string based path names to a filehandle.  Two
   special filehandles will be used as starting points for the NFS
   client.

4.1.1.  Root Filehandle

   The first of the special filehandles is the ROOT filehandle.  The
   ROOT filehandle is the "conceptual" root of the filesystem name space
   at the NFS server.  The client uses or starts with the ROOT
   filehandle by employing the PUTROOTFH operation.  The PUTROOTFH
   operation instructs the server to set the "current" filehandle to the
   ROOT of the server's file tree.  Once this PUTROOTFH operation is
   used, the client can then traverse the entirety of the server's file
   tree with the LOOKUP operation.  A complete discussion of the server
   name space is in Section 8.

4.1.2.  Public Filehandle

   The second special filehandle is the PUBLIC filehandle.  Unlike the
   ROOT filehandle, the PUBLIC filehandle may be bound or represent an
   arbitrary filesystem object at the server.  The server is responsible
   for this binding.  It may be that the PUBLIC filehandle and the ROOT
   filehandle refer to the same filesystem object.  However, it is up to
   the administrative software at the server and the policies of the
   server administrator to define the binding of the PUBLIC filehandle
   and server filesystem object.  The client may not make any
   assumptions about this binding.  The client uses the PUBLIC
   filehandle via the PUTPUBFH operation.

4.2.  Filehandle Types

   In the NFS version 2 NFSv2 and 3 NFSv3 protocols, there was one type of filehandle
   with a single set of semantics.  This type of filehandle is termed
   "persistent" in NFS Version 4.  The semantics of a persistent
   filehandle remain the same as before.  A new type of filehandle
   introduced in NFS Version 4 is the "volatile" filehandle, which
   attempts to accommodate certain server environments.

   The volatile filehandle type was introduced to address server
   functionality or implementation issues which make correct
   implementation of a persistent filehandle infeasible.  Some server
   environments do not provide a filesystem level invariant that can be
   used to construct a persistent filehandle.  The underlying server
   filesystem may not provide the invariant or the server's filesystem
   programming interfaces may not provide access to the needed
   invariant.  Volatile filehandles may ease the implementation of
   server functionality such as hierarchical storage management or
   filesystem reorganization or migration.  However, the volatile
   filehandle increases the implementation burden for the client.

   Since the client will need to handle persistent and volatile
   filehandles differently, a file attribute is defined which may be
   used by the client to determine the filehandle types being returned
   by the server.

4.2.1.  General Properties of a Filehandle

   The filehandle contains all the information the server needs to
   distinguish an individual file.  To the client, the filehandle is
   opaque.  The client stores filehandles for use in a later request and
   can compare two filehandles from the same server for equality by
   doing a byte-by-byte comparison.  However, the client MUST NOT
   otherwise interpret the contents of filehandles.  If two filehandles
   from the same server are equal, they MUST refer to the same file.
   Servers SHOULD try to maintain a one-to-one correspondence between
   filehandles and files but this is not required.  Clients MUST use
   filehandle comparisons only to improve performance, not for correct
   behavior.  All clients need to be prepared for situations in which it
   cannot be determined whether two filehandles denote the same object
   and in such cases, avoid making invalid assumptions which might cause
   incorrect behavior.  Further discussion of filehandle and attribute
   comparison in the context of data caching is presented in
   Section 10.3.4.

   As an example, in the case that two different path names when
   traversed at the server terminate at the same filesystem object, the
   server SHOULD return the same filehandle for each path.  This can
   occur if a hard link is used to create two file names which refer to
   the same underlying file object and associated data.  For example, if
   paths /a/b/c and /a/d/c refer to the same file, the server SHOULD
   return the same filehandle for both path names traversals.

4.2.2.  Persistent Filehandle

   A persistent filehandle is defined as having a fixed value for the
   lifetime of the filesystem object to which it refers.  Once the
   server creates the filehandle for a filesystem object, the server
   MUST accept the same filehandle for the object for the lifetime of
   the object.  If the server restarts or reboots the NFS server must
   honor the same filehandle value as it did in the server's previous
   instantiation.  Similarly, if the filesystem is migrated, the new NFS
   server must honor the same filehandle as the old NFS server.

   The persistent filehandle will be become stale or invalid when the
   filesystem object is removed.  When the server is presented with a
   persistent filehandle that refers to a deleted object, it MUST return
   an error of NFS4ERR_STALE.  A filehandle may become stale when the
   filesystem containing the object is no longer available.  The file
   system may become unavailable if it exists on removable media and the
   media is no longer available at the server or the filesystem in whole
   has been destroyed or the filesystem has simply been removed from the
   server's name space (i.e., unmounted in a UNIX environment).

4.2.3.  Volatile Filehandle

   A volatile filehandle does not share the same longevity
   characteristics of a persistent filehandle.  The server may determine
   that a volatile filehandle is no longer valid at many different
   points in time.  If the server can definitively determine that a
   volatile filehandle refers to an object that has been removed, the
   server should return NFS4ERR_STALE to the client (as is the case for
   persistent filehandles).  In all other cases where the server
   determines that a volatile filehandle can no longer be used, it
   should return an error of NFS4ERR_FHEXPIRED.

   The mandatory attribute "fh_expire_type" is used by the client to
   determine what type of filehandle the server is providing for a
   particular filesystem.  This attribute is a bitmask with the
   following values:

   FH4_PERSISTENT  The value of FH4_PERSISTENT is used to indicate a
      persistent filehandle, which is valid until the object is removed
      from the filesystem.  The server will not return NFS4ERR_FHEXPIRED
      for this filehandle.  FH4_PERSISTENT is defined as a value in
      which none of the bits specified below are set.

   FH4_VOLATILE_ANY  The filehandle may expire at any time, except as
      specifically excluded (i.e., FH4_NOEXPIRE_WITH_OPEN).

   FH4_NOEXPIRE_WITH_OPEN  May only be set when FH4_VOLATILE_ANY is set.
      If this bit is set, then the meaning of FH4_VOLATILE_ANY is
      qualified to exclude any expiration of the filehandle when it is
      open.

   FH4_VOL_MIGRATION  The filehandle will expire as a result of
      migration.  If FH4_VOLATILE_ANY is set, FH4_VOL_MIGRATION is
      redundant.

   FH4_VOL_RENAME  The filehandle will expire during rename.  This
      includes a rename by the requesting client or a rename by any
      other client.  If FH4_VOLATILE_ANY is set, FH4_VOL_RENAME is
      redundant.

   Servers which provide volatile filehandles that may expire while open
   (i.e., if FH4_VOL_MIGRATION or FH4_VOL_RENAME is set or if
   FH4_VOLATILE_ANY is set and FH4_NOEXPIRE_WITH_OPEN not set), should
   deny a RENAME or REMOVE that would affect an OPEN file of any of the
   components leading to the OPEN file.  In addition, the server should
   deny all RENAME or REMOVE requests during the grace period upon
   server restart.

   Note that the bits FH4_VOL_MIGRATION and FH4_VOL_RENAME allow the
   client to determine that expiration has occurred whenever a specific
   event occurs, without an explicit filehandle expiration error from
   the server.  FH4_VOLATILE_ANY does not provide this form of
   information.  In situations where the server will expire many, but
   not all filehandles upon migration (e.g., all but those that are
   open), FH4_VOLATILE_ANY (in this case with FH4_NOEXPIRE_WITH_OPEN) is
   a better choice since the client may not assume that all filehandles
   will expire when migration occurs, and it is likely that additional
   expirations will occur (as a result of file CLOSE) that are separated
   in time from the migration event itself.

4.2.4.  One Method of Constructing a Volatile Filehandle

   A volatile filehandle, while opaque to the client could contain:

   [volatile bit = 1 | server boot time | slot | generation number]

   o  slot is an index in the server volatile filehandle table

   o  generation number is the generation number for the table entry/
      slot

   When the client presents a volatile filehandle, the server makes the
   following checks, which assume that the check for the volatile bit
   has passed.  If the server boot time is less than the current server
   boot time, return NFS4ERR_FHEXPIRED.  If slot is out of range, return
   NFS4ERR_BADHANDLE.  If the generation number does not match, return
   NFS4ERR_FHEXPIRED.

   When the server reboots, the table is gone (it is volatile).

   If volatile bit is 0, then it is a persistent filehandle with a
   different structure following it.

4.3.  Client Recovery from Filehandle Expiration

   If possible, the client SHOULD recover from the receipt of an
   NFS4ERR_FHEXPIRED error.  The client must take on additional
   responsibility so that it may prepare itself to recover from the
   expiration of a volatile filehandle.  If the server returns
   persistent filehandles, the client does not need these additional
   steps.

   For volatile filehandles, most commonly the client will need to store
   the component names leading up to and including the filesystem object
   in question.  With these names, the client should be able to recover
   by finding a filehandle in the name space that is still available or
   by starting at the root of the server's filesystem name space.

   If the expired filehandle refers to an object that has been removed
   from the filesystem, obviously the client will not be able to recover
   from the expired filehandle.

   It is also possible that the expired filehandle refers to a file that
   has been renamed.  If the file was renamed by another client, again
   it is possible that the original client will not be able to recover.
   However, in the case that the client itself is renaming the file and
   the file is open, it is possible that the client may be able to
   recover.  The client can determine the new path name based on the
   processing of the rename request.  The client can then regenerate the
   new filehandle based on the new path name.  The client could also use
   the compound operation mechanism to construct a set of operations
   like:

   RENAME A B
   LOOKUP B
   GETFH

   Note that the COMPOUND procedure does not provide atomicity.  This
   example only reduces the overhead of recovering from an expired
   filehandle.

5.  File Attributes

   To meet the requirements of extensibility and increased
   interoperability with non-UNIX platforms, attributes need to be
   handled in a flexible manner.  The NFSv3 fattr3 structure contains a
   fixed list of attributes that not all clients and servers are able to
   support or care about.  The fattr3 structure cannot be extended as
   new needs arise and it provides no way to indicate non-support.  With
   the NFSv4.0 protocol, the client is able to query what attributes the
   server supports and construct requests with only those supported
   attributes (or a subset thereof).

   To this end, attributes are divided into three groups: REQUIRED,
   RECOMMENDED, and named.  Both REQUIRED and RECOMMENDED attributes are
   supported in the NFSv4.0 protocol by a specific and well-defined
   encoding and are identified by number.  They are requested by setting
   a bit in the bit vector sent in the GETATTR request; the server
   response includes a bit vector to list what attributes were returned
   in the response.  New REQUIRED or RECOMMENDED attributes may be added
   to the NFSv4 protocol as part of a new minor version by publishing a
   Standards Track RFC which allocates a new attribute number value and
   defines the encoding for the attribute.  See Section 11 for further
   discussion.

   Named attributes are accessed by the new OPENATTR operation, which
   accesses a hidden directory of attributes associated with a file
   system object.  OPENATTR takes a filehandle for the object and
   returns the filehandle for the attribute hierarchy.  The filehandle
   for the named attributes is a directory object accessible by LOOKUP
   or READDIR and contains files whose names represent the named
   attributes and whose data bytes are the value of the attribute.  For
   example:

        +----------+-----------+---------------------------------+
        | LOOKUP   | "foo"     | ; look up file                  |
        | GETATTR  | attrbits  |                                 |
        | OPENATTR |           | ; access foo's named attributes |
        | LOOKUP   | "x11icon" | ; look up specific attribute    |
        | READ     | 0,4096    | ; read stream of bytes          |
        +----------+-----------+---------------------------------+

   Named attributes are intended for data needed by applications rather
   than by an NFS client implementation.  NFS implementors are strongly
   encouraged to define their new attributes as RECOMMENDED attributes
   by bringing them to the IETF Standards Track process.

   The set of attributes that are classified as REQUIRED is deliberately
   small since servers need to do whatever it takes to support them.  A
   server should support as many of the RECOMMENDED attributes as
   possible but, by their definition, the server is not required to
   support all of them.  Attributes are deemed REQUIRED if the data is
   both needed by a large number of clients and is not otherwise
   reasonably computable by the client when support is not provided on
   the server.

   Note that the hidden directory returned by OPENATTR is a convenience
   for protocol processing.  The client should not make any assumptions
   about the server's implementation of named attributes and whether or
   not the underlying file system at the server has a named attribute
   directory.  Therefore, operations such as SETATTR and GETATTR on the
   named attribute directory are undefined.

5.1.  REQUIRED Attributes

   These MUST be supported by every NFSv4.0 client and server in order
   to ensure a minimum level of interoperability.  The server MUST store
   and return these attributes, and the client MUST be able to function
   with an attribute set limited to these attributes.  With just the
   REQUIRED attributes some client functionality may be impaired or
   limited in some ways.  A client may ask for any of these attributes
   to be returned by setting a bit in the GETATTR request, and the
   server must return their value.

5.2.  RECOMMENDED Attributes

   These attributes are understood well enough to warrant support in the
   NFSv4.0 protocol.  However, they may not be supported on all clients
   and servers.  A client MAY ask for any of these attributes to be
   returned by setting a bit in the GETATTR request but must handle the
   case where the server does not return them.  A client MAY ask for the
   set of attributes the server supports and SHOULD NOT request
   attributes the server does not support.  A server should be tolerant
   of requests for unsupported attributes and simply not return them
   rather than considering the request an error.  It is expected that
   servers will support all attributes they comfortably can and only
   fail to support attributes that are difficult to support in their
   operating environments.  A server should provide attributes whenever
   they don't have to "tell lies" to the client.  For example, a file
   modification time should be either an accurate time or should not be
   supported by the server.  At times this will be difficult for
   clients, but a client is better positioned to decide whether and how
   to fabricate or construct an attribute or whether to do without the
   attribute.

5.3.  Named Attributes

   These attributes are not supported by direct encoding in the NFSv4
   protocol but are accessed by string names rather than numbers and
   correspond to an uninterpreted stream of bytes that are stored with
   the file system object.  The name space for these attributes may be
   accessed by using the OPENATTR operation.  The OPENATTR operation
   returns a filehandle for a virtual "named attribute directory", and
   further perusal and modification of the name space may be done using
   operations that work on more typical directories.  In particular,
   READDIR may be used to get a list of such named attributes, and
   LOOKUP and OPEN may select a particular attribute.  Creation of a new
   named attribute may be the result of an OPEN specifying file
   creation.

   Once an OPEN is done, named attributes may be examined and changed by
   normal READ and WRITE operations using the filehandles and stateids
   returned by OPEN.

   Named attributes and the named attribute directory may have their own
   (non-named) attributes.  Each of these objects must have all of the
   REQUIRED attributes and may have additional RECOMMENDED attributes.
   However, the set of attributes for named attributes and the named
   attribute directory need not be, and typically will not be, as large
   as that for other objects in that file system.

   Named attributes and the named attribute directory might be the
   target of delegations (in the case of the named attribute directory
   these will be directory delegations).  However, since granting of
   delegations is at the server's discretion, a server need not support
   delegations on named attributes or the named attribute directory.

   It is RECOMMENDED that servers support arbitrary named attributes.  A
   client should not depend on the ability to store any named attributes
   in the server's file system.  If a server does support named
   attributes, a client that is also able to handle them should be able
   to copy a file's data and metadata with complete transparency from
   one location to another; this would imply that names allowed for
   regular directory entries are valid for named attribute names as
   well.

   In NFSv4.0, the structure of named attribute directories is
   restricted in a number of ways, in order to prevent the development
   of non-interoperable implementations in which some servers support a
   fully general hierarchical directory structure for named attributes
   while others support a limited but adequate structure for named
   attributes.  In such an environment, clients or applications might
   come to depend on non-portable extensions.  The restrictions are:

   o  CREATE is not allowed in a named attribute directory.  Thus, such
      objects as symbolic links and special files are not allowed to be
      named attributes.  Further, directories may not be created in a
      named attribute directory, so no hierarchical structure of named
      attributes for a single object is allowed.

   o  If OPENATTR is done on a named attribute directory or on a named
      attribute, the server MUST return NFS4ERR_WRONG_TYPE.

   o  Doing a RENAME of a named attribute to a different named attribute
      directory or to an ordinary (i.e., non-named-attribute) directory
      is not allowed.

   o  Creating hard links between named attribute directories or between
      named attribute directories and ordinary directories is not
      allowed.

   Names of attributes will not be controlled by this document or other
   IETF Standards Track documents.  See Section 18 for further
   discussion.

5.4.  Classification of Attributes

   Each of the REQUIRED and RECOMMENDED attributes can be classified in
   one of three categories: per server (i.e., the value of the attribute
   will be the same for all file objects that share the same server),
   per file system (i.e., the value of the attribute will be the same
   for some or all file objects that share the same fsid attribute
   (Section 5.8.1.9) and server owner), or per file system object.  Note
   that it is possible that some per file system attributes may vary
   within the file system.  Note that it is possible that some per file
   system attributes may vary within the file system, depending on the
   value of the "homogeneous" (Section 5.8.2.16) attribute.  Note that
   the attributes time_access_set and time_modify_set are not listed in
   this section because they are write-only attributes corresponding to
   time_access and time_modify, and are used in a special instance of
   SETATTR.

   o  The per-server attribute is:

         lease_time

   o  The per-file system attributes are:

         supported_attrs, fh_expire_type, link_support, symlink_support,
         unique_handles, aclsupport, cansettime, case_insensitive,
         case_preserving, chown_restricted, files_avail, files_free,
         files_total, fs_locations, homogeneous, maxfilesize, maxname,
         maxread, maxwrite, no_trunc, space_avail, space_free,
         space_total, time_delta,

   o  The per-file system object attributes are:

         type, change, size, named_attr, fsid, rdattr_error, filehandle,
         acl, archive, fileid, hidden, maxlink, mimetype, mode,
         numlinks, owner, owner_group, rawdev, space_used, system,
         time_access, time_backup, time_create, time_metadata,
         time_modify, mounted_on_fileid

   For quota_avail_hard, quota_avail_soft, and quota_used, see their
   definitions below for the appropriate classification.

5.5.  Set-Only and Get-Only Attributes

   Some REQUIRED and RECOMMENDED attributes are set-only; i.e., they can
   be set via SETATTR but not retrieved via GETATTR.  Similarly, some
   REQUIRED and RECOMMENDED attributes are get-only; i.e., they can be
   retrieved via GETATTR but not set via SETATTR.  If a client attempts
   to set a get-only attribute or get a set-only attribute, the server
   MUST return NFS4ERR_INVAL.

5.6.  REQUIRED Attributes - List and Definition References

   The list of REQUIRED attributes appears in Table 2.  The meaning of
   the columns of the table are:

   o  Name: The name of attribute

   o  Id: The number assigned to the attribute.  In the event of
      conflicts between the assigned number and [2], the latter is
      likely authoritative, but should be resolved with Errata to this
      document and/or [2].  See [25] for the Errata process.

   o  Data Type: The XDR data type of the attribute.

   o  Acc: Access allowed to the attribute.  R means read-only (GETATTR
      may retrieve, SETATTR may not set).  W means write-only (SETATTR
      may set, GETATTR may not retrieve).  R W means read/write (GETATTR
      may retrieve, SETATTR may set).

   o  Defined in: The section of this specification that describes the
      attribute.

      +-----------------+----+------------+-----+------------------+
      | Name            | Id | Data Type  | Acc | Defined in:      |
      +-----------------+----+------------+-----+------------------+
      | supported_attrs | 0  | bitmap4    | R   | Section 5.8.1.1  |
      | type            | 1  | nfs_ftype4 | R   | Section 5.8.1.2  |
      | fh_expire_type  | 2  | uint32_t   | R   | Section 5.8.1.3  |
      | change          | 3  | uint64_t   | R   | Section 5.8.1.4  |
      | size            | 4  | uint64_t   | R W | Section 5.8.1.5  |
      | link_support    | 5  | bool       | R   | Section 5.8.1.6  |
      | symlink_support | 6  | bool       | R   | Section 5.8.1.7  |
      | named_attr      | 7  | bool       | R   | Section 5.8.1.8  |
      | fsid            | 8  | fsid4      | R   | Section 5.8.1.9  |
      | unique_handles  | 9  | bool       | R   | Section 5.8.1.10 |
      | lease_time      | 10 | nfs_lease4 | R   | Section 5.8.1.11 |
      | rdattr_error    | 11 | enum       | R   | Section 5.8.1.12 |
      | filehandle      | 19 | nfs_fh4    | R   | Section 5.8.1.13 |
      +-----------------+----+------------+-----+------------------+

                                  Table 2

5.7.  RECOMMENDED Attributes - List and Definition References

   The RECOMMENDED attributes are defined in Table 3.  The meanings of
   the column headers are the same as Table 2; see Section 5.6 for the
   meanings.

    +-------------------+----+--------------+-----+------------------+
    | Name              | Id | Data Type    | Acc | Defined in:      |
    +-------------------+----+--------------+-----+------------------+
    | acl               | 12 | nfsace4<>    | R W | Section 6.2.1    |
    | aclsupport        | 13 | uint32_t     | R   | Section 6.2.1.2  |
    | archive           | 14 | bool         | R W | Section 5.8.2.1  |
    | cansettime        | 15 | bool         | R   | Section 5.8.2.2  |
    | case_insensitive  | 16 | bool         | R   | Section 5.8.2.3  |
    | case_preserving   | 17 | bool         | R   | Section 5.8.2.4  |
    | chown_restricted  | 18 | bool         | R   | Section 5.8.2.5  |
    | fileid            | 20 | uint64_t     | R   | Section 5.8.2.6  |
    | files_avail       | 21 | uint64_t     | R   | Section 5.8.2.7  |
    | files_free        | 22 | uint64_t     | R   | Section 5.8.2.8  |
    | files_total       | 23 | uint64_t     | R   | Section 5.8.2.9  |
    | fs_locations      | 24 | fs_locations | R   | Section 5.8.2.10 |
    | hidden            | 25 | bool         | R W | Section 5.8.2.11 |
    | homogeneous       | 26 | bool         | R   | Section 5.8.2.12 |
    | maxfilesize       | 27 | uint64_t     | R   | Section 5.8.2.13 |
    | maxlink           | 28 | uint32_t     | R   | Section 5.8.2.14 |
    | maxname           | 29 | uint32_t     | R   | Section 5.8.2.15 |
    | maxread           | 30 | uint64_t     | R   | Section 5.8.2.16 |
    | maxwrite          | 31 | uint64_t     | R   | Section 5.8.2.17 |
    | mimetype          | 32 | utf8<>       | R W | Section 5.8.2.18 |
    | mode              | 33 | mode4        | R W | Section 6.2.2    |
    | mounted_on_fileid | 55 | uint64_t     | R   | Section 5.8.2.19 |
    | no_trunc          | 34 | bool         | R   | Section 5.8.2.20 |
    | numlinks          | 35 | uint32_t     | R   | Section 5.8.2.21 |
    | owner             | 36 | utf8<>       | R W | Section 5.8.2.22 |
    | owner_group       | 37 | utf8<>       | R W | Section 5.8.2.23 |
    | quota_avail_hard  | 38 | uint64_t     | R   | Section 5.8.2.24 |
    | quota_avail_soft  | 39 | uint64_t     | R   | Section 5.8.2.25 |
    | quota_used        | 40 | uint64_t     | R   | Section 5.8.2.26 |
    | rawdev            | 41 | specdata4    | R   | Section 5.8.2.27 |
    | space_avail       | 42 | uint64_t     | R   | Section 5.8.2.28 |
    | space_free        | 43 | uint64_t     | R   | Section 5.8.2.29 |
    | space_total       | 44 | uint64_t     | R   | Section 5.8.2.30 |
    | space_used        | 45 | uint64_t     | R   | Section 5.8.2.31 |
    | system            | 46 | bool         | R W | Section 5.8.2.32 |
    | time_access       | 47 | nfstime4     | R   | Section 5.8.2.33 |
    | time_access_set   | 48 | settime4     |   W | Section 5.8.2.34 |
    | time_backup       | 49 | nfstime4     | R W | Section 5.8.2.35 |
    | time_create       | 50 | nfstime4     | R W | Section 5.8.2.36 |
    | time_delta        | 51 | nfstime4     | R   | Section 5.8.2.37 |
    | time_metadata     | 52 | nfstime4     | R   | Section 5.8.2.38 |
    | time_modify       | 53 | nfstime4     | R   | Section 5.8.2.39 |
    | time_modify_set   | 54 | settime4     |   W | Section 5.8.2.40 |
    +-------------------+----+--------------+-----+------------------+

                                  Table 3

5.8.  Attribute Definitions

5.8.1.  Definitions of REQUIRED Attributes

5.8.1.1.  Attribute 0: supported_attrs

   The bit vector that would retrieve all REQUIRED and RECOMMENDED
   attributes that are supported for this object.  The scope of this
   attribute applies to all objects with a matching fsid.

5.8.1.2.  Attribute 1: type

   Designates the type of an object in terms of one of a number of
   special constants:

   o  NF4REG designates a regular file.

   o  NF4DIR designates a directory.

   o  NF4BLK designates a block device special file.

   o  NF4CHR designates a character device special file.

   o  NF4LNK designates a symbolic link.

   o  NF4SOCK designates a named socket special file.

   o  NF4FIFO designates a fifo special file.

   o  NF4ATTRDIR designates a named attribute directory.

   o  NF4NAMEDATTR designates a named attribute.

   Within the explanatory text and operation descriptions, the following
   phrases will be used with the meanings given below:

   o  The phrase "is a directory" means that the object's type attribute
      is NF4DIR or NF4ATTRDIR.

   o  The phrase "is a special file" means that the object's type
      attribute is NF4BLK, NF4CHR, NF4SOCK, or NF4FIFO.

   o  The phrase "is an ordinary file" means that the object's type
      attribute is NF4REG or NF4NAMEDATTR.

5.8.1.3.  Attribute 2: fh_expire_type

   Server uses this to specify filehandle expiration behavior to the
   client.  See Section 4 for additional description.

5.8.1.4.  Attribute 3: change

   A value created by the server that the client can use to determine if
   file data, directory contents, or attributes of the object have been
   modified.  The server may return the object's time_metadata attribute
   for this attribute's value but only if the file system object cannot
   be updated more frequently than the resolution of time_metadata.

5.8.1.5.  Attribute 4: size

   The size of the object in bytes.

5.8.1.6.  Attribute 5: link_support

   TRUE, if the object's file system supports hard links.

5.8.1.7.  Attribute 6: symlink_support

   TRUE, if the object's file system supports symbolic links.

5.8.1.8.  Attribute 7: named_attr

   TRUE, if this object has named attributes.  In other words, object
   has a non-empty named attribute directory.

5.8.1.9.  Attribute 8: fsid

   Unique file system identifier for the file system holding this
   object.  The fsid attribute has major and minor components, each of
   which are of data type uint64_t.

5.8.1.10.  Attribute 9: unique_handles

   TRUE, if two distinct filehandles are guaranteed to refer to two
   different file system objects.

5.8.1.11.  Attribute 10: lease_time

   Duration of the lease at server in seconds.

5.8.1.12.  Attribute 11: rdattr_error

   Error returned from an attempt to retrieve attributes during a
   READDIR operation.

5.8.1.13.  Attribute 19: filehandle

   The filehandle of this object (primarily for READDIR requests).

5.8.2.  Definitions of Uncategorized RECOMMENDED Attributes

   The definitions of most of the RECOMMENDED attributes follow.
   Collections that share a common category are defined in other
   sections.

5.8.2.1.  Attribute 14: archive

   TRUE, if this file has been archived since the time of last
   modification (deprecated in favor of time_backup).

5.8.2.2.  Attribute 15: cansettime

   TRUE, if the server is able to change the times for a file system
   object as specified in a SETATTR operation.

5.8.2.3.  Attribute 16: case_insensitive

   TRUE, if file name comparisons on this file system are case
   insensitive.

5.8.2.4.  Attribute 17: case_preserving

   TRUE, if file name case on this file system is preserved.

5.8.2.5.  Attribute 18: chown_restricted

   If TRUE, the server will reject any request to change either the
   owner or the group associated with a file if the caller is not a
   privileged user (for example, "root" in UNIX operating environments
   or in Windows 2000, the "Take Ownership" privilege).

5.8.2.6.  Attribute 20: fileid

   A number uniquely identifying the file within the file system.

5.8.2.7.  Attribute 21: files_avail

   File slots available to this user on the file system containing this
   object -- this should be the smallest relevant limit.

5.8.2.8.  Attribute 22: files_free

   Free file slots on the file system containing this object - this
   should be the smallest relevant limit.

5.8.2.9.  Attribute 23: files_total

   Total file slots on the file system containing this object.

5.8.2.10.  Attribute 24: fs_locations

   Locations where this file system may be found.  If the server returns
   NFS4ERR_MOVED as an error, this attribute MUST be supported.

   The server can specify a root path by setting an array of zero path
   compenents.
   components.  Other than this special case, the server MUST not
   present empty path components to the client.

5.8.2.11.  Attribute 25: hidden

   TRUE, if the file is considered hidden with respect to the Windows
   API.

5.8.2.12.  Attribute 26: homogeneous

   TRUE, if this object's file system is homogeneous, i.e., all objects
   in the file system (all objects on the server with the same fsid)
   have common values for all per-file-system attributes.

5.8.2.13.  Attribute 27: maxfilesize

   Maximum supported file size for the file system of this object.

5.8.2.14.  Attribute 28: maxlink

   Maximum number of links for this object.

5.8.2.15.  Attribute 29: maxname

   Maximum file name size supported for this object.

5.8.2.16.  Attribute 30: maxread

   Maximum amount of data the READ operation will return for this
   object.

5.8.2.17.  Attribute 31: maxwrite

   Maximum amount of data the WRITE operation will accept for this
   object.  This attribute SHOULD be supported if the file is writable.
   Lack of this attribute can lead to the client either wasting
   bandwidth or not receiving the best performance.

5.8.2.18.  Attribute 32: mimetype

   MIME body type/subtype of this object.

5.8.2.19.  Attribute 55: mounted_on_fileid

   Like fileid, but if the target filehandle is the root of a file
   system, this attribute represents the fileid of the underlying
   directory.

   UNIX-based operating environments connect a file system into the
   namespace by connecting (mounting) the file system onto the existing
   file object (the mount point, usually a directory) of an existing
   file system.  When the mount point's parent directory is read via an
   API like readdir(), the return results are directory entries, each
   with a component name and a fileid.  The fileid of the mount point's
   directory entry will be different from the fileid that the stat()
   system call returns.  The stat() system call is returning the fileid
   of the root of the mounted file system, whereas readdir() is
   returning the fileid that stat() would have returned before any file
   systems were mounted on the mount point.

   Unlike NFSv3, NFSv4.0 allows a client's LOOKUP request to cross other
   file systems.  The client detects the file system crossing whenever
   the filehandle argument of LOOKUP has an fsid attribute different
   from that of the filehandle returned by LOOKUP.  A UNIX-based client
   will consider this a "mount point crossing".  UNIX has a legacy
   scheme for allowing a process to determine its current working
   directory.  This relies on readdir() of a mount point's parent and
   stat() of the mount point returning fileids as previously described.
   The mounted_on_fileid attribute corresponds to the fileid that
   readdir() would have returned as described previously.

   While the NFSv4.0 client could simply fabricate a fileid
   corresponding to what mounted_on_fileid provides (and if the server
   does not support mounted_on_fileid, the client has no choice), there
   is a risk that the client will generate a fileid that conflicts with
   one that is already assigned to another object in the file system.
   Instead, if the server can provide the mounted_on_fileid, the
   potential for client operational problems in this area is eliminated.

   If the server detects that there is no mounted point at the target
   file object, then the value for mounted_on_fileid that it returns is
   the same as that of the fileid attribute.

   The mounted_on_fileid attribute is RECOMMENDED, so the server SHOULD
   provide it if possible, and for a UNIX-based server, this is
   straightforward.  Usually, mounted_on_fileid will be requested during
   a READDIR operation, in which case it is trivial (at least for UNIX-
   based servers) to return mounted_on_fileid since it is equal to the
   fileid of a directory entry returned by readdir().  If
   mounted_on_fileid is requested in a GETATTR operation, the server
   should obey an invariant that has it returning a value that is equal
   to the file object's entry in the object's parent directory, i.e.,
   what readdir() would have returned.  Some operating environments
   allow a series of two or more file systems to be mounted onto a
   single mount point.  In this case, for the server to obey the
   aforementioned invariant, it will need to find the base mount point,
   and not the intermediate mount points.

5.8.2.20.  Attribute 34: no_trunc

   If this attribute is TRUE, then if the client uses a file name longer
   than name_max, an error will be returned instead of the name being
   truncated.

5.8.2.21.  Attribute 35: numlinks

   Number of hard links to this object.

5.8.2.22.  Attribute 36: owner

   The string name of the owner of this object.

5.8.2.23.  Attribute 37: owner_group

   The string name of the group ownership of this object.

5.8.2.24.  Attribute 38: quota_avail_hard

   The value in bytes that represents the amount of additional disk
   space beyond the current allocation that can be allocated to this
   file or directory before further allocations will be refused.  It is
   understood that this space may be consumed by allocations to other
   files or directories.

5.8.2.25.  Attribute 39: quota_avail_soft

   The value in bytes that represents the amount of additional disk
   space that can be allocated to this file or directory before the user
   may reasonably be warned.  It is understood that this space may be
   consumed by allocations to other files or directories though there is
   a rule as to which other files or directories.

5.8.2.26.  Attribute 40: quota_used

   The value in bytes that represents the amount of disc space used by
   this file or directory and possibly a number of other similar files
   or directories, where the set of "similar" meets at least the
   criterion that allocating space to any file or directory in the set
   will reduce the "quota_avail_hard" of every other file or directory
   in the set.

   Note that there may be a number of distinct but overlapping sets of
   files or directories for which a quota_used value is maintained, e.g.
   "all files with a given owner", "all files with a given group owner".

   etc.  The server is at liberty to choose any of those sets when
   providing the content of the quota_used attribute, but should do so
   in a repeatable way.  The rule may be configured per file system or
   may be "choose the set with the smallest quota".

5.8.2.27.  Attribute 41: rawdev

   Raw device number of file of type NF4BLK or NF4CHR.  The device
   number is split into major and minor numbers.  If the file's type
   attribute is not NF4BLK or NF4CHR, the value returned SHOULD NOT be
   considered useful.

5.8.2.28.  Attribute 42: space_avail

   Disk space in bytes available to this user on the file system
   containing this object -- this should be the smallest relevant limit.

5.8.2.29.  Attribute 43: space_free

   Free disk space in bytes on the file system containing this object --
   this should be the smallest relevant limit.

5.8.2.30.  Attribute 44: space_total

   Total disk space in bytes on the file system containing this object.

5.8.2.31.  Attribute 45: space_used

   Number of file system bytes allocated to this object.

5.8.2.32.  Attribute 46: system

   This attribute is TRUE if this file is a "system" file with respect
   to the Windows operating environment.

5.8.2.33.  Attribute 47: time_access

   The time_access attribute represents the time of last access to the
   object by a READ operation sent to the server.  The notion of what is
   an "access" depends on the server's operating environment and/or the
   server's file system semantics.  For example, for servers obeying
   Portable Operating System Interface (POSIX) semantics, time_access
   would be updated only by the READ and READDIR operations and not any
   of the operations that modify the content of the object [16], [17],
   [26], [27], [28].  Of course, setting the corresponding
   time_access_set attribute is another way to modify the time_access
   attribute.

   Whenever the file object resides on a writable file system, the
   server should make its best efforts to record time_access into stable
   storage.  However, to mitigate the performance effects of doing so,
   and most especially whenever the server is satisfying the read of the
   object's content from its cache, the server MAY cache access time
   updates and lazily write them to stable storage.  It is also
   acceptable to give administrators of the server the option to disable
   time_access updates.

5.8.2.34.  Attribute 48: time_access_set

   Sets the time of last access to the object.  SETATTR use only.

5.8.2.35.  Attribute 49: time_backup

   The time of last backup of the object.

5.8.2.36.  Attribute 50: time_create

   The time of creation of the object.  This attribute does not have any
   relation to the traditional UNIX file attribute "ctime" or "change
   time".

5.8.2.37.  Attribute 51: time_delta

   Smallest useful server time granularity.

5.8.2.38.  Attribute 52: time_metadata

   The time of last metadata modification of the object.

5.8.2.39.  Attribute 53: time_modify

   The time of last modification to the object.

5.8.2.40.  Attribute 54: time_modify_set

   Sets the time of last modification to the object.  SETATTR use only.

5.9.  Interpreting owner and owner_group

   The RECOMMENDED attributes "owner" and "owner_group" (and also users
   and groups within the "acl" attribute) are represented in terms of a
   UTF-8 string.  To avoid a representation that is tied to a particular
   underlying implementation at the client or server, the use of the
   UTF-8 string has been chosen.  Note that section 6.1 of RFC 2624 [29]
   provides additional rationale.  It is expected that the client and
   server will have their own local representation of owner and
   owner_group that is used for local storage or presentation to the end
   user.  Therefore, it is expected that when these attributes are
   transferred between the client and server, the local representation
   is translated to a syntax of the form "user@dns_domain".  This will
   allow for a client and server that do not use the same local
   representation the ability to translate to a common syntax that can
   be interpreted by both.

   Similarly, security principals may be represented in different ways
   by different security mechanisms.  Servers normally translate these
   representations into a common format, generally that used by local
   storage, to serve as a means of identifying the users corresponding
   to these security principals.  When these local identifiers are
   translated to the form of the owner attribute, associated with files
   created by such principals, they identify, in a common format, the
   users associated with each corresponding set of security principals.

   The translation used to interpret owner and group strings is not
   specified as part of the protocol.  This allows various solutions to
   be employed.  For example, a local translation table may be consulted
   that maps a numeric identifier to the user@dns_domain syntax.  A name
   service may also be used to accomplish the translation.  A server may
   provide a more general service, not limited by any particular
   translation (which would only translate a limited set of possible
   strings) by storing the owner and owner_group attributes in local
   storage without any translation or it may augment a translation
   method by storing the entire string for attributes for which no
   translation is available while using the local representation for
   those cases in which a translation is available.

   Servers that do not provide support for all possible values of the
   owner and owner_group attributes SHOULD return an error
   (NFS4ERR_BADOWNER) when a string is presented that has no
   translation, as the value to be set for a SETATTR of the owner,
   owner_group, or acl attributes.  When a server does accept an owner
   or owner_group value as valid on a SETATTR (and similarly for the
   owner and group strings in an acl), it is promising to return that
   same string for which see below) when a corresponding GETATTR is
   done.  For some internationalization-related exceptions where this is
   not possible, see below.  Configuration changes (including changes
   from the mapping of the string to the local representation) and ill-
   constructed name translations (those that contain aliasing) may make
   that promise impossible to honor.  Servers should make appropriate
   efforts to avoid a situation in which these attributes have their
   values changed when no real change to ownership has occurred.

   The "dns_domain" portion of the owner string is meant to be a DNS
   domain name.  For example, user@example.org.  Servers should accept
   as valid a set of users for at least one domain.  A server may treat
   other domains as having no valid translations.  A more general
   service is provided when a server is capable of accepting users for
   multiple domains, or for all domains, subject to security
   constraints.

   As an implementation guide, both clients and servers may provide a
   means to configure the "dns_domain" portion of the owner string.  For
   example, the DNS domain name might be "lab.example.org", but the user
   names are defined in "example.org".  In the absence of such a
   configuration, or as a default, the current DNS domain name should be
   the value used for the "dns_domain".

   As mentioned above, it is desirable that a server when accepting a
   string of the form user@domain or group@domain in an attribute,
   return this same string when that corresponding attribute is fetched.
   Internationalization issues (for a general discussion of which see
   Section 12) make this impossible and the client needs to take note of
   the following situations:

   o  The string representing the domain may be converted to equivalent
      U-label, if presented using a form other a a U-label.  See
      Section 12.6 for details.

   o  The user or group may be returned in a different form, due to
      normalization issues, although it will always be a canonically
      equivalent string.  See See Section 12.7.3 for details.

   In the case where there is no translation available to the client or
   server, the attribute value will be constructed without the "@".
   Therefore, the absence of the "@" from the owner or owner_group
   attribute signifies that no translation was available at the sender
   and that the receiver of the attribute should not use that string as
   a basis for translation into its own internal format.  Even though
   the attribute value cannot be translated, it may still be useful.  In
   the case of a client, the attribute string may be used for local
   display of ownership.

   To provide a greater degree of compatibility with NFSv3, which
   identified users and groups by 32-bit unsigned user identifiers and
   group identifiers, owner and group strings that consist of decimal
   numeric values with no leading zeros can be given a special
   interpretation by clients and servers that choose to provide such
   support.  The receiver may treat such a user or group string as
   representing the same user as would be represented by an NFSv3 uid or
   gid having the corresponding numeric value.

   A server SHOULD reject such a numeric value if the security mechanism
   is kerberized.  I.e., in such a scenario, the client will already
   need to form "user@domain" strings.  For any other security
   mechanism, the server SHOULD accept such numeric values.  As an
   implementation note, the server could make such an acceptance be
   configurable.  If the server does not support numeric values or if it
   is configured off, then it MUST return an NFS4ERR_BADOWNER error.  If
   the security mechanism is kerberized and the client attempts to use
   the special form, then the server SHOULD return an NFS4ERR_BADOWNER
   error when there is a valid translation for the user or owner
   designated in this way.  In that case, the client must use the
   appropriate user@domain string and not the special form for
   compatibility.

   The client MUST always accept numeric values if the security
   mechanism is not kerberized.  A client can determine if a server
   supports such a mechanism by first attempting to provide a numeric
   value and only if it is rejected with an NFS4ERR_BADOWNER error, then
   providing a name value.  After the first detection of such an error,
   the client should only use the special form.

   The owner string "nobody" may be used to designate an anonymous user,
   which will be associated with a file created by a security principal
   that cannot be mapped through normal means to the owner attribute.

5.10.  Character Case Attributes

   With respect to the case_insensitive and case_preserving attributes,
   each UCS-4 character (which UTF-8 encodes) has a "long descriptive
   name" RFC1345 [30] which may or may not include the word "CAPITAL" or
   "SMALL".  The presence of SMALL or CAPITAL allows an NFS server to
   implement unambiguous and efficient table driven mappings for case
   insensitive comparisons, and non-case-preserving storage, although
   there are variations that occur additional characters with a name
   including "SMALL" or "CAPITAL" are added in a subsequent version of
   Unicode.

   For general character handling and internationalization issues, see
   Section 12.  For details regarding case mapping, see the section
   Case-based Mapping Used for Component4 Strings.

6.  Access Control Attributes

   Access Control Lists (ACLs) are file attributes that specify fine
   grained access control.  This chapter covers the "acl", "aclsupport",
   "mode", file attributes, and their interactions.  Note that file
   attributes may apply to any file system object.

6.1.  Goals

   ACLs and modes represent two well established models for specifying
   permissions.  This chapter specifies requirements that attempt to
   meet the following goals:

   o  If a server supports the mode attribute, it should provide
      reasonable semantics to clients that only set and retrieve the
      mode attribute.

   o  If a server supports ACL attributes, it should provide reasonable
      semantics to clients that only set and retrieve those attributes.

   o  On servers that support the mode attribute, if ACL attributes have
      never been set on an object, via inheritance or explicitly, the
      behavior should be traditional UNIX-like behavior.

   o  On servers that support the mode attribute, if the ACL attributes
      have been previously set on an object, either explicitly or via
      inheritance:

      *  Setting only the mode attribute should effectively control the
         traditional UNIX-like permissions of read, write, and execute
         on owner, owner_group, and other.

      *  Setting only the mode attribute should provide reasonable
         security.  For example, setting a mode of 000 should be enough
         to ensure that future opens for read or write by any principal
         fail, regardless of a previously existing or inherited ACL.

   o  When a mode attribute is set on an object, the ACL attributes may
      need to be modified so as to not conflict with the new mode.  In
      such cases, it is desirable that the ACL keep as much information
      as possible.  This includes information about inheritance, AUDIT
      and ALARM ACEs, and permissions granted and denied that do not
      conflict with the new mode.

6.2.  File Attributes Discussion

6.2.1.  Attribute 12: acl

   The NFSv4.0 ACL attribute contains an array of access control entries
   (ACEs) that are associated with the file system object.  Although the
   client can read and write the acl attribute, the server is
   responsible for using the ACL to perform access control.  The client
   can use the OPEN or ACCESS operations to check access without
   modifying or reading data or metadata.

   The NFS ACE structure is defined as follows:

   typedef uint32_t        acetype4;

   typedef uint32_t aceflag4;

   typedef uint32_t        acemask4;

   struct nfsace4 {
           acetype4        type;
           aceflag4        flag;
           acemask4        access_mask;
           utf8_must       who;
   };

   To determine if a request succeeds, the server processes each nfsace4
   entry in order.  Only ACEs which have a "who" that matches the
   requester are considered.  Each ACE is processed until all of the
   bits of the requester's access have been ALLOWED.  Once a bit (see
   below) has been ALLOWED by an ACCESS_ALLOWED_ACE, it is no longer
   considered in the processing of later ACEs.  If an ACCESS_DENIED_ACE
   is encountered where the requester's access still has unALLOWED bits
   in common with the "access_mask" of the ACE, the request is denied.
   When the ACL is fully processed, if there are bits in the requester's
   mask that have not been ALLOWED or DENIED, access is denied.

   Unlike the ALLOW and DENY ACE types, the ALARM and AUDIT ACE types do
   not affect a requester's access, and instead are for triggering
   events as a result of a requester's access attempt.  Therefore, AUDIT
   and ALARM ACEs are processed only after processing ALLOW and DENY
   ACEs.

   The NFSv4.0 ACL model is quite rich.  Some server platforms may
   provide access control functionality that goes beyond the UNIX-style
   mode attribute, but which is not as rich as the NFS ACL model.  So
   that users can take advantage of this more limited functionality, the
   server may support the acl attributes by mapping between its ACL
   model and the NFSv4.0 ACL model.  Servers must ensure that the ACL
   they actually store or enforce is at least as strict as the NFSv4 ACL
   that was set.  It is tempting to accomplish this by rejecting any ACL
   that falls outside the small set that can be represented accurately.
   However, such an approach can render ACLs unusable without special
   client-side knowledge of the server's mapping, which defeats the
   purpose of having a common NFSv4 ACL protocol.  Therefore servers
   should accept every ACL that they can without compromising security.

   To help accomplish this, servers may make a special exception, in the
   case of unsupported permission bits, to the rule that bits not
   ALLOWED or DENIED by an ACL must be denied.  For example, a UNIX-
   style server might choose to silently allow read attribute
   permissions even though an ACL does not explicitly allow those
   permissions.  (An ACL that explicitly denies permission to read
   attributes should still be rejected.)

   The situation is complicated by the fact that a server may have
   multiple modules that enforce ACLs.  For example, the enforcement for
   NFSv4.0 access may be different from, but not weaker than, the
   enforcement for local access, and both may be different from the
   enforcement for access through other protocols such as SMB.  So it
   may be useful for a server to accept an ACL even if not all of its
   modules are able to support it.

   The guiding principle with regard to NFSv4 access is that the server
   must not accept ACLs that appear to make access to the file more
   restrictive than it really is.

6.2.1.1.  ACE Type

   The constants used for the type field (acetype4) are as follows:

   const ACE4_ACCESS_ALLOWED_ACE_TYPE      = 0x00000000;
   const ACE4_ACCESS_DENIED_ACE_TYPE       = 0x00000001;
   const ACE4_SYSTEM_AUDIT_ACE_TYPE        = 0x00000002;
   const ACE4_SYSTEM_ALARM_ACE_TYPE        = 0x00000003;

   All four but types are permitted in the acl attribute.

   +------------------------------+--------------+---------------------+
   | Value                        | Abbreviation | Description         |
   +------------------------------+--------------+---------------------+
   | ACE4_ACCESS_ALLOWED_ACE_TYPE | ALLOW        | Explicitly grants   |
   |                              |              | the access defined  |
   |                              |              | in acemask4 to the  |
   |                              |              | file or directory.  |
   | ACE4_ACCESS_DENIED_ACE_TYPE  | DENY         | Explicitly denies   |
   |                              |              | the access defined  |
   |                              |              | in acemask4 to the  |
   |                              |              | file or directory.  |
   | ACE4_SYSTEM_AUDIT_ACE_TYPE   | AUDIT        | LOG (in a system    |
   |                              |              | dependent way) any  |
   |                              |              | access attempt to a |
   |                              |              | file or directory   |
   |                              |              | which uses any of   |
   |                              |              | the access methods  |
   |                              |              | specified in        |
   |                              |              | acemask4.           |
   | ACE4_SYSTEM_ALARM_ACE_TYPE   | ALARM        | Generate a system   |
   |                              |              | ALARM (system       |
   |                              |              | dependent) when any |
   |                              |              | access attempt is   |
   |                              |              | made to a file or   |
   |                              |              | directory for the   |
   |                              |              | access methods      |
   |                              |              | specified in        |
   |                              |              | acemask4.           |
   +------------------------------+--------------+---------------------+

    The "Abbreviation" column denotes how the types will be referred to
                   throughout the rest of this chapter.

6.2.1.2.  Attribute 13: aclsupport

   A server need not support all of the above ACE types.  This attribute
   indicates which ACE types are supported for the current file system.
   The bitmask constants used to represent the above definitions within
   the aclsupport attribute are as follows:

   const ACL4_SUPPORT_ALLOW_ACL    = 0x00000001;
   const ACL4_SUPPORT_DENY_ACL     = 0x00000002;
   const ACL4_SUPPORT_AUDIT_ACL    = 0x00000004;
   const ACL4_SUPPORT_ALARM_ACL    = 0x00000008;

   Servers which support either the ALLOW or DENY ACE type SHOULD
   support both ALLOW and DENY ACE types.

   Clients should not attempt to set an ACE unless the server claims
   support for that ACE type.  If the server receives a request to set
   an ACE that it cannot store, it MUST reject the request with
   NFS4ERR_ATTRNOTSUPP.  If the server receives a request to set an ACE
   that it can store but cannot enforce, the server SHOULD reject the
   request with NFS4ERR_ATTRNOTSUPP.

   Support for any of the ACL attributes is optional (albeit,
   RECOMMENDED).

6.2.1.3.  ACE Access Mask

   The bitmask constants used for the access mask field are as follows:

   const ACE4_READ_DATA            = 0x00000001;
   const ACE4_LIST_DIRECTORY       = 0x00000001;
   const ACE4_WRITE_DATA           = 0x00000002;
   const ACE4_ADD_FILE             = 0x00000002;
   const ACE4_APPEND_DATA          = 0x00000004;
   const ACE4_ADD_SUBDIRECTORY     = 0x00000004;
   const ACE4_READ_NAMED_ATTRS     = 0x00000008;
   const ACE4_WRITE_NAMED_ATTRS    = 0x00000010;
   const ACE4_EXECUTE              = 0x00000020;
   const ACE4_DELETE_CHILD         = 0x00000040;
   const ACE4_READ_ATTRIBUTES      = 0x00000080;
   const ACE4_WRITE_ATTRIBUTES     = 0x00000100;

   const ACE4_DELETE               = 0x00010000;
   const ACE4_READ_ACL             = 0x00020000;
   const ACE4_WRITE_ACL            = 0x00040000;
   const ACE4_WRITE_OWNER          = 0x00080000;
   const ACE4_SYNCHRONIZE          = 0x00100000;

   Note that some masks have coincident values, for example,
   ACE4_READ_DATA and ACE4_LIST_DIRECTORY.  The mask entries
   ACE4_LIST_DIRECTORY, ACE4_ADD_FILE, and ACE4_ADD_SUBDIRECTORY are
   intended to be used with directory objects, while ACE4_READ_DATA,
   ACE4_WRITE_DATA, and ACE4_APPEND_DATA are intended to be used with
   non-directory objects.

6.2.1.3.1.  Discussion of Mask Attributes

   ACE4_READ_DATA

      Operation(s) affected:

         READ

         OPEN

      Discussion:

         Permission to read the data of the file.

         Servers SHOULD allow a user the ability to read the data of the
         file when only the ACE4_EXECUTE access mask bit is allowed.

   ACE4_LIST_DIRECTORY

      Operation(s) affected:

         READDIR

      Discussion:

         Permission to list the contents of a directory.

   ACE4_WRITE_DATA

      Operation(s) affected:

         WRITE

         OPEN

         SETATTR of size

      Discussion:

         Permission to modify a file's data.

   ACE4_ADD_FILE

      Operation(s) affected:

         CREATE

         LINK

         OPEN

         RENAME

      Discussion:

         Permission to add a new file in a directory.  The CREATE
         operation is affected when nfs_ftype4 is NF4LNK, NF4BLK,
         NF4CHR, NF4SOCK, or NF4FIFO.  (NF4DIR is not listed because it
         is covered by ACE4_ADD_SUBDIRECTORY.)  OPEN is affected when
         used to create a regular file.  LINK and RENAME are always
         affected.

   ACE4_APPEND_DATA

      Operation(s) affected:

         WRITE

         OPEN

         SETATTR of size

      Discussion:

         The ability to modify a file's data, but only starting at EOF.
         This allows for the notion of append-only files, by allowing
         ACE4_APPEND_DATA and denying ACE4_WRITE_DATA to the same user
         or group.  If a file has an ACL such as the one described above
         and a WRITE request is made for somewhere other than EOF, the
         server SHOULD return NFS4ERR_ACCESS.

   ACE4_ADD_SUBDIRECTORY

      Operation(s) affected:

         CREATE

         RENAME

      Discussion:

         Permission to create a subdirectory in a directory.  The CREATE
         operation is affected when nfs_ftype4 is NF4DIR.  The RENAME
         operation is always affected.

   ACE4_READ_NAMED_ATTRS

      Operation(s) affected:

         OPENATTR

      Discussion:

         Permission to read the named attributes of a file or to lookup
         the named attributes directory.  OPENATTR is affected when it
         is not used to create a named attribute directory.  This is
         when 1.) createdir is TRUE, but a named attribute directory
         already exists, or 2.) createdir is FALSE.

   ACE4_WRITE_NAMED_ATTRS

      Operation(s) affected:

         OPENATTR

      Discussion:

         Permission to write the named attributes of a file or to create
         a named attribute directory.  OPENATTR is affected when it is
         used to create a named attribute directory.  This is when
         createdir is TRUE and no named attribute directory exists.  The
         ability to check whether or not a named attribute directory
         exists depends on the ability to look it up, therefore, users
         also need the ACE4_READ_NAMED_ATTRS permission in order to
         create a named attribute directory.

   ACE4_EXECUTE

      Operation(s) affected:

         READ

         OPEN

         REMOVE

         RENAME

         LINK

         CREATE

      Discussion:

         Permission to execute a file.

         Servers SHOULD allow a user the ability to read the data of the
         file when only the ACE4_EXECUTE access mask bit is allowed.
         This is because there is no way to execute a file without
         reading the contents.  Though a server may treat ACE4_EXECUTE
         and ACE4_READ_DATA bits identically when deciding to permit a
         READ operation, it SHOULD still allow the two bits to be set
         independently in ACLs, and MUST distinguish between them when
         replying to ACCESS operations.  In particular, servers SHOULD
         NOT silently turn on one of the two bits when the other is set,
         as that would make it impossible for the client to correctly
         enforce the distinction between read and execute permissions.

         As an example, following a SETATTR of the following ACL:

         nfsuser:ACE4_EXECUTE:ALLOW

         A subsequent GETATTR of ACL for that file SHOULD return:

         nfsuser:ACE4_EXECUTE:ALLOW

         Rather than:

         nfsuser:ACE4_EXECUTE/ACE4_READ_DATA:ALLOW

   ACE4_EXECUTE

      Operation(s) affected:

         LOOKUP

      Discussion:

         Permission to traverse/search a directory.

   ACE4_DELETE_CHILD

      Operation(s) affected:

         REMOVE

         RENAME

      Discussion:

         Permission to delete a file or directory within a directory.
         See Section 6.2.1.3.2 for information on ACE4_DELETE and
         ACE4_DELETE_CHILD interact.

   ACE4_READ_ATTRIBUTES

      Operation(s) affected:

         GETATTR of file system object attributes
         VERIFY

         NVERIFY

         READDIR

      Discussion:

         The ability to read basic attributes (non-ACLs) of a file.  On
         a UNIX system, basic attributes can be thought of as the stat
         level attributes.  Allowing this access mask bit would mean the
         entity can execute "ls -l" and stat.  If a READDIR operation
         requests attributes, this mask must be allowed for the READDIR
         to succeed.

   ACE4_WRITE_ATTRIBUTES

      Operation(s) affected:

         SETATTR of time_access_set, time_backup,

         time_create, time_modify_set, mimetype, hidden, system

      Discussion:

         Permission to change the times associated with a file or
         directory to an arbitrary value.  Also permission to change the
         mimetype, hidden and system attributes.  A user having
         ACE4_WRITE_DATA or ACE4_WRITE_ATTRIBUTES will be allowed to set
         the times associated with a file to the current server time.

   ACE4_DELETE

      Operation(s) affected:

         REMOVE

      Discussion:

         Permission to delete the file or directory.  See
         Section 6.2.1.3.2 for information on ACE4_DELETE and
         ACE4_DELETE_CHILD interact.

   ACE4_READ_ACL
      Operation(s) affected:

         GETATTR of acl

         NVERIFY

         VERIFY

      Discussion:

         Permission to read the ACL.

   ACE4_WRITE_ACL

      Operation(s) affected:

         SETATTR of acl and mode

      Discussion:

         Permission to write the acl and mode attributes.

   ACE4_WRITE_OWNER

      Operation(s) affected:

         SETATTR of owner and owner_group

      Discussion:

         Permission to write the owner and owner_group attributes.  On
         UNIX systems, this is the ability to execute chown() and
         chgrp().

   ACE4_SYNCHRONIZE

      Operation(s) affected:

         NONE

      Discussion:

         Permission to access file locally at the server with
         synchronized reads and writes.

   Server implementations need not provide the granularity of control
   that is implied by this list of masks.  For example, POSIX-based
   systems might not distinguish ACE4_APPEND_DATA (the ability to append
   to a file) from ACE4_WRITE_DATA (the ability to modify existing
   contents); both masks would be tied to a single "write" permission.
   When such a server returns attributes to the client, it would show
   both ACE4_APPEND_DATA and ACE4_WRITE_DATA if and only if the write
   permission is enabled.

   If a server receives a SETATTR request that it cannot accurately
   implement, it should err in the direction of more restricted access,
   except in the previously discussed cases of execute and read.  For
   example, suppose a server cannot distinguish overwriting data from
   appending new data, as described in the previous paragraph.  If a
   client submits an ALLOW ACE where ACE4_APPEND_DATA is set but
   ACE4_WRITE_DATA is not (or vice versa), the server should either turn
   off ACE4_APPEND_DATA or reject the request with NFS4ERR_ATTRNOTSUPP.

6.2.1.3.2.  ACE4_DELETE vs. ACE4_DELETE_CHILD

   Two access mask bits govern the ability to delete a directory entry:
   ACE4_DELETE on the object itself (the "target"), and
   ACE4_DELETE_CHILD on the containing directory (the "parent").

   Many systems also take the "sticky bit" (MODE4_SVTX) on a directory
   to allow unlink only to a user that owns either the target or the
   parent; on some such systems the decision also depends on whether the
   target is writable.

   Servers SHOULD allow unlink if either ACE4_DELETE is permitted on the
   target, or ACE4_DELETE_CHILD is permitted on the parent.  (Note that
   this is true even if the parent or target explicitly denies one of
   these permissions.)

   If the ACLs in question neither explicitly ALLOW nor DENY either of
   the above, and if MODE4_SVTX is not set on the parent, then the
   server SHOULD allow the removal if and only if ACE4_ADD_FILE is
   permitted.  In the case where MODE4_SVTX is set, the server may also
   require the remover to own either the parent or the target, or may
   require the target to be writable.

   This allows servers to support something close to traditional UNIX-
   like semantics, with ACE4_ADD_FILE taking the place of the write bit.

6.2.1.4.  ACE flag

   The bitmask constants used for the flag field are as follows:

   const ACE4_FILE_INHERIT_ACE             = 0x00000001;
   const ACE4_DIRECTORY_INHERIT_ACE        = 0x00000002;
   const ACE4_NO_PROPAGATE_INHERIT_ACE     = 0x00000004;
   const ACE4_INHERIT_ONLY_ACE             = 0x00000008;
   const ACE4_SUCCESSFUL_ACCESS_ACE_FLAG   = 0x00000010;
   const ACE4_FAILED_ACCESS_ACE_FLAG       = 0x00000020;
   const ACE4_IDENTIFIER_GROUP             = 0x00000040;

   A server need not support any of these flags.  If the server supports
   flags that are similar to, but not exactly the same as, these flags,
   the implementation may define a mapping between the protocol-defined
   flags and the implementation-defined flags.

   For example, suppose a client tries to set an ACE with
   ACE4_FILE_INHERIT_ACE set but not ACE4_DIRECTORY_INHERIT_ACE.  If the
   server does not support any form of ACL inheritance, the server
   should reject the request with NFS4ERR_ATTRNOTSUPP.  If the server
   supports a single "inherit ACE" flag that applies to both files and
   directories, the server may reject the request (i.e., requiring the
   client to set both the file and directory inheritance flags).  The
   server may also accept the request and silently turn on the
   ACE4_DIRECTORY_INHERIT_ACE flag.

6.2.1.4.1.  Discussion of Flag Bits

   ACE4_FILE_INHERIT_ACE
      Any non-directory file in any sub-directory will get this ACE
      inherited.

   ACE4_DIRECTORY_INHERIT_ACE
      Can be placed on a directory and indicates that this ACE should be
      added to each new directory created.
      If this flag is set in an ACE in an ACL attribute to be set on a
      non-directory file system object, the operation attempting to set
      the ACL SHOULD fail with NFS4ERR_ATTRNOTSUPP.

   ACE4_INHERIT_ONLY_ACE
      Can be placed on a directory but does not apply to the directory;
      ALLOW and DENY ACEs with this bit set do not affect access to the
      directory, and AUDIT and ALARM ACEs with this bit set do not
      trigger log or alarm events.  Such ACEs only take effect once they
      are applied (with this bit cleared) to newly created files and
      directories as specified by the above two flags.
      If this flag is present on an ACE, but neither
      ACE4_DIRECTORY_INHERIT_ACE nor ACE4_FILE_INHERIT_ACE is present,
      then an operation attempting to set such an attribute SHOULD fail
      with NFS4ERR_ATTRNOTSUPP.

   ACE4_NO_PROPAGATE_INHERIT_ACE
      Can be placed on a directory.  This flag tells the server that
      inheritance of this ACE should stop at newly created child
      directories.

   ACE4_SUCCESSFUL_ACCESS_ACE_FLAG

   ACE4_FAILED_ACCESS_ACE_FLAG
      The ACE4_SUCCESSFUL_ACCESS_ACE_FLAG (SUCCESS) and
      ACE4_FAILED_ACCESS_ACE_FLAG (FAILED) flag bits may be set only on
      ACE4_SYSTEM_AUDIT_ACE_TYPE (AUDIT) and ACE4_SYSTEM_ALARM_ACE_TYPE
      (ALARM) ACE types.  If during the processing of the file's ACL,
      the server encounters an AUDIT or ALARM ACE that matches the
      principal attempting the OPEN, the server notes that fact, and the
      presence, if any, of the SUCCESS and FAILED flags encountered in
      the AUDIT or ALARM ACE.  Once the server completes the ACL
      processing, it then notes if the operation succeeded or failed.
      If the operation succeeded, and if the SUCCESS flag was set for a
      matching AUDIT or ALARM ACE, then the appropriate AUDIT or ALARM
      event occurs.  If the operation failed, and if the FAILED flag was
      set for the matching AUDIT or ALARM ACE, then the appropriate
      AUDIT or ALARM event occurs.  Either or both of the SUCCESS or
      FAILED can be set, but if neither is set, the AUDIT or ALARM ACE
      is not useful.

      The previously described processing applies to ACCESS operations
      even when they return NFS4_OK.  For the purposes of AUDIT and
      ALARM, we consider an ACCESS operation to be a "failure" if it
      fails to return a bit that was requested and supported.

   ACE4_IDENTIFIER_GROUP
      Indicates that the "who" refers to a GROUP as defined under UNIX
      or a GROUP ACCOUNT as defined under Windows.  Clients and servers
      MUST ignore the ACE4_IDENTIFIER_GROUP flag on ACEs with a who
      value equal to one of the special identifiers outlined in
      Section 6.2.1.5.

6.2.1.5.  ACE Who

   The "who" field of an ACE is an identifier that specifies the
   principal or principals to whom the ACE applies.  It may refer to a
   user or a group, with the flag bit ACE4_IDENTIFIER_GROUP specifying
   which.

   There are several special identifiers which need to be understood
   universally, rather than in the context of a particular DNS domain.
   Some of these identifiers cannot be understood when an NFS client
   accesses the server, but have meaning when a local process accesses
   the file.  The ability to display and modify these permissions is
   permitted over NFS, even if none of the access methods on the server
   understands the identifiers.

   +---------------+--------------------------------------------------+
   | Who           | Description                                      |
   +---------------+--------------------------------------------------+
   | OWNER         | The owner of the file                            |
   | GROUP         | The group associated with the file.              |
   | EVERYONE      | The world, including the owner and owning group. |
   | INTERACTIVE   | Accessed from an interactive terminal.           |
   | NETWORK       | Accessed via the network.                        |
   | DIALUP        | Accessed as a dialup user to the server.         |
   | BATCH         | Accessed from a batch job.                       |
   | ANONYMOUS     | Accessed without any authentication.             |
   | AUTHENTICATED | Any authenticated user (opposite of ANONYMOUS)   |
   | SERVICE       | Access from a system service.                    |
   +---------------+--------------------------------------------------+

                                  Table 4

   To avoid conflict, these special identifiers are distinguished by an
   appended "@" and should appear in the form "xxxx@" (with no domain
   name after the "@").  For example: ANONYMOUS@.

   The ACE4_IDENTIFIER_GROUP flag MUST be ignored on entries with these
   special identifiers.  When encoding entries with these special
   identifiers, the ACE4_IDENTIFIER_GROUP flag SHOULD be set to zero.

6.2.1.5.1.  Discussion of EVERYONE@

   It is important to note that "EVERYONE@" is not equivalent to the
   UNIX "other" entity.  This is because, by definition, UNIX "other"
   does not include the owner or owning group of a file.  "EVERYONE@"
   means literally everyone, including the owner or owning group.

6.2.2.  Attribute 33: mode

   The NFSv4.0 mode attribute is based on the UNIX mode bits.  The
   following bits are defined:

   const MODE4_SUID = 0x800;  /* set user id on execution */
   const MODE4_SGID = 0x400;  /* set group id on execution */
   const MODE4_SVTX = 0x200;  /* save text even after use */
   const MODE4_RUSR = 0x100;  /* read permission: owner */
   const MODE4_WUSR = 0x080;  /* write permission: owner */
   const MODE4_XUSR = 0x040;  /* execute permission: owner */
   const MODE4_RGRP = 0x020;  /* read permission: group */
   const MODE4_WGRP = 0x010;  /* write permission: group */
   const MODE4_XGRP = 0x008;  /* execute permission: group */
   const MODE4_ROTH = 0x004;  /* read permission: other */
   const MODE4_WOTH = 0x002;  /* write permission: other */
   const MODE4_XOTH = 0x001;  /* execute permission: other */

   Bits MODE4_RUSR, MODE4_WUSR, and MODE4_XUSR apply to the principal
   identified in the owner attribute.  Bits MODE4_RGRP, MODE4_WGRP, and
   MODE4_XGRP apply to principals identified in the owner_group
   attribute but who are not identified in the owner attribute.  Bits
   MODE4_ROTH, MODE4_WOTH, MODE4_XOTH apply to any principal that does
   not match that in the owner attribute, and does not have a group
   matching that of the owner_group attribute.

   Bits within the mode other than those specified above are not defined
   by this protocol.  A server MUST NOT return bits other than those
   defined above in a GETATTR or READDIR operation, and it MUST return
   NFS4ERR_INVAL if bits other than those defined above are set in a
   SETATTR, CREATE, OPEN, VERIFY or NVERIFY operation.

6.3.  Common Methods

   The requirements in this section will be referred to in future
   sections, especially Section 6.4.

6.3.1.  Interpreting an ACL

6.3.1.1.  Server Considerations

   The server uses the algorithm described in Section 6.2.1 to determine
   whether an ACL allows access to an object.  However, the ACL may not
   be the sole determiner of access.  For example:

   o  In the case of a file system exported as read-only, the server may
      deny write permissions even though an object's ACL grants it.

   o  Server implementations MAY grant ACE4_WRITE_ACL and ACE4_READ_ACL
      permissions to prevent a situation from arising in which there is
      no valid way to ever modify the ACL.

   o  All servers will allow a user the ability to read the data of the
      file when only the execute permission is granted (i.e., If the ACL
      denies the user the ACE4_READ_DATA access and allows the user
      ACE4_EXECUTE, the server will allow the user to read the data of
      the file).

   o  Many servers have the notion of owner-override in which the owner
      of the object is allowed to override accesses that are denied by
      the ACL.  This may be helpful, for example, to allow users
      continued access to open files on which the permissions have
      changed.

   o  Many servers have the notion of a "superuser" that has privileges
      beyond an ordinary user.  The superuser may be able to read or
      write data or metadata in ways that would not be permitted by the
      ACL.

6.3.1.2.  Client Considerations

   Clients SHOULD NOT do their own access checks based on their
   interpretation the ACL, but rather use the OPEN and ACCESS operations
   to do access checks.  This allows the client to act on the results of
   having the server determine whether or not access should be granted
   based on its interpretation of the ACL.

   Clients must be aware of situations in which an object's ACL will
   define a certain access even though the server will not enforce it.
   In general, but especially in these situations, the client needs to
   do its part in the enforcement of access as defined by the ACL.  To
   do this, the client MAY send the appropriate ACCESS operation prior
   to servicing the request of the user or application in order to
   determine whether the user or application should be granted the
   access requested.  For examples in which the ACL may define accesses
   that the server doesn't enforce see Section 6.3.1.1.

6.3.2.  Computing a Mode Attribute from an ACL

   The following method can be used to calculate the MODE4_R*, MODE4_W*
   and MODE4_X* bits of a mode attribute, based upon an ACL.

   First, for each of the special identifiers OWNER@, GROUP@, and
   EVERYONE@, evaluate the ACL in order, considering only ALLOW and DENY
   ACEs for the identifier EVERYONE@ and for the identifier under
   consideration.  The result of the evaluation will be an NFSv4 ACL
   mask showing exactly which bits are permitted to that identifier.

   Then translate the calculated mask for OWNER@, GROUP@, and EVERYONE@
   into mode bits for, respectively, the user, group, and other, as
   follows:

   1.  Set the read bit (MODE4_RUSR, MODE4_RGRP, or MODE4_ROTH) if and
       only if ACE4_READ_DATA is set in the corresponding mask.

   2.  Set the write bit (MODE4_WUSR, MODE4_WGRP, or MODE4_WOTH) if and
       only if ACE4_WRITE_DATA and ACE4_APPEND_DATA are both set in the
       corresponding mask.

   3.  Set the execute bit (MODE4_XUSR, MODE4_XGRP, or MODE4_XOTH), if
       and only if ACE4_EXECUTE is set in the corresponding mask.

6.3.2.1.  Discussion

   Some server implementations also add bits permitted to named users
   and groups to the group bits (MODE4_RGRP, MODE4_WGRP, and
   MODE4_XGRP).

   Implementations are discouraged from doing this, because it has been
   found to cause confusion for users who see members of a file's group
   denied access that the mode bits appear to allow.  (The presence of
   DENY ACEs may also lead to such behavior, but DENY ACEs are expected
   to be more rarely used.)

   The same user confusion seen when fetching the mode also results if
   setting the mode does not effectively control permissions for the
   owner, group, and other users; this motivates some of the
   requirements that follow.

6.4.  Requirements

   The server that supports both mode and ACL must take care to
   synchronize the MODE4_*USR, MODE4_*GRP, and MODE4_*OTH bits with the
   ACEs which have respective who fields of "OWNER@", "GROUP@", and
   "EVERYONE@" so that the client can see semantically equivalent access
   permissions exist whether the client asks for owner, owner_group and
   mode attributes, or for just the ACL.

   In this section, much is made of the methods in Section 6.3.2.  Many
   requirements refer to this section.  But note that the methods have
   behaviors specified with "SHOULD".  This is intentional, to avoid
   invalidating existing implementations that compute the mode according
   to the withdrawn POSIX ACL draft (1003.1e draft 17), rather than by
   actual permissions on owner, group, and other.

6.4.1.  Setting the mode and/or ACL Attributes

6.4.1.1.  Setting mode and not ACL

   When any of the nine low-order mode bits are subject to change,
   either because the mode attribute was set or because the
   mode_set_masked attribute was set and the mask included one or more
   bits from the nine low-order mode bits, and no ACL attribute is
   explicitly set, the acl attribute must be modified in accordance with
   the updated value of those bits.  This must happen even if the value
   of the low-order bits is the same after the mode is set as before.

   Note that any AUDIT or ALARM ACEs are unaffected by changes to the
   mode.

   In cases in which the permissions bits are subject to change, the acl
   attribute MUST be modified such that the mode computed via the method
   in Section 6.3.2 yields the low-order nine bits (MODE4_R*, MODE4_W*,
   MODE4_X*) of the mode attribute as modified by the attribute change.
   The ACL attributes SHOULD also be modified such that:

   1.  If MODE4_RGRP is not set, entities explicitly listed in the ACL
       other than OWNER@ and EVERYONE@ SHOULD NOT be granted
       ACE4_READ_DATA.

   2.  If MODE4_WGRP is not set, entities explicitly listed in the ACL
       other than OWNER@ and EVERYONE@ SHOULD NOT be granted
       ACE4_WRITE_DATA or ACE4_APPEND_DATA.

   3.  If MODE4_XGRP is not set, entities explicitly listed in the ACL
       other than OWNER@ and EVERYONE@ SHOULD NOT be granted
       ACE4_EXECUTE.

   Access mask bits other those listed above, appearing in ALLOW ACEs,
   MAY also be disabled.

   Note that ACEs with the flag ACE4_INHERIT_ONLY_ACE set do not affect
   the permissions of the ACL itself, nor do ACEs of the type AUDIT and
   ALARM.  As such, it is desirable to leave these ACEs unmodified when
   modifying the ACL attributes.

   Also note that the requirement may be met by discarding the acl in
   favor of an ACL that represents the mode and only the mode.  This is
   permitted, but it is preferable for a server to preserve as much of
   the ACL as possible without violating the above requirements.
   Discarding the ACL makes it effectively impossible for a file created
   with a mode attribute to inherit an ACL (see Section 6.4.3).

6.4.1.2.  Setting ACL and not mode

   When setting the acl and not setting the mode or mode_set_masked
   attributes, the permission bits of the mode need to be derived from
   the ACL.  In this case, the ACL attribute SHOULD be set as given.
   The nine low-order bits of the mode attribute (MODE4_R*, MODE4_W*,
   MODE4_X*) MUST be modified to match the result of the method
   Section 6.3.2.  The three high-order bits of the mode (MODE4_SUID,
   MODE4_SGID, MODE4_SVTX) SHOULD remain unchanged.

6.4.1.3.  Setting both ACL and mode

   When setting both the mode (includes use of either the mode attribute
   or the mode_set_masked attribute) and the acl attribute in the same
   operation, the attributes MUST be applied in this order: mode (or
   mode_set_masked), then ACL.  The mode-related attribute is set as
   given, then the ACL attribute is set as given, possibly changing the
   final mode, as described above in Section 6.4.1.2.

6.4.2.  Retrieving the mode and/or ACL Attributes

   This section applies only to servers that support both the mode and
   ACL attributes.

   Some server implementations may have a concept of "objects without
   ACLs", meaning that all permissions are granted and denied according
   to the mode attribute, and that no ACL attribute is stored for that
   object.  If an ACL attribute is requested of such a server, the
   server SHOULD return an ACL that does not conflict with the mode;
   that is to say, the ACL returned SHOULD represent the nine low-order
   bits of the mode attribute (MODE4_R*, MODE4_W*, MODE4_X*) as
   described in Section 6.3.2.

   For other server implementations, the ACL attribute is always present
   for every object.  Such servers SHOULD store at least the three high-
   order bits of the mode attribute (MODE4_SUID, MODE4_SGID,
   MODE4_SVTX).  The server SHOULD return a mode attribute if one is
   requested, and the low-order nine bits of the mode (MODE4_R*,
   MODE4_W*, MODE4_X*) MUST match the result of applying the method in
   Section 6.3.2 to the ACL attribute.

6.4.3.  Creating New Objects

   If a server supports any ACL attributes, it may use the ACL
   attributes on the parent directory to compute an initial ACL
   attribute for a newly created object.  This will be referred to as
   the inherited ACL within this section.  The act of adding one or more
   ACEs to the inherited ACL that are based upon ACEs in the parent
   directory's ACL will be referred to as inheriting an ACE within this
   section.

   Implementors should standardize on what the behavior of CREATE and
   OPEN must be depending on the presence or absence of the mode and ACL
   attributes.

   1.  If just the mode is given in the call:

       In this case, inheritance SHOULD take place, but the mode MUST be
       applied to the inherited ACL as described in Section 6.4.1.1,
       thereby modifying the ACL.

   2.  If just the ACL is given in the call:

       In this case, inheritance SHOULD NOT take place, and the ACL as
       defined in the CREATE or OPEN will be set without modification,
       and the mode modified as in Section 6.4.1.2

   3.  If both mode and ACL are given in the call:

       In this case, inheritance SHOULD NOT take place, and both
       attributes will be set as described in Section 6.4.1.3.

   4.  If neither mode nor ACL are given in the call:

       In the case where an object is being created without any initial
       attributes at all, e.g., an OPEN operation with an opentype4 of
       OPEN4_CREATE and a createmode4 of EXCLUSIVE4, inheritance SHOULD
       NOT take place.  Instead, the server SHOULD set permissions to
       deny all access to the newly created object.  It is expected that
       the appropriate client will set the desired attributes in a
       subsequent SETATTR operation, and the server SHOULD allow that
       operation to succeed, regardless of what permissions the object
       is created with.  For example, an empty ACL denies all
       permissions, but the server should allow the owner's SETATTR to
       succeed even though WRITE_ACL is implicitly denied.

       In other cases, inheritance SHOULD take place, and no
       modifications to the ACL will happen.  The mode attribute, if
       supported, MUST be as computed in Section 6.3.2, with the
       MODE4_SUID, MODE4_SGID and MODE4_SVTX bits clear.  If no
       inheritable ACEs exist on the parent directory, the rules for
       creating acl attributes are implementation defined.

6.4.3.1.  The Inherited ACL

   If the object being created is not a directory, the inherited ACL
   SHOULD NOT inherit ACEs from the parent directory ACL unless the
   ACE4_FILE_INHERIT_FLAG is set.

   If the object being created is a directory, the inherited ACL should
   inherit all inheritable ACEs from the parent directory, those that
   have ACE4_FILE_INHERIT_ACE or ACE4_DIRECTORY_INHERIT_ACE flag set.
   If the inheritable ACE has ACE4_FILE_INHERIT_ACE set, but
   ACE4_DIRECTORY_INHERIT_ACE is clear, the inherited ACE on the newly
   created directory MUST have the ACE4_INHERIT_ONLY_ACE flag set to
   prevent the directory from being affected by ACEs meant for non-
   directories.

   When a new directory is created, the server MAY split any inherited
   ACE which is both inheritable and effective (in other words, which
   has neither ACE4_INHERIT_ONLY_ACE nor ACE4_NO_PROPAGATE_INHERIT_ACE
   set), into two ACEs, one with no inheritance flags, and one with
   ACE4_INHERIT_ONLY_ACE set.  This makes it simpler to modify the
   effective permissions on the directory without modifying the ACE
   which is to be inherited to the new directory's children.

7.  Multi-Server Namespace

   NFSv4 supports attributes that allow a namespace to extend beyond the
   boundaries of a single server.  It is RECOMMENDED that clients and
   servers support construction of such multi-server namespaces.  Use of
   such multi-server namespaces is OPTIONAL, however, and for many
   purposes, single-server namespaces are perfectly acceptable.  Use of
   multi-server namespaces can provide many advantages, however, by
   separating a file system's logical position in a namespace from the
   (possibly changing) logistical and administrative considerations that
   result in particular file systems being located on particular
   servers.

7.1.  Location Attributes

   NFSv4 contains RECOMMENDED attributes that allow file systems on one
   server to be associated with one or more instances of that file
   system on other servers.  These attributes specify such file system
   instances by specifying a server address target (either as a DNS name
   representing one or more IP addresses or as a literal IP address)
   together with the path of that file system within the associated
   single-server namespace.

   The fs_locations RECOMMENDED attribute allows specification of the
   file system locations where the data corresponding to a given file
   system may be found.

7.2.  File System Presence or Absence

   A given location in an NFSv4 namespace (typically but not necessarily
   a multi-server namespace) can have a number of file system instance
   locations associated with it via the fs_locations attribute.  There
   may also be an actual current file system at that location,
   accessible via normal namespace operations (e.g., LOOKUP).  In this
   case, the file system is said to be "present" at that position in the
   namespace, and clients will typically use it, reserving use of
   additional locations specified via the location-related attributes to
   situations in which the principal location is no longer available.

   When there is no actual file system at the namespace location in
   question, the file system is said to be "absent".  An absent file
   system contains no files or directories other than the root.  Any
   reference to it, except to access a small set of attributes useful in
   determining alternate locations, will result in an error,
   NFS4ERR_MOVED.  Note that if the server ever returns the error
   NFS4ERR_MOVED, it MUST support the fs_locations attribute.

   While the error name suggests that we have a case of a file system
   that once was present, and has only become absent later, this is only
   one possibility.  A position in the namespace may be permanently
   absent with the set of file system(s) designated by the location
   attributes being the only realization.  The name NFS4ERR_MOVED
   reflects an earlier, more limited conception of its function, but
   this error will be returned whenever the referenced file system is
   absent, whether it has moved or not.

   Except in the case of GETATTR-type operations (to be discussed
   later), when the current filehandle at the start of an operation is
   within an absent file system, that operation is not performed and the
   error NFS4ERR_MOVED is returned, to indicate that the file system is
   absent on the current server.

   Because a GETFH cannot succeed if the current filehandle is within an
   absent file system, filehandles within an absent file system cannot
   be transferred to the client.  When a client does have filehandles
   within an absent file system, it is the result of obtaining them when
   the file system was present, and having the file system become absent
   subsequently.

   It should be noted that because the check for the current filehandle
   being within an absent file system happens at the start of every
   operation, operations that change the current filehandle so that it
   is within an absent file system will not result in an error.  This
   allows such combinations as PUTFH-GETATTR and LOOKUP-GETATTR to be
   used to get attribute information, particularly location attribute
   information, as discussed below.

7.3.  Getting Attributes for an Absent File System

   When a file system is absent, most attributes are not available, but
   it is necessary to allow the client access to the small set of
   attributes that are available, and most particularly that which gives
   information about the correct current locations for this file system,
   fs_locations.

7.3.1.  GETATTR Within an Absent File System

   As mentioned above, an exception is made for GETATTR in that
   attributes may be obtained for a filehandle within an absent file
   system.  This exception only applies if the attribute mask contains
   at least the fs_locations attribute bit, which indicates the client
   is interested in a result regarding an absent file system.  If it is
   not requested, GETATTR will result in an NFS4ERR_MOVED error.

   When a GETATTR is done on an absent file system, the set of supported
   attributes is very limited.  Many attributes, including those that
   are normally REQUIRED, will not be available on an absent file
   system.  In addition to the fs_locations attribute, the following
   attributes SHOULD be available on absent file systems.  In the case
   of RECOMMENDED attributes, they should be available at least to the
   same degree that they are available on present file systems.

   fsid:  This attribute should be provided so that the client can
      determine file system boundaries, including, in particular, the
      boundary between present and absent file systems.  This value must
      be different from any other fsid on the current server and need
      have no particular relationship to fsids on any particular
      destination to which the client might be directed.

   mounted_on_fileid:  For objects at the top of an absent file system,
      this attribute needs to be available.  Since the fileid is within
      the present parent file system, there should be no need to
      reference the absent file system to provide this information.

   Other attributes SHOULD NOT be made available for absent file
   systems, even when it is possible to provide them.  The server should
   not assume that more information is always better and should avoid
   gratuitously providing additional information.

   When a GETATTR operation includes a bit mask for the attribute
   fs_locations, but where the bit mask includes attributes that are not
   supported, GETATTR will not return an error, but will return the mask
   of the actual attributes supported with the results.

   Handling of VERIFY/NVERIFY is similar to GETATTR in that if the
   attribute mask does not include fs_locations the error NFS4ERR_MOVED
   will result.  It differs in that any appearance in the attribute mask
   of an attribute not supported for an absent file system (and note
   that this will include some normally REQUIRED attributes) will also
   cause an NFS4ERR_MOVED result.

7.3.2.  READDIR and Absent File Systems

   A READDIR performed when the current filehandle is within an absent
   file system will result in an NFS4ERR_MOVED error, since, unlike the
   case of GETATTR, no such exception is made for READDIR.

   Attributes for an absent file system may be fetched via a READDIR for
   a directory in a present file system, when that directory contains
   the root directories of one or more absent file systems.  In this
   case, the handling is as follows:

   o  If the attribute set requested includes fs_locations, then
      fetching of attributes proceeds normally and no NFS4ERR_MOVED
      indication is returned, even when the rdattr_error attribute is
      requested.

   o  If the attribute set requested does not include fs_locations, then
      if the rdattr_error attribute is requested, each directory entry
      for the root of an absent file system will report NFS4ERR_MOVED as
      the value of the rdattr_error attribute.

   o  If the attribute set requested does not include either of the
      attributes fs_locations or rdattr_error then the occurrence of the
      root of an absent file system within the directory will result in
      the READDIR failing with an NFS4ERR_MOVED error.

   o  The unavailability of an attribute because of a file system's
      absence, even one that is ordinarily REQUIRED, does not result in
      any error indication.  The set of attributes returned for the root
      directory of the absent file system in that case is simply
      restricted to those actually available.

7.4.  Uses of Location Information

   The location-bearing attribute of fs_locations provides, together
   with the possibility of absent file systems, a number of important
   facilities in providing reliable, manageable, and scalable data
   access.

   When a file system is present, these attributes can provide
   alternative locations, to be used to access the same data, in the
   event of server failures, communications problems, or other
   difficulties that make continued access to the current file system
   impossible or otherwise impractical.  Under some circumstances,
   multiple alternative locations may be used simultaneously to provide
   higher-performance access to the file system in question.  Provision
   of such alternate locations is referred to as "replication" although
   there are cases in which replicated sets of data are not in fact
   present, and the replicas are instead different paths to the same
   data.

   When a file system is present and becomes absent, clients can be
   given the opportunity to have continued access to their data, at an
   alternate location.  In this case, a continued attempt to use the
   data in the now-absent file system will result in an NFS4ERR_MOVED
   error and, at that point, the successor locations (typically only one
   although multiple choices are possible) can be fetched and used to
   continue access.  Transfer of the file system contents to the new
   location is referred to as "migration", but it should be kept in mind
   that there are cases in which this term can be used, like
   "replication", when there is no actual data migration per se.

   Where a file system was not previously present, specification of file
   system location provides a means by which file systems located on one
   server can be associated with a namespace defined by another server,
   thus allowing a general multi-server namespace facility.  A
   designation of such a location, in place of an absent file system, is
   called a "referral".

   Because client support for location-related attributes is OPTIONAL, a
   server may (but is not required to) take action to hide migration and
   referral events from such clients, by acting as a proxy, for example.

7.4.1.  File System Replication

   The fs_locations attribute provides alternative locations, to be used
   to access data in place of or in addition to the current file system
   instance.  On first access to a file system, the client should obtain
   the value of the set of alternate locations by interrogating the
   fs_locations attribute.

   In the event that server failures, communications problems, or other
   difficulties make continued access to the current file system
   impossible or otherwise impractical, the client can use the alternate
   locations as a way to get continued access to its data.  Multiple
   locations may be used simultaneously, to provide higher performance
   through the exploitation of multiple paths between client and target
   file system.

   The alternate locations may be physical replicas of the (typically
   read-only) file system data, or they may reflect alternate paths to
   the same server or provide for the use of various forms of server
   clustering in which multiple servers provide alternate ways of
   accessing the same physical file system.  How these different modes
   of file system transition are represented within the fs_locations
   attribute and how the client deals with file system transition issues
   will be discussed in detail below.

   Multiple server addresses, whether they are derived from a single
   entry with a DNS name representing a set of IP addresses or from
   multiple entries each with its own server address, may correspond to
   the same actual server.

7.4.2.  File System Migration

   When a file system is present and becomes absent, clients can be
   given the opportunity to have continued access to their data, at an
   alternate location, as specified by the fs_locations attribute.
   Typically, a client will be accessing the file system in question,
   get an NFS4ERR_MOVED error, and then use the fs_locations attribute
   to determine the new location of the data.

   Such migration can be helpful in providing load balancing or general
   resource reallocation.  The protocol does not specify how the file
   system will be moved between servers.  It is anticipated that a
   number of different server-to-server transfer mechanisms might be
   used with the choice left to the server implementor.  The NFSv4
   protocol specifies the method used to communicate the migration event
   between client and server.

   The new location may be an alternate communication path to the same
   server or, in the case of various forms of server clustering, another
   server providing access to the same physical file system.  The
   client's responsibilities in dealing with this transition depend on
   the specific nature of the new access path as well as how and whether
   data was in fact migrated.  These issues will be discussed in detail
   below.

   When an alternate location is designated as the target for migration,
   it must designate the same data.  Where file systems are writable, a
   change made on the original file system must be visible on all
   migration targets.  Where a file system is not writable but
   represents a read-only copy (possibly periodically updated) of a
   writable file system, similar requirements apply to the propagation
   of updates.  Any change visible in the original file system must
   already be effected on all migration targets, to avoid any
   possibility that a client, in effecting a transition to the migration
   target, will see any reversion in file system state.

7.4.3.  Referrals

   Referrals provide a way of placing a file system in a location within
   the namespace essentially without respect to its physical location on
   a given server.  This allows a single server or a set of servers to
   present a multi-server namespace that encompasses file systems
   located on multiple servers.  Some likely uses of this include
   establishment of site-wide or organization-wide namespaces, or even
   knitting such together into a truly global namespace.

   Referrals occur when a client determines, upon first referencing a
   position in the current namespace, that it is part of a new file
   system and that the file system is absent.  When this occurs,
   typically by receiving the error NFS4ERR_MOVED, the actual location
   or locations of the file system can be determined by fetching the
   fs_locations attribute.

   The locations-related attribute may designate a single file system
   location or multiple file system locations, to be selected based on
   the needs of the client.

   Use of multi-server namespaces is enabled by NFSv4 but is not
   required.  The use of multi-server namespaces and their scope will
   depend on the applications used and system administration
   preferences.

   Multi-server namespaces can be established by a single server
   providing a large set of referrals to all of the included file
   systems.  Alternatively, a single multi-server namespace may be
   administratively segmented with separate referral file systems (on
   separate servers) for each separately administered portion of the
   namespace.  The top-level referral file system or any segment may use
   replicated referral file systems for higher availability.

   Generally, multi-server namespaces are for the most part uniform, in
   that the same data made available to one client at a given location
   in the namespace is made available to all clients at that location.

7.5.  Location Entries and Server Identity

   As mentioned above, a single location entry may have a server address
   target in the form of a DNS name that may represent multiple IP
   addresses, while multiple location entries may have their own server
   address targets that reference the same server.

   When multiple addresses for the same server exist, the client may
   assume that for each file system in the namespace of a given server
   network address, there exist file systems at corresponding namespace
   locations for each of the other server network addresses.  It may do
   this even in the absence of explicit listing in fs_locations.  Such
   corresponding file system locations can be used as alternate
   locations, just as those explicitly specified via the fs_locations
   attribute.

   If a single location entry designates multiple server IP addresses,
   the client cannot assume that these addresses are multiple paths to
   the same server.  In most cases, they will be, but the client MUST
   verify that before acting on that assumption.  When two server
   addresses are designated by a single location entry and they
   correspond to different servers, this normally indicates some sort of
   misconfiguration, and so the client should avoid using such location
   entries when alternatives are available.  When they are not, clients
   should pick one of IP addresses and use it, without using others that
   are not directed to the same server.

7.6.  Additional Client-Side Considerations

   When clients make use of servers that implement referrals,
   replication, and migration, care should be taken that a user who
   mounts a given file system that includes a referral or a relocated
   file system continues to see a coherent picture of that user-side
   file system despite the fact that it contains a number of server-side
   file systems that may be on different servers.

   One important issue is upward navigation from the root of a server-
   side file system to its parent (specified as ".." in UNIX), in the
   case in which it transitions to that file system as a result of
   referral, migration, or a transition as a result of replication.
   When the client is at such a point, and it needs to ascend to the
   parent, it must go back to the parent as seen within the multi-server
   namespace rather than sending a LOOKUPP operation to the server,
   which would result in the parent within that server's single-server
   namespace.  In order to do this, the client needs to remember the
   filehandles that represent such file system roots and use these
   instead of issuing a LOOKUPP operation to the current server.  This
   will allow the client to present to applications a consistent
   namespace, where upward navigation and downward navigation are
   consistent.

   Another issue concerns refresh of referral locations.  When referrals
   are used extensively, they may change as server configurations
   change.  It is expected that clients will cache information related
   to traversing referrals so that future client-side requests are
   resolved locally without server communication.  This is usually
   rooted in client-side name look up caching.  Clients should
   periodically purge this data for referral points in order to detect
   changes in location information.

7.7.  Effecting File System Transitions

   Transitions between file system instances, whether due to switching
   between replicas upon server unavailability or to server-initiated
   migration events, are best dealt with together.  This is so even
   though, for the server, pragmatic considerations will normally force
   different implementation strategies for planned and unplanned
   transitions.  Even though the prototypical use cases of replication
   and migration contain distinctive sets of features, when all
   possibilities for these operations are considered, there is an
   underlying unity of these operations, from the client's point of
   view, that makes treating them together desirable.

   A number of methods are possible for servers to replicate data and to
   track client state in order to allow clients to transition between
   file system instances with a minimum of disruption.  Such methods
   vary between those that use inter-server clustering techniques to
   limit the changes seen by the client, to those that are less
   aggressive, use more standard methods of replicating data, and impose
   a greater burden on the client to adapt to the transition.

   The NFSv4 protocol does not impose choices on clients and servers
   with regard to that spectrum of transition methods.  In fact, there
   are many valid choices, depending on client and application
   requirements and their interaction with server implementation
   choices.  The NFSv4.0 protocol does not provide the servers a means
   of communicating the transition methods.  In the NFSv4.1 protocol
   [31], an additional attribute "fs_locations_info" is presented, which
   will define the specific choices that can be made, how these choices
   are communicated to the client, and how the client is to deal with
   any discontinuities.

   In the sections below, references will be made to various possible
   server implementation choices as a way of illustrating the transition
   scenarios that clients may deal with.  The intent here is not to
   define or limit server implementations but rather to illustrate the
   range of issues that clients may face.  Again, as the NFSv4.0
   protocol does not have an explict explicit means of communicating these
   issues to the client, the intent is to document the problems that can
   be faced in a multi-server name space and allow the client to use the
   inferred transitions available via fs_locations and other attributes
   (see Section 7.9.1).

   In the discussion below, references will be made to a file system
   having a particular property or to two file systems (typically the
   source and destination) belonging to a common class of any of several
   types.  Two file systems that belong to such a class share some
   important aspects of file system behavior that clients may depend
   upon when present, to easily effect a seamless transition between
   file system instances.  Conversely, where the file systems do not
   belong to such a common class, the client has to deal with various
   sorts of implementation discontinuities that may cause performance or
   other issues in effecting a transition.

   While fs_locations is available, default assumptions with regard to
   such classifications have to be inferred (see Section 7.9.1 for
   details).

   In cases in which one server is expected to accept opaque values from
   the client that originated from another server, the servers SHOULD
   encode the "opaque" values in big-endian byte order.  If this is
   done, servers acting as replicas or immigrating file systems will be
   able to parse values like stateids, directory cookies, filehandles,
   etc., even if their native byte order is different from that of other
   servers cooperating in the replication and migration of the file
   system.

7.7.1.  File System Transitions and Simultaneous Access

   When a single file system may be accessed at multiple locations,
   either because of an indication of file system identity as reported
   by the fs_locations attribute, the client will, depending on specific
   circumstances as discussed below, either:

   o  Access multiple instances simultaneously, each of which represents
      an alternate path to the same data and metadata.

   o  Acesses  Accesses one instance (or set of instances) and then transition to
      an alternative instance (or set of instances) as a result of
      network issues, server unresponsiveness, or server-directed
      migration.

7.7.2.  Filehandles and File System Transitions

   There are a number of ways in which filehandles can be handled across
   a file system transition.  These can be divided into two broad
   classes depending upon whether the two file systems across which the
   transition happens share sufficient state to effect some sort of
   continuity of file system handling.

   When there is no such cooperation in filehandle assignment, the two
   file systems are reported as being in different handle classes.  In
   this case, all filehandles are assumed to expire as part of the file
   system transition.  Note that this behavior does not depend on
   fh_expire_type attribute and depends on the specification of the
   FH4_VOL_MIGRATION bit.

   When there is co-operation in filehandle assignment, the two file
   systems are reported as being in the same handle classes.  In this
   case, persistent filehandles remain valid after the file system
   transition, while volatile filehandles (excluding those that are only
   volatile due to the FH4_VOL_MIGRATION bit) are subject to expiration
   on the target server.

7.7.3.  Fileids and File System Transitions

   The issue of continuity of fileids in the event of a file system
   transition needs to be addressed.  The general expectation is that in
   situations in which the two file system instances are created by a
   single vendor using some sort of file system image copy, fileids will
   be consistent across the transition, while in the analogous multi-
   vendor transitions they will not.  This poses difficulties,
   especially for the client without special knowledge of the transition
   mechanisms adopted by the server.  Note that although fileid is not a
   REQUIRED attribute, many servers support fileids and many clients
   provide APIs that depend on fileids.

   It is important to note that while clients themselves may have no
   trouble with a fileid changing as a result of a file system
   transition event, applications do typically have access to the fileid
   (e.g., via stat).  The result is that an application may work
   perfectly well if there is no file system instance transition or if
   any such transition is among instances created by a single vendor,
   yet be unable to deal with the situation in which a multi-vendor
   transition occurs at the wrong time.

   Providing the same fileids in a multi-vendor (multiple server
   vendors) environment has generally been held to be quite difficult.
   While there is work to be done, it needs to be pointed out that this
   difficulty is partly self-imposed.  Servers have typically identified
   fileid with inode number, i.e., with a quantity used to find the file
   in question.  This identification poses special difficulties for
   migration of a file system between vendors where assigning the same
   index to a given file may not be possible.  Note here that a fileid
   is not required to be useful to find the file in question, only that
   it is unique within the given file system.  Servers prepared to
   accept a fileid as a single piece of metadata and store it apart from
   the value used to index the file information can relatively easily
   maintain a fileid value across a migration event, allowing a truly
   transparent migration event.

   In any case, where servers can provide continuity of fileids, they
   should, and the client should be able to find out that such
   continuity is available and take appropriate action.  Information
   about the continuity (or lack thereof) of fileids across a file
   system transition is represented by specifying whether the file
   systems in question are of the same fileid class.

   Note that when consistent fileids do not exist across a transition
   (either because there is no continuity of fileids or because fileid
   is not a supported attribute on one of instances involved), and there
   are no reliable filehandles across a transition event (either because
   there is no filehandle continuity or because the filehandles are
   volatile), the client is in a position where it cannot verify that
   files it was accessing before the transition are the same objects.
   It is forced to assume that no object has been renamed, and, unless
   there are guarantees that provide this (e.g., the file system is
   read-only), problems for applications may occur.  Therefore, use of
   such configurations should be limited to situations where the
   problems that this may cause can be tolerated.

7.7.4.  Fsids and File System Transitions

   Since fsids are generally only unique within a per-server basis, it
   is likely that they will change during a file system transition.
   Clients should not make the fsids received from the server visible to
   applications since they may not be globally unique, and because they
   may change during a file system transition event.  Applications are
   best served if they are isolated from such transitions to the extent
   possible.

7.7.5.  The Change Attribute and File System Transitions

   Since the change attribute is defined as a server-specific one,
   change attributes fetched from one server are normally presumed to be
   invalid on another server.  Such a presumption is troublesome since
   it would invalidate all cached change attributes, requiring
   refetching.  Even more disruptive, the absence of any assured
   continuity for the change attribute means that even if the same value
   is retrieved on refetch, no conclusions can be drawn as to whether
   the object in question has changed.  The identical change attribute
   could be merely an artifact of a modified file with a different
   change attribute construction algorithm, with that new algorithm just
   happening to result in an identical change value.

   When the two file systems have consistent change attribute formats,
   and we say that they are in the same change class, the client may
   assume a continuity of change attribute construction and handle this
   situation just as it would be handled without any file system
   transition.

7.7.6.  Lock State and File System Transitions

   In a file system transition, the client needs to handle cases in
   which the two servers have cooperated in state management and in
   which they have not.  Cooperation by two servers in state management
   requires coordination of client IDs.  Before the client attempts to
   use a client ID associated with one server in a request to the server
   of the other file system, it must eliminate the possibility that two
   non-cooperating servers have assigned the same client ID by accident.

   In the case of migration, the servers involved in the migration of a
   file system SHOULD transfer all server state from the original to the
   new server.  When this is done, it must be done in a way that is
   transparent to the client.  With replication, such a degree of common
   state is typically not the case.

   This state transfer will reduce disruption to the client when a file
   system transition occurs.  If the servers are successful in
   transferring all state, the client can attempt to establish sessions
   associated with the client ID used for the source file system
   instance.  If the server accepts that as a valid client ID, then the
   client may use the existing stateids associated with that client ID
   for the old file system instance in connection with that same client
   ID in connection with the transitioned file system instance.

   File systems cooperating in state management may actually share state
   or simply divide the identifier space so as to recognize (and reject
   as stale) each other's stateids and client IDs.  Servers that do
   share state may not do so under all conditions or at all times.  If
   the server cannot be sure when accepting a client ID that it reflects
   the locks the client was given, the server must treat all associated
   state as stale and report it as such to the client.

   The client must establish a new client ID on the destination, if it
   does not have one already, and reclaim locks if allowed by the
   server.  In this case, old stateids and client IDs should not be
   presented to the new server since there is no assurance that they
   will not conflict with IDs valid on that server.

   When actual locks are not known to be maintained, the destination
   server may establish a grace period specific to the given file
   system, with non-reclaim locks being rejected for that file system,
   even though normal locks are being granted for other file systems.
   Clients should not infer the absence of a grace period for file
   systems being transitioned to a server from responses to requests for
   other file systems.

   In the case of lock reclamation for a given file system after a file
   system transition, edge conditions can arise similar to those for
   reclaim after server restart (although in the case of the planned
   state transfer associated with migration, these can be avoided by
   securely recording lock state as part of state migration).  Unless
   the destination server can guarantee that locks will not be
   incorrectly granted, the destination server should not allow lock
   reclaims and should avoid establishing a grace period.  (See
   Section 9.14 for further details.)

   Information about client identity may be propagated between servers
   in the form of client_owner4 and associated verifiers, under the
   assumption that the client presents the same values to all the
   servers with which it deals.

   Servers are encouraged to provide facilities to allow locks to be
   reclaimed on the new server after a file system transition.  Often
   such facilities may not be available and client should be prepared to
   re-obtain locks, even though it is possible that the client may have
   its LOCK or OPEN request denied due to a conflicting lock.

   The consequences of having no facilities available to reclaim locks
   on the new server will depend on the type of environment.  In some
   environments, such as the transition between read-only file systems,
   such denial of locks should not pose large difficulties in practice.
   When an attempt to re-establish a lock on a new server is denied, the
   client should treat the situation as if its original lock had been
   revoked.  Note that when the lock is granted, the client cannot
   assume that no conflicting lock could have been granted in the
   interim.  Where change attribute continuity is present, the client
   may check the change attribute to check for unwanted file
   modifications.  Where even this is not available, and the file system
   is not read-only, a client may reasonably treat all pending locks as
   having been revoked.

7.7.6.1.  Transitions and the Lease_time Attribute

   In order that the client may appropriately manage its lease in the
   case of a file system transition, the destination server must
   establish proper values for the lease_time attribute.

   When state is transferred transparently, that state should include
   the correct value of the lease_time attribute.  The lease_time
   attribute on the destination server must never be less than that on
   the source, since this would result in premature expiration of a
   lease granted by the source server.  Upon transitions in which state
   is transferred transparently, the client is under no obligation to
   refetch the lease_time attribute and may continue to use the value
   previously fetched (on the source server).

   If state has not been transferred transparently because the client ID
   is rejected when presented to the new server, the client should fetch
   the value of lease_time on the new (i.e., destination) server, and
   use it for subsequent locking requests.  However, the server must
   respect a grace period of at least as long as the lease_time on the
   source server, in order to ensure that clients have ample time to
   reclaim their lock before potentially conflicting non-reclaimed locks
   are granted.

7.7.7.  Write Verifiers and File System Transitions

   In a file system transition, the two file systems may be clustered in
   the handling of unstably written data.  When this is the case, and
   the two file systems belong to the same write-verifier class, write
   verifiers returned from one system may be compared to those returned
   by the other and superfluous writes avoided.

   When two file systems belong to different write-verifier classes, any
   verifier generated by one must not be compared to one provided by the
   other.  Instead, it should be treated as not equal even when the
   values are identical.

7.7.8.  Readdir Cookies and Verifiers and File System Transitions

   In a file system transition, the two file systems may be consistent
   in their handling of READDIR cookies and verifiers.  When this is the
   case, and the two file systems belong to the same readdir class,
   READDIR cookies and verifiers from one system may be recognized by
   the other and READDIR operations started on one server may be validly
   continued on the other, simply by presenting the cookie and verifier
   returned by a READDIR operation done on the first file system to the
   second.

   When two file systems belong to different readdir classes, any
   READDIR cookie and verifier generated by one is not valid on the
   second, and must not be presented to that server by the client.  The
   client should act as if the verifier was rejected.

7.7.9.  File System Data and File System Transitions

   When multiple replicas exist and are used simultaneously or in
   succession by a client, applications using them will normally expect
   that they contain either the same data or data that is consistent
   with the normal sorts of changes that are made by other clients
   updating the data of the file system (with metadata being the same to
   the degree inferred by the fs_locations attribute).  However, when
   multiple file systems are presented as replicas of one another, the
   precise relationship between the data of one and the data of another
   is not, as a general matter, specified by the NFSv4 protocol.  It is
   quite possible to present as replicas file systems where the data of
   those file systems is sufficiently different that some applications
   have problems dealing with the transition between replicas.  The
   namespace will typically be constructed so that applications can
   choose an appropriate level of support, so that in one position in
   the namespace a varied set of replicas will be listed, while in
   another only those that are up-to-date may be considered replicas.
   The protocol does define four special cases of the relationship among
   replicas to be specified by the server and relied upon by clients:

   o  When multiple server addresses correspond to the same actual
      server, the client may depend on the fact that changes to data,
      metadata, or locks made on one file system are immediately
      reflected on others.

   o  When multiple replicas exist and are used simultaneously by a
      client, they must designate the same data.  Where file systems are
      writable, a change made on one instance must be visible on all
      instances, immediately upon the earlier of the return of the
      modifying requester or the visibility of that change on any of the
      associated replicas.  This allows a client to use these replicas
      simultaneously without any special adaptation to the fact that
      there are multiple replicas.  In this case, locks (whether share
      reservations or byte-range locks), and delegations obtained on one
      replica are immediately reflected on all replicas, even though
      these locks will be managed under a set of client IDs.

   o  When one replica is designated as the successor instance to
      another existing instance after return NFS4ERR_MOVED (i.e., the
      case of migration), the client may depend on the fact that all
      changes written to stable storage on the original instance are
      written to stable storage of the successor (uncommitted writes are
      dealt with in Section 7.7.7).

   o  Where a file system is not writable but represents a read-only
      copy (possibly periodically updated) of a writable file system,
      clients have similar requirements with regard to the propagation
      of updates.  They may need a guarantee that any change visible on
      the original file system instance must be immediately visible on
      any replica before the client transitions access to that replica,
      in order to avoid any possibility that a client, in effecting a
      transition to a replica, will see any reversion in file system
      state.  Since these file systems are presumed to be unsuitable for
      simultaneous use, there is no specification of how locking is
      handled; in general, locks obtained on one file system will be
      separate from those on others.  Since these are going to be read-
      only file systems, this is not expected to pose an issue for
      clients or applications.

7.8.  Effecting File System Referrals

   Referrals are effected when an absent file system is encountered, and
   one or more alternate locations are made available by the
   fs_locations attribute.  The client will typically get an
   NFS4ERR_MOVED error, fetch the appropriate location information, and
   proceed to access the file system on a different server, even though
   it retains its logical position within the original namespace.
   Referrals differ from migration events in that they happen only when
   the client has not previously referenced the file system in question
   (so there is nothing to transition).  Referrals can only come into
   effect when an absent file system is encountered at its root.

   The examples given in the sections below are somewhat artificial in
   that an actual client will not typically do a multi-component look
   up, but will have cached information regarding the upper levels of
   the name hierarchy.  However, these example are chosen to make the
   required behavior clear and easy to put within the scope of a small
   number of requests, without getting unduly into details of how
   specific clients might choose to cache things.

7.8.1.  Referral Example (LOOKUP)

   Let us suppose that the following COMPOUND is sent in an environment
   in which /this/is/the/path is absent from the target server.  This
   may be for a number of reasons.  It may be the case that the file
   system has moved, or it may be the case that the target server is
   functioning mainly, or solely, to refer clients to the servers on
   which various file systems are located.

   o  PUTROOTFH

   o  LOOKUP "this"

   o  LOOKUP "is"
   o  LOOKUP "the"

   o  LOOKUP "path"

   o  GETFH

   o  GETATTR(fsid,fileid,size,time_modify)

   Under the given circumstances, the following will be the result.

   o  PUTROOTFH --> NFS_OK.  The current fh is now the root of the
      pseudo-fs.

   o  LOOKUP "this" --> NFS_OK.  The current fh is for /this and is
      within the pseudo-fs.

   o  LOOKUP "is" --> NFS_OK.  The current fh is for /this/is and is
      within the pseudo-fs.

   o  LOOKUP "the" --> NFS_OK.  The current fh is for /this/is/the and
      is within the pseudo-fs.

   o  LOOKUP "path" --> NFS_OK.  The current fh is for /this/is/the/path
      and is within a new, absent file system, but ... the client will
      never see the value of that fh.

   o  GETFH --> NFS4ERR_MOVED.  Fails because current fh is in an absent
      file system at the start of the operation, and the specification
      makes no exception for GETFH.

   o  GETATTR(fsid,fileid,size,time_modify) Not executed because the
      failure of the GETFH stops processing of the COMPOUND.

   Given the failure of the GETFH, the client has the job of determining
   the root of the absent file system and where to find that file
   system, i.e., the server and path relative to that server's root fh.
   Note here that in this example, the client did not obtain filehandles
   and attribute information (e.g., fsid) for the intermediate
   directories, so that it would not be sure where the absent file
   system starts.  It could be the case, for example, that /this/is/the
   is the root of the moved file system and that the reason that the
   look up of "path" succeeded is that the file system was not absent on
   that operation but was moved between the last LOOKUP and the GETFH
   (since COMPOUND is not atomic).  Even if we had the fsids for all of
   the intermediate directories, we could have no way of knowing that
   /this/is/the/path was the root of a new file system, since we don't
   yet have its fsid.

   In order to get the necessary information, let us re-send the chain
   of LOOKUPs with GETFHs and GETATTRs to at least get the fsids so we
   can be sure where the appropriate file system boundaries are.  The
   client could choose to get fs_locations at the same time but in most
   cases the client will have a good guess as to where file system
   boundaries are (because of where NFS4ERR_MOVED was, and was not,
   received) making fetching of fs_locations unnecessary.

   OP01:  PUTROOTFH --> NFS_OK

   -  Current fh is root of pseudo-fs.

   OP02:  GETATTR(fsid) --> NFS_OK

   -  Just for completeness.  Normally, clients will know the fsid of
      the pseudo-fs as soon as they establish communication with a
      server.

   OP03:  LOOKUP "this" --> NFS_OK

   OP04:  GETATTR(fsid) --> NFS_OK

   -  Get current fsid to see where file system boundaries are.  The
      fsid will be that for the pseudo-fs in this example, so no
      boundary.

   OP05:  GETFH --> NFS_OK

   -  Current fh is for /this and is within pseudo-fs.

   OP06:  LOOKUP "is" --> NFS_OK

   -  Current fh is for /this/is and is within pseudo-fs.

   OP07:  GETATTR(fsid) --> NFS_OK

   -  Get current fsid to see where file system boundaries are.  The
      fsid will be that for the pseudo-fs in this example, so no
      boundary.

   OP08:  GETFH --> NFS_OK

   -  Current fh is for /this/is and is within pseudo-fs.

   OP09:  LOOKUP "the" --> NFS_OK
   -  Current fh is for /this/is/the and is within pseudo-fs.

   OP10:  GETATTR(fsid) --> NFS_OK

   -  Get current fsid to see where file system boundaries are.  The
      fsid will be that for the pseudo-fs in this example, so no
      boundary.

   OP11:  GETFH --> NFS_OK

   -  Current fh is for /this/is/the and is within pseudo-fs.

   OP12:  LOOKUP "path" --> NFS_OK

   -  Current fh is for /this/is/the/path and is within a new, absent
      file system, but ...

   -  The client will never see the value of that fh.

   OP13:  GETATTR(fsid, fs_locations) --> NFS_OK

   -  We are getting the fsid to know where the file system boundaries
      are.  In this operation, the fsid will be different than that of
      the parent directory (which in turn was retrieved in OP10).  Note
      that the fsid we are given will not necessarily be preserved at
      the new location.  That fsid might be different, and in fact the
      fsid we have for this file system might be a valid fsid of a
      different file system on that new server.

   -  In this particular case, we are pretty sure anyway that what has
      moved is /this/is/the/path rather than /this/is/the since we have
      the fsid of the latter and it is that of the pseudo-fs, which
      presumably cannot move.  However, in other examples, we might not
      have this kind of information to rely on (e.g., /this/is/the might
      be a non-pseudo file system separate from /this/is/the/path), so
      we need to have other reliable source information on the boundary
      of the file system that is moved.  If, for example, the file
      system /this/is had moved, we would have a case of migration
      rather than referral, and once the boundaries of the migrated file
      system was clear we could fetch fs_locations.

   -  We are fetching fs_locations because the fact that we got an
      NFS4ERR_MOVED at this point means that it is most likely that this
      is a referral and we need the destination.  Even if it is the case
      that /this/is/the is a file system that has migrated, we will
      still need the location information for that file system.

   OP14:  GETFH --> NFS4ERR_MOVED

   -  Fails because current fh is in an absent file system at the start
      of the operation, and the specification makes no exception for
      GETFH.  Note that this means the server will never send the client
      a filehandle from within an absent file system.

   Given the above, the client knows where the root of the absent file
   system is (/this/is/the/path) by noting where the change of fsid
   occurred (between "the" and "path").  The fs_locations attribute also
   gives the client the actual location of the absent file system, so
   that the referral can proceed.  The server gives the client the bare
   minimum of information about the absent file system so that there
   will be very little scope for problems of conflict between
   information sent by the referring server and information of the file
   system's home.  No filehandles and very few attributes are present on
   the referring server, and the client can treat those it receives as
   transient information with the function of enabling the referral.

7.8.2.  Referral Example (READDIR)

   Another context in which a client may encounter referrals is when it
   does a READDIR on a directory in which some of the sub-directories
   are the roots of absent file systems.

   Suppose such a directory is read as follows:

   o  PUTROOTFH

   o  LOOKUP "this"

   o  LOOKUP "is"

   o  LOOKUP "the"

   o  READDIR (fsid, size, time_modify, mounted_on_fileid)

   In this case, because rdattr_error is not requested, fs_locations is
   not requested, and some of the attributes cannot be provided, the
   result will be an NFS4ERR_MOVED error on the READDIR, with the
   detailed results as follows:

   o  PUTROOTFH --> NFS_OK.  The current fh is at the root of the
      pseudo-fs.

   o  LOOKUP "this" --> NFS_OK.  The current fh is for /this and is
      within the pseudo-fs.

   o  LOOKUP "is" --> NFS_OK.  The current fh is for /this/is and is
      within the pseudo-fs.

   o  LOOKUP "the" --> NFS_OK.  The current fh is for /this/is/the and
      is within the pseudo-fs.

   o  READDIR (fsid, size, time_modify, mounted_on_fileid) -->
      NFS4ERR_MOVED.  Note that the same error would have been returned
      if /this/is/the had migrated, but it is returned because the
      directory contains the root of an absent file system.

   So now suppose that we re-send with rdattr_error:

   o  PUTROOTFH

   o  LOOKUP "this"

   o  LOOKUP "is"

   o  LOOKUP "the"

   o  READDIR (rdattr_error, fsid, size, time_modify, mounted_on_fileid)

   The results will be:

   o  PUTROOTFH --> NFS_OK.  The current fh is at the root of the
      pseudo-fs.

   o  LOOKUP "this" --> NFS_OK.  The current fh is for /this and is
      within the pseudo-fs.

   o  LOOKUP "is" --> NFS_OK.  The current fh is for /this/is and is
      within the pseudo-fs.

   o  LOOKUP "the" --> NFS_OK.  The current fh is for /this/is/the and
      is within the pseudo-fs.

   o  READDIR (rdattr_error, fsid, size, time_modify, mounted_on_fileid)
      --> NFS_OK.  The attributes for directory entry with the component
      named "path" will only contain rdattr_error with the value
      NFS4ERR_MOVED, together with an fsid value and a value for
      mounted_on_fileid.

   So suppose we do another READDIR to get fs_locations (although we
   could have used a GETATTR directly, as in Section 7.8.1).

   o  PUTROOTFH

   o  LOOKUP "this"

   o  LOOKUP "is"

   o  LOOKUP "the"

   o  READDIR (rdattr_error, fs_locations, mounted_on_fileid, fsid,
      size, time_modify)

   The results would be:

   o  PUTROOTFH --> NFS_OK.  The current fh is at the root of the
      pseudo-fs.

   o  LOOKUP "this" --> NFS_OK.  The current fh is for /this and is
      within the pseudo-fs.

   o  LOOKUP "is" --> NFS_OK.  The current fh is for /this/is and is
      within the pseudo-fs.

   o  LOOKUP "the" --> NFS_OK.  The current fh is for /this/is/the and
      is within the pseudo-fs.

   o  READDIR (rdattr_error, fs_locations, mounted_on_fileid, fsid,
      size, time_modify) --> NFS_OK.  The attributes will be as shown
      below.

   The attributes for the directory entry with the component named
   "path" will only contain:

   o  rdattr_error (value: NFS_OK)

   o  fs_locations

   o  mounted_on_fileid (value: unique fileid within referring file
      system)

   o  fsid (value: unique value within referring server)

   The attributes for entry "path" will not contain size or time_modify
   because these attributes are not available within an absent file
   system.

7.9.  The Attribute fs_locations

   The fs_locations attribute is structured in the following way:

   struct fs_location4 {
           utf8must        server<>;
           pathname4       rootpath;
   };

   struct fs_locations4 {
           pathname4       fs_root;
           fs_location4    locations<>;
   };

   The fs_location4 data type is used to represent the location of a
   file system by providing a server name and the path to the root of
   the file system within that server's namespace.  When a set of
   servers have corresponding file systems at the same path within their
   namespaces, an array of server names may be provided.  An entry in
   the server array is a UTF-8 string and represents one of a
   traditional DNS host name, IPv4 address, IPv6 address, or an zero-
   length string.  A zero-length string SHOULD be used to indicate the
   current address being used for the RPC call.  It is not a requirement
   that all servers that share the same rootpath be listed in one
   fs_location4 instance.  The array of server names is provided for
   convenience.  Servers that share the same rootpath may also be listed
   in separate fs_location4 entries in the fs_locations attribute.

   The fs_locations4 data type and fs_locations attribute contain an
   array of such locations.  Since the namespace of each server may be
   constructed differently, the "fs_root" field is provided.  The path
   represented by fs_root represents the location of the file system in
   the current server's namespace, i.e., that of the server from which
   the fs_locations attribute was obtained.  The fs_root path is meant
   to aid the client by clearly referencing the root of the file system
   whose locations are being reported, no matter what object within the
   current file system the current filehandle designates.  The fs_root
   is simply the pathname the client used to reach the object on the
   current server (i.e., the object to which the fs_locations attribute
   applies).

   When the fs_locations attribute is interrogated and there are no
   alternate file system locations, the server SHOULD return a zero-
   length array of fs_location4 structures, together with a valid
   fs_root.

   As an example, suppose there is a replicated file system located at
   two servers (servA and servB).  At servA, the file system is located
   at path /a/b/c.  At, servB the file system is located at path /x/y/z.
   If the client were to obtain the fs_locations value for the directory
   at /a/b/c/d, it might not necessarily know that the file system's
   root is located in servA's namespace at /a/b/c.  When the client
   switches to servB, it will need to determine that the directory it
   first referenced at servA is now represented by the path /x/y/z/d on
   servB.  To facilitate this, the fs_locations attribute provided by
   servA would have an fs_root value of /a/b/c and two entries in
   fs_locations.  One entry in fs_locations will be for itself (servA)
   and the other will be for servB with a path of /x/y/z.  With this
   information, the client is able to substitute /x/y/z for the /a/b/c
   at the beginning of its access path and construct /x/y/z/d to use for
   the new server.

   Note that: there is no requirement that the number of components in
   each rootpath be the same; there is no relation between the number of
   components in rootpath or fs_root, and none of the components in each
   rootpath and fs_root have to be the same.  In the above example, we
   could have had a third element in the locations array, with server
   equal to "servC", and rootpath equal to "/I/II", and a fourth element
   in locations with server equal to "servD" and rootpath equal to
   "/aleph/beth/gimel/daleth/he".

   The relationship between fs_root to a rootpath is that the client
   replaces the pathname indicated in fs_root for the current server for
   the substitute indicated in rootpath for the new server.

   For an example of a referred or migrated file system, suppose there
   is a file system located at serv1.  At serv1, the file system is
   located at /az/buky/vedi/glagoli.  The client finds that object at
   glagoli has migrated (or is a referral).  The client gets the
   fs_locations attribute, which contains an fs_root of /az/buky/vedi/
   glagoli, and one element in the locations array, with server equal to
   serv2, and rootpath equal to /izhitsa/fita.  The client replaces /az/
   buky/vedi/glagoli with /izhitsa/fita, and uses the latter pathname on
   serv2.

   Thus, the server MUST return an fs_root that is equal to the path the
   client used to reach the object to which the fs_locations attribute
   applies.  Otherwise, the client cannot determine the new path to use
   on the new server.

7.9.1.  Inferring Transition Modes

   When fs_locations is used, information about the specific locations
   should be assumed based on the following rules.

   The following rules are general and apply irrespective of the
   context.

   o  All listed file system instances should be considered as of the
      same handle class if and only if the current fh_expire_type
      attribute does not include the FH4_VOL_MIGRATION bit.  Note that
      in the case of referral, filehandle issues do not apply since
      there can be no filehandles known within the current file system
      nor is there any access to the fh_expire_type attribute on the
      referring (absent) file system.

   o  All listed file system instances should be considered as of the
      same fileid class if and only if the fh_expire_type attribute
      indicates persistent filehandles and does not include the
      FH4_VOL_MIGRATION bit.  Note that in the case of referral, fileid
      issues do not apply since there can be no fileids known within the
      referring (absent) file system nor is there any access to the
      fh_expire_type attribute.

   o  All file system instances servers should be considered as of
      different change classes.

   o  All file system instances servers should be considered as of
      different readdir classes.

   For other class assignments, handling of file system transitions
   depends on the reasons for the transition:

   o  When the transition is due to migration, that is, the client was
      directed to a new file system after receiving an NFS4ERR_MOVED
      error, the target should be treated as being of the same write-
      verifier class as the source.

   o  When the transition is due to failover to another replica, that
      is, the client selected another replica without receiving and
      NFS4ERR_MOVED error, the target should be treated as being of a
      different write-verifier class from the source.

   The specific choices reflect typical implementation patterns for
   failover and controlled migration, respectively.

   See Section 17 for a discussion on the recommendations for the
   security flavor to be used by any GETATTR operation that requests the
   "fs_locations" attribute.

8.  NFS Server Name Space

8.1.  Server Exports

   On a UNIX server the name space describes all the files reachable by
   pathnames under the root directory or "/".  On a Windows NT server
   the name space constitutes all the files on disks named by mapped
   disk letters.  NFS server administrators rarely make the entire
   server's filesystem name space available to NFS clients.  More often
   portions of the name space are made available via an "export"
   feature.  In previous versions of the NFS protocol, the root
   filehandle for each export is obtained through the MOUNT protocol;
   the client sends a string that identifies the export of name space
   and the server returns the root filehandle for it.  The MOUNT
   protocol supports an EXPORTS procedure that will enumerate the
   server's exports.

8.2.  Browsing Exports

   The NFS version 4 NFSv4 protocol provides a root filehandle that clients can use to
   obtain filehandles for these exports via a multi-component LOOKUP.  A
   common user experience is to use a graphical user interface (perhaps
   a file "Open" dialog window) to find a file via progressive browsing
   through a directory tree.  The client must be able to move from one
   export to another export via single-component, progressive LOOKUP
   operations.

   This style of browsing is not well supported by the NFS version 2 NFSv2 and
   3 NFSv3
   protocols.  The client expects all LOOKUP operations to remain within
   a single server filesystem.  For example, the device attribute will
   not change.  This prevents a client from taking name space paths that
   span exports.

   An automounter on the client can obtain a snapshot of the server's
   name space using the EXPORTS procedure of the MOUNT protocol.  If it
   understands the server's pathname syntax, it can create an image of
   the server's name space on the client.  The parts of the name space
   that are not exported by the server are filled in with a "pseudo
   filesystem" that allows the user to browse from one mounted
   filesystem to another.  There is a drawback to this representation of
   the server's name space on the client: it is static.  If the server
   administrator adds a new export the client will be unaware of it.

8.3.  Server Pseudo Filesystem

   NFS version 4

   NFSv4 servers avoid this name space inconsistency by presenting all
   the exports within the framework of a single server name space.  An NFS version 4
   NFSv4 client uses LOOKUP and READDIR operations to browse seamlessly
   from one export to another.  Portions of the server name space that
   are not exported are bridged via a "pseudo filesystem" that provides
   a view of exported directories only.  A pseudo filesystem has a
   unique fsid and behaves like a normal, read only filesystem.

   Based on the construction of the server's name space, it is possible
   that multiple pseudo filesystems may exist.  For example,

   /a         pseudo filesystem
   /a/b       real filesystem
   /a/b/c     pseudo filesystem
   /a/b/c/d   real filesystem

   Each of the pseudo filesystems are considered separate entities and
   therefore will have a unique fsid.

8.4.  Multiple Roots

   The DOS and Windows operating environments are sometimes described as
   having "multiple roots".  Filesystems are commonly represented as
   disk letters.  MacOS represents filesystems as top level names.  NFS
   version 4
   NFSv4 servers for these platforms can construct a pseudo file system
   above these root names so that disk letters or volume names are
   simply directory names in the pseudo root.

8.5.  Filehandle Volatility

   The nature of the server's pseudo filesystem is that it is a logical
   representation of filesystem(s) available from the server.
   Therefore, the pseudo filesystem is most likely constructed
   dynamically when the server is first instantiated.  It is expected
   that the pseudo filesystem may not have an on disk counterpart from
   which persistent filehandles could be constructed.  Even though it is
   preferable that the server provide persistent filehandles for the
   pseudo filesystem, the NFS client should expect that pseudo file
   system filehandles are volatile.  This can be confirmed by checking
   the associated "fh_expire_type" attribute for those filehandles in
   question.  If the filehandles are volatile, the NFS client must be
   prepared to recover a filehandle value (e.g., with a multi-component
   LOOKUP) when receiving an error of NFS4ERR_FHEXPIRED.

8.6.  Exported Root

   If the server's root filesystem is exported, one might conclude that
   a pseudo-filesystem is not needed.  This would be wrong.  Assume the
   following filesystems on a server:

   /       disk1  (exported)
   /a      disk2  (not exported)
   /a/b    disk3  (exported)

   Because disk2 is not exported, disk3 cannot be reached with simple
   LOOKUPs.  The server must bridge the gap with a pseudo-filesystem.

8.7.  Mount Point Crossing

   The server filesystem environment may be constructed in such a way
   that one filesystem contains a directory which is 'covered' or
   mounted upon by a second filesystem.  For example:

   /a/b            (filesystem 1)
   /a/b/c/d        (filesystem 2)

   The pseudo filesystem for this server may be constructed to look
   like:

   /               (place holder/not exported)
   /a/b            (filesystem 1)
   /a/b/c/d        (filesystem 2)

   It is the server's responsibility to present the pseudo filesystem
   that is complete to the client.  If the client sends a lookup request
   for the path "/a/b/c/d", the server's response is the filehandle of
   the filesystem "/a/b/c/d".  In previous versions of the NFS protocol,
   the server would respond with the filehandle of directory "/a/b/c/d"
   within the filesystem "/a/b".

   The NFS client will be able to determine if it crosses a server mount
   point by a change in the value of the "fsid" attribute.

8.8.  Security Policy and Name Space Presentation

   The application of the server's security policy needs to be carefully
   considered by the implementor.  One may choose to limit the
   viewability of portions of the pseudo filesystem based on the
   server's perception of the client's ability to authenticate itself
   properly.  However, with the support of multiple security mechanisms
   and the ability to negotiate the appropriate use of these mechanisms,
   the server is unable to properly determine if a client will be able
   to authenticate itself.  If, based on its policies, the server
   chooses to limit the contents of the pseudo filesystem, the server
   may effectively hide filesystems from a client that may otherwise
   have legitimate access.

   As suggested practice, the server should apply the security policy of
   a shared resource in the server's namespace to the components of the
   resource's ancestors.  For example:

   /
   /a/b
   /a/b/c

   The /a/b/c directory is a real filesystem and is the shared resource.
   The security policy for /a/b/c is Kerberos with integrity.  The
   server should apply the same security policy to /, /a, and /a/b.
   This allows for the extension of the protection of the server's
   namespace to the ancestors of the real shared resource.

   For the case of the use of multiple, disjoint security mechanisms in
   the server's resources, the security for a particular object in the
   server's namespace should be the union of all security mechanisms of
   all direct descendants.

9.  File Locking and Share Reservations

   Integrating locking into the NFS protocol necessarily causes it to be
   stateful.  With the inclusion of share reservations the protocol
   becomes substantially more dependent on state than the traditional
   combination of NFS and NLM (Network Lock Manager) [32].  There are
   three components to making this state manageable:

   o  Clear  clear division between client and server

   o  Ability  ability to reliably detect inconsistency in state between client
      and server

   o  Simple  simple and robust recovery mechanisms

   In this model, the server owns the state information.  The client
   communicates
   requests changes in locks and the server responds with the changes
   made.  Non-client-initiated changes in locking state are infrequent.
   The client receives prompt notification of such changes and can
   adjust its view of this the locking state to reflect the server's changes.

   Individual pieces of state created by the server as needed.  The and passed to the
   client at its request are represented by 128-bit stateids.  These
   stateids may represent a particular open file, a set of byte-range
   locks held by a particular owner, or a recallable delegation of
   privileges to access a file in particular ways or at a particular
   location.

   In all cases, there is also able a transition from the most general information
   that represents a client as a whole to detect inconsistent state before modifying the eventual lightweight
   stateid used for most client and server locking interactions.  The
   details of this transition will vary with the type of object but it
   always starts with a
   file. client ID.

   To support Win32 share reservations it is necessary to atomically
   OPEN or CREATE files.  Having a separate share/unshare operation
   would not allow correct implementation of the Win32 OpenFile API.  In
   order to correctly implement share semantics, the previous NFS
   protocol mechanisms used when a file is opened or created (LOOKUP,
   CREATE, ACCESS) need to be replaced.  The NFS version 4 NFSv4 protocol has an OPEN
   operation that subsumes the NFS version 3 NFSv3 methodology of LOOKUP, CREATE, and
   ACCESS.  However, because many operations require a filehandle, the
   traditional LOOKUP is preserved to map a file name to filehandle
   without establishing state on the server.  The policy of granting
   access or modifying files is managed by the server based on the
   client's state.  These mechanisms can implement policy ranging from
   advisory only locking to full mandatory locking.

9.1.  Locking  Opens and Byte-Range Locks

   It is assumed that manipulating a byte-range lock is rare when
   compared to READ and WRITE operations.  It is also assumed that crashes
   server restarts and network partitions are relatively rare.
   Therefore it is important that the READ and WRITE operations have a
   lightweight mechanism to indicate if they possess a held lock.  A
   byte-range lock request contains the heavyweight information required
   to establish a lock and uniquely define the lock
   owner. owner of the lock.

   The following sections describe the transition from the heavy weight
   information to the eventual stateid used for most client and server
   locking and lease interactions.

9.1.1.  Client ID

   For each LOCK request, the client must identify itself to the server.
   This is done in such a way as to allow for correct lock
   identification and crash recovery.  A sequence of a SETCLIENTID
   operation followed by a SETCLIENTID_CONFIRM operation is required to
   establish the identification onto the server.  Establishment of
   identification by a new incarnation of the client also has the effect
   of immediately breaking any leased state that a previous incarnation
   of the client might have had on the server, as opposed to forcing the
   new client incarnation to wait for the leases to expire.  Breaking
   the lease state amounts to the server removing all lock, share
   reservation, and, where the server is not supporting the
   CLAIM_DELEGATE_PREV claim type, all delegation state associated with
   same client with the same identity.  For discussion of delegation
   state recovery, see Section 10.2.1.

   Client identification is encapsulated in

   Owners of opens and owners of byte-range locks are separate entities
   and remain separate even if the following structure:

   struct nfs_client_id4 {
           verifier4       verifier; same opaque          id<NFS4_OPAQUE_LIMIT>;
   };

   The first field, arrays are used to
   designate owners of each.  The protocol distinguishes between open-
   owners (represented by open_owner4 structures) and lock-owners
   (represented by lock_owner4 structures).

   Each open is associated with a specific open-owner while each byte-
   range lock is associated with a lock-owner and an open-owner, the
   latter being the open-owner associated with the open file under which
   the LOCK operation was done.

   Unlike the text in NFSv4.1 [31], this text treats "lock_owner" as
   meaning both a open_owner4 and a lock_owner4.  Also, a "lock" can
   refer to both a byte-range and share lock.

   Client identification is encapsulated in the following structure:

   struct nfs_client_id4 {
           verifier4       verifier;
           opaque          id<NFS4_OPAQUE_LIMIT>;
   };

   The first field, verifier is a client incarnation verifier that is
   used to detect client reboots.  Only if the verifier is different
   from that which the server has previously recorded the client (as
   identified by the second field of the structure, id) does the server
   start the process of canceling the client's leased state.

   The second field, id is a variable length string that uniquely
   defines the client.

   There are several considerations for how the client generates the id
   string:

   o  The string should be unique so that multiple clients do not
      present the same string.  The consequences of two clients
      presenting the same string range from one client getting an error
      to one client having its leased state abruptly and unexpectedly
      canceled.

   o  The string should be selected so the subsequent incarnations
      (e.g., reboots) of the same client cause the client to present the
      same string.  The implementor is cautioned against an approach
      that requires the string to be recorded in a local file because
      this precludes the use of the implementation in an environment
      where there is no local disk and all file access is from an NFS
      version 4 NFSv4
      server.

   o  The string should be different for each server network address
      that the client accesses, rather than common to all server network
      addresses.  The reason is that it may not be possible for the
      client to tell if the same server is listening on multiple network
      addresses.  If the client issues SETCLIENTID with the same id
      string to each network address of such a server, the server will
      think it is the same client, and each successive SETCLIENTID will
      cause the server to begin the process of removing the client's
      previous leased state.

   o  The algorithm for generating the string should not assume that the
      client's network address won't change.  This includes changes
      between client incarnations and even changes while the client is
      stilling running in its current incarnation.  This means that if
      the client includes just the client's and server's network address
      in the id string, there is a real risk, after the client gives up
      the network address, that another client, using a similar
      algorithm for generating the id string, will generate a
      conflicting id string.

   Given the above considerations, an example of a well generated id
   string is one that includes:

   o  The server's network address.

   o  The client's network address.

   o  For a user level NFS version 4 NFSv4 client, it should contain additional
      information to distinguish the client from other user level
      clients running on the same host, such as an universally unique
      identifier (UUID).

   o  Additional information that tends to be unique, such as one or
      more of:

      *  The client machine's serial number (for privacy reasons, it is
         best to perform some one way function on the serial number).

      *  A MAC address.

      *  The timestamp of when the NFS version 4 NFSv4 software was first installed on
         the client (though this is subject to the previously mentioned
         caution about using information that is stored in a file,
         because the file might only be accessible over NFS version 4). NFSv4).

      *  A true random number.  However since this number ought to be
         the same between client incarnations, this shares the same
         problem as that of the using the timestamp of the software
         installation.

   As a security measure, the server MUST NOT cancel a client's leased
   state if the principal that established the state for a given id
   string is not the same as the principal issuing the SETCLIENTID.

   Note that SETCLIENTID and SETCLIENTID_CONFIRM has a secondary purpose
   of establishing the information the server needs to make callbacks to
   the client for purpose of supporting delegations.  It is permitted to
   change this information via SETCLIENTID and SETCLIENTID_CONFIRM
   within the same incarnation of the client without removing the
   client's leased state.

   Once a SETCLIENTID and SETCLIENTID_CONFIRM sequence has successfully
   completed, the client uses the shorthand client identifier, of type
   clientid4, instead of the longer and less compact nfs_client_id4
   structure.  This shorthand client identifier (a clientid) client ID) is
   assigned by the server and should be chosen so that it will not
   conflict with a clientid client ID previously assigned by the server.  This
   applies across server restarts or reboots.  When a clientid client ID is
   presented to a server and that clientid client ID is not recognized, as would
   happen after a server reboot, the server will reject the request with
   the error NFS4ERR_STALE_CLIENTID.  When this happens, the client must
   obtain a new clientid client ID by use of the SETCLIENTID operation and then
   proceed to any other necessary recovery for the server reboot case
   (See Section 9.6.2).

   The client must also employ the SETCLIENTID operation when it
   receives a NFS4ERR_STALE_STATEID error using a stateid derived from
   its current clientid, client ID, since this also indicates a server reboot
   which has invalidated the existing clientid client ID (see Section 9.1.3 9.1.4 for
   details).

   See the detailed descriptions of SETCLIENTID and SETCLIENTID_CONFIRM
   for a complete specification of the operations.

9.1.2.  Server Release of Clientid Client ID

   If the server determines that the client holds no associated state
   for its clientid, client ID, the server may choose to release the clientid. client ID.
   The server may make this choice for an inactive client so that
   resources are not consumed by those intermittently active clients.
   If the client contacts the server after this release, the server must
   ensure the client receives the appropriate error so that it will use
   the SETCLIENTID/SETCLIENTID_CONFIRM sequence to establish a new
   identity.  It should be clear that the server must be very hesitant
   to release a
   clientid client ID since the resulting work on the client to
   recover from such an event will be the same burden as if the server
   had failed and restarted.  Typically a server would not release a clientid
   client ID unless there had been no activity from that client for many
   minutes.

   Note that if the id string in a SETCLIENTID request is properly
   constructed, and if the client takes care to use the same principal
   for each successive use of SETCLIENTID, then, barring an active
   denial of service attack, NFS4ERR_CLID_INUSE should never be
   returned.

   However, client bugs, server bugs, or perhaps a deliberate change of
   the principal owner of the id string (such as the case of a client
   that changes security flavors, and under the new flavor, there is no
   mapping to the previous owner) will in rare cases result in
   NFS4ERR_CLID_INUSE.

   In that event, when the server gets a SETCLIENTID for a client id ID
   that currently has no state, or it has state, but the lease has
   expired, rather than returning NFS4ERR_CLID_INUSE, the server MUST
   allow the SETCLIENTID, and confirm the new clientid client ID if followed by
   the appropriate SETCLIENTID_CONFIRM.

9.1.3.  lock_owner and stateid  Stateid Definition

   When requesting a lock, the client must present to the server the
   clientid grants a lock of any type (including opens, byte-
   range locks, and an identifier delegations), it responds with a unique stateid that
   represents a set of locks (often a single lock) for the owner same file, of
   the requested lock.
   These two fields are referred to as the lock_owner same type, and sharing the definition
   of those fields are:

   o  A clientid returned by the server as part of the client's use same ownership characteristics.  Thus,
   opens of the SETCLIENTID operation.

   o  A variable length opaque array used to uniquely define the owner same file by different open- owners each have an
   identifying stateid.  Similarly, each set of byte-range locks on a lock managed
   file owned by the client.

      This may be a thread id, process id, or other unique value.

   When the server grants the lock, it responds with a unique specific lock-owner has its own identifying stateid.
   Delegations also have associated stateids by which they may be
   referenced.  The stateid is used as a shorthand reference to the lock_owner, since a lock
   or set of locks, and given a stateid, the server will be maintaining can determine the correspondence between them.

   The server is free to form
   associated state-owner or state-owners (in the stateid in any manner that it chooses
   as long as it is able to recognize invalid and out-of-date stateids.
   This requirement includes those stateids generated by earlier
   instances case of an open-owner/
   lock-owner pair) and the server.  From this, associated filehandle.  When stateids are
   used, the client can current filehandle must be properly
   notified of a server restart.  This notification will occur when the one associated with that
   stateid.

   All stateids associated with a given client presents ID are associated with a stateid to
   common lease that represents the server from a previous
   instantiation.

   The server must be able claim of those stateids and the
   objects they represent to distinguish be maintained by the following situations and
   return server.  See
   Section 9.5 for a discussion of the error as specified:

   o lease.

   The server may assign stateids independently for different clients.

   A stateid was generated by with the same bit pattern for one client may designate an earlier server instance (i.e.,
      before
   entirely different set of locks for a server reboot).  The error NFS4ERR_STALE_STATEID should
      be returned.

   o different client.  The stateid was generated by the current server instance but
   is always interpreted with respect to the
      stateid no longer designates client ID associated with
   the current locking state for the
      lockowner-file pair in question (i.e., one or more locking
      operations has occurred).  The error NFS4ERR_OLD_STATEID should be
      returned.

      This error condition will only occur when session.

9.1.3.1.  Stateid Types

   With the client issues a exception of special stateids (see Section 9.1.3.3), each
   stateid represents locking request which changes objects of one of a stateid while an I/O request that
      uses set of types defined
   by the NFSv4 protocol.  Note that stateid in all these cases, where we speak
   of guarantee, it is outstanding. understood there are situations such as a client
   restart, or lock revocation, that allow the guarantee to be voided.

   o  The  Stateids may represent opens of files.

      Each stateid was generated by the current server instance but in this case represents the
      stateid does not designate a locking OPEN state for any active
      lockowner-file pair.  The error NFS4ERR_BAD_STATEID should be
      returned.

      This error condition will occur when there has been a logic error
      on the part of the given
      client or server.  This should not happen.

   One mechanism that may be used ID/open-owner/filehandle triple.  Such stateids are subject
      to satisfy these requirements is for change (with consequent incrementing of the server to, stateid's seqid) in
      response to OPENs that result in upgrade and OPEN_DOWNGRADE
      operations.

   o  divide the "other" field  Stateids may represent sets of each stateid into two fields:

      *  A server verifier which uniquely designates byte-range locks.

      All locks held on a particular server
         instantiation.

      *  An index into file by a table of locking-state structures.

   o  utilize particular owner and all
      gotten under the "seqid" field aegis of each stateid, such that seqid is
      monotonically incremented for each stateid that is a particular open file are associated
      with
      the same index into the locking-state table.

   By matching the incoming a single stateid and its field values with the state
   held at the server, seqid being incremented whenever
      LOCK and LOCKU operations affect that set of locks.

   o  Stateids may represent file delegations, which are recallable
      guarantees by the server is able to easily determine if a
   stateid is valid for its current instantiation and state.  If the
   stateid is client, that other clients will
      not valid, reference, or will not modify a particular file, until the appropriate error can be supplied to
      delegation is returned.

      A stateid represents a single delegation held by a client for a
      particular filehandle.

9.1.3.2.  Stateid Structure

   Stateids are divided into two fields, a 96-bit "other" field
   identifying the
   client.

9.1.4.  Use specific set of the stateid and Locking

   All READ, WRITE locks and SETATTR operations contain a stateid.  For 32-bit "seqid" sequence
   value.  Except in the
   purposes case of special stateids (see Section 9.1.3.3),
   a particular value of this section, SETATTR operations which change the size
   attribute "other" field denotes a set of locks of the
   same type (for example, byte-range locks, opens, delegations, or
   layouts), for a specific file are treated as if they are writing the area
   between the old and new size (i.e., the range truncated or added to directory, and sharing the file by means same
   ownership characteristics.  The seqid designates a specific instance
   of the SETATTR), even where SETATTR such a set of locks, and is not
   explicitly mentioned incremented to indicate changes in the text.

   If the lock_owner performs
   such a READ set of locks, either by the addition or WRITE in deletion of locks from
   the set, a situation change in which it
   has established a lock or share reservation on the server (any OPEN
   constitutes a share reservation) the stateid (previously returned by
   the server) must be used to indicate what locks, including both
   record locks and share reservations, are held by the lockowner.  If
   no state is established by byte-range they apply to, or an upgrade or
   downgrade in the client, either record lock type of one or share
   reservation, more locks.

   When such a stateid set of all bits 0 locks is used.  Regardless whether a
   stateid of all bits 0, or a stateid returned by first created, the server is used,
   if there is returns a conflicting share reservation or mandatory record lock
   held on
   stateid with seqid value of one.  On subsequent operations that
   modify the file, set of locks, the server MUST refuse is required to service increment the READ or WRITE
   operation.

   Share reservations are established
   "seqid" field by OPEN operations one whenever it returns a stateid for the same
   state-owner/file/type combination and by their
   nature are mandatory there is some change in that when the OPEN denies READ or WRITE
   operations, that denial results in such operations being rejected
   with error NFS4ERR_LOCKED.  Record set
   of locks may be implemented by actually designated.  In this case, the server will return a
   stateid with an "other" field the same as either mandatory or advisory, or previously used for that
   state-owner/file/type combination, with an incremented "seqid" field.
   This pattern continues until the choice seqid is incremented past
   NFS4_UINT32_MAX, and one (not zero) is the next seqid value.  The
   purpose of mandatory or
   advisory behavior may be determined by the incrementing of the seqid is to allow the server on to
   communicate to the basis of client the
   file being accessed (for example, some UNIX-based servers support order in which operations that modified
   locking state associated with a
   "mandatory lock bit" on stateid have been processed and to
   make it possible for the mode attribute such client to send requests that if set, record
   locks are required conditional
   on the file before I/O is possible).  When record set of locks are advisory, they only prevent not having changed since the granting of conflicting
   lock requests and have no effect on READs or WRITEs.  Mandatory
   record locks, however, prevent conflicting I/O operations.  When they
   are attempted, they are rejected with NFS4ERR_LOCKED. stateid in question
   was returned.

   When the a client gets NFS4ERR_LOCKED on sends a file it knows it has stateid to the proper share
   reservation for, server, it will need has two choices with
   regard to issue a LOCK request on the region
   of seqid sent.  It may set the file that includes seqid to zero to indicate
   to the region server that it wishes the I/O was most up-to-date seqid for that
   stateid's "other" field to be performed on,
   with an appropriate locktype (i.e., READ*_LT for a READ operation,
   WRITE*_LT for a WRITE operation).

   With NFS version 3, there was no notion used.  This would be the common choice
   in the case of a stateid so there was no
   way to tell if the application process of the client sending the sent with a READ or WRITE operation had operation.  It
   also acquired may set a non-zero value, in which case the appropriate record lock on server checks if
   that seqid is the file.  Thus there was no way correct one.  In that case, the server is required
   to implement mandatory locking.
   With return NFS4ERR_OLD_STATEID if the stateid construct, this barrier has been removed.

   Note that for UNIX environments that support mandatory file locking,
   the distinction between advisory and mandatory locking seqid is subtle.  In
   fact, advisory and mandatory record locks are exactly the same in so
   far as lower than the APIs most
   current value and requirements on implementation.  If NFS4ERR_BAD_STATEID if the mandatory
   lock attribute seqid is set on the file, greater than
   the server checks to see if most current value.  This would be the
   lockowner has an appropriate shared (read) or exclusive (write)
   record lock on common choice in the region it wishes to read case
   of stateids sent with a CLOSE or write to.  If there is
   no appropriate lock, OPEN_DOWNGRADE.  Because OPENs may
   be sent in parallel for the server checks if there is same owner, a conflicting lock
   (which can be client might close a file
   without knowing that an OPEN upgrade had been done by attempting to acquire the conflicting lock on
   the behalf of the lockowner, and if successful, release server,
   changing the lock
   after in question.  If CLOSE were sent with a zero seqid,
   the READ or WRITE is done), and if there is, OPEN upgrade would be cancelled before the server returns
   NFS4ERR_LOCKED.

   For Windows environments, there are no advisory record locks, so client even received
   an indication that an upgrade had happened.

   When a stateid is sent by the server always checks for record locks during I/O requests.

   Thus, to the NFS version 4 LOCK operation does client as part of a
   callback operation, it is not need subject to distinguish
   between advisory checking for a current seqid
   and mandatory record locks.  It returning NFS4ERR_OLD_STATEID.  This is because the NFS version 4
   server's processing of client is not
   in a position to know the READ most up-to-date seqid and WRITE operations that introduces
   the distinction.

   Every stateid other than thus cannot
   verify it.  Unless specially noted, the special stateid values noted in this
   section, whether returned by an OPEN-type operation (i.e., OPEN,
   OPEN_DOWNGRADE), or by a LOCK-type operation (i.e., LOCK or LOCKU),
   defines an access mode seqid value for the file (i.e., READ, WRITE, or READ-
   WRITE) as established a stateid
   sent by the original OPEN which began server to the stateid
   sequence, and client as modified by subsequent OPENs and OPEN_DOWNGRADEs
   within that stateid sequence.  When part of a READ, WRITE, or SETATTR which
   specifies the size attribute, is done, the operation callback is subject to
   checking against the access mode required to verify that the operation is
   appropriate given the OPEN
   be zero with which the operation NFS4ERR_BAD_STATEID returned if it is associated. not.

   In making comparisons between seqids, both by the case client in
   determining the order of WRITE-type operations (i.e., WRITEs and SETATTRs which
   set size), by the server must verify that in determining
   whether the access mode allows writing
   and return an NFS4ERR_OPENMODE error if it does not.  In NFS4ERR_OLD_STATEID is to be returned, the case, possibility of
   READ, the server may perform
   the corresponding check on seqid being swapped around past the access
   mode, or it may choose to allow READ on opens for WRITE only, to
   accommodate clients whose write implementation may unavoidably do
   reads (e.g., due NFS4_UINT32_MAX value needs
   to buffer cache constraints).  However, even if
   READs be taken into account.  When two seqid values are allowed in these circumstances, being compared,
   the server MUST still check total count of slots for locks that conflict all sessions associated with the READ (e.g., another open specify
   denial of READs).  Note that a server which does enforce current
   client is used to do this.  When one seqid value is less than this
   total slot count and another seqid value is greater than
   NFS4_UINT32_MAX minus the access
   mode check on READs need not explicitly check for conflicting share
   reservations since total slot count, the existence of OPEN for read access guarantees
   that no conflicting share reservation can exist.

   A stateid of all bits 1 (one) MAY allow READ operations former is to bypass
   locking checks at the server.  However, WRITE operations with a
   stateid with bits all 1 (one) MUST NOT bypass locking checks and are be
   treated exactly the same as if a stateid of lower than the latter, despite the fact that it is
   numerically greater.

9.1.3.3.  Special Stateids

   Stateid values whose "other" field is either all bits 0 were used.

   A lock zeros or all ones
   are reserved.  They may not be granted while a READ or WRITE operation using one
   of assigned by the server but have
   special stateids meanings defined by the protocol.  The particular meaning
   depends on whether the "other" field is being performed all zeros or all ones and the range
   specific value of the lock
   request conflicts with the range "seqid" field.

   The following combinations of "other" and "seqid" are defined in
   NFSv4:

   o  When "other" and "seqid" are both zero, the READ or WRITE operation.  For stateid is treated as
      a special anonymous stateid, which can be used in READ, WRITE, and
      SETATTR requests to indicate the purposes absence of this paragraph, a conflict occurs when a shared lock any open state
      associated with the request.  When an anonymous stateid value is requested
      used, and a WRITE operation is being performed, or an
   exclusive lock is requested existing open denies the form of access requested,
      then access will be denied to the request.

   o  When "other" and either "seqid" are both all ones, the stateid is a
      special READ or a WRITE operation bypass stateid.  When this value is
   being performed.  A SETATTR that sets size used in WRITE or
      SETATTR, it is treated similarly like the anonymous value.  When used in
      READ, the server MAY grant access, even if access would normally
      be denied to a
   WRITE as discussed above.

9.1.5.  Sequencing of Lock Requests

   Locking READ requests.

   o  When "other" is different than most NFS operations as it requires "at-
   most-one" semantics that are not provided by ONCRPC.  ONCRPC over a
   reliable transport zero and "seqid" is not sufficient because a sequence of locking
   requests may span multiple TCP connections.  In one, the face of
   retransmission or reordering, lock or unlock requests must have a
   well defined and consistent behavior.  To accomplish this, each lock
   request contains a sequence number that stateid represents
      the current stateid, which is whatever value is a consecutively increasing
   integer.  Different lock_owners have different sequences.  The server
   maintains the last sequence number (L) received stateid
      returned by an operation within the COMPOUND.  In the case of an
      OPEN, the stateid returned for the open file, and not the response that
   was returned.  The server
      delegation is free used.  The stateid passed to assign any the operation in place
      of the special value for has its "seqid" value set to zero, except
      when the first
   request issued for any given lock_owner.

   Note that for requests that contain a sequence number, for each
   lock_owner, current stateid is used by the operation CLOSE or
      OPEN_DOWNGRADE.  If there should be is no more than one outstanding request.

   If a request (r) with operation in the COMPOUND which
      has returned a previous sequence number (r < L) is received,
   it is rejected with stateid value, the server MUST return of error NFS4ERR_BAD_SEQID.  Given a
   properly-functioning client, the response to (r) must have been
   received before error
      NFS4ERR_BAD_STATEID.  As illustrated in Figure 5, if the last request (L) was sent.  If a duplicate value of
   last request (r == L) is received, the stored response
      a current stateid is returned.
   If a request beyond special stateid, and the next sequence (r == L + 2) is received, it is
   rejected with stateid of an
      operation's arguments has "other" set to zero, and "seqid" set to
      one, then the server MUST return of the error NFS4ERR_BAD_SEQID.  Sequence
   history NFS4ERR_BAD_STATEID.

   o  When "other" is reinitialized whenever the SETCLIENTID/SETCLIENTID_CONFIRM
   sequence changes the client verifier.

   Since the sequence number zero and "seqid" is represented with an unsigned 32-bit
   integer, NFS4_UINT32_MAX, the arithmetic involved with the sequence number is mod
   2^32.  For an example of modulo arithmetic involving sequence numbers
   see [33].

   It stateid
      represents a reserved stateid value defined to be invalid.  When
      this stateid is critical used, the server maintain the last response sent to MUST return the
   client to provide error
      NFS4ERR_BAD_STATEID.

   If a more reliable cache of duplicate non-idempotent
   requests than that stateid value is used which has all zero or all ones in the
   "other" field, but does not match one of the traditional cache described in [34].  The
   traditional duplicate request cache uses a least recently used
   algorithm for removing unneeded requests.  However, cases above, the last lock
   request server
   MUST return the error NFS4ERR_BAD_STATEID.

   Special stateids, unlike other stateids, are not associated with
   individual client IDs or filehandles and response on a given lock_owner must can be cached as long as used with all valid
   client IDs and filehandles.  In the lock state exists on case of a special stateid
   designating the server.

   The client MUST monotonically increment current stateid, the sequence number current stateid value
   substituted for the
   CLOSE, LOCK, LOCKU, OPEN, OPEN_CONFIRM, special stateid is associated with a particular
   client ID and OPEN_DOWNGRADE
   operations.  This filehandle, and so, if it is true even in the event used where current
   filehandle does not match that associated with the previous
   operation that used current stateid,
   the sequence number received an error.  The only
   exception operation to this rule which the stateid is if passed will return
   NFS4ERR_BAD_STATEID.

9.1.3.4.  Stateid Lifetime and Validation

   Stateids must remain valid until either a client restart or a server
   restart or until the previous operation received one client returns all of the following errors: NFS4ERR_STALE_CLIENTID, NFS4ERR_STALE_STATEID,
   NFS4ERR_BAD_STATEID, NFS4ERR_BAD_SEQID, NFS4ERR_BADXDR,
   NFS4ERR_RESOURCE, NFS4ERR_NOFILEHANDLE, NFS4ERR_LEASE_MOVED, or
   NFS4ERR_MOVED.

9.1.6.  Recovery from Replayed Requests

   As described above, locks associated with
   the sequence number is per lock_owner.  As long stateid by means of an operation such as CLOSE or DELEGRETURN.
   If the server maintains the last sequence number received and follows
   the methods described above, there locks are no risks of a Byzantine router
   re-sending old requests.  The server need only maintain the
   (lock_owner, sequence number) state lost due to revocation as long as there are open files
   or closed files with locks outstanding.

   LOCK, LOCKU, OPEN, OPEN_DOWNGRADE, and CLOSE each contain a sequence
   number and therefore the risk of the replay of these operations
   resulting in undesired effects client ID is non-existent while the server
   maintains
   valid, the lock_owner stateid remains a valid designation of that revoked state.

9.1.7.  Releasing lock_owner State

   When
   Stateids associated with byte-range locks are an exception.  They
   remain valid even if a particular lock_owner no longer holds LOCKU frees all remaining locks, so long as
   the open or file locking
   state at the server, the server may choose to release the sequence
   number state associated with which they are associated remains open.

   It should be noted that there are situations in which the lock_owner.  The server may make
   this choice based on lease expiration, for client's
   locks become invalid, without the reclamation client requesting they be returned.
   These include lease expiration and a number of server
   memory, or other implementation specific details.  In any event, the
   server is able to do this safely only when forms of lock
   revocation within the lock_owner no longer lease period.  It is being utilized by the client.  The server may choose important to hold the
   lock_owner state note that in
   these situations, the event that retransmitted requests are
   received.  However, stateid remains valid and the period client can use it
   to hold this state is implementation
   specific.

   In determine the case that disposition of the associated lost locks.

   An "other" value must never be reused for a LOCK, LOCKU, OPEN_DOWNGRADE, different purpose (i.e.
   different filehandle, owner, or CLOSE is
   retransmitted after type of locks) within the context of
   a single client ID.  A server has previously released may retain the lock_owner
   state, "other" value for the server will find that
   same purpose beyond the lock_owner has no files open and
   an error will point where it may otherwise be returned freed but if
   it does so, it must maintain "seqid" continuity with previous values.

   One mechanism that may be used to satisfy the client.  If the lock_owner does have
   a file open, requirement that the stateid will not match
   server recognize invalid and again an error out-of-date stateids is
   returned for the server
   to divide the client.

9.1.8.  Use "other" field of Open Confirmation

   In the case that an OPEN stateid into two fields.

   o  An index into a table of locking-state structures.

   o  A generation number which is retransmitted and incremented on each allocation of a
      table entry for a particular use.

   And then store in each table entry,

   o  The client ID with which the lock_owner stateid is being
   used associated.

   o  The current generation number for the first time or the lock_owner state has been previously
   released by (at most one) valid stateid
      sharing this index value.

   o  The filehandle of the server, file on which the use locks are taken.

   o  An indication of the OPEN_CONFIRM operation will
   prevent incorrect behavior.  When type of stateid (open, byte-range lock, file
      delegation).

   o  The last "seqid" value returned corresponding to the server observes current
      "other" value.

   o  An indication of the use current status of the
   lock_owner locks associated with
      this stateid.  In particular, whether these have been revoked and
      if so, for what reason.

   With this information, an incoming stateid can be validated and the first time, it will direct the client to perform
   the OPEN_CONFIRM
   appropriate error returned when necessary.  Special and non-special
   stateids are handled separately.  (See Section 9.1.3.3 for the corresponding OPEN.  This sequence
   establishes the use a
   discussion of special stateids.)

   When a lock_owner stateid is being tested, and associated sequence number.
   Since the OPEN_CONFIRM sequence connects "other" field is all zeros or
   all ones, a new open_owner on check that the
   server with an existing open_owner on "other" and "seqid" fields match a client, defined
   combination for a special stateid is done and the sequence number
   may have any value.  The OPEN_CONFIRM step assures results determined
   as follows:

   o  If the server that "other" and "seqid" fields do not match a defined
      combination associated with a special stateid, the value received error
      NFS4ERR_BAD_STATEID is returned.

   o  If the correct one. (see Section 15.20 for further
   details.)

   There are special stateid is one designating the current stateid, and
      there is a number of situations in which current stateid, then the requirement to confirm
   an OPEN would pose difficulties current stateid is
      substituted for the client special stateid and server, the checks appropriate to
      non-special stateids in that
   they would be prevented from acting performed.

   o  If the combination is valid in a timely fashion on
   information received, because that information would be provisional,
   subject general but is not appropriate to deletion upon non-confirmation.  Fortunately, these are
   situations
      the context in which the server can avoid the need for confirmation stateid is used (e.g. an all-zero stateid
      is used when responding to open requests.  The two constraints are:

   o  The server must not bestow a delegation for any open which would
      require confirmation.

   o  The server MUST NOT require confirmation on a reclaim-type an open
      (i.e., one specifying claim type CLAIM_PREVIOUS or
      CLAIM_DELEGATE_PREV).

   These constraints are related stateid is required in that reclaim-type opens are a LOCK operation), the only
   ones in which
      error NFS4ERR_BAD_STATEID is also returned.

   o  Otherwise, the server may be required to send a delegation.  For
   CLAIM_NULL, sending check is completed and the delegation special stateid is optional while for
   CLAIM_DELEGATE_CUR, no delegation
      accepted as valid.

   When a stateid is sent.

   Delegations being sent with an open requiring confirmation are
   troublesome because recovering from non-confirmation adds undue
   complexity to tested, and the protocol while requiring confirmation on reclaim-
   type opens poses difficulties in that "other" field is neither all
   zeros or all ones, the inability following procedure could be used to resolve validate
   an incoming stateid and return an appropriate error, when necessary,
   assuming that the
   status of "other" field would be divided into a table index
   and an entry generation.

   o  If the reclaim until lease expiration may make it difficult to
   have timely determination of table index field is outside the set range of locks being reclaimed (since the grace period may expire).

   Requiring open confirmation on reclaim-type opens associated
      table, return NFS4ERR_BAD_STATEID.

   o  If the selected table entry is avoidable
   because of a different generation than that
      specified in the nature of incoming stateid, return NFS4ERR_BAD_STATEID.

   o  If the environments selected table entry does not match the current filehandle,
      return NFS4ERR_BAD_STATEID.

   o  If the client ID in which such opens are
   done.  For CLAIM_PREVIOUS opens, this the table entry does not match the client ID
      associated with the current session, return NFS4ERR_BAD_STATEID.

   o  If the stateid represents revoked state, then return
      NFS4ERR_EXPIRED, NFS4ERR_ADMIN_REVOKED, or NFS4ERR_DELEG_REVOKED,
      as appropriate.

   o  If the stateid type is immediately after server
   reboot, so there should be no time not valid for lockowners to the context in which the
      stateid appears, return NFS4ERR_BAD_STATEID.  Note that a stateid
      may be created,
   found to valid in general, but be unused, and recycled.  For CLAIM_DELEGATE_PREV opens, we
   are dealing with a client reboot situation.  A server which supports
   delegation can be sure that no lockowners invalid for that client have been
   recycled since client initialization and thus can ensure that
   confirmation will not be required.

9.2.  Lock Ranges

   The protocol allows a lock owner to request a lock with a byte range
   and then either upgrade or unlock particular
      operation, as, for example, when a sub-range of the initial lock.
   It stateid which doesn't represent
      byte-range locks is expected that this will be an uncommon type passed to the non-from_open case of request.  In any
   case, servers LOCK or server filesystems may to
      LOCKU, or when a stateid which does not be able represent an open is
      passed to support sub-
   range lock semantics. CLOSE or OPEN_DOWNGRADE.  In such cases, the event that a server receives a locking
   request that represents a sub-range of current locking state for the
   lock owner, MUST
      return NFS4ERR_BAD_STATEID.

   o  If the server "seqid" field is allowed to not zero, and it is greater than the
      current sequence value corresponding the current "other" field,
      return NFS4ERR_BAD_STATEID.

   o  If the error
   NFS4ERR_LOCK_RANGE to signify that it does "seqid" field is not support sub-range lock
   operations.  Therefore, zero, and it is less than the client should be prepared to receive this
   error and, if appropriate, report current
      sequence value corresponding the error to current "other" field, return
      NFS4ERR_OLD_STATEID.

   o  Otherwise, the requesting
   application.

   The client stateid is discouraged from combining multiple independent locking
   ranges valid and the table entry should contain
      any additional information about the type of stateid and
      information associated with that happen to be adjacent into a single request since particular type of stateid, such
      as the
   server may not support sub-range requests associated set of locks, such as open-owner and lock-owner
      information, as well as information on the specific locks, such as
      open modes and byte ranges.

9.1.3.5.  Stateid Use for reasons related I/O Operations

   Clients performing I/O operations need to select an appropriate
   stateid based on the recovery locks (including opens and delegations) held by
   the client and the various types of state-owners sending the I/O
   requests.  SETATTR operations that change the file locking state size are treated
   like I/O operations in the event of server failure.
   As discussed this regard.

   The following rules, applied in order of decreasing priority, govern
   the Section 9.6.2 below, the server may employ
   certain optimizations during recovery that work effectively only when selection of the client's behavior during lock recovery is similar to appropriate stateid.  In following these rules,
   the client's
   locking behavior prior to server failure.

9.3.  Upgrading and Downgrading Locks

   If a client has a write lock on a record, will only consider locks of which it can request has actually received
   notification by an atomic
   downgrade of appropriate operation response or callback.

   o  If the lock to client holds a read lock via the LOCK request, by setting
   the type to READ_LT.  If delegation for the server supports atomic downgrade, file in question, the
   request will succeed.  If not, it will return NFS4ERR_LOCK_NOTSUPP.
   The client should
      delegation stateid SHOULD be prepared to receive this error, and used.

   o  Otherwise, if
   appropriate, report the error entity corresponding to the requesting application.

   If lock-owner (e.g., a client
      process) sending the I/O has a read byte-range lock on a record, it can request an atomic
   upgrade of stateid for the lock to a write lock via
      associated open file, then the LOCK request by setting
   the type to WRITE_LT or WRITEW_LT. byte-range lock stateid for that
      lock-owner and open file SHOULD be used.

   o  If there is no byte-range lock stateid, then the server does not support
   atomic upgrade, it will return NFS4ERR_LOCK_NOTSUPP.  If OPEN stateid for
      the upgrade
   can open file in question SHOULD be achieved without an existing conflict, used.

   o  Finally, if none of the request will
   succeed.  Otherwise, above apply, then a special stateid SHOULD
      be used.

   Ignoring these rules may result in situations in which the server will return either NFS4ERR_DENIED or
   NFS4ERR_DEADLOCK.  The error NFS4ERR_DEADLOCK is returned if the
   client issued
   does not have information necessary to properly process the LOCK request with request.
   For example, when mandatory byte-range locks are in effect, if the type set to WRITEW_LT and
   stateid does not indicate the
   server has detected proper lock-owner, via a deadlock. lock stateid,
   a request might be avoidably rejected.

   The client server however should be prepared not try to
   receive such errors enforce these ordering rules and if appropriate, report the error
   should use whatever information is available to the
   requesting application.

9.4.  Blocking Locks

   Some clients require the support properly process I/O
   requests.  In particular, when a client has a delegation for a given
   file, it SHOULD take note of blocking locks.  The NFS version
   4 protocol must not rely on this fact in processing a callback mechanism and therefore request, even
   if it is
   unable to notify sent with a special stateid.

9.1.3.6.  Stateid Use for SETATTR Operations

   In the case of SETATTR operations, a stateid is present.  In cases
   other than those that set the file size, the client may send either a
   special stateid or, when a previously denied lock has been
   granted.  Clients have no choice but to continually poll delegation is held for the
   lock.  This presents file in
   question, a fairness problem.  Two new lock types are
   added, READW and WRITEW, delegation stateid.  While the server SHOULD validate the
   stateid and are used to indicate may use the stateid to optimize the server that determination as to
   whether a delegation is held, it SHOULD note the client presence of a
   delegation even when a special stateid is requesting sent, and MUST accept a blocking lock.  The server should maintain
   an ordered list of pending blocking locks.
   valid delegation stateid when sent.

9.1.4.  lock_owner

   When requesting a lock, the conflicting lock
   is released, client must present to the server may wait the lease period
   client ID and an identifier for the first
   waiting client to re-request owner of the requested lock.  After
   These two fields are referred to as the lease period
   expires lock_owner and the next waiting definition
   of those fields are:

   o  A client request is allowed the lock.  Clients
   are required to poll at an interval sufficiently small that it is
   likely to acquire ID returned by the lock in a timely manner.  The server is not
   required to maintain a list of pending blocked locks as it is used to
   increase fairness and not correct operation.  Because part of the
   unordered nature of crash recovery, storing client's use of lock state to stable
   storage would be required
      the SETCLIENTID operation.

   o  A variable length opaque array used to guarantee ordered granting of blocking
   locks.

   Servers may also note uniquely define the lock types and delay returning denial owner
      of
   the request to allow extra time for a conflicting lock to managed by the client.

      This may be
   released, allowing a successful return.  In this way, clients can
   avoid thread id, process id, or other unique value.

   When the burden of needlessly frequent polling for blocking locks.
   The server should take care in the length of delay in grants the event lock, it responds with a unique stateid.
   The stateid is used as a shorthand reference to the
   client retransmits lock_owner, since
   the request.

   If a server receives a blocking lock request, denies it, will be maintaining the correspondence between them.

9.1.5.  Use of the Stateid and then
   later receives Locking

   All READ, WRITE and SETATTR operations contain a nonblocking request for stateid.  For the same lock,
   purposes of this section, SETATTR operations which is also
   denied, then it should remove change the lock in question from its list size
   attribute of
   pending blocking locks.  Clients should use such a nonblocking
   request to indicate to the server that this is the last time file are treated as if they
   intend to poll for are writing the lock, as may happen when area
   between the process
   requesting old and new size (i.e., the lock is interrupted.  This is a courtesy range truncated or added to
   the
   server, to prevent it from unnecessarily waiting a lease period
   before granting other lock requests.  However, clients are not
   required to perform this courtesy, and servers must file by means of the SETATTR), even where SETATTR is not depend on
   them doing so.  Also, clients must be prepared for
   explicitly mentioned in the possibility
   that this final locking request will be accepted.

9.5.  Lease Renewal text.  The purpose of a lease is to allow a server stateid passed to remove stale locks one of these
   operations must be one that are held by represents an OPEN, a client that has crashed set of byte-range
   locks, or is otherwise
   unreachable.  It is not a mechanism for cache consistency and lease
   renewals delegation, or it may not be denied if a special stateid representing
   anonymous access or the lease interval has not expired.

   The following events cause implicit renewal of all of special bypass stateid.

   If the leases for lock_owner performs a given client (i.e., all those sharing READ or WRITE in a given clientid).  Each of
   these is situation in which it
   has established a positive indication that lock or share reservation on the client is still active and
   that server (any OPEN
   constitutes a share reservation) the associated state stateid (previously returned by
   the server) must be used to indicate what locks, including both byte-
   range locks and share reservations, are held at by the server, for lockowner.  If no
   state is established by the client, is
   still valid.

   o  An OPEN with a valid clientid.

   o  Any operation made with either byte-range lock or share
   reservation, a valid stateid (CLOSE, DELEGPURGE,
      DELEGRETURN, LOCK, LOCKU, OPEN, OPEN_CONFIRM, OPEN_DOWNGRADE,
      READ, RENEW, SETATTR, or WRITE).  This does not include the
      special stateids of all bits 0 or is used.  Regardless whether a
   stateid of all bits 1.

      Note that if the client had restarted 0, or rebooted, the client
      would not be making these requests without issuing the
      SETCLIENTID/SETCLIENTID_CONFIRM sequence.  The use of a stateid returned by the
      SETCLIENTID/SETCLIENTID_CONFIRM sequence (one that changes server is used,
   if there is a conflicting share reservation or mandatory byte-range
   lock held on the
      client verifier) notifies file, the server to drop the locking state
      associated with the client.  SETCLIENTID/SETCLIENTID_CONFIRM never
      renews a lease.

      If the server has rebooted, MUST refuse to service the stateids (NFS4ERR_STALE_STATEID
      error) READ or the clientid (NFS4ERR_STALE_CLIENTID error) will not be
      valid hence preventing spurious renewals.

   This approach allows for low overhead lease renewal which scales
   well.  In the typical case no extra RPC calls
   WRITE operation.

   Share reservations are required for lease
   renewal and in the worst case one RPC is required every lease period
   (i.e., a RENEW operation).  The number of locks held established by the client is
   not a factor since all state for the client is involved with the
   lease renewal action.

   Since all OPEN operations that create a new lease also renew existing
   leases, the server must maintain a common lease expiration time for
   all valid leases for a given client.  This lease time can then be
   easily updated upon implicit lease renewal actions.

9.6.  Crash Recovery

   The important requirement and by their
   nature are mandatory in crash recovery is that both the client
   and the server know when the other has failed.  Additionally, it is
   required that a client sees a consistent view of data across server
   restarts or reboots.  All OPEN denies READ and or WRITE operations
   operations, that denial results in such operations being rejected
   with error NFS4ERR_LOCKED.  Byte-range locks may have
   been queued within be implemented by
   the client server as either mandatory or advisory, or network buffers must wait until the
   client has successfully recovered choice of
   mandatory or advisory behavior may be determined by the locks protecting server on the READ and
   WRITE operations.

9.6.1.  Client Failure and Recovery

   In
   basis of the event that file being accessed (for example, some UNIX-based
   servers support a client fails, the server may recover "mandatory lock bit" on the client's mode attribute such
   that if set, byte-range locks when are required on the associated leases have expired.  Conflicting file before I/O is
   possible).  When byte-range locks
   from another client may are advisory, they only be granted after this lease expiration.
   If the client is able to restart or reinitialize within the lease
   period the client may be forced to wait prevent the remainder
   granting of the lease
   period before obtaining new locks.

   To minimize client delay upon restart, conflicting lock requests and have no effect on READs or
   WRITEs.  Mandatory byte-range locks, however, prevent conflicting I/O
   operations.  When they are associated attempted, they are rejected with an instance of
   NFS4ERR_LOCKED.  When the client by gets NFS4ERR_LOCKED on a client supplied verifier.  This
   verifier is part of the initial SETCLIENTID call made by file it
   knows it has the client.
   The server returns a clientid as proper share reservation for, it will need to issue
   a result LOCK request on the region of the SETCLIENTID
   operation.  The client then confirms file that includes the use of region the clientid with
   SETCLIENTID_CONFIRM.  The clientid in combination
   I/O was to be performed on, with an opaque
   owner field is then used by the client to identify the lock owner appropriate locktype (i.e.,
   READ*_LT for
   OPEN.  This chain of associations is then used to identify all locks a READ operation, WRITE*_LT for a particular client.

   Since the verifier will be changed by the client upon each
   initialization, the server can compare WRITE operation).

   With NFSv3, there was no notion of a new verifier stateid so there was no way to
   tell if the verifier
   associated with currently held locks and determine that they do not
   match.  This signifies the client's new instantiation and subsequent
   loss application process of locking state.  As a result, the server is free to release
   all locks held which are associated with client sending the old clientid which READ or
   WRITE operation had also acquired the appropriate byte-range lock on
   the file.  Thus there was
   derived from no way to implement mandatory locking.
   With the old verifier. stateid construct, this barrier has been removed.

   Note that the verifier must have the same uniqueness properties of
   the verifier for UNIX environments that support mandatory file locking,
   the COMMIT operation.

9.6.2.  Server Failure distinction between advisory and Recovery

   If the server loses mandatory locking state (usually as a result of a restart
   or reboot), it must allow clients time to discover this fact is subtle.  In
   fact, advisory and re-
   establish the lost locking state.  The client must be able to re-
   establish mandatory byte-range locks are exactly the locking state without having same in
   so far as the server deny valid
   requests because APIs and requirements on implementation.  If the server has granted conflicting access to another
   client.  Likewise, if there
   mandatory lock attribute is set on the possibility that clients have not
   yet re-established their locking state for a file, the server must
   disallow READ and WRITE operations for that file.  The duration of
   this recovery period is equal checks to see
   if the duration of the lease period.

   A client can determine that server failure (and thus loss of locking
   state) lockowner has occurred, when it receives one of two errors.  The
   NFS4ERR_STALE_STATEID error indicates a stateid invalidated by a
   reboot or restart.  The NFS4ERR_STALE_CLIENTID error indicates a
   clientid invalidated by reboot an appropriate shared (read) or restart.  When either of these are
   received, the client must establish a new clientid (see
   Section 9.1.1) and re-establish exclusive
   (write) byte-range lock on the locking state as discussed below.

   The period of special handling of locking and READs and WRITEs, equal
   in duration region it wishes to read or write to.
   If there is no appropriate lock, the lease period, server checks if there is referred a
   conflicting lock (which can be done by attempting to as acquire the "grace
   period".  During
   conflicting lock on the grace period, clients recover locks and behalf of the
   associated state by reclaim-type locking requests (i.e., LOCK
   requests with reclaim set to true lockowner, and OPEN operations with a claim
   type of CLAIM_PREVIOUS).  During if successful,
   release the grace period, lock after the server must
   reject READ and or WRITE operations and non-reclaim locking requests
   (i.e., other LOCK is done), and OPEN operations) with an error of
   NFS4ERR_GRACE.

   If if there is,
   the server can reliably determine that granting a non-reclaim
   request will not conflict with reclamation of returns NFS4ERR_LOCKED.

   For Windows environments, there are no advisory byte-range locks, so
   the server always checks for byte-range locks by other clients, during I/O requests.

   Thus, the NFS4ERR_GRACE error NFSv4 LOCK operation does not have need to be returned distinguish between
   advisory and mandatory byte-range locks.  It is the non-
   reclaim client request can be serviced.  For NFS version 4
   server's processing of the server to be able to
   service READ and WRITE operations during that introduces
   the grace period, it must
   again be able to guarantee that no possible conflict could arise
   between distinction.

   Every stateid other than the special stateid values noted in this
   section, whether returned by an impending reclaim locking request and OPEN-type operation (i.e., OPEN,
   OPEN_DOWNGRADE), or by a LOCK-type operation (i.e., LOCK or LOCKU),
   defines an access mode for the READ file (i.e., READ, WRITE, or WRITE
   operation.  If READ-
   WRITE) as established by the server is unable to offer original OPEN which began the stateid
   sequence, and as modified by subsequent OPENs and OPEN_DOWNGRADEs
   within that guarantee, stateid sequence.  When a READ, WRITE, or SETATTR which
   specifies the
   NFS4ERR_GRACE error must be returned size attribute, is done, the operation is subject to
   checking against the client.

   For a server access mode to provide simple, valid handling during verify that the grace
   period, operation is
   appropriate given the easiest method OPEN with which the operation is to simply reject all non-reclaim
   locking requests associated.

   In the case of WRITE-type operations (i.e., WRITEs and READ SETATTRs which
   set size), the server must verify that the access mode allows writing
   and WRITE operations by returning return an NFS4ERR_OPENMODE error if it does not.  In the case, of
   READ, the
   NFS4ERR_GRACE error.  However, a server may keep information about
   granted locks in stable storage.  With this information, perform the server
   could determine if a regular lock corresponding check on the access
   mode, or it may choose to allow READ or on opens for WRITE operation can be
   safely processed.

   For example, only, to
   accommodate clients whose write implementation may unavoidably do
   reads (e.g., due to buffer cache constraints).  However, even if a count of locks on a given file is available
   READs are allowed in
   stable storage, these circumstances, the server can track reclaimed locks MUST still check
   for the file and
   when all reclaims have been processed, non-reclaim locking requests
   may be processed.  This way the server can ensure locks that non-reclaim
   locking requests will not conflict with potential reclaim requests.
   With respect to I/O requests, if the server is able to determine READ (e.g., another open specify
   denial of READs).  Note that
   there are no outstanding reclaim requests for a file by information
   from stable storage or another similar mechanism, server which does enforce the processing of
   I/O requests could proceed normally access
   mode check on READs need not explicitly check for conflicting share
   reservations since the file.

   To reiterate, existence of OPEN for a server read access guarantees
   that allows non-reclaim lock and I/O
   requests no conflicting share reservation can exist.

   A stateid of all bits 1 (one) MAY allow READ operations to be processed during bypass
   locking checks at the grace period, it server.  However, WRITE operations with a
   stateid with bits all 1 (one) MUST determine
   that no lock subsequently reclaimed will be rejected NOT bypass locking checks and that no are
   treated exactly the same as if a stateid of all bits 0 were used.

   A lock
   subsequently reclaimed would have prevented any I/O may not be granted while a READ or WRITE operation
   processed during using one
   of the grace period.

   Clients should be prepared for special stateids is being performed and the return range of NFS4ERR_GRACE errors for
   non-reclaim the lock and I/O requests.  In this case
   request conflicts with the client should
   employ a retry mechanism for range of the request.  A delay (on the order of
   several seconds) between retries should be used to avoid overwhelming READ or WRITE operation.  For
   the server.  Further discussion purposes of the general issue this paragraph, a conflict occurs when a shared lock
   is included in
   [20].  The client must account for the server requested and a WRITE operation is being performed, or an
   exclusive lock is requested and either a READ or a WRITE operation is
   being performed.  A SETATTR that sets size is able treated similarly to perform
   I/O and non-reclaim locking requests within the grace period a
   WRITE as well discussed above.

9.1.6.  Sequencing of Lock Requests

   Locking is different than most NFS operations as those it requires "at-
   most-one" semantics that cannot do so.

   A reclaim-type are not provided by ONCRPC.  ONCRPC over a
   reliable transport is not sufficient because a sequence of locking request outside the server's grace period can
   only succeed if
   requests may span multiple TCP connections.  In the server can guarantee that no conflicting lock face of
   retransmission or
   I/O request has been granted since reboot reordering, lock or restart.

   A server may, upon restart, establish unlock requests must have a new value for the lease
   period.  Therefore, clients should, once
   well defined and consistent behavior.  To accomplish this, each lock
   request contains a new clientid sequence number that is
   established, refetch a consecutively increasing
   integer.  Different lock_owners have different sequences.  The server
   maintains the lease_time attribute last sequence number (L) received and use it as the basis
   for lease renewal response that
   was returned.  The server is free to assign any value for the lease associated with first
   request issued for any given lock_owner.

   Note that server.
   However, the server must establish, for this restart event, requests that contain a grace
   period at least as long as the lease period sequence number, for the previous server
   instantiation.  This allows the client state obtained during the
   previous server instance to each
   lock_owner, there should be reliably re-established.

9.6.3.  Network Partitions and Recovery no more than one outstanding request.

   If the duration of a network partition request (r) with a previous sequence number (r < L) is greater than the lease
   period provided by received,
   it is rejected with the server, return of error NFS4ERR_BAD_SEQID.  Given a
   properly-functioning client, the server will response to (r) must have not been
   received a
   lease renewal from before the client. last request (L) was sent.  If this occurs, the server may free
   all locks held for the client.  As a result, all stateids held by the
   client will become invalid or stale.  Once the client duplicate of
   last request (r == L) is able to
   reach received, the server after such stored response is returned.
   If a network partition, all I/O submitted by
   the client with request beyond the now invalid stateids will fail next sequence (r == L + 2) is received, it is
   rejected with the server
   returning the error NFS4ERR_EXPIRED.  Once this return of error NFS4ERR_BAD_SEQID.  Sequence
   history is received, reinitialized whenever the SETCLIENTID/SETCLIENTID_CONFIRM
   sequence changes the client will suitably notify verifier.

   Since the application that held sequence number is represented with an unsigned 32-bit
   integer, the lock.

   As a courtesy to arithmetic involved with the client or as sequence number is mod
   2^32.  For an optimization, the server may
   continue to hold locks on behalf example of a client for which recent
   communication has extended beyond the lease period.  If modulo arithmetic involving sequence numbers
   see [33].

   It is critical the server
   receives maintain the last response sent to the
   client to provide a lock or I/O request more reliable cache of duplicate non-idempotent
   requests than that conflicts with one of these
   courtesy locks, the server must free traditional cache described in [34].  The
   traditional duplicate request cache uses a least recently used
   algorithm for removing unneeded requests.  However, the courtesy last lock
   request and grant the
   new request.

   When a network partition is combined with a server reboot, there are
   edge conditions that place requirements response on a given lock_owner must be cached as long as
   the server in order to
   avoid silent data corruption following lock state exists on the server reboot.  Two of
   these edge conditions are known, and are discussed below. server.

   The first edge condition has client MUST monotonically increment the following scenario:

   1.  Client A acquires a lock.

   2.  Client A sequence number for the
   CLOSE, LOCK, LOCKU, OPEN, OPEN_CONFIRM, and server experience mutual network partition, such
       that client A OPEN_DOWNGRADE
   operations.  This is unable to renew its lease.

   3.  Client A's lease expires, so server releases lock.

   4.  Client B acquires a lock true even in the event that would have conflicted with the previous
   operation that of
       Client A.

   5.  Client B releases used the lock

   6.  Server reboots

   7.  Network partition between client A and server heals.

   8.  Client A issues a RENEW operation, and gets back a
       NFS4ERR_STALE_CLIENTID.

   9.  Client A reclaims its lock within sequence number received an error.  The only
   exception to this rule is if the server's grace period.

   Thus, at previous operation received one of
   the final step, following errors: NFS4ERR_STALE_CLIENTID, NFS4ERR_STALE_STATEID,
   NFS4ERR_BAD_STATEID, NFS4ERR_BAD_SEQID, NFS4ERR_BADXDR,
   NFS4ERR_RESOURCE, NFS4ERR_NOFILEHANDLE, or NFS4ERR_MOVED.

9.1.7.  Recovery from Replayed Requests

   As described above, the server has erroneously granted client
   A's lock reclaim.  If client B modified sequence number is per lock_owner.  As long
   as the object server maintains the lock was
   protecting, client A will experience object corruption.

   The second known edge condition follows:

   1.   Client A acquires a lock.

   2.   Server reboots.

   3.   Client A last sequence number received and server experience mutual network partition, such
        that client A is unable to reclaim its lock within follows
   the grace
        period.

   4.   Server's reclaim grace period ends.  Client A has methods described above, there are no locks
        recorded on server.

   5.   Client B acquires a lock that would have conflicted with that risks of
        Client A.

   6.   Client B releases the lock.

   7.   Server reboots a second time.

   8.   Network partition between client A and Byzantine router
   re-sending old requests.  The server heals.

   9.   Client A issues a RENEW operation, and gets back a
        NFS4ERR_STALE_CLIENTID.

   10.  Client A reclaims its lock within need only maintain the server's grace period.

   As
   (lock_owner, sequence number) state as long as there are open files
   or closed files with locks outstanding.

   LOCK, LOCKU, OPEN, OPEN_DOWNGRADE, and CLOSE each contain a sequence
   number and therefore the first edge condition, the final step risk of the scenario replay of
   the second edge condition has these operations
   resulting in undesired effects is non-existent while the server erroneously granting client
   A's lock reclaim.

   Solving
   maintains the first and second edge conditions requires that the server
   either assume after it reboots that edge condition occurs, and thus
   return NFS4ERR_NO_GRACE for all reclaim attempts, lock_owner state.

9.1.8.  Releasing lock_owner State

   When a particular lock_owner no longer holds open or that file locking
   state at the server
   record some information stable storage.  The amount of information server, the server records in stable storage is in inverse proportion may choose to how
   harsh release the server wants to be whenever sequence
   number state associated with the edge conditions occur. lock_owner.  The server that is completely tolerant may make
   this choice based on lease expiration, for the reclamation of all edge conditions will record
   in stable storage every lock that is acquired, removing server
   memory, or other implementation specific details.  In any event, the lock
   record from stable storage
   server is able to do this safely only when the lock lock_owner no longer
   is unlocked being utilized by the
   client and client.  The server may choose to hold the lock's lockowner advances
   lock_owner state in the sequence number such event that retransmitted requests are
   received.  However, the lock release period to hold this state is not implementation
   specific.

   In the last stateful event for case that a LOCK, LOCKU, OPEN_DOWNGRADE, or CLOSE is
   retransmitted after the
   lockowner's sequence.  For server has previously released the two aforementioned edge conditions, lock_owner
   state, the harshest a server can be, and still support a grace period for
   reclaims, requires will find that the server record in stable storage
   information some minimal information.  For example, a server
   implementation could, for each client, save in stable storage a
   record containing:

   o lock_owner has no files open and
   an error will be returned to the client's id string

   o client.  If the lock_owner does have
   a boolean that indicates if file open, the client's lease expired or if there
      was administrative intervention (see Section 9.8) stateid will not match and again an error is
   returned to revoke a
      record lock, share reservation, or delegation

   o  a timestamp the client.

9.1.9.  Use of Open Confirmation

   In the case that an OPEN is updated retransmitted and the lock_owner is being
   used for the first time after a server boot or
      reboot the client acquires record locking, share reservation, or
      delegation lock_owner state on the server.  The timestamp need not be updated
      on subsequent lock requests until has been previously
   released by the server reboots.

   The server implementation would also record in server, the stable storage use of the
   timestamps from OPEN_CONFIRM operation will
   prevent incorrect behavior.  When the two most recent server reboots.

   Assuming observes the above record keeping, use of the
   lock_owner for the first edge condition,
   after the server reboots, time, it will direct the record that client A's lease expired
   means that another client could have acquired to perform
   the OPEN_CONFIRM for the corresponding OPEN.  This sequence
   establishes the use of a conflicting record
   lock, share reservation, or delegation.  Hence lock_owner and associated sequence number.
   Since the server must reject OPEN_CONFIRM sequence connects a reclaim from client A with new open_owner on the error NFS4ERR_NO_GRACE.

   For
   server with an existing open_owner on a client, the second edge condition, after sequence number
   may have any value.  The OPEN_CONFIRM step assures the server reboots that
   the value received is the correct one. (see Section 15.20 for further
   details.)

   There are a second
   time, the record that number of situations in which the client had requirement to confirm
   an unexpired record lock, share
   reservation, or delegation established before the server's previous
   incarnation means that OPEN would pose difficulties for the server must reject a reclaim from client A
   with the error NFS4ERR_NO_GRACE.

   Regardless of the level and approach server, in that
   they would be prevented from acting in a timely fashion on
   information received, because that information would be provisional,
   subject to record keeping, deletion upon non-confirmation.  Fortunately, these are
   situations in which the server
   MUST implement one of can avoid the following strategies (which apply need for confirmation
   when responding to
   reclaims of share reservations, record locks, and delegations):

   1.  Reject all reclaims with NFS4ERR_NO_GRACE.  This is superharsh,
       but necessary if the open requests.  The two constraints are:

   o  The server does must not want to record lock state in
       stable storage.

   2.  Record sufficient state in stable storage such that all known
       edge conditions involving bestow a delegation for any open which would
      require confirmation.

   o  The server reboot, including the two noted
       in this section, are detected.  False positives MUST NOT require confirmation on a reclaim-type open
      (i.e., one specifying claim type CLAIM_PREVIOUS or
      CLAIM_DELEGATE_PREV).

   These constraints are acceptable.
       Note related in that at this time, it is not known if there reclaim-type opens are other edge
       conditions.  In the event, after a server reboot, only
   ones in which the server
       determines that there is unrecoverable damage or corruption may be required to send a delegation.  For
   CLAIM_NULL, sending the the stable storage, then delegation is optional while for all clients and/or locks
       affected,
   CLAIM_DELEGATE_CUR, no delegation is sent.

   Delegations being sent with an open requiring confirmation are
   troublesome because recovering from non-confirmation adds undue
   complexity to the server MUST return NFS4ERR_NO_GRACE.

   A mandate for protocol while requiring confirmation on reclaim-
   type opens poses difficulties in that the client's handling inability to resolve the
   status of the NFS4ERR_NO_GRACE error is
   outside reclaim until lease expiration may make it difficult to
   have timely determination of the scope set of this specification, since locks being reclaimed (since
   the strategies for
   such handling are very dependent grace period may expire).

   Requiring open confirmation on the client's operating
   environment.  However, one potential approach reclaim-type opens is described below.

   When the client receives NFS4ERR_NO_GRACE, it could examine the
   change attribute avoidable
   because of the objects nature of the client environments in which such opens are
   done.  For CLAIM_PREVIOUS opens, this is trying to reclaim state
   for, and use that immediately after server
   reboot, so there should be no time for lockowners to determine whether be created,
   found to re-establish the state via
   normal OPEN or LOCK requests.  This is acceptable provided the
   client's operating environment allows it.  In otherwords, the be unused, and recycled.  For CLAIM_DELEGATE_PREV opens, we
   are dealing with a client
   implementor is advised to document reboot situation.  A server which supports
   delegation can be sure that no lockowners for his users the behavior.  The
   client could also inform the application that its record lock or
   share reservations (whether they were delegated or not) client have been
   lost, such as via
   recycled since client initialization and thus can ensure that
   confirmation will not be required.

9.2.  Lock Ranges

   The protocol allows a UNIX signal, lock owner to request a GUI pop-up window, etc.  See
   Section 10.5, for lock with a discussion of what the client should do for
   dealing with unreclaimed delegations on client state.

   For further discussion byte range
   and then either upgrade or unlock a sub-range of revocation the initial lock.
   It is expected that this will be an uncommon type of locks see Section 9.8.

9.7.  Recovery from a Lock Request Timeout request.  In any
   case, servers or Abort server filesystems may not be able to support sub-
   range lock semantics.  In the event that a lock server receives a locking
   request times out, that represents a client may decide sub-range of current locking state for the
   lock owner, the server is allowed to return the error
   NFS4ERR_LOCK_RANGE to signify that it does not
   retry support sub-range lock
   operations.  Therefore, the request.  The client may also abort should be prepared to receive this
   error and, if appropriate, report the request when error to the
   process for which it was issued requesting
   application.

   The client is terminated (e.g., in UNIX due discouraged from combining multiple independent locking
   ranges that happen to be adjacent into a
   signal).  It is possible though that single request since the
   server received the request may not support sub-range requests and acted upon it.  This would change for reasons related to
   the recovery of file locking state on the server without in the client being aware event of server failure.
   As discussed in the change.  It is paramount that Section 9.6.2 below, the
   client re-synchronize state with server before it attempts any other
   operation may employ
   certain optimizations during recovery that takes a seqid and/or a stateid with work effectively only when
   the same
   lock_owner.  This client's behavior during lock recovery is straightforward similar to do without a special re-
   synchronize operation.

   Since the client's
   locking behavior prior to server maintains the last lock request failure.

9.3.  Upgrading and response
   received on the lock_owner, for each lock_owner, the Downgrading Locks

   If a client should
   cache the last has a write lock request on a record, it sent such that can request an atomic
   downgrade of the lock to a read lock via the LOCK request, by setting
   the type to READ_LT.  If the server supports atomic downgrade, the
   request did
   not will succeed.  If not, it will return NFS4ERR_LOCK_NOTSUPP.
   The client should be prepared to receive a response.  From this, this error, and if
   appropriate, report the next time error to the requesting application.

   If a client does has a read lock operation for the lock_owner, on a record, it can send the cached request, if
   there is one, and if the request was one that established state
   (e.g., an atomic
   upgrade of the lock to a write lock via the LOCK request by setting
   the type to WRITE_LT or OPEN operation), WRITEW_LT.  If the server does not support
   atomic upgrade, it will return NFS4ERR_LOCK_NOTSUPP.  If the cached
   result or if never saw the request, perform it.  The client upgrade
   can
   follow up with a be achieved without an existing conflict, the request to remove will
   succeed.  Otherwise, the state (e.g., a LOCKU server will return either NFS4ERR_DENIED or CLOSE
   operation).  With this approach, the sequencing and stateid
   information on
   NFS4ERR_DEADLOCK.  The error NFS4ERR_DEADLOCK is returned if the
   client and server for issued the given lock_owner will
   re-synchronize and in turn LOCK request with the lock state will re-synchronize.

9.8.  Server Revocation of Locks

   At any point, type set to WRITEW_LT and the
   server can revoke locks held by has detected a deadlock.  The client and the
   client must should be prepared for this event.  When to
   receive such errors and if appropriate, report the error to the
   requesting application.

9.4.  Blocking Locks

   Some clients require the support of blocking locks.  The NFS version
   4 protocol must not rely on a callback mechanism and therefore is
   unable to notify a client detects that
   its locks have when a previously denied lock has been or may
   granted.  Clients have been revoked, the client is
   responsible no choice but to continually poll for validating the state information between itself
   lock.  This presents a fairness problem.  Two new lock types are
   added, READW and WRITEW, and are used to indicate to the server.  Validating locking state for server that
   the client means that it
   must verify or reclaim state for each lock currently held. is requesting a blocking lock.  The first instance server should maintain
   an ordered list of pending blocking locks.  When the conflicting lock revocation
   is upon released, the server reboot or re-
   initialization.  In this instance may wait the client will receive an error
   (NFS4ERR_STALE_STATEID or NFS4ERR_STALE_CLIENTID) and lease period for the first
   waiting client will
   proceed with normal crash recovery as described in to re-request the previous
   section.

   The second lock revocation event lock.  After the lease period
   expires the next waiting client request is allowed the inability lock.  Clients
   are required to renew poll at an interval sufficiently small that it is
   likely to acquire the lease
   before expiration.  While this lock in a timely manner.  The server is considered not
   required to maintain a rare or unusual event, list of pending blocked locks as it is used to
   increase fairness and not correct operation.  Because of the client must
   unordered nature of crash recovery, storing of lock state to stable
   storage would be prepared required to recover.  Both guarantee ordered granting of blocking
   locks.

   Servers may also note the server lock types and client
   will be able to detect delay returning denial of
   the failure request to renew allow extra time for a conflicting lock to be
   released, allowing a successful return.  In this way, clients can
   avoid the lease and are capable burden of recovering without data corruption.  For needlessly frequent polling for blocking locks.
   The server should take care in the server, it tracks length of delay in the
   last renewal event serviced for the
   client and knows when retransmits the lease
   will expire.  Similarly, request.

   If a server receives a blocking lock request, denies it, and then
   later receives a nonblocking request for the client must track operations same lock, which will
   renew the lease period.  Using is also
   denied, then it should remove the time that each lock in question from its list of
   pending blocking locks.  Clients should use such a nonblocking
   request was
   sent and to indicate to the time server that this is the corresponding reply was received, last time they
   intend to poll for the
   client should bound lock, as may happen when the time that process
   requesting the corresponding renewal could
   have occurred on lock is interrupted.  This is a courtesy to the server and thus determine if
   server, to prevent it is possible that from unnecessarily waiting a lease period expiration could have occurred.

   The third
   before granting other lock revocation event can occur as a result of
   administrative intervention within the lease period.  While requests.  However, clients are not
   required to perform this is
   considered a rare event, it is possible that courtesy, and servers must not depend on
   them doing so.  Also, clients must be prepared for the server's
   administrator has decided possibility
   that this final locking request will be accepted.

9.5.  Lease Renewal

   The purpose of a lease is to release or revoke allow a particular lock server to remove stale locks
   that are held by the client.  As a result of revocation, the client will receive an
   error of NFS4ERR_ADMIN_REVOKED.  In this instance the client may
   assume that only the lock_owner's locks have been lost.  The client
   notifies the lock holder appropriately.  The client has crashed or is otherwise
   unreachable.  It is not a mechanism for cache consistency and lease
   renewals may not assume be denied if the lease period interval has been renewed as not expired.

   The following events cause implicit renewal of all of the leases for
   a result given client (i.e., all those sharing a given client ID).  Each of
   these is a failed operation.

   When positive indication that the client determines the lease period may have expired, is still active and
   that the
   client must mark all locks associated state held at the server, for the associated lease as
   "unvalidated". client, is
   still valid.

   o  An OPEN with a valid client ID.

   o  Any operation made with a valid stateid (CLOSE, DELEGPURGE,
      DELEGRETURN, LOCK, LOCKU, OPEN, OPEN_CONFIRM, OPEN_DOWNGRADE,
      READ, RENEW, SETATTR, or WRITE).  This means does not include the client has been unable to re-establish
      special stateids of all bits 0 or confirm all bits 1.

      Note that if the appropriate lock state with client had restarted or rebooted, the server.  As described
   in Section 9.6, there are scenarios in which client
      would not be making these requests without issuing the server may grant
   conflicting locks after
      SETCLIENTID/SETCLIENTID_CONFIRM sequence.  The use of the lease period has expired for a client.
   When it is possible
      SETCLIENTID/SETCLIENTID_CONFIRM sequence (one that the lease period has expired, changes the
      client
   must validate each lock currently held verifier) notifies the server to ensure that a conflicting
   lock has not been granted.  The client may accomplish this task by
   issuing an I/O request, either a pending I/O or a zero-length read,
   specifying drop the stateid locking state
      associated with the lock in question. client.  SETCLIENTID/SETCLIENTID_CONFIRM never
      renews a lease.

      If the
   response to server has rebooted, the request is success, stateids (NFS4ERR_STALE_STATEID
      error) or the client has validated all of
   the locks governed by that stateid and re-established ID (NFS4ERR_STALE_CLIENTID error) will not be
      valid hence preventing spurious renewals.

   This approach allows for low overhead lease renewal which scales
   well.  In the appropriate
   state between itself typical case no extra RPC calls are required for lease
   renewal and in the server.

   If the I/O request is not successful, then worst case one or more RPC is required every lease period
   (i.e., a RENEW operation).  The number of the locks
   associated with the stateid was revoked held by the server and the client
   must notify the owner.

9.9.  Share Reservations

   A share reservation is a mechanism to control access to a file.  It is
   not a separate and independent mechanism from record locking.  When a factor since all state for the client opens is involved with the
   lease renewal action.

   Since all operations that create a file, it issues an OPEN operation to new lease also renew existing
   leases, the server
   specifying the type of access required (READ, WRITE, or BOTH) must maintain a common lease expiration time for
   all valid leases for a given client.  This lease time can then be
   easily updated upon implicit lease renewal actions.

9.6.  Crash Recovery

   The important requirement in crash recovery is that both the client
   and the
   type server know when the other has failed.  Additionally, it is
   required that a client sees a consistent view of access to deny others (deny NONE, READ, WRITE, data across server
   restarts or BOTH).  If
   the OPEN fails reboots.  All READ and WRITE operations that may have
   been queued within the client will fail the application's open request.

   Pseudo-code definition of or network buffers must wait until the semantics:

   if (request.access == 0)
           return (NFS4ERR_INVAL)
   else if ((request.access & file_state.deny)) ||
       (request.deny & file_state.access))
           return (NFS4ERR_DENIED)

   This checking of share reservations on OPEN is done with no exception
   for an existing OPEN for
   client has successfully recovered the same open_owner.

   The constants used for locks protecting the OPEN READ and OPEN_DOWNGRADE operations for the
   access
   WRITE operations.

9.6.1.  Client Failure and deny fields are as follows:

   const OPEN4_SHARE_ACCESS_READ   = 0x00000001;
   const OPEN4_SHARE_ACCESS_WRITE  = 0x00000002;
   const OPEN4_SHARE_ACCESS_BOTH   = 0x00000003;

   const OPEN4_SHARE_DENY_NONE     = 0x00000000;
   const OPEN4_SHARE_DENY_READ     = 0x00000001;
   const OPEN4_SHARE_DENY_WRITE    = 0x00000002;
   const OPEN4_SHARE_DENY_BOTH     = 0x00000003;

9.10.  OPEN/CLOSE Operations

   To provide correct share semantics, Recovery

   In the event that a client MUST use fails, the OPEN
   operation to obtain server may recover the initial filehandle and indicate client's
   locks when the desired
   access and what access, if any, to deny.  Even if associated leases have expired.  Conflicting locks
   from another client may only be granted after this lease expiration.
   If the client intends is able to use a stateid of all 0's restart or all 1's, it must still obtain the
   filehandle for the regular file with reinitialize within the OPEN operation so lease
   period the
   appropriate share semantics can client may be applied.  For clients that do not
   have a deny mode built into their open programming interfaces, deny
   equal forced to NONE should be used.

   The OPEN operation with wait the CREATE flag, also subsumes remainder of the CREATE
   operation for regular files as used in previous versions lease
   period before obtaining new locks.

   To minimize client delay upon restart, lock requests are associated
   with an instance of the NFS
   protocol. client by a client supplied verifier.  This allows
   verifier is part of the initial SETCLIENTID call made by the client.
   The server returns a create with client ID as a share to be done atomically. result of the SETCLIENTID
   operation.  The CLOSE operation removes all share reservations held by client then confirms the
   lock_owner on that file.  If record locks are held, use of the client SHOULD
   release ID with
   SETCLIENTID_CONFIRM.  The client ID in combination with an opaque
   owner field is then used by the client to identify the lock owner for
   OPEN.  This chain of associations is then used to identify all locks before issuing
   for a CLOSE.  The particular client.

   Since the verifier will be changed by the client upon each
   initialization, the server MAY free all
   outstanding can compare a new verifier to the verifier
   associated with currently held locks on CLOSE but some servers may and determine that they do not support
   match.  This signifies the CLOSE client's new instantiation and subsequent
   loss of locking state.  As a file that still has record locks held.  The server MUST return
   failure, NFS4ERR_LOCKS_HELD, if any locks would exist after the
   CLOSE.

   The LOOKUP operation will return a filehandle without establishing
   any lock state on the server.  Without a valid stateid, result, the server
   will assume is free to release
   all locks held which are associated with the old client has ID which was
   derived from the least access.  For example, a file
   opened with deny READ/WRITE cannot be accessed using a filehandle
   obtained through LOOKUP because it would not old verifier.

   Note that the verifier must have a valid stateid
   (i.e., using a stateid of all bits 0 or all bits 1).

9.10.1.  Close and Retention of State Information

   Since a CLOSE operation requests deallocation of a stateid, dealing
   with retransmission the same uniqueness properties of
   the CLOSE, may pose special difficulties,
   since verifier for the state information, which normally would be used to
   determine COMMIT operation.

9.6.2.  Server Failure and Recovery

   If the server loses locking state of the open file being designated, might be
   deallocated, resulting in an NFS4ERR_BAD_STATEID error.

   Servers may deal with this problem in (usually as a number result of ways.  To provide
   the greatest degree assurance that the protocol is being used
   properly, a server should, rather than deallocate the stateid, mark restart
   or reboot), it as close-pending, must allow clients time to discover this fact and retain re-
   establish the stateid with this status, until
   later deallocation.  In this way, a retransmitted CLOSE can lost locking state.  The client must be
   recognized since the stateid points able to re-
   establish the locking state information with this
   distinctive status, so that it can be handled without error.

   When adopting this strategy, a server should retain having the state
   information until server deny valid
   requests because the earliest of:

   o  Another validly sequenced request for server has granted conflicting access to another
   client.  Likewise, if there is the same lockowner, possibility that is clients have not
   yet re-established their locking state for a retransmission.

   o  The time that a lockowner is freed by file, the server due to period
      with no activity.

   o  All locks must
   disallow READ and WRITE operations for the client are freed as a result that file.  The duration of a SETCLIENTID.

   Servers may avoid
   this complexity, at recovery period is equal to the cost duration of less complete
   protocol error checking, by simply responding NFS4_OK in the event lease period.

   A client can determine that server failure (and thus loss of locking
   state) has occurred, when it receives one of two errors.  The
   NFS4ERR_STALE_STATEID error indicates a CLOSE for a deallocated stateid, on the assumption that this case
   must be caused stateid invalidated by a retransmitted close.  When adopting this
   approach, it is desirable to at least log an
   reboot or restart.  The NFS4ERR_STALE_CLIENTID error when returning a
   no-error indication in this situation.  If the server maintains indicates a
   reply-cache mechanism, it can verify
   client ID invalidated by reboot or restart.  When either of these are
   received, the CLOSE is indeed client must establish a
   retransmission new client ID (see
   Section 9.1.1) and avoid error logging in most cases.

9.11.  Open Upgrade re-establish the locking state as discussed below.

   The period of special handling of locking and Downgrade

   When an OPEN is done for a file READs and WRITEs, equal
   in duration to the lockowner for which the open
   is being done already has the file open, the result lease period, is referred to upgrade the
   open file status maintained on as the server to include "grace
   period".  During the access grace period, clients recover locks and
   deny bits specified by the new
   associated state by reclaim-type locking requests (i.e., LOCK
   requests with reclaim set to true and OPEN as well as those for operations with a claim
   type of CLAIM_PREVIOUS).  During the existing
   OPEN.  The result is that there is one open file, as far as grace period, the
   protocol is concerned, server must
   reject READ and it includes the union of the access WRITE operations and
   deny bits for all non-reclaim locking requests
   (i.e., other LOCK and OPEN operations) with an error of
   NFS4ERR_GRACE.

   If the OPEN requests completed.  Only server can reliably determine that granting a single
   CLOSE non-reclaim
   request will be done to reset the effects not conflict with reclamation of both OPENs.  Note that the
   client, when issuing locks by other clients,
   the OPEN, may NFS4ERR_GRACE error does not know that the same file is in
   fact being opened.  The above only applies if both OPENs result in
   the OPENed object being designated by have to be returned and the same filehandle.

   When non-
   reclaim client request can be serviced.  For the server chooses to export multiple filehandles corresponding be able to the same file object
   service READ and returns different filehandles on two
   different OPENs of the same file object, the server MUST NOT "OR"
   together WRITE operations during the access and deny bits grace period, it must
   again be able to guarantee that no possible conflict could arise
   between an impending reclaim locking request and coalesce the two open files.
   Instead READ or WRITE
   operation.  If the server is unable to offer that guarantee, the
   NFS4ERR_GRACE error must maintain separate OPENs with separate
   stateids and will require separate CLOSEs be returned to free them.

   When multiple open files on the client are merged into client.

   For a single open
   file object on the server, server to provide simple, valid handling during the close of one of grace
   period, the open files (on easiest method is to simply reject all non-reclaim
   locking requests and READ and WRITE operations by returning the
   client)
   NFS4ERR_GRACE error.  However, a server may necessitate change of keep information about
   granted locks in stable storage.  With this information, the access and deny status server
   could determine if a regular lock or READ or WRITE operation can be
   safely processed.

   For example, if a count of the
   open file locks on the server.  This a given file is because the union of available in
   stable storage, the access and
   deny bits server can track reclaimed locks for the remaining opens file and
   when all reclaims have been processed, non-reclaim locking requests
   may be smaller (i.e., a proper
   subset) than previously.  The OPEN_DOWNGRADE operation is used to
   make the necessary change and processed.  This way the client should use it server can ensure that non-reclaim
   locking requests will not conflict with potential reclaim requests.
   With respect to update I/O requests, if the server so is able to determine that share reservation requests by other clients
   there are
   handled properly.

9.12.  Short and Long Leases

   When determining the time period no outstanding reclaim requests for the server lease, the usual
   lease tradeoffs apply.  Short leases are good for fast server
   recovery at a cost of increased RENEW file by information
   from stable storage or READ (with zero length)
   requests.  Longer leases are certainly kinder and gentler to servers
   trying to handle very large numbers of clients.  The number of RENEW
   requests drop in proportion to another similar mechanism, the lease time.  The disadvantages processing of
   long leases are slower recovery after server failure (the server must
   wait
   I/O requests could proceed normally for the leases file.

   To reiterate, for a server that allows non-reclaim lock and I/O
   requests to expire be processed during the grace period, it MUST determine
   that no lock subsequently reclaimed will be rejected and that no lock
   subsequently reclaimed would have prevented any I/O operation
   processed during the grace period to elapse before
   granting new period.

   Clients should be prepared for the return of NFS4ERR_GRACE errors for
   non-reclaim lock requests) and increased file contention (if I/O requests.  In this case the client
   fails should
   employ a retry mechanism for the request.  A delay (on the order of
   several seconds) between retries should be used to transmit an unlock request then server avoid overwhelming
   the server.  Further discussion of the general issue is included in
   [20].  The client must wait account for lease
   expiration before granting new locks).

   Long leases are usable if the server that is able to store lease state in
   non-volatile memory.  Upon recovery, perform
   I/O and non-reclaim locking requests within the grace period as well
   as those that cannot do so.

   A reclaim-type locking request outside the server's grace period can
   only succeed if the server can reconstruct guarantee that no conflicting lock or
   I/O request has been granted since reboot or restart.

   A server may, upon restart, establish a new value for the lease state from its non-volatile memory and continue operation with
   its
   period.  Therefore, clients should, once a new client ID is
   established, refetch the lease_time attribute and therefore long leases would not be an issue.

9.13.  Clocks, Propagation Delay, and Calculating Lease Expiration

   To avoid use it as the need basis
   for synchronized clocks, lease times are granted by renewal for the server as a time delta. lease associated with that server.
   However, there is the server must establish, for this restart event, a requirement that grace
   period at least as long as the lease period for the previous server
   instantiation.  This allows the client and state obtained during the
   previous server clocks do not drift excessively over instance to be reliably re-established.

9.6.3.  Network Partitions and Recovery

   If the duration of the lock.  There a network partition is also greater than the issue of propagation delay across lease
   period provided by the
   network which could easily be several hundred milliseconds as well as server, the possibility that requests server will be lost and need to be
   retransmitted.

   To take propagation delay into account, the client should subtract it
   from have not received a
   lease times (e.g., if renewal from the client estimates client.  If this occurs, the one-way
   propagation delay as 200 msec, then it can assume that server may free
   all locks held for the lease is
   already 200 msec old when it gets it).  In addition, it will take
   another 200 msec to get client.  As a response back to the server.  So result, all stateids held by the
   client
   must send a lock renewal will become invalid or write data back stale.  Once the client is able to
   reach the server 400 msec
   before the lease would expire.

   The server's lease period configuration should take into account the after such a network distance of the clients that will be accessing partition, all I/O submitted by
   the server's
   resources.  It is expected that client with the lease period now invalid stateids will take into
   account fail with the network propagation delays and other network delay
   factors for server
   returning the client population.  Since error NFS4ERR_EXPIRED.  Once this error is received,
   the protocol does not allow
   for an automatic method to determine an appropriate lease period, client will suitably notify the
   server's administrator may have to tune application that held the lease period.

9.14.  Migration, Replication and State

   When responsibility for handling lock.

9.6.3.1.  Courtesy Locks

   As a given file system is transferred courtesy to a new server (migration) or the client chooses to use or as an alternate optimization, the server (e.g., in response may
   continue to server unresponsiveness) in the context
   of file system replication, the appropriate handling hold locks on behalf of state shared
   between the a client and server (i.e., locks, leases, stateids, and
   clientids) is as described below.  The handling differs between
   migration and replication.  For related discussion of file server
   state and recover of such see for which recent
   communication has extended beyond the sections under Section 9.6. lease period.  If a server replica or a the server immigrating
   receives a filesystem agrees to, lock or is expected to, accept opaque values from the client I/O request that
   originated from another server, then it is a wise implementation
   practice for the servers to encode the "opaque" values in network
   byte order.  This way, servers acting as replicas or immigrating
   filesystems will be able to parse values like stateids, directory
   cookies, filehandles, etc. even if their native byte order is
   different from other servers cooperating in the replication and
   migration conflicts with one of these
   courtesy locks, the filesystem.

9.14.1.  Migration and State

   In server must free the case of migration, courtesy lock and grant the servers involved in
   new request.

   If the migration of a
   filesystem SHOULD transfer all server state from does not reboot before the network partition is healed,
   when the original client tries to access a courtesy lock which was
   freed, the
   new server.  This must be done in server SHOULD send back a way that is transparent NFS4ERR_BAD_STATEID to the
   client.  This state transfer will ease the client's transition when a
   filesystem migration occurs.  If the servers are successful in
   transferring all state, the client will continue tries to use stateids
   assigned by the original server.  Therefore access a courtesy lock which was not
   freed, then the new server must
   recognize these stateids as valid.  This holds true for should mark all of the clientid courtesy locks as well.  Since responsibility for an entire filesystem
   implicitly being renewed.

   When a network partition is
   transferred combined with a migration event, server reboot, there is no possibility are
   edge conditions that
   conflicts will arise place requirements on the new server as a result of the transfer of
   locks.

   As part of the transfer of information between servers, leases would
   be transferred as well.  The leases being transferred in order to
   avoid silent data corruption following the new server will typically have a different expiration time from those for
   the same client, previously on the old server.  To maintain reboot.  Two of
   these edge conditions are known, and are discussed below.

9.6.3.1.1.  First Server Edge Condition

   The first edge condition has the
   property that all leases on following scenario:

   1.  Client A acquires a given lock.

   2.  Client A and server for a given experience mutual network partition, such
       that client expire
   at the same time, A is unable to renew its lease.

   3.  Client A's lease expires, so server releases lock.

   4.  Client B acquires a lock that would have conflicted with that of
       Client A.

   5.  Client B releases the lock

   6.  Server reboots

   7.  Network partition between client A and server should advance heals.

   8.  Client A issues a RENEW operation, and gets back a
       NFS4ERR_STALE_CLIENTID.

   9.  Client A reclaims its lock within the expiration time to server's grace period.

   Thus, at the later of final step, the leases being transferred or server has erroneously granted client
   A's lock reclaim.  If client B modified the leases already
   present.  This allows object the lock was
   protecting, client to maintain lease renewal of both
   classes without special effort. A will experience object corruption.

9.6.3.1.2.  Second Server Edge Condition

   The servers may choose not second known edge condition follows:

   1.   Client A acquires a lock.

   2.   Server reboots.

   3.   Client A and server experience mutual network partition, such
        that client A is unable to transfer reclaim its lock within the state information upon
   migration.  However, this choice is discouraged.  In this case, when grace
        period.

   4.   Server's reclaim grace period ends.  Client A has no locks
        recorded on server.

   5.   Client B acquires a lock that would have conflicted with that of
        Client A.

   6.   Client B releases the lock.

   7.   Server reboots a second time.

   8.   Network partition between client presents state information from the original A and server (e.g.,
   in heals.

   9.   Client A issues a RENEW op or operation, and gets back a READ op
        NFS4ERR_STALE_CLIENTID.

   10.  Client A reclaims its lock within the server's grace period.

   As with the first edge condition, the final step of zero length), the scenario of
   the second edge condition has the server erroneously granting client must be
   prepared to receive
   A's lock reclaim.

9.6.3.1.3.  Handling Server Edge Conditions

   Solving these edge conditions requires that the server either NFS4ERR_STALE_CLIENTID assume
   after it reboots that edge condition occurs, and thus return
   NFS4ERR_NO_GRACE for all reclaim attempts, or
   NFS4ERR_STALE_STATEID from that the new server.  The client should then
   recover its state server record
   some information as it normally would in response to a
   server failure. stable storage.  The new server must take care to allow for the
   recovery amount of state information as it would in the event of
   server
   restart.

   A client SHOULD re-establish new callback information with records in stable storage is in inverse proportion to how
   harsh the new server as soon as possible, according wants to sequences described be whenever the edge conditions occur.  The
   server that is completely tolerant of all edge conditions will record
   in
   Section 15.35 and Section 15.36.  This ensures stable storage every lock that server operations
   are not blocked is acquired, removing the lock
   record from stable storage only when the lock is unlocked by the inability to recall delegations.

9.14.2.  Replication and State

   Since
   client switch-over in and the case of replication is not under
   server control, lock's lockowner advances the handling of state sequence number such
   that the lock release is different.  In this case,
   leases, stateids and clientids do not have validity across a
   transition from one server to another.  The client must re-establish
   its locks on the new server.  This can be compared to last stateful event for the re-
   establishment of locks by means of reclaim-type requests after
   lockowner's sequence.  For the two aforementioned edge conditions,
   the harshest a server reboot.  The difference is can be, and still support a grace period for
   reclaims, requires that the server has no provision to
   distinguish requests reclaiming locks from those obtaining new locks
   or to defer the latter.  Thus, record in stable storage
   information some minimal information.  For example, a client re-establishing server
   implementation could, for each client, save in stable storage a lock on
   record containing:

   o  the
   new server (by means of client's id string

   o  a LOCK or OPEN request), may have boolean that indicates if the
   requests denied due client's lease expired or if there
      was administrative intervention (see Section 9.8) to revoke a conflicting lock.  Since replication is
   intended for read-only use of filesystems, such denial of locks
   should not pose large difficulties in practice.  When an attempt to
   re-establish
      byte-range lock, share reservation, or delegation

   o  a lock on timestamp that is updated the first time after a new server is denied, boot or
      reboot the client should
   treat the situation as if his original lock had been revoked.

9.14.3.  Notification of Migrated Lease

   In the case of lease renewal, acquires byte-range locking, share reservation,
      or delegation state on the client may server.  The timestamp need not be submitting
      updated on subsequent lock requests for a filesystem that has been migrated to another server.
   This can occur because of until the implicit lease renewal mechanism. server reboots.

   The
   client renews leases for all filesystems when submitting a request to
   any one filesystem at server implementation would also record in the server.

   In order stable storage the
   timestamps from the two most recent server reboots.

   Assuming the above record keeping, for the first edge condition,
   after the server reboots, the record that client to schedule renewal of leases A's lease expired
   means that may another client could have
   been relocated to the new server, acquired a conflicting record
   lock, share reservation, or delegation.  Hence the client server must find out about
   lease relocation before those leases expire.  To accomplish this, all
   operations which implicitly renew leases for reject
   a reclaim from client (such as OPEN,
   CLOSE, READ, WRITE, RENEW, LOCK, and others), will return A with the error
   NFS4ERR_LEASE_MOVED if responsibility for any of NFS4ERR_NO_GRACE.

   For the leases to be
   renewed has been transferred to second edge condition, after the server reboots for a new server.  This condition will
   continue until second
   time, the record that the client receives had an NFS4ERR_MOVED error and unexpired record lock, share
   reservation, or delegation established before the
   server receives server's previous
   incarnation means that the subsequent GETATTR(fs_locations) for an access to
   each filesystem for which a lease has been moved to server must reject a new server.  By
   convention, reclaim from client A
   with the compound including error NFS4ERR_NO_GRACE.

   Regardless of the GETATTR(fs_locations) SHOULD
   append a RENEW operation level and approach to permit record keeping, the server to identify the client
   doing the access.

   Upon receiving
   MUST implement one of the NFS4ERR_LEASE_MOVED error, a client that supports
   filesystem migration MUST probe all filesystems from that server on
   which it holds open state.  Once the client has successfully probed following strategies (which apply to
   reclaims of share reservations, byte-range locks, and delegations):

   1.  Reject all those filesystems which are migrated, reclaims with NFS4ERR_NO_GRACE.  This is super harsh,
       but necessary if the server MUST resume
   normal handling of stateful requests from that client.

   In order does not want to support legacy clients record lock state in
       stable storage.

   2.  Record sufficient state in stable storage such that do not handle the
   NFS4ERR_LEASE_MOVED error correctly, the all known
       edge conditions involving server SHOULD time out after
   a wait of at least reboot, including the two lease periods, noted
       in this section, are detected.  False positives are acceptable.
       Note that at which time this time, it will resume
   normal handling of stateful requests from all clients.  If a client
   attempts to access the migrated files, is not known if there are other edge
       conditions.  In the event, after a server MUST reply
   NFS4ERR_MOVED.

   When reboot, the client receives an NFS4ERR_MOVED error, server
       determines that there is unrecoverable damage or corruption to
       the client can
   follow the normal process to obtain stable storage, then for all clients and/or locks
       affected, the new server information
   (through MUST return NFS4ERR_NO_GRACE.

9.6.3.1.4.  Client Edge Condition

   A third edge condition effects the fs_locations attribute) client and perform renewal of those
   leases on not the new server.  If the
   server has not had state
   transferred to it transparently, reboots in the client will receive either
   NFS4ERR_STALE_CLIENTID or NFS4ERR_STALE_STATEID from middle of the new server,
   as described above.  The client can reclaiming some locks and
   then recover state information as
   it does a network partition is established, the client might be in the event
   situation of server failure.

9.14.4.  Migration and the Lease_time Attribute having reclaimed some, but not all locks.  In order that the case,
   a conservative client may appropriately manage its leases in the
   case of migration, the destination server must establish proper
   values for the lease_time attribute.

   When state is transferred transparently, would assume that state should include
   the correct value of the lease_time attribute. non-reclaimed locks were
   revoked.

   The lease_time
   attribute on third known edge condition follows:

   1.   Client A acquires a lock 1.

   2.   Client A acquires a lock 2.

   3.   Server reboots.

   4.   Client A issues a RENEW operation, and gets back a
        NFS4ERR_STALE_CLIENTID.

   5.   Client A reclaims its lock 1 within the destination server's grace period.

   6.   Client A and server must never be less than experience mutual network partition, such
        that on
   the source since this would result in premature expiration of leases
   granted by the source server.  Upon migration in which state is
   transferred transparently, the client A is under no obligation to re-
   fetch the lease_time attribute and may continue unable to use the value
   previously fetched (on reclaim its remaining locks within
        the source server).

   If state grace period.

   7.   Server's reclaim grace period ends.  Client A has not been transferred transparently (i.e., the client
   sees no locks
        recorded on server.

   8.   Server reboots a real or simulated server reboot), the second time.

   9.   Network partition between client should fetch the
   value of lease_time on the new (i.e., destination) server, A and use it
   for subsequent locking requests.  However the server must respect heals.

   10.  Client A issues a
   grace period at least as long as RENEW operation, and gets back a
        NFS4ERR_STALE_CLIENTID.

   11.  Client A reclaims its lock 1 within the lease_time on server's grace period.

   During the source server,
   in order to ensure partition, client A decided that clients have ample time the server had revoked
   lock 2.  After the partition, it was able to reclaim their
   locks before potentially conflicting non-reclaimed locks lock 1, but made
   no attempt to reclaim lock 2.  After the grace period, it is free to
   try to reestablish lock 2 via LOCK operations.

   Note that the other two edge conditions are granted.
   The means by which able to interact with
   this third edge condition.  Another client B may have established a
   conflicting lock during the new partition, made some changes, and the
   released the lock before the second server obtains reboot.

9.6.3.1.5.  Client's Handling of NFS4ERR_NO_GRACE

   A mandate for the value client's handling of lease_time on the old server NFS4ERR_NO_GRACE error is left to
   outside the server implementations.  It scope of this specification, since the strategies for
   such handling are very dependent on the client's operating
   environment.  However, one potential approach is not
   specified by described below.

   When the NFS version 4 protocol.

10.  Client-Side Caching

   Client-side caching of data, of file attributes, and client receives NFS4ERR_NO_GRACE, it could examine the
   change attribute of file names the objects the client is
   essential trying to providing good performance with reclaim state
   for, and use that to determine whether to re-establish the NFS protocol.
   Providing distributed cache coherence state via
   normal OPEN or LOCK requests.  This is a difficult problem and
   previous versions of acceptable provided the NFS protocol have not attempted
   client's operating environment allows it.
   Instead, several NFS  In other words, the client implementation techniques have been used
   implementor is advised to reduce the problems that a lack of coherence poses document for users.
   These techniques have not been clearly defined by earlier protocol
   specifications and it is often unclear what is valid or invalid
   client his users the behavior.  The NFS version 4 protocol uses many techniques similar to those
   client could also inform the application that its byte-range lock or
   share reservations (whether they were delegated or not) have been used in previous protocol versions.  The NFS version 4
   protocol does not provide distributed cache coherence.  However, it
   defines
   lost, such as via a more limited set UNIX signal, a GUI pop-up window, etc.  See
   Section 10.5, for a discussion of what the client should do for
   dealing with unreclaimed delegations on client state.

   For further discussion of revocation of caching guarantees to allow locks and
   share reservations see Section 9.8.

9.6.3.2.  Client's Reaction to be used without destructive interference from a Freed Lock

   There is no way for a client side caching.

   In addition, the NFS version 4 protocol introduces to predetermine how a delegation
   mechanism which allows many decisions normally made by the given server is
   going to
   be made locally by clients.  This mechanism provides efficient
   support of behave during a network partition.  When the common cases where sharing is infrequent partition
   heals, either the client still has all of its locks, it has some of
   its locks, or where
   sharing is read-only.

10.1.  Performance Challenges for Client-Side Caching

   Caching techniques used in previous versions it has none of them.  The client will be able to
   examine the NFS protocol have
   been successful in providing good performance.  However, several
   scalability challenges can arise when those techniques are used with
   very large numbers of clients.  This is particularly true when
   clients are geographically distributed which classically increases
   the latency for cache revalidation requests.

   The previous versions of the NFS protocol repeat their file data
   cache validation requests at the time the file is opened.  This
   behavior can have serious performance drawbacks.  A common case is
   one in which a file is only accessed by a single client.  Therefore,
   sharing is infrequent.

   In this case, repeated reference various error return values to determine its response.

   NFS4ERR_EXPIRED:

      All locks has been revoked during the server to find that no
   conflicts exist is expensive.  A better option with regards to
   performance is to allow a partition.  The client that repeatedly opens
      should use a file to do
   so without reference SETCLIENTID to the server.  This is done until potentially
   conflicting operations from another client actually occur.

   A similar situation arises in connection with file locking.  Sending
   file recover.

   NFS4ERR_ADMIN_REVOKED:

      The current lock has been revoked during the partition and unlock requests there
      is no clue as to whether the server as well as rebooted.

   NFS4ERR_BAD_STATEID:

      The current lock has been revoked during the read partition and
   write requests necessary to make data caching consistent with the
   locking semantics (see Section 10.3.2) can severely limit
   performance.  When locking is used
      server did not reboot.  Other locks MAY still be renewed.  The
      client MAY NOT want to provide protection against
   infrequent conflicts, do a large penalty is incurred.  This penalty may
   discourage the use of file locking by applications. SETCLIENTID and instead SHOULD probe
      via a RENEW call.

   NFS4ERR_NO_GRACE:

      The NFS version 4 protocol provides more aggressive caching
   strategies with current lock has been revoked during the partition and the following design goals:

   o  Compatibility with a large range of
      server semantics.

   o  Provide rebooted.  The server might have no information on the same caching benefits
      other locks.  They may still be renewable.

   NFS4ERR_OLD_STATEID:

      The server has not rebooted.  The client SHOULD handle this error
      as previous versions of it normally would.

9.7.  Recovery from a Lock Request Timeout or Abort

   In the NFS
      protocol when unable event a lock request times out, a client may decide to provide not
   retry the more aggressive model.

   o  Requirements for aggressive caching are organized so that a large
      portion of request.  The client may also abort the benefit can be obtained even request when not all of the
      requirements can be met.

   The appropriate requirements
   process for the server are discussed in later
   sections in which specific forms of caching are covered (see
   Section 10.4).

10.2.  Delegation and Callbacks

   Recallable delegation of server responsibilities for a file it was issued is terminated (e.g., in UNIX due to a
   client improves performance by avoiding repeated requests to
   signal).  It is possible though that the server in received the absence of inter-client conflict.  With request
   and acted upon it.  This would change the state on the use of a
   "callback" RPC from server to client, a server recalls delegated
   responsibilities when another without
   the client engages in sharing being aware of a
   delegated file.

   A delegation the change.  It is passed from paramount that the
   client re-synchronize state with server to before it attempts any other
   operation that takes a seqid and/or a stateid with the client, specifying same
   lock_owner.  This is straightforward to do without a special re-
   synchronize operation.

   Since the
   object of server maintains the delegation last lock request and response
   received on the type of delegation.  There are
   different types of delegations but lock_owner, for each type contains a stateid to be
   used to represent lock_owner, the delegation when performing operations that
   depend on client should
   cache the delegation.  This stateid is similar to those
   associated with locks and share reservations but differs in last lock request it sent such that the
   stateid for a delegation is associated with lock request did
   not receive a clientid and may be
   used on behalf of all response.  From this, the open_owners next time the client does a
   lock operation for the given client.  A
   delegation lock_owner, it can send the cached request, if
   there is made to one, and if the client as request was one that established state
   (e.g., a whole and not to any specific
   process LOCK or thread of control within OPEN operation), the server will return the cached
   result or if never saw the request, perform it.

   Because callback RPCs may not work in all environments (due  The client can
   follow up with a request to
   firewalls, for example), correct protocol operation does not depend
   on them.  Preliminary testing of callback functionality by means of remove the state (e.g., a
   CB_NULL procedure determines whether callbacks can be supported.  The
   CB_NULL procedure checks LOCKU or CLOSE
   operation).  With this approach, the continuity of sequencing and stateid
   information on the callback path.  A client and server makes a preliminary assessment of callback availability to a for the given client lock_owner will
   re-synchronize and avoids delegating responsibilities until it has
   determined that callbacks are supported.  Because in turn the granting lock state will re-synchronize.

9.8.  Server Revocation of a
   delegation is always conditional upon Locks

   At any point, the absence of conflicting
   access, clients must not assume that server can revoke locks held by a delegation will be granted client and
   they the
   client must always be prepared for OPENs to be processed without any
   delegations being granted.

   Once granted, a delegation behaves in most ways like a lock.  There
   is an associated lease this event.  When the client detects that
   its locks have been or may have been revoked, the client is subject to renewal together with all
   of
   responsible for validating the other leases held by state information between itself and
   the server.  Validating locking state for the client means that client.

   Unlike locks, it
   must verify or reclaim state for each lock currently held.

   The first instance of lock revocation is upon server reboot or re-
   initialization.  In this instance the client will receive an operation by a second error
   (NFS4ERR_STALE_STATEID or NFS4ERR_STALE_CLIENTID) and the client to a delegated file will cause
   proceed with normal crash recovery as described in the server previous
   section.

   The second lock revocation event is the inability to recall a delegation through renew the lease
   before expiration.  While this is considered a callback.

   On recall, rare or unusual event,
   the client holding the delegation must flush modified
   state (such as modified data) be prepared to recover.  Both the server and return the
   delegation.  The conflicting request client
   will not be acted on until the
   recall is complete.  The recall is considered complete when able to detect the
   client returns failure to renew the delegation or the server times its wait for the
   delegation to be returned lease and revokes the delegation as a result are capable
   of recovering without data corruption.  For the timeout.  In the interim, the server will either delay responding
   to conflicting requests or respond to them with NFS4ERR_DELAY.
   Following server, it tracks the resolution of
   last renewal event serviced for the recall, client and knows when the server has lease
   will expire.  Similarly, the
   information necessary to grant or deny client must track operations which will
   renew the second client's request.

   At lease period.  Using the time that each such request was
   sent and the client receives a delegation recall, it may have
   substantial state time that needs to be flushed to the server.  Therefore, corresponding reply was received, the server
   client should allow sufficient time for bound the delegation to be
   returned since it may involve numerous RPCs to time that the server.  If corresponding renewal could
   have occurred on the server is able to and thus determine that the client if it is diligently flushing
   state to the server possible that
   a lease period expiration could have occurred.

   The third lock revocation event can occur as a result of
   administrative intervention within the recall, the server may extend
   the usual time allowed for a recall.  However, the time allowed for
   recall completion should not be unbounded.

   An example of lease period.  While this is when responsibility to mediate opens on
   considered a given
   file rare event, it is delegated possible that the server's
   administrator has decided to release or revoke a client (see Section 10.4).  The server will
   not know what opens are in effect on particular lock held
   by the client.  Without this
   knowledge  As a result of revocation, the server client will be unable to determine if receive an
   error of NFS4ERR_ADMIN_REVOKED.  In this instance the access and
   deny state for client may
   assume that only the file allows any particular open until lock_owner's locks have been lost.  The client
   notifies the
   delegation for lock holder appropriately.  The client may not assume
   the file lease period has been returned.

   A client failure or renewed as a network partition can result in failure to
   respond to of a recall callback.  In this case, the server will revoke failed operation.

   When the delegation which in turn will render useless any modified state
   still on client determines the client.

   Clients need to be aware that server implementors lease period may enforce
   practical limitations on have expired, the number of delegations issued.  Further,
   client must mark all locks held for the associated lease as there is no way
   "unvalidated".  This means the client has been unable to determine re-establish
   or confirm the appropriate lock state with the server.  As described
   in Section 9.6, there are scenarios in which delegations to revoke, the server is allowed to revoke any.  If may grant
   conflicting locks after the server lease period has expired for a client.
   When it is implemented to
   revoke another delegation held by possible that client, then the lease period has expired, the client may be
   able
   must validate each lock currently held to determine ensure that a limit conflicting
   lock has not been reached because each new
   delegation request results in a revoke. granted.  The client could then
   determine which delegations it may not need and preemptively release
   them.

10.2.1.  Delegation Recovery

   There are three situations that delegation recovery must deal with:

   o  Client reboot or restart
   o  Server reboot or restart

   o  Network partition (full accomplish this task by
   issuing an I/O request, either a pending I/O or callback-only)

   In the event a zero-length read,
   specifying the client reboots or restarts, stateid associated with the failure to renew
   leases will result lock in question.  If the revocation of record locks and share
   reservations.  Delegations, however, may be treated a bit
   differently.

   There will be situations in which delegations will need
   response to be
   reestablished after a client reboots or restarts.  The reason for
   this the request is success, the client may have file data stored locally and this data
   was associated with has validated all of
   the previously held delegations.  The client will
   need to reestablish locks governed by that stateid and re-established the appropriate file
   state on between itself and the server.

   To allow for this type of client recovery, the server MAY extend

   If the
   period for delegation recovery beyond I/O request is not successful, then one or more of the typical lease expiration
   period.  This implies that requests from other clients that conflict locks
   associated with these delegations will need to wait.  Because the normal recall
   process may require significant time for stateid was revoked by the server and the client to flush changed
   state to
   must notify the server, other clients need be prepared for delays that
   occur because of owner.

9.9.  Share Reservations

   A share reservation is a conflicting delegation.  This longer interval
   would increase the window for clients mechanism to reboot control access to a file.  It
   is a separate and consult stable
   storage so that the delegations can be reclaimed.  For open
   delegations, such delegations are reclaimed using OPEN with independent mechanism from byte-range locking.
   When a claim client opens a file, it issues an OPEN operation to the server
   specifying the type of CLAIM_DELEGATE_PREV.  (See Section 10.5 and Section 15.18 for
   discussion of open delegation access required (READ, WRITE, or BOTH) and the details
   type of access to deny others (OPEN4_SHARE_DENY_NONE,
   OPEN4_SHARE_DENY_READ, OPEN4_SHARE_DENY_WRITE, or
   OPEN4_SHARE_DENY_BOTH).  If the OPEN respectively).

   A server MAY support a claim type fails the client will fail the
   application's open request.

   Pseudo-code definition of CLAIM_DELEGATE_PREV, but the semantics:

   if it
   does, it MUST NOT remove delegations upon SETCLIENTID_CONFIRM, and
   instead MUST, for a period (request.access == 0)
           return (NFS4ERR_INVAL)
   else if ((request.access & file_state.deny)) ||
       (request.deny & file_state.access))
           return (NFS4ERR_DENIED)

   This checking of time share reservations on OPEN is done with no less than that of the value of
   the lease_time attribute, maintain the client's delegations to allow
   time exception
   for an existing OPEN for the client to issue CLAIM_DELEGATE_PREV requests. same open_owner.

   The
   server that supports CLAIM_DELEGATE_PREV MUST support constants used for the DELEGPURGE
   operation.

   When OPEN and OPEN_DOWNGRADE operations for the server reboots or restarts, delegations
   access and deny fields are reclaimed (using as follows:

   const OPEN4_SHARE_ACCESS_READ   = 0x00000001;
   const OPEN4_SHARE_ACCESS_WRITE  = 0x00000002;
   const OPEN4_SHARE_ACCESS_BOTH   = 0x00000003;

   const OPEN4_SHARE_DENY_NONE     = 0x00000000;
   const OPEN4_SHARE_DENY_READ     = 0x00000001;
   const OPEN4_SHARE_DENY_WRITE    = 0x00000002;
   const OPEN4_SHARE_DENY_BOTH     = 0x00000003;

9.10.  OPEN/CLOSE Operations

   To provide correct share semantics, a client MUST use the OPEN
   operation with CLAIM_PREVIOUS) in a similar fashion to
   record locks obtain the initial filehandle and share reservations.  However, there is a slight
   semantic difference.  In indicate the normal case desired
   access and what access, if any, to deny.  Even if the server decides that client intends
   to use a
   delegation should not be granted, stateid of all 0's or all 1's, it performs must still obtain the requested action
   (e.g., OPEN) without granting any delegation.  For reclaim,
   filehandle for the
   server grants regular file with the delegation but a special designation is applied OPEN operation so
   that the client treats the delegation as having been granted but
   recalled by the server.  Because
   appropriate share semantics can be applied.  Clients that do not have
   a deny mode built into their programming interfaces for opening a
   file should request a deny mode of this, the client has the duty to
   write all modified state to OPEN4_SHARE_DENY_NONE.

   The OPEN operation with the server and then return CREATE flag, also subsumes the
   delegation.  This process of handling delegation reclaim reconciles
   three principles CREATE
   operation for regular files as used in previous versions of the NFS version 4 protocol:

   o  Upon reclaim,
   protocol.  This allows a client reporting resources assigned create with a share to it by an
      earlier server instance must be granted those resources.

   o done atomically.

   The server has unquestionable authority to determine whether
      delegations are to be granted and, once granted, whether they CLOSE operation removes all share reservations held by the
   lock_owner on that file.  If byte-range locks are
      to be continued.

   o held, the client
   SHOULD release all locks before issuing a CLOSE.  The use of callbacks is server MAY free
   all outstanding locks on CLOSE but some servers may not to be depended upon until support the client
      has proven its ability to receive them.

   When
   CLOSE of a client file that still has more than a single open associated with a
   delegation, state for those additional opens can be established using
   OPEN operations of type CLAIM_DELEGATE_CUR.  When these are used to
   establish opens associated with reclaimed delegations, the byte-range locks held.  The server
   MUST allow them when made within return failure, NFS4ERR_LOCKS_HELD, if any locks would exist
   after the grace period.

   When CLOSE.

   The LOOKUP operation will return a network partition occurs, delegations are subject to freeing
   by the server when the lease renewal period expires.  This is similar
   to filehandle without establishing
   any lock state on the behavior for locks and share reservations.  For delegations,
   however, server.  Without a valid stateid, the server may extend
   will assume the period in which conflicting
   requests are held off.  Eventually client has the occurrence of least access.  For example, if one
   client opened a conflicting
   request from file with OPEN4_SHARE_DENY_BOTH and another client will cause revocation of
   accesses the delegation.
   A loss of file via a filehandle obtained through LOOKUP, the callback path (e.g., by later network configuration
   change) will have
   second client could only read the same effect.  A recall request will fail and
   revocation of file using the delegation will result.

   A special read bypass
   stateid.  The second client normally finds out about revocation of a delegation when could not WRITE the file at all because
   it
   uses would not have a valid stateid associated with a delegation from OPEN and receives the error
   NFS4ERR_EXPIRED.  It also may find out about delegation revocation
   after a client reboot when it attempts to reclaim a delegation special anonymous
   stateid would not be allowed access.

9.10.1.  Close and
   receives that same error.  Note that in the case Retention of State Information

   Since a revoked write
   open delegation, there are issues because data may have been modified
   by the client whose delegation is revoked and separately by other
   clients.  See Section 10.5.1 for CLOSE operation requests deallocation of a discussion stateid, dealing
   with retransmission of such issues.  Note
   also that when delegations are revoked, information about the revoked
   delegation will be written by CLOSE, may pose special difficulties,
   since the server state information, which normally would be used to stable storage (as
   described
   determine the state of the open file being designated, might be
   deallocated, resulting in Section 9.6).  This is done to an NFS4ERR_BAD_STATEID error.

   Servers may deal with the case this problem in
   which a server reboots after revoking a delegation but before number of ways.  To provide
   the
   client holding greatest degree assurance that the revoked delegation protocol is notified about the
   revocation.

10.3.  Data Caching

   When applications share access to being used
   properly, a set of files, they need to be
   implemented so as to take account of the possibility of conflicting
   access by another application.  This is true whether the applications
   in question execute on different clients or reside on server should, rather than deallocate the same
   client.

   Share reservations stateid, mark
   it as close-pending, and record locks are the facilities retain the NFS
   version 4 protocol provides to allow applications to coordinate
   access by providing mutual exclusion facilities.  The NFS version 4
   protocol's data caching must stateid with this status, until
   later deallocation.  In this way, a retransmitted CLOSE can be implemented such that it does not
   invalidate
   recognized since the assumptions that those using these facilities depend
   upon.

10.3.1.  Data Caching and OPENs

   In order stateid points to avoid invalidating the sharing assumptions state information with this
   distinctive status, so that
   applications rely on, NFS version 4 clients should not provide cached
   data to applications or modify it on behalf of an application when it
   would not can be valid to obtain or modify that same data via handled without error.

   When adopting this strategy, a READ or
   WRITE operation.

   Furthermore, in server should retain the absence of open delegation (see Section 10.4) two
   additional rules apply.  Note that these rules are obeyed in practice
   by many NFS version 2 and version 3 clients. state
   information until the earliest of:

   o  First, cached data present on  Another validly sequenced request for the same lockowner, that is
      not a client must be revalidated after
      doing an OPEN.  Revalidating means retransmission.

   o  The time that a lockowner is freed by the client fetches the
      change attribute from the server, compares it server due to period
      with no activity.

   o  All locks for the cached
      change attribute, and if different, declares the cached data (as
      well client are freed as a result of a SETCLIENTID.

   Servers may avoid this complexity, at the cached attributes) as invalid.  This is to ensure that cost of less complete
   protocol error checking, by simply responding NFS4_OK in the data event of
   a CLOSE for a deallocated stateid, on the OPENed file is still correctly reflected in the
      client's cache.  This validation assumption that this case
   must be done caused by a retransmitted close.  When adopting this
   approach, it is desirable to at least log an error when the
      client's OPEN operation includes DENY=WRITE or BOTH thus
      terminating returning a period
   no-error indication in which other clients may have had this situation.  If the server maintains a
   reply-cache mechanism, it can verify the CLOSE is indeed a
   retransmission and avoid error logging in most cases.

9.11.  Open Upgrade and Downgrade

   When an OPEN is done for a file and the lockowner for which the
      opportunity to open
   is being done already has the file with WRITE access.  Clients may
      choose open, the result is to do upgrade the revalidation more often (i.e., at OPENs
      specifying DENY=NONE)
   open file status maintained on the server to parallel include the NFS version 3 protocol's
      practice access and
   deny bits specified by the new OPEN as well as those for the benefit of users assuming this degree of cache
      revalidation.  Since existing
   OPEN.  The result is that there is one open file, as far as the change attribute
   protocol is updated for data concerned, and
      metadata modifications, some client implementors may be tempted to
      use it includes the time_modify attribute union of the access and not change
   deny bits for all of the OPEN requests completed.  Only a single
   CLOSE will be done to validate cached
      data, so reset the effects of both OPENs.  Note that metadata changes do the
   client, when issuing the OPEN, may not spuriously invalidate clean
      data.  The implementor know that the same file is cautioned in this approach.
   fact being opened.  The change
      attribute is guaranteed to change for each update to above only applies if both OPENs result in
   the file,
      whereas time_modify is guaranteed OPENed object being designated by the same filehandle.

   When the server chooses to export multiple filehandles corresponding
   to change only at the
      granularity same file object and returns different filehandles on two
   different OPENs of the time_delta attribute.  Use by same file object, the client's data
      cache validation logic of time_modify server MUST NOT "OR"
   together the access and not change runs deny bits and coalesce the risk
      of two open files.
   Instead the client incorrectly marking stale data as valid.

   o  Second, modified data server must be flushed maintain separate OPENs with separate
   stateids and will require separate CLOSEs to free them.

   When multiple open files on the server before closing client are merged into a single open
   file OPENed for write.  This is complementary to object on the first rule.
      If server, the data is not flushed at CLOSE, close of one of the revalidation done after
      client OPENs as open files (on the
   client) may necessitate change of the access and deny status of the
   open file is unable to achieve its purpose.  The other
      aspect to flushing on the data before close server.  This is that because the data must be
      committed to stable storage, at union of the server, before access and
   deny bits for the CLOSE remaining opens may be smaller (i.e., a proper
   subset) than previously.  The OPEN_DOWNGRADE operation is requested by the client.  In used to
   make the case of a server
      reboot or restart necessary change and a CLOSEd file, the client should use it may not be possible to
      retransmit update the data
   server so that share reservation requests by other clients are
   handled properly.  The stateid returned has the same "other" field as
   that passed to the server.  The "seqid" value in the returned stateid
   MUST be written incremented, even in situations in which there is no change
   to the file.  Hence, this
      requirement.

10.3.2.  Data Caching access and File Locking

   For those applications that choose deny bits for the file.

9.12.  Short and Long Leases

   When determining the time period for the server lease, the usual
   lease tradeoffs apply.  Short leases are good for fast server
   recovery at a cost of increased RENEW or READ (with zero length)
   requests.  Longer leases are certainly kinder and gentler to use file locking instead servers
   trying to handle very large numbers of
   share reservations clients.  The number of RENEW
   requests drop in proportion to exclude inconsistent file access, there is an
   analogous set the lease time.  The disadvantages of constraints that apply
   long leases are slower recovery after server failure (the server must
   wait for the leases to expire and the grace period to elapse before
   granting new lock requests) and increased file contention (if client side data caching.
   These rules
   fails to transmit an unlock request then server must wait for lease
   expiration before granting new locks).

   Long leases are effective only usable if the file locking server is used in a way
   that matches able to store lease state in an equivalent way
   non-volatile memory.  Upon recovery, the actual READ server can reconstruct the
   lease state from its non-volatile memory and WRITE
   operations executed.  This is continue operation with
   its clients and therefore long leases would not be an issue.

9.13.  Clocks, Propagation Delay, and Calculating Lease Expiration

   To avoid the need for synchronized clocks, lease times are granted by
   the server as opposed to file locking that is
   based on pure convention.  For example, it a time delta.  However, there is possible to manipulate a two-megabyte file by dividing requirement that the file into two one-megabyte
   regions
   client and protecting access to server clocks do not drift excessively over the two regions by file locks on
   bytes zero and one.  A lock for write on byte zero duration
   of the file would
   represent the right to do READ and WRITE operations on lock.  There is also the first
   region.  A lock for write on byte one issue of propagation delay across the file would represent
   network which could easily be several hundred milliseconds as well as
   the
   right to do READ possibility that requests will be lost and WRITE operations on need to be
   retransmitted.

   To take propagation delay into account, the second region.  As long client should subtract it
   from lease times (e.g., if the client estimates the one-way
   propagation delay as all applications manipulating 200 msec, then it can assume that the file obey this convention, they lease is
   already 200 msec old when it gets it).  In addition, it will work on take
   another 200 msec to get a local filesystem.  However, they may not work with response back to the
   NFS version 4 protocol unless clients refrain from data caching.

   The rules for data caching in server.  So the file locking environment are:

   o  First, when a client obtains
   must send a file lock for a particular region,
      the renewal or write data cache corresponding back to the server 400 msec
   before the lease would expire.

   The server's lease period configuration should take into account the
   network distance of the clients that region (if any cached data
      exists) must will be revalidated.  If accessing the change attribute indicates server's
   resources.  It is expected that the file may have been updated since the cached data was
      obtained, the client must flush or invalidate lease period will take into
   account the cached data network propagation delays and other network delay
   factors for the newly locked region.  A client might choose to invalidate all
      of non-modified cached data that it has for the file but population.  Since the only
      requirement protocol does not allow
   for correct operation is an automatic method to invalidate all of determine an appropriate lease period, the data
      in
   server's administrator may have to tune the newly locked region.

   o  Second, before releasing a write lock lease period.

9.14.  Migration, Replication and State

   When responsibility for handling a region, all modified
      data for that region must be flushed given file system is transferred
   to a new server (migration) or the server.  The modified
      data must also be written client chooses to stable storage.

   Note that flushing data use an alternate
   server (e.g., in response to the server and unresponsiveness) in the invalidation context
   of cached
   data must reflect file system replication, the appropriate handling of state shared
   between the actual byte ranges locked or unlocked.
   Rounding these up or down to reflect client cache block boundaries
   will cause problems if not carefully done. and server (i.e., locks, leases, stateids, and
   client IDs) is as described below.  The handling differs between
   migration and replication.  For example, writing a
   modified block when only half related discussion of that block is within an area being
   unlocked may cause invalid modification to the region outside the
   unlocked area.  This, in turn, may be part file server
   state and recover of such see the sections under Section 9.6.

   If a region locked by
   another client.  Clients can avoid this situation by synchronously
   performing portions of write operations that overlap that portion
   (initial server replica or final) that is not a full block.  Similarly, invalidating server immigrating a locked area which filesystem agrees to,
   or is not an integral number of full buffer blocks
   would require expected to, accept opaque values from the client that
   originated from another server, then it is a wise implementation
   practice for the servers to read one encode the "opaque" values in network
   byte order.  This way, servers acting as replicas or two partial blocks immigrating
   filesystems will be able to parse values like stateids, directory
   cookies, filehandles, etc. even if their native byte order is
   different from other servers cooperating in the
   server if replication and
   migration of the revalidation procedure shows that filesystem.

9.14.1.  Migration and State

   In the data which case of migration, the
   client possesses may not servers involved in the migration of a
   filesystem SHOULD transfer all server state from the original to the
   new server.  This must be valid.

   The data done in a way that is written transparent to the server as
   client.  This state transfer will ease the client's transition when a prerequisite
   filesystem migration occurs.  If the servers are successful in
   transferring all state, the client will continue to use stateids
   assigned by the
   unlocking of a region original server.  Therefore the new server must be written, at
   recognize these stateids as valid.  This holds true for the server, to stable
   storage.  The client may accomplish this either with synchronous
   writes or by following asynchronous writes ID
   as well.  Since responsibility for an entire filesystem is
   transferred with a COMMIT operation.
   This migration event, there is required because retransmission of no possibility that
   conflicts will arise on the modified data after a new server reboot might conflict with a lock held by another client.

   A client implementation may choose to accommodate applications which
   use record locking in non-standard ways (e.g., using a record lock as a global semaphore) by flushing result of the transfer of
   locks.

   As part of the transfer of information between servers, leases would
   be transferred as well.  The leases being transferred to the new
   server more data upon will typically have a LOCKU
   than is covered by different expiration time from those for
   the locked range.  This may include modified data
   within files other than same client, previously on the one old server.  To maintain the
   property that all leases on a given server for which a given client expire
   at the unlocks are same time, the server should advance the expiration time to
   the later of the leases being done.
   In such cases, transferred or the leases already
   present.  This allows the client must to maintain lease renewal of both
   classes without special effort.

   The servers may choose not interfere with applications whose
   READs and WRITEs are being done only within to transfer the bounds of record
   locks which state information upon
   migration.  However, this choice is discouraged.  In this case, when
   the application holds.  For example, an application locks client presents state information from the original server (e.g.,
   in a single byte of RENEW op or a file and proceeds to write that single byte.  A READ op of zero length), the client that chose to handle a LOCKU by flushing all modified data must be
   prepared to receive either NFS4ERR_STALE_CLIENTID or
   NFS4ERR_STALE_STATEID from the server could validly write that single byte new server.  The client should then
   recover its state information as it normally would in response to an
   unrelated unlock.  However, it would not be valid a
   server failure.  The new server must take care to write allow for the entire
   block in which that single written byte was located since
   recovery of state information as it includes
   an area that is not locked and might be locked by another client.
   Client implementations can avoid this problem by dividing files would in the event of server
   restart.

   A client SHOULD re-establish new callback information with
   modified data into those for which all modifications are done the new
   server as soon as possible, according to
   areas covered by an appropriate record lock sequences described in
   Section 15.35 and those for which there Section 15.36.  This ensures that server operations
   are modifications not covered blocked by a record lock.  Any writes done for the former class inability to recall delegations.

9.14.2.  Replication and State

   Since client switch-over in the case of files must not include areas replication is not locked under
   server control, the handling of state is different.  In this case,
   leases, stateids and thus client IDs do not modified have validity across a
   transition from one server to another.  The client must re-establish
   its locks on the client.

10.3.3.  Data Caching and Mandatory File Locking

   Client side data caching needs new server.  This can be compared to respect mandatory file locking when
   it is in effect.  The presence the re-
   establishment of mandatory file locking for locks by means of reclaim-type requests after a given
   file
   server reboot.  The difference is indicated when that the client gets back NFS4ERR_LOCKED server has no provision to
   distinguish requests reclaiming locks from a
   READ those obtaining new locks
   or WRITE on a file it has an appropriate share reservation for.
   When mandatory locking is in effect for a file, to defer the latter.  Thus, a client must check
   for an appropriate file lock for data being read or written.  If re-establishing a lock exists for on the range being read
   new server (by means of a LOCK or written, the client OPEN request), may
   satisfy the request using have the client's validated cache.  If
   requests denied due to a conflicting lock.  Since replication is
   intended for read-only use of filesystems, such denial of locks
   should not pose large difficulties in practice.  When an
   appropriate file attempt to
   re-establish a lock on a new server is not held for denied, the range client should
   treat the situation as if his original lock had been revoked.

9.14.3.  Notification of Migrated Lease

   In the read or write, case of lease renewal, the read or write request must client may not be satisfied by the client's cache
   and the request must be sent submitting
   requests for a filesystem that has been migrated to another server.
   This can occur because of the server implicit lease renewal mechanism.  The
   client renews leases for processing.  When all filesystems when submitting a
   read or write request partially overlaps a locked region, to
   any one filesystem at the request
   should be subdivided into multiple pieces with each region (locked or
   not) treated appropriately.

10.3.4.  Data Caching and File Identity

   When clients cache data, server.

   In order for the file data needs client to be organized
   according schedule renewal of leases that may have
   been relocated to the filesystem object to new server, the client must find out about
   lease relocation before those leases expire.  To accomplish this, all
   operations which implicitly renew leases for a client (such as OPEN,
   CLOSE, READ, WRITE, RENEW, LOCK, and others), will return the data belongs.  For
   NFS version 3 clients, error
   NFS4ERR_LEASE_MOVED if responsibility for any of the typical practice leases to be
   renewed has been transferred to assume for a new server.  This condition will
   continue until the purpose of caching that distinct filehandles represent distinct
   filesystem objects.  The client then has the choice to organize receives an NFS4ERR_MOVED error and
   maintain the data cache on this basis.

   In the NFS version 4 protocol, there is now
   server receives the possibility subsequent GETATTR(fs_locations) for an access to have
   significant deviations from
   each filesystem for which a "one filehandle per object" model
   because lease has been moved to a filehandle may be constructed on new server.  By
   convention, the basis of compound including the object's
   pathname.  Therefore, clients need GETATTR(fs_locations) SHOULD
   append a reliable method RENEW operation to determine if
   two filehandles designate permit the same filesystem object.  If clients
   were simply server to assume identify the client
   doing the access.

   Upon receiving the NFS4ERR_LEASE_MOVED error, a client that supports
   filesystem migration MUST probe all distinct filehandles denote distinct
   objects and proceed to do data caching filesystems from that server on this basis, caching
   inconsistencies would arise between
   which it holds open state.  Once the distinct client side objects has successfully probed
   all those filesystems which mapped to are migrated, the same server side object.

   By providing a method to differentiate filehandles, the NFS version 4
   protocol alleviates a potential functional regression in comparison
   with the NFS version 3 protocol.  Without this method, caching
   inconsistencies within the same client could occur and this has not
   been present in previous versions MUST resume
   normal handling of the NFS protocol.  Note stateful requests from that it
   is possible client.

   In order to have such inconsistencies with applications executing
   on multiple support legacy clients but that is do not handle the issue being addressed here.

   For
   NFS4ERR_LEASE_MOVED error correctly, the purposes server SHOULD time out after
   a wait of data caching, the following steps allow an NFS
   version 4 client to determine whether at least two distinct filehandles denote
   the same server side object:

   o lease periods, at which time it will resume
   normal handling of stateful requests from all clients.  If GETATTR directed a client
   attempts to two filehandles returns different values of access the fsid attribute, then migrated files, the filehandles represent distinct
      objects.

   o  If GETATTR for any file with server MUST reply
   NFS4ERR_MOVED.

   When the client receives an fsid that matches NFS4ERR_MOVED error, the fsid of client can
   follow the
      two filehandles in question returns a unique_handles attribute
      with a value normal process to obtain the new server information
   (through the fs_locations attribute) and perform renewal of TRUE, then those
   leases on the two objects are distinct.

   o new server.  If GETATTR directed to the two filehandles does server has not return had state
   transferred to it transparently, the
      fileid attribute for both of client will receive either
   NFS4ERR_STALE_CLIENTID or NFS4ERR_STALE_STATEID from the handles, new server,
   as described above.  The client can then recover state information as
   it cannot be
      determined whether does in the two objects are event of server failure.

9.14.4.  Migration and the same.  Therefore,
      operations which depend on Lease_time Attribute

   In order that knowledge (e.g., the client side data
      caching) cannot be done reliably.

   o  If GETATTR directed to may appropriately manage its leases in the two filehandles returns different
   case of migration, the destination server must establish proper
   values for the fileid attribute, then they are distinct objects.

   o  Otherwise they are the same object.

10.4.  Open Delegation lease_time attribute.

   When a file state is being OPENed, transferred transparently, that state should include
   the server may delegate further handling correct value of opens and closes for the lease_time attribute.  The lease_time
   attribute on the destination server must never be less than that file to on
   the opening client.  Any such
   delegation is recallable, source since this would result in premature expiration of leases
   granted by the circumstances that allowed for source server.  Upon migration in which state is
   transferred transparently, the delegation are subject client is under no obligation to change.  In particular, re-
   fetch the server lease_time attribute and may
   receive a conflicting OPEN from another client, the server must
   recall continue to use the delegation before deciding whether value
   previously fetched (on the OPEN from source server).

   If state has not been transferred transparently (i.e., the other client may be granted.  Making
   sees a delegation is up to the real or simulated server and
   clients reboot), the client should not assume that any particular OPEN either will or
   will not result in an open delegation.  The following is a typical
   set fetch the
   value of conditions that servers might lease_time on the new (i.e., destination) server, and use in deciding whether OPEN
   should be delegated:

   o  The client must be able to respond to the server's callback it
   for subsequent locking requests.  The server will use  However the CB_NULL procedure for a test of
      callback ability.

   o  The client server must respect a
   grace period at least as long as the lease_time on the source server,
   in order to ensure that clients have responded properly ample time to previous recalls.

   o  There must be no current open reclaim their
   locks before potentially conflicting with the requested
      delegation.

   o  There should be no current delegation that conflicts with the
      delegation being requested.

   o non-reclaimed locks are granted.
   The probability of future conflicting open requests should be low
      based on means by which the recent history of new server obtains the file.

   o  The existence of any server-specific semantics value of OPEN/CLOSE that
      would make lease_time on
   the required handling incompatible with old server is left to the prescribed
      handling that server implementations.  It is not
   specified by the delegated client would apply (see below).

   There are two types NFS version 4 protocol.

10.  Client-Side Caching

   Client-side caching of open delegations, read data, of file attributes, and write.  A read open
   delegation allows a client to handle, on its own, requests to open a of file for reading that do not deny read access names is
   essential to others.  Multiple
   read open delegations may be outstanding simultaneously providing good performance with the NFS protocol.
   Providing distributed cache coherence is a difficult problem and do not
   conflict.  A write open delegation allows
   previous versions of the NFS protocol have not attempted it.
   Instead, several NFS client implementation techniques have been used
   to handle, on
   its own, all opens.  Only one write open delegation may exist for a
   given file at reduce the problems that a given time lack of coherence poses for users.
   These techniques have not been clearly defined by earlier protocol
   specifications and it is inconsistent with any read open
   delegations.

   When a often unclear what is valid or invalid
   client has a read open delegation, it may not make any changes behavior.

   The NFSv4 protocol uses many techniques similar to the contents or attributes of the file but it is assured those that no
   other client may do so.  When a client has a write open delegation, have
   been used in previous protocol versions.  The NFSv4 protocol does not
   provide distributed cache coherence.  However, it may modify the file data since no other client will defines a more
   limited set of caching guarantees to allow locks and share
   reservations to be accessing
   the file's data.  The used without destructive interference from client holding
   side caching.

   In addition, the NFSv4 protocol introduces a write delegation may only
   affect file attributes mechanism
   which are intimately connected with the file
   data: size, time_modify, change.

   When a client has an open delegation, it does not send OPENs or
   CLOSEs to allows many decisions normally made by the server but updates the appropriate status internally.
   For a read open delegation, opens that cannot to be handled made
   locally
   (opens for write or that deny read access) must be sent to by clients.  This mechanism provides efficient support of the
   server.

   When an open delegation
   common cases where sharing is made, the response to the OPEN contains an
   open delegation structure which specifies the following:

   o  the type of delegation (read infrequent or write)

   o  space limitation information to control flushing where sharing is read-
   only.

10.1.  Performance Challenges for Client-Side Caching

   Caching techniques used in previous versions of data on close
      (write open delegation only, see Section 10.4.1)

   o  an nfsace4 specifying read and write permissions

   o  a stateid to represent the delegation for READ and WRITE

   The delegation stateid NFS protocol have
   been successful in providing good performance.  However, several
   scalability challenges can arise when those techniques are used with
   very large numbers of clients.  This is separate and distinct from particularly true when
   clients are geographically distributed which classically increases
   the stateid latency for
   the OPEN proper. cache re-validation requests.

   The standard stateid, unlike previous versions of the delegation
   stateid, is associated with a particular lock_owner and will continue
   to be valid after NFS protocol repeat their file data
   cache validation requests at the delegation is recalled and time the file remains
   open.

   When is opened.  This
   behavior can have serious performance drawbacks.  A common case is
   one in which a request internal file is only accessed by a single client.  Therefore,
   sharing is infrequent.

   In this case, repeated reference to the client server to find that no
   conflicts exist is made expensive.  A better option with regards to open
   performance is to allow a client that repeatedly opens a file and open
   delegation is in effect, it will be accepted or rejected solely on
   the basis of the following conditions.  Any requirement for other
   checks to be made by the delegate should result in open delegation
   being denied do
   so that the checks can be made by the server itself.

   o  The access and deny bits for without reference to the request server.  This is done until potentially
   conflicting operations from another client actually occur.

   A similar situation arises in connection with file locking.  Sending
   file lock and unlock requests to the file server as described
      in Section 9.9.

   o  The well as the read and
   write permissions as determined below.

   The nfsace4 passed requests necessary to make data caching consistent with delegation the
   locking semantics (see Section 10.3.2) can be severely limit
   performance.  When locking is used to avoid frequent
   ACCESS calls. provide protection against
   infrequent conflicts, a large penalty is incurred.  This penalty may
   discourage the use of file locking by applications.

   The permission check should be as follows: NFSv4 protocol provides more aggressive caching strategies with
   the following design goals:

   o  If  Compatibility with a large range of server semantics.

   o  Provide the nfsace4 indicates that same caching benefits as previous versions of the open may be done, then it should
      be granted without reference NFS
      protocol when unable to provide the server. more aggressive model.

   o  If the nfsace4 indicates  Requirements for aggressive caching are organized so that a large
      portion of the open may not benefit can be done, then an
      ACCESS request must obtained even when not all of the
      requirements can be sent to met.

   The appropriate requirements for the server are discussed in later
   sections in which specific forms of caching are covered (see
   Section 10.4).

10.2.  Delegation and Callbacks

   Recallable delegation of server responsibilities for a file to a
   client improves performance by avoiding repeated requests to obtain the definitive
      answer.

   The
   server may return an nfsace4 that is more restrictive than in the
   actual ACL absence of inter-client conflict.  With the file.  This includes an nfsace4 that specifies
   denial use of all access.  Note that some common practices such as
   mapping the traditional user "root" a
   "callback" RPC from server to client, a server recalls delegated
   responsibilities when another client engages in sharing of a
   delegated file.

   A delegation is passed from the user "nobody" may make it
   incorrect server to return the actual ACL of client, specifying the file in
   object of the delegation
   response.

   The use and the type of delegation together with various other forms delegation.  There are
   different types of caching
   creates the possibility that no server authentication will ever be
   performed for delegations but each type contains a given user since all of the user's requests might stateid to be
   satisfied locally.  Where
   used to represent the client is depending delegation when performing operations that
   depend on the server for
   authentication, the client should be sure authentication occurs for
   each user by use of the ACCESS operation. delegation.  This should be the case
   even if an ACCESS operation would not be required otherwise.  As
   mentioned before, the server may enforce frequent authentication by
   returning an nfsace4 denying all access stateid is similar to those
   associated with every open delegation.

10.4.1.  Open Delegation locks and Data Caching

   OPEN delegation allows much of share reservations but differs in that the message overhead
   stateid for a delegation is associated with
   the opening a client ID and closing files to may be eliminated.  An open when an open
   used on behalf of all the open_owners for the given client.  A
   delegation is made to the client as a whole and not to any specific
   process or thread of control within it.

   Because callback RPCs may not work in effect all environments (due to
   firewalls, for example), correct protocol operation does not require that depend
   on them.  Preliminary testing of callback functionality by means of a validation message
   CB_NULL procedure determines whether callbacks can be
   sent to the server. supported.  The continued endurance
   CB_NULL procedure checks the continuity of the "read open
   delegation" provides callback path.  A
   server makes a guarantee that no OPEN for write preliminary assessment of callback availability to a
   given client and thus no
   write avoids delegating responsibilities until it has occurred.  Similarly, when closing
   determined that callbacks are supported.  Because the granting of a file opened for write
   and if write open
   delegation is in effect, always conditional upon the data written does absence of conflicting
   access, clients must not
   have to assume that a delegation will be flushed granted and
   they must always be prepared for OPENs to the server until the open be processed without any
   delegations being granted.

   Once granted, a delegation behaves in most ways like a lock.  There
   is
   recalled.  The continued endurance an associated lease that is subject to renewal together with all
   of the open delegation provides a
   guarantee that no open and thus no read or write has been done other leases held by
   another that client.

   For the purposes of open delegation, READs and WRITEs done without

   Unlike locks, an
   OPEN are treated as the functional equivalents of operation by a corresponding
   type of OPEN.  This refers second client to a delegated file
   will cause the READs and WRITEs that use server to recall a delegation through a callback.

   On recall, the
   special stateids consisting of all zero bits or all one bits.
   Therefore, READs or WRITEs with a special stateid done by another client will force holding the server to recall a write open delegation.  A
   WRITE with a special stateid done by another client will force a
   recall of read open delegations.

   With delegations, a client is able to avoid writing data delegation must flush modified
   state (such as modified data) to the server when and return the CLOSE of a file is serviced.
   delegation.  The file close system
   call is the usual point at which conflicting request will not be acted on until the client
   recall is notified of a lack of
   stable storage for the modified file data generated by the
   application.  At the close, file data complete.  The recall is written to the server and
   through normal accounting considered complete when the server is able to determine if
   client returns the
   available filesystem space for delegation or the data has been exceeded (i.e., server returns NFS4ERR_NOSPC or NFS4ERR_DQUOT).  This accounting
   includes quotas.  The introduction of delegations requires that a
   alternative method be in place times its wait for the same type of communication
   delegation to
   occur between client be returned and server.

   In revokes the delegation response, the server provides either the limit as a result of
   the size of timeout.  In the file or interim, the number of modified blocks and associated
   block size.  The server must ensure that the client will be able either delay responding
   to
   flush data conflicting requests or respond to them with NFS4ERR_DELAY.
   Following the server resolution of a size equal to that provided in the
   original delegation.  The server must make this assurance for all
   outstanding delegations.  Therefore, recall, the server must be careful in
   its management of available space for new has the
   information necessary to grant or modified data taking
   into account available filesystem space and any applicable quotas.
   The server can recall delegations as a result of managing deny the
   available filesystem space.  The client should abide by second client's request.

   At the server's
   state space limits for delegations.  If time the client exceeds the stated
   limits for receives a delegation recall, it may have
   substantial state that needs to be flushed to the delegation, server.  Therefore,
   the server's behavior is undefined.

   Based on server conditions, quotas or available filesystem space, should allow sufficient time for the
   server may grant write open delegations with very restrictive space
   limitations.  The limitations may be defined in a way that will
   always force modified data delegation to be flushed
   returned since it may involve numerous RPCs to the server.  If the
   server on close.

   With respect is able to authentication, determine that the client is diligently flushing modified data
   state to the server
   after as a CLOSE has occurred may be problematic.  For example, the user result of the application may have logged off recall, the client and unexpired
   authentication credentials server may extend
   the usual time allowed for a recall.  However, the time allowed for
   recall completion should not be present.  In unbounded.

   An example of this case, the
   client may need is when responsibility to take special care mediate opens on a given
   file is delegated to ensure that local unexpired
   credentials a client (see Section 10.4).  The server will
   not know what opens are in fact be available.  This may be accomplished by
   tracking effect on the expiration time of credentials and flushing data well in
   advance of their expiration or by making private copies of
   credentials to assure their availability when needed.

10.4.2.  Open Delegation and File Locks

   When a client holds a write open delegation, lock operations may be
   performed locally.  This includes those required for mandatory file
   locking.  This can be done since client.  Without this
   knowledge the delegation implies that there
   can server will be no conflicting locks.  Similarly, all of unable to determine if the revalidations
   that would normally be associated with obtaining locks access and
   deny state for the
   flushing of data associated with the releasing of locks need not be
   done.

   When a client holds a read file allows any particular open delegation, lock operations are not
   performed locally.  All lock operations, including those requesting
   non-exclusive locks, are sent to until the server for resolution.

10.4.3.  Handling of CB_GETATTR

   The server needs to employ special handling
   delegation for a GETATTR where the
   target is a file that has been returned.

   A client failure or a write open delegation network partition can result in effect.  The
   reason for failure to
   respond to a recall callback.  In this is that case, the client holding server will revoke
   the write delegation may
   have which in turn will render useless any modified state
   still on the data and the server needs to reflect this change to
   the second client that submitted the GETATTR.  Therefore, the client
   holding the write delegation needs client.

   Clients need to be interrogated.  The aware that server
   will use implementors may enforce
   practical limitations on the CB_GETATTR operation.  The only attributes that number of delegations issued.  Further,
   as there is no way to determine which delegations to revoke, the
   server can reliably query via CB_GETATTR are size and change.

   Since CB_GETATTR is being used allowed to satisfy another client's GETATTR
   request, revoke any.  If the server only needs is implemented to know if
   revoke another delegation held by that client, then the client holding the
   delegation may be
   able to determine that a limit has been reached because each new
   delegation request results in a modified version of the file.  If the client's copy
   of the delegated file is revoke.  The client could then
   determine which delegations it may not modified (data need and preemptively release
   them.

10.2.1.  Delegation Recovery

   There are three situations that delegation recovery must deal with:

   o  Client reboot or size), the server can
   satisfy the second client's GETATTR request from restart

   o  Server reboot or restart

   o  Network partition (full or callback-only)

   In the attributes
   stored locally at event the server.  If client reboots or restarts, the file is modified, failure to renew
   leases will result in the server
   only needs revocation of byte-range locks and share
   reservations.  Delegations, however, may be treated a bit
   differently.

   There will be situations in which delegations will need to know about be
   reestablished after a client reboots or restarts.  The reason for
   this modified state.  If the server
   determines that is the client may have file is currently modified, it data stored locally and this data
   was associated with the previously held delegations.  The client will respond
   need to reestablish the second client's GETATTR as if the appropriate file had been modified locally
   at state on the server.

   Since the form

   To allow for this type of the change attribute is determined by client recovery, the server
   and is opaque to MAY extend the client,
   period for delegation recovery beyond the client and server typical lease expiration
   period.  This implies that requests from other clients that conflict
   with these delegations will need to agree on a
   method of communicating the modified state of the file.  For wait.  Because the size
   attribute, normal recall
   process may require significant time for the client will report its current view of the file size.

   For the change attribute, to flush changed
   state to the handling is more involved.

   For server, other clients need be prepared for delays that
   occur because of a conflicting delegation.  This longer interval
   would increase the client, window for clients to reboot and consult stable
   storage so that the following steps will delegations can be taken when receiving reclaimed.  For open
   delegations, such delegations are reclaimed using OPEN with a
   write delegation:

   o  The value claim
   type of CLAIM_DELEGATE_PREV.  (See Section 10.5 and Section 15.18 for
   discussion of open delegation and the change attribute will be obtained from the details of OPEN respectively).

   A server MAY support a claim type of CLAIM_DELEGATE_PREV, but if it
   does, it MUST NOT remove delegations upon SETCLIENTID_CONFIRM, and cached.  Let this value be represented by c.

   o  The client will create
   instead MUST, for a value greater period of time no less than c that will be used
      for communicating modified data is held at of the client.  Let this value be represented by d.

   o  When of
   the client is queried via CB_GETATTR lease_time attribute, maintain the client's delegations to allow
   time for the change
      attribute, it checks client to see if it holds modified data.  If issue CLAIM_DELEGATE_PREV requests.  The
   server that supports CLAIM_DELEGATE_PREV MUST support the
      file is modified, DELEGPURGE
   operation.

   When the value d is returned for server reboots or restarts, delegations are reclaimed (using
   the change attribute
      value.  If this file OPEN operation with CLAIM_PREVIOUS) in a similar fashion to byte-
   range locks and share reservations.  However, there is not currently modified, a slight
   semantic difference.  In the client returns normal case if the value c for server decides that a
   delegation should not be granted, it performs the change attribute. requested action
   (e.g., OPEN) without granting any delegation.  For simplicity of implementation, reclaim, the client MAY for each CB_GETATTR
   return
   server grants the same value d.  This delegation but a special designation is true even if, between successive
   CB_GETATTR operations, the client again modifies in applied so
   that the file's data
   or metadata in its cache.  The client can return treats the same value
   because delegation as having been granted but
   recalled by the only requirement is that server.  Because of this, the client be able has the duty to indicate
   write all modified state to the server that the client holds modified data.  Therefore, and then return the
   value
   delegation.  This process of d may always be c + 1.

   While the change attribute is opaque to the client in the sense that
   it has no idea what units handling delegation reclaim reconciles
   three principles of time, if any, the server is counting
   change with, it is not opaque in that the NFSv4 protocol:

   o  Upon reclaim, a client has reporting resources assigned to treat it as by an unsigned integer, and the
      earlier server instance must be granted those resources.

   o  The server has unquestionable authority to be able determine whether
      delegations are to see the results
   of the client's changes to that integer.  Therefore, the server MUST
   encode the change attribute in network order when sending it be granted and, once granted, whether they are
      to the
   client. be continued.

   o  The client MUST decode it from network order use of callbacks is not to its native
   order when receiving it and be depended upon until the client MUST encode it network order
   when sending it
      has proven its ability to the server.  For this reason, change is defined as
   an unsigned integer rather receive them.

   When a client has more than an opaque array of octets.

   For the server, the following steps will be taken when providing a
   write delegation:

   o  Upon providing single open associated with a write
   delegation, the server will cache a copy state for those additional opens can be established using
   OPEN operations of
      the change attribute in the data structure it uses type CLAIM_DELEGATE_CUR.  When these are used to record
   establish opens associated with reclaimed delegations, the
      delegation.  Let this value be represented by sc.

   o server
   MUST allow them when made within the grace period.

   When a second client sends a GETATTR operation on the same file network partition occurs, delegations are subject to
      the server, freeing
   by the server obtains the change attribute from the first
      client.  Let this value be cc.

   o  If when the value cc lease renewal period expires.  This is equal similar
   to sc, the file is not modified behavior for locks and share reservations.  For delegations,
   however, the server returns may extend the current values for change, time_metadata, and
      time_modify (for example) to period in which conflicting
   requests are held off.  Eventually the second client.

   o  If occurrence of a conflicting
   request from another client will cause revocation of the value cc is NOT equal to sc, delegation.
   A loss of the file is currently modified
      at callback path (e.g., by later network configuration
   change) will have the first client and most likely same effect.  A recall request will be modified at fail and
   revocation of the server
      at delegation will result.

   A client normally finds out about revocation of a future time.  The server then delegation when it
   uses its current time a stateid associated with a delegation and receives the error
   NFS4ERR_EXPIRED.  It also may find out about delegation revocation
   after a client reboot when it attempts to
      construct attribute values for time_metadata reclaim a delegation and time_modify.  A
      new value
   receives that same error.  Note that in the case of sc, which we will call nsc, is computed a revoked
   OPEN_DELEGATE_WRITE delegation, there are issues because data may
   have been modified by the
      server, client whose delegation is revoked and
   separately by other clients.  See Section 10.5.1 for a discussion of
   such issues.  Note also that nsc >= sc + 1.  The server then returns when delegations are revoked,
   information about the
      constructed time_metadata, time_modify, and nsc values to revoked delegation will be written by the
      requester.  The
   server replaces sc to stable storage (as described in the delegation record Section 9.6).  This is done
   to deal with
      nsc.  To prevent the possibility of time_modify, time_metadata,
      and change from appearing to go backward (which would happen if case in which a server reboots after revoking a
   delegation but before the client holding the revoked delegation fails to write its modified data
      to is
   notified about the server before the delegation is revoked or returned), the
      server SHOULD update the file's metadata record with the
      constructed attribute values.  For reasons revocation.

10.3.  Data Caching

   When applications share access to a set of reasonable
      performance, committing the constructed attribute values files, they need to stable
      storage is OPTIONAL.

      As discussed earlier in this section, be
   implemented so as to take account of the client MAY return possibility of conflicting
   access by another application.  This is true whether the
      same cc value applications
   in question execute on different clients or reside on subsequent CB_GETATTR calls, even if the file was
      modified in same
   client.

   Share reservations and byte-range locks are the client's cache yet again between successive
      CB_GETATTR calls.  Therefore, facilities the server NFS
   version 4 protocol provides to allow applications to coordinate
   access by providing mutual exclusion facilities.  The NFSv4
   protocol's data caching must assume be implemented such that it does not
   invalidate the file
      has been modified yet again, assumptions that those using these facilities depend
   upon.

10.3.1.  Data Caching and MUST take care OPENs

   In order to ensure that avoid invalidating the
      new nsc sharing assumptions that
   applications rely on, NFSv4 clients should not provide cached data to
   applications or modify it constructs and returns is greater than the previous nsc on behalf of an application when it returned.  An example implementation's delegation record would
      satisfy this mandate by including a boolean field (let us call it
      "modified") that is set
   not be valid to false when obtain or modify that same data via a READ or WRITE
   operation.

   Furthermore, in the absence of open delegation is granted, (see Section 10.4) two
   additional rules apply.  Note that these rules are obeyed in practice
   by many NFSv2 and NFSv3 clients.

   o  First, cached data present on a client must be revalidated after
      doing an sc value set at OPEN.  Revalidating means that the time of grant to client fetches the
      change attribute
      value.  The modified field would be set to true from the first time cc
      != sc, and would stay true until server, compares it with the delegation is returned or
      revoked.  The processing for constructing nsc, time_modify, and
      time_metadata would use this pseudo code:

   if (!modified) {
       do CB_GETATTR for cached
      change attribute, and size;

          if (cc != sc)
              modified = TRUE;
      } else {
              do CB_GETATTR for size;
      } if (modified) {
          sc = sc + 1;
       time_modify = time_metadata = current_time;

       update sc, time_modify, time_metadata into file's metadata;
   }

   return to client (that sent GETATTR) the attributes
      it requested, but make sure size comes from what
      CB_GETATTR returned.  Do not update the file's metadata
      with different, declares the client's modified size.

   o  In cached data (as
      well as the case cached attributes) as invalid.  This is to ensure that
      the data for the OPENed file attribute size is different than still correctly reflected in the
      server's current value,
      client's cache.  This validation must be done at least when the server treats this as
      client's OPEN operation includes DENY=WRITE or BOTH thus
      terminating a modification
      regardless of period in which other clients may have had the value of
      opportunity to open the change attribute retrieved via
      CB_GETATTR and responds file with WRITE access.  Clients may
      choose to do the second client as in revalidation more often (i.e., at OPENs
      specifying DENY=NONE) to parallel the last step.

   This methodology resolves issues of clock differences between client
   and server and other scenarios where NFSv3 protocol's practice
      for the use benefit of CB_GETATTR break
   down.

   It should be noted that users assuming this degree of cache
      revalidation.  Since the server change attribute is under no obligation updated for data and
      metadata modifications, some client implementors may be tempted to
      use
   CB_GETATTR and therefore the server MAY simply recall the delegation time_modify attribute and not change to avoid its use.

10.4.4.  Recall of Open Delegation validate cached
      data, so that metadata changes do not spuriously invalidate clean
      data.  The following events necessitate recall of an open delegation:

   o  Potentially conflicting OPEN request (or READ/WRITE done with
      "special" stateid)

   o  SETATTR issued by another client

   o  REMOVE request for the file

   o  RENAME request implementor is cautioned in this approach.  The change
      attribute is guaranteed to change for each update to the file as either source or target of file,
      whereas time_modify is guaranteed to change only at the
      RENAME

   Whether a RENAME
      granularity of a directory in the path leading to time_delta attribute.  Use by the file
   results in recall client's data
      cache validation logic of an open delegation depends on time_modify and not change runs the semantics risk
      of the server filesystem.  If that filesystem denies such RENAMEs when a
   file is open, the recall client incorrectly marking stale data as valid.

   o  Second, modified data must be performed flushed to determine whether the server before closing
      a file in question is, in fact, open.

   In addition OPENed for write.  This is complementary to the situations above, first rule.
      If the server may choose to recall
   open delegations data is not flushed at any time if resource constraints make it
   advisable to do so.  Clients should always be prepared for CLOSE, the
   possibility of recall.

   When a revalidation done after
      client receives a recall for an open delegation, it needs OPENs as file is unable to
   update state on achieve its purpose.  The other
      aspect to flushing the server data before returning close is that the delegation.  These
   same updates data must be done whenever a client chooses to return a
   delegation voluntarily.  The following items of state need
      committed to be
   dealt with:

   o  If stable storage, at the file associated with server, before the delegation is no longer open and
      no previous CLOSE
      operation has been sent to is requested by the server, a CLOSE
      operation must be sent to client.  In the server.

   o  If case of a file has other open references at the client, then OPEN
      operations must server
      reboot or restart and a CLOSEd file, it may not be sent possible to
      retransmit the server.  The appropriate stateids
      will data to be provided by written to the server for subsequent file.  Hence, this
      requirement.

10.3.2.  Data Caching and File Locking

   For those applications that choose to use by the file locking instead of
   share reservations to exclude inconsistent file access, there is an
   analogous set of constraints that apply to client
      since the delegation stateid will not longer be valid. side data caching.
   These OPEN
      requests rules are done with the claim type of CLAIM_DELEGATE_CUR.  This
      will allow the presentation of effective only if the delegation stateid so file locking is used in a way
   that matches in an equivalent way the
      client can establish the appropriate rights to perform the OPEN.
      (see Section 15.18 for details.)

   o  If there are granted file locks, the corresponding LOCK actual READ and WRITE
   operations
      need to be performed. executed.  This applies is as opposed to the write open delegation
      case only.

   o file locking that is
   based on pure convention.  For example, it is possible to manipulate
   a write open delegation, if at two-megabyte file by dividing the time of recall file into two one-megabyte
   regions and protecting access to the two regions by file is
      not open locks on
   bytes zero and one.  A lock for write, all modified data write on byte zero of the file would
   represent the right to do READ and WRITE operations on the first
   region.  A lock for write on byte one of the file must be flushed would represent the
   right to do READ and WRITE operations on the server.  If second region.  As long
   as all applications manipulating the delegation had file obey this convention, they
   will work on a local filesystem.  However, they may not existed, work with the client
      would have done this
   NFSv4 protocol unless clients refrain from data flush before caching.

   The rules for data caching in the CLOSE operation. file locking environment are:

   o  For a write open delegation  First, when a client obtains a file is still open at the time
      of recall, any modified data lock for a particular region,
      the file needs data cache corresponding to that region (if any cached data
      exists) must be flushed to
      the server.

   o  With revalidated.  If the write open delegation in place, it is possible change attribute indicates
      that the file was truncated during the duration of the delegation.  For
      example, the truncation could may have occurred as a result of an OPEN
      UNCHECKED with a size attribute value of zero.  Therefore, if a
      truncation of the file has occurred and this operation has not been propagated to updated since the server, cached data was
      obtained, the truncation client must occur before
      any modified data is written to flush or invalidate the server.

   In cached data for
      the case newly locked region.  A client might choose to invalidate all
      of write open delegation, non-modified cached data that it has for the file locking imposes some
   additional requirements.  To precisely maintain but the associated
   invariant, it only
      requirement for correct operation is required to flush any modified invalidate all of the data
      in any region
   for which the newly locked region.

   o  Second, before releasing a write lock was released while the write delegation was in
   effect.  However, because the write open delegation implies no other
   locking by other clients, for a simpler implementation is to flush region, all modified
      data for the file (as described just above) if any write
   lock has been released while the write open delegation was in effect.

   An implementation need not wait until delegation recall (or deciding
   to voluntarily return a delegation) that region must be flushed to perform any of the above
   actions, if implementation considerations (e.g., resource
   availability constraints) make server.  The modified
      data must also be written to stable storage.

   Note that desirable.  Generally, however, flushing data to the fact that server and the actual open state invalidation of cached
   data must reflect the file may continue actual byte ranges locked or unlocked.
   Rounding these up or down to
   change makes it reflect client cache block boundaries
   will cause problems if not worthwhile to send information about opens and
   closes to the server, except as part carefully done.  For example, writing a
   modified block when only half of delegation return.  Only in that block is within an area being
   unlocked may cause invalid modification to the case of closing region outside the open that resulted
   unlocked area.  This, in obtaining the
   delegation would clients turn, may be likely to do part of a region locked by
   another client.  Clients can avoid this early, since, in situation by synchronously
   performing portions of write operations that
   case, the close once done will overlap that portion
   (initial or final) that is not be undone.  Regardless of the
   client's choices on scheduling these actions, all must be performed
   before the delegation a full block.  Similarly, invalidating
   a locked area which is returned, including (when applicable) not an integral number of full buffer blocks
   would require the
   close that corresponds client to read one or two partial blocks from the open
   server if the revalidation procedure shows that resulted in the delegation.
   These actions can be performed either in previous requests or in
   previous operations in data which the same COMPOUND request.

10.4.5.  Clients that Fail to Honor Delegation Recalls

   A
   client possesses may fail to respond not be valid.

   The data that is written to a recall for various reasons, such the server as a failure prerequisite to the
   unlocking of a region must be written, at the callback path from server server, to the client. stable
   storage.  The client may be unaware of accomplish this either with synchronous
   writes or by following asynchronous writes with a failure in the callback path. COMMIT operation.
   This lack is required because retransmission of
   awareness could result in the client finding out long after the
   failure that its delegation has been revoked, and another client has modified the data for which the client had a delegation.  This is
   especially a problem for the client that held after a write delegation.

   The
   server also has reboot might conflict with a dilemma in that the lock held by another client.

   A client that fails implementation may choose to
   respond accommodate applications which
   use byte-range locking in non-standard ways (e.g., using a byte-range
   lock as a global semaphore) by flushing to the recall might also be sending other NFS requests,
   including those that renew the lease before the lease expires.
   Without returning an error for those lease renewing operations, the server leads the client to believe that the delegation it has is in
   force.

   This difficulty more data upon
   a LOCKU than is solved covered by the following rules:

   o  When the callback path is down, the server MUST NOT revoke locked range.  This may include
   modified data within files other than the
      delegation if one of for which the unlocks
   are being done.  In such cases, the following occurs:

      *  The client has issued a RENEW operation must not interfere with
   applications whose READs and WRITEs are being done only within the server has
         returned an NFS4ERR_CB_PATH_DOWN error.  The server MUST renew
         the lease for any
   bounds of record locks and share reservations the
         client has that which the server has known about (as opposed to those application holds.  For example, an
   application locks a single byte of a file and share reservations the proceeds to write that
   single byte.  A client has established but not
         yet sent that chose to the server, due handle a LOCKU by flushing all
   modified data to the delegation).  The server
         SHOULD give the client a reasonable time could validly write that single byte in
   response to return its
         delegations an unrelated unlock.  However, it would not be valid to
   write the server before revoking the client's
         delegations.

      *  The client has entire block in which that single written byte was located
   since it includes an area that is not issued a RENEW operation locked and might be locked by
   another client.  Client implementations can avoid this problem by
   dividing files with modified data into those for some period of
         time after the server attempted which all
   modifications are done to recall areas covered by an appropriate byte-range
   lock and those for which there are modifications not covered by a
   byte-range lock.  Any writes done for the delegation.  This
         period former class of time MUST NOT be less than files must
   not include areas not locked and thus not modified on the value client.

10.3.3.  Data Caching and Mandatory File Locking

   Client side data caching needs to respect mandatory file locking when
   it is in effect.  The presence of the
         lease_time attribute.

   o  When mandatory file locking for a given
   file is indicated when the client holds gets back NFS4ERR_LOCKED from a delegation, it cannot rely
   READ or WRITE on operations,
      except for RENEW, that take a stateid, to renew delegation leases
      across callback path failures.  The client that wants to keep
      delegations file it has an appropriate share reservation for.
   When mandatory locking is in force across callback path failures must use RENEW
      to do so.

10.4.6.  Delegation Revocation

   At the point effect for a delegation is revoked, if there are associated opens
   on the client, file, the applications holding these opens need to be
   notified.  This notification usually occurs by returning errors client must check
   for
   READ/WRITE operations or when a close is attempted an appropriate file lock for the open file. data being read or written.  If no opens exist a
   lock exists for the file at range being read or written, the point client may
   satisfy the delegation is
   revoked, then notification of request using the revocation is unnecessary.
   However, if there client's validated cache.  If an
   appropriate file lock is modified data present at the client not held for the
   file, the user range of the application should be notified.  Unfortunately,
   it may read or write,
   the read or write request must not be possible to notify satisfied by the user since active applications
   may not client's cache
   and the request must be present at sent to the client.  See Section 10.5.1 server for additional
   details.

10.5. processing.  When a
   read or write request partially overlaps a locked region, the request
   should be subdivided into multiple pieces with each region (locked or
   not) treated appropriately.

10.3.4.  Data Caching and Revocation File Identity

   When locks and delegations are revoked, the assumptions upon which
   successful caching depend are no longer guaranteed.  For any locks or
   share reservations that have been revoked, clients cache data, the corresponding owner file data needs to be notified.  This notification includes applications with a
   file open that has a corresponding delegation organized
   according to the filesystem object to which the data belongs.  For
   NFSv3 clients, the typical practice has been revoked.
   Cached data associated with to assume for the revocation must be removed from the
   client.  In the case
   purpose of modified data existing in the client's cache, caching that data must be removed from the distinct filehandles represent distinct
   filesystem objects.  The client without it being written to then has the server.  As mentioned, choice to organize and
   maintain the assumptions made by data cache on this basis.

   In the client are no
   longer valid at NFSv4 protocol, there is now the point when a lock or delegation has been revoked.
   For example, another client may possibility to have been granted
   significant deviations from a conflicting lock
   after "one filehandle per object" model
   because a filehandle may be constructed on the revocation basis of the lock at the first client. object's
   pathname.  Therefore, clients need a reliable method to determine if
   two filehandles designate the same filesystem object.  If clients
   were simply to assume that all distinct filehandles denote distinct
   objects and proceed to do data within the lock range may have been modified by the other
   client.  Obviously, caching on this basis, caching
   inconsistencies would arise between the first distinct client is unable to guarantee side objects
   which mapped to the
   application what has occurred same server side object.

   By providing a method to differentiate filehandles, the file in the case of revocation.

   Notification to NFSv4
   protocol alleviates a lock owner will potential functional regression in many cases consist of simply
   returning an error on the next and all subsequent READs/WRITEs to the
   open file or on comparison
   with the close.  Where NFSv3 protocol.  Without this method, caching
   inconsistencies within the methods available to a same client
   make such notification impossible because errors for certain
   operations may not be returned, more drastic action such as signals
   or process termination may be appropriate.  The justification for could occur and this is has not
   been present in previous versions of the NFS protocol.  Note that an invariant for which an application depends on may be
   violated.  Depending it
   is possible to have such inconsistencies with applications executing
   on how errors are typically treated for the
   client operating environment, further levels of notification
   including logging, console messages, and GUI pop-ups may be
   appropriate.

10.5.1.  Revocation Recovery for Write Open Delegation

   Revocation recovery for a write open delegation poses multiple clients but that is not the special issue being addressed here.

   For the purposes of modified data in the client cache while caching, the file is not
   open.  In this situation, any following steps allow an NFSv4
   client which does not flush modified
   data to determine whether two distinct filehandles denote the same
   server on each close must ensure side object:

   o  If GETATTR directed to two filehandles returns different values of
      the fsid attribute, then the filehandles represent distinct
      objects.

   o  If GETATTR for any file with an fsid that matches the user receives
   appropriate notification fsid of the failure as
      two filehandles in question returns a result unique_handles attribute
      with a value of TRUE, then the
   revocation.  Since such situations may require human action to
   correct problems, notification schemes in which the appropriate user
   or administrator is notified may be necessary.  Logging and console
   messages two objects are typical examples. distinct.

   o  If there is modified data on GETATTR directed to the client, it must two filehandles does not be flushed
   normally to return the server.  A client may attempt to provide a copy
      fileid attribute for both of the file data as modified during handles, then it cannot be
      determined whether the delegation under a different
   name in two objects are the filesystem name space to ease recovery.  Note same.  Therefore,
      operations which depend on that when
   the knowledge (e.g., client can determine side data
      caching) cannot be done reliably.  Note that the file has if GETATTR does not been modified by any
   other client, or when the client has a complete cached copy of file
   in question, such a saved copy of the client's view of
      return the file may
   be of particular value fileid attribute for recovery.  In other case, recovery using a
   copy of the file based partially on the client's cached data and
   partially on the server copy as modified by other clients, both filehandles, it will be
   anything but straightforward, so clients may avoid saving file
   contents in these situations or mark the results specially to warn
   users of possible problems.

   Saving of such modified data in delegation revocation situations may
   be limited to files return
      it for neither of a certain size or might be used only when
   sufficient disk space is available within the target filesystem.
   Such saving may also be restricted to situations when the client has
   sufficient buffering resources to keep filehandles, since the cached copy available
   until it fsid for both
      filehandles is properly stored to the target filesystem.

10.6.  Attribute Caching

   The attributes discussed in this section do not include named
   attributes.  Individual named attributes are analogous same.

   o  If GETATTR directed to files and
   caching of the data for these needs to be handled just as data
   caching is two filehandles returns different
      values for ordinary files.  Similarly, LOOKUP results from an
   OPENATTR directory the fileid attribute, then they are distinct objects.

   o  Otherwise they are to be cached on the same basis as any other
   pathnames and similarly for directory contents.

   Clients may cache file attributes obtained from the server and use
   them to avoid subsequent GETATTR requests.  Such caching is write
   through in that modification to object.

10.4.  Open Delegation

   When a file attributes is always done by
   means of requests to being OPENed, the server may delegate further handling
   of opens and should not be done locally and
   cached.  The exception to this are modifications to attributes closes for that
   are intimately connected with data caching.  Therefore, extending a file by writing data to the local data cache opening client.  Any such
   delegation is reflected immediately
   in the size as seen on recallable, since the client without this change being
   immediately reflected on circumstances that allowed for
   the server.  Normally such changes delegation are not
   propagated directly subject to change.  In particular, the server but when the modified data is
   flushed to may
   receive a conflicting OPEN from another client, the server, analogous attribute changes are made on server must
   recall the
   server.  When open delegation is in effect, before deciding whether the modified attributes OPEN from the other
   client may be returned to the server in the response to granted.  Making a CB_RECALL call.

   The result of local caching of attributes delegation is that up to the attribute
   caches maintained on individual server and
   clients should not assume that any particular OPEN either will or
   will not be coherent.
   Changes made in one order on the server may be seen result in an open delegation.  The following is a different
   order on one client and typical
   set of conditions that servers might use in a third order on a different client. deciding whether OPEN
   should be delegated:

   o  The typical filesystem application programming interfaces do not
   provide means client must be able to respond to atomically modify or interrogate attributes for
   multiple files at the same time. server's callback
      requests.  The following rules provide an
   environment where the potential incoherences mentioned above can be
   reasonably managed.  These rules are derived from server will use the practice of
   previous NFS protocols.

   o  All attributes CB_NULL procedure for a given file (per-fsid attributes excepted) are
      cached as a unit at the client so that no non-serializability can
      arise within the context test of a single file.
      callback ability.

   o  An upper time boundary is maintained on how long a  The client cache
      entry can must have responded properly to previous recalls.

   o  There must be kept without being refreshed from no current open conflicting with the server. requested
      delegation.

   o  When operations are performed  There should be no current delegation that change attributes at conflicts with the
      server,
      delegation being requested.

   o  The probability of future conflicting open requests should be low
      based on the updated attribute set is requested as part recent history of the
      containing RPC.  This includes directory operations file.

   o  The existence of any server-specific semantics of OPEN/CLOSE that update
      attributes indirectly.  This is accomplished by following
      would make the
      modifying operation required handling incompatible with a GETATTR operation and then using the
      results of the GETATTR to update the client's cached attributes.

   Note prescribed
      handling that if the full set delegated client would apply (see below).

   There are two types of attributes open delegations, OPEN_DELEGATE_READ and
   OPEN_DELEGATE_WRITE.  A OPEN_DELEGATE_READ delegation allows a client
   to handle, on its own, requests to open a file for reading that do
   not deny read access to others.  Multiple OPEN_DELEGATE_READ
   delegations may be cached is requested by
   READDIR, the results can be cached by outstanding simultaneously and do not conflict.  A
   OPEN_DELEGATE_WRITE delegation allows the client to handle, on the same basis as
   attributes obtained via GETATTR.

   A client may validate its cached version of attributes
   own, all opens.  Only one OPEN_DELEGATE_WRITE delegation may exist
   for a given file by
   fetching just both the change and time_access attributes at a given time and assuming
   that if the change attribute it is inconsistent with any
   OPEN_DELEGATE_READ delegations.

   When a client has the same value as a OPEN_DELEGATE_READ delegation, it did when may not make
   any changes to the contents or attributes were cached, then no attributes other than time_access
   have changed.  The reason why time_access is also fetched is because
   many servers operate in environments where of the operation that updates
   change does not update time_access.  For example, POSIX file
   semantics but it is
   assured that no other client may do not update access time when so.  When a file is modified by the
   write system call.  Therefore, the client that wants has a current
   time_access value should fetch
   OPEN_DELEGATE_WRITE delegation, it with change during may modify the attribute
   cache validation processing and update its cached time_access. file data since no
   other client will be accessing the file's data.  The client may maintain holding a cache of modified attributes for those
   OPEN_DELEGATE_WRITE delegation may only affect file attributes intimately which
   are intimately connected with data of modified regular files
   (size, time_modify, and change).  Other than those three attributes, the client MUST NOT maintain file data: size, time_modify,
   change.

   When a cache of modified attributes.
   Instead, attribute changes are immediately sent client has an open delegation, it does not send OPENs or
   CLOSEs to the server.

   In some operating environments, server but updates the equivalent to time_access is
   expected to appropriate status internally.
   For a OPEN_DELEGATE_READ delegation, opens that cannot be implicitly updated by each handled
   locally (opens for write or that deny read of the content of access) must be sent to
   the
   file object.  If server.

   When an NFS client is caching the content of a file
   object, whether it open delegation is a regular file, directory, or symbolic link, made, the client SHOULD NOT update response to the time_access attribute (via SETATTR
   or a small READ or READDIR request) on OPEN contains an
   open delegation structure which specifies the server with each read that
   is satisfied from cache.  The reason is that this can defeat following:

   o  the
   performance benefits type of caching content, especially since an explicit
   SETATTR delegation (read or write)

   o  space limitation information to control flushing of time_access may alter the change attribute data on close
      (OPEN_DELEGATE_WRITE delegation only, see Section 10.4.1)

   o  an nfsace4 specifying read and write permissions

   o  a stateid to represent the server.
   If delegation for READ and WRITE

   The delegation stateid is separate and distinct from the change attribute changes, clients that are caching stateid for
   the content
   will think OPEN proper.  The standard stateid, unlike the content has changed, delegation
   stateid, is associated with a particular lock_owner and will re-read unmodified data
   from continue
   to be valid after the server.  Nor delegation is recalled and the file remains
   open.

   When a request internal to the client encouraged is made to maintain open a modified
   version of time_access file and open
   delegation is in its cache, since this would mean that the
   client effect, it will either eventually have to write be accepted or rejected solely on
   the access time to basis of the
   server with bad performance effects, or it would never update following conditions.  Any requirement for other
   checks to be made by the
   server's time_access, thereby resulting delegate should result in a situation where an
   application that caches access time between a close and open of delegation
   being denied so that the
   same file observes checks can be made by the server itself.

   o  The access time oscillating between and deny bits for the past request and
   present.  The time_access attribute always means the time of last
   access to a file by a as described
      in Section 9.9.

   o  The read and write permissions as determined below.

   The nfsace4 passed with delegation can be used to avoid frequent
   ACCESS calls.  The permission check should be as follows:

   o  If the nfsace4 indicates that was satisfied by the server.  This
   way clients will tend open may be done, then it should
      be granted without reference to see only time_access changes the server.

   o  If the nfsace4 indicates that go forward
   in time.

10.7.  Data and Metadata Caching and Memory Mapped Files

   Some operating environments include the capability for open may not be done, then an application
      ACCESS request must be sent to map a file's content into the application's address space.  Each
   time the application accesses a memory location that corresponds server to a
   block that has not been loaded into the address space, a page fault
   occurs and obtain the file definitive
      answer.

   The server may return an nfsace4 that is read (or if more restrictive than the block does not exist in
   actual ACL of the
   file, file.  This includes an nfsace4 that specifies
   denial of all access.  Note that some common practices such as
   mapping the block is allocated and then instantiated in traditional user "root" to the
   application's address space).

   As long as each memory mapped access user "nobody" may make it
   incorrect to return the actual ACL of the file requires a page
   fault, in the relevant attributes delegation
   response.

   The use of delegation together with various other forms of caching
   creates the file possibility that are used to detect
   access and modification (time_access, time_metadata, time_modify, and
   change) will be updated.  However, in many operating environments,
   when page faults are not required these attributes no server authentication will not ever be
   updated on reads or updates to
   performed for a given user since all of the file via memory access (regardless
   whether user's requests might be
   satisfied locally.  Where the file is local file or is being access remotely).  A client or server MAY fail to update attributes of a file that is
   being accessed via memory mapped I/O. This has several implications:

   o  If there is an application depending on the server that has memory mapped a
      file that a client is also accessing, for
   authentication, the client may should be sure authentication occurs for
   each user by use of the ACCESS operation.  This should be the case
   even if an ACCESS operation would not be able
      to get a consistent value required otherwise.  As
   mentioned before, the server may enforce frequent authentication by
   returning an nfsace4 denying all access with every open delegation.

10.4.1.  Open Delegation and Data Caching

   OPEN delegation allows much of the change attribute message overhead associated with
   the opening and closing files to determine
      whether its cache be eliminated.  An open when an open
   delegation is stale or not.  A server that knows in effect does not require that the
      file is memory mapped could always pessimistically return updated
      values for change so as a validation message be
   sent to force the application to always get server.  The continued endurance of the
      most up to date data
   "OPEN_DELEGATE_READ delegation" provides a guarantee that no OPEN for
   write and metadata thus no write has occurred.  Similarly, when closing a file
   opened for write and if OPEN_DELEGATE_WRITE delegation is in effect,
   the file.  However, due data written does not have to be flushed to the negative performance implications of this, such behavior is
      OPTIONAL.

   o  If server until the memory mapped file
   open delegation is not being modified on recalled.  The continued endurance of the server, open
   delegation provides a guarantee that no open and
      instead is just being thus no read or
   write has been done by an application via the memory mapped
      interface, another client.

   For the client will not see purposes of open delegation, READs and WRITEs done without an updated time_access
      attribute.  However, in many operating environments, neither will
      any process running on the server.  Thus NFS clients
   OPEN are at no
      disadvantage with respect to local processes.

   o  If there is another client that is memory mapping treated as the file, functional equivalents of a corresponding
   type of OPEN.  This refers to the READs and if WRITEs that client is holding a write delegation, use the same set
   special stateids consisting of issues
      as discussed in the previous two bullet items apply.  So, when all zero bits or all one bits.
   Therefore, READs or WRITEs with a special stateid done by another
   client will force the server does a CB_GETATTR to recall a file that the OPEN_DELEGATE_WRITE
   delegation.  A WRITE with a special stateid done by another client has modified in
      its cache, the response from CB_GETATTR
   will not necessarily be
      accurate.  As discussed earlier, the client's obligation force a recall of OPEN_DELEGATE_READ delegations.

   With delegations, a client is able to avoid writing data to
      report that the file has been modified since the delegation was
      granted, not whether it has been modified again between successive
      CB_GETATTR calls, and the
   server MUST assume that any file the
      client has modified in cache has been modified again between
      successive CB_GETATTR calls.  Depending on when the nature CLOSE of the
      client's memory management system, this weak obligation may not be
      possible.  A client MAY return stale information in CB_GETATTR
      whenever the a file is memory mapped.

   o serviced.  The mixture of memory mapping and file locking on the same file close system
   call is
      problematic.  Consider the following scenario, where usual point at which the page size
      on each client is 8192 bytes.

      *  Client A memory maps first page (8192 bytes) of file X

      *  Client B memory maps first page (8192 bytes) of file X

      *  Client A write locks first 4096 bytes

      *  Client B write locks second 4096 bytes

      *  Client A, via a STORE instruction modifies part notified of its locked
         region.

      *  Simultaneous to client A, client B issues a STORE on part lack of
         its locked region.

   Here
   stable storage for the challenge modified file data generated by the
   application.  At the close, file data is for each client to resynchronize written to get a
   correct view of the first page.  In many operating environments, server and
   through normal accounting the
   virtual memory management systems on each client only know a page server is
   modified, not that a subset of the page corresponding able to determine if the
   respective lock regions
   available filesystem space for the data has been modified.  So it is not possible exceeded (i.e.,
   server returns NFS4ERR_NOSPC or NFS4ERR_DQUOT).  This accounting
   includes quotas.  The introduction of delegations requires that a
   alternative method be in place for
   each client to do the right thing, which is to only write same type of communication to
   occur between client and server.

   In the delegation response, the server that portion provides either the limit of
   the page that is locked.  For example, if
   client A simply writes out size of the page, file or the number of modified blocks and then client B writes out associated
   block size.  The server must ensure that the
   page, client A's will be able to
   flush data is lost.

   Moreover, if mandatory locking is enabled on to the file, then we have server of a
   different problem.  When clients A and B issue size equal to that provided in the STORE
   instructions,
   original delegation.  The server must make this assurance for all
   outstanding delegations.  Therefore, the resulting page faults require server must be careful in
   its management of available space for new or modified data taking
   into account available filesystem space and any applicable quotas.
   The server can recall delegations as a record lock on result of managing the
   entire page.  Each
   available filesystem space.  The client then tries to extend their locked range to
   the entire page, which results in a deadlock.

   Communicating should abide by the NFS4ERR_DEADLOCK error to a STORE instruction is
   difficult at best. server's
   state space limits for delegations.  If a the client is locking exceeds the entire memory mapped file, there stated
   limits for the delegation, the server's behavior is no
   problem with advisory undefined.

   Based on server conditions, quotas or mandatory record locking, at least until available filesystem space, the
   client unlocks a region
   server may grant OPEN_DELEGATE_WRITE delegations with very
   restrictive space limitations.  The limitations may be defined in a
   way that will always force modified data to be flushed to the middle of server
   on close.

   With respect to authentication, flushing modified data to the file.

   Given server
   after a CLOSE has occurred may be problematic.  For example, the above issues user
   of the following are permitted:

   o  Clients and servers MAY deny memory mapping a file they know there
      are record locks for.

   o  Clients and servers MAY deny a record lock on a file they know is
      memory mapped.

   o  A client MAY deny memory mapping a file that it knows requires
      mandatory locking for I/O. If mandatory locking is enabled after application may have logged off the file is opened client and mapped, unexpired
   authentication credentials may not be present.  In this case, the
   client MAY deny the application
      further access may need to its mapped file.

10.8.  Name Caching

   The results of LOOKUP and READDIR operations take special care to ensure that local unexpired
   credentials will in fact be available.  This may be cached to avoid accomplished by
   tracking the cost expiration time of subsequent LOOKUP operations.  Just as credentials and flushing data well in the case
   advance of
   attribute caching, inconsistencies may arise among the various client
   caches.  To mitigate the effects their expiration or by making private copies of these inconsistencies
   credentials to assure their availability when needed.

10.4.2.  Open Delegation and given
   the context of typical filesystem APIs, an upper time boundary is
   maintained on how long File Locks

   When a client name cache entry holds a OPEN_DELEGATE_WRITE delegation, lock operations
   may be performed locally.  This includes those required for mandatory
   file locking.  This can be kept without
   verifying done since the delegation implies that
   there can be no conflicting locks.  Similarly, all of the entry has
   revalidations that would normally be associated with obtaining locks
   and the flushing of data associated with the releasing of locks need
   not been made invalid by a directory
   change operation performed by another client. be done.

   When a client is holds a OPEN_DELEGATE_READ delegation, lock operations
   are not making changes performed locally.  All lock operations, including those
   requesting non-exclusive locks, are sent to a directory for which there
   exist name cache entries, the client server for
   resolution.

10.4.3.  Handling of CB_GETATTR

   The server needs to periodically fetch
   attributes employ special handling for that directory to ensure that it a GETATTR where the
   target is not being
   modified.  After determining a file that no modification has occurred, the
   expiration time a OPEN_DELEGATE_WRITE delegation in effect.
   The reason for this is that the associated name cache entries client holding the
   OPEN_DELEGATE_WRITE delegation may be updated
   to be have modified the current time plus data and the name cache staleness bound.

   When a client is making changes to a given directory, it
   server needs to
   determine whether there have been changes made reflect this change to the directory by
   other clients.  It does this by using second client that
   submitted the change attribute as
   reported before and after GETATTR.  Therefore, the directory operation in client holding the associated
   change_info4 value returned for
   OPEN_DELEGATE_WRITE delegation needs to be interrogated.  The server
   will use the CB_GETATTR operation.  The only attributes that the
   server can reliably query via CB_GETATTR are size and change.

   Since CB_GETATTR is able being used to
   communicate satisfy another client's GETATTR
   request, the server only needs to know if the client whether holding the change_info4 data is provided
   atomically with respect to
   delegation has a modified version of the directory operation. file.  If the change
   values are provided atomically, client's copy
   of the client delegated file is then able to compare
   the pre-operation change value with not modified (data or size), the change value in server can
   satisfy the second client's
   name cache.  If the comparison indicates that GETATTR request from the directory was
   updated by another client, attributes
   stored locally at the name cache associated with server.  If the
   modified directory file is purged from modified, the client. server
   only needs to know about this modified state.  If the comparison
   indicates no modification, the name cache can be updated on server
   determines that the
   client file is currently modified, it will respond to reflect the directory operation and
   the associated timeout
   extended.  The post-operation change value needs to be saved second client's GETATTR as if the
   basis for future change_info4 comparisons.

   As demonstrated by file had been modified locally
   at the scenario above, name caching requires that server.

   Since the
   client revalidate name cache data by inspecting form of the change attribute
   of a directory at is determined by the point when server
   and is opaque to the name cache item was cached.
   This requires that client, the client and server update need to agree on a
   method of communicating the change attribute for
   directories when modified state of the contents file.  For the size
   attribute, the client will report its current view of the corresponding directory file size.
   For the change attribute, the handling is
   modified. more involved.

   For a client to use the change_info4 information
   appropriately and correctly, client, the server must report following steps will be taken when receiving a
   OPEN_DELEGATE_WRITE delegation:

   o  The value of the pre and post
   operation change attribute values atomically.  When will be obtained from the server
      and cached.  Let this value be represented by c.

   o  The client will create a value greater than c that will be used
      for communicating modified data is
   unable to report held at the before and after values atomically with respect client.  Let this
      value be represented by d.

   o  When the client is queried via CB_GETATTR for the change
      attribute, it checks to see if it holds modified data.  If the directory operation,
      file is modified, the server must indicate that fact in value d is returned for the
   change_info4 return change attribute
      value.  When the information  If this file is not atomically
   reported, currently modified, the client should not assume that other clients have not
   changed returns
      the directory.

10.9.  Directory Caching

   The results of READDIR operations may be used to avoid subsequent
   READDIR operations.  Just as in value c for the cases change attribute.

   For simplicity of attribute and name
   caching, inconsistencies may arise among implementation, the various client caches.
   To mitigate MAY for each CB_GETATTR
   return the effects of these inconsistencies, and given same value d.  This is true even if, between successive
   CB_GETATTR operations, the
   context of typical filesystem APIs, client again modifies in the following rules should be
   followed:

   o  Cached READDIR information for a directory which is not obtained file's data
   or metadata in a single READDIR operation must always be a consistent snapshot
      of directory contents.  This is determined by using a GETATTR
      before its cache.  The client can return the first READDIR and after same value
   because the last of READDIR only requirement is that
      contributes to the cache.

   o  An upper time boundary is maintained client be able to indicate
   to the length of
      time a directory cache entry is considered valid before server that the client
      must revalidate the cached information.

   The revalidation technique parallels that discussed in holds modified data.  Therefore, the case
   value of
   name caching.  When d may always be c + 1.

   While the client change attribute is not changing opaque to the directory client in
   question, checking the change attribute sense that
   it has no idea what units of time, if any, the directory with GETATTR server is adequate.  The lifetime of the cache entry can be extended at
   these checkpoints.  When a client counting
   change with, it is modifying the directory, not opaque in that the client needs has to use treat it as
   an unsigned integer, and the change_info4 data server has to determine whether there
   are other clients modifying be able to see the directory.  If it is determined results
   of the client's changes to that
   no other client modifications are occurring, integer.  Therefore, the server MUST
   encode the change attribute in network order when sending it to the
   client.  The client may update
   its directory cache MUST decode it from network order to reflect its own changes.

   As demonstrated previously, directory caching requires that native
   order when receiving it and the client revalidate directory cache data by inspecting MUST encode it network order
   when sending it to the server.  For this reason, change
   attribute is defined as
   an unsigned integer rather than an opaque array of a directory at bytes.

   For the point when server, the directory was cached.
   This requires that following steps will be taken when providing a
   OPEN_DELEGATE_WRITE delegation:

   o  Upon providing a OPEN_DELEGATE_WRITE delegation, the server update will
      cache a copy of the change attribute for
   directories when in the contents of data structure it uses
      to record the corresponding directory is
   modified.  For delegation.  Let this value be represented by sc.

   o  When a second client sends a GETATTR operation on the same file to use
      the change_info4 information
   appropriately and correctly, server, the server must report obtains the pre and post
   operation change attribute values atomically.  When from the server first
      client.  Let this value be cc.

   o  If the value cc is
   unable equal to report sc, the before file is not modified and after the
      server returns the current values atomically with respect for change, time_metadata, and
      time_modify (for example) to the directory operation, second client.

   o  If the server must indicate that fact in value cc is NOT equal to sc, the
   change_info4 return value.  When the information file is not atomically
   reported, currently modified
      at the first client should not assume that other clients have not
   changed the directory.

11.  Minor Versioning

   To address the requirement of an NFS protocol that can evolve as the
   need arises, the NFS version 4 protocol contains the rules and
   framework to allow for most likely will be modified at the server
      at a future minor changes or versioning. time.  The base assumption with respect server then uses its current time to minor versioning is that any
   future accepted minor version must follow the IETF process
      construct attribute values for time_metadata and be
   documented in a standards track RFC.  Therefore, each minor version
   number will correspond to an RFC.  Minor version zero time_modify.  A
      new value of the NFS
   version 4 protocol sc, which we will call nsc, is represented computed by this RFC.  The COMPOUND
   procedure will support the encoding of
      server, such that nsc >= sc + 1.  The server then returns the minor version being
   requested by
      constructed time_metadata, time_modify, and nsc values to the client.
      requester.  The following items represent server replaces sc in the basic rules for delegation record with
      nsc.  To prevent the development possibility of
   minor versions.  Note that a future minor version may decide to
   modify or add time_modify, time_metadata,
      and change from appearing to go backward (which would happen if
      the following rules as part of the minor version
   definition.

   1.   Procedures are not added or deleted

        To maintain client holding the general RPC model, NFS version 4 minor versions
        will not add delegation fails to or delete procedures from the NFS program.

   2.   Minor versions may add operations write its modified data
      to the COMPOUND and
        CB_COMPOUND procedures.

        The addition of operations to server before the COMPOUND and CB_COMPOUND
        procedures does not affect delegation is revoked or returned), the RPC model.

        1.  Minor versions may append attributes to GETATTR4args,
            bitmap4, and GETATTR4res.

            This allows for
      server SHOULD update the expansion file's metadata record with the
      constructed attribute values.  For reasons of reasonable
      performance, committing the constructed attribute model values to
            allow for future growth or adaptation.

        2.  Minor version X must append any new attributes after stable
      storage is OPTIONAL.

   As discussed earlier in this section, the
            last documented attribute.

            Since attribute results are specified as an opaque array of
            per-attribute XDR encoded results, client MAY return the complexity of adding
            new attributes same
   cc value on subsequent CB_GETATTR calls, even if the file was
   modified in the midst of client's cache yet again between successive
   CB_GETATTR calls.  Therefore, the current definitions will
            be too burdensome.

   3.   Minor versions server must not modify assume that the structure of an existing
        operation's arguments or results.

        Again file
   has been modified yet again, and MUST take care to ensure that the complexity of handling multiple structure definitions
        for a single operation
   new nsc it constructs and returns is too burdensome.  New operations should
        be added instead of modifying existing structures for a minor
        version.

        This rule does not preclude greater than the following adaptations in previous nsc it
   returned.  An example implementation's delegation record would
   satisfy this mandate by including a minor
        version.

        *  adding bits to flag fields such as new attributes boolean field (let us call it
   "modified") that is set to
           GETATTR's bitmap4 data type

        *  adding bits to existing attributes like ACLs that have flag
           words

        *  extending enumerated types (including NFS4ERR_*) with new
           values

   4.   Minor versions must not modify the structure of existing
        attributes.

   5.   Minor versions must not delete operations.

        This prevents FALSE when the potential reuse of a particular operation
        "slot" in a future minor version.

   6.   Minor versions must not delete attributes.

   7.   Minor versions must not delete flag bits or enumeration values.

   8.   Minor versions may declare an operation MUST NOT be implement.

        Specifying that an operation MUST NOT be implemented delegation is
        equivalent to obsoleting granted, and
   an operation.  For sc value set at the client, it means
        that time of grant to the operation MUST NOT change attribute value.
   The modified field would be sent set to TRUE the server.  For first time cc != sc, and
   would stay TRUE until the
        server, an NFS error can be delegation is returned as opposed or revoked.  The
   processing for constructing nsc, time_modify, and time_metadata would
   use this pseudo code:

       if (!modified) {
           do CB_GETATTR for change and size;

           if (cc != sc)
               modified = TRUE;
       } else {
           do CB_GETATTR for size;
       }

       if (modified) {
           sc = sc + 1;
           time_modify = time_metadata = current_time;
           update sc, time_modify, time_metadata into file's metadata;
       }

   This would return to "dropping" the request as an XDR decode error.  This approach allows for client (that sent GETATTR) the obsolescence of an operation while maintaining its structure
        so that a future minor version can reintroduce attributes it
   requested, but make sure size comes from what CB_GETATTR returned.
   The server would not update the operation.

        1.  Minor versions may declare file's metadata with the client's
   modified size.

   In the case that an the file attribute MUST NOT be
            implemented.

        2.  Minor versions may declare that size is different than the
   server's current value, the server treats this as a flag bit or enumeration modification
   regardless of the value MUST NOT be implemented.

   9.   Minor versions may downgrade features from REQUIRED to
        RECOMMENDED, or RECOMMENDED to OPTIONAL.

   10.  Minor versions may upgrade features from OPTIONAL to RECOMMENDED
        or RECOMMENDED of the change attribute retrieved via
   CB_GETATTR and responds to REQUIRED.

   11.  A the second client and server that support minor version X SHOULD support
        minor versions 0 (zero) through X-1 as well.

   12.  Except for infrastructural changes, no new features may be
        introduced as REQUIRED in a minor version.

        This rule allows for the introduction last step.

   This methodology resolves issues of new functionality clock differences between client
   and
        forces server and other scenarios where the use of implementation experience before designating a
        feature as REQUIRED.  On the other hand, some classes of
        features are infrastructural and have broad effects.  Allowing
        such features to not CB_GETATTR break
   down.

   It should be REQUIRED complicates implementation of noted that the minor version.

   13.  A client MUST NOT attempt server is under no obligation to use a stateid, filehandle, or
        similar returned object from the COMPOUND procedure with minor
        version X for another COMPOUND procedure with minor version Y,
        where X != Y.

12.  Internationalization

   This chapter describes
   CB_GETATTR and therefore the string-handling aspects of server MAY simply recall the NFS version
   4 protocol, and how they address issues related delegation
   to
   internationalization, including issues related to UTF-8,
   normalization, string preparation, case folding, and handling avoid its use.

10.4.4.  Recall of
   internationalization issues related to domains. Open Delegation

   The NFS version 4 protocol needs to deal with internationalization,
   or I18N, following events necessitate recall of an open delegation:

   o  Potentially conflicting OPEN request (or READ/WRITE done with respect to
      "special" stateid)

   o  SETATTR issued by another client

   o  REMOVE request for the file

   o  RENAME request for the file names and other strings as used within either source or target of the protocol.  The choice
      RENAME

   Whether a RENAME of string representation must allow for
   reasonable name/string access a directory in the path leading to clients, applications, and users
   which use various languages.  The UTF-8 encoding the file
   results in recall of an open delegation depends on the UCS as
   defined by [7] allows for this type semantics of access and follows
   the policy
   described server filesystem.  If that filesystem denies such RENAMEs when a
   file is open, the recall must be performed to determine whether the
   file in "IETF Policy on Character Sets and Languages", [8]. question is, in fact, open.

   In implementing such policies, addition to the situations above, the server may choose to recall
   open delegations at any time if resource constraints make it is important
   advisable to understand and
   respect do so.  Clients should always be prepared for the nature
   possibility of NFS version 4 as recall.

   When a means by which client
   implementations may invoke operations receives a recall for an open delegation, it needs to
   update state on remote file systems.  Server
   implementations act as the server before returning the delegation.  These
   same updates must be done whenever a conduit client chooses to return a range
   delegation voluntarily.  The following items of state need to be
   dealt with:

   o  If the file system
   implementations that associated with the NFS version 4 server typically invokes
   through delegation is no longer open and
      no previous CLOSE operation has been sent to the server, a virtual-file-system interface.

   Keeping this context in mind, one needs CLOSE
      operation must be sent to understand that the server.

   o  If a file
   systems with which clients will be interacting will generally not has other open references at the client, then OPEN
      operations must be
   devoted solely sent to access using NFS version 4.  Local access and its
   requirements the server.  The appropriate stateids
      will generally be important and often access over other
   remote file access protocols provided by the server for subsequent use by the client
      since the delegation stateid will not longer be as well.  It is generally a
   functional requirement in practice for valid.  These OPEN
      requests are done with the users claim type of CLAIM_DELEGATE_CUR.  This
      will allow the NFS version 4
   protocol (although it may be formally out presentation of scope for this document)
   for the implementation delegation stateid so that the
      client can establish the appropriate rights to allow files created by other protocols and
   by local operations on perform the OPEN.
      (see Section 15.18 for details.)

   o  If there are granted file system locks, the corresponding LOCK operations
      need to be accessed using NFS
   version 4 as well.

   It also needs performed.  This applies to be understood that the OPEN_DELEGATE_WRITE
      delegation case only.

   o  For a considerable portion OPEN_DELEGATE_WRITE delegation, if at the time of recall the
      file
   name processing will occur within the implementation of is not open for write, all modified data for the file
   system rather than within must be
      flushed to the limits of server.  If the NFS version 4 server
   implementation per se.  As delegation had not existed, the
      client would have done this data flush before the CLOSE operation.

   o  For a result, cetain aspects of name
   processing may change as OPEN_DELEGATE_WRITE delegation when a file is still open at
      the locus time of processing moves from recall, any modified data for the file
   system needs to be
      flushed to the server.

   o  With the OPEN_DELEGATE_WRITE delegation in place, it is possible
      that the file system.  As a result of these factors, was truncated during the protocol
   cannot enforce uniformity duration of name-related processing upon NFS version
   4 server requests on the server delegation.
      For example, the truncation could have occurred as a whole.  Because the server
   interacts result of an
      OPEN UNCHECKED4 with existing file system implementations, a size attribute value of zero.  Therefore,
      if a truncation of the same server
   handling will produce different behavior when interacting with
   different file system implementations.  To attempt to require uniform
   behavior, has occurred and treat this operation has
      not been propagated to the server, the protocol server and truncation must occur
      before any modified data is written to the file system as a
   unified application, would considerably limit server.

   In the usefulness case of OPEN_DELEGATE_WRITE delegation, file locking imposes
   some additional requirements.  To precisely maintain the
   protocol.

12.1.  Use of UTF-8

   As mentioned above, UTF-8 associated
   invariant, it is used as a convenient way required to encode
   Unicode flush any modified data in any region
   for which allows clients that have a write lock was released while the OPEN_DELEGATE_WRITE
   delegation was in effect.  However, because the OPEN_DELEGATE_WRITE
   delegation implies no internationalization
   requirements other locking by other clients, a simpler
   implementation is to avoid these issues since flush all modified data for the mapping of ASCII names
   to UTF-8 is file (as
   described just above) if any write lock has been released while the identity.

12.1.1.  Relation
   OPEN_DELEGATE_WRITE delegation was in effect.

   An implementation need not wait until delegation recall (or deciding
   to Stringprep

   RFC 3454 [9], otherwise known as "stringprep", documents voluntarily return a framework
   for using Unicode/UTF-8 in networking protocols, intended "to
   increase delegation) to perform any of the likelihood above
   actions, if implementation considerations (e.g., resource
   availability constraints) make that string input and string comparison work
   in ways desirable.  Generally, however,
   the fact that make sense for typical users throughout the world."  A
   protocol conforming to this framework must define a profile actual open state of
   stringprep "in order to fully specify the processing options."  NFS
   version 4, while file may continue to
   change makes it does make normative references not worthwhile to stringprep send information about opens and
   uses elements
   closes to the server, except as part of delegation return.  Only in
   the case of closing the open that framework, it does not, for reasons that are
   explained below, conform resulted in obtaining the
   delegation would clients be likely to do this early, since, in that framework, for all
   case, the close once done will not be undone.  Regardless of the strings
   client's choices on scheduling these actions, all must be performed
   before the delegation is returned, including (when applicable) the
   close that are used within it.

   In addition to some specific issues which have caused stringprep corresponds to
   add confusion in handling certain characters for certain languages,
   there are a number of general reasons why stringprep profiles are not
   suitable for describing NFS version 4.

   o  Restricting the character repertoire to Unicode 3.2, as required
      by stringprep is unduly constricting.

   o  Many of open that resulted in the character tables delegation.
   These actions can be performed either in stringprep are inappropriate
      because of this limited character repertoire, so that normative
      reference to stringprep is not desirable previous requests or in many case and instead,
      we allow more flexibility
   previous operations in the definition of case mapping
      tables.

   o  Because of same COMPOUND request.

10.4.5.  OPEN Delegation Race with CB_RECALL

   The server informs the presence client of different file systems, recall via a CB_RECALL.  A race case
   which may develop is when the specifics
      of processing are not fully defined and some aspects that are are
      RECOMMENDED, rather than REQUIRED.

   Despite these issues, in many cases delegation is immediately recalled
   before the general structure of
   stringprep profiles, consisting of sections COMPOUND which deal with established the
   applicability of delegation is returned to
   the description, client.  As the character repertoire, charcter
   mapping, normalization, prohibited characters, CB_RECALL provides both a stateid and issues of a
   filehandle for which the
   handling (i.e., possible prohibition) client has no mapping, it cannot honor the
   recall attempt.  At this point, the client has two choices, either do
   not respond or respond with NFS4ERR_BADHANDLE.  If it does not
   respond, then it runs the risk of bidirectional strings, is a
   convenient way the server deciding to describe not grant it
   further delegations.

   If instead it does reply with NFS4ERR_BADHANDLE, then both the string handling which is needed and
   will be used where appropriate.

12.1.2.  Normalization, Equivalence, client
   and Confusability

   Unicode has defined several equivalence relationships among the set server might be able to detect that a race condition is
   occurring.  The client can keep a list of possible strings.  Understanding pending delegations.  When
   it receives a CB_RECALL for an unknown delegation, it can cache the nature
   stateid and purpose filehandle on a list of these
   equivalence relations pending recalls.  When it is important to understand the handling of
   Unicode strings within NFS version 4.

   Some string pairs are thought as
   provided with a delegation, it would only differing in use it if it was not on the way accents
   and other diacritics are encoded, as illustrated in
   pending recall list.  Upon the examples
   below.  Such string pairs are called "canonically equivalent".

      Such equivalence next CB_RECALL, it could immediately
   return the delegation.

   In turn, the server can occur keep track of when there are precomposed characters,
      as an alternative to encoding it issues a base character in addition delegation and
   assume that if a client responds to the CB_RECALL with a
      combining accent.  For example,
   NFS4ERR_BADHANDLE, then the character LATIN SMALL LETTER E
      WITH ACUTE (U+00E9) is defined as canonically equivalent client has yet to receive the
      string consisting of LATIN SMALL LETTER E followed by COMBINING
      ACUTE ACCENT (U+0065, U+0301).

      When multiple combining diacritics are present, differences in delegation.
   The server SHOULD give the
      ordering are not reflected in resulting display client a reasonable time both to get this
   delegation and to return it before revoking the strings
      are defined as canonically equivalent.  For example, delegation.  Unlike a
   failed callback path, the string
      consisting of LATIN SMALL LETTER Q, COMBINING ACUTE ACCENT,
      COMBINING GRAVE ACCENT (U+0071, U+0301, U+0300) is canonically
      quivalent server should periodically probe the client
   with CB_RECALL to see if it has received the string consisting of LATIN SMALL LETTER Q,
      COMBINING GRAVE ACCENT, COMBINING ACUTE ACCENT (U+0071, U+0300,
      U+0301) delegation and is ready
   to return it.

   When both situations are present, the number of canonically
      equivalent strings can server finally determines that enough time has lapsed, it
   SHOULD revoke the delegation and it SHOULD NOT revoke the lease.
   During this extended recall process, the server SHOULD be greater.  Thus, renewing
   the following strings
      are all canonically equivalent:

         LATIN SMALL LETTER E, COMBINING MACRON, ACCENT, COMBINING ACUTE
         ACCENT (U+0xxx, U+0304, U+0301)

         LATIN SMALL LETTER E, COMBINING ACUTE ACCENT, COMBINING MACRON
         (U+0xxx, U+0301, U+0304)

         LATIN SMALL LETTER E WITH MACRON, COMBINING ACUTE ACCENT
         (U+011E, U+0301)

         LATIN SMALL LETTER E WITH ACUTE, COMBINING MACRON (U+00E9,
         U+0304)

         LATIN SMALL LETTER E WITH MACRON AND ACUTE (U+1E16)

   Additionally there client lease.  The intent here is an equivalence relation of "compatibility
   equivalence".  Two canonically equivalent strings are necessarily
   compatibility equivalent, although not that the converse.  An example of
   compatibility equivalent strings which are client not canonically equivalent
   are GREEK CAPITAL LETTER OMEGA (U+03A9) and OHM SIGN (U+2129).  These
   are identical in appearance while other compatibility equivalent
   strings are not.  Another example would be "x2" and pay too
   onerous a burden for a condition caused by the two character
   string denoting x-squared which are clearly differnt in appearance
   although compatibility equivalent and not canonically equivalent.
   These have Unicode encodings LATIN SMALL LETTER X, DIGIT TWO (U+0078,
   U+0032) and LATIN SMALL LETTER X, SUPERSCRIPT TWO (U+0078, U+00B2),

   One way server.

10.4.6.  Clients that Fail to deal with these equivalence relations is via
   normalization. Honor Delegation Recalls

   A normalization form maps all strings client may fail to respond to a
   correspondig normalized string in recall for various reasons, such as
   a fashion that all strings
   that are equivalent (canonically or compatibly, depending on failure of the
   form) are mapped callback path from server to the same value.  Thus the image client.  The client
   may be unaware of the mapping is a subset of Unicode strings conceived as failure in the representives callback path.  This lack of
   awareness could result in the
   equivalence classes defined by the chosen equivalence relation.

   In client finding out long after the NFS version 4 protocol, handling of issues related to
   internationalization with regard to normalization follows one of two
   basic patterns:

   o  For strings whose function is related to other internet standards,
      such as server
   failure that its delegation has been revoked, and domain naming, another client has
   modified the normalization form defined
      by data for which the appropriate internet standards client had a delegation.  This is used.  For
   especially a problem for the client that held a OPEN_DELEGATE_WRITE
   delegation.

   The server and
      domain naming, this involves normalization form NFKC as specified also has a dilemma in [10]

   o  For that the client that fails to
   respond to the recall might also be sending other strings, particular NFS requests,
   including those passed by that renew the lease before the lease expires.
   Without returning an error for those lease renewing operations, the
   server leads the client to file
      system implementations, normalization requirements are believe that the
      province of delegation it has is in
   force.

   This difficulty is solved by the file system and following rules:

   o  When the job of this specification callback path is
      not to specify down, the server MUST NOT revoke the
      delegation if one of the following occurs:

      *  The client has issued a particular form but to make sure that
      interoperability is maximmized, even when clients RENEW operation and server-based
      file systems have different preferences.

   A related but distinct issue concerns string confusability.  This can
   occur when two strings (including single-charcter strings) having a
   similar appearance.  There have been attempts to define uniform
   processing in the server has
         returned an attempt to avoid such confusion (see stringprep [9])
   but NFS4ERR_CB_PATH_DOWN error.  The server MUST renew
         the results have often added confusion.

   Some examples of possible confusions lease for any byte-range locks and proposed processing intended
   to reduce/avoid confusions:

   o  Deletion of characters believed share reservations the
         client has that the server has known about (as opposed to be invisible and appropriately
      ignored, justifying their deletion, including, WORD JOINER
      (U+2060), those
         locks and share reservations the ZERO WIDTH SPACE (U+200B).

   o  Deletion of characters supposed to client has established but not bear semantics and only
      affect glyph choice, including
         yet sent to the ZERO WIDTH NON-JOINER (U+200C)
      and server, due to the ZERO WIDTH JOINER (U+200D), where delegation).  The server
         SHOULD give the deletion turns out client a reasonable time to be return its
         delegations to the server before revoking the client's
         delegations.

      *  The client has not issued a problem RENEW operation for Farsi speakers.

   o  Prohibition some period of space characters such as
         time after the EM SPACE (U+2003), server attempted to recall the
      EN SPACE (U+2002), and delegation.  This
         period of time MUST NOT be less than the THIN SPACE (U+2009).

   In addition, character pairs which apprear very similar and could and
   often do result value of the
         lease_time attribute.

   o  When the client holds a delegation, it cannot rely on operations,
      except for RENEW, that take a stateid, to renew delegation leases
      across callback path failures.  The client that wants to keep
      delegations in confusion.  In addition force across callback path failures must use RENEW
      to what Unicode defines as
   "compatibility equivalence", do so.

10.4.7.  Delegation Revocation

   At the point a delegation is revoked, if there are a considerable number of
   additional character pairs that could cause confusion. associated opens
   on the client, the applications holding these opens need to be
   notified.  This includes
   characters such as LATIN CAPITAL LETTER O (U+004F) and DIGIT ZERO
   (U+0030), and CYRILLIC SMALL LETTER ER (U+0440) LATIN SMALL LETTER P
   (U+0070) (also with MATHEMATICAL BOLD SMALL P (U+1D429) and GREEK
   SMALL LETTER RHO (U+1D56, notification usually occurs by returning errors for good measure).

   NFS version 4, as it does with normalization, takes
   READ/WRITE operations or when a two-part
   approach to this issue:

   o  For strings whose function close is related to other internet standards,
      such as server and domain naming, any string processing to address attempted for the confusability issue open file.

   If no opens exist for the file at the point the delegation is defined by
   revoked, then notification of the appropriate internet
      standards revocation is used.  For server and domain naming, this unnecessary.
   However, if there is modified data present at the
      responsibility of IDNA as described in [10].

   o  For other strings, particularly those passed by client for the server to file
      system implementations, any such preparation requirements
      including
   file, the choice user of how, or whether to address the
      confusability issue, are application should be notified.  Unfortunately,
   it may not be possible to notify the responsibility of user since active applications
   may not be present at the file system to
      define, and client.  See Section 10.5.1 for this specification to try to add its own set would
      add unacceptably to complexity, additional
   details.

10.5.  Data Caching and make many files accessible
      locally Revocation

   When locks and by other remote file access protocols, inaccessible by
      NFS version 4.  This specification defines how delegations are revoked, the protocol
      maximizes interoperability in assumptions upon which
   successful caching depend are no longer guaranteed.  For any locks or
   share reservations that have been revoked, the face of different file system
      implementations .  NFS version 4 does allow file systems to map
      and to reject characters, including those likely corresponding owner
   needs to result in
      confusion, since be notified.  This notification includes applications with a
   file systems may choose to do such things.  It
      defines what open that has a corresponding delegation which has been revoked.
   Cached data associated with the client will see in such cases, revocation must be removed from the
   client.  In the case of modified data existing in order to limit
      problems the client's cache,
   that can arise when a file name is created and it appears
      to have a different name data must be removed from the one client without it is assigned when being written to
   the name
      is created.

12.2.  String Type Overview
12.2.1.  Overall String Class Divisions

   NFS version 4 server.  As mentioned, the assumptions made by the client are no
   longer valid at the point when a lock or delegation has to deal with been revoked.
   For example, another client may have been granted a large set of diffreent types of
   strings and because of conflicting lock
   after the different role of each,
   internationalization issues will be different for each:

   o  For some types revocation of strings, the fundamental internationalization-
      related decisions are lock at the province of first client.  Therefore, the file system or
   data within the
      security-handling functions of lock range may have been modified by the server and other
   client.  Obviously, the protocol's job first client is unable to establish the rules under which file systems and servers are
      allowed to exercise this freedom, guarantee to avoid adding the
   application what has occurred to confusion.

   o  In other cases, the fundamental internationalization issues are file in the responsibility case of other IETF groups and our jobis simply to
      reference those and perhaps make a few choices as to how they are revocation.

   Notification to be used (e.g., U-labels vs. A-labels).

   o  There are also cases in which a string has a small amount of NFS
      version 4 processing which results lock owner will in one or more strings being
      referred to one many cases consist of simply
   returning an error on the other categories.

   We will divide strings next and all subsequent READs/WRITEs to be dealt with into the following classes:

   MIX  indicating that there is small amount of preparatory processing
      that either picks an internationalization hadling mode
   open file or divides on the string into a set of (two) strings with close.  Where the methods available to a different mode
      internationalization handling client
   make such notification impossible because errors for each. certain
   operations may not be returned, more drastic action such as signals
   or process termination may be appropriate.  The details are discussed
      in the section "Types with Pre-processing to Resolve Mixture
      Issues".

   NIP  indicating that, justification for various reasons, there
   this is no need that an invariant for
      internationalization-specific processing to which an application depends on may be performed.  The
      specifics
   violated.  Depending on how errors are typically treated for the
   client operating environment, further levels of notification
   including logging, console messages, and GUI pop-ups may be
   appropriate.

10.5.1.  Revocation Recovery for Write Open Delegation

   Revocation recovery for a OPEN_DELEGATE_WRITE delegation poses the various string types handled in this way are
      described
   special issue of modified data in the section "String Types without
      Internationalization Processing".

   INET  indicating that client cache while the string needs file is
   not open.  In this situation, any client which does not flush
   modified data to be processed in a fashion
      goverened by non-NFS-specific internet specifications.  The
      details are discussed in the section "Types with Processing
      Defined by Other Internet Areas".

   NFS  indicating server on each close must ensure that the string needs to be processed in user
   receives appropriate notification of the failure as a fashion
      governed by NFSv4-specific considerations.  The primary focus is
      on enabling flexibility for result of the various file systems
   revocation.  Since such situations may require human action to
   correct problems, notification schemes in which the appropriate user
   or administrator is notified may be
      accessed necessary.  Logging and console
   messages are typical examples.

   If there is described in modified data on the section "String Types with NFS-
      specific Processing".

12.2.2.  Divisions by Typedef Parent types

   There are a number of different string types within NFS version 4 and
   internationalization handling will client, it must not be different for different types flushed
   normally to the server.  A client may attempt to provide a copy of strings.  Each
   the types will be file data as modified during the delegation under a different
   name in one of four groups based on the parent type filesystem name space to ease recovery.  Note that specifies when
   the nature of its relationship to utf8
   and ascii.

   utf8_should/USHOULD:  indicating client can determine that strings of this type SHOULD be
      UTF-8 but clients and servers will the file has not check for valid UTF-8
      encoding.

   utf8val_should/UVSHOULD:  indicating that strings been modified by any
   other client, or when the client has a complete cached copy of this type SHOULD
      be and generally will be file
   in the form question, such a saved copy of the UTF-8 encoding client's view of
      Unicode.  Strings in most cases will be checked by the server for
      valid UTF-8 but for certain file systems, such checking may
   be
      inhibited.

   utf8val_must/UVMUST:  indicating that strings of this type MUST be in
      the form particular value for recovery.  In other case, recovery using a
   copy of the UTF-8 encoding of Unicode.  Strings will be
      checked by file based partially on the server for valid UTF-8 client's cached data and
   partially on the server SHOULD ensure
      that when sent to the client, they are valid UTF-8.

   ascii_must/ASCII:  indicating that strings of this type MUST copy as modified by other clients, will be pure
      ASCII, and thus automatically UTF-8.  The processing of these
      string must ensure that they are only have ASCII characters
   anything but
      this need not straightforward, so clients may avoid saving file
   contents in these situations or mark the results specially to warn
   users of possible problems.

   Saving of such modified data in delegation revocation situations may
   be limited to files of a separate step if any normally required check
      for validity inherently assures that certain size or might be used only ASCII characters are
      present.

   In those cases where UTF-8 when
   sufficient disk space is not required, USHOULD and UVSHOULD, and
   strings that are not valid UTF-8 are received and accepted, available within the
   receiver MUST NOT modify target filesystem.
   Such saving may also be restricted to situations when the strings.  For example, setting
   particular bits such as client has
   sufficient buffering resources to keep the high-order bit cached copy available
   until it is properly stored to zero MUST NOT be done.

12.2.3. the target filesystem.

10.6.  Attribute Caching

   The attributes discussed in this section do not include named
   attributes.  Individual Types named attributes are analogous to files and Their Handling

   The first table outlines
   caching of the handling data for the primary string types,
   i.e., those not derived as a prefix or a suffix from a mixture type.

   +-----------------+----------+-------+------------------------------+
   | Type            | Parent   | Class | Explanation                  |
   +-----------------+----------+-------+------------------------------+
   | comptag4        | USHOULD  | NIP   | Should these needs to be utf8 but no        |
   |                 |          |       | validation by server or      |
   |                 |          |       | client handled just as data
   caching is for ordinary files.  Similarly, LOOKUP results from an
   OPENATTR directory are to be done.        |
   | component4      | UVSHOULD | NFS   | Should be utf8 but clients   |
   |                 |          |       | may need to access file      |
   |                 |          |       | systems with a different     |
   |                 |          |       | name structure, such cached on the same basis as any other
   pathnames and similarly for directory contents.

   Clients may cache file |
   |                 |          |       | systems attributes obtained from the server and use
   them to avoid subsequent GETATTR requests.  Such caching is write
   through in that have non-utf8   |
   |                 |          |       | names.                       |
   | linktext4       | UVSHOULD | NFS   | Should be utf8 since text    |
   |                 |          |       | may include name components. |
   |                 |          |       | Because modification to file attributes is always done by
   means of requests to the need server and should not be done locally and
   cached.  The exception to       |
   |                 |          |       | access existing file         |
   |                 |          |       | systems, this check may be   |
   |                 |          |       | inhibited.                   |
   | fattr4_mimetype | ASCII    | NIP   | All mime types are ascii so  |
   |                 |          |       | no specific utf8 processing  |
   |                 |          |       | is required, given that you  |
   |                 |          |       | are comparing modifications to attributes that list.  |
   +-----------------+----------+-------+------------------------------+

                                  Table 5

   There
   are intimately connected with data caching.  Therefore, extending a number of string types that are subject
   file by writing data to preliminary
   processing.  This processing may take the form either of selecting
   one of two possible forms based on the string contents or it local data cache is reflected immediately
   in may
   consist of dividing the string into multiple conjoined strings each
   with different utf8-related processing.

   +---------+--------+-------+----------------------------------------+
   | Type    | Parent | Class | Explanation                            |
   +---------+--------+-------+----------------------------------------+
   | prin4   | UVMUST | MIX   | Consists of two parts separated by an  |
   |         |        |       | at-sign, a prinpfx4 and a prinsfx4.    |
   |         |        |       | These are described in size as seen on the next table. |
   | server4 | UVMUST | MIX   | Is either an IP address (serveraddr4)  |
   |         |        |       | which has client without this change being
   immediately reflected on the server.  Normally such changes are not
   propagated directly to be pure ascii or a the server |
   |         |        |       | name svrname4, which but when the modified data is described      |
   |         |        |       | immediately below.                     |
   +---------+--------+-------+----------------------------------------+

                                  Table 6

   The last table describes
   flushed to the components of server, analogous attribute changes are made on the compound types
   described above.

   +----------+--------+------+----------------------------------------+
   | Type     | Class  | Def  | Explanation                            |
   +----------+--------+------+----------------------------------------+
   | svraddr4 | ASCII  | NIP  | Server as IP address, whether IPv4 or  |
   |          |        |      | IPv6.                                  |
   | svrname4 | UVMUST | INET | Server name as returned by
   server.     |
   |          |        |      | Not sent by client, except  When open delegation is in          |
   |          |        |      | VERIFY/NVERIFY.                        |
   | prinsfx4 | UVMUST | INET | Suffix part of principal, effect, the modified attributes
   may be returned to the server in the form  |
   |          |        |      | of response to a domain name.                      |
   | prinpfx4 | UVMUST | NFS  | Must match one CB_RECALL call.

   The result of a list local caching of valid      |
   |          |        |      | users or groups for attributes is that particular    |
   |          |        |      | domain.                                |
   +----------+--------+------+----------------------------------------+

                                  Table 7

12.3.  Errors Related to Strings

   When the client sends an invalid UTF-8 string in a context attribute
   caches maintained on individual clients will not be coherent.
   Changes made in which
   UTF-8 is REQUIRED, one order on the server MUST return an NFS4ERR_INVAL error.
   Within the framework of the previous section, this applies to strings
   whose type is defined as utf8val_must or ascii_must.  When the client
   sends an invalid UTF-8 string may be seen in a context in which UTF-8 is
   RECOMMENDED different
   order on one client and the server should test in a third order on a different client.

   The typical filesystem application programming interfaces do not
   provide means to atomically modify or interrogate attributes for UTF-8,
   multiple files at the server SHOULD
   return same time.  The following rules provide an NFS4ERR_INVAL error.  Within the framework of
   environment where the previous
   section, this applies to strings whose type is defined as
   utf8val_should. potential incoherency mentioned above can be
   reasonably managed.  These situations apply to cases in which
   inappropriate prefixes rules are detected and where derived from the count includes
   trailing bytes that do not constitute practice of
   previous NFS protocols.

   o  All attributes for a full UCS character.

   Where the client-supplied string is valid UTF-8 but contains
   characters that are not supported by the server given file system (per-fsid attributes excepted) are
      cached as a
   value for unit at the client so that string (e.g., names containing characters that have
   more than two octets on no non-serializability can
      arise within the context of a file system that supports UCS-2 characters
   only, file name components containing slashes single file.

   o  An upper time boundary is maintained on file systems how long a client cache
      entry can be kept without being refreshed from the server.

   o  When operations are performed that do
   not allow them in file name components), change attributes at the server MUST return an
   NFS4ERR_BADCHAR error.

   Where a UTF-8 string
      server, the updated attribute set is used requested as part of the
      containing RPC.  This includes directory operations that update
      attributes indirectly.  This is accomplished by following the
      modifying operation with a file name component, GETATTR operation and then using the file
   system, while supporting all
      results of the characters within GETATTR to update the name, does
   not allow client's cached attributes.

   Note that particular name if the full set of attributes to be used, cached is requested by
   READDIR, the server should return results can be cached by the error NFS4ERR_BADNAME.  This includes file system prohibitions of
   "." and ".." client on the same basis as file names
   attributes obtained via GETATTR.

   A client may validate its cached version of attributes for certain operations, a file by
   fetching just both the change and time_access attributes and assuming
   that if the change attribute has the same value as it did when the
   attributes were cached, then no attributes other such
   similar constraints.  It than time_access
   have changed.  The reason why time_access is also fetched is because
   many servers operate in environments where the operation that updates
   change does not include use of strings with non-
   preferred normalization modes.

   Where a UTF-8 string is used as update time_access.  For example, POSIX file
   semantics do not update access time when a file name component, is modified by the file
   write system implementation MUST NOT return NFS4ERR_BADNAME, simply due to
   a normalization mismatch.  In such cases call.  Therefore, the implementation SHOULD
   convert client that wants a current
   time_access value should fetch it with change during the string to attribute
   cache validation processing and update its own preferred normalization mode before
   performing the operation.  As a result, a cached time_access.

   The client cannot assume that may maintain a
   file created cache of modified attributes for those
   attributes intimately connected with data of modified regular files
   (size, time_modify, and change).  Other than those three attributes,
   the client MUST NOT maintain a name it specifies will have that name when cache of modified attributes.
   Instead, attribute changes are immediately sent to the
   directory is read.  It may have instead, server.

   In some operating environments, the name converted equivalent to time_access is
   expected to be implicitly updated by each read of the content of the
   file system's preferred normalization form.

   Where a UTF-8 string object.  If an NFS client is used as other than as caching the content of a file name component (or
   as
   object, whether it is a regular file, directory, or symbolic link text) and link,
   the string does not meet client SHOULD NOT update the normalization
   requirements specified for it, time_access attribute (via SETATTR
   or a small READ or READDIR request) on the error NFS4ERR_INVAL is returned.

12.4.  Types server with Pre-processing to Resolve Mixture Issues

12.4.1.  Processing of Principal Strings

   Strings denoting principals (users or groups) MUST be UTF-8 but since
   they consist each read that
   is satisfied from cache.  The reason is that this can defeat the
   performance benefits of a principal prefix, caching content, especially since an at-sign, and a domain, all
   three explicit
   SETATTR of which either are checked for being UTF-8, or inherently time_access may alter the change attribute on the server.
   If the change attribute changes, clients that are
   UTF-8, checking caching the string as a whole for being UTF-8 content
   will think the content has changed, and will re-read unmodified data
   from the server.  Nor is not
   required.  Although a server implementation may choose the client encouraged to make maintain a modified
   version of time_access in its cache, since this
   check on would mean that the string as whole, for example in converting it
   client will either eventually have to
   Unicode, write the description within this document, will reflect a
   processing model in which such checking happens after access time to the division
   into a principal prefix and suffix,
   server with bad performance effects, or it would never update the latter being
   server's time_access, thereby resulting in the form of a
   domain name.

   The string should be scanned for at-signs.  If there is more situation where an
   application that one
   at-sign, caches access time between a close and open of the string is considered invalid.  For cases in which there
   are no at-signs or
   same file observes the at-sign appears at access time oscillating between the start or end past and
   present.  The time_access attribute always means the time of last
   access to a file by a read that was satisfied by the
   string server.  This
   way clients will tend to see Interpreting owner only time_access changes that go forward
   in time.

10.7.  Data and owner_group.  Otherwise, Metadata Caching and Memory Mapped Files

   Some operating environments include the
   portion before capability for an application
   to map a file's content into the at-sign is dealt with as application's address space.  Each
   time the application accesses a prinpfx4 memory location that corresponds to a
   block that has not been loaded into the address space, a page fault
   occurs and the
   portion after file is dealt with as a prinsfx4.

12.4.2.  Processing of Server Id Strings

   Server id strings typically appear read (or if the block does not exist in responses (as attribute values) the
   file, the block is allocated and only appear then instantiated in requests the
   application's address space).

   As long as an attribute value presented each memory mapped access to VERIFY
   and NVERIFY.  With the file requires a page
   fault, the relevant attributes of the file that exception, they are not subject used to server
   validation detect
   access and posible rejection.  It is modification (time_access, time_metadata, time_modify, and
   change) will be updated.  However, in many operating environments,
   when page faults are not expected that clients required these attributes will typically do such validation not be
   updated on receipt of responses but they
   may as a way reads or updates to check for proper server behavior.  The responsibility
   for sending correct UTF-8 strings is with the server.

   Servers are identified by either server names file via memory access (regardless
   whether the file is local file or IP addresses.  Once
   an id has been identified as an IP address, then there is no
   processing specific to internationalization to be done, since such an
   address must be ASCII being access remotely).  A
   client or server MAY fail to be valid.

12.5.  String Types without Internationalization Processing

   There are a number of types update attributes of strings which, for a number of
   different reasons, do not require any internationalization-specific
   handling, such as validation of UTF-8, normalization, or character
   mapping or checking. file that is
   being accessed via memory mapped I/O. This does not necessarily mean has several implications:

   o  If there is an application on the server that has memory mapped a
      file that a client is also accessing, the strings
   need client may not be UTF-8.  In some case, other checking on able
      to get a consistent value of the string
   ensures change attribute to determine
      whether its cache is stale or not.  A server that they are valid UTF-8, without doing any checking
   specific knows that the
      file is memory mapped could always pessimistically return updated
      values for change so as to internationalization.

   The following are force the specific types:

   comptag4  strings are an aid application to debugging always get the
      most up to date data and metadata for the sender should avoid
      confusion by file.  However, due to
      the negative performance implications of this, such behavior is
      OPTIONAL.

   o  If the memory mapped file is not using anything but valid UTF-8.  But any work
      validating being modified on the string or modifying it would only add complication
      to a mechanism whose basic function server, and
      instead is best supported just being read by making it an application via the memory mapped
      interface, the client will not subject to any checking and having data maximally available to
      be looked at see an updated time_access
      attribute.  However, in a network trace.

   fattr4_mimetype  strings need to be validated by matching against a
      list of valid mime types.  Since these many operating environments, neither will
      any process running on the server.  Thus NFS clients are all ASCII, at no
      processing specific
      disadvantage with respect to internationaliztion local processes.

   o  If there is required since
      anything another client that does not match is invalid and anything which does
      not obey memory mapping the rules of UTF-8 will not be ASCII and consequently
      will not match, file, and will be invalid.

   svraddr4  strings, in order to be valid, need to be ASCII, but if you
      check them for validity, you have inherently checked
      that that
      they are ASCII and thus UTF-8.

12.6.  Types with Processing Defined by Other Internet Areas

   There are two types of strings which NFS version 4 deals with whose
   processing client is defined by other Internet standards, and where holding a OPEN_DELEGATE_WRITE delegation, the same
      set of issues
   related to different handling choices by server operating systems or
   server file systems do not apply.

   These are as follows:

   o  Server names as they appear discussed in the fs_locations attribute.  Note
      that for most purposes, such previous two bullet items apply.
      So, when a server names does a CB_GETATTR to a file that the client has
      modified in its cache, the response from CB_GETATTR will only not
      necessarily be sent by accurate.  As discussed earlier, the
      server client's
      obligation is to report that the client.  The exception is use of file has been modified since the fs_locations
      attribute in a VERIFY or NVERIFY operation.

   o  Principal suffixes which are used to denote sets of users and
      groups, and are in the form of domain names.

   The general rules for handling all of these domain-related strings
   are similar
      delegation was granted, not whether it has been modified again
      between successive CB_GETATTR calls, and independent of role the of server MUST assume
      that any file the sender or receiver as client or server although has modified in cache has been modified
      again between successive CB_GETATTR calls.  Depending on the consequences
      nature of failure to obey these
   rules the client's memory management system, this weak
      obligation may not be different for client or server.  The server can report
   errors when it is sent invalid strings, whereas the possible.  A client will
   simply ignore invalid string or use a default value in their place.

   The string sent SHOULD be in the form of a U-label although it MAY be return stale
      information in CB_GETATTR whenever the form of an A-label or a UTF-8 string that would not map to
   itself when canonicalized by applying ToUnicode(ToASCII(...)). file is memory mapped.

   o  The
   receiver needs to be able to accept domain and server names in any mixture of memory mapping and file locking on the formats allowed.  The server MUST reject, using same file is
      problematic.  Consider the following scenario, where the error
   NFS4ERR_INVAL, a string which page size
      on each client is not valid UTF-8 or which begins with
   "xn--" and violates the rules for 8192 bytes.

      *  Client A memory maps first page (8192 bytes) of file X

      *  Client B memory maps first page (8192 bytes) of file X

      *  Client A write locks first 4096 bytes

      *  Client B write locks second 4096 bytes
      *  Client A, via a valid A-label.

   When STORE instruction modifies part of its locked
         region.

      *  Simultaneous to client A, client B issues a domain string is STORE on part of id@domain or group@domain,
         its locked region.

   Here the server
   SHOULD map domain strings which are A-labels or are UTF-8 domain
   names which are not U-labels, challenge is for each client to the corresponding U-label, using
   ToUnicode(domain) or ToUnicode(ToASCII(domain)).  As resynchronize to get a result,
   correct view of the
   domain name returned within a userid first page.  In many operating environments, the
   virtual memory management systems on each client only know a GETATTR may page is
   modified, not match that
   sent when a subset of the userid page corresponding to the
   respective lock regions has been modified.  So it is set using SETATTR, although when this
   happens, not possible for
   each client to do the domain will be in right thing, which is to only write to the form
   server that portion of a U-label.  When the
   server does not map domain strings which are not U-labels into a
   U-label, which it MAY do, it MUST NOT modify page that is locked.  For example, if
   client A simply writes out the domain page, and then client B writes out the
   domain returned
   page, client A's data is lost.

   Moreover, if mandatory locking is enabled on a GETATTR of the userid MUST be file, then we have a
   different problem.  When clients A and B issue the same as that
   used when setting STORE
   instructions, the userid by resulting page faults require a byte-range lock on
   the SETATTTR.

   The server MAY implement VERIFY and NVERIFY without translating
   internal state entire page.  Each client then tries to a string form, so that, for example, a user
   principal extend their locked range
   to the entire page, which represents results in a specific numeric user id, will match deadlock.

   Communicating the NFS4ERR_DEADLOCK error to a
   different principal string which represents STORE instruction is
   difficult at best.

   If a client is locking the same numeric user id.

12.7.  String Types entire memory mapped file, there is no
   problem with NFS-specific Processing

   For advisory or mandatory byte-range locking, at least until
   the client unlocks a number of data types within NFSv4, region in the primary responsbibility
   for internationalization-related handling is that middle of some entity
   other than the server itself (see below for details).  In these
   situations, file.

   Given the primary responsibility of NFS version 4 is to provide
   a framework in which that other entity (file system and server
   operating system principal naming framework) implements its own
   decisions while establishing rules to limit interoperability issues.

   This pattern applies to above issues the following data types: are permitted:

   o  In the case of name components (strings of type component4), the
      server-side  Clients and servers MAY deny memory mapping a file system implementation (of which they know there may be more
      than one for
      are byte-range locks for.

   o  Clients and servers MAY deny a particular server) deals with internationalization
      issues, in byte-range lock on a fashion file they know
      is memory mapped.

   o  A client MAY deny memory mapping a file that it knows requires
      mandatory locking for I/O. If mandatory locking is appropriate to NFS version 4, other
      remote enabled after
      the file access protocols, is opened and local file access methods.  See
      "Handling of File Name Components" for mapped, the detailed treatment.

   o  In client MAY deny the case of link text strings (strings application
      further access to its mapped file.

10.8.  Name Caching

   The results of type lintext4), LOOKUP and READDIR operations may be cached to avoid
   the
      issues are similar, but file systems are restricted cost of subsequent LOOKUP operations.  Just as in the set case of
      acceptable internationalization-related processing that they may
      do, principally because symbolic links
   attribute caching, inconsistencies may contain name componetns
      that, when used, are presented to other file systems and/or other
      servers.  See "Processing of Link Text" for arise among the detailed
      treatment.

   o  In various client
   caches.  To mitigate the case effects of principal prefix strings, any decisions regarding
      internationalization are these inconsistencies and given
   the responsibility context of the server
      operating systems which may make its own rules regarding user and
      group typical filesystem APIs, an upper time boundary is
   maintained on how long a client name encoding.  See "Processing of Principal Prefixes" for cache entry can be kept without
   verifying that the detailed treatment.

12.7.1.  Handling of File Name Components

   There are entry has not been made invalid by a directory
   change operation performed by another client.

   When a number of places within client and server where file is not making changes to a directory for which there
   exist name
   components are processed:

   o  On cache entries, the client, file names may be processed as part of forming NFS
      version 4 requests.  Any such processing will reflect specific client needs of the client's environment and will be treated as out-of-
      scope from to periodically fetch
   attributes for that directory to ensure that it is not being
   modified.  After determining that no modification has occurred, the viewpoint of
   expiration time for the associated name cache entries may be updated
   to be the current time plus the name cache staleness bound.

   When a client is making changes to a given directory, it needs to
   determine whether there have been changes made to the directory by
   other clients.  It does this specification.

   o  On by using the server, file names are processed change attribute as part of processing NFS
      version 4 requests.  In practice, parts of
   reported before and after the processing will be
      implemented within directory operation in the NFS version 4 associated
   change_info4 value returned for the operation.  The server while other parts will
      be implemented within is able to
   communicate to the file system.  This processing client whether the change_info4 data is
      described in provided
   atomically with respect to the sections below.  These sections directory operation.  If the change
   values are organized in
      a fashion parallel provided atomically, the client is then able to a stringprep profile.  The same sorts of
      topics are dealt compare
   the pre-operation change value with but they differ the change value in the client's
   name cache.  If the comparison indicates that there the directory was
   updated by another client, the name cache associated with the
   modified directory is a wider
      range of possible processing choices.

   o  On purged from the client.  If the comparison
   indicates no modification, the server, file name components might potentially cache can be subject updated on the
   client to processing as part of generating NFS version 4 responses.  This
      specification assumes that this processing will be empty reflect the directory operation and that
      file name components will be copied verbatim at this point. the associated timeout
   extended.  The
      file name components may post-operation change value needs to be modified saved as they appear in responses,
      relative to the values used in
   basis for future change_info4 comparisons.

   As demonstrated by the request but this is only
      treated as reflecting changes made as part of request processing.
      For example, a change to a file scenario above, name component made in processing caching requires that the
   client revalidate name cache data by inspecting the change attribute
   of a CREATE operation will be reflected in directory at the READDIR since point when the
      files created will have names name cache item was cached.
   This requires that reflect CREATE-time processing.

   o  On the client, responses will need server update the change attribute for
   directories when the contents of the corresponding directory is
   modified.  For a client to be properly dealt with use the change_info4 information
   appropriately and correctly, the relevant issues will be discussed in server must report the sections below.

      Primarily, this will involve dealing with pre and post
   operation change attribute values atomically.  When the fact that file name
      components received in responses may need server is
   unable to be processed report the before and after values atomically with respect
   to meet the requirements of directory operation, the client's internal environment.  This will
      mainly involve dealing with changes in name components possibly
      made by server processing.  It also addresses other sorts of
      expected behavior must indicate that do fact in the
   change_info4 return value.  When the information is not involve a returned component4, such
      as whether a LOOKUP finds a given component4 or whether a CREATE
      or OPEN finds atomically
   reported, the client should not assume that a specified name already exists.

12.7.1.1.  Nature of Server Processing of Name Components in Request other clients have not
   changed the directory.

10.9.  Directory Caching

   The component4 type defines a potentially case sensitive string,
   typically results of UTF-8 characters.  Its use in NFS version 4 is for
   representing file name components.  Since file systems can implement
   case insensitive file name handling, it can READDIR operations may be used for both case
   sensitive to avoid subsequent
   READDIR operations.  Just as in the cases of attribute and case insensitive file name handling, based on
   caching, inconsistencies may arise among the
   attributes various client caches.

   To mitigate the effects of these inconsistencies, and given the file system.

   It may be
   context of typical filesystem APIs, the case that two valid distinct UTF-8 strings will following rules should be the
   same after the processing described below.  In such
   followed:

   o  Cached READDIR information for a case, directory which is not obtained
      in a server
   may either,

   o  disallow the creation of single READDIR operation must always be a second name if its post-processed form
      collides with that consistent snapshot
      of an existing name, or

   o  allow directory contents.  This is determined by using a GETATTR
      before the creation of first READDIR and after the second name, but arrange so last of READDIR that after
      post processing,
      contributes to the second name cache.

   o  An upper time boundary is different than maintained to indicate the post-
      processed form length of
      time a directory cache entry is considered valid before the first name.

12.7.1.2.  Character Repertoire for client
      must revalidate the Component4 Type cached information.

   The RECOMMENDED character repertoire for file revalidation technique parallels that discussed in the case of
   name components caching.  When the client is a
   recent/current version of Unicode, as encoded via UTF-8.  There are a
   number of alternate character repertoires which may be chosen by not changing the
   server based on implementation constraints including directory in
   question, checking the requirements change attribute of the file system being accessed.

   Two important alternative repertoires are:

   o  One alternate character repertoire is to represent file name
      components as strings of bytes directory with no protocol-defined encoding
      of multi-byte characters.  Most typically, implementations that
      support this single-byte alternative will make it available as an
      option set by an administrator for all file systems within a
      server or for some particular file systems.  If a server accepts
      non-UTF-8 strings anywhere within a specific file system, then it
      MUST do so throughout the entire file system.

   o  Another alternate character repertoire GETATTR
   is the set adequate.  The lifetime of codepoints,
      representable by the file system, most typically UCS-4.

   Individual file system implementations may have more restricted
   character repertoires, as for example file system that only are
   capable of storing names consisting of UCS-2 characters. cache entry can be extended at
   these checkpoints.  When this a client is modifying the case, and directory, the character repertoire is not restricted
   client needs to
   single-byte characters, characters not within that repertoire use the change_info4 data to determine whether there
   are
   treated as prohibited and other clients modifying the error NFS4ERR_BADCHAR directory.  If it is returned by
   the server when determined that character is encountered.

   Strings
   no other client modifications are intended occurring, the client may update
   its directory cache to be in UTF-8 format and servers SHOULD return
   NFS4ERR_INVAL, as discussed above, when reflect its own changes.

   As demonstrated previously, directory caching requires that the characters sent are not
   valid UTF-8.  When
   client revalidate directory cache data by inspecting the character repertoire consists of single-byte
   characters, UTF-8 is not enforced.  Such situations should be
   restricted to those where use is within a restricted environment
   where a single character mapping locale can be administratively
   enforced, allowing a file name to be treated as a string change
   attribute of bytes,
   rather than as a string of characters.  Such an arrangement might be
   necessary directory at the point when NFS version 4 access to a file system containing names
   which are not valid UTF-8 needs to be provided.

   However, in any the directory was cached.
   This requires that the server update the change attribute for
   directories when the contents of the following situations, file names have corresponding directory is
   modified.  For a client to be
   treated as strings of Unicode characters and servers MUST return
   NFS4ERR_INVAL when file names that are not in UTF-8 format:

   o  Case-insensitive comparisons are specified by use the file system change_info4 information
   appropriately and
      any characters sent contain non-ASCII byte codes.

   o  Any normalization constraints are enforced by correctly, the server or file
      system implementation.

   o  The server accepts a given name when creating a file must report the pre and reports a
      different one when post
   operation change attribute values atomically.  When the directory server is being examined.

   Much of
   unable to report the discussion below regarding normalization before and silent
   deletion of characters within component4 strings is not applicable
   when after values atomically with respect
   to the directory operation, the server does not enforce UTF-8 component4 strings and treats
   them as strings of bytes.  A client may determine must indicate that a given
   filesystem is operating fact in this mode by performing a LOOKUP using a
   non-UTF-8 string, if NFS4ERR_INVAL the
   change_info4 return value.  When the information is not returned, then name
   components will be treated as opaque and those sorts of modifications
   will atomically
   reported, the client should not be seen.

12.7.1.3.  Case-based Mapping Used for Component4 Strings

   Case-based mapping is assume that other clients have not always a required part of server processing
   of name components.  However, if
   changed the directory.

11.  Minor Versioning

   To address the requirement of an NFS version 4 file server
   supports protocol that can evolve as the case_insensitive file system attribute, and if
   need arises, the
   case_insensitive attribute is true NFSv4 protocol contains the rules and framework to
   allow for future minor changes or versioning.

   The base assumption with respect to minor versioning is that any
   future accepted minor version must follow the IETF process and be
   documented in a given file system, standards track RFC.  Therefore, each minor version
   number will correspond to an RFC.  Minor version zero of the NFS
   version 4 server MUST use protocol is represented by this RFC.  The COMPOUND and
   CB_COMPOUND procedures support the Unicode case mapping tables for encoding of the minor version of Unicode corresponding to
   being requested by the character repertoire.  In client.

   The following items represent the
   case where basic rules for the character repertoire is UCS-2 development of
   minor versions.  Note that a future minor version may decide to
   modify or UCS-4, add to the case
   mapping tables from following rules as part of the latest available minor version of Unicode SHOULD be
   used.

   If
   definition.

   1.   Procedures are not added or deleted

        To maintain the case_preserving attribute is present and set general RPC model, NFSv4 minor versions will not
        add to false, then or delete procedures from the NFS version 4 server MUST use the corresponding Unicode case
   mapping table program.

   2.   Minor versions may add operations to map case when processing component4 strings.
   Whether the server maps from lower COMPOUND and
        CB_COMPOUND procedures.

        The addition of operations to upper case or the upper to
   lower case is a matter for implementation choice.

   Stringprep Table B.2 should not be used for these purpose since it is
   limited to Unicode version 3.2 COMPOUND and also because it erroneously maps CB_COMPOUND
        procedures does not affect the German ligature eszett RPC model.

        1.  Minor versions may append attributes to the string "ss", whereas later versions
   of Unicode contain both lower-case and upper-case versions bitmap4 that
            represents sets of Eszett
   (SMALL LETTER SHARP S attributes and CAPITAL LETTER SHARP S).

   Clients should be aware that servers may have mapped SMALL LETTER
   SHARP S to the string "ss" when case-insensitive mapping is in
   effect, with result that file whose name contains SMALL LETTER SHARP
   S may have fattr4 that character replaced by "ss" or "SS".

12.7.1.4.  Other Mapping Used for Component4 Strings

   Other than for issues
            represents sets of case mapping, an NFS version 4 server SHOULD
   limit visible (i.e., those that change attribute values.

            This allows for the name expansion of file to reflect
   those mappings the attribute model to those from from a subset
            allow for future growth or adaptation.

        2.  Minor version X must append any new attributes after the
            last documented attribute.

            Since attribute results are specified as an opaque array of
            per-attribute XDR encoded results, the stringprep table
   B.1.  Note particularly, complexity of adding
            new attributes in the mapings from U+200C and U+200D to midst of the
   empty string should be avoided, due to their undesirable effect on
   some strings in Farsi.

   Table B.1 may current definitions would
            be used but it too burdensome.

   3.   Minor versions must not modify the structure of an existing
        operation's arguments or results.

        Again, the complexity of handling multiple structure definitions
        for a single operation is too burdensome.  New operations should
        be used only if required by the
   local file system implementation.  For example, if added instead of modifying existing structures for a minor
        version.

        This rule does not preclude the file system following adaptations in
   question accepts file names containing the MONGOLIAN TODO SOFT HYPHEN
   character (U+1806) a minor
        version.

        *  adding bits to flag fields, such as new attributes to
           GETATTR's bitmap4 data type, and they are distinct from the providing corresponding file
   names
           variants of opaque arrays, such as a notify4 used together
           with this character removed, then using Table B.1 will cause
   functional problems when clients attempt such bitmaps

        *  adding bits to interact with existing attributes like ACLs that file
   system.  The NFS version 4 server implementation including have flag
           words

        *  extending enumerated types (including NFS4ERR_*) with new
           values

   4.   Minor versions must not modify the
   filesystem structure of existing
        attributes.

   5.   Minor versions must not delete operations.

        This prevents the potential reuse of a particular operation
        "slot" in a future minor version.

   6.   Minor versions must not delete attributes.

   7.   Minor versions must not delete flag bits or enumeration values.

   8.   Minor versions may declare an operation MUST NOT silently remove characters not within Table B.1.

   If be implement.

        Specifying that an implementation wishes operation MUST NOT be implemented is
        equivalent to eliminate other characters because obsoleting an operation.  For the client, it
   is believed that allowing component name versions means
        that both include
   the character and do not have while otherwise the same, will
   contribute operation MUST NOT be sent to confusion, it has two options:

   o  Treat the characters server.  For the
        server, an NFS error can be returned as prohibited and return NFS4ERR_BADCHAR.

   o  Eliminate opposed to "dropping"
        the character request as part of an XDR decode error.  This approach allows for
        the name matching processing, obsolescence of an operation while retaining it when maintaining its structure
        so that a file is created.  This would future minor version can reintroduce the operation.

        1.  Minor versions may declare that an attribute MUST NOT be
      analogous to file systems
            implemented.

        2.  Minor versions may declare that are both case-insensitive and case-
      preserving,as dicussed above, a flag bit or those which are both
      normalization-insensitive enumeration
            value MUST NOT be implemented.

   9.   Minor versions may downgrade features from REQUIRED to
        RECOMMENDED, or RECOMMENDED to OPTIONAL.

   10.  Minor versions may upgrade features from OPTIONAL to RECOMMENDED
        or RECOMMENDED to REQUIRED.

   11.  A client and normalization-preserving, server that support minor version X SHOULD support
        minor versions 0 (zero) through X-1 as
      discussed below.  The handling will well.

   12.  Except for infrastructural changes, no new features may be insensitive to
        introduced as REQUIRED in a minor version.

        This rule allows for the presence introduction of new functionality and
        forces the chosen characters while preserving use of implementation experience before designating a
        feature as REQUIRED.  On the presence other hand, some classes of
        features are infrastructural and have broad effects.  Allowing
        infrastructural features to be RECOMMENDED or absence OPTIONAL
        complicates implementation of such characters within names.

   Note that the second of these choices is a desirable way minor version.

   13.  A client MUST NOT attempt to handle
   characters within table B.1, again use a stateid, filehandle, or
        similar returned object from the COMPOUND procedure with minor
        version X for another COMPOUND procedure with minor version Y,
        where X != Y.

12.  Internationalization

   This chapter describes the exception string-handling aspects of U+200C the NFSv4
   protocol, and
   U+200D, which can cause how they address issues for Farsi.

   In addition related to modification due
   internationalization, including issues related to UTF-8,
   normalization, discussed below,
   clients have string preparation, case folding, and handling of
   internationalization issues related to be able domains.

   The NFSv4 protocol needs to deal with name modifications internationalization, or I18N,
   with respect to file names and other
   consequences of character mapping on the server, strings as discussed above.

12.7.1.5.  Normalization Issues for Component Strings used within the
   protocol.  The issues are best discussed separately choice of string representation must allow for
   reasonable name/string access to clients, applications, and users
   which use various languages.  The UTF-8 encoding of the server UCS as
   defined by [7] allows for this type of access and follows the
   client.  It policy
   described in "IETF Policy on Character Sets and Languages", [8].

   In implementing such policies, it is important to note that the server understand and
   respect the nature of NFSv4 as a means by which client
   implementations may have
   different approaches invoke operations on remote file systems.  Server
   implementations act as a conduit to this area, and a range of file system
   implementations that the NFSv4 server choice may not
   match typically invokes through a
   virtual-file-system interface.

   Keeping this context in mind, one needs to understand that the client operating environment.  The issue of mismatches file
   systems with which clients will be interacting will generally not be
   devoted solely to access using NFS version 4.  Local access and
   how they may its
   requirements will generally be best dealt with by the client important and often access over other
   remote file access protocols will be as well.  It is discussed in generally a later
   section.

12.7.1.5.1.  Server Normalization Issues
   functional requirement in practice for Component Strings

   The NFS version 4 does not specify required use the users of a particular
   normalization form for component4 strings.  Therefore, the server may
   receive unnormalized strings or strings that reflect either
   normalization form within NFSv4
   protocol requests (although it may be formally out of scope for this document)
   for the implementation to allow files created by other protocols and responses.  If
   by local operations on the file system requires normalization, then to be accessed using NFS
   version 4 as well.

   It also needs to be understood that a considerable portion of file
   name processing will occur within the server implementation
   must normalize component4 strings of the file
   system rather than within the protocol limits of the NFSv4 server before
   presenting
   implementation per se.  As a result, cetain aspects of name
   processing may change as the information locus of processing moves from file
   system to the local file system.

   With regard to normalization, servers have  As a result of these factors, the following choices, protocol
   cannot enforce uniformity of name-related processing upon NFSv4
   server requests on the server as a whole.  Because the server
   interacts with existing file system implementations, the possibility that same server
   handling will produce different choices may be selected for behavior when interacting with
   different file systems.

   o  Implement a particular normalization form, either NFC, or NFD, in
      which case file names received from a client are converted system implementations.  To attempt to that
      normalization form require uniform
   behavior, and treat the the protocol server and the file system as a consequence,
   unified application, would considerably limit the client will always
      receive names in that normalization form.  If this option is
      chosen, then it usefulness of the
   protocol.

12.1.  Use of UTF-8

   As mentioned above, UTF-8 is impossible used as a convenient way to create two files in the same
      directory encode
   Unicode which allows clients that have different names which map no internationalization
   requirements to avoid these issues since the same name
      when normalized.

   o  Implement handling which mapping of ASCII names
   to UTF-8 is both normalization-insensitive and
      normalization-preserving.  This makes it impossible the identity.

12.1.1.  Relation to create two
      files Stringprep

   RFC 3454 [9], otherwise known as "stringprep", documents a framework
   for using Unicode/UTF-8 in networking protocols, intended "to
   increase the same directory likelihood that have two different canonically
      equivalent names, i.e., names which map to the same name when
      normalized.  However, unlike the previous option, clients will not
      have the names string input and string comparison work
   in ways that they present modified to meet make sense for typical users throughout the server's
      normalization constraints.

   o  Implement normalization-sensitive handling without enforcing world."  A
   protocol conforming to this framework must define a
      normalization form constraint on file names.  This exposes the
      client profile of
   stringprep "in order to fully specify the possibility processing options."
   NFSv4, while it does make normative references to stringprep and uses
   elements of that two files can be created in the
      same directory which have different names which map framework, it does not, for reasons that are
   explained below, conform to that framework, for all of the same
      name when normalized.  This may be a significant issue when
      clients which use different normalization forms strings
   that are used on the
      same file system, but this issue needs within it.

   In addition to be set against the
      difficulty of providing other sorts of normalization some specific issues which have caused stringprep to
   add confusion in handling certain characters for
      some existing file systems.

12.7.1.5.2.  Client Normalization Issues certain languages,
   there are a number of general reasons why stringprep profiles are not
   suitable for Component Strings

   The client, in processing name components, needs to deal with the
   fact that describing NFSv4.

   o  Restricting the server may impose normalization on file name components
   presented character repertoire to it.  As a result, a file can be created within a
   directory and that name be different from that sent Unicode 3.2, as required
      by stringprep is unduly constricting.

   o  Many of the client due
   to normalization at the server.

   Client operating environments differ character tables in their handling stringprep are inappropriate
      because of canonically
   equivalent names.  Some environments treat canonically equivalent
   strings as essentially equal this limited character repertoire, so that normative
      reference to stringprep is not desirable in many case and instead,
      we will call these environments
   normalization-aware.  Others, because of allow more flexibility in the pattern of their
   development with regard to these issues treat different strings as
   different, even if they are canonically equivalent.  We call these
   normalization-unaware.

   We discuss below issues that may arise when each definition of these types case mapping
      tables.

   o  Because of
   environments interact with the various types presence of different file systems, with
   regard to normalization handling.  Note that complexity for the
   client is increased given specifics
      of processing are not fully defined and some aspects that there are no file system attributes to
   determine are
      RECOMMENDED, rather than REQUIRED.

   Despite these issues, in many cases the normalization handling present for that file system.
   Where general structure of
   stringprep profiles, consisting of sections which deal with the client has
   applicability of the ability to create files (file system not
   read-only and security allows it), attempting to create multiple
   files with canonically equivalent names and looking at success
   paaaters and description, the names assigned by character repertoire, character
   mapping, normalization, prohibited characters, and issues of the server to these files can
   serve as
   handling (i.e., possible prohibition) of bidirectional strings, is a
   convenient way to determine describe the relevant information.

   Normalization-aware environments interoperate most normally with
   servers that either impose a given normalization form or those that
   implement name string handling which is both normalization-insensitive needed and
   normalization-preserving name handling.  However, clients need to
   will be
   prepared to interoperate with servers that have normalization-
   sensitive file naming.  In this situation, used where appropriate.

12.1.2.  Normalization, Equivalence, and Confusability

   Unicode has defined several equivalence relationships among the client needs to be
   prepared for set
   of possible strings.  Understanding the fact that a directory may contain multiple names
   that it considers equivalent.

   The following suggestions may be helpful in handling interoperability
   issues for normalization-aware client environments, when they
   interact with normalization-sensitive file systems.

      When READDIR nature and purpose of these
   equivalence relations is done, the names returned may include names that do
      not match important to understand the client's normalization form, but instead handling of
   Unicode strings within NFSv4.

   Some string pairs are thought as only differing in the way accents
   and other
      names canonically equivalent to diacritics are encoded, as illustrated in the normalized name.

      When it examples
   below.  Such string pairs are called "canonically equivalent".

      Such equivalence can be determined that occur when there are precomposed characters,
      as an alternative to encoding a normalization-insensitive server
      file system is not involved, the client can simply normalize
      filename components strings base character in addition to its preferred normalization form.

      When it cannot be determined that a normalization-insensitive
      server file system is not involved,
      combining accent.  For example, the client character LATIN SMALL LETTER E
      WITH ACUTE (U+00E9) is generally best
      advised to process incoming name components so defined as canonically equivalent to allow all
      name components in a canonical equivalence class to be together. the
      string consisting of LATIN SMALL LETTER E followed by COMBINING
      ACUTE ACCENT (U+0065, U+0301).

      When only a single member multiple combining diacritics are present, differences in the
      ordering are not reflected in resulting display and the strings
      are defined as canonically equivalent.  For example, the string
      consisting of class exists, it should generally
      mapped directly LATIN SMALL LETTER Q, COMBINING ACUTE ACCENT,
      COMBINING GRAVE ACCENT (U+0071, U+0301, U+0300) is canonically
      equivalent to the preferred normalization form, whether the
      name was string consisting of that form or not. LATIN SMALL LETTER Q,
      COMBINING GRAVE ACCENT, COMBINING ACUTE ACCENT (U+0071, U+0300,
      U+0301)

      When both situations are present, the client sees multiple names that number of canonically
      equivalent strings can be greater.  Thus, the following strings
      are all canonically
      equivalent, it is clear you have a file systen which equivalent:

         LATIN SMALL LETTER E, COMBINING MACRON, ACCENT, COMBINING ACUTE
         ACCENT (U+0xxx, U+0304, U+0301)
         LATIN SMALL LETTER E, COMBINING ACUTE ACCENT, COMBINING MACRON
         (U+0xxx, U+0301, U+0304)

         LATIN SMALL LETTER E WITH MACRON, COMBINING ACUTE ACCENT
         (U+011E, U+0301)

         LATIN SMALL LETTER E WITH ACUTE, COMBINING MACRON (U+00E9,
         U+0304)

         LATIN SMALL LETTER E WITH MACRON AND ACUTE (U+1E16)

   Additionally there is
      normalization sensitive.  Clients should generally replace each an equivalence relation of "compatibility
   equivalence".  Two canonically equivalent name with one that appends some
      distinguishing suffix, usually including a number.  The numbers
      should be assigned so that each distinct possible name with strings are necessarily
   compatibility equivalent, although not the
      set converse.  An example of canonically
   compatibility equivalent names has an assigned numeric value.
      Note that for some cases in strings which there are multiple instances of not canonically equivalent
   are GREEK CAPITAL LETTER OMEGA (U+03A9) and OHM SIGN (U+2129).  These
   are identical in appearance while other compatibility equivalent
   strings that might are not.  Another example would be composed or decomposed and/or situations
      with multiple diacritics to be applied to the same character, the
      class might be large.

      When interacting with a normalization-sensitive filesystem, it may
      be that the environment contains clients or implementations local
      to the OS in which "x2" and the file system is embedded, two character
   string denoting x-squared which use a are clearly different normalization form.  In such situations, a LOOKUP may
      well fail, even though the directory contains a name canonically in appearance
   although compatibility equivalent to the name sought. and not canonically equivalent.
   These have Unicode encodings LATIN SMALL LETTER X, DIGIT TWO (U+0078,
   U+0032) and LATIN SMALL LETTER X, SUPERSCRIPT TWO (U+0078, U+00B2),

   One solution way to this problem deal with these equivalence relations is via
   normalization.  A normalization form maps all strings to
      re-do the LOOKUP a
   corresponding normalized string in such a fashion that situation with name converted to the
      alternate normalization form.

      In all strings
   that are equivalent (canonically or compatibly, depending on the case in which normalization-unaware clients
   form) are involved in mapped to the mix, LOOKUP can fail and then same value.  Thus the second lOOKUP, described
      above can also fail, even though there may well be a oanonically
      equivalent name in image of the directory.  One possible approach in that
      case mapping is to use a READDIR to find the equivalent name and lookup
      that, although this can greatly add to client implementation
      complexity.

      When interacting with
   a normalization-sensitive filesystem, subset of Unicode strings conceived as the
      situation where representatives of the environment contains clients or
      implementations local to
   equivalence classes defined by the OS in which chosen equivalence relation.

   In the file system is
      embedded, which use a different normalization form can also cause NFSv4 protocol, handling of issues when a file (or symlink or directory, etc.) is being
      created.  In such cases, you may be able related to create an object
   internationalization with regard to normalization follows one of two
   basic patterns:

   o  For strings whose function is related to other internet standards,
      such as server and domain naming, the specified name even though, normalization form defined
      by the directory contains a
      canonically equivalent name.  Similar issues can occur with LINK appropriate internet standards is used.  For server and RENAME.  The client can't really do much about such
      sitautions, except be aware that they may occur.  That's one of
      domain naming, this involves normalization form NFKC as specified
      in [10]

   o  For other strings, particular those passed by the reasons normalization-sensitive server to file
      system
      implementations can be problematic to use when
      internationalization issues implementations, normalization requirements are important.

   Normalization-unaware environments interoperate most normally with
   servers that implement normalization-sensitive the
      province of the file naming.  However,
   clients need to be prepared system and the job of this specification is
      not to interoperate with servers that impose specify a given normalization particular form or but to make sure that implement name handling which
      interoperability is
   both normalization-insensitive maximized, even when clients and normalization-preserving.  In the
   former case, a server-based
      file created with a given name may find it changed to
   a systems have different (although preferences.

   A related name).  In both cases, the client will
   have to deal with the fact that it is unable to create but distinct issue concerns string confusability.  This can
   occur when two names
   within strings (including single-character strings) having a directory that are canonically equivalent.

   Note that although the client implementation itself and the kernel
   implementation may be normalization-unware, treating name components
   as strings not subject
   similar appearance.  There have been attempts to normalization, the environment as a whole
   may be normalization-aware if commonly used libraries result define uniform
   processing in an
   application environment where a single normalization form is used
   throughout.  Because attempt to avoid such confusion (see stringprep [9])
   but the results have often added confusion.

   Some examples of this, normalization-unaware environments may
   be relatively rare.

   The following suggestions may be helpful in handling interoperability
   issues for truely normalization-unaware client environments, when
   they interact with file systems other than those which are
   normalization-sensitive.  The issues tend possible confusions and proposed processing intended
   to reduce/avoid confusions:

   o  Deletion of characters believed to be invisible and appropriately
      ignored, justifying their deletion, including, WORD JOINER
      (U+2060), and the inverse ZERO WIDTH SPACE (U+200B).

   o  Deletion of those
   for normalization-aware environments.  The implementer should be
   careful not characters supposed to erroneously treat not bear semantics and only
      affect glyph choice, including the environment as normalization-
   unaware, based solely on ZERO WIDTH NON-JOINER (U+200C)
      and the details ZERO WIDTH JOINER (U+200D), where the deletion turns out
      to be a problem for Farsi speakers.

   o  Prohibition of space characters such as the kernel implementation.

      Unless EM SPACE (U+2003), the file system is normalization-preserving, when files (or
      other objects) are created,
      EN SPACE (U+2002), and the object name THIN SPACE (U+2009).

   In addition, character pairs which appear very similar and could and
   often do result in confusion.  In addition to what Unicode defines as reported by
   "compatibility equivalence", there are a
      READDIR considerable number of the associated directory may show
   additional character pairs that could cause confusion.  This includes
   characters such as LATIN CAPITAL LETTER O (U+004F) and DIGIT ZERO
   (U+0030), and CYRILLIC SMALL LETTER ER (U+0440) LATIN SMALL LETTER P
   (U+0070) (also with MATHEMATICAL BOLD SMALL P (U+1D429) and GREEK
   SMALL LETTER RHO (U+1D56, for good measure).

   NFSv4, as it does with normalization, takes a name different than
      the one used two-part approach to create the object.  This behavior
   this issue:

   o  For strings whose function is something
      that the client has related to accept.  Since it has no preferred
      normalization form, it has no way of converting the name other internet standards,
      such as server and domain naming, any string processing to a
      preferred form.

      In situations where there address
      the confusability issue is an attempt to create multiple objects
      in defined by the same directory which have canonically-equivalent names.
      these file systems will either report that an object of name
      already exists or simply open a file appropriate internet
      standards is used.  For server and domain naming, this is the
      responsibility of that IDNA as described in [10].

   o  For other name.

      If it desired to have strings, particularly those two obects in the same directory, passed by the
      names must be made not canonically equivalent.  It is possible to
      append some distinguishing character server to file
      system implementations, any such preparation requirements
      including the name choice of how, or whether to address the second
      object but in clients having a typical file API (such as POSIX),
      confusability issue, are the fact that responsibility of the name change occurred cannot be propagated back file system to the requester.

      In cases where a client is application-specific, it may be
      possible
      define, and for it this specification to deal with such a collision by modifying the
      name try to add its own set would
      add unacceptably to complexity, and taking note of make many files accessible
      locally and by other remote file access protocols, inaccessible by
      NFSv4.  This specification defines how the changed name.

12.7.1.6.  Prohibited Characters for Component Names

   The NFS version 4 protocol does not specify particular characters
   that may not appear maximizes
      interoperability in component names.  File systems may have their
   own set of prohibited characters for which the error NFS4ERR_BADCHAR
   should be returned by the server.  Clients need face of different file system
      implementations.  NFSv4 does allow file systems to be prepared for
   this error map and to occur whenever file name components are presented
      reject characters, including those likely to
   the server.

   Clients whose character repertoire for acceptable characters result in confusion,
      since file
   name components is smaller than the entire scope of UCS-4 systems may need choose to
   deal with names returned by the server that contain characters
   outside that repertoire. do such things.  It is up to defines what
      the client whether it simply
   ignores these files or modifies the name will see in such cases, in order to meet its own rules for
   acceptable names.

   Clients may encounter names limit problems that do not consist of valid UTF-8, if
   they interact with servers configured to allow this option.  They are
   not required to deal with this case and may treat the server as not
   functioning correctly, or they may handle this as normal.  Clients
   will normally make this a configuration option.  As discussed above,
   a client
      can determine whether arise when a particular file system name is being
   supported by the server in this mode by issuing a LOOKUP specifying created and it appears to have a
      different name which from the one it is not valid UTF-8 and seeing if NFS4ERR_INVAL assigned when the name is
   returned.

12.7.1.7.  Bidirectional
      created.

12.2.  String Checking for Component Names

   The NFS version 4 protocol does not require processing of component
   names Type Overview

12.2.1.  Overall String Class Divisions

   NFSv4 has to check for and reject bidirectional strings.  Such processing
   may be deal with a part large set of different types of the file system implementation but if so, its
   particular form will be defined by the file system implementation.
   When strings are rejected on this basis, and
   because of the error NFS4ERR_BADNAME
   would be returned.

   Clients need to different role of each, internationalization issues
   will be prepared different for each:

   o  For some types of strings, the fact that fundamental internationalization-
      related decisions are the server may reject a
   file name component if it consists province of a bidirectional string,
   returning NFS4ERR_BADNAME.

   Clients may encounter names with bidirectional strings returned in
   responses from the server.  If clients treat such strings as not
   valid file name components, it is up to the client whether it simply
   ignores these files system or modifies the name component to meet its own
   rules for acceptable name component strings.

12.7.2.  Processing
      security-handling functions of Link Text

   Symbolic link text is defined as utf8val_should and therefore the server SHOULD validate link text on a CREATE and return NFS4ERR_INVAL
   if it is the protocol's job
      is not valid UTF-8.  Note that to establish the rules under which file systems which treat
   names as strings of byte and servers are an exception for which such validation
   need not be done.  One other situation in which an NFS version 4
   might choose (or be configured) not
      allowed to make such a check is when
   links within file system reference names in another which is
   configured exercise this freedom, to treat names as strings of bytes.

   On the avoid adding to confusion.

   o  In other hand, UTF-8 validation of symbolic link text need not be
   done on cases, the data resulting from a READLINK.  Such data might have
   been stored by an NFS Version 4 server configured to allow non-UTF-8
   link text or it might have resulted from symbolic link text stored
   via local file system access or access via another remote file access
   protocol.

   Note that because of fundamental internationalization issues are
      the role responsibility of the symbolic link, as data stored
   and read by the user, other sorts of validations or modifications
   should not be done.  Note that when component names with the symbolic
   link text are used, such checks IETF groups and modifications will be done at
   that time.  In particular,

   o  Limitation of the character repertoire MUST NOT be done.  This
      includes limitations our job is simply to reflect
      reference those and perhaps make a particular version of unicode,
      or the inability of any particualr file system few choices as to store characters
      beyond UCS-2.

   o  Name mapping, whether for case folding or otherwise MUST NOT be
      done.

   o  Checks for a type of normalization or normalization how they are
      to a
      particular form MUST NOT be done.

   o  Checks for specific characters excluded by the server or file
      system MUST NOT be done. used (e.g., U-labels vs. A-labels).

   o  Checks for bidrectional strings MUST NOT be done.

12.7.3.  Processing of Principal Prefixes

   As mentioned above, users and groups  There are designated as also cases in which a particular string at a specified domain.  Servers will recognize has a set small amount of valid
   principals for NFSv4
      processing which results in one or more domains.  With regard strings being referred to the handling
      one of
   these strings, the following rules MUST be followed

   o  The string MUST other categories.

   We will divide strings to be checked by the server for valid UTF-8 and dealt with into the
      error NFS4ERR_INVAL returned if it following classes:

   MIX  indicating that there is not valid.

   o  The character repertoire for small amount of preparatory processing
      that either picks an internationalization handling mode or divides
      the principal prefix string should be
      limited to into a current version of Unicode when the server is
      implemented.  However, the client cannot be assured that all
      characters it receives as part set of (two) strings with a user or group attribute are
      those that different mode
      internationalization handling for each.  The details are defined discussed
      in the Unicode version it expects to work
      with.

   o  No character mapping is section "Types with Pre-processing to be done, as Resolve Mixture
      Issues".

   NIP  indicating that, for example table B.1 in
      stringprep, and no case mapping various reasons, there is no need for
      internationalization-specific processing to be done. performed.  The user and group
      names
      specifics of the various string types handled in this way are to be treated as case-sensitive.

   o  Strings must not be rejected based on their normalization.
      Servers should do normalization insensitive matching
      described in converting
      a user to group to an internal id.  The client cannot assume the section "String Types without
      Internationalization Processing".

   INET  indicating that the server preserves normalization so a user set to one string
      value may needs to be returned as processed in a string which differs fashion
      governed by non-NFS-specific internet specifications.  The details
      are discussed in nomralization
      and the client must be prepared to deal section "Types with that, by, for
      example, normalizing Processing Defined by
      Other Internet Areas".

   NFS  indicating that the string needs to be processed in a fashion
      governed by NFSv4-specific considerations.  The primary focus is
      on enabling flexibility for the various file systems to be
      accessed and is described in the client's prferred form.

   o section "String Types with NFS-
      specific Processing".

12.2.2.  Divisions by Typedef Parent types

   There are no checks a number of different string types within NFSv4 and
   internationalization handling will be different for specific invalid characters but servers
      may limit different types
   of strings.  Each the characters, with types will be in one of four groups based on
   the result parent type that any principal
      presented by specifies the client which has such a characters is treated as
      invalid.

   o  Specific checks for bidrectional nature of its relationship to utf8
   and ascii.

   utf8_should/USHOULD:  indicating that strings are not done of this type SHOULD be
      UTF-8 but clients and servers
      may limit the principal prefix will not check for valid UTF-8
      encoding.

   utf8val_should/UVSHOULD:  indicating that strings to those which are
      unidirectional or are of a certain direction, with this type SHOULD
      be and generally will be in the result that
      any principal presented form of the UTF-8 encoding of
      Unicode.  Strings in most cases will be checked by the client which done not meet server for
      valid UTF-8 but for certain file systems, such checking may be
      inhibited.

   utf8val_must/UVMUST:  indicating that
      criterion strings of this type MUST be in
      the form of the UTF-8 encoding of Unicode.  Strings will be treated as invaid.

13.  Error Values

   NFS error numbers are assigned
      checked by the server for valid UTF-8 and the server SHOULD ensure
      that when sent to failed operations within a Compound
   (COMPOUND or CB_COMPOUND) request.  A Compound request contains a
   number the client, they are valid UTF-8.

   ascii_must/ASCII:  indicating that strings of NFS operations this type MUST be pure
      ASCII, and thus automatically UTF-8.  The processing of these
      string must ensure that they are only have their results encoded in sequence
   in ASCII characters but
      this need not be a Compound reply.  The results of successful operations will
   consist of an NFS4_OK status followed by separate step if any normally required check
      for validity inherently assures that only ASCII characters are
      present.

   In those cases where UTF-8 is not required, USHOULD and UVSHOULD, and
   strings that are not valid UTF-8 are received and accepted, the encoded results of
   receiver MUST NOT modify the
   operation.  If an NFS operation fails, an error status will be
   entered in strings.  For example, setting
   particular bits such as the reply high-order bit to zero MUST NOT be done.

12.2.3.  Individual Types and Their Handling

   The first table outlines the Compound request will be terminated.

13.1.  Error Definitions

                        Protocol Error Definitions

       +-----------------------------+--------+-------------------+
       | Error                       | Number | Description       |
       +-----------------------------+--------+-------------------+
       | NFS4_OK                     | 0      | Section 13.1.3.1 handling for the primary string types,
   i.e., those not derived as a prefix or a suffix from a mixture type.

   +-----------------+----------+-------+------------------------------+
   | Type            | NFS4ERR_ACCESS Parent   | 13 Class | Section 13.1.6.1 Explanation                  |
   +-----------------+----------+-------+------------------------------+
   | NFS4ERR_ATTRNOTSUPP comptag4        | 10032 USHOULD  | Section 13.1.11.1 NIP   | Should be utf8 but no        | NFS4ERR_ADMIN_REVOKED
   | 10047                 | Section 13.1.5.1          |       | NFS4ERR_BADCHAR validation by server or      | 10040
   | Section 13.1.7.1                 |          | NFS4ERR_BADHANDLE       | 10001 client is to be done.        | Section 13.1.2.1
   | component4      | NFS4ERR_BADNAME UVSHOULD | 10041 NFS   | Section 13.1.7.2 Should be utf8 but clients   |
   | NFS4ERR_BADOWNER                 | 10039          | Section 13.1.11.2       | may need to access file      | NFS4ERR_BADTYPE
   | 10007                 | Section 13.1.4.1          |       | NFS4ERR_BADXDR systems with a different     | 10036
   | Section 13.1.1.1                 |          | NFS4ERR_BAD_COOKIE       | 10003 name structure, such as file | Section 13.1.1.2
   |                 | NFS4ERR_BAD_RANGE          | 10042       | Section 13.1.8.1 systems that have non-utf8   |
   | NFS4ERR_BAD_SEQID                 | 10026          | Section 13.1.8.2       | names.                       | NFS4ERR_BAD_STATEID
   | 10025 linktext4       | Section 13.1.5.2 UVSHOULD | NFS   | NFS4ERR_CLID_INUSE Should be utf8 since text    | 10017
   | Section 13.1.10.1                 |          | NFS4ERR_DEADLOCK       | 10045 may include name components. | Section 13.1.8.3
   |                 | NFS4ERR_DELAY          | 10008       | Section 13.1.1.3 Because of the need to       |
   | NFS4ERR_DENIED                 | 10010          | Section 13.1.8.4       | access existing file         | NFS4ERR_DQUOT
   | 69                 | Section 13.1.4.2          |       | NFS4ERR_EXIST systems, this check may be   | 17
   | Section 13.1.4.3                 |          | NFS4ERR_EXPIRED       | 10011 inhibited.                   | Section 13.1.5.3
   | fattr4_mimetype | NFS4ERR_FBIG ASCII    | 27 NIP   | Section 13.1.4.4 All mime types are ascii so  |
   | NFS4ERR_FHEXPIRED                 | 10014          | Section 13.1.2.2       | no specific utf8 processing  | NFS4ERR_FILE_OPEN
   | 10046                 | Section 13.1.4.5          |       | NFS4ERR_GRACE is required, given that you  | 10013
   | Section 13.1.9.1                 |          | NFS4ERR_INVAL       | 22 are comparing to that list.  | Section 13.1.1.4
   +-----------------+----------+-------+------------------------------+

                                  Table 5

   There are a number of string types that are subject to preliminary
   processing.  This processing may take the form either of selecting
   one of two possible forms based on the string contents or it in may
   consist of dividing the string into multiple conjoined strings each
   with different utf8-related processing.

   +---------+--------+-------+----------------------------------------+
   | Type    | NFS4ERR_IO Parent | 5 Class | Section 13.1.4.6 Explanation                            |
   +---------+--------+-------+----------------------------------------+
   | NFS4ERR_ISDIR prin4   | 21 UVMUST | Section 13.1.2.3 MIX   | Consists of two parts separated by an  | NFS4ERR_LEASE_MOVED
   | 10031         | Section 13.1.5.4        |       | NFS4ERR_LOCKED at-sign, a prinpfx4 and a prinsfx4.    | 10012
   | Section 13.1.8.5         |        | NFS4ERR_LOCKS_HELD       | 10037 These are described in the next table. | Section 13.1.8.6
   | server4 | NFS4ERR_LOCK_NOTSUPP UVMUST | 10043 MIX   | Section 13.1.8.7 Is either an IP address (serveraddr4)  |
   | NFS4ERR_LOCK_RANGE         | 10028        | Section 13.1.8.8       | which has to be pure ascii or a server | NFS4ERR_MINOR_VERS_MISMATCH
   | 10021         | Section 13.1.3.2        |       | NFS4ERR_MLINK name svrname4, which is described      | 31
   | Section 13.1.4.7         |        | NFS4ERR_MOVED       | 10019 immediately below.                     | Section 13.1.2.4
   +---------+--------+-------+----------------------------------------+
                                  Table 6

   The last table describes the components of the compound types
   described above.

   +----------+--------+------+----------------------------------------+
   | Type     | NFS4ERR_NAMETOOLONG Class  | 63 Def  | Section 13.1.7.3 Explanation                            |
   +----------+--------+------+----------------------------------------+
   | NFS4ERR_NOENT svraddr4 | 2 ASCII  | Section 13.1.4.8 NIP  | Server as IP address, whether IPv4 or  | NFS4ERR_NOFILEHANDLE
   | 10020          | Section 13.1.2.5        |      | NFS4ERR_NOSPC IPv6.                                  | 28
   | Section 13.1.4.9 svrname4 | UVMUST | NFS4ERR_NOTDIR INET | 20 Server name as returned by server.     | Section 13.1.2.6
   |          | NFS4ERR_NOTEMPTY        | 66      | Section 13.1.4.10 Not sent by client, except in          |
   | NFS4ERR_NOTSUPP          | 10004        | Section 13.1.1.5      | VERIFY/NVERIFY.                        | NFS4ERR_NOT_SAME
   | 10027 prinsfx4 | Section 13.1.11.3 UVMUST | INET | NFS4ERR_NO_GRACE Suffix part of principal, in the form  | 10033
   | Section 13.1.9.2          |        | NFS4ERR_NXIO      | 6 of a domain name.                      | Section 13.1.4.11
   | prinpfx4 | NFS4ERR_OLD_STATEID UVMUST | 10024 NFS  | Section 13.1.5.5 Must match one of a list of valid      |
   | NFS4ERR_OPENMODE          | 10038        | Section 13.1.8.9      | users or groups for that particular    | NFS4ERR_OP_ILLEGAL
   | 10044          | Section 13.1.3.3        |      | NFS4ERR_PERM domain.                                | 1      | Section 13.1.6.2  |
       | NFS4ERR_RECLAIM_BAD         | 10034  | Section 13.1.9.3  |
       | NFS4ERR_RECLAIM_CONFLICT    | 10035  | Section 13.1.9.4  |
       | NFS4ERR_RESOURCE            | 10018  | Section 13.1.3.4  |
       | NFS4ERR_RESTOREFH           | 10030  | Section 13.1.4.12 |
       | NFS4ERR_ROFS                | 30     | Section 13.1.4.13 |
       | NFS4ERR_SAME                | 10009  | Section 13.1.11.4 |
       | NFS4ERR_SERVERFAULT         | 10006  | Section 13.1.1.6  |
       | NFS4ERR_STALE               | 70     | Section 13.1.2.7  |
       | NFS4ERR_STALE_CLIENTID      | 10022  | Section 13.1.10.2 |
       | NFS4ERR_STALE_STATEID       | 10023  | Section 13.1.5.6  |
       | NFS4ERR_SYMLINK             | 10029  | Section 13.1.2.8  |
       | NFS4ERR_TOOSMALL            | 10005  | Section 13.1.1.7  |
       | NFS4ERR_WRONGSEC            | 10016  | Section 13.1.6.3  |
       | NFS4ERR_XDEV                | 18     | Section 13.1.4.14 |
       +-----------------------------+--------+-------------------+
   +----------+--------+------+----------------------------------------+

                                  Table 8

13.1.1.  General 7

12.3.  Errors

   This section deals with errors that are applicable Related to a broad set of
   different purposes.

13.1.1.1.  NFS4ERR_BADXDR (Error Code 10036)

   The arguments for this operation do not match those specified in Strings

   When the
   XDR definition.  This includes situations client sends an invalid UTF-8 string in a context in which
   UTF-8 is REQUIRED, the request ends
   before all server MUST return an NFS4ERR_INVAL error.
   Within the arguments have been seen.  Note that framework of the previous section, this error applies when fixed enumerations (these include booleans) have a value
   within to strings
   whose type is defined as utf8val_must or ascii_must.  When the input stream client
   sends an invalid UTF-8 string in a context in which UTF-8 is not valid for
   RECOMMENDED and the enum.  A replier
   may pre-parse all operations server should test for a Compound procedure before doing
   any operation execution and UTF-8, the server SHOULD
   return RPC-level XDR errors an NFS4ERR_INVAL error.  Within the framework of the previous
   section, this applies to strings whose type is defined as
   utf8val_should.  These situations apply to cases in which
   inappropriate prefixes are detected and where the count includes
   trailing bytes that case.

13.1.1.2.  NFS4ERR_BAD_COOKIE (Error Code 10003)

   Used for operations that provide do not constitute a set of information indexed by some
   quantity provided by full UCS character.

   Where the client or cookie sent client-supplied string is valid UTF-8 but contains
   characters that are not supported by the server for an
   earlier invocation.  Where the file system as a
   value cannot be used for its intended
   purpose, this error results.

13.1.1.3.  NFS4ERR_DELAY (Error Code 10008)

   For any of that string (e.g., names containing characters that have
   more than two octets on a number of reasons, the replier could file system that supports UCS-2 characters
   only, file name components containing slashes on file systems that do
   not process this
   operation allow them in what was deemed file name components), the server MUST return an
   NFS4ERR_BADCHAR error.

   Where a reasonable time.  The client should
   wait UTF-8 string is used as a file name component, and then try the request with a new RPC transaction ID.

   Some example file
   system, while supporting all of situations the characters within the name, does
   not allow that might lead particular name to this situation:

   o  A be used, the server that supports hierarchical storage receives a request to
      process a should return
   the error NFS4ERR_BADNAME.  This includes file that had been migrated.

   o  An operation requires a delegation recall to proceed system prohibitions of
   "." and waiting
      for this delegation recall makes processing this request in ".." as file names for certain operations, and other such
   similar constraints.  It does not include use of strings with non-
   preferred normalization modes.

   Where a
      timely fashion impossible. UTF-8 string is used as a file name component, the file
   system implementation MUST NOT return NFS4ERR_BADNAME, simply due to
   a normalization mismatch.  In such cases, cases the error NFS4ERR_DELAY allows these preparatory
   operations implementation SHOULD
   convert the string to proceed without holding up client resources such as a
   session slot.  After delaying for period of time, its own preferred normalization mode before
   performing the operation.  As a result, a client can then
   re-send the operation in question.

13.1.1.4.  NFS4ERR_INVAL (Error Code 22)

   The arguments for this operation are not valid for some reason, even
   though they do match those specified in cannot assume that a
   file created with a name it specifies will have that name when the XDR definition for
   directory is read.  It may have instead, the
   request.

13.1.1.5.  NFS4ERR_NOTSUPP (Error Code 10004)

   Operation not supported, either because name converted to the operation
   file system's preferred normalization form.

   Where a UTF-8 string is an OPTIONAL
   one used as other than as file name component (or
   as symbolic link text) and is the string does not supported by this server or because meet the operation MUST
   NOT be implemented in normalization
   requirements specified for it, the current minor version.

13.1.1.6.  NFS4ERR_SERVERFAULT (Error Code 10006)

   An error occurred on the server which does not map NFS4ERR_INVAL is returned.

12.4.  Types with Pre-processing to any Resolve Mixture Issues

12.4.1.  Processing of the
   specific legal NFSv4.1 protocol error values.  The client should
   translate this into Principal Strings

   Strings denoting principals (users or groups) MUST be UTF-8 but since
   they consist of a principal prefix, an appropriate error.  UNIX clients at-sign, and a domain, all
   three of which either are checked for being UTF-8, or inherently are
   UTF-8, checking the string as a whole for being UTF-8 is not
   required.  Although a server implementation may choose to
   translate make this to EIO.

13.1.1.7.  NFS4ERR_TOOSMALL (Error Code 10005)

   Used where an operation returns a variable amount of data, with a
   limit specified by
   check on the client.  Where string as whole, for example in converting it to
   Unicode, the data returned cannot be fit description within the limit specified by the client, this error results.

13.1.2.  Filehandle Errors

   These errors deal with the situation document, will reflect a
   processing model in which such checking happens after the current or saved
   filehandle, or division
   into a principal prefix and suffix, the filehandle passed to PUTFH intended to become latter being in the
   current filehandle, form of a
   domain name.

   The string should be scanned for at-signs.  If there is invalid in some way.  This includes situations
   in which more that one
   at-sign, the filehandle string is a valid filehandle considered invalid.  For cases in general but is not which there
   are no at-signs or the at-sign appears at the start or end of the appropriate object type for
   string see Interpreting owner and owner_group.  Otherwise, the current operation.

   Where
   portion before the error description indicates a problem at-sign is dealt with as a prinpfx4 and the current or
   saved filehandle, it
   portion after is to be understood that filehandles are only
   checked for the condition if dealt with as a prinsfx4.

12.4.2.  Processing of Server Id Strings

   Server id strings typically appear in responses (as attribute values)
   and only appear in requests as an attribute value presented to VERIFY
   and NVERIFY.  With that exception, they are implicit arguments not subject to server
   validation and possible rejection.  It is not expected that clients
   will typically do such validation on receipt of the
   operation in question.

13.1.2.1.  NFS4ERR_BADHANDLE (Error Code 10001)

   Illegal NFS filehandle responses but they
   may as a way to check for proper server behavior.  The responsibility
   for sending correct UTF-8 strings is with the current server.  The current file
   handle failed internal consistency checks.

   Servers are identified by either server names or IP addresses.  Once accepted
   an id has been identified as valid
   (by PUTFH), an IP address, then there is no subsequent status change can cause the filehandle
   processing specific to
   generate this error.

13.1.2.2.  NFS4ERR_FHEXPIRED (Error Code 10014)

   A current or saved filehandle which is internationalization to be done, since such an argument
   address must be ASCII to the current
   operation is volatile and has expired at the server.

13.1.2.3.  NFS4ERR_ISDIR (Error Code 21)

   The current or saved filehandle designates be valid.

12.5.  String Types without Internationalization Processing

   There are a directory when the
   current operation does not allow number of types of strings which, for a directory to be accepted number of
   different reasons, do not require any internationalization-specific
   handling, such as the
   target validation of this operation.

13.1.2.4.  NFS4ERR_MOVED (Error Code 10019)

   The file system which contains UTF-8, normalization, or character
   mapping or checking.  This does not necessarily mean that the current filehandle object is strings
   need not
   present at be UTF-8.  In some case, other checking on the server.  It may have been relocated, migrated string
   ensures that they are valid UTF-8, without doing any checking
   specific to
   another server or may have never been present. internationalization.

   The client may obtain following are the new file system location specific types:

   comptag4  strings are an aid to debugging and the sender should avoid
      confusion by obtaining not using anything but valid UTF-8.  But any work
      validating the "fs_locations" string or
   attribute for the current filehandle.  For further discussion, refer modifying it would only add complication
      to Section 7

13.1.2.5.  NFS4ERR_NOFILEHANDLE (Error Code 10020)

   The logical current or saved filehandle value a mechanism whose basic function is required best supported by the
   current operation and is making it
      not set.  This may subject to any checking and having data maximally available to
      be looked at in a result of network trace.

   fattr4_mimetype  strings need to be validated by matching against a
   malformed COMPOUND operation (i.e.,
      list of valid mime types.  Since these are all ASCII, no PUTFH or PUTROOTFH before an
   operation
      processing specific to internationalization is required since
      anything that requires does not match is invalid and anything which does
      not obey the current filehandle rules of UTF-8 will not be set).

13.1.2.6.  NFS4ERR_NOTDIR (Error Code 20)

   The current (or saved) filehandle designates an object which is ASCII and consequently
      will not a
   directory for an operation match, and will be invalid.

   svraddr4  strings, in which a directory is required.

13.1.2.7.  NFS4ERR_STALE (Error Code 70)

   The current or saved filehandle value designating an argument order to the
   current operation is invalid The file referred be valid, need to by be ASCII, but if you
      check them for validity, you have inherently checked that filehandle
   no longer exists or access to it has been revoked.

13.1.2.8.  NFS4ERR_SYMLINK (Error Code 10029)

   The current filehandle designates a symbolic link when the current
   operation does not allow a symbolic link as the target.

13.1.3.  Compound Structure Errors

   This section deals with errors that relate to overall structure
      they are ASCII and thus UTF-8.

12.6.  Types with Processing Defined by Other Internet Areas

   There are two types of a
   Compound request (by strings which we mean to include both COMPOUND NFSv4 deals with whose
   processing is defined by other Internet standards, and
   CB_COMPOUND), rather than where issues
   related to particular operations.

   There different handling choices by server operating systems or
   server file systems do not apply.

   These are a number of basic constraints on the operations that may as follows:

   o  Server names as they appear in a Compound request.

13.1.3.1.  NFS_OK (Error code 0)

   Indicates the operation completed successfully, in fs_locations attribute.  Note
      that all of for most purposes, such server names will only be sent by the
   constituent operations completed without error.

13.1.3.2.  NFS4ERR_MINOR_VERS_MISMATCH (Error code 10021)
      server to the client.  The minor version specified exception is not one that use of the current listener
   supports.  This value is returned fs_locations
      attribute in a VERIFY or NVERIFY operation.

   o  Principal suffixes which are used to denote sets of users and
      groups, and are in the overall status form of domain names.

   The general rules for handling all of these domain-related strings
   are similar and independent of role the
   Compound but is not associated with a specific operation since of the
   results must specify a result count sender or receiver as
   client or server although the consequences of zero.

13.1.3.3.  NFS4ERR_OP_ILLEGAL (Error Code 10044) failure to obey these
   rules may be different for client or server.  The operation code server can report
   errors when it is not a valid one for sent invalid strings, whereas the current Compound
   procedure. client will
   simply ignore invalid string or use a default value in their place.

   The opcode string sent SHOULD be in the result stream matched with this error
   is the ILLEGAL value, form of a U-label although the value that appears it MAY be
   in the request
   stream may be different.  Where form of an illegal value appears and the
   replier pre-parses all operations for A-label or a Compound procedure before
   doing any operation execution, an RPC-level XDR error may UTF-8 string that would not map to
   itself when canonicalized by applying ToUnicode(ToASCII(...)).  The
   receiver needs to be returned able to accept domain and server names in this case.

13.1.3.4.  NFS4ERR_RESOURCE (Error Code 10018)

   For the processing any of
   the Compound procedure, the formats allowed.  The server may exhaust
   available resources and cannot continue processing operations within MUST reject, using the Compound procedure.  This error will be returned from the server
   in those instances of resource exhaustion related to error
   NFS4ERR_INVAL, a string which is not valid UTF-8 or which begins with
   "xn--" and violates the processing rules for a valid A-label.

   When a domain string is part of id@domain or group@domain, the Compound procedure.

13.1.4.  File System Errors

   These errors describe situations server
   SHOULD map domain strings which occurred in the underlying
   file system implementation rather than in the protocol are A-labels or any NFSv4.x
   feature.

13.1.4.1.  NFS4ERR_BADTYPE (Error Code 10007)

   An attempt was made to create an object with an inappropriate type
   specified are UTF-8 domain
   names which are not U-labels, to CREATE.  This may be because the type is undefined,
   because it is corresponding U-label, using
   ToUnicode(domain) or ToUnicode(ToASCII(domain)).  As a type result, the
   domain name returned within a userid on a GETATTR may not supported by match that
   sent when the server, or because it userid is set using SETATTR, although when this
   happens, the domain will be in the form of a
   type for U-label.  When the
   server does not map domain strings which create is are not intended such as U-labels into a regular file or named
   attribute, for
   U-label, which OPEN is used to do it MAY do, it MUST NOT modify the file creation.

13.1.4.2.  NFS4ERR_DQUOT (Error Code 19)

   Resource (quota) hard limit exceeded.  The user's resource limit on domain and the server has been exceeded.

13.1.4.3.  NFS4ERR_EXIST (Error Code 17)

   A file
   domain returned on a GETATTR of the specified target name (when creating, renaming or
   linking) already exists.

13.1.4.4.  NFS4ERR_FBIG (Error Code 27)

   File too large. userid MUST be the same as that
   used when setting the userid by the SETATTTR.

   The operation would have caused a file server MAY implement VERIFY and NVERIFY without translating
   internal state to grow
   beyond a string form, so that, for example, a user
   principal which represents a specific numeric user id, will match a
   different principal string which represents the server's limit.

13.1.4.5.  NFS4ERR_FILE_OPEN (Error Code 10046)

   The operation is not allowed because same numeric user id.

12.7.  String Types with NFS-specific Processing

   For a file involved in number of data types within NFSv4, the operation primary responsibility
   for internationalization-related handling is currently open.  Servers may, but are not required to disallow
   linking-to, removing, or renaming open files.

13.1.4.6.  NFS4ERR_IO (Error Code 5)

   Indicates that an I/O error occurred of some entity
   other than the server itself (see below for which details).  In these
   situations, the file system was
   unable primary responsibility of NFSv4 is to provide recovery.

13.1.4.7.  NFS4ERR_MLINK (Error Code 31)

   The request would have caused the server's a
   framework in which that other entity (file system and server
   operating system principal naming framework) implements its own
   decisions while establishing rules to limit for interoperability issues.

   This pattern applies to the number following data types:

   o  In the case of
   hard links a file may have to be exceeded.

13.1.4.8.  NFS4ERR_NOENT (Error Code 2)

   Indicates no such file or directory.  The file or directory name
   specified does not exist.

13.1.4.9.  NFS4ERR_NOSPC (Error Code 28)

   Indicates no space left on device.  The operation would have caused components (strings of type component4), the server's
      server-side file system to exceed its limit.

13.1.4.10.  NFS4ERR_NOTEMPTY (Error Code 66)

   An attempt was made to remove implementation (of which there may be more
      than one for a directory that was not empty.

13.1.4.11.  NFS4ERR_NXIO (Error Code 5)

   I/O error.  No such device or address.

13.1.4.12.  NFS4ERR_RESTOREFH (Error Code 10030)

   The RESTOREFH operation does not have particular server) deals with internationalization
      issues, in a saved filehandle (identified
   by SAVEFH) fashion that is appropriate to operate upon.

13.1.4.13.  NFS4ERR_ROFS (Error Code 30)

   Indicates a read-only NFSv4, other remote
      file system.  A modifying operation was
   attempted on a read-only access protocols, and local file system.

13.1.4.14.  NFS4ERR_XDEV (Error Code 18)

   Indicates an attempt to do an operation, such as linking, access methods.  See
      "Handling of File Name Components" for the detailed treatment.

   o  In the case of link text strings (strings of type lintext4), the
      issues are similar, but file systems are restricted in the set of
      acceptable internationalization-related processing that
   inappropriately crosses a boundary.  This they may be due
      do, principally because symbolic links may contain name components
      that, when used, are presented to such
   boundaries as:

   o  That between other file systems (where and/or other
      servers.  See "Processing of Link Text" for the fsids are different). detailed
      treatment.

   o  That between different named attribute directories or between a
      named attribute directory  In the case of principal prefix strings, any decisions regarding
      internationalization are the responsibility of the server
      operating systems which may make its own rules regarding user and an ordinary directory.

   o  That between regions
      group name encoding.  See "Processing of Principal Prefixes" for
      the detailed treatment.

12.7.1.  Handling of File Name Components

   There are a number of places within client and server where file system that name
   components are processed:

   o  On the client, file system
      implementation treats names may be processed as separate (for example for space
      accounting purposes), part of forming
      NFSv4 requests.  Any such processing will reflect specific needs
      of the client's environment and where cross-connection between will be treated as out-of-scope
      from the
      regions viewpoint of this specification.

   o  On the server, file names are not allowed.

13.1.5.  State Management Errors processed as part of processing
      NFSv4 requests.  In practice, parts of the processing will be
      implemented within the NFS version 4 server while other parts will
      be implemented within the file system.  This processing is
      described in the sections below.  These errors indicate problems sections are organized in
      a fashion parallel to a stringprep profile.  The same sorts of
      topics are dealt with the stateid (or one but they differ in that there is a wider
      range of possible processing choices.

   o  On the
   stateids) passed server, file name components might potentially be subject
      to a given operation. processing as part of generating NFS version 4 responses.  This includes situations
      specification assumes that this processing will be empty and that
      file name components will be copied verbatim at this point.  The
      file name components may be modified as they appear in
   which responses,
      relative to the stateid values used in the request but this is invalid only
      treated as well reflecting changes made as situations part of request processing.
      For example, a change to a file name component made in processing
      a CREATE operation will be reflected in which the
   stateid is valid but designates revoked locking state.  Depending on the operation, READDIR since the stateid when valid may designate opens, byte-range
   locks, file or directory delegations, layouts, or device maps.

13.1.5.1.  NFS4ERR_ADMIN_REVOKED (Error Code 10047)

   A stateid designates locking state of any type
      files created will have names that has been revoked
   due reflect CREATE-time processing.

   o  On the client, responses will need to administrative interaction, possibly while be properly dealt with and
      the lease is valid.

13.1.5.2.  NFS4ERR_BAD_STATEID (Error Code 10026)

   A stateid generated by relevant issues will be discussed in the current server instance, but which does
   not designate any locking state (either current or superseded) for a
   current lockowner-file pair, was used.

13.1.5.3.  NFS4ERR_EXPIRED (Error Code 10011)

   A stateid designates locking state of any type sections below.
      Primarily, this will involve dealing with the fact that has been revoked
   due file name
      components received in responses may need to expiration be processed to meet
      the requirements of the client's lease, either immediately upon
   lease expiration, or following internal environment.  This will
      mainly involve dealing with changes in name components possibly
      made by server processing.  It also addresses other sorts of
      expected behavior that do not involve a later request for returned component4, such
      as whether a conflicting
   lock.

13.1.5.4.  NFS4ERR_LEASE_MOVED (Error Code 10031)

   A lease being renewed is associated with LOOKUP finds a file system given component4 or whether a CREATE
      or OPEN finds that has been
   migrated to a new server.

13.1.5.5.  NFS4ERR_OLD_STATEID (Error Code 10024)

   A stateid with specified name already exists.

12.7.1.1.  Nature of Server Processing of Name Components in Request

   The component4 type defines a non-zero seqid value does match the current seqid potentially case sensitive string,
   typically of UTF-8 characters.  Its use in NFS version 4 is for
   representing file name components.  Since file systems can implement
   case insensitive file name handling, it can be used for both case
   sensitive and case insensitive file name handling, based on the state designated by the user.

13.1.5.6.  NFS4ERR_STALE_STATEID (Error Code 10023)

   A stateid generated by an earlier server instance was used.

13.1.6.  Security Errors

   These are the various permission-related errors in NFSv4.1.

13.1.6.1.  NFS4ERR_ACCESS (Error Code 13)

   Indicates permission denied.  The caller does not have
   attributes of the correct
   permission to perform file system.

   It may be the requested operation.  Contrast this with
   NFS4ERR_PERM (Section 13.1.6.2), which restricts itself to owner or
   privileged user permission failures.

13.1.6.2.  NFS4ERR_PERM (Error Code 1)

   Indicates requester is not case that two valid distinct UTF-8 strings will be the owner.  The operation was not allowed
   because
   same after the caller is neither processing described below.  In such a privileged user (root) nor case, a server
   may either,

   o  disallow the owner creation of a second name if its post-processed form
      collides with that of an existing name, or

   o  allow the target creation of the operation.

13.1.6.3.  NFS4ERR_WRONGSEC (Error Code 10016)

   Indicates second name, but arrange so that after
      post processing, the security mechanism being used by second name is different than the client for post-
      processed form of the operation does not match first name.

12.7.1.2.  Character Repertoire for the server's security policy. Component4 Type

   The
   client should change the security mechanism being used and re-send
   the operation.  SECINFO can RECOMMENDED character repertoire for file name components is a
   recent/current version of Unicode, as encoded via UTF-8.  There are a
   number of alternate character repertoires which may be used to determine chosen by the appropriate
   mechanism.

13.1.7.  Name Errors

   Names in NFSv4 are UTF-8 strings.  When
   server based on implementation constraints including the strings are not are requirements
   of
   length zero, the error NFS4ERR_INVAL results.  When they are not
   valid UTF-8 the error NFS4ERR_INVAL also results, but servers may
   accommodate file systems with different character formats and not
   return this error.  Besides this, there are a number of other errors system being accessed.

   Two important alternative repertoires are:

   o  One alternate character repertoire is to indicate specific problems represent file name
      components as strings of bytes with names.

13.1.7.1.  NFS4ERR_BADCHAR (Error Code 10040)

   A UTF-8 string contains no protocol-defined encoding
      of multi-byte characters.  Most typically, implementations that
      support this single-byte alternative will make it available as an
      option set by an administrator for all file systems within a
      server or for some particular file systems.  If a server accepts
      non-UTF-8 strings anywhere within a specific file system, then it
      MUST do so throughout the entire file system.

   o  Another alternate character which repertoire is not supported by the
   server in the context in which it being used.

13.1.7.2.  NFS4ERR_BADNAME (Error Code 10041)

   A name string in a request consisted set of valid UTF-8 characters
   supported codepoints,
      representable by the server but file system, most typically UCS-4.

   Individual file system implementations may have more restricted
   character repertoires, as for example file system that only are
   capable of storing names consisting of UCS-2 characters.  When this
   is the name case, and the character repertoire is not supported restricted to
   single-byte characters, characters not within that repertoire are
   treated as prohibited and the error NFS4ERR_BADCHAR is returned by
   the server when that character is encountered.

   Strings are intended to be in UTF-8 format and servers SHOULD return
   NFS4ERR_INVAL, as a discussed above, when the characters sent are not
   valid name for current operation.  An example might UTF-8.  When the character repertoire consists of single-byte
   characters, UTF-8 is not enforced.  Such situations should be creating
   restricted to those where use is within a file or directory named ".." on restricted environment
   where a single character mapping locale can be administratively
   enforced, allowing a server whose file system uses
   that name for links to parent directories.

   This error should not be returned due treated as a normalization issue in string of bytes,
   rather than as a
   string.  When string of characters.  Such an arrangement might be
   necessary when NFSv4 access to a file system keeps containing names in a particular normalization
   form, it is the server's responsiblity which
   are not valid UTF-8 needs to do the approproriate
   normalization, rather than rejecting the name.

13.1.7.3.  NFS4ERR_NAMETOOLONG (Error Code 63)

   Returned when the filename be provided.

   However, in an operation exceeds any of the server's
   implementation limit.

13.1.8.  Locking Errors

   This section deal with errors related following situations, file names have to locking, both be
   treated as to share
   reservations strings of Unicode characters and byte-range locking.  It does servers MUST return
   NFS4ERR_INVAL when file names that are not deal with errors
   specific to in UTF-8 format:

   o  Case-insensitive comparisons are specified by the process of reclaiming locks.  Those file system and
      any characters sent contain non-ASCII byte codes.

   o  Any normalization constraints are dealt with in enforced by the next section.

13.1.8.1.  NFS4ERR_BAD_RANGE (Error Code 10042) server or file
      system implementation.

   o  The range for server accepts a LOCK, LOCKT, or LOCKU operation given name when creating a file and reports a
      different one when the directory is not appropriate to being examined.

   Much of the allowable range discussion below regarding normalization and silent
   deletion of offsets for characters within component4 strings is not applicable
   when the server.  E.g., this error
   results when a server which only supports 32-bit ranges receives a
   range that cannot be handled by does not enforce UTF-8 component4 strings and treats
   them as strings of bytes.  A client may determine that server.  (See Section 15.12.4).

13.1.8.2.  NFS4ERR_BAD_SEQID (Error Code 10026)

   The sequence number (seqid) in a locking request given
   filesystem is neither the next
   expected number or the last number processed.

13.1.8.3.  NFS4ERR_DEADLOCK (Error Code 10045)

   The server has been able to determine a file locking deadlock
   condition for operating in this mode by performing a blocking lock request.

13.1.8.4.  NFS4ERR_DENIED (Error Code 10010)

   An attempt to lock LOOKUP using a file
   non-UTF-8 string, if NFS4ERR_INVAL is denied.  Since this may not returned, then name
   components will be treated as opaque and those sorts of modifications
   will not be seen.

12.7.1.3.  Case-based Mapping Used for Component4 Strings

   Case-based mapping is not always a temporary
   condition, required part of server processing
   of name components.  However, if the client is encouraged to re-send NFSv4 file server supports the lock request until
   case_insensitive file system attribute, and if the lock case_insensitive
   attribute is accepted.  See Section 9.4 true for a discussion given file system, the NFS version 4 server
   MUST use the Unicode case mapping tables for the version of Unicode
   corresponding to the re-
   send.

13.1.8.5.  NFS4ERR_LOCKED (Error Code 10012)

   A read or write operation was attempted on a file character repertoire.  In the case where there was a
   conflict between the I/O and an existing lock:

   o  There
   character repertoire is a share reservation inconsistent with the I/O being done.

   o  The range to be read UCS-2 or written intersects an existing mandatory
      byte range lock.

13.1.8.6.  NFS4ERR_LOCKS_HELD (Error Code 10037)

   An operation was prevented by UCS-4, the unexpected presence of locks.

13.1.8.7.  NFS4ERR_LOCK_NOTSUPP (Error Code 10043)

   A locking request was attempted which would require case mapping tables from
   the upgrade or
   downgrade latest available version of a lock range already held by Unicode SHOULD be used.

   If the owner case_preserving attribute is present and set to false, then
   the NFSv4 server MUST use the corresponding Unicode case mapping
   table to map case when processing component4 strings.  Whether the
   server
   does not support atomic upgrade maps from lower to upper case or downgrade of locks.

13.1.8.8.  NFS4ERR_LOCK_RANGE (Error Code 10028)

   A lock request the upper to lower case is operating on a range that overlaps in part a
   currently held lock
   matter for the current lock owner and does not precisely
   match a single such lock where the server does not support this type
   of request, and thus does implementation choice.

   Stringprep Table B.2 should not implement POSIX locking semantics.  See
   Section 15.12.5, Section 15.13.5, and Section 15.14.5 be used for a
   discussion of how this applies these purpose since it is
   limited to LOCK, LOCKT, Unicode version 3.2 and LOCKU
   respectively.

13.1.8.9.  NFS4ERR_OPENMODE (Error Code 10038)

   The client attempted a READ, WRITE, LOCK or other operation not
   sanctioned by also because it erroneously maps
   the stateid passed (e.g., writing to a file opened only
   for read).

13.1.9.  Reclaim Errors

   These errors relate German ligature eszett to the process string "ss", whereas later versions
   of reclaiming locks after a server
   restart.

13.1.9.1.  NFS4ERR_GRACE (Error Code 10013)

   The server is in its recovery or grace period which should at least
   match the lease period Unicode contain both lower-case and upper-case versions of the server.  A locking request other than a
   reclaim could not Eszett
   (SMALL LETTER SHARP S and CAPITAL LETTER SHARP S).

   Clients should be granted during that period.

13.1.9.2.  NFS4ERR_NO_GRACE (Error Code 10033)

   A reclaim of client state was attempted in circumstances in which the
   server cannot guarantee aware that conflicting state has not been provided servers may have mapped SMALL LETTER
   SHARP S to another client.  As a result, the server cannot guarantee string "ss" when case-insensitive mapping is in
   effect, with result that
   conflicting state has not been provided to another client.

13.1.9.3.  NFS4ERR_RECLAIM_BAD (Error Code 10034)

   A reclaim attempted file whose name contains SMALL LETTER SHARP
   S may have that character replaced by "ss" or "SS".

12.7.1.4.  Other Mapping Used for Component4 Strings

   Other than for issues of case mapping, an NFSv4 server SHOULD limit
   visible (i.e., those that change the client does not match name of file to reflect those
   mappings to those from from a subset of the server's state
   consistency checks and has been rejected therefore as invalid.

13.1.9.4.  NFS4ERR_RECLAIM_CONFLICT (Error Code 10035)

   The reclaim attempted by stringprep table B.1.
   Note particularly, the client has encountered a conflict mappings from U+200C and
   cannot be satisfied.  Potentially indicates a misbehaving client,
   although not necessarily the one receiving U+200D to the error.  The
   misbehavior might empty
   string should be avoided, due to their undesirable effect on some
   strings in Farsi.

   Table B.1 may be used but it should be used only if required by the part of
   local file system implementation.  For example, if the client that established file system in
   question accepts file names containing the
   lock MONGOLIAN TODO SOFT HYPHEN
   character (U+1806) and they are distinct from the corresponding file
   names with which this client conflicted.

13.1.10.  Client Management Errors

   This sections deals with errors associated with requests used character removed, then using Table B.1 will cause
   functional problems when clients attempt to
   create and manage client IDs.

13.1.10.1.  NFS4ERR_CLID_INUSE (Error Code 10017)

   The SETCLIENTID operation has found interact with that a client id is already in
   use by another client.

13.1.10.2.  NFS4ERR_STALE_CLIENTID (Error Code 10022)

   A clientid not recognized by the file
   system.  The NFSv4 server was used in a locking or
   SETCLIENTID_CONFIRM request.

13.1.11.  Attribute Handling Errors

   This section deals with errors specific to attribute handling within
   NFSv4.

13.1.11.1.  NFS4ERR_ATTRNOTSUPP (Error Code 10032)

   An attribute specified is not supported by implementation including the server.  This error filesystem
   MUST NOT be returned by the GETATTR operation.

13.1.11.2.  NFS4ERR_BADOWNER (Error Code 10039)

   Returned when an owner or owner_group attribute value or the who
   field of an ace silently remove characters not within Table B.1.

   If an ACL attribute value cannot be translated implementation wishes to
   a local representation.

13.1.11.3.  NFS4ERR_NOT_SAME (Error Code 10027)

   This error eliminate other characters because it
   is returned by the VERIFY operation to signify believed that allowing component name versions that both include
   the
   attributes compared were character and do not have while otherwise the same as those provided in the
   client's request.

13.1.11.4.  NFS4ERR_SAME (Error Code 10009)

   This error is returned by the NVERIFY operation same, will
   contribute to signify that the
   attributes compared were confusion, it has two options:

   o  Treat the same characters as those provided in the client's
   request.

13.2.  Operations prohibited and their valid errors return NFS4ERR_BADCHAR.

   o  Eliminate the character as part of the name matching processing,
      while retaining it when a file is created.  This section contains would be
      analogous to file systems that are both case-insensitive and case-
      preserving,as discussed above, or those which are both
      normalization-insensitive and normalization-preserving, as
      discussed below.  The handling will be insensitive to the presence
      of the chosen characters while preserving the presence or absence
      of such characters within names.

   Note that the second of these choices is a desirable way to handle
   characters within table B.1, again with the exception of U+200C and
   U+200D, which gives can cause issues for Farsi.

   In addition to modification due to normalization, discussed below,
   clients have to be able to deal with name modifications and other
   consequences of character mapping on the valid error returns server, as discussed above.

12.7.1.5.  Normalization Issues for
   each protocol operation. Component Strings

   The error code NFS4_OK (indicating no
   error) issues are best discussed separately for the server and the
   client.  It is not listed but should be understood important to note that the server and client may have
   different approaches to this area, and that the server choice may not
   match the client operating environment.  The issue of mismatches and
   how they may be returnable best dealt with by all
   operations except ILLEGAL.

              Valid error returns the client is discussed in a later
   section.

12.7.1.5.1.  Server Normalization Issues for each Component Strings

   The NFSv4 does not specify required use of a particular normalization
   form for component4 strings.  Therefore, the server may receive
   unnormalized strings or strings that reflect either normalization
   form within protocol operation

   +---------------------+---------------------------------------------+
   | Operation           | Errors                                      |
   +---------------------+---------------------------------------------+
   | ACCESS              | NFS4ERR_ACCESS, NFS4ERR_BADHANDLE,          |
   |                     | NFS4ERR_BADXDR, NFS4ERR_DELAY,              |
   |                     | NFS4ERR_FHEXPIRED, NFS4ERR_INVAL,           |
   |                     | NFS4ERR_IO, NFS4ERR_MOVED,                  |
   |                     | NFS4ERR_NOFILEHANDLE, NFS4ERR_RESOURCE,     |
   |                     | NFS4ERR_SERVERFAULT, NFS4ERR_STALE          |
   | CLOSE               | NFS4ERR_ADMIN_REVOKED, NFS4ERR_BADHANDLE,   |
   |                     | NFS4ERR_BAD_SEQID, NFS4ERR_BAD_STATEID,     |
   |                     | NFS4ERR_BADXDR, NFS4ERR_DELAY,              |
   |                     | NFS4ERR_EXPIRED, NFS4ERR_FHEXPIRED,         |
   |                     | NFS4ERR_INVAL, NFS4ERR_ISDIR,               |
   |                     | NFS4ERR_LEASE_MOVED, NFS4ERR_LOCKS_HELD,    |
   |                     | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE,        |
   |                     | NFS4ERR_OLD_STATEID, NFS4ERR_RESOURCE,      |
   |                     | NFS4ERR_SERVERFAULT, NFS4ERR_STALE,         |
   |                     | NFS4ERR_STALE_STATEID                       |
   | COMMIT              | NFS4ERR_ACCESS, NFS4ERR_BADHANDLE,          |
   |                     | NFS4ERR_BADXDR, NFS4ERR_FHEXPIRED,          |
   |                     | NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_ISDIR,   |
   |                     | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE,        |
   |                     | NFS4ERR_RESOURCE, NFS4ERR_ROFS,             |
   |                     | NFS4ERR_SERVERFAULT, NFS4ERR_STALE,         |
   |                     | NFS4ERR_SYMLINK                             |
   | CREATE              | NFS4ERR_ACCESS, NFS4ERR_ATTRNOTSUPP,        |
   |                     | NFS4ERR_BADCHAR, NFS4ERR_BADHANDLE,         |
   |                     | NFS4ERR_BADNAME, NFS4ERR_BADOWNER,          |
   |                     | NFS4ERR_BADTYPE, NFS4ERR_BADXDR,            |
   |                     | NFS4ERR_DELAY, NFS4ERR_DQUOT,               |
   |                     | NFS4ERR_EXIST, NFS4ERR_FHEXPIRED,           |
   |                     | NFS4ERR_INVAL, NFS4ERR_IO, NFS4ERR_MOVED,   |
   |                     | NFS4ERR_NAMETOOLONG, NFS4ERR_NOFILEHANDLE,  |
   |                     | NFS4ERR_NOSPC, NFS4ERR_NOTDIR,              |
   |                     | NFS4ERR_PERM, NFS4ERR_RESOURCE,             |
   |                     | NFS4ERR_ROFS, NFS4ERR_SERVERFAULT,          |
   |                     | NFS4ERR_STALE                               |
   | DELEGPURGE          | NFS4ERR_BADXDR, NFS4ERR_NOTSUPP,            |
   |                     | NFS4ERR_LEASE_MOVED, NFS4ERR_RESOURCE,      |
   |                     | NFS4ERR_SERVERFAULT, NFS4ERR_STALE_CLIENTID |
   | DELEGRETURN         | NFS4ERR_ADMIN_REVOKED, NFS4ERR_BAD_STATEID, |
   |                     | NFS4ERR_BADXDR, NFS4ERR_EXPIRED,            |
   |                     | NFS4ERR_INVAL, NFS4ERR_LEASE_MOVED,         |
   |                     | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE,        |
   |                     | NFS4ERR_NOTSUPP, NFS4ERR_OLD_STATEID,       |
   |                     | NFS4ERR_RESOURCE, NFS4ERR_SERVERFAULT,      |
   |                     | NFS4ERR_STALE, NFS4ERR_STALE_STATEID        |
   | GETATTR             | NFS4ERR_ACCESS, NFS4ERR_BADHANDLE, requests and responses.  If the file system
   requires normalization, then the server implementation must normalize
   component4 strings within the protocol server before presenting the
   information to the local file system.

   With regard to normalization, servers have the following choices,
   with the possibility that different choices may be selected for
   different file systems.

   o  Implement a particular normalization form, either NFC, or NFD, in
      which case file names received from a client are converted to that
      normalization form and as a consequence, the client will always
      receive names in that normalization form.  If this option is
      chosen, then it is impossible to create two files in the same
      directory that have different names which map to the same name
      when normalized.

   o  Implement handling which is both normalization-insensitive and
      normalization-preserving.  This makes it impossible to create two
      files in the same directory that have two different canonically
      equivalent names, i.e., names which map to the same name when
      normalized.  However, unlike the previous option, clients will not
      have the names that they present modified to meet the server's
      normalization constraints.

   o  Implement normalization-sensitive handling without enforcing a
      normalization form constraint on file names.  This exposes the
      client to the possibility that two files can be created in the
      same directory which have different names which map to the same
      name when normalized.  This may be a significant issue when
      clients which use different normalization forms are used on the
      same file system, but this issue needs to be set against the
      difficulty of providing other sorts of normalization handling for
      some existing file systems.

12.7.1.5.2.  Client Normalization Issues for Component Strings

   The client, in processing name components, needs to deal with the
   fact that the server may impose normalization on file name components
   presented to it.  As a result, a file can be created within a
   directory and that name be different from