NFSv4                                                          T. Haynes
Internet-Draft                                                    Editor
Intended status: Standards Track                         August 14, 2011
Expires: February 15, 2012

                     NFS Version 4 Minor Version 2
                 draft-ietf-nfsv4-minorversion2-03.txt

Abstract

   This Internet-Draft describes NFS version 4 minor version two,
   focusing mainly on the protocol extensions made from NFS version 4
   minor version 0 and NFS version 4 minor version 1.  Major extensions
   introduced in NFS version 4 minor version two include: Server-side
   Copy, Space Reservations, and Support for Sparse Files.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [1].

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on February 15, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction
     1.1.  The NFS Version 4 Minor Version 2 Protocol
     1.2.  Scope of This Document
     1.3.  NFSv4.2 Goals
     1.4.  Overview of NFSv4.2 Features
     1.5.  Differences from NFSv4.1
   2.  pNFS LAYOUTRETURN Error Handling
     2.1.  Introduction
     2.2.  Changes to Operation 51: LAYOUTRETURN
       2.2.1.  ARGUMENT
       2.2.2.  RESULT
       2.2.3.  DESCRIPTION
       2.2.4.  IMPLEMENTATION
   3.  Sharing change attribute implementation details with NFSv4
       clients
     3.1.  Abstract
     3.2.  Introduction
     3.3.  Definition of the 'change_attr_type' per-file system
           attribute
   4.  NFS Server-side Copy
     4.1.  Introduction
     4.2.  Protocol Overview
       4.2.1.  Intra-Server Copy
       4.2.2.  Inter-Server Copy
       4.2.3.  Server-to-Server Copy Protocol
     4.3.  Operations
       4.3.1.  netloc4 - Network Locations
       4.3.2.  Copy Offload Stateids
     4.4.  Security Considerations
       4.4.1.  Inter-Server Copy Security
   5.  Application Data Block Support
     5.1.  Generic Framework
       5.1.1.  Data Block Representation
       5.1.2.  Data Content
     5.2.  pNFS Considerations
     5.3.  An Example of Detecting Corruption
     5.4.  Example of READ_PLUS
     5.5.  Zero Filled Holes
   6.  Space Reservation
     6.1.  Introduction
     6.2.  Use Cases
       6.2.1.  Space Reservation
       6.2.2.  Space freed on deletes
       6.2.3.  Operations and attributes
       6.2.4.  Attribute 77: space_reserved
       6.2.5.  Attribute 78: space_freed
       6.2.6.  Attribute 79: max_hole_punch
       6.2.7.  Operation 64: HOLE_PUNCH - Zero and deallocate blocks
               backing the file in the specified range.
   7.  Sparse Files
     7.1.  Introduction
     7.2.  Terminology
     7.3.  Applications and Sparse Files
     7.4.  Overview of Sparse Files and NFSv4
     7.5.  Operation 65: READ_PLUS
       7.5.1.  ARGUMENT
       7.5.2.  RESULT
       7.5.3.  DESCRIPTION
       7.5.4.  IMPLEMENTATION
       7.5.5.  READ_PLUS with Sparse Files Example
     7.6.  Related Work
     7.7.  Other Proposed Designs
       7.7.1.  Multi-Data Server Hole Information
       7.7.2.  Data Result Array
       7.7.3.  User-Defined Sparse Mask
       7.7.4.  Allocated flag
       7.7.5.  Dense and Sparse pNFS File Layouts
   8.  Labeled NFS
     8.1.  Introduction
     8.2.  Definitions
     8.3.  MAC Security Attribute
       8.3.1.  Interpreting FATTR4_SEC_LABEL
       8.3.2.  Delegations
       8.3.3.  Permission Checking
       8.3.4.  Object Creation
       8.3.5.  Existing Objects
       8.3.6.  Label Changes
     8.4.  Procedure 16: CB_ATTR_CHANGED - Notify Client that the
           File's Attributes Changed
     8.5.  pNFS Considerations
     8.6.  Discovery of Server LNFS Support
     8.7.  MAC Security NFS Modes of Operation
       8.7.1.  Full Mode
       8.7.2.  Smart Client Mode
       8.7.3.  Smart Server Mode
     8.8.  Use Cases
       8.8.1.  Full MAC labeling support for remotely mounted
               filesystems
       8.8.2.  MAC labeling of virtual machine images stored on the
               network
       8.8.3.  International Traffic in Arms Regulations (ITAR)
       8.8.4.  Legal Hold/eDiscovery
       8.8.5.  Simple security label storage
       8.8.6.  Diskless Linux
       8.8.7.  Multi-Level Security
     8.9.  Security Considerations
   9.  Security Considerations
   10. Operations: REQUIRED, RECOMMENDED, or OPTIONAL
   11. NFSv4.2 Operations
     11.1. Operation 59: COPY - Initiate a server-side copy
     11.2. Operation 60: COPY_ABORT - Cancel a server-side copy
     11.3. Operation 61: COPY_NOTIFY - Notify a source server of a
           future copy
     11.4. Operation 62: COPY_REVOKE - Revoke a destination server's
           copy privileges
     11.5. Operation 63: COPY_STATUS - Poll for status of a
           server-side copy
     11.6. Operation 64: INITIALIZE
     11.7. Modification to Operation 42: EXCHANGE_ID - Instantiate
           Client ID
     11.8. Operation 65: READ_PLUS
   12. NFSv4.2 Callback Operations
     12.1. Operation 15: CB_COPY - Report results of a server-side
           copy
   13. IANA Considerations
   14. References
     14.1. Normative References
     14.2. Informative References
   Appendix A.  Acknowledgments
   Appendix B.  RFC Editor Notes
   Author's Address

1.  Introduction

1.1.  The NFS Version 4 Minor Version 2 Protocol

   The NFS version 4 minor version 2 (NFSv4.2) protocol is the third
   minor version of the NFS version 4 (NFSv4) protocol.  The first minor
   version, NFSv4.0, is described in [10] and the second minor version,
   NFSv4.1, is described in [2].  It follows the guidelines for minor
   versioning that are listed in Section 11 of RFC 3530bis.

   As a minor version, NFSv4.2 is consistent with the overall goals for
   NFSv4, but extends the protocol so as to better meet those goals,
   based on experiences with NFSv4.1.  In addition, NFSv4.2 has adopted
   some additional goals, which motivate some of the major extensions in
   NFSv4.2.

1.2.  Scope of This Document

   This document describes the NFSv4.2 protocol.  With respect to
   NFSv4.0 and NFSv4.1, this document does not:

   o  describe the NFSv4.0 or NFSv4.1 protocols, except where needed to
      contrast with NFSv4.2.

   o  modify the specification of the NFSv4.0 or NFSv4.1 protocols.

   o  clarify the NFSv4.0 . . . . . . . . . . . . . . 66
   10. Operations: REQUIRED, RECOMMENDED, or NFSv4.1 protocols.

   The full XDR for NFSv4.2 is presented in [3].

1.3.  NFSv4.2 Goals

1.4.  Overview of NFSv4.2 Features

1.5.  Differences from NFSv4.1

2.  pNFS LAYOUTRETURN Error Handling

2.1.  Introduction

   In the pNFS description provided in [2], the client is not enabled to
   relay an error code from the DS to the MDS.  In the specification of
   the Objects-Based Layout protocol [4], use is made of the opaque
   lrf_body field of the LAYOUTRETURN argument to do such a relaying of
   error codes.  In this section, we define a new data structure to
   enable the passing of error codes back to the MDS and provide some
   guidelines on what both the client and MDS should expect in such
   circumstances.

   There are two broad classes of errors, transient and persistent.  The
   client SHOULD strive to only use this new mechanism to report
   persistent errors.  It MUST be able to deal with transient issues by
   itself.  Also, while the client might consider an issue to be
   persistent, it MUST be prepared for the MDS to consider such issues
   to be persistent.  A prime example of this is if the MDS fences off a
   client from either a stateid or a filehandle.  The client will get
   an error from the DS and might relay either NFS4ERR_ACCESS or
   NFS4ERR_STALE_STATEID back to the MDS, with the belief that this is
   a hard error.  The MDS, on the other hand, is waiting for the client
   to report such an error.  For it, the mission is accomplished in
   that the client has returned a layout that the MDS had most likely
   recalled.

2.2.  Changes to Operation 51: LAYOUTRETURN

   The existing LAYOUTRETURN operation is extended by introducing a new
   data structure to report errors, layoutreturn_device_error4.  Also,
   layoutreturn_error_report4 is introduced to enable an array of errors
   to be reported.

2.2.1.  ARGUMENT

   The ARGUMENT specification of the LAYOUTRETURN operation in section
   18.44.1 of [2] is augmented by the following XDR code [11]:

   struct layoutreturn_device_error4 {
           deviceid4       lrde_deviceid;
           nfsstat4        lrde_status;
           nfs_opnum4      lrde_opnum;
   };

   struct layoutreturn_error_report4 {
           layoutreturn_device_error4      lrer_errors<>;
   };

2.2.2.  RESULT

   The RESULT of the LAYOUTRETURN operation is unchanged; see section
   18.44.2 of [2].

2.2.3.  DESCRIPTION

   The following text is added to the end of the LAYOUTRETURN operation
   DESCRIPTION in section 18.44.3 of [2].

   When a client uses LAYOUTRETURN with a type of LAYOUTRETURN4_FILE,
   then if the lrf_body field is NULL, it indicates to the MDS that the
   client experienced no errors.  If lrf_body is non-NULL, then the
   field references error information which is layout type specific.
   I.e., the Objects-Based Layout protocol can continue to utilize
   lrf_body as specified in [4].  For Files-Based Layouts, the field
   references a layoutreturn_error_report4, which contains an array of
   layoutreturn_device_error4.

   Each individual layoutreturn_device_error4 describes a single error
   associated with a DS, which is identified via lrde_deviceid.  The
   operation which returned the error is identified via lrde_opnum.
   Finally, the NFS error value (nfsstat4) encountered is provided via
   lrde_status and may consist of the following error codes:

   NFS4_OK:  No issues were found for this device.

   NFS4ERR_NXIO:  The client was unable to establish any communication
      with the DS.

   NFS4ERR_*:  The client was able to establish communication with the
      DS and is returning one of the allowed error codes for the
      operation denoted by lrde_opnum.
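
   As a non-normative illustration, the fragment below sketches how a
   client implementation might record such an error for a data server
   it could not reach, before XDR-encoding the report into the lrf_body
   field of LAYOUTRETURN.  The simplified typedefs and the helper name
   report_unreachable_ds() are assumptions made only for this example
   and are not part of the protocol.

    /*
     * Illustrative sketch only.  The typedefs below are simplified
     * stand-ins for the types generated from the NFSv4.1/NFSv4.2 XDR.
     */
    #include <stdint.h>
    #include <string.h>

    typedef uint8_t  deviceid4[16];  /* opaque deviceid4[16] in the XDR */
    typedef uint32_t nfsstat4;       /* enum nfsstat4 in the real XDR   */
    typedef uint32_t nfs_opnum4;     /* enum nfs_opnum4 in the real XDR */

    #define NFS4ERR_NXIO 6           /* value from the NFSv4.1 spec     */
    #define OP_READ      25          /* value from the NFSv4.1 spec     */

    struct layoutreturn_device_error4 {
            deviceid4  lrde_deviceid;
            nfsstat4   lrde_status;
            nfs_opnum4 lrde_opnum;
    };

    /* Record that READs to one data server failed with NFS4ERR_NXIO. */
    static void
    report_unreachable_ds(struct layoutreturn_device_error4 *err,
                          const deviceid4 dev)
    {
            memcpy(err->lrde_deviceid, dev, sizeof(deviceid4));
            err->lrde_status = NFS4ERR_NXIO;
            err->lrde_opnum  = OP_READ;
            /*
             * The caller would place an array of such entries in a
             * layoutreturn_error_report4 and XDR-encode it as the
             * lrf_body of a LAYOUTRETURN4_FILE return.
             */
    }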

2.2.4.  IMPLEMENTATION

   The following text is added to the end of the LAYOUTRETURN operation
   IMPLEMENTATION in section 18.44.4 of [2].

   A client that expects to use pNFS for a mounted filesystem SHOULD
   check for pNFS support at mount time.  This check SHOULD be performed
   by sending a GETDEVICELIST operation, followed by layout-type-
   specific checks for accessibility of each storage device returned by
   GETDEVICELIST.  If the NFS server does not support pNFS, the
   GETDEVICELIST operation will be rejected with an NFS4ERR_NOTSUPP
   error; in this situation it is up to the client to determine whether
   it is acceptable to proceed with NFS-only access.

   Clients are expected to tolerate transient storage device errors, and
   hence clients SHOULD NOT use the LAYOUTRETURN error handling for
   device access problems that may be transient.  The methods by which a
   client decides whether an access problem is transient vs. persistent
   are implementation-specific, but may include retrying I/Os to a data
   server under appropriate conditions.

   When an I/O fails to a storage device, the client SHOULD retry the
   failed I/O via the MDS.  In this situation, before retrying the I/O,
   the client SHOULD return the layout, or the affected portion thereof,
   and SHOULD indicate which storage device or devices was problematic.
   If the client does not do this, the MDS may issue a layout recall
   callback in order to perform the retried I/O.

   The client needs to be cognizant that since this error handling is
   optional in the MDS, the MDS may silently ignore this functionality.
   Also, as the MDS may consider some issues the client reports to be
   expected (see Section 2.1), the client might find it difficult to
   detect an MDS which has not implemented error handling via
   LAYOUTRETURN.

   If an MDS is aware that a storage device is proving problematic to a
   client, the MDS SHOULD NOT include that storage device in any pNFS
   layouts sent to that client.  If the MDS is aware that a storage
   device is affecting many clients, then the MDS SHOULD NOT include
   that storage device in any pNFS layouts sent out.  Clients must still
   be aware that the MDS might not have any choice in using the storage
   device, i.e., there might only be one possible layout for the system.

   Another interesting complication is that for existing files, the MDS
   might have no choice in which storage devices to hand out to clients.
   The MDS might try to restripe a file across a different storage
   device, but clients need to be aware that not all implementations
   have restriping support.

   An MDS SHOULD react to a client return of layouts with errors by not
   using the problematic storage devices in layouts for that client, but
   the MDS is not required to indefinitely retain per-client storage
   device error information.  An MDS is also not required to
   automatically reinstate use of a previously problematic storage
   device; administrative intervention may be required instead.

   A client MAY perform I/O via the MDS even when the client holds a
   layout that covers the I/O; servers MUST support this client
   behavior, and MAY recall layouts as needed to complete I/Os.
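
   The client-side decision flow described above might look roughly
   like the following.  The helper functions are hypothetical
   placeholders for a client's own I/O and layout machinery; only the
   overall flow is meant to match the text.

    /*
     * Hypothetical client-side flow for a failed I/O to a data server.
     * None of the helpers below are defined by the protocol; they
     * stand in for implementation-specific machinery.
     */
    #include <stdbool.h>

    struct ds_io;                      /* an I/O directed at one DS   */

    extern bool error_is_transient(const struct ds_io *io);
    extern bool ds_io_retry(struct ds_io *io);         /* retry at DS */
    extern void layout_return_with_error(struct ds_io *io); /* Sec 2.2 */
    extern void mds_io(struct ds_io *io);          /* redo I/O via MDS */

    static void
    handle_ds_io_error(struct ds_io *io)
    {
            /* Transient problems MUST be handled by the client itself. */
            if (error_is_transient(io) && ds_io_retry(io))
                    return;

            /*
             * Persistent problem: return the affected layout (or the
             * affected portion) with a layoutreturn_error_report4 that
             * identifies the device, then retry the I/O via the MDS.
             */
            layout_return_with_error(io);
            mds_io(io);
    }
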
3.  Sharing change attribute implementation details with NFSv4 clients

3.1.  Abstract

   This document describes an extension to the NFSv4 protocol that
   allows the server to share information about the implementation of
   its change attribute with the client.  The aim is to improve the
   client's ability to determine the order in which parallel updates to
   the same file were processed.

3.2.  Introduction

   Although both the NFSv4 [10] and NFSv4.1 [2] protocols define the
   change attribute as being mandatory to implement, there is little in
   the way of guidance.  The only feature that is mandated by the spec
   is that the value must change whenever the file data or metadata
   change.

   While this allows for a wide range of implementations, it also
   leaves the client with a conundrum: how does it determine which is
   the most recent value for the change attribute in a case where
   several RPC calls have been issued in parallel?  In other words, if
   two COMPOUNDs, both containing WRITE and GETATTR requests for the
   same file, have been issued in parallel, how does the client
   determine which of the two change attribute values returned in the
   replies to the GETATTR requests corresponds to the most recent state
   of the file?  In some cases, the only recourse may be to send
   another COMPOUND containing a third GETATTR that is fully serialised
   with the first two.

   In order to avoid this kind of inefficiency, we propose a method to
   allow the server to share details about how the change attribute is
   expected to evolve, so that the client may immediately determine
   which, out of the several change attribute values returned by the
   server, is the most recent.

3.3.  Definition of the 'change_attr_type' per-file system attribute

   enum change_attr_typeinfo {
              NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR         = 0,
              NFS4_CHANGE_TYPE_IS_VERSION_COUNTER        = 1,
              NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS = 2,
              NFS4_CHANGE_TYPE_IS_TIME_METADATA          = 3,
              NFS4_CHANGE_TYPE_IS_UNDEFINED              = 4
   };

        +------------------+----+---------------------------+-----+
        | Name             | Id | Data Type                 | Acc |
        +------------------+----+---------------------------+-----+
        | change_attr_type | XX | enum change_attr_typeinfo | R   |
        +------------------+----+---------------------------+-----+

   The proposed solution is to enable the NFS server to provide
   additional information about how it expects the change attribute
   value to evolve after the file data or metadata has changed.  To do
   so, we define a new recommended attribute, 'change_attr_type', which
   may take values from enum change_attr_typeinfo as follows:

   NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR:  The change attribute value MUST
      monotonically increase for every atomic change to the file
      attributes, data or directory contents.

   NFS4_CHANGE_TYPE_IS_VERSION_COUNTER:  The change attribute value MUST
      be incremented by one unit for every atomic change to the file
      attributes, data or directory contents.  This property is
      preserved when writing to pNFS data servers.

   NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS:  The change attribute
      value MUST be incremented by one unit for every atomic change to
      the file attributes, data or directory contents.  In the case
      where the client is writing to pNFS data servers, the number of
      increments is not guaranteed to exactly match the number of
      writes.

   NFS4_CHANGE_TYPE_IS_TIME_METADATA:  The change attribute is
      implemented as suggested in the NFSv4 spec [10] in terms of the
      time_metadata attribute.

   NFS4_CHANGE_TYPE_IS_UNDEFINED:  The change attribute does not take
      values that fit into any of these categories.

   If either NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR,
   NFS4_CHANGE_TYPE_IS_VERSION_COUNTER, or
   NFS4_CHANGE_TYPE_IS_TIME_METADATA are set, then the client knows at
   the very least that the change attribute is monotonically increasing,
   which is sufficient to resolve the question of which value is the
   most recent.

   If the client sees the value NFS4_CHANGE_TYPE_IS_TIME_METADATA, then
   by inspecting the value of the 'time_delta' attribute it additionally
   has the option of detecting rogue server implementations that use
   time_metadata in violation of the spec.

   Finally, if the client sees NFS4_CHANGE_TYPE_IS_VERSION_COUNTER, it
   has the ability to predict what the resulting change attribute value
   should be after a COMPOUND containing a SETATTR, WRITE, or CREATE.
   This again allows it to detect changes made in parallel by another
   client.  The value NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS permits
   the same, but only if the client is not doing pNFS WRITEs.
4.  NFS Server-side Copy

4.1.  Introduction

   This document describes a server-side copy feature for the NFS
   protocol.

   The server-side copy feature provides a mechanism for the NFS client
   to perform a file copy on the server without the data being
   transmitted back and forth over the network.

   Without this feature, an NFS client copies data from one location to
   another by reading the data from the server over the network, and
   then writing the data back over the network to the server.  Using
   this server-side copy operation, the client is able to instruct the
   server to copy the data locally without the data being sent back and
   forth over the network unnecessarily.

   In general, this feature is useful whenever data is copied from one
   location to another on the server.  It is particularly useful when
   copying the contents of a file from a backup.  Backup-versions of a
   file are copied for a number of reasons, including restoring and
   cloning data.

   If the source object and destination object are on different file
   servers, the file servers will communicate with one another to
   perform the copy operation.  The server-to-server protocol by which
   this is accomplished is not defined in this document.

4.2.  Protocol Overview

   The server-side copy offload operations support both intra-server
   and inter-server file copies.  An intra-server copy is a copy in
   which the source file and destination file reside on the same
   server.  In an inter-server copy, the source file and destination
   file are on different servers.  In both cases, the copy may be
   performed synchronously or asynchronously.

   Throughout the rest of this document, we refer to the NFS server
   containing the source file as the "source server" and the NFS server
   to which the file is transferred as the "destination server".  In
   the case of an intra-server copy, the source server and destination
   server are the same server.  Therefore in the context of an intra-
   server copy, the terms source server and destination server refer to
   the single server performing the copy.

   The operations described below are designed to copy files.  Other
   file system objects can be copied by building on these operations or
   using other techniques.  For example if the user wishes to copy a
   directory, the client can synthesize a directory copy by first
   creating the destination directory and then copying the source
   directory's files to the new destination directory.  If the user
   wishes to copy a namespace junction [12] [13], the client can use
   the ONC RPC Federated Filesystem protocol [13] to perform the copy.
   Specifically the client can determine the source junction's
   attributes using the FEDFS_LOOKUP_FSN procedure and create a
   duplicate junction using the FEDFS_CREATE_JUNCTION procedure.

   For the inter-server copy protocol, the operations are defined to be
   compatible with a server-to-server copy protocol in which the
   destination server reads the file data from the source server.  This
   model in which the file data is pulled from the source by the
   destination has a number of advantages over a model in which the
   source pushes the file data to the destination.  The advantages of
   the pull model include:

   o  The pull model only requires a remote server (i.e., the
      destination server) to be granted read access.  A push model
      requires a remote server (i.e., the source server) to be granted
      write access, which is more privileged.

   o  The pull model allows the destination server to stop reading if
      it has run out of space.  In a push model, the destination server
      must flow control the source server in this situation.

   o  The pull model allows the destination server to easily flow
      control the data stream by adjusting the size of its read
      operations.  In a push model, the destination server does not
      have this ability.  The source server in a push model is capable
      of writing chunks larger than the destination server has
      requested in attributes and session parameters.  In theory, the
      destination server could perform a "short" write in this
      situation, but this approach is known to behave poorly in
      practice.

   The following operations are provided to support server-side copy:

   COPY_NOTIFY:  For inter-server copies, the client sends this
      operation to the source server to notify it of a future file copy
      from a given destination server for the given user.

   COPY_REVOKE:  Also for inter-server copies, the client sends this
      operation to the source server to revoke permission to copy a
      file for the given user.

   COPY:  Used by the client to request a file copy.

   COPY_ABORT:  Used by the client to abort an asynchronous file copy.

   COPY_STATUS:  Used by the client to poll the status of an
      asynchronous file copy.

   CB_COPY:  Used by the destination server to report the results of an
      asynchronous file copy to the client.

   These operations are described in detail in Section 4.3.  This
   section provides an overview of how these operations are used to
   perform server-side copies.

4.2.1.  Intra-Server Copy

   To copy a file on a single server, the client uses a COPY operation.
   The server may respond to the copy operation with the final results
   of the copy or it may perform the copy asynchronously and deliver
   the results using a CB_COPY operation callback.  If the copy is
   performed asynchronously, the client may poll the status of the copy
   using COPY_STATUS or cancel the copy using COPY_ABORT.

   A synchronous intra-server copy is shown in Figure 1.  In this
   example, the NFS server chooses to perform the copy synchronously.
   The copy operation is completed, either successfully or
   unsuccessfully, before the server replies to the client's request.
   The server's reply contains the final result of the operation.

     Client                                  Server
        +                                      +
        |                                      |
        |--- COPY ---------------------------->| Client requests
        |<------------------------------------/| a file copy
        |                                      |
        |                                      |

                Figure 1: A synchronous intra-server copy.

   An asynchronous intra-server copy is shown in Figure 2.  In this
   example, the NFS server performs the copy asynchronously.  The
   server's reply to the copy request indicates that the copy operation
   was initiated and the final result will be delivered at a later
   time.  The server's reply also contains a copy stateid.  The client
   may use this copy stateid to poll for status information (as shown)
   or to cancel the copy using a COPY_ABORT.  When the server completes
   the copy, the server performs a callback to the client and reports
   the results.

     Client                                  Server
        +                                      +
        |                                      |
        |--- COPY ---------------------------->| Client requests
        |<------------------------------------/| a file copy
        |                                      |
        |                                      |
        |--- COPY_STATUS --------------------->| Client may poll
        |<------------------------------------/| for status
        |                                      |
        |                  .                   | Multiple COPY_STATUS
        |                  .                   | operations may be sent.
        |                  .                   |
        |                                      |
        |<-- CB_COPY --------------------------| Server reports results
        |\------------------------------------>|
        |                                      |

               Figure 2: An asynchronous intra-server copy.
4.2.2.  Inter-Server Copy

   A copy may also be performed between two servers.  The copy protocol
   is designed to accommodate a variety of network topologies.  As shown
   in Figure 3, the client and servers may be connected by multiple
   networks.  In particular, the servers may be connected by a
   specialized, high speed network (network 192.168.33.0/24 in the
   diagram) that does not include the client.  The protocol allows the
   client to setup the copy between the servers (over network
   10.11.78.0/24 in the diagram) and for the servers to communicate on
   the high speed network if they choose to do so.

                             192.168.33.0/24
                 +-------------------------------------+
                 |                                     |
                 |                                     |
                 | 192.168.33.18                       | 192.168.33.56
         +-------+------+                       +------+------+
         |     Source   |                       | Destination |
         +-------+------+                       +------+------+
                 | 10.11.78.18                         | 10.11.78.56
                 |                                     |
                 |                                     |
                 |             10.11.78.0/24           |
                 +------------------+------------------+
                                    |
                                    |
                                    | 10.11.78.243
                              +-----+-----+
                              |   Client  |
                              +-----------+

            Figure 3: An example inter-server network topology.

   For an inter-server copy, the client notifies the source server that
   a file will be copied by the destination server using a COPY_NOTIFY
   operation.  The client then initiates the copy by sending the COPY
   operation to the destination server.  The destination server may
   perform the copy synchronously or asynchronously.

   A synchronous inter-server copy is shown in Figure 4.  In this case,
   the destination server chooses to perform the copy before responding
   to the client's COPY request.

   An asynchronous copy is shown in Figure 5.  In this case, the
   destination server chooses to respond to the client's COPY request
   immediately and then perform the copy asynchronously.

      Client                Source         Destination
         +                    +                 +
         |                    |                 |
         |--- COPY_NOTIFY --->|                 |
         |<------------------/|                 |
         |                    |                 |
         |                    |                 |
         |--- COPY ---------------------------->|
         |                    |                 |
         |                    |                 |
         |                    |<----- read -----|
         |                    |\--------------->|
         |                    |                 |
         |                    |        .        | Multiple reads may
         |                    |        .        | be necessary
         |                    |        .        |
         |                    |                 |
         |                    |                 |
         |<------------------------------------/| Destination replies
         |                    |                 | to COPY

                 Figure 4: A synchronous inter-server copy.

      Client                Source         Destination
         +                    +                 +
         |                    |                 |
         |--- COPY_NOTIFY --->|                 |
         |<------------------/|                 |
         |                    |                 |
         |                    |                 |
         |--- COPY ---------------------------->|
         |<------------------------------------/|
         |                    |                 |
         |                    |                 |
         |                    |<----- read -----|
         |                    |\--------------->|
         |                    |                 |
         |                    |        .        | Multiple reads may
         |                    |        .        | be necessary
         |                    |        .        |
         |                    |                 |
         |                    |                 |
         |--- COPY_STATUS --------------------->| Client may poll
         |<------------------------------------/| for status
         |                    |                 |
         |                    |        .        | Multiple COPY_STATUS
         |                    |        .        | operations may be sent
         |                    |        .        |
         |                    |                 |
         |                    |                 |
         |                    |                 |
         |<-- CB_COPY --------------------------| Destination reports
         |\------------------------------------>| results
         |                    |                 |

                Figure 5: An asynchronous inter-server copy.
4.2.3.  Server-to-Server Copy Protocol

   During an inter-server copy,

   CB_COPY:  Used by the destination server reads to report the results of an
      asynchronous file
   data from copy to the source server.  The source server and destination
   server client.

   These operations are not required described in detail in Section 4.3.  This
   section provides an overview of how these operations are used to use
   perform server-side copies.

4.2.1.  Intra-Server Copy

   To copy a file on a single server, the client uses a COPY operation.
   The server may respond to the copy operation with the final results
   of the copy or it may perform the copy asynchronously and deliver
   the results using a CB_COPY operation callback.  If the copy is
   performed asynchronously, the client may poll the status of the copy
   using COPY_STATUS or cancel the copy using COPY_ABORT.

   A synchronous intra-server copy is shown in Figure 1.  In this
   example, the NFS server chooses to perform the copy synchronously.
   The copy operation is completed, either successfully or
   unsuccessfully, before the server replies to the client's request.
   The server's reply contains the final result of the operation.

     Client                                  Server
        +                                      +
        |                                      |
        |--- COPY ---------------------------->| Client requests
        |<------------------------------------/| a file to copy
        |                                      |
        |                                      |

                Figure 1: A synchronous intra-server copy.

   An asynchronous intra-server copy is shown in Figure 2.  In this
   example, the NFS server performs the copy asynchronously.  The
   server's reply to the copy request indicates that the copy operation
   was initiated and the final result will be delivered at a later
   time.  The server's reply also contains a copy stateid.  The client
   may use this copy stateid to poll for status information (as shown)
   or to cancel the copy using COPY_ABORT.  When the server completes
   the copy, the server performs a callback to the client and reports
   the results.

     Client                                  Server
        +                                      +
        |                                      |
        |--- COPY ---------------------------->| Client requests
        |<------------------------------------/| a file copy
        |                                      |
        |                                      |
        |--- COPY_STATUS --------------------->| Client may poll
        |<------------------------------------/| for status
        |                                      |
        |                  .                   | Multiple COPY_STATUS
        |                  .                   | operations may be sent.
        |                  .                   |
        |                                      |
        |<-- CB_COPY --------------------------| Server reports results
        |\------------------------------------>|
        |                                      |

               Figure 2: An asynchronous intra-server copy.
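
   The exchange in Figure 2 can be summarized with a short,
   non-normative sketch.  The Python below only simulates the message
   flow; the Server class and its copy() and copy_status() methods are
   hypothetical stand-ins, not NFS interfaces.

      # Illustrative simulation of the asynchronous intra-server copy
      # flow in Figure 2.  All names here are hypothetical; only the
      # ordering (COPY, then COPY_STATUS polling) mirrors the figure.
      import threading, time, uuid

      class Server:
          def __init__(self):
              self.jobs = {}

          def copy(self, src, dst):
              # Reply immediately with a copy stateid; finish later.
              stateid = str(uuid.uuid4())
              self.jobs[stateid] = {"done": False, "bytes": 0}
              threading.Thread(target=self._do_copy, args=(stateid,)).start()
              return stateid

          def _do_copy(self, stateid):
              time.sleep(0.1)                        # pretend to copy data
              self.jobs[stateid] = {"done": True, "bytes": 4096}

          def copy_status(self, stateid):            # COPY_STATUS analogue
              return self.jobs[stateid]

      server = Server()
      sid = server.copy("file1", "file2")            # COPY analogue
      while not server.copy_status(sid)["done"]:     # client polls
          time.sleep(0.05)
      print("copy finished:", server.copy_status(sid))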

4.2.2.  Inter-Server Copy

   A copy may also be performed between two servers.  The copy protocol
   is designed to accommodate a variety of network topologies.  As
   shown in Figure 3, the client and servers may be connected by
   multiple networks.  In particular, the servers may be connected by a
   specialized, high speed network (network 192.168.33.0/24 in the
   diagram) that does not include the client.  The protocol allows the
   client to setup the copy between the servers (over network
   10.11.78.0/24 in the diagram) and for the servers to communicate on
   the high speed network if they choose to do so.

                             192.168.33.0/24
                 +-------------------------------------+
                 |                                     |
                 |                                     |
                 | 192.168.33.18                       | 192.168.33.56
         +-------+------+                       +------+------+
         |     Source   |                       | Destination |
         +-------+------+                       +------+------+
                 | 10.11.78.18                         | 10.11.78.56
                 |                                     |
                 |                                     |
                 |             10.11.78.0/24           |
                 +------------------+------------------+
                                    |
                                    |
                                    | 10.11.78.243
                              +-----+-----+
                              |   Client  |
                              +-----------+

            Figure 3: An example inter-server network topology.

   For an inter-server copy, the client notifies the source server that
   a file will be copied by the destination server using a COPY_NOTIFY
   operation.  The client then initiates the copy by sending the COPY
   operation to the destination server.  The destination server may
   perform the copy synchronously or asynchronously.

   A synchronous inter-server copy is shown in Figure 4.  In this case,
   the destination server chooses to perform the copy before responding
   to the client's COPY request.

   An asynchronous copy is shown in Figure 5.  In this case, the
   destination server chooses to respond to the client's COPY request
   immediately and then perform the copy asynchronously.

     Client                Source         Destination
        +                    +                 +
        |                    |                 |
        |--- COPY_NOTIFY --->|                 |
        |<------------------/|                 |
        |                    |                 |
        |                    |                 |
        |--- COPY ---------------------------->|
        |                    |                 |
        |                    |                 |
        |                    |<----- read -----|
        |                    |\--------------->|
        |                    |                 |
        |                    |        .        | Multiple reads may
        |                    |        .        | be necessary
        |                    |        .        |
        |                    |                 |
        |                    |                 |
        |<------------------------------------/| Destination replies
        |                    |                 | to COPY

                Figure 4: A synchronous inter-server copy.

     Client                Source         Destination
        +                    +                 +
        |                    |                 |
        |--- COPY_NOTIFY --->|                 |
        |<------------------/|                 |
        |                    |                 |
        |                    |                 |
        |--- COPY ---------------------------->|
        |<------------------------------------/|
        |                    |                 |
        |                    |                 |
        |                    |<----- read -----|
        |                    |\--------------->|
        |                    |                 |
        |                    |        .        | Multiple reads may
        |                    |        .        | be necessary
        |                    |        .        |
        |                    |                 |
        |                    |                 |
        |--- COPY_STATUS --------------------->| Client may poll
        |<------------------------------------/| for status
        |                    |                 |
        |                    |        .        | Multiple COPY_STATUS
        |                    |        .        | operations may be sent
        |                    |        .        |
        |                    |                 |
        |                    |                 |
        |                    |                 |
        |<-- CB_COPY --------------------------| Destination reports
        |\------------------------------------>| results
        |                    |                 |

               Figure 5: An asynchronous inter-server copy.
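
   As an informal illustration of the flow in Figures 4 and 5, the
   sketch below simulates a client driving an inter-server copy: it
   first notifies the source server and then asks the destination
   server to pull the data.  The classes, methods, and file names are
   hypothetical; only the ordering of the steps mirrors the figures.

      # Illustrative simulation of the inter-server copy flow.  These
      # names are made up; they model the message sequence only.
      class SourceServer:
          def __init__(self, files):
              self.files = files
              self.authorized = set()

          def copy_notify(self, filename, destination):   # COPY_NOTIFY
              self.authorized.add((filename, destination))

          def read(self, filename, destination):          # server-to-server read
              assert (filename, destination) in self.authorized
              return self.files[filename]

      class DestinationServer:
          def __init__(self, name):
              self.name = name
              self.files = {}

          def copy(self, source, src_name, dst_name):     # COPY
              self.files[dst_name] = source.read(src_name, self.name)
              return len(self.files[dst_name])

      src = SourceServer({"/export/a": b"hello world"})
      dst = DestinationServer("dest.example.com")
      src.copy_notify("/export/a", "dest.example.com")    # client -> source
      copied = dst.copy(src, "/export/a", "/export/b")    # client -> destination
      print("bytes copied:", copied)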

4.2.3.  Server-to-Server Copy Protocol

   During an inter-server copy, the destination server reads the file
   data from the source server.  The source server and destination
   server are not required to use a specific protocol to transfer the
   file data.  The choice of what protocol to use is ultimately the
   destination server's decision.

4.2.3.1.  Using NFSv4.x as a Server-to-Server Copy Protocol

   The destination server MAY use standard NFSv4.x (where x >= 1) to
   read the data from the source server.  If NFSv4.x is used for the
   server-to-server copy protocol, the destination server can use the
   filehandle contained in the COPY request with standard NFSv4.x
   operations to read data from the source server.  Specifically, the
   destination server may use the NFSv4.x OPEN operation's CLAIM_FH
   facility to open the file being copied and obtain an open stateid.
   Using the stateid, the destination server may then use NFSv4.x READ
   operations to read the file.
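
   As a rough, non-normative illustration of the sequence just
   described, the sketch below simply lays out the NFSv4.x operation
   sequence a destination server might issue; the tuples are
   descriptive placeholders invented for this example, not a real NFS
   encoding or API.

      # Illustrative only: describe the NFSv4.x COMPOUNDs a destination
      # server might send to read the source file via CLAIM_FH + READ.
      def build_read_compounds(source_fh, file_size, chunk=1048576):
          compounds = []
          # One COMPOUND to open by filehandle and obtain an open stateid.
          compounds.append([("PUTFH", source_fh),
                            ("OPEN", "CLAIM_FH, OPEN4_SHARE_ACCESS_READ")])
          # Then as many READ COMPOUNDs as needed to cover the file.
          offset = 0
          while offset < file_size:
              count = min(chunk, file_size - offset)
              compounds.append([("PUTFH", source_fh),
                                ("READ", (offset, count))])
              offset += count
          compounds.append([("PUTFH", source_fh), ("CLOSE", None)])
          return compounds

      for ops in build_read_compounds(b"\x01\x02\x03\x04", 3 * 1048576):
          print(ops)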

4.2.3.2.  Using an alternative Server-to-Server Copy Protocol

   In a homogeneous environment, the source and destination servers
   might be able to perform the file copy extremely efficiently using
   specialized protocols.  For example the source and destination
   servers might be two nodes sharing a common file system format for
   the source and destination file systems.  Thus the source and
   destination are in an ideal position to efficiently render the image
   of the source file to the destination file by replicating the file
   system formats at the block level.  Another possibility is that the
   source and destination might be two nodes sharing a common storage
   area network, and thus there is no need to copy any data at all, and
   instead ownership of the file and its contents might simply be re-
   assigned to the destination.  To allow for these possibilities, the
   destination server is allowed to use a server-to-server copy
   protocol of its choice.

   In a heterogeneous environment, using a protocol other than NFSv4.x
   (e.g., HTTP [14] or FTP [15]) presents some challenges.  In
   particular, the destination server is presented with the challenge
   of accessing the source file given only an NFSv4.x filehandle.

   One option for protocols that identify source files with path names
   is to use an ASCII hexadecimal representation of the source
   filehandle as the file name.

   Another option for the source server is to use URLs to direct the
   destination server to a specialized service.  For example, the
   response to COPY_NOTIFY could include the URL
   ftp://s1.example.com:9999/_FH/0x12345, where 0x12345 is the ASCII
   hexadecimal representation of the source filehandle.  When the
   destination server receives the source server's URL, it would use
   "_FH/0x12345" as the file name to pass to the FTP server listening
   on port 9999 of s1.example.com.  On port 9999 there would be a
   special instance of the FTP service that understands how to convert
   NFS filehandles to an open file descriptor (in many operating
   systems, this would require a new system call, one which is the
   inverse of the makefh() function that the pre-NFSv4 MOUNT service
   needs).

   Authenticating and identifying the destination server to the source
   server is also a challenge.  Recommendations for how to accomplish
   this are given in Section 4.4.1.2.4 and Section 4.4.1.4.
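
   The "_FH/<hex>" naming convention above can be illustrated with a
   short, purely informational sketch; the filehandle value below is
   made up and the helper is not part of the protocol.

      # Illustrative only: encode an opaque NFS filehandle as the ASCII
      # hexadecimal file name and URL form described above.
      def fh_to_name(filehandle: bytes) -> str:
          return "_FH/0x" + filehandle.hex()

      fh = bytes([0x01, 0x23, 0x45])     # hypothetical filehandle value
      name = fh_to_name(fh)
      url = "ftp://s1.example.com:9999/" + name
      print(name)                        # _FH/0x012345
      print(url)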

4.3.  Operations

   In the sections that follow, several operations are defined that
   together provide the server-side copy feature.  These operations are
   intended to be OPTIONAL operations as defined in section 17 of [2].
   The COPY_NOTIFY, COPY_REVOKE, COPY, COPY_ABORT, and COPY_STATUS
   operations are designed to be sent within an NFSv4 COMPOUND
   procedure.  The CB_COPY operation is designed to be sent within an
   NFSv4 CB_COMPOUND procedure.

   Each operation is performed in the context of the user identified by
   the ONC RPC credential of its containing COMPOUND or CB_COMPOUND
   request.  For example, a COPY_ABORT operation issued by a given user
   indicates that a specified COPY operation initiated by the same user
   be canceled.  Therefore a COPY_ABORT MUST NOT interfere with a copy
   of the same file initiated by another user.

   An NFS server MAY allow an administrative user to monitor or cancel
   copy operations using an implementation specific interface.

4.3.1.  netloc4 - Network Locations

   The server-side copy operations specify network locations using the
   netloc4 data type shown below:

   enum netloc_type4 {
           NL4_NAME        = 0,
           NL4_URL         = 1,
           NL4_NETADDR     = 2
   };
   union netloc4 switch (netloc_type4 nl_type) {
           case NL4_NAME:          utf8str_cis nl_name;
           case NL4_URL:           utf8str_cis nl_url;
           case NL4_NETADDR:       netaddr4    nl_addr;
   };

   If the netloc4 is of type NL4_NAME, the nl_name field MUST be
   specified as a UTF-8 string.  The nl_name is expected to be resolved
   to a network address via DNS, LDAP, NIS, /etc/hosts, or some other
   means.  If the netloc4 is of type NL4_URL, a server URL [5]
   appropriate for the server-to-server copy operation is specified as
   a UTF-8 string.  If the netloc4 is of type NL4_NETADDR, the nl_addr
   field MUST contain a valid netaddr4 as defined in Section 3.3.9 of
   [2].

   When netloc4 values are used for an inter-server copy as shown in
   Figure 3, their values may be evaluated on the source server,
   destination server, and client.  The network environment in which
   these systems operate should be configured so that the netloc4
   values are interpreted as intended on each system.
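
   As a non-normative illustration of how an implementation might act
   on the three netloc4 forms, consider the Python sketch below; the
   Netloc type, the resolve() helper, and the example URL are
   hypothetical and exist only for this example.

      # Illustrative only: map the three netloc4 arms onto values an
      # implementation could use.  NL4_NAME is resolved like a host
      # name, NL4_URL stays a URL, NL4_NETADDR is already an address.
      import socket
      from dataclasses import dataclass

      NL4_NAME, NL4_URL, NL4_NETADDR = 0, 1, 2

      @dataclass
      class Netloc:
          nl_type: int
          value: str                 # name, URL, or address string

      def resolve(netloc):
          if netloc.nl_type == NL4_NAME:
              # DNS (or another naming service) resolves the name.
              return socket.getaddrinfo(netloc.value, 2049)
          if netloc.nl_type == NL4_URL:
              return netloc.value    # hand the URL to a protocol handler
          if netloc.nl_type == NL4_NETADDR:
              return netloc.value    # already a usable network address
          raise ValueError("unknown netloc4 type")

      print(resolve(Netloc(NL4_URL, "ftp://s1.example.com:9999/_FH/0x012345")))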

4.3.2.  Copy Offload Stateids

   A server may perform a copy offload operation asynchronously.  An
   asynchronous copy is tracked using a copy offload stateid.  Copy
   offload stateids are included in the COPY, COPY_ABORT, COPY_STATUS,
   and CB_COPY operations.

   Section 8.2.4 of [2] specifies that stateids are valid until either
   (A) the client or server restart or (B) the client returns the
   resource.

   A copy offload stateid will be valid until either (A) the client or
   server restart or (B) the client returns the resource by issuing a
   COPY_ABORT operation or the client replies to a CB_COPY operation.

   A copy offload stateid's seqid MUST NOT be 0 (zero).  In the context
   of a copy offload operation, it is ambiguous to indicate the most
   recent copy offload operation using a stateid with a seqid of 0
   (zero).  Therefore a copy offload stateid with seqid of 0 (zero)
   MUST be considered invalid.

4.3.3.  Operation 61: COPY_NOTIFY - Notify a source server of a future
        copy

4.3.3.1.  ARGUMENT

   struct COPY_NOTIFY4args {
           /* CURRENT_FH: source file */
           netloc4         cna_destination_server;
   };

4.3.3.2.  RESULT

   struct COPY_NOTIFY4resok {
           nfstime4        cnr_lease_time;
           netloc4         cnr_source_server<>;
   };

   union COPY_NOTIFY4res switch (nfsstat4 cnr_status) {
           case NFS4_OK:
                   COPY_NOTIFY4resok       resok4;
           default:
                   void;
   };

4.3.3.3.  DESCRIPTION

   This operation is used for an inter-server copy.  A client sends
   this operation in a COMPOUND request to the source server to
   authorize a destination server identified by cna_destination_server
   to read the file specified by CURRENT_FH on behalf of the given
   user.

   The cna_destination_server MUST be specified using the netloc4
   network location format.  The source server is not required to
   resolve the cna_destination_server address before completing this
   operation.

   If this operation succeeds, the source server will allow the
   cna_destination_server to copy the specified file on behalf of the
   given user.  If COPY_NOTIFY succeeds, the destination server is
   granted permission to read the file as long as both of the following
   conditions are met:

   o  The destination server begins reading the source file before the
      cnr_lease_time expires.  If the cnr_lease_time expires while the
      destination server is still reading the source file, the
      destination server is allowed to finish reading the file.

   o  The client has not issued a COPY_REVOKE for the same combination
      of user, filehandle, and destination server.

   The cnr_lease_time is chosen by the source server.  A cnr_lease_time
   of 0 (zero) indicates an infinite lease.  To renew the copy lease
   time the client should resend the same copy notification request to
   the source server.

   To avoid the need for synchronized clocks, copy lease times are
   granted by the server as a time delta.  However, there is a
   requirement that the client and server clocks do not drift
   excessively over the duration of the lease.  There is also the issue
   of propagation delay across the network which could easily be
   several hundred milliseconds as well as the possibility that
   requests will be lost and need to be retransmitted.

   To take propagation delay into account, the client should subtract
   it from copy lease times (e.g. if the client estimates the one-way
   propagation delay as 200 milliseconds, then it can assume that the
   lease is already 200 milliseconds old when it gets it).  In
   addition, it will take another 200 milliseconds to get a response
   back to the server.  So the client must send a lease renewal or send
   the copy offload request to the cna_destination_server at least 400
   milliseconds before the copy lease would expire.  If the propagation
   delay varies over the life of the lease (e.g. the client is on a
   mobile host), the client will need to continuously subtract the
   increase in propagation delay from the copy lease times.

   The server's copy lease period configuration should take into
   account the network distance of the clients that will be accessing
   the server's resources.  It is expected that the lease period will
   take into account the network propagation delays and other network
   delay factors for the client population.  Since the protocol does
   not allow for an automatic method to determine an appropriate copy
   lease period, the server's administrator may have to tune the copy
   lease period.

   A successful response will also contain a list of names, addresses,
   and URLs called cnr_source_server, on which the source is willing to
   accept connections from the destination.  These might not be
   reachable from the client and might be located on networks to which
   the client has no connection.

   If the client wishes to perform an inter-server copy, the client
   MUST send a COPY_NOTIFY to the source server.  Therefore, the source
   server MUST support COPY_NOTIFY.

   For a copy only involving one server (the source and destination are
   on the same server), this operation is unnecessary.

   The COPY_NOTIFY operation may fail for the following reasons (this
   is a partial list):

   NFS4ERR_MOVED:  The file system which contains the source file is
      not present on the source server.  The client can determine the
      correct location and reissue the operation with the correct
      location.

   NFS4ERR_NOTSUPP:  The copy offload operation is not supported by the
      NFS server receiving this request.

   NFS4ERR_WRONGSEC:  The security mechanism being used by the client
      does not match the server's security policy.

4.3.4.  Operation 62: COPY_REVOKE - Revoke a destination server's copy
        privileges

4.3.4.1.  ARGUMENT

   struct COPY_REVOKE4args {
           /* CURRENT_FH: source file */
           netloc4         cra_destination_server;
   };

4.3.4.2.  RESULT

   struct COPY_REVOKE4res {
           nfsstat4        crr_status;
   };

4.3.4.3.  DESCRIPTION

   This operation is used for an inter-server copy.  A client sends
   this operation in a COMPOUND request to the source server to revoke
   the authorization of a destination server identified by
   cra_destination_server from reading the file specified by CURRENT_FH
   on behalf of the given user.  If the cra_destination_server has
   already begun copying the file, a successful return from this
   operation indicates that further access will be prevented.

   The cra_destination_server MUST be specified using the netloc4
   network location format.  The source server is not required to
   resolve the cra_destination_server address before completing this
   operation.

   The COPY_REVOKE operation is useful in situations in which the
   source server granted a very long or infinite lease on the
   destination server's ability to read the source file and all copy
   operations on the source file have been completed.

   For a copy only involving one server (the source and destination are
   on the same server), this operation is unnecessary.

   If the server supports COPY_NOTIFY, the server is REQUIRED to
   support the COPY_REVOKE operation.

   The COPY_REVOKE operation may fail for the following reasons (this
   is a partial list):

   NFS4ERR_MOVED:  The file system which contains the source file is
      not present on the source server.  The client can determine the
      correct location and reissue the operation with the correct
      location.

   NFS4ERR_NOTSUPP:  The copy offload operation is not supported by the
      NFS server receiving this request.

4.4.  Security Considerations

   The security considerations pertaining to NFSv4 [10] apply to this
   document.

4.3.5.  Operation 59: COPY - Initiate a server-side copy

4.3.5.1.  ARGUMENT

   const COPY4_GUARDED     = 0x00000001;
   const COPY4_METADATA    = 0x00000002;

   struct COPY4args {
           /* SAVED_FH: source file */
           /* CURRENT_FH: destination file or */
           /*             directory           */
           offset4         ca_src_offset;
           offset4         ca_dst_offset;
           length4         ca_count;
           uint32_t        ca_flags;
           component4      ca_destination;
           netloc4         ca_source_server<>;
   };

4.3.5.2.  RESULT

   union COPY4res switch (nfsstat4 cr_status) {
           case NFS4_OK:
                   stateid4        cr_callback_id<1>;
           default:
                   length4         cr_bytes_copied;
   };
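
   As a non-normative illustration of how a client might interpret
   COPY4res, the sketch below distinguishes the synchronous,
   asynchronous, and failure cases; the helper function is hypothetical
   and only the field names mirror the XDR above.

      # Illustrative only: interpret a COPY result.  cr_status,
      # cr_callback_id, and cr_bytes_copied correspond to the XDR above.
      def interpret_copy_result(cr_status, cr_callback_id=None,
                                cr_bytes_copied=None):
          if cr_status != 0:                 # not NFS4_OK
              # cr_bytes_copied says how many bytes were copied before
              # the failure, not which ones.
              return ("failed", cr_status, cr_bytes_copied)
          if cr_callback_id:                 # copy stateid returned
              # Asynchronous: a CB_COPY callback will deliver the final
              # result; the stateid is usable with COPY_STATUS/COPY_ABORT.
              return ("async", cr_callback_id[0])
          return ("sync-complete",)

      print(interpret_copy_result(0, cr_callback_id=["stateid-1"]))
      print(interpret_copy_result(0))
      print(interpret_copy_result(13, cr_bytes_copied=4096))  # made-up error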

4.3.5.3.  DESCRIPTION

   The COPY operation is standard security mechanisms provide by NFSv4 [10] may be used for both intra- and inter-server copies.
   In both cases, the COPY is always sent from the client to
   secure the
   destination server of the file copy.  The COPY operation requests
   that a file be copied from protocol described in this document.

   NFSv4 clients and servers supporting the location specified by the SAVED_FH
   value inter-server copy
   operations described in this document are REQUIRED to implement [6],
   including the location specified by the combination of CURRENT_FH RPCSEC_GSSv3 privileges copy_from_auth and
   ca_destination.

   The SAVED_FH must be a regular file.
   copy_to_auth.  If SAVED_FH the server-to-server copy protocol is not a regular
   file, ONC RPC
   based, the operation MUST fail and return NFS4ERR_WRONG_TYPE.

   In order to set SAVED_FH servers are also REQUIRED to implement the source file handle, the compound
   procedure requesting the COPY will include a sub-sequence of
   operations such as

                           PUTFH source-fh
                           SAVEFH

   If the request is RPCSEC_GSSv3
   privilege copy_confirm_auth.  These requirements to implement are not
   requirements to use.  NFSv4 clients and servers are RECOMMENDED to
   use [6] to secure server-side copy operations.

4.4.1.  Inter-Server Copy Security

4.4.1.1.  Requirements for a server-to-server copy, the source-fh Secure Inter-Server Copy

   Inter-server copy is a
   filehandle from driven by several requirements:

   o  The specification MUST NOT mandate an inter-server copy protocol.
      There are many ways to copy data.  Some will be more optimal than
      others depending on the identities of the source server and the compound procedure is being
   executed on the
      destination server.  In this case,  For example the source-fh is source and destination
      servers might be two nodes sharing a
   foreign filehandle on common file system format for
      the server receiving source and destination file systems.  Thus the COPY request.  If
   either PUTFH or SAVEFH checked source and
      destination are in an ideal position to efficiently render the validity
      image of the filehandle, the
   operation would likely fail and return NFS4ERR_STALE.

   In order source file to avoid this problem, the minor version incorporating destination file by replicating
      the
   COPY operations will need to make a few small changes in file system formats at the handling
   of existing operations.  If a server supports block level.  In other cases, the server-to-server
   COPY feature, a PUTFH followed by a SAVEFH MUST NOT return
   NFS4ERR_STALE for either operation.  These restrictions do not pose
   substantial difficulties for servers.  The CURRENT_FH
      source and SAVED_FH
   may destination might be validated in the context two nodes sharing a common storage
      area network, and thus there is no need to copy any data at all,
      and instead ownership of the operation referencing them file and
   an NFS4ERR_STALE error returned its contents simply gets re-
      assigned to the destination.

   o  The specification MUST provide guidance for an invalid file handle at using NFSv4.x as a
      copy protocol.  For those source and destination servers willing
      to use NFSv4.x there are specific security considerations that
   point.
      this specification can and does address.

   o  The CURRENT_FH specification MUST NOT mandate pre-configuration between the
      source and ca_destination together specify destination server.  Requiring that the source and
      destination of first have a "copying relationship" increases the copy operation.  If ca_destination is of 0 (zero) length, then
   CURRENT_FH specifies
      administrative burden.  However the target file.  In this case, CURRENT_FH specification MUST
   be NOT
      preclude implementations that require pre-configuration.

   o  The specification MUST NOT mandate a regular file trust relationship between
      the source and not destination server.  The NFSv4 security model
      requires mutual authentication between a directory.  If ca_destination is not principal on an NFS
      client and a principal on an NFS server.  This model MUST continue
      with the introduction of 0
   (zero) length, COPY.

4.4.1.2.  Inter-Server Copy with RPCSEC_GSSv3

   When the ca_destination argument specifies client sends a COPY_NOTIFY to the file name source server to
   which expect
   the destination to attempt to copy data will be copied within from the directory identified by
   CURRENT_FH.  In source server, it is
   expected that this case, CURRENT_FH MUST be a directory and not a
   regular file.

   If copy is being done on behalf of the file named by ca_destination does not exist and principal
   (called the operation
   completes successfully, "user principal") that sent the file will be visible in RPC request that encloses
   the file system
   namespace.  If COMPOUND procedure that contains the file does not exist and COPY_NOTIFY operation.  The
   user principal is identified by the operation fails, RPC credentials.  A mechanism
   that allows the
   file MAY be visible user principal to authorize the destination server to
   perform the copy in a manner that lets the file system namespace depending on when source server properly
   authenticate the failure occurs destination's copy, and on without allowing the implementation
   destination to exceed its authorization is necessary.

   An approach that sends delegated credentials of the NFS client's user
   principal to the destination server
   receiving is not used for the COPY operation. following
   reasons.  If the ca_destination name cannot be
   created in client's user delegated its credentials, the
   destination file system (due to file name
   restrictions, such would authenticate as case or length), the operation MUST fail.

   The ca_src_offset is user principal.  If the offset within
   destination were using the NFSv4 protocol to perform the copy, then
   the source file from which server would authenticate the
   data will be read, destination server as the ca_dst_offset is
   user principal, and the offset within file copy would securely proceed.  However,
   this approach would allow the destination file server to which the data will be written, and copy other files.
   The user principal would have to trust the ca_count destination server to not
   do so.  This is counter to the number of bytes that will be copied.  An offset requirements, and therefore is not
   considered.  Instead an approach using RPCSEC_GSSv3 [6] privileges is
   proposed.

   One of 0 (zero)
   specifies the start stated applications of the file.  A count of 0 (zero) requests that
   all bytes from ca_src_offset through EOF be copied to proposed RPCSEC_GSSv3 protocol
   is compound client host and user authentication [+ privilege
   assertion].  For inter-server file copy, we require compound NFS
   server host and user authentication [+ privilege assertion].  The
   distinction between the
   destination.  If concurrent modifications to two is one without meaning.

   RPCSEC_GSSv3 introduces the notion of privileges.  We define three
   privileges:

   copy_from_auth:  A user principal is authorizing a source principal
      ("nfs@<source>") to allow a destination principal ("nfs@
      <destination>") to copy a file overlap
   with from the source file region being copied, the data copied may include
   all, some, or none of the modifications.  The client can use standard
   NFS operations (e.g.  OPEN with OPEN4_SHARE_DENY_WRITE or mandatory
   byte range locks) to protect against concurrent modifications if the
   client destination.
      This privilege is concerned about this.  If established on the source file's end of file is
   being modified in parallel with server before the user
      principal sends a copy COPY_NOTIFY operation to the source server.

   struct copy_from_auth_priv {
           secret4             cfap_shared_secret;
           netloc4             cfap_destination;
           /* the NFSv4 user name that specifies a count of 0
   (zero) bytes, the amount user principal maps to */
           utf8str_mixed       cfap_username;
           /* equal to seq_num of data copied rpc_gss_cred_vers_3_t */
           unsigned int        cfap_seq_num;
   };

      cap_shared_secret is implementation dependent
   (clients may guard against this case by specifying a non-zero count secret value or preventing modification of the source user principal generates.

   copy_to_auth:  A user principal is authorizing a destination
      principal ("nfs@<destination>") to allow it to copy a file as mentioned
   above).

   If from
      the source offset or to the source offset plus count destination.  This privilege is greater than
   or equal to established on
      the size of the source file, destination server before the user principal sends a COPY
      operation will fail with
   NFS4ERR_INVAL.  The destination offset or destination offset plus
   count may be greater than the size of to the destination file.  This
   allows for server.

   struct copy_to_auth_priv {
           /* equal to cfap_shared_secret */
           secret4              ctap_shared_secret;
           netloc4              ctap_source;
           /* the client NFSv4 user name that the user principal maps to issue parallel copies */
           utf8str_mixed        ctap_username;
           /* equal to implement
   operations such as "cat file1 file2 file3 file4 > dest".

   If the destination file seq_num of rpc_gss_cred_vers_3_t */
           unsigned int         ctap_seq_num;
   };

      ctap_shared_secret is created as a result of this command, secret value the
   destination file's size will be equal user principal generated
      and was used to establish the number of bytes
   successfully copied.  If copy_from_auth privilege with the
      source principal.

   copy_confirm_auth:  A destination file already existed, principal is confirming with the
   destination file's size may increase as a result of this operation
   (e.g. if ca_dst_offset plus ca_count
      source principal that it is greater than authorized to copy data from the
   destination's initial size).

   If
      source on behalf of the ca_source_server list user principal.  When the inter-server
      copy protocol is specified, then NFSv4, or for that matter, any protocol capable
      of being secured via RPCSEC_GSSv3 (i.e., any ONC RPC protocol),
      this privilege is an inter-
   server copy operation and established before the source file is on a remote server.  The
   client is expected to have previously issued a successful COPY_NOTIFY
   request to copied from the remote
      source server.  The ca_source_server list
   SHOULD be the same as to the COPY_NOTIFY response's cnr_source_server
   list.  If destination.

   struct copy_confirm_auth_priv {
           /* equal to GSS_GetMIC() of cfap_shared_secret */
           opaque              ccap_shared_secret_mic<>;
           /* the client includes NFSv4 user name that the entries from the COPY_NOTIFY
   response's cnr_source_server list in the ca_source_server list, the
   source server can indicate user principal maps to */
           utf8str_mixed       ccap_username;
           /* equal to seq_num of rpc_gss_cred_vers_3_t */
           unsigned int        ccap_seq_num;
   };

4.4.1.2.1.  Establishing a specific copy protocol for Security Context

   When the
   destination server user principal wants to use by returning a URL, which specifies both COPY a
   protocol service and server name.  Server-to-server copy protocol
   considerations are described in Section 4.2.3 file between two servers, if
   it has not established copy_from_auth and Section 4.4.1. copy_to_auth privileges on
   the servers, it establishes them:

   o  The ca_flags argument allows user principal generates a secret it will share with the copy operation to two
      servers.  This shared secret will be customized placed in the following ways using the guarded flag (COPY4_GUARDED)
      cfap_shared_secret and the
   metadata flag (COPY4_METADATA).

   [NOTE: Earlier versions ctap_shared_secret fields of this document defined a
   COPY4_SPACE_RESERVED flag for controlling space reservations on the
   destination file.  This flag has been removed
      appropriate privilege data types, copy_from_auth_priv and
      copy_to_auth_priv.

   o  An instance of copy_from_auth_priv is filled in with the expectation
   that the space_reserve attribute defined in XXX_TDH_XXX will be
   adopted.]

   If shared
      secret, the guarded flag is set destination server, and the destination exists on NFSv4 user id of the server,
   this operation user
      principal.  It will fail be sent with NFS4ERR_EXIST.

   If the guarded flag is not set an RPCSEC_GSS3_CREATE procedure,
      and the destination exists on the
   server, the behavior is implementation dependent.

   If the metadata flag so cfap_seq_num is set and to the client is requesting a whole file
   copy (i.e. ca_count is 0 (zero)), a subset seq_num of the destination file's
   attributes MUST be the same as the source file's corresponding
   attributes and a subset credential of the destination file's attributes SHOULD
   be the same as the source file's corresponding attributes.
      RPCSEC_GSS3_CREATE procedure.  Because cfap_shared_secret is a
      secret, after XDR encoding copy_from_auth_priv, GSS_Wrap() (with
      privacy) is invoked on copy_from_auth_priv.  The
   attributes
      RPCSEC_GSS3_CREATE procedure's arguments are:

      struct {
         rpc_gss3_gss_binding    *compound_binding;
         rpc_gss3_chan_binding   *chan_binding_mic;
         rpc_gss3_assertion      assertions<>;
         rpc_gss3_extension      extensions<>;
      } rpc_gss3_create_args;

      The string "copy_from_auth" is placed in assertions[0].privs.  The
      output of GSS_Wrap() is placed in extensions[0].data.  The field
      extensions[0].critical is set to TRUE.  The source server calls
      GSS_Unwrap() on the MUST and SHOULD copy subsets will be defined for
   each NFS version.

   For NFSv4.1, Table 1 privilege, and Table 2 list verifies that the REQUIRED and RECOMMENDED
   attributes respectively.  A "MUST" in seq_num
      matches the "Copy to destination file?"
   column indicates credential.  It then verifies that the attribute is part NFSv4 user id
      being asserted matches the source server's mapping of the MUST copy set.  A
   "SHOULD" in user
      principal.  If it does, the "Copy privilege is established on the source
      server as: <"copy_from_auth", user id, destination>.  The
      successful reply to destination file?" column indicates RPCSEC_GSS3_CREATE has:

      struct {
         opaque                  handle<>;
         rpc_gss3_chan_binding   *chan_binding_mic;
         rpc_gss3_assertion      granted_assertions<>;
         rpc_gss3_assertion      server_assertions<>;
         rpc_gss3_extension      extensions<>;
      } rpc_gss3_create_res;

      The field "handle" is the RPCSEC_GSSv3 handle that the
   attribute is part client will
      use on COPY_NOTIFY requests involving the source and destination
      server. granted_assertions[0].privs will be equal to
      "copy_from_auth".  The server will return a GSS_Wrap() of
      copy_to_auth_priv.

   o  An instance of copy_to_auth_priv is filled in with the SHOULD copy set.

          +--------------------+----+---------------------------+
          | Name               | Id | Copy shared
      secret, the source server, and the NFSv4 user id.  It will be sent
      with an RPCSEC_GSS3_CREATE procedure, and so ctap_seq_num is set
      to destination file? |
          +--------------------+----+---------------------------+
          | supported_attrs    | 0  | no                        |
          | type               | 1  | MUST                      |
          | fh_expire_type     | 2  | no                        |
          | change             | 3  | SHOULD                    |
          | size               | 4  | MUST                      |
          | link_support       | 5  | no                        |
          | symlink_support    | 6  | no                        |
          | named_attr         | 7  | no                        |
          | fsid               | 8  | no                        |
          | unique_handles     | 9  | no                        |
          | lease_time         | 10 | no                        |
          | rdattr_error       | 11 | no                        |
          | filehandle         | 19 | no                        |
          | suppattr_exclcreat | 75 | no                        |
          +--------------------+----+---------------------------+

                                  Table 1

          +--------------------+----+---------------------------+
          | Name               | Id | Copy the seq_num of the credential of the RPCSEC_GSS3_CREATE
      procedure.  Because ctap_shared_secret is a secret, after XDR
      encoding copy_to_auth_priv, GSS_Wrap() is invoked on
      copy_to_auth_priv.  The RPCSEC_GSS3_CREATE procedure's arguments
      are:

      struct {
         rpc_gss3_gss_binding    *compound_binding;
         rpc_gss3_chan_binding   *chan_binding_mic;
         rpc_gss3_assertion      assertions<>;
         rpc_gss3_extension      extensions<>;
      } rpc_gss3_create_args;

      The string "copy_to_auth" is placed in assertions[0].privs.  The
      output of GSS_Wrap() is placed in extensions[0].data.  The field
      extensions[0].critical is set to TRUE.  After unwrapping,
      verifying the seq_num, and the user principal to NFSv4 user ID
      mapping, the destination file? |
          +--------------------+----+---------------------------+
          | acl                | 12 | MUST                      |
          | aclsupport         | 13 | no                        |
          | archive            | 14 | no                        |
          | cansettime         | 15 | no                        |
          | case_insensitive   | 16 | no                        |
          | case_preserving    | 17 | no                        |
          | change_policy      | 60 | no                        |
          | chown_restricted   | 18 | MUST                      |
          | dacl               | 58 | MUST                      |
          | dir_notif_delay    | 56 | no                        |
          | dirent_notif_delay | 57 | no                        |
          | fileid             | 20 | no                        |
          | files_avail        | 21 | no                        |
          | files_free         | 22 | no                        |
          | files_total        | 23 | no                        |
          | fs_charset_cap     | 76 | no                        |
          | fs_layout_type     | 62 | no                        |
          | fs_locations       | 24 | no                        |
          | fs_locations_info  | 67 | no                        |
          | fs_status          | 61 | no                        |
          | hidden             | 25 | MUST                      |
          | homogeneous        | 26 | no                        |
          | layout_alignment   | 66 | no                        |
          | layout_blksize     | 65 | no                        |
          | layout_hint        | 63 | no                        |
          | layout_type        | 64 | no                        |
          | maxfilesize        | 27 | no                        |
          | maxlink            | 28 | no                        |
          | maxname            | 29 | no                        |
          | maxread            | 30 | no                        |
          | maxwrite           | 31 | no                        |
          | mdsthreshold       | 68 | no                        |
          | mimetype           | 32 | establishes a privilege of
      <"copy_to_auth", user id, source>.  The successful reply to
      RPCSEC_GSS3_CREATE has:

      struct {
         opaque                  handle<>;
         rpc_gss3_chan_binding   *chan_binding_mic;
         rpc_gss3_assertion      granted_assertions<>;
         rpc_gss3_assertion      server_assertions<>;
         rpc_gss3_extension      extensions<>;
      } rpc_gss3_create_res;

      The field "handle" is the RPCSEC_GSSv3 handle that the client will
      use on COPY requests involving the source and destination server.
      The field granted_assertions[0].privs will be equal to
      "copy_to_auth".  The server will return a GSS_Wrap() of
      copy_to_auth_priv.

4.4.1.2.2.  Starting a Secure Inter-Server Copy

   When the client sends a COPY_NOTIFY request to the source server, it
   uses the privileged "copy_from_auth" RPCSEC_GSSv3 handle.
   cna_destination_server in COPY_NOTIFY MUST                      |
          | mode               | 33 | MUST                      |
          | mode_set_masked    | 74 | no                        |
          | mounted_on_fileid  | 55 | no                        |
          | no_trunc           | 34 | no                        |
          | numlinks           | 35 | no                        |
          | owner              | 36 | MUST                      |
          | owner_group        | 37 | MUST                      |
          | quota_avail_hard   | 38 | no                        |
          | quota_avail_soft   | 39 | no                        |
          | quota_used         | 40 | no                        |
          | rawdev             | 41 | no                        |
          | retentevt_get      | 71 | MUST                      |
          | retentevt_set      | 72 | no                        |
          | retention_get      | 69 | MUST                      |
          | retention_hold     | 73 | MUST                      |
          | retention_set      | 70 | no                        |
          | sacl               | 59 | MUST                      |
          | space_avail        | 42 | no                        |
          | space_free         | 43 | no                        |
          | space_total        | 44 | no                        |
          | space_used         | 45 | no                        |
          | system             | 46 | MUST                      |
          | time_access        | 47 | MUST                      |
          | time_access_set    | 48 | no                        |
          | time_backup        | 49 | no                        |
          | time_create        | 50 | MUST                      |
          | time_delta         | 51 | no                        |
          | time_metadata      | 52 | SHOULD                    |
          | time_modify        | 53 | MUST                      |
          | time_modify_set    | 54 | no                        |
          +--------------------+----+---------------------------+

                                  Table 2

   [NOTE: The space_reserve attribute XXX_TDH_XXX will be in the MUST
   set.]

   [NOTE: The source file's attribute values will take precedence over
   any attribute values inherited by the destination file.]
   In same as the case name of an inter-server copy or an intra-server copy between
   file systems, the attributes supported for
   the source file and destination file could be different.  By definition,the REQUIRED
   attributes will be supported server specified in all cases.  If copy_from_auth_priv.  Otherwise,
   COPY_NOTIFY will fail with NFS4ERR_ACCESS.  The source server
   verifies that the metadata flag is
   set privilege <"copy_from_auth", user id, destination>
   exists, and annotates it with the source file filehandle, if the user
   principal has a RECOMMENDED attribute that is not
   supported for read access to the destination source file, and if administrative
   policies give the copy MUST fail with
   NFS4ERR_ATTRNOTSUPP.

   Any attribute supported by the destination server that is not set on
   the source file SHOULD be left unset.

   Metadata attributes not exposed via user principal and the NFS protocol SHOULD be copied client read access to
   the destination file where appropriate.

   The destination file's named attributes are not duplicated from the source file.  After file (i.e., if the copy process completes, ACCESS operation would grant read
   access).  Otherwise, COPY_NOTIFY will fail with NFS4ERR_ACCESS.

   When the client MAY
   attempt sends a COPY request to duplicate named attributes using standard NFSv4
   operations.  However, the destination file's named attribute
   capabilities MAY server, it
   uses the privileged "copy_to_auth" RPCSEC_GSSv3 handle.
   ca_source_server in COPY MUST be different from the same as the name of the source file's named attribute
   capabilities.

   If
   server specified in copy_to_auth_priv.  Otherwise, COPY will fail
   with NFS4ERR_ACCESS.  The destination server verifies that the metadata flag is not set
   privilege <"copy_to_auth", user id, source> exists, and annotates it
   with the source and destination filehandles.  If the client is requesting a whole
   file copy (i.e. ca_count is 0 (zero)), has
   failed to establish the destination file's
   metadata is implementation dependent. "copy_to_auth" policy it will reject the
   request with NFS4ERR_PARTNER_NO_AUTH.

   If the client is requesting sends a partial file copy (i.e. ca_count is not
   0 (zero)), the client SHOULD NOT set the metadata flag and COPY_REVOKE to the source server
   MUST ignore the metadata flag.

   If to rescind the operation does not result in an immediate failure,
   destination server's copy privilege, it uses the server
   will return NFS4_OK, privileged
   "copy_from_auth" RPCSEC_GSSv3 handle and the CURRENT_FH will remain the destination's
   filehandle.

   If an immediate failure does occur, cr_bytes_copied will cra_destination_server
   in COPY_REVOKE MUST be set to the number same as the name of bytes copied to the destination file before the error
   occurred. server
   specified in copy_from_auth_priv.  The cr_bytes_copied value indicates source server will then delete
   the number of bytes
   copied but not which specific bytes have been copied.

   A return of NFS4_OK indicates that either <"copy_from_auth", user id, destination> privilege and fail any
   subsequent copy requests sent under the operation is complete
   or auspices of this privilege
   from the operation was initiated destination server.

4.4.1.2.3.  Securing ONC RPC Server-to-Server Copy Protocols

   After a destination server has a "copy_to_auth" privilege established
   on it, and it receives a COPY request, if it knows it will use an ONC
   RPC protocol to copy data, it will establish a "copy_confirm_auth"
   privilege on the source server, using nfs@<destination> as the
   initiator principal, and nfs@<source> as the target principal.

   The value of the field ccap_shared_secret_mic is a GSS_VerifyMIC() of
   the shared secret passed in the copy_to_auth privilege.  The field
   ccap_username is the mapping of the user principal to an NFSv4 user
   name ("user"@"domain" form), and MUST be the same as ctap_username
   and cfap_username.  The field ccap_seq_num is the seq_num of the
   RPCSEC_GSSv3 credential used for the RPCSEC_GSS3_CREATE procedure the
   destination will send to the source server to establish the
   privilege.

   The source server verifies the privilege, and establishes a
   <"copy_confirm_auth", user id, destination> privilege.  If the source
   server fails to verify the privilege, the COPY operation will be
   rejected with NFS4ERR_PARTNER_NO_AUTH.  All subsequent ONC RPC
   requests sent from the destination to copy data from the source to
   the destination will use the RPCSEC_GSSv3 handle returned by the
   source's RPCSEC_GSS3_CREATE response.

   Note that the use of the "copy_confirm_auth" privilege accomplishes
   the following:

   o  if a protocol like NFS is being used with export policies, the
      export policies can be overridden in case the destination server,
      acting as an NFS client, is not authorized

   o  manual configuration to allow a copy relationship between the
      source and destination is not needed.

   If the attempt to establish a "copy_confirm_auth" privilege fails,
   then when the user principal sends a COPY request to the
   destination, the destination server will reject it with
   NFS4ERR_PARTNER_NO_AUTH.
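
   The following is a minimal, non-normative C sketch of how a
   destination might produce the MIC carried in ccap_shared_secret_mic,
   assuming an already established GSS-API security context; the helper
   name make_secret_mic is hypothetical.

   #include <gssapi/gssapi.h>

   /*
    * Compute a MIC over the shared secret with an established GSS-API
    * security context.  The resulting token is what would be carried
    * in ccap_shared_secret_mic.
    */
   static int
   make_secret_mic(gss_ctx_id_t ctx, gss_buffer_t secret,
                   gss_buffer_t mic)
   {
           OM_uint32 maj, min;

           maj = gss_get_mic(&min, ctx, GSS_C_QOP_DEFAULT, secret, mic);
           return (maj == GSS_S_COMPLETE) ? 0 : -1;
   }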

4.4.1.2.4.  Securing Non ONC RPC Server-to-Server Copy Protocols

   If the destination won't be using ONC RPC to copy the data, then the
   source and destination are using an unspecified copy protocol.  The
   destination could use the shared secret and the NFSv4 user id to
   prove to the source server that the user principal has authorized the
   copy.

   For protocols that authenticate user names with passwords (e.g.,
   HTTP [14] and FTP [15]), the NFSv4 user id could be used as the user
   name, and an ASCII hexadecimal representation of the RPCSEC_GSSv3
   shared secret could be used as the user password or as input into
   non-password authentication methods like CHAP [16].
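
   As an illustration only, the ASCII hexadecimal encoding mentioned
   above could be produced as in the following C sketch; the function
   name is hypothetical and no particular encoding is mandated by this
   document.

   #include <stdio.h>
   #include <stdlib.h>

   /* Render a binary shared secret as an ASCII hexadecimal string
    * suitable for a password-based protocol.  Caller frees result. */
   static char *
   secret_to_hex(const unsigned char *secret, size_t len)
   {
           char *hex = malloc(2 * len + 1);
           size_t i;

           if (hex == NULL)
                   return NULL;
           for (i = 0; i < len; i++)
                   sprintf(hex + 2 * i, "%02x", secret[i]);
           return hex;
   }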

4.4.1.3.  Inter-Server Copy via ONC RPC but without RPCSEC_GSSv3

   ONC RPC security flavors other than RPCSEC_GSSv3 MAY be used with the
   server-side copy offload operations described in this document.  In
   particular, host-based ONC RPC security flavors such as AUTH_NONE and
   AUTH_SYS MAY be used.  If a host-based security flavor is used, a
   minimal level of protection for the server-to-server copy protocol is
   possible.

   In the absence of strong security mechanisms such as RPCSEC_GSSv3,
   the challenge is how the source server and destination server
   identify themselves to each other, especially in the presence of
   multi-homed source and destination servers.  In a multi-homed
   environment, the destination server might not contact the source
   server from the same network address specified by the client in the
   COPY_NOTIFY.  This can be overcome using the procedure described
   below.

   When the client sends the source server the COPY_NOTIFY operation,
   the source server may reply to the client with a list of target
   addresses, names, and/or URLs and assign them to the unique triple:
   <source fh, user ID, destination address Y>.  If the destination
   uses one of these target netlocs to contact the source server, the
   source server will be able to uniquely identify the destination
   server, even if the destination server does not connect from the
   address specified by the client in COPY_NOTIFY.

   For example, suppose the network topology is as shown in Figure 3.
   If the source filehandle is 0x12345, the source server may respond
   to a COPY_NOTIFY for destination 10.11.78.56 with the URLs:

      nfs://10.11.78.18//_COPY/10.11.78.56/_FH/0x12345

      nfs://192.168.33.18//_COPY/10.11.78.56/_FH/0x12345

   The client will then send these URLs to the destination server in
   the COPY operation.  Suppose that the 192.168.33.0/24 network is a
   high speed network and the destination server decides to transfer
   the file over this network.  If the destination contacts the source
   server from 192.168.33.56 over this network using NFSv4.1, it does
   the following:

   COMPOUND  { PUTROOTFH, LOOKUP "_COPY" ; LOOKUP "10.11.78.56"; LOOKUP
      "_FH" ; OPEN "0x12345" ; GETFH }

   The source server will therefore know that these NFSv4.1 operations
   are being issued by the destination server identified in the
   COPY_NOTIFY.
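
   The URLs above follow no mandated syntax; as a purely hypothetical
   illustration, a source server could mint them with something like
   the following C sketch, where the _COPY/<destination>/_FH/<handle>
   path layout simply mirrors the example.

   #include <stdio.h>

   /* Build one of the example copy URLs from a source address, the
    * destination address and the source filehandle. */
   static int
   make_copy_url(char *buf, size_t buflen, const char *src_addr,
                 const char *dst_addr, unsigned long fh)
   {
           return snprintf(buf, buflen, "nfs://%s//_COPY/%s/_FH/0x%lx",
                           src_addr, dst_addr, fh);
   }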

4.4.1.4.  Inter-Server Copy without ONC RPC and RPCSEC_GSSv3

   The same techniques as in Section 4.4.1.3, using unique URLs for
   each destination server, can be used for other protocols (e.g., HTTP
   [14] and FTP [15]) as well.

5.  Application Data Block Support

   At the number of bytes copied but OS level, files are contained on disk blocks.  Applications
   are also free to impose structure on the data contained in a file and
   we can define an Application Data Block (ADB) to be such a structure.
   From the application's viewpoint, it only wants to handle ADBs and
   not raw bytes (see [17]).  An ADB is typically comprised of two
   sections: a header and data.  The header describes the
   characteristics of the block and can provide a means to detect
   corruption in the data payload.  The data section is typically
   initialized to all zeros.

   The format of the header is application specific, but there are two
   main components typically encountered:

   1.  An ADB Number (ADBN), which allows the application to determine
       which data block is being referenced.  The ADBN is a logical
       block number and is useful when the client is not storing the
       blocks in contiguous memory.

   2.  Fields to describe the state of the ADB and a means to detect
       block corruption.  For both pieces of data, a useful property is
       that allowed values be unique in that if passed across the
       network, corruption due to translation between big and little
       endian architectures are detectable.  For example, 0xF0DEDEF0
       has the same bit pattern in both architectures.

   Applications already impose structures on files [17] and detect
   corruption in data blocks [18].  What they are not able to do is
   efficiently transfer and store ADBs.  To initialize a file with
   ADBs, the client must send the full ADB to the server and that must
   be stored on the server.  When the application is initializing a
   file to have the ADB structure, it could compress the ADBs to just
   the information necessary to later reconstruct the header portion of
   the ADB when the contents are read back.  Using sparse file
   techniques, the disk blocks described by the ADB would not be
   allocated.  Unlike sparse file techniques, there would be a small
   cost to store the compressed header data.

   In this section, we are going to define a generic framework for an
   ADB, present one approach to detecting corruption in a given ADB
   implementation, and describe the model for how the client and server
   can support efficient initialization of ADBs, reading of ADB holes,
   punching holes in ADBs, and space reservation.  Further, we need to
   be able to extend this model to applications which do not support
   ADBs, but wish to be able to handle sparse files, hole punching, and
   space reservation.

5.1.  Generic Framework

   We want the representation of the ADB to be flexible enough to
   support many different applications.  The most basic approach is no
   imposition of a block at all, which means we are working with the
   raw bytes.  Such an approach would be useful for storing holes,
   punching holes, etc.  In more complex deployments, a server might be
   supporting multiple applications, each with their own definition of
   the ADB.  One might store the ADBN at the start of the block and
   then have a guard pattern to detect corruption [19].  The next might
   store the ADBN at an offset of 100 bytes within the block and have
   no guard pattern at all.  The point is that existing applications
   might already have well defined formats for their data blocks.

   The guard pattern can be used to represent the state of the block,
   to protect against corruption, or both.  Again, it needs to be able
   to be placed anywhere within the ADB.

   We need to be able to represent the starting offset of the block and
   the size of the block.  Note that nothing prevents the application
   from defining different sized blocks in a file.

5.1.1.  Data Block Representation

   struct app_data_block4 {
           offset4         adb_offset;
           length4         adb_block_size;
           length4         adb_block_count;
           length4         adb_reloff_blocknum;
           count4          adb_block_num;
           length4         adb_reloff_pattern;
           opaque          adb_pattern<>;
   };

   The app_data_block4 structure captures the abstraction presented for
   the ADB.  The additional fields present are to allow the
   transmission of adb_block_count ADBs at one time.  We also use
   adb_block_num to convey the ADBN of the first block in the sequence.
   Each ADB will contain the same adb_pattern string.

   As both adb_block_num and adb_pattern are optional, if either
   adb_reloff_pattern or adb_reloff_blocknum is set to NFS4_UINT64_MAX,
   then the corresponding field is not set in any of the ADB.
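
   To make the relative-offset fields concrete, the following
   non-normative C sketch computes where the ADBN of the i-th
   transmitted block would fall in the file, honoring the
   NFS4_UINT64_MAX "not set" convention; the structure is simply a C
   mirror of app_data_block4 and the helper names are hypothetical.

   #include <stdint.h>

   #define NFS4_UINT64_MAX 0xffffffffffffffffULL

   struct adb {                      /* C mirror of app_data_block4 */
           uint64_t offset, block_size, block_count;
           uint64_t reloff_blocknum, block_num, reloff_pattern;
   };

   /* Byte offset of block i within the file. */
   static uint64_t
   adb_block_start(const struct adb *a, uint64_t i)
   {
           return a->offset + i * a->block_size;
   }

   /* Byte offset of the ADBN of block i, or NFS4_UINT64_MAX if the
    * ADB carries no block number. */
   static uint64_t
   adb_blocknum_offset(const struct adb *a, uint64_t i)
   {
           if (a->reloff_blocknum == NFS4_UINT64_MAX)
                   return NFS4_UINT64_MAX;
           return adb_block_start(a, i) + a->reloff_blocknum;
   }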

5.1.2.  Data Content

   /*
    * Use an enum such that we can extend new types.
    */
   enum data_content4 {
           NFS4_CONTENT_DATA = 0,
           NFS4_CONTENT_APP_BLOCK = 1,
           NFS4_CONTENT_HOLE = 2
   };

   New operations might need to differentiate between wanting to access
   data versus an ADB.  Also, future minor versions might want to
   introduce new data formats.  This enumeration allows that to occur.

5.2.  pNFS Considerations

   While this document does not mandate how sparse ADBs are recorded
   on the server, it does make the assumption that such information is
   not in the file.  I.e., the information is metadata.  As such, the
   INITIALIZE operation is defined to not be supported by the DS - it
   must be issued to the MDS.  But since the client must not assume a
   priori whether a read is sparse or not, the READ_PLUS operation MUST
   be supported by both the DS and the MDS.  I.e., the client might
   impose on the MDS to asynchronously read the data from the DS.

   Furthermore, each DS MUST NOT report to a client either a sparse ADB
   or data which belongs to another DS.  One implication of this
   requirement is that the app_data_block4's adb_block_size MUST either
   be the stripe width or the stripe width MUST be an even multiple of
   it.

   The second implication here is that the DS must be able to use the
   Control Protocol to determine from the MDS where the sparse ADBs
   occur.  [[Comment.1: Need to discuss what happens if after the file
   is being written to and an INITIALIZE occurs? --TH]]  Perhaps
   instead of the DS pulling from the MDS, the MDS pushes to the DS?
   Thus an INITIALIZE causes a new push?  [[Comment.2: Still need to
   consider race cases of the DS getting a WRITE and the MDS getting an
   INITIALIZE. --TH]]

5.3.  An Example of privileges.  We Detecting Corruption

   In this section, we define an ADB format in which corruption can be
   detected.  Note that this is just one possible format and means to
   detect corruption.

   Consider a very basic implementation of an operating system's disk
   blocks.  A block is either data or it is an indirect block which
   allows for files to be larger than one block.  It is desired to be
   able to initialize a block.  Lastly, to quickly unlink a file, a
   block can be marked invalid.  The contents remain intact - which
   would enable this OS application to undelete a file.

   The application defines 4k sized data blocks, with an 8 byte block
   counter occurring at offset 0 in the block, and with the guard
   pattern occurring at offset 8 inside the block.  Furthermore, the
   guard pattern can take one of four states:

   0xfeedface -   This is the FREE state and indicates that the ADB
      format has been applied.

   0xcafedead -   This is the DATA state and indicates that real data
      has been written to this block.

   0xe4e5c001 -   This is the INDIRECT state and indicates that the
      block contains block counter numbers that are chained off of this
      block.

   0xba1ed4a3 -   This is the INVALID state and indicates that the
      block contains data whose contents are garbage.

   Finally, it also defines an 8 byte checksum [20] starting at byte 16
   which applies to the remaining contents of the block.  If the state
   is FREE, then that checksum is trivially zero.  As such, the
   application has no need to transfer the checksum implicitly inside
   the ADB - it need not make the transfer layer aware of the fact that
   there is a checksum (see [18] for an example of checksums used to
   detect corruption in application data blocks).

   Corruption in each ADB can be detected as follows:

   o  If the guard pattern is anything other than one of the allowed
      values, including all zeros.

   o  If the guard pattern is FREE and any other byte in the remainder
      of the ADB is anything other than zero.

   o  If the guard pattern is anything other than FREE, then if the
      stored checksum does not match the computed checksum.

   o  If the guard pattern is INDIRECT and one of the stored indirect
      block numbers has a value greater than the number of ADBs in the
      file.

   o  If the guard pattern is INDIRECT and one of the stored indirect
      block numbers is a duplicate of another stored indirect block
      number.

   As can be seen, the application can detect errors based on the
   combination of the guard pattern state and the checksum.  But also,
   the application can detect corruption based on the state and the
   contents of the ADB.  This last point is important in validating the
   minimum amount of data we incorporated into our generic framework.
   I.e., the guard pattern is sufficient in allowing applications to
   design their own corruption detection.

   Finally, it is important to note that none of these corruption
   checks occur in the transport layer.  The server and client
   components are totally unaware of the file format and might report
   everything as being transferred correctly even in the case the
   application detects corruption.
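
   The checks above can be restated as a short, non-normative C sketch.
   It assumes the 4k layout of this example (8 byte block counter at
   offset 0, 4 byte guard pattern at offset 8, 8 byte checksum at
   offset 16), interprets "remainder of the ADB" as everything after
   the guard pattern, elides byte-order handling, omits the
   INDIRECT-specific checks for brevity, and leaves the checksum
   routine to the caller; the name adb_block_ok is hypothetical.

   #include <stdint.h>
   #include <string.h>

   #define ADB_FREE     0xfeedfaceUL
   #define ADB_DATA     0xcafedeadUL
   #define ADB_INDIRECT 0xe4e5c001UL
   #define ADB_INVALID  0xba1ed4a3UL

   static int
   adb_block_ok(const unsigned char *blk, size_t len,
                uint64_t (*checksum)(const unsigned char *, size_t))
   {
           uint32_t guard;
           uint64_t stored;
           size_t i;

           memcpy(&guard, blk + 8, sizeof guard);
           memcpy(&stored, blk + 16, sizeof stored);

           if (guard != ADB_FREE && guard != ADB_DATA &&
               guard != ADB_INDIRECT && guard != ADB_INVALID)
                   return 0;      /* not one of the allowed values */

           if (guard == ADB_FREE) {
                   for (i = 12; i < len; i++)
                           if (blk[i] != 0)
                                   return 0;
                   return 1;      /* checksum is trivially zero */
           }

           return stored == checksum(blk + 24, len - 24);
   }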

5.4.  Example of READ_PLUS

   The hypothetical application presented in Section 5.3 can be used to
   illustrate how READ_PLUS would return an array of results.  A file
   is created and initialized with 100 4k ADBs in the FREE state:

      INITIALIZE {0, 4k, 100, 0, 0, 8, 0xfeedface}

   Further, assume the application writes a single ADB at 16k, changing
   the guard pattern to 0xcafedead, we would then have in memory:

      0 -> (16k - 1)   : 4k, 4, 0, 0, 8, 0xfeedface
      16k -> (20k - 1) : 00 00 00 05 ca fe de ad XX XX ... XX XX
      20k -> 400k      : 4k, 95, 0, 6, 0xfeedface

   And when the client did a READ_PLUS of 64k at the start of the file,
   it would get back a result of an ADB, some data, and a final ADB:

      ADB {0, 4, 0, 0, 8, 0xfeedface}
      data 4k
      ADB {20k, 4k, 59, 0, 6, 0xfeedface}
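
   For illustration, a client that wants a flat view of those 64k could
   expand such a reply as sketched below; read_plus_chunk is a
   hypothetical flattening of the result entries, not the on-the-wire
   encoding, and ADB entries are simply rendered as zeroes here rather
   than reconstructed headers.

   #include <stdint.h>
   #include <string.h>

   struct read_plus_chunk {
           int      is_adb;            /* 1: ADB/hole, 0: literal data */
           uint64_t offset;            /* file offset of this chunk    */
           uint64_t length;            /* bytes covered by this chunk  */
           const unsigned char *data;  /* valid only when !is_adb      */
   };

   /* Expand chunks into buf, which represents the file byte range
    * [buf_off, buf_off + buf_len). */
   static void
   expand_read_plus(unsigned char *buf, uint64_t buf_off, size_t buf_len,
                    const struct read_plus_chunk *c, size_t nchunks)
   {
           size_t i;

           memset(buf, 0, buf_len);
           for (i = 0; i < nchunks; i++) {
                   if (c[i].is_adb || c[i].offset < buf_off)
                           continue;
                   if (c[i].offset + c[i].length > buf_off + buf_len)
                           continue;
                   memcpy(buf + (c[i].offset - buf_off), c[i].data,
                          c[i].length);
           }
   }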

5.5.  Zero Filled Holes

   As applications are free to define the structure of an ADB, it is
   trivial to define an ADB which supports zero filled holes.  Such a
   case would encompass the traditional definitions of a sparse file
   and hole punching.  For example, to punch a 64k hole, starting at
   100M, into an existing file which has no ADB structure:

      INITIALIZE {100M, 64k, 1, NFS4_UINT64_MAX,
                  0, NFS4_UINT64_MAX, 0x0}

6.  Space Reservation

6.1.  Introduction

   This section describes a set of operations that allow applications
   such as hypervisors to reserve space for a file, report the amount
   of actual disk space a file occupies and free up the backing space
   of a file when it is not required.

   In virtualized environments, virtual disk files are often stored on
   NFS mounted volumes.  Since virtual disk files represent the hard
   disks of virtual machines, hypervisors often have to guarantee
   certain properties for the file.

   One such example is space reservation.  When a hypervisor creates a
   virtual disk file, it often tries to preallocate the space for the
   file so that there are no future allocation related errors during
   the operation of the virtual machine.  Such errors prevent a virtual
   machine from continuing execution and result in downtime.

   Another useful feature would be the ability to report the number of
   blocks that would be freed when a file is deleted.  Currently, NFS
   reports two size attributes:

   size  The logical file size of the file.

   space_used  The size in bytes that the file occupies on disk.

   While these attributes are sufficient for space accounting in
   traditional filesystems, they prove to be inadequate in modern
   filesystems that support block sharing.  Having a way to tell the
   number of blocks that would be freed if the file was deleted would
   be useful to applications that wish to migrate files when a volume
   is low on space.

   Since virtual disks represent a hard drive in a virtual machine, a
   virtual disk can be viewed as a filesystem within a file.  Since not
   all blocks within a filesystem are in use, there is an opportunity
   to reclaim blocks that are no longer in use.  A call to deallocate
   blocks could result in better space efficiency.  Lesser space MAY be
   consumed for backups after block deallocation.

   We propose the following operations and attributes for the
   aforementioned use cases:

   space_reserved  This attribute specifies whether the blocks backing
      the file have been preallocated.

   space_freed  This attribute specifies the space freed when a file is
      deleted, taking block sharing into consideration.

   max_hole_punch  This attribute specifies the maximum sized hole that
      can be punched on the filesystem.

   HOLE_PUNCH  This operation zeroes and/or deallocates the blocks
      backing a region of the file.

6.2.  Use Cases

6.2.1.  Space Reservation

   Some applications require that once a file of a certain size is
   created, writes to that file never fail with an out of space
   condition.  One such example is that of a hypervisor writing to a
   virtual disk.  An out of space condition while writing to virtual
   disks would mean that the virtual machine would need to be frozen.

   Currently, in order to achieve such a guarantee, applications zero
   the entire file.  The initial zeroing allocates the backing blocks
   and all subsequent writes are overwrites of already allocated
   blocks.  This approach is not only inefficient in terms of the
   amount of I/O done, it is also not guaranteed to work on filesystems
   that are log structured or deduplicated.  An efficient way of
   guaranteeing space reservation would be beneficial to such
   applications.

   If the space_reserved attribute is set on a file, it is guaranteed
   that writes that do not grow the file will not fail with
   NFS4ERR_NOSPC.

6.2.2.  Space freed on deletes

   Currently, files in NFS have two size attributes:

   size  The logical file size of the file.

   space_used  The size in bytes that the file occupies on disk.

   While these attributes are sufficient for space accounting in
   traditional filesystems, they prove to be inadequate in modern
   filesystems that support block sharing.  In such filesystems,
   multiple inodes can point to a single block with a block reference
   count to guard against premature freeing.

   If the space_used of a file is interpreted to mean the size in bytes
   of all disk blocks pointed to by the inode of the file, then shared
   blocks get double counted, over-reporting the space utilization.
   This also has the adverse effect that the deletion of a file with
   shared blocks frees up less than space_used bytes.

   On the other hand, if space_used is interpreted to mean the size in
   bytes of those disk blocks unique to the inode of the file, then
   shared blocks are not counted in any file, resulting in under-
   reporting of the space utilization.

   For example, two files A and B have 10 blocks each.  Let 6 of these
   blocks be shared between them.  Thus, the combined space utilized by
   the two files is 14 * BLOCK_SIZE bytes.  In the former case, the
   combined space utilization of the two files would be reported as
   20 * BLOCK_SIZE.  However, deleting either would only result in
   4 * BLOCK_SIZE being freed.  Conversely, the latter interpretation
   would report that the space utilization is only 8 * BLOCK_SIZE.

   Adding another size attribute, space_freed, is helpful in solving
   this problem.  space_freed is the number of blocks that are
   allocated to the given file that would be freed on its deletion.  In
   the example, both A and B would report space_freed as 4 * BLOCK_SIZE
   and space_used as 10 * BLOCK_SIZE.  If A is deleted, B will report
   space_freed as 10 * BLOCK_SIZE as the deletion of B would result in
   the deallocation of all 10 blocks.

   The addition of this attribute does not solve the problem of space
   being over-reported.  However, over-reporting is better than under-
   reporting.
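
   The arithmetic in this example can be checked with a few lines of C;
   the block counts are those of files A and B above and BLOCK_SIZE is
   left symbolic.

   #include <stdio.h>

   int main(void)
   {
           unsigned a = 10, b = 10, shared = 6;

           unsigned on_disk = a + b - shared;              /* 14 */
           unsigned naive   = a + b;                       /* 20 */
           unsigned unique  = (a - shared) + (b - shared); /*  8 */
           unsigned freed   = a - shared;                  /*  4 */

           printf("%u %u %u %u (in units of BLOCK_SIZE)\n",
                  on_disk, naive, unique, freed);
           return 0;
   }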

6.2.3.  Operations and attributes

   In the sections that follow, one operation and three attributes are
   defined that together provide the space management facilities
   outlined earlier in the document.  The operation is intended to be
   OPTIONAL and the attributes RECOMMENDED as defined in section 17 of
   [2].

6.2.4.  Attribute 77: space_reserved

   The space_reserved attribute is a read/write attribute of type
   boolean.  It is a per file attribute.  When the space_reserved
   attribute is set via SETATTR, the server must ensure that there is
   disk space to accommodate every byte in the file before it can
   return success.  If the server cannot guarantee this, it must return
   NFS4ERR_NOSPC.

   If the client tries to grow a file which has the space_reserved
   attribute set, the server must guarantee that there is disk space to
   accommodate every byte in the file with the new size before it can
   return success.  If the server cannot guarantee this, it must return
   NFS4ERR_NOSPC.

   It is not required that the server allocate the space to the file
   before returning success.  The allocation can be deferred, however,
   it must be guaranteed that it will not fail for lack of space.

   The value of space_reserved can be obtained at any time through
   GETATTR.

   In order to avoid ambiguity, the space_reserved bit cannot be set
   along with the size bit in SETATTR.  Increasing the size of a file
   with space_reserved set will fail if space reservation cannot be
   guaranteed for the new size.  If the size is decreased, space
   reservation is only guaranteed for the new size and the extra blocks
   backing the file can be released.

6.2.5.  Attribute 78: space_freed

   space_freed gives the number of bytes freed if the file is deleted.
   This attribute is read only and is of type length4.  It is a per
   file attribute.
   attribute.

6.2.6.  Attribute 79: max_hole_punch

   max_hole_punch specifies the maximum size of a hole that the
   HOLE_PUNCH operation can handle.  This attribute is read only and of
   type length4.  It is a per filesystem attribute.  This attribute
   MUST be implemented if HOLE_PUNCH is implemented.

6.2.7.  Operation 64: HOLE_PUNCH - Zero and deallocate blocks backing
        the file in the specified range.

   WARNING: Most of this section is now obsolete.  Parts of it need to
   be scavenged for the discussion, but for the most part, it cannot be
   trusted.
6.2.7.1.  DESCRIPTION

   Whenever a client wishes to deallocate the blocks backing a
   particular region in the file, it calls the HOLE_PUNCH operation
   with the current filehandle set to the filehandle of the file in
   question, start offset and length in bytes of the region set in
   hpa_offset and hpa_count respectively.  All further reads to this
   region MUST return zeros until overwritten.  The filehandle
   specified must be that of a regular file.

   Situations may arise where hpa_offset and/or hpa_offset + hpa_count
   will not be aligned to a boundary that the server does allocations/
   deallocations in.  For most filesystems, this is the block size of
   the file system.  In such a case, the server can deallocate as many
   bytes as it can in the region.  The blocks that cannot be
   deallocated MUST be zeroed.  Except for the block deallocation and
   maximum hole punching capability, a HOLE_PUNCH operation is to be
   treated similar to a write of zeroes.

   The server is not required to complete deallocating the blocks
   specified in the operation before returning.  It is acceptable to
   have the deallocation be deferred.  In fact, HOLE_PUNCH is merely a
   hint; it is valid for a server to return success without ever doing
   anything towards deallocating the blocks backing the region
   specified.  However, any future reads to the region MUST return
   zeroes.

   HOLE_PUNCH will result in the space_used attribute being decreased
   by the number of bytes that were deallocated.  The space_freed
   attribute may or may not decrease, depending on the support and
   whether the blocks backing the specified range were shared or not.
   The size attribute will remain unchanged.

   The HOLE_PUNCH operation MUST NOT change the space reservation
   guarantee of the file.  While the server can deallocate the blocks
   specified by hpa_offset and hpa_count, future writes to this region
   MUST NOT fail with NFS4ERR_NOSPC.

   The HOLE_PUNCH operation may fail for the following reasons (this is
   a partial list):

   NFS4ERR_NOTSUPP  Hole punch operations are not supported by the NFS
      server receiving this request.

   NFS4ERR_DIR  The current filehandle is of type NF4DIR.

   NFS4ERR_SYMLINK  The current filehandle is of type NF4LNK.

   NFS4ERR_WRONG_TYPE  The current filehandle does not designate an
      ordinary file.
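
   The alignment rule can be illustrated with the following
   non-normative C sketch: the server deallocates the block-aligned
   interior of the range and writes zeroes over the unaligned edges.
   The deallocate_blocks() and zero_range() routines are hypothetical
   stand-ins for the server's storage back end.

   #include <stdint.h>
   #include <stdio.h>

   static void deallocate_blocks(uint64_t off, uint64_t len)
   {
           printf("deallocate %llu..%llu\n", (unsigned long long)off,
                  (unsigned long long)(off + len));
   }

   static void zero_range(uint64_t off, uint64_t len)
   {
           printf("zero %llu..%llu\n", (unsigned long long)off,
                  (unsigned long long)(off + len));
   }

   static void
   hole_punch(uint64_t hpa_offset, uint64_t hpa_count, uint64_t bsize)
   {
           uint64_t end = hpa_offset + hpa_count;
           uint64_t first = (hpa_offset + bsize - 1) / bsize * bsize;
           uint64_t last = end / bsize * bsize;

           if (first >= last) {
                   zero_range(hpa_offset, hpa_count);  /* sub-block */
                   return;
           }
           deallocate_blocks(first, last - first);
           if (hpa_offset < first)
                   zero_range(hpa_offset, first - hpa_offset);
           if (last < end)
                   zero_range(last, end - last);
   }

   int main(void)
   {
           hole_punch(100 * 1048576ULL, 64 * 1024, 4096);
           return 0;
   }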

7.  Sparse Files

   WARNING: Most of this section needs to be reworked because of the
   work going on in the ADB section.

7.1.  Introduction

   A sparse file is only when the application writes to a section common way of
   the representing a large file does order get imposed.  In order without
   having to detect corruption even
   before the application utilizes the file, utilize all of the application will want
   to initialize disk space for it.  Consequently, a range of ADBs.  It
   sparse file uses less physical space than its size indicates.  This
   means the INITIALIZE operation to
   do so.

5.2.1.  ARGUMENT

   /*
    * We use data_content4 in case we wish to
    * extend new types later. Note file contains 'holes', byte ranges within the file that we
    * are explicitly disallowing
   contain no data.
    */
   union initialize_arg4 switch (data_content4 content) {
   case NFS4_CONTENT_APP_BLOCK:
           app_data_block4 ia_adb;
   case NFS4_CONTENT_HOLE:
           length4         ia_hole_length;
   default:
           void;
   };

   struct INITIALIZE4args {
           /* CURRENT_FH:  Most modern file */
           stateid4        ia_stateid;
           stable_how4     ia_stable;
           offset4         ia_offset;
           initialize_arg4 ia_data<>;
   };

5.2.2.  RESULT

   struct INITIALIZE4resok {
           count4          ir_count;
           stable_how4     ir_committed;
           verifier4       ir_writeverf;
           data_content4   ir_sparse;
   };

   union INITIALIZE4res switch (nfsstat4 status) {
   case NFS4_OK:
           INITIALIZE4resok        resok4;
   default:
           void;
   };

5.2.3.  DESCRIPTION

   When the client invokes the INITIALIZE operation, it has two desired
   results:

   1.  The structure described by the app_data_block4 be imposed on the
       file.

   2.  The contents described by the app_data_block4 be sparse.

   If the server supports the INITIALIZE operation, it still might not
   support sparse files.  So if it receives the INITIALIZE operation,
   then it MUST populate the contents of the file with the initialized
   ADBs.  In other words, if the server supports INITIALIZE, then it
   supports the concept of ADBs.  [[Comment.1: Do we want to support an
   asynchronous INITIALIZE?  Do we have to? --TH]]

   If the data was already initialized, there are two interesting
   scenarios:

   1.  The data blocks are allocated.

   2.  Initializing in the middle of an existing ADB.

   If the data blocks were already allocated, then the INITIALIZE is a
   hole punch operation.  If INITIALIZE supports sparse files, then the
   data blocks are to be deallocated.  If not, then the data blocks are
   to be rewritten in the indicated ADB format.  [[Comment.2: Need to
   document interaction between space reservation and hole punching?
   --TH]]

   Since the server has no knowledge of ADBs, it should not report
   misaligned creation of ADBs.  Even while it can detect them, it
   cannot disallow them, as the application might be in the process of
   changing the size of the ADBs.  Thus the server must be prepared to
   handle an INITIALIZE into an existing ADB.

   This document does not mandate the manner in which the server stores
   ADBs sparsely for a file.  It does assume that if ADBs are stored
   sparsely, then the server can detect when an INITIALIZE arrives that
   will force a new ADB to start inside an existing ADB.  For example,
   assume that ADBi has an adb_block_size of 4k and that an INITIALIZE
   starts 1k inside ADBi.  The server should [[Comment.3: Need to flesh
   this out. --TH]]
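
   The following non-normative C fragment shows one way a client might
   populate the initialize_arg4 union from Section 5.2.1 to request
   that a region be treated as a hole.  The types are hand translations
   of the XDR and the helper function is purely illustrative.

   /* Illustrative hand translation of the XDR in Section 5.2.1. */
   #include <stdint.h>

   typedef uint64_t length4;

   enum data_content4 { NFS4_CONTENT_DATA = 0,
                        NFS4_CONTENT_APP_BLOCK = 1,
                        NFS4_CONTENT_HOLE = 2 };

   struct initialize_arg4 {
           enum data_content4 content;
           union {
                   /* app_data_block4 ia_adb; (defined earlier) */
                   length4 ia_hole_length;    /* NFS4_CONTENT_HOLE */
           } u;
   };

   /* Ask that a 64k region (positioned by ia_offset in
    * INITIALIZE4args) be initialized as a zero filled hole. */
   static struct initialize_arg4 make_hole_arg(void)
   {
           struct initialize_arg4 arg;

           arg.content = NFS4_CONTENT_HOLE;
           arg.u.ia_hole_length = 64 * 1024;
           return arg;
   }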

5.3.  Operation 65: READ_PLUS

   If the client sends a READ operation, it is explicitly stating that
   it is not supporting sparse files.  So if a READ occurs on a sparse
   ADB, then the server must expand such ADBs to be raw bytes.  If a
   READ occurs in the middle of an ADB, the server can only send back
   bytes starting from that offset.

   Such an operation is inefficient for transfer of sparse sections of
   the file.  As such, READ is marked as OBSOLETE in NFSv4.2.  Instead,
   a client should issue READ_PLUS.  Note that as the client has no a
   priori knowledge of whether an ADB is present or not, it should
   always use READ_PLUS.

5.3.1.  ARGUMENT

   struct READ_PLUS4args {
           /* CURRENT_FH: file */
           stateid4        rpa_stateid;
           offset4         rpa_offset;
           count4          rpa_count;
   };

5.3.2.  RESULT

   union read_plus_content switch (data_content4 content) {
   case NFS4_CONTENT_DATA:
           opaque          rpc_data<>;
   case NFS4_CONTENT_APP_BLOCK:
           app_data_block4 rpc_block;
   case NFS4_CONTENT_HOLE:
           length4         rpc_hole_length;
   default:
           void;
   };

   /*
    * Allow a return of an array of contents.
    */
   struct read_plus_res4 {
           bool                    rpr_eof;
           read_plus_content       rpr_contents<>;
   };

   union READ_PLUS4res switch (nfsstat4 status) {
   case NFS4_OK:
           read_plus_res4  resok4;
   default:
           void;
   };

5.3.3.  DESCRIPTION

   Over the given range, READ_PLUS will return all data and ADBs found
   as an array of read_plus_content.  It is possible to have
   consecutive ADBs in the array as either different definitions of
   ADBs are present or as the guard pattern changes.

   Edge cases exist for ADBs which either begin before the rpa_offset
   requested by the READ_PLUS or end after the rpa_count requested -
   both of which may occur as not all applications which access the
   file are aware of the main application imposing a format on the file
   contents, i.e., tar, dd, cp, etc.  READ_PLUS MUST retrieve whole
   ADBs, but it need not retrieve an entire sequence of ADBs.

   The server MUST return a whole ADB because if it does not, it must
   expand that partial ADB before it sends it to the client.  E.g., if
   an ADB had a block size of 64k and the READ_PLUS was for 128k
   starting at an offset of 32k inside the ADB, then the first 32k
   would be converted to data.
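
   The following non-normative C fragment illustrates how a client
   might consume the rpr_contents array returned by READ_PLUS.  The
   types are hand translations of the XDR in Section 5.3.2, and
   expand_adb() is a hypothetical helper that regenerates an ADB's
   bytes locally; neither is part of the protocol.

   #include <stdint.h>
   #include <stddef.h>
   #include <string.h>

   enum data_content4 { NFS4_CONTENT_DATA = 0,
                        NFS4_CONTENT_APP_BLOCK = 1,
                        NFS4_CONTENT_HOLE = 2 };

   struct app_data_block4;       /* defined earlier in this document */
   extern size_t expand_adb(uint8_t *dst,
                            const struct app_data_block4 *adb);

   struct read_plus_content {
           enum data_content4 content;
           union {
                   struct {
                           size_t   len;
                           uint8_t *bytes;
                   } rpc_data;                    /* NFS4_CONTENT_DATA */
                   const struct app_data_block4 *rpc_block;
                   uint64_t rpc_hole_length;      /* NFS4_CONTENT_HOLE */
           } u;
   };

   /* Apply one result entry to the client's buffer and return how
    * many bytes of the file it covered. */
   static size_t apply_content(uint8_t *dst,
                               const struct read_plus_content *c)
   {
           switch (c->content) {
           case NFS4_CONTENT_DATA:
                   memcpy(dst, c->u.rpc_data.bytes, c->u.rpc_data.len);
                   return c->u.rpc_data.len;
           case NFS4_CONTENT_HOLE:
                   /* A hole is materialized as zeroes on the client. */
                   memset(dst, 0, (size_t)c->u.rpc_hole_length);
                   return (size_t)c->u.rpc_hole_length;
           case NFS4_CONTENT_APP_BLOCK:
                   /* Whole ADBs are regenerated locally rather than
                    * being transferred as raw bytes. */
                   return expand_adb(dst, c->u.rpc_block);
           }
           return 0;
   }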

5.4.  pNFS Considerations

   While this document does not mandate how ADBs are recorded on the
   server, it does make the assumption that such information is not in
   the file.  I.e., the information is metadata.  As such, the
   INITIALIZE operation is defined to be not supported by the DS - it
   must be issued to the MDS.  But since the client must not assume a
   priori whether a read is sparse or not, the READ_PLUS operation MUST
   be supported by both the DS and the MDS.  I.e., the client might
   impose on the MDS to asynchronously read the data from the DS.

   Furthermore, each DS MUST NOT report to a client either a sparse ADB
   or data which belongs to another DS.  One implication of this
   requirement is that the app_data_block4's adb_block_size MUST either
   be the stripe width or the stripe width must be an even multiple of
   it.

   The second implication here is that the DS must be able to use the
   Control Protocol to determine from the MDS where the sparse ADBs
   occur.  [[Comment.4: Need to discuss what happens if after the file
   is being written to and an INITIALIZE occurs? --TH]]  Perhaps
   instead of the DS pulling from the MDS, the MDS pushes to the DS?
   Thus an INITIALIZE causes a new push?  [[Comment.5: Still need to
   consider race cases of the DS getting a WRITE and the MDS getting an
   INITIALIZE. --TH]]

5.5.  An Example of Detecting Corruption

   In this section, we define an ADB format in which corruption can be
   detected.  Note that this is just one possible format and means to
   detect corruption.

   Consider a very basic implementation of an operating system's disk
   blocks.  A block is either data or it is an indirect block which
   allows for files within the file system to be much larger than one
   block.  It is desired to be able to initialize a block.  Lastly, to
   quickly unlink a file, a block can be marked invalid.  The contents
   remain intact - which would enable this OS application to undelete
   a file.

   The application defines 4k sized data blocks, with an 8 byte block
   counter occurring at offset 0 in the block, and with the guard
   pattern occurring at offset 8 inside the block.  Furthermore, the
   guard pattern can take one of four states:

   0xfeedface -   This is the FREE state and indicates that the ADB
      format has been applied.

   0xcafedead -   This is the DATA state and indicates that real data
      has been written to this block.

   0xe4e5c001 -   This is the INDIRECT state and indicates that the
      block contains block counter numbers that are chained off of
      this block.

   0xba1ed4a3 -   This is the INVALID state and indicates that the
      block contains data whose contents are garbage.

   Finally, it also defines an 8 byte checksum [20] starting at byte 16
   which applies to the remaining contents of the block.  If the state
   is FREE, then that checksum is trivially zero.  As such, the
   application has no need to transfer the checksum implicitly inside
   the ADB - it need not make the transfer layer aware of the fact that
   there is a checksum (see [18] for an example of checksums used to
   detect corruption in application data blocks).

   Corruption in each ADB can be detected thusly:

   o  If the guard pattern is anything other than one of the allowed
      values, including all zeros.

   o  If the guard pattern is FREE and any other byte in the remainder
      of the ADB is anything other than zero.

   o  If the guard pattern is anything other than FREE, then if the
      stored checksum does not match the computed checksum.

   o  If the guard pattern is INDIRECT and one of the stored indirect
      block numbers has a value greater than the number of ADBs in the
      file.

   o  If the guard pattern is INDIRECT and one of the stored indirect
      block numbers is a duplicate of another stored indirect block
      number.

   As can be seen, the application can detect errors based on the
   combination of the guard pattern state and the checksum.  But also,
   the application can detect corruption based on the state and the
   contents of the ADB.  This last point is important in validating the
   minimum amount of data we incorporated into our generic framework.
   I.e., the guard pattern is sufficient in allowing applications to
   design their own corruption detection.

   Finally, it is important to note that none of these corruption
   checks occur in the transport layer.  The server and client
   components are totally unaware of the file format and might report
   everything as being transferred correctly even in the case the
   application detects corruption.
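
   The layout and the corruption checks above can be illustrated with
   a short, non-normative C sketch of the hypothetical 4k application
   block.  The offsets and state values come directly from this
   section; the type names and the compute_checksum() helper are
   illustrative only and are not part of the protocol.

   #include <stdint.h>
   #include <stdbool.h>
   #include <stddef.h>

   #define ADB_SIZE       4096
   #define GUARD_FREE     UINT64_C(0xfeedface)
   #define GUARD_DATA     UINT64_C(0xcafedead)
   #define GUARD_INDIRECT UINT64_C(0xe4e5c001)
   #define GUARD_INVALID  UINT64_C(0xba1ed4a3)

   struct app_block {
           uint64_t counter;   /* 8 byte block counter at offset 0  */
           uint64_t guard;     /* guard pattern state at offset 8   */
           uint64_t checksum;  /* checksum [20] starting at byte 16 */
           uint8_t  rest[ADB_SIZE - 24];
   };

   /* Stand-in for whichever checksum the application chose; it is
    * not defined by this document. */
   extern uint64_t compute_checksum(const struct app_block *b);

   static bool adb_is_corrupt(const struct app_block *b)
   {
           size_t i;

           switch (b->guard) {
           case GUARD_FREE:
                   /* FREE blocks have a trivially zero checksum and
                    * an all-zero remainder. */
                   for (i = 0; i < sizeof(b->rest); i++)
                           if (b->rest[i] != 0)
                                   return true;
                   return b->checksum != 0;
           case GUARD_DATA:
           case GUARD_INDIRECT:
           case GUARD_INVALID:
                   /* Further INDIRECT checks (block number range and
                    * duplicate detection) are omitted for brevity. */
                   return b->checksum != compute_checksum(b);
           default:
                   /* Unknown guard pattern, including all zeros. */
                   return true;
           }
   }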

5.6.  Example of READ_PLUS

   The hypothetical application presented in Section 5.5 can be used to
   illustrate how READ_PLUS would return an array of results.  A file
   is created and initialized with 100 4k ADBs in the FREE state:

      INITIALIZE {0, 4k, 100, 0, 0, 8, 0xfeedface}

   Further, assume the application writes a single ADB at 16k, changing
   the guard pattern to 0xcafedead; we would then have in memory:

      0 -> (16k - 1)   : 4k, 4, 0, 0, 8, 0xfeedface
      16k -> (20k - 1) : 00 00 00 05 ca fe de ad XX XX ... XX XX
      20k -> 400k      : 4k, 95, 0, 6, 0xfeedface

   And when the client did a READ_PLUS of 64k at the start of the file,
   it would get back a result of an ADB, some data, and a final ADB:

      ADB {0, 4, 0, 0, 8, 0xfeedface}
      data 4k
      ADB {20k, 4k, 59, 0, 6, 0xfeedface}

5.7.  Zero Filled Holes

   As applications are free to define the structure of an ADB, it is
   trivial to define an ADB which supports zero filled holes.  Such a
   case would encompass the traditional definitions of a sparse file
   and hole punching.  For example, to punch a 64k hole, starting at
   100M, into an existing file which has no ADB structure:

      INITIALIZE {100M, 64k, 1, NFS4_UINT64_MAX,
                  0, NFS4_UINT64_MAX, 0x0}

6.  Space Reservation

6.1.  Introduction

   This section describes requires a set fair amount of operations computational overhead.

   Note that allow applications
   such as hypervisors supporting writing to reserve space for a file, report the amount of
   actual disk space a sparse file occupies and freeup does not require
   changes to the backing space protocol.  Applications and/or NFS implementations can
   choose to ignore WRITE requests of all zeroes to the NFS server
   without consequence.
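
   The following non-normative C fragment sketches the behavior
   described in the preceding paragraph: a client-side writer that
   suppresses WRITE requests consisting entirely of zeroes.  It assumes
   the target range is not replacing earlier non-zero data (e.g., a
   fresh copy of a file is being produced), and the nfs_write()
   function is a stand-in for whatever WRITE interface is in use.

   #include <stdbool.h>
   #include <stddef.h>
   #include <stdint.h>

   /* Hypothetical WRITE interface; not defined by this document. */
   extern int nfs_write(uint64_t offset, const void *buf, size_t len);

   static bool all_zeroes(const uint8_t *buf, size_t len)
   {
           size_t i;

           for (i = 0; i < len; i++)
                   if (buf[i] != 0)
                           return false;
           return true;
   }

   static int write_sparse_aware(uint64_t offset, const uint8_t *buf,
                                 size_t len)
   {
           /* Skipping the WRITE leaves (or creates) a hole; a reader
            * of a freshly created file sees zeroes either way. */
           if (all_zeroes(buf, len))
                   return 0;
           return nfs_write(offset, buf, len);
   }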

7.5.  Operation 65: READ_PLUS

   This section introduces a new read operation, named READ_PLUS, which
   allows NFS clients to avoid reading holes in a sparse file.
   READ_PLUS is guaranteed to perform no worse than READ, and can
   dramatically improve performance with sparse files.

   READ_PLUS supports all the features of the existing NFSv4.1 READ
   operation [2] and adds a simple yet significant extension to the
   format of its response.  The change allows the client to avoid
   returning all zeroes from a file hole, wasting computational and
   network resources and reducing performance.  READ_PLUS uses a new
   result structure that tells the client that the result is all zeroes
   AND the byte-range of the hole in which the request was made.
   Returning the hole's byte-range, and only upon request, avoids
   transferring large Data Region Maps that may be soon invalidated and
   contain information about a file that may not even be read in its
   entirety.

   A new read operation is required due to NFSv4.1 minor versioning
   rules that do not allow modification of an existing operation's
   arguments or results.  READ_PLUS is designed in such a way as to
   allow future extensions to the result structure.  The same approach
   could be taken to extend the argument structure, but a good use case
   is first required to make such a change.

7.5.1.  ARGUMENT

   struct READ_PLUS4args {
           /* CURRENT_FH: file */
           stateid4        rpa_stateid;
           offset4         rpa_offset;
           count4          rpa_count;
   };

7.5.2.  RESULT

   union read_plus_content switch (data_content4 content) {
   case NFS4_CONTENT_DATA:
           opaque          rpc_data<>;
   case NFS4_CONTENT_APP_BLOCK:
           app_data_block4 rpc_block;
   case NFS4_CONTENT_HOLE:
           hole_info4      rpc_hole;
   default:
           void;
   };

   /*
    * Allow a return of an array of contents.
    */
   struct read_plus_res4 {
           bool                    rpr_eof;
           read_plus_content       rpr_contents<>;
   };

   union READ_PLUS4res switch (nfsstat4 status) {
   case NFS4_OK:
           read_plus_res4  resok4;
   default:
           void;
   };

7.5.3.  DESCRIPTION

   The READ_PLUS operation is based upon the NFSv4.1 READ operation
   [2], and similarly reads data from the regular file identified by
   the current filehandle.

   The client provides an offset of where the READ_PLUS is to start and
   a count of how many bytes are to be read.  An offset of zero means
   to read data starting at the beginning of the file.  If offset is
   greater than or equal to the size of the file, the status NFS4_OK is
   returned with nfs_readplusrestype4 set to READ_OK, data length set
   to zero, and eof set to TRUE.  The READ_PLUS is subject to access
   permissions checking.

   If the client specifies a count value of zero, the READ_PLUS
   succeeds and returns zero bytes of data, again subject to access
   permissions checking.  In all situations, the server may choose to
   return fewer bytes than specified by the client.  The client needs
   to check for this condition and handle the condition appropriately.

   If the client specifies an offset and count value that is entirely
   contained within a hole of the file, the status NFS4_OK is returned
   with nfs_readplusresok4 set to READ_HOLE, and if information is
   available regarding the hole, a nfs_readplusreshole structure
   containing the offset and range of the entire hole.  The
   nfs_readplusreshole structure is considered valid until the file is
   changed (detected via the change attribute).  The server MUST
   provide the same semantics for nfs_readplusreshole as if the client
   read the region and received zeroes; the implied hole's contents
   lifetime MUST be exactly the same as any other read data.

   If the client specifies an offset and count value that begins in a
   non-hole of the file but extends into a hole, the server should
   return a short read with status NFS4_OK, nfs_readplusresok4 set to
   READ_OK, and data length set to the number of bytes returned.  The
   client will then issue another READ_PLUS for the remaining bytes, to
   which the server will respond with information about the hole in the
   file.

   If the server knows that the requested byte range is into a hole of
   the file, but has no further information regarding the hole, it
   returns a nfs_readplusreshole structure with holeres4 set to
   HOLE_NOINFO.

   If hole information is available and can be returned to the client,
   the server returns a nfs_readplusreshole structure with the value of
   holeres4 set to HOLE_INFO.  The values of hole_offset and
   hole_length define the byte-range for the current hole in the file.
   These values represent the information known to the server and may
   describe a byte-range smaller than the true size of the hole.

   Except when special stateids are used, the stateid value for a
   READ_PLUS request represents a value returned from a previous byte-
   range lock or share reservation request or the stateid associated
   with a delegation.  The stateid identifies the associated owners if
   any and is used by the server to verify that the associated locks are
   still valid (e.g., have not been revoked).

   If the read ended at the end-of-file (formally, in a correctly
   formed READ_PLUS operation, if offset + count is equal to the size
   of the file), or the READ_PLUS operation extends beyond the size of
   the file (if offset + count is greater than the size of the file),
   eof is returned as TRUE; otherwise, it is FALSE.  A successful
   READ_PLUS of an empty file will always return eof as TRUE.

   If the current filehandle is not an ordinary file, an error will be
   returned to the client.  In the case that the current filehandle
   represents an object of type NF4DIR, NFS4ERR_ISDIR is returned.  If
   the current filehandle designates a symbolic link, NFS4ERR_SYMLINK
   is returned.  In all other cases, NFS4ERR_WRONG_TYPE is returned.

   For a READ_PLUS with a stateid value of all bits equal to zero, the
   server MAY allow the READ_PLUS to be serviced subject to mandatory
   byte-range locks or the current share deny modes for the file.  For a
   READ_PLUS with a stateid value of all bits equal to one, the server
   MAY allow READ_PLUS operations to bypass locking checks at the
   server.

   On success, the current filehandle retains its value.

7.5.4.  IMPLEMENTATION

   If the server returns a "short read" (i.e., fewer data than requested
   and eof is set to FALSE), the client should send another READ_PLUS to
   get the remaining data.  A server may return less data than requested
   under several circumstances.  The file may have been truncated by
   another client or perhaps on the server itself, changing the file
   size from what the requesting client believes to be the case.  This
   would reduce the actual amount of data available to the client.  It
   is possible that the server may reduce the transfer size and so
   return a short read result.  Server resource exhaustion may also
   result in a short read.
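
   The following non-normative C fragment sketches the client behavior
   described above: looping on short reads and materializing READ_HOLE
   results as zeroes.  The read_plus() stub and its result fields
   mirror the prose of Section 7.5.3 rather than any particular
   implementation and are purely illustrative.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stddef.h>
   #include <string.h>

   enum rp_type { READ_OK, READ_HOLE };

   struct rp_result {
           enum rp_type type;
           bool         eof;
           size_t       data_len;       /* READ_OK: bytes returned   */
           bool         have_hole_info; /* READ_HOLE with HOLE_INFO  */
           uint64_t     hole_offset;
           uint64_t     hole_length;
   };

   /* Hypothetical stub for the READ_PLUS RPC. */
   extern struct rp_result read_plus(uint64_t offset, size_t count,
                                     uint8_t *buf);

   /* Read 'count' bytes starting at 'offset', filling holes with
    * zeroes, and return the number of bytes placed in 'buf'. */
   static size_t read_plus_loop(uint64_t offset, size_t count,
                                uint8_t *buf)
   {
           size_t done = 0;

           while (done < count) {
                   struct rp_result r =
                           read_plus(offset + done, count - done,
                                     buf + done);

                   if (r.type == READ_OK) {
                           if (r.data_len == 0 && !r.eof)
                                   break;
                           done += r.data_len;  /* short reads loop */
                   } else {
                           /* READ_HOLE: the range is all zeroes.  Use
                            * the hole's byte-range when provided,
                            * otherwise assume the requested range. */
                           uint64_t end = offset + count;

                           if (r.have_hole_info &&
                               r.hole_offset + r.hole_length < end)
                                   end = r.hole_offset + r.hole_length;
                           memset(buf + done, 0,
                                  (size_t)(end - (offset + done)));
                           done = (size_t)(end - offset);
                   }
                   if (r.eof)
                           break;
           }
           return done;
   }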

   If mandatory byte-range locking is in effect for the file, and if the
   byte-range corresponding to the data to be read from the file is
   WRITE_LT locked by an owner not associated with the stateid, the
   server will return the NFS4ERR_LOCKED error.  The client should try
   to get the appropriate READ_LT via the LOCK operation before re-
   attempting the READ_PLUS.  When the READ_PLUS completes, the client
   should release the byte-range lock via LOCKU.  In addition, the
   server MUST return a nfs_readplusreshole structure with values of
   hole_offset and hole_length that are within the owner's locked byte
   range.

   If another client has an OPEN_DELEGATE_WRITE delegation for the file
   being read, the delegation must be recalled, and the operation cannot
   proceed until that delegation is returned or revoked.  Except where
   this happens very quickly, one or more NFS4ERR_DELAY errors will be
   returned to requests made while the delegation remains outstanding.
   Normally, delegations will not be recalled as a result of a READ_PLUS
   operation since the recall will occur as a result of an earlier OPEN.
   However, since it is possible for a READ_PLUS to be done with a
   special stateid, the server needs to check for this case even though
   the client should have done an OPEN previously.

7.5.4.1.  Additional pNFS Implementation Information

   With pNFS, the semantics of using READ_PLUS remains the same.  Any
   data server MAY return a READ_HOLE result for a READ_PLUS request
   that it receives.

   When a data server chooses to return a READ_HOLE result, it has the
   option of returning hole information for the data stored on that data
   server (as defined by the data layout), but it MUST NOT return a
   nfs_readplusreshole structure with a byte range that includes data
   managed by another data server.

   1.  Data servers that cannot determine hole information SHOULD return
       HOLE_NOINFO.

   2.  Data servers that can obtain hole information for the parts of
       the file stored on that data server SHOULD return HOLE_INFO and
       the byte range of the hole stored on that data server.

   A data server should do its best to return as much information about
   a hole as is feasible without having to contact the metadata server.
   If communication with the metadata server is required, then every
   attempt should be taken to minimize the number of requests.

   If mandatory locking is enforced, then the data server must also
   ensure that it returns only information for a Hole that is within
   the owner's locked byte range.

7.5.5.  READ_PLUS with Sparse Files Example

   To see how the return value READ_HOLE will work, the following table
   describes a sparse file.  For each byte range, the file contains
   either non-zero data or a hole.  In addition, the server in this
   example uses a hole threshold of 32K.

                        +-------------+----------+
                        | Byte-Range  | Contents |
                        +-------------+----------+
                        | 0-15999     | Hole     |
                        | 16K-31999   | Non-Zero |
                        | 32K-255999  | Hole     |
                        | 256K-287999 | Non-Zero |
                        | 288K-353999 | Hole     |
                        | 354K-417999 | Non-Zero |
                        +-------------+----------+

                                  Table 1

   Under the given circumstances, if a client was to read the file from
   beginning to end with a max read size of 64K, the following will be
   the result.  This assumes the client has already opened the file and
   acquired a valid stateid and just needs to issue READ_PLUS requests.

   1.  READ_PLUS(s, 0, 64K) --> NFS_OK, readplusrestype4 = READ_OK, eof
       = false, data<>[32K].  Return a short read, as the last half of
       the request was all zeroes.  Note that the first hole is read
        back as all zeros as it is below the hole threshold.

   2.  READ_PLUS(s, 32K, 64K) --> NFS_OK, readplusrestype4 = READ_HOLE,
       nfs_readplusreshole(HOLE_INFO)(32K, 224K).  The requested range
       was all zeros, and the current hole begins at offset 32K and is
       224K in length.

   3.  READ_PLUS(s, 256K, 64K) --> NFS_OK, readplusrestype4 = READ_OK,
       eof = false, data<>[32K].  Return a short read, as the last half
       of the request was all zeroes.

   4.  READ_PLUS(s, 288K, 64K) --> NFS_OK, readplusrestype4 = READ_HOLE,
       nfs_readplusreshole(HOLE_INFO)(288K, 66K).

   5.  READ_PLUS(s, 354K, 64K) --> NFS_OK, readplusrestype4 = READ_OK,
       eof = true, data<>[64K].

7.6.  Related Work

   Solaris and ZFS support an extension to lseek(2) that allows
   applications to discover holes in a file.  The values, SEEK_HOLE and
   SEEK_DATA, allow clients to seek to the next hole or beginning of
   data, respectively.
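
   For illustration, the following non-normative C fragment uses this
   lseek(2) extension to enumerate the data regions (and, implicitly,
   the holes) of a local file on systems that define SEEK_HOLE and
   SEEK_DATA.  Error handling is abbreviated.

   #define _GNU_SOURCE    /* some systems need this for SEEK_HOLE */
   #include <stdio.h>
   #include <unistd.h>
   #include <sys/types.h>

   static void list_data_regions(int fd)
   {
           off_t data = 0, hole;

           for (;;) {
                   /* Seek to the next byte that is not in a hole. */
                   data = lseek(fd, data, SEEK_DATA);
                   if (data < 0)
                           break;          /* ENXIO: no more data  */
                   /* Seek to the end of that data region.         */
                   hole = lseek(fd, data, SEEK_HOLE);
                   if (hole < 0)
                           break;
                   printf("data: [%lld, %lld)\n",
                          (long long)data, (long long)hole);
                   data = hole;
           }
   }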

   XFS supports the XFS_IOC_GETBMAP extended attribute, which returns
   the Data Region Map for a file.  Clients can then use this
   information to avoid reading holes in a file.

   NTFS and CIFS support the FSCTL_SET_SPARSE attribute, which allows
   applications to control whether empty regions of the file are
   preallocated and filled in with zeros or simply left unallocated.

7.7.  Other Proposed Designs

7.7.1.  Multi-Data Server Hole Information

   The current design prohibits pNFS data servers from returning hole
   information for regions of a file that are not stored on that data
   server.  Having data servers return information regarding other data
   servers changes the fundamental principle that all metadata
   information comes from the metadata server.

   Here is a brief description of what would be required if we did
   choose to support multi-data server hole information:

   For a data server that can obtain hole information for the entire
   file without severe performance impact, it MAY return HOLE_INFO and
   the byte range of the entire file hole.  When a pNFS client receives
   a READ_HOLE result and a non-empty nfs_readplusreshole structure, it
   MAY use this information in conjunction with a valid layout for the
   file to determine the next data server for the next region of data
   that is not in a hole.

7.7.2.  Data Result Array

   If a single read request contains one or more Holes with a length
   greater than the Sparse Threshold, the current design would return
   results indicating a short read to the client.  A client would then
   send a series of read requests to the server to retrieve information
   for the Holes and the remaining data.  To avoid turning a single read
   request into several exchanges between the client and server, the
   server may need to choose a relatively large Sparse Threshold in
   order to decrease the number of short reads it creates.  A large
   Sparse Threshold may miss many smaller holes, which in turn may
   negate the benefits of sparse read support.

   To avoid this situation, one option is to have the READ_PLUS
   operation return information for multiple holes in a single return
   value.  This would allow several small holes to be described in a
   single read response without requiring multiple exchanges between
   the client and server.

   One important item to consider with returning an array of data chunks
   is its impact on RDMA, which may use different block sizes on the
   client and server (among other things).

7.7.3.  User-Defined Sparse Mask

   Add mask (instead of just zeroes).  Specified by server or client?

7.7.4.  Allocated flag

   A Hole on the server may be an allocated byte-range consisting of all
   zeroes or may not be allocated at all.  To ensure this information is
   properly communicated to the client, it may be beneficial to add an
   'alloc' flag to the HOLE_INFO section of nfs_readplusreshole.  This
   would allow an NFS client to copy a file from one file system to
   another and have it more closely resemble the original.

7.7.5.  Dense and Sparse pNFS File Layouts

   The hole information returned from a data server must be understood
   by pNFS clients using either Dense or Sparse file layout types.  Does
   the current READ_PLUS return value work for both layout types?  Does
   the data server know if it is using dense or sparse so that it can
   return the correct hole_offset and hole_length values?

8.  Labeled NFS

   WARNING: Need to pull out the requirements.

8.1.  Introduction

   Mandatory Access Control (MAC) systems have been mainstreamed in
   modern operating systems such as Linux (R), FreeBSD (R), Solaris
   (TM), and Windows Vista (R).  MAC systems bind security attributes to
   subjects (processes) and objects within a system.  These attributes
   are used with other information in the system to make access control
   decisions.

   Access control models such as Unix permissions or Access Control
   Lists are commonly referred to as Discretionary Access Control (DAC)
   models.  These systems base their access decisions on user identity
   and resource ownership.  In contrast MAC models base their access
   control decisions on the label on the subject (usually a process) and
   the object it wishes to access.  These labels may contain user
   identity information but usually contain additional information.  In
   DAC systems users are free to specify the access rules for resources
   that they own.  MAC models base their security decisions on a system
   wide policy established by an administrator or organization which the
   users do not have the ability to override.  DAC systems offer no real
   protection against malicious or flawed software due to each program
   running with the full permissions of the user executing it.
   Inversely MAC models can confine malicious or flawed software and
   usually act at a finer granularity than their DAC counterparts.

   People desire to use NFSv4 with these systems.  A mechanism is
   required to provide security attribute information to NFSv4 clients
   and servers.  This mechanism has the following requirements:

   (1)  Clients must be able to convey to the server the security
        attribute of the subject making the access request.  The server
        may provide a mechanism to enforce MAC policy based on the
        requesting subject's security attribute.

   (2)  Server must be able to store and retrieve the security attribute
        of exported files as requested by the client.

   (3)  Server must provide a mechanism for notifying clients of
        attribute changes of files on the server.

   (4)  Clients and Servers must be able to negotiate Label Formats and
        Domains of Interpretation (DOI) and provide a mechanism to
        translate between them as needed.

   These four requirements are key to the system with only requirements
   (2) and (3) requiring changes to NFSv4.  The ability to convey the
   security attribute of the subject as described in requirement (1)
   falls upon the RPC layer to implement (see [6]).  Requirement (4)
   allows communication between different MAC implementations.  The
   management of label formats, DOIs, and the translation between them
   does not require any support from NFSv4 on a protocol level and is
   out of the scope of this document.

   The first change necessary is to devise a method for transporting and
   storing security label data on NFSv4 file objects.  Security labels
   have several semantics that are met by NFSv4 recommended attributes
   such as the ability to set the label value upon object creation.
   Access control on these attributes is done through a combination of
   two mechanisms.  As with other recommended attributes on file objects
   the usual DAC checks (ACLs and permission bits) will be performed to
   ensure that proper file ownership is enforced.  In addition a MAC
   system MAY be employed on the client, server, or both to enforce
   additional policy on what subjects may modify security label
   information.

   The second change is to provide a method for the server to notify the
   client that the attribute changed on an open file on the server.  If
   the file is closed, then during the open attempt, the client will
   gather the new attribute value.  The server MUST NOT communicate the
   new value of the attribute; the client MUST query it.  This
   requirement stems from the need for the client to provide sufficient
   access rights to the attribute.

   The final change necessary is a modification to the RPC layer used in
   NFSv4 in the form of a new version of the RPCSEC_GSS [7] framework.
   In order for an NFSv4 server to apply MAC checks it must obtain
   additional information from the client.  Several methods were
   explored for performing this and it was decided that the best
   approach was to incorporate the ability to make security attribute
   assertions through the RPC mechanism.  RPCSECGSSv3 [6] outlines a
   method to assert additional security information such as security
   labels on gss context creation and have that data bound to all RPC
   requests that make use of that context.

8.2.  Definitions

   Label Format Specifier (LFS):  is an identifier used by the client to
      establish the syntactic format of the security label and the
      semantic meaning of its components.  These specifiers exist in a
      registry associated with documents describing the format and
      semantics of the label.

   Label Format Registry:  is the IANA registry containing all
      registered LFS along with references to the documents that
      describe the syntactic format and semantics of the security label.

   Policy Identifier (PI):  is an optional part of the definition of a
      Label Format Specifier which allows for clients and server to
      identify specific security policies.

   Domain of Interpretation (DOI):  represents an administrative
      security boundary, where all systems within the DOI have
      semantically coherent labeling.  That is, a security attribute
      must always mean exactly the same thing anywhere within the DOI.

   Object:  is a passive resource within the system that we wish to be
      protected.  Objects can be entities such as files, directories,
      pipes, sockets, and many other system resources relevant to the
      protection of the system state.

   Subject:  A subject is an active entity usually a process which is
      requesting access to an object.

   Multi-Level Security (MLS):  is a traditional model where objects are
      given a sensitivity level (Unclassified, Secret, Top Secret, etc)
      and a category set [21].

8.3.  MAC Security Attribute

   MAC models base access decisions on security attributes bound to
   subjects and objects.  This information can range from a user
   identity for an identity based MAC model, sensitivity levels for
   Multi-level security, or a type for Type Enforcement.  These models
   base their decisions on different criteria but the semantics of the
   security attribute remain the same.  The semantics required by the
   security attributes are listed below:

   o  Must provide flexibility with respect to MAC model.

   o  Must provide the ability to atomically set security information
      upon object creation

   o  Must provide the ability to enforce access control decisions both
      on the client and the server

   o  Must not expose an object to either the client or server name
      space before its security information has been bound to it.

   NFSv4 provides several options for implementing the security
   attribute.  The first option is to implement the security attribute
   as a named attribute.  Named attributes provide flexibility since
   they are treated as an opaque field but lack a way to atomically set
   the attribute on creation.  In addition, named attributes themselves
   are file system objects which need to be assigned a security
   attribute.  This raises the question of how to assign security
   attributes to the file and directories used to hold the security
   attribute for the file in question.  The inability to atomically
   assign the security attribute on file creation and the necessity to
   assign security attributes to its sub-components makes named
   attributes unacceptable as a method for storing security attributes.

   The second option is to implement the security attribute as a
   recommended attribute.  These attributes have a fixed format and
   semantics, which conflicts with the flexible nature of the security
   attribute.  To resolve this the security attribute consists of two
   components.  The first component is a LFS as defined in [22] to allow
   for interoperability between MAC mechanisms.  The second component is
   an opaque field which is the actual security attribute data.  To
   allow for various MAC models NFSv4 should be used solely as a
   transport mechanism for the security attribute.  It is the
   responsibility of the endpoints to consume the security attribute and
   make access decisions based on their respective models.  In addition,
   creation of objects through OPEN and CREATE allows for the security
   attribute to be specified upon creation.  By providing an atomic
   create and set operation for the security attribute it is possible to
   enforce the second and fourth requirements.  The recommended
   attribute FATTR4_SEC_LABEL will be used to satisfy this requirement.

8.3.1.  Interpreting FATTR4_SEC_LABEL

   The XDR [11] necessary to implement Labeled NFSv4 is presented in
   Figure 6:

         const FATTR4_SEC_LABEL   = 81;

         typedef uint32_t  policy4;
         struct labelformat_spec4 {
           policy4   lfs_lfs;
           policy4   lfs_pi;
         };

         struct sec_label_attr_info {
           labelformat_spec4   slai_lfs;
           opaque              slai_data<>;
         };

                                 Figure 6

   The FATTR4_SEC_LABEL contains an array of two components with the
   first component being an LFS.  It serves to provide the receiving end
   with the information necessary to translate the security attribute
   into a form that is usable by the endpoint.  Label Formats assigned
   an LFS may optionally choose to include a Policy Identifier field to
   allow for complex policy deployments.  The LFS and Label Format
   Registry are described in detail in [22].  The translation used to
   interpret the security attribute is not specified as part of the
   protocol as it may depend on various factors.  The second component
   is an opaque section which contains the data of the attribute.  This
   component is dependent on the MAC model to interpret and enforce.

   In particular, it is the responsibility of the LFS specification to
   define a maximum size for the opaque section, slai_data<>.  When
   creating or modifying a label for an object, the client needs to be
   guaranteed that the server will accept a label that is sized
   correctly.  By both client and server being part of a specific MAC
   model, the client will be aware of the size.
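
   The following non-normative C fragment is a hand translation of the
   XDR in Figure 6, showing how an opaque, MAC-model-specific label
   might be wrapped for transport.  The helper function and the idea of
   passing the label as a length/pointer pair are illustrative only.

   #include <stdint.h>
   #include <stddef.h>

   typedef uint32_t policy4;

   struct labelformat_spec4 {
           policy4 lfs_lfs;   /* Label Format Specifier      */
           policy4 lfs_pi;    /* optional Policy Identifier  */
   };

   struct sec_label_attr_info {
           struct labelformat_spec4 slai_lfs;
           size_t                   slai_data_len;
           const uint8_t           *slai_data;  /* opaque label bytes */
   };

   /* NFSv4 only transports the opaque label; interpretation is left
    * to the endpoints' MAC models. */
   static struct sec_label_attr_info
   make_label(policy4 lfs, policy4 pi,
              const uint8_t *data, size_t len)
   {
           struct sec_label_attr_info info;

           info.slai_lfs.lfs_lfs = lfs;
           info.slai_lfs.lfs_pi  = pi;
           info.slai_data        = data;
           info.slai_data_len    = len;
           return info;
   }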

8.3.2.  Delegations

   In the event that a security attribute is changed on the server while
   a client holds a delegation on the file, the client should follow the
   existing protocol with respect to attribute changes.  It should flush
   all changes back to the server and relinquish the delegation.

8.3.3.  Permission Checking

   It is not feasible to enumerate all possible MAC models and even
   levels of protection within a subset of these models.  This means
   that the NFSv4 client and servers cannot be expected to directly make
   access control decisions based on the security attribute.  Instead
   NFSv4 should defer permission checking on this attribute to the host
   system.  These checks are performed in addition to existing DAC and
   ACL checks outlined in the NFSv4 protocol.  Section 8.7 gives a
   specific example of how the security attribute is handled under a
   particular MAC model.

8.3.4.  Object Creation

   When creating files in NFSv4 the OPEN and CREATE operations are used.
   One of the parameters to these operations is an fattr4 structure
   containing the attributes the file is to be created with.  This
   allows NFSv4 to atomically set the security attribute of files upon
   creation.  When a client is MAC aware it must always provide the
   initial security attribute upon file creation.  In the event that the
   server is the only MAC aware entity in the system it should ignore
   the security attribute specified by the client and instead make the
   determination itself.  A more in depth explanation can be found in
   Section 8.7.

8.3.5.  Existing Objects

   Note that under the MAC model, all objects must have labels.
   Therefore, if an existing server is upgraded to include LNFS support,
   then it is the responsibility of the security system to define the
   behavior for existing objects.  For example, if the security system
   is LFS 0, which means the server just stores and returns labels, then
   existing files should return labels which are set to an empty value.

8.3.6.  Label Changes

   As per the requirements, when a file's security label is modified,
   the server must notify all clients which have the file opened of the
   change in label.  It does so with CB_ATTR_CHANGED.  There are
   preconditions to making an attribute change imposed by NFSv4 and the
   security system might want to impose others.  In the process of
   meeting these preconditions, the server may choose to either serve the
   request in whole or return NFS4ERR_DELAY to the SETATTR operation.

   If there are open delegations on the file belonging to a client
   other than the one making the label change, then the process
   described in
   Section 8.3.2 must be followed.

   As the server is always presented with the subject label from the
   client, it does not necessarily need to communicate the fact that the
   label has changed to the client.  In the cases where the change
   outright denies the client access, the client will be able to quickly
   determine that there is a new label in effect.  It is in cases where
   the client may share the same object between multiple subjects or a
   security system which is not strictly hierarchical that the
   CB_ATTR_CHANGED callback is very useful.  It allows the server to
   inform the clients that the cached security attribute is now stale.

   In the scenario presented in Section 8.8.5, the clients are smart and
   the server has a very simple security system which just stores the
   labels.  In this system, the MAC label check always allows access,
   regardless of the subject label.

   The way in which MAC labels are enforced is by the smart client.  So
   if client A changes a security label on a file, then the server MUST
   inform all clients that have the file opened that the label has
   changed via CB_ATTR_CHANGED.  Then the clients MUST retrieve the new
   label and MUST enforce access via the new attribute values.

   [[Comment.3: Describe a LFS of 0, which will be the means to indicate
   such a deployment.  In the current LFR, 0 is marked as reserved.  If
   we use it, then we define the default LFS to be used by a LNFS aware
   server.  I.e., it lets smart clients work together in the face of a
   dumb server.  Note that while supporting this system is optional, it
   will make for a very good debugging mode during development.  I.e.,
   even if a server does not deploy with another security system, this
   mode gets your foot in the door. --TH]]

8.4.  Procedure 16: CB_ATTR_CHANGED - Notify Client that the File's
      Attributes Changed

8.4.1.  ARGUMENTS

        struct CB_ATTR_CHANGED4args {
                nfs_fh4         acca_fh;
                bitmap4         acca_critical;
                bitmap4         acca_info;
        };

8.4.2.  RESULTS

        struct CB_ATTR_CHANGED4res {
                nfsstat4        accr_status;
        };

8.4.3.  DESCRIPTION

   The CB_ATTR_CHANGED callback operation is used by the server to
   indicate to the client that the file's attributes have been modified
   on the server.  The server does not convey how the attributes have
   changed, just that they have been modified.  The server can inform
   the client about both critical and informational attribute changes in
   the bitmask arguments.  The client SHOULD query the server about all
   attributes set in acca_critical.  For all changes reflected in
   acca_info, the client can decide whether or not it wants to poll the
   server.

   The CB_ATTR_CHANGED callback operation with the FATTR4_SEC_LABEL set
   in acca_critical is the method used by the server to indicate that
   the MAC label for the file referenced by acca_fh has changed.  In
   many ways, the server does not care about the result returned by the
   client.
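
   A non-normative sketch of the client-side handling described above
   is shown below.  The bitmap_isset() and fetch_attr() helpers, and
   the callback entry point itself, are hypothetical client internals;
   only the FATTR4_SEC_LABEL attribute number comes from this document.

   #include <stdbool.h>

   #define FATTR4_SEC_LABEL 81

   struct nfs_fh;
   struct bitmap4;

   extern bool bitmap_isset(const struct bitmap4 *bm, int attr);
   extern int  fetch_attr(const struct nfs_fh *fh, int attr);

   static int cb_attr_changed(const struct nfs_fh *fh,
                              const struct bitmap4 *critical,
                              const struct bitmap4 *info)
   {
           /* The client SHOULD re-query every critical attribute; the
            * server never pushes the new value itself. */
           if (bitmap_isset(critical, FATTR4_SEC_LABEL))
                   fetch_attr(fh, FATTR4_SEC_LABEL);

           /* Attributes flagged only in acca_info may be refreshed
            * lazily, at the client's discretion. */
           (void)info;
           return 0;
   }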

8.5.  pNFS Considerations

   This section examines the issues in deploying LNFS in a pNFS
   community of servers.

8.5.1.  MAC Label Checks

   The new FATTR4_SEC_LABEL attribute is metadata information and as
   such the DS is not aware of the value contained on the MDS.
   Fortunately, the NFSv4.1 protocol [2] already has provisions for
   doing access level checks from the DS to the MDS.  In order for the
   DS to validate the subject label presented by the client, it SHOULD
   utilize this mechanism.

   If a file's FATTR4_SEC_LABEL is changed, then the MDS should utilize
   CB_ATTR_CHANGED to inform the client of that fact.  If the MDS is
   maintaining

8.6.  Discovery of Server LNFS Support

   The server can easily determine that a client supports LNFS when it
   queries for the FATTR4_SEC_LABEL label for an object.  Note that it
   cannot assume that the presence of RPCSEC_GSSv3 indicates LNFS
   support.  The client might need to discover which LFS the server
   supports.

   A server which supports LNFS MUST allow a client with any subject
   label to retrieve the FATTR4_SEC_LABEL attribute for the root
   filehandle, ROOTFH.  The following compound must always succeed as
   far as a MAC label check is concerned:

        PUTROOTFH, GETATTR {FATTR4_SEC_LABEL}

   Note that the server might have imposed a security flavor on the root
   that precludes such access.  I.e., if the server requires kerberized
   access and the client presents a compound with AUTH_SYS, then the
   server is allowed to return NFS4ERR_WRONGSEC in this case.  But if
   the client presents a correct security flavor, then the server MUST
   return the FATTR4_SEC_LABEL attribute with the supported LFS filled
   in.
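
   A non-normative sketch of such a probe follows; send_compound and
   the textual operation names are hypothetical client-library
   conventions, and only the operation ordering and error handling are
   of interest:

      # Non-normative sketch: probe for LNFS support via the root
      # filehandle's security label.
      def probe_lnfs_support(send_compound):
          reply = send_compound([
              ("PUTROOTFH", {}),
              ("GETATTR", {"attrs": ["FATTR4_SEC_LABEL"]}),
          ])
          if reply["status"] == "NFS4ERR_WRONGSEC":
              # Retry with an acceptable security flavor (e.g.,
              # Kerberos) before drawing any conclusion about LNFS.
              return None
          getattr_res = reply["resarray"][-1]
          # A returned label also tells the client which LFS the
          # server supports.
          return getattr_res["attrs"].get("FATTR4_SEC_LABEL")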

8.7.  MAC Security NFS Modes of Operation

   A system using Labeled NFS may operate in three modes.  The first
   mode provides the most protection and is called "full mode".  In this
   mode both the client and server implement a MAC model allowing each
   end to make an access control decision.  The remaining two modes are
   variations on each other and are called "smart client" and "smart
   server" modes.  In these modes one end of the connection is not
   implementing a MAC model and because of this these operating modes
   offer less protection than full mode.

8.7.1.  Full Mode

   Full mode environments consist of MAC aware NFSv4 servers and clients
   and may be composed of mixed MAC models and policies.  The system
   requires that both the client and server have an opportunity to
   perform an access control check based on all relevant information
   within the network.  The file object security attribute is provided
   using the mechanism described in Section 8.3.  The security attribute
   of the subject making the request is transported at the RPC layer
   using the mechanism described in RPCSECGSSv3 [6].

8.7.1.1.  Initial Labeling and Translation

   The ability to create a file is an action that a MAC model may wish
   to mediate.  The client is given the responsibility to determine the
   initial security attribute to be placed on a file.  This allows the
   client to make a decision as to the acceptable security attributes to
   create a file with before sending the request to the server.  Once
   the server receives the creation request from the client it may
   choose to evaluate if the security attribute is acceptable.

   Security attributes on the client and server may vary based on MAC
   model and policy.  To handle this the security attribute field has an
   LFS component.  This component is a mechanism for the host to
   identify the format and meaning of the opaque portion of the security
   attribute.  A full mode environment may contain hosts operating in
   several different LFSs and DOIs.  In this case a mechanism for
   translating the opaque portion of the security attribute is needed.
   The actual translation function will vary based on MAC model and
   policy and is out of the scope of this document.  If a translation is
   unavailable for a given LFS and DOI then the request SHOULD be
   denied.  Another recourse is to allow the host to provide a fallback
   mapping for unknown security attributes.
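
   A non-normative sketch of such a translation step follows; the table
   layout and the fallback behavior are illustrative assumptions, since
   the actual translation function is out of the scope of this
   document:

      # Non-normative sketch: translate the opaque part of a security
      # attribute between (LFS, DOI) pairs, with an optional fallback.
      def translate_label(label, target_lfs, target_doi, table,
                          fallback=None):
          lfs, doi, opaque = label         # label is (lfs, doi, opaque)
          if (lfs, doi) == (target_lfs, target_doi):
              return label
          fn = table.get(((lfs, doi), (target_lfs, target_doi)))
          if fn is not None:
              return (target_lfs, target_doi, fn(opaque))
          if fallback is not None:
              # Host-provided mapping for unknown security attributes.
              return (target_lfs, target_doi, fallback)
          # No translation available: the request SHOULD be denied.
          raise PermissionError("no translation for this LFS/DOI pair")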

8.7.1.2.  Policy Enforcement

   In full mode access control decisions are made by both the clients
   and servers.  When a client makes a request it takes the security
   attribute from the requesting process and makes an access control
   decision based on that attribute and the security attribute of the
   object it is trying to access.  If the client denies that access an
   RPC call to the server is never made.  If however the access is
   allowed the client will make a call to the NFS server.

   When the server receives the request from the client it extracts the
   security attribute conveyed in the RPC request.  The server then uses
   this security attribute and the attribute of the object the client is
   trying to access to make an access control decision.  If the server's
   policy allows this access it will fulfill the client's request,
   otherwise it will return NFS4ERR_ACCESS.

   Implementations MAY validate security attributes supplied over the
   network to ensure that they are within a set of attributes permitted
   from a specific peer, and if not, reject them.  Note that a system
   may permit a different set of attributes to be accepted from each
   peer.  An example of this can be seen in Section 8.8.7.1.
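
   The two checks can be summarized by the following non-normative
   sketch; mac_allows stands in for the local MAC model's decision
   function and send_rpc for the client's RPC layer, both of which are
   assumptions of the illustration:

      # Non-normative sketch: full mode enforcement on both ends.
      def client_request(subject_label, object_label, mac_allows,
                         send_rpc):
          if not mac_allows(subject_label, object_label):
              # Denied locally; no RPC is ever sent to the server.
              raise PermissionError("denied by client MAC policy")
          # The subject label travels at the RPC layer (RPCSEC_GSSv3).
          return send_rpc(subject_label)

      def server_handle(rpc_subject_label, object_label, mac_allows,
                        permitted_labels=None):
          # Optionally validate that this peer may assert the label.
          if (permitted_labels is not None
                  and rpc_subject_label not in permitted_labels):
              return "NFS4ERR_ACCESS"
          if not mac_allows(rpc_subject_label, object_label):
              return "NFS4ERR_ACCESS"
          return "NFS4_OK"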

8.7.2.  Smart Client Mode

   Smart client environments consist of NFSv4 servers that are not MAC
   aware but NFSv4 clients that are.  Clients in this environment may
   consist of groups implementing different MAC models and policies.
   The system requires that all clients in the environment be
   responsible for access control checks.  Due to the amount of trust
   placed in the clients this mode is only to be used in a trusted
   environment.

8.7.2.1.  Initial Labeling and Translation

   Just like in full mode the client is responsible for determining the
   initial label upon object creation.  The server in smart client mode
   does not implement a MAC model, however, it may provide the ability
   to restrict the creation and labeling of objects with certain labels
   based on different criteria as described in Section 8.7.1.2.

   In a smart client environment a group of clients operate in a single
   DOI.  This removes the need for the clients to maintain a set of DOI
   translations.  Servers should provide a method to allow different
   groups of clients to access the server at the same time.  However it
   should not let two groups of clients operating in different DOIs to
   access the same files.

8.7.2.2.  Policy Enforcement

   In smart client mode access control decisions are made by the
   clients.  When a client accesses an object it obtains the security
   attribute of the object from the server and combines it with the
   security attribute of the process making the request to make an
   access control decision.  This check is in addition to the DAC checks
   provided by NFSv4 so this may fail based on the DAC criteria even if
   the MAC policy grants access.  As the policy check is located on the
   client an access control denial should take the form that is native
   to the platform.

8.7.3.  Smart Server Mode

   Smart server environments consist of NFSv4 servers that are MAC aware
   and one or more MAC unaware clients.  The server is the only entity
   enforcing policy, and may selectively provide standard NFS services
   to clients based on their authentication credentials and/or
   associated network attributes (e.g., IP address, network interface).
   The level of trust and access extended to a client in this mode is
   configuration-specific.

8.7.3.1.  Initial Labeling and Translation

   In smart server mode all labeling and access control decisions are
   performed by the NFSv4 server.  In this environment the NFSv4 clients
   are not MAC aware so they cannot provide input into the access
   control decision.  This requires the server to determine the initial
   labeling of objects.  Normally the subject to use in this calculation
   would originate from the client.  Instead the NFSv4 server may choose
   to assign the subject security attribute based on their
   authentication credentials and/or associated network attributes
   (e.g., IP address, network interface).

   In smart server mode security attributes are contained solely within
   the NFSv4 server.  This means that all security attributes used in
   the system remain within a single LFS and DOI.  Since security
   attributes will not cross DOIs or change format there is no need to
   provide any translation functionality above that which is needed
   internally by the MAC model.

8.7.3.2.  Policy Enforcement

   All access control decisions in smart server mode are made by the
   server.  The server will assign the subject a security attribute
   based on some criteria (e.g., IP address, network interface).  Using
   the newly calculated security attribute and the security attribute of
   the object being requested the MAC model makes the access control
   check and returns NFS4ERR_ACCESS on a denial and NFS4_OK on success.
   This check is done transparently to the client so if the MAC
   permission check fails the client may be unaware of the reason for
   the permission failure.  When operating in this mode administrators
   attempting to debug permission failures should be aware to check the
   MAC policy running on the server in addition to the DAC settings.
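
   A non-normative sketch of how a server might derive the subject
   label from network attributes follows; the subnet-to-label mapping
   is purely an illustrative configuration:

      # Non-normative sketch: label assignment in smart server mode.
      import ipaddress

      SUBNET_LABELS = {
          ipaddress.ip_network("192.0.2.0/24"): "Secret",
          ipaddress.ip_network("198.51.100.0/24"): "Unclassified",
      }

      def assign_subject_label(client_addr, default="Unclassified"):
          addr = ipaddress.ip_address(client_addr)
          for net, label in SUBNET_LABELS.items():
              if addr in net:
                  return label
          return default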

8.8.  Use Cases

   MAC labeling is meant to allow NFSv4 to be deployed in site
   configurable security schemes.  The LFS and opaque data scheme allows
   for flexibility to meet these different implementations.  In this
   section, we provide some examples of how NFSv4 could be deployed to
   meet existing needs.  This is not an exhaustive listing.

8.8.1.  Full MAC labeling support for remotely mounted filesystems

   In this case, we assume a local networked environment where the
   servers and clients are under common administrative control.  All
   systems in this network have the same MAC implementation and
   semantically identical MAC security labels for objects (i.e. labels
   mean the same thing on different systems, even if the policies on
   each system may differ to some extent).  Clients will be able to
   apply fine-grained MAC policy to objects accessed via NFS mounts, and
   thus improve the overall consistency of MAC policy application within
   this environment.

   An example of this case would be where user home directories are
   remotely mounted, and fine-grained MAC policy is implemented to
   protect, for example, private user data from being read by malicious
   web scripts running in the user's browser.  With Labeled NFS, fine-
   grained MAC labeling of the user's files will allow the local MAC
   policy to be implemented and provide the desired protection.

8.8.2.  MAC labeling of virtual machine images stored on the network

   Virtualization is now a commonly implemented feature of modern
   operating systems, and there is a need to ensure that MAC security
   policy is able to protect virtualized resources.  A common
   implementation scheme involves storing virtualized guest filesystems
   on a networked server, which are then mounted remotely by guests upon
   instantiation.  In this case, there is a need to ensure that the
   local guest kernel is able to access fine-grained MAC labels on the
   remotely mounted filesystem so that its MAC security policy can be
   applied.

8.8.3.  International Traffic in Arms Regulations (ITAR)

   The International Traffic in Arms Regulations (ITAR) is put forth by
   the United States Department of State, Directorate of Defense and
   Trade Controls.  ITAR places strict requirements on the export and
   thus access of defense articles and defense services.  Organizations
   that manage projects with articles and services deemed as within the
   scope of ITAR must ensure the regulations are met.  The regulations
   require an assurance that ITAR information is accessed on a need-to-
   know basis, thus requiring strict, centrally managed access controls
   on items labeled as ITAR.  Additionally, organizations must be able
   to prove that the controls were adequately maintained and that
   foreign nationals were not permitted access to these defense articles
   or services.  ITAR control applicability may be dynamic; information
   may become subject to ITAR after creation (e.g., when the defense
   implications of technology are recognized).

8.8.4.  Legal Hold/eDiscovery

   Increased cases of legal holds on electronic sources of information
   (ESI) have resulted in organizations taking a pro-active approach to
   reduce the scope and thus costs associated with these activities.
   ESI Data Maps are increasing in use and require support in operating
   systems to strictly manage access controls in the case of a legal
   hold.  The sizeable quantity of information involved in a legal
   discovery request may preclude making a copy of the information to a
   separate system that manages the legal hold on the copies; this
   results in a need to enforce the legal hold on the original
   information.

   Organizations are taking steps to map out the sources of information
   that are most likely to be placed under a legal hold; these efforts
   result in ESI Data Maps.  ESI Data Maps specify the Electronic Source
   of Information and the requirements for sensitivity and criticality.
   In the case of a legal hold, the ESI data map and labels can be used
   to ensure the legal hold is properly enforced on the predetermined
   set of information.  An ESI data map narrows the scope of a legal
   hold to the predetermined ESI.  The information must then be
   protected at a level of security at which the weight and
   admissibility of that evidence can be proven in a court of law.
   Current systems use application level controls and do not adequately
   meet the requirements.  Labels may be used in advance when an ESI
   data map exercise is conducted with controls being applied at the
   time of a hold or labels may be applied to data sets during an
   eDiscovery exercise to ensure the data protections are adequate
   during the legal hold period.

   Note that this use case requires multi-attribute labels, as both
   information sensitivity (e.g., to disclosure) and information
   criticality (e.g., to continued business operations) need to be
   captured.

8.8.5.  Simple security label storage

   In this case, a mixed and loosely administered network is assumed,
   where nodes may be running a variety of operating systems with
   different security mechanisms and security policies.  It is desired
   that network file servers be simply capable of storing and retrieving
   MAC security labels for clients which use such labels.  The Labeled
   NFS protocol would be implemented here solely to enable transport of
   MAC security labels across the network.  It should be noted that in
   such an environment, overall security cannot be as strongly enforced
   as in the full mode case (Section 8.8.1), and that this scheme is
   aimed at allowing MAC-capable
   clients to function with local MAC security policy enabled rather
   than perhaps disabling it entirely.

8.8.6.  Diskless Linux

   A number of popular operating system distributions depend on a
   mandatory access control (MAC) model to implement a kernel-enforced
   security policy.  Typically, such models assign particular roles to
   individual processes, which limit or permit performing certain
   operations on a set of files, directories, sockets, or other objects.
   While the enforcing of the policy is typically a matter for the
   diskless NFS client itself, the filesystem objects in such models
   will typically carry MAC labels that are used to define policy on
   access.  These policies may, for instance, describe privilege
   transitions that cannot be replicated using standard NFS ACL based
   models.

   For instance, on a SYSV-compatible system, if the 'init' process
   spawns a process that attempts to start the 'NetworkManager'
   executable, there may be a policy that sets up a role transition if
   the 'init' process and 'NetworkManager' file labels match a
   particular rule.  Without this role transition, the process may find
   itself having insufficient privileges to perform its primary job of
   configuring network interfaces.

   In setups of this type, a lot of the policy targets (such as sockets
   or privileged system calls) are entirely local to the client.  The
   use of RPCSEC_GSSv3 for enforcing compliance at the server level is
   therefore of limited value.  The ability to permanently label files
   and have those labels read back by the client is, however, crucial to
   the ability to enforce that policy.

8.8.7.  Multi-Level Security

   In an MLS system objects are generally assigned a sensitivity level
   and a set of compartments.  The sensitivity levels within the system
   are given an order ranging from lowest to highest classification
   level.  Read access to an object is allowed when the sensitivity
   level of the subject "dominates" the object it wants to access.  This
   means that the sensitivity level of the subject is at least as high
   as that of the object it wishes to access and that its set of
   compartments is a superset of the compartments on the object.

   The rest of the section will use only sensitivity levels.  The
   running example is a client that wishes to list the contents of a
   directory.  The system defines the sensitivity levels as Unclassified
   (U), Secret (S), and Top Secret (TS).  The directory to be searched
   is labeled Top Secret which means access to read the directory will
   only be granted if the subject making the request is also labeled Top
   Secret.
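
   The dominance test used in the examples below can be sketched as
   follows (non-normative; the level ordering and compartment sets are
   illustrative):

      # Non-normative sketch of the "dominates" check.
      LEVELS = {"U": 0, "S": 1, "TS": 2}   # U < S < TS

      def dominates(subj_level, subj_comps, obj_level, obj_comps):
          return (LEVELS[subj_level] >= LEVELS[obj_level]
                  and set(subj_comps) >= set(obj_comps))

      # A Secret subject may not read a Top Secret directory, but a
      # Top Secret subject may.
      assert dominates("S", set(), "TS", set()) is False
      assert dominates("TS", set(), "TS", set()) is True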

8.8.7.1.  Full Mode

   In the first part of this example a process on the client is running
   at the Secret level.  The process issues a readdir system call which
   enters the kernel.  Before translating the readdir system call into a
   request to the NFSv4 server the host operating system will consult
   the MAC module to see if the operation is allowed.  Since the process
   is operating at Secret and the directory to be accessed is labeled
   Top Secret the MAC module will deny the request and an error code is
   returned to user space.

   Consider a second case where instead of running at Secret the process
   is running at Top Secret.  In this case the sensitivity of the
   process is equal to or greater than that of the directory so the MAC
   module will allow the request.  Now the readdir is translated into
   the necessary NFSv4 call to the server.  For the RPC request the
   client is using the proper credential to assert to the server that
   the process is running at Top Secret.

   When the server receives the request it extracts the security label
   from the RPC session and retrieves the label on the directory.  The
   server then checks with its MAC module if a Top Secret process is
   allowed to read the contents of the Top Secret directory.  Since this
   is allowed by the policy then the server will return the appropriate
   information back to the client.

   In this example the policies on the client and server were the
   same.  In the event that they were running different policies a
   translation of the labels might be needed.  In this case it could be
   possible for a check to pass on the client and fail on the server.
   The server may consider additional information when making its policy
   decisions.  For example the server could determine that a certain
   subnet is only cleared for data up to Secret classification.  If that
   constraint was in place for the example above the client would still
   succeed, but the server would fail since the client is asserting a
   label that it is not able to use (Top Secret on a Secret network).

8.8.7.2.  Smart Client Mode

   In smart client mode the example is identical to the first part of a
   full mode operation.  A process on the client labeled Secret wishes
   to access a Top Secret directory.  As in the full mode example this
   is denied since Secret does not dominate Top Secret.  If the process
   were operating at Top Secret it would pass the local access control
   check and the NFSv4 operation would proceed as in a normal NFSv4
   environment.

8.8.7.3.  Smart Server Mode

   In smart server mode the client behaves as if it were in a normal
   NFSv4 environment.  Since the process on the client does not provide
   a security attribute the server must define a mechanism for labeling
   all requests from a client.  Assume that the server is using the same
   criteria used in the full mode example.  The server sees the request
   as coming from a subnet that is a Secret network.  The server
   determines that all clients on that subnet will have their requests
   labeled with Secret.  Since the directory on the server is labeled
   Top Secret and Secret does not dominate Top Secret the server would
   fail the request with NFS4ERR_ACCESS.

8.9.  Security Considerations

   This entire document deals with security issues.

   Depending on the level of protection the MAC system offers there may
   be a requirement to tightly bind the security attribute to the data.

   When either the client is in Smart Client Mode or the server is in
   Smart Server Mode, it is important to realize that the other side is
   not enforcing MAC protections.  Alternate methods might be in use to
   handle the lack of MAC support and care should be taken to identify
   and mitigate threats from possible tampering outside of these
   methods.

   An example of this is that a smart server that modifies READDIR or
   LOOKUP results based on the client's subject label might want to
   always construct the same subject label for a client which does not
   present one.  This will prevent a non-LNFS client from mixing entries
   in the directory cache.

9.  Security Considerations

10.  Operations: REQUIRED, RECOMMENDED, or OPTIONAL

   The following tables summarize the operations of the NFSv4.2 protocol
   and the corresponding designation of REQUIRED, RECOMMENDED, and
   OPTIONAL to implement or MUST NOT implement.  The designation of MUST
   NOT implement is reserved for those operations that were defined in
   either NFSv4.0 or NFSv4.1 and MUST NOT be implemented in NFSv4.2.

   For the most part, the REQUIRED, RECOMMENDED, or OPTIONAL designation
   for operations sent by the client is for the server implementation.
   The client is generally required to implement the operations needed
   for the operating environment which it serves.  For example, a
   read-only NFSv4.2 client would have no need to implement the WRITE
   operation and is not required to do so.

   The REQUIRED or OPTIONAL designation for callback operations sent by
   the server is for both the client and server.  Generally, the client
   has the option of creating the backchannel and sending the operations
   on the fore channel that will be a catalyst for the server sending
   callback operations.  A partial exception is CB_RECALL_SLOT; the only
   way the client can avoid supporting this operation is by not creating
   a backchannel.

   Since this is a summary of the operations and their designation,
   there are subtleties that are not presented here.  Therefore, if
   there is a question of the requirements of implementation, the
   operation descriptions themselves must be consulted along with other
   relevant explanatory text within either this specification or that of
   NFSv4.1 [2].

   The abbreviations used in the second and third columns of the table
   are defined as follows.

   REQ  REQUIRED to implement

   REC  RECOMMENDED to implement

   OPT  OPTIONAL to implement

   MNI  MUST NOT implement

   For the NFSv4.2 features that are OPTIONAL, the operations that
   support those features are OPTIONAL, and the server would return
   NFS4ERR_NOTSUPP in response to the client's use of those operations.
   If an OPTIONAL feature is supported, it is possible that a set of
   operations related to the feature become REQUIRED to implement.  The
   third column of the table designates the feature(s) and if the
   operation is REQUIRED or OPTIONAL in the presence of support for the
   feature.

   The OPTIONAL features identified and their abbreviations are as
   follows:

   pNFS  Parallel NFS

   FDELG  File Delegations

   DDELG  Directory Delegations

   COPY  Server Side Copy

   ADB  Application Data Blocks

                                Operations

   +----------------------+--------------------+-----------------------+
   | Operation            | REQ, REC, OPT, or  | Feature (REQ, REC, or |
   |                      | MNI                | OPT)                  |
   +----------------------+--------------------+-----------------------+
   | ACCESS               | REQ                |                       |
   | BACKCHANNEL_CTL      | REQ                |                       |
   | BIND_CONN_TO_SESSION | REQ                |                       |
   | CLOSE                | REQ                |                       |
   | COMMIT               | REQ                |                       |
   | COPY                 | OPT                | COPY (REQ)            |
   | COPY_ABORT           | OPT                | COPY (REQ)            |
   | COPY_NOTIFY          | OPT                | COPY (REQ)            |
   | COPY_REVOKE          | OPT                | COPY (REQ)            |
   | COPY_STATUS          | OPT                | COPY (REQ)            |
   | CREATE               | REQ                |                       |
   | CREATE_SESSION       | REQ                |                       |
   | DELEGPURGE           | OPT                | FDELG (REQ)           |
   | DELEGRETURN          | OPT                | FDELG, DDELG, pNFS    |
   |                      |                    | (REQ)                 |
   | DESTROY_CLIENTID     | REQ                |                       |
   | DESTROY_SESSION      | REQ                |                       |
   | EXCHANGE_ID          | REQ                |                       |
   | FREE_STATEID         | REQ                |                       |
   | GETATTR              | REQ                |                       |
   | GETDEVICEINFO        | OPT                | pNFS (REQ)            |
   | GETDEVICELIST        | OPT                | pNFS (OPT)            |
   | GETFH                | REQ                |                       |
   | INITIALIZE           | OPT                | ADB (REQ)             |
   | GET_DIR_DELEGATION   | OPT                | DDELG (REQ)           |
   | LAYOUTCOMMIT         | OPT                | pNFS (REQ)            |
   | LAYOUTGET            | OPT                | pNFS (REQ)            |
   | LAYOUTRETURN         | OPT                | pNFS (REQ)            |
   | LINK                 | OPT                |                       |
   | LOCK                 | REQ                |                       |
   | LOCKT                | REQ                |                       |
   | LOCKU                | REQ                |                       |
   | LOOKUP               | REQ                |                       |
   | LOOKUPP              | REQ                |                       |
   | NVERIFY              | REQ                |                       |
   | OPEN                 | REQ                |                       |
   | OPENATTR             | OPT                |                       |
   | OPEN_CONFIRM         | MNI                |                       |
   | OPEN_DOWNGRADE       | REQ                |                       |
   | PUTFH                | REQ                |                       |
   | PUTPUBFH             | REQ                |                       |
   | PUTROOTFH            | REQ                |                       |
   | READ                 | OPT                |                       |
   | READDIR              | REQ                |                       |
   | READLINK             | OPT                |                       |
   | READ_PLUS            | OPT                | ADB (REQ)             |
   | RECLAIM_COMPLETE     | REQ                |                       |
   | RELEASE_LOCKOWNER    | MNI                |                       |
   | REMOVE               | REQ                |                       |
   | RENAME               | REQ                |                       |
   | RENEW                | MNI                |                       |
   | RESTOREFH            | REQ                |                       |
   | SAVEFH               | REQ                |                       |
   | SECINFO              | REQ                |                       |
   | SECINFO_NO_NAME      | REC                | pNFS file layout      |
   |                      |                    | (REQ)                 |
   | SEQUENCE             | REQ                |                       |
   | SETATTR              | REQ                |                       |
   | SETCLIENTID          | MNI                |                       |
   | SETCLIENTID_CONFIRM  | MNI                |                       |
   | SET_SSV              | REQ                |                       |
   | TEST_STATEID         | REQ                |                       |
   | VERIFY               | REQ                |                       |
   | WANT_DELEGATION      | OPT                | FDELG (OPT)           |
   | WRITE                | REQ                |                       |
   +----------------------+--------------------+-----------------------+
                            Callback Operations

   +-------------------------+-------------------+---------------------+
   | Operation               | REQ, REC, OPT, or | Feature (REQ, REC,  |
   |                         | MNI               | or OPT)             |
   +-------------------------+-------------------+---------------------+
   | CB_COPY                 | OPT               | COPY (REQ)          |
   | CB_GETATTR              | OPT               | FDELG (REQ)         |
   | CB_LAYOUTRECALL         | OPT               | pNFS (REQ)          |
   | CB_NOTIFY               | OPT               | DDELG (REQ)         |
   | CB_NOTIFY_DEVICEID      | OPT               | pNFS (OPT)          |
   | CB_NOTIFY_LOCK          | OPT               |                     |
   | CB_PUSH_DELEG           | OPT               | FDELG (OPT)         |
   | CB_RECALL               | OPT               | FDELG, DDELG, pNFS  |
   |                         |                   | (REQ)               |
   | CB_RECALL_ANY           | OPT               | FDELG, DDELG, pNFS  |
   |                         |                   | (REQ)               |
   | CB_RECALL_SLOT          | REQ               |                     |
   | CB_RECALLABLE_OBJ_AVAIL | OPT               | DDELG, pNFS (REQ)   |
   | CB_SEQUENCE             | OPT               | FDELG, DDELG, pNFS  |
   |                         |                   | (REQ)               |
   | CB_WANTS_CANCELLED      | OPT               | FDELG, DDELG, pNFS  |
   |                         |                   | (REQ)               |
   +-------------------------+-------------------+---------------------+

11.  NFSv4.2 Operations

11.1.  Operation 59: COPY - Initiate a server-side copy

11.1.1.  ARGUMENT

   const COPY4_GUARDED     = 0x00000001;
   const COPY4_METADATA    = 0x00000002;

   struct COPY4args {
           /* SAVED_FH: source file */
           /* CURRENT_FH: destination file or */
           /*             directory           */
           offset4         ca_src_offset;
           offset4         ca_dst_offset;
           length4         ca_count;
           uint32_t        ca_flags;
           component4      ca_destination;
           netloc4         ca_source_server<>;
   };

11.1.2.  RESULT

   union COPY4res switch (nfsstat4 cr_status) {
           case NFS4_OK:
                   stateid4        cr_callback_id<1>;
           default:
                   length4         cr_bytes_copied;
   };

11.1.3.  DESCRIPTION

   The COPY operation is used for both intra- and inter-server copies.
   In both cases, the COPY is always sent from the client to the
   destination server of the file copy.  The COPY operation requests
   that a file be copied from the location specified by the SAVED_FH
   value to the location specified by the combination of CURRENT_FH and
   ca_destination.

   The SAVED_FH must be a regular file.  If SAVED_FH is not a regular
   file, the operation MUST fail and return NFS4ERR_WRONG_TYPE.

   In order to set SAVED_FH to the source file handle, the compound
   procedure requesting the COPY will include a sub-sequence of
   operations such as

      PUTFH source-fh
      SAVEFH

   If the request is for a server-to-server copy, the source-fh is a
   filehandle from the source server and the compound procedure is being
   executed on the destination server.  In this case, the source-fh is a
   foreign filehandle on the server receiving the COPY request.  If
   either PUTFH or SAVEFH checked the validity of the filehandle, the
   operation would likely fail and return NFS4ERR_STALE.

   In order to avoid this problem, the minor version incorporating the
   COPY operations will need to make a few small changes in the handling
   of existing operations.  If a server supports the server-to-server
   COPY feature, a PUTFH followed by a SAVEFH MUST NOT return
   NFS4ERR_STALE for either operation.  These restrictions do not pose
   substantial difficulties for servers.  The CURRENT_FH and SAVED_FH
   may be validated in the context of the operation referencing them and
   an NFS4ERR_STALE error returned for an invalid file handle at that
   point.

   The CURRENT_FH and ca_destination together specify the destination of
   the copy operation.  If ca_destination is of 0 (zero) length, then
   CURRENT_FH specifies the target file.  In this case, CURRENT_FH MUST
   be a regular file and not a directory.  If ca_destination is not of 0
   (zero) length, the ca_destination argument specifies the file name to
   which the data will be copied within the directory identified by
   CURRENT_FH.  In this case, CURRENT_FH MUST be a directory and not a
   regular file.
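
   A non-normative sketch of the resulting COMPOUND for an intra-server
   copy into a new name under a destination directory follows; the
   operation tuples are illustrative client-library conventions, and
   only the operation ordering matters:

      # Non-normative sketch: COMPOUND for an intra-server COPY.
      def build_copy_compound(src_fh, dst_dir_fh, new_name, count):
          return [
              ("PUTFH",  {"fh": src_fh}),      # CURRENT_FH := source
              ("SAVEFH", {}),                  # SAVED_FH   := source
              ("PUTFH",  {"fh": dst_dir_fh}),  # CURRENT_FH := dst dir
              ("COPY",   {"ca_src_offset": 0,
                          "ca_dst_offset": 0,
                          "ca_count": count,   # 0 means through EOF
                          "ca_flags": 0,
                          "ca_destination": new_name,
                          "ca_source_server": []}),  # intra-server
          ]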

   If the file named by ca_destination does not exist and the operation
   completes successfully, the file will be visible in the file system
   namespace.  If the file does not exist and the operation fails, the
   file MAY be visible in the file system namespace depending on when
   the failure occurs and on the implementation of the NFS server
   receiving the COPY operation.  If the ca_destination name cannot be
   created in the destination file system (due to file name
   restrictions, such as case or length), the operation MUST fail.

   The ca_src_offset is the offset within the source file from which the
   data will be read, the ca_dst_offset is the offset within the
   destination file to which the data will be written, and the ca_count
   is the number of bytes that will be copied.  An offset of 0 (zero)
   specifies the start of the file.  A count of 0 (zero) requests that
   all bytes from ca_src_offset through EOF be copied to the
   destination.  If concurrent modifications to the source file overlap
   with the source file region being copied, the data copied may include
   all, some, or none of the modifications.  The client can use standard
   NFS operations (e.g., OPEN with OPEN4_SHARE_DENY_WRITE or mandatory
   byte range locks) to protect against concurrent modifications if the
   client is concerned about this.  If the source file's end of file is
   being modified in parallel with a copy that specifies a count of 0
   (zero) bytes, the amount of data copied is implementation dependent
   (clients may guard against this case by specifying a non-zero count
   value or preventing modification of the source file as mentioned
   above).

   If the source offset or the source offset plus count is greater than
   or equal to the size of the source file, the operation will fail with
   NFS4ERR_INVAL.  The destination offset or destination offset plus
   count may be greater than the size of the destination file.  This
   allows for the client to issue parallel copies to implement
   operations such as "cat file1 file2 file3 file4 > dest".
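
   For example, a client concatenating several source files could issue
   one COPY per source in parallel, as in the following non-normative
   sketch (copy is a hypothetical wrapper that sends a single COPY
   operation):

      # Non-normative sketch: parallel COPYs implementing
      # "cat file1 file2 ... > dest".
      def concatenate(sources_with_sizes, copy):
          dst_offset = 0
          results = []
          for src_fh, size in sources_with_sizes:
              # A non-zero count guards against the source growing
              # concurrently; the destination offset may lie beyond
              # the destination's current size.
              results.append(copy(src_fh, src_offset=0,
                                  dst_offset=dst_offset, count=size))
              dst_offset += size
          return results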

   If the destination file is created as a result of this command, the
   destination file's size will be equal to the number of bytes
   successfully copied.  If the destination file already existed, the
   destination file's size may increase as a result of this operation
   (e.g. if ca_dst_offset plus ca_count is greater than the
   destination's initial size).

   If the ca_source_server list is specified, then this is an inter-
   server copy operation and the source file is on a remote server.  The
   client is expected to have previously issued a successful COPY_NOTIFY
   request to the remote source server.  The ca_source_server list
   SHOULD be the same as the COPY_NOTIFY response's cnr_source_server
   list.  If the client includes the entries from the COPY_NOTIFY
   response's cnr_source_server list in the ca_source_server list, the
   source server can indicate a specific copy protocol for the
   destination server to use by returning a URL, which specifies both a
   protocol service and server name.  Server-to-server copy protocol
   considerations are described in Section 4.2.3 and Section 4.4.1.

   The ca_flags argument allows the copy operation to be customized in
   the following ways using the guarded flag (COPY4_GUARDED) and the
   metadata flag (COPY4_METADATA).

   If the guarded flag is set and the destination exists on the server,
   this operation will fail with NFS4ERR_EXIST.

   If the guarded flag is not set and the destination exists on the
   server, the behavior is implementation dependent.

   If the metadata flag is set and the client is requesting a whole file
   copy (i.e., ca_count is 0 (zero)), a subset of the destination file's
   attributes MUST be the same as the source file's corresponding
   attributes and a subset of the destination file's attributes SHOULD
   be the same as the source file's corresponding attributes.  The
   attributes in the MUST and SHOULD copy subsets will be defined for
   each NFS version.

   For NFSv4.1, Table 2 and Table 3 list the REQUIRED and RECOMMENDED
   attributes respectively.  A "MUST" in the "Copy to destination file?"
   column indicates that the attribute is part of the MUST copy set.  A
   "SHOULD" in the "Copy to destination file?" column indicates that the
   attribute is part of the SHOULD copy set.

          +--------------------+----+---------------------------+
          | Name               | Id | Copy to destination file? |
          +--------------------+----+---------------------------+
          | supported_attrs    | 0  | no                        |
          | type               | 1  | MUST                      |
          | fh_expire_type     | 2  | no                        |
          | change             | 3  | SHOULD                    |
          | size               | 4  | MUST                      |
          | link_support       | 5  | no                        |
          | symlink_support    | 6  | no                        |
          | named_attr         | 7  | no                        |
          | fsid               | 8  | no                        |
          | unique_handles     | 9  | no                        |
          | lease_time         | 10 | no                        |
          | rdattr_error       | 11 | no                        |
          | filehandle         | 19 | no                        |
          | suppattr_exclcreat | 75 | no                        |
          +--------------------+----+---------------------------+

                                  Table 2

          +--------------------+----+---------------------------+
          | Name               | Id | Copy to destination file? |
          +--------------------+----+---------------------------+
          | acl                | 12 | MUST                      |
          | aclsupport         | 13 | no                        |
          | archive            | 14 | no                        |
          | cansettime         | 15 | no                        |
          | case_insensitive   | 16 | no                        |
          | case_preserving    | 17 | no                        |
          | change_policy      | 60 | no                        |
          | chown_restricted   | 18 | MUST                      |
          | dacl               | 58 | MUST                      |
          | dir_notif_delay    | 56 | no                        |
          | dirent_notif_delay | 57 | no                        |
          | fileid             | 20 | no                        |
          | files_avail        | 21 | no                        |
          | files_free         | 22 | no                        |
          | files_total        | 23 | no                        |
          | fs_charset_cap     | 76 | no                        |
          | fs_layout_type     | 62 | no                        |
          | fs_locations       | 24 | no                        |
          | fs_locations_info  | 67 | no                        |
          | fs_status          | 61 | no                        |
          | hidden             | 25 | MUST                      |
          | homogeneous        | 26 | no                        |
          | layout_alignment   | 66 | no                        |
          | layout_blksize     | 65 | no                        |
          | layout_hint        | 63 | no                        |
          | layout_type        | 64 | no                        |
          | maxfilesize        | 27 | no                        |
          | maxlink            | 28 | no                        |
          | maxname            | 29 | no                        |
          | maxread            | 30 | no                        |
          | maxwrite           | 31 | no                        |
          | mdsthreshold       | 68 | no                        |
          | mimetype           | 32 | MUST                      |
          | mode               | 33 | MUST                      |
          | mode_set_masked    | 74 | no                        |
          | mounted_on_fileid  | 55 | no                        |
          | no_trunc           | 34 | no                        |
          | numlinks           | 35 | no                        |
          | owner              | 36 | MUST                      |
          | owner_group        | 37 | MUST                      |
          | quota_avail_hard   | 38 | no                        |
          | quota_avail_soft   | 39 | no                        |
          | quota_used         | 40 | no                        |
          | rawdev             | 41 | no                        |
          | retentevt_get      | 71 | MUST                      |
          | retentevt_set      | 72 | no                        |
          | retention_get      | 69 | MUST                      |
          | retention_hold     | 73 | MUST                      |
          | retention_set      | 70 | no                        |
          | sacl               | 59 | MUST                      |
          | space_avail        | 42 | no                        |
          | space_free         | 43 | no                        |
          | space_freed        | 78 | no                        |
          | space_reserved     | 77 | MUST                      |
          | space_total        | 44 | no                        |
          | space_used         | 45 | no                        |
          | system             | 46 | MUST                      |
          | time_access        | 47 | MUST                      |
          | time_access_set    | 48 | no                        |
          | time_backup        | 49 | no                        |
          | time_create        | 50 | MUST                      |
          | time_delta         | 51 | no                        |
          | time_metadata      | 52 | SHOULD                    |
          | time_modify        | 53 | MUST                      |
          | time_modify_set    | 54 | no                        |
          +--------------------+----+---------------------------+

                                  Table 3

   [NOTE: The source file's attribute values will take precedence over
   any attribute values inherited by the HPC community.

   If an application reads a hole in a sparse file, the file system must
   returns all zeros to the application.  For local data access there is
   little penalty, but with NFS these zeroes must be transferred back to destination file.]
   In the client.  If case of an application uses the NFS client to read data into
   memory, this wastes time and bandwidth as inter-server copy or an intra-server copy between
   file systems, the application waits attributes supported for the zeroes to be transferred.

   A sparse source file is typically created by initializing the and
   destination file to could be different.  By definition,the REQUIRED
   attributes will be supported in all
   zeros - nothing cases.  If the metadata flag is written to
   set and the data in source file has a RECOMMENDED attribute that is not
   supported for the destination file, instead the hole copy MUST fail with
   NFS4ERR_ATTRNOTSUPP.

   Any attribute supported by the destination server that is recorded in not set on
   the metadata for source file SHOULD be left unset.

   Metadata attributes not exposed via the file.  So a 8G disk image might NFS protocol SHOULD be represented initially by a couple hundred bits in copied
   to the destination file where appropriate.

   The destination file's named attributes are not duplicated from the inode and
   nothing on
   source file.  After the disk.  If copy process completes, the VM then writes 100M client MAY
   attempt to a file in the
   middle of duplicate named attributes using standard NFSv4
   operations.  However, the image, there would now destination file's named attribute
   capabilities MAY be two holes represented in different from the source file's named attribute
   capabilities.

   If the metadata flag is not set and 100M in the data.

   Other applications want to initialize client is requesting a whole
   file to patterns other than
   zero.  The problem with initializing to zero copy (i.e., ca_count is that it 0 (zero)), the destination file's
   metadata is often
   difficult to distinguish a byte-range of initialized to all zeroes
   from data corruption, since a pattern of zeroes implementation dependent.

   If the client is requesting a probable pattern
   for corruption.  Instead, some applications, such as database
   management systems, use pattern consisting of bytes or words of non-
   zero values.

   Besides reading sparse files and initializing them, applications
   might want to hole punch, which partial file copy (i.e., ca_count is
   not 0 (zero)), the deallocation of the data
   blocks which back a region of the file.  At such time, the affected
   blocks are reinitialized to a pattern.

   This section introduces a new operation to read patterns from a file,
   READ_PLUS, and a new operation to both initialize patterns and to
   punch pattern holes into a file, WRITE_PLUS.  READ_PLUS supports all
   the features of READ but includes an extension to support sparse
   pattern files.  READ_PLUS is guaranteed to perform no worse than
   READ, client SHOULD NOT set the metadata flag and can dramatically improve performance with sparse files.
   READ_PLUS the
   server MUST ignore the metadata flag.

   If the operation does not depend on pNFS protocol features, but can result in an immediate failure, the server
   will return NFS4_OK, and the CURRENT_FH will remain the destination's
   filehandle.

   If an immediate failure does occur, cr_bytes_copied will be used
   by pNFS set to support sparse files.

7.2.  Terminology

   Regular file:  An object
   the number of bytes copied to the destination file type NF4REG or NF4NAMEDATTR.

   Sparse file: before the error
   occurred.  The cr_bytes_copied value indicates the number of bytes
   copied but not which specific bytes have been copied.

   A Regular file return of NFS4_OK indicates that contains one either the operation is complete
   or more Holes.

   Hole:  A byte range within the operation was initiated and a Sparse file that contains regions of all
      zeroes.  For block-based file systems, this could also callback will be an
      unallocated region of used to deliver
   the file.

   Hole Threshold  The minimum length final status of a Hole as determined by the
      server. operation.

   If a server chooses to define a Hole Threshold, then it
      would not return hole information (nfs_readplusreshole) with a
      hole_offset and hole_length that specify a range shorter than the
      Hole Threshold.

7.3.  Applications and Sparse Files

   Applications may cause an NFS client to read holes in a file for
   several reasons.  This section describes three different application
   workloads cr_callback_id is returned, this indicates that cause the NFS client to transfer data unnecessarily.
   These workloads are simply examples, operation
   was initiated and there are probably many more
   workloads that are negatively impacted by sparse files. a CB_COPY callback will deliver the final results
   of the operation.  The first workload that can cause holes to be read cr_callback_id stateid is sequential
   reads within termed a sparse file.  When copy
   stateid in this happens, context.  The server is given the NFS client may
   perform read requests ("readahead") into sections option of returning
   the file not
   explicitly requested by results in a callback because the application.  Since data may require a relatively
   long period of time to copy.

   If no cr_callback_id is returned, the NFS client cannot
   differentiate between holes operation completed
   synchronously and non-holes, no callback will be issued by the NFS client may
   prefetch empty sections server.  The
   completion status of the file.

   This workload operation is exemplified indicated by Virtual Machines and their associated
   file system images, e.g., VMware .vmdk files, which are large sparse
   files encapsulating an entire operating system. cr_status.

   If a VM reads files
   within the copy completes successfully, either synchronously or
   asynchronously, the data copied from the source file system image, this will translate to sequential NFS
   read requests into the much larger
   destination file system image file.  Since MUST appear identical to the NFS
   does not understand client.  However,
   the internals NFS server's on disk representation of the file system image, it ends
   up performing readahead file holes.

   The second workload is generated by copying a file from a directory data in NFS to either the same NFS server, to another file system, e.g.,
   another NFS or Samba server, to a local ext3 source
   file system, or even a
   network socket.  In this case, bandwidth and destination file MAY differ.  For example, the NFS server resources are
   wasted as
   might encrypt, compress, deduplicate, or otherwise represent the on
   disk data in the entire source and destination file is transferred from the NFS server to differently.

   In the
   NFS client.  Once event of a byte range failure the state of the destination file has been transferred to
   the client, it is up to
   implementation dependent.  The COPY operation may fail for the client application, e.g., rsync, cp, scp,
   on how it writes
   following reasons (this is a partial list).

   NFS4ERR_MOVED:  The file system which contains the data to source file, or
      the target location.  For example, cp
   supports sparse files and will not write all zero regions, whereas
   scp does destination file or directory is not support sparse files present.  The client can
      determine the correct location and will transfer every byte of reissue the
   file. operation with the
      correct location.

   NFS4ERR_NOTSUPP:  The third workload copy offload operation is generated by applications that do not utilize supported by the
      NFS client cache, but instead use direct I/O and manage cached
   data independently, e.g., databases.  These applications may perform
   whole file caching with sparse files, which would mean that even the
   holes will be transferred to the clients and cached.

7.4.  Overview of Sparse Files and NFSv4

   This proposal seeks to provide sparse file server receiving this request.

   NFS4ERR_PARTNER_NOTSUPP:  The remote server does not support to the largest
   number of NFS client and
      server-to-server copy offload protocol.

   NFS4ERR_PARTNER_NO_AUTH:  The remote server does not authorize a
      server-to-server copy offload operation.  This may be due to the
      client's failure to send the COPY_NOTIFY operation to the remote
      server, the remote server receiving a server-to-server copy
      offload request after the copy lease time expired, or for some
      other permission problem.

   NFS4ERR_FBIG:  The copy operation would have caused the file to grow
      beyond the server's limit.

   NFS4ERR_NOTDIR:  The CURRENT_FH is a file and ca_destination has
      non-zero length.

   NFS4ERR_WRONG_TYPE:  The SAVED_FH is not a regular file.

   NFS4ERR_ISDIR:  The CURRENT_FH is a directory and ca_destination has
      zero length.

   NFS4ERR_INVAL:  The source offset or offset plus count are greater
      than or equal to the size of the source file.

   NFS4ERR_DELAY:  The server does not have the resources to perform the
      copy operation at the current time.  The client should retry the
      operation sometime in the future.

   NFS4ERR_METADATA_NOTSUPP:  The destination file cannot support the
      same metadata as the source file.

   NFS4ERR_WRONGSEC:  The security mechanism being used by the client
      does not match the server's security policy.
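
   The error list above suggests a straightforward client-side recovery
   policy.  The following fragment is a non-normative C sketch of one
   possible policy; the nfsstat4 type and NFS4ERR_* constants are
   assumed to come from the protocol's XDR description [3], and the
   recovery actions named here are purely illustrative.

   /* Non-normative sketch: map a COPY failure to a client action.   */
   /* The NFS4ERR_* constants come from the XDR-generated headers;   */
   /* the action names below are invented for this example.          */
   enum copy_recovery {
           COPY_RETRY_LATER,       /* back off and reissue the COPY   */
           COPY_RELOCATE,          /* find new location, then reissue */
           COPY_RENOTIFY,          /* refresh lease via COPY_NOTIFY   */
           COPY_FALLBACK           /* fall back to READ/WRITE copying */
   };

   static enum copy_recovery
   copy_error_action(nfsstat4 cr_status)
   {
           switch (cr_status) {
           case NFS4ERR_DELAY:           return COPY_RETRY_LATER;
           case NFS4ERR_MOVED:           return COPY_RELOCATE;
           case NFS4ERR_PARTNER_NO_AUTH: return COPY_RENOTIFY;
           default:                      return COPY_FALLBACK;
           }
   }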

11.2.  Operation 60: COPY_ABORT - Cancel a server-side copy

11.2.1.  ARGUMENT

   struct COPY_ABORT4args {
            /* CURRENT_FH: destination file */
           stateid4        caa_stateid;
   };

11.2.2.  RESULT

   struct COPY_ABORT4res {
           nfsstat4        car_status;
   };

11.2.3.  DESCRIPTION

   COPY_ABORT is used for both intra- and inter-server asynchronous
   copies.  The COPY_ABORT operation allows the client to cancel a
   server-side copy operation that it initiated.  This operation is
   sent in a COMPOUND request from the client to the destination
   server.  This operation may be used to cancel a copy when the
   application that requested the copy exits before the operation is
   completed or for some other reason.

   The request contains the filehandle and copy stateid cookies that
   act as the context for the previously initiated copy operation.

   The result's car_status field indicates whether the cancel was
   successful or not.  A value of NFS4_OK indicates that the copy
   operation was canceled and no callback will be issued by the server.
   A copy operation that is successfully canceled may result in none,
   some, or all of the data having been copied.

   If the server supports asynchronous copies, the server is REQUIRED
   to support the COPY_ABORT operation.

   The COPY_ABORT operation may fail for the following reasons (this is
   a partial list):

   NFS4ERR_NOTSUPP:  The abort operation is not supported by the NFS
      server receiving this request.

   NFS4ERR_RETRY:  The abort failed, but a retry at some time in the
      future MAY succeed.

   NFS4ERR_COMPLETE_ALREADY:  The abort failed, and a callback will
      deliver the results of the copy operation.

   NFS4ERR_SERVERFAULT:  An error occurred on the server that does not
      map to a specific error code.

11.3.  Operation 61: COPY_NOTIFY - Notify a source server of a future
       copy

11.3.1.  ARGUMENT

   struct COPY_NOTIFY4args {
           /* CURRENT_FH: source file */
           netloc4         cna_destination_server;
   };

11.3.2.  RESULT

   struct COPY_NOTIFY4resok {
           nfstime4        cnr_lease_time;
           netloc4         cnr_source_server<>;
   };

   union COPY_NOTIFY4res switch (nfsstat4 cnr_status) {
           case NFS4_OK:
                   COPY_NOTIFY4resok       resok4;
           default:
                   void;
   };

11.3.3.  DESCRIPTION

   This operation is used for an inter-server copy.  A client sends
   this operation in a COMPOUND request to the source server to
   authorize a destination server identified by cna_destination_server
   to read the file specified by CURRENT_FH on behalf of the given
   user.

   The cna_destination_server MUST be specified using the netloc4
   network location format.  The server is not required to resolve the
   cna_destination_server address before completing this operation.

   If this operation succeeds, the source server will allow the
   cna_destination_server to copy the specified file on behalf of the
   given user.  If COPY_NOTIFY succeeds, the destination server is
   granted permission to read the file as long as both of the following
   conditions are met:

   o  The destination server begins reading the source file before the
      cnr_lease_time expires.  If the cnr_lease_time expires while the
      destination server is still reading the source file, the
      destination server is allowed to finish reading the file.

   o  The client has not issued a COPY_REVOKE for the same combination
      of user, filehandle, and destination server.

   The cnr_lease_time is chosen by the source server.  A cnr_lease_time
   of 0 (zero) indicates an infinite lease.  To renew the copy lease
   time the client should resend the same copy notification request to
   the source server.

   To avoid the need for synchronized clocks, copy lease times are
   granted by the server as a time delta.  However, there is a
   requirement that the client and server clocks do not drift
   excessively over the duration of the lease.  There is also the issue
   of propagation delay across the network which could easily be
   several hundred milliseconds as well as the possibility that
   requests will be lost and need to be retransmitted.

   To take propagation delay into account, the client should subtract
   it from copy lease times (e.g., if the client estimates the one-way
   propagation delay as 200 milliseconds, then it can assume that the
   lease is already 200 milliseconds old when it gets it).  In
   addition, it will take another 200 milliseconds to get a response
   back to the server.  So the client must send a lease renewal or send
   the copy offload request to the cna_destination_server at least 400
   milliseconds before the copy lease would expire.  If the propagation
   delay varies over the life of the lease (e.g., the client is on a
   mobile host), the client will need to continuously subtract the
   increase in propagation delay from the copy lease times.
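
   As a concrete illustration of the arithmetic above, the following
   non-normative C sketch computes the latest time at which a client
   should renew the copy lease, given the granted lease delta and an
   estimate of the one-way propagation delay.  The function and
   variable names are invented for this example.

   #include <stdint.h>

   /*
    * Non-normative sketch.  All times are in milliseconds.  The lease
    * is treated as already "delay_ms" old when it arrives, and another
    * "delay_ms" is needed for the renewal (or the copy offload
    * request) to reach the server, so the client budgets two one-way
    * delays before expiry.  A lease_ms of 0 means an infinite lease.
    */
   static int64_t
   copy_lease_renew_by(int64_t now_ms, int64_t lease_ms, int64_t delay_ms)
   {
           if (lease_ms == 0)
                   return INT64_MAX;       /* infinite lease */
           return now_ms + lease_ms - 2 * delay_ms;
   }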

   The server's copy lease period configuration should take into
   account the network distance of the clients that will be accessing
   the server's resources.  It is expected that the lease period will
   take into account the network propagation delays and other network
   delay factors for the client population.  Since the protocol does
   not allow for an automatic method to determine an appropriate copy
   lease period, the server's administrator may have to tune the copy
   lease period.

   A successful response will also contain a list of names, addresses,
   and URLs called cnr_source_server, on which the source is willing to
   accept connections from the destination.  These might not be
   reachable from the client and might be located on networks to which
   the client has no connection.

   If the client wishes to perform an inter-server copy, the client
   MUST send a COPY_NOTIFY to the source server.  Therefore, the source
   server MUST support COPY_NOTIFY.

   For a copy only involving one server (the source and destination are
   on the same server), this operation is unnecessary.

   The COPY_NOTIFY operation may fail for the following reasons (this
   is a partial list):

   NFS4ERR_MOVED:  The file system which contains the source file is
      not present on the source server.  The client can determine the
      correct location and reissue the operation with the correct
      location.

   NFS4ERR_NOTSUPP:  The copy offload operation is not supported by the
      NFS server receiving this request.

   NFS4ERR_WRONGSEC:  The security mechanism being used by the client
      does not match the server's security policy.
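
   Taken together with the COPY, COPY_REVOKE, and CB_COPY operations,
   the expected inter-server control flow can be summarized by the
   following non-normative C sketch.  The step() helper and the textual
   COMPOUND summaries are invented for this illustration; the exact
   COPY arguments are defined earlier in this document.

   #include <stdio.h>

   /*
    * Non-normative outline of an inter-server copy from the client's
    * point of view.  Each step would be a COMPOUND request in a real
    * client; here the steps are simply printed.
    */
   static void step(const char *compound)
   {
           printf("send: %s\n", compound);
   }

   int main(void)
   {
           /* 1. Authorize the destination server at the source.      */
           step("to source:      PUTFH(src), COPY_NOTIFY(dst server)");
           /* 2. Ask the destination server to perform the copy.      */
           step("to destination: PUTFH(dst), COPY(...)");
           /* 3. Renew the copy lease by resending the COPY_NOTIFY.   */
           step("to source:      PUTFH(src), COPY_NOTIFY(dst server)");
           /* 4. Withdraw the authorization when the copy completes.  */
           step("to source:      PUTFH(src), COPY_REVOKE(dst server)");
           return 0;
   }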

11.4.  Operation 62: COPY_REVOKE - Revoke a destination server's copy
       privileges

11.4.1.  ARGUMENT

   struct COPY_REVOKE4args {
           /* CURRENT_FH: source file */
           netloc4         cra_destination_server;
   };

11.4.2.  RESULT

   struct COPY_REVOKE4res {
           nfsstat4        crr_status;
   };

11.4.3.  DESCRIPTION

   This operation is used for an inter-server copy.  A client sends
   this operation in a COMPOUND request to the source server to revoke
   the authorization of a destination server identified by
   cra_destination_server from reading the file specified by CURRENT_FH
   on behalf of the given user.  If the cra_destination_server has
   already begun copying the file, a successful return from this
   operation indicates that further access will be prevented.

   The cra_destination_server MUST be specified using the netloc4
   network location format.  The server is not required to resolve the
   cra_destination_server address before completing this operation.

   The COPY_REVOKE operation is useful in situations in which the
   source server granted a very long or infinite lease on the
   destination server's ability to read the source file and all copy
   operations on the source file have been completed.

   For a copy only involving one server (the source and destination are
   on the same server), this operation is unnecessary.

   If the server supports COPY_NOTIFY, the server is REQUIRED to
   support the COPY_REVOKE operation.

   The COPY_REVOKE operation may fail for the following reasons (this
   is a partial list):

   NFS4ERR_MOVED:  The file system which contains the source file is
      not present on the source server.  The client can determine the
      correct location and reissue the operation with the correct
      location.

   NFS4ERR_NOTSUPP:  The copy offload operation is not supported by the
      NFS server receiving this request.

11.5.  Operation 63: COPY_STATUS - Poll for status of a server-side
       copy

11.5.1.  ARGUMENT

   struct COPY_STATUS4args {
           /* CURRENT_FH: destination file */
           stateid4        csa_stateid;
   };

11.5.2.  RESULT

   struct COPY_STATUS4resok {
           length4         csr_bytes_copied;
           nfsstat4        csr_complete<1>;
   };

   union COPY_STATUS4res switch (nfsstat4 csr_status) {
           case NFS4_OK:
                   COPY_STATUS4resok       resok4;
           default:
                   void;
   };

11.5.3.  DESCRIPTION

   COPY_STATUS is used for both intra- and inter-server asynchronous
   copies.  The COPY_STATUS operation allows the client to poll the
   server to determine the status of an asynchronous copy operation.
   This operation is sent by the client to the destination server.

   If this operation is successful, the number of bytes copied is
   returned to the client in the csr_bytes_copied field.  The
   csr_bytes_copied value indicates the number of bytes copied but not
   which specific bytes have been copied.

   If the optional csr_complete field is present, the copy has
   completed.  In this case the status value indicates the result of
   the asynchronous copy operation.  In all cases, the server will also
   deliver the final results of the asynchronous copy in a CB_COPY
   operation.

   The failure of this operation does not indicate the result of the
   asynchronous copy in any way.

   If the server supports asynchronous copies, the server is REQUIRED
   to support the COPY_STATUS operation.

   The COPY_STATUS operation may fail for the following reasons (this
   is a partial list):

   NFS4ERR_NOTSUPP:  The copy status operation is not supported by the
      NFS server receiving this request.

   NFS4ERR_BAD_STATEID:  The stateid is not valid (see Section 4.3.2
      below).

   NFS4ERR_EXPIRED:  The stateid has expired (see Copy Offload Stateid
      section below).

11.6.  Operation 64: INITIALIZE

   The server has no concept of the structure imposed by the
   application.  Order is imposed only when the application writes to a
   section of the file.  In order to detect corruption even before the
   application utilizes the file, the application will want to
   initialize a range of ADBs.  It uses the INITIALIZE operation to do
   so.

11.6.1.  ARGUMENT

   /*
     * We use data_content4 in case we wish to
    * extend new types later. Note that we
    * are explicitly disallowing data.
    */
   union initialize_arg4 switch (data_content4 content) {
   case NFS4_CONTENT_APP_BLOCK:
           app_data_block4 ia_adb;
   case NFS4_CONTENT_HOLE:
           hole_info4      ia_hole;
   default:
           void;
   };

   struct INITIALIZE4args {
           /* CURRENT_FH: file */
           stateid4        ia_stateid;
           stable_how4     ia_stable;
           initialize_arg4 ia_data<>;
   };

11.6.2.  RESULT

   struct INITIALIZE4resok {
           count4          ir_count;
           stable_how4     ir_committed;
           verifier4       ir_writeverf;
           data_content4   ir_sparse;
   };

   union INITIALIZE4res switch (nfsstat4 status) {
   case NFS4_OK:
           INITIALIZE4resok        resok4;
   default:
           void;
   };

11.6.3.  DESCRIPTION

   When the client invokes the INITIALIZE operation, it has two desired
   results:

   1.  The structure described by the app_data_block4 be imposed on the
       file.

   2.  The contents described by the app_data_block4 be sparse.

   If the server supports the INITIALIZE operation, it MUST still
   support sparse files.  So if it receives the INITIALIZE operation,
   then it MUST populate the contents of the file with the initialized
   ADBs.  In other words, if the server supports INITIALIZE, then it
   supports the concept of ADBs.  [[Comment.4: Do we want to support an
   asynchronous INITIALIZE?  Do we have to? --TH]]

   If the data was already initialized, there are two interesting
   scenarios:

   1.  The data blocks are allocated.

   2.  Initializing in the middle of an existing ADB.

   If the data blocks were already allocated, then the INITIALIZE is a
   hole punch operation.  If INITIALIZE supports sparse files, then the
   data blocks are to be deallocated.  If not, then the data blocks are
   to be rewritten in the indicated ADB format.  [[Comment.5: Need to
   document interaction between space reservation and hole punching?
   --TH]]

   Since the server has no knowledge of ADBs, it should not report
   misaligned creation of ADBs.  Even while it can detect them, it
   cannot disallow them, as the application might be in the process of
   changing the ADBs.  Thus the server must be prepared to handle an
   INITIALIZE into an existing ADB.

   This document does not mandate the manner in which the server stores
   ADBs sparsely for a file.  It does assume that if ADBs are stored
   sparsely, then the server can detect when an INITIALIZE arrives that
   will force a new ADB to start inside an existing ADB.  For example,
   assume that ADBi has an adb_block_size of 4k and that an INITIALIZE
   starts 1k inside ADBi.  The server should [[Comment.6: Need to flesh
   this out. --TH]]
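
   The two scenarios above amount to a small decision on the server
   side.  The following non-normative C sketch illustrates it; the
   helper functions are invented for this example and stand in for
   whatever block-management interface the server's file system
   provides.

   #include <stdbool.h>
   #include <stdint.h>

   /* Hypothetical file-system hooks, not defined by this protocol.   */
   extern bool blocks_allocated(uint64_t off, uint64_t len);
   extern void deallocate_blocks(uint64_t off, uint64_t len);  /* punch */
   extern void write_adb_pattern(uint64_t off, uint64_t len);  /* rewrite */
   extern void allocate_sparse_adb(uint64_t off, uint64_t len);

   /*
    * Non-normative sketch: initialize [off, off+len) as an ADB.
    * "server_is_sparse" reflects whether the server keeps ADBs
    * sparsely on disk.
    */
   static void
   initialize_range(uint64_t off, uint64_t len, bool server_is_sparse)
   {
           if (blocks_allocated(off, len)) {
                   if (server_is_sparse)
                           deallocate_blocks(off, len);  /* hole punch */
                   else
                           write_adb_pattern(off, len);  /* rewrite ADB */
           } else {
                   allocate_sparse_adb(off, len);        /* record ADB */
           }
   }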

11.7.  Modification to Operation 42: EXCHANGE_ID - Instantiate Client ID

11.7.1.  ARGUMENT

      /* new */
      const EXCHGID4_FLAG_SUPP_FENCE_OPS      = 0x00000004;

11.7.2.  RESULT

      Unchanged

11.7.3.  MOTIVATION

   Enterprise applications require guarantees that an operation has
   either aborted or completed.  NFSv4.1 provides this guarantee as
   long as the session is alive: simply send a SEQUENCE operation on
   the same slot with a new sequence number, and the successful return
   of SEQUENCE indicates the previous operation has completed.
   However, if the session is lost, there is no way to know when any in
   progress operations have aborted or completed.  In hindsight, the
   NFSv4.1 specification should have mandated that DESTROY_SESSION
   abort/complete all outstanding operations.

11.7.4.  DESCRIPTION

   A client SHOULD request the EXCHGID4_FLAG_SUPP_FENCE_OPS capability
   when it sends an EXCHANGE_ID operation.  The server SHOULD set this
   capability in the EXCHANGE_ID reply whether the client requests it
   or not.  If the client ID is created with this capability, then the
   following will occur:

   o  The server will not reply to DESTROY_SESSION until all operations
      in progress are completed or aborted.

   o  The server will not reply to subsequent EXCHANGE_ID invoked on
      the same Client Owner with a new verifier until all operations in
      progress on the Client ID's session are completed or aborted.

   o  When DESTROY_CLIENTID is invoked, any sessions (both idle and
      non-idle), opens, locks, delegations, layouts, and/or wants
      (Section 18.49) associated with the client ID are removed.
      Pending operations will be completed or aborted before the
      sessions, opens, locks, delegations, layouts, and/or wants are
      deleted.

   o  The NFS server SHOULD support client ID trunking, and if it does
      and the EXCHGID4_FLAG_SUPP_FENCE_OPS capability is enabled, then
      a session ID created on one node of the storage cluster MUST be
      destroyable via DESTROY_SESSION.  In addition, DESTROY_CLIENTID
      and an EXCHANGE_ID with a new verifier affect all sessions
      regardless of what node the sessions were created on.

11.8.  Operation 65: READ_PLUS

   If the client sends a brief description READ operation, it is explicitly stating that
   it is not supporting sparse files.  So if we did choose to support multi-data a READ occurs on a sparse
   ADB, then the server hole information:

   For must expand such ADBs to be raw bytes.  If a data
   READ occurs in the middle of an ADB, the server that can obtain hole information only send back
   bytes starting from that offset.

   Such an operation is inefficient for transfer of sparse sections of
   the entire
   file without severe performance impact, file.  As such, READ is marked as OBSOLETE in NFSv4.2.  Instead,
   a client should issue READ_PLUS.  Note that as the client has no a
   priori knowledge of whether an ADB is present or not, it MAY should
   always use READ_PLUS.

11.8.1.  ARGUMENT

   struct READ_PLUS4args {
           /* CURRENT_FH: file */
           stateid4        rpa_stateid;
           offset4         rpa_offset;
           count4          rpa_count;
   };

11.8.2.  RESULT

   union read_plus_content switch (data_content4 content) {
   case NFS4_CONTENT_DATA:
           opaque          rpc_data<>;
   case NFS4_CONTENT_APP_BLOCK:
           app_data_block4 rpc_block;
   case NFS4_CONTENT_HOLE:
           hole_info4      rpc_hole;
   default:
           void;
   };

   /*
     * Allow a return of an array of contents.
    */
   struct read_plus_res4 {
           bool                    rpr_eof;
           read_plus_content       rpr_contents<>;
   };

   union READ_PLUS4res switch (nfsstat4 status) {
   case NFS4_OK:
           read_plus_res4  resok4;
   default:
           void;
   };

11.8.3.  DESCRIPTION

   Over the given range, READ_PLUS will return all data and ADBs found
   as an array of read_plus_content.  It is possible to have
   consecutive ADBs in the array as either different definitions of
   ADBs are present or as the guard pattern changes.

   Edge cases exist for ADBs which either begin before the rpa_offset
   requested by the READ_PLUS or end after the rpa_count requested -
   both of which may occur as not all applications which access the
   file are aware of the main application imposing a format on the file
   contents, i.e., tar, dd, cp, etc.  READ_PLUS MUST retrieve whole
   ADBs, but it need not retrieve an entire sequence of ADBs.

   The server MUST return a whole ADB because if it does not, it must
   expand that partial ADB before it sends it to the client.  E.g., if
   an ADB had a block size of 64k and the READ_PLUS was for 128k
   starting at an offset of 32k inside the ADB, then the first 32k
   would be converted to data.
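
   A client consuming a READ_PLUS reply therefore walks the
   rpr_contents array and dispatches on the content type.  The
   following non-normative C fragment shows the shape of that loop;
   "reply" is a simplified, already-decoded view of READ_PLUS4res, and
   the consume_*() and expand_adb() helpers are invented for this
   example.

   /*
    * Non-normative sketch.  A hole is materialized as zeroes in the
    * client's buffer, data is copied through, and an ADB is expanded
    * by the client according to its app_data_block4 description.
    */
   for (i = 0; i < reply.n_contents; i++) {
           switch (reply.contents[i].content) {
           case NFS4_CONTENT_DATA:
                   consume_data(&reply.contents[i]);   /* literal bytes */
                   break;
           case NFS4_CONTENT_HOLE:
                   consume_hole(&reply.contents[i]);   /* zero-fill     */
                   break;
           case NFS4_CONTENT_APP_BLOCK:
                   expand_adb(&reply.contents[i]);     /* rebuild ADB   */
                   break;
           default:
                   break;                              /* future types  */
           }
   }
   if (reply.eof) {
           /* no further READ_PLUS needed for this range */
   }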

12.  NFSv4.2 Callback Operations

12.1.  Operation 15: CB_COPY - Report results of a server-side copy

12.1.1.  ARGUMENT

   union copy_info4 switch (nfsstat4 cca_status) {
           case NFS4_OK:
                   void;
           default:
                   length4         cca_bytes_copied;
   };

   struct CB_COPY4args {
           nfs_fh4         cca_fh;
           stateid4        cca_stateid;
           copy_info4      cca_copy_info;
   };

12.1.2.  RESULT

   struct CB_COPY4res {
           nfsstat4        ccr_status;
   };

12.1.3.  DESCRIPTION

   CB_COPY is used for both intra- and inter-server asynchronous copies.
   The CB_COPY callback informs the client of the result of an
   asynchronous server-side copy.  This operation is sent by the
   destination server to the client in a CB_COMPOUND request.  The copy
   is identified by the filehandle and stateid arguments.  The result
   is indicated by the status field.  If the copy failed,
   cca_bytes_copied contains the number of bytes copied before the
   failure occurred.  The cca_bytes_copied value indicates the number
   of bytes copied but not which specific bytes have been copied.

   In the READ_PLUS
   operation return information for multiple holes in a single return
   value.  This would allow several small holes to be described in a
   single read response without requiring multliple exchanges between
   the client and server.

   One important item to consider with returning an array absence of data chunks
   is its impact on RDMA, which may use different block sizes on an established backchannel, the
   client and server (among other things).

7.7.3.  User-Defined Sparse Mask

   Add mask (instead cannot
   signal the completion of just zeroes).  Specified by server or client?

7.7.4.  Allocated flag

   A Hole on the server may be an allocated byte-range consisting COPY via a CB_COPY callback.  The loss
   of all
   zeroes or may not a callback channel would be allocated at all.  To ensure this information is
   properly communicated to indicated by the client, it may be beneficial to add a
   'alloc' server setting the
   SEQ4_STATUS_CB_PATH_DOWN flag to in the HOLE_INFO section sr_status_flags field of nfs_readplusreshole.  This
   would allow an NFS the
   SEQUENCE operation.  The client must re-establish the callback
   channel to receive the status of the COPY operation.  Prolonged loss
   of the callback channel could result in the server dropping the COPY
   operation state and invalidating the copy a file from one file system stateid.

   If the client supports the COPY operation, the client is REQUIRED to
   another and have it more closely resemble
   support the original.

7.7.5.  Dense and Sparse pNFS File Layouts CB_COPY operation.

   The hole information returned form a data server must be understood
   by pNFS clients using both Dense or Sparse file layout types.  Does
   the current READ_PLUS return value work CB_COPY operation may fail for both layout types?  Does the data server know if it following reasons (this is using dense or sparse so that it can
   return a
   partial list):

   NFS4ERR_NOTSUPP:  The copy offload operation is not supported by the
      NFS client receiving this request.

13.  IANA Considerations

   This section uses terms that are defined in [23].

14.  References

14.1.  Normative References

   [1]   Bradner, S., "Key words for use in RFCs to Indicate Requirement
         Levels", March 1997.

   [2]   Shepler, S., Eisler, M., and D. Noveck, "Network File System
         (NFS) Version 4 Minor Version 1 Protocol", RFC 5661,
         January 2010.

   [3]   Haynes, T., "Network File System (NFS) Version 4 Minor Version
         2 External Data Representation Standard (XDR) Description",
         March 2011.

   [4]   Halevy, B., Welch, B., and J. Zelenka, "Object-Based Parallel
         NFS (pNFS) Operations", RFC 5664, January 2010.

   [5]   Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
         Resource Identifier (URI): Generic Syntax", STD 66, RFC 3986,
         January 2005.

   [6]   Haynes, T. and N. Williams, "Remote Procedure Call (RPC)
         Security Version 3", draft-williams-rpcsecgssv3 (work in
         progress), 2011.

   [7]   Eisler, M., Chiu, A., and L. Ling, "RPCSEC_GSS Protocol
         Specification", RFC 2203, September 1997.

   [8]   Shepler, S., Eisler, M., and D. Noveck, "Network File System
         (NFS) Version 4 Minor Version 1 External Data Representation
         Standard (XDR) Description", RFC 5662, January 2010.

   [9]   Black, D., Glasgow, J., and S. Fridella, "Parallel NFS (pNFS)
         Block/Volume Layout", RFC 5663, January 2010.

14.2.  Informative References

   [10]  Haynes, T. and D. Noveck, "Network File System (NFS) version 4
         Protocol", draft-ietf-nfsv4-rfc3530bis-09 (Work In Progress),
         March 2011.

   [11]  Eisler, M., "XDR: External Data Representation Standard",
         RFC 4506, May 2006.

   [12]  Lentini, J., Everhart, C., Ellard, D., Tewari, R., and M. Naik,
         "NSDB Protocol for Federated Filesystems",
         draft-ietf-nfsv4-federated-fs-protocol (Work In Progress),
         2010.

   [13]  Lentini, J., Everhart, C., Ellard, D., Tewari, R., and M. Naik,
         "Administration Protocol for Federated Filesystems",
         draft-ietf-nfsv4-federated-fs-admin (Work In Progress), 2010.

   [14]  Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L.,
         Leach, P., and T. Berners-Lee, "Hypertext Transfer Protocol --
         HTTP/1.1", RFC 2616, June 1999.

   [15]  Postel, J. and J. Reynolds, "File Transfer Protocol", STD 9,
         RFC 959, October 1985.

   [16]  Simpson, W., "PPP Challenge Handshake Authentication Protocol
         (CHAP)", RFC 1994, August 1996.

   [17]  Strohm, R., "Chapter 2, Data Blocks, Extents, and Segments, of
         Oracle Database Concepts 11g Release 1 (11.1)", January 2011.

   [18]  Ashdown, L., "Chapter 15, Validating Database Files and
         Backups, of Oracle Database Backup and Recovery User's Guide
         11g Release 1 (11.1)", August 2008.

   [19]  McDougall, R. and J. Mauro, "Section 11.4.3, Detecting Memory
         Corruption of Solaris Internals", 2007.

   [20]  Bairavasundaram, L., Goodson, G., Schroeder, B., Arpaci-
         Dusseau, A., and R. Arpaci-Dusseau, "An Analysis of Data
         Corruption in the Storage Stack", Proceedings of the 6th USENIX
         Symposium on File and Storage Technologies (FAST '08) , 2008.

   [21]  "Section 46.6. Multi-Level Security (MLS) of Deployment Guide:
         Deployment, configuration and administration of Red Hat
         Enterprise Linux 5, Edition 6", 2011.

   [22]  Quigley, D. and J. Lu, "Registry Specification for MAC Security
         Label Formats", draft-quigley-label-format-registry (work in
         progress), 2011.

   [23]  Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA
         Considerations Section in RFCs", BCP 26, RFC 5226, May 2008.

   [24]  Nowicki, B., "NFS: Network File System Protocol specification",
         RFC 1094, March 1989.

   [25]  Callaghan, B., Pawlowski, B., and P. Staubach, "NFS Version 3
         Protocol Specification", RFC 1813, June 1995.

   [26]  Srinivasan, R., "Binding Protocols for ONC RPC Version 2",
         RFC 1833, August 1995.

   [27]  Eisler, M., "NFS Version 2 and Version 3 Security Issues and
         the NFS Protocol's Use of RPCSEC_GSS and Kerberos V5",
         RFC 2623, June 1999.

   [28]  Callaghan, B., "NFS URL Scheme", RFC 2224, October 1997.

   [29]  Shepler, S., "NFS Version 4 Design Considerations", RFC 2624,
         June 1999.

   [30]  Reynolds, J., "Assigned Numbers: RFC 1700 is Replaced by an On-
         line Database", RFC 3232, January 2002.

   [31]  Linn, J., "The Kerberos Version 5 GSS-API Mechanism", RFC 1964,
         June 1996.

   [32]  Shepler, S., Callaghan, B., Robinson, D., Thurlow, R., Beame,
         C., Eisler, M., and D. Noveck, "Network File System (NFS)
         version 4 Protocol", RFC 3530, April 2003.

Appendix A.  Acknowledgments

   For the pNFS Access Permissions Check, the original draft was by
   Sorin Faibish, David Black, Mike Eisler, and Jason Glasgow.  The work
   was influenced by discussions with Benny Halevy and Bruce Fields.  A
   review was done by Tom Haynes.

   For the Sharing change attribute implementation details with NFSv4
   clients, the original draft was by Trond Myklebust.

   For the NFS Server-side Copy, the original draft was by James
   Lentini, Mike Eisler, Deepak Kenchammana, Anshul Madan, and Rahul
   Iyer.  Talpey co-authored an unpublished version of that document.

   It was also reviewed by a number of individuals: Pranoop Erasani,
   Tom Haynes, Arthur Lent, Trond Myklebust, Dave Noveck, Theresa
   Lingutla-Raj, Manjunath Shankararao, Satyam Vaghani, and Nico
   Williams.

   For the NFS space reservation operations, the original draft was by
   Mike Eisler, James Lentini, Manjunath Shankararao, and Rahul Iyer.

   For the sparse file support, the original draft was by Dean
   Hildebrand and Marc Eshel.  Valuable input and advice was received
   from Sorin Faibish, Bruce Fields, Benny Halevy, Trond Myklebust, and
   Richard Scheffenegger.

   For Labeled NFS, the original draft was by David Quigley, James
   Morris, Jarret Lu, and Tom Haynes.  Peter Staubach, Trond Myklebust,
   Sorin Faibish, Nico Williams, and David Black also contributed in
   the final push to get this accepted.

Appendix B.  RFC Editor Notes

   [RFC Editor: please remove this section prior to publishing this
   document as an RFC]

   [RFC Editor: prior to publishing this document as an RFC, please
   replace all occurrences of RFCTBD10 with RFCxxxx where xxxx is the
   RFC number of this document]

Author's Address

   Thomas Haynes
   NetApp
   9110 E 66th St
   Tulsa, OK  74133
   USA

   Phone: +1 918 307 1415
   Email: thomas@netapp.com
   URI:   http://www.tulsalabs.com