NFSv4                                                        T. Haynes
Internet-Draft                                                  Editor
Intended status: Standards Track                           May 09, 2011
Expires: November 10, 2011

NFS Version 4 Minor Version 2
draft-ietf-nfsv4-minorversion2-02.txt
Abstract

This Internet-Draft describes NFS version 4 minor version two,
focusing mainly on the protocol extensions made from NFS version 4
minor version 0 and NFS version 4 minor version 1. Major extensions
introduced in NFS version 4 minor version two include: Server-side
Copy, Space Reservations, and Support for Sparse Files.

Requirements Language
skipping to change at page 1, line 40
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on November 10, 2011.
Copyright Notice

Copyright (c) 2011 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
skipping to change at page 3, line 7
modifications of such material outside the IETF Standards Process.
Without obtaining an adequate license from the person(s) controlling
the copyright in such materials, this document may not be modified
outside the IETF Standards Process, and derivative works of it may
not be created outside the IETF Standards Process, except to format
it for publication as an RFC or to translate it into languages other
than English.

Table of Contents
1.  Introduction
  1.1.  The NFS Version 4 Minor Version 2 Protocol
  1.2.  Scope of This Document
  1.3.  NFSv4.2 Goals
  1.4.  Overview of NFSv4.2 Features
  1.5.  Differences from NFSv4.1
2.  pNFS LAYOUTRETURN Error Handling
  2.1.  Introduction
  2.2.  Changes to Operation 51: LAYOUTRETURN
    2.2.1.  ARGUMENT
    2.2.2.  RESULT
    2.2.3.  DESCRIPTION
    2.2.4.  IMPLEMENTATION
3.  Sharing change attribute implementation details with NFSv4
    clients
  3.1.  Abstract
  3.2.  Introduction
  3.3.  Definition of the 'change_attr_type' per-file system
        attribute
4.  NFS Server-side Copy
  4.1.  Introduction
  4.2.  Protocol Overview
    4.2.1.  Intra-Server Copy
    4.2.2.  Inter-Server Copy
    4.2.3.  Server-to-Server Copy Protocol
  4.3.  Operations
    4.3.1.  netloc4 - Network Locations
    4.3.2.  Operation 61: COPY_NOTIFY - Notify a source server of a
            future copy
    4.3.3.  Operation 62: COPY_REVOKE - Revoke a destination
            server's copy privileges
    4.3.4.  Operation 59: COPY - Initiate a server-side copy
    4.3.5.  Operation 60: COPY_ABORT - Cancel a server-side copy
    4.3.6.  Operation 63: COPY_STATUS - Poll for status of a
            server-side copy
    4.3.7.  Operation 15: CB_COPY - Report results of a server-side
            copy
    4.3.8.  Copy Offload Stateids
  4.4.  Security Considerations
    4.4.1.  Inter-Server Copy Security
5.  Application Data Block Support
  5.1.  Generic Framework
    5.1.1.  Data Block Representation
    5.1.2.  Data Content
  5.2.  Operation 64: INITIALIZE
    5.2.1.  ARGUMENT
    5.2.2.  RESULT
    5.2.3.  DESCRIPTION
  5.3.  Operation 65: READ_PLUS
    5.3.1.  ARGUMENT
    5.3.2.  RESULT
    5.3.3.  DESCRIPTION
  5.4.  pNFS Considerations
  5.5.  An Example of Detecting Corruption
  5.6.  Example of READ_PLUS
  5.7.  Zero Filled Holes
6.  Space Reservation
  6.1.  Introduction
  6.2.  Use Cases
    6.2.1.  Space Reservation
    6.2.2.  Space freed on deletes
    6.2.3.  Operations and attributes
    6.2.4.  Attribute 77: space_reserved
    6.2.5.  Attribute 78: space_freed
    6.2.6.  Attribute 79: max_hole_punch
    6.2.7.  Operation 64: HOLE_PUNCH - Zero and deallocate blocks
            backing the file in the specified range.
7.  Sparse Files
  7.1.  Introduction
  7.2.  Terminology
  7.3.  Applications and Sparse Files
  7.4.  Overview of Sparse Files and NFSv4
  7.5.  Operation 65: READ_PLUS
    7.5.1.  ARGUMENT
    7.5.2.  RESULT
    7.5.3.  DESCRIPTION
    7.5.4.  IMPLEMENTATION
    7.5.5.  READ_PLUS with Sparse Files Example
  7.6.  Related Work
  7.7.  Other Proposed Designs
    7.7.1.  Multi-Data Server Hole Information
    7.7.2.  Data Result Array
    7.7.3.  User-Defined Sparse Mask
    7.7.4.  Allocated flag
    7.7.5.  Dense and Sparse pNFS File Layouts
8.  Security Considerations
9.  IANA Considerations
10. References
  10.1.  Normative References
  10.2.  Informative References
Appendix A.  Acknowledgments
Appendix B.  RFC Editor Notes
Author's Address
1.  Introduction

1.1.  The NFS Version 4 Minor Version 2 Protocol

The NFS version 4 minor version 2 (NFSv4.2) protocol is the third
minor version of the NFS version 4 (NFSv4) protocol. The first minor
version, NFSv4.0, is described in [10] and the second minor version,
NFSv4.1, is described in [2]. It follows the guidelines for minor
versioning that are listed in Section 11 of RFC 3530bis.
skipping to change at page 12, line 20
duplicate junction using the FEDFS_CREATE_JUNCTION procedure.

For the inter-server copy protocol, the operations are defined to be
compatible with a server-to-server copy protocol in which the
destination server reads the file data from the source server. This
model in which the file data is pulled from the source by the
destination has a number of advantages over a model in which the
source pushes the file data to the destination. The advantages of
the pull model include:
o The pull model only requires a remote server (i.e., the destination
server) to be granted read access. A push model requires a remote
server (i.e., the source server) to be granted write access, which
is more privileged.
o The pull model allows the destination server to stop reading if it
has run out of space. In a push model, the destination server
must flow control the source server in this situation.

o The pull model allows the destination server to easily flow
control the data stream by adjusting the size of its read
operations. In a push model, the destination server does not have
this ability. The source server in a push model is capable of
writing chunks larger than the destination server has requested in
skipping to change at page 18, line 28
of the source file to the destination file by replicating the file
system formats at the block level. Another possibility is that the
source and destination might be two nodes sharing a common storage
area network, and thus there is no need to copy any data at all, and
instead ownership of the file and its contents might simply be re-
assigned to the destination. To allow for these possibilities, the
destination server is allowed to use a server-to-server copy protocol
of its choice.

In a heterogeneous environment, using a protocol other than NFSv4.x
(e.g., HTTP [14] or FTP [15]) presents some challenges. In
particular, the destination server is presented with the challenge of
accessing the source file given only an NFSv4.x filehandle.

One option for protocols that identify source files with path names
is to use an ASCII hexadecimal representation of the source
filehandle as the file name.
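The sketch below is purely illustrative and not part of the protocol;
it shows one way a server might render an opaque filehandle as such a
hexadecimal file name (the function name and buffer handling are
hypothetical).

   #include <stdio.h>

   /*
    * Illustrative only: render an opaque NFSv4.x filehandle as an
    * ASCII hexadecimal string suitable for use as a path name
    * component.  "name" must have room for 2 * fh_len + 1 bytes.
    */
   static void fh_to_hex_name(const unsigned char *fh, size_t fh_len,
                              char *name, size_t name_len)
   {
       size_t i;

       if (name_len < 2 * fh_len + 1)
           return;
       for (i = 0; i < fh_len; i++)
           sprintf(name + 2 * i, "%02x", fh[i]);
       name[2 * fh_len] = '\0';
   }

A 16-byte filehandle would thus map to a 32-character name that the
destination server can present to, for example, an FTP or HTTP
service exported by the source server.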
Another option for the source server is to use URLs to direct the
destination server to a specialized service. For example, the
response to COPY_NOTIFY could include the URL
skipping to change at page 20, line 7
UTF-8 string. If the netloc4 is of type NL4_NETADDR, the nl_addr
field MUST contain a valid netaddr4 as defined in Section 3.3.9 of
[2].

When netloc4 values are used for an inter-server copy as shown in
Figure 3, their values may be evaluated on the source server,
destination server, and client. The network environment in which
these systems operate should be configured so that the netloc4 values
are interpreted as intended on each system.
4.3.2.  Operation 61: COPY_NOTIFY - Notify a source server of a future
        copy
4.3.2.1. ARGUMENT
struct COPY_NOTIFY4args {
/* CURRENT_FH: source file */
netloc4 cna_destination_server;
};
4.3.2.2. RESULT
struct COPY_NOTIFY4resok {
nfstime4 cnr_lease_time;
netloc4 cnr_source_server<>;
};
union COPY_NOTIFY4res switch (nfsstat4 cnr_status) {
case NFS4_OK:
COPY_NOTIFY4resok resok4;
default:
void;
};
4.3.2.3. DESCRIPTION
This operation is used for an inter-server copy. A client sends this
operation in a COMPOUND request to the source server to authorize a
destination server identified by cna_destination_server to read the
file specified by CURRENT_FH on behalf of the given user.
The cna_destination_server MUST be specified using the netloc4
network location format. The server is not required to resolve the
cna_destination_server address before completing this operation.
If this operation succeeds, the source server will allow the
cna_destination_server to copy the specified file on behalf of the
given user. If COPY_NOTIFY succeeds, the destination server is
granted permission to read the file as long as both of the following
conditions are met:
o The destination server begins reading the source file before the
cnr_lease_time expires. If the cnr_lease_time expires while the
destination server is still reading the source file, the
destination server is allowed to finish reading the file.
o The client has not issued a COPY_REVOKE for the same combination
of user, filehandle, and destination server.
The cnr_lease_time is chosen by the source server. A cnr_lease_time
of 0 (zero) indicates an infinite lease. To renew the copy lease
time the client should resend the same copy notification request to
the source server.
To avoid the need for synchronized clocks, copy lease times are
granted by the server as a time delta. However, there is a
requirement that the client and server clocks do not drift
excessively over the duration of the lease. There is also the issue
of propagation delay across the network, which could easily be several
hundred milliseconds, as well as the possibility that requests will be
lost and need to be retransmitted.
To take propagation delay into account, the client should subtract it
from copy lease times (e.g., if the client estimates the one-way
propagation delay as 200 milliseconds, then it can assume that the
lease is already 200 milliseconds old when it gets it). In addition,
it will take another 200 milliseconds to get a response back to the
server. So the client must send a lease renewal or send the copy
offload request to the cna_destination_server at least 400
milliseconds before the copy lease would expire. If the propagation
delay varies over the life of the lease (e.g., the client is on a
mobile host), the client will need to continuously subtract the
increase in propagation delay from the copy lease times.
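The arithmetic above can be summarized in the following non-normative
sketch; the variable names are hypothetical, and the one-way delay
estimate is assumed to come from the client's own measurements.

   #include <stdint.h>

   /*
    * Illustrative only: how long a client may wait before renewing a
    * copy lease.  The grant is already one propagation delay old when
    * it arrives, and the renewal takes another delay to reach the
    * source server.  A lease of 0 is infinite and never needs renewal.
    */
   static int64_t renew_within_ms(int64_t lease_ms,
                                  int64_t one_way_delay_ms)
   {
       if (lease_ms == 0)
           return -1;                      /* infinite lease */
       return lease_ms - 2 * one_way_delay_ms;
   }

With a 10 second lease and the 200 millisecond estimate used above,
the client would renew no later than 9.6 seconds after receiving the
grant, i.e., at least 400 milliseconds before the lease would expire.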
The server's copy lease period configuration should take into account
the network distance of the clients that will be accessing the
server's resources. It is expected that the lease period will take
into account the network propagation delays and other network delay
factors for the client population. Since the protocol does not allow
for an automatic method to determine an appropriate copy lease
period, the server's administrator may have to tune the copy lease
period.
A successful response will also contain a list of names, addresses,
and URLs called cnr_source_server, on which the source is willing to
accept connections from the destination. These might not be
reachable from the client and might be located on networks to which
the client has no connection.
If the client wishes to perform an inter-server copy, the client MUST
send a COPY_NOTIFY to the source server. Therefore, the source
server MUST support COPY_NOTIFY.
For a copy only involving one server (the source and destination are
on the same server), this operation is unnecessary.
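For illustration only, a client preparing an inter-server copy might
send the source server a COMPOUND along the lines of the following
sketch; the server name is hypothetical and the operations are
abbreviated.

   PUTFH        (source file filehandle)
   COPY_NOTIFY  cna_destination_server = "dst.example.com"

On success the client records cnr_lease_time and the
cnr_source_server list, which it later passes to the destination
server in the COPY operation's ca_source_server argument.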
The COPY_NOTIFY operation may fail for the following reasons (this is
a partial list):
NFS4ERR_MOVED: The file system which contains the source file is not
present on the source server. The client can determine the
correct location and reissue the operation with the correct
location.
NFS4ERR_NOTSUPP: The copy offload operation is not supported by the
NFS server receiving this request.
NFS4ERR_WRONGSEC: The security mechanism being used by the client
does not match the server's security policy.
4.3.3. Operation 62: COPY_REVOKE - Revoke a destination server's copy
privileges
4.3.3.1. ARGUMENT
struct COPY_REVOKE4args {
/* CURRENT_FH: source file */
netloc4 cra_destination_server;
};
4.3.3.2. RESULT
struct COPY_REVOKE4res {
nfsstat4 crr_status;
};
4.3.3.3. DESCRIPTION
This operation is used for an inter-server copy. A client sends this
operation in a COMPOUND request to the source server to revoke the
authorization of a destination server identified by
cra_destination_server from reading the file specified by CURRENT_FH
on behalf of the given user. If the cra_destination_server has already
begun copying the file, a successful return from this operation
indicates that further access will be prevented.
The cra_destination_server MUST be specified using the netloc4
network location format. The server is not required to resolve the
cra_destination_server address before completing this operation.
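As with COPY_NOTIFY, the request is a simple COMPOUND directed at the
source file; the sketch below is illustrative and the server name is
hypothetical.

   PUTFH        (source file filehandle)
   COPY_REVOKE  cra_destination_server = "dst.example.com"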
The COPY_REVOKE operation is useful in situations in which the source
server granted a very long or infinite lease on the destination
server's ability to read the source file and all copy operations on
the source file have been completed.
For a copy only involving one server (the source and destination are
on the same server), this operation is unnecessary.
If the server supports COPY_NOTIFY, the server is REQUIRED to support
the COPY_REVOKE operation.
The COPY_REVOKE operation may fail for the following reasons (this is
a partial list):
NFS4ERR_MOVED: The file system which contains the source file is not
present on the source server. The client can determine the
correct location and reissue the operation with the correct
location.
NFS4ERR_NOTSUPP: The copy offload operation is not supported by the
NFS server receiving this request.
4.3.4. Operation 59: COPY - Initiate a server-side copy
4.3.4.1. ARGUMENT
const COPY4_GUARDED = 0x00000001;
const COPY4_METADATA = 0x00000002;
struct COPY4args {
/* SAVED_FH: source file */
/* CURRENT_FH: destination file or */
/* directory */
offset4 ca_src_offset;
offset4 ca_dst_offset;
length4 ca_count;
uint32_t ca_flags;
component4 ca_destination;
netloc4 ca_source_server<>;
};
4.3.4.2. RESULT
union COPY4res switch (nfsstat4 cr_status) {
case NFS4_OK:
stateid4 cr_callback_id<1>;
default:
length4 cr_bytes_copied;
};
4.3.4.3. DESCRIPTION
The COPY operation is used for both intra- and inter-server copies.
In both cases, the COPY is always sent from the client to the
destination server of the file copy. The COPY operation requests
that a file be copied from the location specified by the SAVED_FH
value to the location specified by the combination of CURRENT_FH and
ca_destination.
The SAVED_FH must be a regular file. If SAVED_FH is not a regular
file, the operation MUST fail and return NFS4ERR_WRONG_TYPE.
In order to set SAVED_FH to the source file handle, the compound
procedure requesting the COPY will include a sub-sequence of
operations such as
PUTFH source-fh
SAVEFH
If the request is for a server-to-server copy, the source-fh is a
filehandle from the source server and the compound procedure is being
executed on the destination server. In this case, the source-fh is a
foreign filehandle on the server receiving the COPY request. If
either PUTFH or SAVEFH checked the validity of the filehandle, the
operation would likely fail and return NFS4ERR_STALE.
In order to avoid this problem, the minor version incorporating the
COPY operations will need to make a few small changes in the handling
of existing operations. If a server supports the server-to-server
COPY feature, a PUTFH followed by a SAVEFH MUST NOT return
NFS4ERR_STALE for either operation. These restrictions do not pose
substantial difficulties for servers. The CURRENT_FH and SAVED_FH
may be validated in the context of the operation referencing them and
an NFS4ERR_STALE error returned for an invalid file handle at that
point.
The CURRENT_FH and ca_destination together specify the destination of
the copy operation. If ca_destination is of 0 (zero) length, then
CURRENT_FH specifies the target file. In this case, CURRENT_FH MUST
be a regular file and not a directory. If ca_destination is not of 0
(zero) length, the ca_destination argument specifies the file name to
which the data will be copied within the directory identified by
CURRENT_FH. In this case, CURRENT_FH MUST be a directory and not a
regular file.
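The following non-normative sketch shows a COMPOUND performing an
intra-server, whole-file copy into a new file named "dst" within an
existing directory; the filehandles and file name are hypothetical.

   PUTFH   (source file filehandle)
   SAVEFH
   PUTFH   (destination directory filehandle)
   COPY    ca_src_offset = 0, ca_dst_offset = 0, ca_count = 0,
           ca_flags = COPY4_GUARDED, ca_destination = "dst",
           ca_source_server = { }

Because ca_source_server is empty, this is an intra-server copy;
because the guarded flag is set, the operation fails with
NFS4ERR_EXIST if "dst" already exists.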
If the file named by ca_destination does not exist and the operation
completes successfully, the file will be visible in the file system
namespace. If the file does not exist and the operation fails, the
file MAY be visible in the file system namespace depending on when
the failure occurs and on the implementation of the NFS server
receiving the COPY operation. If the ca_destination name cannot be
created in the destination file system (due to file name
restrictions, such as case or length), the operation MUST fail.
The ca_src_offset is the offset within the source file from which the
data will be read, the ca_dst_offset is the offset within the
destination file to which the data will be written, and the ca_count
is the number of bytes that will be copied. An offset of 0 (zero)
specifies the start of the file. A count of 0 (zero) requests that
all bytes from ca_src_offset through EOF be copied to the
destination. If concurrent modifications to the source file overlap
with the source file region being copied, the data copied may include
all, some, or none of the modifications. The client can use standard
NFS operations (e.g., OPEN with OPEN4_SHARE_DENY_WRITE or mandatory
byte range locks) to protect against concurrent modifications if the
client is concerned about this. If the source file's end of file is
being modified in parallel with a copy that specifies a count of 0
(zero) bytes, the amount of data copied is implementation dependent
(clients may guard against this case by specifying a non-zero count
value or preventing modification of the source file as mentioned
above).
If the source offset or the source offset plus count is greater than
or equal to the size of the source file, the operation will fail with
NFS4ERR_INVAL. The destination offset or destination offset plus
count may be greater than the size of the destination file. This
allows the client to issue parallel copies to implement
operations such as "cat file1 file2 file3 file4 > dest".
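As an illustrative sketch of that example, and assuming each of
file1, file2, and file3 is exactly 1000 bytes long, the client could
issue the following four COPY operations in parallel (each in its own
COMPOUND, with CURRENT_FH set to the already-created destination file
and SAVED_FH set to the indicated source), since the destination
ranges do not overlap:

   COPY  (SAVED_FH = file1)  ca_src_offset = 0, ca_dst_offset = 0,    ca_count = 0
   COPY  (SAVED_FH = file2)  ca_src_offset = 0, ca_dst_offset = 1000, ca_count = 0
   COPY  (SAVED_FH = file3)  ca_src_offset = 0, ca_dst_offset = 2000, ca_count = 0
   COPY  (SAVED_FH = file4)  ca_src_offset = 0, ca_dst_offset = 3000, ca_count = 0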
If the destination file is created as a result of this command, the
destination file's size will be equal to the number of bytes
successfully copied. If the destination file already existed, the
destination file's size may increase as a result of this operation
(e.g., if ca_dst_offset plus ca_count is greater than the
destination's initial size).
If the ca_source_server list is specified, then this is an inter-
server copy operation and the source file is on a remote server. The
client is expected to have previously issued a successful COPY_NOTIFY
request to the remote source server. The ca_source_server list
SHOULD be the same as the COPY_NOTIFY response's cnr_source_server
list. If the client includes the entries from the COPY_NOTIFY
response's cnr_source_server list in the ca_source_server list, the
source server can indicate a specific copy protocol for the
destination server to use by returning a URL, which specifies both a
protocol service and server name. Server-to-server copy protocol
considerations are described in Section 4.2.3 and Section 4.4.1.
The ca_flags argument allows the copy operation to be customized in
the following ways using the guarded flag (COPY4_GUARDED) and the
metadata flag (COPY4_METADATA).
[NOTE: Earlier versions of this document defined a
COPY4_SPACE_RESERVED flag for controlling space reservations on the
destination file. This flag has been removed with the expectation
that the space_reserve attribute defined in XXX_TDH_XXX will be
adopted.]
If the guarded flag is set and the destination exists on the server,
this operation will fail with NFS4ERR_EXIST.
If the guarded flag is not set and the destination exists on the
server, the behavior is implementation dependent.
If the metadata flag is set and the client is requesting a whole file
copy (i.e., ca_count is 0 (zero)), a subset of the destination file's
attributes MUST be the same as the source file's corresponding
attributes and a subset of the destination file's attributes SHOULD
be the same as the source file's corresponding attributes. The
attributes in the MUST and SHOULD copy subsets will be defined for
each NFS version.
For NFSv4.1, Table 1 and Table 2 list the REQUIRED and RECOMMENDED
attributes respectively. A "MUST" in the "Copy to destination file?"
column indicates that the attribute is part of the MUST copy set. A
"SHOULD" in the "Copy to destination file?" column indicates that the
attribute is part of the SHOULD copy set.
+--------------------+----+---------------------------+
| Name | Id | Copy to destination file? |
+--------------------+----+---------------------------+
| supported_attrs | 0 | no |
| type | 1 | MUST |
| fh_expire_type | 2 | no |
| change | 3 | SHOULD |
| size | 4 | MUST |
| link_support | 5 | no |
| symlink_support | 6 | no |
| named_attr | 7 | no |
| fsid | 8 | no |
| unique_handles | 9 | no |
| lease_time | 10 | no |
| rdattr_error | 11 | no |
| filehandle | 19 | no |
| suppattr_exclcreat | 75 | no |
+--------------------+----+---------------------------+
Table 1
+--------------------+----+---------------------------+
| Name | Id | Copy to destination file? |
+--------------------+----+---------------------------+
| acl | 12 | MUST |
| aclsupport | 13 | no |
| archive | 14 | no |
| cansettime | 15 | no |
| case_insensitive | 16 | no |
| case_preserving | 17 | no |
| change_policy | 60 | no |
| chown_restricted | 18 | MUST |
| dacl | 58 | MUST |
| dir_notif_delay | 56 | no |
| dirent_notif_delay | 57 | no |
| fileid | 20 | no |
| files_avail | 21 | no |
| files_free | 22 | no |
| files_total | 23 | no |
| fs_charset_cap | 76 | no |
| fs_layout_type | 62 | no |
| fs_locations | 24 | no |
| fs_locations_info | 67 | no |
| fs_status | 61 | no |
| hidden | 25 | MUST |
| homogeneous | 26 | no |
| layout_alignment | 66 | no |
| layout_blksize | 65 | no |
| layout_hint | 63 | no |
| layout_type | 64 | no |
| maxfilesize | 27 | no |
| maxlink | 28 | no |
| maxname | 29 | no |
| maxread | 30 | no |
| maxwrite | 31 | no |
| mdsthreshold | 68 | no |
| mimetype | 32 | MUST |
| mode | 33 | MUST |
| mode_set_masked | 74 | no |
| mounted_on_fileid | 55 | no |
| no_trunc | 34 | no |
| numlinks | 35 | no |
| owner | 36 | MUST |
| owner_group | 37 | MUST |
| quota_avail_hard | 38 | no |
| quota_avail_soft | 39 | no |
| quota_used | 40 | no |
| rawdev | 41 | no |
| retentevt_get | 71 | MUST |
| retentevt_set | 72 | no |
| retention_get | 69 | MUST |
| retention_hold | 73 | MUST |
| retention_set | 70 | no |
| sacl | 59 | MUST |
| space_avail | 42 | no |
| space_free | 43 | no |
| space_total | 44 | no |
| space_used | 45 | no |
| system | 46 | MUST |
| time_access | 47 | MUST |
| time_access_set | 48 | no |
| time_backup | 49 | no |
| time_create | 50 | MUST |
| time_delta | 51 | no |
| time_metadata | 52 | SHOULD |
| time_modify | 53 | MUST |
| time_modify_set | 54 | no |
+--------------------+----+---------------------------+
Table 2
[NOTE: The space_reserve attribute XXX_TDH_XXX will be in the MUST
set.]
[NOTE: The source file's attribute values will take precedence over
any attribute values inherited by the destination file.]
In the case of an inter-server copy or an intra-server copy between
file systems, the attributes supported for the source file and
destination file could be different. By definition, the REQUIRED
attributes will be supported in all cases. If the metadata flag is
set and the source file has a RECOMMENDED attribute that is not
supported for the destination file, the copy MUST fail with
NFS4ERR_ATTRNOTSUPP.
Any attribute supported by the destination server that is not set on
the source file SHOULD be left unset.
Metadata attributes not exposed via the NFS protocol SHOULD be copied
to the destination file where appropriate.
The destination file's named attributes are not duplicated from the
source file. After the copy process completes, the client MAY
attempt to duplicate named attributes using standard NFSv4
operations. However, the destination file's named attribute
capabilities MAY be different from the source file's named attribute
capabilities.
If the metadata flag is not set and the client is requesting a whole
file copy (i.e., ca_count is 0 (zero)), the destination file's
metadata is implementation dependent.
If the client is requesting a partial file copy (i.e., ca_count is not
0 (zero)), the client SHOULD NOT set the metadata flag and the server
MUST ignore the metadata flag.
If the operation does not result in an immediate failure, the server
will return NFS4_OK, and the CURRENT_FH will remain the destination's
filehandle.
If an immediate failure does occur, cr_bytes_copied will be set to
the number of bytes copied to the destination file before the error
occurred. The cr_bytes_copied value indicates the number of bytes
copied but not which specific bytes have been copied.
A return of NFS4_OK indicates that either the operation is complete
or the operation was initiated and a callback will be used to deliver
the final status of the operation.
If the cr_callback_id is returned, this indicates that the operation
was initiated and a CB_COPY callback will deliver the final results
of the operation. The cr_callback_id stateid is termed a copy
stateid in this context. The server is given the option of returning
the results in a callback because the data may require a relatively
long period of time to copy.
If no cr_callback_id is returned, the operation completed
synchronously and no callback will be issued by the server. The
completion status of the operation is indicated by cr_status.
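The two successful result forms can be pictured as follows; this is a
sketch of the decoded COPY4res, not additional protocol:

   /* asynchronous: a CB_COPY callback will report the final status */
   cr_status = NFS4_OK, cr_callback_id = { <copy stateid> }

   /* synchronous: the copy has already completed */
   cr_status = NFS4_OK, cr_callback_id = { }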
If the copy completes successfully, either synchronously or
asynchronously, the data copied from the source file to the
destination file MUST appear identical to the NFS client. However,
the NFS server's on disk representation of the data in the source
file and destination file MAY differ. For example, the NFS server
might encrypt, compress, deduplicate, or otherwise represent the on
disk data in the source and destination file differently.
In the event of a failure, the state of the destination file is
implementation dependent. The COPY operation may fail for the
following reasons (this is a partial list).
NFS4ERR_MOVED: The file system which contains the source file, or
the destination file or directory is not present. The client can
determine the correct location and reissue the operation with the
correct location.
NFS4ERR_NOTSUPP: The copy offload operation is not supported by the
NFS server receiving this request.
NFS4ERR_PARTNER_NOTSUPP: The remote server does not support the
server-to-server copy offload protocol.
NFS4ERR_PARTNER_NO_AUTH: The remote server does not authorize a
server-to-server copy offload operation. This may be due to the
client's failure to send the COPY_NOTIFY operation to the remote
server, the remote server receiving a server-to-server copy
offload request after the copy lease time expired, or for some
other permission problem.
NFS4ERR_FBIG: The copy operation would have caused the file to grow
beyond the server's limit.
NFS4ERR_NOTDIR: The CURRENT_FH is a file and ca_destination has non-
zero length.
NFS4ERR_WRONG_TYPE: The SAVED_FH is not a regular file.
NFS4ERR_ISDIR: The CURRENT_FH is a directory and ca_destination has
zero length.
NFS4ERR_INVAL: The source offset or offset plus count is greater
than or equal to the size of the source file.
NFS4ERR_DELAY: The server does not have the resources to perform the
copy operation at the current time. The client should retry the
operation sometime in the future.
NFS4ERR_METADATA_NOTSUPP: The destination file cannot support the
same metadata as the source file.
NFS4ERR_WRONGSEC: The security mechanism being used by the client
does not match the server's security policy.
4.3.5. Operation 60: COPY_ABORT - Cancel a server-side copy
4.3.5.1. ARGUMENT
struct COPY_ABORT4args {
/* CURRENT_FH: destination file */
stateid4 caa_stateid;
};
4.3.5.2. RESULT
struct COPY_ABORT4res {
nfsstat4 car_status;
};
4.3.5.3. DESCRIPTION
COPY_ABORT is used for both intra- and inter-server asynchronous
copies. The COPY_ABORT operation allows the client to cancel a
server-side copy operation that it initiated. This operation is sent
in a COMPOUND request from the client to the destination server.
This operation may be used to cancel a copy when the application that
requested the copy exits before the operation is completed or for
some other reason.
The request contains the filehandle and copy stateid cookies that act
as the context for the previously initiated copy operation.
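A sketch of the corresponding COMPOUND is shown below; the stateid is
the copy stateid that the earlier COPY returned in cr_callback_id.

   PUTFH       (destination file filehandle)
   COPY_ABORT  caa_stateid = <copy stateid from COPY>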
The result's car_status field indicates whether the cancel was
successful or not. A value of NFS4_OK indicates that the copy
operation was canceled and no callback will be issued by the server.
A copy operation that is successfully canceled may result in none,
some, or all of the data copied.
If the server supports asynchronous copies, the server is REQUIRED to
support the COPY_ABORT operation.
The COPY_ABORT operation may fail for the following reasons (this is
a partial list):
NFS4ERR_NOTSUPP: The abort operation is not supported by the NFS
server receiving this request.
NFS4ERR_RETRY: The abort failed, but a retry at some time in the
future MAY succeed.
NFS4ERR_COMPLETE_ALREADY: The abort failed, and a callback will
deliver the results of the copy operation.
NFS4ERR_SERVERFAULT: An error occurred on the server that does not
map to a specific error code.
4.3.6. Operation 63: COPY_STATUS - Poll for status of a server-side
copy
4.3.6.1. ARGUMENT
struct COPY_STATUS4args {
/* CURRENT_FH: destination file */
stateid4 csa_stateid;
};
4.3.6.2. RESULT
struct COPY_STATUS4resok {
length4 csr_bytes_copied;
nfsstat4 csr_complete<1>;
};
union COPY_STATUS4res switch (nfsstat4 csr_status) {
case NFS4_OK:
COPY_STATUS4resok resok4;
default:
void;
};
4.3.6.3. DESCRIPTION
COPY_STATUS is used for both intra- and inter-server asynchronous
copies. The COPY_STATUS operation allows the client to poll the
server to determine the status of an asynchronous copy operation.
This operation is sent by the client to the destination server.
If this operation is successful, the number of bytes copied are
returned to the client in the csr_bytes_copied field. The
csr_bytes_copied value indicates the number of bytes copied but not
which specific bytes have been copied.
If the optional csr_complete field is present, the copy has
completed. In this case the status value indicates the result of the
asynchronous copy operation. In all cases, the server will also
deliver the final results of the asynchronous copy in a CB_COPY
operation.
The failure of this operation does not indicate the result of the
asynchronous copy in any way.
If the server supports asynchronous copies, the server is REQUIRED to
support the COPY_STATUS operation.
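For illustration, a client tracking a long-running copy might
periodically send the following sketch of a COMPOUND until
csr_complete is present in the reply or the CB_COPY callback arrives.

   PUTFH        (destination file filehandle)
   COPY_STATUS  csa_stateid = <copy stateid from COPY>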
The COPY_STATUS operation may fail for the following reasons (this is
a partial list):
NFS4ERR_NOTSUPP: The copy status operation is not supported by the
NFS server receiving this request.
NFS4ERR_BAD_STATEID: The stateid is not valid (see Section 4.3.8
below).
NFS4ERR_EXPIRED: The stateid has expired (see Copy Offload Stateid
section below).
4.3.7. Operation 15: CB_COPY - Report results of a server-side copy
4.3.7.1. ARGUMENT
union copy_info4 switch (nfsstat4 cca_status) {
case NFS4_OK:
void;
default:
length4 cca_bytes_copied;
};
struct CB_COPY4args {
nfs_fh4 cca_fh;
stateid4 cca_stateid;
copy_info4 cca_copy_info;
};
4.3.7.2. RESULT
struct CB_COPY4res {
nfsstat4 ccr_status;
};
4.3.7.3. DESCRIPTION
CB_COPY is used for both intra- and inter-server asynchronous copies.
The CB_COPY callback informs the client of the result of an
asynchronous server-side copy. This operation is sent by the
destination server to the client in a CB_COMPOUND request. The copy
is identified by the filehandle and stateid arguments. The result is
indicated by the status field. If the copy failed, cca_bytes_copied
contains the number of bytes copied before the failure occurred. The
cca_bytes_copied value indicates the number of bytes copied but not
which specific bytes have been copied.
In the absence of an established backchannel, the server cannot
signal the completion of the COPY via a CB_COPY callback. The loss
of a callback channel would be indicated by the server setting the
SEQ4_STATUS_CB_PATH_DOWN flag in the sr_status_flags field of the
SEQUENCE operation. The client must re-establish the callback
channel to receive the status of the COPY operation. Prolonged loss
of the callback channel could result in the server dropping the COPY
operation state and invalidating the copy stateid.
If the client supports the COPY operation, the client is REQUIRED to
support the CB_COPY operation.
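A sketch of the callback reporting a fully successful asynchronous
copy follows; the filehandle and stateid identify the destination
file and the copy being reported, and the CB_COMPOUND framing is
abbreviated.

   CB_SEQUENCE
   CB_COPY  cca_fh = (destination file filehandle),
            cca_stateid = <copy stateid from COPY>,
            cca_copy_info = { cca_status = NFS4_OK }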
The CB_COPY operation may fail for the following reasons (this is a
partial list):
NFS4ERR_NOTSUPP: The copy offload operation is not supported by the
NFS client receiving this request.
4.3.8. Copy Offload Stateids
A server may perform a copy offload operation asynchronously. An
asynchronous copy is tracked using a copy offload stateid. Copy
offload stateids are included in the COPY, COPY_ABORT, COPY_STATUS,
and CB_COPY operations.

Section 8.2.4 of [2] specifies that stateids are valid until either
(A) the client or server restart or (B) the client returns the
resource.
skipping to change at page 38, line 23
};

ctap_shared_secret is a secret value the user principal generated
and was used to establish the copy_from_auth privilege with the
source principal.
copy_confirm_auth: A destination principal is confirming with the
source principal that it is authorized to copy data from the
source on behalf of the user principal. When the inter-server
copy protocol is NFSv4, or for that matter, any protocol capable
of being secured via RPCSEC_GSSv3 (i.e., any ONC RPC protocol),
this privilege is established before the file is copied from the
source to the destination.
struct copy_confirm_auth_priv {
/* equal to GSS_GetMIC() of cfap_shared_secret */
opaque ccap_shared_secret_mic<>;
/* the NFSv4 user name that the user principal maps to */
utf8str_mixed ccap_username;
/* equal to seq_num of rpc_gss_cred_vers_3_t */
unsigned int ccap_seq_num;
skipping to change at page 39, line 10
o An instance of copy_from_auth_priv is filled in with the shared
secret, the destination server, and the NFSv4 user id of the user
principal. It will be sent with an RPCSEC_GSS3_CREATE procedure,
and so cfap_seq_num is set to the seq_num of the credential of the
RPCSEC_GSS3_CREATE procedure. Because cfap_shared_secret is a
secret, after XDR encoding copy_from_auth_priv, GSS_Wrap() (with
privacy) is invoked on copy_from_auth_priv. The
RPCSEC_GSS3_CREATE procedure's arguments are:

struct {
rpc_gss3_gss_binding *compound_binding;
rpc_gss3_chan_binding *chan_binding_mic;
rpc_gss3_assertion assertions<>;
rpc_gss3_extension extensions<>;
} rpc_gss3_create_args;

The string "copy_from_auth" is placed in assertions[0].privs. The
output of GSS_Wrap() is placed in extensions[0].data. The field
extensions[0].critical is set to TRUE. The source server calls
GSS_Unwrap() on the privilege, and verifies that the seq_num
matches the credential. It then verifies that the NFSv4 user id
being asserted matches the source server's mapping of the user
principal. If it does, the privilege is established on the source
server as: <"copy_from_auth", user id, destination>. The
successful reply to RPCSEC_GSS3_CREATE has:

struct {
opaque handle<>;
rpc_gss3_chan_binding *chan_binding_mic;
rpc_gss3_assertion granted_assertions<>;
rpc_gss3_assertion server_assertions<>;
rpc_gss3_extension extensions<>;
} rpc_gss3_create_res;
The field "handle" is the RPCSEC_GSSv3 handle that the client will The field "handle" is the RPCSEC_GSSv3 handle that the client will
use on COPY_NOTIFY requests involving the source and destination use on COPY_NOTIFY requests involving the source and destination
server. granted_assertions[0].privs will be equal to server. granted_assertions[0].privs will be equal to
"copy_from_auth". The server will return a GSS_Wrap() of "copy_from_auth". The server will return a GSS_Wrap() of
copy_to_auth_priv. copy_to_auth_priv.
o An instance of copy_to_auth_priv is filled in with the shared o An instance of copy_to_auth_priv is filled in with the shared
secret, the source server, and the NFSv4 user id. It will be sent secret, the source server, and the NFSv4 user id. It will be sent
with an RPCSEC_GSS3_CREATE procedure, and so ctap_seq_num is set with an RPCSEC_GSS3_CREATE procedure, and so ctap_seq_num is set
to the seq_num of the credential of the RPCSEC_GSS3_CREATE to the seq_num of the credential of the RPCSEC_GSS3_CREATE
procedure. Because ctap_shared_secret is a secret, after XDR procedure. Because ctap_shared_secret is a secret, after XDR
encoding copy_to_auth_priv, GSS_Wrap() is invoked on encoding copy_to_auth_priv, GSS_Wrap() is invoked on
copy_to_auth_priv. The RPCSEC_GSS3_CREATE procedure's arguments copy_to_auth_priv. The RPCSEC_GSS3_CREATE procedure's arguments
are: are:
struct { struct {
rpc_gss3_gss_binding *compound_binding; rpc_gss3_gss_binding *compound_binding;
rpc_gss3_chan_binding *chan_binding_mic; rpc_gss3_chan_binding *chan_binding_mic;
rpc_gss3_assertion assertions<>; rpc_gss3_assertion assertions<>;
rpc_gss3_extension extensions<>; rpc_gss3_extension extensions<>;
} rpc_gss3_create_args; } rpc_gss3_create_args;
The string "copy_to_auth" is placed in assertions[0].privs. The The string "copy_to_auth" is placed in assertions[0].privs. The
output of GSS_Wrap() is placed in extensions[0].data. The field output of GSS_Wrap() is placed in extensions[0].data. The field
extensions[0].critical is set to TRUE. After unwrapping, extensions[0].critical is set to TRUE. After unwrapping,
verifying the seq_num, and the user principal to NFSv4 user ID verifying the seq_num, and the user principal to NFSv4 user ID
mapping, the destination establishes a privilege of mapping, the destination establishes a privilege of
<"copy_to_auth", user id, source>. The successful reply to <"copy_to_auth", user id, source>. The successful reply to
RPCSEC_GSS3_CREATE has: RPCSEC_GSS3_CREATE has:
struct { struct {
opaque handle<>; opaque handle<>;
rpc_gss3_chan_binding *chan_binding_mic; rpc_gss3_chan_binding *chan_binding_mic;
rpc_gss3_assertion granted_assertions<>; rpc_gss3_assertion granted_assertions<>;
rpc_gss3_assertion server_assertions<>; rpc_gss3_assertion server_assertions<>;
rpc_gss3_extension extensions<>; rpc_gss3_extension extensions<>;
} rpc_gss3_create_res; } rpc_gss3_create_res;
The field "handle" is the RPCSEC_GSSv3 handle that the client will The field "handle" is the RPCSEC_GSSv3 handle that the client will
use on COPY requests involving the source and destination server. use on COPY requests involving the source and destination server.
The field granted_assertions[0].privs will be equal to The field granted_assertions[0].privs will be equal to
"copy_to_auth". The server will return a GSS_Wrap() of "copy_to_auth". The server will return a GSS_Wrap() of
copy_to_auth_priv. copy_to_auth_priv.
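The population of the rpc_gss3_create_args fields described in the two
steps above can be illustrated with the following non-normative C
fragment.  The struct layouts shown are hypothetical simplifications of
the RPCSEC_GSSv3 types: only the fields named in this section (privs,
data, critical) appear, and the helper function is purely illustrative.

   #include <stdbool.h>
   #include <stddef.h>

   /* Hypothetical, simplified C mirrors of the RPCSEC_GSSv3 types. */
   struct rpc_gss3_assertion {
           const char *privs;
   };

   struct rpc_gss3_extension {
           bool                 critical;
           const unsigned char *data;      /* output of GSS_Wrap() */
           size_t               data_len;
   };

   /*
    * Fill in assertions[0] and extensions[0] for a "copy_from_auth"
    * (or "copy_to_auth") RPCSEC_GSS3_CREATE request.  wrapped_priv
    * is the GSS_Wrap() of the XDR-encoded privilege structure.
    */
   static void
   fill_create_args(struct rpc_gss3_assertion *assertion,
                    struct rpc_gss3_extension *extension,
                    const char *priv_name,
                    const unsigned char *wrapped_priv, size_t wrapped_len)
   {
           assertion->privs    = priv_name;   /* e.g., "copy_from_auth" */
           extension->critical = true;
           extension->data     = wrapped_priv;
           extension->data_len = wrapped_len;
   }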
4.4.1.2.2. Starting a Secure Inter-Server Copy 4.4.1.2.2. Starting a Secure Inter-Server Copy
When the client sends a COPY_NOTIFY request to the source server, it When the client sends a COPY_NOTIFY request to the source server, it
uses the privileged "copy_from_auth" RPCSEC_GSSv3 handle. uses the privileged "copy_from_auth" RPCSEC_GSSv3 handle.
cna_destination_server in COPY_NOTIFY MUST be the same as the name of cna_destination_server in COPY_NOTIFY MUST be the same as the name of
the destination server specified in copy_from_auth_priv. Otherwise, the destination server specified in copy_from_auth_priv. Otherwise,
COPY_NOTIFY will fail with NFS4ERR_ACCESS. The source server COPY_NOTIFY will fail with NFS4ERR_ACCESS. The source server
verifies that the privilege <"copy_from_auth", user id, destination> verifies that the privilege <"copy_from_auth", user id, destination>
exists, and annotates it with the source filehandle, if the user exists, and annotates it with the source filehandle, if the user
principal has read access to the source file, and if administrative principal has read access to the source file, and if administrative
policies give the user principal and the NFS client read access to policies give the user principal and the NFS client read access to
the source file (i.e. if the ACCESS operation would grant read the source file (i.e., if the ACCESS operation would grant read
access). Otherwise, COPY_NOTIFY will fail with NFS4ERR_ACCESS. access). Otherwise, COPY_NOTIFY will fail with NFS4ERR_ACCESS.
When the client sends a COPY request to the destination server, it When the client sends a COPY request to the destination server, it
uses the privileged "copy_to_auth" RPCSEC_GSSv3 handle. uses the privileged "copy_to_auth" RPCSEC_GSSv3 handle.
ca_source_server in COPY MUST be the same as the name of the source ca_source_server in COPY MUST be the same as the name of the source
server specified in copy_to_auth_priv. Otherwise, COPY will fail server specified in copy_to_auth_priv. Otherwise, COPY will fail
with NFS4ERR_ACCESS. The destination server verifies that the with NFS4ERR_ACCESS. The destination server verifies that the
privilege <"copy_to_auth", user id, source> exists, and annotates it privilege <"copy_to_auth", user id, source> exists, and annotates it
with the source and destination filehandles. If the client has with the source and destination filehandles. If the client has
failed to establish the "copy_to_auth" policy it will reject the failed to establish the "copy_to_auth" policy it will reject the
skipping to change at page 42, line 24 skipping to change at page 28, line 24
destination server will reject it with NFS4ERR_PARTNER_NO_AUTH. destination server will reject it with NFS4ERR_PARTNER_NO_AUTH.
4.4.1.2.4. Securing Non ONC RPC Server-to-Server Copy Protocols 4.4.1.2.4. Securing Non ONC RPC Server-to-Server Copy Protocols
If the destination will not be using ONC RPC to copy the data, then the If the destination will not be using ONC RPC to copy the data, then the
source and destination are using an unspecified copy protocol. The source and destination are using an unspecified copy protocol. The
destination could use the shared secret and the NFSv4 user id to destination could use the shared secret and the NFSv4 user id to
prove to the source server that the user principal has authorized the prove to the source server that the user principal has authorized the
copy. copy.
For protocols that authenticate user names with passwords (e.g. HTTP For protocols that authenticate user names with passwords (e.g., HTTP
[14] and FTP [15]), the NFSv4 user id could be used as the user name, [14] and FTP [15]), the NFSv4 user id could be used as the user name,
and an ASCII hexadecimal representation of the RPCSEC_GSSv3 shared and an ASCII hexadecimal representation of the RPCSEC_GSSv3 shared
secret could be used as the user password or as input into non- secret could be used as the user password or as input into non-
password authentication methods like CHAP [16]. password authentication methods like CHAP [16].
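One possible way to derive such an ASCII hexadecimal representation is
sketched below in C; this is a non-normative illustration, and the
function name and buffer conventions are not part of the protocol.

   #include <stddef.h>
   #include <stdio.h>

   /*
    * Render a binary shared secret as a NUL-terminated ASCII
    * hexadecimal string.  The caller supplies an output buffer of
    * at least (2 * len) + 1 bytes.
    */
   static void
   secret_to_hex(const unsigned char *secret, size_t len, char *out)
   {
           size_t i;

           for (i = 0; i < len; i++)
                   sprintf(out + (2 * i), "%02x", secret[i]);
           out[2 * len] = '\0';
   }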
4.4.1.3. Inter-Server Copy via ONC RPC but without RPCSEC_GSSv3 4.4.1.3. Inter-Server Copy via ONC RPC but without RPCSEC_GSSv3
ONC RPC security flavors other than RPCSEC_GSSv3 MAY be used with the ONC RPC security flavors other than RPCSEC_GSSv3 MAY be used with the
server-side copy offload operations described in this document. In server-side copy offload operations described in this document. In
particular, host-based ONC RPC security flavors such as AUTH_NONE and particular, host-based ONC RPC security flavors such as AUTH_NONE and
skipping to change at page 43, line 33 skipping to change at page 29, line 33
COMPOUND { PUTROOTFH, LOOKUP "_COPY" ; LOOKUP "10.11.78.56"; LOOKUP COMPOUND { PUTROOTFH, LOOKUP "_COPY" ; LOOKUP "10.11.78.56"; LOOKUP
"_FH" ; OPEN "0x12345" ; GETFH } "_FH" ; OPEN "0x12345" ; GETFH }
The source server will therefore know that these NFSv4.1 operations The source server will therefore know that these NFSv4.1 operations
are being issued by the destination server identified in the are being issued by the destination server identified in the
COPY_NOTIFY. COPY_NOTIFY.
4.4.1.4. Inter-Server Copy without ONC RPC and RPCSEC_GSSv3 4.4.1.4. Inter-Server Copy without ONC RPC and RPCSEC_GSSv3
The same techniques as Section 4.4.1.3, using unique URLs for each The same techniques as Section 4.4.1.3, using unique URLs for each
destination server, can be used for other protocols (e.g. HTTP [14] destination server, can be used for other protocols (e.g., HTTP [14]
and FTP [15]) as well. and FTP [15]) as well.
5. Application Data Block Support 5. Application Data Block Support
At the OS level, files are stored in disk blocks. Applications At the OS level, files are stored in disk blocks. Applications
are also free to impose structure on the data contained in a file and are also free to impose structure on the data contained in a file and
we can define an Application Data Block (ADB) to be such a structure. we can define an Application Data Block (ADB) to be such a structure.
From the application's viewpoint, it only wants to handle ADBs and From the application's viewpoint, it only wants to handle ADBs and
not raw bytes (see [17]). An ADB typically comprises two not raw bytes (see [17]). An ADB typically comprises two
sections: a header and data. The header describes the sections: a header and data. The header describes the
skipping to change at page 45, line 48 skipping to change at page 31, line 48
enum data_content4 { enum data_content4 {
NFS4_CONTENT_DATA = 0, NFS4_CONTENT_DATA = 0,
NFS4_CONTENT_APP_BLOCK = 1, NFS4_CONTENT_APP_BLOCK = 1,
NFS4_CONTENT_HOLE = 2 NFS4_CONTENT_HOLE = 2
}; };
New operations might need to differentiate between wanting to access New operations might need to differentiate between wanting to access
data versus an ADB. Also, future minor versions might want to data versus an ADB. Also, future minor versions might want to
introduce new data formats. This enumeration allows that to occur. introduce new data formats. This enumeration allows that to occur.
5.2. Operation 64: INITIALIZE 5.2. pNFS Considerations
The server has no concept of the structure imposed by the
application. It is only when the application writes to a section of
the file that order is imposed. In order to detect corruption even
before the application utilizes the file, the application will want
to initialize a range of ADBs. It uses the INITIALIZE operation to
do so.
5.2.1. ARGUMENT
/*
* We use data_content4 in case we wish to
* extend new types later. Note that we
* are explicitly disallowing data.
*/
union initialize_arg4 switch (data_content4 content) {
case NFS4_CONTENT_APP_BLOCK:
app_data_block4 ia_adb;
case NFS4_CONTENT_HOLE:
length4 ia_hole_length;
default:
void;
};
struct INITIALIZE4args {
/* CURRENT_FH: file */
stateid4 ia_stateid;
stable_how4 ia_stable;
offset4 ia_offset;
initialize_arg4 ia_data<>;
};
5.2.2. RESULT
struct INITIALIZE4resok {
count4 ir_count;
stable_how4 ir_committed;
verifier4 ir_writeverf;
data_content4 ir_sparse;
};
union INITIALIZE4res switch (nfsstat4 status) {
case NFS4_OK:
INITIALIZE4resok resok4;
default:
void;
};
5.2.3. DESCRIPTION
When the client invokes the INITIALIZE operation, it has two desired
results:
1. The structure described by the app_data_block4 be imposed on the
file.
2. The contents described by the app_data_block4 be sparse.
If the server supports the INITIALIZE operation, it still might not
support sparse files. So if it receives the INITIALIZE operation,
then it MUST populate the contents of the file with the initialized
ADBs. In other words, if the server supports INITIALIZE, then it
supports the concept of ADBs. [[Comment.1: Do we want to support an
asynchronous INITIALIZE? Do we have to? --TH]]
If the data was already initialized, there are two interesting
scenarios:
1. The data blocks are allocated.
2. Initializing in the middle of an existing ADB.
If the data blocks were already allocated, then the INITIALIZE is a
hole punch operation. If the server supports sparse files, then the
data blocks are to be deallocated. If not, then the data blocks are
to be rewritten in the indicated ADB format. [[Comment.2: Need to
document interaction between space reservation and hole punching?
--TH]]
Since the server has no knowledge of ADBs, it should not report
misaligned creation of ADBs. Even though it can detect them, it
cannot disallow them, as the application might be in the process of
changing the size of the ADBs. Thus the server must be prepared to
handle an INITIALIZE into an existing ADB.
This document does not mandate the manner in which the server stores
ADBs sparsely for a file. It does assume that if ADBs are stored
sparsely, then the server can detect when an INITIALIZE arrives that
will force a new ADB to start inside an existing ADB. For example,
assume that ADBi has an adb_block_size of 4k and that an INITIALIZE
starts 1k inside ADBi. The server should [[Comment.3: Need to flesh
this out. --TH]]
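A non-normative sketch of such a detection check follows, under the
assumption that the server records the starting offset and
adb_block_size of an existing ADB region; the function name is
illustrative only.

   #include <stdbool.h>
   #include <stdint.h>

   /*
    * Return true when an INITIALIZE starting at new_offset would
    * begin inside one of the blocks of an existing ADB region that
    * starts at adb_offset with blocks of adb_block_size bytes.
    * new_offset is assumed to fall within the existing region.
    */
   static bool
   initialize_splits_adb(uint64_t adb_offset, uint64_t adb_block_size,
                         uint64_t new_offset)
   {
           if (adb_block_size == 0 || new_offset <= adb_offset)
                   return false;
           return ((new_offset - adb_offset) % adb_block_size) != 0;
   }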
5.3. Operation 65: READ_PLUS
If the client sends a READ operation, it is explicitly stating that
it is not supporting sparse files. So if a READ occurs on a sparse
ADB, then the server must expand such ADBs to be raw bytes. If a
READ occurs in the middle of an ADB, the server can only send back
bytes starting from that offset.
Such an operation is inefficient for transfer of sparse sections of
the file. As such, READ is marked as OBSOLETE in NFSv4.2. Instead,
a client should issue READ_PLUS. Note that as the client has no a
priori knowledge of whether an ADB is present or not, it should
always use READ_PLUS.
5.3.1. ARGUMENT
struct READ_PLUS4args {
/* CURRENT_FH: file */
stateid4 rpa_stateid;
offset4 rpa_offset;
count4 rpa_count;
};
5.3.2. RESULT
union read_plus_content switch (data_content4 content) {
case NFS4_CONTENT_DATA:
opaque rpc_data<>;
case NFS4_CONTENT_APP_BLOCK:
app_data_block4 rpc_block;
case NFS4_CONTENT_HOLE:
length4 rpc_hole_length;
default:
void;
};
/*
* Allow a return of an array of contents.
*/
struct read_plus_res4 {
bool rpr_eof;
read_plus_content rpr_contents<>;
};
union READ_PLUS4res switch (nfsstat4 status) {
case NFS4_OK:
read_plus_res4 resok4;
default:
void;
};
5.3.3. DESCRIPTION
Over the given range, READ_PLUS will return all data and ADBs found
as an array of read_plus_content. It is possible to have consecutive
ADBs in the array either because different definitions of ADBs are
present or because the guard pattern changes.
Edge cases exist for ADBs which either begin before the rpa_offset
requested by the READ_PLUS or end after the rpa_count requested -
both of which may occur as not all applications which access the file
are aware of the main application imposing a format on the file
contents, e.g., tar, dd, cp, etc. READ_PLUS MUST retrieve whole
ADBs, but it need not retrieve an entire sequence of ADBs.
The server MUST return a whole ADB because if it does not, it must
expand that partial ADB into raw data before sending it to the client. E.g., if
an ADB had a block size of 64k and the READ_PLUS was for 128k
starting at an offset of 32k inside the ADB, then the first 32k would
be converted to data.
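The following non-normative C fragment sketches how a client might
flatten the array of read_plus_content results into a byte buffer,
copying data segments and zero filling holes.  The C types are
simplified, hypothetical mirrors of the XDR above (a real client would
use rpcgen-generated equivalents), and ADB segments are left
unexpanded since that requires the application-defined block format.

   #include <stddef.h>
   #include <stdint.h>
   #include <string.h>

   enum data_content4 {
           NFS4_CONTENT_DATA      = 0,
           NFS4_CONTENT_APP_BLOCK = 1,
           NFS4_CONTENT_HOLE      = 2
   };

   /* Simplified, hypothetical mirror of read_plus_content. */
   struct read_plus_content {
           enum data_content4   content;
           const unsigned char *data;        /* NFS4_CONTENT_DATA */
           size_t               data_len;
           uint64_t             hole_length; /* NFS4_CONTENT_HOLE */
   };

   /*
    * Copy data segments and zero fill holes into buf; ADB segments
    * are skipped.  Returns the number of bytes produced.
    */
   static size_t
   flatten_read_plus(const struct read_plus_content *c, size_t n,
                     unsigned char *buf, size_t buflen)
   {
           size_t off = 0;
           size_t i;

           for (i = 0; i < n && off < buflen; i++) {
                   size_t room = buflen - off;
                   size_t len = 0;

                   switch (c[i].content) {
                   case NFS4_CONTENT_DATA:
                           len = c[i].data_len < room ? c[i].data_len : room;
                           memcpy(buf + off, c[i].data, len);
                           break;
                   case NFS4_CONTENT_HOLE:
                           len = c[i].hole_length < room ?
                                 (size_t)c[i].hole_length : room;
                           memset(buf + off, 0, len);
                           break;
                   default:                  /* NFS4_CONTENT_APP_BLOCK */
                           break;
                   }
                   off += len;
           }
           return off;
   }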
5.4. pNFS Considerations
While this document does not mandate how sparse ADBs are recorded on While this document does not mandate how sparse ADBs are recorded on
the server, it does make the assumption that such information is not the server, it does make the assumption that such information is not
in the file. I.e., the information is metadata. As such, the in the file. I.e., the information is metadata. As such, the
INITIALIZE operation is defined to be not supported by the DS - it INITIALIZE operation is defined to be not supported by the DS - it
must be issued to the MDS. But since the client must not assume a must be issued to the MDS. But since the client must not assume a
priori whether a read is sparse or not, the READ_PLUS operation MUST priori whether a read is sparse or not, the READ_PLUS operation MUST
be supported by both the DS and the MDS. I.e., the client might be supported by both the DS and the MDS. I.e., the client might
impose on the MDS to asynchronously read the data from the DS. impose on the MDS to asynchronously read the data from the DS.
Furthermore, each DS MUST NOT report to a client either a sparse ADB Furthermore, each DS MUST NOT report to a client either a sparse ADB
or data which belongs to another DS. One implication of this or data which belongs to another DS. One implication of this
requirement is that the app_data_block4's adb_block_size MUST requirement is that the app_data_block4's adb_block_size MUST
either be the stripe width or the stripe width MUST be an even either be the stripe width or the stripe width MUST be an even
multiple of it. multiple of it.
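A non-normative check of this constraint, treating "even multiple" as
an exact multiple, might look like the following C fragment; the
function name is illustrative.

   #include <stdbool.h>
   #include <stdint.h>

   /*
    * The layout is acceptable when the stripe width equals
    * adb_block_size or is an exact multiple of it.
    */
   static bool
   adb_block_size_compatible(uint64_t adb_block_size,
                             uint64_t stripe_width)
   {
           if (adb_block_size == 0)
                   return false;
           return (stripe_width % adb_block_size) == 0;
   }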
The second implication here is that the DS must be able to use the The second implication here is that the DS must be able to use the
Control Protocol to determine from the MDS where the sparse ADBs Control Protocol to determine from the MDS where the sparse ADBs
occur. [[Comment.4: Need to discuss what happens if after the file occur. [[Comment.1: Need to discuss what happens if after the file
is being written to and an INITIALIZE occurs? --TH]] Perhaps instead is being written to and an INITIALIZE occurs? --TH]] Perhaps instead
of the DS pulling from the MDS, the MDS pushes to the DS? Thus an of the DS pulling from the MDS, the MDS pushes to the DS? Thus an
INITIALIZE causes a new push? [[Comment.5: Still need to consider INITIALIZE causes a new push? [[Comment.2: Still need to consider
race cases of the DS getting a WRITE and the MDS getting an race cases of the DS getting a WRITE and the MDS getting an
INITIALIZE. --TH]] INITIALIZE. --TH]]
5.5. An Example of Detecting Corruption 5.3. An Example of Detecting Corruption
In this section, we define an ADB format in which corruption can be In this section, we define an ADB format in which corruption can be
detected. Note that this is just one possible format and means to detected. Note that this is just one possible format and means to
detect corruption. detect corruption.
Consider a very basic implementation of an operating system's disk Consider a very basic implementation of an operating system's disk
blocks. A block is either data or it is an indirect block which blocks. A block is either data or it is an indirect block which
allows for files to be larger than one block. It is desired to be allows for files to be larger than one block. It is desired to be
able to initialize a block. Lastly, to quickly unlink a file, a able to initialize a block. Lastly, to quickly unlink a file, a
block can be marked invalid. The contents remain intact - which block can be marked invalid. The contents remain intact - which
skipping to change at page 52, line 11 skipping to change at page 34, line 5
minimum amount of data we incorporated into our generic framework. minimum amount of data we incorporated into our generic framework.
I.e., the guard pattern is sufficient in allowing applications to I.e., the guard pattern is sufficient in allowing applications to
design their own corruption detection. design their own corruption detection.
Finally, it is important to note that none of these corruption checks Finally, it is important to note that none of these corruption checks
occur in the transport layer. The server and client components are occur in the transport layer. The server and client components are
totally unaware of the file format and might report everything as totally unaware of the file format and might report everything as
being transferred correctly even in the case the application detects being transferred correctly even in the case the application detects
corruption. corruption.
5.6. Example of READ_PLUS 5.4. Example of READ_PLUS
The hypothetical application presented in Section 5.5 can be used to The hypothetical application presented in Section 5.3 can be used to
illustrate how READ_PLUS would return an array of results. A file is illustrate how READ_PLUS would return an array of results. A file is
created and initialized with 100 4k ADBs in the FREE state: created and initialized with 100 4k ADBs in the FREE state:
INITIALIZE {0, 4k, 100, 0, 0, 8, 0xfeedface} INITIALIZE {0, 4k, 100, 0, 0, 8, 0xfeedface}
Further, assume the application writes a single ADB at 16k, changing Further, assume the application writes a single ADB at 16k, changing
the guard pattern to 0xcafedead, we would then have in memory: the guard pattern to 0xcafedead, we would then have in memory:
0 -> (16k - 1) : 4k, 4, 0, 0, 8, 0xfeedface 0 -> (16k - 1) : 4k, 4, 0, 0, 8, 0xfeedface
16k -> (20k - 1) : 00 00 00 05 ca fe de ad XX XX ... XX XX 16k -> (20k - 1) : 00 00 00 05 ca fe de ad XX XX ... XX XX
20k -> 400k : 4k, 95, 0, 6, 0xfeedface 20k -> 400k : 4k, 95, 0, 6, 0xfeedface
And when the client did a READ_PLUS of 64k at the start of the file, And when the client did a READ_PLUS of 64k at the start of the file,
it would get back a result of an ADB, some data, and a final ADB: it would get back a result of an ADB, some data, and a final ADB:
ADB {0, 4, 0, 0, 8, 0xfeedface} ADB {0, 4, 0, 0, 8, 0xfeedface}
data 4k data 4k
ADB {20k, 4k, 59, 0, 6, 0xfeedface} ADB {20k, 4k, 59, 0, 6, 0xfeedface}
5.7. Zero Filled Holes 5.5. Zero Filled Holes
As applications are free to define the structure of an ADB, it is As applications are free to define the structure of an ADB, it is
trivial to define an ADB which supports zero filled holes. Such a trivial to define an ADB which supports zero filled holes. Such a
case would encompass the traditional definitions of a sparse file and case would encompass the traditional definitions of a sparse file and
hole punching. For example, to punch a 64k hole, starting at 100M, hole punching. For example, to punch a 64k hole, starting at 100M,
into an existing file which has no ADB structure: into an existing file which has no ADB structure:
INITIALIZE {100M, 64k, 1, NFS4_UINT64_MAX, INITIALIZE {100M, 64k, 1, NFS4_UINT64_MAX,
0, NFS4_UINT64_MAX, 0x0} 0, NFS4_UINT64_MAX, 0x0}
skipping to change at page 62, line 13 skipping to change at page 44, line 13
}; };
7.5.2. RESULT 7.5.2. RESULT
union read_plus_content switch (data_content4 content) { union read_plus_content switch (data_content4 content) {
case NFS4_CONTENT_DATA: case NFS4_CONTENT_DATA:
opaque rpc_data<>; opaque rpc_data<>;
case NFS4_CONTENT_APP_BLOCK: case NFS4_CONTENT_APP_BLOCK:
app_data_block4 rpc_block; app_data_block4 rpc_block;
case NFS4_CONTENT_HOLE: case NFS4_CONTENT_HOLE:
length4 rpc_hole_length; hole_info4 rpc_hole;
default: default:
void; void;
}; };
/* /*
* Allow a return of an array of contents. * Allow a return of an array of contents.
*/ */
struct read_plus_res4 { struct read_plus_res4 {
bool rpr_eof; bool rpr_eof;
read_plus_content rpr_contents<>; read_plus_content rpr_contents<>;
skipping to change at page 65, line 52 skipping to change at page 47, line 52
| Byte-Range | Contents | | Byte-Range | Contents |
+-------------+----------+ +-------------+----------+
| 0-15999 | Hole | | 0-15999 | Hole |
| 16K-31999 | Non-Zero | | 16K-31999 | Non-Zero |
| 32K-255999 | Hole | | 32K-255999 | Hole |
| 256K-287999 | Non-Zero | | 256K-287999 | Non-Zero |
| 288K-353999 | Hole | | 288K-353999 | Hole |
| 354K-417999 | Non-Zero | | 354K-417999 | Non-Zero |
+-------------+----------+ +-------------+----------+
Table 3 Table 1
Under the given circumstances, if a client was to read the file from Under the given circumstances, if a client was to read the file from
beginning to end with a max read size of 64K, the following will be beginning to end with a max read size of 64K, the following will be
the result. This assumes the client has already opened the file and the result. This assumes the client has already opened the file and
acquired a valid stateid and just needs to issue READ_PLUS requests. acquired a valid stateid and just needs to issue READ_PLUS requests.
1. READ_PLUS(s, 0, 64K) --> NFS_OK, readplusrestype4 = READ_OK, eof 1. READ_PLUS(s, 0, 64K) --> NFS_OK, readplusrestype4 = READ_OK, eof
= false, data<>[32K]. Return a short read, as the last half of = false, data<>[32K]. Return a short read, as the last half of
the request was all zeroes. Note that the first hole is read the request was all zeroes. Note that the first hole is read
back as all zeros as it is below the hole threshold. back as all zeros as it is below the hole threshold.
skipping to change at page 68, line 13 skipping to change at page 50, line 13
another and have it more closely resemble the original. another and have it more closely resemble the original.
7.7.5. Dense and Sparse pNFS File Layouts 7.7.5. Dense and Sparse pNFS File Layouts
The hole information returned from a data server must be understood The hole information returned from a data server must be understood
by pNFS clients using either Dense or Sparse file layout types. Does by pNFS clients using either Dense or Sparse file layout types. Does
the current READ_PLUS return value work for both layout types? Does the current READ_PLUS return value work for both layout types? Does
the data server know if it is using dense or sparse so that it can the data server know if it is using dense or sparse so that it can
return the correct hole_offset and hole_length values? return the correct hole_offset and hole_length values?
8. Security Considerations 8. Labeled NFS
9. IANA Considerations WARNING: Need to pull out the requirements.
This section uses terms that are defined in [21]. 8.1. Introduction
10. References Mandatory Access Control (MAC) systems have been mainstreamed in
modern operating systems such as Linux (R), FreeBSD (R), Solaris
(TM), and Windows Vista (R). MAC systems bind security attributes to
subjects (processes) and objects within a system. These attributes
are used with other information in the system to make access control
decisions.
10.1. Normative References Access control models such as Unix permissions or Access Control
Lists are commonly referred to as Discretionary Access Control (DAC)
models. These systems base their access decisions on user identity
and resource ownership. In contrast MAC models base their access
control decisions on the label on the subject (usually a process) and
the object it wishes to access. These labels may contain user
identity information but usually contain additional information. In
DAC systems users are free to specify the access rules for resources
that they own. MAC models base their security decisions on a system
wide policy established by an administrator or organization which the
users do not have the ability to override. DAC systems offer no real
protection against malicious or flawed software due to each program
running with the full permissions of the user executing it.
Conversely, MAC models can confine malicious or flawed software and
usually act at a finer granularity than their DAC counterparts.
People desire to use NFSv4 with these systems. A mechanism is
required to provide security attribute information to NFSv4 clients
and servers. This mechanism has the following requirements:
(1) Clients must be able to convey to the server the security
attribute of the subject making the access request. The server
may provide a mechanism to enforce MAC policy based on the
requesting subject's security attribute.
(2) Server must be able to store and retrieve the security attribute
of exported files as requested by the client.
(3) Server must provide a mechanism for notifying clients of
attribute changes of files on the server.
(4) Clients and Servers must be able to negotiate Label Formats and
Domains of Interpretation (DOI) and provide a mechanism to
translate between them as needed.
These four requirements are key to the system with only requirements
(2) and (3) requiring changes to NFSv4. The ability to convey the
security attribute of the subject as described in requirement (1)
falls upon the RPC layer to implement (see [6]). Requirement (4)
allows communication between different MAC implementations. The
management of label formats, DOIs, and the translation between them
does not require any support from NFSv4 on a protocol level and is
out of the scope of this document.
The first change necessary is to devise a method for transporting and
storing security label data on NFSv4 file objects. Security labels
have several semantics that are met by NFSv4 recommended attributes
such as the ability to set the label value upon object creation.
Access control on these attributes is done through a combination of
two mechanisms. As with other recommended attributes on file objects
the usual DAC checks (ACLs and permission bits) will be performed to
ensure that proper file ownership is enforced. In addition a MAC
system MAY be employed on the client, server, or both to enforce
additional policy on what subjects may modify security label
information.
The second change is to provide a method for the server to notify the
client that the attribute changed on an open file on the server. If
the file is closed, then during the open attempt, the client will
gather the new attribute value. The server MUST NOT communicate the
new value of the attribute; the client MUST query it. This
requirement stems from the need for the client to provide sufficient
access rights to the attribute.
The final change necessary is a modification to the RPC layer used in
NFSv4 in the form of a new version of the RPCSEC_GSS [7] framework.
In order for an NFSv4 server to apply MAC checks it must obtain
additional information from the client. Several methods were
explored for performing this and it was decided that the best
approach was to incorporate the ability to make security attribute
assertions through the RPC mechanism. RPCSEC_GSSv3 [6] outlines a
method to assert additional security information such as security
labels on gss context creation and have that data bound to all RPC
requests that make use of that context.
8.2. Definitions
Label Format Specifier (LFS): is an identifier used by the client to
establish the syntactic format of the security label and the
semantic meaning of its components. These specifiers exist in a
registry associated with documents describing the format and
semantics of the label.
Label Format Registry: is the IANA registry containing all
registered LFS along with references to the documents that
describe the syntactic format and semantics of the security label.
Policy Identifier (PI): is an optional part of the definition of a
Label Format Specifier which allows for clients and server to
identify specific security policies.
Domain of Interpretation (DOI): represents an administrative
security boundary, where all systems within the DOI have
semantically coherent labeling. That is, a security attribute
must always mean exactly the same thing anywhere within the DOI.
Object: is a passive resource within the system that we wish to be
protected. Objects can be entities such as files, directories,
pipes, sockets, and many other system resources relevant to the
protection of the system state.
Subject: A subject is an active entity usually a process which is
requesting access to an object.
Multi-Level Security (MLS): is a traditional model where objects are
given a sensitivity level (Unclassified, Secret, Top Secret, etc.)
and a category set [21].
8.3. MAC Security Attribute
MAC models base access decisions on security attributes bound to
subjects and objects. This information can range from a user
identity for an identity based MAC model, sensitivity levels for
Multi-level security, or a type for Type Enforcement. These models
base their decisions on different criteria but the semantics of the
security attribute remain the same. The semantics required by the
security attributes are listed below:
o Must provide flexibility with respect to MAC model.
o Must provide the ability to atomically set security information
upon object creation.
o Must provide the ability to enforce access control decisions both
on the client and the server.
o Must not expose an object to either the client or server name
space before its security information has been bound to it.
NFSv4 provides several options for implementing the security
attribute. The first option is to implement the security attribute
as a named attribute. Named attributes provide flexibility since
they are treated as an opaque field but lack a way to atomically set
the attribute on creation. In addition, named attributes themselves
are file system objects which need to be assigned a security
attribute. This raises the question of how to assign security
attributes to the file and directories used to hold the security
attribute for the file in question. The inability to atomically
assign the security attribute on file creation and the necessity to
assign security attributes to its sub-components makes named
attributes unacceptable as a method for storing security attributes.
The second option is to implement the security attribute as a
recommended attribute. These attributes have a fixed format and
semantics, which conflicts with the flexible nature of the security
attribute. To resolve this the security attribute consists of two
components. The first component is a LFS as defined in [22] to allow
for interoperability between MAC mechanisms. The second component is
an opaque field which is the actual security attribute data. To
allow for various MAC models NFSv4 should be used solely as a
transport mechanism for the security attribute. It is the
responsibility of the endpoints to consume the security attribute and
make access decisions based on their respective models. In addition,
creation of objects through OPEN and CREATE allows for the security
attribute to be specified upon creation. By providing an atomic
create and set operation for the security attribute it is possible to
enforce the second and fourth requirements. The recommended
attribute FATTR4_SEC_LABEL will be used to satisfy this requirement.
8.3.1. Interpreting FATTR4_SEC_LABEL
The XDR [11] necessary to implement Labeled NFSv4 is presented in
Figure 6:
const FATTR4_SEC_LABEL = 81;
typedef uint32_t policy4;
struct labelformat_spec4 {
policy4 lfs_lfs;
policy4 lfs_pi;
};
struct sec_label_attr_info {
labelformat_spec4 slai_lfs;
opaque slai_data<>;
};
Figure 6
The FATTR4_SEC_LABEL contains two components, with the
first component being an LFS. It serves to provide the receiving end
with the information necessary to translate the security attribute
into a form that is usable by the endpoint. Label Formats assigned
an LFS may optionally choose to include a Policy Identifier field to
allow for complex policy deployments. The LFS and Label Format
Registry are described in detail in [22]. The translation used to
interpret the security attribute is not specified as part of the
protocol as it may depend on various factors. The second component
is an opaque section which contains the data of the attribute. This
component is dependent on the MAC model to interpret and enforce.
In particular, it is the responsibility of the LFS specification to
define a maximum size for the opaque section, slai_data<>. When
creating or modifying a label for an object, the client needs to be
guaranteed that the server will accept a label that is sized
correctly. By both client and server being part of a specific MAC
model, the client will be aware of the size.
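The following non-normative C fragment sketches a size check a server
or client might apply before accepting a label; the table and its size
values are purely illustrative, as the real maxima come from the
individual LFS specifications.

   #include <stdbool.h>
   #include <stddef.h>
   #include <stdint.h>

   /* Illustrative per-LFS limits; real values come from the LFS
    * specifications registered in the Label Format Registry. */
   struct lfs_limit {
           uint32_t lfs;
           size_t   max_data_len;
   };

   static const struct lfs_limit lfs_limits[] = {
           { 1, 4096 },    /* hypothetical LFS, 4096-byte labels */
           { 2,  128 },    /* hypothetical LFS, 128-byte labels  */
   };

   static bool
   sec_label_size_ok(uint32_t lfs, size_t slai_data_len)
   {
           size_t i;

           for (i = 0; i < sizeof(lfs_limits) / sizeof(lfs_limits[0]); i++)
                   if (lfs_limits[i].lfs == lfs)
                           return slai_data_len <= lfs_limits[i].max_data_len;
           return false;           /* unknown LFS: refuse the label */
   }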
8.3.2. Delegations
In the event that a security attribute is changed on the server while
a client holds a delegation on the file, the client should follow the
existing protocol with respect to attribute changes. It should flush
all changes back to the server and relinquish the delegation.
8.3.3. Permission Checking
It is not feasible to enumerate all possible MAC models and even
levels of protection within a subset of these models. This means
that the NFSv4 client and servers cannot be expected to directly make
access control decisions based on the security attribute. Instead
NFSv4 should defer permission checking on this attribute to the host
system. These checks are performed in addition to existing DAC and
ACL checks outlined in the NFSv4 protocol. Section 8.7 gives a
specific example of how the security attribute is handled under a
particular MAC model.
8.3.4. Object Creation
When creating files in NFSv4 the OPEN and CREATE operations are used.
One of the parameters to these operations is an fattr4 structure
containing the attributes the file is to be created with. This
allows NFSv4 to atomically set the security attribute of files upon
creation. When a client is MAC aware it must always provide the
initial security attribute upon file creation. In the event that the
server is the only MAC aware entity in the system it should ignore
the security attribute specified by the client and instead make the
determination itself. A more in-depth explanation can be found in
Section 8.7.
8.3.5. Existing Objects
Note that under the MAC model, all objects must have labels.
Therefore, if an existing server is upgraded to include LNFS support,
then it is the responsibility of the security system to define the
behavior for existing objects. For example, if the security system
is LFS 0, which means the server just stores and returns labels, then
existing files should return labels which are set to an empty value.
8.3.6. Label Changes
As per the requirements, when a file's security label is modified,
the server must notify all clients which have the file opened of the
change in label. It does so with CB_ATTR_CHANGED. There are
preconditions to making an attribute change imposed by NFSv4 and the
security system might want to impose others. In the process of
meeting these preconditions, the server may choose to either serve the
request in whole or return NFS4ERR_DELAY to the SETATTR operation.
If there are open delegations on the file belonging to clients other
than the one making the label change, then the process described in
Section 8.3.2 must be followed.
As the server is always presented with the subject label from the
client, it does not necessarily need to communicate the fact that the
label has changed to the client. In the cases where the change
outright denies the client access, the client will be able to quickly
determine that there is a new label in effect. It is in cases where
the client may share the same object between multiple subjects or a
security system which is not strictly hierarchical that the
CB_ATTR_CHANGED callback is very useful. It allows the server to
inform the clients that the cached security attribute is now stale.
In the scenario presented in Section 8.8.5, the clients are smart and
the server has a very simple security system which just stores the
labels. In this system, the MAC label check always allows access,
regardless of the subject label.
The way in which MAC labels are enforced is by the smart client. So
if client A changes a security label on a file, then the server MUST
inform all clients that have the file opened that the label has
changed via CB_ATTR_CHANGED. Then the clients MUST retrieve the new
label and MUST enforce access via the new attribute values.
[[Comment.3: Describe a LFS of 0, which will be the means to indicate
such a deployment. In the current LFR, 0 is marked as reserved. If
we use it, then we define the default LFS to be used by a LNFS aware
server. I.e., it lets smart clients work together in the face of a
dumb server. Note that while supporting this system is optional, it
will make for a very good debugging mode during development. I.e.,
even if a server does not deploy with another security system, this
mode gets your foot in the door. --TH]]
8.4. Procedure 16: CB_ATTR_CHANGED - Notify Client that the File's
Attributes Changed
8.4.1. ARGUMENTS
struct CB_ATTR_CHANGED4args {
nfs_fh4 acca_fh;
bitmap4 acca_critical;
bitmap4 acca_info;
};
8.4.2. RESULTS
struct CB_ATTR_CHANGED4res {
nfsstat4 accr_status;
};
8.4.3. DESCRIPTION
The CB_ATTR_CHANGED callback operation is used by the server to
indicate to the client that the file's attributes have been modified
on the server. The server does not convey how the attributes have
changed, just that they have been modified. The server can inform
the client about both critical and informational attribute changes in
the bitmask arguments. The client SHOULD query the server about all
attributes set in acca_critical. For all changes reflected in
acca_info, the client can decide whether or not it wants to poll the
server.
The CB_ATTR_CHANGED callback operation with the FATTR4_SEC_LABEL set
in acca_critical is the method used by the server to indicate that
the MAC label for the file referenced by acca_fh has changed. In
many ways, the server does not care about the result returned by the
client.
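A non-normative sketch of the client-side check for a changed MAC
label is shown below.  The bitmap handling mirrors the standard NFSv4
bitmap4 encoding (attribute N is bit N % 32 of word N / 32); the
function names are illustrative, and the follow-up GETATTR of acca_fh
is outside the fragment.

   #include <stdbool.h>
   #include <stddef.h>
   #include <stdint.h>

   #define FATTR4_SEC_LABEL 81

   /* Test attribute number attr in a bitmap4 encoded as an array of
    * 32-bit words (attribute N is bit N % 32 of word N / 32). */
   static bool
   bitmap4_is_set(const uint32_t *bm, size_t nwords, unsigned int attr)
   {
           size_t word = attr / 32;

           if (word >= nwords)
                   return false;
           return (bm[word] >> (attr % 32)) & 1;
   }

   /*
    * When this returns true for acca_critical, the MAC label of the
    * file referenced by acca_fh has changed and the client SHOULD
    * issue a GETATTR to fetch the new FATTR4_SEC_LABEL value.
    */
   static bool
   sec_label_changed(const uint32_t *acca_critical, size_t nwords)
   {
           return bitmap4_is_set(acca_critical, nwords, FATTR4_SEC_LABEL);
   }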
8.5. pNFS Considerations
This section examines the issues in deploying LNFS in a pNFS
community of servers.
8.5.1. MAC Label Checks
The new FATTR4_SEC_LABEL attribute is metadata information and as
such the DS is not aware of the value contained on the MDS.
Fortunately, the NFSv4.1 protocol [2] already has provisions for
doing access level checks from the DS to the MDS. In order for the
DS to validate the subject label presented by the client, it SHOULD
utilize this mechanism.
If a file's FATTR4_SEC_LABEL is changed, then the MDS should utilize
CB_ATTR_CHANGED to inform the client of that fact. If the MDS is
maintaining
8.6. Discovery of Server LNFS Support
The server can easily determine that a client supports LNFS when it
queries for the FATTR4_SEC_LABEL attribute of an object. Note that it
cannot assume that the presence of RPCSEC_GSSv3 indicates LNFS
support. The client might need to discover which LFS the server
supports.
A server which supports LNFS MUST allow a client with any subject
label to retrieve the FATTR4_SEC_LABEL attribute for the root
filehandle, ROOTFH. The following compound must always succeed as
far as a MAC label check is concerned:
PUTROOTFH, GETATTR {FATTR4_SEC_LABEL}
Note that the server might have imposed a security flavor on the root
that precludes such access. I.e., if the server requires Kerberized
access and the client presents a compound with AUTH_SYS, then the
server is allowed to return NFS4ERR_WRONGSEC in this case. But if
the client presents a correct security flavor, then the server MUST
return the FATTR4_SEC_LABEL attribute with the supported LFS filled
in.
8.7. MAC Security NFS Modes of Operation
A system using Labeled NFS may operate in three modes. The first
mode provides the most protection and is called "full mode". In this
mode both the client and server implement a MAC model allowing each
end to make an access control decision. The remaining two modes are
variations on each other and are called "smart client" and "smart
server" modes. In these modes one end of the connection is not
implementing a MAC model and because of this these operating modes
offer less protection than full mode.
8.7.1. Full Mode
Full mode environments consist of MAC aware NFSv4 servers and clients
and may be composed of mixed MAC models and policies. The system
requires that both the client and server have an opportunity to
perform an access control check based on all relevant information
within the network. The file object security attribute is provided
using the mechanism described in Section 8.3. The security attribute
of the subject making the request is transported at the RPC layer
using the mechanism described in RPCSEC_GSSv3 [6].
8.7.1.1. Initial Labeling and Translation
The ability to create a file is an action that a MAC model may wish
to mediate. The client is given the responsibility to determine the
initial security attribute to be placed on a file. This allows the
client to make a decision as to the acceptable security attributes to
create a file with before sending the request to the server. Once
the server receives the creation request from the client it may
choose to evaluate if the security attribute is acceptable.
Security attributes on the client and server may vary based on MAC
model and policy. To handle this the security attribute field has an
LFS component. This component is a mechanism for the host to
identify the format and meaning of the opaque portion of the security
attribute. A full mode environment may contain hosts operating in
several different LFSs and DOIs. In this case a mechanism for
translating the opaque portion of the security attribute is needed.
The actual translation function will vary based on MAC model and
policy and is out of the scope of this document. If a translation is
unavailable for a given LFS and DOI then the request SHOULD be
denied. Another recourse is to allow the host to provide a fallback
mapping for unknown security attributes.
8.7.1.2. Policy Enforcement
In full mode access control decisions are made by both the clients
and servers. When a client makes a request it takes the security
attribute from the requesting process and makes an access control
decision based on that attribute and the security attribute of the
object it is trying to access. If the client denies that access an
RPC call to the server is never made. If however the access is
allowed the client will make a call to the NFS server.
When the server receives the request from the client it extracts the
security attribute conveyed in the RPC request. The server then uses
this security attribute and the attribute of the object the client is
trying to access to make an access control decision. If the server's
policy allows this access it will fulfill the client's request,
otherwise it will return NFS4ERR_ACCESS.
Implementations MAY validate security attributes supplied over the
network to ensure that they are within a set of attributes permitted
from a specific peer, and if not, reject them. Note that a system
may permit a different set of attributes to be accepted from each
peer. An example of this can be seen in Section 8.8.7.1.
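One non-normative way to express such a per-peer restriction is
sketched below in C; the peer table, the use of strings for labels,
and the function names are illustrative only.

   #include <stdbool.h>
   #include <stddef.h>
   #include <string.h>

   /* Illustrative per-peer policy: which subject labels a server is
    * willing to accept from a given peer. */
   struct peer_policy {
           const char  *peer;             /* e.g., client address */
           const char **allowed_labels;   /* NULL-terminated list  */
   };

   static bool
   label_permitted(const struct peer_policy *p, size_t npeers,
                   const char *peer, const char *label)
   {
           size_t i;
           const char **l;

           for (i = 0; i < npeers; i++) {
                   if (strcmp(p[i].peer, peer) != 0)
                           continue;
                   for (l = p[i].allowed_labels; *l != NULL; l++)
                           if (strcmp(*l, label) == 0)
                                   return true;
                   return false;
           }
           return false;   /* unknown peer: reject */
   }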
8.7.2. Smart Client Mode
Smart client environments consist of NFSv4 servers that are not MAC
aware but NFSv4 clients that are. Clients in this environment
may consist of groups implementing different MAC models and policies.
The system requires that all clients in the environment be
responsible for access control checks. Due to the amount of trust
placed in the clients this mode is only to be used in a trusted
environment.
8.7.2.1. Initial Labeling and Translation
Just like in full mode the client is responsible for determining the
initial label upon object creation. The server in smart client mode
does not implement a MAC model, however, it may provide the ability
to restrict the creation and labeling of objects with certain labels
based on different criteria as described in Section 8.7.1.2.
In a smart client environment a group of clients operate in a single
DOI. This removes the need for the clients to maintain a set of DOI
translations. Servers should provide a method to allow different
groups of clients to access the server at the same time. However, it
should not let two groups of clients operating in different DOIs
access the same files.
8.7.2.2. Policy Enforcement
In smart client mode access control decisions are made by the
clients. When a client accesses an object it obtains the security
attribute of the object from the server and combines it with the
security attribute of the process making the request to make an
access control decision. This check is in addition to the DAC checks
provided by NFSv4 so this may fail based on the DAC criteria even if
the MAC policy grants access. As the policy check is located on the
client an access control denial should take the form that is native
to the platform.
8.7.3. Smart Server Mode
Smart server environments consist of NFSv4 servers that are MAC aware
and one or more MAC unaware clients. The server is the only entity
enforcing policy, and may selectively provide standard NFS services
to clients based on their authentication credentials and/or
associated network attributes (e.g., IP address, network interface).
The level of trust and access extended to a client in this mode is
configuration-specific.
8.7.3.1. Initial Labeling and Translation
In smart server mode all labeling and access control decisions are
performed by the NFSv4 server. In this environment the NFSv4 clients
are not MAC aware so they cannot provide input into the access
control decision. This requires the server to determine the initial
labeling of objects. Normally the subject to use in this calculation
would originate from the client. Instead the NFSv4 server may choose
to assign the subject security attribute based on their
authentication credentials and/or associated network attributes
(e.g., IP address, network interface).
In smart server mode security attributes are contained solely within
the NFSv4 server. This means that all security attributes used in
the system remain within a single LFS and DOI. Since security
attributes will not cross DOIs or change format there is no need to
provide any translation functionality above that which is needed
internally by the MAC model.
8.7.3.2. Policy Enforcement
All access control decisions in smart server mode are made by the
server. The server will assign the subject a security attribute
based on some criteria (e.g., IP address, network interface). Using
the newly calculated security attribute and the security attribute of
the object being requested the MAC model makes the access control
check and returns NFS4ERR_ACCESS on a denial and NFS4_OK on success.
This check is done transparently to the client so if the MAC
permission check fails the client may be unaware of the reason for
the permission failure. When operating in this mode administrators
attempting to debug permission failures should be aware to check the
MAC policy running on the server in addition to the DAC settings.
8.8. Use Cases
MAC labeling is meant to allow NFSv4 to be deployed in site
configurable security schemes. The LFS and opaque data scheme allows
for flexibility to meet these different implementations. In this
section, we provide some examples of how NFSv4 could be deployed to
meet existing needs. This is not an exhaustive listing.
8.8.1. Full MAC labeling support for remotely mounted filesystems
In this case, we assume a local networked environment where the
servers and clients are under common administrative control. All
systems in this network have the same MAC implementation and
semantically identical MAC security labels for objects (i.e., labels
mean the same thing on different systems, even if the policies on
each system may differ to some extent). Clients will be able to
apply fine-grained MAC policy to objects accessed via NFS mounts, and
thus improve the overall consistency of MAC policy application within
this environment.
An example of this case would be where user home directories are
remotely mounted, and fine-grained MAC policy is implemented to
protect, for example, private user data from being read by malicious
web scripts running in the user's browser. With Labeled NFS, fine-
grained MAC labeling of the user's files will allow the local MAC
policy to be implemented and provide the desired protection.
8.8.2. MAC labeling of virtual machine images stored on the network
Virtualization is now a commonly implemented feature of modern
operating systems, and there is a need to ensure that MAC security
policy is able to protect virtualized resources. A common
implementation scheme involves storing virtualized guest filesystems
on a networked server, which are then mounted remotely by guests upon
instantiation. In this case, there is a need to ensure that the
local guest kernel is able to access fine-grained MAC labels on the
remotely mounted filesystem so that its MAC security policy can be
applied.
8.8.3. International Traffic in Arms Regulations (ITAR)
The International Traffic in Arms Regulations (ITAR) is put forth by
the United States Department of State, Directorate of Defense
Trade Controls. ITAR places strict requirements on the export and
thus access of defense articles and defense services. Organizations
that manage projects with articles and services deemed as within the
scope of ITAR must ensure the regulations are met. The regulations
require an assurance that ITAR information is accessed on a need-to-
know basis, thus requiring strict, centrally managed access controls
on items labeled as ITAR. Additionally, organizations must be able
to prove that the controls were adequately maintained and that
foreign nationals were not permitted access to these defense articles
or services. ITAR control applicability may be dynamic; information
may become subject to ITAR after creation (e.g., when the defense
implications of technology are recognized).
8.8.4. Legal Hold/eDiscovery
Increased cases of legal holds on electronic sources of information
(ESI) have resulted in organizations taking a pro-active approach to
reduce the scope and thus costs associated with these activities.
ESI Data Maps are increasing in use and require support in operating
systems to strictly manage access controls in the case of a legal
hold. The sizeable quantity of information involved in a legal
discovery request may preclude making a copy of the information to a
separate system that manages the legal hold on the copies; this
results in a need to enforce the legal hold on the original
information.
Organizations are taking steps to map out the sources of information
that are most likely to be placed under a legal hold; these efforts
result in ESI Data Maps. ESI Data Maps specify the Electronic Source
of Information and the requirements for sensitivity and criticality.
In the case of a legal hold, the ESI data map and labels can be used
to ensure the legal hold is properly enforced on the predetermined
set of information. An ESI data map narrows the scope of a legal
hold to the predetermined ESI. The information must then be
protected at a level of security of which the weight and
admissibility of that evidence may be proved in a court of law.
Current systems use application level controls and do not adequately
meet the requirements. Labels may be used in advance when an ESI
data map exercise is conducted with controls being applied at the
time of a hold or labels may be applied to data sets during an
eDiscovery exercise to ensure the data protections are adequate
during the legal hold period.
Note that this use case requires multi-attribute labels, as both
information sensitivity (e.g., to disclosure) and information
criticality (e.g., to continued business operations) need to be
captured.
8.8.5. Simple security label storage
In this case, a mixed and loosely administered network is assumed,
where nodes may be running a variety of operating systems with
different security mechanisms and security policies. It is desired
that network file servers be simply capable of storing and retrieving
MAC security labels for clients which use such labels. The Labeled
NFS protocol would be implemented here solely to enable transport of
MAC security labels across the network. It should be noted that in
such an environment, overall security cannot be as strongly enforced
as in the case described in Section 8.8.1, and that this scheme is
aimed at allowing MAC-capable
clients to function with local MAC security policy enabled rather
than perhaps disabling it entirely.
8.8.6. Diskless Linux
A number of popular operating system distributions depend on a
mandatory access control (MAC) model to implement a kernel-enforced
security policy. Typically, such models assign particular roles to
individual processes, which limit or permit performing certain
operations on a set of files, directories, sockets, or other objects.
While the enforcing of the policy is typically a matter for the
diskless NFS client itself, the filesystem objects in such models
will typically carry MAC labels that are used to define policy on
access. These policies may, for instance, describe privilege
transitions that cannot be replicated using standard NFS ACL based
models.
For instance on a SYSV compatible system, if the 'init' process
spawns a process that attempts to start the 'NetworkManager'
executable, there may be a policy that sets up a role transition if
the 'init' process and 'NetworkManager' file labels match a
particular rule. Without this role transition, the process may find
itself having insufficient privileges to perform its primary job of
configuring network interfaces.
In setups of this type, a lot of the policy targets (such as sockets
or privileged system calls) are entirely local to the client. The
use of RPCSEC_GSSv3 for enforcing compliance at the server level is
therefore of limited value. The ability to permanently label files
and have those labels read back by the client is, however, crucial to
the ability to enforce that policy.
8.8.7. Multi-Level Security
In a MLS system objects are generally assigned a sensitivity level
and a set of compartments. The sensitivity levels within the system
are given an order ranging from lowest to highest classification
level. Read access to an object is allowed when the sensitivity
level of the subject "dominates" the object it wants to access. This
means that the sensitivity level of the subject is equal to or higher
than that of the object it wishes to access and that its set of
compartments is
a super-set of the compartments on the object.
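The dominance test described above can be sketched, non-normatively,
as follows; the representation of compartments as a 64-bit mask is an
illustrative simplification.

   #include <stdbool.h>
   #include <stdint.h>

   struct mls_label {
           uint32_t level;          /* e.g., 0=U, 1=S, 2=TS    */
           uint64_t compartments;   /* one bit per compartment */
   };

   /*
    * The subject dominates the object when its sensitivity level is
    * at least that of the object and its compartment set is a
    * super-set of the object's.
    */
   static bool
   dominates(const struct mls_label *subject,
             const struct mls_label *object)
   {
           if (subject->level < object->level)
                   return false;
           /* every compartment of the object must also be present
            * in the subject's set */
           return (object->compartments & ~subject->compartments) == 0;
   }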
The rest of the section will just use sensitivity levels. In general
the example is a client that wishes to list the contents of a
directory. The system defines the sensitivity levels as Unclassified
(U), Secret (S), and Top Secret (TS). The directory to be searched
is labeled Top Secret which means access to read the directory will
only be granted if the subject making the request is also labeled Top
Secret.
8.8.7.1. Full Mode
In the first part of this example, a process on the client is running
at the Secret level. The process issues a readdir system call, which
enters the kernel. Before translating the readdir system call into a
request to the NFSv4 server, the host operating system will consult
the MAC module to see if the operation is allowed. Since the process
is operating at Secret and the directory to be accessed is labeled
Top Secret, the MAC module will deny the request and an error code is
returned to user space.
Consider a second case where, instead of running at Secret, the
process is running at Top Secret. In this case the sensitivity of
the process is equal to or greater than that of the directory, so the
MAC module will allow the request. Now the readdir is translated
into the necessary NFSv4 call to the server. For the RPC request,
the client uses the proper credential to assert to the server that
the process is running at Top Secret.
When the server receives the request, it extracts the security label
from the RPC session and retrieves the label on the directory. The
server then checks with its MAC module whether a Top Secret process
is allowed to read the contents of the Top Secret directory. Since
this is allowed by the policy, the server will return the appropriate
information to the client.
In this example the policies on the client and the server were the
same. In the event that they were running different policies, a
translation of the labels might be needed. In that case it could be
possible for a check to pass on the client and fail on the server.
The server may consider additional information when making its policy
decisions. For example, the server could determine that a certain
subnet is only cleared for data up to the Secret classification. If
that constraint were in place for the example above, the check would
still succeed on the client, but the server would fail the request
since the client is asserting a label that it is not able to use (Top
Secret on a Secret network).
8.8.7.2. Smart Client Mode
In smart client mode the example is identical to the first part of a
full mode operation. A process on the client labeled Secret wishes
to access a Top Secret directory. As in the full mode example, this
is denied since Secret does not dominate Top Secret. If the process
were operating at Top Secret, it would pass the local access control
check and the NFSv4 operation would proceed as in a normal NFSv4
environment.
8.8.7.3. Smart Server Mode
In smart server mode the client behaves as if it were in a normal
NFSv4 environment. Since the process on the client does not provide
a security attribute, the server must define a mechanism for labeling
all requests from a client. Assume that the server is using the same
criteria used in the full mode example. The server sees the request
as coming from a subnet that is a Secret network. The server
determines that all clients on that subnet will have their requests
labeled with Secret. Since the directory on the server is labeled
Top Secret and Secret does not dominate Top Secret, the server would
fail the request with NFS4ERR_ACCESS.
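The following Python sketch illustrates this style of server-side
labeling. It is non-normative; the subnet-to-label mapping, the
addresses, and the helper names are all assumptions made for
illustration.

import ipaddress

SUBNET_LABELS = {ipaddress.ip_network("192.0.2.0/24"): "Secret"}
LEVELS = {"Unclassified": 0, "Secret": 1, "Top Secret": 2}

def label_for_client(addr):
    # Assign a subject label based solely on the client's subnet.
    ip = ipaddress.ip_address(addr)
    for net, label in SUBNET_LABELS.items():
        if ip in net:
            return label
    return "Unclassified"

def readdir_allowed(client_addr, directory_label):
    # Grant access only if the assigned subject label dominates the
    # directory label; otherwise the server returns NFS4ERR_ACCESS.
    subject = label_for_client(client_addr)
    return LEVELS[subject] >= LEVELS[directory_label]

# Example from the text: a client on the Secret subnet reading a
# Top Secret directory is denied.
print(readdir_allowed("192.0.2.10", "Top Secret"))  # False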
8.9. Security Considerations
This entire document deals with security issues.
Depending on the level of protection the MAC system offers, there may
be a requirement to tightly bind the security attribute to the data.
When either the client is in Smart Client Mode or the server is in
Smart Server Mode, it is important to realize that the other side is
not enforcing MAC protections. Alternate methods might be in use to
handle the lack of MAC support, and care should be taken to identify
and mitigate threats from possible tampering outside of these
methods.
An example of this is that a smart server that modifies READDIR or
LOOKUP results based on the client's subject label might want to
always construct the same subject label for a client which does not
present one. This will prevent a non-LNFS client from mixing entries
in the directory cache.
9. Security Considerations
10. Operations: REQUIRED, RECOMMENDED, or OPTIONAL
The following tables summarize the operations of the NFSv4.2 protocol
and the corresponding designation of REQUIRED, RECOMMENDED, and
OPTIONAL to implement or MUST NOT implement. The designation of MUST
NOT implement is reserved for those operations that were defined in
either NFSv4.0 or NFSv4.1 and MUST NOT be implemented in NFSv4.2.
For the most part, the REQUIRED, RECOMMENDED, or OPTIONAL designation
for operations sent by the client is for the server implementation.
The client is generally required to implement the operations needed
for the operating environment for which it serves. For example, a
read-only NFSv4.2 client would have no need to implement the WRITE
operation and is not required to do so.
The REQUIRED or OPTIONAL designation for callback operations sent by
the server is for both the client and server. Generally, the client
has the option of creating the backchannel and sending the operations
on the fore channel that will be a catalyst for the server sending
callback operations. A partial exception is CB_RECALL_SLOT; the only
way the client can avoid supporting this operation is by not creating
a backchannel.
Since this is a summary of the operations and their designation,
there are subtleties that are not presented here. Therefore, if
there is a question of the requirements of implementation, the
operation descriptions themselves must be consulted along with other
relevant explanatory text within either this specification or that of
NFSv4.1 [2].
The abbreviations used in the second and third columns of the table
are defined as follows.
REQ REQUIRED to implement
REC RECOMMENDED to implement
OPT OPTIONAL to implement
MNI MUST NOT implement
For the NFSv4.2 features that are OPTIONAL, the operations that
support those features are OPTIONAL, and the server would return
NFS4ERR_NOTSUPP in response to the client's use of those operations.
If an OPTIONAL feature is supported, it is possible that a set of
operations related to the feature become REQUIRED to implement. The
third column of the table designates the feature(s) and whether the
operation is REQUIRED or OPTIONAL in the presence of support for the
feature.
The OPTIONAL features identified and their abbreviations are as
follows:
pNFS Parallel NFS
FDELG File Delegations
DDELG Directory Delegations
COPY Server Side Copy
ADB Application Data Blocks
Operations
+----------------------+--------------------+-----------------------+
| Operation | REQ, REC, OPT, or | Feature (REQ, REC, or |
| | MNI | OPT) |
+----------------------+--------------------+-----------------------+
| ACCESS | REQ | |
| BACKCHANNEL_CTL | REQ | |
| BIND_CONN_TO_SESSION | REQ | |
| CLOSE | REQ | |
| COMMIT | REQ | |
| COPY | OPT | COPY (REQ) |
| COPY_ABORT | OPT | COPY (REQ) |
| COPY_NOTIFY | OPT | COPY (REQ) |
| COPY_REVOKE | OPT | COPY (REQ) |
| COPY_STATUS | OPT | COPY (REQ) |
| CREATE | REQ | |
| CREATE_SESSION | REQ | |
| DELEGPURGE | OPT | FDELG (REQ) |
| DELEGRETURN | OPT | FDELG, DDELG, pNFS |
| | | (REQ) |
| DESTROY_CLIENTID | REQ | |
| DESTROY_SESSION | REQ | |
| EXCHANGE_ID | REQ | |
| FREE_STATEID | REQ | |
| GETATTR | REQ | |
| GETDEVICEINFO | OPT | pNFS (REQ) |
| GETDEVICELIST | OPT | pNFS (OPT) |
| GETFH | REQ | |
| INITIALIZE | OPT | ADB (REQ) |
| GET_DIR_DELEGATION | OPT | DDELG (REQ) |
| LAYOUTCOMMIT | OPT | pNFS (REQ) |
| LAYOUTGET | OPT | pNFS (REQ) |
| LAYOUTRETURN | OPT | pNFS (REQ) |
| LINK | OPT | |
| LOCK | REQ | |
| LOCKT | REQ | |
| LOCKU | REQ | |
| LOOKUP | REQ | |
| LOOKUPP | REQ | |
| NVERIFY | REQ | |
| OPEN | REQ | |
| OPENATTR | OPT | |
| OPEN_CONFIRM | MNI | |
| OPEN_DOWNGRADE | REQ | |
| PUTFH | REQ | |
| PUTPUBFH | REQ | |
| PUTROOTFH | REQ | |
| READ | OPT | |
| READDIR | REQ | |
| READLINK | OPT | |
| READ_PLUS | OPT | ADB (REQ) |
| RECLAIM_COMPLETE | REQ | |
| RELEASE_LOCKOWNER | MNI | |
| REMOVE | REQ | |
| RENAME | REQ | |
| RENEW | MNI | |
| RESTOREFH | REQ | |
| SAVEFH | REQ | |
| SECINFO | REQ | |
| SECINFO_NO_NAME | REC | pNFS file layout |
| | | (REQ) |
| SEQUENCE | REQ | |
| SETATTR | REQ | |
| SETCLIENTID | MNI | |
| SETCLIENTID_CONFIRM | MNI | |
| SET_SSV | REQ | |
| TEST_STATEID | REQ | |
| VERIFY | REQ | |
| WANT_DELEGATION | OPT | FDELG (OPT) |
| WRITE | REQ | |
+----------------------+--------------------+-----------------------+
Callback Operations
+-------------------------+-------------------+---------------------+
| Operation | REQ, REC, OPT, or | Feature (REQ, REC, |
| | MNI | or OPT) |
+-------------------------+-------------------+---------------------+
| CB_COPY | OPT | COPY (REQ) |
| CB_GETATTR | OPT | FDELG (REQ) |
| CB_LAYOUTRECALL | OPT | pNFS (REQ) |
| CB_NOTIFY | OPT | DDELG (REQ) |
| CB_NOTIFY_DEVICEID | OPT | pNFS (OPT) |
| CB_NOTIFY_LOCK | OPT | |
| CB_PUSH_DELEG | OPT | FDELG (OPT) |
| CB_RECALL | OPT | FDELG, DDELG, pNFS |
| | | (REQ) |
| CB_RECALL_ANY | OPT | FDELG, DDELG, pNFS |
| | | (REQ) |
| CB_RECALL_SLOT | REQ | |
| CB_RECALLABLE_OBJ_AVAIL | OPT | DDELG, pNFS (REQ) |
| CB_SEQUENCE | OPT | FDELG, DDELG, pNFS |
| | | (REQ) |
| CB_WANTS_CANCELLED | OPT | FDELG, DDELG, pNFS |
| | | (REQ) |
+-------------------------+-------------------+---------------------+
11. NFSv4.2 Operations
11.1. Operation 59: COPY - Initiate a server-side copy
11.1.1. ARGUMENT
const COPY4_GUARDED = 0x00000001;
const COPY4_METADATA = 0x00000002;
struct COPY4args {
/* SAVED_FH: source file */
/* CURRENT_FH: destination file or */
/* directory */
offset4 ca_src_offset;
offset4 ca_dst_offset;
length4 ca_count;
uint32_t ca_flags;
component4 ca_destination;
netloc4 ca_source_server<>;
};
11.1.2. RESULT
union COPY4res switch (nfsstat4 cr_status) {
case NFS4_OK:
stateid4 cr_callback_id<1>;
default:
length4 cr_bytes_copied;
};
11.1.3. DESCRIPTION
The COPY operation is used for both intra- and inter-server copies.
In both cases, the COPY is always sent from the client to the
destination server of the file copy. The COPY operation requests
that a file be copied from the location specified by the SAVED_FH
value to the location specified by the combination of CURRENT_FH and
ca_destination.
The SAVED_FH must be a regular file. If SAVED_FH is not a regular
file, the operation MUST fail and return NFS4ERR_WRONG_TYPE.
In order to set SAVED_FH to the source file handle, the compound
procedure requesting the COPY will include a sub-sequence of
operations such as
PUTFH source-fh
SAVEFH
If the request is for a server-to-server copy, the source-fh is a
filehandle from the source server and the compound procedure is being
executed on the destination server. In this case, the source-fh is a
foreign filehandle on the server receiving the COPY request. If
either PUTFH or SAVEFH checked the validity of the filehandle, the
operation would likely fail and return NFS4ERR_STALE.
In order to avoid this problem, the minor version incorporating the
COPY operations will need to make a few small changes in the handling
of existing operations. If a server supports the server-to-server
COPY feature, a PUTFH followed by a SAVEFH MUST NOT return
NFS4ERR_STALE for either operation. These restrictions do not pose
substantial difficulties for servers. The CURRENT_FH and SAVED_FH
may be validated in the context of the operation referencing them and
an NFS4ERR_STALE error returned for an invalid file handle at that
point.
The CURRENT_FH and ca_destination together specify the destination of
the copy operation. If ca_destination is of 0 (zero) length, then
CURRENT_FH specifies the target file. In this case, CURRENT_FH MUST
be a regular file and not a directory. If ca_destination is not of 0
(zero) length, the ca_destination argument specifies the file name to
which the data will be copied within the directory identified by
CURRENT_FH. In this case, CURRENT_FH MUST be a directory and not a
regular file.
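These two cases can be summarized with a small, non-normative Python
sketch; the boolean argument and the returned strings are
illustrative only and do not correspond to on-the-wire data.

def resolve_destination(current_fh_is_dir, ca_destination):
    if len(ca_destination) == 0:
        # CURRENT_FH itself is the target and must be a regular file.
        if current_fh_is_dir:
            return "NFS4ERR_ISDIR"
        return "copy to the file named by CURRENT_FH"
    # Otherwise CURRENT_FH must be a directory and ca_destination names
    # the file to create or overwrite within it.
    if not current_fh_is_dir:
        return "NFS4ERR_NOTDIR"
    return "copy to ca_destination within CURRENT_FH"

# Example: a zero-length ca_destination with a directory CURRENT_FH is
# an error.
print(resolve_destination(True, ""))  # NFS4ERR_ISDIR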
If the file named by ca_destination does not exist and the operation
completes successfully, the file will be visible in the file system
namespace. If the file does not exist and the operation fails, the
file MAY be visible in the file system namespace depending on when
the failure occurs and on the implementation of the NFS server
receiving the COPY operation. If the ca_destination name cannot be
created in the destination file system (due to file name
restrictions, such as case or length), the operation MUST fail.
The ca_src_offset is the offset within the source file from which the
data will be read, the ca_dst_offset is the offset within the
destination file to which the data will be written, and the ca_count
is the number of bytes that will be copied. An offset of 0 (zero)
specifies the start of the file. A count of 0 (zero) requests that
all bytes from ca_src_offset through EOF be copied to the
destination. If concurrent modifications to the source file overlap
with the source file region being copied, the data copied may include
all, some, or none of the modifications. The client can use standard
NFS operations (e.g., OPEN with OPEN4_SHARE_DENY_WRITE or mandatory
byte range locks) to protect against concurrent modifications if the
client is concerned about this. If the source file's end of file is
being modified in parallel with a copy that specifies a count of 0
(zero) bytes, the amount of data copied is implementation dependent
(clients may guard against this case by specifying a non-zero count
value or preventing modification of the source file as mentioned
above).
If the source offset or the source offset plus count is greater than
or equal to the size of the source file, the operation will fail with
NFS4ERR_INVAL. The destination offset or destination offset plus
count may be greater than the size of the destination file. This
allows for the client to issue parallel copies to implement
operations such as "cat file1 file2 file3 file4 > dest".
If the destination file is created as a result of this command, the
destination file's size will be equal to the number of bytes
successfully copied. If the destination file already existed, the
destination file's size may increase as a result of this operation
(e.g., if ca_dst_offset plus ca_count is greater than the
destination's initial size).
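As a hypothetical illustration of that size rule (the function and
argument names are assumptions, not protocol fields):

def resulting_dst_size(existing_size, ca_dst_offset, bytes_copied):
    # An existing destination grows only if the copied region extends
    # past its current end of file; otherwise the size is unchanged.
    return max(existing_size, ca_dst_offset + bytes_copied)

# Example: copying 4096 bytes at offset 8192 into a 1024-byte file
# leaves a 12288-byte destination.
print(resulting_dst_size(1024, 8192, 4096))  # 12288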
If the ca_source_server list is specified, then this is an inter-
server copy operation and the source file is on a remote server. The
client is expected to have previously issued a successful COPY_NOTIFY
request to the remote source server. The ca_source_server list
SHOULD be the same as the COPY_NOTIFY response's cnr_source_server
list. If the client includes the entries from the COPY_NOTIFY
response's cnr_source_server list in the ca_source_server list, the
source server can indicate a specific copy protocol for the
destination server to use by returning a URL, which specifies both a
protocol service and server name. Server-to-server copy protocol
considerations are described in Section 4.2.3 and Section 4.4.1.
The ca_flags argument allows the copy operation to be customized in
the following ways using the guarded flag (COPY4_GUARDED) and the
metadata flag (COPY4_METADATA).
If the guarded flag is set and the destination exists on the server,
this operation will fail with NFS4ERR_EXIST.
If the guarded flag is not set and the destination exists on the
server, the behavior is implementation dependent.
If the metadata flag is set and the client is requesting a whole file
copy (i.e., ca_count is 0 (zero)), a subset of the destination file's
attributes MUST be the same as the source file's corresponding
attributes and a subset of the destination file's attributes SHOULD
be the same as the source file's corresponding attributes. The
attributes in the MUST and SHOULD copy subsets will be defined for
each NFS version.
For NFSv4.1, Table 2 and Table 3 list the REQUIRED and RECOMMENDED
attributes respectively. A "MUST" in the "Copy to destination file?"
column indicates that the attribute is part of the MUST copy set. A
"SHOULD" in the "Copy to destination file?" column indicates that the
attribute is part of the SHOULD copy set.
+--------------------+----+---------------------------+
| Name | Id | Copy to destination file? |
+--------------------+----+---------------------------+
| supported_attrs | 0 | no |
| type | 1 | MUST |
| fh_expire_type | 2 | no |
| change | 3 | SHOULD |
| size | 4 | MUST |
| link_support | 5 | no |
| symlink_support | 6 | no |
| named_attr | 7 | no |
| fsid | 8 | no |
| unique_handles | 9 | no |
| lease_time | 10 | no |
| rdattr_error | 11 | no |
| filehandle | 19 | no |
| suppattr_exclcreat | 75 | no |
+--------------------+----+---------------------------+
Table 2
+--------------------+----+---------------------------+
| Name | Id | Copy to destination file? |
+--------------------+----+---------------------------+
| acl | 12 | MUST |
| aclsupport | 13 | no |
| archive | 14 | no |
| cansettime | 15 | no |
| case_insensitive | 16 | no |
| case_preserving | 17 | no |
| change_policy | 60 | no |
| chown_restricted | 18 | MUST |
| dacl | 58 | MUST |
| dir_notif_delay | 56 | no |
| dirent_notif_delay | 57 | no |
| fileid | 20 | no |
| files_avail | 21 | no |
| files_free | 22 | no |
| files_total | 23 | no |
| fs_charset_cap | 76 | no |
| fs_layout_type | 62 | no |
| fs_locations | 24 | no |
| fs_locations_info | 67 | no |
| fs_status | 61 | no |
| hidden | 25 | MUST |
| homogeneous | 26 | no |
| layout_alignment | 66 | no |
| layout_blksize | 65 | no |
| layout_hint | 63 | no |
| layout_type | 64 | no |
| maxfilesize | 27 | no |
| maxlink | 28 | no |
| maxname | 29 | no |
| maxread | 30 | no |
| maxwrite | 31 | no |
| max_hole_punch | 31 | no |
| mdsthreshold | 68 | no |
| mimetype | 32 | MUST |
| mode | 33 | MUST |
| mode_set_masked | 74 | no |
| mounted_on_fileid | 55 | no |
| no_trunc | 34 | no |
| numlinks | 35 | no |
| owner | 36 | MUST |
| owner_group | 37 | MUST |
| quota_avail_hard | 38 | no |
| quota_avail_soft | 39 | no |
| quota_used | 40 | no |
| rawdev | 41 | no |
| retentevt_get | 71 | MUST |
| retentevt_set | 72 | no |
| retention_get | 69 | MUST |
| retention_hold | 73 | MUST |
| retention_set | 70 | no |
| sacl | 59 | MUST |
| space_avail | 42 | no |
| space_free | 43 | no |
| space_freed | 78 | no |
| space_reserved | 77 | MUST |
| space_total | 44 | no |
| space_used | 45 | no |
| system | 46 | MUST |
| time_access | 47 | MUST |
| time_access_set | 48 | no |
| time_backup | 49 | no |
| time_create | 50 | MUST |
| time_delta | 51 | no |
| time_metadata | 52 | SHOULD |
| time_modify | 53 | MUST |
| time_modify_set | 54 | no |
+--------------------+----+---------------------------+
Table 3
[NOTE: The source file's attribute values will take precedence over
any attribute values inherited by the destination file.]
In the case of an inter-server copy or an intra-server copy between
file systems, the attributes supported for the source file and
destination file could be different. By definition, the REQUIRED
attributes will be supported in all cases. If the metadata flag is
set and the source file has a RECOMMENDED attribute that is not
supported for the destination file, the copy MUST fail with
NFS4ERR_ATTRNOTSUPP.
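A non-normative Python sketch of one way a destination server might
apply these rules when the metadata flag is set follows. The set
contents are abbreviated (see Table 2 and Table 3 for the full copy
sets), the failure condition is simplified to the copy-set
attributes, and all names are assumptions.

MUST_COPY = {"type", "size", "acl", "mode", "owner", "owner_group"}   # abbreviated
SHOULD_COPY = {"change", "time_metadata"}                             # abbreviated

def attrs_to_copy(source_attrs, dest_supported):
    # source_attrs: name -> value for attributes set on the source.
    # dest_supported: names of attributes the destination supports.
    copied = {}
    for name, value in source_attrs.items():
        if name not in MUST_COPY and name not in SHOULD_COPY:
            # Attributes outside the copy sets are left alone.
            continue
        if name not in dest_supported:
            # A copy-set attribute set on the source but unsupported on
            # the destination fails the whole copy.
            raise ValueError("NFS4ERR_ATTRNOTSUPP")
        copied[name] = value
    return copied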
Any attribute supported by the destination server that is not set on
the source file SHOULD be left unset.
Metadata attributes not exposed via the NFS protocol SHOULD be copied
to the destination file where appropriate.
The destination file's named attributes are not duplicated from the
source file. After the copy process completes, the client MAY
attempt to duplicate named attributes using standard NFSv4
operations. However, the destination file's named attribute
capabilities MAY be different from the source file's named attribute
capabilities.
If the metadata flag is not set and the client is requesting a whole
file copy (i.e., ca_count is 0 (zero)), the destination file's
metadata is implementation dependent.
If the client is requesting a partial file copy (i.e., ca_count is
not 0 (zero)), the client SHOULD NOT set the metadata flag and the
server MUST ignore the metadata flag.
If the operation does not result in an immediate failure, the server
will return NFS4_OK, and the CURRENT_FH will remain the destination's
filehandle.
If an immediate failure does occur, cr_bytes_copied will be set to
the number of bytes copied to the destination file before the error
occurred. The cr_bytes_copied value indicates the number of bytes
copied but not which specific bytes have been copied.
A return of NFS4_OK indicates that either the operation is complete
or the operation was initiated and a callback will be used to deliver
the final status of the operation.
If the cr_callback_id is returned, this indicates that the operation
was initiated and a CB_COPY callback will deliver the final results
of the operation. The cr_callback_id stateid is termed a copy
stateid in this context. The server is given the option of returning
the results in a callback because the data may require a relatively
long period of time to copy.
If no cr_callback_id is returned, the operation completed
synchronously and no callback will be issued by the server. The
completion status of the operation is indicated by cr_status.
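A client-side sketch of this synchronous/asynchronous distinction is
shown below. It uses the COPY4res field names, but the surrounding
helper code is an assumption, not part of the protocol.

from types import SimpleNamespace

NFS4_OK = 0  # NFSv4 status code for success

def handle_copy_result(res):
    if res.cr_status != NFS4_OK:
        # On immediate failure, cr_bytes_copied says how many bytes
        # were copied before the error, but not which bytes.
        return ("failed", res.cr_bytes_copied)
    if res.cr_callback_id:
        # Asynchronous copy: remember the copy stateid and wait for
        # the final status to arrive in a CB_COPY callback.
        return ("async", res.cr_callback_id[0])
    # Synchronous copy: the operation has already completed.
    return ("complete", None)

# Example: an asynchronous reply carrying a copy stateid.
print(handle_copy_result(SimpleNamespace(cr_status=NFS4_OK,
                                         cr_callback_id=["stateid-1"])))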
If the copy completes successfully, either synchronously or
asynchronously, the data copied from the source file to the
destination file MUST appear identical to the NFS client. However,
the NFS server's on disk representation of the data in the source
file and destination file MAY differ. For example, the NFS server
might encrypt, compress, deduplicate, or otherwise represent the on
disk data in the source and destination file differently.
In the event of a failure the state of the destination file is
implementation dependent. The COPY operation may fail for the
following reasons (this is a partial list).
NFS4ERR_MOVED: The file system which contains the source file, or
the destination file or directory is not present. The client can
determine the correct location and reissue the operation with the
correct location.
NFS4ERR_NOTSUPP: The copy offload operation is not supported by the
NFS server receiving this request.
NFS4ERR_PARTNER_NOTSUPP: The remote server does not support the
server-to-server copy offload protocol.
NFS4ERR_PARTNER_NO_AUTH: The remote server does not authorize a
server-to-server copy offload operation. This may be due to the
client's failure to send the COPY_NOTIFY operation to the remote
server, the remote server receiving a server-to-server copy
offload request after the copy lease time expired, or for some
other permission problem.
NFS4ERR_FBIG: The copy operation would have caused the file to grow
beyond the server's limit.
NFS4ERR_NOTDIR: The CURRENT_FH is a file and ca_destination has non-
zero length.
NFS4ERR_WRONG_TYPE: The SAVED_FH is not a regular file.
NFS4ERR_ISDIR: The CURRENT_FH is a directory and ca_destination has
zero length.
NFS4ERR_INVAL: The source offset or offset plus count are greater
than or equal to the size of the source file.
NFS4ERR_DELAY: The server does not have the resources to perform the
copy operation at the current time. The client should retry the
operation sometime in the future.
NFS4ERR_METADATA_NOTSUPP: The destination file cannot support the
same metadata as the source file.
NFS4ERR_WRONGSEC: The security mechanism being used by the client
does not match the server's security policy.
11.2. Operation 60: COPY_ABORT - Cancel a server-side copy
11.2.1. ARGUMENT
struct COPY_ABORT4args {
/* CURRENT_FH: destination file */
stateid4 caa_stateid;
};
11.2.2. RESULT
struct COPY_ABORT4res {
nfsstat4 car_status;
};
11.2.3. DESCRIPTION
COPY_ABORT is used for both intra- and inter-server asynchronous
copies. The COPY_ABORT operation allows the client to cancel a
server-side copy operation that it initiated. This operation is sent
in a COMPOUND request from the client to the destination server.
This operation may be used to cancel a copy when the application that
requested the copy exits before the operation is completed or for
some other reason.
The request contains the filehandle and copy stateid cookies that act
as the context for the previously initiated copy operation.
The result's car_status field indicates whether the cancel was
successful or not. A value of NFS4_OK indicates that the copy
operation was canceled and no callback will be issued by the server.
A copy operation that is successfully canceled may result in none,
some, or all of the data having been copied.
If the server supports asynchronous copies, the server is REQUIRED to
support the COPY_ABORT operation.
The COPY_ABORT operation may fail for the following reasons (this is
a partial list):
NFS4ERR_NOTSUPP: The abort operation is not supported by the NFS
server receiving this request.
NFS4ERR_RETRY: The abort failed, but a retry at some time in the
future MAY succeed.
NFS4ERR_COMPLETE_ALREADY: The abort failed, and a callback will
deliver the results of the copy operation.
NFS4ERR_SERVERFAULT: An error occurred on the server that does not
map to a specific error code.
11.3. Operation 61: COPY_NOTIFY - Notify a source server of a future
copy
11.3.1. ARGUMENT
struct COPY_NOTIFY4args {
/* CURRENT_FH: source file */
netloc4 cna_destination_server;
};
11.3.2. RESULT
struct COPY_NOTIFY4resok {
nfstime4 cnr_lease_time;
netloc4 cnr_source_server<>;
};
union COPY_NOTIFY4res switch (nfsstat4 cnr_status) {
case NFS4_OK:
COPY_NOTIFY4resok resok4;
default:
void;
};
11.3.3. DESCRIPTION
This operation is used for an inter-server copy. A client sends this
operation in a COMPOUND request to the source server to authorize a
destination server identified by cna_destination_server to read the
file specified by CURRENT_FH on behalf of the given user.
The cna_destination_server MUST be specified using the netloc4
network location format. The server is not required to resolve the
cna_destination_server address before completing this operation.
If this operation succeeds, the source server will allow the
cna_destination_server to copy the specified file on behalf of the
given user. If COPY_NOTIFY succeeds, the destination server is
granted permission to read the file as long as both of the following
conditions are met:
o The destination server begins reading the source file before the
cnr_lease_time expires. If the cnr_lease_time expires while the
destination server is still reading the source file, the
destination server is allowed to finish reading the file.
o The client has not issued a COPY_REVOKE for the same combination
of user, filehandle, and destination server.
The cnr_lease_time is chosen by the source server. A cnr_lease_time
of 0 (zero) indicates an infinite lease. To renew the copy lease
time the client should resend the same copy notification request to
the source server.
To avoid the need for synchronized clocks, copy lease times are
granted by the server as a time delta. However, there is a
requirement that the client and server clocks do not drift
excessively over the duration of the lease. There is also the issue
of propagation delay across the network which could easily be several
hundred milliseconds as well as the possibility that requests will be
lost and need to be retransmitted.
To take propagation delay into account, the client should subtract it
from copy lease times (e.g., if the client estimates the one-way
propagation delay as 200 milliseconds, then it can assume that the
lease is already 200 milliseconds old when it gets it). In addition,
it will take another 200 milliseconds to get a response back to the
server. So the client must send a lease renewal or send the copy
offload request to the cna_destination_server at least 400
milliseconds before the copy lease would expire. If the propagation
delay varies over the life of the lease (e.g., the client is on a
mobile host), the client will need to continuously subtract the
increase in propagation delay from the copy lease times.
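The arithmetic above can be summarized in a short, non-normative
sketch; the function and variable names are assumptions, not protocol
fields.

def renewal_deadline(cnr_lease_time, one_way_delay):
    # Latest safe time, measured from receipt of the COPY_NOTIFY
    # reply, at which to send the renewal or the copy offload request:
    # the lease is already one_way_delay old on arrival, and the next
    # request needs another one_way_delay to reach the source server.
    return cnr_lease_time - 2 * one_way_delay

# Example: a 90 second lease with an estimated 0.2 second one-way
# delay should be renewed within 89.6 seconds of receiving the reply.
print(renewal_deadline(90.0, 0.2))  # 89.6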
The server's copy lease period configuration should take into account
the network distance of the clients that will be accessing the
server's resources. It is expected that the lease period will take
into account the network propagation delays and other network delay
factors for the client population. Since the protocol does not allow
for an automatic method to determine an appropriate copy lease
period, the server's administrator may have to tune the copy lease
period.
A successful response will also contain a list of names, addresses,
and URLs called cnr_source_server, on which the source is willing to
accept connections from the destination. These might not be
reachable from the client and might be located on networks to which
the client has no connection.
If the client wishes to perform an inter-server copy, the client MUST
send a COPY_NOTIFY to the source server. Therefore, the source
server MUST support COPY_NOTIFY.
For a copy only involving one server (the source and destination are
on the same server), this operation is unnecessary.
The COPY_NOTIFY operation may fail for the following reasons (this is
a partial list):
NFS4ERR_MOVED: The file system which contains the source file is not
present on the source server. The client can determine the
correct location and reissue the operation with the correct
location.
NFS4ERR_NOTSUPP: The copy offload operation is not supported by the
NFS server receiving this request.
NFS4ERR_WRONGSEC: The security mechanism being used by the client
does not match the server's security policy.
11.4. Operation 62: COPY_REVOKE - Revoke a destination server's copy
privileges
11.4.1. ARGUMENT
struct COPY_REVOKE4args {
/* CURRENT_FH: source file */
netloc4 cra_destination_server;
};
11.4.2. RESULT
struct COPY_REVOKE4res {
nfsstat4 crr_status;
};
11.4.3. DESCRIPTION
This operation is used for an inter-server copy. A client sends this
operation in a COMPOUND request to the source server to revoke the
authorization of a destination server identified by
cra_destination_server from reading the file specified by CURRENT_FH
on behalf of the given user. If the cra_destination_server has already
begun copying the file, a successful return from this operation
indicates that further access will be prevented.
The cra_destination_server MUST be specified using the netloc4
network location format. The server is not required to resolve the
cra_destination_server address before completing this operation.
The COPY_REVOKE operation is useful in situations in which the source
server granted a very long or infinite lease on the destination
server's ability to read the source file and all copy operations on
the source file have been completed.
For a copy only involving one server (the source and destination are
on the same server), this operation is unnecessary.
If the server supports COPY_NOTIFY, the server is REQUIRED to support
the COPY_REVOKE operation.
The COPY_REVOKE operation may fail for the following reasons (this is
a partial list):
NFS4ERR_MOVED: The file system which contains the source file is not
present on the source server. The client can determine the
correct location and reissue the operation with the correct
location.
NFS4ERR_NOTSUPP: The copy offload operation is not supported by the
NFS server receiving this request.
11.5. Operation 63: COPY_STATUS - Poll for status of a server-side copy
11.5.1. ARGUMENT
struct COPY_STATUS4args {
/* CURRENT_FH: destination file */
stateid4 csa_stateid;
};
11.5.2. RESULT
struct COPY_STATUS4resok {
length4 csr_bytes_copied;
nfsstat4 csr_complete<1>;
};
union COPY_STATUS4res switch (nfsstat4 csr_status) {
case NFS4_OK:
COPY_STATUS4resok resok4;
default:
void;
};
11.5.3. DESCRIPTION
COPY_STATUS is used for both intra- and inter-server asynchronous
copies. The COPY_STATUS operation allows the client to poll the
server to determine the status of an asynchronous copy operation.
This operation is sent by the client to the destination server.
If this operation is successful, the number of bytes copied is
returned to the client in the csr_bytes_copied field. The
csr_bytes_copied value indicates the number of bytes copied but not
which specific bytes have been copied.
If the optional csr_complete field is present, the copy has
completed. In this case the status value indicates the result of the
asynchronous copy operation. In all cases, the server will also
deliver the final results of the asynchronous copy in a CB_COPY
operation.
The failure of this operation does not indicate the result of the
asynchronous copy in any way.
If the server supports asynchronous copies, the server is REQUIRED to
support the COPY_STATUS operation.
The COPY_STATUS operation may fail for the following reasons (this is
a partial list):
NFS4ERR_NOTSUPP: The copy status operation is not supported by the
NFS server receiving this request.
NFS4ERR_BAD_STATEID: The stateid is not valid (see Section 4.3.2
below).
NFS4ERR_EXPIRED: The stateid has expired (see Copy Offload Stateid
section below).
11.6. Operation 64: INITIALIZE
The server has no concept of the structure imposed by the
application. It is only when the application writes to a section of
the file that order gets imposed. In order to detect corruption even
before the application utilizes the file, the application will want
to initialize a range of ADBs. It uses the INITIALIZE operation to
do so.
11.6.1. ARGUMENT
/*
* We use data_content4 in case we wish to
* extend new types later. Note that we
* are explicitly disallowing data.
*/
union initialize_arg4 switch (data_content4 content) {
case NFS4_CONTENT_APP_BLOCK:
app_data_block4 ia_adb;
case NFS4_CONTENT_HOLE:
hole_info4 ia_hole;
default:
void;
};
struct INITIALIZE4args {
/* CURRENT_FH: file */
stateid4 ia_stateid;
stable_how4 ia_stable;
initialize_arg4 ia_data<>;
};
11.6.2. RESULT
struct INITIALIZE4resok {
count4 ir_count;
stable_how4 ir_committed;
verifier4 ir_writeverf;
data_content4 ir_sparse;
};
union INITIALIZE4res switch (nfsstat4 status) {
case NFS4_OK:
INITIALIZE4resok resok4;
default:
void;
};
11.6.3. DESCRIPTION
When the client invokes the INITIALIZE operation, it has two desired
results:
1. The structure described by the app_data_block4 be imposed on the
file.
2. The contents described by the app_data_block4 be sparse.
If the server supports the INITIALIZE operation, it still might not
support sparse files. So if it receives the INITIALIZE operation,
then it MUST populate the contents of the file with the initialized
ADBs. In other words, if the server supports INITIALIZE, then it
supports the concept of ADBs. [[Comment.4: Do we want to support an
asynchronous INITIALIZE? Do we have to? --TH]]
If the data was already initialized, there are two interesting
scenarios:
1. The data blocks are allocated.
2. Initializing in the middle of an existing ADB.
If the data blocks were already allocated, then the INITIALIZE is a
hole punch operation. If the server supports sparse files, then the
data blocks are to be deallocated. If not, then the data blocks are
to be rewritten in the indicated ADB format. [[Comment.5: Need to
document interaction between space reservation and hole punching?
--TH]]
Since the server has no knowledge of ADBs, it should not report
misaligned creation of ADBs. Even if it can detect them, it cannot
disallow them, as the application might be in the process of changing
the size of the ADBs. Thus the server must be prepared to handle an
INITIALIZE into an existing ADB.
This document does not mandate the manner in which the server stores
ADBs sparsely for a file. It does assume that if ADBs are stored
sparsely, then the server can detect when an INITIALIZE arrives that
will force a new ADB to start inside an existing ADB. For example,
assume that ADBi has an adb_block_size of 4k and that an INITIALIZE
starts 1k inside ADBi. The server should [[Comment.6: Need to flesh
this out. --TH]]
11.7. Modification to Operation 42: EXCHANGE_ID - Instantiate Client ID
11.7.1. ARGUMENT
/* new */
const EXCHGID4_FLAG_SUPP_FENCE_OPS = 0x00000004;
11.7.2. RESULT
Unchanged
11.7.3. MOTIVATION
Enterprise applications require guarantees that an operation has
either aborted or completed. NFSv4.1 provides this guarantee as long
as the session is alive: simply send a SEQUENCE operation on the same
slot with a new sequence number, and the successful return of
SEQUENCE indicates the previous operation has completed. However, if
the session is lost, there is no way to know when any in progress
operations have aborted or completed. In hindsight, the NFSv4.1
specification should have mandated that DESTROY_SESSION abort/
complete all outstanding operations.
11.7.4. DESCRIPTION
A client SHOULD request the EXCHGID4_FLAG_SUPP_FENCE_OPS capability
when it sends an EXCHANGE_ID operation. The server SHOULD set this
capability in the EXCHANGE_ID reply whether the client requests it or
not. If the client ID is created with this capability then the
following will occur:
o The server will not reply to DESTROY_SESSION until all operations
in progress are completed or aborted.
o The server will not reply to subsequent EXCHANGE_ID invoked on the
same Client Owner with a new verifier until all operations in
progress on the Client ID's session are completed or aborted.
o When DESTROY_CLIENTID is invoked, any sessions (both idle and
non-idle), opens, locks, delegations, layouts, and/or wants
(Section 18.49) associated with the client ID are removed.
Pending operations will be completed or aborted before the
sessions, opens, locks, delegations, layouts, and/or wants are
deleted.
o The NFS server SHOULD support client ID trunking, and if it does
and the EXCHGID4_FLAG_SUPP_FENCE_OPS capability is enabled, then a
session ID created on one node of the storage cluster MUST be
destroyable via DESTROY_SESSION. In addition, DESTROY_CLIENTID
and an EXCHANGE_ID with a new verifier affect all sessions
regardless of which node the sessions were created on.
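As a trivial, non-normative illustration of how a client might
request the capability (the flags variable is an assumption standing
in for the EXCHANGE_ID4args flags word):

EXCHGID4_FLAG_SUPP_FENCE_OPS = 0x00000004

def request_fence_ops(eia_flags):
    # The client SHOULD set this flag; the server SHOULD grant the
    # capability in its reply even if the client did not request it.
    return eia_flags | EXCHGID4_FLAG_SUPP_FENCE_OPS

print(hex(request_fence_ops(0x0)))  # 0x4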
11.8. Operation 65: READ_PLUS
If the client sends a READ operation, it is explicitly stating that
it is not supporting sparse files. So if a READ occurs on a sparse
ADB, then the server must expand such ADBs to be raw bytes. If a
READ occurs in the middle of an ADB, the server can only send back
bytes starting from that offset.
Such an operation is inefficient for transfer of sparse sections of
the file. As such, READ is marked as OBSOLETE in NFSv4.2. Instead,
a client should issue READ_PLUS. Note that as the client has no a
priori knowledge of whether an ADB is present or not, it should
always use READ_PLUS.
11.8.1. ARGUMENT
struct READ_PLUS4args {
/* CURRENT_FH: file */
stateid4 rpa_stateid;
offset4 rpa_offset;
count4 rpa_count;
};
11.8.2. RESULT
union read_plus_content switch (data_content4 content) {
case NFS4_CONTENT_DATA:
opaque rpc_data<>;
case NFS4_CONTENT_APP_BLOCK:
app_data_block4 rpc_block;
case NFS4_CONTENT_HOLE:
hole_info4 rpc_hole;
default:
void;
};
/*
* Allow a return of an array of contents.
*/
struct read_plus_res4 {
bool rpr_eof;
read_plus_content rpr_contents<>;
};
union READ_PLUS4res switch (nfsstat4 status) {
case NFS4_OK:
read_plus_res4 resok4;
default:
void;
};
11.8.3. DESCRIPTION
Over the given range, READ_PLUS will return all data and ADBs found
as an array of read_plus_content. It is possible to have consecutive
ADBs in the array as either different definitions of ADBs are present
or as the guard pattern changes.
Edge cases exist for ADBs which either begin before the rpa_offset
requested by the READ_PLUS or end after the rpa_count requested -
both of which may occur as not all applications which access the file
are aware of the main application imposing a format on the file
contents, i.e., tar, dd, cp, etc. READ_PLUS MUST retrieve whole
ADBs, but it need not retrieve an entire sequence of ADBs.
The server MUST return whole ADBs; where the requested range only
partially covers an ADB, it must expand that partial ADB into raw
data before it sends it to the client. E.g., if an ADB had a block
size of 64k and the READ_PLUS was for 128k starting at an offset of
32k inside the ADB, then the first 32k would be converted to data.
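A non-normative Python sketch of that alignment rule, with assumed
variable names (adb_offset is the start of the ADB containing
rpa_offset):

def data_prefix_length(rpa_offset, rpa_count, adb_offset, adb_block_size):
    # Bytes from rpa_offset up to the end of the partially covered ADB
    # are returned as raw data, capped at the requested count.
    end_of_adb = adb_offset + adb_block_size
    return min(max(end_of_adb - rpa_offset, 0), rpa_count)

# Example from the text: a 128k READ_PLUS starting 32k into a 64k ADB
# that begins at offset 0 returns the first 32k as data.
print(data_prefix_length(32 * 1024, 128 * 1024, 0, 64 * 1024))  # 32768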
12. NFSv4.2 Callback Operations
12.1. Operation 15: CB_COPY - Report results of a server-side copy
12.1.1. ARGUMENT
union copy_info4 switch (nfsstat4 cca_status) {
case NFS4_OK:
void;
default:
length4 cca_bytes_copied;
};
struct CB_COPY4args {
nfs_fh4 cca_fh;
stateid4 cca_stateid;
copy_info4 cca_copy_info;
};
12.1.2. RESULT
struct CB_COPY4res {
nfsstat4 ccr_status;
};
12.1.3. DESCRIPTION
CB_COPY is used for both intra- and inter-server asynchronous copies.
The CB_COPY callback informs the client of the result of an
asynchronous server-side copy. This operation is sent by the
destination server to the client in a CB_COMPOUND request. The copy
is identified by the filehandle and stateid arguments. The result is
indicated by the status field. If the copy failed, cca_bytes_copied
contains the number of bytes copied before the failure occurred. The
cca_bytes_copied value indicates the number of bytes copied but not
which specific bytes have been copied.
In the absence of an established backchannel, the server cannot
signal the completion of the COPY via a CB_COPY callback. The loss
of a callback channel would be indicated by the server setting the
SEQ4_STATUS_CB_PATH_DOWN flag in the sr_status_flags field of the
SEQUENCE operation. The client must re-establish the callback
channel to receive the status of the COPY operation. Prolonged loss
of the callback channel could result in the server dropping the COPY
operation state and invalidating the copy stateid.
If the client supports the COPY operation, the client is REQUIRED to
support the CB_COPY operation.
The CB_COPY operation may fail for the following reasons (this is a
partial list):
NFS4ERR_NOTSUPP: The copy offload operation is not supported by the
NFS client receiving this request.
13. IANA Considerations
This section uses terms that are defined in [23].
14. References
14.1. Normative References
[1] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", March 1997.
[2] Shepler, S., Eisler, M., and D. Noveck, "Network File System (NFS) Version 4 Minor Version 1 Protocol", RFC 5661, January 2010.
[3] Haynes, T., "Network File System (NFS) Version 4 Minor Version 2 External Data Representation Standard (XDR) Description", March 2011.
[4] Halevy, B., Welch, B., and J. Zelenka, "Object-Based Parallel NFS (pNFS) Operations", RFC 5664, January 2010.
[5] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform Resource Identifier (URI): Generic Syntax", STD 66, RFC 3986, January 2005.
[6] Haynes, T. and N. Williams, "Remote Procedure Call (RPC) Security Version 3", draft-williams-rpcsecgssv3 (work in progress), 2011.
[7] Eisler, M., Chiu, A., and L. Ling, "RPCSEC_GSS Protocol Specification", RFC 2203, September 1997.
[8] Shepler, S., Eisler, M., and D. Noveck, "Network File System (NFS) Version 4 Minor Version 1 External Data Representation Standard (XDR) Description", RFC 5662, January 2010.
[9] Black, D., Glasgow, J., and S. Fridella, "Parallel NFS (pNFS) Block/Volume Layout", RFC 5663, January 2010.
14.2. Informative References
[10] Haynes, T. and D. Noveck, "Network File System (NFS) version 4 Protocol", draft-ietf-nfsv4-rfc3530bis-09 (Work In Progress), March 2011.
[11] Eisler, M., "XDR: External Data Representation Standard", RFC 4506, May 2006.
[12] Lentini, J., Everhart, C., Ellard, D., Tewari, R., and M. Naik, "NSDB Protocol for Federated Filesystems",
skipping to change at page 69, line 51 skipping to change at page 90, line 48
11g Release 1 (11.1)", August 2008. 11g Release 1 (11.1)", August 2008.
[19] McDougall, R. and J. Mauro, "Section 11.4.3, Detecting Memory Corruption of Solaris Internals", 2007.
[20] Bairavasundaram, L., Goodson, G., Schroeder, B., Arpaci-Dusseau, A., and R. Arpaci-Dusseau, "An Analysis of Data Corruption in the Storage Stack", Proceedings of the 6th USENIX Symposium on File and Storage Technologies (FAST '08), 2008.
[21] "Section 46.6. Multi-Level Security (MLS) of Deployment Guide: Deployment, configuration and administration of Red Hat Enterprise Linux 5, Edition 6", 2011.
[22] Quigley, D. and J. Lu, "Registry Specification for MAC Security Label Formats", draft-quigley-label-format-registry (work in progress), 2011.
[23] Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 5226, May 2008.
[24] Nowicki, B., "NFS: Network File System Protocol specification", RFC 1094, March 1989.
[25] Callaghan, B., Pawlowski, B., and P. Staubach, "NFS Version 3 Protocol Specification", RFC 1813, June 1995.
[26] Srinivasan, R., "Binding Protocols for ONC RPC Version 2", RFC 1833, August 1995.
[27] Eisler, M., "NFS Version 2 and Version 3 Security Issues and the NFS Protocol's Use of RPCSEC_GSS and Kerberos V5", RFC 2623, June 1999.
[28] Callaghan, B., "NFS URL Scheme", RFC 2224, October 1997.
[29] Shepler, S., "NFS Version 4 Design Considerations", RFC 2624, June 1999.
[30] Reynolds, J., "Assigned Numbers: RFC 1700 is Replaced by an On-line Database", RFC 3232, January 2002.
[31] Linn, J., "The Kerberos Version 5 GSS-API Mechanism", RFC 1964, June 1996.
[32] Shepler, S., Callaghan, B., Robinson, D., Thurlow, R., Beame, C., Eisler, M., and D. Noveck, "Network File System (NFS) version 4 Protocol", RFC 3530, April 2003.
Appendix A. Acknowledgments
For the pNFS Access Permissions Check, the original draft was by Sorin Faibish, David Black, Mike Eisler, and Jason Glasgow. The work was influenced by discussions with Benny Halevy and Bruce Fields. A review was done by Tom Haynes.
skipping to change at page 71, line 11 skipping to change at page 92, line 18
Williams.
For the NFS space reservation operations, the original draft was by Mike Eisler, James Lentini, Manjunath Shankararao, and Rahul Iyer.
For the sparse file support, the original draft was by Dean Hildebrand and Marc Eshel. Valuable input and advice was received from Sorin Faibish, Bruce Fields, Benny Halevy, Trond Myklebust, and Richard Scheffenegger.
For Labeled NFS, the original draft was by David Quigley, James
Morris, Jarret Lu, and Tom Haynes. Peter Staubach, Trond Myklebust,
Sorin Faibish, Nico Williams, and David Black also contributed in
the final push to get this accepted.
Appendix B. RFC Editor Notes
[RFC Editor: please remove this section prior to publishing this document as an RFC]
[RFC Editor: prior to publishing this document as an RFC, please replace all occurrences of RFCTBD10 with RFCxxxx where xxxx is the RFC number of this document]
Author's Address