Networking Working Group J. Martocci, Ed. Internet-Draft Johnson Controls Inc. Intended status: Informational Pieter De Mil Expires:July 14,August 2, 2009 Ghent University IBCN W. Vermeylen Arts Centre Vooruit Nicolas Riou Schneider ElectricJanuary 14,February 2, 2009 Building Automation Routing Requirements in Low Power and Lossy Networksdraft-ietf-roll-building-routing-reqs-02draft-ietf-roll-building-routing-reqs-03 Status of this Memo This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet- Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt. The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html. This Internet-Draft will expire onJuly 14,August 2, 2009. Copyright Notice Copyright (c) 2009 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Abstract The Routing Over Low power and Lossy network (ROLL) Working Group has been chartered to work on routing solutions for Low Power and Lossy networks (LLN) in various markets: Industrial, Commercial (Building), Home and Urban. Pursuant to this effort, this document defines the routing requirements for building automation. Requirements Language The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC-2119. Table of Contents 1. Terminology....................................................4 2. Introduction...................................................42.1.3. Facility Management System (FMS)Topology.................5 2.1.1. Introduction.........................................5 2.1.2. Sensors/Actuators....................................6 2.1.3.Topology......................5 3.1. Introduction..............................................5 3.2. Sensors/Actuators.........................................6 3.3. AreaControllers.....................................6 2.1.4.Controllers..........................................7 3.4. ZoneControllers.....................................7 2.2.Controllers..........................................7 4. InstallationMethods......................................7 2.2.1.Methods...........................................7 4.1. Wired CommunicationMedia............................7 2.2.2.Media.................................7 4.2. Device Density............................................8 4.2.1. HVAC Device Density..................................8 4.2.2. Fire Device Density..................................8 4.2.3. Lighting Device Density..............................9 4.2.4. Physical Security DeviceDensity.......................................7 2.2.3.Density.....................9 4.3. InstallationProcedure...............................9 3. Building Automation Applications..............................10 3.1. Locking and Unlocking the Building.......................10 3.2. Building Energy Conservation.............................10 3.3. Inventory and Remote Diagnosis of Safety Equipment.......11 3.4. Life Cycle of Field Devices..............................11 3.5. Surveillance.............................................11 3.6. Emergency................................................12 3.7. Public Address...........................................12 4.Procedure....................................9 5. Building Automation RoutingRequirements......................12 4.1. Installation.............................................13 4.1.1.Requirements......................10 5.1. Installation.............................................10 5.1.1. Zero-Configurationinstallation.....................13 4.1.2.Installation.....................11 5.1.2. Sleepingdevices....................................13 4.1.3.Devices....................................11 5.1.3. LocalTesting.......................................14 4.1.4.Testing.......................................11 5.1.4. DeviceReplacement..................................14 4.2. Scalability..............................................15 4.2.1.Replacement..................................12 5.2. Scalability..............................................12 5.2.1. NetworkDomain......................................15 4.2.2. Peer-to-peer Communication..........................15 4.3. Mobility.................................................15 4.3.1.Domain......................................12 5.2.2. Peer-to-Peer Communication..........................12 5.3. Mobility.................................................13 5.3.1. Mobile DeviceAssociation...........................15 4.4.Requirements..........................13 5.4. Resource ConstrainedDevices.............................16 4.4.1.Devices.............................14 5.4.1. Limited Processing PowerSensors/Actuators..........16 4.4.2.for Non-routing Devices....14 5.4.2. Limited Processing PowerControllers................16 4.5. Addressing...............................................16 4.5.1. Unicast/Multicast/Anycast...........................16 4.6. Manageability............................................17 4.6.1.for Routing Devices........14 5.5. Addressing...............................................14 5.5.1. Unicast/Multicast/Anycast...........................14 5.6. Manageability............................................14 5.6.1. FirmwareUpgrades...................................17 4.6.2. Diagnostics.........................................17 4.6.3.Upgrades...................................15 5.6.2. Diagnostics.........................................15 5.6.3. RouteTracking......................................17 4.7. Compatibility............................................17 4.7.1. IPv4 Compatibility..................................18 4.7.2. Maximum Packet Size.................................18 4.8.Tracking......................................15 5.7. RouteSelection..........................................18 4.8.1.Selection..........................................15 5.7.1. PathCost...........................................18 4.8.2.Cost...........................................15 5.7.2. PathAdaptation.....................................18 4.8.3.Adaptation.....................................16 5.7.3. RouteRedundancy....................................18 4.8.4.Redundancy....................................16 5.7.4. Route DiscoveryTime................................18 4.8.5.Time................................16 5.7.5. RoutePreference....................................19 4.8.6. Path Persistence....................................19 5. Traffic Pattern...............................................19Preference....................................16 6.Open issues...................................................20Traffic Pattern...............................................16 7. SecurityConsiderations.......................................20Considerations.......................................17 7.1. Security Requirements....................................18 7.1.1. Authentication......................................18 7.1.2. Encryption..........................................18 7.1.3. Disparate Security Policies.........................19 8. IANAConsiderations...........................................20Considerations...........................................19 9.Acknowledgments...............................................20Acknowledgments...............................................19 10.References...................................................20References...................................................19 10.1. NormativeReferences....................................20References....................................19 10.2. InformativeReferences..................................21References..................................20 11. Appendix A: Additional BuildingRequirements.................21Requirements.................20 11.1. Additional Commercial ProductRequirements..............21Requirements..............20 11.1.1. Wired and WirelessImplementations.................21Implementations.................20 11.1.2. World-wideApplicability...........................21Applicability...........................20 11.1.3. Support of the BACnet Building Protocol............21 11.1.4. Support of the LON Building Protocol...............21 11.1.5. Energy HarvestedSensors...........................22Sensors...........................21 11.1.6. CommunicationDistance.............................22Distance.............................21 11.1.7. Automatic GainControl.............................22Control.............................21 11.1.8.Cost...............................................22Cost...............................................21 11.1.9. IPv4 Compatibility.................................21 11.2. Additional Installation and Commissioning Requirements..22 11.2.1. Device Setup Time..................................22 11.2.2. Unavailability of an IT network....................22 11.3. Additional Network Requirements.........................22 11.3.1. TCP/UDP............................................22 11.3.2. Data RatePerformance..............................23Performance..............................22 11.3.3. High SpeedDownloads...............................23Downloads...............................22 11.3.4. InterferenceMitigation............................23Mitigation............................22 11.3.5. Real-time PerformanceMeasures.....................23Measures.....................22 11.3.6. PacketReliability.................................23Reliability.................................22 11.3.7. Merging Commissioned Islands.......................23 11.3.8. Adjustable System TableSizes......................24Sizes......................23 11.4. PrioritizedRouting.....................................24Routing.....................................23 11.4.1. PacketPrioritization..............................24Prioritization..............................23 11.5. ConstrainedDevices.....................................24Devices.....................................23 11.5.1. Proxying for Constrained Devices...................24 11.6. Reliability.............................................24 11.6.1. Device Integrity...................................24 11.7. Path Persistence........................................24 12. Appendix B: FMS Use-Cases....................................24 12.1. Locking and Unlocking the Building......................25 12.2. Building Energy Conservation............................25 12.3. Inventory and Remote Diagnosis of Safety Equipment......25 12.4. Life Cycle of Field Devices.............................26 12.5. Surveillance............................................26 12.6. Emergency...............................................26 12.7. Public Address..........................................27 1. Terminology For description of the terminology used in this specification, please seethe Terminology ID referenced in Section 10.1.[I-D.ietf-roll-terminology]. 2. Introduction Commercial buildings have been fitted with pneumatic and subsequently electronic communication pathways connecting sensors to their controllers for over one hundred years. Recent economic and technical advances in wireless communication allow facilities to increasingly utilize a wireless solution in lieu of a wired solution; thereby reducing installation costs while maintaining highly reliant communication. The cost benefits and ease of installation of wireless sensors allow customers to further instrument their facilities with additional sensors; providing tighter control while yielding increased energy savings. Wireless solutions will be adapted from their existing wired counterparts in many of the building applications including, but not limited to Heating, Ventilation, and Air Conditioning (HVAC), Lighting, Physical Security, Fire, and Elevator systems. These devices will be developed to reduce installation costs; while increasing installation and retrofit flexibility, as well as increasing the sensing fidelity to improve efficiency and building service quality. Sensing devices may be battery or mains powered. Actuators and area controllers will be mains powered. Still it is envisioned to see a mix of wired and wireless sensors and actuators within buildings. Facility Management Systems (FMS) are deployed in a large set of vertical markets including universities; hospitals; government facilities; Kindergarten through High School (K-12); pharmaceutical manufacturing facilities; and single-tenant or multi-tenant office buildings. These buildings range in size from 100K sqft structures (5 story office buildings), to 1M sqft skyscrapers (100 story skyscrapers) to complex government facilities such as the Pentagon. The described topology is meant to be the model to be used in all these types of environments, but clearly must be tailored to the building class, building tenant and vertical market being served. The following sections describe the sensor, actuator, area controller and zone controller layers of the topology. (NOTE: The Building Controller and Enterprise layers of the FMS are excluded from this discussion since they typically deal in communication rates requiringWLANLAN/WLAN communication technologies).2.1.Section 3 describes FMS architectures commonly installed in commercial buildings. Section 4 describes installation methods deployed for new and remodeled construction. Appendix B describes various FMS use-cases and the interaction with humans for energy conservation and life-safety applications. Sections 3, 4, and Appendix B are mainly included for educational purposes. The aim of this document is to provide the set of IPv6 routing requirements for LLNs in buildings as described in Section 5. 3. Facility Management System (FMS) Topology2.1.1.3.1. Introduction To understand the network systems requirements of a facility management system in a commercial building, this document uses a framework to describe the basic functions and composition of the system. An FMS is a hierarchical system of sensors, actuators, controllers and user interface devices based on spatial extent. Additionally, an FMS may also be divided functionally across alike, but different building subsystems such as HVAC, Fire, Security, Lighting, Shutters and Elevator control systems as denoted in Figure 1. Much of the makeup of an FMS is optional and installed at the behest of the customer. Sensors and actuators have no standalone functionality. All other devices support partial or complete standalone functionality. These devices can optionally be tethered to form a more cohesive system. The customer requirements dictate the level of integration within the facility. This architecture provides excellent fault tolerance since each node is designed to operate in an independent mode if the higher layers are unavailable. +------+ +-----+ +------+ +------+ +------+ +------+ Bldg App'ns | | | | | | | | | | | | | | | | | | | | | | | | Building Cntl | | | | | S | | L | | S | | E | | | | | | E | | I | | H | | L | Area Control | H | | F | | C | | G | | U | | E | | V | | I | | U | | H | | T | | V | Zone Control | A | | R | | R | | T | | T | | A | | C | | E | | I | | I | | E | | T | Actuators | | | | | T | | N | | R | | O | | | | | | Y | | G | | S | | R | Sensors | | | | | | | | | | | | +------+ +-----+ +------+ +------+ +------+ +------+ Figure 1: Building Systems and Devices2.1.2.3.2. Sensors/Actuators As Figure 1 indicates an FMS may be composed of many functional stacks or silos that are interoperably woven together via Building Applications. Each silo has an array of sensors that monitor the environment and actuators that effect the environment as determined by the upper layers of the FMS topology. The sensors typically are the fringe of the network structure providing environmental data into the system. The actuators are the sensors counterparts modifying the characteristics of the system based on the input sensor data and the applications deployed.2.1.3.3.3. Area Controllers An area describes a small physical locale within a building, typically a room. HVAC (temperature and humidity) and Lighting (room lighting, shades, solar loads) vendors oft times deploy area controllers. Area controls are fed by sensor inputs that monitor the environmental conditions within the room. Common sensors found in many rooms that feed the area controllers include temperature, occupancy, lighting load, solar load and relative humidity. Sensors found in specialized rooms (such as chemistry labs) might include air flow, pressure, CO2 and CO particle sensors. Room actuation includes temperature setpoint, lights and blinds/curtains.2.1.4.3.4. Zone Controllers Zone Control supports a similar set of characteristics as the Area Control albeit to an extended space. A zone is normally a logical grouping or functional division of a commercial building. A zone may also coincidentally map to a physical locale such as a floor. Zone Control may have direct sensor inputs (smoke detectors for fire), controller inputs (room controllers for air-handlers in HVAC) or both (door controllers and tamper sensors for security). Like area/room controllers, zone controllers are standalone devices that operate independently or may be attached to the larger network for more synergistic control.2.2.4. Installation Methods2.2.1.4.1. Wired Communication Media Commercial controllers are traditionally deployed in a facility using twisted pair serial media following the EIA-485 electrical standard operating nominally at 38400 to 76800 baud. This allows runs to 5000 ft without a repeater. With the maximum of three repeaters, a single communication trunk can serpentine 15000 ft. EIA-485 is a multi-drop media allowing upwards to 255 devices to be connected to a single trunk. Most sensors and virtually all actuators currently used in commercial buildings are "dumb", non-communicating hardwired devices. However, sensor buses are beginning to be deployed by vendors which are used for smart sensors and point multiplexing. The Fire industry deploys addressable fire devices, which usually use some form of proprietary communication wiring driven by fire codes.2.2.2.4.2. Device Density Device density differs depending on the application and as dictated by the local building code requirements. The following sections detail typical installation densities for different applications.2.2.2.1.4.2.1. HVAC Device Density HVAC room applications typically have sensors/actuators and controllers spaced about 50ft apart. In most cases there is a 3:1 ratio of sensors/actuators to controllers. That is, for each room there is an installed temperature sensor, flow sensor and damper actuator for the associated room controller. HVAC equipment room applications are quite different. An air handler system may have a single controller with upwards to 25 sensors and actuators within 50 ft of the air handler. A chiller or boiler is also controlled with a single equipment controller instrumented with 25 sensors and actuators. Each of these devices would be individually addressed since the devices are mandated or optional as defined by the specified HVAC application. Air handlers typically serve one or two floors of the building. Chillers and boilers may be installed per floor, but many times service a wing, building or the entire complex via a central plant. These numbers are typical. In special cases, such as clean rooms, operating rooms, pharmaceuticals and labs, the ratio of sensors to controllers can increase by a factor of three. Tenant installations such as malls would opt for packaged units where much of the sensing and actuation is integrated into the unit. Here a single device address would serve the entire unit.2.2.2.2.4.2.2. Fire Device Density Fire systems are much more uniformly installed with smoke detectors installed about every 50 feet. This is dictated by local building codes. Fire pull boxes are installed uniformly about every 150 feet. A fire controller will service a floor or wing. The fireman's fire panel will service the entire building and typically is installed in the atrium.2.2.2.3.4.2.3. Lighting Device Density Lighting is also very uniformly installed with ballasts installed approximately every 10 feet. A lighting panel typically serves 48 to 64 zones. Wired systems typically tether many lights together into a single zone. Wireless systems configure each fixture independently to increase flexibility and reduce installation costs.2.2.2.4.4.2.4. Physical Security Device Density Security systems are non-uniformly oriented with heavy density near doors and windows and lighter density in the building interior space. The recent influx of interior and perimeter camera systems is increasing the security footprint. These cameras are atypical endpoints requiring upwards to 1 megabit/second (Mbit/s) data rates per camera as contrasted by the few Kbits/s needed by most other FMS sensing equipment. Previously, camera systems had been deployed on proprietary wired high speed network. More recent implementations utilize wired or wireless IP cameras integrated to the enterprise LAN.2.2.3.4.3. Installation Procedure Wired FMS installation is a multifaceted procedure depending on the extent of the system and the software interoperability requirement. However, at the sensor/actuator and controller level, the procedure is typically a two or three step process. Most FMS equipmentiswill utilize 24 VACequipmentpower sources that can be installed by a low-voltage electrician. He/she arrives on-site during the construction of the building prior to the sheet wall and ceiling installation. This allows him/her to allocate wall space, easily land the equipment and run the wired controller and sensor networks. The Building Controllers and Enterprise network are not normally installed until months later. The electrician completes his task by running a wire verification procedure that shows proper continuity between the devices and proper local operation of the devices. Later in the installation cycle, the higher order controllers are installed, programmed and commissioned together with the previously installed sensors, actuators and controllers. In most cases the IP network is still not operable. The Building Controllers are completely commissioned using a crossover cable or a temporary IP switch together with static IP addresses. Once the IP network is operational, the FMS may optionally be added to the enterprise network. The wireless installation process must follow the same work flow. The electrician will install the products as before and run local functional tests between the wireless device to assure operation before leaving the job. The electrician does not carry a laptop so the commissioning must be built into the device operation. The wireless installation process must follow the same work flow. The electrician will install the products as before and run local functional tests between the wireless devices to assure operation before leaving the job. The electrician does not carry a laptop so the commissioning must be built into the device operation.3.5. Building AutomationApplications Vooruit is an arts centre in a restored monument which dates from 1913. This complex monument consists of over 350 different rooms including a meeting rooms, large public halls and theaters serving as many as 2500 guests. A number of use cases regarding VooruitRouting Requirements Following aredescribed inthefollowing text. The situationsbuilding automation routing requirements for a network used to integrate building sensor actuator andneeds described in these use cases can also be found in all automated large buildings, such as airports and hospitals. 3.1. Locking and Unlockingcontrol products. These requirements have been limited to routing requirements only. These requirements are written not presuming any preordained network topology, physical media (wired) or radio technology (wireless). See Appendix A for additional requirements that have been deemed outside theBuilding The memberscope of this document yet will pertain to thecleaning staff arrives first in the morning unlocking the building (or a partsuccessful deployment ofit) from thebuilding automation systems. 5.1. Installation Building controlroom. This means that several doorssystems typically areunlocked; the alarmsinstalled and tested by electricians having little computer knowledge and no network knowledge whatsoever. These systems areswitched off; the heating turns on; some lights switch on, etc. Similarly, the last person leavingoften installed during the buildinghas to lockconstruction phase before thebuilding. This will lock alldrywall and ceilings are in place. For new construction projects, theouter doors, turnbuilding enterprise IP network is not in place during installation of thealarms on, switch off heating and lights, etc. The ''building locked'' or ''building unlocked'' event needsbuilding control system. In retrofit applications, pulling wires from sensors to controllers can bedelivered to a subsetcostly and in some applications (e.g. museums) not feasible. Local (ad hoc) testing ofall thesensors andactuators. It canroom controllers must bebeneficial if those field devices form a group (e.g. ''all-sensors- actuators-interested-in-lock/unlock-events). Alternatively,completed before theareatradesperson can complete his/her work. This testing allows the tradesperson to verify correct client (e.g. light switch) andzone controllers could form a group whereserver (e.g. light ballast) before leaving thearrivaljobsite. In traditional wired systems correct operation ofsuch an event results in each area and zone controller initiating unicast or multicast within the LLN. This use case is also described in the home automation, although the requirement about preventing the "popcorn effect" draft [I-D.ietf- roll-home-routing-reqs] can be relaxedalittle bit in building automation. It would be nice if lights, roll-down shutters and other actuators inlight switch/ballast pair was as simple as flipping on thesame room or area with transparent walls executelight switch. In wireless applications, thecommand around (not 'at')tradesperson has to assure the sametime (a toleranceoperation, yet be sure the operation of200 ms is allowed). 3.2. Building Energy Conservation A room thatthe light switch isnot in use should not be heated, air conditioned or ventilated andassociated to thelighting shouldproper ballast. System level commissioning will later beturned off. Indeployed using abuildingmore computer savvy person witha lot of rooms it can happen quite frequently that someone forgetsaccess toswitch off the HVAC and lighting. This isareal waste of valuable energy. To preventcommissioning device (e.g. a laptop computer). The completely installed and commissioned enterprise IP network may or may not be in place at thisfrom happening, the janitor can programtime. Following are thebuilding accordinginstallation routing requirements. 5.1.1. Zero-Configuration Installation It MUST be possible tothe day's schedule. This way lightingfully commission network devices without requiring any additional commissioning device (e.g. laptop). 5.1.2. Sleeping Devices Sensing devices will, in some cases, utilize battery power or energy harvesting techniques for power andHVAC is turned on prior to the use of a room, and turned off afterwards. Using suchwill operate mostly in asystem Vooruit has realizedsleep mode to maintain power consumption within asaving of 35% onmodest budget. The routing protocol MUST take into account device characteristics such as power budget. If such devices provide routing, rather than merely host connectivity, thegas and electricity bills. 3.3. Inventory and Remote Diagnosis of Safety Equipment Each month Vooruit is obligedenergy costs associated with such routing needs tomake an inventory offit within the power budget. If the mechanisms for duty cycling dictate very long response times or specific temporal scheduling, routing will need to take such constraints into account. Typically, batteries need to be operational for at least 5 years when the sensing device is transmitting itssafety equipment.data(e.g. 64 bytes) once per minute. Thistask takes two working days. Each fire extinguisher (100), fire blanket (10), fire-resistant door (120) and evacuation plan (80)requires that sleeping devices mustbe checked for presencehave minimal link on time when they awake andproper operation. Alsotransmit onto thebattery and lamp of every safety lampnetwork. Moreover, maintaining the ability to receive inbound data must bechecked before each public event (safety laws). Automating this process using asset tracking and low-power wireless technologies would reduce a heavy burdenaccomplished with minimal link onworking hours. It is important that these messages are delivered very reliably and that thetime. In many cases, proxies with unconstrained powerconsumption of the sensors/actuators attachedbudgets are used tothis safety equipment is kept atcache the inbound data for avery low level. 3.4. Life Cycle of Field Devices Some field devices (e.g. smoke detectors) are replaced periodically. The ease by which devices are added and deleted fromsleeping device until thenetwork is very importantdevice awakens. In such cases, the routing protocol MUST discover the capability of a node tosupport augmenting sensors/actuatorsact as a proxy duringconstruction. A secure mechanism is neededpath calculation; deliver the packet toremovetheoldassigned proxy for later delivery to the sleeping device upon its next awake cycle. 5.1.3. Local Testing The local sensors andinstallrequisite actuators and controllers must be testable within thenew device. Newlocale (e.g. room) to assure communication connectivity and local operation without requiring other systemic devices. Routing should allow for temporary ad hoc paths to be established that are updated as the network physically and functionally expands. 5.1.4. Device Replacement Replacement devices need to beauthenticated before they can participateplug-and-play with no additional setup compared to what is normally required for a new device. Devices referencing data in therouting process ofreplaced device must be able to reference data in its replacement without being reconfigured to refer to theLLN. Afternew device. Thus, such a reference cannot be a hardware identifier, such as theauthentication, zero-configuration ofMAC address, nor a hard-coded route. If such a reference is an IP address, therouting protocolreplacement device must be assigned the IP addressed previously bound to the replaced device. Or if the logical equivalent of a hostname isnecessary. 3.5. Surveillance Ingress and egress are real-time applications needing response times below 500msec, for exampleused forcardkey authorization. Itthe reference, it must bepossibletranslated toconfigure doors individuallythe replacement IP address. 5.2. Scalability Building control systems are designed for facilities from 50000 sq. ft. torestrict use1M+ sq. ft. The networks that support these systems must cost-effectively scale accordingly. In larger facilities installation may occur simultaneously ona per person basis with respect to time-of-day and person entering. While much of the surveillance application involves sensing and actuation at the door and communication withvarious wings or floors, yet thecentralized security system, other aspects, including tamper, door ajar, and forced entry notification,end system must seamlessly merge. Following aretothe scalability requirements. 5.2.1. Network Domain The routing protocol MUST bedeliveredable toone or more fixed or mobile usersupport networks with at least 2000 nodes supporting at least 1000 routing deviceswithin 5 seconds. 3.6. Emergency In case of an emergency it is very important that all the visitors be evacuated as quickly as possible. The fire and smoke detectors set off an alarmandalert the mobile personnel on their user device1000 non- routing device. Subnetworks (e.g.PDA). All emergency exits are instantly unlocked and the emergency lighting guidesrooms, primary equipment) within thevisitorsnetwork must support upwards tothese exits.255 sensors and/or actuators. 5.2.2. Peer-to-Peer Communication Thenecessary sprinklers are activated and the electricity grid monitored if it becomes necessary to shut down some partsdata domain for commercial FMS systems may sprawl across a vast portion of thebuilding. Emergency services are notified instantly. A wireless system could bringphysical domain. For example, a chiller may reside insome extra safety features. Locating fire fighters and guiding them throughthebuilding could be a life-saving application. These life critical applications ought to take precedence over other network traffic. Commands entered during these emergencies havefacility's basement due tobe properly authenticated by device, user,its size, yet the associated cooling towers will reside on the roof. The cold-water supply andcommand request. 3.7. Public Address It shouldreturn pipes serpentine through all the intervening floors. The feedback control loops for these systems require data from across the facility. A network device must bepossible to send audio and text messagesable tothe visitorscommunicate in a peer-to-peer manner with any other device on thebuilding. These messages can be very diverse, e.g. ASCII text boards displayingnetwork. Thus, thename ofrouting protocol MUST provide routes between arbitrary hosts within theeventappropriate administrative domain. 5.3. Mobility Most devices are affixed to walls or installed on ceilings within buildings. Hence the mobility requirements for commercial buildings are few. However, in wireless environments location tracking of occupants and assets is gaining favor. Asset tracking applications require monitoring movement with granularity of aroom, audio announcements such as delaysminute. This soft real-time performance requirement is reflected in theprogram, lost and found children, evacuation orders, etc. The controlperformance requirements below. 5.3.1. Mobile Device Requirements To minimize networkis expecteddynamics, mobile devices SHOULD not beableallowed toreadily senseact as forwarding devices (routers) for other devices in thepresence ofLLN. A mobile device that moves within anaudienceLLN SHOULD reestablish end-to- end communication to a fixed device also inan area and deliver applicable message content. 4. Building Automation Routing Requirements Following arethebuilding automation routing requirements for aLLN within 2 seconds. The networkused to integrate building sensor actuator and control products. These requirements have been limitedconvergence time should be less than 5 seconds once the mobile device stops moving. A mobile device that moves outside of an LLN SHOULD reestablish end- to-end communication torouting requirements only. These requirements are written not presuming any preordaineda fixed device in the new LLN within 5 seconds. The networktopology, physical media (wired) or radio technology (wireless). See Appendixconvergence time should be less than 5 seconds once the mobile device stops moving. Afor additional requirementsmobile device thathave been deemedmoves outsidethe scopeofthis document yet will pertainone LLN into another LLN SHOULD reestablish end-to-end communication to a fixed device in thesuccessful deployment of building automation systems. 4.1. Installation Building control systems typically are installed and tested by electricians having little computer knowledge and noold LLN within 10 seconds. The networkknowledge whatsoever. These systems are often installed during the building construction phase beforeconvergence time should be less than 10 seconds once thedrywall and ceilings aremobile device stops. A mobile device that moves outside of one LLN into another LLN SHOULD reestablish end-to-end communication to another mobile device inplace. For new construction projects,thebuilding enterprise IPnew LLN within 20 seconds. The networkis not in place during installation ofconvergence time should be less than 30 seconds once thebuilding control system. In retrofit applications, pulling wires from sensorsmobile devices stop moving. A mobile device that moves outside of one LLN into another LLN SHOULD reestablish end-to-end communication tocontrollers can be costly anda mobile device insome applications (e.g. museums) not feasible. Local (ad hoc) testing of sensors and room controllers mustthe old LLN within 30 seconds. The network convergence time should becompleted beforeless than 30 seconds once thetradesperson can complete his/her work. This testing allows the tradesperson to verify correct client (e.g. light switch)mobile devices stop moving. 5.4. Resource Constrained Devices Sensing andserver (e.g. light ballast) before leaving the jobsite. In traditional wired systems correct operation of a light switch/ballast pair was as simple as flipping on the light switch. In wireless applications, the tradesperson has to assure the same operation, yetactuator device processing power and memory may besure the operation4 orders ofthe light switch is associated to the proper ballast. System level commissioning will later be deployed using amagnitude less (i.e. 10,000x) than many morecomputer savvy person with accesstraditional client devices on an IP network. The routing mechanisms must therefore be tailored toa commissioning device (e.g. a laptop computer).fit these resource constrained devices. 5.4.1. Limited Processing Power for Non-routing Devices. Thecompletely installedsoftware size requirement for non-routing devices (e.g. sleeping sensors andcommissioned enterprise IP network may or may notactuators) SHOULD be implementable inplace at this time. Following are the installation8-bit devices with no more than 128KB of memory. 5.4.2. Limited Processing Power for Routing Devices The software size requirements for routingrequirements. 4.1.1. Zero-Configuration installation It MUST be possible to fully commission networkdeviceswithout requiring any additional commissioning device(e.g.laptop). The device MAY support uproom controllers) SHOULD be implementable in 8-bit devices with no more than 256KB of flash memory. 5.5. Addressing Facility Management systems require different communication schemes tosixteen integrated switchessolicit or post network information. Broadcasts or anycasts need be used touniquely identifyresolve unresolved references within a device when the deviceonfirst joins the network.4.1.2. Sleeping devices Sensing devices will, in cases, utilize battery power or energy harvesting techniques for power and will operate inAs with any network communication, broadcasting should be minimized. This is especially amostly sleeping mode to maintain power consumption withinproblem for small embedded devices with limited network bandwidth. In many cases amodest budget.global broadcast could be replaced with a multicast since the application knows the application domain. Broadcasts and multicasts are typically used for network joins and application binding in embedded systems. 5.5.1. Unicast/Multicast/Anycast Routing MUSTrecognize the constraints associatedsupport anycast, unicast, and multicast. 5.6. Manageability In addition to thepower budgetinitial installation ofsuch low duty cycle devices. If such devices provide routing, rather than merely host connectivity,theenergy costs associated with such routing need to fit withinsystem (see Section 4.1), it is equally important for thepower budget. Ifongoing maintenance of themechanisms for duty cycling dictate very long response times or specific temporal scheduling, routing and forwarding will need to take such constraints into account. Communicationsystem tothese mostly sleeping devices MUSTbebidirectional. Typically, batteries needsimple and inexpensive. 5.6.1. Firmware Upgrades To support high speed code downloads, routing MUST support transports that provide parallel downloads tobe operational for at least 5 years whentargeted devices yet guarantee packet delivery. In cases where the spatial position of thesensing device is transmitting its data(e.g. 64 bytes) once per minute. This requires that sleepingdevices requires multiple hops, the algorithm musthave minimal link on time when they awake and transmit ontorecurse through thenetwork. Moreover, maintainingnetwork until all targeted devices have been serviced. 5.6.2. Diagnostics To improve diagnostics, theabilitynetwork layer SHOULD be able toreceive inbound data mustbeaccomplished with minimal link on time. In many cases, proxies with unconstrained power budgets are used to cache the inbound data forplaced in and out of 'verbose' mode. Verbose mode is asleeping device until the device awakens. In such cases,temporary debugging mode that provides additional communication information including at least total number of routingMUST recognize the selected proxy for the sleeping device. 4.1.3. Local Testing The local sensors and requisite actuatorspackets sent andcontrollers must be testable within the locale (e.g. room) to assure communication connectivityreceived, number of routing failure (no route available), neighbor table, andlocal operation without requiring other systemic devices. Routing must allow for temporary ad hoc paths torouting table entries. 5.6.3. Route Tracking Route diagnostics SHOULD beestablished that are updatedsupported providing information such asthe network physically and functionally expands. 4.1.4. Device Replacement Replacement devices need to be plug-and-playpath quality; number of hops; available alternate active paths withno additional setup compared to whatassociated costs. Path quality isnormally required for a new device. Devices referencing data inthereplaced device must be able to reference data in its replacement without being reconfiguredrelative measure of 'goodness' of the selected source toreferdestination path as compared tothe new device. Thus, such a reference cannotalternate paths. This composite value may be measured as ahardware identifier, suchfunction of hop count, signal strength, available power, existing active paths or any other criteria deemed by ROLL as theMAC address, nor a hardcoded route. If such a reference is an IP address,path cost differentiator. 5.7. Route Selection Route selection determines reliability and quality of thereplacement device must be assignedcommunication paths among theIP addressed previously bound todevices. Optimizing thereplaced device. Or if the logical equivalent of a hostname is used for the reference, it must be translated to the replacement IP address. 4.2. Scalability Building control systemsroutes over time resolve any nuances developed at system startup when nodes aredesigned for facilities from 50000 sq. ft.asynchronously adding themselves to1M+ sq. ft. The networks that support these systems must cost-effectively scale accordingly. In larger facilities installation may occur simultaneously on various wings or floors, yettheend system must seamlessly merge. Following arenetwork. Path adaptation will reduce latency if thescalability requirements. 4.2.1. Network Domainpath costs consider hop count as a cost attribute. 5.7.1. Path Cost The routing protocol MUSTbe able to support networks with at least 1000 routers and 1000 hosts. Subnetworks (e.g. rooms, primary equipment) within the network mustsupportupwards to 255 sensors and/or actuators. . 4.2.2. Peer-to-peer Communication The data domain for commercial FMS systems may sprawl acrossavast portionmetric ofthe physical domain. For example, a chiller may reside in the facility's basement dueroute quality and optimize path selection according toits size, yet the associated cooling towers will reside onsuch metrics within constraints established for links along theroof. The cold-water supplypaths. These metrics SHOULD reflect metrics such as signal strength, available bandwidth, hop count, energy availability andreturn pipes serpentine through allcommunication error rates. 5.7.2. Path Adaptation Communication paths MUST adapt toward theintervening floors.chosen metric(s) (e.g. signal quality) optimality in time. 5.7.3. Route Redundancy Thefeedback control loops for these systems require data from across the facility. Anetworkdevice mustlayer SHOULD beableconfigurable tocommunicate inallow secondary and tertiary paths to be established and used upon failure of the primary path. 5.7.4. Route Discovery Time Mission critical commercial applications (e.g. Fire, Security) require reliable communication and guaranteed end-to-end delivery of all messages in apeer-to-peer manner with any other device ontimely fashion. Application layer time-outs must be selected judiciously to cover anomalous conditions such as lost packets and/or path discoveries; yet not be set too large to over damp thenetwork. Thus,network response. If route discovery occurs during packet transmission time, it SHOULD NOT add more than 120ms of latency to therouting protocol MUST provide routes between arbitrary hosts withinpacket delivery time. 5.7.5. Route Preference Route cost algorithms SHOULD allow theappropriate administrative domain. 4.3. Mobility Most devices are affixedinstaller towalls or installedoptionally select 'preferred' paths based onceilings within buildings. Hencethemobility requirements for commercial buildings are few. However, in wireless environments location trackingknown spatial layout ofoccupants and assets is gaining favor. 4.3.1. Mobile Device Association Mobile devices SHOULD be capablethe communicating devices. 6. Traffic Pattern The independent nature ofunjoining (handing-off) from an old network joining ontothe automation systems within anewbuilding plays heavy onto the network traffic patterns. Much of the real-time sensor data stays within15 seconds. 4.4. Resource Constrained Devices Sensing and actuator device processing powerthe local environment. Alarming andmemoryother event data will percolate to higher layers. Systemic data may be4 orders of magnitude less (i.e. 10,000x) than many more traditional client deviceseither polled or event based. Polled data systems will generate a uniform packet load onan IPthe network.The routing mechanisms MUST therefore be tailored to fit these resource constrained devices. 4.4.1. Limited Processing Power Sensors/Actuators The software stack requirements for sensorsThis architecture has proven not scalable. Most vendors have developed event based systems which pass data on event. These systems are highly scalable andactuators MUST be implementable in 8-bit devices with no more than 128KB of flash memory (including at least 32KB forgenerate low data on theapplication code) and no more than 8KB of RAM (includingnetwork atleast 1KB RAM available forquiescence. Unfortunately, theapplication). 4.4.2. Limited Processing Power Controllers The software stack requirements for room controllers SHOULDsystems will generate a heavy load on startup since all the initial data must migrate to the controller level. They also will generate a temporary but heavy load during firmware upgrades. This latter load can normally beimplementable in 8-bit devices with no more than 256KB of flash memory (including at least 32KBmitigated by performing these downloads during off-peak hours. Devices will need to reference peers occasionally for sensor data or to coordinate across systems. Normally, though, data will migrate from theapplication code) and no more than 8KB of RAM (includingsensor level upwards through the local, area then supervisory level. Bottlenecks will typically form atleast 1KB RAM available fortheapplication) 4.5. Addressing Facility Management systems require different communication schemafunnel point from the area controllers tosolicitthe supervisory controllers. Initial system startup after a controlled outage orpostunexpected power failure puts tremendous stress on the networkinformation. Broadcasts or anycasts need be used to resolve unresolved references withinand on the routing algorithms. An FMS system is comprised of adevice whenmyriad of control algorithms at thedevice first joinsroom, area, zone, and enterprise layers. When these control algorithms are at quiescence, the real-time data changes are small and thenetwork. As with anynetworkcommunication, broadcasting shouldwill not saturate. However, upon any power loss, the control loops and real-time data quickly atrophy. A ten minute outage may take many hours to regain control. Upon restart all lines-powered devices power-on instantaneously. However due to application startup and self tests, these devices will attempt to join the network randomly. Empirical testing indicates that routing paths acquired during startup will tend to beminimized.very oblique since the available neighbor lists are incomplete. Thisis especially a problemdemands an adaptive routing protocol to allow forsmall embedded devices with limitedpath optimization as the networkbandwidth. In many cases a global broadcast couldstabilizes. 7. Security Considerations Security policies, especially wireless encryption and device authentication needs to bereplacedconsidered, especially witha multicast sinceconcern to theapplication knowsimpact on theapplication domain. Broadcastsprocessing capabilities andmulticastsadditional latency incurred on the sensors, actuators and controllers. FMS systems are typicallyused for network joins and application bindinghighly configurable inembedded systems. 4.5.1. Unicast/Multicast/Anycast Routing MUST support anycast, unicast, multicastthe field andbroadcast services (or IPv6 equivalent). 4.6. Manageability In addition tohence theinitial installationsecurity policy is most often dictated by the type of building to which thesystem (see Section 4.1), itFMS isequally importantbeing installed. Single tenant owner occupied office buildings installing lighting or HVAC control are candidates for implementing low or even no security on theongoing maintenance ofLLN. Antithetically, military or pharmaceutical facilities require strong security policies. As noted in thesystem toinstallation procedures above, security policies must besimple and inexpensive. 4.6.1. Firmware Upgrades To support high speed code downloads, routing MUST support transports that provide parallel downloadsfacile totargeted devicesallow no security during the installation phase (prior to building occupancy), yetguarantee packet delivery. 4.6.2. Diagnostics To improve diagnostics,easily raise the security level networklayerwide during the commissioning phase of the system. 7.1. Security Requirements 7.1.1. Authentication Authentication SHOULD beable tooptional on the LLN. Authentication SHOULD beplaced infully configurable on-site. Authentication policy andout of 'verbose' mode. Verbose mode isupdates MUST be transmittable over-the-air. Authentication SHOULD occur upon joining or rejoining atemporary debugging mode that provides additional communication information includingnetwork. However, once authenticated devices SHOULD not need to reauthenticate themselves with any other devices in the LLN. Packets may need authentication atleast total number of packets sent,the source and destination nodes, however, packetsreceived, number of failed communication attempts, neighbor tablerouted through intermediate hops should not need to be reauthenticated at each hop. 7.1.2. Encryption 7.1.2.1. Encryption Levels Encryption SHOULD be optional on the LLN. Encryption SHOULD be fully configurable on-site. Encryption policy androuting table entries. 4.6.3. Route Tracking Route diagnosticsupdates SHOULD besupported providing information such as path quality; number of hops; available alternate active paths with associated costs. 4.7. Compatibility The building automation industry adheres to application layer protocol standards to achieve vendor interoperability. These standards are BACnettransmittable over-the-air andLON. It is estimated that fully 80% of the customer bid requests received world-widein-the-clear. 7.1.2.2. Security Policy Flexibility In most facilities authentication and encryption willrequire compliancebe turned off during installation. More complex encryption policies might be put in force at commissioning time. New encryption policies MUST be allowed toone or both of these standards. ROLL routing will therefore needbe presented todovetailall devices in the LLN over the network without needing tothesevisit each device. 7.1.2.3. Encryption Types Data encryption of packets MUST optionally be supported by use of either a network wide key and/or applicationprotocolskey. The network key would apply toassure acceptanceall devices in thebuilding automation industry. These protocols have been in place for over 10 years. Many sites will require backwards compatibility withLLN. The application key would apply to a subset of devices on theexisting legacy devices. 4.7.1. IPv4 CompatibilityLLN. Therouting protocol MUST support intercommunication among IPv4network key andIPv6 devices.. 4.7.2. Maximum Packet Size Routingapplication keys would be mutually exclusive. Forwarding devices in the mesh MUSTsupport packet sizes to 1526 octets (tobebackwards compatible with 802.3 subnetworks) 4.8. Route Selection Route selection determines reliability and qualityable to forward a packet encrypted with an application key without needing to have the application key. 7.1.2.4. Packet Encryption The encryption policy MUST support encryption of thecommunication paths amongpayload only or thedevices. Optimizingentire packet. Payload only encryption would eliminate theroutes over time resolve any nuances developeddecryption/re-encryption overhead atsystem startup when nodes are asynchronously adding themselvesevery hop. 7.1.3. Disparate Security Policies Due to thenetwork. Path adaptation will reduce latency if the path costs consider hop count as a cost attribute. 4.8.1. Path Cost The routing protocol MUST support a metriclimited resources ofroute quality and optimize path selection according to such metricsan LLN, the security policy defined withinconstraints established for links alongthepaths. These metrics SHOULD reflect metrics such as signal strength, available bandwidth, hop count, energy availability and communication error rates. 4.8.2. Path Adaptation Communication pathsLLN MUSTadapt toward the chosen metric(s) (e.g. signal quality) optimality in time. 4.8.3. Route Redundancy The network layer SHOULDbeconfigurable to allow secondary and tertiary pathsable tobe established and used upon failurediffer from that of theprimary path. 4.8.4. Route Discovery Time Mission critical commercial applications (e.g. Fire,Security) require reliable communication and guaranteed end-to-end deliveryrest ofall messages in a timely fashion. Application layer time-outs must be selected judiciously to cover anomalous conditions such as lost packets and/or path discoveries; yet not be set too large to over dampthe IP networkresponse. Route discovery occurring during packet transmissionwithin the facility yet packets MUSTnot exceed 120 msecs. 4.8.5. Route Preference Thestill be able to routediscovery mechanism SHOULD allow a source node (sensor)todictate a configured destination node (controller) as a preferred routing path. 4.8.6. Path Persistence To eliminate high network traffic in power-failorbrown-out conditions previously established routes SHOULD be rememberedthrough the LLN from/to these networks. 8. IANA Considerations This document includes no request to IANA. 9. Acknowledgments In addition to the authors, J. P. Vasseur, David Culler, Ted Humpal andinvoked priorZach Shelby are gratefully acknowledged for their contributions toestablishing new routesthis document. This document was prepared using 2-Word-v2.0.template.dot. 10. References 10.1. Normative References [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997. 10.2. Informative References [1] [I-D.ietf-roll-home-routing-reqs] Brandt, A., Buron, J., and G. Porcu, "Home Automation Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-home-routing-reqs-06 (work in progress), November 2008. [2] [I-D.ietf-roll-indus-routing-reqs] Networks, D., Thubert, P., Dwars, S., and T. Phinney, "Industrial Routing Requirements in Low Power and Lossy Networks", draft-ietf-roll-indus- routing-reqs-03 (work in progress), December 2008. [3] [I-D.ietf-roll-terminology]Vasseur, J., "Terminology in Low power And Lossy Networks", draft-ietf-roll-terminology-00 (work in progress), October 2008. [4] "RS-485 EIA Standard: Standard for Electrical Characteristics of Generators and Receivers for use in Balanced [5] "BACnet: A Data Communication Protocol for Building and Automation Control Networks" ANSI/ASHRAE Standard 135-2004", 2004 11. Appendix A: Additional Building Requirements Appendix A contains additional informative building requirements that were deemed out of scope forthose devices reenteringthenetwork. 5. Traffic Patternrouting document yet provided ancillary informational substance to the reader. Theindependent naturerequirements should be addressed by ROLL or other WGs before adoption by the building automation industry. 11.1. Additional Commercial Product Requirements 11.1.1. Wired and Wireless Implementations Solutions must support both wired and wireless implementations. 11.1.2. World-wide Applicability Wireless devices must be supportable at the 2.4Ghz ISM band. Wireless devices should be supportable at the 900 and 868 ISM bands as well. 11.1.3. Support of the BACnet Building Protocol Devices implementing the ROLL features should support the BACnet protocol. 11.1.4. Support of theautomation systems within a building plays heavy ontoLON Building Protocol Devices implementing the ROLL features should support the LON protocol. 11.1.5. Energy Harvested Sensors RFDs should target for operation using viable energy harvesting techniques such as ambient light, mechanical action, solar load, air pressure and differential temperature. 11.1.6. Communication Distance A source device may be upwards to 1000 feet from its destination. Communication may need to be established between these devices without needing to install other intermediate 'communication only' devices such as repeaters 11.1.7. Automatic Gain Control For wireless implementations, the device radios should incorporate automatic transmit power regulation to maximize packet transfer and minimize network interference regardless of network size or density. 11.1.8. Cost The total installed infrastructure cost including but not limited to the media, required infrastructure devices (amortized across thenetwork traffic patterns. Muchnumber ofthe real-time sensor data stays within the local environment. Alarming and other event data will percolatedevices); labor tohigher layers. Systemic data may be either polled or event based. Polled data systems will generate a uniform packet load oninstall and commission thenetwork. This architecture has provennetwork must notscalable. Most vendors have developed event based systems which passes data on event. These systems are highly scalableexceed $1.00/foot for wired implementations. Wireless implementations (total installed cost) must cost no more than 80% of wired implementations. 11.1.9. IPv4 Compatibility The routing protocol must support cost-effective intercommunication among IPv4 andgenerate low data onIPv6 devices. 11.2. Additional Installation and Commissioning Requirements 11.2.1. Device Setup Time Network setup by the installer must take no longer than 20 seconds per device installed. 11.2.2. Unavailability of an IT networkat quiescence. Unfortunately, the systems will generate a heavy load on startup since all the initial dataProduct commissioning mustmigrate to the controller level. They also will generate a temporary but heavy load during firmware upgrades. This latter load can normallybemitigatedperformed byperforming these downloads during off-peak hours. Devices will need to reference peers occasionally for sensor data oran application engineer prior tocoordinate across systems. Normally, though, data will migrate fromthesensor level upwards throughinstallation of the IT network. 11.3. Additional Network Requirements 11.3.1. TCP/UDP Connection based and connectionless services must be supported 11.3.2. Data Rate Performance An effective data rate of 20kbits/s is thelocal, area then supervisory level. Bottlenecks will typically form atlowest acceptable operational data rate acceptable on thefunnel point fromnetwork. 11.3.3. High Speed Downloads Devices receiving a download MAY cease normal operation, but upon completion of thearea controllers todownload must automatically resume normal operation. 11.3.4. Interference Mitigation The network must automatically detect interference and seamlessly migrate thesupervisory controllers. 6. Open issues Other itemsnetwork hosts channel tobe addressed in further revisions of this document include: All known open items completed 7. Security Considerations Security policies, especially wireless encryptionimprove communication. Channel changes andoverall device authentication neednodes response tobe considered. These issues are out of scope fortherouting requirements, but could have an impact onchannel change must occur within 60 seconds. 11.3.5. Real-time Performance Measures A node transmitting a 'request with expected reply' to another node must send theprocessing capabilities ofmessage to thesensorsdestination andcontrollers. As noted above,receive theFMS systems are typically highly configurableresponse inthe fieldnot more than 120 msec. This response time should be achievable with 5 or less hops in each direction. This requirement assumes network quiescence andhencea negligible turnaround time at thesecurity policy is most often dictateddestination node. 11.3.6. Packet Reliability Reliability must meet the following minimum criteria : < 1% MAC layer errors on all messages; After no more than three retries < .1% Network layer errors on all messages; After no more than three additional retries; < 0.01% Application layer errors on all messages. Therefore application layer messages will fail no more than once every 100,000 messages. 11.3.7. Merging Commissioned Islands Subsystems are commissioned by various vendors at various times during building construction. These subnetworks must seamlessly merge into networks and networks must seamlessly merge into internetworks since thetypeend user wants a holistic view ofbuilding to whichtheFMS is being installed. 8. IANA Considerations This document includes no request to IANA. 9. Acknowledgments J. P. Vasseur, Ted Humpal and Zach Shelby are gratefully acknowledged for their contributionssystem. 11.3.8. Adjustable System Table Sizes Routing must support adjustable router table entry sizes on a per node basis tothis document. This document was prepared using 2-Word-v2.0.template.dot. 10. References 10.1. Normative References draft-ietf-roll-home-routing-reqs-03 draft-ietf-roll-terminology-00.txt 10.2. Informative References ''RS-485 EIA Standard: Standard for Electrical Characteristics of Generators and Receivers for usemaximize limited RAM inBalanced Digital Multipoint ''BACnet: A Data Communication Protocol for Building and Automation Control Networks'' ANSI/ASHRAE Standard 135-2004'', 2004 ''LON: OPEN DATA COMMUNICATION IN BUILDING AUTOMATION, CONTROLS AND BUILDING MANAGEMENT - BUILDING NETWORK PROTOCOL - PART 1: PROTOCOL STACK'', 11/25/2005 11. Appendix A: Additional Building Requirements Appendix A contains additional building requirements that were deemed out of scope forthe devices. 11.4. Prioritized Routing Network and application routingdocument yet provided ancillary informational substance to the reader. The requirements will needprioritization is required to assure that mission critical applications (e.g. Fire Detection) cannot beaddressed by ROLL or other WGs before adoption bydeferred while less critical application access thebuilding automation industry willnetwork. 11.4.1. Packet Prioritization Routers must support quality of service prioritization to assure timely response for critical FMS packets. 11.5. Constrained Devices The network may beconsidered. 11.1. Additional Commercial Product Requirements 11.1.1. Wiredcomposed of a heterogeneous mix of full, battery andWireless Implementations Solutionsenergy harvested devices. The routing protocol must support these constrained devices. 11.5.1. Proxying for Constrained Devices Routing must supportboth wiredin-bound packet caches for low-power (battery andwireless implementations. 11.1.2. World-wide Applicability Wirelessenergy harvested) devices when these devices are not accessible on the network. These devices must have a designated powered proxying device to which packets will besupportable attemporarily routed and cached until the2.4Ghz ISM band. Wirelessconstrained device accesses the network. 11.6. Reliability 11.6.1. Device Integrity Commercial Building devicesshouldmust all besupportable atperiodically scanned to assure that the900device is viable and868 ISM bandscan communicate data and alarm information aswell. 11.1.3. Support of the BACnet Building Protocol Devices implementing the ROLL featuresneeded. Network routers shouldsupport the BACnet protocol. 11.1.4. Support of the LON Building Protocol Devices implementingmaintain previous packet flow information temporally to minimize overall network overhead. 11.7. Path Persistence To eliminate high network traffic in power-fail or brown-out conditions previously established routes SHOULD be remembered and invoked prior to establishing new routes for those devices reentering theROLL features should supportnetwork. 12. Appendix B: FMS Use-Cases Appendix B contains FMS use-cases that describes theLON protocol. 11.1.5. Energy Harvested Sensors RFDs should targetuse of sensors and controllers foroperation using viablevarious applications with a commercial building and how they interplay with energyharvesting techniques suchconservation and life-safety applications. The Vooruit arts centre is a restored monument which dates from 1913. This complex monument consists of over 350 different rooms including a meeting rooms, large public halls and theaters serving asambient light, mechanical action, solar load, air pressuremany as 2500 guests. A number of use cases regarding Vooruit are described in the following text. The situations anddifferential temperature. 11.1.6. Communication Distance A source device may be upwards to 1000 feet from its destination. Communication may need to be established betweenneeds described in thesedevices without needing to install other intermediate 'communication only' devicesuse cases can also be found in all automated large buildings, such asrepeaters 11.1.7. Automatic Gain Control For wireless implementations, the device radios should incorporate automatic transmit power regulation to maximize packet transferairports andminimize network interference regardless of network size or density. 11.1.8. Costhospitals. 12.1. Locking and Unlocking the Building Thetotal installed infrastructure cost including but not limited tomember of themedia, required infrastructure devices (amortized acrosscleaning staff arrives first in thenumber of devices); labor to install and commissionmorning unlocking thenetwork must not exceed $1.00/foot for wired implementations. Wireless implementations (total installed cost) must cost no more than 80%building (or a part ofwired implementations. 11.2. Additional Installation and Commissioning Requirements 11.2.1. Device Setup Time Network setup byit) from theinstaller must take no longer than 20 seconds per device installed. 11.2.2. Unavailability of an IT network Product commissioning must be performed by an application engineer priorcontrol room. This means that several doors are unlocked; the alarms are switched off; the heating turns on; some lights switch on, etc. Similarly, the last person leaving the building has to lock theinstallation ofbuilding. This will lock all theIT network. 11.3. Additional Network Requirements 11.3.1. TCP/UDP Connection basedouter doors, turn the alarms on, switch off heating andconnectionless services mustlights, etc. The "building locked" or "building unlocked" event needs to besupported 11.3.2. Data Rate Performance An effective data ratedelivered to a subset of20kbits/s isall thelowest acceptable operational data rate acceptable onsensors and actuators. It can be beneficial if those field devices form a group (e.g. "all-sensors- actuators-interested-in-lock/unlock-events). Alternatively, thenetwork. 11.3.3. High Speed Downloads Devices receivingarea and zone controllers could form adownload MAY cease normal operation, but upon completion ofgroup where thedownload must automatically resume normal operation. 11.3.4. Interference Mitigation The network must automatically detect interferencearrival of such an event results in each area andseamlessly migratezone controller initiating unicast or multicast within thenetwork hosts channel to improve communication. Channel changesLLN. This use case is also described in the home automation, although the requirement about preventing the "popcorn effect" I-D.ietf-roll-home- routing-reqs] can be relaxed a bit in building automation. It would be nice if lights, roll-down shutters andnodes response toother actuators in thechannel change must occur within 60 seconds. 11.3.5. Real-time Performance Measuressame room or area with transparent walls execute the command around (not 'at') the same time (a tolerance of 200 ms is allowed). 12.2. Building Energy Conservation Anode transmittingroom that is not in use should not be heated, air conditioned or ventilated and the lighting should be turned off or dimmed. In a'requestbuilding withexpected reply'many rooms it can happen quite frequently that someone forgets toanother node must sendswitch off themessageHVAC and lighting, thereby wasting valuable energy. To prevent this occurrence, the facility manager might program the building according to the day's schedule. This way lighting and HVAC is turned on prior to thedestinationuse of a room, andreceiveturned off afterwards. Using such a system Vooruit has realized a saving of 35% on theresponse in not more than 120 msec.gas and electricity bills. 12.3. Inventory and Remote Diagnosis of Safety Equipment Each month Vooruit is obliged to make an inventory of its safety equipment. Thisresponse time shouldtask takes two working days. Each fire extinguisher (100), fire blanket (10), fire-resistant door (120) and evacuation plan (80) must beachievable with 5 or less hops in each direction. This requirement assumes network quiescencechecked for presence anda negligible turnaround time atproper operation. Also thedestination node. 11.3.6. Packet Reliability Reliabilitybattery and lamp of every safety lamp mustmeet the following minimum criteria : < 1% MAC layer errors on all messages; After no more than three retries < .1% Network layer errors on all messages; After no more than three additional retries; < 0.01% Application layer errorsbe checked before each public event (safety laws). Automating this process using asset tracking and low-power wireless technologies would reduce a heavy burden onall messages. Therefore application layerworking hours. It is important that these messageswill fail no more than once every 100,000 messages. 11.3.7. Merging Commissioned Islands Subsystemsarecommissioned by various vendorsdelivered very reliably and that the power consumption of the sensors/actuators attached to this safety equipment is kept atvarious timesa very low level. 12.4. Life Cycle of Field Devices Some field devices (e.g. smoke detectors) are replaced periodically. The ease by which devices are added and deleted from the network is very important to support augmenting sensors/actuators duringbuildingconstruction.These subnetworks must seamlessly merge into networksA secure mechanism is needed to remove the old device andnetworks must seamlessly merge into internetworks sinceinstall theend user wants a holistic viewnew device. New devices need to be authenticated before they can participate in the routing process of thesystem. 11.3.8. Adjustable System Table Sizes RoutingLLN. After the authentication, zero-configuration of the routing protocol is necessary. 12.5. Surveillance Ingress and egress are real-time applications needing response times below 500msec, for example for cardkey authorization. It mustsupport adjustable router table entry sizesbe possible to configure doors individually to restrict use on a pernodeperson basis with respect tomaximize limited RAM intime-of-day and person entering. While much of thedevices. 11.4. Prioritized Routing Networksurveillance application involves sensing and actuation at the door and communication with the centralized security system, other aspects, including tamper, door ajar, and forced entry notification, are to be delivered to one or more fixed or mobile user devices within 5 seconds. 12.6. Emergency In case of an emergency it is very important that all the visitors be evacuated as quickly as possible. The fire and smoke detectors set off an alarm and alert the mobile personnel on their user device (e.g. PDA). All emergency exits are instantly unlocked and the emergency lighting guides the visitors to these exits. The necessary sprinklers are activated and the electricity grid monitored if it becomes necessary to shut down some parts of the building. Emergency services are notified instantly. A wireless system could bring in some extra safety features. Locating fire fighters andapplication routing prioritization is required to assure that mission critical applications (e.g. Fire Detection) cannotguiding them through the building could bedeferred while lessa life-saving application. These life criticalapplication access the network. 11.4.1. Packet Prioritization Routers must support quality of service prioritizationapplications ought toassure timely response for critical FMS packets. 11.5. Constrained Devices Thetake precedence over other networkmaytraffic. Commands entered during these emergencies have to becomposed of a heterogeneous mix of full, batteryproperly authenticated by device, user, andenergy harvested devices. The routing protocol must support these constrained devices. 11.5.1. Proxying for Constrained Devices Routing must support in-bound packet caches for low-power (batterycommand request. 12.7. Public Address It should be possible to send audio andenergy harvested) devices when these devices are not accessible ontext messages to thenetwork.visitors in the building. Thesedevices must have a designated powered proxying device to which packets willmessages can betemporarily routed and cached untilvery diverse, e.g. ASCII text boards displaying theconstrained device accessesname of thenetwork. 11.6. Reliability 11.6.1. Device Integrity Commercial Building devices must allevent in a room, audio announcements such as delays in the program, lost and found children, evacuation orders, etc. The control network is expected beperiodically scannedable toassure thatreadily sense thedevice is viable and can communicate datapresence of an audience in an area andalarm information as needed. Network routers should maintain previous packet flow information temporally to minimize overall network overhead.deliver applicable message content. Authors' Addresses Jerry Martocci Johnson Control 507 E. Michigan Street Milwaukee, Wisconsin, 53202 USA Phone: 414.524.4010 Email: jerald.p.martocci@jci.com Nicolas Riou Schneider Electric Technopole 38TEC T3 37 quai Paul Louis Merlin 38050 Grenoble Cedex 9 France Phone: +33 4 76 57 66 15 Email: nicolas.riou@fr.schneider-electric.com Pieter De Mil Ghent University - IBCN G. Crommenlaan 8 bus 201 Ghent 9050 Belgium Phone: +32-9331-4981 Fax: +32--9331--4899 Email: pieter.demil@intec.ugent.be Wouter Vermeylen Arts Centre Vooruit ??? Ghent 9000 Belgium Phone: ??? Fax: ??? Email: wouter@vooruit.be