2 COINRG I. Kunze 3 Internet-Draft K. Wehrle 4 Intended status: Informational RWTH Aachen 5 Expires: 8 September 2022 D. Trossen 6 Huawei 7 M.J. Montpetit 8 Concordia 9 X. de Foy 10 InterDigital Communications, LLC 11 D. Griffin 12 M. Rio 13 UCL 14 7 March 2022 16 Use Cases for In-Network Computing 17 draft-irtf-coinrg-use-cases-02 19 Abstract 21 Computing in the Network (COIN) comes with the prospect of deploying 22 processing functionality on networking devices, such as switches and 23 network interface cards. While such functionality can be beneficial 24 in several contexts, it has to be carefully placed into the context 25 of the general Internet communication. 
27 This document discusses some use cases to demonstrate how real 28 applications can benefit from COIN and to showcase essential 29 requirements that have to be fulfilled by COIN applications. 31 Status of This Memo 33 This Internet-Draft is submitted in full conformance with the 34 provisions of BCP 78 and BCP 79. 36 Internet-Drafts are working documents of the Internet Engineering 37 Task Force (IETF). Note that other groups may also distribute 38 working documents as Internet-Drafts. The list of current Internet- 39 Drafts is at https://datatracker.ietf.org/drafts/current/. 41 Internet-Drafts are draft documents valid for a maximum of six months 42 and may be updated, replaced, or obsoleted by other documents at any 43 time. It is inappropriate to use Internet-Drafts as reference 44 material or to cite them other than as "work in progress." 46 This Internet-Draft will expire on 8 September 2022. 48 Copyright Notice 50 Copyright (c) 2022 IETF Trust and the persons identified as the 51 document authors. All rights reserved. 53 This document is subject to BCP 78 and the IETF Trust's Legal 54 Provisions Relating to IETF Documents (https://trustee.ietf.org/ 55 license-info) in effect on the date of publication of this document. 56 Please review these documents carefully, as they describe your rights 57 and restrictions with respect to this document. Code Components 58 extracted from this document must include Revised BSD License text as 59 described in Section 4.e of the Trust Legal Provisions and are 60 provided without warranty as described in the Revised BSD License. 62 Table of Contents 64 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4 65 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6 66 3. Providing New COIN Experiences . . . . . . . . . . . . . . . 6 67 3.1. Mobile Application Offloading . . . . . . . . . . . . . . 6 68 3.1.1. Description . . . . . . . . . . . . . . . . . . . . . 6 69 3.1.2. Characterization . . . . . . . 
. . . . . . . . . . . 7 70 3.1.3. Existing Solutions . . . . . . . . . . . . . . . . . 9 71 3.1.4. Opportunities . . . . . . . . . . . . . . . . . . . . 9 72 3.1.5. Research Questions . . . . . . . . . . . . . . . . . 9 73 3.1.6. Requirements . . . . . . . . . . . . . . . . . . . . 10 74 3.2. Extended Reality and Immersive Media . . . . . . . . . . 11 75 3.2.1. Description . . . . . . . . . . . . . . . . . . . . . 11 76 3.2.2. Characterization . . . . . . . . . . . . . . . . . . 11 77 3.2.3. Existing Solutions . . . . . . . . . . . . . . . . . 12 78 3.2.4. Opportunities . . . . . . . . . . . . . . . . . . . . 13 79 3.2.5. Research Questions . . . . . . . . . . . . . . . . . 13 80 3.2.6. Requirements . . . . . . . . . . . . . . . . . . . . 14 81 3.3. Personalised and interactive performing arts . . . . . . 14 82 3.3.1. Description . . . . . . . . . . . . . . . . . . . . . 15 83 3.3.2. Characterization . . . . . . . . . . . . . . . . . . 15 84 3.3.3. Existing solutions . . . . . . . . . . . . . . . . . 17 85 3.3.4. Opportunities . . . . . . . . . . . . . . . . . . . . 17 86 3.3.5. Research Questions: . . . . . . . . . . . . . . . . . 17 87 3.3.6. Requirements . . . . . . . . . . . . . . . . . . . . 18 88 4. Supporting new COIN Systems . . . . . . . . . . . . . . . . . 18 89 4.1. Industrial Network Scenario . . . . . . . . . . . . . . . 19 90 4.2. In-Network Control / Time-sensitive applications . . . . 20 91 4.2.1. Description . . . . . . . . . . . . . . . . . . . . . 20 92 4.2.2. Characterization . . . . . . . . . . . . . . . . . . 21 93 4.2.3. Existing Solutions . . . . . . . . . . . . . . . . . 21 94 4.2.4. Opportunities . . . . . . . . . . . . . . . . . . . . 22 95 4.2.5. Research Questions . . . . . . . . . . . . . . . . . 22 96 4.2.6. Requirements . . . . . . . . . . . . . . . . . . . . 23 97 4.3. Large Volume Applications - Filtering . . . . . . . . . . 23 98 4.3.1. Description . . . . . . . . . . . . . . . . . . . . . 23 99 4.3.2. Characterization . . . . . . 
. . . . . . . . . . . . 24 100 4.3.3. Existing Solutions . . . . . . . . . . . . . . . . . 25 101 4.3.4. Opportunities . . . . . . . . . . . . . . . . . . . . 25 102 4.3.5. Research Questions . . . . . . . . . . . . . . . . . 26 103 4.3.6. Requirements . . . . . . . . . . . . . . . . . . . . 26 104 4.4. Large Volume Applications - (Pre-)Preprocessing . . . . . 26 105 4.4.1. Description . . . . . . . . . . . . . . . . . . . . . 26 106 4.4.2. Characterization . . . . . . . . . . . . . . . . . . 26 107 4.4.3. Existing Solutions . . . . . . . . . . . . . . . . . 27 108 4.4.4. Opportunities . . . . . . . . . . . . . . . . . . . . 27 109 4.4.5. Research Questions . . . . . . . . . . . . . . . . . 27 110 4.4.6. Requirements . . . . . . . . . . . . . . . . . . . . 27 111 4.5. Industrial Safety . . . . . . . . . . . . . . . . . . . . 28 112 4.5.1. Description . . . . . . . . . . . . . . . . . . . . . 28 113 4.5.2. Characterization . . . . . . . . . . . . . . . . . . 28 114 4.5.3. Existing Solutions . . . . . . . . . . . . . . . . . 28 115 4.5.4. Opportunities . . . . . . . . . . . . . . . . . . . . 29 116 4.5.5. Research Questions . . . . . . . . . . . . . . . . . 29 117 4.5.6. Requirements . . . . . . . . . . . . . . . . . . . . 29 118 5. Improving existing COIN capabilities . . . . . . . . . . . . 29 119 5.1. Content Delivery Networks . . . . . . . . . . . . . . . . 29 120 5.1.1. Description . . . . . . . . . . . . . . . . . . . . . 29 121 5.1.2. Characterization . . . . . . . . . . . . . . . . . . 30 122 5.1.3. Existing Solutions . . . . . . . . . . . . . . . . . 30 123 5.1.4. Opportunities . . . . . . . . . . . . . . . . . . . . 30 124 5.1.5. Research Questions . . . . . . . . . . . . . . . . . 30 125 5.1.6. Requirements . . . . . . . . . . . . . . . . . . . . 31 126 5.2. Compute-Fabric-as-a-Service (CFaaS) . . . . . . . . . . . 31 127 5.2.1. Description . . . . . . . . . . . . . . . . . . . . . 31 128 5.2.2. Characterization . . . . . . . . . . . . . . . . . . 
31 129 5.2.3. Existing Solutions . . . . . . . . . . . . . . . . . 32 130 5.2.4. Opportunities . . . . . . . . . . . . . . . . . . . . 32 131 5.2.5. Research Questions . . . . . . . . . . . . . . . . . 32 132 5.2.6. Requirements . . . . . . . . . . . . . . . . . . . . 33 133 5.3. Virtual Networks Programming . . . . . . . . . . . . . . 33 134 5.3.1. Description . . . . . . . . . . . . . . . . . . . . . 33 135 5.3.2. Characterization . . . . . . . . . . . . . . . . . . 34 136 5.3.3. Existing Solutions . . . . . . . . . . . . . . . . . 36 137 5.3.4. Opportunities . . . . . . . . . . . . . . . . . . . . 36 138 5.3.5. Research Questions . . . . . . . . . . . . . . . . . 37 139 5.3.6. Requirements . . . . . . . . . . . . . . . . . . . . 38 140 6. Enabling new COIN capabilities . . . . . . . . . . . . . . . 38 141 6.1. Distributed AI . . . . . . . . . . . . . . . . . . . . . 38 142 6.1.1. Description . . . . . . . . . . . . . . . . . . . . . 38 143 6.1.2. Characterization . . . . . . . . . . . . . . . . . . 39 144 6.1.3. Existing Solutions . . . . . . . . . . . . . . . . . 39 145 6.1.4. Opportunities . . . . . . . . . . . . . . . . . . . . 39 146 6.1.5. Research Questions . . . . . . . . . . . . . . . . . 40 147 6.1.6. Requirements . . . . . . . . . . . . . . . . . . . . 40 148 7. Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 40 149 7.1. Opportunities . . . . . . . . . . . . . . . . . . . . . . 40 150 7.2. Research Questions . . . . . . . . . . . . . . . . . . . 41 151 7.2.1. Categorization . . . . . . . . . . . . . . . . . . . 41 152 7.2.2. Analysis . . . . . . . . . . . . . . . . . . . . . . 42 153 7.3. Requirements . . . . . . . . . . . . . . . . . . . . . . 49 154 8. Security Considerations . . . . . . . . . . . . . . . . . . . 49 155 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 49 156 10. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 49 157 11. List of Use Case Contributors . . . . . . . . . . . . . . . . 50 158 12. 
References . . . . . . . . . . . . . . . . . . . . . . . . . 50 159 12.1. Normative References . . . . . . . . . . . . . . . . . . 50 160 12.2. Informative References . . . . . . . . . . . . . . . . . 50 161 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 53 163 1. Introduction 165 The Internet was designed as a best-effort packet network that offers 166 limited guarantees regarding the timely and successful transmission 167 of packets. Data manipulation, computation, and more complex 168 protocol functionality are generally provided by the end-hosts, while 169 network nodes are kept simple and only offer a "store and forward" 170 packet facility. This design choice has proven suitable for a wide 171 variety of applications and has helped in the rapid growth of the 172 Internet. 174 However, with the expansion of the Internet, more and more 175 fields require more than best-effort forwarding, including strict 176 performance guarantees or closed-loop integration to manage data 177 flows. In this context, allowing for a tighter integration of 178 computing and networking resources, enabling a more flexible 179 distribution of computation tasks across the network, e.g., beyond 180 'just' endpoints, may help to achieve the desired guarantees and 181 behaviors as well as increase overall performance. The vision of 182 'in-network computing' and the provisioning of such capabilities that 183 capitalize on joint computation and communication resource usage 184 throughout the network is core to the efforts in the COIN RG; we 185 refer to those capabilities as 'COIN capabilities' in the remainder 186 of the document. 
188 We believe that such a vision of 'in-network computing' can be best 189 outlined along four dimensions of use cases, namely those that (i) 190 provide new user experiences through the utilization of COIN 191 capabilities (referred to as 'COIN experiences'), (ii) enable new 192 COIN systems, e.g., through new interactions between communication 193 and compute providers, (iii) improve on already existing COIN 194 capabilities, and (iv) enable new COIN capabilities. Sections 3 195 through 6 capture those categories of use cases and provide the main 196 structure of this document. The goal is to present how the presence 197 of computing resources inside the network impacts existing services 198 and applications or allows for innovation in emerging fields. 200 Through delving into some individual examples within each of the 201 above categories, we aim to outline opportunities and propose 202 possible research questions for consideration by the wider community 203 when pushing forward the 'in-network computing' vision. Furthermore, 204 insights into possible requirements for an evolving solution space of 205 collected COIN capabilities are another objective of the individual 206 use case descriptions. This results in the following taxonomy used 207 to describe each of the use cases: 209 1. Description: Purpose of the use case and explanation of the use 210 case behavior. 212 2. Characterization: Explanation of the services that are being 213 utilized and realized as well as the semantics of interactions in 214 the use case. 216 3. Existing solutions: Description of current methods, if any exist, 217 that may realize the use case. 219 4. Opportunities: Outline of how COIN capabilities may support or 220 improve on the use case in terms of performance and other 221 metrics. 223 5. Research questions: Essential questions that are suitable 224 for guiding research to achieve the outlined opportunities. 226 6. 
Requirements: Description of the requirements for any solutions for 227 COIN capabilities that may need development along the 228 opportunities outlined in item 4; here, we limit requirements to 229 those COIN capabilities, recognizing that any use case will 230 realistically hold many additional requirements for its 231 realization. 233 In Section 7, we summarize the key research questions and identify 234 key requirements across all use cases. This 235 will provide a useful input into future roadmapping on what COIN 236 capabilities may emerge and what solutions for such capabilities may 237 look like. It will also identify what open questions remain for 238 these use cases to materialize as well as define requirements to 239 steer future (COIN) research work. 241 2. Terminology 243 The following terminology has been partly aligned with 244 [I-D.draft-kutscher-coinrg-dir]: 246 (COIN) Program: a set of computations requested by a user 248 (COIN) Program Instance: one currently executing instance of a 249 program 251 (COIN) Function: a specific computation that can be invoked as part 252 of a program 254 COIN Capability: a feature enabled through the joint processing of 255 computation and communication resources in the network 257 COIN Experience: a new user experience brought about through the 258 utilization of COIN capabilities 260 Programmable Network Devices (PNDs): network devices, such as network 261 interface cards and switches, which are programmable, e.g., using P4 262 or other languages. 
264 (COIN) Execution Environment: a class of target environments for 265 function execution, for example, a JVM-based execution environment 266 that can run functions represented in JVM byte code 268 COIN System: the PNDs (and end systems) and their execution 269 environments, together with the communication resources 270 interconnecting them, operated by a single provider or through 271 interactions between multiple providers that jointly offer COIN 272 capabilities 274 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 275 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 276 document are to be interpreted as described in RFC 2119 [RFC2119]. 278 3. Providing New COIN Experiences 280 3.1. Mobile Application Offloading 282 3.1.1. Description 284 The scenario can be exemplified in an immersive gaming application, 285 where a single user plays a game using a VR headset. The headset 286 hosts functions that "display" frames to the user, as well as 287 functions for VR content processing and frame rendering, which 288 combine with input data received from sensors in the VR headset. 290 Once this application is partitioned into constituent (COIN) programs 291 and deployed throughout a COIN system, utilizing the COIN execution 292 environment, only the "display" (COIN) programs may be left in the 293 headset, while the compute-intensive real-time VR content processing 294 (COIN) programs can be offloaded to a nearby resource-rich home PC or 295 a PND in the operator's access network, for better execution (faster, 296 and possibly with higher-resolution generation). 298 3.1.2. Characterization 300 Partitioning a mobile application into several constituent (COIN) 301 programs allows for denoting the application as a collection of 302 (COIN) functions for a flexible composition and a distributed 303 execution. 
In our example above, most functions of a mobile 304 application can be categorized into one of three function groups: 305 "receiving", "processing", and "displaying". 307 Any device may realize one or more of the (COIN) programs of a mobile 308 application and expose them to the (COIN) system and its constituent 309 (COIN) execution environments. When the (COIN) program sequence is 310 executed on a single device, the outcome is what you see today as 311 applications running on mobile devices. 313 However, the execution of (COIN) functions may be moved to other 314 (e.g., more suitable) devices, including PNDs, which have exposed the 315 corresponding (COIN) programs as individual (COIN) program instances 316 to the (COIN) system by means of a 'service identifier'. The result 317 of the latter is equivalent to 'mobile function offloading', for a 318 possible reduction of power consumption (e.g., offloading CPU- 319 intensive processing functions to a remote server) or for improved 320 end-user experience (e.g., moving display functions to a nearby smart 321 TV) by selecting more suitably placed (COIN) program instances in the 322 overall (COIN) system. 324 Figure 1 shows one realization of the above scenario, where a 'DPR 325 app' is running on a mobile device (containing the partitioned 326 Display(D), Process(P) and Receive(R) COIN programs) over an SDN 327 network. The packaged applications are made available through a 328 localized 'playstore server'. The mobile application installation is 329 realized as a 'service deployment' process, combining the local app 330 installation with a distributed (COIN) program deployment (and 331 orchestration) on the most suitable end systems or PNDs ('processing 332 server'). 
334 +----------+ Processing Server 335 Mobile | +------+ | 336 +---------+ | | P | | 337 | App | | +------+ | 338 | +-----+ | | +------+ | 339 | |D|P|R| | | | SR | | 340 | +-----+ | | +------+ | Internet 341 | +-----+ | +----------+ / 342 | | SR | | | / 343 | +-----+ | +----------+ +------+ 344 +---------+ /|SDN Switch|_____|Border| 345 +-------+ / +----------+ | SR | 346 | 5GAN |/ | +------+ 347 +-------+ | 348 +---------+ | 349 |+-------+| +----------+ 350 ||Display|| /|SDN Switch| 351 |+-------+| +-------+ / +----------+ 352 |+-------+| /|WIFI AP|/ 353 || D || / +-------+ +--+ 354 |+-------+|/ |SR| 355 |+-------+| /+--+ 356 || SR || +---------+ 357 |+-------+| |Playstore| 358 +---------+ | Server | 359 TV +---------+ 361 Figure 1: Application Function Offloading Example. 363 Such localized deployment could, for instance, be provided by a 364 visiting site, such as a hotel or a theme park. Once the 365 'processing' (COIN) program is terminated on the mobile device, the 366 'service routing' (SR) elements in the network route (service) 367 requests instead to the (previously deployed) 'processing' (COIN) 368 program running on the processing server over an existing SDN 369 network. Here, capabilities and other constraints for selecting the 370 appropriate (COIN) program, in case of having deployed more than one, 371 may be provided both in the advertisement of the (COIN) program and 372 the service request itself. 374 As an extension to the above scenarios, we can also envision that 375 content from one processing (COIN) program may be distributed to more 376 than one display (COIN) program, e.g., for multi/many-viewing 377 scenarios, thereby realizing a service-level multicast capability 378 towards more than one (COIN) program. 380 3.1.3. Existing Solutions 382 NOTE: material on solutions like ETSI MEC will be added here later 384 3.1.4. 
Opportunities 386 * The packaging of (COIN) programs into existing mobile application 387 packaging may enable the migration from the current (mobile) device- 388 centric execution of those mobile applications towards a possible 389 distributed execution of the constituent (COIN) programs that are 390 part of the overall mobile application. 392 * The orchestration for deploying (COIN) program instances in 393 specific end systems and PNDs alike may open up the possibility 394 for localized infrastructure owners, such as hotels or venue 395 owners, to offer their compute capabilities to their visitors for 396 improved or even site-specific experiences. 398 * The execution of (current mobile) app-level (COIN) programs may be 399 sped up by relocating the 400 execution to more suitable devices, including PNDs. 402 * The support for service-level routing of requests (service routing 403 in [APPCENTRES]) may support higher flexibility when switching from 404 one (COIN) program instance to another, e.g., due to changing 405 constraints for selecting the new (COIN) program instance. 407 * The ability to identify service-level in-network computing 408 elements will allow for routing service requests to those COIN 409 elements, including PNDs, therefore possibly allowing for new in- 410 network functionality to be included in the mobile application. 412 * The support for constraint-based selection of a specific (COIN) 413 program instance over others (constraint-based routing in 414 [APPCENTRES]) may allow for a more flexible and app-specific 415 selection of (COIN) program instances, thereby better 416 meeting app-specific and end-user requirements. 418 3.1.5. Research Questions 420 * RQ 3.1.1: How to combine service-level orchestration frameworks 421 with app-level packaging methods? 
423 * RQ 3.1.2: How to reduce latencies involved in (COIN) program 424 interactions where (COIN) program instance locations may change 425 quickly? 427 * RQ 3.1.3: How to signal constraints used for routing requests 428 towards (COIN) program instances in a scalable manner? 430 * RQ 3.1.4: How to identify (COIN) programs and program instances? 432 * RQ 3.1.5: How to identify a specific choice of (COIN) program 433 instances over others? 435 * RQ 3.1.6: How to provide affinity of service requests towards 436 (COIN) program instances, i.e., longer-term transactions with 437 ephemeral state established at a specific (COIN) program instance? 439 * RQ 3.1.7: How to provide constraint-based routing decisions at 440 packet forwarding speed? 442 * RQ 3.1.8: What in-network capabilities may support the execution 443 of (COIN) programs and their instances? 445 3.1.6. Requirements 447 * Req 3.1.1: Any COIN system MUST provide means for routing of 448 service requests between resources in the distributed environment. 450 * Req 3.1.2: Any COIN system MUST provide means for identifying 451 services exposed by (COIN) programs for directing service requests. 453 * Req 3.1.3: Any COIN system MUST provide means for identifying 454 (COIN) program instances for directing (affinity) requests to a 455 specific (COIN) program instance. 457 * Req 3.1.4: Any COIN system MUST provide means for dynamically 458 choosing the best possible service sequence of one or more (COIN) 459 programs for a given application experience, i.e., support for 460 chaining (COIN) program executions. 462 * Req 3.1.5: Means for discovering suitable (COIN) programs SHOULD 463 be provided. 465 * Req 3.1.6: Any COIN system MUST provide means for pinning the 466 execution of a service of a specific (COIN) program to a specific 467 resource, i.e., a (COIN) program instance in the distributed 468 environment. 
470 * Req 3.1.7: Any COIN system SHOULD provide means for packaging 471 micro-services for deployments in distributed networked computing 472 environments. 474 * Req 3.1.8: The packaging MAY include any constraints regarding the 475 deployment of (COIN) program instances in specific network 476 locations or compute resources, including PNDs. 478 * Req 3.1.9: Such packaging SHOULD conform to existing application 479 deployment models, such as mobile application packaging, TOSCA 480 orchestration templates, or tarballs, or combinations thereof. 482 * Req 3.1.10: Any COIN system MUST provide means for real-time 483 synchronization and consistency of distributed application states. 485 3.2. Extended Reality and Immersive Media 487 3.2.1. Description 489 Virtual Reality (VR), Augmented Reality (AR), and immersive media (the 490 metaverse), taken together as Extended Reality (XR), are the drivers of 491 a number of advances in interactive technologies. XR is one example 492 of the Multisource-Multidestination Problem that combines video, 493 haptics, and tactile experiences in interactive or networked multi- 494 party and social interactions. While initially associated with 495 gaming and entertainment, XR applications now include remote 496 diagnosis, maintenance, telemedicine, manufacturing and assembly, 497 autonomous systems, smart cities, and immersive classrooms. 499 Because XR requires real-time 500 interactivity for increasingly mobile immersive 501 applications, with tactile and time-sensitive data, high bandwidth 502 for high-resolution images, and local rendering of 3D images and 503 holograms, XR applications are difficult to run over traditional 504 networks; in consequence, innovation is needed to realize the full 505 potential of these applications. 507 3.2.2. 
Characterization 509 Collaborative XR experiences are difficult to deliver with a client- 510 server cloud-based solution, as they require a combination of stream 511 synchronization, low delays and delay variations, means to recover 512 from losses, and optimized caching and rendering as close as possible 513 to the user at the network edge. XR deals with personal information 514 and potentially protected content; thus, an XR application must also 515 provide a secure environment and ensure user privacy. Additionally, 516 the sheer amount of data needed for and generated by XR 517 applications calls for mechanisms, including 518 machine learning, that identify trends and reduce the size of the data 519 sets. Video holography and haptics require very low delay or 520 generate large amounts of data, both requiring a careful look at data 521 filtering and reduction, functional distribution, and partitioning. 523 The operation of XR over networks requires some computing in the 524 nodes from content source to destination. However, many of these 525 capabilities remain in the realm of research, as they must resolve the 526 resource allocation problem and provide adequate quality of experience, 527 including multi-variate and heterogeneous goal optimization problems at 528 merging nodes requiring advanced analysis. Image rendering and video processing in 529 XR leverage different combinations of HW capabilities (CPU and GPU) at 530 the edge (even at the mobile edge) and in the fog network where the 531 content is consumed. It is important to note that the use of in- 532 network computing for XR does not imply a specific protocol but 533 targets an architecture enabling the deployment of the services. 535 3.2.3. 
Existing Solutions 537 In-network computing for XR builds on extensive 538 research in past years on Information Centric Networking, Machine 539 Learning, network telemetry, imaging, and IoT, as well as distributed 540 security and in-network coding. 542 * Enabling Scalable Edge Video Analytics with Computing-In-Network 543 (Junchen Jiang of the University of Chicago): this work introduces 544 periodic re-profiling to adapt the video pipeline to the dynamic 545 video content that is a characteristic of XR. The implication is 546 that we "need tight network-app coupling" for real-time video 547 analytics. 549 * VR journalism, interactive VR movies and meetings in cyberspace 550 (many projects: PBS, MIT interactive documentary lab, Huawei 551 research - references to be provided): typical VR is not made for 552 multiparty use, and these applications require a tight coupling of 553 local and remote rendering and data capture, and combinations of 554 cloud (for more static information) and edge (for dynamic 555 content). 557 * Local rendering of holographic content using near-field 558 computation (heritage from advanced cockpit interactions - looking 559 for non-military papers): a lot has been said recently about the 560 large amounts of data necessary to transmit and use holographic 561 imagery in communications. Transmitting the near-field 562 information and rendering the image locally makes it possible to 563 reduce the data rates by 1 or 2. 565 * ICE-AR [ICE] project at UCLA (Jeff Burke): while this project is a 566 showcase of the NDN network architecture, it also uses a lot of 567 edge-cloud capabilities, for example for inter-server games and 568 advanced video applications. 570 3.2.4. 
Opportunities 572 * Reduced latency: the physical distance between the content cloud 573 and the users must be short enough to limit the propagation delay 574 to the 20 ms usually cited for XR applications; the use of local 575 CPU and IoT devices for region of interest (RoI) detection and 576 dynamic rendering may enable this. 578 * Video transmission: better transcoding and use of advanced 579 context-based compression algorithms, pre-fetching, pre-caching, 580 and movement prediction, not only in the cloud. 582 * Monitoring: telemetry is a major research topic for COIN, and it 583 enables monitoring and distributing the XR services. 585 * Network access: moving some networking functions from kernel space 586 into user space to enable the deployment of stream-specific 587 algorithms for congestion control and application-based load 588 balancing based on machine learning and user data patterns. 590 * Functional decomposition: localization 591 and discovery of computing and storage resources in the network. 592 This means not only finding the best resources but also qualifying 593 those resources in terms of reliability, especially for mission- 594 critical XR services (medicine, for example). This could include 595 intelligence services. 597 3.2.5. Research Questions 599 * RQ 3.2.1: Are current programmable network entities sufficient 600 to provide the speed required to execute complex 601 filtering operations that include metadata analysis for complex 602 and dynamic scene rendering? 604 * RQ 3.2.2: How can the interoperability of CPU/GPU be optimized to 605 combine low-level packet filtering with the higher-layer 606 processors needed for image processing and haptics? 
608 * RQ 3.2.3: Can joint learning algorithms across both 609 data center and edge computers be used to create an optimal 610 functionality allocation and semi-permanent 611 datasets and analytics for usage trending, resulting in better 612 localization of XR functions? 614 * RQ 3.2.4: Can COIN improve the dynamic distribution of control, 615 forwarding, and storage resources and related usage models in XR? 617 3.2.6. Requirements 619 * Req 3.2.1: Allow joint collaboration. 621 * Req 3.2.2: Provide multi-views. 623 * Req 3.2.3: Include extra streams dynamically for data-intensive 624 services, manufacturing, and industrial processes. 626 * Req 3.2.4: Enable multistream, multidevice, multidestination 627 applications. 629 * Req 3.2.5: Use new Internet Architectures at the edge for improved 630 performance and performance management. 632 * Req 3.2.6: Integrate with holography, 3D displays, and image 633 rendering processors. 635 * Req 3.2.7: Allow the use of multicast distribution and processing as 636 well as peer-to-peer distribution in bandwidth- and capacity- 637 constrained environments. 639 * Req 3.2.8: Evaluate the integration of local and fog caching with 640 cloud-based pre-rendering. 642 * Req 3.2.9: Evaluate ML-based congestion control to manage XR 643 sessions' quality of service and to determine how to prioritize 644 data. 646 * Req 3.2.10: Consider higher-layer protocol optimization to reduce 647 latency, especially in data-intensive applications at the edge. 649 * Req 3.2.11: Provide trust, including blockchains and smart 650 contracts, to enable secure community building across domains. 652 * Req 3.2.12: Support nomadicity and mobility (link to mobile edge). 654 * Req 3.2.13: Use 5G slicing to create independent session-driven 655 processing/rendering. 657 * Req 3.2.14: Provide performance optimization by data reduction, 658 tunneling, session virtualization, and loss protection. 
660 * Req 3.2.15: Use AI/ML for trend analysis and data reduction when 661 appropriate. 663 3.3. Personalised and Interactive Performing Arts 664 3.3.1. Description 666 This use case covers live productions of the performing arts where 667 the performers and audience are in different physical locations. The 668 performance is conveyed to the audience through multiple networked 669 streams, which may be tailored to the requirements of individual 670 audience members; and the performers receive live feedback from the 671 audience. 673 There are two main aspects: i) to emulate as closely as possible the 674 experience of live performances where the performers and audience are 675 co-located in the same physical space, such as a theatre; and ii) to 676 enhance traditional physical performances with features such as 677 personalisation of the experience according to the preferences or 678 needs of the audience members. 680 Examples of personalisation include: 682 * Viewpoint selection, such as choosing a specific seat in the 683 theatre, or more advanced positioning of the audience member's 684 viewpoint outside of the traditional seating - amongst, above or 685 behind the performers (but within some limits which may be imposed 686 by the performers or the director for artistic reasons); 688 * Augmentation of the performance with subtitles, audio-description, 689 actor-tagging, language translation, advertisements/product- 690 placement, other enhancements/filters to make the performance 691 accessible to disabled audience members (removal of flashing 692 images for epileptics, alternative colour schemes for colour-blind 693 audience members, etc.). 695 3.3.2. Characterization 697 There are several chained functional entities which are candidates 698 for being deployed as (COIN) Programs.
700 * Performer aggregation and editing functions 702 * Distribution and encoding functions 704 * Personalisation functions 706 - to select which of the existing streams should be forwarded to 707 the audience member 709 - to augment streams with additional metadata such as subtitles 710 - to create new streams after processing existing ones: to 711 interpolate between camera angles to create a new viewpoint or 712 to render point clouds from the audience member's chosen 713 perspective 715 - to undertake remote rendering according to viewer position, 716 e.g. creation of VR headset display streams according to 717 audience head position - when this processing has been 718 offloaded from the viewer's end-system to the in-network 719 function due to limited processing power in the end-system, or 720 to limited network bandwidth to receive all of the individual 721 streams to be processed. 723 * Audience feedback sensor processing functions 725 * Audience feedback aggregation functions 727 These are candidates for deployment as (COIN) Programs in PNDs rather 728 than being located in end-systems (at the performers' site, the 729 audience members' premises or in a central cloud location) for 730 several reasons: 732 * Personalisation of the performance according to audience 733 preferences and requirements makes centralised processing at the 734 performer premises unfeasible: the computational 735 resources and network bandwidth would need to scale with the 736 number of audience members' personalised streams. 738 * Rendering of VR headset content to follow viewer head movements 739 has an upper bound on lag to maintain viewer QoE, which requires 740 the processing to be undertaken sufficiently close to the viewer 741 to avoid large network latencies.
743 * Viewer devices may not have the processing power to undertake the 744 personalisation or the viewers' network may not have the capacity 745 to receive all of the constituent streams to undertake the 746 personalisation functions. 748 * There are strict latency requirements for live and interactive 749 aspects that require the deviation from the direct network path 750 from performers to audience to be minimised, which reduces the 751 opportunity to route streams via large-scale processing 752 capabilities at centralised data-centres. 754 3.3.3. Existing solutions 756 Note: Existing solutions for some aspects of this use case are 757 covered in the Mobile Application Offloading, Extended Reality, and 758 Content Delivery Networks use cases. 760 3.3.4. Opportunities 762 * Executing media processing and personalisation functions on-path 763 as (COIN) Programs in PNDs will avoid detour/stretch to central 764 servers, which would increase latency as well as consume 765 bandwidth on more network resources (links and routers). For 766 example, in this use case the chain of (COIN) Programs and 767 propagation over the interconnecting network segments for 768 performance capture, aggregation, distribution, personalisation, 769 consumption, capture of audience response, feedback processing, 770 aggregation, rendering should be achieved within an upper bound of 771 latency (the tolerable amount is to be defined, but in the order 772 of 100s of ms to mimic performers perceiving audience feedback, 773 such as laughter or other emotional responses in a theatre 774 setting). 776 * Processing of media streams allows (COIN) Programs, PNDs and the 777 wider (COIN) System/Environment to be contextually aware of flows 778 and their requirements, which can be used for determining network 779 treatment of the flows, e.g. path selection, prioritisation, 780 multi-flow coordination, synchronisation & resilience. 782 3.3.5.
Research Questions 784 * RQ 3.3.1: In which PNDs should (COIN) Programs for aggregation, 785 encoding and personalisation functions be located? Close to the 786 performers or close to the audience members? 788 * RQ 3.3.2: How far from the direct network path from performer to 789 audience should (COIN) programs be located, considering the 790 latency implications of path-stretch and the availability of 791 processing capacity at PNDs? How should tolerances be defined by 792 users? 794 * RQ 3.3.3: Should users decide which PNDs should be used for 795 executing (COIN) Programs for their flows or should they express 796 requirements and constraints that will direct decisions by the 797 orchestrator/manager of the COIN System? 799 * RQ 3.3.4: How to achieve network synchronisation across multiple 800 streams to allow for merging, audio-video interpolation and other 801 cross-stream processing functions that require time 802 synchronisation for the integrity of the output? How can this be 803 achieved considering that synchronisation may be required between 804 flows that are: i) on the same data pathway through a PND/router, 805 ii) arriving/leaving through different ingress/egress interfaces 806 of the same PND/router, iii) routed through disjoint paths through 807 different PNDs/routers? 809 * RQ 3.3.5: Where will COIN Programs be executed? In the data- 810 plane of PNDs, in other on-router computational capabilities 811 within PNDs, or in adjacent computational nodes? 813 * RQ 3.3.6: Are computationally-intensive tasks - such as video 814 stitching or media recognition and annotation - considered as 815 suitable candidate (COIN) Programs or should they be implemented 816 in end-systems? 818 * RQ 3.3.7: If the execution of COIN Programs is offloaded to 819 computational nodes outside of PNDs, e.g. for processing by GPUs, 820 should this still be considered as in-network processing?
Where 821 is the boundary between in-network processing capabilities and 822 explicit routing of flows to end-systems? 824 3.3.6. Requirements 826 * Req 3.3.1: Users should be able to specify requirements on network 827 and processing metrics (such as latency and throughput bounds) and 828 the COIN System should be able to respect those requirements and 829 constraints when routing flows and selecting PNDs for executing 830 (COIN) Programs. 832 * Req 3.3.2: A COIN System should be able to synchronise flow 833 treatment and processing across multiple related flows which may 834 be on disjoint paths. 836 4. Supporting new COIN Systems 838 While the best-effort nature of the Internet enables a wide variety 839 of applications, there are several domains whose requirements are 840 hard to satisfy over regular best-effort networks. 842 Consequently, there is a large number of specialized appliances and 843 protocols designed to provide the required strict performance 844 guarantees, e.g., regarding real-time capabilities. 846 Time-Sensitive Networking [TSN], e.g., as an enhancement to standard 847 Ethernet, tries to achieve these requirements on the link layer 848 by statically reserving shares of the bandwidth. However, solutions 849 on the link layer alone are not always sufficient. 851 The industrial domain, e.g., is currently evolving towards increasingly 852 interconnected systems, in turn increasing the complexity of the 853 underlying networks, making them more dynamic, and creating more 854 diverse sets of requirements. Concepts satisfying the dynamic 855 performance requirements of modern industrial applications thus 856 become harder to develop. In this context, COIN offers new 857 possibilities as it allows computation tasks to be flexibly distributed 858 across the network and enables novel forms of interaction between 859 communication and computation providers.
861 This document illustrates the potential for new COIN systems using 862 the example of the industrial domain by characterizing and analyzing 863 specific scenarios to showcase potential requirements, as specifying 864 general requirements is difficult due to the domain's mentioned 865 diversity. 867 4.1. Industrial Network Scenario 869 Common components of industrial networks can be divided into three 870 categories as illustrated in Figure 2. Following 871 [I-D.mcbride-edge-data-discovery-overview], EDGE DEVICES, such as 872 sensors and actuators, constitute the boundary between the physical 873 and digital world. They communicate the current state of the 874 physical world to the digital world by transmitting sensor data or 875 let the digital world interact with the physical world by executing 876 actions after receiving (simple) control information. The processing 877 of the sensor data and the creation of the control information are 878 done on COMPUTING DEVICES. They range from low-powered controllers 879 close to the EDGE DEVICES, to more powerful edge or remote clouds at 880 larger distances. The connection between the EDGE and COMPUTING 881 DEVICES is established by NETWORKING DEVICES. In the industrial 882 domain, they range from standard devices, e.g., typical Ethernet 883 switches, which can interconnect all Ethernet-capable hosts, to 884 proprietary equipment with proprietary protocols only supporting 885 hosts of specific vendors. 887 -------- 888 |Sensor| ------------| ~~~~~~~~~~~~ ------------ 889 -------- ------------- { Internet } --- |Remote Cloud| 890 . |Access Point|--- ~~~~~~~~~~~~ ------------ 891 -------- ------------- | | 892 |Sensor| ----| | | | 893 -------- | | -------- | 894 . | | |Switch| ---------------------- 895 . | | -------- | 896 . | | ------------ | 897 ---------- | |----------------- | Controller | | 898 |Actuator| ------------ ------------ | 899 ---------- | -------- ------------ 900 .
|----|Switch|---------------------------| Edge Cloud | 901 ---------- -------- ------------ 902 |Actuator| ---------| 903 ---------- 905 |-----------| |------------------| |-------------------| 906 EDGE DEVICES NETWORKING DEVICES COMPUTING DEVICES 908 Figure 2: Industrial networks show a high level of heterogeneity. 910 4.2. In-Network Control / Time-Sensitive Applications 912 4.2.1. Description 914 The control of physical processes and components of a production line 915 is essential for the growing automation of production and ideally 916 allows for a consistent quality level. Traditionally, the control 917 has been exercised by control software running on programmable logic 918 controllers (PLCs) located directly next to the controlled process or 919 component. This approach is best-suited for settings with a simple 920 model that is focused on a single or few controlled components. 922 Modern production lines and shop floors are characterized by an 923 increasing number of involved devices and sensors, a growing level of 924 dependency between the different components, and more complex control 925 models. A centralized control is desirable to manage the large 926 amount of available information, which often has to be pre-processed 927 or aggregated with other information before it can be used. PLCs are 928 not designed for this array of tasks and computations could 929 theoretically be moved to more powerful devices. However, these devices are 930 no longer close to the controlled objects, which induces additional 931 latency. Moving compute functionality onto COIN execution 932 environments inside the network offers a new solution space to these 933 challenges. 935 4.2.2. Characterization 937 A control process consists of two main components as illustrated in 938 Figure 3: a system under control and a controller.
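As a minimal illustration of such a controller (a hypothetical Python sketch; the reference value, gain, and system reaction are invented for illustration and are not part of this document's system model):

```python
# Sketch of a feedback control loop: the controller observes the system
# state via sensors and steers it towards a reference state.
# Hypothetical illustration; gain and system dynamics are invented.

def control_loop(reference, state, steps=50, gain=0.5):
    """Proportional controller: actuate on the difference between the
    reference state and the observed state."""
    for _ in range(steps):
        error = reference - state   # observed deviation from reference
        actuation = gain * error    # controller output
        state = state + actuation   # system reacts to the actuation
    return state

final = control_loop(reference=100.0, state=0.0)
# the state converges towards the reference state (100.0)
```

In the in-network variant discussed below, a simplified loop of this kind would run on a PND close to the system, while a more accurate controller remains in the cloud.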
940 In feedback control, the current state of the system is monitored, 941 e.g., using sensors, and the controller influences the system based on 942 the difference between the current and the reference state to keep it 943 close to this reference state. 945 reference 946 state ------------ -------- Output 947 ----------> | Controller | ---> | System | ----------> 948 ^ ------------ -------- | 949 | | 950 | observed state | 951 | --------- | 952 -------------------| Sensors | <----- 953 --------- 955 Figure 3: Simple feedback control model. 957 Apart from the control model, the quality of the control primarily 958 depends on the timely reception of the sensor feedback, which can be 959 subject to tight latency constraints, often in the single-digit 960 millisecond range. While low latencies are essential, there is an 961 even greater need for stable and deterministic levels of latency, 962 because controllers can generally cope with different levels of 963 latency, if they are designed for them, but they are significantly 964 challenged by dynamically changing or unstable latencies. The 965 unpredictable latency of the Internet exemplifies this problem if, 966 e.g., off-premise cloud platforms are included. 968 4.2.3. Existing Solutions 970 Control functionality is traditionally executed on PLCs close to the 971 machinery. These PLCs typically require vendor-specific 972 implementations and are often hard to upgrade and update, which makes 973 such control processes inflexible and difficult to manage. Moving 974 computations to more freely programmable devices thus has the 975 potential of significantly improving the flexibility. In this 976 context, directly moving control functionality to (central) cloud 977 environments is generally possible, yet only feasible if latency 978 constraints are lenient. 980 4.2.4.
Opportunities 982 COIN offers the possibility of bringing the system and the controller 983 closer together, thus possibly satisfying the latency requirements, 984 by performing simple control logic on PNDs and/or in COIN execution 985 environments. 987 While control models, in general, can become complex, there is a 988 variety of control algorithms that are composed of simple 989 computations such as matrix multiplication. These are supported by 990 some PNDs and it is thus possible to compose simplified 991 approximations of the more complex algorithms and deploy them in the 992 network. While the simplified versions induce a less accurate 993 control, they allow for a quicker response and might be sufficient to 994 operate a basic tight control loop while the overall control can 995 still be exercised from the cloud. 997 Opportunities: 999 * Execute simple (end-host) COIN functions on PNDs to satisfy tight 1000 latency constraints of control processes 1002 4.2.5. Research Questions 1004 Bringing the required computations to PNDs is challenging as these 1005 devices typically only allow for integer precision computation while 1006 floating-point precision is needed by most control algorithms. 1007 Additionally, computational capabilities vary for different available 1008 PNDs [KUNZE]. Yet, early approaches like [RUETH] and [VESTIN] have 1009 already shown the general applicability of such ideas, but there are 1010 still many open research questions, including the following: 1012 Research Questions: 1014 * RQ 4.2.1: How to derive simplified versions of the global 1015 (control) function? 1017 - How to account for the limited computational precision of PNDs? 1019 - How to find suitable tradeoffs regarding simplicity of the 1020 control function ("accuracy of the control") and implementation 1021 complexity ("implementability")? 1023 * RQ 4.2.2: How to distribute the simplified versions in the 1024 network?
1025 - Can there be different control levels, e.g., "quite inaccurate 1026 & very low latency" (PNDs, deep in the network), "more accurate 1027 & higher latency" (more powerful COIN execution environments, 1028 farther away), "very accurate & very high latency" (cloud 1029 environments, far away)? 1031 - Who decides which control instance is executed and how? 1033 - How do the different control instances interact? 1035 4.2.6. Requirements 1037 * Req 4.2.1: The interaction between the COIN execution environments 1038 and the global controller SHOULD be explicit. 1040 * Req 4.2.2: The interaction between the COIN execution environments 1041 and the global controller MUST NOT negatively impact the control 1042 quality. 1044 * Req 4.2.3: Actions of the COIN execution environments MUST be 1045 overridable by the global controller. 1047 * Req 4.2.4: Functions in COIN execution environments SHOULD be 1048 executed with predictable delay. 1050 * Req 4.2.5: Functions in COIN execution environments MUST be 1051 executed with predictable accuracy. 1053 4.3. Large Volume Applications - Filtering 1055 4.3.1. Description 1057 In modern industrial networks, processes and machines can be 1058 monitored closely, resulting in large volumes of available 1059 information. This data can be used to find previously unknown 1060 correlations between different parts of the value chain, e.g., by 1061 deploying machine learning (ML) techniques, which in turn helps to 1062 improve the overall production system. Newly gained knowledge can be 1063 shared between different sites of the same company or even between 1064 different companies [PENNEKAMP]. 1066 Traditional company infrastructure is neither equipped for the 1067 management and storage of such large amounts of data nor for the 1068 computationally expensive training of ML approaches.
Off-premise 1069 cloud platforms offer cost-effective solutions with a high degree of 1070 flexibility and scalability; however, moving all data to off-premise 1071 locations poses infrastructural challenges. Pre-processing or 1072 filtering the data already in COIN execution environments can be a 1073 new solution to this challenge. 1075 4.3.2. Characterization 1077 4.3.2.1. General Characterization of Large Volume Applications 1079 Processes in the industrial domain are monitored by distributed 1080 sensors which range from simple binary ones (e.g., light barriers) to 1081 sophisticated sensors measuring the system with varying degrees of 1082 resolution. Sensors can further serve different purposes, as some 1083 might be used for time-critical process control while others are only 1084 used as redundant fallback platforms. Overall, there is a high level 1085 of heterogeneity which makes managing the sensor output a challenging 1086 task. 1088 Depending on the deployed sensors and the complexity of the observed 1089 system, the resulting overall data volume can easily be in the range 1090 of several Gbit/s [GLEBKE]. Using off-premise clouds for managing 1091 the data requires uploading or streaming the growing volume of sensor 1092 data using the companies' Internet access, which is typically limited 1093 to a few hundred Mbit/s. While large networking companies can 1094 simply upgrade their infrastructure, most industrial companies rely 1095 on traditional ISPs for their Internet access. Higher access speeds 1096 are hence tied to higher costs and, above all, subject to the supply 1097 of the ISPs and consequently not always available. A major challenge 1098 is thus to devise a methodology that is able to handle such amounts 1099 of data over limited access links.
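The mismatch can be made concrete with a back-of-envelope calculation (hypothetical Python; the concrete rates are illustrative assumptions picked from the ranges named above, not measured values):

```python
# Back-of-envelope estimate of the required on-premise data reduction.
# Assumed, illustrative rates: "several Gbit/s" of sensor data (cf.
# [GLEBKE]) vs. "a few hundred Mbit/s" of ISP access capacity.
sensor_volume_bps = 5_000_000_000  # assumed aggregate sensor data rate (5 Gbit/s)
access_link_bps = 200_000_000      # assumed Internet access capacity (200 Mbit/s)

# Factor by which filtering/preprocessing must shrink the data
# before it can be streamed off-premise without loss.
reduction_factor = sensor_volume_bps / access_link_bps
print(f"required reduction: at least {reduction_factor:.0f}x")
# prints: required reduction: at least 25x
```

Under these assumptions, on-premise filtering or preprocessing must discard or condense well over 90% of the raw sensor volume before it reaches the access link.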
1101 Another aspect is that business data leaving the premises and control 1102 of the company further comes with security concerns, as sensitive 1103 information or valuable business secrets might be contained in it. 1104 Typical security measures such as encrypting the data make COIN 1105 techniques hard to apply, as they typically work on unencrypted 1106 data. Adding security to COIN approaches, either by adding 1107 functionality for handling encrypted data or devising general 1108 security measures, is thus a promising field of research, which we 1109 describe in more detail in Section 8. 1111 4.3.2.2. Specific Characterization for Filtering Solutions 1113 Sensors are often set up redundantly, i.e., part of the collected 1114 data might also be redundant. Moreover, they are often hard to 1115 configure or not configurable at all, which is why their resolution or 1116 sampling frequency is often higher than required. Consequently, it 1117 is likely that more data is transmitted than is needed or desired. 1119 4.3.3. Existing Solutions 1121 Current approaches for handling such large amounts of information 1122 typically build upon stream processing frameworks such as Apache 1123 Flink. While they allow for handling large volume applications, they 1124 are tied to performant server machines and upscaling the information 1125 density also requires a corresponding upscaling of the compute 1126 infrastructure. 1128 4.3.4. Opportunities 1130 PNDs and COIN execution environments are in a unique position to 1131 reduce the data rates due to their line-rate packet processing 1132 capabilities. Using these capabilities, it is possible to filter out 1133 redundant or undesired data before it leaves the premises using simple 1134 traffic filters that are deployed in the on-premise network. There 1135 are different approaches to how this topic can be tackled. 1137 A first step could be to scale down the available sensor data to the 1138 data rate that is needed.
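Such rate scaling corresponds to a simple decimation filter. A minimal sketch (hypothetical Python for illustration only; an actual deployment would express this in the data-plane language of the PND, e.g., P4):

```python
# Sketch of a rate-scaling ("decimation") filter as it could run on a
# PND: keep every n-th sensor packet and drop the rest.  Hypothetical
# illustration; function names and packet model are invented.

def make_decimation_filter(n):
    """Return a per-packet predicate that passes every n-th packet."""
    counter = 0
    def accept(packet):
        nonlocal counter
        counter += 1
        if counter == n:
            counter = 0
            return True
        return False
    return accept

# A 5 kHz sensor stream filtered down to 1 kHz: pass 1 packet in 5.
accept = make_decimation_filter(5)
kept = [pkt for pkt in range(20) if accept(pkt)]  # 20 packets -> 4 kept
```

This variant is oblivious to packet contents; the value-dependent variant described next would additionally inspect the sensor reading in the payload before deciding.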
For example, if a sensor transmits at a 1139 frequency of 5 kHz, but the control entity only needs 1 kHz, only 1140 every fifth packet containing sensor data is let through. 1141 Alternatively, sensor data could be filtered down to a lower 1142 frequency while the sensor value is in an uninteresting range, but 1143 let through with higher resolution once the sensor value range 1144 becomes interesting. 1146 While the former variant is oblivious to the semantics of the sensor 1147 data, the latter variant requires an understanding of the current 1148 sensor levels. In any case, it is important that end-hosts are 1149 informed about the filtering so that they can distinguish between 1150 data loss and data filtered out on purpose. 1152 Opportunities: 1154 * (Semantic) packet filtering based on packet header and payload, as 1155 well as multi-packet information 1157 4.3.5. Research Questions 1159 * RQ 4.3.1: How to design COIN programs for (semantic) packet 1160 filtering? 1162 - Which criteria for filtering make sense? 1164 * RQ 4.3.2: How to distribute and coordinate COIN programs? 1166 * RQ 4.3.3: How to dynamically change COIN programs? 1168 * RQ 4.3.4: How to signal traffic filtering by COIN programs to end- 1169 hosts? 1171 4.3.6. Requirements 1173 * Req 4.3.1: Filters MUST conform to application-level syntax and 1174 semantics. 1176 * Req 4.3.2: Filters MAY leverage packet header and payload 1177 information. 1179 * Req 4.3.3: Filters SHOULD be reconfigurable at run-time. 1181 4.4. Large Volume Applications - (Pre-)Processing 1183 4.4.1. Description 1185 See Section 4.3.1. 1187 4.4.2. Characterization 1189 4.4.2.1. General Characterization of Large Volume Applications 1191 See Section 4.3.2.1. 1193 4.4.2.2. Specific Characterization for Preprocessing Solutions 1195 There are manifold computations that can be performed on the sensor 1196 data in the cloud.
Some of them are very complex or need the 1197 complete sensor data during the computation, but there are also 1198 simpler operations which can be done on subsets of the overall 1199 dataset or earlier on the communication path as soon as all data is 1200 available. One example is finding the maximum of all sensor values 1201 which can either be done iteratively at each intermediate hop or at 1202 the first hop, where all data is available. 1204 4.4.3. Existing Solutions 1206 See Section 4.3.3. 1208 4.4.4. Opportunities 1210 Using expert knowledge about the exact computation steps and the 1211 concrete transmission path of the sensor data, simple computation 1212 steps can be deployed in the on-premise network to reduce the overall 1213 data volume and potentially speed up the processing time in the 1214 cloud. 1216 Related work has already shown that in-network aggregation can help 1217 to improve the performance of distributed ML applications [SAPIO]. 1218 Investigating the applicability of stream data processing techniques 1219 to PNDs is also interesting, because sensor data is usually streamed. 1221 Opportunities: 1223 * (Semantic) data (pre-)processing, e.g., in the form of 1224 computations across multiple packets and potentially leveraging 1225 packet payload 1227 4.4.5. Research Questions 1229 * RQ 4.4.1: Which kinds of COIN programs can be leveraged for 1230 (pre-)processing steps? 1232 - How complex can they become? 1234 * RQ 4.4.2: How to distribute and coordinate COIN programs? 1236 * RQ 4.4.3: How to dynamically change COIN programs? 1238 * RQ 4.4.4: How to incorporate the (pre-)processing steps into the 1239 overall system? 1241 4.4.6. Requirements 1243 * Req 4.4.1: Preprocessors MUST conform to application-level syntax 1244 and semantics. 1246 * Req 4.4.2: Preprocessors MAY leverage packet header and payload 1247 information. 1249 * Req 4.4.3: Preprocessors SHOULD be reconfigurable at run-time. 1251 4.5. Industrial Safety 1253 4.5.1. 
Description 1255 Despite increasing automation in production processes, human 1256 workers are still often necessary. Consequently, safety measures 1257 have a high priority to ensure that no human life is endangered. In 1258 traditional factories, the regions of contact between humans and 1259 machines are well-defined and interactions are simple. Simple safety 1260 measures like emergency switches at the working positions are enough 1261 to provide a decent level of safety. 1263 Modern factories are characterized by increasingly dynamic and 1264 complex environments with new interaction scenarios between humans 1265 and robots. Robots can either directly assist humans or perform 1266 tasks autonomously. The intersection between the working areas of humans and 1267 robots grows, and it is harder for human workers to fully observe 1268 the complete environment. Additional safety measures are essential 1269 to prevent accidents and support humans in observing the environment. 1271 4.5.2. Characterization 1273 Industrial safety measures are typically hardware solutions because 1274 they have to pass rigorous testing before they are certified and 1275 deployment-ready. Standard measures include safety switches and 1276 light barriers. Additionally, the working area can be explicitly 1277 divided into 'contact' and 'safe' areas, indicating when workers have 1278 to watch out for interactions with machinery. 1280 These measures are static solutions, potentially relying on 1281 specialized hardware, and are challenged by the increased dynamics of 1282 modern factories where the factory configuration can be changed on 1283 demand. Software solutions offer higher flexibility as they can 1284 dynamically respect new information gathered by the sensor systems, 1285 but in most cases they cannot give guaranteed safety. 1287 4.5.3. Existing Solutions 1289 Due to the importance of safety, there is a wide range of software- 1290 based approaches aiming at enhancing safety.
One example is tag- 1291 based systems, e.g., using RFID, where drivers of forklifts can be 1292 warned if pedestrian workers carrying tags are nearby. Such 1293 solutions, however, require setting up an additional system and do 1294 not leverage existing sensor data. 1296 4.5.4. Opportunities 1298 COIN systems could leverage the increased availability of sensor data 1299 and the detailed monitoring of the factories to enable additional 1300 safety measures. Different safety indicators within the production 1301 hall can be combined within the network so that PNDs can give early 1302 responses if a potential safety breach is detected. 1304 One possibility could be to track the positions of human workers and 1305 robots. Whenever a robot gets too close to a human in a non-working 1306 area or if a human enters a defined safety zone, robots are stopped 1307 to prevent injuries. More advanced concepts could also include image 1308 data or combine arbitrary sensor data. 1310 Opportunities: 1312 * Execute simple (end-host) COIN functions on PNDs to create early 1313 emergency reactions based on diverse sensor feedback 1315 4.5.5. Research Questions 1317 * RQ 4.5.1: Which additional safety measures can be provided? 1319 - Do these measures actually improve safety? 1321 * RQ 4.5.2: Which sensor information can be combined and how? 1323 4.5.6. Requirements 1325 * Req 4.5.1: COIN-based safety measures MUST NOT degrade existing 1326 safety measures. 1328 * Req 4.5.2: COIN-based safety measures MAY enhance existing safety 1329 measures. 1331 5. Improving existing COIN capabilities 1333 5.1. Content Delivery Networks 1335 5.1.1. Description 1337 Delivery of content to end users often relies on Content Delivery 1338 Networks (CDNs) storing said content closer to end users for latency- 1339 reduced delivery, with DNS-based indirection being utilized to serve 1340 the request on behalf of the origin server. 1342 5.1.2.
Characterization 1344 From the perspective of this draft, a CDN can be interpreted as a 1345 (network service level) set of (COIN) programs, implementing a 1346 distributed logic for distributing content from the origin server to 1347 the CDN ingress and further to the CDN replication points, which 1348 ultimately serve the user-facing content requests. 1350 5.1.3. Existing Solutions 1352 NOTE: material on solutions will be added here later 1354 Studies such as those in [FCDN] have shown that content distribution 1355 at the level of named content, utilizing efficient (e.g., Layer 2) 1356 multicast for replication towards edge CDN nodes, can significantly 1357 increase the overall network and server efficiency. It also reduces 1358 indirection latency for content retrieval as well as the required 1359 edge storage capacity by benefiting from the increased network 1360 efficiency to renew edge content more quickly against changing 1361 demand. 1363 5.1.4. Opportunities 1365 * Supporting service-level routing of requests (service routing in 1366 [APPCENTRES]) to specific (COIN) program instances may improve the 1367 end user experience through faster retrieval of (possibly also more, 1368 e.g., better quality) content. 1370 * Supporting the constraint-based selection of a specific (COIN) 1371 program instance over others (constraint-based routing in 1372 [APPCENTRES]) may improve the overall end user experience by 1373 selecting a 'more suitable' (COIN) program instance over another, 1374 e.g., avoiding/reducing overload situations in specific (COIN) 1375 program instances. 1377 * Supporting Layer 2 capabilities for multicast (compute 1378 interconnection and collective communication in [APPCENTRES]) may 1379 increase the network utilization and therefore increase the 1380 overall system utilization. 1382 5.1.5. Research Questions 1384 In addition to the research questions for Section 3.1: 1386 * RQ 5.1.1: How to utilize L2 multicast to improve on CDN designs?
1387 How to utilize in-network capabilities in those designs? 1389 * RQ 5.1.2: What forwarding methods may support the required 1390 multicast capabilities (see [FCDN])? 1392 * RQ 5.1.3: What are the right routing constraints that reflect both 1393 compute and network capabilities? 1395 * RQ 5.1.4: Could traffic steering be performed at the data path and 1396 per service request? If so, what would the performance 1397 improvements be? 1399 * RQ 5.1.5: How could storage be traded off against frequent, 1400 multicast-based, replication (see [FCDN])? 1402 * RQ 5.1.6: What scalability limits exist for L2 multicast 1403 capabilities? How to overcome them? 1405 5.1.6. Requirements 1407 Requirements 3.1.1 through 3.1.6 also apply for CDN service access. 1408 In addition: 1410 * Req 5.1.1: Any solution SHOULD utilize Layer 2 multicast 1411 transmission capabilities for responses to concurrent service 1412 requests. 1414 5.2. Compute-Fabric-as-a-Service (CFaaS) 1416 5.2.1. Description 1418 Layer 2 connected compute resources, e.g., in regional or edge data 1419 centres, base stations and even end-user devices, provide the 1420 opportunity for infrastructure providers to offer CFaaS-type 1421 offerings to application providers. App and service providers may 1422 utilize the compute fabric exposed by this CFaaS offering for the 1423 purposes defined through their applications and services. In other 1424 words, the compute resources can be utilized to execute the desired 1425 (COIN) programs of which the application is composed, while utilizing 1426 the inter-connection between those compute resources to do so in a 1427 distributed manner. 1429 5.2.2. Characterization 1431 We foresee those CFaaS offerings to be tenant-specific, a tenant here 1432 being defined as the provider of at least one application. For this, we 1433 foresee an interaction between the CFaaS provider and the tenant to 1434 dynamically select the appropriate resources to define the demand 1435 side of the fabric.
Conversely, we also foresee the supply side of 1436 the fabric to be highly dynamic with resources being offered to the 1437 fabric through, e.g., user-provided resources (whose supply might 1438 depend on highly context-specific supply policies) or infrastructure 1439 resources of intermittent availability such as those provided through 1440 road-side infrastructure in vehicular scenarios. 1442 The resulting dynamic demand-supply matching establishes a dynamic 1443 nature of the compute fabric that in turn requires trust 1444 relationships to be built dynamically between the resource 1445 provider(s) and the CFaaS provider. This also requires the 1446 communication resources to be dynamically adjusted to interconnect 1447 all resources suitably into the (tenant-specific) fabric exposed as 1448 CFaaS. 1450 5.2.3. Existing Solutions 1452 NOTE: material on solutions will be added here later 1454 5.2.4. Opportunities 1456 * Supporting service-level routing of compute resource requests 1457 (service routing in [APPCENTRES]) may allow for utilizing the 1458 wealth of compute resources in the overall CFaaS fabric for 1459 execution of distributed applications, where the distributed 1460 constituents of those applications are realized as (COIN) programs 1461 and executed within a COIN system as (COIN) program instances. 1463 * Supporting the constraint-based selection of a specific (COIN) 1464 program instance over others (constraint-based routing in 1465 [APPCENTRES]) will allow for optimizing both CFaaS provider 1466 constraints and tenant-specific constraints. 1468 * Supporting Layer 2 capabilities for multicast (compute 1469 interconnection and collective communication in [APPCENTRES]) will 1470 allow for increasing both network utilization and possibly 1471 compute utilization (due to avoiding unicast replication at those 1472 compute endpoints), thereby decreasing total cost of ownership for 1473 the CFaaS offering. 1475 5.2.5.
Research Questions 1477 Similar to those for Section 3.1. In addition: 1479 * RQ 5.2.1: How to convey tenant-specific requirements for the 1480 creation of the L2 fabric? 1482 * RQ 5.2.2: How to dynamically integrate resources, particularly 1483 when driven by tenant-level requirements and changing service- 1484 specific constraints? 1486 * RQ 5.2.3: How to utilize in-network capabilities to aid the 1487 availability and accountability of resources, i.e., what may be 1488 (COIN) programs for a CFaaS environment that in turn would utilize 1489 the distributed execution capability of a COIN system? 1491 5.2.6. Requirements 1493 For the provisioning of services atop the CFaaS, requirements 3.1.1 1494 through 3.1.6 should be addressed, too. In addition: 1496 * Req 5.2.1: Any solution SHOULD expose means to specify the 1497 requirements for the tenant-specific compute fabric being utilized 1498 for the service execution. 1500 * Req 5.2.2: Any solution SHOULD allow for dynamic integration of 1501 compute resources into the compute fabric being utilized for the 1502 app execution; those resources include, but are not limited to, 1503 end-user-provided resources. From a COIN system perspective, it 1504 must be possible to expose new resources as possible (COIN) 1505 execution environments. 1507 * Req 5.2.3: Any solution MUST provide means to optimize the inter- 1508 connection of compute resources, including those dynamically added 1509 and removed during the provisioning of the tenant-specific compute 1510 fabric. 1512 * Req 5.2.4: Any solution MUST provide means for ensuring that the 1513 availability and usage of resources are accounted for. 1515 5.3. Virtual Networks Programming 1517 5.3.1. Description 1519 The term "virtual network programming" is proposed to describe 1520 mechanisms by which tenants deploy and operate COIN programs in their 1521 virtual network. Such COIN programs can, for example, be P4 programs, 1522 OpenFlow rules, or higher layer programs.
This feature can enable 1523 other use cases described in this draft to be deployed using virtual 1524 network services, over underlying networks such as datacenters, 1525 mobile networks, or other fixed or wireless networks. 1527 For example, COIN programs could perform the following on a tenant's 1528 virtual network: 1530 * Allow or block flows, and request rules from an SDN controller for 1531 each new flow, or for flows to or from specific hosts that need 1532 enhanced security 1534 * Forward a copy of some flows towards a node for storage and 1535 analysis 1537 * Update counters based on specific sources/destinations or 1538 protocols, for detailed analytics 1540 * Associate traffic between specific endpoints, using specific 1541 protocols, or originated from a given application, to a given 1542 slice, while other traffic uses a default slice 1544 * Experiment with a new routing protocol (e.g., ICN), using a P4 1545 implementation of a router for this protocol 1547 5.3.2. Characterization 1549 To provide a concrete example of virtual COIN programming, we 1550 consider a use case using a 5G underlying network, the 5GLAN 1551 virtualization technology, and the P4 programming language and 1552 environment. Section 5.1 of [I-D.ravi-icnrg-5gc-icn] provides a 1553 description of the 5G network functions and interfaces relevant to 1554 5GLAN, which are otherwise specified in [TS23.501] and [TS23.502]. 1555 From the 5GLAN service customer/tenant standpoint, the 5G network 1556 operates as a switch. 1558 In the use case depicted in Figure 4, the tenant operates a network 1559 including a 5GLAN network segment (seen as a single logical switch), 1560 as well as fixed segments. This can be in a plant or enterprise 1561 network, using, for example, a 5G Non-Public Network (NPN). The 1562 tenant uses P4 programs to determine the operation of the fixed and 1563 5GLAN switches.
The tenant provisions a 5GLAN P4 program into the 1564 mobile network, and can also operate a controller. The mobile 1565 devices (or User Equipment nodes) UE1, UE2, UE3 and UE4 are in the 1566 same 5GLAN, as well as Device1 and Device2 (through UE4). 1568 ..... Tenant ........ 1569 P4 program : : 1570 deployment : Operation : 1571 V : 1572 +-----+ air interface +----------------+ : 1573 | UE1 +----------------+ | : 1574 +-----+ | | : 1575 | | : 1576 +-----+ | | V 1577 | UE2 +----------------+ 5GLAN | +------------+ 1578 +-----+ | Logical +------+ Controller | 1579 | Switch | P4 +-------+----+ 1580 +-----+ | | runtime | 1581 | UE3 +----------------+ | API | 1582 +-----+ | | | 1583 | | | 1584 +-----+ | | | 1585 +-+ UE4 +----------------+ | | 1586 | +-----+ +----------------+ | 1587 | | 1588 | Fixed or wireless connection | 1589 | P4 runtime API | 1590 | +---------+ +-------------------------------+ 1591 +--+ Device1 | | 1592 | +---------+ | 1593 | | 1594 | +---------+ +------+-----+ 1595 `--+ Device2 +----+ P4 Switch +--->(fixed network) 1596 +---------+ +------------+ 1598 Figure 4: 5G Virtual Network Programming Overview 1600 Looking in more detail at Figure 5, the 5GLAN P4 program can be 1601 split between multiple data plane nodes (PDU Session Anchor (PSA) 1602 User Plane Functions (UPF), other UPFs, or even mobile devices), 1603 although in some cases the P4 program may be hosted on a single node. 1604 In the most general case, a distributed deployment is useful to keep 1605 traffic on optimal paths, because, except in simple cases, not all 1606 traffic within a 5GLAN will pass through a single node. In this 1607 example, P4 programs could be deployed in UPF1, UPF2, UPF3, UE3 and 1608 UE4. UE1-UE2 traffic uses a local switch on PSA UPF1, UE1-UE3 1609 traffic is tunneled between PSA UPF1 and PSA UPF2 through the N19 1610 interface, and UE1-UE4 traffic is forwarded through an external Data 1611 Network (DN).
Traffic between Device1 and Device2 is forwarded 1612 through UE4. 1614 +-----+ +-----+ +------------+ 1615 | AMF | | SMF | | Controller | 1616 +-+-+-+ +--+--+ +-----+------+ 1617 / | | P4| 1618 +---------+ | N4| Runtime| 1619 N1 / |N2 | V 1620 +------+ | | (all P4 programs*) 1621 / | | 1622 +--+--+ air interface +---+-----+ N3 +-+--+----------+ N6 +----+ 1623 | UE1 +----------------+ (R)AN +----+ PSA UPF1* +----->+ | 1624 +-----+ +---------+ +-+-------+-----+ | | 1625 | | | | | | | 1626 +--+--+ +---+-----+ | | | | | 1627 | UE2 +----------------+ (R)AN +------' | | N19 | DN | 1628 +-----+ +---------+ | | | | 1629 | | | | | | 1630 +--+--+ +---+-----+ +----+----+-----+ | | 1631 | UE3*+----------------+ (R)AN +----+ PSA UPF2* + | | 1632 +-----+ +---------+ +---------+-----+ | | 1633 | | | | N19 | | 1634 +--+--+ +---+-----+ +----+----+-----+ N6 | | 1635 +-+ UE4*+----------------+ (R)AN +----+ PSA UPF3* +----->+ | 1636 | +-----+ +---------+ +---------------+ +----+ 1637 | 1638 | Fixed or wireless connection 1639 | 1640 | +---------+ 1641 +--+ Device1 | (* indicates the presence of a P4 program) 1642 | +---------+ 1643 | 1644 | +---------+ +------------+ 1645 `--+ Device2 +----+ P4 Switch* +--->(fixed network) 1646 +---------+ +------------+ 1648 Figure 5: 5G Virtual Network Programming Details 1650 5.3.3. Existing Solutions 1652 Research has been conducted, for example by [Stoyanov], to enable P4 1653 network programming of individual virtual switches. To our 1654 knowledge, no complete solution has been developed for deploying 1655 virtual COIN programs over mobile or datacenter networks. 1657 5.3.4. Opportunities 1659 Virtual network programming by tenants could bring benefits such as: 1661 * A unified programming model, which can facilitate porting in- 1662 network computing between data centers, 5G networks, and other 1663 fixed and wireless networks, as well as sharing controllers, code, 1664 and expertise.
1666 * Increasing the level of customization available to customers/ 1667 tenants of mobile networks or datacenters, when compared with 1668 typical configuration capabilities. For example, 5G network 1669 evolution points to an ever-increasing specialization and 1670 customization of private mobile networks, which could be handled 1671 by tenants using a programming model similar to P4. 1673 * Using network programs to influence the underlying network service 1674 (e.g., request specific QoS for some flows in 5G or datacenters), 1675 to increase the level of in-depth customization available to 1676 tenants. 1678 5.3.5. Research Questions 1680 * RQ 5.3.1: Underlying Network Awareness: a virtual COIN program may 1681 be able to influence, and be influenced by, the underlying network 1682 (e.g., the 5G network or data center). For example, a virtual 1683 COIN program may be aware of the slice used by a flow, and 1684 possibly influence slice selection. Since some information and 1685 actions may be available on some nodes and not others, underlying 1686 network awareness may impose additional constraints on the 1687 placement of distributed network programs. 1689 * RQ 5.3.2: Splitting/Distribution: a virtual COIN program may need 1690 to be deployed across multiple computing nodes, leading to 1691 research questions around instance placement and distribution. A 1692 primary reason for this is that program logic should be applied 1693 exactly once, or at least once, per packet, while allowing optimal 1694 forwarding paths in the underlying network. For example, a 5GLAN 1695 P4 program may need to run on multiple UPFs. Research challenges 1696 include defining manual (by the programmer) or automatic methods 1697 to distribute COIN programs that use a low or minimal amount of 1698 resources. Distributed P4 programs are studied in 1699 [I-D.hsingh-coinrg-reqs-p4comp] and [Sultana].
1701 * RQ 5.3.3: Multi-Tenancy Support: multiple virtual COIN program 1702 instances can run on the same compute node. While mechanisms were 1703 proposed for P4 multi-tenancy in a switch [Stoyanov], research 1704 questions remain about isolation between tenants and fair 1705 repartition of resources. 1707 * RQ 5.3.4: Security: how can tenants and underlying networks be 1708 protected against security risks, including overuse or misuse of 1709 network resources, injection of traffic, and access to unauthorized 1710 traffic? 1712 * RQ 5.3.5: Higher layer processing: can a virtual network model 1713 facilitate the deployment of COIN programs acting on application 1714 layer data? This is an open question since the present section 1715 focused on packet/flow processing. 1717 5.3.6. Requirements 1719 * Req 5.3.1: A COIN system supporting virtualization should enable 1720 tenants to deploy COIN programs onto their virtual networks. 1722 * Req 5.3.2: A virtual COIN program should process flows/packets 1723 once and only once (or at least once for idempotent operations), 1724 even if the program is distributed over multiple PNDs. 1726 * Req 5.3.3: Multi-tenancy should be supported for virtual COIN 1727 programs, i.e., instances of virtual COIN programs from different 1728 tenants can share underlying PNDs. This includes requirements for 1729 secure isolation between tenants, and fair (or policy-based) 1730 sharing of computing resources. 1732 * Req 5.3.4: Virtual COIN programs should support mobility of 1733 endpoints. 1735 6. Enabling new COIN capabilities 1737 6.1. Distributed AI 1739 6.1.1. Description 1741 There is a growing range of use cases demanding the realization 1742 of AI capabilities among distributed endpoints. Such demand may be 1743 driven by the need to increase overall computational power for large- 1744 scale problems.
From a COIN perspective, those capabilities may be 1745 realized as (COIN) programs and executed throughout the COIN system, 1746 including in PNDs. 1748 Some solutions may desire the localization of reasoning logic, e.g., 1749 for deriving attributes that better preserve privacy of the utilized 1750 raw input data. Quickly establishing (COIN) program instances in 1751 nearby compute resources, including PNDs, may even satisfy such 1752 localization demands on-the-fly (e.g., when a particular use is being 1753 realized, then terminated after a given time). 1755 6.1.2. Characterization 1757 Examples of large-scale AI problems include biotechnology- and 1758 astronomy-related reasoning over massive amounts of observational 1759 input data. Examples of localizing input data for privacy reasons 1760 include radar-like applications for the development of topological 1761 mapping data based on (distributed) radio measurements at base 1762 stations (and possibly end devices), while the processing within 1763 radio access networks (RAN) already constitutes a distributed AI 1764 problem to a certain extent, albeit with little flexibility in 1765 distributing the execution of the AI logic. 1767 6.1.3. Existing Solutions 1769 Reasoning frameworks, such as TensorFlow, may be utilized for the 1770 realization of the (distributed) AI logic, building on remote service 1771 invocation through protocols such as gRPC [GRPC] or MPI [MPI] with 1772 the intention of providing an on-chip NPU (neural processor unit)- 1773 like abstraction to the AI framework. 1775 NOTE: material on solutions like ETSI MEC and 3GPP work will be added 1776 here later 1778 6.1.4.
Opportunities 1780 * Supporting service-level routing of requests (service routing in 1781 [APPCENTRES]), with AI services being exposed to the network and 1782 executed as part of (COIN) programs in selected (COIN) program 1783 instances, may provide a highly distributed execution of the 1784 overall AI logic, thereby addressing, e.g., localization as well as 1785 computational concerns (scale-in/out). 1787 * The support for constraint-based selection of a specific (COIN) 1788 program instance over others (constraint-based routing in 1789 [APPCENTRES]) may allow for utilizing the most suitable HW 1790 capabilities (e.g., support for specific AI HW assistance in the 1791 COIN element, including a PND), while also allowing the selection 1792 of resources, e.g., based on available compute ability such as the 1793 number of cores to be used. 1795 * Supporting collective communication between multiple instances of 1796 AI services, i.e., (COIN) program instances, may positively impact 1797 both network and compute utilization by moving from unicast 1798 replication to network-assisted multicast operation. 1800 6.1.5. Research Questions 1802 * RQ 6.1.1: Similar to those for the use case in Section 3.1 1804 * RQ 6.1.2: What are the communication patterns that may be 1805 supported by collective communication solutions? 1807 * RQ 6.1.3: How to achieve scalable multicast delivery with rapidly 1808 changing receiver sets? 1810 * RQ 6.1.4: What in-network capabilities may support the collective 1811 communication patterns found in distributed AI problems? 1813 * RQ 6.1.5: How to provide a service routing capability that 1814 supports any invocation protocol (beyond HTTP)? 1816 6.1.6. Requirements 1818 Requirements 3.1.1 through 3.1.6 also apply for general distributed 1819 AI capabilities.
In addition: 1821 * Req 6.1.1: Any COIN system MUST provide means to specify the 1822 constraints for placing (AI) execution logic in the form of (COIN) 1823 programs in certain logical execution points (and their associated 1824 physical locations), including PNDs. 1826 * Req 6.1.2: Any COIN system MUST provide support for app/micro- 1827 service specific invocation protocols for requesting (COIN) 1828 program services exposed to the COIN system. 1830 7. Analysis 1832 The goal of this analysis is to identify aspects that are relevant 1833 across all use cases to help in shaping the research agenda of 1834 COINRG. For this purpose, this section will condense the 1835 opportunities, research questions, as well as requirements of the 1836 different presented use cases and analyze these for similarities 1837 across the use cases. 1839 Through this, we intend to identify cross-cutting opportunities, 1840 research questions, as well as requirements (for COIN system 1841 solutions) that may aid the future work of COINRG as well as the 1842 larger research community. 1844 7.1. Opportunities 1846 To be added later. 1848 7.2. Research Questions 1850 After carefully considering the different use cases along with their 1851 research questions, we propose the following layered categorization 1852 to structure the content of the research questions, which we 1853 illustrate in Figure 6.
1855 +--------------------------------------------------------------+ 1856 + Applicability Areas + 1857 + .............................................................+ 1858 + Transport | App | Data | Routing & | (Industrial) + 1859 + | Design | Processing | Forwarding | Control + 1860 +--------------------------------------------------------------+ 1862 +--------------------------------------------------------------+ 1863 + Distributed Computing FRAMEWORKS and LANGUAGES to COIN + 1864 +--------------------------------------------------------------+ 1866 +--------------------------------------------------------------+ 1867 + ENABLING TECHNOLOGIES for COIN + 1868 +--------------------------------------------------------------+ 1870 +--------------------------------------------------------------+ 1871 + VISION(S) for COIN + 1872 +--------------------------------------------------------------+ 1874 Figure 6: Research Questions Categorization 1876 7.2.1. Categorization 1878 Three categories deal with concretizing fundamental building blocks 1879 of COIN and COIN itself. 1881 * VISION(S) for COIN: Questions that aim at defining and shaping the 1882 exact scope of COIN. 1884 * ENABLING TECHNOLOGIES for COIN: Questions that target the 1885 capabilities of the technologies and devices intended to be used 1886 in COIN. 1888 * Distributed Computing FRAMEWORKS and LANGUAGES to COIN: Questions 1889 that aim at concretizing what a framework or languages for 1890 deploying and operating COIN systems might look like. 1892 Additionally, there are use-case-near research questions that are 1893 heavily influenced by the specific constraints and goals of the use 1894 cases. We call this category "applicability areas" and refine it 1895 into the following subgroups: 1897 * Transport: 1899 * App Design: 1901 * Data Processing: 1903 * Routing & Forwarding: 1905 * (Industrial) Control 1907 7.2.2. Analysis 1909 7.2.2.1.
VISION(S) for COIN 1911 The following research questions presented in the use cases belong to 1912 this category: 1914 3.1.8, 3.2.1, 3.3.5, 3.3.6, 3.3.7, 5.3.3, 6.1.2, 6.1.4 1916 The research questions centering around the COIN VISION dig into what 1917 is considered COIN and what scope COIN functionality should have. In 1918 contrast to the ENABLING TECHNOLOGIES, this section looks at the 1919 problem from a more philosophical perspective. 1921 7.2.2.1.1. Where to perform computations 1923 The first aspect of this is where/on which devices COIN programs 1924 will/should be executed (3.3.5). In particular, it is debatable 1925 whether COIN programs will/should only be executed in PNDs or whether 1926 other "adjacent" computational nodes are also in scope. In the case 1927 of the latter, an arising question is whether such computations are 1928 still to be considered as "in-network processing" and where the exact 1929 line is between "in-network processing" and "routing to end systems" 1930 (3.3.7). In this context, it is also interesting to reason about the 1931 desired feature sets of PNDs (and other COIN execution environments) 1932 as these will shift the line between "in-network processing" and 1933 "routing to end systems" (3.1.8). 1935 7.2.2.1.2. Are tasks suitable for COIN 1937 Digging deeper into the desired feature sets, some research questions 1938 address which domains are to be considered of 1939 interest or relevance to COIN. For example, whether computationally- 1940 intensive tasks are suitable candidates for (COIN) Programs (3.3.6). 1942 7.2.2.1.3. (Is COIN)/(What parts of COIN are) suitable for the tasks 1944 Turning the previous aspect around, some questions try to reason 1945 whether COIN can be sensibly used for specific tasks. For example, 1946 it is an open question whether current PNDs are fast and expressive 1947 enough for complex filtering operations (3.2.1).
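To illustrate the kind of per-packet logic at stake, the following minimal match-action sketch mimics, in plain Python rather than P4, a filtering table with wildcards and a default action; all field names, rules, and actions are hypothetical illustrations, not part of any cited system:

```python
# Minimal match-action filter table, mimicking the kind of per-packet
# logic a PND would have to execute at line rate. Field names, rules,
# and actions are hypothetical.
WILDCARD = None

RULES = [  # ((src_prefix, dst_port, proto), action); first match wins
    (("10.0.1.", 443, "tcp"), "allow"),
    (("10.0.2.", WILDCARD, "udp"), "mirror"),   # copy flow for analysis
    ((WILDCARD, WILDCARD, WILDCARD), "drop"),   # default action
]

def classify(packet):
    """Return the action of the first rule matching the packet."""
    for (src, port, proto), action in RULES:
        if ((src is WILDCARD or packet["src"].startswith(src))
                and (port is WILDCARD or packet["dst_port"] == port)
                and (proto is WILDCARD or packet["proto"] == proto)):
            return action
    return "drop"
```

A hardware pipeline would realize such a table with exact-match or ternary-match stages; the open question above is whether richer, stateful variants of this logic still fit the per-packet time budget of a PND.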
1949 There are also more general notions of this question, e.g., what "in- 1950 network capabilities" might be used to address certain problem 1951 patterns (6.1.4) and what new patterns might be supported (6.1.2). 1952 What is interesting about these different questions is that the 1953 former raises the question of whether COIN can be used for specific 1954 tasks, while the latter asks which tasks in a larger domain COIN might 1955 be suitable for. 1957 7.2.2.1.4. What are desired forms for deploying COIN functionality 1959 The final topic addressed in this part deals with the deployment 1960 vision for COIN programs (5.3.3). 1962 In general, multiple programs can be deployed on a single PND/COIN 1963 element. However, to date, multi-tenancy concepts are, above all, 1964 available for "end-host-based" platforms, and, as such, there are 1965 manifold questions centering around (1) whether multi-tenancy is 1966 desirable for PNDs/COIN elements and (2) how exactly such 1967 functionality should be shaped, e.g., which (new forms of) 1968 hardware support need to be provided by PNDs/COIN elements. 1970 7.2.2.2. ENABLING TECHNOLOGIES for COIN 1972 The following research questions presented in the use cases belong to 1973 this category: 1975 3.1.7, 3.1.8, 3.2.2, 4.3.4, 4.4.4, 5.1.1, 5.1.2, 5.1.6, 5.3.1, 6.1.3, 1976 6.1.4 1978 The research questions centering around the ENABLING TECHNOLOGIES for 1979 COIN dig into what technologies are needed to enable COIN, which of 1980 the existing technologies can be reused for COIN and what might be 1981 needed to make the VISION(S) for COIN a reality. In contrast to the 1982 VISION(S), this section looks at the problem from a practical 1983 perspective. 1985 7.2.2.2.1. COIN compute technologies 1987 Picking up on the topics discussed in Section 7.2.2.1.1 and 1988 Section 7.2.2.1.2, this category deals with how such technologies 1989 might be realized in PNDs and which functionality should be 1990 realized at all (3.1.8).
1992 7.2.2.2.2. Forwarding technology 1994 Another group of research questions focuses on "traditional" 1995 networking tasks, i.e., L2/L3 switching and routing decisions. 1997 For example, how COIN-powered routing decisions can be provided at 1998 line-rate (3.1.7). Similarly, how (L2) multicast can be used for 1999 COIN (and vice versa) (5.1.1), which (new) forwarding capabilities might 2000 be required within PNDs to support the concepts (5.1.2), and how 2001 scalability limits of existing multicast capabilities might be 2002 overcome using COIN (5.1.6). 2004 In this context, it is also interesting how these technologies can be 2005 used to address quickly changing receiver sets (6.1.3), especially in 2006 the context of collective communication (6.1.4). 2008 7.2.2.2.3. Incorporating COIN in existing systems 2010 Some research questions deal with how COIN 2011 (functionality) can be integrated into existing systems. 2013 For example, if COIN is used to perform traffic filtering, how end- 2014 hosts can be made aware that data/information/traffic is deliberately 2015 withheld (4.3.4). Similarly, if data is pre-processed by COIN, how 2016 the new semantics of the received data can be signaled to end-hosts 2017 (4.4.4). 2019 In particular, these are not only questions concerning the 2020 functionality scope of PNDs or protocols but might also depend on how 2021 programming frameworks for COIN are designed. Overall, this category 2022 deals with how to handle knowledge and action imbalances between 2023 different nodes within COIN networks (5.3.1). 2025 7.2.2.2.4. Enhancing device interoperability 2027 Finally, the increasing diversity of devices within COIN raises 2028 interesting questions about how the capabilities of the different 2029 devices can be combined and optimized (3.2.2). 2031 7.2.2.3.
Distributed Computing FRAMEWORKS and LANGUAGES to COIN 2033 The following research questions presented in the use cases belong to 2034 this category: 2036 3.1.1, 3.2.3, 3.3.1, 3.3.2, 3.3.3, 3.3.5, 4.2.1, 4.2.2, 4.3.2/4.4.2, 2037 4.3.3/4.4.3, 4.3.4, 4.4.4, 5.2.1, 5.2.2, 5.2.3, 5.3.1, 5.3.2, 5.3.3, 2038 5.3.4, 5.3.5 2040 This category mostly deals with how COIN programs can be deployed and 2041 orchestrated. 2043 7.2.2.3.1. COIN program composition 2045 One aspect of this topic is how the exact functional scope of COIN 2046 programs can/should be defined. For example, one idea might be to 2047 define an "overall" program that then needs to be deployed to several 2048 devices (5.3.2). In that case, how should this composition be done: 2049 manually or automatically? Further aspects to consider here are how 2050 the different computational capabilities of the available devices can 2051 be taken into account and how these can be leveraged to obtain 2052 suitable distributed versions of the overall program (4.2.1). 2054 In particular, it is an open question how "service-level" 2055 frameworks can be combined with "app-level" packaging methods (3.1.1) 2056 or whether virtual network models can help facilitate the composition 2057 of COIN programs (5.3.5). This topic again includes the 2058 considerations regarding multi-tenancy support (5.3.3, cf. 2059 Section 7.2.2.1.4) as such function distribution might necessitate 2060 deploying functions of several entities on a single device. 2062 7.2.2.3.2. COIN function placement 2064 In this context, another interesting aspect is where exactly 2065 functions should be placed and who should influence these decisions. 2066 Such function placement could, e.g., be guided by the available 2067 devices (3.3.5, cf. Section 7.2.2.1.1) and their position with 2068 regard to the communicating entities (3.3.1), and it could also be 2069 specified in terms of the "distance" from the "direct" network path 2070 (3.3.2).
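As a purely illustrative sketch of the latter idea, the following Python fragment selects an execution point among candidate nodes by bounding the hop-count detour over the "direct" network path; the graph, node names, and hop-count metric are assumptions for illustration, not part of any COIN framework:

```python
from collections import deque

def hops(graph, src):
    """BFS hop counts from src in an undirected adjacency-list graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

def place_function(graph, src, dst, candidates, max_stretch):
    """Pick the candidate execution point whose detour over the direct
    src-dst path is smallest and within max_stretch extra hops."""
    from_src, from_dst = hops(graph, src), hops(graph, dst)
    direct = from_src[dst]
    best, best_detour = None, None
    for node in candidates:
        detour = from_src[node] + from_dst[node] - direct
        if detour <= max_stretch and (best is None or detour < best_detour):
            best, best_detour = node, detour
    return best
```

Real placement decisions would, of course, involve further dimensions such as load, device capabilities, and policy, which is precisely what the cited research questions probe.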
2072 However, it might also be an option to leave the decision to users or 2073 at least provide means to express requirements/constraints (3.3.3). 2074 Here, the main question is how tenant-specific requirements can 2075 actually be conveyed (5.2.1). 2077 7.2.2.3.3. COIN function deployment 2079 Once the position for deployment is fixed, the next problem that arises 2080 is how the functions can actually be deployed (4.3.2,4.4.2). Here, the 2081 first relevant questions are how COIN programs/program instances can 2082 be identified (3.1.4) and how preferences for specific COIN program 2083 instances can be noted (3.1.5). It is then interesting to define how 2084 different COIN programs can be coordinated (4.3.2,4.4.2), especially 2085 if there are program dependencies (4.2.2, cf. Section 7.2.2.3.1). 2087 7.2.2.3.4. COIN dynamic system operation 2089 In addition to static solutions to the described problems, the 2090 increasing dynamics of today's networks will also require dynamic 2091 solutions. For example, it might be necessary to dynamically change 2092 COIN programs at run-time (4.3.3, 4.4.3) or to include new resources, 2093 especially if service-specific constraints or tenant requirements 2094 change (5.2.2). It will be interesting to see if COIN frameworks can 2095 actually support the sometimes required dynamic changes (3.2.4). In 2096 this context, providing availability and accountability of resources 2097 can also be an important aspect. 2099 7.2.2.3.5. COIN system integration 2101 COIN systems will potentially not exist in isolation, but will 2102 have to interact with existing systems. Thus, there are also several 2103 questions addressing the integration of COIN systems into existing 2104 ones. As already described in Section 7.2.2.2.3, the semantics of 2105 changes made by COIN programs, e.g., filtering packets or changing 2106 payload, will have to be communicated to end-hosts (4.3.4,4.4.4).
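One conceivable way to communicate such changed semantics, shown here purely as an illustration and not as a proposed wire format, is a small metadata header that a (COIN) program prepends to pre-processed payloads; the layout and processing-type codes below are invented for this sketch:

```python
import struct

# Hypothetical 4-byte "COIN processed" metadata header: 1 byte version,
# 1 byte processing type (e.g., 1 = filtered, 2 = aggregated), and
# 2 bytes for the number of original records the payload represents.
# Network byte order; purely illustrative, not a standardized format.
COIN_META = struct.Struct("!BBH")

def tag_payload(payload, proc_type, original_records):
    """Prepend the metadata header to a pre-processed payload."""
    return COIN_META.pack(1, proc_type, original_records) + payload

def parse_payload(data):
    """Recover the metadata and the remaining payload at the end-host."""
    version, proc_type, records = COIN_META.unpack_from(data)
    return {"version": version, "proc_type": proc_type,
            "original_records": records,
            "payload": data[COIN_META.size:]}
```

Whether such signaling belongs in a packet header, a separate control channel, or the application protocol itself is exactly the kind of design question raised by (4.3.4) and (4.4.4).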
2107 Overall, there has to be a common middle ground so that COIN systems 2108 can provide new functionality while not breaking "legacy" systems. 2109 How to bridge different levels of "network awareness" (5.3.1) in an 2110 explicit and general manner might be a crucial aspect to investigate. 2112 7.2.2.3.6. COIN system properties - optimality, security and more 2114 A final category deals with meta objectives that should be tackled 2115 while thinking about how to realize the new concepts. In particular, 2116 devising strategies for achieving an optimal function allocation/ 2117 placement is important to effectively handle the high heterogeneity of the 2118 involved devices (3.2.3). 2120 On another note, security in all its facets needs to be considered as 2121 well, e.g., how to protect against misuse of the systems, 2122 unauthorized traffic and more (5.3.4). We acknowledge that these 2123 issues are not yet discussed in detail in this document. 2125 7.2.2.4. Applicability Area - Transport 2127 The following research questions presented in the use cases belong to 2128 this category: 2130 3.1.2 2132 Further research questions concerning transport solutions are 2133 discussed in more detail in [TRANSPORT]. 2135 Today's transport protocols are generally intended for end-to-end 2136 communications. Thus, one important question is how COIN program 2137 interactions should be handled, especially if the deployment 2138 locations of the program instances change (quickly) (3.1.2). 2140 7.2.2.5. Applicability Area - App Design 2142 The following research questions presented in the use cases belong to 2143 this category: 2145 4.3.1, 5.1.1, 5.1.3, 5.1.5 2147 The possibility of incorporating COIN resources into application 2148 programs increases the scope for how applications can be designed and 2149 implemented.
In this context, the general question of how the 2150 applications can be designed and which (low-level) triggers could be 2151 included in the program logic comes up (4.3.1). Similarly, providing 2152 sensible constraints to route between compute and network 2153 capabilities (when both kinds of capabilities are included) is also 2154 important (5.1.3). Many of these considerations boil down to a 2155 question of trade-offs, e.g., between storage and frequent updates 2156 (5.1.5), and how (new) COIN capabilities can be sensibly used for 2157 novel application design (5.1.1). 2159 7.2.2.6. Applicability Area - Data Processing 2161 The following research questions presented in the use cases belong to 2162 this category: 2164 3.2.3, 4.4.1, 4.5.2 2166 Many of the use cases deal with novel ways of processing data using 2167 COIN. Interesting questions in this context are which types of COIN 2168 programs can be used to (pre-)process data (4.4.1) and which parts of 2169 packet information can be used for these processing steps, e.g., 2170 payload vs. header information (4.5.2). Additionally, data 2171 processing within COIN might even be used to support a better 2172 localization of the COIN functionality (3.2.3). 2174 7.2.2.7. Applicability Area - Routing & Forwarding 2176 The following research questions presented in the use cases belong to 2177 this category: 2179 3.1.2, 3.1.3, 3.1.4, 3.1.5, 3.1.6, 5.1.2, 5.1.3, 5.1.4, 6.1.5 2181 Being a central functionality of traditional networking devices, 2182 routing and forwarding are also prime candidates to profit from 2183 enhanced COIN capabilities. In this context, a central question, 2184 also raised as part of the framework in Section 7.2.2.3.3, is how 2185 different COIN entities can be identified (3.1.4) and how the choice 2186 of a specific instance can be signalled (3.1.5). 
Building upon 2187 this, the next questions are which constraints could be used to make 2188 the forwarding/routing decisions (5.1.3), how these constraints can be 2189 signalled in a scalable manner (3.1.3), and how quickly changing COIN 2190 program locations can be included in these concepts, too (3.1.2). 2192 Once specific instances are chosen, higher-level questions revolve 2193 around "affinity". In particular, open questions are how affinity at 2194 the service level can be provided (3.1.6), whether traffic steering 2195 should actually be performed on this level of granularity or rather 2196 on a lower level (5.1.4), and how invocation for arbitrary 2197 application-level protocols, e.g., beyond HTTP, can be supported (6.1.5). Overall, a question is 2198 what specific forwarding methods should or can be supported using 2199 COIN (5.1.2). 2201 7.2.2.8. Applicability Area - (Industrial) Control 2203 The following research questions presented in the use cases belong to 2204 this category: 2206 3.2.4, 3.3.1, 3.3.4, 4.2.1, 4.4.1, 4.5.1 2208 The final applicability area deals with use cases exercising some 2209 kind of control functionality. These processes, above all, require 2210 low latencies and might thus especially profit from COIN 2211 functionality. Consequently, the aforementioned question of function 2212 placement (cf. Section 7.2.2.3.2), e.g., close to one of the end- 2213 points or deep in the network, is also highly relevant for 2214 this category of applications (3.3.1). 2216 Focusing more explicitly on control processes, one idea is to deploy 2217 different controllers with different control granularities within a 2218 COIN system. On the one hand, it is an interesting question how 2219 these controllers with different granularities can be derived based 2220 on one original controller (4.2.1). 
On the other hand, how to 2221 achieve synchronisation between these controllers or, more generally, 2222 between different entities or flows/streams within the COIN system is 2223 also a relevant problem (3.3.4). Finally, it remains to be seen 2224 whether using COIN for such control processes indeed improves the 2225 existing systems, e.g., in terms of safety (4.5.1) or in terms of 2226 performance (3.2.4). 2228 7.3. Requirements 2230 To be added later. 2232 8. Security Considerations 2234 Note: This section will need consolidation once new use cases are 2235 added to the draft. Current in-network computing approaches 2236 typically work on unencrypted plain text data because today's 2237 networking devices usually do not have crypto capabilities. 2239 As already mentioned in Section 4.3.2, this above all poses 2240 problems when business data, potentially containing business secrets, 2241 is streamed into remote computing facilities and consequently leaves 2242 the control of the company. Insecure on-premise communication within 2243 the company and on the shop-floor is also a problem as machines could 2244 be compromised from the outside. 2246 It is thus crucial to deploy security and authentication 2247 functionality for on-premise and outgoing communication, although this 2248 might interfere with in-network computing approaches. Ways to 2249 implement and combine security measures with in-network computing are 2250 described in more detail in [I-D.fink-coin-sec-priv]. 2252 9. IANA Considerations 2254 N/A 2256 10. Conclusion 2258 This draft presented use cases gathered from several fields that can 2259 profit from in-network 2260 and, more generally, distributed compute capabilities. 
We 2261 distinguished between use cases in which COIN may (i) enable new 2262 experiences, (ii) expose new features, or (iii) improve on existing 2263 system capabilities, as well as (iv) use cases where COIN capabilities 2264 enable entirely new applications, for example, in industrial 2265 networking. 2267 Beyond the mere description and characterization of those use cases, 2268 we identified opportunities arising from utilizing COIN capabilities 2269 as well as research questions that may need to be addressed to reap 2270 those opportunities. We also outlined possible requirements for 2271 realizing a COIN system addressing these use cases. 2273 Of course, this is only a snapshot of the potential COIN use 2274 cases. In fact, the decomposition of many current client-server 2275 applications into node-by-node transit could identify other 2276 opportunities for adding computing to forwarding, notably in supply- 2277 chain, health care, intelligent cities and transportation, and even 2278 financial services (amongst others). As these become better defined, 2279 they will be added to the list presented here. We are, however, 2280 confident that our analysis across all use cases in those dimensions 2281 of opportunities, research questions, and requirements has identified 2282 commonalities that will support future work in this space. Hence, 2283 the use cases presented are directly positioned as input into the 2284 milestones of the COIN RG in terms of required functionalities. 2286 11. List of Use Case Contributors 2288 * Dirk Trossen has contributed the following use cases: Section 3.1, 2289 Section 5.1, Section 5.2, Section 6.1. 2291 * Marie-Jose Montpetit has contributed the XR use case 2292 (Section 3.2). 2294 * David Griffin and Miguel Rio have contributed the use case on 2295 performing arts (Section 3.3). 2297 * Ike Kunze and Klaus Wehrle have contributed the industrial use 2298 cases (Section 4). 
2300 * Xavier De Foy has contributed the use case on virtual networks 2301 programming (Section 5.3) 2303 12. References 2305 12.1. Normative References 2307 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 2308 Requirement Levels", BCP 14, RFC 2119, 2309 DOI 10.17487/RFC2119, March 1997, 2310 . 2312 12.2. Informative References 2314 [APPCENTRES] 2315 Trossen, D., Sarathchandra, C., and M. Boniface, "In- 2316 Network Computing for App-Centric Micro-Services", Work in 2317 Progress, Internet-Draft, draft-sarathchandra-coin- 2318 appcentres-04, 26 January 2021, . 2322 [FCDN] Al-Naday, M., Reed, M.J., Riihijarvi, J., Trossen, D., 2323 Thomos, N., and M. Al-Khalidi, "A Flexible and Efficient 2324 CDN Infrastructure without DNS Redirection of Content 2325 Reflection", . 2327 [GLEBKE] Glebke, R., Henze, M., Wehrle, K., Niemietz, P., Trauth, 2328 D., Mattfeld MBA, P., and T. Bergs, "A Case for Integrated 2329 Data Processing in Large-Scale Cyber-Physical Systems", 2330 Proceedings of the Annual Hawaii International Conference 2331 on System Sciences, DOI 10.24251/hicss.2019.871, 2019, 2332 . 2334 [GRPC] "High performance open source universal RPC framework", 2335 . 2337 [I-D.draft-kutscher-coinrg-dir] 2338 Kutscher, D., Kaerkkaeinen, T., and J. Ott, "Directions 2339 for Computing in the Network", Work in Progress, Internet- 2340 Draft, draft-kutscher-coinrg-dir-02, 31 July 2020, 2341 . 2344 [I-D.fink-coin-sec-priv] 2345 Fink, I. B. and K. Wehrle, "Enhancing Security and Privacy 2346 with In-Network Computing", Work in Progress, Internet- 2347 Draft, draft-fink-coin-sec-priv-03, 22 October 2021, 2348 . 2351 [I-D.hsingh-coinrg-reqs-p4comp] 2352 Singh, H. and M. Montpetit, "Requirements for P4 Program 2353 Splitting for Heterogeneous Network Nodes", Work in 2354 Progress, Internet-Draft, draft-hsingh-coinrg-reqs-p4comp- 2355 03, 18 February 2021, . 2358 [I-D.mcbride-edge-data-discovery-overview] 2359 McBride, M., Kutscher, D., Schooler, E., Bernardos, C. 
J., 2360 Lopez, D. R., and X. D. Foy, "Edge Data Discovery for 2361 COIN", Work in Progress, Internet-Draft, draft-mcbride- 2362 edge-data-discovery-overview-05, 1 November 2020, 2363 . 2366 [I-D.ravi-icnrg-5gc-icn] 2367 Ravindran, R., Suthar, P., Trossen, D., Wang, C., and G. 2368 White, "Enabling ICN in 3GPP's 5G NextGen Core 2369 Architecture", Work in Progress, Internet-Draft, draft- 2370 ravi-icnrg-5gc-icn-04, 31 May 2019, 2371 . 2374 [ICE] Burke, J., "ICN-Enabled Secure Edge Networking with 2375 Augmented Reality: ICE-AR.", ICE-AR Presentation at 2376 NDNCOM. , 2018, . 2380 [KUNZE] Kunze, I., Glebke, R., Scheiper, J., Bodenbenner, M., 2381 Schmitt, R., and K. Wehrle, "Investigating the 2382 Applicability of In-Network Computing to Industrial 2383 Scenarios", 2021 4th IEEE International Conference on 2384 Industrial Cyber-Physical Systems (ICPS), 2385 DOI 10.1109/icps49255.2021.9468247, May 2021, 2386 . 2388 [MPI] Vishnu, A., Siegel, C., and J. Daily, "Scaling Distributed 2389 Machine Learning with In-Network Aggregation", 2390 . 2392 [PENNEKAMP] 2393 Pennekamp, J., Henze, M., Schmidt, S., Niemietz, P., Fey, 2394 M., Trauth, D., Bergs, T., Brecher, C., and K. Wehrle, 2395 "Dataflow Challenges in an Internet of Production: A 2396 Security & Privacy Perspective", Proceedings of the ACM 2397 Workshop on Cyber-Physical Systems Security & Privacy - 2398 CPS-SPC'19, DOI 10.1145/3338499.3357357, 2019, 2399 . 2401 [RUETH] Rueth, J., Glebke, R., Wehrle, K., Causevic, V., and S. 2402 Hirche, "Towards In-Network Industrial Feedback Control", 2403 Proceedings of the 2018 Morning Workshop on In- 2404 Network Computing, DOI 10.1145/3229591.3229592, August 2405 2018, . 2407 [SAPIO] Sapio, A., "Scaling Distributed Machine Learning with In- 2408 Network Aggregation", 2019, 2409 . 2411 [Stoyanov] Stoyanov, R. and N. Zilberman, "MTPSA: Multi-Tenant 2412 Programmable Switches", ACM P4 Workshop in Europe 2413 (EuroP4'20) , 2020, 2414 . 
2416 [Sultana] Sultana, N., Sonchack, J., Giesen, H., Pedisich, I., Han, 2417 Z., Shyamkumar, N., Burad, S., DeHon, A., and B.T. Loo, 2418 "Flightplan: Dataplane Disaggregation and Placement for P4 2419 Programs", 2020, 2420 . 2422 [TRANSPORT] 2423 Kunze, I., Wehrle, K., and D. Trossen, "Transport Protocol 2424 Issues of In-Network Computing Systems", Work in Progress, 2425 Internet-Draft, draft-kunze-coinrg-transport-issues-05, 25 2426 October 2021, . 2429 [TS23.501] 3GPP, "Technical Specification Group Services and 2430 System Aspects; System Architecture for the 5G System; 2431 Stage 2 (Rel.17)", 2021, 2432 . 2434 [TS23.502] 3GPP, "Technical Specification Group Services and 2435 System Aspects; Procedures for the 5G System; Stage 2 2436 (Rel.17)", 2021, 2437 . 2439 [TSN] "IEEE Time-Sensitive Networking (TSN) Task Group", 2440 . 2442 [VESTIN] Vestin, J., Kassler, A., and J. Akerberg, "FastReact: In- 2443 Network Control and Caching for Industrial Control 2444 Networks using Programmable Data Planes", 2018 IEEE 23rd 2445 International Conference on Emerging Technologies and 2446 Factory Automation (ETFA), DOI 10.1109/etfa.2018.8502456, 2447 September 2018, 2448 . 2450 Authors' Addresses 2452 Ike Kunze 2453 RWTH Aachen University 2454 Ahornstr. 55 2455 D-52074 Aachen 2456 Germany 2457 Email: kunze@comsys.rwth-aachen.de 2458 Klaus Wehrle 2459 RWTH Aachen University 2460 Ahornstr. 55 2461 D-52074 Aachen 2462 Germany 2463 Email: wehrle@comsys.rwth-aachen.de 2465 Dirk Trossen 2466 Huawei Technologies Duesseldorf GmbH 2467 Riesstr. 
25C 2468 D-80992 Munich 2469 Germany 2470 Email: Dirk.Trossen@Huawei.com 2472 Marie-Jose Montpetit 2473 Concordia University 2474 Montreal 2475 Canada 2476 Email: marie@mjmontpetit.com 2478 Xavier de Foy 2479 InterDigital Communications, LLC 2480 1000 Sherbrooke West 2481 Montreal H3A 3G4 2482 Canada 2483 Email: xavier.defoy@interdigital.com 2485 David Griffin 2486 University College London 2487 Gower St 2488 London 2489 WC1E 6BT 2490 United Kingdom 2491 Email: d.griffin@ucl.ac.uk 2493 Miguel Rio 2494 University College London 2495 Gower St 2496 London 2497 WC1E 6BT 2498 United Kingdom 2499 Email: miguel.rio@ucl.ac.uk