The Internet Governance Forum’s meetings bring together Internet user communities, businesses, technical folk, and a set of UN and government bodies. The key word in there is “forum,” as the IGF provides a useful venue for those different groups to talk to each other. Just as with a technical standards meeting, while the agenda tends to be packed with substantive sessions, a lot of the most valuable conversation takes place in the hallways.
At the recent IGF in Geneva, there were some IETF folk present to listen to the larger community and to occasionally speak about the issues which touched on our technical work. Internet protocols were not really at the forefront of the IGF this time, with just a handful of sessions looking at technical topics such as the use of distributed ledgers as alternatives to traditional registries and the feasibility of source attribution for network attacks. But it is interesting to reflect on what was at the forefront.
The rise of Internet shutdowns as a political tool was discussed in a number of sessions, and it is becoming an increasingly serious issue. As people build their lives and livelihoods on the Internet, the ability to withdraw a country or specific territories from it begins to parallel the impact of closing the roads: communication, commerce, and relief are all disrupted. The case studies given were quite compelling, but the cancellation of some sessions because the speakers could not travel to tell their stories spoke volumes as well. The Internet Society has been an early opponent of this political strategy and, together with partners like Access Now, continues to track incidents and raise awareness of the issue. While the IETF usually thinks about fragmentation in terms of interoperability, it is possibly useful for us to consider what technical tools work in conditions like these. Delay-tolerant networking and store-and-forward protocols, for example, work in conditions of intermittent connectivity, and we may want to consider these shutdowns as use cases for those approaches in new work.
Many other topics were actually about the systems and platforms which run on top of the network, rather than about network connectivity or infrastructure. Machine learning and other artificial intelligence (AI) topics were a common thread throughout the week, for example, even though these are not really Internet-specific issues. While there were a number of panels which saw AI in a positive light, there was also a great deal of concern expressed that AI systems would reduce the need for human knowledge workers. Because many communities hope to grow their local economies by providing that work to the global economy via the Internet, some saw AI as posing a worrying potential counterbalance to the promise of the Internet as an engine of growth and participation.
The IGF always has a strong focus on human rights, and this year was no exception. There was even a session specifically about human rights impact assessments in the context of standards development, with a section focused on how such assessments might work in the IETF. The panel (in which Alissa participated) discussed how rights-related considerations–security, decentralization, privacy–have implicitly and explicitly been factored into IETF standards development for decades, and also touched on the recent work in the IRTF to more concretely define human rights considerations relevant to Internet protocols.
In general, the IGF is a place where concerns are expressed across the usual community boundaries, so that each community can consider how to take action. But this year seemed to have a darker tone than many previous years, possibly because of world events. The focus kept returning to actors who would use the Internet’s facilities to mar the lives of others–by launching attacks, spreading misinformation, or inciting violence–rather than on the many cases where the Internet has enabled new industries, new communities, and new ways of life.
Looking back from the vantage point of the new year, we think the challenge to the technical community is to remind the rest of the world that we can continue to build on the Internet’s foundation as a force for good in the world. Just as we learned over the last several years that we needed to increase our focus on privacy and we rose to that challenge, we can use the same tools of decentralization and user control to rise to these newly identified challenges. Much of that work may not take place here in the IETF, since relatively few of the work items are at the protocol layer, but we can do our part. Along with that new work, we can remind folks of the value in our basic model: open protocols and open processes that enable any network to voluntarily join the Internet and any user to contribute content. What that model has produced remains amazing, and knowledge of that should continue to spark light in what others may cast as dark times.
Thanks to everyone who provided further input about the revamped www.ietf.org website around IETF 100. This was the third round of IETF community input this year. In addition, we gathered input from people for whom the IETF is important but who don’t necessarily participate in it. This input has led to a better revamped website in a variety of ways, such as organizing the content to make it more intuitive to access for people new to the IETF, improving accessibility, and making it work better on mobile devices.
The latest round of input from IETF participants raised a few additional points to be addressed:
Some feedback noted that the “Quick Links” page accessible via 1-click (via the “Tools” top-menu item) from anywhere in the www.ietf.org website was difficult to parse. In response, the layout for displaying the items/links has been updated. See https://beta.ietf.org/links/. And, of course, this can be further refined.
It was also suggested that the meeting pages on the new site follow the current practice of making all the links immediately visible. The content management system template for those pages will be updated so that all meeting pages are displayed that way by default.
There were a few additional comments about things such as the crispness of font display, vertical padding, and font sizes, but it was not clear to what extent these are general issues or what action might be taken at this point to address them. Feedback to improve the website is encouraged and can be submitted via firstname.lastname@example.org.
Based on the input received, the current plan is to move the revamped website to production on 11 January 2018. This plan has been shared with the Community Review Committee established by the IETF website revamp project’s Scope of Work, with the IETF Tools team, and with the IAOC and the IESG.
While the goal is to anticipate and implement measures needed to ensure a smooth transition, the IETF Secretariat and the web development vendor are prepared to pay extra attention to quickly fixing any issues that arise in the weeks after the cutover to production.
URL continuity will be an area of focus during the transition. Based on the experience of previous IETF website transitions, there will likely be some URLs that end up resolving differently after the new website goes into production. Specific plans for maintaining continuity for all the URLs specified in the SOW, as well as others identified in the process of the website development, will be documented and, to the extent possible, tested before the move to production takes place.
For the past few months, in preparation for the cutover, we have been maintaining two parallel websites. That work needs to stop as soon as possible. The existing website will continue to exist in its current state, and will be accessible in some manner, just as the previous website does at https://www.ietf.org/old/2009/. The URL for that has not yet been decided, but will be determined before the new website goes live.
For years, the IETF has been driving the industry transition from an overloaded Software Defined Networking (SDN) buzzword to data modeling-driven management. Keeping a pragmatic definition of SDN in mind (“SDN functionally enables the network to be accessed by operators programmatically, allowing for automated management and orchestration techniques; application of configuration policy across multiple routers, switches, and servers; and the decoupling of the application that performs these operations from the network device’s operating system.”), we’ve been executing on all of these requirements. Our IETF deliverables are the YANG modeling language, protocols such as NETCONF and RESTCONF, encodings such as XML and JSON, and a growing set of YANG modules.
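As a toy illustration of the encoding side of that stack (not tied to any specific YANG module), the same modeled data can be carried either as JSON, as RESTCONF typically does, or as XML, as NETCONF does:

```python
import json
import xml.etree.ElementTree as ET

# The same logical data -- an interface name and description -- can be
# serialized in either of the encodings the IETF stack supports.
data = {"interface": {"name": "eth0", "description": "uplink"}}

# JSON encoding (the style used with RESTCONF).
json_doc = json.dumps(data)

# XML encoding (the style used with NETCONF).
root = ET.Element("interface")
for key, value in data["interface"].items():
    ET.SubElement(root, key).text = value
xml_doc = ET.tostring(root, encoding="unicode")

print(json_doc)
print(xml_doc)
```

Either document carries the identical YANG-modeled content; only the serialization differs.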
Let me start with a reflection. How do we know we’ve been fighting this uphill battle long enough and that it’s all downhill from here? Is it when we have published a core set of YANG models? When the technology is implemented by vendors? When the technology is deployed by operators? When other Standard Development Organizations/consortia/open-source projects embrace this technology? Probably most (or even all) of the above.
Now, an anecdote from this last IETF meeting. I was in a bar, connecting with IETF friends when, at some point in time, the discussion centered around YANG. In the past I would have led the discussion, trying to convince and influence the crowd, but this time it was not necessary. I was quietly and happily sipping my beer while Alia (an IETF Routing Area Director) debated the importance of YANG. I enjoyed that moment so much, and I remember observing that it is a success when someone else does your job. When someone else gives your speech. Then you know you can safely pass the baton. Now, downhill does not mean that there are no more issues to resolve, so let’s review the YANG models state of affairs.
1. The Network Management Datastore Architecture (NMDA) Impact
We keep specifying YANG modules at the IETF. See the graphical evolution here and all the published YANG modules here. Why does it take so long, you may ask? Well, the world of standardization is never fast enough, as quality and consensus come at a price. However, in this particular case, the main reason we aren’t fast enough is that we’re busy finalizing the “Network Management Datastore Architecture”, a new way to design YANG modules. To help grasp the concepts of this architecture, we can look at pieces of the draft:
Network management data objects can often take two different values,
the value configured by the user or an application (configuration)
and the value that the device is actually using (operational state).
The original model of datastores required these data objects to be
modeled twice in the YANG schema, as "config true" objects and as
"config false" objects. The convention adopted by the interfaces
data model ([RFC7223]) and the IP data model ([RFC7277]) was using
two separate branches rooted at the root of the data tree, one branch
for configuration data objects and one branch for operational state data objects.
The duplication of definitions and the ad-hoc separation of
operational state data from configuration data leads to a number of
problems. Having configuration and operational state data in
separate branches in the data model is operationally complicated and
impacts the readability of module definitions. Furthermore, the
relationship between the branches is not machine readable and filter
expressions operating on configuration and on related operational
state are different.
With the revised architectural model of datastores defined in this
document, the data objects are defined only once in the YANG schema
but independent instantiations can appear in two different
datastores, one for configured values and one for operational state
values. This provides a more elegant and simpler solution to the problem.
To illustrate the NMDA principles, below is a comparison of the pyang tree for the RFC 7223 “Original Datastores Model” ietf-interfaces on the left and the new NMDA-compliant RFC7223bis draft ietf-interfaces on the right. With NMDA, the “intended” and “applied” values would be reported in different datastores and the link between those “intended” and “applied” values is now machine readable. This will lead to a “cleaner” tree definition. Indeed, the interfaces-state sub-tree disappears in the new YANG module.
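To make the “defined once, instantiated in several datastores” idea concrete, here is a toy sketch in plain Python (not any real datastore implementation): one set of schema paths, with intended and operational datastores each holding their own instantiation of those paths.

```python
# Toy illustration of the NMDA idea: one schema, several datastores,
# each holding its own instantiation of the same paths.
schema_paths = {"/interfaces/interface/name", "/interfaces/interface/mtu"}

datastores = {
    # What the operator asked for.
    "intended": {"/interfaces/interface/name": "eth0",
                 "/interfaces/interface/mtu": 9000},
    # What the device is actually using (may lag or differ).
    "operational": {"/interfaces/interface/name": "eth0",
                    "/interfaces/interface/mtu": 1500},
}

def applied_differs(path):
    """True when the operational value has diverged from the intended one."""
    return datastores["intended"][path] != datastores["operational"][path]

# Because both datastores share one schema, relating configured values to
# operational values is mechanical rather than ad hoc.
diverged = [p for p in sorted(schema_paths) if applied_differs(p)]
print(diverged)
```

Under the original model, the same check would have required mapping between two differently named branches of the tree by hand.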
An easy way to check if your YANG module is NMDA-compliant is to look it up in the YANG catalog metadata tool. For example, the report for the new ietf-interfaces YANG module is here: the following entry shows the NMDA compliance.
I like this definition of “Soon”, just for the fun of it! To be more serious, with the Routing Area Directors, we evaluated the situation of most IETF YANG modules at one of the wrap-up meetings at IETF 100. Many are currently in last call and/or under YANG doctors review and will end up on the IESG plate soon. I’ll do my best to progress those before I step down as Area Director next March.
The Tooling and YANG Module Metadata Become More and More Important
Producing YANG modules is one step in the right direction, but having the right tools and the right YANG module metadata (like health metrics) becomes equally important. Instead of repeating myself, let me point you to the progress on that front, in this YANG Catalog Latest Developments (IETF 100 Hackathon) recent blog.
Collaboration Across Standard Development Organizations
During the IETF 100, we had an IEEE/IESG breakfast meeting with a primary topic of discussion: YANG. We obviously discussed the NMDA compliance for the IEEE YANG modules. A single query to the yangcatalog produced the right output for our discussion: “return the latest version of all YANG modules where the organization is IEEE”.
From there, it’s easy to check whether each YANG module reports the tree-type metadata as “nmda-compatible”.
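As a sketch of the client-side filtering such a query involves, here is a minimal example; the response shape and module entries below are hypothetical illustrations, not the actual yangcatalog.org API:

```python
# Hypothetical response shape for a catalog search; the real
# yangcatalog.org API may differ -- this only sketches the filtering step.
catalog_response = {
    "yang-catalog:modules": {
        "module": [
            {"name": "ieee802-dot1q-bridge", "organization": "ieee",
             "revision": "2017-07-01", "tree-type": "nmda-compatible"},
            {"name": "ieee802-dot1q-bridge", "organization": "ieee",
             "revision": "2016-05-10", "tree-type": "split"},
            {"name": "ietf-interfaces", "organization": "ietf",
             "revision": "2017-08-17", "tree-type": "nmda-compatible"},
        ]
    }
}

def latest_by_org(response, organization):
    """Return the newest revision of each module from one organization."""
    latest = {}
    for mod in response["yang-catalog:modules"]["module"]:
        if mod["organization"] != organization:
            continue
        seen = latest.get(mod["name"])
        # ISO dates compare correctly as strings.
        if seen is None or mod["revision"] > seen["revision"]:
            latest[mod["name"]] = mod
    return latest

ieee_latest = latest_by_org(catalog_response, "ieee")
nmda_ok = {n: m["tree-type"] == "nmda-compatible" for n, m in ieee_latest.items()}
print(nmda_ok)
```

The catalog does this work server-side; the point is only that the tree-type metadata makes the NMDA question answerable with one query.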
Next to IEEE, the Broadband Forum also inquired about the NMDA status, timeline, and possible transition in this liaison statement. With the NMDA architecture and a couple of key NMDA-compliant YANG modules close to completion, it’s time to reply to those liaisons and also proactively warn the other SDOs, consortia, and open-source projects. To help with the NMDA transition, we have to understand which specific YANG modules those SDOs care about (basically, which YANG modules they will augment) and work on a transition together: moving directly to the NMDA-compliant version, or still relying on the previous non-NMDA version?
YANG Module Update Procedure
YANG specifies strict rules [RFC7950] for updating YANG modules (when keeping the same YANG module name), imposing backward-compatible changes. However, this causes some issues in the world of automation! For example, the YANG paths must be changed in the controller/orchestrator when a YANG module with a new YANG name is introduced. It is not possible to know that one YANG module obsoletes or updates another without going through a level of indirection: the RFC document’s Obsoletes or Updates tag. The IETF, as openconfig did some time ago with the semver concept, faced the first occurrences of these issues. Some more background information is in this IETF draft, which was discussed in the NETMOD Working Group.
A Lot of Telemetry Documents
Building on YANG, there was much work on YANG-based telemetry at IETF 100. These successes include an award at the IETF Hackathon, the progression of six adopted working group drafts in NETCONF, the addition of four new proposals from new authors, and NETCONF closing in on Working Group Last Call on three of these drafts. Based on the traction of the technology in the industry, it is also becoming relevant to working groups beyond NETCONF. Some progress includes:
The team from hackers.mu that participated remotely in the IETF Hackathon on 11-12 November 2017.
Hackers.mu is a developer group based in Mauritius made up of a wide range of people from different backgrounds: high school students, university students, professional engineers, and advisors to the minister of ICT. We participated remotely in the IETF Hackathons held in conjunction with IETF 98 and IETF 99 in the Automatic Multicast Tunneling (AMT) and Human Rights Protocol Considerations (HRPC) projects, respectively. After hearing about the recent changes happening in the TLS Working Group, we decided to work on TLS implementations for the IETF Hackathon held just before IETF 100. We packed our laptops and headed to Pereybere, in the north of the island.
Hackers.mu remotely participating in the IETF Hackathon
We stayed at a very comfortable location with proper A/C. We deployed our network, and connectivity was provided via a 3G mobile dongle. A big thank you to the TLS champions, who were very helpful and considerate on instant messaging. After we showed them our initial code, they directed us to a bunch of servers that we could use for testing. Also, it was very helpful to work alongside the people actually implementing the next iteration of the TLS draft. We were able to see how they were changing the implementation to work around problematic middleboxes. We learned a lot.
We had 8 people from Mauritius and 1 Mauritian from Denmark: Codarren Velvindron, Nitin Mutkawoa, Pirabarlen Cheenaramen (working from .dk), Nigel Yong, Sheik Meeran Ashmith Kifah, Muzaffar Auhammud, Yasir Auleear, and Yashvi Paupiah. We worked on the following open source software: wget, curl, monit, ftimes, aria2c, stunnel, nagios plugins, and hitch. Our project presentation slides are available here. A few of our members woke up every morning and went for a 30-minute swim before going back to the Hackathon room. The beach was less than 5 minutes from our Hackathon venue.
Overall, it was a fun IETF 100 Hackathon, and we really enjoyed how intensive it was. Along the way, we learned a lot about TLS internals, and the subtle details of the different implementations. Also, we would like to thank the hardworking people behind the remote participation infrastructure. They have done an amazing job! We were able to watch live from Mauritius the IETF Hackathon awards session. The TLS team won a prize for best remote participation!
We are looking forward to participating in the next IETF Hackathon scheduled for 17-18 March 2018, just before the IETF 101 meeting in London.
IETF 100 wrapped up just over a week ago in steamy Singapore. In addition to our usual productive working group sessions, hallway conversations, and ad hoc collaboration, we took the opportunity to mark the milestone of the 100th meeting with looks backward and forward in the IETF’s trajectory (plus some bubbles and sweets) at the plenary session.
We also got to share our appreciation for three individuals who have been working in support of the IETF for many years: Ray Pelletier, our recently retired IETF Administrative Director; Jorge Contreras, who will be stepping down as the IETF’s legal counsel at the end of this year; and Nevil Brownlee, whose term as the Independent Submission Editor will conclude in February. Many thanks and best wishes go out to them!
We started off the week with the IETF Hackathon, whose attendance continues to swell. Two hundred participants spent the weekend collaborating in teams on a wide variety of IETF-related implementation projects. As this third year of IETF Hackathons comes to a close, we have many teams viewing it as a requisite part of their IETF experience, including those working on YANG, DNS, I2NSF, TLS, and more. This time around we saw a number of teams with maturing implementations put more focus on interop, which was exciting to see.
It seemed like every time you turned a corner at IETF 100 you would run into someone talking about encryption, network operations, and the interaction between the two. TSVWG and OPSAWG both hosted generalized discussions of these issues. The QUIC working group continued its extended discussion of the implications of exposing cleartext bit(s) in the protocol. And the plenary session provided an opportunity for exchange of views between the community and area directors on this topic. Discussion has continued on the mailing lists since the meeting’s conclusion, and as passionately as participants feel about this, we can expect it to continue apace.
In other security news, we had two productive BoF sessions — SUIT and TEEP — focused on different aspects of securely provisioning and updating IoT and other devices. Both sessions were useful for clarifying the scope of the proposed work, as well as the relationship between the two. There appears to be substantial interest in taking on new work in both cases if the charter details and potential interactions with other existing work efforts can be sorted out.
The Routing Area also held a BoF (DCROUTING) to discuss characteristics and requirements of routing in a data center and to gauge interest from the community in engaging in new work in this area. There is significant interest in working on new protocols purpose-built to address the data center. The Area Directors and the proponents will work on proposed charters in the coming weeks.
On the non-technical side of things, we made good progress in our community discussion about re-factoring the IETF’s administrative arrangements, also known as IASA 2.0. The work of the IASA 2.0 design team was well received and the ensuing discussion narrowed down the set of options for further consideration and development in the coming months. On the meeting’s last day the IETF leadership had an opportunity to share the background and status of these discussions with the ISOC Board of Trustees, who expressed their willingness to work together as this project moves forward.
Many thanks to our meeting host Cisco, not only for hosting the meeting, but also for sponsoring the hackathon, putting on an excellent social event in the sublime S.E.A. Aquarium, and making a long-term commitment to support the IETF as an IETF Global Host. Our meetings wouldn’t be the same without their support and that of all our sponsors!
The YANG team delivered again at the IETF 100 Hackathon. With our goal to help YANG model users and designers, we developed new automation tools. As a reminder, we have been present since the very first hackathon at IETF 92. Even though many members were not physically present in Singapore, we operated as a large virtual team, including some who have worked throughout the year on projects, some full time on tool development and maintenance. The team won the “Best Continuing Work” award during this IETF 100 Hackathon, and it is well deserved. And no, I’m not THAT biased 🙂 .
Dave Ward stressed the importance of the YANG Catalog during his presentation at IETF 100 titled, “3 years on: Open Standards, Open Source, Open Loop.” (video here, slides here). His point (among others) was that the IETF should focus on the deployment of the product of the RFCs (so YANG modules in this case) as opposed to the RFC publication. Here are a couple of Dave’s relevant quotes: “Publishing an RFC SHOULD not be the metric for IETF success!”, “A technology is successful when it’s deployed.”, “Develop tooling & metadata at the same time as specification.”, “Create your dependency map and reach out to your IETF customers.”. The YANG catalog was taken as THE example in this train of thought.
At this hackathon, Joe Clarke and Miroslav Kovac demonstrated a new tool called YANG Suite, along with its integration with the YANG catalog set of tools and its integration with the YANG Development Kit (YDK).
You might remember YANG Explorer, a useful tool demonstrated a few IETF Hackathons ago. It has been suffering from one significant drawback: it is Flash-based. YANG Suite is the next-generation YANG Explorer, without this limitation.
YANG Suite automatically imports YANG modules (and dependent YANG modules) from the catalog. The YANG module trees are parsed and displayed. From there we can generate CRUD (Create, Read, Update, Delete) RPCs, by interacting with the GUI. YANG Suite also integrates with YDK, which facilitates network programmability using YANG data models. YDK can generate APIs in a variety of programming languages using YANG models. These APIs can then be used to simplify the implementation of applications for network automation. YDK has two main components: an API generator (YDK-gen) and a set of generated APIs. Today, YDK-gen takes YANG models as input and produces Python APIs (YDK-Py) that mirror the structure of the models.
If you prefer a different set of Python tools, YANG Suite also offers the ncclient Python library for interacting with NETCONF servers. If you want to generate APIs in another language, you can develop a new YANG Suite package. As always, the tools are open source. This is work in progress, and the documentation will follow soon.
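For readers who want a feel for what such a NETCONF client sends on the wire, here is a standard-library-only sketch (not ncclient itself) that builds a <get-config> RPC with a subtree filter for ietf-interfaces; the hello/capabilities exchange and message framing are omitted:

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
IF = "urn:ietf:params:xml:ns:yang:ietf-interfaces"

def build_get_config(message_id="101"):
    """Build a <get-config> RPC asking for the ietf-interfaces subtree."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": message_id})
    get_config = ET.SubElement(rpc, f"{{{NC}}}get-config")
    source = ET.SubElement(get_config, f"{{{NC}}}source")
    ET.SubElement(source, f"{{{NC}}}running")  # query the running datastore
    flt = ET.SubElement(get_config, f"{{{NC}}}filter", {"type": "subtree"})
    ET.SubElement(flt, f"{{{IF}}}interfaces")  # only the interfaces subtree
    return ET.tostring(rpc, encoding="unicode")

print(build_get_config())
```

A library like ncclient constructs (and frames, and sends) equivalent XML for you; the value of the tooling is precisely that users never have to write this by hand.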
With the goal in mind to create a full toolchain, we integrated YANG Suite directly into the YANG catalog, as shown in the previous figure. What does it mean? From the YANG catalog, you search for the relevant YANG module(s), evaluate its relevance with some health metrics (validation result, maturity, number of imports, etc.), check the related metadata, launch YANG Suite, and generate the Python script based on the values entered in the GUI, such as YANG module content, CRUD operations, datastore, etc. The end user can now focus on automation as opposed to having to know the YANG language. For the end user, YANG is a means to an end, and should be hidden.
For the IETF draft writers, Henrik Levkowetz added links to the YANG catalog metadata and impact analysis directly in the datatracker. A real gain in productivity! The next step is to work on the synchronization of the data, with a direct update as soon as the draft is posted.
Dependencies and dependent YANG modules
Use case: Provide a comprehensive store for metadata from which to drive tools
Tree type: NMDA, transitional-extra, openconfig
Use case: Illustrate whether or not modules are NMDA-compliant
Add leafs for semantic versioning (semver): semantic_version and derived_semantic_version
Use case: Given a module, compare its semantic version over multiple revisions to understand what types of changes (e.g., backward-incompatible changes) have been made. Do the same given a vendor, platform, and software release for all modules
Below is an example of semantically different YANG module revisions.
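The comparison behind a derived semantic version can be sketched in a few lines. This is a deliberate simplification (real tooling also inspects type changes, statuses, and more), comparing only the sets of schema paths between two revisions:

```python
def derived_semver_bump(old_paths, new_paths):
    """Classify a module revision by comparing its sets of schema paths.

    Removing a path is backward-incompatible (major bump); adding one is
    a backward-compatible extension (minor bump); otherwise a patch.
    """
    if old_paths - new_paths:
        return "major"
    if new_paths - old_paths:
        return "minor"
    return "patch"

old = {"/interfaces/interface/name", "/interfaces/interface/enabled"}
new = {"/interfaces/interface/name"}  # the 'enabled' leaf was removed

print(derived_semver_bump(old, new))
```

A controller consuming the catalog metadata can use exactly this kind of signal to decide whether its existing YANG paths are still safe to use.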
Vladimir Vassilev worked on automated test cases, running raw NETCONF session scripting against confd and netconfd. In practice, he focused on the ietf-routing YANG modules in netconfd, in both the non-NMDA and the NMDA (in progress) versions. I believe associating some validation tests with YANG modules in the catalog would be an extremely useful addition. Recently, I received feedback that working on even very simple scenarios for service YANG modules is a complex task: working from examples is the way to go.
IETF 100 is just around the corner. It will offer all the usual opportunities for high-bandwidth exchange among IETF participants and collaboration around specs, coding and interop work. See the post below for some highlights. With the 100th meeting being viewed as a milestone by some, we’ll also be marking the occasion in a few small but special ways here and there throughout the week. Be sure to look out for those on the ground in Singapore.
We will once again be hosting the Hackathon on Saturday and Sunday. We’ll have a number of teams returning to carry forward their work from past hackathons, plus teams bringing new projects focusing on IPv6 transition technologies, JMAP, and more.
Folks are invited as always to join the Code Sprint on Saturday to work on tools for the IETF community. We’re always looking for more volunteers, so please join!
Sunday afternoon’s tutorial sessions will focus on two standardization efforts nearing completion in the IETF: TLS 1.3 and WebRTC. Come learn from the experts!
The two working-group-forming Birds of a Feather (BoF) sessions at this meeting will both be in the security area. Trusted Execution Environment Provisioning (TEEP) aims to standardize protocol(s) for provisioning applications into trusted execution environments (TEEs). Software Updates for Internet of Things (SUIT) is looking at firmware update solutions for Internet of Things (IoT) devices. Energy and interest in solutions to securely bootstrap constrained devices onto the network continues to grow.
We’ll have two working groups meeting for the first time, both in the Applications and Real-Time (ART) area. The DNS over HTTPS (DOH) working group is standardizing encodings for DNS queries and responses that are suitable for use in HTTPS, allowing the DNS to function in environments where problems are experienced with existing DNS transports. The Email mailstore and eXtensions To Revise or Amend (EXTRA) working group is dealing with updates and extensions to key email related protocols. Also meeting for the first time will be the proposed Decentralized Internet Infrastructure Research Group (DINRG), which is investigating open research issues in decentralizing infrastructure services such as trust management, identity management, name resolution, resource/asset ownership management, and resource discovery.
Folks looking for interesting area-wide discussions might want to check out the open area meetings in the transport and routing areas. The former will feature a discussion about current practices in coordinating specs and interop testing for QUIC and HTTP, while the latter will include an update from the routing area YANG architecture design team.
While for some the 100th meeting is an occasion to reflect on the IETF’s history, the technical plenary will be taking a look forward. The plenary will present a panel discussion featuring Monique Morrow, Jun Murai, and Henning Schulzrinne. They’ll be sharing their unique perspectives on what the Internet will look like in thirty years.
We’ll be running a new experiment at this meeting to give working group chairs the ability to organize sessions focused on running code. This will allow for groups to informally meet to brainstorm, code, and test ideas in the Code Lounge, a portion of the IETF lounge set aside for such activities. Working group chairs can sign up to reserve a time slot.
We wouldn’t be able to hold IETF meetings without the support of our sponsors. Big thanks to IETF 100 host Cisco! And to all of our sponsors for the meeting.
HTTPS (HTTP over TLS) is possibly the most widely used security protocol in existence. HTTPS is a two-party protocol; it involves a single client and a single server. This aspect of the protocol limits the ways in which it can be used.
The recently published RFC 8188 provides protocol designers a new option for building multi-party protocols with HTTPS by defining a standardized format for encrypting HTTP message bodies. While this tool is less capable than other encryption formats, like CMS (RFC 5652) or JOSE (RFC 7516), it is designed for simplicity and ease-of-integration with existing HTTP semantics.
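To give a sense of the format’s simplicity, the aes128gcm content coding defined in RFC 8188 begins each encrypted body with a small header: a 16-byte salt, a 4-byte record size (network byte order), a 1-byte key identifier length, and the key identifier itself. A minimal parser for just that header:

```python
import struct

def parse_aes128gcm_header(body):
    """Parse the header of an RFC 8188 aes128gcm-encoded body.

    Layout: salt (16 bytes) | record size (uint32, network order) |
            keyid length (1 byte) | keyid (variable length).
    Returns (salt, record_size, keyid, offset_of_first_record).
    """
    salt = body[:16]
    (record_size,) = struct.unpack("!I", body[16:20])
    idlen = body[20]
    keyid = body[21:21 + idlen]
    return salt, record_size, keyid, 21 + idlen

# Assemble a sample header: zero salt, 4096-byte records, keyid "p256dh"
# (the keyid value here is just an illustrative placeholder).
sample = bytes(16) + struct.pack("!I", 4096) + bytes([6]) + b"p256dh"
print(parse_aes128gcm_header(sample))
```

Everything after that offset is AES-128-GCM ciphertext records; the keying and encryption themselves are omitted here for brevity.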
The WebPush protocol (RFC 8030) provides an example of how the encrypted HTTP content coding can be used.
In WebPush, there are three parties: a user agent (in most cases this is a Web browser), an application server, and a push service. The push service is an HTTP server that has a special relationship with the user agent. The push service can wake a user agent from sleep and contact it even though it might be behind a firewall or NAT.
The application server uses the push service to send a push message to a user agent. The push service receives a message from the application server, and then forwards the contents of the push message to the user agent at the next opportunity. It is important here to recognize that the push service only forwards messages. It has no need to see or modify push messages. Both the user agent and the application server only communicate via the push service, but they both want some assurance that the push service cannot read or modify push messages. Nor do they want the push service to be able to create false push messages.
For example, an alerting service might use WebPush to deliver alerts to mobile devices without increased battery drain. Push message encryption ensures that these messages are trustworthy and allows the messages to contain confidential information.
The document draft-ietf-webpush-encryption, which was recently approved for publication as an RFC, describes how push messages can be encrypted using RFC 8188. The encrypted content coding ensures that the push service has access to the information it needs, such as URLs and HTTP header fields, but that the content of push messages is protected.
The IAB held a workshop on Explicit Internet Naming Systems last week in Vancouver, B.C., and there are a couple of interesting early conclusions to draw. The first conclusion is actually about the form of the workshop, which was an experiment by the IAB. While many of our workshops run like mini conferences, with paper presentations and follow-on questions, this workshop was structured as a retreat. There was a relatively small number of participants gathered around a common table space, with sessions organized as joint discussions around specific topics. Moderators kept the conversations on topic, and discussants kept it moving forward if it lagged.
The result was one of the most interactive workshops I’ve attended. While we did have to run a queue in most sessions (and the queues could get a bit long), the conversations had real give-and-take, more like an IETF hallway discussion than a series of mic line comments.
While I don’t expect that this style would be appropriate for all our workshops, it’s useful to know that this retreat style can work. I suspect we would use it again in other situations where the IAB is trying to step back from the current framing of an issue and synthesize a set of new approaches.
A second early conclusion is that the IAB was right in suspecting that its previous framing of the issues around Internet naming and internationalization wasn’t quite right. Among other things, that framing had us trying to push human interface considerations up the stack and away from the protocol mechanics that worked on what we saw as identifiers. One clear conclusion from this workshop was that the choice of identifier structure and protocol mechanics will constrain the set of possible human interfaces. When those constraints don’t match the needs of the human users, the resulting friction generates a lot of heat (and not much light). One suggestion for follow-on work from the workshop is to document the user interface considerations that arise from using different types of identifiers, so that new systems can more easily recognize the consequences of the identifier types they choose.
An additional point that came up multiple times was the role of implicit context in transforming references in speech or writing into identifiers that drive specific protocol mechanics. While the shorthand for this is the “side of the bus” problem, the space is much larger and includes heuristic search systems ranging from the educated guess through to highly personalized algorithmic responses. The participants saw a couple of possible ways in which standards developed in this area might advance how these tuples of context elements and references can be safely used to mint or manage identifiers. A first step will be to suggest that the IAB look at language tags, network provider identifiers, and similar common representations of context to see how they function across protocols. Follow-on work from that might include developing common vocabularies and serialization formats, and analyzing privacy implications.
Like many others, I came away from the workshop with the realization that there is a dauntingly large amount of work to be done in this space. The workshop participants are drafting more than a half dozen follow-on recommendations for the IAB, as well as describing a potential research group and producing some individual drafts. Despite the amount of work facing us, I and many other participants left the room more hopeful than we came in, both that we can make progress and that some of the tools we need are already available.
If you’d like to join in the conversation, you can share your comments on Internet naming by email to email@example.com or directly with the IAB at firstname.lastname@example.org.
Before each IETF meeting, the Internet Engineering Steering Group (IESG) collects proposals for Birds of a Feather (BOF) sessions. These sessions are designed to help determine whether new working groups should be formed or to generate discussion about a topic within the IETF community. We decide which ones are ready for community discussion on the IETF meeting agenda, with input from the Internet Architecture Board (IAB). We did this last week in preparation for IETF 100 and I wanted to report the conclusions:
Software Updates for Internet of Things (SUIT) will be having a working-group-forming BOF session at IETF 100. The SUIT work is focused on developing a modern interoperable approach for securely updating the software in Internet of Things (IoT) devices. Security experts, researchers, and regulators recommend that all IoT devices be equipped with a secure firmware update mechanism, but current approaches are largely proprietary. The SUIT BOF will discuss an architecture for IoT firmware updates and a manifest format for describing metadata about firmware images. The SUIT mailing list is here.
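To make the idea of a manifest concrete: it is a small, signed description of a firmware image that a device can check before installing anything. The format itself is exactly what the proposed working group would define, so the JSON below is purely a hypothetical illustration of the kind of metadata involved, not a SUIT format.

```python
import hashlib
import json

def make_manifest(firmware: bytes, version: str, device_class: str,
                  image_url: str) -> str:
    # Hypothetical illustration only: a real SUIT manifest would use
    # whatever compact, signable encoding the working group defines,
    # not this JSON.  The fields show the kind of metadata a device
    # needs to decide whether and how to apply an update.
    manifest = {
        "manifest-version": 1,
        "firmware-version": version,
        "device-class": device_class,   # which devices may apply the image
        "image-uri": image_url,         # where to fetch the image
        "image-size": len(firmware),
        "image-digest": {
            "algorithm": "sha-256",
            "value": hashlib.sha256(firmware).hexdigest(),
        },
    }
    return json.dumps(manifest, indent=2)
```

A device that verifies the digest (and, in a real system, a signature over the manifest) against a fetched image can reject tampered or misdirected updates, which is the interoperable security property the BOF is after.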
Trusted Execution Environment Provisioning (TEEP) will be reconvening for a second BOF after an initial session at IETF 98 and a tutorial at IETF 99. The goal of TEEP is to standardize protocol(s) for provisioning applications into secure areas now supported on some computer processors, known as Trusted Execution Environments (TEEs). TEEs are currently found in home routers, set-top boxes, smart phones, tablets and wearables. Most of these systems use proprietary application layer protocols. TEEP aims to produce an interoperable application-layer security protocol that enables the configuration of security credentials and software running in a TEE. The TEEP mailing list is here.
Data Center Routing (DCROUTING) will be having a non-working-group-forming BOF. Over the last year, there have been discussions in a number of routing area working groups about proposals aimed at routing within a data center. Because of their topologies (traditional and emerging), traffic patterns, need for fast restoration, and need for low human intervention, among other things, data centers are driving a set of routing solutions specific to them. The intent of this BOF is to discuss the special circumstances that surround routing in the data center and potential new solutions. The objective is not to select a single solution, but to determine whether there is interest and energy in the community to work on any of the proposals. The mailing list is here.
IETF Administrative Support Activity 2.0 (IASA 2.0) will be having a non-working-group-forming BOF to continue discussions that have been taking place over the last year regarding refactoring the IETF Administrative Support Activity (IASA). The IASA 2.0 design team has been incorporating feedback from IETF 99 and further refining and expanding their documentation of the problem, requirements, and solution options. The goal of this session will be to determine the sense of the community about the direction for IASA 2.0. The mailing list is here.
We also received a proposal for a WG-forming BOF concerning Common Operation and Management on Network Slicing (COMS), focused on standardizing an information model to support network slicing in 5G. While the scope of this work has narrowed considerably since IETF 99 based on feedback received there, the new proposal was not approved for this meeting cycle. Further work is needed. The Operations and Management (OPS) area directors and interested IAB members will continue working with the proponents prior to IETF 100. The Operations and Management Area Working Group (OPSAWG) may serve as a venue for related discussions if that work bears fruit.
Finally, we’ll have two newly chartered working groups meeting for the first time at IETF 100: Email mailstore and eXtensions To Revise or Amend (EXTRA) and DNS over HTTPS (DOH). EXTRA is chartered to work on updates, extensions, and revisions to the email-related protocols IMAP, Sieve, and ManageSieve. DOH will be standardizing encodings for DNS queries and responses that are suitable for use in HTTPS, enabling the domain name system to function over certain paths where existing DNS methods experience problems. The mailing lists are here: extra, doh. A third new working group, IDentity Enabled Networks (IDEAS), was proposed but not chartered due to a number of concerns expressed during IETF community review of the charter.
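The raw material DOH starts from already exists: DNS messages have a well-defined wire format (RFC 1035). What the working group will standardize is how such a message is carried in HTTPS. The sketch below builds a wire-format query with the standard library and shows one plausible HTTPS-friendly encoding (base64url in a URL parameter); that mapping is an assumption for illustration, since the encodings are exactly what DOH is chartered to define.

```python
import base64
import struct

def encode_dns_query(name: str, qtype: int = 1, qid: int = 0) -> bytes:
    # Classic DNS wire format (RFC 1035): a 12-octet header followed by
    # one question.  qtype 1 = A record, qclass 1 = IN.
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD bit set
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

def doh_query_param(query: bytes) -> str:
    # One plausible mapping onto HTTPS: base64url-encode the wire-format
    # message, unpadded, for use as a URL query parameter.  The actual
    # encoding is for the DOH working group to standardize.
    return base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
```

Reusing the existing wire format inside HTTPS is what lets DNS traffic traverse paths where conventional UDP/TCP port 53 queries are blocked or interfered with.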
Together with the rest of the IETF’s ongoing work, it will be exciting to see all of the new efforts kick off in Singapore.