A project for the next generation Internet
Preamble to this article, whose initial version was published in October 2019.
I often refer in this text to Louis Pouzin, a French engineer who played a major role in the development of the Internet. With his team on the Cyclades project in the early 1970s, he invented the “datagram”, which lies at the deepest heart of the “engine” that propagates “data packets” across the network under the TCP/IP protocol. All connoisseurs recognize him as its inspirer, and he was decorated by the Queen of England in 2013 as such. In a conference held on May 13, 2020 (at minute 59), Louis Pouzin did me the honor of quoting this article, describing it as “very well done, very readable”:
A brick for European digital sovereignty?
The year 2019 marks:
- 50 years since the first transmission of a message between two computers, celebrated on October 29 at the initiative of the SIF, CNRS/INS2I, INRIA and CNAM, in the presence, notably, of Louis Pouzin,
- 45 years (March 1974) since Louis Pouzin’s founding proposal, IFIP W.G. 6.1 General Note 60 “A proposal for interconnecting packet switching networks” (INWG-60), matured in the Cyclades project,
- 40 years since the stabilization of the first versions of the TCP (Transmission Control Protocol) and IP (Internet Protocol) protocols at the heart of the Internet, successor of the Arpanet network, incorporating the concept of the datagram,
- 30 years of the WWW, “invented” at CERN.
With the strong support of the American authorities, the “information superhighways” were deployed at lightning speed, and billions of people are now “connected”, whether from their workstation (PC) or from the smartphone launched by Apple (invented and developed by a Frenchman, Jean-Marie Hullot, who recently passed away).
As new extensions of this network’s reach are envisaged, with multitudes of connected “IoT” objects as close as possible to our activities (smart cities, connected and autonomous cars, telemedicine and, deep within our industries, the architectures of the Factory of the Future), can we let our economy and our culture rely on this single protocol?
Louis Pouzin, one of the fathers of the Internet, inventor of the datagram with his team from the Cyclades project in the 1970s at the IRIA (which was to become INRIA), delivers an unequivocal judgment: “The Internet is built on a swamp. The communication protocol used today provides no protection. It’s a prototype that should have been rebuilt a long time ago” (interview in L’Express, February 15, 2015). The principle of the datagram was taken up by Vinton G. Cerf and Robert (Bob) Kahn in the TCP/IP protocol.
“Congenital” weaknesses:
While having a layered connection model based on the almost universal TCP/IP protocols is certainly a considerable advantage, one that has enabled its massive deployment, it nevertheless has some weaknesses directly related to its designers’ choices. Here are the most critical ones, whose importance everyone can appreciate:
- Security, in particular confidentiality and integrity, was not taken into account “by design”; patches are necessary to, for example, prevent intrusions and to encrypt messages so that they cannot easily be captured, “listened to” or modified. In the early 2000s, the European automotive industry developed the private network ENX (European Network eXchange) to support the exchange of confidential information between manufacturers and suppliers, particularly for engineering and the co-design of new models.
- Identifying a process (an application such as the browser installed on a PC or smartphone) by a single “physical” address is difficult to reconcile with the native operation of mobile terminals, which connect successively through separate relays.
- The address is potentially known to everyone and can be used to solicit application servers, fraudulently enter systems and corrupt them. RFC 1122 states: “The Internet architecture generally provides little protection against spoofing of IP source addresses, so any security mechanism that is based upon verifying the IP source address of a datagram should be treated with suspicion. However, in restricted environments some source-address checking may be possible. For example, there might be a secure LAN whose gateway to the rest of the Internet discarded any incoming datagram with a source address that spoofed the LAN address. In this case, a host on the LAN could use the source address to test for local vs. remote source. This problem is complicated by source routing, and some have suggested that source-routed datagram forwarding by hosts (see Section 3.3.5) should be outlawed for security reasons.”
- The protocol does not adapt well to specific data transfer needs such as voice (IP phone), radio broadcasting, video streaming, and so on.
- The choice of “best-effort” quality does not make it possible to design new services offering guaranteed quality of service (at an appropriate cost).
- The United States seeks to control the allocation of addresses and, via ICANN, the allocation of domain names and addresses (www.xxxx.yyy). This “organization” earns significant revenue, but there are alternatives, such as the open roots proposed by Louis Pouzin with the Openroot offer.
Network specialists have been aware of these shortcomings for many years, and work is underway to define the operating modes of an Internet of the future.
The IETF (Internet Engineering Task Force), which develops Internet standards such as TCP/IP, published the IPv4 version of the IP protocol in 1981. Since the 1990s, the IETF has been working to evolve this protocol in order to remedy some of its congenital weaknesses and limitations. The specifications of the new protocol, named IPv6, were published in 1998 and were standardized only in 2017, almost 20 years later!
However, the deployment of IPv6 faces serious difficulties: equipment compatibility is problematic. Moreover, China, which is investing heavily in IPv6 (particularly through Huawei), may challenge Europe’s digital sovereignty. Shouldn’t we therefore approach the subject from a “back to basics” perspective?
This is the approach that John Day and Louis Pouzin, two Internet veterans, have been proposing for several years.
For John Day, a pioneer of Arpanet, the ancestor of the Internet, “IPv6 is only a poor palliative for the known limits of the TCP/IP protocols”; he proposed an innovative approach in his 2007 book “Patterns in Network Architecture: A Return to Fundamentals”.
For Louis Pouzin “Everyone knows that the current TCP/IP communication protocol is obsolete and cannot be improved. The only thing you can do is to keep tinkering with it, making it more and more complicated and unstable”.
In the article entitled “Moving beyond TCP/IP”, from April 2010, Fred Goldstein and John Day list them: “IP doesn’t handle addressing or multi-homing well. […] TCP and IP have been separated in the wrong direction. […] IP lacks an addressing architecture. […] IP is overloaded by local peering. […] and finally, IP is poorly adapted for streaming”.
And in “is internet an unfinished demo ?”: “The main issues of todayʼs Internet are the inability to provide security, multi-homing, and Quality of Service, the address space exhaustion, the complexity in providing mobility, the increasing size of the router tables and the poor utilization of available resources”.
Louis Pouzin, “father” of the world-renowned “datagram”:
Louis Pouzin has a certain “authority” in the field of networks for having brought, with his team from the Cyclades project in the 1970s, considerable advances in computer network architectures.
The best known, but not the only one, is the datagram, which opened an innovative path in packet transmission networks, imagined in the 1960s by Paul Baran and, independently, Donald Davies, by proposing an alternative to “switched” networks. The latter extended the telephone protocol model by establishing, at the opening of the session, a continuous path, a “route” through the topology of the network of switches (telephone exchanges). In this model, the data packets follow one another in the order of transmission, and operation is based on two strict rules:
- The continuity of the links between the network arcs selected to establish the “route” between transmitter and receiver is maintained throughout the session,
- Packets follow one another without any loss; each segment of the route must strictly ensure this.
These requirements have significant consequences for the resource consumption of the algorithms that enforce them, and the reservation of bandwidth (a precious resource) on the mobilized segments is significantly increased.
By contrast, the datagram concept frees itself from this constraint of a path pre-established at the opening of the session, by allowing the different packets to follow distinct paths, which was very useful in a Cold War context, where a particular segment of the data path could be destroyed by an attack. This mode of operation implies two consequences to be taken into account in the protocol to be invented:
- Packets, following generally different paths, can arrive at the recipient out of order, so they must be numbered at the source and put back in order at the destination (TCP protocol),
- Some packets may not arrive at their destination within an acceptable time, or may get “lost” along the way, so a mechanism must be provided to request re-emission.
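These two consequences can be illustrated with a minimal sketch, in the spirit of TCP but not its actual implementation: the receiver buffers out-of-order packets, delivers any in-order run, and reports the gaps so the sender can re-emit the missing packets. The `Receiver` class and its method names are invented for illustration.

```python
# Hypothetical sketch of receiver-side reordering: packets carry a
# sequence number; the receiver delivers them in order and reports
# missing numbers so the sender can retransmit them.

class Receiver:
    def __init__(self):
        self.expected = 0   # next sequence number to deliver
        self.buffer = {}    # out-of-order packets, keyed by sequence number
        self.delivered = [] # payloads handed to the application, in order

    def on_packet(self, seq, payload):
        """Buffer the packet, then deliver any contiguous in-order run."""
        if seq >= self.expected:
            self.buffer[seq] = payload
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

    def missing(self, highest_seen):
        """Sequence numbers to ask the sender to re-emit."""
        return [s for s in range(self.expected, highest_seen + 1)
                if s not in self.buffer]

# Packets arrive out of order over distinct paths; packet 1 is lost.
rx = Receiver()
for seq, data in [(2, "c"), (0, "a"), (3, "d")]:
    rx.on_packet(seq, data)
print(rx.delivered)   # ['a']  -- delivery stalls at the gap
print(rx.missing(3))  # [1]   -- request re-emission of packet 1
rx.on_packet(1, "b")
print(rx.delivered)   # ['a', 'b', 'c', 'd']
```

Delivery to the application stalls at the first gap: this is exactly the head-of-line behavior that the numbering-plus-re-emission mechanism implies.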
This conceptual “break” has some advantages:
- It considerably simplifies the role of the intermediate nodes, the routers, which now have, basically, only one mission: to propagate the packets addressed to them toward their ultimate destination,
- The bandwidth of the network links is much better used and better shared between flows,
- The “intelligence” of the protocol is concentrated mainly at the two ends, in the immediate vicinity of the communicating application processes.
It is easy to imagine that this approach, coming rather from IT models, was difficult to accept for engineers trained at length in the principles of telephony. As “telecom” engineers held the posts in the telephone companies, which had a monopoly on infrastructure, these innovative ideas were rejected in France in favor of a standardized “switched” X.25 digital network. It had its hour of glory with the Numéris network marketed by France Télécom’s Transpac subsidiary, and it enabled the deployment of the Minitel.
The concept was, however, adopted by the Americans, at the heart of the TCP/IP protocol, which was in the process of gestation following Arpanet. Louis Pouzin was one of the rare engineers decorated by the Queen of England in 2013 for his contributions to the development of the Internet!
A possible alternative, a clean break: RINA?
The founding idea of the approach is to reduce complexity by going back to the fundamentals of network needs, affirming that compliance with more comprehensive requirements, including security and performance, can be achieved by defining more formal, more systematic and easily verifiable construction rules. It is also a matter of ensuring that any components that fail are quickly identifiable, in effect turning one’s back on the accumulation of often-incompatible patches.
Day rediscovered Bob Metcalfe’s (inventor of Ethernet, the packet-switched local area network protocol) 1972 slogan that the fundamental function of the network is “to provide communication between processes, and only that” and made it his own, highlighting it in his founding text published in August 2008.
“Networking is IPC”: A Guiding Principle to a Better Internet
Networking is inter-process communication.—Robert Metcalfe, 1972
Position Paper, Technical Report BUCS-TR-2008-019
The key point of the conceptual model developed is that it can be used, recursively, in successive nested levels to “accommodate” particular modes of communication linked to differentiated technologies, preserving only the “behavior” properties that ensure consistency with the higher levels: a rule of ultimate end-to-end consistency.
In this respect, it differs from the ISO seven-layer model and from the Internet’s four-layer model. It should be noted that the TCP/IP architecture, defined before the publication of the ISO model, does not respect a strict distribution of functions between the layers, which explains some of the protocol’s congenital flaws.
John Day also revisits a deeply structural choice of TCP/IP: basing its exchanges on the identification of communicating processes by their physical addresses, which are, moreover, universal and “public”.
By contrast, RINA distinguishes sub-spaces in which the communicating processes are identified by a name (a property unknown to TCP/IP) and have physical addresses that:
- Are limited in scope to the sub-space, to the perimeter concerned, a bit like local variables in a block of a structured program,
- Can map multiple physical addresses to one name, and thus to one process.
This property is very important for integrating, natively, the operation of mobile terminals, typically cell phones that can hop successively between several relays, or Wi-Fi, Bluetooth and IoT links that can be established with several relay routers.
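A minimal sketch can make this name-versus-address distinction concrete. It is a simplification in the spirit of RINA, not an actual RINA implementation; the directory structure, the process name and the address strings below are all invented for illustration. The stable name survives while the addresses, scoped to each sub-space, are rebound as the terminal moves.

```python
# Hypothetical sketch of RINA-style naming: a process is identified by a
# stable name; the name maps to one or more addresses whose scope is
# local to a sub-space. The mapping can change as a mobile terminal
# moves, without the name (hence the ongoing conversation) changing.

directory = {
    # process name       addresses valid inside a given sub-space
    "video-call-app": {"wifi-space": ["ap-17"], "4g-space": ["cell-203"]},
}

def resolve(name):
    """All addresses currently reachable for a named process."""
    return [addr for addrs in directory.get(name, {}).values()
            for addr in addrs]

def handover(name, space, new_addresses):
    """The terminal moved: rebind the name inside one sub-space only."""
    directory[name][space] = new_addresses

print(resolve("video-call-app"))  # ['ap-17', 'cell-203']
handover("video-call-app", "wifi-space", ["ap-42"])
print(resolve("video-call-app"))  # ['ap-42', 'cell-203']
```

The handover touches only the Wi-Fi binding; the 4G address and, above all, the process name are untouched, which is what makes native mobility and multi-homing possible.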
In addition, each “segment” of the communication can “negotiate” its level of service, including stricter modes than “best effort”, for example by prohibiting message loss. The architecture thus integrates, by design, the behaviors of both the traditional Internet and switched networks.
The RINA project thus proposes a true architecture for communication networks between processes (Inter-Process Communication, IPC). The conceptual component implemented in this distributed architecture is called the DIF (Distributed Inter-Process-Communication Facility), to which the set of behavior rules is attached.
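The recursion described above can also be sketched: every layer is the same kind of object, a DIF, providing IPC to the layer above and relying on a lower DIF (or a physical medium) for transport. This is a toy illustration of the principle only; the class shape and the DIF names are invented, and real DIFs carry far more state (policies, flow allocation, security).

```python
# Hypothetical sketch of RINA's recursion: one mechanism repeated at
# every level, rather than a fixed stack of distinct protocols.

class DIF:
    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower  # the DIF this one is built on, if any

    def send(self, payload):
        """Wrap the payload and recurse down toward the physical medium."""
        wrapped = f"[{self.name}: {payload}]"
        if self.lower is None:
            return wrapped  # reached the wire
        return self.lower.send(wrapped)

# Three nested levels: the same class, instantiated three times.
ethernet = DIF("ethernet-dif")
backbone = DIF("backbone-dif", lower=ethernet)
app_dif = DIF("application-dif", lower=backbone)

print(app_dif.send("hello"))
# [ethernet-dif: [backbone-dif: [application-dif: hello]]]
```

The nesting in the output mirrors the idea that a DIF only needs the “behavior” guarantees of the DIF below it, whatever technology that lower level uses.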
Another article, more technical, will soon be proposed to detail, translated into French, these fundamental concepts of the proposed architecture. It will detail a few points by illustrating the modes of operation to meet some specific needs of today and, even more, of tomorrow.
Some strategic areas for the application of this architecture:
Many areas that condition our daily lives are undergoing profound change under the pressure of “digital” technologies, of which communication and connectivity are the strongest lever, offering extraordinary possibilities and, at the same time, developing major risks that go far beyond what the authors of the novels of anticipation had imagined: George Orwell’s 1984 and Aldous Huxley’s Brave New World.
Telecommunications, Wi-Fi, 5G …
Mobile telephony, voice over IP, streaming, video, television on demand, Internet rebroadcasting on all types of terminals, including our cell phones, and Wi-Fi networks have invaded our daily lives. We are constantly “connected”, and a new syndrome is emerging: we are tracked throughout our activities, without even being able to refuse it on smartphones running Google’s Android OS!
And now 5G, of which China’s Huawei (a giant company that started from scratch while “our” great Alcatel, now almost disappeared, was at its zenith) is one of the few specialists in the world. Suspicions of potential “back doors” allowing the Chinese government to gather confidential information are on the agenda of many inquiries, and the USA has excluded this company from its suppliers.
“IoT” Internet of Things
In recent years, the idea of extending Internet-type connections to a multitude of objects has been developing. These objects are, and will increasingly be, able to capture various information as close as possible to our everyday life and to transmit it to “big data” applications, most often imagined “in the cloud” and “digested” by Artificial Intelligence.
Imagination is limitless, and many small, “start-up” teams are proposing clever devices that can be integrated into our equipment, based on connecting our homes to the Internet via the “Box” offers of ISPs (Internet Service Providers), which act as routers.
Access to information is most often provided through an application on a smartphone. The problem is that these small teams, with limited means, are not sufficiently aware of security issues and that, constrained by very limited budgets and very (too?) short deadlines, they often provide only a “basic” Internet connection.
They therefore offer an easy entry point for hackers. Actions can then be launched by impersonating the owner, with potentially very serious consequences, allowing, for example, intrusion without breaking and entering after neutralizing a surveillance system, or ordering the opening of a “modern” lock.
These objects can also, quite easily, be infected by malicious programs that can be exploited for various fraudulent actions, including server overload attacks causing denial of service on the targeted applications.
A typical case has been observed with a network of connected cameras.
Factory of the Future, IIoT
The application of these technologies is envisaged for the rapidly changing field of industry. Action plans such as the French Industry of the Future and the German Industrie 4.0 initiative propose Internet-based architectures to exploit information captured deep within production processes, “fed back” into the cloud for “big data” analysis. New applications are being considered, heralding advances such as preventive maintenance.
This is often referred to as the IIoT (Industrial Internet of Things).
Communicating cars (with the infrastructure and between vehicles) and autonomous cars
Innovative developments are underway on the new generations of automobiles that will make intensive use of communications networks.
For example, obtaining information from the infrastructure: signalling adjusted to traffic conditions in near real time (speed limits, announcements of work zones, road congestion due to accidents). Also, in urban traffic, anticipating changes in traffic lights makes it possible to adjust deceleration.
Communication between vehicles gives following vehicles on the lane advance information on the traffic conditions observed by the preceding vehicles, reducing the surprise effect of emergency braking and smoothing the flow to limit “accordion” effects.
Finally, the possibility of updating the numerous onboard software “on the fly” to correct programming errors that will remain despite the progress that can be expected in formal methods of specification, programming and testing.
It is easy to imagine the level of robustness that these communications will have to respect in order to prevent updates from being compromised by malicious intruders who would usurp the identity of the manufacturer’s site and introduce software that could take control of certain vehicles and give them aberrant movement orders! The RINA approach is probably a way forward in this essential direction.
Other related areas:
The organization of smart cities, transport, energy networks and water raises similar, interrelated issues and would deserve comparable analyses.
Finally, critical areas in terms of sovereignty could be candidates for an architectural review in a RINA approach, in particular:
- healthcare applications, including the personal medical record,
- a reference base of digital identities, and
- a sovereign secure cloud that protects us from the US CLOUD Act.
Initiatives and support
For some years now, the European Commission has been supporting projects that develop proposals in a very concrete way.
It is regrettable that the French authorities are not more committed to supporting this project by listening, this time, finally, to the disinterested advice of Louis Pouzin. His recent promotion to officer of the Legion of Honor is a sign of recognition, but it is insufficient.
Reclaiming digital sovereignty is within our grasp
At a time when Europe is becoming increasingly aware of the American hegemony it has allowed to develop over digital technologies, and when confidence in these technologies poses a real strategic societal problem, the proposals of the RINA project could delineate a “European digital space”, connectable to today’s Internet, enabling more rigorous enforcement of the rules that European citizens are calling for.
Thus, for example, applications that really respect the rules of the GDPR could be identified as such and would be protected from extra-territorial US laws such as the CLOUD Act.
Armenia has chosen the RINA architecture to structure its sovereign applications. Will France be late for a long time?
“The perfection of means and the confusion of goals seem to characterize our era.” (Albert Einstein)
This text benefited from the careful rereading of Gérard Peliks, one of France’s cybersecurity specialists, whom I thank.