Author | Wang Qing Editor | Xin Xiaoliang
Produced by | "New Programmer" Editorial Department
Since the rise of the Internet in the 1990s, people have looked forward to being able to share computing resources for a price. From HTTP to Web services, and from grid computing to P2P computing, we were always one step away from cloud technology. It was not until the rise of virtualization that the explosion of cloud computing was truly ushered in.
This article reviews how the development of virtualization technology in the early 21st century gave birth to cloud computing infrastructure and the corresponding open source project, OpenStack. It then reflects on and summarizes the various routes cloud computing has taken and the cloud-native technology that sprouted later. Finally, it looks ahead to future trends and application scenarios of open source cloud computing software.

Wang Qing, Director of Intel Cloud Infrastructure Software, has served as an individual director of the Open Infrastructure Foundation for eight consecutive years since 2015 and as chairman of the SODA sub-foundation alliance committee under the Linux Foundation. He is also a member of the Mulan Open Source Community Technical Committee and a standing committee member of the Open Source Development Committee of the China Computer Federation (CCF).

Virtualization gave birth to cloud computing infrastructure
In 1959, at the International Conference on Information Processing, a paper titled "Time Sharing in Large Fast Computers" first proposed the concept of "virtualization", and the development of virtualization began from there. IBM subsequently launched a time-sharing system that allowed multiple users to remotely share the usage time of the same high-performance computer, which is considered the most primitive form of virtualization technology.
In 1972, IBM released virtual machine (VM) technology for its mainframes. It could adjust and allocate resources according to users' dynamically changing application needs, so that expensive mainframe resources were utilized as fully as possible; virtualization thus entered the mainframe era. The IBM System/370 series used a program called the Virtual Machine Monitor (VMM) to create many virtual machine instances on physical hardware, each able to run an independent operating system, which made virtual machines popular.
In 1998, the debut of VMware opened the x86 era of virtualization, and the development of virtualization entered a period of explosive growth. In 2003, the open source virtualization management software Xen was released. In November 2005, Intel released the new Xeon MP 7000 series processors, and the first hardware-assisted virtualization technology in the history of the x86 platform, Intel VT, was born. In the following years, AMD, Oracle, Red Hat, Novell, Citrix, Cisco, HP and others successively entered the virtualization market.
Virtualization technology is the cornerstone of cloud computing. It is precisely the maturity and popularization of virtualization that created the conditions for the subsequent vigorous development of cloud computing. In March 2006, Amazon launched the Elastic Compute Cloud (EC2), which charges based on the resources users consume, ushering in the first year of commercial cloud computing. Since then, major companies such as Google, IBM, Yahoo, Intel, and HP have flocked to the cloud computing field. In July 2010, NASA and Rackspace, together with Intel, AMD, Dell and other companies, jointly announced the OpenStack open source project, opening the era of open source infrastructure as a service (IaaS).

OpenStack, the open source infrastructure pioneer
In the field of open source infrastructure, OpenStack is the unquestionable leader. As an IaaS cloud platform, OpenStack is responsible, on the one hand, for interacting with the VMMs running on physical nodes to manage and control various hardware resources; on the other hand, it provides users with virtual machines that meet their requirements.
Within OpenStack there are six core components: compute, object storage, identity (authentication), block storage, networking, and image service. The compute component provides virtual machine services on demand, such as creating virtual machines or live-migrating them; the object storage component allows objects (which can also be thought of as files) to be stored and retrieved, managing large amounts of unstructured data at low cost through a RESTful API; the identity component provides authentication and authorization for all OpenStack services, tracks users and their permissions, and provides a list of available services and APIs; the block storage component, as the name implies, provides block storage services; the networking component provides network connectivity services, allowing users to create their own virtual networks and attach interfaces to various network devices; the image service component mainly provides storage, query, and retrieval services for virtual machine images, offering a catalog and repository of virtual disk images.
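To make this division of labor concrete, the following minimal sketch drives the identity, image, networking, and compute services through the openstacksdk Python library to boot a virtual machine. It is only an illustration: the cloud name, image, flavor, and network names are hypothetical placeholders, and credentials are assumed to be configured in a local clouds.yaml.

```python
# Minimal sketch: boot a VM via OpenStack's services using openstacksdk.
# Assumes a cloud entry named "my-cloud" exists in clouds.yaml and that the
# named image, flavor, and network already exist (all placeholder names).
import openstack

# Authenticate through the identity service.
conn = openstack.connect(cloud="my-cloud")

# Look up resources from the image, compute, and networking services.
image = conn.image.find_image("cirros-0.6.2")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Ask the compute service to create a virtual machine on that network.
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the VM reaches ACTIVE state, then report it.
server = conn.compute.wait_for_server(server)
print(server.status)
```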
OpenStack is backed by a community that closely unites many users and developers. The community has more than 100,000 members from 195 countries and regions, and nearly 700 corporate members. The community always follows the "four opens": open source, open design, open development, and open community. Developers can discuss which design is optimal, and operators can provide feedback and requirements from their experience running OpenStack.
In the OpenStack community, various channels are open for discussion with other members, such as mailing lists, IRC channels, project meetings, regular Summits and PTGs (Project Teams Gatherings), as well as meetups held from time to time in different places. After more than twelve years of development, the projects created at OpenStack's founding have become very mature. The community now focuses more on improving system stability, enhancing existing functionality, and designing solutions for new scenarios and challenges such as data processing units (DPUs) and computing force networks (CFNs).

The network and storage of cloud computing infrastructure
Cloud infrastructure mainly revolves around three shared resources: compute, network, and storage. As two of these three important components, network and storage attracted community attention from the very beginning of cloud computing, and many projects that drew industry-wide interest have emerged.
In networking, SDN has long been recognized by the industry as the direction of the future. However, how to unify the standards of the various manufacturers and even academia has always been a difficult problem. In 2013, a large number of traditional IT equipment manufacturers joined forces with several software companies to launch the OpenDaylight (ODL) project. The project has developed to this day and has become one of the most influential open source SDN solutions. Each ODL release is named after an element of the periodic table, lending it a distinctly academic flavor.
Technically, the ODL project also has its own distinctive features. First, the project is built on the Java platform and makes full use of Java's mature dynamic module technology (OSGi). Based on a microservice-style architecture, it integrates various plug-ins flexibly and efficiently to support equipment from multiple vendors and a variety of advanced network services. For the northbound API, ODL provides not only a RESTful API but also an OSGi interface for in-process function calls, to suit different northbound integration approaches. For the southbound API, convenient multi-protocol support and multi-vendor device adaptation are fully considered. In addition, integration with OpenStack's Neutron module has always been one of the ODL project's technical priorities.
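As a hedged sketch of what the RESTful northbound interface looks like in practice, the snippet below queries ODL's operational network topology with Python. The controller address and the default admin/admin credentials are assumptions, and the exact resource paths can vary between ODL releases.

```python
# Hedged sketch: query OpenDaylight's northbound RESTCONF API for the
# operational topology learned by southbound plug-ins. Address, credentials,
# and path are assumptions that depend on the local deployment and ODL release.
import requests

ODL = "http://127.0.0.1:8181"   # assumed controller address
AUTH = ("admin", "admin")        # assumed default credentials

resp = requests.get(
    f"{ODL}/restconf/operational/network-topology:network-topology",
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()

# Print each topology and how many nodes it currently contains.
for topo in resp.json()["network-topology"]["topology"]:
    print(topo.get("topology-id"), len(topo.get("node", [])), "nodes")
```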
In storage, software-defined storage (SDS) solutions are likewise a topic that cannot be bypassed. The Ceph project is the most mainstream choice among open source distributed storage solutions, especially when deployed together with OpenStack.
Ceph was born as an academic project. In 2014, after the company founded by Ceph's original author was acquired by Red Hat, Red Hat became the main contributor to and technical leader of the Ceph project. After 2015, Ceph formally moved to community governance, and the community board initially absorbed eight member companies. In 2018, the Ceph Foundation was established, bringing in companies such as Amihan, Canonical, China Mobile, DigitalOcean, Intel, ProphetStor Data Services, OVH, Red Hat, SoftIron, SUSE, Western Digital, XSKY and ZTE. Hosted by the Linux Foundation, it provides a neutral home for the collaboration and growth of the Ceph community. The Ceph Foundation's board is not responsible for Ceph's technical governance and has no direct control over it; development and engineering activities are managed through traditional open source processes and overseen by Ceph's technical leadership team.
The Ceph project provides a comprehensive set of distributed storage interfaces covering all three cloud usage scenarios: object storage, block storage, and file system. In OpenStack cloud environments, since distributed block storage solutions often integrate Ceph's RBD technology, many users also prefer Ceph's object storage service when choosing an object storage solution.
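The block interface mentioned above can be exercised directly from Python through the official rados and rbd bindings (python3-rados, python3-rbd). The sketch below creates a small RBD image and writes to it; the pool name "rbd", the image name, and the configuration path are assumptions, and a reachable Ceph cluster is required.

```python
# Minimal sketch of Ceph's block interface via the official Python bindings.
# Pool "rbd", image name, and config path are placeholder assumptions.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")      # open an I/O context on the pool
    try:
        rbd.RBD().create(ioctx, "demo-image", 1 * 1024**3)  # 1 GiB image
        image = rbd.Image(ioctx, "demo-image")
        try:
            image.write(b"hello ceph", 0)  # write a payload at offset 0
            print(image.size())            # report the image size in bytes
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```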
For the requirements of cloud computing customers' production environments, the pure community version of Ceph still has a certain gap in performance. There are many commercial products, both in China and abroad, customized from the Ceph community version, which offer substantial improvements in performance optimization, stability, and user interface.

The emergence of Docker and the rise of cloud native
While open source infrastructure was developing, another path of development and change was unfolding. In 2013, the Docker project was unveiled and went on to have a profound impact on the computing industry. Before Docker was born, cloud computing infrastructure mainly revolved around virtual machines, and few people thought about applying container technology to cloud computing scenarios. Moreover, whether on Linux, Solaris, or FreeBSD, the basic technologies underlying containers had long matured, but only Docker truly grasped the pain points and designed a friendly interface that directly met users' expectations, especially those of software developers. As a result, Docker's popularity first took hold in the DevOps field and then gradually spread to other areas.
Compared with virtualization, container technology represented by Docker achieves application isolation with essentially no performance loss, which is a very clear technical advantage. Therefore, in cloud computing scenarios that focus on computing performance, some users prefer containers as the underlying basic technology. Of course, security is the weak point of container technology, and many security isolation enhancement technologies have since emerged to make up for potential security risks from different angles.
In 2014, Google open-sourced the container orchestration project Kubernetes. Kubernetes positions itself as management software for container cloud environments and fully considers the characteristics of container (mainly Docker) technology in its overall system design, which is one of the differences between it and its main competitor at the time, Mesos.
Kubernetes uses container-based elements in its resource definitions and introduces the concept of the "Pod": multiple container instances that together run one application are treated as a single management and scheduling unit, namely a Pod. A Pod runs on a single Minion, where a Minion can be understood as a host; a cluster has multiple Minions, which are managed uniformly by the central control node, the Master. On top of the Pod, Kubernetes abstracts the concept of a Service, which represents a group of Pods working together to provide a service.
In short, Kubernetes introduces many new abstractions, and the problems its cluster management must solve, such as scheduling, high availability, and online upgrades, are all designed around them.
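A brief sketch of the Pod and Service abstractions using the official Kubernetes Python client is shown below. The image, labels, and namespace are illustrative assumptions, and kubectl-style credentials are expected in the local kubeconfig.

```python
# Sketch: create a Pod and a Service that selects it, via the official
# Kubernetes Python client. Names, labels, and image are placeholder choices.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

labels = {"app": "demo"}

# A Pod groups one or more containers scheduled together onto a single node.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-pod", labels=labels),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="web", image="nginx:1.25")]
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)

# A Service abstracts the Pods matching a label selector behind one endpoint.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-svc"),
    spec=client.V1ServiceSpec(
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)
core.create_namespaced_service(namespace="default", body=svc)
```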
In 2015, Google released Kubernetes 1.0 and revealed that its adopters included well-known companies such as eBay, Samsung, and Box. It was recognized that containers were changing the way enterprises deploy and manage applications, but the industry was still in the early stages of cloud-native and microservice applications. In July of that year, Google, together with the Linux Foundation and industry partners including Docker, IBM, VMware, Intel, Cisco, Joyent, CoreOS, Mesosphere, Univa, and Red Hat, established the Cloud Native Computing Foundation (CNCF) to promote the development of container-based cloud computing.
CNCF has two purposes: one is to work with the open source community and partners to steer the future development of Kubernetes, and the other is to develop new software to make the entire container toolset more robust. As the starting point of CNCF's growth, Kubernetes' subsequent open source development is governed by CNCF to ensure that it can run well on any infrastructure (public cloud, private cloud, or bare metal). CNCF's technical committee drives the open source and partner communities to jointly develop the container toolset, and also evaluates other projects brought into the foundation to ensure that the whole toolset advances in step.
The concept of cloud native is understood differently by the major vendors, and different communities define it slightly differently as well. Given the CNCF's position in the cloud native field, we quote its definition and understanding: cloud native uses an open source software stack to solve three problems: first, how to split applications and services into multiple microservices; second, how to package each of these parts into containers; and finally, how to dynamically orchestrate these containers to optimize overall system resources.
The CNCF maintains a cloud native landscape that covers most of the influential cloud-native open source projects. In the landscape, cloud native takes containers as its core technology and is divided into two layers: runtime and orchestration. The runtime is responsible for a container's compute, storage, and networking; orchestration is responsible for scheduling, service discovery, and resource management of container clusters. Below them sit infrastructure and configuration management: containers can run on various systems, including public clouds, private clouds, and physical machines, and they rely on automated deployment tools, container image tools, security tools, and other operations systems. Above sits the application layer on top of the container platform, similar to a mobile phone app store, divided into several categories: databases and data analytics, stream processing, SCM tools, CI/CD, and application definition. Each company will build a different application stack according to its business needs.
The landscape also contains two further areas: platforms, and observability and analysis. A platform refers to platform-level services built on container technology, such as the common PaaS and Serverless offerings. Observability and analysis cover the operation and maintenance of the container platform, presenting the current running state of the container cluster through logging and monitoring to facilitate analysis and debugging.
To briefly summarize: cloud native comprises two major elements, containers and microservices, covering projects such as Kubernetes, containerd, CRI-O, Istio, Envoy, Helm, Prometheus, and etcd. Major vendors can freely and openly discuss technology and submit code in the community around cloud native projects. For example, based on its hardware chips and targeting compute, storage, and network resources, Intel has not only added resource management and hardware capability discovery functions to Kubernetes through CRI-RM, NFD, NPD, CSI, and CNI, but has also added support for accelerator devices such as GPU, QAT, and FPGA. In the service mesh, Intel also continuously optimizes mesh performance and strengthens confidential computing security around Istio/Envoy.
As an open source community, CNCF has likewise established channels such as mailing lists and Slack, set up Special Interest Groups (SIGs) and Technical Advisory Groups (TAGs) in different technical areas, and holds various project technical meetings to encourage community members to discuss technical issues and decide on technical solutions openly and transparently. In addition, the CNCF community holds regular KubeCon and CloudNativeCon conferences in North America, Europe, and China, giving community members face-to-face opportunities to promote advanced technical solutions, collect feedback, and communicate in greater depth.

Future trends in open source cloud computing software
As cloud infrastructure, networking, and storage technologies shift toward cloud native technologies represented by containers and microservices, the development of cloud computing will show new trends in the following areas, bringing new opportunities and challenges.
Edge computing: In a sense, edge computing can be considered an extension and expansion of cloud computing. To this day, tools and architectures for building distributed edge computing infrastructure are still in their infancy, and there remain problems to be solved in the development of edge computing technology. For example, in response to the complex network environments and difficulties of automated deployment in edge computing scenarios, Chinese vendors have launched open source projects such as KubeEdge, OpenYurt, and SuperEdge in the CNCF community, achieving cloud-edge collaboration through functions such as edge autonomy, cloud-edge traffic governance, and edge device management. I believe that with the future popularization of 5G and mobile Internet technologies, major operators and cloud service providers will quickly roll out edge computing platforms across society.
Fusion of new virtualization and containers: With the emergence of requirements from various scenarios, virtualization and container technology are continuously converging and innovating. For example, the share of microVMs in deployments is gradually increasing; microVMs combine fast startup with security isolation. The fusion of virtualization and container technology has become an important trend in the future of cloud computing. OCI uses standard definitions to manage container lifecycles in a unified way, and under this standard we have seen container runtimes in various forms, such as Kata Containers, Firecracker, gVisor, and Inclavare Containers.
In addition, thanks to Rust's performance advantages, many projects have been rewritten in Rust. In the container runtime space, RunK implements a standard OCI container runtime in Rust that can create and run containers directly on the host; in this way, RunK starts faster than the usual RunC and consumes less memory.
Finally, WebAssembly, as a new generation of portable, lightweight application virtual machine, has broad application prospects in scenarios such as the Internet of Things, edge computing, and blockchain. WASM/WASI will become a cross-platform container implementation technology and is bound to become a development trend that cannot be ignored.
Confidential computing: As we all know, data has corresponding encryption mechanisms that effectively protect it both in its storage state [1] and in its transmission state [2], ensuring its confidentiality and integrity, while protection of data in its usage state [3] is a gap that urgently needs to be filled. Confidential computing protects data in use by performing computation inside a hardware-based trusted execution environment (TEE), such as Intel SGX and TDX, Arm TrustZone, AMD SEV/SEV-ES/SEV-SNP, and RISC-V Keystone, providing guarantees for the secure use of data in cloud-native environments. Currently there are open source confidential computing projects such as Inclavare Containers and Confidential Containers in the CNCF community, and confidential computing has become a new trend in cloud-native security.
Function as a Service (FaaS): Serverless computing, or Serverless for short, allows users to run applications and services without having to think about servers. It is an execution model in which the cloud provider executes a piece of code by dynamically allocating resources and charges only for the resources consumed to run that code. The code usually runs in stateless containers and can be triggered by various events, including HTTP requests, database events, queue services, monitoring alerts, file uploads, and scheduled events. Code sent to the cloud is usually executed in the form of a function, so Serverless is sometimes also called FaaS. We believe FaaS will become increasingly popular, especially in fields such as artificial intelligence and autonomous driving, for which it is undoubtedly well suited. Beyond its advantages in running cost and scalability, FaaS lets developers focus on business logic without worrying about the underlying resources, improving programming efficiency.
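As a concrete illustration of this programming model, the sketch below shows a single stateless Python handler in the style of the common AWS Lambda handler signature; the event payload and the local test harness are made-up examples, not any particular provider's actual interface.

```python
# Hedged illustration of the FaaS model: a stateless handler invoked per event.
# The event shape below is a made-up example; the platform, not this code,
# allocates resources, runs one invocation, and bills for the execution time.
import json


def handler(event, context):
    """Business logic expressed as a single stateless function."""
    name = event.get("name", "world")   # payload supplied by some trigger
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }


if __name__ == "__main__":
    # Local smoke test; in production the cloud platform calls handler() directly.
    print(handler({"name": "FaaS"}, None))
```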
Data processing unit (DPU): Since around 2015, CPU frequencies have plateaued, and the marginal cost for data centers to increase computing power has risen significantly. Meanwhile, the surge in applications has led to a sharp increase in network traffic in modern data centers. To accommodate this growth, data center networks have moved toward higher bandwidth and new transport architectures. Data center computing power has hit a bottleneck and struggles to keep up with rapidly growing network transmission rates, which has stimulated demand for DPUs. On the other hand, DPUs can also offload work from the CPU, allowing the CPU to process business data more efficiently. The DPU is bound to become a new technology trend in data centers and cloud computing infrastructure.

Conclusion
"Though the mighty pass is truly as iron, today we stride over it anew." After more than ten years of rapid development, cloud computing has been widely adopted across all walks of life. It has become a basic platform supporting big data, the Internet of Things, the metaverse, and artificial intelligence, and is one of the hot technologies that has truly been fully implemented. In the future, as new technologies, new scenarios, and new demands emerge, there will be no single voice in the cloud computing market, nor a one-size-fits-all solution. The cloud will surely support these new businesses through a variety of routes, solutions, and derivative technologies, fully reflecting a landscape of "a hundred flowers blooming and a hundred schools of thought contending".
[1] Storage state: the state in which data resides statically on a storage device (data at rest);
[2] Transmission state: the state in which data is being transmitted over the network or between components of a host (data in transit);
[3] Usage state: the state in which data is being used by a running application, for example, when it is loaded into memory for computation (data in use).