Lecture Notes on Data Engineering and Communications Technologies 142

Rajkumar Buyya · Susanna Munoz Hernandez · Ram Mohan Rao Kovvur · T. Hitendra Sarma, Editors

Computational Intelligence and Data Analytics
Proceedings of ICCIDA 2022
Lecture Notes on Data Engineering
and Communications Technologies
Volume 142
Series Editor
Fatos Xhafa, Technical University of Catalonia, Barcelona, Spain
The aim of the book series is to present cutting-edge engineering approaches to data technologies and communications. It publishes the latest advances on the engineering task of building and deploying distributed, scalable, and reliable data infrastructures and communication systems.
The series has a prominent applied focus on data technologies and communications, with the aim of promoting the bridge from fundamental research on data science and networking to data engineering and communications that lead to industry products, business knowledge, and standardisation.
Indexed by SCOPUS, INSPEC, EI Compendex.
All books published in the series are submitted for consideration in Web of Science.
Editors

Rajkumar Buyya
Cloud Computing and Distributed Systems (CLOUDS) Laboratory
University of Melbourne
Melbourne, VIC, Australia

Susanna Munoz Hernandez
Computer Science School (FI)
Technical University of Madrid
Madrid, Spain

Ram Mohan Rao Kovvur
Department of Information Technology
Vasavi College of Engineering
Hyderabad, India

T. Hitendra Sarma
Department of Information Technology
Vasavi College of Engineering
Hyderabad, India
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Singapore Pte Ltd. 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface
We sincerely thank Mr. Aninda Bose and his team at Springer Nature for their strong support towards publishing this volume in the series Lecture Notes on Data Engineering and Communications Technologies (indexed by Scopus, Inspec, and Ei Compendex).
Contents
Data Analytics
Evaluating Models for Better Life Expectancy Prediction
Amit, Reshov Roy, Rajesh Tanwar, and Vikram Singh
Future Gold Price Prediction Using Ensemble Learning Techniques and Isolation Forest Algorithm
Nandipati Bhagya Lakshmi and Nagaraju Devarakonda
Second-Hand Car Price Prediction
N. Anil Kumar
About the Editors
Dr. Ram Mohan Rao Kovvur received his Ph.D. in Computer Science and Engineering from Jawaharlal Nehru Technological University (JNTU) in 2014, with a research specialization in Grid Computing. He has more than 25 years of teaching experience in various cadres and is currently Professor and Head of Information Technology at Vasavi College of Engineering, Hyderabad, Telangana, India. He has received many prestigious awards from reputed organizations. Dr. Ram Mohan Rao has published and presented more than 25 research articles in national and international journals and conferences. He obtained a grant of Rs. 19.31 lakhs from AICTE under MODROBS and established a Deep Learning Lab. As part of his research work, he established a Grid environment using the Globus Toolkit (an open-source software toolkit used for building Grid systems). He also established a Cloud Lab at VCE using the Aneka platform (US patented) of Manjrasoft Pvt Ltd. His research areas include Distributed Systems, Cloud Computing, and Data Science.
Dr. T. Hitendra Sarma received his Ph.D. in Machine Learning from JNT University Anantapur, India, in December 2013. He has more than 14 years of teaching and research experience and has served at reputed institutes in various capacities. Currently, Dr. Sarma is working as Associate Professor at Vasavi College of Engineering, Hyderabad. He has published more than 30 research articles in peer-reviewed international journals and conferences by Springer, Elsevier, and IEEE. His research interests include Machine Learning, Hyperspectral Image Processing, Artificial Neural Networks, Data Mining, and Data Science. Dr. Sarma holds a project funded by SERB, India, and is an active researcher. He has presented his research at reputed conferences such as IEEE WCCI (2016, Vancouver, Canada), IEEE CEC (2018, Rio de Janeiro, Brazil), and IEEE ICECIE (2019, Malaysia), and delivered an invited talk at the Third International Conference on Data Mining (ICDM) 2017 in Hualien, Taiwan.
Container Orchestration in Edge and Fog Computing Environments for Real-Time IoT Applications
Abstract Resource management is the key factor in fully utilizing the potential of Edge/Fog computing to execute real-time and critical IoT applications. Although some resource management frameworks exist, the majority are not designed based on distributed containerized components. Hence, they are not suitable for highly distributed and heterogeneous computing environments. Containerized resource management frameworks such as FogBus2 enable efficient distribution of the framework's components alongside the components of IoT applications. However, the management, deployment, health check, and scalability of a large number of containers are challenging issues. Several orchestration tools have been developed to manage a multitude of containers, but many of them are heavyweight and impose a high overhead, especially on resource-limited Edge/Fog nodes. Hybrid computing environments, consisting of heterogeneous Edge/Fog and/or Cloud nodes, therefore require lightweight container orchestration tools that support both resource-limited nodes at the Edge/Fog and resource-rich nodes in the Cloud. Thus, in this paper, we propose a feasible approach to building a hybrid and lightweight cluster based on K3s for the FogBus2 framework, which offers containerized resource management. This work addresses the challenge of creating lightweight computing clusters in hybrid computing environments. It also proposes three design patterns for the deployment of the FogBus2 framework in hybrid environments: (1) Host Network, (2) Proxy Server, and (3) Environment Variable. The performance evaluation shows that the proposed approach improves the response time of real-time IoT applications by up to 29% with acceptable and low overhead.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
R. Buyya et al. (eds.), Computational Intelligence and Data Analytics,
Lecture Notes on Data Engineering and Communications Technologies 142,
https://doi.org/10.1007/978-981-19-3391-2_1
1 Introduction
Fig. 1 A visualization framework showing how satellite and ground-based sensors can be fused using a distributed computing paradigm such as edge computing to provide accurate real-time information to end users
Fig. 2 A detailed system model of the integration of various sensors and their utilization in disseminating data to inform end users
traditional Cloud computing paradigm that concentrates data and processing units
in Cloud data centers [7]. The key idea behind Edge and Fog computing is to bring
Cloud-like services to the edge of the network, resulting in less application latency
and a better quality of experience for users [8, 9]. Edge computing can cope with
medium to lightweight tasks. However, when the users’ requirements consist of com-
plex and resource-hungry tasks, Edge devices are often unable to efficiently satisfy
those requirements since they have limited computing resources [7, 10]. To address
these challenges, Fog computing, also referred to as hybrid computing, is becoming
a popular solution. Figure 3 depicts an overview of the Fog/Hybrid computing envi-
ronment. In our view, Edge computing only harnesses the closest resources to the
end users while Fog computing uses deployed resources at Edge and Cloud layers. In
such computing environments, Cloud can act as an orchestrator, which is responsible
for big and long-period data analysis. It can operate in areas such as management,
cyclical maintenance, and execution of computation-intensive tasks. Fog computing,
on the other hand, efficiently manages the analysis of real-time data to better support
the timely processing and execution of latency-sensitive tasks. However, in practice, and despite strong market demand, Fog computing is still in its infancy, facing problems such as the lack of a unified architecture, the large number and wide distribution of Edge/Fog nodes, and the absence of technical standards and specifications.
Meanwhile, container technology has developed significantly in recent years. Compared with physical and virtual machines, containers are very lightweight, simple to deploy, support multiple architectures, have a short start-up time, and are easy to scale and migrate. These features provide a suitable solution to the problem of severe heterogeneity among Edge/Fog nodes [11]. Container technology is dominantly used by industry and academia to run commercial, scientific, and big data applications, build IoT systems, and deploy distributed containerized resource management frameworks such as the FogBus2 framework [12]. FogBus2, which is a distributed and containerized framework, enables fast and efficient resource management in hybrid computing environments.
Considering the ever-increasing number of containerized applications and frame-
works, efficient management and orchestration of resources have become an impor-
tant challenge. While container orchestration tools such as Kubernetes have become
the ideal solution for managing and scaling deployments, nodes, and clusters in the
industry today [13], there are still many challenges with their practical deployments in
hybrid computing environments. Firstly, orchestration techniques need to consider
the heterogeneity of computing resources in different environments for complete
adaptability. Secondly, the complexity of installing and configuring hybrid comput-
ing environments should be addressed when implementing orchestration techniques.
Thirdly, a strategy needs to be investigated to solve potential conflicts between orches-
tration techniques and the network model in the hybrid computing environment.
Also, as Edge/Fog devices are resource-limited, lightweight orchestration techniques
should be deployed to free up the resources for the smooth execution of end-user
applications. Finally, integrating containerized resource management frameworks
with lightweight orchestration tools is another important yet challenging issue to
support the execution of a diverse range of IoT applications.
To address these problems, this paper investigates the feasibility of deploying
container orchestration tools in hybrid computing environments to enable scalability,
health checks, and fault tolerance for containers.
This section discusses the resource management framework and container orches-
tration tools, including the FogBus2 framework and K3s. Moreover, it also reviews
the existing works on container orchestration in the Cloud and Edge/Fog computing
environments.
Figure 4 shows an overview of the five main components of the FogBus2 framework and their interactions.
achieve optimal operation of applications and services. K3s clusters allow pods (i.e.,
the smallest deployment unit) to be scheduled and managed on any node. Similar to
Kubernetes, K3s clusters also contain two types of nodes, with the server running the
control plane components and kubelet (i.e., the agent that runs on each node), and
the agent running only the kubelet [16]. Typically, a K3s cluster carries a server and
multiple agents. When the URL of a server is passed to a node, that node becomes
an agent; otherwise, it is a server in a separate K3s cluster [14, 16].
Rodriguez et al. [17] investigated multiple container orchestration tools and proposed a taxonomy of different mechanisms that can be used to cope with fault tolerance, availability, scalability, etc. Zhong et al. [18] proposed a Kubernetes-based technique for cost-effective container orchestration in Cloud environments. FLEDGE, developed by Goethals et al. [19], implements Kubernetes-compatible container orchestration in Edge environments. Pires et al. [20] proposed a framework, named Caravela, that employs a decentralized architecture, resource discovery, and scheduling algorithms. It leverages users' voluntary Edge resources to build an independent environment where applications can be deployed using standard Docker containers. Alam et al. [21] proposed a modular architecture that runs on heterogeneous nodes. Based on lightweight virtualization, it creates a dynamic system by combining modularity with the orchestration provided by Docker Swarm. Ermolenko et al. [22] studied a framework for deploying IoT applications based on Kubernetes in the Edge-Cloud environment. It achieves
implementation, as usually the pod is created at the same time as the application
is deployed. To address this challenge, we evaluate three approaches and finally
decide to use host network mode to deploy the FogBus2 framework in the K3s
hybrid environment. Host network mode allows pods to directly use the network configuration of the virtual instances or physical hosts, which addresses the communication challenge of the FogBus2 components and the conflict between the K3s network planning service and the VPN. Figure 6 shows a high-level overview of our proposed design
pattern.
As shown in Table 1, Cloud nodes have public IP addresses, while in most cases,
devices in the Edge/Fog environment do not have public IP addresses. In this case,
in order to build a hybrid computing environment, we need to establish a VPN
connection to integrate the Cloud and Edge/Fog nodes. We use Wireguard to establish
a lightweight P2P VPN connection between all the nodes. In the implementation,
Table 1 Configuration of nodes in the integrated computing environment

Node tag | Node name | Computing layer | Specifications          | Public IP      | Private IP | Port        | Preparation
A        | Nectar1   | Cloud           | 16-core CPU, 64 GB RAM  | 45.113.235.156 | 192.0.0.1  | Auto assign | Docker
B        | Nectar2   | Cloud           | 2-core CPU, 9 GB RAM    | 45.113.232.199 | 192.0.0.2  | Auto assign | Docker
C        | Nectar3   | Cloud           | 2-core CPU, 9 GB RAM    | 45.113.232.232 | 192.0.0.3  | Auto assign | Docker
we install Wireguard on each node and generate the corresponding configuration scripts (based on the FogBus2 VPN scripts) to ensure that each node has direct access to all other nodes in the cluster. A sample configuration script for the Wireguard VPN, derived from the FogBus2 scripts, is shown in Fig. 7.
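For illustration, a minimal WireGuard profile of this kind, for node A (Nectar1) peering with node B (Nectar2) using the addresses from Table 1, might look as follows; the key values are placeholders:

[Interface]
# VPN address of this node (Nectar1, node A in Table 1)
Address = 192.0.0.1/24
ListenPort = 51820
PrivateKey = <node-A-private-key>

[Peer]
# Cloud node B (Nectar2), reachable at its public IP
PublicKey = <node-B-public-key>
Endpoint = 45.113.232.199:51820
AllowedIPs = 192.0.0.2/32
PersistentKeepalive = 25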
The K3s server can be located in the Cloud or at the Edge, while the remaining four nodes act as K3s agents. As the aim of this research is to enable container orchestration for the FogBus2 framework, we need to install and enable Docker on both the server and the agents before building K3s. First, we install and start the K3s server in Docker mode. K3s allows users to choose the appropriate container tool, but as all components of FogBus2 run natively in Docker containers, we use Docker mode to initialize the K3s server so that it can access the Docker images. Then, we extract a token from the server, which will be used to join the agents to the server. After that, we install K3s on each agent, specifying the IP of the server and the token obtained from the server during installation to ensure that all agents can properly connect to the server. Figure 8 shows the successful deployment of the K3s cluster.
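As a sketch of this procedure, assuming the official K3s install script and the server's VPN address from Table 1, the cluster can be bootstrapped roughly as follows:

# On the server node: install and start K3s with the Docker runtime
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--docker" sh -
# Read the join token generated by the server
sudo cat /var/lib/rancher/k3s/server/node-token
# On each agent node: join the cluster using the server's VPN IP and the token
curl -sfL https://get.k3s.io | K3S_URL="https://192.0.0.1:6443" \
  K3S_TOKEN="<token-from-server>" INSTALL_K3S_EXEC="--docker" sh -
# Verify from the server that all agents have joined
sudo k3s kubectl get nodes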
In the native design of the FogBus2 framework, all components run in containers. The pod, as the smallest unit created and deployed by K3s, can wrap one or more containers. Containers in the same pod share the same namespace and local network. Containers can easily communicate with other containers in the same or a different pod as if they were on the same machine, while maintaining a degree of isolation. So first, we are faced with the choice of assigning only one container (i.e., one component of the FogBus2 framework) per pod or allowing each pod to manage multiple containers. The former design balances the load on K3s nodes as much as possible to facilitate better management by the controller, while the latter design reduces the time taken to communicate between components and provides faster feedback to users. We decide to adopt the former design to achieve batch orchestration and self-healing from failures.
In order to integrate all types of FogBus2 components into K3s, we first define YAML deployment files for the necessary components. Such a file provides the object's spec, which describes the expected state of the object, as well as some basic information about the object. In our work, the YAML deployment file declares the number of replicas of the pod, the node it is built on, the name of the image, the image pulling policy, the parameters for application initialization, and the location of the mounted volumes. Code Snippet 1 illustrates the YAML deployment file for the Master component of the FogBus2 framework.
# YAML deployment file for the Master component
# of the FogBus2 framework
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: fogbus2-master
  name: fogbus2-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fogbus2-master
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: fogbus2-master
    spec:
      containers:
        - env:
            - name: PGID
              value: "1000"
            - name: PUID
              value: "1000"
            - name: PYTHONUNBUFFERED
              value: "0"
            - name: TZ
              value: Australia/Melbourne
          image: cloudslab/fogbus2-master
          imagePullPolicy: ""
          name: fogbus2-master
          args: ["--bindIP", "192.0.0.1",
                 "--bindPort", "5001",
                 "--remoteLoggerIP", "192.0.0.1",
                 "--remoteLoggerPort", "5000",
                 "--schedulerName", "RoundRobin",
                 "--containerName",
                 "TempContainerName"]
          resources: {}
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: fogbus2-master-hostpath0
            - mountPath: /workplace/
              name: fogbus2-master-hostpath1
            - mountPath: /workplace/.mysql.env
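Assuming the complete manifest is saved as fogbus2-master.yaml (a file name of our choosing), it can be applied to the cluster with the kubectl bundled in K3s:

k3s kubectl apply -f fogbus2-master.yaml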
However, this design pattern sacrifices some of the functionality of K3s. When pods are connected directly to the network environment of their hosts, the K3s controller cannot optimally manage all the containers within the cluster, because these services require the K3s controller to have the highest level of access to the network services used by the pods. If the pods are on a VPN network, we will not be able to implement all the features of K3s. Nevertheless, we use Host Network mode to deploy the FogBus2 framework in the K3s cluster in this paper.
Proxy Server As the problem stems from a conflict between the communication
design of the FogBus2 framework and the communication model between pods in
the K3s cluster, we can create a proxy server that defines the appropriate routing poli-
cies to receive and forward messages from different applications. When a FogBus2
component needs to send a message to another component, the message is routed through the proxy server, which analyzes the message to extract the destination and forwards it to the IP address of the target component according to its internal routing
framework, and all communication between applications is done through the proxy
server.
There are two types of communication methods in the FogBus2 framework: proprietary methods and generic methods. The proprietary methods are used to communicate with fixed components, such as the Master and Remote Logger, whose IP addresses are configured and stored as global variables when most components are initialized.
In contrast, the generic methods are used by all components and are called by compo-
nents to transmit their IP addresses as part of the message for the target component.
Therefore, to enable all components to send messages to the proxy server for pro-
cessing, we need to change the source code of the FogBus2 framework so that all
components are informed of the IP address of the proxy server at initialization and
to unify the two types of communication methods so that components will include
information about the target in the message and send it to the proxy server. As a result,
this design would involve a redesign of the communication model of the FogBus2
framework.
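A minimal sketch of such a proxy, written in Python (the language FogBus2 is implemented in), is given below; the JSON envelope with a target field and the routing table are hypothetical and do not reflect the actual FogBus2 message format:

import json
import socket
import threading

# Hypothetical routing table mapping component names to (IP, port) pairs
ROUTES = {
    "master": ("192.0.0.1", 5001),
    "remoteLogger": ("192.0.0.1", 5000),
}

def handle(conn):
    # Receive one message from a FogBus2 component (single-shot for brevity)
    data = conn.recv(65536)
    msg = json.loads(data.decode())
    # Extract the destination component and forward the message unchanged
    target_ip, target_port = ROUTES[msg["target"]]
    with socket.create_connection((target_ip, target_port)) as out:
        out.sendall(data)
    conn.close()

def serve(host="0.0.0.0", port=6000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()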
Environment Variable In the K3s cluster, when an application is deployed, the cluster controller automatically creates a pod to manage the container in which the application resides. However, in the YAML file, we can obtain the IP address of the created pod when configuring the container information, which allows us to pass it in as an environment variable when initializing the components of the FogBus2 framework. The IP address bound to the component is then the pod's IP address, and the component can transmit this address to the target component when communicating, in order to receive messages back.
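This relies on the standard Kubernetes downward API, which K3s supports as well; a fragment of the container spec that passes the pod IP to a FogBus2 component could look as follows (the variable name POD_IP is our choice):

env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
args: ["--bindIP", "$(POD_IP)"]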
However, in our experiments, we find that pods on different nodes have problems
communicating at runtime. We trace the flow of information transmitted and find
that the reason for this is the conflict between the network services configured within
the cluster and the VPN used to build the hybrid computing environment. The pods
possess unique IP addresses and use them to communicate with each other, but
these addresses cannot be recognized by the VPN on the nodes, which prevents the
information from being transferred from the hosts. To solve this problem, we propose two solutions:
• Solution 1: K3s uses flannel as the Container Network Interface (CNI) by default. We can change the default network service configuration of the K3s cluster and override the default flannel interface with the WireGuard network interface.
• Solution 2: We can change the Wireguard settings to add the interface of the
network service created by the K3s controller to the VPN profile to allow incoming
or outgoing messages from a specific range of IP addresses.
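As an illustration of Solution 2, and assuming K3s keeps its default cluster CIDR of 10.42.0.0/16, the change amounts to widening the AllowedIPs range in each node's WireGuard profile so that pod-to-pod traffic is carried over the tunnel:

[Peer]
PublicKey = <peer-public-key>
Endpoint = 45.113.232.199:51820
# Accept the peer's VPN address plus the K3s pod network
AllowedIPs = 192.0.0.2/32, 10.42.0.0/16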
4 Performance Evaluation
In this section, two experiments are conducted using three real-time applications to
evaluate the performance of orchestrated FogBus2 (O-FogBus2) and native FogBus2,
as well as the performance of FogBus2 in the hybrid versus Cloud environment. The
real-time applications used in the experiments are described in Table 2.
Fig. 9 Response times for Orchestrated FogBus2 (O-FogBus2) versus native FogBus2 in three
applications
This experiment studies the performance of O-FogBus2 deployed in the hybrid computing environment versus the Cloud computing environment. As in Sect. 4.1, the environment setup for this experiment is shown in Table 1. For the hybrid computing environment, the Master and one Actor run on the Edge, and the Remote Logger and two other Actors run on the Cloud. For the Cloud environment, all the components run on the Cloud.
Fig. 10 Response times for Orchestrated FogBus2 (O-FogBus2) in Hybrid versus Cloud deployment
Figure 10 depicts the response time of FogBus2 deployed in hybrid and Cloud environments for three applications. For all tested applications, the average response time is shorter by up to 29% when FogBus2 runs in the hybrid environment than when it runs in the Cloud. This is because end users are usually located at the edge of the network and the final result should be forwarded to them. If all the components of FogBus2 run in the Cloud, data transfer takes longer and faces the impact of the unstable Wide Area Network (WAN). Since FogBus2 is designed for IoT devices to integrate Cloud and Edge/Fog environments, the introduction of K3s does not compromise this capability, so we believe that placing the entire system in a hybrid computing environment can reasonably utilize the Cloud and Edge/Fog computing resources and improve system performance.
challenge, the Proxy Server and Environment Variable design approaches can be
investigated to enable dynamic scalability. Secondly, lightweight security mecha-
nisms can be embedded into the container orchestration mechanisms. As IoT devices
are highly exposed to users, security and privacy become important. However, the
limited resources of Edge/Fog devices create difficulties for the implementation of
security mechanisms. Therefore, lightweight security mechanisms to ensure end-
to-end integrity and confidentiality of user information can be further investigated.
Next, integrating different orchestration tools, including KubeEdge, Docker Swarm,
and MicroK8s, can be considered as an important future direction. Different orches-
tration tools may be suitable for different computing environments, so it is essential
to find the best application scenarios for them. We can explore the impact of dif-
ferent integrated container orchestration tools for handling real-time and non-real-
time IoT applications. Also, a variety of scheduling policies, ranging from heuristics to reinforcement learning techniques [2], can be implemented to automate application deployment and improve resource usage efficiency for clusters. For example, pods can be scheduled to nodes with smaller memory and CPU footprints to automatically load-balance the cluster, or replicated pods can be spread across different nodes to avoid severe system failures. Furthermore, since machine learning techniques [2,
23] are becoming mature and widely used in various fields, we can consider integrat-
ing them into the Edge/Fog and Cloud computing environment. Machine learning
techniques can be used to analyze the state of the current computing environment,
improve the system’s ability to manage resources, and distribute workloads. As cur-
rent machine learning tools are often designed for powerful servers, future research
can optimize them to run on resource-constrained Edge/Fog devices. Finally, the
adopted techniques can consider the requirements of specific application domains
such as natural disaster management, which significantly affect human life.
References
1. Gubbi J, Buyya R, Marusic S, Palaniswami M (2013) Internet of things (IoT): a vision, archi-
tectural elements, and future directions. Future Gener Comput Syst 29(7):1645–1660
2. Goudarzi M, Palaniswami MS, Buyya R (2021) A distributed deep reinforcement learning
technique for application placement in edge and fog computing environments. IEEE Trans
Mob Comput (accepted, in press)
3. Aazam M, Khan I, Alsaffar AA, Huh E-N (2014) Cloud of things: integrating internet of
things and cloud computing and the issues involved. In: Proceedings of the 11th International
Bhurban conference on applied sciences & technology (IBCAST) Islamabad, Pakistan, 14th–
18th January, 2014. IEEE, New York, pp 414–419
4. Goudarzi M, Wu H, Palaniswami M, Buyya R (2021) An application placement technique
for concurrent IoT applications in edge and fog computing environments. IEEE Trans Mob
Comput 20(4):1298–1311
5. Goudarzi M, Palaniswami M, Buyya R (2021) A distributed application placement and migration management techniques for edge and fog computing environments. In: Proceedings of the 16th conference on computer science and intelligence systems (FedCSIS). IEEE, New York, pp 37–56
6. Ujjwal KC, Garg S, Hilton J, Aryal J, Forbes-Smith N (2019) Cloud computing in natural
hazard modeling systems: current research trends and future directions. Int J Disaster Risk
Reduct 38:101188
7. Buyya R, Srirama SN (2019) Fog and edge computing: principles and paradigms. Wiley
8. Dastjerdi AV, Buyya R (2016) Fog computing: helping the internet of things realize its potential.
Computer 49(8):112–116
9. Goudarzi M, Palaniswami M, Buyya R (2019) A fog-driven dynamic resource allocation tech-
nique in ultra dense femtocell networks. J Network Comput Appl 145:102407
10. Shi W, Cao J, Zhang Q, Li Y, Xu L (2016) Edge computing: vision and challenges. IEEE Internet Things J 3(5):637–646
11. Bali A, Gherbi A (2019) Rule based lightweight approach for resources monitoring on IoT
edge devices. In: Proceedings of the 5th International workshop on container technologies and
container clouds, pp 43–48
12. Deng Q, Goudarzi M, Buyya R (2021) FogBus2: a lightweight and distributed container-based framework for integration of IoT-enabled systems with edge and cloud computing. In: Proceedings of the international workshop on big data in emergent distributed environments, pp 1–8
13. Cai Z, Buyya R (2022) Inverse queuing model based feedback control for elastic container
provisioning of web systems in Kubernetes. IEEE Trans Comput 71(2):337–348
14. Rancher Labs (2021) K3s—lightweight Kubernetes. https://rancher.com/docs/k3s/latest/en/.
Accessed 24 Jan 2022
15. Todorov MH (2021) Design and deployment of Kubernetes cluster on Raspberry pi OS. In:
Proceedings of the 29th National conference with international participation (TELECOM).
IEEE, New York, pp 104–107
16. Rancher Labs (2021) Architecture. https://rancher.com/docs/k3s/latest/en/architecture/.
Accessed 24 Jan 2022
17. Rodriguez MA, Buyya R (2019) Container-based cluster orchestration systems: a taxonomy
and future directions. Software: Pract Exp 49(5):698–719
18. Zhong Z, Buyya R (2020) A cost-efficient container orchestration strategy in Kubernetes-based
cloud computing infrastructures with heterogeneous resources. ACM Trans Internet Technol
(TOIT) 20(2):1–24
19. Goethals T, De Turck F, Volckaert B (2019) Fledge: Kubernetes compatible container orchestra-
tion on low-resource edge devices. In: Proceedings of the international conference on internet
of vehicles. Springer, Berlin, pp 174–189
20. Pires A, Simão J, Veiga L (2021) Distributed and decentralized orchestration of containers on
edge clouds. J Grid Comput 19(3):1–20
21. Alam M, Rufino J, Ferreira J, Ahmed SH, Shah N, Chen Y (2018) Orchestration of microservices for IoT using docker and edge computing. IEEE Commun Mag 56(9):118–123
22. Ermolenko D, Kilicheva C, Muthanna A, Khakimov A (2021) Internet of things services orchestration framework based on Kubernetes and edge computing. In: Proceedings of the IEEE conference of Russian young researchers in electrical and electronic engineering (ElConRus). IEEE, New York, pp 12–17
23. Agarwal S, Rodriguez MA, Buyya R (2021) A reinforcement learning approach to reduce
serverless function cold start frequency. In: Proceedings of the 21st IEEE/ACM international
symposium on cluster, cloud and internet computing (CCGrid). IEEE, New York, pp 797–803
Is Tiny Deep Learning the New Deep Learning?
Manuel Roveri
Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
e-mail: [email protected]
Abstract The computing everywhere paradigm is paving the way for the pervasive
diffusion of tiny devices (such as Internet-of-Things or edge computing devices)
endowed with intelligent abilities. Achieving this goal requires machine and deep
learning solutions to be completely redesigned to fit the severe technological con-
straints on computation, memory, and power consumption typically characterizing
these tiny devices. The aim of this paper is to explore tiny machine learning (TinyML)
and introduce tiny deep learning (TinyDL) for the design, development, and deploy-
ment of machine and deep learning solutions for (an ecosystem of) tiny devices,
hence supporting intelligent and pervasive applications following the computing
everywhere paradigm.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
R. Buyya et al. (eds.), Computational Intelligence and Data Analytics,
Lecture Notes on Data Engineering and Communications Technologies 142,
https://doi.org/10.1007/978-981-19-3391-2_2
1 Introduction
The technological evolution and the algorithmic revolution have always represented
two sides of the same coin in the machine learning (ML) field. On the one hand,
advances in technological solutions (e.g., the design of high performance and energy-
efficient hardware devices) have supported the design and development of increas-
ingly complex and technologically demanding ML algorithms and solutions [33,
38]. On the other hand, novel ML algorithms and solutions have been specifically
designed for target hardware devices (e.g., embedded devices or Internet-of-Things
[IoT] units), enabling these devices to be endowed with advanced intelligent func-
tionalities [34, 42].
Interestingly, deep learning (DL) is a relevant and valuable example of this strict
and fruitful relationship between the technological evolution and the algorithmic revolution. Indeed, since the appearance of the first deep neural networks [22, 24], DL
algorithms have completely revolutionized the ML field. Today, they represent the
state-of-the-art solution for recognition, detection, classification, and prediction (to
name but a few) in numerous different application scenarios [28, 31]. Noteworthily,
the basis of DL, namely the idea of deep-stacked processing layers in neural net-
works, dates back to the late1960s (see the seminal works in [11, 17]). At that time,
the available technological solutions were unable to support the effective and effi-
cient training of such deep neural networks. Thirty years later, the rise of hardware
accelerators, such as graphics processing units (GPUs) and tensor processing units
(TPUs), saw them become technological enablers of the DL revolution. This led to
what is today considered the standard computing paradigm in the field: DL models
trained and executed on hardware accelerators.
From scientific and technological perspectives, it is therefore crucial to identify the
next technological enabler capable of supporting the next algorithmic revolution. One
of the most promising and relevant technological directions is the “computing every-
where” paradigm [2, 18]. It represents a pervasive and heterogeneous ecosystem of
IoT and edge devices that support a wide range of ML-based pervasive applications,
from smart cars to smart homes and cities, and from Industry 4.0 to E-health. Due to
the technological evolution, these pervasive and heterogeneous devices are becoming
tinier and increasingly energy-efficient (often being battery-powered); hence, they
are able to support an effective and efficient on-device processing [2]. This is a cru-
cial ability since moving the processing and, in particular, the intelligent processing
as close as possible to where data are generated guarantees relevant advantages in
ML-based pervasive applications. Some of these advantages are as follows:
• an increase in the autonomy of these pervasive devices, which are therefore able
to make decisions locally (without sending acquired data to the Cloud through the
Internet for processing and then waiting for the results);
• a reduction in the latency with which a decision is made or a reaction is activated;
• a reduction in the required transmission bandwidth, hence enabling these devices
to operate even in areas where high-speed Internet connections are not available
(e.g., rural areas);
• an increase in the energy efficiency of these pervasive devices, since transmitting
the data is much more power-hungry than processing them locally;
• an increase in the privacy of these pervasive applications, since possibly sensitive
data remain on the device;
• the ability to exploit incremental or adaptive learning mechanisms to acquire fresh
knowledge directly from the field, hence improving or (whenever needed) main-
taining the accuracy of ML/DL models over time [8]; and
• the capability to distribute the inference and learning of ML/DL models in
the ecosystem of possibly heterogeneous pervasive devices (i.e., IoT and edge
devices).
The drawback of such an approach is that strict technological constraints char-
acterize these pervasive devices in terms of computation, memory, and power con-
sumption. Indeed, the CPU frequency of such tiny devices is typically in the order
of MHz, the RAM memory is in the order of a few hundred kB, and the power
consumption is typically below 100 mW. Such severe technological constraints pose
huge technical challenges from a design point of view on ML and, particularly, DL
solutions, which are typically highly demanding in terms of computation, memory,
and power consumption. This challenge is further emphasized both by the complexi-
ties in the development of embedded software for tiny devices (i.e., the firmware that
runs on them) and by the need to consider a strict co-design phase that comprises
hardware, software, and ML/DL algorithms.
This is exactly where tiny ML (hereinafter "TinyML") comes into play, which involves the design of ML solutions that can be executed on tiny devices and hence take into account constraints on memory, computation, and power consumption. The aim of this paper is to shed light on the state-of-the-art solutions
for the design of ML and DL solutions that can be executed on tiny devices. In
particular, this paper focuses on the design and development of DL solutions specif-
ically intended to be executed on tiny devices, hence paving the way for the tiny DL
(hereinafter “TinyDL”) revolution for smarter and more efficient pervasive systems
and applications.
The remainder of this paper is organized as follows. Section 2 provides an
overview of TinyML, and then Sect. 3 introduces TinyDL. Section 4 details approx-
imate computing mechanisms for TinyDL. Section 5 introduces specific TinyDL
solutions for the IoT. Lastly, Sect. 6 draws conclusions and presents open research
points.
TinyML [13, 42] is a new and promising area of ML and DL aimed at designing
ML solutions that can be executed on tiny devices. Solutions present in this area
aim to introduce tiny models and architectures characterized by reduced memory
and computational demands of the processing layers [9] as well as approximate
computing solutions, such as quantization [12] and pruning [27] to address the severe
technological constraints on computation, memory, and energy that characterize tiny
devices.
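As a concrete illustration of one such mechanism, the following minimal sketch applies post-training quantization [12] with TensorFlow Lite; the SavedModel directory and the input shape of the calibration data are placeholder assumptions:

import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Placeholder calibration data standing in for real sensor samples
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

# Convert a trained model with post-training quantization enabled
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()

# The resulting flatbuffer can be embedded in the firmware of a tiny device
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)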
An overview of the TinyML paradigm is presented in Fig. 1. Specifically, TinyML
comprises the following two main modules:
• the hardware module: the physical resources of the tiny device, comprising the
embedded computing board, sensors, actuators, and battery (optional);
• the software module: all the software components that run on the tiny device, com-
prising the preprocessing module, TinyML module, and postprocessing module.
Data acquired through the sensors are preprocessed to remove noise or highlight
relevant features. The TinyML module receives as input the preprocessed data to
produce an inference (e.g., a classification, detection, or prediction) by means of the trained TinyML model. The output of the TinyML module is postprocessed to make a decision or activate a reaction, which is then carried out by means of the actuators.
Fig. 1 Overview of the tiny machine learning (TinyML) computing paradigm, which comprises an embedded computing board, sensors, actuators, and a software processing pipeline, including preprocessing, TinyML model inference, and postprocessing
The basis of TinyML applications is that they are designed to be “always on”
in the sense that tiny devices continuously acquire and process data (through the
preprocessing, TinyML, and postprocessing modules); thus, decisions are made or
reactions are activated directly on the device.
Examples of TinyML applications are wake-word detection, where a given com-
mand or word acquired by a microphone is recognized by the TinyML model; person
detection, where images acquired by a camera are processed by the TinyML mod-
ule to detect the presence of persons therein; and gesture recognition, where data
acquired by MEMS accelerometers are processed by the TinyML module to recog-
nize gestures made by people.
The development chain of TinyML applications, detailed in Fig. 2, comprises
several steps that range from the hardware setup of tiny devices to the operational
mode of TinyML models.
Specifically, the development chain of TinyML applications comprises the fol-
lowing steps:
1. Hardware setup: The embedded computing board, sensors, and actuators are
selected for the purposes of the TinyML application. Remarkably, the choice of
the embedded computing board imposes technological constraints on memory
and computation for the design and development of the TinyML model.
2. Software setup: The development toolchain for the firmware of the embedded
computing board and the framework for the design of the TinyML application
(e.g., TensorFlow Lite for Micro or Quantized Pytorch) are selected and config-
ured.
3. Data collection: This step is intended to create the training set for training the
TinyML model (if needed, supervised information is provided by the expert).
4. TinyML model training: The selected TinyML model (e.g., a linear classifier,
decision tree, or feedforward neural network) is trained on the acquired training
set.
5. Firmware development: The trained TinyML model is included in the firmware.
The firmware comprises, in addition to the inference of the TinyML model, the
reading of the sensors, preprocessing, postprocessing, activation of the actuators,
and (if required) communication with other computing units or devices (e.g., an
edge computing unit or a gateway).
6. Firmware compilation: The developed firmware comprising all of the aforemen-
tioned software components is compiled for the embedded computing board
defined in Step 1 by means of the selected development toolchain selected in
Step 2. The compiled firmware is then flashed to the embedded computing board
(to accomplish this step the hardware constraints on memory and computation
posed by the embedded computing board must be satisfied).
7. TinyML operation: The tiny device, comprising the selected hardware and the
compiled and flashed firmware, operates in the environment for the purpose of the
given TinyML application. Information about effectiveness and efficiency can
be gathered by the tiny device to monitor the status of the TinyML application
over time (and, if needed, updates, patches, or bug-fixing can be introduced).
Notably, Steps 1–6 are conducted outside of the tiny device (e.g., on the Cloud or on personal computers), whereas only Step 7 is actually executed on it. This approach guarantees sufficient computational capacity and memory for accomplishing the tasks with the highest computation and memory demands (i.e., training set creation and TinyML model training).
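As a complement to Steps 4–6, the following minimal sketch shows how the trained and converted model can be validated off-device with TensorFlow Lite's Python interpreter, mirroring the inference that the firmware later performs on the tiny device; the model file name and the zero-valued input are placeholders:

import numpy as np
import tensorflow as tf

# Load the converted TinyML model produced in Step 4
interpreter = tf.lite.Interpreter(model_path="model_quantized.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input matching the model's expected shape and dtype
x = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)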
Discovering Diverse Content Through
Random Scribd Documents
Posto se achasse costumado aos olhos admirativos, via agora em
toda a gente um aspecto parecido com a noticia de que elle ia casar.
As casuarinas de uma chacara, quietas antes que elle passasse por
ellas, disseram-lhe cousas mui particulares, que os levianos
attribuiriam á aragem que passava tambem, mas que os sapientes
reconheceriam ser nada menos que a linguagem nupcial das
casuarinas. Passaros saltavam de um lado para outro, pipilando um
madrigal. Um casal de borboletas,—que os japões têm por symbolo
da fidelidade, por observarem que, se pousam de flor em flor,
andam quasi sempre aos pares,—um casal dellas acompanhou por
muito tempo o passo do cavallo, indo pela cerca de uma chacara
que beirava o caminho, volteando aqui e alli, lepidas e amarellas. De
envolta com isto, um ar fresco, ceu azul, caras alegres de homens
montados em burros, pescoços estendidos pela janella fóra das
diligencias, para vel-o e ao seu garbo de noivo. Certo, era difficil crer
que todos aquelles gestos e attitudes da gente, dos bichos e das
arvores, exprimissem outro sentimento que não fosse a homenagem
nupcial da natureza.
As borboletas perderam-se em uma das moitas mais densas da
cêrca. Seguiu-se outra chacara, despida de arvores, portão aberto, e
ao fundo, fronteando com o portão, uma casa velha, que
encarquilhava os olhos sob a forma de cinco janellas de peitoril,
cançadas de perder moradores. Tambem ellas tinham visto bodas e
festins; o seculo ja as achou verdes de novidade e de esperança.
Não cuideis que esse aspecto contristou a alma do cavalleiro. Ao
contrario, elle possuia o dom particular de remoçar as ruinas e viver
da vida primitiva das cousas. Gostou até de ver a casa velhusca,
desbotada, em contraste com as borboletas tão vivas de ha pouco.
Parou o cavallo; evocou as mulheres que por alli entraram, outras
galas, outros rostos, outras maneiras. Porventura as proprias
sombras das pessoas felizes e extinctas vinham agora cumprimental-
o tambem, dizendo-lhe pela bocca invisivel todos os nomes sublimes
que pensavam delle. Chegou a ouvil-as e sorrir. Mas uma voz
estridula veiu mesclar-se ao concerto;—um papagaio, em gaiola
pendente da parede externa da casa: «Papagaio real, para Portugal;
quem passa? Currupá, papá. Grrrr... Grrrrr...» As sombras fugiram, o
cavallo foi andando. Carlos Maria aborrecia o papagaio, como
aborrecia o macaco, duas contrafacções da pessoa humana, dizia
elle.
—A felicidade que eu lhe der será assim tambem interrompida?
reflexionou andando.
Cambaxirras voaram de um para outro lado da rua, e pousaram
cantando a sua falla propria; foi uma reparação. Essa lingua sem
palavras era intelligivel, dizia uma porção de cousas claras e bellas.
Carlos Maria chegou a ver naquillo um symbolo de si mesmo.
Quando a mulher, aturdida dos papagaios do mundo, viesse caindo
de fastio, elle a faria erguer aos trillos da passarada divina, que
trazia em si, ideias de ouro, ditas por uma voz de ouro. Oh! como a
tornaria feliz! Já a antevia ajoelhada, com os braços postos nos seus
joelhos, a cabeça nas mãos e os olhos nelle, gratos, devotos,
amorosos, toda implorativa, toda nada.
CAPITULO CXXIII
CAPITULO CXXIV
Casaram-se; tres mezes depois foram para a Europa. Ao despedir-se
delles, D. Fernanda estava tão alegre como se viesse recebel-os de
volta; não chorava. O prazer de os ver felizes era maior que o
desgosto da separação.
—Você vae contente? perguntou a Maria Benedicta, pela ultima vez,
junto á amurada do paquete.
—Oh! muito!
A alma de D. Fernanda debruçou-se-lhe dos olhos, fresca,
ingenua,cantando um trecho italiano,—porque a suberba guasca
preferia a musica italiana,—talvez esta aria da Lucia: O' bell'alma
innamorata. Ou este pedaço do Barbeiro:
CAPITULO CXXV
Sophia não foi a bordo, adoeceu e mandou o marido. Não vão crer
que era pezar nem dor; por occasião do casamento, houve-se com
grande discrição, cuidou do enxoval da noiva e despediu-se della
com muitos beijos chorados. Mas ir a bordo pareceu-lhe vergonha.
Adoeceu; e, para não desmentir do pretexto, deixou-se estar no
quarto. Pegou de um romance recente; fora-lhe dado pelo Rubião.
Outras cousas alli lhe lembravam o mesmo homem, teteias de toda
a sorte, sem contar joias guardadas. Finalmente, uma singular
palavra que lhe ouvira, na noite do casamento da prima, até essa
veiu alli para o inventario das recordações do nosso amigo.
—A senhora é já a rainha de todas, disse-lhe elle em voz baixa;
espere que ainda a farei imperatriz.
Sophia não pode entender esta phrase enigmatica. Quiz suppor que
era uma alliciação de grandeza para tornal-a sua amante; mas a
vaidade que essa ideia trazia fel-a excluir desde logo. Rubião, posto
não fosse agora o mesmo homem encolhido e timido de outros
tempos, não se mostrava tão cheio de si que lhe pudesse attribuir
tão alta presumpção. Mas que era então a phrase? Talvez um modo
figurado de dizer que a amaria ainda mais. Sophia acreditava
possivel tudo. Não lhe faltavam galanteios; chegou a ouvir aquella
declaração de Carlos Maria, provavelmente ouvira outras, a que deu
somente a attenção da vaidade. E todas passaram; Rubião é que
persistia. Tinha pausas, filhas de suspeitas; mas as suspeitas iam
como vinham.
«Il mérite d'être aimé», leu Sophia na pagina aberta do romance,
quando ia continuar a leitura; fechou o livro, fechou os olhos, e
perdeu-se em si mesma. A escrava que entrou d'ahi a pouco,
trazendo-lhe um caldo, suppoz que a senhora dormia e retirou-se pé
ante pé.
CAPITULO CXXVI
CAPITULO CXXVII
CAPITULO CXXVIII
CAPITULO CXXIX
Não havia banco, nem logar de director, nem liquidação; mas, como
justificaria o Palha a proposta de separação, dizendo a pura
verdade? Dahi a invenção, tanto mais prompta, quanto o Palha tinha
amor aos bancos, e morria por um. A carreira daquelle homem era
cada vez mais prospera e vistosa. O negocio corria-lhe largo; um dos
motivos da separação era justamente não ter que dividir com outro
os lucros futuros. Palha, além do mais, possuia acções de toda a
parte, apolices de ouro do emprestimo Itaborahy, e fizera uns dous
fornecimentos para a guerra, de sociedade com um poderoso, nos
quaes ganhou muito. Já trazia apalavrado um architecto para lhe
construir um palacete. Vagamente pensava em baronia.
CAPITULO CXXX
—Quem diria que a gente do Palha nos trataria deste modo? Já não
valemos nada. Excusa de os defender...
—Não defendo, estou explicando; ha de ter havido confusão.
—Fazer annos, casar a prima, e nem um triste convite ao major, ao
grande major, ao impagavel major, ao velho amigo major. Eram os
nomes que me davam; eu era impagavel, amigo velho, grande e
outros nomes. Agora, nada, nem um triste convite, um recado de
boca, ao menos, por um moleque: «Nhanhã faz annos, ou casa a
prima, diz que a casa esta ás suas ordens, e que vão com luxo.»
Não iriamos; luxo não é para nós. Mas era alguma cousa, era
recado, um moleque, ao impagavel major...
—Papae!
Rubião, vendo a intervenção de D. Tonica, animou-se a defender
longamente a familia Palha. Era em casa da major, não já na rua
Dous de Dezembro, mas na dos Barbonos, modesto sobradinho.
Rubião passava, elle estava á janella, e chamou-o. D. Tonica não
teve tempo de sair da sala, para dar, ao menos, uma vista d'olhos ao
espelho; mal pôde passar a mão pelo cabello, compôr o laço de fita
ao pescoço e descer o vestido para cobrir os sapatos, que não eram
novos.
—Digo-lhe que póde ter havido confusão, insistiu Rubião; tudo anda
por lá muito atrapalhado com esta commissão das Alagoas.
—Lembra bem, interrompeu o major Siqueira; porque não metteram
minha filha na commissão das Alagoas? Qual! Ha já muito que
reparo nisto; antigamente não se fazia festa sem nós. Nós éramos a
alma de tudo. De certo tempo para cá começou a mudança;
entraram a receber-nos friamente, e o marido, se pode esquivar-se,
não me falla na rua. Isto começou ha tempos; mas antes disso sem
nós é que não se fazia nada. Que está o senhor a fallar de confusão?
Pois se na vespera dos annos della, já desconfiando que não nos
convidariam, fui ter com elle ao armazem. Poucas palavras, por mais
que lhe fallasse em D. Sophia; disfarçava. Afinal disse-lhe assim:
«Hontem, lá em casa, eu e Tonica estivemos discutindo sobre a data
dos annos de D. Sophia; ella dizia que tinha passado, eu disse que
não, que era hoje ou amanhã.» Não me respondeu, fingiu que
estava absorvido em uma conta, chamou o guarda-livros, e pediu
explicações. Eu entendi o bicho, e repeti a historia; fez a mesma
cousa. Sahi. Ora o Palha, um pé-rapado! Já o envergonho.
Antigamente: major, um brinde. Eu fazia muitos brindes, tinha certo
desembaraço. Jogavamos o voltarete. Agora está nas grandezas;
anda com gente fina. Ah! vaidades deste mundo! Pois não vi outro
dia a mulher delle, n'um coupé, com outra? A Sophia de coupé!
Fingiu que me não via, mas arranjou os olhos de modo que
percebesse se eu a via, se a admirava. Vaidades desta vida! Quem
nunca comeu azeite, quando come se lambusa.
—Perdão, mas os trabalhos da commissão exigem certo apparato.
—Sim, acudiu Siqueira, é por isso que minha filha não entrou na
commissão; é para não estragar as carruagens...
—Demais, o coupé podia ser da outra senhora, que ia com ella.
O major deu dous passos, com as mãos atraz, e parou deante de
Rubião.
—Da outra... ou do padre Mendes. Como vae o padre? Boa vida,
naturalmente.
—Mas, papae, póde não haver nada, interrompeu D. Tonica. Ella
sempre me trata bem, e quando estive doente no mez passado,
mandou saber pelo moleque, duas vezes...
—Pelo moleque! bradou o pae. Pelo moleque! Grande favor!
«Moleque, vae alli á casa daquelle reformado e pergunta lhe se a
filha tem passado melhor; não vou, porque estou lustrando as
unhas!» Grande favor! Tu não lustras as unhas! tu trabalhas! tu és
digna filha minha! pobre, mas honesta!
Here the major wept, but checked his tears abruptly. His daughter, moved,
felt ashamed as well. True, the house told of the family's poverty: a few
chairs, an old round table, a worn settee; on the walls, two lithographs
framed in pine painted black, one a portrait of the major in 1857, the
other representing Veronese in Venice, bought in the Rua do Senhor dos
Passos. But the daughter's labor showed in everything: the furniture gleamed
with cleanliness, the table had an openwork cloth of her own making, the
settee a cushion. And it was false that D. Tonica did not polish her nails;
she might lack the powder and the chamois, but she attended to them every
morning with a scrap of cloth.
CHAPTER CXXXI
CHAPTER CXXXII
CHAPTER CXXXIII
—It's true, the philosopher.
And Rubião explained to the newcomers the allusion to the philosopher, and
the reason for the dog's name, which everyone attributed to him. Quincas
Borba (the deceased) was described and recounted as one of the greatest men
of his time, superior to all his countrymen. A great philosopher, a great
soul, a great friend. And at the end, after a silence, drumming his fingers
on the edge of the table, Rubião exclaimed:
—I would have made him a Minister of State!
One of the guests exclaimed, without conviction, out of pure politeness:
—Oh! no doubt about it!
None of those men knew, however, what a sacrifice Rubião was making for
them. He turned down dinners and outings, and broke off pleasant
conversations, only to hurry home and dine with them. One day he found a way
to reconcile everything. If he was not at home at six o'clock sharp, the
servants were to serve dinner to the friends anyway. There were protests;
no, sir, they would wait until seven or eight. A dinner without him had no
savor.
—But I may not come at all, Rubião explained.
And so it was settled. The guests set their watches carefully by the clocks
of the house at Botafogo. At the stroke of six, everyone was at table. On
the first two days there was a certain hesitation; but the servants had
strict orders. Sometimes Rubião arrived shortly afterward. Then came
laughter, witticisms, merry accusations. One of them had wanted to wait, but
the others... The others denied it; on the contrary, it was he who had
dragged them along, such was the hunger he brought, to the point that, if
anything was left over, it was the plates. And Rubião laughed with them all.
CHAPTER CXXXIV
To write a chapter only to say that, at first, the guests, in Rubião's
absence, smoked their own cigars after dinner will seem frivolous to the
frivolous; but the thoughtful will allow that some interest may attach to
this apparently trifling circumstance.
In fact, one night, one of the oldest among them took it into his head to
go up to Rubião's study; he had been there a few times, and there the boxes
of cigars were kept, not four or five of them, but twenty and thirty, of
various makes and sizes, many already open. A servant (the Spaniard) lit the
gas. The other guests followed the first, chose their cigars, and those who
did not yet know the study admired the well-made, well-arranged furniture.
The writing-desk won the general admiration; it was of ebony, a marvel of
carving, a severe and solid piece of work. A novelty awaited them: two
marble busts set upon it, the two Napoleons, the first and the third.
—When did this arrive?
—Today at noon, answered the servant.
Two magnificent busts. Next to the uncle's aquiline gaze, the nephew's
dreamy gaze lost itself in vagueness. The servant related that his master,
as soon as the busts had been received and set in place, stood a long while
in admiration, so oblivious to everything else that the servant was able to
look at them too, though without admiring them. —No me dicen nada estos dos
pícaros («These two rascals say nothing to me»), the servant concluded,
with a large and noble gesture.
CHAPTER CXXXV
CHAPTER CXXXVI
CHAPTER CXXXVII
CHAPTER CXXXVIII
Rubião still wanted to stand up for the major, but the air of weariness with
which Sophia cut him off was such that our friend preferred to ask whether,
if it did not rain the next morning, they were still going on the outing to
Tijuca.
—I've already spoken to Christiano; he told me he has some business to
attend to, and that it should wait until next Sunday.
Rubião, after a moment:
—Then let the two of us go. We set out early, take our ride, lunch up
there; by three or four o'clock we're back...
Sophia looked at him with such a longing to accept the invitation that
Rubião did not wait for a spoken answer.
—It's settled, then; we're going, he said.
—No.
—What do you mean, no?
And he repeated the question, because Sophia would not explain her refusal,
obvious though it was. Obliged to do so, she observed that her husband would
be jealous and was quite capable of putting off his business just to come
along as well. She did not want to upset his affairs, and the outing could
wait a week. Sophia's gaze accompanied this explanation the way a bugle
might accompany a paternoster. The will was there, oh! how she longed to
ride out the next morning with Rubião, up the mountain road, well seated on
her horse, not daydreaming idly, nothing poetical about her, but bold, fire
in her face, wholly of this world, galloping, trotting, halting. Up at the
top she would dismount for a while; everything solitary, the city in the
distance and the sky overhead. Leaning against the horse, combing its mane
with her fingers, she would listen to Rubião praise her daring and her
grace... She even felt a kiss on the nape of her neck...
CHAPTER CXL
Since horses are the subject, it will not be amiss to say that Sophia's
imagination was now a mettlesome, headstrong courser, fit to leap hills and
trample thickets. Another comparison might serve if the occasion were
different; but a courser is what suits best. It carries the idea of impetus,
of blood, of headlong flight, together with that of the serenity with which
it returns to the straight road and, at last, to the stable.
CHAPTER CXLI
CHAPTER CXLII
CHAPTER CXLIII
The outing to Tijuca took place, with no incident beyond a fall from a horse
on the way down. It was not Rubião who fell, nor Palha, but the latter's
wife, who was riding along thinking of I know not what and whipped the
animal in anger; it shied and threw her to the ground. Sophia fell
gracefully. She was looking singularly svelte, dressed in her riding habit,
the bodice temptingly snug. Othello, had he seen her, would have exclaimed:
«O my fair warrior!» Rubião had confined himself to this, at the start of
the ride: «You are an angel, madam!»
CHAPTER CXLIV
CHAPTER CXLV
It was around this time that Rubião astonished all his friends. On the
Tuesday following the Sunday of the outing (it was then January 1870), he
sent word to a barber and hairdresser in the Rua do Ouvidor to have him
shaved at home the next day, at nine in the morning. A French assistant
came, Lucien by name, I believe, and was shown into Rubião's study,
according to the orders given to the servant.
—Uhm!... growled Quincas Borba, from his master's lap.
Lucien stopped at the study door and greeted the master of the house; the
latter, however, did not see the courtesy, just as he had not heard Quincas
Borba's warning. He lay in a long reclining chair, bereft of his spirit,
which had broken through the ceiling and lost itself in the air. How many
leagues up had it gone? Neither condor nor eagle could have said. On its way
to the moon, it saw down below nothing but perennial felicities, rained upon
him from the cradle, where fairies had rocked him, all the way to the beach
at Botafogo, whither they had carried him over a carpet of roses and
jasmine. No reverse, no failure, no poverty; a placid life, stitched through
with pleasure and trimmed with the lace of superfluity. On its way to the
moon!
Lucien glanced around the study, in which the writing-desk was the principal
figure, and on it the two busts of Napoleon and Louis Napoleon. Concerning
the latter there were also, hanging on the wall, an engraving or lithograph
representing the Battle of Solferino, and a portrait of the Empress Eugénie.
Rubião wore on his feet a pair of damask slippers embroidered in gold; on
his head, a cap with a black silk tassel; on his lips, a light blue smile.
CHAPTER CXLVI
—Monsieur...
—Uhm! repeated Quincas Borba, standing on his master's knees.
Rubião came to himself and found the barber before him. He knew the man from
having seen him lately at the shop; he rose from the chair. Quincas Borba
barked, as though to defend him against the intruder.
—Quiet! hold your tongue! Rubião told him; and the dog went, ears down, to
hide behind the waste-paper basket. Meanwhile, Lucien was unwrapping his
implements.
—Monsieur veut se faire raser, n'est-ce pas? Pourquoi donc a-t-il laissé
croître cette belle barbe? Apparemment que c'est un voeu d'amour? J'en
connais qui ont fait de pareils sacrifices; j'ai même été confident de
quelques personnes aimables... («Monsieur wishes to be shaved, does he not?
Why then did he let this fine beard grow? A lover's vow, apparently? I know
men who have made such sacrifices; I have even been the confidant of some
very amiable persons...»)
—Exactly! Rubião broke in.
He had understood none of it; though he knew some French, he could barely
follow it in reading, as we know, and did not understand it spoken at all.
But, curious phenomenon, he did not answer out of imposture; he heard the
words as though they were a compliment or an acclamation; and, still more
curious phenomenon, answering in Portuguese, he believed himself to be
speaking French.
—Exactly! he repeated. I want my face restored to its former type; it is
that one there.
And, as he pointed to the bust of Napoleon III, the barber answered him in
our own language:
—Ah! the Emperor! A handsome bust, in truth. Fine work. Did you buy it
here, sir, or have it sent from Paris? They are magnificent. And there is
the first one, the great one; that man was a genius. If it hadn't been for
treachery... oh! traitors, you see, sir? traitors are worse than Orsini's
bombs.
—Orsini! a poor wretch!
—He paid dearly for it.
—He paid what he owed. But there are no bombs, and no Orsinis, against the
destiny of a great man, Rubião went on. When the fortune of a nation sets
the imperial crown upon a great man's head, no villainy can prevail...
Orsini! a fool!
In a few minutes the barber began to bring down Rubião's beard, so as to
leave him only the goatee and mustaches of Napoleon III; he made much of the
task, declaring how hard it was to match the one thing exactly to the other.
And as he cut the beard away, he kept praising it. —What fine strands! What
a great and honest sacrifice the gentleman was making, in truth...
—Master barber, you talk too much, Rubião interrupted. I have already told
you what I want; give me back my face as it was. There is the bust to guide
you.
—Yes, sir, I shall carry out your orders, and you shall see what a likeness
will come of it.
And snip, snip, he gave the last strokes to Rubião's beard and began to
shave his cheeks and chin. The operation lasted a long while; the barber
shaved on calmly, comparing, dividing his eyes between the bust and the man.
At times, the better to collate them, he stepped back two paces and looked
from one to the other, bent down, asked the man to turn to one side or the
other, and went to examine the corresponding side of the bust.
—Is it coming along? Rubião would ask.
Lucien would beg him, with a gesture, to keep still, and go on. He trimmed
the goatee, left the mustaches, and shaved at his leisure, slowly, amiably,
tediously, hunting with his fingers for any imperceptible tip of hair on
chin or cheek, determined not to suffer one, even on suspicion. At times
Rubião, weary of staring at the ceiling while the other perfected his chin,
asked for a rest. Resting, he would feel his face and sense the change by
touch.
—It's only that the mustaches aren't very long, he observed.
—The points are still to be done; I have the little irons here to curl them
well over the lip, and then we'll shape the points. Ah! I would rather
compose ten original works than a single copy.
Another ten minutes passed before the mustaches and the goatee were properly
retouched. At last, done. Rubião leapt up and ran to the mirror in the
bedroom close by: it was the other man, it was both of them, it was, in
short, he himself.
—Exactly! he exclaimed, returning to the study, where the barber, having
packed up his implements, was making much of Quincas Borba.
And going to the writing-desk, he opened a drawer, took out a twenty
mil-réis note, and handed it over.
—I have no change, said the other.
—No need for change, Rubião replied with a sovereign gesture; take out what
is owed to the shop, and the rest is yours.
CHAPTER CXLVII
CHAPTER CXLVIII
CHAPTER CXLIX