Lecture Notes on Data Engineering
and Communications Technologies 142

Rajkumar Buyya
Susanna Munoz Hernandez
Ram Mohan Rao Kovvur
T. Hitendra Sarma Editors

Computational
Intelligence
and Data
Analytics
Proceedings of ICCIDA 2022
Lecture Notes on Data Engineering
and Communications Technologies

Volume 142

Series Editor
Fatos Xhafa, Technical University of Catalonia, Barcelona, Spain
The aim of the book series is to present cutting-edge engineering approaches to data
technologies and communications. It publishes the latest advances on the engineering
task of building and deploying distributed, scalable and reliable data infrastructures
and communication systems.
The series has a prominent applied focus on data technologies and communications,
with the aim of promoting the bridging from fundamental research on data
science and networking to data engineering and communications that lead to industry
products, business knowledge and standardisation.
Indexed by SCOPUS, INSPEC, EI Compendex.
All books published in the series are submitted for consideration in Web of Science.
Rajkumar Buyya · Susanna Munoz Hernandez ·
Ram Mohan Rao Kovvur · T. Hitendra Sarma
Editors

Computational Intelligence
and Data Analytics
Proceedings of ICCIDA 2022
Editors

Rajkumar Buyya
Cloud Computing and Distributed Systems (CLOUDS) Laboratory
University of Melbourne
Melbourne, VIC, Australia

Susanna Munoz Hernandez
Computer Science School (FI)
Technical University of Madrid
Madrid, Spain

Ram Mohan Rao Kovvur
Department of Information Technology
Vasavi College of Engineering
Hyderabad, India

T. Hitendra Sarma
Department of Information Technology
Vasavi College of Engineering
Hyderabad, India

ISSN 2367-4512 ISSN 2367-4520 (electronic)


Lecture Notes on Data Engineering and Communications Technologies
ISBN 978-981-19-3390-5 ISBN 978-981-19-3391-2 (eBook)
https://doi.org/10.1007/978-981-19-3391-2

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Singapore Pte Ltd. 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface

The International Conference on Computational Intelligence and Data Analytics
(ICCIDA 2022) has been recognized as a very successful conference, hosted by
the Department of Information Technology, Vasavi College of Engineering, and
sponsored by Vasavi Academy of Education, held during 8–9 January 2022 at Vasavi
College of Engineering (Autonomous), Hyderabad, Telangana, India.
ICCIDA 2022 aimed to provide an exciting platform for young researchers
to exchange their innovations and explore future research prospects with the
global research community in academia and industry, in the areas of computational
intelligence, data analytics, and allied fields.
It is worth mentioning that ICCIDA 2022 made its mark by attracting
175 high-quality research articles from several parts of the globe, including the USA,
Canada, Europe, South Africa, Indonesia, Malaysia, Nepal, Bangladesh, Oman and
the UAE, and from 18 different states of India. All the articles were critically peer-reviewed,
and finally, 43 papers were accepted and presented by the authors during the
conference. Further, six keynote talks were delivered by eminent researchers,
including Dr. Rajkumar Buyya (The University of Melbourne, Australia), Dr. Bing
Xue (Director (Computer Science) in School of Engineering and Computer Science
at Victoria University of Wellington, New Zealand), Dr. Manuel Roveri (Politecnico
di Milano, Italy), Dr. Marde Helbig (Griffith University, Australia), Dr. Atul Negi
(University of Hyderabad, India) and Dr. K. Raghavendra (Head High Performance
Computing and Drones, Advanced Data Processing Research Institute (ADRIN),
ISRO).
ICCIDA 2022 maintained all the necessary quality standards and was regarded as
a high-quality international conference with an acceptance ratio of 24.57%.
The sincere efforts of the Technical Program Committee members and the Organizing
Committee members are highly appreciated. I would like to put on record my sincere
thanks to the Principal of VCE, Prof. S. V. Ramana, CEO Sri P. Balaji, and the
management of Vasavi College of Engineering for their constant encouragement
and generous support in making the event successful, and for encouraging active
researchers by giving rewards for the best papers.


We sincerely thank Mr. Aninda Bose and his team at Springer Nature for their
strong support towards publishing this volume in the series of Lecture Notes on Data
Engineering and Communications Technologies—Indexed by Scopus, Inspec and Ei
Compendex.

Hyderabad, India

Dr. Ram Mohan Rao Kovvur
Conference Chair
Contents

Container Orchestration in Edge and Fog Computing
Environments for Real-Time IoT Applications . . . . . . . . . . . . . . . . . . . . . . . 1
Zhiyu Wang, Mohammad Goudarzi, Jagannath Aryal,
and Rajkumar Buyya
Is Tiny Deep Learning the New Deep Learning? . . . . . . . . . . . . . . . . . . . . . . 23
Manuel Roveri
Dynamic Multi-objective Optimization Using Computational
Intelligence Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Mardé Helbig
AI for Social Good—A Faustian Bargain . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Atul Negi

Computational Intelligence–Machine Learning


Text Summarization Approaches Under Transfer Learning
and Domain Adaptation Settings—A Survey . . . . . . . . . . . . . . . . . . . . . . . . . 73
Meenaxi Tank and Priyank Thakkar
An Effective Eye-Blink-based Cyber Secure PIN Password
Authentication System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Susmitha Mukund Kirsur, M. Dakshayini, and Mangala Gowri
A Comparison of Algorithms for Bayesian Network Learning
for Triple Word Form Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Soorya Surendran, Mithun Haridas, Greeshma Krishnan,
Nirmala Vasudevan, Georg Gutjahr, and Prema Nedungadi
Application of Machine Learning Algorithm in Identification
of Anaemia Diseases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Lata Upadhye and Sangeetha Prasanna Ram


Detection of Fruits Image Applying Decision Tree Classifier
Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Shivendra, Kasa Chiranjeevi, and Mukesh Kumar Tripathi
Disease Prediction Based on Symptoms Using Various Machine
Learning Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Deep Rahul Shah and Dev Ajay Dhawan
Anti-Drug Response and Drug Side Effect Prediction Methods:
A Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Davinder Paul Singh, Abhishek Gupta, and Baijnath Kaushik
Assessment of Segmentation Techniques for Irregular Border
Lesion Images in Melanoma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
K. Gnana Mayuri and L. Sathish Kumar
Secure Communication and Pothole Detection for UAV Platforms . . . . . . 183
S. Aruna, P. Lahari, P. Suraj, M. W. F. Junaid, and V. Sanjeev
An Empirical Study on Discovering Software Bugs Using Machine
Learning Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
G. Ramesh, K. Shyam Sunder Reddy, Gandikota Ramu,
Y. C. A. Padmanabha Reddy, and J. Somasekar
Action Segmentation for RGB Video Frames Using Skeleton 3D
Data of NTURGB+D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Rosepreet Kaur Bhogal and V. Devendran
Prediction of Rainfall Using Different Machine Learning
Regression Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
B. Leelavathy, Ram Mohan Rao Kovvur, Sai Rohit Sheela,
M. Dheeraj, and V. Vivek
A Comprehensive Survey of Datasets Used for Spam
and Genuineness Views Detection in Twitter . . . . . . . . . . . . . . . . . . . . . . . . . 223
Monal R. Torney, Kishor H. Walse, and Vilas M. Thakare

Computational Intelligence–Deep Learning


Indian Classical Dance Forms Classification Using Transfer
Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Challapalli Jhansi Rani and Nagaraju Devarakonda
Skin Cancer Classification for Dermoscopy Images Using Model
Based on Deep Learning and Transfer Learning . . . . . . . . . . . . . . . . . . . . . . 257
Vikash Kumar and Bam Bahadur Sinha

Deep Neural Network Architecture for Face Mask Detection
Against COVID-19 Pandemic Using Pre-trained Exception
Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
S. R. Surya and S. R. Resmi
MOOC-LSTM: The LSTM Architecture for Sentiment Analysis
on MOOCs Forum Posts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Purnachary Munigadiapa and T. Adilakshmi
License Plate Detection of Motorcyclists Without Helmets . . . . . . . . . . . . . 295
S. K. Chaya Devi, G. Vishal Reddy, Y. Aakarsh, and B. Gowtham
Object Detection and Tracking Using DeepSORT . . . . . . . . . . . . . . . . . . . . . 303
Divya Lingineni, Prasanna Dusi, Rishi Sai Jakkam, and Sai Yada
Continuous Investing in Advanced Fuzzy Technologies for Smart
City . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
V. Lakhno, V. Malyukov, D. Kasatkin, V. Chubaieskyi, S. Rzaieva,
and D. Rzaiev
Lesion Segmentation in Skin Cancer Detection Using UNet
Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
Shubhi Miradwal, Waquas Mohammad, Anvi Jain, and Fawwaz Khilji
Getting Around the Semantics Challenge in Hateful Memes . . . . . . . . . . . 341
Anind Kiran, Manah Shetty, Shreya Shukla, Varun Kerenalli,
and Bhaskarjyoti Das
Classification of Brain Tumor of Magnetic Resonance Images
Using Convolutional Neural Network Approach . . . . . . . . . . . . . . . . . . . . . . 353
Raghawendra Sinha and Dipti Verma
Detection of COVID-19 Infection Using Convolutional Neural
Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
D. Aravind, Neha Jabeen, and D. Nagajyothi
Hybrid Classification Algorithm for Early Prediction
of Alzheimer’s Disease . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
B. A. Sujatha Kumari, Sudarshan Patil Kulkarni, and Ayesha Sultana

Data Analytics
Evaluating Models for Better Life Expectancy Prediction . . . . . . . . . . . . . 389
Amit, Reshov Roy, Rajesh Tanwar, and Vikram Singh
Future Gold Price Prediction Using Ensemble Learning
Techniques and Isolation Forest Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Nandipati Bhagya Lakshmi and Nagaraju Devarakonda
Second-Hand Car Price Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
N. Anil Kumar

A Study on Air Pollution Over Hyderabad Using Factor
Analysis—Kaggle Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
N. Vasudha and P. Venkateswara Rao
A Comparative Study of Hierarchical Risk Parity Portfolio
and Eigen Portfolio on the NIFTY 50 Stocks . . . . . . . . . . . . . . . . . . . . . . . . . 443
Jaydip Sen and Abhishek Dutta
Collaborative Approach Toward Information Retrieval System
to Get Relevant News Articles Over Web: IRS-Web . . . . . . . . . . . . . . . . . . . 461
Shabina and Sonal Chawla
Patent Recommendation Engine Using Graph Database . . . . . . . . . . . . . . . 475
Aniruddha Chatterjee, Sagnik Biswas, and M. Kanchana
IFF: An Intelligent Fashion Forecasting System . . . . . . . . . . . . . . . . . . . . . . 487
Chakita Muttaraju, Ramya Narasimha Prabhu, S. Sheetal, D. Uma,
and S. S. Shylaja
SIR-M Epidemic Model: A SARS-CoV-2 Perspective . . . . . . . . . . . . . . . . . 499
Lekshmi S. Nair and Jo Cheriyan
MultiCity: A Personalized Multi-itinerary City Recommendation
Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
Joy Lal Sarkar and Abhishek Majumder

Block Chain and Cloud Computing


A Fully Distributed Secure Approach for Database Security
in Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
Srinu Banothu, A. Govardhan, and Karnam Madhavi
Blockchain Technology Adoption for General Elections During
COVID-19 Pandemic and Beyond . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
Israel Edem Agbehadji, Abdultaofeek Abayomi, Richard C. Millham,
and Owusu Nyarko-Boateng
Blockchain Implementation Framework for Tracing the Dairy
Supply Chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
Mohammed Marhoun Khamis Al Nuaimi, K. P. Rishal,
Noel Varghese Oommen, and P. C. Sherimon
Addressing Most Common Vulnerabilities in Blockchain-Based
Voting Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
Ahmed Ben Ayed and Mohamed Amin Belhajji

Networks and Security


Privacy Preserving Intrusion Detection System for Low Power
Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
S. Prabavathy and I. Ravi Prakash Reddy
Identifying Top-N Influential Nodes in Large Complex Networks
Using Network Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
M. Venunath, P. Sujatha, and Prasad Koti
Push and Pull Factors for Successful Implementation of ERP
in SMEs Within Klang Valley: A Roadmap . . . . . . . . . . . . . . . . . . . . . . . . . . 609
Anusuyah Subbarao and Astra Hareyana
A Hybrid Social-Based Routing to Improve Performance
for Delay-Tolerant Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
Sudhakar Pandey, Nidhi Sonkar, Sanjay Kumar,
and Yeleti Sri Satyalakshmi

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631


About the Editors

Dr. Rajkumar Buyya is a Redmond Barry Distinguished Professor and Director


of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the
University of Melbourne, Australia. He is also serving as the founding CEO of
Manjrasoft, a spin-off company of the University, commercializing its innovations
in Cloud Computing. He has authored over 750 publications and published seven
textbooks that are popular internationally. Dr. Buyya is one of the most highly cited
authors in computer science and software engineering worldwide. “A Scientometric
Analysis of Cloud Computing Literature” by German scientists ranked Dr. Buyya as
the World’s Top-Cited (#1) Author and the World’s Most Productive (#1) Author in
Cloud Computing. He served as founding Editor-in-Chief of the IEEE Transactions
on Cloud Computing. He is currently serving as Editor-in-Chief of Software: Practice
and Experience, a long-standing journal in the field established nearly 50 years ago.

Susanna Munoz Hernandez holds a Ph.D. in Computer Science from the Technical
University of Madrid (UPM), a Master in Management of Information Technologies from the
Ramón Llull University of Barcelona, and a degree in International Affairs from the
Society of International Studies of Madrid. She won the first prize in a national
competition for talented young people organized by the University of La Salle of
Madrid in 2003. She has professional experience in several companies (ICT and banking
sector) and has taken part in national and European research projects of recognized prestige.
She has been working as an associate professor at the Computer Science School of
the Technical University of Madrid since 1998. She develops her research activity
in the BABEL group (http://babel.ls.fi.upm.es), with more than eighty publications.
She received the 2011 UPM prize for research in international cooperation for
development. She is a member of the Management Board of the itdUPM. She has been part
of the organizing committees of more than 15 international conferences.

Dr. Ram Mohan Rao Kovvur received his Ph.D. in Computer Science and Engineering
from Jawaharlal Nehru Technology University (JNTU) in 2014, with a research
specialization in Grid Computing. He has more than 25 years of teaching
experience in various cadres, and he is currently the Professor and Head, Information
Technology, Vasavi College of Engineering, Hyderabad, Telangana, India. He
has received many prestigious awards from reputed organizations. Dr. Ram Mohan Rao
has published and presented more than 25 research articles in national and international
journals and conferences. He obtained a grant of Rs. 19.31 Lakhs from
AICTE under MODROBS and established a Deep Learning Lab. As part of his research
work, he established a Grid environment using the Globus Toolkit (an open-source
software toolkit used for building Grid systems). He also established a Cloud Lab at
VCE using the Aneka platform (US patented) of Manjrasoft Pvt Ltd. His research areas
include Distributed Systems, Cloud Computing, and Data Science.

Dr. T. Hitendra Sarma received his Ph.D. in Machine Learning from JNT University
Anantapur, India, in December 2013. He has more than 14 years of teaching
and research experience and has served at different reputed institutes in various
capacities. Currently, Dr. Sarma is working as an Associate Professor at Vasavi College of
Engineering, Hyderabad. Dr. Sarma has published more than 30 research articles in
various peer-reviewed International Journals and Conferences by Springer, Elsevier,
and IEEE. His research interests include Machine Learning, Hyperspectral Image
Processing, Artificial Neural Networks, Data Mining and Data Science, etc. Dr.
Sarma holds a project funded by SERB, INDIA. Dr. Sarma is an active researcher.
He presented his research articles in reputed conferences like IEEE WCCI (2016
Vancouver, Canada), IEEE CEC (2018, Rio de Janeiro, Brazil) and IEEE ICECIE
(2019 Malaysia). He delivered an Invited Talk in the Third International Conference
on Data Mining (ICDM) 2017 at Hualien, Taiwan.
Container Orchestration in Edge and Fog
Computing Environments for Real-Time
IoT Applications

Zhiyu Wang, Mohammad Goudarzi, Jagannath Aryal, and Rajkumar Buyya

Abstract Resource management is the principal factor to fully utilize the potential
of Edge/Fog computing to execute real-time and critical IoT applications. Although
some resource management frameworks exist, the majority are not designed based
on distributed containerized components. Hence, they are not suitable for highly dis-
tributed and heterogeneous computing environments. Containerized resource man-
agement frameworks such as FogBus2 enable efficient distribution of framework’s
components alongside IoT applications’ components. However, the management,
deployment, health check, and scalability of a large number of containers are
challenging issues. To orchestrate a multitude of containers, several orchestration tools
have been developed. However, many of these tools are heavyweight and impose a high
overhead, especially on resource-limited Edge/Fog nodes. Hence, for hybrid computing
environments, consisting of heterogeneous Edge/Fog and/or Cloud nodes,
lightweight container orchestration tools are required to support both resource-limited
nodes at the Edge/Fog and resource-rich nodes at the Cloud. In this paper,
we propose a feasible approach to build a hybrid and lightweight cluster
based on K3s for the FogBus2 framework, which offers containerized resource
management. This work addresses the challenge of creating lightweight
computing clusters in hybrid computing environments. It also proposes three design
patterns for the deployment of the FogBus2 framework in hybrid environments,
including (1) Host Network, (2) Proxy Server, and (3) Environment Variable. The
performance evaluation shows that the proposed approach improves the response
time of real-time IoT applications up to 29% with acceptable and low overhead.

Keywords Edge computing · Fog computing · Container orchestration · Internet


of Things · Resource management framework

Z. Wang · M. Goudarzi · J. Aryal · R. Buyya (B)


Cloud Computing and Distributed Systems (CLOUDS) Laboratory, School of Computing and
Information Systems,The University of Melbourne, Melbourne, Australia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 1
R. Buyya et al. (eds.), Computational Intelligence and Data Analytics,
Lecture Notes on Data Engineering and Communications Technologies 142,
https://doi.org/10.1007/978-981-19-3391-2_1

1 Introduction

With the rapid development of hardware, software, and communication technology,


IoT devices have become dominant in all aspects of our lives. Traditional physi-
cal devices are connected in the Internet of Things (IoT) environment to perform
humanoid information perception and collaborative interaction. They realize self-
learning, processing, decision-making, and control, thereby completing intelligent
production and service and promoting the innovation of people’s life and work pat-
terns [1].
On this premise, Cloud computing, with its powerful computing and storage capa-
bilities, becomes a shared platform for IoT big data analysis and processing. In most
cases, IoT devices offload complex applications to the Cloud for storage and pro-
cessing, and the output results are then sent from the Cloud to IoT devices [2, 3]. As
a result, users do not have concerns about insufficient storage space or computational
capacity for IoT devices. However, with the explosive growth in the number of IoT
devices, nowadays, the amount of raw data sensed and acquired by the IoT has been
significantly increasing. Consequently, filtering, processing, and analyzing the mas-
sive amount of data in a centralized approach has become an inevitable challenge
for the Cloud computing paradigm [3, 4].
Moreover, the number of real-time IoT applications has increased significantly.
These applications require resources that support fast processing and low
access latency to minimize the total response time [5]. Some examples of these appli-
cations are autonomous robots and disaster management applications (e.g., natural
hazard management).

1.1 Case Study: Natural Disaster Management (NDM)

NDM comprises four phases, namely Prevention, Preparedness, Response, and


Recovery. It is commonly referred to as the PPRR framework for disaster manage-
ment. These four phases are not linear and independent as they overlap and support
each other for a better balance between risk reduction and community resilience for
better response and effective recovery. Geo-spatial solutions for different phases are
in offer in practice considering the availability of big earth observation satellite data
achieved from various satellite missions and IoT-enabled ground-based sensor infor-
mation [6]. However, the optimal fusion of satellite-based sensors and IoT sensors can
provide accurate and precise information in the case of natural disasters. As presented
in Figs. 1 and 2, for the case of bushfire problems in Australia, satellite-based sensors
and IoT-based sensors have been used in an ad hoc manner to inform the end users.
For example, repositories of satellite data primarily from NASA and Digital Earth
Australia and other location-based data are being used for the live alert feeds by the
emergency services in different states. The potential of satellite data and their fusion
in extracting the optimal information in real time is a challenge due to the granularity

Fig. 1 A visualization framework on how satellite and ground-based sensors can be fused utilizing
distributed computing paradigm such as edge computing in providing accurate real-time information
to end users

of sensor-specific spatial data structure on spatial, spectral, temporal, and radiomet-


ric resolutions. With the IoT-based real-time information, there is a strong potential
to validate and calibrate the satellite information captured in different resolutions to
inform bushfire alerts in space and time. For example, early bushfire detection, near-
real-time bushfire progression monitoring, and post-fire mapping and analysis are
possible with the optimal integration of ground-based sensors to the satellite-based
sensor’s information. The framework of integrating sensors and providing accurate
information to end users in real time will help in saving lives and properties.

1.2 Edge and Fog Computing

For smooth and efficient execution of IoT applications, distributed computing
paradigms, called Edge and Fog computing, have emerged. They concentrate
data and processing units as close as possible to the end users, as opposed to the

Fig. 2 A detailed system model on various sensors integration and their utilization in disseminating
data to inform end users

traditional Cloud computing paradigm that concentrates data and processing units
in Cloud data centers [7]. The key idea behind Edge and Fog computing is to bring
Cloud-like services to the edge of the network, resulting in less application latency
and a better quality of experience for users [8, 9]. Edge computing can cope with
medium to lightweight tasks. However, when the users’ requirements consist of com-
plex and resource-hungry tasks, Edge devices are often unable to efficiently satisfy
those requirements since they have limited computing resources [7, 10]. To address
these challenges, Fog computing, also referred to as hybrid computing, is becoming
a popular solution. Figure 3 depicts an overview of the Fog/Hybrid computing envi-
ronment. In our view, Edge computing only harnesses the closest resources to the
end users while Fog computing uses deployed resources at Edge and Cloud layers. In
such computing environments, Cloud can act as an orchestrator, which is responsible
for big and long-period data analysis. It can operate in areas such as management,
cyclical maintenance, and execution of computation-intensive tasks. Fog computing,
on the other hand, efficiently manages the analysis of real-time data to better support
the timely processing and execution of latency-sensitive tasks. However, in practice,
contradicting the strong market demand, Fog computing is still in its infancy, with
problems including no unified architecture, the large number and wide distribution
of Edge/Fog nodes, and lack of technical standards and specifications.
Meanwhile, container technology has been significantly developing in recent
years. Compared with physical and virtual machines, containers are very lightweight,
simple to deploy, support multiple architectures, have a short start-up time, and are
easy to expand and migrate. These features provide a suitable solution to the prob-
lem of severe heterogeneity of Edge/Fog nodes [11]. Container technology is being
dominantly used by industry and academia to run commercial, scientific, and big

Fig. 3 Overview of Fog/Hybrid computing environment

data applications, build IoT systems, and deploy distributed containerized resource
management frameworks such as FogBus2 framework [12]. FogBus2, which is a
distributed and containerized framework, enables fast and efficient resource man-
agement in hybrid computing environments.
Considering the ever-increasing number of containerized applications and frame-
works, efficient management and orchestration of resources have become an impor-
tant challenge. While container orchestration tools such as Kubernetes have become
the ideal solution for managing and scaling deployments, nodes, and clusters in the
industry today [13], there are still many challenges with their practical deployments in
hybrid computing environments. Firstly, orchestration techniques need to consider
the heterogeneity of computing resources in different environments for complete
adaptability. Secondly, the complexity of installing and configuring hybrid comput-
ing environments should be addressed when implementing orchestration techniques.
Thirdly, a strategy needs to be investigated to solve potential conflicts between orches-
tration techniques and the network model in the hybrid computing environment.
Also, as Edge/Fog devices are resource-limited, lightweight orchestration techniques
should be deployed to free up the resources for the smooth execution of end-user
applications. Finally, integrating containerized resource management frameworks
with lightweight orchestration tools is another important yet challenging issue to
support the execution of a diverse range of IoT applications.
To address these problems, this paper investigates the feasibility of deploying
container orchestration tools in hybrid computing environments to enable scalability,
health checks, and fault tolerance for containers.

The main contributions of this paper can be summarized as follows:


• Presents feasible designs for implementing container orchestration techniques in
hybrid computing environments.
• Proposes three design patterns for the deployment of the FogBus2 framework
using container orchestration techniques.
• Puts forward the detailed configurations for the practical deployment of the Fog-
Bus2 framework using container orchestration tools.
The rest of the paper is organized as follows. Section 2 provides a background
study on the relevant technologies and reviews the container orchestration techniques
in Fog computing environments. Section 3 describes the configuration properties of
the K3s cluster and the detailed implementation of deploying the FogBus2 framework
into the K3s cluster. Section 4 presents the performance evaluation. Finally, Sect. 5
concludes the paper and presents future directions.

2 Background Technologies and Related Work

This section discusses the resource management framework and container orches-
tration tools, including the FogBus2 framework and K3s. Moreover, it also reviews
the existing works on container orchestration in the Cloud and Edge/Fog computing
environments.

2.1 FogBus2 Framework

FogBus2 [12] is a lightweight distributed container-based framework, developed


from scratch using Python programming language, enabling distributed resource
management in hybrid computing environments. It integrates edge and Cloud envi-
ronments to implement multiple scheduling policies for scheduling heterogeneous
IoT applications. In addition, it proposes an optimized genetic algorithm for fast
convergence of resource discovery, implementing a scalable mechanism that handles
cases where the number of IoT devices increases or resources become overloaded.
Besides, the dynamic resource discovery mechanism of FogBus2 facilitates
the rapid addition of new entities to the system. Currently, several resource man-
agement policies and IoT applications are integrated with this framework. FogBus2
contains five key containerized components, namely Master, Actor, RemoteLogger,
TaskExecutor, and User, which are briefly described below.
• Master: It handles resource management mechanisms such as scheduling, scala-
bility, resource discovery, registry, and profiling. It also manages the execution of
requested IoT applications.
• Actor: It manages the physical resources of the node on which it is running. Also,
it receives commands from the Master component and initiates the appropriate
Task Executor components based on each requested IoT application.

Fig. 4 Key components of FogBus2 [12]

• Remote Logger: It collects periodic or event-driven logs from other components


(e.g., profiling logs, performance metrics) and stores them in persistent storage
using either a file system or database.
• TaskExecutor: Each IoT application consists of several dependent or independent
tasks. The logic of each task is containerized in one TaskExecutor. Accordingly,
it executes the corresponding task of the application and can be efficiently reused
for other requests of the same type.
• User: It runs on the user’s IoT device and handles the raw data received from
sensors and processed data from Master. It also sends placement requests to the
Master component for the initiation of an IoT application.

Figure 4 shows an overview of the five main components of the FogBus2 and their
interactions.

2.2 K3s: Lightweight Kubernetes

K3s is a lightweight orchestration tool designed for resource-limited environments,


suitable for IoT and Edge/Fog computing [14]. Compared to Kubernetes, K3s is half
the size in terms of memory footprint, but API consistency and functionality are
not compromised [15]. Figure 5 shows the architecture of a K3s cluster containing
one server and multiple agents. Users manage the entire system through the K3s
server and make appropriate usage of the resources of the K3s agents in the cluster to

Fig. 5 The architecture of a single server K3s cluster

achieve optimal operation of applications and services. K3s clusters allow pods (i.e.,
the smallest deployment unit) to be scheduled and managed on any node. Similar to
Kubernetes, K3s clusters also contain two types of nodes, with the server running the
control plane components and kubelet (i.e., the agent that runs on each node), and
the agent running only the kubelet [16]. Typically, a K3s cluster carries a server and
multiple agents. When the URL of a server is passed to a node, that node becomes
an agent; otherwise, it is a server in a separate K3s cluster [14, 16].

2.3 Related Work

Rodriguez et al. [17] investigated multiple container orchestration tools and proposed
a taxonomy of different mechanisms that can be used to cope with fault tolerance,
availability, scalability, etc. Zhong et al. [18] proposed a Kubernetes-based container
orchestration technique for cost-effective container orchestration in Cloud
environments. FLEDGE, developed by Goethals et al. [19], implements container
orchestration in an Edge environment in a Kubernetes-compatible manner.
Pires et al. [20] proposed a framework, named Caravela, that employs a decentralized
architecture, resource discovery, and scheduling algorithms. It leverages users’ vol-
untary Edge resources to build an independent environment where applications can
be deployed using standard Docker containers. Alam et al. [21] proposed a modular
architecture that runs on heterogeneous nodes. Based on lightweight virtualization, it
creates a dynamic system by combining modularity with the orchestration provided
by the Docker Swarm. Ermolenko et al. [22] studied a framework for deploying
IoT applications based on Kubernetes in the Edge-Cloud environment. It achieves

lightweight scaling of task-based applications and allows the addition of external


data warehouses.
In the current literature, some techniques such as [18, 22] use Kubernetes directly
on Edge/Fog nodes, which imposes a high overhead on resource-limited Edge/Fog nodes.
Some techniques such as [21] are restricted to running the master node (i.e., server) only on
the Cloud, which does not support different cluster deployment approaches. Moreover,
some orchestration techniques such as [20] only work with nodes that have
public IP addresses, which restricts many use-cases in Edge/Fog computing environments
where nodes do not have public IP addresses. Considering the current litera-
ture, there exists no lightweight container orchestration technique for the complete
deployment of containerized resource management frameworks in hybrid comput-
ing environments, where heterogeneous nodes are distributed in Edge/Fog and Cloud
computing environments.

3 Container Orchestration Approach

In this section, we propose a feasible approach for deploying container orchestration


techniques in hybrid computing environments. First, we present a high-level overview
of the design. Next, we introduce the concrete implementation details of the proposed
approach.

3.1 Overview of the Design

To build a complete hybrid computing environment for different IoT scenarios, we


use several Cloud and Edge/Fog nodes. We choose K3s as the backbone for the hybrid
computing environment because it only occupies less than half of the resources of
Kubernetes, but fully implements the Kubernetes API, and is specially optimized
for the resource-constrained nodes at the Edge/Fog layer. In practice, we use three
Cloud instances at the Cloud layer and create several Linux virtual machines (aligned
with hardware specification of Raspberry Pi Zero) as our Edge/Fog nodes. Our Cloud
nodes have public IP addresses while Edge/Fog nodes do not hold public IP addresses.
To address this problem, we use Wireguard to set up a lightweight Peer-to-Peer (P2P)
VPN connection among all nodes. After creating the hybrid computing environment,
we start to embed the FogBus2 resource management framework into it. To take
advantage of the container orchestration tool, we allocate only one container to each
Pod created by K3s, with only one component of the FogBus2 framework running
inside each container. Also, to balance the load on each node between clusters,
we assign pods to different nodes. The initialization of the FogBus2 components
requires the binding of the host IP address, which will be used to pass information
between the different components. This means that in K3s clustering, the FogBus2
component needs to bind the IP address of the pod, which poses a difficulty for the

Fig. 6 Overview of the design pattern

implementation, as usually the pod is created at the same time as the application
is deployed. To address this challenge, we evaluate three approaches and finally
decide to use host network mode to deploy the FogBus2 framework in the K3s
hybrid environment. Host network mode allows pods to use the network configuration
of virtual instances or physical hosts directly, which addresses the communication
challenge of the FogBus2 components and the conflict between K3s network planning
service and VPN. Figure 6 shows a high-level overview of our proposed design
pattern.

3.2 Configuration of Nodes

The deployed hybrid computing environment consists of several instances, labeled A


through E. The node list, computing layer, specifications, public network IP address,
and private network IP address, after the VPN connection is established, are given
in Table 1.

3.3 P2P VPN Establishment

As shown in Table 1, Cloud nodes have public IP addresses, while in most cases,
devices in the Edge/Fog environment do not have public IP addresses. In this case,
in order to build a hybrid computing environment, we need to establish a VPN
connection to integrate the Cloud and Edge/Fog nodes. We use Wireguard to establish
a lightweight P2P VPN connection between all the nodes. In the implementation,
Table 1 Configuration of nodes in integrated computing environment

Node tag | Node name | Computing layer | Specifications          | Public IP      | Private IP | Port        | Preparation
A        | Nectar1   | Cloud           | 16-core CPU, 64 GB RAM  | 45.113.235.156 | 192.0.0.1  | Auto assign | Docker
B        | Nectar2   | Cloud           | 2-core CPU, 9 GB RAM    | 45.113.232.199 | 192.0.0.2  | Auto assign | Docker
C        | Nectar3   | Cloud           | 2-core CPU, 9 GB RAM    | 45.113.232.232 | 192.0.0.3  | Auto assign | Docker
D        | VM1       | Edge            | 1-core CPU, 512 MB RAM  | –              | 192.0.0.4  | Auto assign | Docker
E        | VM2       | Edge            | 1-core CPU, 512 MB RAM  | –              | 192.0.0.5  | Auto assign | Docker

Fig. 7 A sample configuration script for the Wireguard

we install Wireguard on each node and generate the corresponding configuration
scripts (based on the FogBus2 VPN scripts) to ensure that each node has direct access
to all other nodes in the cluster. A sample configuration script for the Wireguard VPN,
derived from the FogBus2 scripts, is shown in Fig. 7.
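
Since the figure itself is not reproduced here, the following is a minimal sketch of how such a configuration can be produced on one node. The key file locations, the interface name wg0, and the peer entry are illustrative assumptions; the actual FogBus2 scripts generate the per-node configurations automatically.

# Generate a key pair for this node (illustrative; not the exact FogBus2 script)
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

# Minimal wg0.conf for Cloud node A (192.0.0.1); one [Peer] block is added per remote node
cat <<'EOF' > /etc/wireguard/wg0.conf
[Interface]
Address = 192.0.0.1/24
ListenPort = 51820
PrivateKey = <private key of node A>

[Peer]
# Edge node D (behind NAT, no public IP)
PublicKey = <public key of node D>
AllowedIPs = 192.0.0.4/32
EOF

# Bring the VPN interface up
wg-quick up wg0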

Fig. 8 K3s deployment in computing environment

3.4 K3s Deployment

The K3s server can be located at the Cloud or at the edge, while the remaining
four nodes act as K3s agents. As the aim of this research is to enable container
orchestration on the FogBus2 framework, we need to install and enable Docker on
both the server and agents before building K3s. First, we install and start the K3s
server in Docker mode. K3s allows users to choose the appropriate container tool,
but as all components of FogBus2 run natively in Docker containers, we use Docker
mode to initialize the K3s server to allow it to access the Docker images. Then, we
extract a token from the server, which will be used to join other agents to the server.
After that, we install the K3s on each agent, specifying the IP of the server and
the token obtained from the server during installation to ensure that all agents can
properly connect to the server. Figure 8 shows the successful deployment of the K3s
cluster.
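
The commands below sketch this procedure, assuming the standard K3s installation script; the exact flags (e.g., --docker) depend on the installed K3s version.

# On the server node: install K3s and tell it to use Docker instead of the bundled containerd
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--docker" sh -

# Read the join token that agents need
sudo cat /var/lib/rancher/k3s/server/node-token

# On each agent node: join the cluster through the server's VPN address
curl -sfL https://get.k3s.io | K3S_URL="https://192.0.0.1:6443" \
  K3S_TOKEN="<token from the server>" INSTALL_K3S_EXEC="--docker" sh -

# Back on the server: verify that all nodes have joined (cf. Fig. 8)
sudo k3s kubectl get nodes -o wide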

3.5 Fogbus2 Framework Integration

In the native design of the FogBus2 framework, all components are running in con-
tainers. The pod, as the smallest unit created and deployed by K3s, can wrap one or
more containers. Any containers in the same pod will share the same namespace and
local network. Containers can easily communicate with other containers in the same
or different pod as if they were on the same machine while maintaining a degree of
isolation. So first, we are faced with the choice of assigning only one container per
pod (i.e., a component that the FogBus2 framework is built on) or allowing each pod
to manage multiple containers. The former design would balance the load on K3s
nodes as much as possible to facilitate better management by the controller, while
the latter design would reduce the time taken to communicate between components
and provide faster feedback to users. We decide to adopt the former design to achieve
batch orchestration and self-healing from failures.
In order to integrate all types of FogBus2 framework’s components into K3s, we
first define YAML deployment files for the necessary components. Each file provides
the object's spec, which describes the expected state of the object, as well
as some basic information about the object. In our work, the YAML deployment file
serves to declare the number of replicas of the pod, the node it is built on, the name
of the image, the image pulling policy, the parameters for application initialization,
and the location of the mounted volumes. Code Snippet 1 illustrates the YAML
deployment file for the Master component of the FogBus2 framework.
# YAML deployment file for the Master component
# of the FogBus2 framework
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: fogbus2-master
  name: fogbus2-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fogbus2-master
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: fogbus2-master
    spec:
      containers:
        - env:
            - name: PGID
              value: "1000"
            - name: PUID
              value: "1000"
            - name: PYTHONUNBUFFERED
              value: "0"
            - name: TZ
              value: Australia/Melbourne
          image: cloudslab/fogbus2-master
          imagePullPolicy: ""
          name: fogbus2-master
          args: ["--bindIP", "192.0.0.1",
                 "--bindPort", "5001",
                 "--remoteLoggerIP", "192.0.0.1",
                 "--remoteLoggerPort", "5000",
                 "--schedulerName", "RoundRobin",
                 "--containerName", "TempContainerName"]
          resources: {}
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: fogbus2-master-hostpath0
            - mountPath: /workplace/
              name: fogbus2-master-hostpath1
            - mountPath: /workplace/.mysql.env
              name: fogbus2-master-hostpath2
      restartPolicy: Always
      serviceAccountName: ""
      nodeName: master
      hostNetwork: true
      volumes:
        - hostPath:
            path: /var/run/docker.sock
          name: fogbus2-master-hostpath0
        - hostPath:
            path: /home/hehe/FogBus2/containers/master/sources
          name: fogbus2-master-hostpath1
        - hostPath:
            path: /home/hehe/FogBus2/containers/master/sources/.mysql.env
          name: fogbus2-master-hostpath2
status: {}
Code Snippet 1 The YAML deployment file for the Master component of the FogBus2 framework
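
Assuming the manifest above is saved as fogbus2-master.yaml (the file name is our choice), the deployment can be applied and inspected with standard kubectl commands:

# Deploy the Master component and confirm that its pod is scheduled and running
kubectl apply -f fogbus2-master.yaml
kubectl get pods -o wide -l app=fogbus2-master
kubectl logs deployment/fogbus2-master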

In the communication design of the FogBus2 framework, the initialization of


components requires the binding of the host IP address, which will be used to pass
information between components. For example, when a Master component is created,
the IP address of the host will be passed in as a required parameter, which will also
be passed in as a necessary parameter to initializing the Actor component. Because
the FogBus2 framework has some generic functions that will be used by multiple or
all components, the Master component will send its assigned host/VPN IP address
to the Actor component and requests to return the information to this address. If this
IP address is not the same as the IP address used to initialize the Actor component,
communication cannot be correctly established. When the FogBus2 framework is
deployed using Docker Compose (e.g., the native way that FogBus2 is deployed),
communication between the components is smooth because the containers are run-
ning directly on the host. However, when the FogBus2 framework starts in K3s,
the communication mechanism between the components should be updated since
containers are running in pods and each pod has its own IP address. Components
cannot listen to the IP address of the host because, by default, the pod’s network
environment is separate from the host, which poses a challenge for the deployment
of the FogBus2 framework. To cope with this problem, we propose the following
three design models.
Host Network When starting FogBus2 components in a K3s cluster, instead of using
the cluster’s own network services, we use the host’s network configuration directly.
Specifically, we connect each pod directly to the network of its host. In this case, the
components in the FogBus2 framework can be bound directly to the host’s network
at initialization, and the IP address notified to the target component is the same as
the one configured by the target component at initialization. This design implements
the following functions for the FogBus2 framework:

• Batch Orchestration: It allows containers to be orchestrated across multiple hosts.


In contrast, the native FogBus2 uses docker-compose, which can only create a
single container instance locally.
• Health Check: The system knows when the container is ready and can start accept-
ing traffic.
• Self-healing from Failure: When a running pod stops abnormally or is deleted
by mistake, the system can restart the pod.
• Dynamic Change: Users can dynamically change the resource limits of the running
pods, including the size of the physical memory footprint, the number of physical
CPUs, etc. (example commands for this and the self-healing capability are sketched after this list).
• Resource Utilization: The system can distribute applications across the nodes
and choose the node with the lowest physical resource usage for each deployment.
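
The following commands sketch how the self-healing and dynamic-change capabilities listed above can be exercised against the Master deployment; they are illustrative kubectl operations, not part of the FogBus2 code base.

# Self-healing: deleting the running pod makes the Deployment controller recreate it
kubectl delete pod -l app=fogbus2-master
kubectl get pods -l app=fogbus2-master -w   # watch the replacement pod come up

# Dynamic change: adjust the resource limits of the running deployment
kubectl set resources deployment fogbus2-master --limits=cpu=500m,memory=256Mi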

However, this design pattern sacrifices some of the functionality of K3s. When
pods are connected directly to the network environment where the hosts are located,
the K3s controller will not be able to optimally manage all the containers within the
cluster because these services require the K3s controller to have the highest level of
access to the network services used by the pods. If the pods are on a VPN network,
we will not be able to implement all the features of K3s. We use Host Network mode
to deploy the FogBus2 framework in the K3s cluster in this paper.
Proxy Server As the problem stems from a conflict between the communication
design of the FogBus2 framework and the communication model between pods in
the K3s cluster, we can create a proxy server that defines the appropriate routing poli-
cies to receive and forward messages from different applications. When a FogBus2
component needs to send a message to another component, we import the message
into the proxy server, which analyzes the message to extract the destination and for-
ward it to the IP address of the target component according to its internal routing
policy. This approach bypasses the native communication model of the FogBus2
framework, and all communication between applications is done through the proxy
server.
There are two types of communication methods in the FogBus2 framework, pro-
prietary methods and generic methods. The proprietary methods are used to commu-
nicate with fixed components, such as master and remote logger, whose IP addresses
are configured and stored as global variables when most components are initialized.
In contrast, the generic methods are used by all components and are called by compo-
nents to transmit their IP addresses as part of the message for the target component.
Therefore, to enable all components to send messages to the proxy server for pro-
cessing, we need to change the source code of the FogBus2 framework so that all
components are informed of the IP address of the proxy server at initialization and
to unify the two types of communication methods so that components will include
information about the target in the message and send it to the proxy server. As a result,
this design would involve a redesign of the communication model of the FogBus2
framework.

Environment Variable In the K3s cluster, when an application is deployed, the cluster controller automatically creates a pod to manage the container in which the application resides. Nevertheless, in the YAML file we can expose the IP address of the created pod when configuring the container information, which allows us to pass it in as an environment variable when initializing the components of the FogBus2 framework. The IP address bound to the component is then the IP address of the pod, and the component can transmit this address to the target component when communicating and receive messages back.
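This pattern relies on the Kubernetes Downward API, which can expose a pod's own address (status.podIP) to its container as an environment variable. The fragment below sketches the equivalent of the YAML configuration with the Kubernetes Python client and shows how a component could read the value at initialization; the variable name POD_IP, the image name, and the way FogBus2 would consume the value are assumptions for illustration.

```python
# Minimal sketch of the Environment Variable Pattern: the Downward API injects
# the pod IP, and the component binds/advertises that address at start-up.
# The variable name POD_IP and its use inside FogBus2 are assumptions.
import os
from kubernetes import client

pod_ip_env = client.V1EnvVar(
    name="POD_IP",
    value_from=client.V1EnvVarSource(
        field_ref=client.V1ObjectFieldSelector(field_path="status.podIP")),
)

container = client.V1Container(
    name="fogbus2-actor",
    image="example/fogbus2-actor:latest",  # hypothetical image
    env=[pod_ip_env],
)

# Inside the component, the injected address would then be read at initialization:
bind_ip = os.environ.get("POD_IP", "0.0.0.0")
print(f"Binding component sockets to {bind_ip}")
```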
However, in our experiments, we find that pods on different nodes have problems communicating at runtime. Tracing the flow of transmitted information shows that the cause is a conflict between the network services configured within the cluster and the VPN used to build the hybrid computing environment. The pods possess unique IP addresses and use them to communicate with each other, but these addresses are not recognized by the VPN on the nodes, which prevents the information from leaving the hosts. To solve this problem, we propose two solutions:
• Solution 1: K3s uses flannel as the Container Network Interface (CNI) by default. We can change the default network configuration of the K3s cluster and replace the default flannel backend with WireGuard.
• Solution 2: We can change the WireGuard settings to add the interface of the network service created by the K3s controller to the VPN profile, allowing incoming and outgoing messages from a specific range of IP addresses.

4 Performance Evaluation

In this section, two experiments are conducted using three real-time applications to
evaluate the performance of orchestrated FogBus2 (O-FogBus2) and native FogBus2,
as well as the performance of FogBus2 in the hybrid versus Cloud environment. The
real-time applications used in the experiments are described in Table 2.

Table 2 The list of applications

Application name            Tag       Description
NaiveFormulaSerialized      Formula   A mathematical formula where different parts are calculated as different tasks
FaceDetection (480P Res)    FD480     Face detection from real-time/recorded video streams at 480P resolution
FaceDetection (240P Res)    FD240     Face detection from real-time/recorded video streams at 240P resolution
Fig. 9 Response times for Orchestrated FogBus2 (O-FogBus2) versus native FogBus2 in three applications: (a) Formula, (b) FD480, (c) FD240

4.1 Experiment 1: Orchestrated FogBus2 Versus Native FogBus2

This experiment studies the performance of the FogBus2 framework deployed in K3s and compares it with the native FogBus2. In the experiment, we run the systems in the same network environment and set the same scheduling policy to ensure the reliability of the experimental results.
The environment setup for this experiment is shown in Table 1. For both deploy-
ment types, we implement the same deployment strategy to ensure fairness, with the
Master and one Actor running on the Edge, and the Remote Logger and two other
Actors running on the Cloud.
Figure 9 shows the response times of orchestrated FogBus2 and native FogBus2 for the three applications. The red dots represent the average response time, while the top and bottom green lines represent the 95% confidence interval for the mean value. For all tested applications, when FogBus2 runs in K3s, the average response time is on average 7% longer than that of the native FogBus2 framework. This is because the K3s cluster's own management of deployments incurs some overhead; however, given the resource management mechanisms, scheduling, and automatic container health checks provided by K3s, we consider this overhead lightweight and acceptable.
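For reference, confidence intervals of this kind can be reproduced from raw response-time samples as sketched below; the sample values are placeholders, and the Student-t interval is one common choice, not necessarily the exact method used for Fig. 9.

```python
# Sketch: mean response time and a 95% confidence interval for the mean,
# assuming per-request response times (in ms) are available as a list.
import numpy as np
from scipy import stats

samples = np.array([412.0, 398.5, 430.2, 405.7, 441.3])  # placeholder measurements

mean = samples.mean()
sem = stats.sem(samples)  # standard error of the mean
low, high = stats.t.interval(0.95, df=len(samples) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.1f} ms, 95% CI = [{low:.1f}, {high:.1f}] ms")
```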

4.2 Experiment 2: Hybrid Environment Versus Cloud Environment

This experiment studies the performance of O-FogBus2 deployed in the hybrid computing environment versus the Cloud computing environment. As in Sect. 4.1, the environment setup for this experiment is shown in Table 1. In the hybrid computing environment, the Master and one Actor run on the Edge, and the Remote Logger and two other Actors run on the Cloud. In the Cloud environment, all components run on the Cloud.
Fig. 10 Response times for Orchestrated FogBus2 (O-FogBus2) in Hybrid versus Cloud deployment: (a) Formula, (b) FD480, (c) FD240

Figure 10 depicts the response time of FogBus2 deployed in the hybrid and Cloud environments for the three applications. For all tested applications, the average response time is up to 29% shorter when FogBus2 runs in the hybrid environment than when it runs entirely in the Cloud. This is because end users are usually located at the edge of the network and the final result must be forwarded to them. If all the components of FogBus2 run in the Cloud, delivery takes longer and suffers from the instability of the Wide Area Network (WAN). Since FogBus2 is designed for IoT devices that integrate Cloud and Edge/Fog environments, and the introduction of K3s does not remove this capability, we believe that placing the entire system in a hybrid computing environment makes reasonable use of Cloud and Edge/Fog computing resources and improves system performance.

5 Conclusions and Future Work

In this paper, we discussed the importance of resource management to support real-time IoT applications. We presented feasible designs for implementing container
orchestration techniques in hybrid computing environments. This study proposed
three design patterns for deploying the containerized resource management frame-
works such as the FogBus2 framework into the hybrid environment. Besides, we
described the detailed configuration of K3s deployment and the integration of the
FogBus2 framework using the host network approach. The Host Network Pattern con-
nects the components of the cluster to the host network environment, using the native
communication model of the FogBus2 framework by masking the internal network
environment of the cluster while avoiding the network conflict problems related to
VPN. Compared to the native FogBus2 framework, the new system (i.e., O-FogBus2)
enables resource limit control, health check, and self-healing from failure to cope
with the ever-changing number and functionality of connected IoT devices.
We identified several future works to further improve the container orchestra-
tion for efficient resource management in hybrid computing environments. Firstly,
we can consider implementing elastic scalability to automatically add or remove
computing resources according to the demands of IoT applications. To address this
20 Z. Wang et al.

challenge, the Proxy Server and Environment Variable design approaches can be
investigated to enable dynamic scalability. Secondly, lightweight security mecha-
nisms can be embedded into the container orchestration mechanisms. As IoT devices
are highly exposed to users, security and privacy become important. However, the
limited resources of Edge/Fog devices create difficulties for the implementation of
security mechanisms. Therefore, lightweight security mechanisms to ensure end-
to-end integrity and confidentiality of user information can be further investigated.
Next, integrating different orchestration tools, including KubeEdge, Docker Swarm,
and MicroK8s, can be considered as an important future direction. Different orches-
tration tools may be suitable for different computing environments, so it is essential
to find the best application scenarios for them. We can explore the impact of dif-
ferent integrated container orchestration tools for handling real-time and non-real-
time IoT applications. Also, a variety of scheduling policies can be implemented to
automate application deployment and improve resource usage efficiency for clus-
ters, ranging from heuristics to reinforcement learning techniques [2]. For example,
scheduling pods to nodes with smaller memory and CPU footprints to automatically
load-balancing on the cluster, or spreading replicative pods across different nodes
to avoid severe system failures. Furthermore, since machine learning techniques [2,
23] are becoming mature and widely used in various fields, we can consider integrat-
ing them into the Edge/Fog and Cloud computing environment. Machine learning
techniques can be used to analyze the state of the current computing environment,
improve the system’s ability to manage resources, and distribute workloads. As cur-
rent machine learning tools are often designed for powerful servers, future research
can optimize them to run on resource-constrained Edge/Fog devices. Finally, the
adopted techniques can consider the requirements of specific application domains
such as natural disaster management, which significantly affect human life.

References

1. Gubbi J, Buyya R, Marusic S, Palaniswami M (2013) Internet of things (IoT): a vision, archi-
tectural elements, and future directions. Future Gener Comput Syst 29(7):1645–1660
2. Goudarzi M, Palaniswami MS, Buyya R (2021) A distributed deep reinforcement learning
technique for application placement in edge and fog computing environments. IEEE Trans
Mob Comput (accepted, in press)
3. Aazam M, Khan I, Alsaffar AA, Huh E-N (2014) Cloud of things: integrating internet of
things and cloud computing and the issues involved. In: Proceedings of the 11th International
Bhurban conference on applied sciences & technology (IBCAST) Islamabad, Pakistan, 14th–
18th January, 2014. IEEE, New York, pp 414–419
4. Goudarzi M, Wu H, Palaniswami M, Buyya R (2021) An application placement technique
for concurrent IoT applications in edge and fog computing environments. IEEE Trans Mob
Comput 20(4):1298–1311
5. Goudarzi M, Palaniswami M, Buyya R (2021) A distributed application placement
and migration management techniques for edge and fog computing environments. In: Proceed-
ings of the 16th conference on computer science and intelligence systems (FedCSIS). IEEE,
New York, pp 37–56
6. Ujjwal KC, Garg S, Hilton J, Aryal J, Forbes-Smith N (2019) Cloud computing in natural
hazard modeling systems: current research trends and future directions. Int J Disaster Risk
Reduct 38:101188
7. Buyya R, Srirama SN (2019) Fog and edge computing: principles and paradigms. Wiley
8. Dastjerdi AV, Buyya R (2016) Fog computing: helping the internet of things realize its potential.
Computer 49(8):112–116
9. Goudarzi M, Palaniswami M, Buyya R (2019) A fog-driven dynamic resource allocation tech-
nique in ultra dense femtocell networks. J Network Comput Appl 145:102407
10. Shi W, Cao J, Zhang Q, Li Y, Lanyu X (2016) Edge computing: vision and challenges. IEEE
Internet Things J 3(5):637–646
11. Bali A, Gherbi A (2019) Rule based lightweight approach for resources monitoring on IoT
edge devices. In: Proceedings of the 5th International workshop on container technologies and
container clouds, pp 43–48
12. Deng Q, Goudarzi M, Buyya R (2021) Fogbus2: a lightweight and distributed container-based
framework for integration of IoT-enabled systems with edge and cloud computing. In: Pro-
ceedings of the international workshop on big data in emergent distributed environments, pp
1–8
13. Cai Z, Buyya R (2022) Inverse queuing model based feedback control for elastic container
provisioning of web systems in Kubernetes. IEEE Trans Comput 71(2):337–348
14. Rancher Labs (2021) K3s—lightweight Kubernetes. https://rancher.com/docs/k3s/latest/en/.
Accessed 24 Jan 2022
15. Todorov MH (2021) Design and deployment of Kubernetes cluster on Raspberry Pi OS. In:
Proceedings of the 29th National conference with international participation (TELECOM).
IEEE, New York, pp 104–107
16. Rancher Labs (2021) Architecture. https://rancher.com/docs/k3s/latest/en/architecture/.
Accessed 24 Jan 2022
17. Rodriguez MA, Buyya R (2019) Container-based cluster orchestration systems: a taxonomy
and future directions. Software: Pract Exp 49(5):698–719
18. Zhong Z, Buyya R (2020) A cost-efficient container orchestration strategy in Kubernetes-based
cloud computing infrastructures with heterogeneous resources. ACM Trans Internet Technol
(TOIT) 20(2):1–24
19. Goethals T, De Turck F, Volckaert B (2019) Fledge: Kubernetes compatible container orchestra-
tion on low-resource edge devices. In: Proceedings of the international conference on internet
of vehicles. Springer, Berlin, pp 174–189
20. Pires A, Simão J, Veiga L (2021) Distributed and decentralized orchestration of containers on
edge clouds. J Grid Comput 19(3):1–20
21. Alam M, Rufino J, Ferreira J, Ahmed SH, Shah N, Chen Y (2018) Orchestration of microservices for IoT using Docker and edge computing. IEEE Commun Mag 56(9):118–123
22. Ermolenko D, Kilicheva C, Muthanna A, Khakimov A (2021) Internet of things services orches-
tration framework based on Kubernetes and edge computing. In: Proceedings of the IEEE
conference of Russian young researchers in electrical and electronic engineering (ElConRus).
IEEE, New York, pp 12–17
23. Agarwal S, Rodriguez MA, Buyya R (2021) A reinforcement learning approach to reduce
serverless function cold start frequency. In: Proceedings of the 21st IEEE/ACM international
symposium on cluster, cloud and internet computing (CCGrid). IEEE, New York, pp 797–803
Is Tiny Deep Learning the New Deep Learning?

Manuel Roveri

Abstract The computing everywhere paradigm is paving the way for the pervasive
diffusion of tiny devices (such as Internet-of-Things or edge computing devices)
endowed with intelligent abilities. Achieving this goal requires machine and deep
learning solutions to be completely redesigned to fit the severe technological con-
straints on computation, memory, and power consumption typically characterizing
these tiny devices. The aim of this paper is to explore tiny machine learning (TinyML)
and introduce tiny deep learning (TinyDL) for the design, development, and deploy-
ment of machine and deep learning solutions for (an ecosystem of) tiny devices,
hence supporting intelligent and pervasive applications following the computing
everywhere paradigm.

Keywords Tiny machine learning (TinyML) · Tiny deep learning (TinyDL) · Internet of things · Edge computing

1 Introduction

The technological evolution and the algorithmic revolution have always represented
two sides of the same coin in the machine learning (ML) field. On the one hand,
advances in technological solutions (e.g., the design of high performance and energy-
efficient hardware devices) have supported the design and development of increas-
ingly complex and technologically demanding ML algorithms and solutions [33,
38]. On the other hand, novel ML algorithms and solutions have been specifically
designed for target hardware devices (e.g., embedded devices or Internet-of-Things
[IoT] units), enabling these devices to be endowed with advanced intelligent func-
tionalities [34, 42].
Interestingly, deep learning (DL) is a relevant and valuable example of this strict
and fruitful relationship between the technological evolution and the algorithmic revolution. Indeed, since the appearance of the first deep neural networks [22, 24], DL
algorithms have completely revolutionized the ML field. Today, they represent the
state-of-the-art solution for recognition, detection, classification, and prediction (to
name but a few) in numerous different application scenarios [28, 31]. Noteworthily,
the basis of DL, namely the idea of deep-stacked processing layers in neural net-
works, dates back to the late 1960s (see the seminal works in [11, 17]). At that time,
the available technological solutions were unable to support the effective and effi-
cient training of such deep neural networks. Thirty years later, the rise of hardware
accelerators, such as graphics processing units (GPUs) and tensor processing units
(TPUs), saw them become technological enablers of the DL revolution. This led to
what is today considered the standard computing paradigm in the field: DL models
trained and executed on hardware accelerators.
From scientific and technological perspectives, it is therefore crucial to identify the
next technological enabler capable of supporting the next algorithmic revolution. One
of the most promising and relevant technological directions is the “computing every-
where” paradigm [2, 18]. It represents a pervasive and heterogeneous ecosystem of
IoT and edge devices that support a wide range of ML-based pervasive applications,
from smart cars to smart homes and cities, and from Industry 4.0 to E-health. Due to
the technological evolution, these pervasive and heterogeneous devices are becoming
tinier and increasingly energy-efficient (often being battery-powered); hence, they
are able to support an effective and efficient on-device processing [2]. This is a cru-
cial ability since moving the processing and, in particular, the intelligent processing
as close as possible to where data are generated guarantees relevant advantages in
ML-based pervasive applications. Some of these advantages are as follows:
• an increase in the autonomy of these pervasive devices, which are therefore able
to make decisions locally (without sending acquired data to the Cloud through the
Internet for processing and then waiting for the results);
• a reduction in the latency with which a decision is made or a reaction is activated;
• a reduction in the required transmission bandwidth, hence enabling these devices
to operate even in areas where high-speed Internet connections are not available
(e.g., rural areas);
• an increase in the energy efficiency of these pervasive devices, since transmitting
the data is much more power-hungry than processing them locally;
• an increase in the privacy of these pervasive applications, since possibly sensitive
data remain on the device;
• the ability to exploit incremental or adaptive learning mechanisms to acquire fresh
knowledge directly from the field, hence improving or (whenever needed) main-
taining the accuracy of ML/DL models over time [8]; and
• the capability to distribute the inference and learning of ML/DL models in
the ecosystem of possibly heterogeneous pervasive devices (i.e., IoT and edge
devices).
The drawback of such an approach is that strict technological constraints char-
acterize these pervasive devices in terms of computation, memory, and power con-
sumption. Indeed, the CPU frequency of such tiny devices is typically in the order
of MHz, the RAM memory is in the order of a few hundred kB, and the power
consumption is typically below 100 mW. Such severe technological constraints pose
huge technical challenges from a design point of view on ML and, particularly, DL
solutions, which are typically highly demanding in terms of computation, memory,
and power consumption. This challenge is further emphasized both by the complexi-
ties in the development of embedded software for tiny devices (i.e., the firmware that
runs on them) and by the need to consider a strict co-design phase that comprises
hardware, software, and ML/DL algorithms.
This is exactly where tiny ML (hereinafter “TinyML”) comes into play, which
involves the design of ML solutions that can be executed on tiny devices, hence
being able to take into account constraints on memory, computation, and power con-
sumption. The aim of this paper is to shed light on the state-of-the-art solutions
for the design of ML and DL solutions that can be executed on tiny devices. In
particular, this paper focuses on the design and development of DL solutions specif-
ically intended to be executed on tiny devices, hence paving the way for the tiny DL
(hereinafter “TinyDL”) revolution for smarter and more efficient pervasive systems
and applications.
The remainder of this paper is organized as follows. Section 2 provides an
overview of TinyML, and then Sect. 3 introduces TinyDL. Section 4 details approx-
imate computing mechanisms for TinyDL. Section 5 introduces specific TinyDL
solutions for the IoT. Lastly, Sect. 6 draws conclusions and presents open research
points.

2 Tiny Machine Learning: An Overview

TinyML [13, 42] is a new and promising area of ML and DL aimed at designing ML solutions that can be executed on tiny devices. Solutions in this area aim to introduce tiny models and architectures characterized by reduced memory and computational demands of the processing layers [9], as well as approximate computing techniques, such as quantization [12] and pruning [27], to address the severe technological constraints on computation, memory, and energy that characterize tiny devices.
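As an illustration of the approximate computing techniques mentioned above, the sketch below applies post-training integer quantization with the TensorFlow Lite converter. The model path, input shape, and calibration data are placeholders, and this is only one of several quantization workflows used in practice.

```python
# Sketch: post-training int8 quantization of a trained Keras model with the
# TensorFlow Lite converter. Paths, shapes, and calibration data are placeholders.
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # A small set of calibration samples shaped like the real inputs.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)  # this flatbuffer is what gets embedded into the firmware
```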
An overview of the TinyML paradigm is presented in Fig. 1. Specifically, TinyML
comprises the following two main modules:

• the hardware module: the physical resources of the tiny device, comprising the
embedded computing board, sensors, actuators, and battery (optional);
• the software module: all the software components that run on the tiny device, com-
prising the preprocessing module, TinyML module, and postprocessing module.

Data acquired through the sensors are preprocessed to remove noise or highlight
relevant features. The TinyML module receives as input the preprocessed data to
produce an inference (e.g., a classification, detection, or prediction) by means of the trained TinyML model. The output of the TinyML module is postprocessed to make a decision or activate a reaction, which is then conducted by means of the actuators.

Fig. 1 Overview of the tiny machine learning (TinyML) computing paradigm, which comprises an embedded computing board, sensors, actuators, and software processing pipeline, including preprocessing, TinyML model inference, and postprocessing
The basis of TinyML applications is that they are designed to be “always on”
in the sense that tiny devices continuously acquire and process data (through the
preprocessing, TinyML, and postprocessing modules); thus, decisions are made or
reactions are activated directly on the device.
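The sketch below mirrors this always-on pipeline in Python using the TensorFlow Lite interpreter; on an actual microcontroller the same loop would typically be written in C/C++ with TensorFlow Lite for Micro, and the sensor read, preprocessing, and actuation functions here are placeholders.

```python
# Sketch of the always-on TinyML loop: acquire -> preprocess -> infer -> postprocess -> act.
# Sensor, preprocessing, and actuator functions are placeholders; on a microcontroller
# this loop would normally be implemented in C/C++ with TensorFlow Lite for Micro.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def read_sensor() -> np.ndarray:               # placeholder for the real sensor driver
    return np.random.rand(1, 96, 96, 1)

def preprocess(x: np.ndarray) -> np.ndarray:   # e.g., scaling/quantizing the raw sample
    return (x * 255 - 128).astype(np.int8)

def actuate(scores: np.ndarray) -> None:       # placeholder postprocessing/actuation
    if scores.argmax() == 1:
        print("wake word detected -> trigger actuator")

while True:
    sample = preprocess(read_sensor())
    interpreter.set_tensor(inp["index"], sample)
    interpreter.invoke()
    actuate(interpreter.get_tensor(out["index"]))
```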
Examples of TinyML applications are wake-word detection, where a given com-
mand or word acquired by a microphone is recognized by the TinyML model; person
detection, where images acquired by a camera are processed by the TinyML mod-
ule to detect the presence of persons therein; and gesture recognition, where data
acquired by MEMS accelerometers are processed by the TinyML module to recog-
nize gestures made by people.
The development chain of TinyML applications, detailed in Fig. 2, comprises several steps that range from the hardware setup of tiny devices to the operational mode of TinyML models.

Fig. 2 Development chain of tiny machine learning applications

Specifically, the development chain of TinyML applications comprises the following steps:

1. Hardware setup: The embedded computing board, sensors, and actuators are
selected for the purposes of the TinyML application. Remarkably, the choice of
the embedded computing board imposes technological constraints on memory
and computation for the design and development of the TinyML model.
2. Software setup: The development toolchain for the firmware of the embedded computing board and the framework for the design of the TinyML application (e.g., TensorFlow Lite for Micro or quantized PyTorch) are selected and configured.
3. Data collection: This step is intended to create the training set for training the
TinyML model (if needed, supervised information is provided by the expert).
4. TinyML model training: The selected TinyML model (e.g., a linear classifier,
decision tree, or feedforward neural network) is trained on the acquired training
set.
5. Firmware development: The trained TinyML model is included in the firmware.
The firmware comprises, in addition to the inference of the TinyML model, the
reading of the sensors, preprocessing, postprocessing, activation of the actuators,
and (if required) communication with other computing units or devices (e.g., an
edge computing unit or a gateway).
6. Firmware compilation: The developed firmware, comprising all of the aforementioned software components, is compiled for the embedded computing board defined in Step 1 by means of the development toolchain selected in Step 2. The compiled firmware is then flashed to the embedded computing board (to accomplish this step, the hardware constraints on memory and computation posed by the embedded computing board must be satisfied).
7. TinyML operation: The tiny device, comprising the selected hardware and the
compiled and flashed firmware, operates in the environment for the purpose of the
given TinyML application. Information about effectiveness and efficiency can
be gathered by the tiny device to monitor the status of the TinyML application
over time (and, if needed, updates, patches, or bug-fixing can be introduced).

Notably, Steps 1–6 are conducted outside of the tiny devices (e.g., on the Cloud
or on personal computers), whereas only Step 7 is technically executed on it. This
approach guarantees sufficient computational load and memory for accomplishing
the goals with the highest computation and memory demands (i.e., training set cre-
Discovering Diverse Content Through
Random Scribd Documents
Posto se achasse costumado aos olhos admirativos, via agora em
toda a gente um aspecto parecido com a noticia de que elle ia casar.
As casuarinas de uma chacara, quietas antes que elle passasse por
ellas, disseram-lhe cousas mui particulares, que os levianos
attribuiriam á aragem que passava tambem, mas que os sapientes
reconheceriam ser nada menos que a linguagem nupcial das
casuarinas. Passaros saltavam de um lado para outro, pipilando um
madrigal. Um casal de borboletas,—que os japões têm por symbolo
da fidelidade, por observarem que, se pousam de flor em flor,
andam quasi sempre aos pares,—um casal dellas acompanhou por
muito tempo o passo do cavallo, indo pela cerca de uma chacara
que beirava o caminho, volteando aqui e alli, lepidas e amarellas. De
envolta com isto, um ar fresco, ceu azul, caras alegres de homens
montados em burros, pescoços estendidos pela janella fóra das
diligencias, para vel-o e ao seu garbo de noivo. Certo, era difficil crer
que todos aquelles gestos e attitudes da gente, dos bichos e das
arvores, exprimissem outro sentimento que não fosse a homenagem
nupcial da natureza.
As borboletas perderam-se em uma das moitas mais densas da
cêrca. Seguiu-se outra chacara, despida de arvores, portão aberto, e
ao fundo, fronteando com o portão, uma casa velha, que
encarquilhava os olhos sob a forma de cinco janellas de peitoril,
cançadas de perder moradores. Tambem ellas tinham visto bodas e
festins; o seculo ja as achou verdes de novidade e de esperança.
Não cuideis que esse aspecto contristou a alma do cavalleiro. Ao
contrario, elle possuia o dom particular de remoçar as ruinas e viver
da vida primitiva das cousas. Gostou até de ver a casa velhusca,
desbotada, em contraste com as borboletas tão vivas de ha pouco.
Parou o cavallo; evocou as mulheres que por alli entraram, outras
galas, outros rostos, outras maneiras. Porventura as proprias
sombras das pessoas felizes e extinctas vinham agora cumprimental-
o tambem, dizendo-lhe pela bocca invisivel todos os nomes sublimes
que pensavam delle. Chegou a ouvil-as e sorrir. Mas uma voz
estridula veiu mesclar-se ao concerto;—um papagaio, em gaiola
pendente da parede externa da casa: «Papagaio real, para Portugal;
quem passa? Currupá, papá. Grrrr... Grrrrr...» As sombras fugiram, o
cavallo foi andando. Carlos Maria aborrecia o papagaio, como
aborrecia o macaco, duas contrafacções da pessoa humana, dizia
elle.
—A felicidade que eu lhe der será assim tambem interrompida?
reflexionou andando.
Cambaxirras voaram de um para outro lado da rua, e pousaram
cantando a sua falla propria; foi uma reparação. Essa lingua sem
palavras era intelligivel, dizia uma porção de cousas claras e bellas.
Carlos Maria chegou a ver naquillo um symbolo de si mesmo.
Quando a mulher, aturdida dos papagaios do mundo, viesse caindo
de fastio, elle a faria erguer aos trillos da passarada divina, que
trazia em si, ideias de ouro, ditas por uma voz de ouro. Oh! como a
tornaria feliz! Já a antevia ajoelhada, com os braços postos nos seus
joelhos, a cabeça nas mãos e os olhos nelle, gratos, devotos,
amorosos, toda implorativa, toda nada.

CAPITULO CXXIII

Ora bem, aquelle quadro, na mesma hora em que apparecia aos


olhos da imaginação do noivo, reproduzia-se no espirito da noiva, tal
qual. Maria Benedicta, posta á janella, fitando as ondas que se
quebravam ao longe e na praia, via-se a si mesma, ajoelhada aos
pés do marido, quieta, contricta, como á mesa da communhão para
receber a hostia da felicidade. E dizia comsigo: «Oh! como elle me
fará feliz!» Phrase e pensamento eram outros, mas a attitude e a
hora eram as mesmas.

CAPITULO CXXIV
Casaram-se; tres mezes depois foram para a Europa. Ao despedir-se
delles, D. Fernanda estava tão alegre como se viesse recebel-os de
volta; não chorava. O prazer de os ver felizes era maior que o
desgosto da separação.
—Você vae contente? perguntou a Maria Benedicta, pela ultima vez,
junto á amurada do paquete.
—Oh! muito!
A alma de D. Fernanda debruçou-se-lhe dos olhos, fresca,
ingenua,cantando um trecho italiano,—porque a suberba guasca
preferia a musica italiana,—talvez esta aria da Lucia: O' bell'alma
innamorata. Ou este pedaço do Barbeiro:

Ecco ridente in cielo Già spunta la bella aurora.

CAPITULO CXXV

Sophia não foi a bordo, adoeceu e mandou o marido. Não vão crer
que era pezar nem dor; por occasião do casamento, houve-se com
grande discrição, cuidou do enxoval da noiva e despediu-se della
com muitos beijos chorados. Mas ir a bordo pareceu-lhe vergonha.
Adoeceu; e, para não desmentir do pretexto, deixou-se estar no
quarto. Pegou de um romance recente; fora-lhe dado pelo Rubião.
Outras cousas alli lhe lembravam o mesmo homem, teteias de toda
a sorte, sem contar joias guardadas. Finalmente, uma singular
palavra que lhe ouvira, na noite do casamento da prima, até essa
veiu alli para o inventario das recordações do nosso amigo.
—A senhora é já a rainha de todas, disse-lhe elle em voz baixa;
espere que ainda a farei imperatriz.
Sophia não pode entender esta phrase enigmatica. Quiz suppor que
era uma alliciação de grandeza para tornal-a sua amante; mas a
vaidade que essa ideia trazia fel-a excluir desde logo. Rubião, posto
não fosse agora o mesmo homem encolhido e timido de outros
tempos, não se mostrava tão cheio de si que lhe pudesse attribuir
tão alta presumpção. Mas que era então a phrase? Talvez um modo
figurado de dizer que a amaria ainda mais. Sophia acreditava
possivel tudo. Não lhe faltavam galanteios; chegou a ouvir aquella
declaração de Carlos Maria, provavelmente ouvira outras, a que deu
somente a attenção da vaidade. E todas passaram; Rubião é que
persistia. Tinha pausas, filhas de suspeitas; mas as suspeitas iam
como vinham.
«Il mérite d'être aimé», leu Sophia na pagina aberta do romance,
quando ia continuar a leitura; fechou o livro, fechou os olhos, e
perdeu-se em si mesma. A escrava que entrou d'ahi a pouco,
trazendo-lhe um caldo, suppoz que a senhora dormia e retirou-se pé
ante pé.

CAPITULO CXXVI

Entretanto, Rubião e Palha desciam do paquete para a lancha e


tornavam ao cáes Pharoux. Vinham cuidosos e calados. Palha foi o
primeiro que abriu a bocca:
—Ando ha tempos para dizer-lhe uma cousa importante, Rubião.

CAPITULO CXXVII

Rubião accordou. Era a primeira vez que ia a um paquete. Voltava


com a alma cheia dos rumores de bordo, a lufa-lufa das gentes que
entravam e sahiam, nacionaes, estrangeiros, estes de varia casta,
francezes, inglezes, allemães, argentinos, italianos, uma confusão de
linguas, um capharnaum de chapéos, de malas, cordoalha, sophás,
binoculos a tiracollo, homens que desciam ou subiam por escadas
para dentro do navio, mulheres chorosas, outras curiosas, outras
cheias de riso, e muitas que traziam de terra flores ou frutas,—tudo
aspectos novos. Ao longe, a barra por onde tinha de ir o paquete.
Para lá da barra, o mar immenso, o céo fechado e a solidão. Rubião
renovou os sonhos do mundo antigo, creou uma Atlantida, sem nada
saber da tradicção. Não tendo noções de geographia, formava uma
idéa confusa dos outros paizes, e a imaginação rodeava-os de um
nimbo mysterioso. Como não lhe custava viajar assim, navegou de
cór algum tempo, n'aquelle vapor alto e comprido, sem enjôo, sem
vagas, sem ventos, sem nuvens.

CAPITULO CXXVIII

—A mim? perguntou Rubião depois de alguns segundos.


—A você, confirmou o Palha. Devia tel-a dito ha mais tempo, mas
estas historias de casamento, de commissão das Alagoas, etc.,
atrapalharam-me, e não tive occasião; agora, porém, antes do
almoço... Você almoça commigo.
—Sim, mas que é?
—Uma cousa importante.
Dizendo isto, tirou um cigarro, abriu-o, desfiou o fumo com os
dedos, enrolou a palha outra vez, e riscou um phosphoro, mas o
vento apagou o phosphoro. Então pediu ao Rubião que lhe fizesse o
favor de segurar o chapéo, para poder accender outro. Rubião
obedeceu impaciente. Bem póde ser que o socio, esticando a
espera, quizesse justamente fazer-lhe crer que se tratava de um
terremoto; a realidade viria a ser um beneficio. Puxadas duas
fumaças:
—Estou com meu plano de liquidar o negocio; fallaram-me ahi para
uma casa bancaria, logar de director, e creio que acceito.
Rubião respirou.
—Pois sim; liquidar já?
—Não, lá para o fim do anno que vem.
—E é preciso liquidar?
—Cá para mim, é. Se a historia do banco não fosse segura, não me
animaria a perder o certo pelo duvidoso; mas é segurissima.
—Então no fim do anno que vem soltamos os laços que nos
prendem...
Palha tossiu.
—Não, antes, no fim deste anno.
Rubião não entendeu; mas o socio explicou-lhe que era util
desligarem já a sociedade, afim de que elle sósinho liquidasse a
casa. O banco podia organizar-se mais cedo ou mais tarde; e para
que sujeitar o outro ás exigencias da occasião? Demais, o Dr.
Camacho affirmava que, em breve, Rubião estaria na camara, e que
a queda do Itaborahy era certa.
—Seja o que fôr, concluiu; é sempre melhor desligarmos a sociedade
com tempo. Você não vive do commercio; entrou com o capital
necessario ao negocio,—como podia dal-o a outro ou guardal-o.
—Pois sim, não tenho duvida, concordou o Rubião.
E depois de alguns instantes:
—Mas diga-me uma cousa, essa proposta traz algum motivo
occulto? é rompimento de pessoas, de amizade... Seja franco, diga
tudo...
—Que caraminhola é essa? redarguiu o Palha. Separação de
amizade, de pessoas... Mas você está tonto. Isto é do balanço do
mar. Pois eu, que tenho trabalhado tanto por você, eu que o faço
amigo dos meus amigos, que o trato como um parente, como um
irmão, havia de brigar á toa? Aquelle mesmo casamento de Maria
Benedicta com o Carlos Maria devia ser com você, bem sabe, se não
fosse a sua recusa. A gente póde romper um laço sem romper os
outros. O contrario seria desproposito. Então todos os amigos de
sociedade ou de familia são socios de commercio? E os que não
forem commerciantes?
Rubião achou excellente a razão, e quiz abraçar o Palha. Este
apertou-lhe a mão satisfeitissimo; ia vêr-se livre de um socio, cuja
prodigalidade crescente podia trazer-lhe algum perigo. A casa estava
solida; era facil entregar ao Rubião a parte que lhe pertencesse,
menos as dividas pessoaes e anteriores. Restavam ainda algumas
daquellas que o Palha confessou á mulher, na noite de Santa
Thereza, cap. L. Pouco tinha pago; geralmente era o Rubião que
abanava as orelhas ao assumpto. Um dia, o Palha, querendo dar-lhe
á força algum dinheiro, repetiu o velho proverbio: «Paga o que
deves, vê o que te fica». Mas o Rubião, gracejando:
—Pois não pagues, e vê se te não fica ainda mais.
—É boa! redarguiu o Palha rindo e guardando o dinheiro no bolso.

CAPITULO CXXIX

Não havia banco, nem logar de director, nem liquidação; mas, como
justificaria o Palha a proposta de separação, dizendo a pura
verdade? Dahi a invenção, tanto mais prompta, quanto o Palha tinha
amor aos bancos, e morria por um. A carreira daquelle homem era
cada vez mais prospera e vistosa. O negocio corria-lhe largo; um dos
motivos da separação era justamente não ter que dividir com outro
os lucros futuros. Palha, além do mais, possuia acções de toda a
parte, apolices de ouro do emprestimo Itaborahy, e fizera uns dous
fornecimentos para a guerra, de sociedade com um poderoso, nos
quaes ganhou muito. Já trazia apalavrado um architecto para lhe
construir um palacete. Vagamente pensava em baronia.
CAPITULO CXXX

—Quem diria que a gente do Palha nos trataria deste modo? Já não
valemos nada. Excusa de os defender...
—Não defendo, estou explicando; ha de ter havido confusão.
—Fazer annos, casar a prima, e nem um triste convite ao major, ao
grande major, ao impagavel major, ao velho amigo major. Eram os
nomes que me davam; eu era impagavel, amigo velho, grande e
outros nomes. Agora, nada, nem um triste convite, um recado de
boca, ao menos, por um moleque: «Nhanhã faz annos, ou casa a
prima, diz que a casa esta ás suas ordens, e que vão com luxo.»
Não iriamos; luxo não é para nós. Mas era alguma cousa, era
recado, um moleque, ao impagavel major...
—Papae!
Rubião, vendo a intervenção de D. Tonica, animou-se a defender
longamente a familia Palha. Era em casa da major, não já na rua
Dous de Dezembro, mas na dos Barbonos, modesto sobradinho.
Rubião passava, elle estava á janella, e chamou-o. D. Tonica não
teve tempo de sair da sala, para dar, ao menos, uma vista d'olhos ao
espelho; mal pôde passar a mão pelo cabello, compôr o laço de fita
ao pescoço e descer o vestido para cobrir os sapatos, que não eram
novos.
—Digo-lhe que póde ter havido confusão, insistiu Rubião; tudo anda
por lá muito atrapalhado com esta commissão das Alagoas.
—Lembra bem, interrompeu o major Siqueira; porque não metteram
minha filha na commissão das Alagoas? Qual! Ha já muito que
reparo nisto; antigamente não se fazia festa sem nós. Nós éramos a
alma de tudo. De certo tempo para cá começou a mudança;
entraram a receber-nos friamente, e o marido, se pode esquivar-se,
não me falla na rua. Isto começou ha tempos; mas antes disso sem
nós é que não se fazia nada. Que está o senhor a fallar de confusão?
Pois se na vespera dos annos della, já desconfiando que não nos
convidariam, fui ter com elle ao armazem. Poucas palavras, por mais
que lhe fallasse em D. Sophia; disfarçava. Afinal disse-lhe assim:
«Hontem, lá em casa, eu e Tonica estivemos discutindo sobre a data
dos annos de D. Sophia; ella dizia que tinha passado, eu disse que
não, que era hoje ou amanhã.» Não me respondeu, fingiu que
estava absorvido em uma conta, chamou o guarda-livros, e pediu
explicações. Eu entendi o bicho, e repeti a historia; fez a mesma
cousa. Sahi. Ora o Palha, um pé-rapado! Já o envergonho.
Antigamente: major, um brinde. Eu fazia muitos brindes, tinha certo
desembaraço. Jogavamos o voltarete. Agora está nas grandezas;
anda com gente fina. Ah! vaidades deste mundo! Pois não vi outro
dia a mulher delle, n'um coupé, com outra? A Sophia de coupé!
Fingiu que me não via, mas arranjou os olhos de modo que
percebesse se eu a via, se a admirava. Vaidades desta vida! Quem
nunca comeu azeite, quando come se lambusa.
—Perdão, mas os trabalhos da commissão exigem certo apparato.
—Sim, acudiu Siqueira, é por isso que minha filha não entrou na
commissão; é para não estragar as carruagens...
—Demais, o coupé podia ser da outra senhora, que ia com ella.
O major deu dous passos, com as mãos atraz, e parou deante de
Rubião.
—Da outra... ou do padre Mendes. Como vae o padre? Boa vida,
naturalmente.
—Mas, papae, póde não haver nada, interrompeu D. Tonica. Ella
sempre me trata bem, e quando estive doente no mez passado,
mandou saber pelo moleque, duas vezes...
—Pelo moleque! bradou o pae. Pelo moleque! Grande favor!
«Moleque, vae alli á casa daquelle reformado e pergunta lhe se a
filha tem passado melhor; não vou, porque estou lustrando as
unhas!» Grande favor! Tu não lustras as unhas! tu trabalhas! tu és
digna filha minha! pobre, mas honesta!
Aqui o major chorou, mas suspendeu de repente as lagrimas. A filha,
commovida, sentiu-se tambem vexada. Certo, a casa dizia a pobreza
da familia, poucas cadeiras, uma meza redonda velha, um canapé
gasto; nas paredes duas lithographias encaixilhadas em pinho
pintado de preto, um era o retrato do major em 1857, a outra
representava o Veronez em Veneza, comprado na rua do Senhor dos
Passos. Mas o trabalho da filha transparecia em tudo; os moveis
reluziam de asseio, a meza tinha um panno de crivo, feito por ella, o
canapé uma almofada. E era falso que D. Tonica não lustrasse as
unhas; não teria o pó nem a camurça, mas acudia-lhes com um
retalho de panno todas as manhãs.

CAPITULO CXXXI

Rubião tratou-os com sympathia. Não continuou a defender a gente


Palha, para não desesperar o major, e fallou do exercito. Pouco
depois, despediu-se, promettendo, sem convite, que lá iria jantar
«um dia destes».
—Jantar de pobre, acudiu o major; se puder avisar, avise.
—Não quero banquetes; virei quando me der na cabeça.
Despediu-se. D. Tonica, depois de ir até o patamar, sem chegar á
frente por causa dos sapatos, foi á janella para vel-o sair.

CAPITULO CXXXII

Logo que Rubião dobrou a esquina da rua das Mangueiras, D. Tonica


entrou e foi ao pae, que se estendera no canapé, para reler o velho
Saint-Clair das ilhas ou os desterrados da ilha da Barra. Foi o
primeiro romance que conheceu; o exemplar tinha mais de vinte
annos; era toda a bibliotheca do pae e da filha. Siqueira abriu o
primeiro volume, e deitou os olhos ao começo do cap. II, que já
trazia de cór. Achava-lhe agora um sabor particular, por motivo dos
seus recentes desgostos: «Enchei bem os vossos copos, exclamou
Saint-Clair, e bebamos de uma vez; eis o brinde que vos proponho. Á
saude dos bons e valentes opprimidos, e ao castigo dos seus
oppressores. Todos acompanharam Saint-Clair, e foi de roda a
saude.»
—Sabe de uma cousa, papae? Papae compra amanhã latas de
conserva, petit-pois, peixe, etc. e ficam guardadas. No dia em que
elle apparecer para jantar, põe-se no fogo, é só aquecer, e daremos
um jantarzinho melhor.
—Mas eu só tenho o dinheiro do teu vestido.
—O meu vestido? Compra-se no mez que vem, ou no outro. Eu
espero.
—Mas não ficou ajustado?
—Desajusta-se; eu espero.
—E se não houver outro do mesmo preço?
—Hade haver; eu espero, papae.

CAPITULO CXXXIII

Ainda não disse,—porque os capitulos atropellam-se debaixo da


penna,—mas aqui está um para dizer que, por aquelle tempo, as
relações de Rubião tinham crescido em numero. Camacho puzera-o
em contacto com muitos homens politicos, a commissão das Alagoas
com varias senhoras, os bancos e companhias com pessoas do
commercio e da praça, os theatros com alguns frequentadores e a
rua do Ouvidor com toda a gente. Já então era um nome repetido.
Conhecia-se o homem. Quando appareciam as barbas e o par de
bigodes longos, uma sobre-casaca bem justa, um peito largo,
bengala de unicornio, e um andar firme e senhor, dizia-se logo que
era o Rubião,—um ricaço de Minas.
Tinham-lhe feito uma lenda. Diziam-n'o discipulo de um grande
philosopho, que lhe legára immensos bens,—um, tres, cinco mil
contos. Extranhavam alguns que elle não fallasse nunca de
philosophia, mas a lenda explicava esse silencio pelo proprio
methodo philosophico do mestre, que consistia em ensinar sómente
aos homens de boa vontade. Onde estavam esses discipulos? Iam á
casa delle, todos os dias,—alguns duas vezes, de manhã e de tarde;
e assim ficavam definidos os comensaes. Não seriam discipulos, mas
eram de boa vontade. Roiam fome, á espera, e ouviam calados e
risonhos os discursos do amphytrião. Entre os antigos e os novos,
houve tal ou qual rivalidade, que os primeiros accentuaram bem,
mostrando maior intimidade, dando ordens aos criados, pedindo
charutos, indo ao interior, assobiando, etc. Mas o costume os fez
supportaveis entre si, e todos acabaram na doce e commum
confissão das qualidades do dono da casa. Ao cabo de algum tempo,
tambem os novos lhe deviam dinheiro, ou em especie,—ou em
fiança no alfaiate, ou endosso de lettras, que elle pagava ás
escondidas, para não vexar os devedores.
Quincas Borba andava ao collo de todos. Davam estalinhos, para vel-
o saltar, alguns chegavam a beijar-lhe a testa; um delles, mais habil,
achou modo de o ter á mesa, ao jantar ou almoço, sobre as pernas,
para lhe dar migalhas de pão.
—Ah! isso não! protestou Rubião á primeira vez.
—Que tem? retorquiu o comensal. Não ha pessoas extranhas.
Rubião reflectiu um instante.
—Verdade é que está ahi dentro um grande homem, disse elle.
—O philosopho, o outro Quincas Borba, continuou o conviva,
circulando o olhar pelos novatos, para mostrar a intimidade das
relações entre elle e Rubião; mas, não logrou sosinho a vantagem,
por que os outros amigos da mesma éra, repetiram, em coro:

É
—É verdade, o philosopho.
E Rubião explicou aos novatos a allusão ao philosopho, e a razão do
nome do cão, que todos lhe attribuiam. Quincas Borba (o defuncto)
foi descripto e narrado como um dos maiores homens do tempo,—
superior aos seus patricios. Grande philosopho, grande alma, grande
amigo. E no fim, depois de algum silencio, batendo com os dedos na
borda da mesa, Rubião exclamou:
—Eu o faria ministro de Estado!
Um dos convivas exclamou, sem convicção, por simples officio:
—Oh! sem duvida!
Nenhum daquelles homens sabia, entretanto, o sacrificio que lhes
fazia o Rubião. Recusava jantares, passeios, interrompia
conversações apraziveis, só para correr a casa e jantar com elles.
Um dia achou meio de conciliar tudo. Não estando elle em casa ás
seis horas em ponto, os criados deviam pôr o jantar para os amigos.
Houve protestos; não, senhor, esperariam até sete ou oito horas. Um
jantar sem elle não tinha graça.
—Mas é que posso não vir, explicou Rubião.
Assim se cumpriu. Os convivas ajustaram bem os relogios pelos da
casa de Botafogo. Davam seis horas, todos á mesa. Nos dous
primeiros dias houve tal ou qual hesitação; mas os criados tinham
ordens severas. Ás vezes, Rubião chegava pouco depois. Eram então
risos, ditos, intrigas alegres. Um queria esperar, mas os outros... Os
outros desmentiam o o primeiro; ao contrario, foi este que os
arrastou, tal fome trazia,—a ponto que, se alguma cousa restava,
eram os pratos. E Rubião ria com todos.

CAPITULO CXXXIV
Fazer um capitulo só para dizer que, a principio, os convivas,
ausente o Rubião, fumavam os proprios charutos, depois do jantar,—
parecerá frivolo aos frivolos; mas os considerados dirão que algum
interesse haverá nesta circumstancia em apparencia minima.
De facto, uma noite, um dos mais antigos lembrou-se de ir ao
gabinete de Rubião; lá fôra algumas vezes, alli se guardavam as
caixas de charutos, não quatro nem cinco, mas vinte e trinta de
varias fabricas e tamanhos, muitas abertas. Um criado (o hespanhol)
accendeu o gaz. Os outros convivas seguiram o primeiro, escolheram
charutos e os que ainda não conheciam o gabinete admiraram os
moveis bem feitos e bem dispostos. A secretária captou as
admirações geraes; era de ebano, um primor de talha, obra severa e
forte. Uma novidade os esperava: dous bustos de marmore, postos
sobre ella, os dous Napoleões, o primeiro e o terceiro.
—Quando veiu isto?
—Hoje ao meio dia, respondeu o criado.
Dous bustos magnificos. Ao pé do olhar aquilino do tio, perdia-se no
vago o olhar scismatico do sobrinho. Contou o criado que o amo,
apenas recebidos e collocados os bustos, deixara-se estar grande
espaço em admiração, tão deslembrado do mais, que elle pode
miral-os tambem, sem admiral-os.—No me dicen nada estos dos
pícaros, concluiu o criado fazendo um gesto largo e nobre.

CAPITULO CXXXV

Rubião protegia largamente as lettras. Livros que lhe eram


dedicados, entravam para o prelo com a garantia de duzentos e
trezentos exemplares. Tinha diplomas e diplomas de sociedades
litterarias, coreographicas, pias, e era juntamente socio de uma
Congregação Catholica e de um Gremio Protestante, não se tendo
lembrado de um quando lhe fallaram do outro; o que fazia era pagar
regularmente as mensalidades de ambos. Assignava jornaes sem os
ler. Um dia, ao pagar o semestre de um, que lhe haviam mandado, é
que soube, pelo cobrador, que era do partido do governo; mandou o
cobrador ao diabo.

CAPITULO CXXXVI

O cobrador não foi ao diabo; recebeu o preço do semestre, e, como


possuia a observação natural dos cobradores, resmungou na rua:
—Ora aqui está um homem que detesta a folha e paga. Quantos a
adoram e não pagam!

CAPITULO CXXXVII

Mas—ó lance da fortuna! ó equidade da natureza!—os desperdicios


do nosso amigo, se não tinham remedio, tinham compensação. Já o
tempo não passava por elle como por um vadio sem ideias. Rubião,
á falta de ideias, tinha agora imaginação. Outr'ora vivia mais dos
outros que de si, não achava equilibrio interior, e o ocio esticava as
horas, que não acabavam mais. Tudo ia mudando; agora a
imaginação, que, a relampagos, lhe apparecia ultimamente, tendia a
pousar um pouco. Sentado na loja do Bernardo, gastava toda uma
manhã, sem que o tempo lhe trouxesse fadiga, nem a estreiteza da
rua do Ouvidor lhe tapasse o espaço. Repetiam-se as visões
deliciosas, como a das bodas (Cap. LXXXI) em termos a que a
grandeza não tirava a graça. Houve quem o visse, mais de uma vez,
saltar da cadeira e ir até á porta ver bem pelas costas alguma
pessoa que passava. Conhecel-a-hia? Ou seria alguem que,
casualmente, tinha as feições da creatura imaginaria que elle
estivera mirando? São perguntas de mais para um só capitulo; basta
dizer que uma dessas vezes nem passou ninguem, elle proprio
reconheceu a illusão, voltou para dentro, comprou uma teteia de
bronze para dar á filha do Camacho, que fazia annos, e ia casar em
breve, e saiu.

CAPITULO CXXXVIII

E Sophia? interroga impaciente a leitora, tal qual Orgon: Et Tartuffe?


Ai, amiga minha, a resposta é naturalmente a mesma,—tambem ella
comia bem, dormia largo e fofo,—cousas que, aliás, não impedem
que uma pessoa ame, quando quer amar. Se esta ultima reflexão é o
motivo secreto da vossa pergunta, deixai que vos diga que sois
muito indiscreta, e que eu não me quero senão com dissimulados.
Repito, comia bem, dormia largo e fofo. Chegára ao fim da
commissão das Alagoas, com elogios da imprensa; a Atalaia
chamou-lhe «o anjo da consolação». E não se pense que este nome
a alegrou, posto que a lisongeasse; ao contrario, resumindo em
Sophia toda a acção da caridade, podia mortificar as novas amigas,
e fazer-lhe perder em um dia o trabalho de longos mezes. Assim se
explica o artigo que a mesma folha trouxe no numero seguinte,
nomeando, particularisando e glorificando as outras commissarias
—-«estrellas de primeira grandeza».
Nem todas as relações subsistiram, mas a maior parte dellas
estavam atadas, e não faltam á nossa dona o talento de as tornar
definitivas. O marido é que peccava por turbulento, excessivo,
derramado, dando bem a ver que o cumulavam de favores, que
recebia finezas inesperadas e quasi immerecidas. Sophia, para
emendal-o, vexava-o com censuras e conselhos, rindo:
—«Você esteve hoje insupportavel; parecia um criado».
—«Christiano, fique mais senhor de si, quando tivermos gente de
fóra, não se ponha com os olhos fóra da cara, saltando de um lado
para outro, assim com ar de criança que recebe doce...»
Elle negava, explicava ou justificava-se; afinal, concluia que sim, que
era preciso não parecer estar abaixo dos obsequios; cortezia,
affabilidade, mais nada...
Justo, mas não vás cahir no extremo opposto, acudiu Sophia; não
vás ficar casmurro...
Palha era então as duas cousas; casmurro, a principio, frio, quasi
desdenhoso, fallando pouco, apenas respondendo. Mas, ou a
reflexão, ou o impulso inconsciente, restituia ao nosso homem a
animação habitual, e com ella, segundo o momento, a demasia e o
estrepito. Sophia é que, em verdade, corrigia tudo. Observava,
imitava. Necessidade e vocação fizeram-lhe adquirir, aos poucos, o
que não trouxera do nascimento nem da fortuna. Ao demais, estava
naquella edade média em que as mulheres inspiram egual confiança
ás sinhásinhas de vinte e ás sinhás de quarenta. Algumas morriam
por ella; muitas a cumulavam de louvores.
Foi assim que a nossa amiga, pouco a pouco, espanou a
atmosphera. Cortou as relações antigas, familiares, algumas tão
intimas que difficilmente se poderiam dissolver; mas a arte de
receber sem calor, ouvir sem interese e despedir-se sem saudade,
não era das suas menores prendas; e uma por uma, se foram indo
as pobres creaturas modestas, sem maneiras, nem vestidos,
amizades de pequena monta, de pagodes caseiros, de habitos
singelos e sem elevação. Com os homens fazia exactamente o que o
major contara, quando elles a viam passar de carruagem,—que era
sua,—entre parenthesis. A differença é que já nem os espreitava
para saber se a viam. Acabara a lua de mel da grandeza; agora
torcia os olhos duramente para outro lado, conjurando, de um gesto
definitivo, o perigo de alguma hesitação. Punha assim os velhos
amigos na obrigação de lhe não tirarem o chapéo. Como eram
poucos, foi breve a empreza.
CAPITULO CXXXIX

Rubião ainda quiz valer ao major, mas o ar de fastio com que Sophia
o interrompeu foi tal, que o nosso amigo preferiu perguntar-lhe se,
não chovendo na seguinte manhã, iriam sempre passear á Tijuca.
—Já fallei a Christiano; disse-me que tem um negocio, que fique
para domingo que vem.
Rubião, depois de um instante:
Vamos nós dous. Sahimos cedo, passeamos, almoçamos lá; ás tres
ou quatro horas estamos de volta...
Sophia olhou para elle, com tamanha vontade de acceitar o convite,
que Rubião não esperou resposta verbal.
—Está assentado, vamos, disse elle.
—Não.
—Como não?
E repetiu a pergunta, porque Sophia não lhe quiz explicar a
negativa, aliás, tão obvia. Obrigada a fazel-o, ponderou que o
marido ficaria com inveja, e era capaz de adiar o negocio só para ir
tambem. Não queria atrapalhar os negocios delle, e podiam esperar
oito dias. O olhar de Sophia acompanhava essa explicação, como um
clarim acompanharia um padre-nosso. Vontade tinha, oh! se tinha
vontade de ir na manhã seguinte, com Rubião, estrada acima, bem
posta no cavallo, não scismando á toa, nem poetica, mas valente,
fogo na cara, toda deste mundo, galopando, trotando, parando. Lá
no alto, desmontaria algum tempo; tudo só, a cidade ao longe e o
ceu por cima. Encostada ao cavallo, penteando-lhe as crinas com os
dedos, ouviria Rubião louvar-lhe a affouteza e o garbo... Chegou a
sentir um beijo na nuca...

CAPITULO CXL
Pois que se trata de cavallos, não fica mal dizer que a imaginação de
Sophia era agora um corsel brioso e petulante, capaz de galgar
morros e desbaratar mattos. Outra seria a comparação, se a
occasião fosse differente; mas corsel é o que vae melhor. Traz a
ideia do impeto, do sangue, da disparada, ao mesmo tempo que a
da serenidade com que torna ao caminho recto, e por fim á
cavallariça.

CAPITULO CXLI

—Está dito, vamos amanhã, repetiu Rubião, que espreitava o rosto


acceso de Sophia.
Mas o corsel viera fatigado da carreira, e deixou-se estar somnolento
na cavallariça. Sophia era já outra; passara a vertigem da empreza,
o ardor sonhado, o gosto de subir com elle a estrada da Tijuca.
Dizendo-lhe Rubião que fallaria ao marido para que a deixasse ir ao
passeio, redarguiu sem alma:
—Está tonto! Fica para o domingo que vem!
E fixou os olhos no trabalho de linha que fazia,—frioleira é o nome,
—emquanto Rubião voltava os seus para um trechosinho de jardim
mofino, ao pé da saleta de trabalho onde estavam. Sophia, sentada
no angulo da janella, ia meneando os dedos. Rubião viu em duas
rosas vulgares uma festa imperial, e esqueceu a sala, a mulher e a
si. Não se póde dizer, ao certo, que tempo estiveram assim calados,
alheios e remotos um do outro. Foi uma criada que os accordou,
trazendo-lhes café. Bebido o café, Rubião concertou as barbas, tirou
o relogio e despediu-se. Sophia, que espreitava a sabida, ficou
satisfeita, mas encobriu o gosto com o espanto.
—Já!
—Preciso de fallar a um sujeito antes das quatro horas, explicou
Rubião. Estamos entendidos; passeio de amanhã gorado. Vou
mandar desavisar os cavallos. Mas será certo no domingo que vem?
—Certo, certo, não posso affirmar; mas resolvendo-se em tempo o
Christiano, creio que sim. Sabe que meu marido é o homem dos
impedimentos.
Sophia acompanhou-o até á porta, estendeu-lhe a mão indifferente,
respondeu sorrindo alguma cousa chocha, tornou á salinha em que
estivera,—ao mesmo angulo,—da mesma janella. Não continuou
logo o trabalho, poz uma perna sobre outra, fazendo descer, por
habito, a saia do vestido, e lançou uma olhada ao jardim, onde as
duas rosas tinham dado ao nosso amigo uma visão imperial. Sophia
não viu mais que duas flores mudas. Fitou-as, não obstante, algum
tempo; em seguida, pegou da frioleira, trabalhou um pouco, deteve-
se outro pouco, deixando as mãos no regaço; e voltou á obra, outra
vez, para tornar a deixal-a. De repente, levantou-se e atirou as
linhas e a navette á cestinha de junco, onde guardava os seus
pretextos de trabalho. A cesta era ainda uma lembrança de Rubião!
—Que homem aborrecido!
Dalli foi encostar-se á janella, que dava para o jardim mofino, onde
iam murchando as duas rosas vulgares. Rosas, quando recentes,
importam-se pouco ou nada com as coleras dos outros; mas, se
definham, tudo lhes serve para vexar a alma humana. Quero crer
que este costume nasce da brevidade da vida. «Para as rosas,
escreveu Fontenelle, o jardineiro é eterno.» E que melhor maneira
de ferir o eterno que mofar das suas iras? Eu passo, tu ficas; mas eu
não fiz mais que florir e aromar, servi a donas e a donzellas, fui
lettra de amor, ornei a botoeira dos homens, ou expiro no proprio
arbusto, e todas as mãos e todos os olhos me trataram e me viram
com admiração e affecto. Tu não, ó eterno; tu zangas-te, tu
padeces, tu choras, tu affliges-te! a tua eternidade não vale um só
dos meus minutos.
Assim, quando Sophia chegou á janella que dava para o jardim,
ambas as rosas riram-se a petalas despregadas. Uma dellas disse
que era bem feito! bem feito! bem feito!
—Tens razão em te zangares, formosa creatura, acrescentou, mas
hade ser comtigo, não com elle. Elle que vale? Um triste homem
sem encantos, póde ser que bom amigo, e talvez generoso, mas
repugnante, não? E tu, requestada de outros, que demonio te leva a
dar ouvidos a esse intruso da vida? Humilha-te, ó suberba creatura,
porque és tu mesma a causa do teu mal. Tu juras esquecel-o, e não
o esqueces. E é preciso esquecel-o? Não te basta fital-o, escutal-o,
para desprezal-o? Esse homem não diz cousa nenhuma, ó singular
creatura, e tu...
—Não é tanto assim, interrompeu a outra rosa, com a voz ironica e
descançada; elle diz alguma cousa, e dil-a desde muito, sem
desapprendel-a, nem trocal-a; é firme, esquece a dor, crê na
esperança. Toda a sua vida amorosa é como o passeio á Tijuca, de
que vocês fallavam ha pouco: «Fica para o domingo que vem!» Eia,
piedade ao menos; sê piedosa, ó bonissima Sophia! Se hasde amar
a alguem, fóra do matrimonio, ama-o a elle, que te ama e é
discreto. Anda, arrepende-te do gesto de ha pouco. Que mal te fez
elle, e que culpa lhe cabe se és bonita? E quando haja culpa, a cesta
é que a não tem, só porque elle a comprou, e menos ainda as linhas
e a navette que tu mesma mandaste comprar pela criada. Tu és má,
Sophia, és injusta...

CAPITULO CXLII

Sophia deixou-se estar ouvindo, ouvindo... Interrogou outras plantas, e não lhe disseram cousa differente. Ha desses acertos
maravilhosos. Quem conhece o solo e o sub-solo da vida, sabe muito
bem que um trecho de muro, um banco, um tapete, um guarda-
chuva, são ricos de ideias ou de sentimentos, quando nós tambem o
somos, e que as reflexões de parceria entre os homens e as cousas
compõem um dos mais interessantes phenomenos da terra. A
expressão: «Conversar com os seus botões», parecendo simples
metaphora, é phrase de sentido real e directo. Os botões operam
synchronicamente comnosco; formam uma especie de senado,
commodo e barato, que vota sempre as nossas moções.

CAPITULO CXLIII

Fez-se o passeio á Tijuca, sem outro incidente mais que uma queda
do cavallo, ao descerem. Não foi Rubião que cahiu, nem o Palha,
mas a senhora deste, que vinha pensando em não sei quê, e
chicoteou o animal com raiva; elle espantou-se e deitou-a em terra.
Sophia cahiu com graça. Estava singularmente esbelta, vestida de
amazona, corpinho tentador de justeza. Othello exclamaria, se a
visse: «Oh! minha bella guerreira!» Rubião limitara-se a isto, ao
começar o passeio: «A senhora é um anjo!».

CAPITULO CXLIV

—Fiquei com o joelho dorido, disse ella entrando em casa e coxeando.
—Deixa ver.
No quarto de vestir, Sophia levantou o pé sobre um banquinho e
mostrou ao marido o joelho pisado; inchára um pouco, muito pouco,
mas tocando-lhe, fazia-a gemer. Palha, não querendo machucal-a,
chegou-lhe a pontinha dos beiços apenas.
—Fiquei descomposta quando cahi?
—Não. Pois com um vestido tão comprido... Mal se pôde ver o bico
do pé. Não houve nada, acredita.
—Jura que não?
—Que desconfiada que você é, Sophia! Juro por tudo o que ha mais
sagrado, pela luz que me allumia, por Deus Nosso Senhor. Estás
satisfeita?
Sophia ia cobrindo o joelho.
—Deixa ver outra vez. Creio que não será nada de maior; bota um
pouco de qualquer cousa. Manda perguntar á botica.
—Está bom, deixa-me ir despir, disse ella forcejando por descer o
vestido.
Mas o Palha baixara os olhos do joelho até ao resto da perna, onde
pegava com o cano da bota. De feito, era um bello trecho da
natureza. A meia de seda dava ideia clara da perfeição do contorno.
Palha, por graça, ia perguntando á mulher se se machucára aqui, e
mais aqui, e mais aqui, indicando os logares com a mão que ia
descendo. Se apparecesse um pedacinho desta obra-prima, o céo e
as arvores ficariam assombrados, concluiu elle em quanto a mulher
descia o vestido e tirava o pé do banco.
—Póde ser, mas não havia só o céo e as arvores, disse ella; havia
tambem os olhos do Rubião.
—Ora, o Rubião! É verdade; elle nunca mais teve aquellas ideias de
Santa Thereza?
—Nunca; mas, emfim, não me agradaria... Jura de verdade,
Christiano?
—O que você quer é que eu vá subindo de sagrado em sagrado, até
á cousa mais sagrada. Jurei por Deus; não bastou. Juro por você;
está satisfeita?
Pieguices de lascivo. Sahiu finalmente do quarto da mulher e foi
para o seu. Aquelle pudor medroso e incredulo de Sophia fazia-lhe
bem. Mostrava que ella era sua, totalmente sua; mas, por isso
mesmo que elle a possuia, considerava que era de grande senhor
não se affligir com a vista casual e instantanea de um pedaço
occulto do seu reino. E lastimava que o casual tivesse parado na
ponta da bota. Era apenas a fronteira; as primeiras villas do
territorio, antes da cidade machucada pela queda, dariam ideia de
uma civilisação sublime e perfeita. E ensaboando-se, esfregando a
cara, o collo e a cabeça na vasta bacia de prata, escovando-se,
enxugando-se, aromando-se, Palha imaginava o pasmo e a inveja da
unica testemunha do desastre, se este fosse menos incompleto.

CAPITULO CXLV

Foi por esse tempo que Rubião poz em espanto a todos os seus
amigos. Na terça-feira seguinte ao domingo do passeio (era então
Janeiro de 1870) avisou a um barbeiro e cabelleireiro da rua do
Ouvidor que o mandasse barbear a casa, no outro dia, ás nove horas
da manhã. Lá foi um official francez,—chamado Lucien, creio eu,—
que entrou para o gabinete de Rubião, segundo as ordens dadas ao
criado.
—Uhm!... rosnou Quincas Borba, de cima dos joelhos do Rubião.
Lucien parou á porta do gabinete, e comprimentou o dono da casa;
este, porem, não viu a cortezia, como não ouvira o signal do
Quincas Borba. Estava em uma longa cadeira de extensão, ermo do
espirito, que rompera o tecto e se perdera no ar. A quantas leguas
iria? Nem condor nem aguia o poderia dizer. Em marcha para a lua,
—não via cá em baixo mais que as felicidades perennes, chovidas
sobre elle, desde o berço, onde o embalaram fadas, até á praia de
Botafogo, aonde ellas o trouxeram, por um chão de rosas e bogaris.
Nenhum revez, nenhum mallogro, nenhuma pobreza;—vida placida,
cosida de goso, com rendas de superfluo. Em marcha para a lua!
Lucien relanceou os olhos pelo gabinete, onde fazia principal figura a
secretária, e sobre ella os dous bustos de Napoleão e Luiz Napoleão.
Relativamente a este ultimo, havia ainda, pendentes da parede, uma
gravura ou lithographia representando a Batalha de Solferino, e um
retrato da imperatriz Eugenia.
Rubião tinha nos pés um par de chinellas de damasco, bordadas a
ouro; na cabeça, um gorro com borla de seda preta. Na bocca, um
riso azul claro.

CAPITULO CXLVI

—Monsieur...
—Uhm! repetiu Quincas Borba, de pé nos joelhos do senhor.
Rubião voltou a si e deu com o barbeiro. Conhecia-o por tel-o visto
ultimamente na loja; ergueu-se da cadeira, Quincas Borba latia,
como a defendel-o contra o intruso.
—Socega! cala a boca! disse-lhe Rubião; e o cachorro foi, de orelha
baixa, metter-se por traz da cesta de papeis. Durante esse tempo,
Lucien desembrulhava os seus apparelhos.
—Monsieur veut se faire raser, n'est-ce pas? Pourquoi donc a-t-il
laissé croître cette belle barbe? Apparemment que c'est un voeu
d'amour? J'en connais qui ont fait de pareils sacrifices; j'ai même été
confident de quelques personnes aimables...
—Justamente! interrompeu Rubião.
Não entendera nada; posto soubesse algum francez, mal o
comprehendia lido—como sabemos,—e não o entendia fallado. Mas,
phenomeno curioso, não respondeu por impostura; ouviu as
palavras, como se fossem comprimento ou acclamação; e, ainda
mais curioso phenomeno, respondendo-lhe em portuguez, cuidava
fallar francez.
—Justamente! repetiu. Quero restituir a cara ao typo anterior; é
aquelle.
E, como apontasse para o busto de Napoleão III, respondeu-lhe o
barbeiro pela nossa lingua:
—Ah! o imperador! Bonito busto, em verdade. Obra fina. O senhor
comprou isto aqui ou mandou vir de Paris? São magnificos. Lá está o
primeiro, o grande; este era um genio. Se não fosse a traição, oh! os
traidores, vê o senhor? os traidores são peiores que as bombas de
Orsini.
—Orsini! um coitado!
—Pagou caro.
—Pagou o que devia. Mas não ha bombas nem Orsini contra o
destino de um grande homem, continuou Rubião. Quando a fortuna
de uma nação põe na cabeça de um grande homem a coroa
imperial, não ha maldades que valham... Orsini! um bobo!
Em poucos minutos, começou o barbeiro a deitar abaixo as barbas
do Rubião, para lhe deixar somente a pera e os bigodes de Napoleão
III; encarecia-lhe o trabalho; affirmava que era difficil compor
exactamente uma cousa como a outra. E á medida que lhe cortava
as barbas, ia-as gabando.—Que lindos fios! Era um grande e honesto
sacrificio que fazia, em verdade...
—Seu barbeiro, você é pernostico, interrompeu Rubião. Já lhe disse
o que quero; ponha-me a cara como estava. Alli tem o busto para
guial-o.
—Sim, senhor, cumprirei as suas ordens, e verá que semelhança vae
sair.
E zás, zás, deu os ultimos golpes ás barbas de Rubião, e começou a
rapar-lhe as faces e os queixos. Durou longo tempo a operação; o
barbeiro ia tranquillamente rapando, comparando, dividindo os olhos
entre o busto e o homem. Ás vezes, para melhor cotejal-os, recuava
dous passos, olhava-os alternadamente, inclinava-se, pedia ao
homem que se virasse de um lado ou de outro, e ia ver o lado
correspondente do busto.
—Vae bem? perguntava Rubião.
Lucien pedia-lhe com um gesto que se calasse, e proseguia.
Recortou a pera, deixou os bigodes, e escanhoou á vontade,
lentamente, amigamente, aborrecidamente, adivinhando com os
dedos alguma pontinha imperceptivel de cabello no queixo ou na
face, para não o consentir, nem por suspeita. Ás vezes Rubião,
cançado de estar a olhar para o tecto, emquanto o outro lhe
aperfeiçoava os queixos, pedia para descançar. Descançando,
apalpava o rosto e sentia pelo tacto a mudança.
—Os bigodes é que não estão muito compridos, observava.
—Falta arranjar-lhe as guias; aqui trago os ferrinhos para encurval-
os bem sobre o labio, e depois faremos as guias. Ah! eu prefiro
compor dez trabalhos originaes a uma só copia.
Volveram ainda dez minutos, antes que os bigodes e a pera fossem
bem retocados. Emfim, prompto. Rubião deu um salto, correu ao
espelho, no quarto, que ficava ao pé; era o outro, eram ambos, era
elle mesmo, em summa.
—Justamente! exclamou tornando ao gabinete, onde o barbeiro,
tendo arrecadado os apparelhos, fazia festas ao Quincas Borba.
E indo á secretária, abriu uma gaveta, tirou uma nota de vinte mil
réis, e deu-lh'a.
—Não tenho troco, disse o outro.
—Não precisa dar troco, acudiu Rubião com um gesto soberano; tire
o que houver de pagar á casa, e o resto é seu.

CAPITULO CXLVII

Ficando só, Rubião atirou-se a uma poltrona, e viu passar muitas cousas sumptuosas. Estava em Biarritz ou Compiègne, não se sabe
bem; Compiègne, parece. Governou um grande Estado, ouviu
ministros e embaixadores, dansou, jantou,—e assim outras acções
narradas em correspondencias de jornaes, que elle lera e lhe ficaram
de memoria. Nem os ganidos de Quincas Borba logravam espertal-o.
Estava longe e alto. Compiègne era no caminho da lua. Em marcha
para a lua!

CAPITULO CXLVIII

Quando desceu da lua, ouviu os ganidos do cachorro e sentiu frio nos queixos. Correu ao espelho e verificou que a differença entre a cara barbada e a cara lisa era grande mas que, assim lisa, não lhe ficava mal. Os comensaes chegaram á mesma conclusão.
—Está perfeitamente bem! Ha muito que devia ter feito isso. Não é
que as barbas grandes lhe tirassem a nobreza do rosto; mas, assim
como está agora, tem o que tinha, e mais um tom moderno...
—Moderno, repetiu o amphytrião.
Fóra, egual espanto. Todos achavam sinceramente que este outro
aspecto lhe ia melhor que o anterior. Uma só pessoa, o Dr. Camacho,
posto julgasse que os bigodes e a pera ficavam muito bem no
amigo, ponderou que era de bom aviso não alterar o rosto,
verdadeiro espelho da alma, cuja firmeza e constancia devia
reproduzir.
—Não é por lhe fallar de mim, concluiu; mas, nunca me hade ver a
cara de outro modo. É uma necessidade moral da minha pessoa.
Minha vida, sacrificada aos principios,—porque eu nunca tentei
conciliar principios, mas homens,—minha vida, digo, é uma imagem
fiel da minha cara, e vice-versa.
Rubião ouvia com seriedade, e acenava de cabeça que sim, que
devia ser assim por força. Sentia-se então imperador dos francezes,
incognito, de passeio; descendo á rua, voltou ao que era. Dante, que
viu tantas cousas extraordinarias, affirma ter assistido no inferno ao
castigo de um espirito florentino, que uma serpente de seis pés
abraçou de tal modo, e tão confundidos ficaram, que afinal já se não
podia distinguir bem se era um ente unico, se dous. Rubião era
ainda dous. Não se misturavam nelle a propria pessoa com o
imperador dos francezes. Revesavam-se; chegavam a esquecer-se
um do outro. Quando era só Rubião, não passava do homem do
costume. Quando subia a imperador, era só imperador. Equilibravam-
se, um sem outro, ambos integraes.

CAPITULO CXLIX

—Que mudança é essa? perguntou Sophia, quando elle lhe appareceu no fim da semana.
—Vim saber do seu joelho; está bom?
—Obrigada.
Eram duas horas da tarde. Sophia acabava de vestir-se para sair,
quando a criada lhe fora dizer que estava alli Rubião,—tão mudado
de cara que parecia outro. Desceu a vel-o curiosa; achara-o na sala,
de pé, lendo os cartões de visita.
—Mas que mudança é essa? repetiu ella. Rubião, sem nenhuma
ideia imperial, respondeu que suppunha ficarem-lhe melhor os
bigodes e a pera.
—Ou estou mais feio? concluiu.
—Está melhor, muito melhor.
E Sophia disse comsigo que talvez fosse ella a causa da mudança.
Sentou-se no sophá, e começou a enfiar os dedos nas luvas.
—Vae sahir?
—Vou, mas o carro ainda não veiu.
Cahiu-lhe uma das luvas. Rubião inclinou-se para apanhal-a, ella fez
a mesma cousa, ambos pegaram na luva, e teimando em levantal-a,
succedeu que as caras encontraram-se no ar, e bateram uma na
outra. Pangloss, se tem assistido ao episodio, emendaria a sua
