Advances in Intelligent Systems and Computing 1228

Kohei Arai
Supriya Kapoor
Rahul Bhatia Editors

Intelligent
Computing
Proceedings of the 2020 Computing
Conference, Volume 1
Advances in Intelligent Systems and Computing

Volume 1228

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing,
Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering,
University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University,
Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas
at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao
Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology,
University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute
of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro,
Rio de Janeiro, Brazil
Ngoc Thanh Nguyen , Faculty of Computer Science and Management,
Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications
on theory, applications, and design methods of Intelligent Systems and Intelligent
Computing. Virtually all disciplines such as engineering, natural sciences, computer
and information science, ICT, economics, business, e-commerce, environment,
healthcare, life science are covered. The list of topics spans all the areas of modern
intelligent systems and computing such as: computational intelligence, soft comput-
ing including neural networks, fuzzy systems, evolutionary computing and the fusion
of these paradigms, social intelligence, ambient intelligence, computational neuro-
science, artificial life, virtual worlds and society, cognitive science and systems,
Perception and Vision, DNA and immune based systems, self-organizing and
adaptive systems, e-Learning and teaching, human-centered and human-centric
computing, recommender systems, intelligent control, robotics and mechatronics
including human-machine teaming, knowledge-based paradigms, learning para-
digms, machine ethics, intelligent data analysis, knowledge management, intelligent
agents, intelligent decision making and support, intelligent network security, trust
management, interactive entertainment, Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are
primarily proceedings of important conferences, symposia and congresses. They
cover significant recent developments in the field, both of a foundational and
applicable character. An important characteristic feature of the series is the short
publication time and world-wide distribution. This permits a rapid and broad
dissemination of research results.
** Indexing: The books of this series are submitted to ISI Proceedings,
EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **

More information about this series at http://www.springer.com/series/11156


Kohei Arai · Supriya Kapoor · Rahul Bhatia
Editors

Intelligent Computing
Proceedings of the 2020 Computing
Conference, Volume 1

Editors

Kohei Arai
Saga University
Saga, Japan

Supriya Kapoor
The Science and Information (SAI) Organization
Bradford, West Yorkshire, UK

Rahul Bhatia
The Science and Information (SAI) Organization
Bradford, West Yorkshire, UK

ISSN 2194-5357 ISSN 2194-5365 (electronic)


Advances in Intelligent Systems and Computing
ISBN 978-3-030-52248-3 ISBN 978-3-030-52249-0 (eBook)
https://doi.org/10.1007/978-3-030-52249-0
© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Editor’s Preface

On behalf of the Committee, we welcome you to the Computing Conference 2020.


The aim of this conference is to give a platform to researchers with fundamental
contributions and to be a premier venue for industry practitioners to share and
report on up-to-the-minute innovations and developments, to summarize the state
of the art and to exchange ideas and advances in all aspects of computer sciences
and its applications.
For this edition of the conference, we received 514 submissions from 50+
countries around the world. These submissions underwent a double-blind peer
review process. Of those 514 submissions, 160 submissions (including 15 posters)
have been selected for inclusion in these proceedings. The published proceedings
have been divided into three volumes covering a wide range of conference tracks,
such as technology trends, computing, intelligent systems, machine vision, security,
communication, electronics and e-learning, to name a few. In addition to the
contributed papers, the conference program included inspiring keynote talks, streamed
live during the conference, which were anticipated to pique the interest of the entire
computing audience with their thought-provoking claims. The authors also presented
their research papers very professionally, and the presentations were viewed by a large
international audience online. All this digital content prompted significant
contemplation and discussion amongst the participants.
Deep appreciation goes to the keynote speakers for sharing their knowledge and
expertise with us and to all the authors who have spent the time and effort to
contribute significantly to this conference. We are also indebted to the Organizing
Committee for their great efforts in ensuring the successful implementation of the
conference. In particular, we would like to thank the Technical Committee for their
constructive and enlightening reviews on the manuscripts in the limited timescale.


We hope that all the participants and the interested readers benefit scientifically
from this book and find it stimulating in the process. We are pleased to present the
proceedings of this conference as its published record.
We hope to see you in 2021, at our next Computing Conference, with the same
amplitude, focus and determination.

Kohei Arai
Contents

Demonstrating Advanced Machine Learning and Neuromorphic
Computing Using IBM’s NS16e . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Mark Barnell, Courtney Raymond, Matthew Wilson, Darrek Isereau,
Eric Cote, Dan Brown, and Chris Cicotta
Energy Efficient Resource Utilization: Architecture
for Enterprise Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Dilawar Ali, Fawad Riasat Raja, and Muhammad Asjad Saleem
Performance Evaluation of MPI vs. Apache Spark for Condition
Based Maintenance Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Tomasz Haupt, Bohumir Jelinek, Angela Card, and Gregory Henley
Comparison of Embedded Linux Development Tools for the WiiPiiDo
Distribution Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Diogo Duarte, Sérgio Silva, João M. Rodrigues, Salviano Pinto Soares,
and António Valente
FERA: A Framework for Critical Assessment of Execution
Monitoring Based Approaches for Finding Concurrency Bugs . . . . . . . 54
Jasmin Jahić, Thomas Bauer, Thomas Kuhn, Norbert Wehn,
and Pablo Oliveira Antonino
A Top-Down Three-Way Merge Algorithm for HTML/XML
Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Anastasios G. Bakaoukas and Nikolaos G. Bakaoukas
Traceability Framework for Requirement Artefacts . . . . . . . . . . . . . . . . 97
Foziah Gazzawe, Russell Lock, and Christian Dawson
Haptic Data Accelerated Prediction via Multicore Implementation . . . . 110
Pasquale De Luca and Andrea Formisano


Finding the Maximal Independent Sets of a Graph Including the
Maximum Using a Multivariable Continuous Polynomial Objective
Optimization Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Maher Heal and Jingpeng Li
Numerical Method of Synthesized Control for Solution of the Optimal
Control Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Askhat Diveev
Multidatabase Location Based Services (MLBS) . . . . . . . . . . . . . . . . . . 157
Romani Farid Ibrahim
wiseCIO: Web-Based Intelligent Services Engaging Cloud
Intelligence Outlet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Sheldon Liang, Kimberly Lebby, and Peter McCarthy
A Flexible Hybrid Approach to Data Replication
in Distributed Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Syed Mohtashim Abbas Bokhari and Oliver Theel
A Heuristic for Efficient Reduction in Hidden Layer Combinations
for Feedforward Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Wei Hao Khoong
Personalized Recommender Systems with Multi-source Data . . . . . . . . . 219
Yili Wang, Tong Wu, Fei Ma, and Shengxin Zhu
Renormalization Approach to the Task of Determining the Number
of Topics in Topic Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Sergei Koltcov and Vera Ignatenko
Strategic Inference in Adversarial Encounters Using
Graph Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
D. Michael Franklin
Machine Learning for Offensive Security: Sandbox Classification
Using Decision Trees and Artificial Neural Networks . . . . . . . . . . . . . . . 263
Will Pearce, Nick Landers, and Nancy Fulda
Time Series Analysis of Financial Statements for Default Modelling . . . 281
Kirill Romanyuk and Yuri Ichkitidze
Fraud Detection Using Sequential Patterns from Credit
Card Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
Addisson Salazar, Gonzalo Safont, and Luis Vergara
Retention Prediction in Sandbox Games with Bipartite
Tensor Factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Rafet Sifa, Michael Fedell, Nathan Franklin, Diego Klabjan, Shiva Ram,
Arpan Venugopal, Simon Demediuk, and Anders Drachen

Data Analytics of Student Learning Outcomes Using Abet
Course Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Hosam Hasan Alhakami, Baker Ahmed Al-Masabi,
and Tahani Mohammad Alsubait
Modelling the Currency Exchange Rates Using Support
Vector Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Ezgi Deniz Ülker and Sadik Ülker
Data Augmentation and Clustering for Vehicle Make/Model
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
Mohamed Nafzi, Michael Brauckmann, and Tobias Glasmachers
A Hybrid Recommender System Combing Singular Value
Decomposition and Linear Mixed Model . . . . . . . . . . . . . . . . . . . . . . . . 347
Tianyu Zuo, Shenxin Zhu, and Jian Lu
Data Market Implementation to Match Retail Customer Buying
Versus Social Media Activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
Anton Ivaschenko, Anastasia Stolbova, and Oleg Golovnin
A Study of Modeling Techniques for Prediction of Wine Quality . . . . . 373
Ashley Laughter and Safwan Omari
Quantifying Apparent Strain for Automatic Modelling, Simulation,
Compensation and Classification in Structural Health Monitoring . . . . . 400
Enoch A-iyeh
A New Approach to Supervised Data Analysis in Embedded Systems
Environments: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Pamela E. Godoy-Trujillo, Paul D. Rosero-Montalvo,
Luis E. Suárez-Zambrano, Diego H. Peluffo-Ordoñez,
and E. J. Revelo-Fuelagán
Smart Cities: Using Gamification and Emotion Detection
to Improve Citizens Well Fair and Commitment . . . . . . . . . . . . . . . . . . 426
Manuel Rodrigues, Ricardo Machado, Ricardo Costa,
and Sérgio Gonçalves
Towards a Smart Interface-Based Automated Learning Environment
Through Social Media for Disaster Management and Smart
Disaster Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Zair Bouzidi, Abdelmalek Boudries, and Mourad Amad
Is Social Media Still “Social”? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
Chan Eang Teng and Tang Mui Joo
Social Media: Influences and Impacts on Culture . . . . . . . . . . . . . . . . . 491
Mui Joo Tang and Eang Teng Chan

Cost of Dietary Data Acquisition with Smart Group Catering . . . . . . . . 502
Jiapeng Dong, Pengju Wang, and Weiqiang Sun
Social Engineering Defense Mechanisms: A Taxonomy and a Survey
of Employees’ Awareness Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Dalal N. Alharthi and Amelia C. Regan
How Information System Project Stakeholders Perceive
Project Success . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
Iwona Kolasa and Dagmara Modrzejewska
Fuzzy Logic Based Adaptive Innovation Model . . . . . . . . . . . . . . . . . . . 555
Bushra Naeem, Bilal Shabbir, and Juliza Jamaludin
A Review of Age Estimation Research to Evaluate Its Inclusion
in Automated Child Pornography Detection . . . . . . . . . . . . . . . . . . . . . . 566
Lee MacLeod, David King, and Euan Dempster
A Comprehensive Survey and Analysis on Path Planning Algorithms
and Heuristic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
Bin Yan, Tianxiang Chen, Xiaohui Zhu, Yong Yue, Bing Xu, and Kai Shi
Computational Conformal Mapping in Education
and Engineering Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
Maqsood A. Chaudhry
Pilot Study of ICT Compliance Index Model to Measure the Readiness
of Information System (IS) at Public Sector in Malaysia . . . . . . . . . . . . 609
Mohamad Nor Hassan and Aziz Deraman
Preliminary Experiments on the Use of Nonlinear Programming
for Indoor Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
Stefania Monica and Federico Bergenti
Improved Deterministic Broadcasting for Multiple Access Channels . . . 645
Bader A. Aldawsari and J. Haadi Jafarian
Equivalent Thermal Conductivity of Metallic-Wire
for On-Line Monitoring of Power Cables . . . . . . . . . . . . . . . . . . . . . . . . 661
M. S. Al-Saud
A Novel Speed Estimation Algorithm for Mobile UE’s
in 5G mmWave Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
Alawi Alattas, Yogachandran Rahulamathavan, and Ahmet Kondoz
In-App Activity Recognition from Wi-Fi Encrypted Traffic . . . . . . . . . . 685
Madushi H. Pathmaperuma, Yogachandran Rahulamathavan,
Safak Dogan, and Ahmet M. Kondoz
A Novel Routing Based on OLSR for NDN-MANET . . . . . . . . . . . . . . . 698
Xian Guo, Shengya Yang, Laicheng Cao, Jing Wang, and Yongbo Jiang

A Comparative Study of Active and Passive Learning Approaches
in Hybrid Learning, Undergraduate, Educational Programs . . . . . . . . . 715
Khalid Baba, Nicolas Cheimanoff, and Nour-eddine El Faddouli
Mobile Learning Adoption at a Science Museum . . . . . . . . . . . . . . . . . . 726
Ruel Welch, Temitope Alade, and Lynn Nichol
Conceptualizing Technology-Enhanced Learning Constructs:
A Journey of Seeking Knowledge Using Literature-Based Discovery . . . 746
Amalia Rahmah, Harry B. Santoso, and Zainal A. Hasibuan
Random Sampling Effects on e-Learners Cluster Sizes Using
Clustering Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
Muna Al Fanah
Jupyter-Notebook: A Digital Signal Processing Course Enriched
Through the Octave Programming Language . . . . . . . . . . . . . . . . . . . . 774
Arturo Zúñiga-López, Carlos Avilés-Cruz, Andrés Ferreyra-Ramírez,
and Eduardo Rodríguez-Martínez
A Novel Yardstick of Learning Time Spent in a Programming
Language by Unpacking Bloom’s Taxonomy . . . . . . . . . . . . . . . . . . . . . 785
Alcides Bernardo Tello, Ying-Tien Wu, Tom Perry, and Xu Yu-Pei
Assessing and Development of Chemical Intelligence Through
e-Learning Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
E. V. Volkova
Injecting Challenge or Competition in a Learning Activity
for Kindergarten/Primary School Students . . . . . . . . . . . . . . . . . . . . . . 806
Bah Tee Eng, Insu Song, Chaw Suu Htet Nwe, and Tian Liang Yi

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827


Demonstrating Advanced Machine Learning
and Neuromorphic Computing Using IBM’s
NS16e

Mark Barnell1(B), Courtney Raymond1, Matthew Wilson2, Darrek Isereau2,
Eric Cote2, Dan Brown2, and Chris Cicotta2
1 Air Force Research Laboratory, Information Directorate, Rome, NY 13441, USA
[email protected]
2 SRC, Inc., 6225 Running Ridge Road, North Syracuse, NY 13212, USA

Abstract. The human brain can be viewed as an extremely power-efficient biological
computer. As such, there have been many efforts to create brain-inspired
processing systems to enable advances in low-power data processing. An exam-
ple of brain-inspired processing architecture is the IBM TrueNorth Neurosynaptic
System, a Spiking Neural Network architecture for deploying ultra-low power
machine learning (ML) models and algorithms. For the first time ever, an advanced
scalable computing architecture was demonstrated using 16 TrueNorth neuro-
morphic processors containing in aggregate over 16 million neurons. This sys-
tem, called the NS16e, was used to demonstrate new ML techniques including
the exploitation of optical and radar sensor data simultaneously, while consum-
ing a fraction of the power compared to traditional Von Neumann computing
architectures. The number of applications that have requirements for computing
architectures that can operate in size, weight and power-constrained environments
continues to grow at an outstanding pace. These applications include proces-
sors for vehicles, homes, and real-time data exploitation needs for intelligence,
surveillance, and reconnaissance missions. This research included the successful
exploitation of optical and radar data using the NS16e system. Processing perfor-
mance was assessed, and the power utilization was analyzed. The NS16e system
never used more than 15 W, with the contribution from the 16 TrueNorth proces-
sors utilizing less than 5 W. The image processing throughput was 16,000 image
chips per second, corresponding to 1,066 image chips per second for each watt of
power consumed.

Keywords: Machine vision · High Performance Computing (HPC) · Artificial
Intelligence (AI) · Machine learning image processing · Deep Learning (DL) ·
Convolutional Neural Networks (CNN) · Spiking Neural Network (SNN) ·
Neuromorphic processors

Received and approved for public release by the Air Force Research Laboratory (AFRL) on 11
June 2019, case number 88ABW-2019-2928. Any opinions, findings, and conclusions or recom-
mendations expressed in this material are those of the authors, and do not necessarily reflect the
views of AFRL or its contractors. This work was partially funded under AFRL’s Neuromorphic
- Compute Architectures and Processing contract that started in September 2018 and continues
until June 2020.

© Springer Nature Switzerland AG 2020
K. Arai et al. (Eds.): SAI 2020, AISC 1228, pp. 1–11, 2020.
https://doi.org/10.1007/978-3-030-52249-0_1

1 Background
Background and insight into the technical applicability of this research is discussed in
Sect. 1. Section 2 provides an overview of the hardware. Section 3 provides detail on
our technical approach. Section 4 provides a concise summary of our results. Section 5
addresses areas of future research, and conclusions are discussed in Sect. 6.
We are currently in a period where both the interest in and the pace of research and
development in ML and AI are high. In part, the progress is enabled
by increases in investment by government, industry, and academia.
Currently, ML algorithms, techniques, and methods are improving at an accelerated
pace, with methods for recognizing objects and patterns outpacing human
performance. The community's interest is supported by the number of applications that can
use existing and emerging ML hardware and software technologies. These applications
are supported by the availability of large quantities of data, connectivity of information,
and new high-performance computing architectures. Such applications are now preva-
lent in many everyday devices. For example, data from low cost optical cameras and
radars provide automobiles the data needed to assist humans. These driver assistants
can identify road signs, pedestrians, and lane lines, while also controlling vehicle speed
and direction. Other devices include smart thermostats, autonomous home floor clean-
ers, robots that deliver towels to hotel guests, systems that track and improve athletic
performance, and devices that help medical professionals diagnose disease. These appli-
cations, and the availability of data collected on increasingly smaller devices, are driving
the need and interest in low-power neuromorphic chip architectures.
The wide applicability of information processing technologies has increased com-
petition and interest in computing hardware and software that can operate within the
memory, power, and cost constraints of the real world. This includes continued research
into computing systems that are structured like the human brain. The research includes
several decades of development, in-part pioneered by Carver Mead [1]. Current exam-
ples of the more advanced neuromorphic chips include SpiNNaker, Loihi, BrainScaleS-
1, NeuroGrid/Braindrop, DYNAP, ODIN and TrueNorth [2]. These systems improve
upon traditional computing architectures, such as the Von Neumann architecture, where
physical memory and logic are separated. In neuromorphic systems, the colocalization
of memory and computation, as well as reduced precision computing, increases energy
efficiencies, and provides a product that uses much less power than traditional compute
architectures.
IBM’s TrueNorth Neurosynaptic System represents an implementation of these
newly available specialized neuromorphic computing architectures [3]. The TrueNorth
NS1e, an evaluation board with a single TrueNorth chip, has the following technical
specifications: 1 million individually programmable neurons, 256 million individually
programmable synapses, and 4,096 parallel & distributed cores. Additionally, this chip
uses approximately 200 mW of total power, resulting in 20 mW/cm2 power density
[4–6].

The latest iteration of the TrueNorth Neurosynaptic System includes the NS16e, a
single board containing a tiled set of 16 TrueNorth processors, assembled in a four-by-
four grid. This state-of-the-art 16 chip computing architecture yields a 16 million neu-
ron processor, capable of implementing large, multi-processor models or parallelizing
smaller models, which can then process 16 times the data.
To demonstrate the processing capabilities of the TrueNorth, we developed multiple
classifiers. These classifiers were trained using optical satellite imagery from the United
States Geological Survey (USGS) [7]. Each image chip in the overall image was labeled
by identifying the existence or non-existence of a vehicle in the chip. The chips were
not centered and could include only a segment of a vehicle [8]. Figure 1 shows the raw
imagery on the left and the processed imagery on the right. In this analysis, a single
TrueNorth chip was able to process one thousand 32 × 32 pixel chips per second.

[Figure: advanced network classification pipeline: USGS imagery was chipped, human-labeled, and preprocessed to train, test, and validate a 14-layer neural network for vehicle detection; results: accuracy 97.6%, probability of detection 89.5%, probability of false alarm 1.4%, 24,336 total chips classified at 1,000 chips/sec using 3 W.]

Fig. 1. Electro-optical (EO) image processing using two-class network to detect car/no car in
scene using IBM’s neuromorphic compute architecture, called TrueNorth (using one chip)

Previous work was extended through the use of new networks and the placement of
those networks on the TrueNorth chip. Additionally, results were captured and analyses
were completed to assess the performance of these new network models. The overall
accuracy of the best model was 97.6%. Additional performance measures are provided
at the bottom of Fig. 1.

2 Hardware Overview

Development of the TrueNorth architecture dates to the DARPA Systems of Neuromorphic
Adaptive Plastic Scalable Electronics (SyNAPSE) project beginning in 2008.
This project sought to develop revolutionary new neuromorphic processors and design
tools. Each TrueNorth chip is made of 5.4 billion transistors, and is fabricated using

a 28 nm low-power complementary metal-oxide-semiconductor (CMOS) process technology
with complementary pairs of logic functions. The TrueNorth chip measures 4.3 cm2
and uses under 200 mW of power per chip.
The NS16e board is configured with 16 TrueNorth chips in a 4 × 4 chip configuration.
In aggregate, this board provides users with access to 16 million programmable neurons
and over 4 billion programmable synapses. The physical size of the NS16e board is
215 mm × 285 mm [9].
To expand on this configuration, four of these NS16e boards were placed in a
standard 7U space, so this new neuromorphic system occupies about 19 by 23 by
7 inches of rack space. The four-board configuration, called the NS16e-4, results in a
neuromorphic computing system with 64 million neurons and 16 billion synapses.
Such a configuration enables users to extend upon the single chip research described
in Sect. 1, and implement inferencing algorithms and data processing in parallel. Addi-
tionally, the system uses a fraction of the processing power compared to traditional
computing hardware occupying the same physical footprint. The Air Force Research
Laboratory’s (AFRL’s) rack mounted neurosynaptic system, called BlueRaven, is shown
in Fig. 2.

[Figure: BlueRaven Neurosynaptic System: rack mounted; four NS16e boards with an aggregate of 64 million neurons and 16 billion synapses; enabling parallelization research and design; used to process 5000 × 5000 pixels of information every 3 seconds.]

Fig. 2. AFRL’s BlueRaven system – equivalent to 64 million neurons and 16 billion synapses

The BlueRaven High Performance Computer (HPC) also contains a 2U Penguin
Computing Relion 2904GT. The Penguin server is utilized for training network models
before being deployed to the neuromorphic hardware, as well as for data pre-processing.
Table 1 details BlueRaven's specifications.

Table 1. BlueRaven system architecture configuration detail

Specification Description
Form Factor 2U Server + 2U NS16e Sled
NS16e 4× IBM NS16e PCIe Cards
Neurosynaptic Cores 262,144
Programmable Neurons 67,108,864
Programmable Synapses 17,179,869,184
PCIe NS16e Interface 4× PCIe Gen 2
Ethernet - Server 1x 1 Gbit
Ethernet – NS16e 1x 1 Gbit per NS16e
Training GPUs 2x NVIDIA Tesla P100
Volatile Memory 256 GB
CPUs 2× 10-Core E5-2630

3 Approach
The NS16e processing approach includes the use of deep convolutional Spiking Neural
Networks (SNN) to perform classification inferencing of the input imagery. The deep
networks were designed and trained using IBM’s Energy-efficient Deep Neuromorphic
Networks (EEDN) framework [4].
The neurosynaptic resource utilization of the classifiers was purposely designed to
operate within the constraints of the TrueNorth architecture. Specifically, they stayed
within the limits of the single TrueNorth’s 1 million neurons and 256 million synapses.
The benefit of this technical approach is that it immediately allowed us to populate
an NS16e board with up to sixteen parallel image classifier networks, eight to process
optical imagery and eight to process radar imagery. Specifically, the processing chain
is composed of a collection of 8 duplicates of the same EEDN network trained on a
classification task for each chosen dataset.

3.1 USGS Dataset


EO imagery is a very applicable data set for exercising the BlueRaven
system: it is freely available and of favorable quality (i.e., high
resolution). The quality of the data enabled us to easily identify targets in the imagery.
Additionally, the data could be easily chipped and labeled to provide the information
necessary for network model training and validation.

This overhead optical imagery includes all 3 color channels (red, green and blue).
The scene analyzed included 5000 × 5000 pixels at 1-foot resolution. From this larger
scene, image chips were extracted. Each image chip from the scene was 32 × 32 pixels.
There was no overlap between samples, thereby sampling the input scene with a receptive
field of 32 × 32 pixels and a stride of 32 pixels. This resulted in 24,336 (156 ×
156) sub-regions.
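
As a rough illustration of this chipping step (a minimal sketch under the stated 32 × 32 pixel, stride-32 assumptions, not the authors' actual preprocessing code), the following Python/NumPy snippet extracts the non-overlapping chips from a 5000 × 5000 RGB scene:

import numpy as np

def chip_scene(scene, chip_size=32, stride=32):
    """Extract non-overlapping chip_size x chip_size chips from an H x W x 3 scene.

    With a 5000 x 5000 input, chip_size=32 and stride=32 this yields
    156 x 156 = 24,336 chips (the trailing 8 rows/columns of pixels are dropped).
    """
    h, w = scene.shape[:2]
    chips, positions = [], []
    for r in range(0, h - chip_size + 1, stride):
        for c in range(0, w - chip_size + 1, stride):
            chips.append(scene[r:r + chip_size, c:c + chip_size, :])
            positions.append((r, c))
    return np.stack(chips), positions

# Example with a dummy scene standing in for the USGS tile.
scene = np.zeros((5000, 5000, 3), dtype=np.uint8)
chips, positions = chip_scene(scene)
print(chips.shape)  # (24336, 32, 32, 3)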
The USGS EO data was used to successfully build TrueNorth-based classifiers that
contained up to six object and terrain classes (e.g., vehicle, asphalt, structure, water,
foliage, and grass). For this multi-processor neurosynaptic hardware demonstration, a
subset of the classes was utilized to construct a binary classifier, which detected the
presence or absence of a vehicle within the image chip.
The data set was divided into training and test/validation sets. The training
set contained 60% of the chips (14,602 image chips). The remaining 40% of the chips
(9,734) were used for test/validation. The multi-processor demonstration construct and
corresponding imagery is shown in Fig. 3.

[Figure: multi-processor neurosynaptic demonstration construct: example optical imagery with binary classification of vehicle/no vehicle within each image chip.]

Fig. 3. Example USGS tile and demonstration construct

3.2 Multi-chip Neurosynaptic Electro-Optical Classification

The content of the chip was defined during data curation/labeling. The label was in one
of two categories: no vehicle or a vehicle. Additionally, the chips were not chosen with
the targets of interest centered in the image chip. Because of this, many of the image
chips contained portions of a vehicle, e.g., a chip may contain an entire vehicle, fractions
of a vehicle, or even fractions of multiple vehicles.
The process of classifying the existence of a vehicle in the image starts with object
detection. Recognizing that a chip may contain a portion of a vehicle, an approach was
developed to help ensure detection of the vehicles of interest. This approach created
multiple 32 × 32 × 3 image centroids. These centroids were varied in both the X and Y
dimensions to increase the probability of getting more of the target in the image being
analyzed.
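
The paper does not give the exact offsets used; purely as a hedged sketch of the idea, the snippet below generates several shifted 32 × 32 × 3 chips around a nominal chip position so that a vehicle straddling a chip boundary is more likely to fall (mostly) inside at least one of them (the ±8/±16 pixel offsets are illustrative assumptions):

import numpy as np

def offset_chips(scene, r, c, chip_size=32, offsets=(-16, -8, 0, 8, 16)):
    """Return chips whose windows are shifted around the nominal position (r, c)
    in both the X and Y dimensions; windows are clamped to the scene borders."""
    h, w = scene.shape[:2]
    shifted = []
    for dr in offsets:
        for dc in offsets:
            rr = min(max(r + dr, 0), h - chip_size)
            cc = min(max(c + dc, 0), w - chip_size)
            shifted.append(scene[rr:rr + chip_size, cc:cc + chip_size, :])
    return np.stack(shifted)

One plausible way to use these is to classify each shifted chip independently and declare a vehicle if any of them is labeled as containing one; the paper does not specify its exact fusion rule.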
A block diagram showing the processing flow from USGS imagery to USGS imagery
with predicted labels is shown in Fig. 4. This includes NS16e implementations with 8
parallel classifier networks, 1 per each TrueNorth on half the board.

[Figure: block diagram of the processing flow: a 5000 × 5000 USGS image is divided into sub-regions by an image chipper, the resulting chips are classified by eight parallel TrueNorth network models, and the outputs are assembled into a USGS image with predicted labels by a classification processor.]

Fig. 4. NS16e USGS block diagram

The copies of the EO network were placed in two full columns of the NS16e, or
eight TrueNorth processors in a 4 × 2 configuration, with one network copy on each
processor. As a note, the remainder of the board was leveraged to study processing with
additional radar imagery data.
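
As a trivial sketch of this layout (our own illustration of the split described above, not code from the authors), the 16 chip positions on the 4 × 4 grid can be assigned as follows:

# Columns 0-1 hold the eight EO classifier copies, columns 2-3 the radar copies.
assignment = {(row, col): "EO" if col < 2 else "radar"
              for row in range(4) for col in range(4)}
print(sum(net == "EO" for net in assignment.values()))  # 8 EO networks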

3.3 Electro-Optical Classification Hardware Statistics


Analyses of the system's power consumption were completed. The TrueNorth system
operates at a rate of 1 kHz. This rate directly correlates to the number of image chips
that can be processed per second. The USGS/Radar network models were replicated
across 16 TrueNorth chips and resulted in a processing speed of 16,000 inferences
per second. At this rate and using 8 TrueNorth chips, the new compute and exploitation
architecture was able to process the full 5,000 × 5,000 pixel optical imagery with 24,336
image chips in 3 s.
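
These figures follow from simple arithmetic; the minimal sketch below reproduces the throughput, the roughly 3 s scene time, and the per-watt figure quoted in the Abstract (the 15 W board power is the upper bound reported in Sect. 4, and the sketch is ours, not the authors' measurement code):

clock_rate_hz = 1_000        # TrueNorth classifies one chip per 1 kHz tick
chips_on_board = 16
chips_for_eo = 8
image_chips = 24_336         # 156 x 156 chips from the 5000 x 5000 scene
board_power_w = 15           # upper bound on NS16e board power

throughput = clock_rate_hz * chips_on_board                   # 16,000 chips per second
scene_time_s = image_chips / (clock_rate_hz * chips_for_eo)   # ~3.04 s on 8 chips
chips_per_watt = throughput / board_power_w                   # ~1,066 chips/s per watt

print(throughput, round(scene_time_s, 2), int(chips_per_watt))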

The NS16e card’s power usage during inferencing is shown in Table 2. The total uti-
lization of the board was less than 14 W. The runtime analyses included the measurement
of periphery circuits and input/output (I/O) on the board.

Table 2. NS16e board power usage

Board power

Board                                 Voltage (V)   Current (A)   Power (W)
                                      Nominal       Measured      Computed
Interposer (Including MMP)            +12           0.528         6.336
16-chip board (Including TN chips)    +12           0.622         7.462
Total                                               1.150         13.798

Table 3 details the power utilization of the TrueNorth chips without the board's
peripheral power use. The contribution from the TrueNorth chips accounted for approximately
5 W of the total 15 W.

Table 3. NS16e TrueNorth power usage

TrueNorth power only

Component                 Voltage (V)   Current (A)   Power (W)
                          Measured      Measured      Computed
TrueNorth Core VDD        0.980         4.74          4.636
TrueNorth I/O Drivers     1.816         0.04          0.063
TrueNorth I/O Pads        1.000         0.00          0.002
Total                                                 4.701

Table 4 provides detail on the power utilization without loading on the system (idle).

4 Results

In Fig. 5, we see an example of predictions (yellow boxes) overlaid with ground truth
(green tiles). Over the entirety of our full-scene image, we report a classification accuracy
of 84.29% or 3,165 of 3,755 vehicles found. Our misclassification rate, meaning the
number of false positives or false negatives, is 35.39%. Of that, 15.71% of targets are
false negatives, i.e., target misses. This can be tuned by changing the chipping algorithm
used, with a trade-off in the inference speed of a tile.

Table 4. Idle NS16e power usage

Board power

Board                                 Voltage (V)   Current (A)   Power (W)
                                      Nominal       Measured      Computed
Interposer (Including MMP)            +12           0.518         6.216
16-chip board (Including TN chips)    +12           0.605         7.265
Total                                               1.123         13.481

TrueNorth power only

Component                 Voltage (V)   Current (A)   Power (W)
                          Measured      Measured      Computed
TrueNorth Core VDD        0.978         4.64          4.547
TrueNorth I/O Drivers     1.816         0.03          0.051
TrueNorth I/O Pads        0.998         0.00          0.001
Total                                                 4.599

[Figure: example result tile for the multi-processor neurosynaptic demonstration, with a legend distinguishing targets positively identified, targets falsely identified, and human-labeled ground-truth cars; 3,165 of 3,755 vehicles (84.29%) detected at 8,000 inferences per second.]

Fig. 5. Example USGS tile results

5 Future Research

Neuromorphic research and development continue with companies such as Intel and
IBM. They are contributing to the community's interest in these low-power processors.
As an example, the SpiNNaker system consists of many ARM cores and is highly
flexible since neurons are implemented at the software level, albeit somewhat more
energy intensive (each core consumes ~1 W) [10, 11].
As new SNN architectures continue to be developed, new algorithms and applica-
tions continue to surface. This includes technologies such as bioinspired vision systems
[12]. Additionally, Intel’s Loihi neuromorphic processor [13] is a new SNN neuromor-
phic architecture which enables a new set of capabilities on ultra-low power hardware.
Loihi also provides the opportunity for online learning. This makes the chip more flex-
ible as it allows various paradigms, such as supervisor/non-supervisor and reinforc-
ing/configurability. Additional research of these systems, data exploitation techniques,
and methods will continue to enable new low power and low-cost processing capabilities
with consumer interest and applicability.

6 Conclusions

The need for advanced processing algorithms and methods that operate on low-power
computing hardware continues to grow at an outstanding pace. This research has
enabled the demonstration of advanced image exploitation on the newly developed
NS16e neuromorphic hardware, i.e., a board with sixteen neurosynaptic chips on it.
Together, those chips never exceeded 5 W power utilization. The neuromorphic board
never exceeded 15 W power utilization.

References
1. Mead, C.: Neuromorphic electronic systems. Proc. IEEE 78(10), 1629–1636 (1990)
2. Rajendran, B., Sebastian, A., Schmuker, M., Srinivasa, N., Eleftheriou, E.: Low-power neu-
romorphic hardware for signal processing applications (2019). https://arxiv.org/abs/1901.
03690
3. Barnell, M., Raymond, C., Capraro, C., Isereau, D., Cicotta, C., Stokes, N.: High-performance
computing (HPC) and machine learning demonstrated in flight using Agile Condor®. In: IEEE
High Performance Extreme Computing Conference (HPEC), Waltham, MA (2018)
4. Esser, S.K., Merolla, P., Arthur, J.V., Cassidy, A.S., Appuswamy, R., Andreopoulos, A.,
et al.: CNNs for energy-efficient neuromorphic computing. In: Proceedings of the National
Academy of Sciences, p. 201604850, September 2016. https://doi.org/10.1073/pnas.1604850113
5. Service, R.F.: The brain chip. Science 345(6197), 614–615 (2014)
6. Cassidy, A.S., Merolla, P., Arthur, J.V., Esser, S.K., Jackson, B., Alvarez-Icaza, R., Datta, P.,
Sawada, J., Wong, T.M., Feldman, V., Amir, A., Rubin, D.B.-D., Akopyan, F., McQuinn, E.,
Risk, W.P., Modha, D.S.: Cognitive computing building block: a versatile and efficient digital
neuron model for neurosynaptic cores. In: The 2013 International Joint Conference on Neural
Networks (IJCNN), pp. 1–10, 4–9 August 2013
7. U.S. Geological Survey: Landsat Data Access (2016). http://landsat.usgs.gov/Landsat_Search_and_Download.php
8. Raymond, C., Barnell, M., Capraro, C., Cote, E., Isereau, D.: Utilizing high-performance
embedded computing, agile condor®, for intelligent processing: an artificial intelligence
platform for remotely piloted aircraft. In: 2017 IEEE Intelligent Systems Conference, London,
UK (2017)

9. Modha, D.S., Ananthanarayanan, R., Esser, S.K., Ndirango, A., et al.: Cognitive computing.
Commun. ACM 54(8), 62–71 (2011)
10. Furber, S.B., Galluppi, F., Temple, S., Plana, L.A.: The SpiNNaker project. Proc. IEEE 102(5),
652–665 (2014)
11. Schuman, C.D., Potok, T.E., Patton, R.M., Birdwell, J.D., Dean, M.E., Rose, G.S., Plank,
J.S.: A survey of neuromorphic computing and neural networks in hardware. CoRR
abs/1705.06963 (2017)
12. Dong, S., Zhu, L., Xu, D., Tian, Y., Huang, T.: An efficient coding method for spike camera
using inter-spike intervals. In: IEEE DCC, March 2019
13. Tang, G., Shah, A., Michmizos, K.P.: Spiking neural network on neuromorphic hardware for
energy-efficient unidimensional SLAM. CoRR abs/1903.02504. arXiv:1611.05141 (2019)
Energy Efficient Resource Utilization:
Architecture for Enterprise Network
Towards Reliability with SleepAlert

Dilawar Ali1(B) , Fawad Riasat Raja2 , and Muhammad Asjad Saleem2


1 Ghent University, Ghent, Belgium
[email protected]
2 University of Engineering and Technology Taxila, Taxila, Pakistan

Abstract. Enterprise networks usually require all computing machines to remain
accessible (switched on) at all times, regardless of the workload, in order to
entertain user requests at any instant. This comes at the cost of excessive energy
utilization. Many solutions have been put forward; however, only a few of them
have been tested in a real-time environment, and their energy saving is achieved by
compromising the system's reliability. Therefore, energy-efficient resource utilization
without compromising the system's reliability is still a challenge. In this research,
a novel architecture, “Sleep Alert”, is proposed that not only avoids excessive
energy utilization but also improves system reliability by using the Resource
Manager (RM) concept. In contrast to traditional approaches, Primary and
Secondary Resource Managers, i.e., RMP and RMS respectively, are used to avoid a
single point of failure. The proposed architecture was tested on a network where
active users were accessing distributed virtual storage and other applications
deployed on desktop machines connected to each other through
a peer-to-peer network. Experimental results show that the solution can save a
considerable amount of energy while making sure that reliability is not compromised.
This solution is useful for small enterprise networks, where saving energy is a big
challenge besides reliability.

Keywords: Enterprise networks · Resource manager · Green computing · Sleep
proxy · Energy-efficient computing

1 Introduction
Efficient utilization of energy is one of the biggest challenges around the globe. The
difference between demand and supply is always on the rise. For high-performance
computing, a reliable, scalable, and cost-effective energy solution satisfying power
requirements and minimizing environmental pollution will have a high impact. The
biggest challenge in enterprise networks is how to manage power consumption. Data
centers utilize a huge amount of energy in order to ensure the availability of data when
accessed remotely.
A major problem nowadays is that energy is scarce, which is why renewable energy,
i.e., producing energy from wind, water, solar, geothermal, and bio-energy sources, is a
hot research topic. It is equally important how efficiently this limited energy is utilized.

© Springer Nature Switzerland AG 2020
K. Arai et al. (Eds.): SAI 2020, AISC 1228, pp. 12–27, 2020.
https://doi.org/10.1007/978-3-030-52249-0_2

Investments should therefore be made in green technology that strengthens the economy
besides reducing environmental pollution. In the US Department of Energy, the Office
of Energy Efficiency and Renewable Energy (EERE) is also working on energy efficiency
and renewable energy resources, with an aim to reduce the dependence on imported oil
[1].
The number of internet users increased five-fold from 2000 to 2009, and currently there
are more than 2.4 billion internet users [2]. A Microsoft report shows that this number
will exceed 4 billion in the coming years, while other sources say it will be more than
5 billion by the year 2020 [3]. The more users there are, the more energy is consumed
and the more CO2 is emitted. The global Information and Communications Technology
(ICT) industry accounts for approximately 2% of global carbon dioxide (CO2) emissions,
a number equivalent to aviation, which means a rapid increase in environmental pollution.
The power deficit and energy crisis are currently main topics of debate in
discussion forums and professional conferences. Optimizing energy consumption and
minimizing CO2 emissions in all critical sectors of an economy is considered a worldwide
goal [4]. Therefore, our major concern is to reduce the energy used in enterprise networks
by means of the proposed scheme. A contribution of this scheme is to widen the research
area in green computing, where the main goal is not just to save the operational energy
of a product but also the overall energy consumed from product development until the
completion of the recycling process [4].
Data centers contain servers and storage systems, which operate and manage the
enterprise resource planning solutions. Major components of a data center are
environmental controls (e.g., ventilation/air conditioning), redundant or backup power
supplies, multiple data communication connections, and security devices (e.g., cameras).
Large data centers are often considered a major source of air pollution in the form of CO2;
by releasing plenty of heat they contribute to global warming and consume as much energy
as a small town [5]. In enterprise networks a machine (often a desktop) is usually kept in
active mode so that it remains available whenever accessed remotely. This reveals that
some machines having no load (i.e., idle ones) may remain continuously in the active state.
To cope with energy management issues, many hardware- and software-based solutions
have been put forward. Vendors have manufactured devices such as Dynamic
Voltage and Frequency Scaling (DVFS) enabled devices, which consume less energy
than non-DVFS-enabled devices. Many software solutions have also been proposed to
reduce energy consumption; the most prominent is the one that puts machines into sleep
mode when they are idle. However, the latter scheme has a number of issues, e.g.,
reliability, which will be addressed in the next section.
The proposed architecture not only addresses the shortcomings of existing
architectures identified in this research work but also caters for most of the issues
that arise in a real-time test environment. These include the reliability of the sleep proxy
architecture and its overhead (the proxy node sends a periodic signal to the proxy
server every five minutes, notifying its presence on the network), as these have not
been addressed in the traditional approaches. The main benefit of the proposed scheme
is that no extra or costly hardware is required to implement the solution.
Sleep Alert, a cost-effective solution, eliminates the single point of failure and the
addition of any extra hardware to make the network reliable and energy efficient. This
solution is useful for small enterprise networks, where saving energy is a big challenge
besides reliability.

1.1 Major Contributions

The major contributions are as follows:

1. A simple architecture to cope with the challenge of a single point of failure and ensure
the service until the last node available in the network.
2. A low-cost solution to save a considerable amount of energy while making sure that
reliability is not compromised.

2 Literature Review
The traditional approach regarding the design of a computer network is to ensure the
accessibility of each machine connected in the network at any cost. Energy consumption
is not taken into account while designing such networks [6]. Generally, a network design
is highly redundant for fault tolerance. All network equipment stays powered-on even
if unused or slightly used. Many solutions [7–16] have been proposed to efficiently use
the energy.
The Green Product Initiative [12] introduced the use of energy-efficient devices based
on the Dynamic Voltage and Frequency Scaling (DVFS) technique, but this solution
is not appropriate for jobs with a short deadline, because DVFS-enabled
devices take more time than the normal execution time of a job. Hardware-based solutions
[12, 17] require particular hardware, such as GumStix [17]; when the host sleeps, these
low-powered devices become active. Sometimes a particular hardware device is required
for each machine, besides operating system alterations on both the application and the host.
Apple has come up with a sleep proxy that is compatible only with Apple-designed hardware
and is appropriate for home networks only [15].
The sleep proxy architecture [13] is a common software-based solution for reducing energy
consumption. The major flaw of this technique is that if the sleep proxy server is down,
there is no other way to access any machine. Using a designated machine as the sleep
proxy server is also not a good approach, as the danger of a single point of failure always
lurks. A different approach is the SOCKS-based approach, which is aware
of the power state of a machine and shows how a Network Connectivity Proxy (NCP) could
enable substantial energy savings by letting idle machines enter a low-power sleep
state while still ensuring their presence in the network. This is just a design and prototype,
not yet implemented. Furthermore, there is always a significant difference between a
simulation-based test of a sleep proxy architecture and real-time testing in enterprise
networks [14]. Software-based solutions to energy-efficient computing are normally based
on a sleep approach; Wake on LAN (WOL) [18] is a standard method used to make a
sleeping machine available in a network when required, by sending a magic packet.
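
For illustration only (the MAC and broadcast addresses below are placeholders, not values from the paper), a WOL magic packet is simply six 0xFF bytes followed by the target machine's MAC address repeated sixteen times, sent as a UDP broadcast:

import socket

def send_magic_packet(mac="00:11:22:33:44:55", broadcast="255.255.255.255", port=9):
    """Send a Wake on LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

send_magic_packet()  # the target NIC must have Wake on LAN enabled in firmware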
Cloud and fog computing are common trends introduced in the recent decade. Given
how rapidly Information and Communication Technology (ICT) and computing systems
are growing, it is necessary to take into account the future energy-consumption challenges
posed by these technologies [19]. Cloud computing involves large numbers of data
sets at different locations. Many companies with huge data centers, such as Google, Amazon,
Yahoo, and Microsoft, currently follow the cloud architecture to handle large
numbers of data sets. The progression toward the cloud architecture results in more
data centers, which leads to a huge increase in energy consumption. A survey
shows that the energy consumption of data centers in the US was between 1.7% and 2.2% of
the entire US power consumption in 2010 [20]. To control the huge energy consumption
of cloud servers, the concepts of greening cloud computing [21] and pico servers
[22], i.e., power-aware computing, were introduced. Common methods to achieve energy
savings in green cloud computing data centers are adaptive link rate, virtual
network methodologies, putting less-utilized servers to sleep, green/power-aware routing, and
server load/network traffic consolidation [23].
Efficient resource allocation is a new direction for coping with the energy challenge.
Research shows different methods of resource allocation, a common one being allocation
based on priority [24]. This technique requires an intelligent system to predict the
behavior of a network based on different machine learning concepts [25]. In
virtual-machine-based solutions for energy efficiency, priority allocation and resource
sharing algorithms were designed to allocate resources so as to save maximum energy;
the major flaw in this technique is the excessive load on the network
due to constant reallocation of resources [26]. An intelligent approach to minimizing
energy consumption is to predict the network workload using different models;
the workload is divided into different classes and each class is then evaluated with the
available models [27]. Most available software-based solutions are not tested
in a real-time environment; others are complex and show many shortcomings when tested
in a real-time environment. Ensuring reliability while saving a sufficient amount of
energy is a big challenge for such applications. Single point of failure is one of the major
points of concern in this research domain.
Many solutions have been proposed, but some are difficult to implement while others require a lot of infrastructure. Most companies in developing countries have major budgetary constraints and cannot set up such a large infrastructure. In small enterprise environments, even if such large and complex network solutions are established, the cost to operate, maintain and keep them serviceable is too much to afford; the solution can end up costing more than the benefit gained from the energy savings. These organizations normally need a short, simple and low-cost energy-saving solution that still offers high reliability and good performance. The solution suited to such organizations, especially in under-developed countries that are afraid to upgrade the whole network or for which the cost of change far exceeds their annual revenue, is simply "Sleep Alert": a simple and cost-effective approach.

3 System Architecture
The concept of a sleep proxy is much appreciated, but over time researchers identified major problems that cause deadlocks in the network and affect the availability of a sleeping machine to respond to a remote request. Some issues with current sleep proxies that we discuss in this research are as follows:
• To make the environment green, we cannot risk leaving the network dependent on just one proxy.
• If that single proxy goes down for some reason, the states of all sleeping machines are lost and they are unable to return to wake-up mode when required.
• The proxy is a dedicated machine that has to maintain the state of the sleeping machines, which is an extra overhead since it consumes extra energy.
• Deciding when a machine should go into sleep mode.
• Some sleep approaches shut the system down and restart it when required; this consumes a lot of energy at start-up and also takes much longer to restart than resuming from a sleep state.

In contrast to previous sleep proxy approaches, the concept of a Resource Manager (RM) is proposed to avoid a single point of failure; the RM is further categorized into a Primary Resource Manager (RMp) and a Secondary Resource Manager (RMs). In the proposed architecture, there is no dedicated machine acting as an RM as in traditional sleep proxy approaches: any ordinary machine can act as an RM when required.
To avoid a single point of failure, two machines act as RMp and RMs at the same time. Whenever RMp stops working, RMs takes over as RMp and uses the next available machine as RMs, and this cycle continues as long as at least one machine remains available in the network. If RMp goes down or stops working, the intermediate device (router) updates its routing table, and this update redirects the incoming traffic towards RMs. Receiving this traffic is the signal to RMs that RMp is down, so RMs updates its status from RMs to RMp. It then pings the whole network to get the current state of all the desktop machines and appoints the next available machine to act as RMs.
Each desktop machine has two modes: Energy Saving (ES) mode and Energy Consumption (EC) mode. A machine acting as a Resource Manager can be in one of two states, either 'Working' (W) or 'Down' (D). To access a sleeping machine, a remote user's request is forwarded to RMp, or to RMs in case RMp is down. RMp sends a WOL packet to the particular machine to bring it back to the awake state, i.e. EC mode. Each machine keeps a small database that contains the states of the sleeping machines; this database is updated while the machine is acting as a Resource Manager. The information recorded in this database is the IP address of each machine, its MAC address, its connecting ports and its current state. Five states of a machine are defined in the proposed architecture: Primary Resource Manager [RMp], Secondary Resource Manager [RMs], Machine Active [A], Energy Saving mode [S], and Machine not available in the network [N]. The proposed architecture is shown in Fig. 1. This is a software-based solution and no extra hardware is required to implement it.
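A minimal sketch of such a per-machine state database, using the five states listed above, is given below (the IP addresses, MAC addresses and port numbers are hypothetical, and the dictionary layout is an assumption rather than the authors' actual schema).

# Five states from the proposed architecture.
STATES = {"RMp", "RMs", "A", "S", "N"}

# One record per machine: IP, MAC address, connecting port and current state.
machines = {
    "192.168.1.2":  {"mac": "00:11:22:33:44:01", "port": 5500, "state": "RMp"},
    "192.168.1.3":  {"mac": "00:11:22:33:44:02", "port": 5500, "state": "RMs"},
    "192.168.1.25": {"mac": "00:11:22:33:44:03", "port": 5500, "state": "S"},
}

def set_state(ip, state):
    assert state in STATES, "unknown machine state"
    machines[ip]["state"] = state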

3.1 General Terminologies

Status Indicator. The status indicator is an application running on each machine that conveys the present status (sleep or wake) of that machine to RMp.
Fig. 1. Proposed system architecture

Sleep Status. When there is no activity on a machine for a specific amount of time, i.e. the machine is idle, it turns to the sleep state and, through the status indicator, sends a message about its new status (sleep) to RMp. There is no need to keep track of which machine is currently acting as RMp: the machine simply broadcasts its status message in the network; only RMp saves the status of that particular machine, and every other machine discards the message.

Wake Status. When a machine is in the sleep state and network traffic arrives for it (a remote user accessing the machine), RMp sends a WOL message to that machine. If the sleeping machine acknowledges the WOL message, RMp updates the status of that machine (sleep to wake) in its database. If there is no response at all after three WOL messages, RMp considers the machine no longer part of the network.

Message Packet. Messages that are forwarded and received have the following format. The protocol defined for identifying a message packet is presented below and shown in Fig. 2. A message contains the following five tokens, separated by [-]:

Message Protocol: [Message_Type - Requested_Ip - Source_Ip - Destination_Ip - Termination]

• Message Type: a code representing the task required from the requested PC.
• Requested IP: the IP of the targeted machine that should respond to the user request.
• Source IP: the IP of the sending machine.
• Destination IP: the IP of the receiving machine.
• Termination: a termination character, which in our case is '$'.

Message Type. Message types are predefined codes that help determine the type of request. The message codes are shown in Table 1:
Fig. 2. Message packet architecture

Table 1. Defined message types and message codes

Message type Message code


– 00
– 01
Connect 02
End Connection 03
Error 04
Ok 05
Sleep Status 06
TakeOver 07
Wake Status 08
WakeOnLan 09
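
To make the protocol concrete, the following Python sketch composes and parses a packet in the format of Sect. 3.1, using the Sleep Status code (06) from Table 1 (the IP addresses are hypothetical, and the exact field encoding is an assumption, not the authors' implementation).

# Message codes from Table 1.
MESSAGE_CODES = {"Connect": "02", "End Connection": "03", "Error": "04", "Ok": "05",
                 "Sleep Status": "06", "TakeOver": "07", "Wake Status": "08",
                 "WakeOnLan": "09"}

def build_message(msg_type, requested_ip, source_ip, destination_ip):
    # [Message_Type - Requested_Ip - Source_Ip - Destination_Ip - Termination]
    return "-".join([MESSAGE_CODES[msg_type], requested_ip, source_ip, destination_ip, "$"])

def parse_message(packet):
    msg_code, requested_ip, source_ip, destination_ip, terminator = packet.split("-")
    assert terminator == "$", "malformed packet: missing termination character"
    return msg_code, requested_ip, source_ip, destination_ip

msg = build_message("Sleep Status", "192.168.1.25", "192.168.1.25", "192.168.1.2")
print(msg)                   # 06-192.168.1.25-192.168.1.25-192.168.1.2-$
print(parse_message(msg))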

Table 2 shows how the RMp and RMs roles keep moving between machines when required (for example, on failure of RMp). T1, T2, T3, …, Tn denote the time intervals after which a new Resource Manager takes over.

Table 2. Different machine states

Machine T1 T2 T3 T4 T5 … Tn
A RMp N N N A … RMp
B RMs RMp N N A … RMs
C A RMs N N N … A
D A A RMp N N … A
E A A RMs RMp N … A
F A S A RMs RMp … A
G A S A N N … A
H A S S S RMs … A
I A S S S S … A
3.2 Architecture Explanation


Some use-cases of the proposed architecture are as follows:
Requested Machine is in EC Mode and RMp is in Working State. The remote user's request is forwarded directly to the particular machine without interrupting RMp, as shown in Fig. 3.

Fig. 3. Generalized flow of proposed architecture

Requested Machine is in EC Mode and RMp is in Down State. Whether RMp is down or active does not matter; the request is forwarded to the machine in EC mode, as shown in Fig. 3.
Requested Machine is in ES Mode and RMp is in Working State. If a remote user wants to access a machine that is in ES mode, the request is forwarded to the requested machine via RMp, as shown in Fig. 3. If there is no activity on a machine for a specific amount of time, it switches its mode from EC to ES, but before entering ES mode it notifies RMp about its new status, which RMp saves in its database. The intermediate device updates its routing table and forwards the remote user's request to RMp, since it is not possible to access a machine in ES mode directly. When RMp receives a request for a machine that is in ES mode, it sends a WOL message to that machine so that it can switch from ES to EC. If there is no response after three WOL messages, RMp considers the machine unavailable in the network and notifies the remote user. Otherwise, the machine shifts to EC mode and the remote user's request is forwarded to it.
Requested Machine is in ES Mode, RMp is in Down State and RMs is in Working State. When RMp is down, it cannot receive requests from remote users. If RMp has not responded to the remote user's request by the third attempt, RMs takes over, plays the role of RMp, and responds to the user request, as shown in Fig. 3. The new RMp then pings all machines in the network; machines that respond to the ping are considered to be in EC (active) mode and the rest are considered to be in ES (sleep) mode.

Requested Machine is in ES Mode and Both RMp and RMs are in Down State. As also shown in Fig. 3, when the intermediate device receives no response from RMp or RMs, it sends the remote user's request to the next available machine in EC mode in the network. When that machine receives the request, it concludes that both RMp and RMs are down, so it takes over as RMp and appoints a new RMs. If RMp and RMs are down and all machines in the network are in ES mode, there is a pause until some machine wakes up, on its own schedule, in the network. This delay is the only limitation of this research work; reducing it to a minimum by analyzing and predicting the network behavior is left as future work.

Some algorithms of the proposed architecture are presented below.


Algorithm: Entertaining user requests

function SleepAlert(Request, Target_Machine)

while request
Pack <- Receive_Request();
[Msg_Type ip_Requested] <- ExtractPacket();
% Msg_Type can be Connect / ...
% End Connection / Error / Ok / ...
% TakeOver / Sleep Status / ...
% Wake Status / WakeOnLan.

Machine_Mode <- get_requested_Machine_Mode();

switch Machine_Mode

case 'EC' % Energy Consumption


receive_request();
respond_to_request();

case 'ES' % Energy Saving


WOL(ip_Requested) % Sends WOL packets
% Attempt thrice:
% Mode Change -> 'ES' to 'EC'
Current_Machine_Mode <- Machine_Mode();

switch Current_Machine_Mode

case 'EC'
forward_request_to_Machine()
break; % Machine awakes

case 'ES'
display('Machine Not available');
break;

end

end

end
end

Algorithm: Working of Primary Resource Manager


function RMp

    IF Primary_Resource_Manager is working
        Received_Message();
        Read_Message_Tokens();
        Check_Requested_Machine_Status();
        IF status == 'N'
            Machine not available in network
        ELSE
            FOR k = 1:3                      % try thrice
                Send_Wake_On_LAN();
                IF machine_awakes
                    BREAK;
                ELSE
                    CONTINUE;
                END
            END
            IF machine_awakes
                Return 'Ok';
            ELSE
                Return 'Not available';
            END
        END
    ELSE
        Request_To_Secondary_Resource_Manager();
    END

end

Algorithm: Working of Secondary Resource Manager

function RMs

    TakeOver();
    Receive_Request();
    ACT SAME AS Primary_Resource_Manager

    TAKEOVER()
        Set_Current_Status_To_RMp();      % current state becomes RMp
        Ping_Whole_Network();
        IF ping reply received successfully
            Machine is in EC mode;
        ELSE
            Machine is in ES mode;
        END

    MAKE_NEW_RMS()
        Get_Current_Available_Machines();
        Set_One_Of_Them_As_RMs();

end

Algorithm: Message identification

function Identify_Message
    Received_Message();
    Ch = Get_Character();                 % Get first character
    While ( Ch != '-' )                   % Token 1: Message_Type
        Message_Type = Message_Type.append(Ch);
        Ch = next_Character();
    End
    Ch = next_Character();                % Skip delimiter '-'
    While ( Ch != '-' )                   % Token 2: Requested_Ip
        Requested_ip = Requested_ip.append(Ch);
        Ch = next_Character();
    End
    Ch = next_Character();                % Skip delimiter '-'
    While ( Ch != '-' )                   % Token 3: Source_Ip
        Source_ip = Source_ip.append(Ch);
        Ch = next_Character();
    End
    Ch = next_Character();                % Skip delimiter '-'
    While ( Ch != '-' )                   % Token 4: Destination_Ip
        Destination_ip = Destination_ip.append(Ch);
        Ch = next_Character();
    End
    Ch = next_Character();                % Token 5: Termination character '$'
end
4 Experimental Setup, Test and Results


4.1 Test Setup and Data Collection
The proposed architecture (see Fig. 1) is implemented over a network of seventy-two machines connected in a LAN environment for real-time experimentation and testing. Each machine has an Intel(R) Core 2 Duo (2.00 GHz) processor, 2 GB RAM and Microsoft Windows 7 Professional as the operating system. The experimentation work lasted nine weeks. A "TREC Energy Monitor" wattmeter was used to measure the network energy consumption on an hourly basis.

4.2 Test Result Analysis


Power Consumption can be Reduced in ES Mode While Achieving Considerable Reliability. Table 3 compares the energy consumption in the different modes, EC mode and ES mode, as defined in the previous section. A wattmeter (TREC Energy Monitor) was used to measure the power consumption. The base case shows that a single machine consumes 100–110 watts if it remains powered on for one hour; its consumption is therefore 2.40 KWh per day, which leads to 74.40 KWh per month and 0.89 MWh per year. On this scale, 72 machines in EC mode consume on average 70.71 MWh of energy per year, while in ES mode this reduces to 45.16 MWh. In ES mode, the two machines acting as RMp and RMs remain powered on throughout the year. The other 70 machines remain powered on for an average of 15 h per day, consuming 42966 KWh per year, and spend the remaining average of 9 h per day in the sleep state, consuming only 234.36 KWh per year. The total energy consumed thus reaches 45164.52 KWh per year, which shows that approximately 32–36% of the energy can be saved. Thirty-six percent corresponds to more than one third of the cost of running an enterprise system, which is a considerable saving for small enterprise system architectures.
Table 3. Power consumption in different energy modes

              Estimated power consumed
PCs   Time     Power    Per day   Per month   Per year    Per year   Saving %
      (hours)  (Watt)   (KWh)     (KWh)       (KWh)       (MWh)      per year
Base case
1     24.00    100.00   2.40      74.40       892.80      0.89
In Energy Consumption Mode (EC Mode)
72    24.00    110.00   190.08    5892.48     70709.76    70.71
In Energy Saving Mode (ES Mode)
2     24.00    110.00   5.28      163.68      1964.16
70    9.00     1.00     0.63      19.53       234.36
70    15.00    110.00   115.50    3580.50     42966.00
Total power consumed     121.41    3763.71     45164.52    45.16     36
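
The figures in Table 3 can be reproduced with a few lines of arithmetic; the sketch below assumes, consistently with the table values, a 31-day month and a 12-month year.

def kwh_per_year(machines, hours_per_day, watts):
    # kWh per day, times 31 days per month, times 12 months.
    return machines * hours_per_day * watts / 1000 * 31 * 12

ec_mode = kwh_per_year(72, 24, 110)          # ~70709.76 kWh per year
es_mode = (kwh_per_year(2, 24, 110)          # two resource managers always on
           + kwh_per_year(70, 15, 110)       # 70 machines awake 15 h per day
           + kwh_per_year(70, 9, 1))         # 70 machines asleep 9 h per day
print(ec_mode, es_mode)                      # 70709.76  45164.52
print(1 - es_mode / ec_mode)                 # ~0.36, i.e. about a 36% saving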
Figure 4 compares the energy consumption in EC and ES mode. It can be seen from these findings that a huge amount of energy can be saved by using Sleep Alert. Figure 5 shows the average number of hours a set of 10 machines remains in EC or ES mode. Figure 6 shows the reliability of the proposed architecture over the first two weeks. The fluctuation in overall reliability is very small; an approximately steady reliability is achieved because whenever RMp goes down, RMs takes over. Figure 7 shows that after the first three weeks, when the external issues had been fully addressed, the response of the proposed approach is much more consistent and the success rate is high. The issue in the third week was due to a power problem in the network, which was resolved and did not occur again.

Fig. 4. Power consumed in different energy modes

Fig. 5. One day power consumption of different machines

Determination of Network Peak Hours. Figure 8 shows the peak working hours. This distribution helps make the algorithm more intelligent, giving a quicker response in peak hours and defining automatic sleep and wake criteria. Figure 8 also shows that in the six hours (a quarter of the day) from 00:00–06:00 the network workload is only 4% of the whole day's workload, so this is the least severe session of the day and the maximum number of machines can be kept in the sleep state during this time.
Fig. 6. System success rate per day over the first two weeks (success rate in %): Day 1: 90, Day 2: 54, Day 3: 54, Day 4: 72, Day 5: 81, Day 6: 54, Day 7: 99, Day 8: 81, Day 9: 72, Day 10: 81, Day 11: 72, Day 12: 90, Day 13: 99, Day 14: 84

Fig. 7. System success rate (%) per week and its average, i.e. the reliability in ES mode, over the ten-week experimentation period

Fig. 8. System workload in tri-hour sessions


In contrast, the maximum workload, 36%, occurs from 09:00–12:00, so to avoid the sleep-awake-sleep cycle degrading network performance, most of the machines should be kept in the awake state during this period to serve user requests.
Life and Performance of Machines Increased. Under the proposed architecture, a machine is used for an average of 15 h a day instead of 24 h. This implies that a machine has enough time to rest and thus produces less heat. Less usage contributes to an overall increase in the life of a machine and also improves its performance.
Reduced Single Point of Failure. By introducing two resource managers, the probability of a single point of failure is reduced to one quarter relative to older approaches. The possible combinations are: (1) both resource managers are awake, (2) RMp is awake while RMs is asleep, (3) RMp is asleep while RMs is awake, or (4) both are asleep. The chance that both sleep at the same time with no active network node is further lowered by the concept of dynamic resource managers: in the proposed architecture, each machine can act as both the server and the monitor. If a resource manager needs to fulfil a user request itself, it hands over the resource manager role to another active machine (or wakes a machine to make it the new resource manager).
Sleep Alert Performance. Sleep Alert does not compromise on reliability, but the overall response time to a user's first request increases slightly if the machine is in ES mode. The proposed architecture is therefore not recommended for time-critical systems; this is the only drawback of the approach.
Sleep Alert, A Cost-Effective Approach. Efficient use of energy means less power is consumed and hence lower electricity bills. As machines are in EC mode for an average of 15 h a day instead of 24 h, less heat is produced, which reduces the cooling requirements and saves a considerable amount of the energy needed by cooling resources such as air conditioning, thus maximizing the Return on Investment (ROI). Lower power consumption means a lower-cost network, which ultimately addresses budgetary problems, and the increased product life also lowers the overall cost of maintaining the network.
Green Environment. The less time a machine remains in EC mode, the lower its CO2 emissions, which helps to control global warming and is thus a step toward making the environment green.
System Wake-Up Causes. A system normally wakes up to fulfil user requests; some common causes are listed below and shown in Fig. 9:

1. Remote Connection
2. Virus Scans
3. Updates
4. Interrupts
5. Other (like web etc.)
Fig. 9. System Wake up Causes

5 Conclusions

The proposed solution is deployed over a network of 72 machines on which remote users access virtual storage and other customized applications. The results indicate that not only is a considerable amount of energy saved, i.e. 32–36%, but significant reliability, i.e. 72%, is also achieved. By adopting the proposed architecture in enterprise networks, a huge amount of energy can be saved without compromising reliability, which can have a significant impact on the overall economy of a country.
Sudden failure of both RMp and RMs is a major future challenge. Failure of both resource managers at the same time, when no other machine is awake, blocks access to network services until another machine in the network wakes up and takes over as RMp.

References
1. US Department of Energy Efficiency and Renewable Energy. http://www.eere.energy.gov/
2. InternetWorldStats. http://www.internetworldstats.com
3. Network World. http://www.networkworld.com
4. Wang, D.: Meeting green computing challenges. In: 10th Electronics Packaging Technology
Conference. Teradata Corporation, USA. IEEE (2008)
5. Okafor Kennedy, C., Udeze Chidiebele, C., Okafor, E.C.N., Okezie, C.C.: Smart grids: a new
framework for efficient power management in datacenter networks. In: IJACSA, vol. 3, no.
7, pp. 59–66 (2012)
6. Gyarmati, L., Anh Trinh, T.: How can architecture help to reduce energy consumption in data
center networking? e-Energy (2010)
7. Marcos, A.: A survey on techniques for improving the energy efficiency of large scale
distributed systems. ACM Comput. Surv. 46(4), 1–35 (2014)
8. Chen, X., Li, C., Jiang, Y.: Optimization model and algorithm for energy efficient virtual node
embedding. IEEE Commun. Lett. 19, 1327–1330 (2015). ISSN 1089-7798
9. Panarello, C., et al.: Energy saving and network performance: a trade-off approach. e-Energy
(2010)
10. Hameed, A., Khoshkbarforoushha, A., Ranjan, R., Jayaraman, P.P., Kolodziej, J., Balaji, P.,
Zeadally, S., Malluhi, Q.M., Tziritas, N., Vishnu, A., Khan, S.U., Zomaya, A.: A survey and
taxonomy on energy efficient resource allocation techniques for cloud computing systems.
Computing 98, 1–24 (2014). https://doi.org/10.1007/s00607-014-0407-8
11. Choi, K., Soma, R., Pedram, M.: Dynamic voltage and frequency scaling based on workload
decomposition. Department of EE-Systems, University of Southern California, Los Angeles,
CA 90089
12. Green Manufacturing Initiative, Annual Report (2012). http://www.wmich.edu/mfe/mrc/gre
enmanufacturing/pdf/2012%20GMI%20Annual%20Report.pdf
13. Reich, J., Goraczko, M., Kansal, A., Padhye, J.: Sleepless in Seattle no longer. In: Proceedings
of the 2010 USENIX Conference, Columbia University, Microsoft Research, June 2010
14. Nedevschi, S., Popa, L., Iallaccone, G., Ratnasamy, S., Wetherall, D.: Reducing network
energy consumption via sleeping and rate-adaptation. In: NSDI 2008, Berkeley, CA, USA
(2008)
15. Apple Wake On Lan. http://www.macworld.com/article/142468/2009/08/wake_on_demand.
html
16. Agarwal, Y., Hodges, S., Chandra, R., Scott, J., Bahl, P., Gupta, R.: Somniloquy: augmenting
network interfaces to reduce pc energy usage. In: NSDI 2009, Berkeley, CA, USA (2009)
17. Gumstix. http://www.gumstix.com
18. Wake on Lan. http://en.wikipedia.org/wiki/Wake-on-LAN
19. Kant, K.: Data center evolution: a tutorial on state of the art, issues, and challenges. Comput.
Netw. 53, 2939–2965 (2009)
20. Koomey, J.: Growth in data center electricity use 2005 to 2010. A report by Analytical Press,
completed at the request of The New York Times (2011)
21. Borah, J., Singh, S.K., Borah, A.D.: Cellular base station and its greening issues. Int. J. Adv.
Electron. Commun. Syst. (CSIR-NISCAIR Approved) 3(2), 1–4 (2014)
22. Chabarek, J., Sommers, J., Barford, P., Estan, C., Tsiang, D., Wright, S.: Power awareness in
network design and routing. In: The 27th Conference on Computer Communications IEEE
INFOCOM 2008, pp. 457–465 (2008)
23. Ghani, I., Niknejad, N., Jeong, S.R.: Energy saving in green cloud computing data centers: a
review. J. Theor. Appl. Inf. Technol. 74(1) (2015)
24. Song, Y., Wang, H., Li, Y., Feng, B., Sun, Y.: Multi-tiered on-demand resource scheduling for
vm-based data center. In: Proceedings of the 2009 9th IEEE/ACM International Symposium
on Cluster Computing and the Grid, pp. 148–155 (2009)
25. Cardosa, M., Korupolu, M., Singh, A.: Shares and utilities based power consolidation in vir-
tualized server environments. In: Proceedings of IFIP/IEEE Integrated Network Management
(IM) (2009)
26. Beloglazov, A., Buyya, R.: Energy efficient resource management in virtualized cloud data
centers. In: 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing
(2010). https://doi.org/10.1109/ccgrid.2010.46
27. Liu, C., Liu, C., Shang, Y., Chen, S., Cheng, B., Chen, J.: An adaptive prediction approach
based on workload pattern discrimination in the cloud. J. Netw. Comput. Appl. (2016, in
Press)
Performance Evaluation of MPI vs. Apache
Spark for Condition Based Maintenance Data

Tomasz Haupt(B) , Bohumir Jelinek, Angela Card, and Gregory Henley

Center for Advanced Vehicular Systems, Mississippi State University, Starkville, USA
[email protected]

Abstract. This paper presents the results of an exploratory research program
to compare the performance of typical data analysis patterns following two
approaches: one an MPI-based code in a classical HPC Linux cluster with a Lustre
parallel file system and the other, a Hadoop environment over HDFS parallel file
system. The selected analysis patterns relate to the requirements for building a
system for condition-based maintenance (CBM) to efficiently evaluate daily files
from thousands of vehicles. A similar rate of reading HDF5 files from Lustre as
reading parquet files from HDFS is observed. However, the first results indicate
much better performance of an MPI implementation in Python than the equivalent
implementation using SparkR, with its built-in functions, in the Hadoop environ-
ment. This result is surprising, but consistent with the results reported by other
authors. Furthermore, the scalability of the MPI code has been tested, indicating
a good performance of the Lustre file system.

Keywords: MPI · SPARK · Hadoop · Condition based maintenance

1 Introduction
The amount of available data and the advances in computer hardware and software make
deriving information from data ubiquitous. Business intelligence supports executives,
managers, and other corporate users in making informed business decisions that include,
among other things, targeted advertisements, understanding the market and users' preferences, and optimizing manufacturing processes. One of the emerging applications of
the business intelligence technologies is condition based maintenance (CBM).
CBM uses sensor devices to collect real-time measurements (such as temperature,
pressure, speed) on a piece of equipment. The collected data (from many pieces of the
equipment) is then analyzed, often using machine learning techniques, to derive signa-
tures of impending failures so that maintenance personnel may perform maintenance
at the exact moment it is needed, prior to failure. Furthermore, the CBM data allows
for a quantitative assessment of the effectiveness of maintenance procedures as well
as the design of new ones leading to cost reductions and an increased effectiveness of
the maintenance procedures. Independently, the CBM data can be used to assess the
readiness of the equipment to perform a specific task and to implement “retrospective”
analysis to identify the root cause of unpredicted failures.

© Springer Nature Switzerland AG 2020


K. Arai et al. (Eds.): SAI 2020, AISC 1228, pp. 28–41, 2020.
https://doi.org/10.1007/978-3-030-52249-0_3
The authors of this paper are involved in a project to prototype a CBM system for
ground vehicles (cars, trucks, etc.). To this end, a fleet of vehicles (ultimately about
80,000 vehicles) is instrumented with sensors monitoring the performance of all critical
vehicle subsystems (about 100 channels). The sensors are producing signals once per
second when the vehicle is “on.” This creates up to 86,400 rows in a timeseries table
per day (maximum of ~5.5 GB per day per vehicle; usually less since when the ignition
key is “off,” no data is collected). In addition, fault information is recorded in a separate
table, should faults occur. The data are transferred from the vehicles to the data center
daily (up to almost 0.5 TB a day).
The data are organized hierarchically. All vehicles are grouped by vehicle families.
Each family is divided into vehicle models. Individual vehicles of a given model are
identified by unique Vehicle Identification Numbers (VIN), and actual data (timeseries
and fault tables) are daily files for each VIN, as shown in Fig. 1.

Fig. 1. Hierarchical organization of the data

The data processing workflow at the data center is comprised of several steps:

1. Removing faulty data resulting from the sensors’ malfunctions and/or errors in data
transmission.
2. Loading the data to the storage.
3. Calculating derived variables and generating statistics.
4. Applying machine learning to discover hidden data patterns that can be utilized in
CBM.
5. Generating dashboards, reports and visualizations.

This paper describes exploratory research in order to compare the candidate tech-
nologies that can be used to implement this project: “classical HPC (high performance
computing) approach” (based on MPI/Lustre) versus “big data approach” (based on
Spark/HDFS). The focus is on step 3, as the technology used in step 2 (e.g., MPI to
populate Lustre or Kafka streaming to populate HDFS) depends on the solution selected
for step 3. Steps 1, 4, and 5 are outside the scope of this exploratory research.
The paper is organized as follows. Section 2 introduces the technologies considered by this work and discusses related work. Section 3 describes the set-up for the experiments. The results are presented and discussed in Sect. 4. The summary, conclusions, and future work are given in Sect. 5, which concludes the paper.

2 Classical HPC Approach Versus Apache SPARK

High-performance computing (HPC) is usually understood as the ability to process data
on an aggregation of computing power of many processors (such as a computing cluster
or a supercomputer) in a way that delivers much higher performance than could be
obtained from a single processor. This is achieved either by assigning different tasks to
different processors to be performed concurrently (task parallelism) or by splitting
the data into chunks tailored to fit the memory of individual processors, and then each
processor performs identical computations on the local chunk (data parallelism). The
case when all computations can be performed locally is referred to as an embarrassingly
parallel problem. Usually this is not the case, and some portion of the data must be
exchanged between memories of the processors. In the simplest case, for example,
when the computational task is to calculate the sum of all elements of a distributed array,
each processor computes a partial sum of all elements it “owns”, but in the final step,
the partial sums must be collected from all processors and added together to obtain the
final result. For the last thirty years or so, a plethora of spectacularly efficient algorithms
have been developed, especially for solving systems of partial differential equations,
specifically designed for distributed-memory data-parallel computations. The process
of exchanging data between processors is called “message-passing”.
Since the mid-1990s, standard libraries with interfaces in many languages (e.g., Fortran, C, C++, Python) have been available that implement the Message Passing Interface (MPI) [1] standard. The data-parallel paradigm is often described as "bring data to your computations," and the application developer is solely responsible for orchestrating the exchange of messages, which can be a tedious and error-prone task.
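As a concrete illustration of the distributed-sum example above, the following minimal mpi4py sketch (illustrative only, not the project code) computes partial sums locally and combines them with an MPI reduction.

# Run with, e.g.:  mpiexec -n 4 python sum_example.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank "owns" a local chunk of the (conceptually) global array.
local_chunk = np.arange(rank * 1000, (rank + 1) * 1000, dtype=np.float64)
local_sum = local_chunk.sum()

# Final step: collect the partial sums from all ranks and add them together.
global_sum = comm.allreduce(local_sum, op=MPI.SUM)
if rank == 0:
    print("global sum =", global_sum)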
A successful implementation of a data-parallel application also requires a high-
performance I/O support. Fortunately, several solutions are available today. One of the
most popular is the Lustre [2] distributed parallel file system. Lustre file systems are scal-
able and can be part of multiple computer clusters with tens of thousands of client nodes,
tens of petabytes (PB) of storage on hundreds of servers, and more than a terabyte per
second (TB/s) of aggregate I/O throughput [3].
The advent of Big Data analytics (a process of examining large and varied data sets,
to uncover information - such as hidden patterns, unknown correlations, etc.) brought
new technologies for processing massive data sets, including Hadoop [4] available from
Apache Software Foundation. Hadoop is an open source distributed processing frame-
work that manages data processing and storage for big data applications in scalable
clusters of computer servers. Among core components of the Hadoop system are (1)
The Hadoop Distributed File System (HDFS), designed to provide rapid access across
the nodes in a cluster, (2) MapReduce, that uses map and reduce functions to split pro-
cessing jobs into multiple tasks that run at the cluster node where the data is stored
and then to combine what tasks produce into a coherent set of results, and (3) YARN
– a cluster resource management and job scheduler. In contrast to the “classical” HPC
systems described above, the Hadoop paradigm is often described as “bring computation
to your data,” and the distribution and scheduling of tasks (with “reshuffling data” when
necessary) is provided automatically by the system. The latter dramatically reduces the
application developer effort: no explicitly written message-passing or synchronization
of nodes.
While MapReduce works efficiently for simple applications (to obtain the sum of
all elements of a distributed array takes just a few lines of code), expressing complex
algorithms in terms of a sequence of map and reduce stages is a tedious and error-prone process. The Apache Spark [5] system (originally developed at the University of California, Berkeley, but now part of the Apache Software Foundation) remedies this problem. Spark
provides distributed task transmission, scheduling and I/O functionality that are faster
and more flexible alternatives to Hadoop’s MapReduce [6, 7]. Out of the box, Spark can
run on a single node or be layered on top of Hadoop YARN and HDFS or Lustre file
systems. Spark can also run on Apache Mesos, Kubernetes, and in other environments,
including computational clouds (e.g., Cloudera, Amazon, Hortonworks).
Spark is written in Scala, which is considered the primary language for interacting
with the Spark Core engine. However, Spark applications can be developed in Java,
Python, and R, making it unnecessary for the developer to express the algorithms as a
sequence of map and reduce tasks.
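
For comparison, the same distributed-sum example takes only a couple of lines in Spark's Python API (pyspark is used here purely for illustration; the experiments in Sect. 3 use SparkR).

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sum-sketch").getOrCreate()
# The data is split into 4 partitions; partial sums per partition are combined automatically.
rdd = spark.sparkContext.parallelize(range(4000), numSlices=4)
print(rdd.sum())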
Spark has become the primary technology for developing big data analytics applications, and therefore the original intention was to use Spark to implement the CBM research project described in this paper. However, intrigued by several reports in the literature [8–10] claiming better performance of MPI-based implementations over equivalent implementations in Spark, it was decided to conduct some exploratory research using data from this project. To this end, this paper compares the performance of typical data analysis patterns implemented in Python with MPI on a high-performance Linux cluster with a Lustre file system against an equivalent implementation in SparkR over a Hadoop cluster with HDFS.

3 Experimental Setup
3.1 Hardware

The experiments have been performed on two separate systems. One is a general purpose
HPC Linux cluster comprised of Intel Xeon E5-2680 v2 @ 2.80 GHz processors (with 20
cores and 128 GB RAM per processor), running CentOS6. Each node is interconnected
via FDR InfiniBand (56 Gb/s). For storage, a Lustre-based high-performance parallel filesystem with approximately 3 TB is used. For the experiments described in this paper, up to 10 nodes (200 cores) have been used.
The second system is a dedicated Hadoop cluster with two worker nodes equipped with Intel Xeon E5-2660 v3 @ 2.60 GHz processors (with 10 cores and 128 GB RAM per processor), running CentOS6. Each node is interconnected via FDR InfiniBand (56 Gb/s). Hadoop is installed over the HDFS file system with approximately 5 TB of storage.
Although the processors are not the same, they have similar average CPU benchmark
scores [11]: 16213 and 15915, respectively.

3.2 Software
The analysis on the HPC cluster has been performed in Python (3.6.2) with h5py, mpi4py
(over openMPI 1.8.2), and numpy packages (1.13.3). The analysis on the Hadoop (2.7.3)
cluster has been performed using Spark (2.4.2). A SparkR package invoked from Rstudio
(0.99) with R (3.2.2) has been used to run Spark jobs.

3.3 Data
The data comes from a pilot program where 2290 vehicles have been instrumented,
representing 78 models of 5 vehicle families; in total about 300,000 daily files. Since
the data come from a pilot program, often the data are of poor quality because of sensor
(or sensor readout) malfunctions, data losses in the transmission, and adjustments in
the channel selection (some data sets have different channels recorded than others).
Therefore, for the purpose of this exploratory research, a representative subsample of
the data has been selected. Data from 6 vehicle models have been used. The selected
models have the same channels recorded, have a reasonably low rate of missing data
points (NaN entries), and have different sizes.
The data is collected as timeseries tables (with timestamps as rows, and channels as
columns) created daily. The size of a daily file is proportional to the number of rows
(the data is collected every second but only when the ignition key is on). The size of the
vehicle model data is the sum of the sizes of daily files for each individual vehicle of the
model (where the number of vehicles of the model varies from model to model).
Figure 2 shows an example of some of the timeseries data used in this project.
The data is provided in the Common Data Format (CDF) [12]. In addition to the
timeseries tables, the CDF data contain other information, like fault data or startup data
(e.g. odometer readout at the beginning of the day), but the additional information is not
used in this analysis.
As the preliminary step of this analysis, the CDF data is loaded to Lustre and HDFS,
respectively. To this end, Python and Java libraries [12], distributed by NASA, are used
with MPI and Kafka [13] streaming, respectively. The process of loading data to the
storage is outside the scope for this paper; instead, the focus is on the performance of
the data analysis once the data is loaded.

3.4 Data Formats


For classical HPC analysis, the data are stored in the Lustre file system in HDF5 file
format [14], with each vehicle model as a separate file. The layout of datasets in the HDF5 file
has been selected to preserve the original data structures as shown in Fig. 1: datasets are
daily files, grouped by VIN. The layout of the datasets is stored as metadata in XDMF
[15] format to speed up the selection of the datasets during the analysis phase.
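
A sketch of reading one such daily dataset with h5py is given below (the file name, VIN and date used as group/dataset names are hypothetical; the layout follows the hierarchy of Fig. 1).

import h5py

# One HDF5 file per vehicle model; groups are VINs, datasets are daily files.
with h5py.File("vehicle_model_A.h5", "r") as f:
    daily = f["VIN0000000000000001"]["2019-06-01"][...]
print(daily.shape)   # (up to 86400 rows, ~100 channels)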
For Hadoop-based analysis, the data are stored in HDFS in parquet [16] data format,
each vehicle's data (all daily data) as a separate file. The vehicle data are streamed to Spark memory to form a single Spark DataFrame that is then stored using the SparkR function write.df with source = "parquet" as one of its arguments.

Fig. 2. Examples of timeseries data. The upper row shows the distributions of example channel values for a single vehicle model. The lower row shows the channel data values for the first 1000 timesteps (note: the timestep number, not time, is used for the x-axis; the channel values when the vehicle is off are not shown because they are not recorded).

3.5 Data Analysis Patterns


Three base data analysis patterns have been considered in this research. The first involves operations on a single channel (referred to as the "vertical pattern") and is used to gather the overall characterization of the channel data (its average, min and max values, etc.). The second involves operations on a single row (the "horizontal pattern"), which allows the determination of the state of a vehicle at a given time; as an example of a horizontal analysis, the periods of engine idling are determined. The third pattern is a combination of vertical and horizontal analysis (the "mixed pattern"); for example, it allows one to find the difference between channel values when the engine is idling or not. Figures 3, 4, and 5 show example results of running these analysis patterns.

3.6 The Codes


In the classical HPC approach, the computations are performed in four phases:

1. Read metadata (list of HDF5 datasets) and distribute workload (assign datasets to
cores in a round robin fashion).
2. Read assigned datasets (the daily data), one at a time, in a loop (using h5py library
functions).
Fig. 3. Example of results of the vertical analysis. The plots compare mean, standard deviation,
min, and max values of the engine oil pressure channel of 6 vehicle models used in this analysis.

Fig. 4. Example results of a vertical analysis. Average (over all vehicles of the model) percentage
of time when the engine was not idling for all vehicles. An engine is considered idling when
the vehicle speed is less than 0.1 mph, the engine speed is between 650 and 850 rpm, and the
accelerator pedal position is below 3%.
Fig. 5. Example results of a mixed analysis. The average and standard deviation of engine speed
(on the left) and engine percentage torque (on the right) are compared when the engine is idling
versus not idling.

3. Perform the analysis (see the sketch after this list):

   a. Remove rows with NaNs, by applying a mask created with the numpy call
      numpy.any(numpy.isnan(dataset), axis=1).
   b. Perform the local analysis:
      i.   for the horizontal analysis, compute for each channel the sum and sum of squares
           of the channel's values, and find the minimum and maximum value of each channel
           (using the numpy functions min, max, and sum). At the end of the loop, combine
           the results with the data obtained in the earlier iterations;
      ii.  for the vertical analysis, apply the idling criteria and count the number of lines
           that survive the cut;
      iii. for the mixed analysis, perform the horizontal analysis, apply a mask to separate
           the rows when the engine is idling from those when it is not, and apply the
           vertical analysis to both masked datasets.

4. Use MPI reduce functions to gather the results from each worker, including the number
   of processed lines. For the vertical and mixed analysis, compute the average and
   standard deviation for each channel.
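
The per-dataset analysis of step 3 can be sketched in numpy as follows (illustrative only, not the project code; the column indices of the channels used in the idling criteria are assumptions).

import numpy as np

def analyze_dataset(data, speed_col, rpm_col, pedal_col):
    # 3a. Remove rows containing NaNs.
    clean = data[~np.any(np.isnan(data), axis=1)]
    if clean.size == 0:
        return None
    # 3b.i. Horizontal pattern: per-channel sums, sums of squares, min and max.
    stats = {"rows": clean.shape[0],
             "sum": clean.sum(axis=0), "sumsq": (clean ** 2).sum(axis=0),
             "min": clean.min(axis=0), "max": clean.max(axis=0)}
    # 3b.ii. Vertical pattern: apply the idling criteria and count surviving rows.
    idle = ((clean[:, speed_col] < 0.1) &
            (clean[:, rpm_col] > 650) & (clean[:, rpm_col] < 850) &
            (clean[:, pedal_col] < 3))
    stats["idle_rows"] = int(idle.sum())
    return stats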
The Spark processing is performed in the following two steps (a pyspark analogue is sketched after this list):

1. In a loop, read (using the SparkR function read.df) all vehicle data belonging to the
   selected model and append it to the model data.frame (using the R rbind function).
2. Perform the analysis:
   a. Remove rows with NaNs (by applying the SparkR filter function).
   b. Perform the local analysis:
      i.   for the horizontal analysis, use the SparkR describe function;
      ii.  for the vertical analysis, add a new column to the data frame using the SparkR
           withColumn function (c.f. Fig. 6) and count the lines where the value of idle is 0;
      iii. for the mixed analysis, perform the horizontal analysis, and subset the DataFrame
           to separate the rows when the engine is idling from those when it is not. Apply
           the vertical analysis to both subsets.
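
A pyspark analogue of these two Spark steps is sketched below (illustrative only; the experiments use SparkR, the HDFS path is hypothetical, and the channel names are taken from Fig. 6).

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cbm-sketch").getOrCreate()

# Step 1: read all parquet files of the selected vehicle model into one DataFrame.
df_model = spark.read.parquet("hdfs:///cbm/model_A/*.parquet")

# Step 2a: remove rows with missing values in the channels of interest.
clean = df_model.dropna(subset=["VehSpeedEng", "EngSpeed", "AccelPedalPos", "MSUKeyOff"])

# Step 2b.i: horizontal pattern -- summary statistics for every channel.
clean.describe().show()

# Step 2b.ii: vertical pattern -- flag idling rows and count the non-idling ones.
idle_flag = ((F.col("VehSpeedEng") < 0.1) & (F.col("MSUKeyOff") == 0) &
             (F.col("AccelPedalPos") < 3) &
             (F.col("EngSpeed") > 650) & (F.col("EngSpeed") < 850))
clean = clean.withColumn("idle", F.when(idle_flag, 1).otherwise(0))
print(clean.filter(F.col("idle") == 0).count())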

Analogous to the classical HPC approach, the time to complete step 1 (read time), the time to complete step 2 (calculate time), and the combined time for both tasks (total analysis time) were measured separately.

df_model <- withColumn(df_model,"idle",ifelse(df_model$VehSpeedEng < 0.1


& df_model$MSUKeyOff==0
& df_model$AccelPedalPos < 3
& df_model$VehSpeedEng < 0.1
& df_model$EngSpeed > 650 & df_model$EngSpeed < 850
,1,0))

Fig. 6. A code snippet showing creation of a new DataFrame column called “idle”

4 Results

In order to compare the performance of the classical HPC approach and the Hadoop-
based one, the same types of analysis were run on similar, albeit not identical systems:
one on a Linux cluster and the other on a Hadoop cluster, as described in Sect. 3.1. The
HPC results presented in this section were obtained using just 20 cores to be comparable
with the computing power of the Hadoop cluster.
It is also important to stress that although the analysis types performed are the same, the implementations of the codes differ, which may or may not affect the presented results. The HPC implementation is based on the Python numpy package, while in the Hadoop environment a SparkR package with its native functions, such as describe, is used. The comparison results are illustrated in the following sections and figures.

4.1 Comparison of File Reading Rates

As shown in Fig. 7, the reading rates of the files are comparable.

4.2 Comparison of Analysis Rates in HPC and Hadoop Environments

In Fig. 8, the rates of the analyses in the classical HPC approach are shown. The horizontal analysis (row by row) is about 30 times faster than the vertical one (column by column); consequently, the rate of the mixed analysis is similar to the vertical one. This is different from what was observed for Spark (Fig. 9): there, the vertical analysis is almost 3 times faster than the horizontal one, and as a result the mixed analysis is slower than the vertical analysis alone (by a factor of ~3).
Furthermore, the analysis rates using MPI are much larger than those of Spark.
Fig. 7. Reading rates of HDF5 files from Lustre and parquet files from HDFS

Fig. 8. Analysis rates in classical HPC approach


Fig. 9. Analysis rates in Spark

Fig. 10. The comparisons of vertical analysis rates in MPI and Spark

While the rates of the horizontal analysis are comparable (Fig. 10), the vertical analysis is about one hundred times faster in MPI (10^6 vs. 10^8 lines per second), as shown in Fig. 11.
It is thus not surprising that the rate of the mixed analysis performed in MPI is about 15 times faster than the Spark analysis, as shown in Fig. 12.
Exploring the Variety of Random
Documents with Different Content
The Project Gutenberg eBook of Devonshire
This ebook is for the use of anyone anywhere in the United States and
most other parts of the world at no cost and with almost no
restrictions whatsoever. You may copy it, give it away or re-use it
under the terms of the Project Gutenberg License included with this
ebook or online at www.gutenberg.org. If you are not located in the
United States, you will have to check the laws of the country where
you are located before using this eBook.

Title: Devonshire

Author: Francis A. Knight


Louie M. Knight Dutton

Release date: January 23, 2014 [eBook #44738]


Most recently updated: October 24, 2024

Language: English

Credits: Chris Curnow, Reiner Ruf and the Online Distributed


Proofreading Team

*** START OF THE PROJECT GUTENBERG EBOOK DEVONSHIRE ***


CAMBRIDGE COUNTY GEOGRAPHIES
General Editor: F. H. H. Guillemard, M.A., M.D.

DEVONSHIRE

CAMBRIDGE UNIVERSITY PRESS


London: FETTER LANE, E.C.
C. F. CLAY, Manager

Edinburgh: 100, PRINCES STREET


Berlin: A. ASHER AND CO.
Leipzig: F. A. BROCKHAUS
New York: G. P. PUTNAM'S SONS
Bombay and Calcutta: MACMILLAN AND CO., ltd.

[All rights reserved]

Cambridge County Geographies

DEVONSHIRE
by
FRANCIS A. KNIGHT
AND
LOUIE M. (KNIGHT) DUTTON
With Maps, Diagrams and Illustrations

Cambridge:
at the University Press
1910

Cambridge:
PRINTED BY JOHN CLAY, M.A.
AT THE UNIVERSITY PRESS
PREFACE
I N preparing this book much use has been made of the
Proceedings of the Devonshire Association and of the first
volume of the Victoria History of Devon. The authors also desire
to take this opportunity of recording their grateful thanks to Her
Gracious Majesty Queen Alexandra for her kindness in providing one
of the most interesting illustrations in the volume—the beautiful
photograph of the Armada trophy preserved among the Royal plate in
Windsor Castle, taken for the purpose of this volume by her
command.
F. A. K. and L. M. D.

March, 1910.
CONTENTS
PAGE
1. County and Shire. The Name Devonshire. 1
2. General Characteristics. 4
3. Size. Shape. Boundaries. 8
4. Surface and General Features. 11
5. Watershed. Rivers and the tracing of their courses. Lakes. 20
6. Geology. 30
7. Natural History. 41
8. A Peregrination of the Coast: 1, The Bristol Channel. 55
9. A Peregrination of the Coast: 2, The English Channel. 65
10. Coastal Gains and Losses. Sandbanks. Lighthouses. 79
11. Climate and Rainfall. 88
12. People—Race. Dialects. Settlements. Population. 97
13. Agriculture—Main Cultivations. Woodlands. Stock. 104
14. Industries and Manufactures. 111
15. Mines and Minerals. 119
16. Fisheries and Fishing Stations. 124
17. Shipping and Trade. 129
18. History of Devonshire. 136
19. Antiquities. 152
20. Architecture—(a) Ecclesiastical. 167
21. Architecture—(b) Military. 185
22. Architecture—(c) Domestic. 192
23. Communications: Past and Present. 202
24. Administration and Divisions—Ancient and Modern. 208
25. The Roll of Honour of the County. 213
26. The Chief Towns and Villages of Devonshire. 225
ILLUSTRATIONS
PAGE
Devonshire in the Exeter Domesday Book. Phot. Worth& Co 3
King Tor, near Tavistock. Phot. Frith 5
A Typical Devon Stream— Watersmeet, Lynmouth.
Phot.Coates & Co. 6
A Devon Valley—Yawl Bottom, Uplyme. Phot. Frith 7
Glen Lyn, near Lynmouth.Phot. Coates & Co. 9
The Upper Dart, from the Moors. Phot. Frith 12
Tavy Cleave, showing disintegrated granite 14
On Lundy 17
The River Exe at Tiverton. Phot. Frith 21
On the Dart; Sharpham Woods. Phot. Frith 24
The Axe at Axminster Bridge. Phot. Frith 26
Bideford and the Torridge Estuary. Phot. Frith 28
Geological Section across England 32
Logan Stone, Dartmoor. Phot. Frith 34
A smoothly-weathered granite Tor, Dartmoor. Phot. Frith 36
Footprints of Cheirotherium. Phot. H. G. Herring 38
A Red Deer. Phot. H. G. Herring 43
Otters. Phot. H. G. Herring 44
Spurge Hawk Moth, with Pupa and Caterpillar. Phot.H. G.
Herring 51
The Castle Rock, Lynton. Phot. Coates & Co. 56
Valley of Rocks, Lynton. Phot. Frith 57
Ilfracombe, from Hillsborough. Phot. Coates & Co. 59
Cliffs near Clovelly. Phot. Frith 63
Clovelly Harbour 64
Church Rock, Clovelly. Phot. Frith 65
Pinhay Landslip. Phot. Frith 67
White Cliff, Seaton. Phot. Barrett 68
Parson and Clerk Rocks, Dawlish. Phot. Frith 70
Anstis Cove, near Torquay. Phot. Frith 71
Torquay from Vane Hill. Phot. Frith 72
Brixham. Phot. Frith 74
"Britannia" and "Hindostan" in Dartmouth Harbour. Phot.
Frith 75
A Rough Sea at Ilfracombe. Phot.Frith 81
The Eddystone Lighthouse. Phot.Frith 84
The Start Lighthouse. Phot. Frith 87
The Winter Garden at Torquay. Phot. Frith 90
Upcott Lane, Bideford. Phot. Frith 94
A Cockle Woman, River Exe. Phot. Frith 101
A Honiton Lace-Worker. Phot. Frith 102
Old Ford Farm, Bideford. Phot. Frith 105
Exmoor Ponies. Phot. Sport and General Illustrations Co. 106
Red Devon Cow 107
Gathering Cider Apples 109
A Water-mill at Uplyme. Phot. Frith 110
Devonshire Lace 113
Devonshire Pottery from the Watcombe Works 115
Cider-making in the 17th Century 116
A Modern Cider Press 117
Ship-building Yard, Brixham. Phot. Frith 118
Devon Great Consols Mine. Phot. Frith 121
Stone Quarry, Beer. Phot. Frith 123
Fish Market at Brixham. Phot. Lake 126
Brixham Trawlers. Phot. Frith 128
Teignmouth. Phot. Coates & Co., Bristol 133
Drake's Island from Mt. Edgcumbe. Phot. Frith 135
Penny of Ethelred II, struck at Exeter. Phot. Worth & Co. 139
Signatures of Drake and Hawkyns 143
Flagon taken by Drake from the "Capitana" of the Armada.
From a photograph taken by the Queen's command 144
Drake's Drum. From a photograph presented by Lady Eliott
Drake 145
The "Mayflower" Stone on Plymouth Quay. Phot. Frith 146
Palaeolithic Implement from Kent's Cavern 152
Dolmen near Drewsteignton. Phot. Mr John S. Amery 154
Palstave of the Bronze Age, from Exeter Museum. Phot.
Worth & Co 155
Fernworthy Circle, near Chagford. Phot. Frith 156
Hurston Stone Alignment. Phot. Mr John S. Amery 157
Triple Stone Row and Circle near Headlands, Dartmoor.
Phot. Mr John S. Amery 158
Bronze Centaur forming the Head of a Roman Standard.
Phot. Worth & Co. 162
Saxon Sword-hilt 164
Cyclopean Bridge, Dartmoor. Phot. Frith 166
Norman Doorway, Axminster Church. Phot. Miss E. K.
Prideaux 169
Ottery St Mary Church. Phot. Frith 170
Decorated Window, Exeter Cathedral. Phot. Worth & Co. 171
Rood Screen and Pulpit, Harberton Church. Phot. Crossley,
Knutsford 174
The Seymour Tomb, Berry Pomeroy Church. Phot. Frith. 177
Exeter Cathedral, West Front. Phot. Worth & Co. 179
The Nave, Exeter Cathedral. Phot. Worth & Co. 181
Buckland Abbey from a photograph presented by Lady Eliott
Drake 184
Compton Castle. Phot. E. Kelly 189
An Old Devon Farmhouse Chimney Corner. Phot. Miss E. K.
Prideaux 193
Hayes Barton: Sir Walter Ralegh's House. Phot. Miss E. K.
Prideaux 196
Mol's Coffee House, Exeter. Phot. S. A. Moore of Exeter 198
Sydenham House 199
Dartmouth: Old Houses in the High Street. Phot. Frith 201
Newton Village. Phot. Frith 202
Teignmouth: the Coast Line and Sea-wall. Phot. Frith 206
The Guildhall, Exeter. Phot. Frith 211
Sir Francis Drake 214
Sir Walter Ralegh. Phot. Emery Walker. Signature. Phot.
Worth & Co. 216
Charles Kingsley. Phot. Emery Walker 220
Blundell's School, Tiverton. Phot. Frith 222
Samuel Taylor Coleridge. Phot. Emery Walker 223
Clovelly. Phot. Frith 228
Dartmouth, from Warfleet. Phot. Frith 230
Cherry Bridge, near Lynmouth. Phot. Frith 234
Lynmouth Harbour. Phot. Coates & Co. 235
Ogwell Mill, Newton Abbot. Phot. Frith 237
Shute Manor House. Phot. Barrett 240
Tiverton Bridge. Phot. Frith 241
Diagrams 243
MAPS
Devonshire, Topographical Front Cover
Devonshire, Geological Back Cover
England and Wales, showing Annual Rainfall 92
The authors are indebted to Mr John S. Amery for leave to
reproduce the pictures on pp. 154, 157 and 158.
1. County and Shire. The Name
Devonshire.
The word "shire," which is probably derived, like "shear" and
"share," from an Anglo-Saxon root meaning "to cut," was at one time
used in a wider sense than it is at present, and was formerly applied
to a division of a county or even of a town. Thus, there were once six
small "shires" in Cornwall.
The word shire was in use at the time of King Ina, and occurs in the
code of laws which that monarch drew up about the year 709; but the
actual division of England into shires was a gradual process, and was
not complete at the Norman Conquest. Lancashire, for example, was
not constituted a shire until the twelfth century. Alterations in the
extent and limits of some of the counties are, indeed, still being made;
and in the case of Devonshire the boundaries have been changed
several times within the memory of persons still living.
The object of thus dividing up the country was partly military and
partly financial. Every shire was bound to provide a certain number of
armed men to fight the king's battles, and was also bound to
contribute a certain sum of money towards his income and the
expenses of the state; and in each district a "shire-reeve"—or sheriff,
as we call the officer now—was appointed by the Crown to see that
the people did their duty in both respects. The shire was a Saxon
institution. County is a Norman word, which came into use after the
Conquest, when the government of each shire was entrusted to some
powerful noble, often a count, a title which originally meant a
companion of the King.
It has been suggested that the reason why the names of some
counties end or may end in "shire," while in other cases this syllable is
never used, is that the former were "shorn off" from some larger
district, while the latter represent entire ancient kingdoms or tribal
divisions. According to this theory, Yorkshire is a "shire" because it
originally formed part of the kingdom of Northumbria; and Kent is not
a "shire" because it practically represents the ancient kingdom of the
Cantii. The form "Kent-shire" is, however, found in a record of the time
of Athelstan.
In the case of our own county both forms are in use, and we say
either "Devon" or "Devonshire," although the two names are not
exactly interchangeable. Thus, while we generally talk of "Red Devon"
cattle, we always speak of "Devonshire" cream. "Devon," which is the
older form, may be derived either from Dumnonii, the name given by
Ptolemy, an Alexandrian geographer of the second century, to the
inhabitants of the south-west of Britain, perhaps from a Celtic word
Dumnos, "people"—or it may come from the old Welsh word Dyvnaint
or Dyfneint, "the land of the deeps," that is to say, of deep valleys or
deep seas. To the Saxon settlers the people they found in possession
of the district were Defn-saetan or "dwellers in Devon"; and in time
these settlers called themselves Defenas, or "men of Devon." In the
Exeter Domesday Book—the Norman survey of the five south-western
counties, completed probably before 1086—the name of the county is
given as Devenesira. It would appear, then, that the Britons called
their province "Devon," and that the Saxons called it "Devonshire." It
is characteristic of the peaceable nature of the Saxon occupation that
the two names, like the two nations, seem to have quietly settled
down side by side.
Devonshire in the Exeter Domesday Book
It is believed that it was Alfred the Great who marked out the
border-line between Devon and Somerset; and it was undoubtedly
Athelstan who, after his victory over the West Welsh, made the Tamar
the boundary between Devon and Cornwall.
2. General Characteristics.
Devonshire is a county in the extreme south-west of England,
occupying the greater part of the peninsula between the English and
Bristol Channels, and having a coast-line both on the south and on the
north. Situated thus, on two seas, and possessing, especially on its
southern sea-board, a remarkable number of bays and estuaries, it
has always been noted as a maritime county. And although many of
its harbours have, in the lapse of ages, become silted up with sand or
shingle, and are now of comparatively slight importance, it has one
great sea-port, which, while only thirtieth in rank among British
commercial ports, is the greatest naval station in the Empire.
The county has in the past been famous for its cloth-weaving and
for its tin and copper-mining, but these industries are now greatly
decayed, and the main occupation of the people is agriculture, to
which both the soil and the climate are particularly favourable.
King Tor, near Tavistock
A special characteristic of Devonshire is its scenery, which is so
striking that it is very generally considered the most beautiful county
in England; while there are probably very many who regard its mild
and genial, equable and health-giving climate as more noteworthy
still. It is a remarkably hilly country, and it also possesses not only
many rivers, but a great number of broad river estuaries. Another
characteristic with which every visitor to the district is struck is the
redness which distinguishes its soil, its southern cliffs, and its famous
breed of cattle, which is not less noticeable than the soft and pleasant
dialect, with its close sound of the letter "u" so typical both of Devon
and of West Somerset.